Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013
Study of seismotectonic and seismicity of Isfahan region
Hadi Dehghan-Manshadi, Noorbakhsh Mirzaei, Morteza Eskandari-Ghadi
pp. 1-22, DOI: 10.22059/jesphys.2013.30202

This paper presents the seismicity and seismotectonic characteristics of Esfahan and adjoining regions, an area bounded between 49.5°-54° E and 31°-34.1° N. According to Mirzaei et al. (1998), this region lies between two major seismotectonic provinces: Zagros and Central-East Iran. The boundary between these two provinces is known to be the Main Zagros Reverse Fault. Geological maps at scales of 1:100,000 and 1:250,000 were used to compile the fault map of this region. We identified several probable faults in the region that help delineate the potential seismic sources more precisely. Among a total of eighteen focal mechanism solutions of earthquakes with Mw ≥ 4.3 in the region of interest, one event is normal, six events show dominant strike-slip components, and eleven events have dominant reverse components, which confirms the dominance of reverse/thrust faulting.
Using the global earthquake databases provided by USGS/NEIC and the ISC, as well as the catalog of historical (pre-1900) and early-instrumental (1900-1963) earthquakes provided by Ambraseys and Melville (1982), a uniform earthquake catalog for the region of interest has been compiled, comprising 170 instrumental and 10 historical earthquakes. To unify the magnitude scale within each seismotectonic province, we established empirical relations to convert mb to Ms. Because historical earthquakes in Iran are reported in Ms, and because the region of interest lacks events with magnitudes above the saturation level of Ms, surface-wave magnitude is the most appropriate magnitude scale for the region and, on a broader scale, for Iran. To establish the empirical relations between mb and Ms, we applied the orthogonal regression (OR) method, which takes into account the measurement errors in both variables.
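The orthogonal regression described above can be sketched as follows. This is a minimal total-least-squares fit assuming equal error variances in mb and Ms; the sample magnitude pairs are synthetic illustrations, not the catalog used in the paper.

```python
import numpy as np

def orthogonal_regression(x, y):
    """Fit y = a + b*x by minimizing perpendicular distances (orthogonal
    regression / total least squares, equal error variances assumed)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    # Positive root of the TLS slope equation (positively correlated data).
    b = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    a = y.mean() - b * x.mean()
    return a, b

# Illustrative (synthetic) mb/Ms pairs, NOT the data of this study.
mb = np.array([4.2, 4.5, 4.8, 5.0, 5.3, 5.6, 5.9])
ms = np.array([3.9, 4.4, 4.9, 5.2, 5.7, 6.1, 6.6])
a, b = orthogonal_regression(mb, ms)
ms_pred = a + b * 5.0  # convert a hypothetical mb = 5.0 event to Ms
```

Unlike ordinary least squares of Ms on mb, the orthogonal fit treats both magnitudes as uncertain, which is why it is preferred for conversion relations.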
Based on this uniform catalog, the seismicity of the region of interest was studied, and seismicity parameters were calculated using the method proposed by Kijko and Sellevoll (1992), which accounts for magnitude uncertainty and data completeness. To obtain more reliable results, the completeness of the catalog and the uncertainty of the magnitudes were estimated and incorporated into the calculations. In this method, the catalog is divided into an extreme part and a complete part, and the complete part can be subdivided into several subcatalogs, each with its own completeness threshold. In this work, the data were separated into six complete parts with threshold magnitudes ranging from Mc = 3.2 (for events after 1996) to Mc = 5.0 (for events after 1939). To estimate the completeness magnitude (Mc), a combination of two methods was used: the maximum curvature (MAXC) method and the goodness-of-fit test (GFT). Where a lack of data did not allow the GFT to be used, only the MAXC method was applied.
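The MAXC estimator mentioned above can be sketched in a few lines: it takes the magnitude bin with the highest event count as Mc, often with a small empirical correction (e.g., +0.2, following Woessner and Wiemer, 2005). The catalog below is synthetic, not the study's data.

```python
import numpy as np

def mc_maxc(mags, bin_width=0.1, correction=0.2):
    """Maximum-curvature (MAXC) estimate of the completeness magnitude:
    the left edge of the most populated magnitude bin, plus an optional
    empirical correction."""
    mags = np.asarray(mags, float)
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    peak = edges[np.argmax(counts)]
    return round(peak + correction, 1)

# Synthetic catalog: Gutenberg-Richter (b = 1.0) above M 3.2, with only a
# thin, incompletely recorded tail of smaller events below it.
rng = np.random.default_rng(0)
complete = 3.2 + rng.exponential(1 / (1.0 * np.log(10)), 5000)
incomplete = rng.uniform(2.0, 3.2, 300)
mc = mc_maxc(np.concatenate([complete, incomplete]))
```

The GFT complements MAXC by checking how well a Gutenberg-Richter model fits above each candidate Mc, which is why the two are combined when enough data exist.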
The magnitude uncertainty of each event is taken to be 0.2 for modern-instrumental and 0.3 for early-instrumental earthquakes when Ms has been assigned directly from seismogram readings. For events whose surface-wave magnitudes were obtained through the empirical conversion relations, the magnitude uncertainty is taken to be 0.4. For historical events, uncertainties are considered to be on the order of 0.4 to 0.8 (Mirzaei et al., 1997a), depending on the quality grade (a, b, c, d) assigned by Ambraseys and Melville (1982).
For the Central-East Iran part of the region of interest, the results give a b-value of 0.81 ± 0.12, λ = 0.48 ± 0.129 and Mmax = 7.8 ± 1.74. For the Zagros part, the b-value is 0.95 ± 0.05, λ = 4.976 ± 0.413 and Mmax = 7.4 ± 0.67. The impact of the classification of the data on the seismicity parameters has been investigated; the results show that it has a significant impact on the b-value (or β) and on the recurrence rate of seismic events (λ), while its effect on Mmax is negligible. Furthermore, classifying the complete part of the catalog significantly decreases the uncertainty in the evaluated parameters Mmax, λ and b (or β). The minimum impact on parameter uncertainty appears in b, and the maximum in Mmax.
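Given a b-value and an activity rate λ, recurrence rates follow from the doubly truncated Gutenberg-Richter relation. A minimal sketch using the Zagros values reported above; treating λ as the annual rate above an assumed Mmin = 4.0 is an illustrative assumption, not a statement of the paper's parameterization.

```python
import math

def annual_rate(m, lam, b, m_min, m_max):
    """Annual rate of events with magnitude >= m under a doubly truncated
    Gutenberg-Richter distribution, where lam is the rate above m_min."""
    beta = b * math.log(10)  # beta = b * ln(10)
    if m >= m_max:
        return 0.0
    num = math.exp(-beta * (m - m_min)) - math.exp(-beta * (m_max - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return lam * num / den

# Zagros part of the study region (b, lam, Mmax from the text; Mmin assumed).
rate_6 = annual_rate(6.0, lam=4.976, b=0.95, m_min=4.0, m_max=7.4)
period_6 = 1.0 / rate_6  # mean return period of M >= 6.0, in years
```

The truncation at Mmax is what makes the rate vanish at the maximum magnitude, unlike the unbounded Gutenberg-Richter law.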
https://jesphys.ut.ac.ir/article_30202_035b677c36a25335192c69d6e487c8e4.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013
Analysis of local earthquake relocation method using a nonlinear method: an application to relocate earthquakes with Mn ≥ 4.0 in Central Alborz region (2006-2010)
Vahid Maleki, Zaher Hossein Shomali, Mohammad Reza Hatami
pp. 23-37, DOI: 10.22059/jesphys.2013.30203

The earthquake location process plays an important role in many seismological applications, including seismic tomography. The relationship between the travel-times of seismic phases and the earthquake hypocenter (latitude, longitude, depth) and origin time is nonlinear, and many different linearized methods have been implemented over the years to linearize this relationship. The principle underlying linearized methods was given by Geiger (1912), based on a Taylor series expansion. Attempts have also been made to incorporate higher-order terms of the Taylor series in the earthquake location process (e.g., Thurber, 1985).
Using higher-order terms of the Taylor series provides a more constrained solution at the expense of computational cost. Fully nonlinear earthquake location algorithms were also developed and discussed earlier, e.g., by Tarantola and Valette (1982) and Tarantola (1987). In these algorithms, the earthquake location is described by a probability density function (PDF) over all possible points around the hypocenter. In this study we use a nonlinear probabilistic method based on global search. In this method, calculation of partial derivatives is not required, and there is a higher probability of converging to the global minimum of the nonlinear problem (e.g., Lomax et al., 2008). The optimal solution can be found using different algorithms, such as Metropolis-Gibbs sampling (Metropolis et al., 1953), grid search (e.g., Sambridge and Mosegaard, 2002) and the oct-tree importance sampling algorithm (Lomax and Curtis, 2001). The oct-tree importance sampling algorithm is faster, more complete and simpler than the other methods, e.g., grid search and Metropolis-Gibbs. The oct-tree algorithm produces accurate, efficient and complete mappings of probability density functions (PDFs) (e.g., Lomax and Curtis, 2001). In this study we use oct-tree importance sampling to find the optimal solution.
In this study, we apply the nonlinear earthquake location method developed by Lomax et al. (2000) to local earthquakes with magnitude Mn ≥ 4 that occurred in the Central Alborz region during 2006-2010. The nonlinear location method is based on a posterior probability density function (PDF) determined for the model parameters. This function represents a complete probabilistic solution of the earthquake location problem, including information on the uncertainties due to phase-picking errors, calculated travel-times and the network geometry. We perform various synthetic tests to evaluate the performance of the nonlinear method where the location problem is ill-conditioned due to station geometry and phase-picking error. In particular, we test the effects of the azimuthal gap and of the distance to the nearest station.
In the synthetic tests, we consider four events in different situations located outside an assumed seismic network. The network includes eight stations with a station spacing on the order of 15 km. The azimuthal gap of the events varies in the range of 225°-317°. In these tests we also add Gaussian-distributed noise to the arrival times, in order to investigate the performance of the nonlinear location method in the simultaneous presence of a large azimuthal gap and noisy data. In further synthetic tests examining the effect of the distance from the earthquake to the nearest station, we consider different situations with different noise levels in the arrival times: in the first case an event inside a dense network with a station spacing on the order of 15 km, and in the second case the same network expanded to a 150 km station spacing.
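The azimuthal gap quoted above is the largest angular interval, seen from the epicenter, that contains no station. A minimal sketch; the flat (x, y) coordinates and station positions are hypothetical, not the paper's network.

```python
import math

def azimuthal_gap(event, stations):
    """Largest gap (degrees) between sorted station azimuths as seen from
    the event epicenter; coordinates are simple (x, y) pairs in km."""
    ex, ey = event
    az = sorted(math.degrees(math.atan2(sx - ex, sy - ey)) % 360.0
                for sx, sy in stations)
    gaps = [az[i + 1] - az[i] for i in range(len(az) - 1)]
    gaps.append(360.0 - az[-1] + az[0])  # wrap-around gap through north
    return max(gaps)

# Hypothetical one-sided network: all stations north-east of the event,
# producing a large azimuthal gap, as in the synthetic tests above.
event = (0.0, 0.0)
stations = [(10.0, 10.0), (15.0, 25.0), (5.0, 30.0), (20.0, 15.0)]
gap = azimuthal_gap(event, stations)
```

An event surrounded by stations (gap well below 180°) is far better constrained than one outside the network, which is exactly the ill-conditioned case these tests probe.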
To relocate the earthquakes with magnitude Mn ≥ 4 that occurred in the Central Alborz region during 2006-2010, we used the data set of the Iranian Seismological Center (IRSC), including the arrival times of P and S phases. We used three sub-networks of the IRSC network: Tehran, Sari and Semnan. In addition, to enhance the station coverage, especially in the southern part of the Alborz region, we added the data of two stations from the Isfahan sub-network, namely KLH and ZEF.
Finally, we show the robustness of the nonlinear location algorithm in the presence of outliers by analyzing the shape, size and position of the 68% confidence ellipsoid, which can be calculated from the PDF to track changes in the distribution of the PDF with changing station geometry.
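One common way to obtain such a confidence ellipsoid is from the covariance of the PDF samples: its eigenvectors give the ellipsoid axes and its scaled eigenvalues the semi-axis lengths. A minimal 2-D sketch with synthetic samples (the Gaussian PDF, its covariance and the chi-squared scaling are illustrative assumptions, not the paper's PDFs):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic location-PDF samples (km), elongated along x as can happen
# with a one-sided station geometry.
samples = rng.multivariate_normal([0.0, 0.0], [[9.0, 0.0], [0.0, 1.0]], 20000)

cov = np.cov(samples.T)
eigvals, eigvecs = np.linalg.eigh(cov)  # principal axes of the ellipse
# 68% confidence scaling for 2 degrees of freedom: chi2 quantile ~ 2.28.
k = 2.28
semi_axes = np.sqrt(k * eigvals)  # semi-axis lengths of the 68% ellipse
```

The elongation of the ellipse (long axis much greater than short axis here) is the signature of poorly constrained direction that the paper tracks as station geometry changes.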
We find that the nonlinear method is robust in the presence of a large azimuthal gap (e.g., 300°) and large Gaussian errors of up to 1.0 s, and is able to locate earthquakes with an error of less than 5 km. We relocated 16 earthquakes with Mn ≥ 4.0 that occurred in the Central Alborz region between 2006 and 2010. Despite the large azimuthal gaps and large station spacing in the data set used in this study, 10 earthquakes were located with a horizontal error of less than 3 km. To verify the quality of the results, we compared the nonlinear locations with those reported by IGUT and IIEES. The comparison shows that the nonlinear relocation solutions are, in most cases, closer to the IGUT solutions.
https://jesphys.ut.ac.ir/article_30203_ccb08b0fa238d34b0bff7eff0153d99e.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013
Experimental study of the effect of thermal stresses on the failure of a carbonate rock, southern Iran
Mahdi Alipour Kallehbasti, Bahman Bohloli
pp. 39-48, DOI: 10.22059/jesphys.2013.30204

The productivity of an oil well is an important parameter in any oil field development. A low production rate imposes the drilling of more production wells, which directly jeopardizes the economic feasibility of the project. The oil production rate in some wells is very low due to the low permeability of the reservoir rock. Practices such as hydraulic fracturing and acid fracturing are very common in the oil industry. In these methods, high-pressure fluid (or acid) is injected into the reservoir, and when the pressure exceeds the rock strength, the rock starts to break down. Thermal cracking is one of the methods used indirectly to increase rock mass permeability and, consequently, oil production.
The thermal stress induced by injecting cold fluid into the reservoir is in fact part of the stress needed to break down the reservoir rock during the hydraulic fracturing process. If this stress exceeds a critical magnitude, the rock will fail, leading to permeability and productivity enhancement. The effect of thermal stress on the enhancement of injection wells during water flooding and on wellbore stability has been investigated and reported in several papers; however, thermal stress as a stimulation method in its own right has not been addressed elsewhere.
In this experimental study, the effect of a temperature gradient and the induced thermal stress was investigated on rock from one of Iran's carbonate oil reservoirs. The field is a carbonate reservoir with a low permeability of less than a few millidarcies. Well production is very low, as low as 300 STB/day in some wells, which makes the field a good candidate for a stimulation job. Several samples were taken from the reservoir and prepared for the experiments. The carbonate rock of the reservoir is very heterogeneous and contains layers of clay. An initial thin-section study shows that the heterogeneous texture of the reservoir rock has the potential for thermal stress concentration. Three categories of fractures were distinguished in the reservoir rock, which could act as weak surfaces in the process. Numerical simulation was performed with the FLAC simulator to understand how heat propagates with time in the rock sample. An experimental setup was designed for cooling the rock samples. Each rock sample was heated up to the reservoir temperature, which is almost 90°C, and capped with a glass cap on its top. Hot water was circulated through the glass cap to keep the sample at reservoir temperature during the experiment. The sample was then cooled from one side to study the effect of temperature on the rock. The samples were CT-scanned along and perpendicular to their axes before and after the experiment, and each section was carefully analyzed for possible fractures. The experimental results showed that thermal cracking is feasible by imposing a 90°C temperature variation. Experiments were performed on four samples, two of which showed induced tiny fractures. The mini-fractures were mostly in the middle part of the sample, which is related to the temperature profile and the induced thermal stress front inside the sample.
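The heat propagation that the FLAC model and the one-sided cooling experiment probe can be illustrated with a minimal explicit finite-difference sketch of 1-D transient conduction. The diffusivity, sample length and boundary temperatures below are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

# 1-D sample initially at reservoir temperature, suddenly cooled at one face.
alpha = 1.0e-6            # thermal diffusivity of rock, m^2/s (illustrative)
L, n = 0.1, 51            # 10 cm sample discretized into 51 nodes
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha  # below the explicit stability limit of 0.5

T = np.full(n, 90.0)      # initial temperature, deg C
T[0] = 0.0                # cooled face held at 0 deg C

for _ in range(2000):     # march the explicit scheme forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0], T[-1] = 0.0, 90.0  # fixed-temperature boundary conditions

# After ~3200 s the cooling front has advanced part-way into the sample:
# temperature rises monotonically from the cold face to the hot face.
```

The steep temperature gradient near the cooled face is where the induced thermal stress concentrates, consistent with fractures forming where the thermal front sits at a given time.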
Our experiments show that the thermal stress effect can cause fracture initiation in the sample and may be used as a driving force for fracturing, in conjunction with the hydraulic fracturing process, for well stimulation. The results also indicate that thermal stress is important and should be considered in processes such as wellbore stability analysis.
https://jesphys.ut.ac.ir/article_30204_da8e3ab6267063d983b444fdcfdcb7ed.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013
Simulation of reservoir permeability using porosity and acoustic impedance data
Keyvan Najafzadeh, Mohammad Ali Riahi, Mohsen Seyedali
pp. 49-56, DOI: 10.22059/jesphys.2013.30205

Permeability is a key parameter in reservoir characterization. In fact, without accurate information about reservoir permeability, there is no good solution for reservoir engineering problems. Up to now, reservoir permeability values have been calculated through laboratory measurements or well-testing methods. These methods are useful, but they cannot describe a reservoir reliably and precisely. Well logs and core analysis have also been used for permeability estimation, but correlation analyses show that estimation through these methods alone is not reliable.
Direct prediction of reservoir permeability from seismic data is often considered impossible due to the resolution limitations of seismic data and the hydraulic nature of permeability. In many cases, reservoir permeability estimation is restricted to the core scale and the wellbore vicinity. Despite these limitations, recent studies have shown that there are empirical relationships between certain petrophysical parameters and permeability.
In recent years, geostatistics and stochastic modeling have made a tremendous impact on scientific investigations, and geostatistics has become an increasingly important factor in the development of petroleum resources. Stochastic geostatistical modeling is therefore a reliable tool for estimating reservoir parameters such as porosity and permeability. In geostatistics, we assume that variables influence one another, so the first concept of geostatistics is the variogram of the values we want to estimate, which describes the spatial correlation between them. The distance over which variables influence each other is called the range of the variogram; it is an important concept because beyond that distance the variables have no effect on each other. Geostatistical methods are meaningful within the range; outside it, geostatistics is not applicable.
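The variogram concept described above can be sketched as a simple empirical semivariogram over 1-D sample pairs. The spatially correlated profile below is synthetic, not the field's porosity data.

```python
import numpy as np

def empirical_variogram(coords, values, lags, tol):
    """Empirical semivariogram: gamma(h) is the mean of 0.5*(z_i - z_j)^2
    over point pairs whose separation lies within tol of the lag h."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    d = np.abs(coords[:, None] - coords[None, :])      # pairwise distances
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = (np.abs(d - h) <= tol) & (d > 0)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Synthetic porosity-like profile: a smooth spatial trend plus small noise,
# so semivariance grows with lag until it levels off near the range.
rng = np.random.default_rng(2)
x = np.arange(0.0, 100.0, 1.0)
z = np.sin(x / 15.0) + 0.05 * rng.standard_normal(x.size)
gam = empirical_variogram(x, z, lags=[2.0, 10.0, 20.0], tol=0.5)
```

The rising semivariance with lag is exactly the spatial-correlation structure that kriging and co-kriging exploit; beyond the range, the curve flattens and points carry no mutual information.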
The investigated field is located in the Persian Gulf between Iran and Qatar, 80 km southwest of Lavan Island and 40 km southeast of the South Pars gas field. The study area contains two reservoir structures, Sarvak and Surmeh. To estimate the permeability values of the reservoir, all the required data should first be collected: the porosity logs and the 3D seismic data of the study area. The 3D seismic acquisition of this field was carried out by TOTAL FINA ELF in 2000, over a total area of about 96 km². The data were interpreted by TOTAL in 2003, and the interpretations in this article have been validated against that work. There are twelve wells in the study area, of which seven have porosity logs.
First, a reliable structural model of the study area (one of the hydrocarbon fields in the south of Iran) was built from the seismic data and well information. The seismic data were used to interpret the horizons and characterize the faults in the reservoir. After determining the horizon surfaces, some horizons that were not visible in the seismic data were determined from well information using well tops. Using all the horizons and fault planes, we constructed a structural model. The intervals between horizons were divided into zones, and, according to the importance of each zone, a number of layers were defined within it. Zones lying within the reservoir intervals were given more layers in order to increase the accuracy of the estimation.
After collecting the required data and building the model, the reservoir porosity values were estimated geostatistically with the aid of the seismic impedance values. The collocated co-kriging method was selected for this estimation; in this method, a secondary variable supplements the main data and improves the accuracy of the estimation. The geostatistical estimation began with an analysis of the data based on variogram calculation. Then, using the variogram values and taking the acoustic impedance values as the secondary variable in collocated co-kriging, the porosity of the reservoir was calculated. There are some anhydrite layers in the reservoir, to which a porosity of zero was assigned in order to exert more control over the results and obtain a more reliable porosity estimate. The reservoir porosity thus becomes known in all parts of the model, with values ranging between 0 and 30 percent.
In the next step, permeability was estimated from the porosity values, which correlate well with permeability. First, variography of the permeability values was carried out. Then, using the Sequential Gaussian Simulation (SGS) method as a reliable estimation tool, reservoir permeability was calculated from porosity. The estimation was performed separately in each zone of the reservoir.
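The SGS step can be sketched in one dimension as follows. The grid, covariance model, and conditioning values are hypothetical; a production implementation would also use search neighborhoods and normal-score transforms:

```python
import numpy as np

def sgs_1d(x_grid, x_data, z_data, rng_a=30.0, seed=1):
    """Sequential Gaussian simulation of a standardized variable on a
    1-D grid, conditioned on the well data in (x_data, z_data).

    Each grid node, visited in random order, gets a simple-kriging mean
    and variance computed from all currently known values, and its value
    is drawn from the corresponding Gaussian distribution.
    """
    rng = np.random.default_rng(seed)
    cov = lambda h: np.exp(-3.0 * np.abs(h) / rng_a)  # exponential model
    xs, zs = list(x_data), list(z_data)               # conditioning set
    sim = np.empty_like(x_grid)
    for i in rng.permutation(len(x_grid)):
        xa, za = np.asarray(xs), np.asarray(zs)
        lam = np.linalg.solve(cov(xa[:, None] - xa[None, :]),
                              cov(xa - x_grid[i]))
        mean = lam @ za
        var = max(1.0 - lam @ cov(xa - x_grid[i]), 0.0)  # SK variance
        sim[i] = rng.normal(mean, np.sqrt(var))
        xs.append(x_grid[i])                              # sequential: the
        zs.append(sim[i])                                 # node joins the data
    return sim

# Hypothetical standardized permeability data at two well locations.
grid = np.arange(2.5, 100.0, 5.0)
sim = sgs_1d(grid, x_data=[10.0, 70.0], z_data=[1.0, -1.0])
```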
Again, because there are anhydrite layers in the reservoir structure, zero permeability was assigned to them for better control of the permeability estimation, yielding a better estimate over the total volume of the reservoir. Permeability values in the reservoir range between 0 and 100 mD.
Finally, the validity of the permeability estimation had to be assessed, which was done by cross validation. The data of one well were removed from the model, permeability was estimated from the remaining wells, and the estimate at the well location was compared with the well permeability to compute a correlation coefficient. This procedure was carried out for wells 3W-01 and 3W-03, giving correlation coefficients of 0.86 and 0.81, respectively. These coefficients show that the permeability estimates are acceptable and that Sequential Gaussian Simulation combined with collocated co-kriging is a capable method for reservoir characterization. Sequential Gaussian Simulation alone is not reliable enough for porosity and permeability estimation, especially in a complicated geological structure, because it relies on mathematical calculations only and its results may not agree with the geological reality. This problem was solved by combining acoustic impedance data with the permeability values: importing acoustic impedance guides the SGS algorithm and makes it more consistent with the real earth model.

Permeability is a key parameter in reservoir characterization; without accurate information on reservoir permeability there is no good solution for reservoir engineering problems. Until now, reservoir permeability values have been obtained from laboratory measurements or well testing. These methods are useful, but they cannot describe a reservoir reliably and precisely. Well logs and core analysis have also been used for permeability estimation, but correlation studies show that estimates obtained this way are not reliable either.
Direct prediction of reservoir permeability from seismic data is often considered impossible because of the resolution limits of seismic data and the hydraulic nature of permeability. In many cases, permeability estimation is restricted to the core scale and the wellbore vicinity. Despite these limitations, recent studies have shown that empirical relationships exist between certain petrophysical parameters and permeability.
In recent years, geostatistics and stochastic modeling have had a tremendous impact on scientific investigations, and geostatistics has become increasingly important in the development of petroleum resources. Stochastic geostatistical modeling is therefore a reliable tool for estimating reservoir parameters such as porosity and permeability. Geostatistics assumes that nearby values influence each other, so its first concept is the variogram of the values to be estimated, which describes their spatial correlation. The distance over which variables influence each other is called the range of the variogram; it is an important concept because beyond that distance the variables have no effect on each other. Geostatistical methods are meaningful within the range, and outside it geostatistics is not applicable.
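The variogram and its range can be illustrated with a short sketch on a synthetic 1-D series (the data and lag spacing below are hypothetical, not from the study):

```python
import numpy as np

def experimental_variogram(z, max_lag):
    """Experimental semivariogram of a regularly sampled 1-D series:
    gamma(h) = 0.5 * mean((z[i+h] - z[i])**2)."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    return lags, gamma

# Synthetic spatially correlated series: a 10-sample moving average of
# white noise, so the correlation length (the range) is about 10 samples.
rng = np.random.default_rng(0)
z = np.convolve(rng.standard_normal(2000), np.ones(10) / 10, mode="valid")

lags, gamma = experimental_variogram(z, max_lag=30)
# gamma rises with lag and levels off at the sill beyond the range;
# pairs separated by more than the range are effectively uncorrelated.
```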
https://jesphys.ut.ac.ir/article_30205_76e88e492dd950a2e7655dbea4dc505d.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013-02-19. "Porepressure prediction using seismic inversion and velocity analysis", pp. 57-70, article 30206, DOI 10.22059/jesphys.2013.30206. Hamidreza Soleymani, Mohammadreza Sokooti, Mohammad Ali Riahi. Journal Article.

Pore pressure can be estimated prior to drilling from seismic velocities. In this approach, the final calculated pore pressure is strongly affected by the accuracy of the velocity data.
To generate an accurate velocity cube with adequate resolution, velocities from well logs and post-stack 3-D seismic inversion are used together with stacking velocities from the velocity analysis of the 3-D seismic data. Using Bowers's effective-pressure method (1995) with the known velocity and density values at well locations, the coefficients of Bowers's equation can be calculated. Through Gardner's equation, velocity can be related to density at each well location, and finally the overburden stress can be determined for the entire survey.
Pore pressure prediction is one of the most important stages in drilling new wells. According to Huffman (2002), “pressure prediction at the basin scale can be very powerful in (1) determining where the source rock is actively maturing (2) determining where large scale fluid migration is occurring in a basin (3) predicting the behavior of large regional faults and structures (4) identify the presence of secondary pressure area (5) constraining the porosity model and (6) evaluating the integrity of vertical seal in the basin”. Moreover, pressure information obtained prior to drilling can ensure the economic advantages of safe drilling.
Overpressure refers to a state in which the pore-fluid pressure exceeds the normal hydrostatic pressure at a given depth. It usually results from restriction of fluid flow due to rapid sedimentation, or from fluid expansion, both of which increase the pore pressure. Since the pioneering work of Pennebaker (1968), many papers have dealt with pressure prediction prior to drilling. Dutta (2002) provides good insight into seismic-based methods and the possible pitfalls of using different velocities to estimate pore-fluid pressure. Bowers (1995), using data from the Gulf of Mexico, proposed an outstanding empirical equation that covers both the compaction and the fluid-expansion mechanisms. Since these two mechanisms are the most frequent causes of overpressure, Bowers's equation is an appropriate approach in most cases. It is important to note that, to obtain the desired accuracy, the constants A and B (the virgin-curve parameters) in Bowers's equation should be determined from the available offset-well data in each survey.
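Determining the virgin-curve constants A and B from offset-well data amounts to a line fit in log space, since Bowers's relation V = V0 + A·σ^B is linear after taking logarithms. A minimal sketch with hypothetical, noise-free calibration points (not data from this survey):

```python
import numpy as np

# Hypothetical calibration pairs from offset wells: effective stress
# sigma (MPa) and velocity V (m/s). V0 = 1480 m/s is a commonly used
# unconsolidated-sediment velocity.
V0 = 1480.0
sigma = np.array([5.0, 10.0, 20.0, 35.0, 50.0])
v = V0 + 400.0 * sigma ** 0.7          # synthetic data with A=400, B=0.7

# Bowers's virgin curve V = V0 + A * sigma**B becomes a straight line in
# log space: log(V - V0) = log(A) + B * log(sigma).
B_fit, logA_fit = np.polyfit(np.log(sigma), np.log(v - V0), 1)
A_fit = np.exp(logA_fit)
```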
In the pressure-estimation process, the overburden pressure must first be estimated. The Gardner equation (Gardner et al., 1974) is therefore calibrated by computing its parameters from the available density and velocity logs. Knowing the velocity and having the calibrated Gardner equation in hand, one can calculate the overburden pressure. The effective pressure can then be obtained with acceptable accuracy from the calibrated Bowers equation, and finally the pore pressure is estimated with Terzaghi's equation (Terzaghi, 1943). It should be stressed that generating velocities from the processing of the 3-D seismic data, the rock densities, and the calibrated Bowers equation are the crucial steps of this workflow.
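The chain Gardner → overburden → Bowers (inverted) → Terzaghi described above can be sketched as follows. The velocity profile, Bowers coefficients, and metric Gardner constants are illustrative assumptions, not values from the Mansouri survey:

```python
import numpy as np

def pore_pressure(depth, vel, A=400.0, B=0.7, V0=1480.0, a=0.31, b=0.25):
    """Pore pressure (MPa) from an interval-velocity profile.

    Gardner:          rho = a * V**b        (rho in g/cc, V in m/s)
    Overburden:       S = integral(rho * g dz)
    Bowers, inverted: sigma = ((V - V0) / A)**(1 / B)
    Terzaghi:         Pp = S - sigma
    """
    g = 9.81
    rho = a * vel ** b                       # bulk density, g/cc
    # Trapezoidal overburden; g/cc * m/s^2 * m = kPa, so divide by 1000
    # to get MPa.
    S = np.concatenate(([0.0], np.cumsum(
        0.5 * (rho[1:] + rho[:-1]) * g * np.diff(depth)))) / 1000.0
    sigma = ((vel - V0) / A) ** (1.0 / B)    # effective stress, MPa
    return S - sigma

depth = np.linspace(0.0, 3000.0, 7)     # m, hypothetical profile
vel = 1600.0 + 0.6 * depth              # m/s, hypothetical velocity trend
pp = pore_pressure(depth, vel)
```

In practice the coefficients would be calibrated against measured well pressures, as the article describes.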
The final goal of this paper is to present a proper method for generating and validating a 3-D pore-pressure model from conventional data for a large survey at the Mansouri oil field (one of the oil fields of southwestern Iran).
It is shown that the accuracy and resolution of the velocity model can be greatly enhanced by using the available well logs and a proper geostatistical method. This high-resolution velocity model yields a high-resolution pore-pressure model that reveals much about reservoir integrity and characterization. The effective pressure is calculated using Bowers's method, calibrated with measured pressure data at some well locations; finally, the pore pressure is calculated using Terzaghi's equation.
Study of the final pore-pressure map reveals no major pressure anomaly within the Asmari reservoir. In addition, an odd-looking low-pressure anomaly at the beginning of the second segment does not correspond to a real drop in pressure (based on the pressure measured in the well); this artifact is introduced by changes in formation petrology at the reservoir seal. Geological study prior to pressure interpretation is therefore an imperative stage of pressure estimation.
Comparison of the predicted pore pressure with the pressure measured at a well location that was not included in the calibration phase shows that the final pressure estimate is reliable and in good agreement with the measured data.
The high-resolution pressure data are suitable for well planning and provide detailed information on pressure variations along future well trajectories. The 3-D pore-pressure cube, together with the effective-pressure, overburden-pressure and fracture-pressure cubes, is obtained and used for regional geomechanical and pressure assessments.
https://jesphys.ut.ac.ir/article_30206_e70b8909a6486ac3e504c7b2b361f53f.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013-02-19. "Sensitivity analysis of viscoelastic models of the earth crustal deformation to input parameters based on outcome surface deformation field", pp. 71-88, article 30207, DOI 10.22059/jesphys.2013.30207. Asghar Rastbood, Behzad Voosoghi. Journal Article.

The scope of this research is to investigate the effect of the geometrical and physical input parameters of crustal viscoelastic deformation models; the viscoelastic model of Wang et al. (2006) is used for the analysis. The increasing quality of data on the time-dependent deformation of the Earth's surface can be used to extract more detail on the spatial and temporal development of earthquake-related crustal deformation. Different variables are involved in these processes, and some are determined more accurately than others. In this research, surface deformation is modeled for a dip-slip and a strike-slip fault in a medium composed of an elastic layer over an inelastic half-space, and a sensitivity analysis is performed on all geometrical and physical parameters. Applying this analysis to the parameters that are determined least accurately from displacement-field data is recommended: the viscosity of the half-space, the thickness of the elastic layer, and the dip angle of the fault plane.
According to the results of this research, the sensitivity of the viscoelastic model to its input parameters is independent of the type of faulting. The variability analysis determines where the displacement data most suitable for estimating each parameter are located. For example, both the coseismic and the postseismic displacements depend strongly on the fault slip above the rupture surface, whereas they are less sensitive to the fault length.
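The one-at-a-time style of sensitivity analysis used here can be illustrated with a deliberately simple forward model — a 2-D screw-dislocation (strike-slip) coseismic displacement in an elastic half-space — rather than the layered viscoelastic model of Wang et al. (2006); the parameter values and the choice of perturbation size are hypothetical:

```python
import numpy as np

def coseismic_u(x, slip, D):
    """Fault-parallel surface displacement of an infinitely long
    strike-slip fault ruptured from the surface to depth D (2-D screw
    dislocation in an elastic half-space): u = (slip / pi) * atan(D / x)."""
    return (slip / np.pi) * np.arctan2(D, x)

def oat_sensitivity(x, base, name, delta=0.1):
    """One-at-a-time sensitivity: RMS change of the surface displacement
    field when parameter `name` is perturbed by +/- delta (fractional)."""
    lo, hi = dict(base), dict(base)
    lo[name] *= 1.0 - delta
    hi[name] *= 1.0 + delta
    u0 = coseismic_u(x, **base)
    du = 0.5 * (np.abs(coseismic_u(x, **hi) - u0) +
                np.abs(coseismic_u(x, **lo) - u0))
    return np.sqrt(np.mean(du ** 2))

x = np.linspace(1.0, 100.0, 200)      # distance from the fault trace, km
base = {"slip": 2.0, "D": 15.0}       # hypothetical slip (m) and depth (km)
s_slip = oat_sensitivity(x, base, "slip")
s_depth = oat_sensitivity(x, base, "D")
```

Comparing such RMS deviations across parameters, and mapping where along the profile they peak, is the kind of variability analysis the article performs on the full viscoelastic model.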
When analyzing the co-seismic displacement, a strong dependency on the dip angle of the fault plane is found. Points with large displacements also show a large variability when the dip angle varies. The area over the rupture plane is the one where the largest displacements take place. Therefore, surface measurements in this area are the most suitable for ascertaining the most likely value for the dip angle.
By contrast, varying the thickness of the elastic layer generally leads to small differences, which are especially small in the area immediately beyond the surface projection of the lower end of the rupture plane. This indicates that co-seismic displacement measurements, especially around that area, are not recommended for ascertaining an accurate value of this parameter.
In the analysis of the post-seismic deformation it is found that, on average, deviations from a reference model are large above the rupture plane when the viscosity or the thickness of the elastic layer is varied. In this area, the dip angle does not influence the results as much as the other two parameters. Farther from the rupture surface along the dip direction, in the area immediately beyond the surface projection of the lower end of the rupture plane, the dip angle becomes the most influential parameter. Farther still, where the magnitude of the horizontal displacement reaches another maximum, the viscosity and the thickness of the elastic layer again have a greater effect than the dip angle of the rupture surface.
Accordingly, measurements in areas of large post-seismic displacements are more suitable for deriving a value for the viscosity. In particular, above the rupture area, values depend heavily on this parameter. The same region can also provide data that is useful for ascertaining the thickness of the elastic layer, although for this parameter the area where the minimum displacement occurs is not as appropriate as for the viscosity.
The dip angle, in general, cannot be accurately derived using post-seismic deformation data. The magnitude of the variability associated with this parameter is very small. The best place to find a value for this parameter is the area where the post-seismic displacement changes direction.
Thus the best data for deriving the viscosity are the post-seismic deformation over the rupture area, although any area with large-magnitude post-seismic displacements can provide useful data. For the thickness of the elastic layer it is likewise advisable to use post-seismic data from the area above the fault plane, whereas the dip is better determined from coseismic data.
Both the coseismic and the postseismic displacements depend strongly on the locking depth of the fault, especially above the rupture surface; the sensitivity is particularly high for the coseismic displacements.
The coseismic and postseismic displacements show only small sensitivity to the Lamé coefficients of the elastic layer above the rupture surface of the fault, and even less to the Lamé coefficients of the viscoelastic half-space.
The coseismic displacements show no sensitivity to the density of the elastic layer or of the viscoelastic half-space, while the postseismic displacements show only a small sensitivity; modeling is therefore not an appropriate tool for determining the density of the elastic layer or the viscoelastic half-space.
https://jesphys.ut.ac.ir/article_30207_d02644ea078e81a86fe21c7ddfad1224.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013-02-19. "The effect of numerical differentiation methods on the earth’s gravity field recovery", pp. 89-103, article 30208, DOI 10.22059/jesphys.2013.30208. Mahrouz Khademi, Mahdi Najafi Alamdari, Mohammad Ali Sharifi (ORCID 0000-0003-0745-4147). Journal Article.

The recent dedicated satellite-gravimetry missions have provided a huge amount of high-quality gravity data with global coverage. From a computational point of view, estimating the unknown gravity-field parameters is a highly demanding task because of the sheer number of observations and unknown coefficients, and different computational schemes have been proposed to tackle the problem. Since the early days of satellite geodesy, energy-balance methods for gravity-field determination have been considered: if the non-conservative forces are known, the Hamiltonian along the orbit is a constant of the motion, so the gravity field can be determined when the position and velocity of the satellite are known and accelerometer measurements are available to model the non-conservative part. A satellite mission dedicated to improving our knowledge of the Earth's gravitational field with a direct (in situ) measurement system was in the proposal stage for a long time at several agencies. Of course, knowledge of the gravitational field also comes from tracking satellites from ground stations, and many long-wavelength models of the field have been deduced from such data.
However, those models derive from observations of a large collection of satellites tracked over various periods during the long history of Earth-orbiting satellites, none of which was launched for the express purpose of providing a global, detailed model of the gravitational field. The method has been applied in a closed-loop simulation to Gravity Recovery and Climate Experiment (GRACE) data, and the results show the high performance of the proposed method. This article focuses on the development of new techniques for global gravity-field recovery from high-low (hl) and low-low (ll) satellite-to-satellite tracking (SST) data. A number of approaches to global gravity-field recovery are known from the literature, including the variational-equations approach, the short-arc approach, the energy-balance approach and the acceleration approach. The focus of the article is the energy-balance approach, with the aim of producing high-quality global gravity-field models from simulated GRACE data. The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept, which can be combined with the high-low SST concept previously realized in the CHAMP mission to achieve a much higher accuracy.
A new, rigorous model is developed for the difference of gravitational potential between two close earth-orbiting satellites in terms of measured range-rates, velocities and velocity differences, and specific forces. It is particularly suited to regional geopotential determination from a satellite-to-satellite tracking mission. Based on energy considerations, the model specifically accounts for the time variability of the potential in inertial space, principally due to earth’s rotation. Analysis shows the latter to be a significant (~1 m²/s²) effect that overshadows by many orders of magnitude other time dependencies caused by solar and lunar tidal potentials. Also, variations in earth rotation with respect to terrestrial and celestial coordinate frames are inconsequential. Results of simulations contrast the new model with the simplified linear model (relating potential difference to range-rate) and delineate the accuracy requirements in velocity vector measurements needed to supplement the range-rate measurements. The numerical analysis is oriented toward the scheduled Gravity Recovery and Climate Experiment (GRACE) mission and shows that an accuracy in the velocity difference vector of 2×10⁻⁵ m/s would be commensurate, within the model, with the anticipated accuracy of 10⁻⁶ m/s in range-rate. A fast iterative method for gravity field determination from low Earth satellite orbit coordinates has been developed and implemented successfully. As the method is based on energy conservation, it avoids problems related to orbit dynamics and the initial state. In addition, the particular geometry of a repeating orbit is exploited using a very efficient iterative estimation scheme, in which a set of normal equations is approximated by a sparse block-diagonal equivalent. Recovery experiments for spherical harmonic gravity field models up to degree and order 70 were conducted based on a 29-day simulated data set of orbit coordinates.
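The block-diagonal approximation of the normal equations mentioned above can be sketched as follows; the matrix sizes and values are invented for illustration, but the point — each diagonal block can be solved independently, far more cheaply than one dense solve — carries over to the repeat-orbit case.

```python
import numpy as np

rng = np.random.default_rng(2)

# Build an exactly block-diagonal symmetric positive-definite normal matrix N
# from three invented SPD blocks (8x8, 12x12, 10x10).
blocks = [rng.standard_normal((m, m)) for m in (8, 12, 10)]
blocks = [B @ B.T + 10 * np.eye(B.shape[0]) for B in blocks]

N = np.zeros((30, 30))
i = 0
for B in blocks:
    m = B.shape[0]
    N[i:i + m, i:i + m] = B
    i += m
b = rng.standard_normal(30)

# Reference: one dense solve of the full system.
x_full = np.linalg.solve(N, b)

# Block-wise solve: each block is an independent small system.
x_blk = np.zeros(30)
i = 0
for B in blocks:
    m = B.shape[0]
    x_blk[i:i + m] = np.linalg.solve(B, b[i:i + m])
    i += m

print(np.allclose(x_full, x_blk))
```

When the true normal matrix is only approximately block-diagonal, the block solve serves as a cheap preconditioner inside the iterative scheme.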
The method was found to be very flexible and could be easily adapted to include observations of non-conservative accelerations, such as (to be) provided by satellites like CHAMP, GRACE, and GOCE.
Thus, calculation of the velocity and acceleration vectors is a necessary stage in Earth’s gravity field recovery using GRACE observations. Different numerical differentiation methods have been proposed to compute the acceleration vector; in this paper, the Newton, spline and Taylor methods have been implemented, and the effect of outliers on the different differentiation techniques has also been investigated. The numerical analysis of the recovered solutions shows that the Newton method yields the optimal solution. The comparison is based on the differences between the simulated and recovered gravity anomalies and geoidal heights.
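A minimal comparison of finite-difference and spline differentiation, in the spirit of the Newton/spline/Taylor comparison above, on an invented smooth position series (not GRACE data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical one-component "orbit coordinate": a smooth sinusoid sampled
# every 5 s, standing in for a GRACE-like position time series.
t = np.arange(0.0, 5400.0, 5.0)
w = 2 * np.pi / 5400.0
x = 7000e3 * np.sin(w * t)                    # position, metres

# Repeated central differences (a simple finite-difference scheme).
a_fd = np.gradient(np.gradient(x, t), t)

# Spline differentiation: fit once, then differentiate the piecewise cubic.
a_sp = CubicSpline(t, x)(t, 2)

a_true = -7000e3 * w**2 * np.sin(w * t)       # analytic acceleration
err_fd = np.max(np.abs(a_fd[2:-2] - a_true[2:-2]))
err_sp = np.max(np.abs(a_sp[2:-2] - a_true[2:-2]))
print(err_fd < 1e-2, err_sp < 1e-2)
```

Both schemes recover the ~9.5 m/s² acceleration amplitude to well below 0.01 m/s² here; outliers in the positions, as the abstract notes, degrade the schemes differently.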
https://jesphys.ut.ac.ir/article_30208_1043d553d04b3cb5db7d1ba317246cc5.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X38420130219A comparison between the horizontal gradient of analytic signal method and the analytic signal-Euler deconvolution combined method using synthetic data in interpretation of magnetic geological structuresA comparison between the horizontal gradient of analytic signal method and the analytic signal-Euler deconvolution combined method using synthetic data in interpretation of magnetic geological structures1051163020910.22059/jesphys.2013.30209FAMojtabaRashvandBehroozOskooi0000-0003-3065-194XJournal Article19700101The analytic signal method is a semiautomatic method for estimating the location of causative bodies in magnetic and gravity surveys. The application of the analytic signal to the interpretation of two-dimensional (2D) structures was introduced by Nabighian (1972). The analytic signal is defined as a complex function whose real and imaginary parts are a pair of Hilbert transforms. In other words, the analytic signal is a combination of the horizontal and vertical gradients of the potential field.
The analytic signal is a symmetric function whose amplitude is sensitive to the parameters of the source. In the case of 2D structures, the amplitude of the analytic signal is independent of directional parameters such as inclination, declination and strike angle (Nabighian, 1972; Atchuta et al., 1981; Roest et al., 1992).
The depth of 2D structures can be estimated from the width of the analytic signal or from the ratio of the analytic signal to its higher derivatives (Hsu et al., 1996; Roest et al., 1992). Source parameters of a dyke, such as width, dip, strike, magnetization and depth, can be estimated by the analytic signal method (Bastani and Pedersen, 2001). The nth-order enhanced analytic signal is defined as the analytic signal of the nth-order vertical derivative of the potential field.
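The analytic-signal construction described above can be sketched for a 2D profile; the anomaly shape, depth and spacing below are invented for illustration. For 2D sources the vertical derivative is the Hilbert transform of the horizontal one, so the amplitude |A| follows from a single FFT-based Hilbert transform, its maximum marks the horizontal location, and its half-amplitude width scales with depth.

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative 2D anomaly profile (not from the paper): T(x) = arctan(x/z0)
# over a contact-like source at an assumed depth z0.
z0 = 40.0                                   # metres
x = np.linspace(-500.0, 500.0, 1001)        # 1 m spacing

T = np.arctan(x / z0)
dT_dx = np.gradient(T, x)

# |A(x)| = |dT/dx + i H(dT/dx)|; for this shape |A| = 1/sqrt(x^2 + z0^2).
A = np.abs(hilbert(dT_dx))

x_est = x[np.argmax(A)]                     # horizontal source location
inside = x[A >= A.max() / np.sqrt(2)]       # half-amplitude zone spans ~2*z0
z_est = 0.5 * (inside[-1] - inside[0])
print(abs(x_est) < 2.0, 30.0 < z_est < 50.0)
```

The width-based depth rule used here is the profile analogue of the width/ratio estimators cited above.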
The horizontal gradient of analytic signal method is an automated method for estimating the depth, horizontal location and shape of 2D magnetic structures. It can handle low-quality data because it uses the first- and second-order derivatives of the potential field in its main equations. The analytic signal method estimates the horizontal location of the source by locating the maximum amplitude of the signal, so noise can affect the estimates; by using the horizontal gradient of analytic signal expressions, on the other hand, all of the source parameters can be approximated simultaneously.
In this method, equations of the analytic signal, Euler enhanced analytic signal and horizontal gradient of analytic signal are combined to derive a linear equation. Using the first order analytic signal, horizontal gradient of analytic signal and linear inversion method, the depth and horizontal location of 2D magnetic bodies are obtained. The location estimation is independent of the shape of the causative bodies. The causative body’s geometry is estimated as a structural index by applying the least squares method.
The selection of data for solving the equations, i.e., the window width, is based on data quality. The optimum window size is chosen so that a specific anomaly, together with its variations, can be detected within one window. In this study, to solve the equations of the horizontal gradient of analytic signal method, only data greater than twenty percent of the maximum amplitude of the analytic signal were used.
The analytic signal-Euler deconvolution combined method is an automated method for estimating the depth and shape of the sources, and is used to interpret 2D and 3D magnetic and gravity data. Substituting the appropriate derivatives of Euler’s homogeneous equation into the equation of the analytic signal yields the major independent equations used to estimate the depth and shape of causative bodies. The horizontal location of causative bodies is estimated by the Euler method or by locating the maximum amplitude of the analytic signal.
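A small sketch of the Euler-deconvolution ingredient on a synthetic 2D profile; the source model and structural index are assumed for illustration, and the derivatives are analytic rather than computed from data. Euler's homogeneous equation, (x − x0)Tx + (z − h)Tz + N·T = 0, rearranges into a linear system in the location (x0, h) and structural index N.

```python
import numpy as np

# Hypothetical 2D point-pole-like source: T = 1/r with r^2 = x^2 + (h - z)^2,
# homogeneous of degree -1 (structural index N = 1). Observations at z = 0.
h_true = 25.0
x = np.linspace(-200.0, 200.0, 401)
r = np.sqrt(x**2 + h_true**2)
T = 1.0 / r
Tx = -x / r**3                 # analytic horizontal derivative
Tz = h_true / r**3             # analytic vertical derivative at z = 0

# Euler's equation as a linear system:  x0*Tx + h*Tz - N*T = x*Tx + z*Tz
# (z = 0 on the observation line), solved by least squares.
A = np.column_stack([Tx, Tz, -T])
b = x * Tx
x0, h, N = np.linalg.lstsq(A, b, rcond=None)[0]
print(abs(x0) < 1e-6, abs(h - h_true) < 1e-6, abs(N - 1.0) < 1e-6)
```

In the combined method the same homogeneity relation is applied to the analytic signal and its gradients instead of the field itself, which removes the dependence on magnetization direction.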
In this study, the accuracy and efficiency of each of the mentioned methods in the interpretation of magnetic anomalies are evaluated. The methods were tested on different synthetic datasets generated by forward modeling: 2D magnetic models were placed at different depths, and random noise was added to some models. Derivatives were calculated in the frequency domain using Fourier transform techniques. In this technique, a bell-shaped edge effect appears at the ends of the profiles; it can be corrected by linearly extending the profiles. An upward continuation filter was applied to some synthetic data to decrease the noise level.
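The frequency-domain operations mentioned above amount to multiplying the Fourier transform of the profile by a wavenumber filter: |k| for a vertical derivative, exp(−|k|h) for upward continuation by h. A sketch of upward continuation, using an invented field whose continuation is known in closed form:

```python
import numpy as np

# Poisson-kernel profile of a 2D source at depth z0; upward continuation by h
# is the same field with z0 -> z0 + h, which gives us an exact reference.
x = np.linspace(-500.0, 500.0, 1024, endpoint=False)
dx = x[1] - x[0]
z0 = 40.0
T = z0 / (x**2 + z0**2)

# Wavenumber-domain upward continuation: multiply by exp(-|k| h).
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
h = 20.0
T_up = np.real(np.fft.ifft(np.fft.fft(T) * np.exp(-np.abs(k) * h)))

T_ref = (z0 + h) / (x**2 + (z0 + h)**2)
err = np.max(np.abs(T_up - T_ref))
print(err < 1e-3, np.max(T_up) < np.max(T))
```

The small residual here comes from the FFT's implicit periodicity, the profile-edge effect the abstract corrects by extending the profiles; continuation also attenuates high wavenumbers, which is why it suppresses noise.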
In this paper, the applicability of the horizontal gradient of analytic signal method and the analytic signal-Euler combined method was tested. Both methods estimate the parameters of the causative bodies without any prior information. In both methods there is no explicit dependence on directional parameters (e.g. magnetization) in the main equations; hence, as the results show, the estimations were not affected by remanent magnetization. The results also show accurate estimation of shape and horizontal location by the horizontal gradient of analytic signal method, and efficient depth estimation by the analytic signal-Euler deconvolution combined method.
https://jesphys.ut.ac.ir/article_30209_99a36d7a74a49e5f55d9e9337d90b477.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X38420130219Enhancing the resolving power of 2D least-squares inversion of magnetotelluric dataEnhancing the resolving power of 2D least-squares inversion of magnetotelluric data1171303021010.22059/jesphys.2013.30210FAMajidJamieBehroozOskooi0000-0003-3065-194XJournal Article19700101This paper presents the results of applying a new approach to 2D inversion of magnetotelluric (MT) data in order to enhance the resolution and stability of the inversion results. Due to non-linearity and the limited coverage of data acquisition in an MT field campaign, minimizing the error by linearization of the problem in least-squares inversion usually leads to an ill-posed problem. In general, an inverse problem is unstable, ill-posed and characterized by non-uniqueness (Tikhonov et al., 1998). The concept of ill-posed problems goes back to Hadamard (1923), who defined a problem to be ill-posed if the solution is not unique or if it is not a continuous function of the data, i.e., arbitrarily small perturbations in the input data can cause great changes in the solution.
Hence, in order to stabilize the problem and arrive at a stable solution, further information must be incorporated. To obtain reasonable geoelectrical models, regularization of the problem by imposing definite constraints is necessary. Determining a suitable Lagrangian multiplier that balances minimization of the error against model roughness is a useful way to achieve both resolution and stability in the inversion.
In order to achieve both resolution and stability in least-squares inverse modelling in our study, an intermediate value of the Lagrangian multiplier must be chosen; too large or too small a multiplier has undesirable effects on resolution and stability, respectively. In this paper, the regularization parameter is set by log-linear interpolation (Yi et al., 2003).
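The trade-off controlled by the Lagrangian multiplier can be illustrated on a toy linear inverse problem (invented operator and model, not MT data): a small multiplier gives a rough model with low data misfit, a large one a smooth model with higher misfit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem standing in for the linearized MT inverse problem:
# a smoothing kernel G, a blocky true model, noisy data.
n = 60
xg = np.linspace(0, 1, n)
G = np.exp(-((xg[:, None] - xg[None, :]) ** 2) / 0.004)
G /= G.sum(axis=1, keepdims=True)
m_true = (xg > 0.5).astype(float)
d = G @ m_true + 0.01 * rng.standard_normal(n)

# First-difference roughness operator (smoothness constraint).
L = np.diff(np.eye(n), axis=0)

def invert(lam):
    # smoothness-constrained least squares: (G'G + lam L'L) m = G'd
    return np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)

rough = lambda m: np.linalg.norm(L @ m)
misfit = lambda m: np.linalg.norm(G @ m - d)

m_small, m_big = invert(1e-8), invert(1e2)
print(rough(m_small) > rough(m_big), misfit(m_small) < misfit(m_big))
```

ACB replaces the single global `lam` with one multiplier per model block, chosen from the spread function, so well-constrained blocks stay sharp while poorly constrained ones are smoothed.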
Here, the minimum and maximum values of the Lagrangian multiplier and of the spread function, respectively, bound the interpolation. The regularization parameter can be set optimally through the spread function of the ith model parameter, which is defined by the parameter resolution matrix R. The spread function quantifies how poorly the ith model parameter is resolved:
Here, M is the total number of inversion parameters; a weighting factor is defined by the spatial distance between the ith and jth model parameters; and an indicator factor specifies whether the constraint or regularization is imposed on the ith parameter and its neighboring parameters. An alternative to varying the Lagrangian multiplier as the iterations proceed is a spatially varying Lagrangian multiplier (Sasaki, 1989); such automatic spatial variation is preferable to setting the multiplier by trial and error when both resolution and stability are sought. Small regularization parameters correspond to better-resolved inversion blocks in the parameter-resolution sense (Menke, 1989).
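The link between the regularization parameter and parameter resolution can be sketched with the damped least-squares resolution matrix R = (GᵀG + λI)⁻¹GᵀG; the spread measure below is a crude stand-in for the distance-weighted spread function described above, not its exact form.

```python
import numpy as np

n = 40
xg = np.linspace(0, 1, n)
# Invented smooth forward operator (Gaussian kernel), standing in for the
# sensitivity matrix of the linearized problem.
G = np.exp(-((xg[:, None] - xg[None, :]) ** 2) / 0.01)

def resolution(lam):
    # model resolution matrix of damped least squares
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ G)

def spread(R):
    # crude per-parameter spread: departure of each row of R from a delta
    return np.linalg.norm(R - np.eye(n), axis=1)

R_small, R_big = resolution(1e-6), resolution(10.0)
print(spread(R_small).mean() < spread(R_big).mean())
```

Rows of R close to delta functions (small spread) mean well-resolved parameters, which is exactly why ACB assigns those blocks a small multiplier.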
We tested the capability of the Active Constraint Balancing (ACB) approach (Yi et al., 2003) to enhance the resolving power of least-squares inversion by applying it first to 2D synthetic MT data generated with the forward modeling code of Geotools-MT for simple earth models, and then to field data. With the ACB approach, the rms error and data misfit are much smaller than with the conventional approach using a fixed Lagrangian multiplier, which leads to higher resolving power and stability of the inversion results. The inversion code used in this paper (Lee et al., 2009) consists of a finite-element scheme for computing 2D MT model responses and a smoothness-constrained least-squares inversion. Comparing the resistivity sections, the anomalous object appears much clearer and more distinct with the ACB approach. This enhancement in resolution can be attributed to the use of varying Lagrangian multipliers in the smoothness-constrained least-squares inversion.
https://jesphys.ut.ac.ir/article_30210_926fd3964a0fd09939559150b418b968.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X38420130219Inspection of changing processing parameters in GPR data interpretationInspection of changing processing parameters in GPR data interpretation1311433021110.22059/jesphys.2013.30211FASanazSmaeiliMohammad KazemHafizi0000-0002-5634-1141SajadJazayeri- JonaghaniJournal Article19700101Ground Penetrating Radar (GPR) is a high-resolution geophysical method for locating missing objects and buried equipment. GPR is a nondestructive testing method useful in various kinds of exploration, such as mapping archaeological sites. The safety of archaeological sites is one of the main concerns of archaeologists, and one of the best applications of GPR is in archaeology as a non-destructive exploration method.
This method uses high-frequency electromagnetic waves in the radio-frequency range (12.5–2300 MHz) to investigate the subsurface and detect underground objects. It is used to measure features such as the depth of buried anomalies, layer thickness, moisture content, horizontal cracks, voids and hidden objects. The electromagnetic waves reflected from interfaces in the subsurface are used to analyze the structure of the scanned area. The waves are emitted from an antenna that is not necessarily in direct contact with the earth’s surface, and the same antenna can be used to record the energy reflected from the dielectric interfaces. The penetration depth of the GPR technique varies with acquisition and subsurface conditions.
The data gathering procedure can be designed in two or three dimensions. If the target is a pipe or another pre-known target, data gathering can be done in 2D; but if the target is completely unknown, especially with respect to its location, a three-dimensional data grid is more useful and accurate. GPR signals diffract when they encounter anomalies, which therefore appear as hyperbolas in the sections. From these hyperbolas alone, accurate determination of an anomaly's depth and other properties is impossible, so a processing method should be applied.
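The diffraction hyperbolas mentioned above have the form t(x) = 2√((x − x0)² + d²)/v; squaring makes the relation linear in three coefficients, so the apex position, velocity and depth can be recovered from picked traveltimes by ordinary least squares (all values below are invented for illustration).

```python
import numpy as np

# Assumed source: apex at x0 = 3 m, depth 1.5 m, velocity 0.1 m/ns.
v_true, x0_true, d_true = 0.1, 3.0, 1.5
x = np.linspace(0.0, 6.0, 61)
t = 2.0 * np.sqrt((x - x0_true) ** 2 + d_true**2) / v_true   # picked times, ns

# t^2 = a*x^2 + b*x + c with a = 4/v^2, b = -2*a*x0, c = a*(x0^2 + d^2).
A = np.column_stack([x**2, x, np.ones_like(x)])
a, b, c = np.linalg.lstsq(A, t**2, rcond=None)[0]

v = 2.0 / np.sqrt(a)
x0 = -b / (2.0 * a)
d = np.sqrt(c / a - x0**2)
print(abs(v - v_true) < 1e-6, abs(x0 - x0_true) < 1e-6, abs(d - d_true) < 1e-6)
```

This is the same geometry that migration exploits to collapse the hyperbolas back to their apexes, which is why unmigrated sections cannot give depths directly.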
The processing methods vary depending on the goals, and various methods and parameter choices can be applied for different purposes. The operator's ability to choose suitable parameters is as important as choosing an appropriate processing algorithm for obtaining an acceptable image, although under some conditions certain processing steps may introduce unwanted artifacts into the data.
This paper contains investigations at an archaeological site in Kerman Province, Iran, located in the middle of Kerman city, the capital of the province. The antenna used was a 500 MHz GPR antenna made by MALA.
The target was the remains of one of the most important and oldest schools of Iran, which had been buried under the soil for hundreds of years. On top of the data collection areas, which had been specified by the archaeologists, some houses and buildings had been built. The data gathering procedure in this area was three dimensional, and the data were collected in 2009. The Reflex 3D software was used for the processing and depth estimation of the acquired data.
Depending on the properties of the target in the study area, the processing algorithms can differ from case to case. For example, the data filtering usually applied to window out environmental noise is in some cases sufficient to emphasize the subsurface reflections, although in other cases more, and more complicated, levels of processing are needed.
The whole processing algorithm depends on the properties of the site, the GPR instruments, the software and, mainly, the data gathering methods. So not all of the processing levels are necessarily needed in every case; in some situations, certain processing levels are not useful and may cause the operator unwanted difficulties in the data sections. In this paper, the processing levels needed in archaeological cases were studied, and the results for the Kerman area are presented.
The gain filter was the most effective step in the processing algorithms: changing its parameters produced the clearest improvement in the results and in identifying the anomaly’s source. So, in archaeological areas where the depth range of the anomaly’s source has been estimated by experts or by other methods, a properly tuned gain filter is recommended.
Another method that is very useful for making anomalies clearer in GPR sections is amplitude limitation; the results of applying it are shown in this paper. Unwanted amplitudes are troublesome in GPR anomaly interpretation, and amplitude limitation helps the operator and interpreter distinguish the main anomalies from environmental noise.
The dewow processing step is also one of the steps that decreases the near-surface or air signals between the transmitter and the receiver. In conclusion, the two processing methods discussed above, namely the gain filter and amplitude limitation, were used as two major parts of the processing algorithms and helped the exploration team to clarify the exact coordinates of the source.
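Minimal sketches of the three steps discussed above (dewow, gain, amplitude limitation) on one synthetic trace; all parameter values are illustrative, not those used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n) * 0.5                       # ns, assumed 0.5 ns sampling
trace = np.exp(-0.01 * t) * np.sin(0.8 * t)  # decaying "reflections"
trace += 0.3                                 # low-frequency "wow" offset
trace += 0.05 * rng.standard_normal(n)       # noise

def dewow(tr, win=51):
    # subtract a running mean to suppress the low-frequency wow
    kernel = np.ones(win) / win
    return tr - np.convolve(tr, kernel, mode="same")

def gain(tr, alpha=0.01):
    # exponential time gain to compensate attenuation with depth
    return tr * np.exp(alpha * t)

def clip_amp(tr, frac=0.5):
    # amplitude limitation: clip to a fraction of the maximum amplitude
    lim = frac * np.max(np.abs(tr))
    return np.clip(tr, -lim, lim)

out = clip_amp(gain(dewow(trace)))
print(abs(np.mean(dewow(trace))) < abs(np.mean(trace)))
```

Dewow removes the DC/low-frequency offset, the gain restores late (deep) reflections, and the clip keeps isolated strong amplitudes from dominating the display, mirroring the roles the abstract assigns to these steps.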
This method uses high frequency electromagnetic waves or Radio Frequency range (12.5–2300 MHz) to investigate the subsurface and detecting underground objects. It is used to measure features such as depth of the buried anomalies, layers thickness, moisture content, horizontal cracks, voids and hidden objects. The electromagnetic waves reflected from the interfaces in the subsurface are used to analyze the structure of the area under scan. The electromagnetic waves are emitted from an antenna that is not necessarily in direct contact with the earth surface. The same antenna can be used to record the reflected energy stemming from the dielectrical interfaces. The penetration depth of the GPR technique varies with acquisition and subsurface conditions.
The data-gathering procedure can be designed in two or three dimensions. If the target is something like a pipe, or is otherwise known in advance, data gathering can be done in 2D; but if the target is completely unknown, especially with regard to its location, a three-dimensional data grid is more useful and accurate. GPR signals diffract when they encounter anomalies, so the anomalies appear as hyperbolas in the sections. With these hyperbolas in the sections, accurate determination of an anomaly's depth and other properties is impossible, and a processing method must be applied to solve this problem.
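The hyperbolas arise because a point-like anomaly returns energy to every antenna position along the line. As a minimal sketch (the velocity and geometry below are illustrative assumptions, not values from this survey), the two-way traveltime of a point diffractor follows:

```python
import numpy as np

def diffraction_hyperbola(x, x0, depth, v):
    """Two-way traveltime of a point diffractor at (x0, depth)
    for antenna positions x, with wave speed v (m/ns -> t in ns)."""
    return 2.0 * np.sqrt(depth**2 + (x - x0)**2) / v

# Example: a hypothetical buried pipe at x0 = 5 m, 1 m deep, v = 0.1 m/ns
x = np.linspace(0.0, 10.0, 101)
t = diffraction_hyperbola(x, x0=5.0, depth=1.0, v=0.1)
# The apex of the hyperbola sits directly above the diffractor:
assert t.min() == t[50]          # minimum traveltime at x = x0
assert np.isclose(t[50], 20.0)   # 2 * 1 m / 0.1 m/ns = 20 ns
```

Migration-style processing collapses such hyperbolas back to their apexes, which is why a processing step is needed before depths can be read off reliably.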
The processing methods vary depending on the goal, and various processing methods with different parameters can be applied for different purposes. The operator's skill in choosing appropriate parameter values is as important as choosing an appropriate processing algorithm in reaching an acceptable image, although under some special conditions certain processing steps may cause unwanted problems in the data.
This paper presents investigations at an archaeological site in Kerman Province, Iran; the site lies in the middle of the city of Kerman, the provincial capital. The antenna used was a 500 MHz GPR antenna made by MALA.
The target was the remains of one of the most important and oldest schools of Iran, which had been buried under the soil for hundreds of years. On top of the data-collection areas, which had been specified by the archaeologists, some houses and buildings had been built. The data gathering in this area was three-dimensional, and the data were collected in 2009. The Reflex 3D software was used for processing and depth estimation of the acquired data.
Depending on the properties of the target in the study area, the processing algorithm can differ from case to case. For example, the data filtering that is usually used for windowing out environmental noise is, in some cases, sufficient to emphasize the subsurface reflections, although in other cases more numerous and more complicated processing steps are needed.
The whole processing algorithm depends on the properties of the terrain, the GPR instruments, the software and, mainly, the data-gathering methods. Therefore, not all of the processing steps are necessarily needed in every case; in some situations, certain steps are not useful and may cause the operator unwanted difficulties in the data sections. In this paper, the processing steps needed in archaeological cases were studied, and the results for the Kerman area are presented.
The gain filter was the most effective step in the processing algorithms. Changing the different parameters of this filter produced the clearest improvement in the results and in revealing the anomaly's source. Therefore, in cases where the depth range of the anomaly's source has been estimated by experts, by other methods or from the archaeological context, a properly tuned gain filter is recommended.
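A time-varying gain multiplies each sample by a factor that grows with traveltime, compensating the attenuation of late arrivals. A minimal sketch of one common form (the linear and exponential coefficients are illustrative assumptions, not the survey's actual settings):

```python
import numpy as np

def time_gain(trace, dt, a=0.0, b=2.0):
    """Apply a linear-plus-exponential gain g(t) = (1 + b*t) * exp(a*t)
    to one GPR trace sampled every dt nanoseconds (SEC-style gain)."""
    t = np.arange(trace.size) * dt
    return trace * (1.0 + b * t) * np.exp(a * t)

# A synthetic exponentially decaying trace: the matched exponential gain
# restores the late samples to their original level.
dt = 0.5  # ns, illustrative sampling interval
trace = np.exp(-0.05 * np.arange(200) * dt)
gained = time_gain(trace, dt, a=0.05, b=0.0)
assert np.allclose(gained, 1.0)
```

Choosing `a` and `b` too large amplifies late-time noise, which is the "unwanted problems" risk mentioned above.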
Another method that is very useful for making the anomalies clearer in the GPR sections is amplitude limitation; the results of applying this method are shown in this paper. Unwanted amplitudes cause trouble in GPR anomaly interpretation, and amplitude limitation helps the operator and the interpreter to distinguish the main anomalies from the environmental noise.
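Amplitude limitation can be sketched as simple clipping of the section at a chosen threshold, so that a few very strong arrivals no longer dominate the display scale (the threshold and amplitudes below are illustrative assumptions):

```python
import numpy as np

def limit_amplitude(section, clip):
    """Clip a GPR section to [-clip, +clip] so that a few very strong
    reflections no longer dominate the display scale."""
    return np.clip(section, -clip, clip)

section = np.array([0.1, -0.3, 5.0, -7.0, 0.2])
limited = limit_amplitude(section, clip=1.0)
assert limited.max() == 1.0 and limited.min() == -1.0
assert np.array_equal(limited[:2], section[:2])  # weak amplitudes untouched
```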
The Dewow processing step is also one of the steps that reduces the near-surface and air signals between the transmitter and the receiver. In conclusion, the two processing methods mentioned above, namely the gain filter and amplitude limitation, were used as the two major parts of the processing algorithms and helped the exploration team to pinpoint the sources' exact coordinates.

https://jesphys.ut.ac.ir/article_30211_5d0c0c7807212b1a65807bdebfd0a7a3.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013-02-19. Potential field data balancing using the Hilbert transform. Pages 145-153. DOI: 10.22059/jesphys.2013.30212. Language: FA. Ali Nejati Kalateh, Amin Roshandel Kahoo. Journal Article.

Potential field images obtained from potential field measurements are suitable tools for mineral and hydrocarbon resource exploration. These images consist of different anomalies, which in many cases are contaminated with noise. The horizontal location of the boundaries of potential field anomaly sources is of interest in potential field interpretation; however, the edges of potential field sources are not clear, because of the loss of resolution of the anomaly shape with respect to the shape of the sources. Edge enhancement is a technique applied to potential field data to produce regions of constant field amplitude separated by sharp boundaries, as an aid to interpretation.
Various methods have been introduced for anomaly edge detection, such as the analytic signal, the tilt angle, the total horizontal gradient and the profile curvature. The tilt angle is the ratio of the first vertical derivative to the total horizontal gradient. Curvature of geophysical data is one of the most important attributes, with many applications in geophysical data processing and interpretation; the profile curvature at a point shows the change in slope in the direction of maximum gradient. The tilt angle and profile curvature can be computed by Eq. (1) and Eq. (2), respectively:

(1) $\theta = \tan^{-1}\left(\dfrac{\partial M/\partial z}{\sqrt{(\partial M/\partial x)^{2}+(\partial M/\partial y)^{2}}}\right)$

(2) $k_{p} = -\dfrac{M_{xx}M_{x}^{2}+2M_{xy}M_{x}M_{y}+M_{yy}M_{y}^{2}}{\left(M_{x}^{2}+M_{y}^{2}\right)\left(1+M_{x}^{2}+M_{y}^{2}\right)^{3/2}}$

where $M$ is the potential field, subscripts denote partial derivatives, and the total horizontal gradient is

(3) $\lvert\nabla_{h}M\rvert = \sqrt{\left(\dfrac{\partial M}{\partial x}\right)^{2}+\left(\dfrac{\partial M}{\partial y}\right)^{2}}$
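Under the definition of the tilt angle as the arctangent of the vertical derivative over the total horizontal gradient, a numerical sketch with finite differences is straightforward (the vertical derivative is assumed supplied, e.g. from a spectral computation; the grid below is illustrative):

```python
import numpy as np

def tilt_angle(field, dfdz, dx, dy):
    """Tilt angle: arctan of the vertical derivative over the total
    horizontal gradient.  dfdz must be supplied separately."""
    dfdy, dfdx = np.gradient(field, dy, dx)   # axis 0 = y, axis 1 = x
    thg = np.sqrt(dfdx**2 + dfdy**2)          # total horizontal gradient
    return np.arctan2(dfdz, thg)

# Check on a planar field f = x: the horizontal gradient is 1 everywhere,
# so with dfdz = 1 the tilt is arctan(1) = 45 degrees at every point.
y, x = np.mgrid[0:4, 0:4].astype(float)
theta = tilt_angle(x, np.ones_like(x), dx=1.0, dy=1.0)
assert np.allclose(theta, np.pi / 4)
```

Because the arctangent compresses all values into $[-\pi/2, \pi/2]$, the tilt angle is already partially amplitude-normalized, which is why it is a popular edge detector.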
In many cases of geological interpretation of potential field data, the study of low-amplitude anomalies is more important than that of high-amplitude anomalies. However, most existing methods for determining the lateral extent of geological structures are sensitive to the amplitude of the potential field data, and the amplitude has a direct relationship with the depth of the geological structure. We can therefore say that the efficiency of methods for detecting geological structures depends severely on the depth of those structures. Automatic gain control filters can construct a balanced image of all anomalies by assigning the gain computed over each window of the data to the centre of that window.
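A minimal sketch of such an automatic gain control on a single profile (the window length and amplitudes are illustrative assumptions): each sample is divided by the mean absolute amplitude of a sliding window centred on it.

```python
import numpy as np

def agc(trace, window):
    """Automatic gain control: divide each sample by the mean absolute
    amplitude in a sliding window centred on it."""
    n = trace.size
    out = np.empty(n)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        g = np.mean(np.abs(trace[lo:hi]))
        out[i] = trace[i] / g if g > 0 else 0.0
    return out

trace = np.array([4.0, 4.0, 4.0, 0.5, 0.5, 0.5])
balanced = agc(trace, window=3)
# Strong and weak segments end up at a comparable (unit) level inside
# their respective windows:
assert np.isclose(balanced[1], 1.0) and np.isclose(balanced[4], 1.0)
```

The drawback, which motivates the Hilbert-transform alternative below, is that the result depends on the chosen window length.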
In this paper, we used the two-dimensional Hilbert transform to balance the image of potential field data, as in Eq. (4):

(4) $E(x,y)=\sqrt{f^{2}(x,y)+f_{H_{x}}^{2}(x,y)+f_{H_{y}}^{2}(x,y)}$

where $f_{H_{x}}$ and $f_{H_{y}}$ are the 2-D Hilbert transforms of the image $f$ in the $x$ and $y$ directions, respectively.
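A sketch of this balancing with `scipy.signal.hilbert` (the imaginary part of the analytic signal is the Hilbert transform along the chosen axis). The envelope combination of Eq. (4) and the division by it are assumptions of this sketch, not necessarily the paper's exact formulation; the random test field is illustrative:

```python
import numpy as np
from scipy.signal import hilbert

def balanced_envelope(f):
    """Assumed envelope form of Eq. (4): E = sqrt(f^2 + Hx[f]^2 + Hy[f]^2),
    with Hx, Hy the 1-D Hilbert transforms applied along x and y."""
    hx = np.imag(hilbert(f, axis=1))   # Hilbert transform along x
    hy = np.imag(hilbert(f, axis=0))   # Hilbert transform along y
    return np.sqrt(f**2 + hx**2 + hy**2)

# Dividing any map by its envelope bounds the output by 1, so weak (deep)
# and strong (shallow) anomalies are displayed on a common scale.
rng = np.random.default_rng(0)
f = rng.standard_normal((32, 32))
e = balanced_envelope(f)
assert np.all(np.abs(f) <= e + 1e-12)
assert np.all(np.abs(f / e) <= 1.0 + 1e-12)
```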
In order to show the efficiency of the proposed method in balancing potential field images, we tested it on two common edge-detection methods, the profile curvature and the tilt angle filter. A model with three anomalies at various depths was used as the synthetic model. The results from the synthetic data showed that the profile curvature and the tilt angle filter cannot detect the edges of deep anomalies as well as those of shallow ones. By applying the 2-D Hilbert-transform-based balancing method to the images from the common methods, the resolution of the detected edges becomes balanced between deep and shallow anomalies. As real data, we used the gravity anomalies related to the granite intrusion at Trompsburg, South Africa, and to a salt dome in a sedimentary basin in central Iran. The balancing method significantly improved the images resulting from the profile curvature and the tilt angle filter.

https://jesphys.ut.ac.ir/article_30212_bcaaaa793749cc1bb61f6e9b6d43fa36.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013-02-19. Processing and interpretation of ground-penetrating radar (GPR) data for detection of cavities, investigation of bedding and grain sizes and also estimation of clay content in shallow subsurface sediments. Pages 155-173. DOI: 10.22059/jesphys.2013.30213. Language: FA. Abolghasem Kamkar Rouhani, Esmaeil Eshaghi, Alireza Arab Amiri. Journal Article.

Ground-penetrating radar (GPR), as a high-resolution, non-destructive geophysical method, is used for the detection of shallow subsurface targets. The method is based on transmitting electromagnetic waves into the earth and recording the waves reflected from the subsurface.
As the method uses high-frequency electromagnetic waves in the frequency range of 12.5 to 2500 MHz (called GPR waves), it can only be used for shallow subsurface investigations. Using this method, continuous images of the reflections of GPR waves from the interfaces of subsurface media with different electrical properties are obtained. Shallow cavities, owing to their electrical characteristics being different from the background, are among the targets that can be detected by this method. Since the depth of penetration of GPR waves in an area is controlled by the electrical conductivity and permittivity of the ground of the survey area, the penetration depth where fine-grained sediments are present is relatively low, due to the higher electrical conductivity of fine sedimentary grains compared to coarse ones. Thus, the relative grain sizes and clay content of subsurface sediments can be investigated by the GPR method. In general, shallow subsurface structures of different materials and thicknesses can be detected by the method, as the structures and their host media normally differ in their electrical properties (namely, conductivity and permittivity). In this research work, GPR data acquisition was carried out using a 250 MHz shielded antenna along 5 lines in the Darkhanyab area near the town of Mojen, located approximately 25 km northwest of Shahrood City. The purpose of this GPR survey was to detect shallow subsurface structures such as the water qanat in the area. Because of the small distance between the GPR transmitter and receiver, the electrical properties (especially the conductivity) of the ground, and the need to remove unwanted low-frequency signals or reflections while preserving the high-frequency signals, the Dewow filter was applied to all GPR data sets before any other processing.
The short time intervals between the transmitted GPR pulses and the pulses received directly from the air-ground surface, as well as the reflections from shallow subsurface targets, cause signal saturation in the receiver; the Dewow filter is therefore applied to the GPR data to correct for this signal saturation, or "wow", in the data. Different types of gain are also among the processing methods applied to the data to compensate for the attenuation of the GPR waves with increasing depth. To demonstrate the effects of different gains and to select the optimum one, we applied several gains to the GPR data. To convert each trace from a wavelet with both positive and negative components (i.e. of sine or cosine nature) to a mono-pulse wavelet with positive components, we used the envelope filter. This process removes the oscillatory nature of the radar wavelet and shows the data at its true resolution, making it easier to interpret. In this research, the Win_Ekko_Pro software was used to process the two-dimensional (2-D) GPR data or sections. In addition, the EKKO-Mapper and EKKO-3D software programs were used for three-dimensional (3-D) processing and modelling of the GPR data, and the T3D software was used to display the output of Win_Ekko_Pro and EKKO-3D. These programs have been developed by the Canadian company Sensors & Software.
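The Dewow idea can be sketched as subtracting a running-mean (low-frequency) trend from each trace; the window length and the synthetic drift below are illustrative assumptions, not the software's exact algorithm:

```python
import numpy as np

def dewow(trace, window):
    """Dewow: subtract a running-mean (low-frequency) trend from a
    trace to remove signal-saturation drift, keeping higher frequencies."""
    kernel = np.ones(window) / window
    trend = np.convolve(trace, kernel, mode="same")
    return trace - trend

# A reflection wavelet riding on a slow drift: dewow removes the drift.
t = np.arange(200.0)
drift = 0.5 + 0.002 * t                    # saturation / wow component
wavelet = np.sin(2 * np.pi * t / 10.0)     # high-frequency signal
clean = dewow(drift + wavelet, window=50)
assert abs(clean[60:140].mean()) < 0.05    # drift largely removed
```

The envelope filter mentioned above is typically the magnitude of the analytic signal of each trace, which is what turns the oscillatory wavelet into a single positive pulse.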
The results of this research work indicate that, using the characteristics of GPR waves in the 2-D GPR sections, we can detect subsurface targets such as cavities, discriminate coarse-grained from fine-grained sediments, and determine the electrical properties of subsurface layers with considerable success. The high resolution of the GPR data in this research enabled us to determine the interfaces of the water qanat with its surroundings, such as the soil-concrete, concrete-air, air-water and water-concrete interfaces in the subsurface. Furthermore, the highly conductive clayey soils above the water qanat canal in some places cause strong attenuation of the GPR waves and thus severely limit their depth of penetration. This phenomenon is also seen in the zone surrounding the qanat canal, where it mainly occurs due to the seepage of water into the ground. The soil bedding can also be easily observed in the obtained GPR sections; horizontal soil layers, with thicknesses of several centimetres, cover the ground surface in the survey area. A highly resistive subsurface zone in the GPR sections, characterized by the ringing phenomenon, is interpreted as a cavity. In general, the relatively high conductivity of the ground in the area limits the depth of penetration of the GPR waves, which rarely exceeds 2 metres. The location of the water qanat in the shallow subsurface of the area was evident from the 2-D and 3-D GPR modelled sections; however, detecting the qanat at depths greater than 1 or 2 metres was difficult or even impossible from the GPR results, owing to the limited penetration depth of the GPR waves in the area.
Overall, it was possible to discriminate coarse-grained sediments from fine-grained sediments and, to some extent, to determine the clay content and moisture in the subsurface from the processing, modelling and interpretation of the GPR data.

https://jesphys.ut.ac.ir/article_30213_72148ae1e534955a2c7927e96e12615a.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013-02-19. Storm track dynamics in anomalous winter 2007-2008 from energetic perspective. Pages 175-187. DOI: 10.22059/jesphys.2013.30214. Language: FA. Farnaz Hosseinpour, Alireza Mohebalhojeh, Farhang Ahmadi-Givi. Journal Article.

A climatological study of the mid-latitude storm tracks was carried out using the daily Global Forecast System (GFS) and National Centers for Environmental Prediction (NCEP) reanalysis data sets for the winter seasons from 1948 to 2008. In the anomalous winter of 2007-2008, the decrease of eddy kinetic energy, together with the reduction of eddy available potential energy upstream of the Atlantic storm track, is related to the weakening of baroclinic generation with respect to the long-term means. In addition to baroclinic conversion, the most important term in the strengthening and stretching of the jet stream upstream of the Atlantic storm track, the increase of the positive values of barotropic conversion plays a controlling role.
Study of the energetics over the Middle East shows that in January 2008 the dominance of the subtropical jet stream core over south-western Iran was responsible for the increase of the mean kinetic energy. Baroclinic generation had a remarkable role in feeding energy from downstream of the Mediterranean, which produced strong convergence of the ageostrophic flux due to downstream development over the Mediterranean storm track. The significant increases of baroclinic conversion, baroclinic generation and divergence of the ageostrophic flux have their maximum values over the north of the Arabian Peninsula, causing intense eastward radiation of eddy energy toward the central part of Iran. In this month, the maximum convergence of the ageostrophic flux shows that, on average, Iran was a strong sink of energy, associated with the deepening of the thermal trough and the dominance of cold anomalies over northern Iran that were unprecedented in recent decades.
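For reference, the eddy kinetic energy used in such storm-track diagnostics is built from deviations of the winds from their time mean. A minimal sketch (the synthetic winds are illustrative, not reanalysis data):

```python
import numpy as np

def eddy_kinetic_energy(u, v):
    """Eddy kinetic energy 0.5*(u'^2 + v'^2), with primes the deviations
    of the winds from their time mean (axis 0 = time)."""
    up = u - u.mean(axis=0)
    vp = v - v.mean(axis=0)
    return 0.5 * (up**2 + vp**2).mean(axis=0)

# A 30 m/s mean jet with a 5 m/s sinusoidal eddy on top, at 4 grid points:
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)[:, None]
u = 30.0 + 5.0 * np.sin(t) * np.ones((1, 4))
v = 5.0 * np.cos(t) * np.ones((1, 4))
eke = eddy_kinetic_energy(u, v)
assert np.allclose(eke, 12.5)  # 0.5*(25/2 + 25/2) m^2/s^2
```

The barotropic and baroclinic conversion terms discussed above describe how this eddy energy is exchanged with the mean flow and with available potential energy.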
In February 2008, in addition to the increase of the mean kinetic energy due to the downstream displacement of the subtropical jet stream over south-western Iran, the maximum positive barotropic conversion anomaly of the Northern Hemisphere was located over this region. A local core of baroclinic generation over the central part of Iran, elongated between the significant positive bands of barotropic conversion, was accompanied by increases of the eddy kinetic energy and the eddy available potential energy, respectively. This indicates that Iran, on average, was a transmitter of energy toward the adjacent eastern regions during this month.

https://jesphys.ut.ac.ir/article_30214_e0e166c17a303b95a89a9e10bc66aa06.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013-02-19. Parameterization of nocturnal stable boundary layer height (NSBLH) and effect of NSBLH on air pollution in an urban area with complex topography (Tehran). Pages 189-206. DOI: 10.22059/jesphys.2013.30215. Language: FA. Nafiseh Pegahfar, Abbas Ali Aliakbari-Bidokhti, Peyman Zawar-Reza. Journal Article.

The urban boundary layer, because of its higher pollution levels, has always been of interest to meteorologists, and many efforts have been made to predict and diagnose the effective parameters of this layer, including field experiments, the development of various algorithms through parameterization, and numerical modelling, most of which apply to rural areas. Tehran, owing to its particular topography, heavy traffic and concentration of industrial centres, is considered one of the most polluted cities. In addition, the seasonal variations and diurnal cycle of some meteorological fields and boundary-layer parameters also affect pollutant concentrations at this location. The influence of the boundary-layer parameters is most evident at night, when the effects of traffic and industrial centres diminish.
The reason can be attributed to the formation of an inversion layer, and of the nocturnal stable layer, which reduces the mixing depth. At this time, therefore, pollutant concentrations increase because of the reduced volume available for dispersion, transport and residence.
Despite the many advances in this field, estimating the depth of the nocturnal stable layer is still a major challenge for research in urban environments, because at night the direction of the heat flux in an urban area differs from that in a non-urban area, which makes it problematic to apply algorithms developed for non-urban areas to these regions.
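To illustrate what such a parameterization looks like, here is a sketch of the classical Zilitinkevich (1972) diagnostic for the stable boundary-layer height; this is one simple member of the family of relations discussed, not the multi-length-scale Zilitinkevich and Mironov (1996) relation preferred by the study, and the input values are illustrative:

```python
import numpy as np

def sbl_height_zilitinkevich(u_star, L, f, c=0.4):
    """Classical stable boundary-layer height diagnostic,
    h = c * sqrt(u* L / |f|)  (Zilitinkevich 1972 form; c ~ 0.4).
    u_star: friction velocity (m/s), L: Obukhov length (m, > 0 when
    stable), f: Coriolis parameter (1/s)."""
    return c * np.sqrt(u_star * L / np.abs(f))

# Typical nocturnal values: u* = 0.2 m/s, L = 50 m, latitude ~35.7 N (Tehran)
f = 2 * 7.292e-5 * np.sin(np.radians(35.7))
h = sbl_height_zilitinkevich(0.2, 50.0, f)
assert 100.0 < h < 200.0  # order of 100 m, plausible for a stable night
```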
Given the importance of the subject, this study first examines the relationship between the depth of the nocturnal stable layer and the concentration of the particulate pollutant PM10. It then evaluates the validity of the various parameterizations used to estimate the depth of the nocturnal stable layer in the boundary-layer algorithms of dispersion models, and introduces the best relation for this region. The results show that, among the 10 relations examined, those that consider the effects of the meteorological parameters and field variables separately produce larger errors. Among these relations, the one proposed by Zilitinkevich and Mironov (1996) produced the smallest error; that relation was derived using five length scales and therefore includes more boundary-layer parameters. The results of this study show that this parameterization yields the largest correlation coefficient and the smallest error for estimating the stable layer depth over Tehran.

https://jesphys.ut.ac.ir/article_30215_f981638479733ff7f0a7606a96267b99.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4, 2013-02-19. Study of local winds over Tehran using a single-layer urban model coupled with WRF under ideal conditions. Pages 207-221. DOI: 10.22059/jesphys.2013.30216. Language: FA. Iman Soltanzadeh, Abbas Ali Aliakbari-Bidokhti, Peyman Zawar-Reza. Journal Article.

Wind is the carrier of pollutants and of any other gaseous or particulate matter in the atmosphere. A stable atmosphere with low wind provides favourable conditions for high concentrations of pollutants in urban areas. The importance of mesoscale atmospheric flows in air pollution dispersion has been recognized over the past three decades and has been the focus of intensive research, both observational and numerical. Mesoscale or local-scale circulations are most prominent when the synoptic pressure gradients are weak, allowing horizontal temperature contrasts to develop, which in turn lead to mesoscale pressure perturbations.
Tehran, situated at the southern foothills of the Alborz mountain chain, has an average elevation of 1500 m and covers an area of 864 km2. The Alborz Mountains have a significant influence on the dynamic and thermodynamic modification of air movement over the city. At the same time, the urban heat island (UHI) effect can drive its own mesoscale flow, complicating an already complex local-scale flow. The topography and the urban fabric can cause slope flows, mountain flows, and valley flows, among many other circulations.
This paper focuses on the use of a state-of-the-art atmospheric numerical model, the Weather Research and Forecasting (WRF) model, in an idealized configuration to study the characteristics of the mesoscale flow systems that prevail over Tehran when air quality is unfavourable. The average of the radiosonde soundings at Mehrabad station, over almost all fair days of the cold seasons of 2005 to 2008, was selected as the idealized initial and boundary conditions, with 10 × 10 km spatial and 12-hour temporal resolution. The simulations were carried out for a 3-day period in December 2005 during which an aircraft, owing to low visibility caused by a high concentration of air pollution, crashed into an inhabited area 2 miles from the end of the runway.
Three simulations were prepared. For the first experiment, called the control run, we used the default urban setting of Tehran. In the second simulation, the urban properties of Tehran were removed completely from the land use fed to the model, to investigate the effect of the urban area on the thermally induced circulation over Tehran; this is called the NoURB simulation. To investigate the role of urbanization, a third simulation, referred to as the Urban simulation, was prepared using three urban categories. Class 31 of the USGS land-use/land-cover classification, "Low Intensity Residential", includes areas with a mixture of constructed materials and vegetation; these areas most commonly consist of single-family housing units, in which the population density is lower than in high-intensity residential areas. Class 32, "High Intensity Residential", includes highly developed areas where people reside in high numbers. Finally, class 33, "Commercial/Industrial", includes infrastructure (e.g. roads and railroads) and all highly developed areas not classified as High Intensity Residential.
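In preprocessing, each grid cell's land-use code selects its urban treatment; this can be sketched as a lookup table keyed by the three USGS classes above. The canopy parameter values shown are hypothetical placeholders for illustration, not the values used in the study.

```python
# Hypothetical per-class urban canopy parameters keyed by USGS land-use code;
# the numeric values are illustrative placeholders, not the study's table.
URBAN_CLASSES = {
    31: {"name": "Low Intensity Residential",  "building_height_m": 5.0,
         "urban_fraction": 0.5},
    32: {"name": "High Intensity Residential", "building_height_m": 7.5,
         "urban_fraction": 0.9},
    33: {"name": "Commercial/Industrial",      "building_height_m": 10.0,
         "urban_fraction": 0.95},
}

def lookup_urban(code):
    """Return canopy parameters for a USGS urban class, or None if the
    land-use code is not one of the three urban categories."""
    return URBAN_CLASSES.get(code)
```

Non-urban codes fall through to the default (rural) land-surface treatment, which is how the NoURB experiment can be realized simply by reclassifying the urban cells.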
The Noah LSM provides surface sensible and latent heat fluxes and surface skin temperature as lower boundary conditions. It has a single vegetation canopy layer and the following prognostic variables: soil moisture and temperature in the soil layers, water stored on the canopy, and snow stored on the ground. Its bulk urban treatment includes: 1) increasing the roughness length from 0.5 m to 0.8 m to represent turbulence generated by roughness elements and drag due to buildings; 2) reducing the surface albedo from 0.18 to 0.15 to represent shortwave radiation trapping in urban canyons; 3) using a larger volumetric heat capacity of 3.0 MJ m-3 K-1 for the urban surface (walls, roofs, and roads), which usually consists of concrete or asphalt; 4) increasing the soil thermal conductivity to 3.24 W m-1 K-1 to parameterize the large heat storage in the urban surface and underlying layers; and 5) reducing the green vegetation fraction over the urban area to decrease evaporation.
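The five modifications above amount to overriding a handful of surface parameters wherever a grid cell is urban. A minimal sketch follows; the field names are illustrative (not actual WRF/Noah variable names), and the rural baseline values not given in the text are assumptions.

```python
# Rural defaults vs. the bulk-urban overrides enumerated in the text.
# Keys are illustrative names, not actual WRF/Noah variable names.
RURAL = {
    "roughness_length_m": 0.5,
    "albedo": 0.18,
    "volumetric_heat_capacity_MJ_m3_K": 2.0,   # assumed rural value
    "soil_thermal_conductivity_W_m_K": 1.0,    # assumed rural value
    "green_vegetation_fraction": 0.5,          # assumed rural value
}

URBAN_OVERRIDES = {
    "roughness_length_m": 0.8,                 # building-induced drag
    "albedo": 0.15,                            # canyon shortwave trapping
    "volumetric_heat_capacity_MJ_m3_K": 3.0,   # concrete/asphalt surfaces
    "soil_thermal_conductivity_W_m_K": 3.24,   # large urban heat storage
    "green_vegetation_fraction": 0.1,          # reduced evaporation (assumed)
}

def surface_params(is_urban):
    """Merge the urban overrides onto the rural defaults for urban cells."""
    return {**RURAL, **URBAN_OVERRIDES} if is_urban else dict(RURAL)
```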
To better represent the physical processes involved in the exchange of heat, momentum, and water vapor in the urban environment within a mesoscale model, a UCM (urban canopy model) is coupled to the WRF model. The main purpose of the coupled model is to improve the description of the lower boundary conditions and to provide more accurate forecasts for urban regions. The UCM is a single-layer model with a simplified urban geometry. Its features include shadowing from buildings, reflection of shortwave and longwave radiation, a wind profile in the canopy layer, and a multi-layer heat transfer equation for roof, wall, and road surfaces.
The basic function of a UCM is to take the urban geometry into account in its surface energy budget and wind shear calculations. The urban model is based on an urban canopy model that includes: 1) 2-D street canyons parameterized to represent the effects of urban geometry on heat distribution within the canyon; 2) shadowing from buildings and reflection of radiation in the canopy layer; 3) the canyon orientation and the diurnal cycle of the solar azimuth angle; 4) a man-made surface consisting of eight canyons with different orientations; 5) Inoue's model for canopy flows; 6) the multi-layer heat equation for the roof, wall, and road interior temperatures; 7) anthropogenic heating associated with energy consumption by human activities; and 8) a very thin bucket model for evaporation and runoff from the road surface.
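Item (2), shadowing in an idealized 2-D street canyon, reduces to simple geometry when the sun is perpendicular to the canyon axis: the sunlit fraction of the canyon floor depends only on building height, road width, and solar elevation. The sketch below shows that geometry only; it is not the UCM's actual code and ignores reflections and azimuth dependence.

```python
import math

def sunlit_road_fraction(building_height, road_width, solar_elev_deg):
    """Fraction of an idealized 2-D canyon floor in direct sun, for sun
    perpendicular to the canyon axis (no reflections, no azimuth effects).
    The wall casts a shadow of length h / tan(elevation) onto the road."""
    if solar_elev_deg <= 0.0:
        return 0.0                       # sun below horizon: fully shaded
    shadow = building_height / math.tan(math.radians(solar_elev_deg))
    return max(0.0, 1.0 - shadow / road_width)
```

For a canyon with aspect ratio 1 (height = width), the floor is entirely shaded until the sun climbs above 45 degrees, which is why deep canyons store and release heat so differently from flat terrain.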
The main limitations of this kind of study over the Tehran metropolitan area are the lack of observational data and the lack of documentation. Previous studies over Tehran indicate a significant increase in minimum temperatures over a 50-year trend, especially in the cold seasons, indicating that artificial and anthropogenic heating leading to the urban heat island in Tehran has been significant over this period. These studies also indicate that the nocturnal drainage flows at the Mehrabad International Airport synoptic station have weakened over the same 50-year period. The results indicate that urban areas near complex topography can increase the transfer of material (pollution) and energy within the boundary layer and from this layer to the free atmosphere. Other results from this study show that the UHI induces rural-urban flows which significantly reduce the drainage wind speed.
https://jesphys.ut.ac.ir/article_30216_54eac7fbe480bfb11201969c576eced6.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4 (2013), pp. 223-239. Assessing the influence of numerical diffusion on the accuracy of numerical solution of the shallow water equations. Sarmad Ghader, Alireza Mohebalhojeh, Marziyeh Sharifidoost. DOI: 10.22059/jesphys.2013.30217.

To control nonlinear numerical instability throughout the time evolution of the Eulerian form of the nonlinear rotating shallow water equations, it is necessary to add numerical diffusion to the solution. Clearly, this extra numerical diffusion degrades the accuracy of the numerical solution and should be kept as small as possible.
In the conventional approach, a hyper-diffusion term is used to maintain numerical stability. In the present work, the influence of the order of hyper-diffusion on the accuracy of the numerical solution of the shallow water equations generated by several high-order numerical methods is examined. Furthermore, the application of an eighth-order compact spatial filter as an alternative way to provide numerical diffusion is considered.
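Why the order of the hyper-diffusion matters is easiest to see in spectral space: a 2p-th-order term damps wavenumber k by a factor exp(-nu k^(2p) dt) per step, so raising p confines the damping to the smallest resolved scales and leaves well-resolved modes nearly untouched. A small numpy sketch (the normalization of nu, chosen so every order damps the highest wavenumber equally, is a choice made here for comparability, not a detail from the paper):

```python
import numpy as np

def damping_factor(k, k_max, order, damp_at_kmax=0.5):
    """Per-step spectral damping exp(-nu*dt * k**(2*order)) produced by a
    (-1)**(order+1) * nu * del**(2*order) hyper-diffusion term, with nu*dt
    normalized so wavenumber k_max keeps `damp_at_kmax` of its amplitude."""
    nu_dt = -np.log(damp_at_kmax) / float(k_max) ** (2 * order)
    return np.exp(-nu_dt * k.astype(float) ** (2 * order))

k = np.arange(1, 65)            # resolved wavenumbers
d2 = damping_factor(k, 64, 1)   # ordinary diffusion, del^2
d6 = damping_factor(k, 64, 3)   # sixth-order hyper-diffusion, del^6
```

At mid-spectrum (k = 32) the sixth-order term removes only about 1% of the amplitude per step while ordinary diffusion removes roughly 16%, which is the sense in which low-order diffusion "degrades the numerical accuracy" of the resolved flow.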
In the present work, the vorticity-divergence-mass representation of the shallow water equations is used for the numerical simulations. To advance the governing equations in time, the semi-implicit approach combined with the three-level leapfrog method is used for the discretization of the temporal derivatives. The second-order central, fourth-order compact, and sixth-order super compact finite difference methods and the pseudo-spectral method are applied to the spatial differencing of the shallow water equations.
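The three-level leapfrog scheme referred to above advances u at time level n+1 from level n-1 using the tendency at level n, and in practice a weak Robert-Asselin time filter is added to control the scheme's computational mode. The oscillation-equation test below is a standard illustration, not a detail taken from the paper, and the semi-implicit treatment of fast terms is omitted for brevity.

```python
import math

def leapfrog_oscillation(omega, dt, nsteps, alpha=0.05):
    """Integrate du/dt = i*omega*u with the three-level leapfrog scheme plus
    a Robert-Asselin time filter of strength alpha; returns |u| at the end
    (the exact solution keeps |u| = 1)."""
    u_old = complex(1.0, 0.0)                                        # u at t = 0
    u = u_old * complex(math.cos(omega * dt), math.sin(omega * dt))  # exact first step
    for _ in range(nsteps):
        u_new = u_old + 2.0 * dt * (1j * omega) * u                  # leapfrog step
        u_old = u + alpha * (u_new - 2.0 * u + u_old)                # Robert-Asselin filter
        u = u_new
    return abs(u)
```

For a well-resolved oscillation (|omega*dt| well below 1) the amplitude stays near 1, with only the slight extra damping introduced by the filter.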
For a quantitative assessment of accuracy, a global measure of the distribution of mass between potential vorticity iso-levels, called the mass error, is used. In addition, based on the results of some recent investigations, decomposing a flow into a balanced part representing the vortical flow and an unbalanced part representing freely propagating inertia-gravity waves has proved useful in analyzing the accuracy of numerical solutions of the primitive equations. Therefore, in this work representations of balance and imbalance are also used for the quantitative assessment of numerical accuracy.
It is found that numerical diffusion plays a crucial role in the accuracy of the numerical solution. Results show that using lower-order hyper-diffusion terms degrades the numerical accuracy, and that using higher orders of hyper-diffusion is essential for the sixth-order super compact and pseudo-spectral methods. In addition, based on the quantitative measures of the mass error, using a sixth-order hyper-diffusion term or the eighth-order compact spatial filter has a nearly identical effect on the numerical accuracy of the shallow water solutions generated by the sixth-order super compact finite difference and pseudo-spectral methods. The same conclusion holds for the quantitative measures of imbalance, in particular for hyper-diffusion terms of order greater than two.
https://jesphys.ut.ac.ir/article_30217_1bc45c27da7e91e75d8109578c72fe85.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4 (2013), pp. 241-258. An investigation of the spatial changes of one-day (24-hour) precipitation in the supply of Iran precipitation days and amount. Hamid Nazari Pour, Seyed Abolfazl Masoodian, Zahra Karimi. DOI: 10.22059/jesphys.2013.30218.

In different regions, precipitation occurs with different persistencies, and each persistence supplies a share of the rainfall days and rainfall amount; the importance of rainfall persistence can therefore be evaluated for any location. Iran is located in an arid mid-latitude region in which the mean annual rainfall is 250 mm, with dramatic spatio-temporal variations. Rainfall of short persistence is characteristic of arid regions, and this is also evident in Iran.
Although Iran's rainfall persistence ranges from 1 to 45 days and shows dramatic spatio-temporal variations, the maximum rainfall amount and number of rainfall days are supplied by rainfalls of short persistence; long-persistence rainfall is therefore an extreme event with extreme variability. As the persistence of precipitation increases, its role in generating Iran's rainfall days decreases sharply, such that the maximum number of rainfall days in Iran is supplied by one-day rainfalls. The share of one-day rainfalls in supplying precipitation days is, however, more accentuated in Iran's western half. In contrast, increasing rainfall persistence does not play an identical role in reducing the supply of Iran's precipitation amount: as persistence increases, the share of precipitation in central and southwestern Iran decreases sharply, whereas in western and northern Iran the opposite is the case. In some heavy-precipitation regions of Iran's western half, a decrease in precipitation persistence is associated with a decrease in the share of the precipitation supply, while in other regions the decrease in that share is gradual. Therefore, in any given area some rainfall persistencies supply the greater share of precipitation days and precipitation amount and are considered important, while the same persistence may not have such importance elsewhere. Any variability or change in the role of a precipitation persistence in a given area is therefore noteworthy, and the spatial changes of the one-day precipitation share in supplying Iran's precipitation days and amount can be evaluated from this angle.
To evaluate the changes in the share of one-day precipitation in supplying precipitation days and precipitation amount, daily precipitation observations from 1437 stations throughout Iran were used. Using the kriging method, the station observations were interpolated onto a regular grid with 15 × 15 km cells, and digital isohyetal maps of Iran were produced for the period 1961/03/21 to 2004/12/30. These digital maps comprise daily time series (15991 days) of precipitation amount for 7187 cells. The precipitation persistence in the time series of each cell was evaluated, and the share of each persistence in supplying the precipitation days and precipitation amount of each cell was calculated. The most important persistence of Iran's precipitation (one-day persistence) was then identified and its importance investigated. Yearly and monthly time series of the one-day precipitation share in supplying precipitation days and precipitation amount were subjected to a trend analysis to evaluate and understand its changes.
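The per-cell bookkeeping described above, splitting a daily series into wet spells and attributing rain days and rain amount to each spell length, can be sketched as follows (the 0.1 mm wet-day threshold is an assumption for illustration):

```python
def persistence_shares(daily_precip, wet_threshold=0.1):
    """Split a daily precipitation series into wet spells and return, keyed by
    spell length in days, the share of wet days and the share of total
    precipitation that each spell length supplies."""
    spells = []                               # (length, total amount) per spell
    run_len, run_sum = 0, 0.0
    for p in list(daily_precip) + [0.0]:      # trailing sentinel closes a spell
        if p >= wet_threshold:
            run_len += 1
            run_sum += p
        elif run_len:
            spells.append((run_len, run_sum))
            run_len, run_sum = 0, 0.0
    if not spells:
        return {}, {}
    wet_days = sum(length for length, _ in spells)
    total = sum(amount for _, amount in spells)
    days_share, amount_share = {}, {}
    for length, amount in spells:
        days_share[length] = days_share.get(length, 0.0) + length / wet_days
        amount_share[length] = amount_share.get(length, 0.0) + amount / total
    return days_share, amount_share

series = [0, 5, 0, 0, 2, 3, 0, 8, 0]          # two 1-day spells, one 2-day spell
days, amounts = persistence_shares(series)
```

Run over a cell's full 15991-day series, the two returned dictionaries give exactly the "share of precipitation days" and "share of precipitation amount" per persistence class that the study tabulates.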
In spatial analyses such as the identification of trends in climatological variables, a more reliable approach is to perform the spatial interpolation first and then apply an appropriate trend test to the data at the grid nodes. The results of such analyses not only have a higher degree of spatial attribution, but, based on the principle of proximity, the spatial ordering of the points themselves provides intuitive grounds for accepting or rejecting the trend analysis. One-day precipitation supplies a greater share of Iran's precipitation days than any other precipitation persistence; indeed, in all regions of Iran the frequency of one-day precipitation is the maximum among all persistencies. In contrast, Iran's precipitation amount is supplied by different persistencies, and the share of one-day precipitation in the precipitation supply is at its maximum only in the western half (central and southeastern parts). Although one-day precipitation is not of great importance everywhere in Iran, its degree of importance is greater in the eastern half than in the western half. The share of one-day precipitation in supplying Iran's precipitation days and precipitation amount has changed with time.
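The abstract does not name the trend test applied at each grid node; a common nonparametric choice for climatological series is the Mann-Kendall test, sketched here (without tie correction) as an illustration of the node-wise step:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test without tie correction. Returns (S, Z);
    |Z| > 1.96 indicates a significant monotonic trend at the 5% level."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)     # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

Applying such a test independently at each of the 7187 grid cells, and then mapping where Z is significantly positive or negative, yields exactly the kind of "share increases over 3% of Iran's area, decreases over one quarter" statements reported below.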
The yearly changes in the share of one-day precipitation in supplying Iran's precipitation days and precipitation amount indicate that its share in supplying precipitation days decreases over one quarter of Iran's area and increases over only 3% of it. Given that the maximum precipitation days of western and central Iran are supplied by one-day precipitation, the precipitation days of eastern Iran have decreased. In addition, its share in supplying the precipitation amount decreases over one fifth of Iran's area and increases over only 6% of it. Given that the maximum precipitation days of central Iran are supplied by one-day precipitation, its share in supplying precipitation days has decreased; only in scattered regions and along the Zagros and Alborz ranges does its share increase.
The yearly changes in the share of one-day precipitation in supplying Iran's precipitation amount indicate that its negative trends in all rainfall months are greater than its positive trends. Looking more generally at the monthly changes of the one-day precipitation share in supplying Iran's precipitation, the concentration of Iran's precipitation becomes evident, especially in eastern and central Iran.
https://jesphys.ut.ac.ir/article_30218_4b0298eedfc01687c86068305e733542.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 38, No. 4 (2013), pp. 259-271. Study of meteorological parameters effects on lunar crescent visibility using a genetic algorithm. Jamshid Ghanbari, Maryam Karimian, Iman Babaeian, Mahdi Motiei. DOI: 10.22059/jesphys.2013.30219.

The Islamic calendar is based on lunar months, which begin when the thin crescent Moon is actually sighted in the western sky after sunset, within a day or so after the New Moon; Islamic dates begin at sunset on the previous day. The visibility of the lunar crescent as a function of the Moon's age, the time counted from the New Moon, is obviously of great importance to Muslims. The date and time of each New Moon can be computed exactly, but the time at which the Moon first becomes visible after the New Moon depends on many factors and cannot be predicted with certainty. Sighting the lunar crescent within one day of the New Moon is usually difficult: the crescent at this time is quite thin, has a low surface brightness, and can easily be lost in the twilight. Generally, the lunar crescent becomes visible to suitably located, experienced observers with good sky conditions about one day after the New Moon, but the time at which the crescent actually becomes visible varies considerably from month to month.
The record for an early telescopic sighting of a lunar crescent is 12.1 hours after New Moon; for naked-eye sightings, the record is 15.5 hours. For Islamic calendar purposes, the sighting must be made with the unaided eye. Obviously, the visibility of the young lunar crescent depends on atmospheric conditions and on the location and preparation of the observer.
Predicting the first sighting of the early crescent Moon is an interesting problem because it simultaneously involves a number of highly non-linear effects. The effects to be considered are the geometry of the Sun and Moon, the width and surface brightness of the crescent, the absorption of the Moon's light and the scattering of the Sun's light in the Earth's atmosphere, the physiology of human vision, and the natural horizon. This paper studies the effects of meteorological conditions such as mean sea level pressure, visibility, mean temperature and humidity on crescent visibility.
Our study sites are located in the southern, central and eastern parts of Iran, including the cities of Mashad, Bojnord, Birjand, Isfahan, Shiraz and Kerman. Two series of data are used in this study. The first consists of sightings and visibility records of the lunar crescent made by Moon-sighting groups in the above-mentioned cities; the second consists of meteorological observations of mean sea level pressure, mean temperature, horizontal visibility and relative humidity at the same dates and locations as the Moon sightings. Horizontal visibility is divided into two categories: below and above 10 km. The study period was 8 years, from 1423 to 1430 in the Islamic calendar.
A genetic algorithm is used to formulate the relations between Moon visibility and the meteorological parameters. Genetic algorithms are well suited to problems about which little is known in advance; they are very general and can work in almost any search space. They use the principles of selection and evolution to produce several candidate solutions to a given problem. Two approaches, one linear and one non-linear, are used to model the statistical relations between lunar visibility and the meteorological parameters.
For the linear method the following formula is used:
We used the formula below for the nonlinear approach:
where MSE is the mean square error; Robs and Rmod represent the actual and modeled visibility of the Moon; P, RH, T and V are mean sea level pressure, relative humidity, temperature and visibility, respectively; (n-p) is the number of degrees of freedom; and the ai are constant coefficients.
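The formulas themselves did not survive in this text, so as a hedged illustration only, the sketch below shows how a genetic algorithm could fit a linear visibility model of the general form Rmod = a0 + a1*P + a2*RH + a3*T + a4*V by minimizing the (n-p)-normalized MSE defined above. The coefficient form, synthetic data, parameter ranges and GA settings are all assumptions for demonstration, not the paper's actual model, data or code.

```python
import random

# Hypothetical sketch (not the paper's actual code or data): fit an assumed
# linear crescent-visibility model
#     R_mod = a0 + a1*P + a2*RH + a3*T + a4*V
# by minimizing MSE = sum((R_obs - R_mod)^2) / (n - p)
# with a plain genetic algorithm. All values here are invented.

random.seed(0)
P_COUNT = 5  # number of coefficients a0..a4 (the p in the MSE formula)

# Synthetic observations: (P in hPa, RH in %, T in degC, V in km) -> visibility
true_a = [5.0, 0.02, -0.1, 0.3, 1.5]
data = []
for _ in range(40):
    x = (random.uniform(1000, 1030), random.uniform(10, 90),
         random.uniform(0, 35), random.uniform(2, 20))
    data.append((x, true_a[0] + sum(a * v for a, v in zip(true_a[1:], x))))

def mse(coeffs):
    """Mean square error over the data set, divided by (n - p)."""
    n = len(data)
    err = sum((r - (coeffs[0] + sum(a * v for a, v in zip(coeffs[1:], x)))) ** 2
              for x, r in data)
    return err / (n - P_COUNT)

def mutate(ind, rate=0.3, scale=0.5):
    """Gaussian mutation: perturb each coefficient with probability `rate`."""
    return [c + random.gauss(0, scale) if random.random() < rate else c
            for c in ind]

def crossover(a, b):
    """Uniform crossover: pick each coefficient from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

# Generational GA with elitism: keep the best 20, breed 40 children each round.
pop = [[random.uniform(-10, 10) for _ in range(P_COUNT)] for _ in range(60)]
initial_mse = min(mse(ind) for ind in pop)
for gen in range(300):
    pop.sort(key=mse)
    parents = pop[:20]
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(40)]

final_mse = min(mse(ind) for ind in pop)
print("initial best MSE:", initial_mse)
print("final best MSE:", final_mse)
```

Because the elite individuals are carried over unchanged each generation, the best MSE is monotone non-increasing; a nonlinear model would differ only in the expression inside `mse`, with the same selection loop.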
Meteorological parameters are among the important factors affecting crescent visibility, but they have received little attention so far. In this paper a genetic algorithm has been used to find relationships between the percentage of crescent illumination and meteorological parameters such as sea level pressure, mean temperature, relative humidity and horizontal visibility. To this end, observations during the period 1423-1430 of the lunar Hejri (Islamic) calendar were considered for Mashad, Kerman, Shiraz, Esfahan, Birjand and Boujnourd, for the two cases of horizontal visibility below and above about 10 km.
The error, bias and weighted factor of meteorological impacts on crescent visibility were calculated by comparing modeled and observed crescent visibility. Results generally show that the non-linear parameterization equations have more bias than the linear equations. The maximum bias, 3.24, occurred in the nonlinear model for horizontal visibility below 10 km over the Birjand and Bojnourd sites. The minimum bias of crescent visibility, 0.01 percent, occurred in Shiraz. The minimum and maximum relative errors are found in Shiraz and Birjand, at 1.96% and 99%, respectively. We also found that in linear modeling with horizontal visibility above 10 km, the weighted effect of pressure increases with decreasing altitude above mean sea level, and the effect of humidity decreases with increasing altitude.
Our results confirm that crescent visibility is most sensitive to pressure and horizontal visibility. Overall, both the linear and nonlinear equations give acceptable results for modeling crescent visibility. The results of this paper show that meteorological parameterization of crescent visibility can be used to predict crescent visibility from a meteorological point of view.
https://jesphys.ut.ac.ir/article_30219_120d97e3d93a330b279f0a35d4f65357.pdf