Journal of the Earth and Space Physics, Institute of Geophysics, University of Tehran (ISSN 2538-371X), Vol. 41, No. 3, 23 September 2015.

Crustal velocity in the Busher region and analyses of the 2013 Mw 6.3 Kaki-Busher earthquake
Mehrdad Ansaripor and Mehdi Rezapour
Pages 351-361. DOI: 10.22059/jesphys.2015.54003. Journal article; 2014-08-14.

Investigations of velocity structure based on seismic-wave propagation have been carried out in recent years in different parts of Iran, and the resulting models are valuable for analyzing seismic zones and locating earthquakes with reasonable accuracy. On a broad scale, the seismotectonics of southern Iran are controlled by active convergence between the Arabian and Eurasian tectonic plates; at the latitude of the event, the convergence rate between Arabia and Eurasia is approximately 30 mm/yr. The April 9, 2013 Mw 6.3 earthquake in southern Iran occurred as a result of northeast-southwest oriented thrust-type motion in the shallow crust of the Arabian plate. The depth and style of faulting in this event are consistent with shortening of the shallow Arabian crust within the Zagros Mountains in response to this convergence. To determine a crustal velocity model for the Busher region, we used the aftershock sequence of the 2013 Mw 6.3 Kaki-Busher earthquake, which was widely felt in Bahrain, Iran, Kuwait, Qatar, Saudi Arabia and the United Arab Emirates. The velocity structure matters both for locating earthquakes and for understanding the structure of the crust. To reduce the maximum azimuthal gap and locate the earthquakes more accurately, we also used data recorded at stations of the Saudi Seismic Network. 137 reliably located aftershocks were used to compute a one-dimensional velocity model by inversion, yielding layer thicknesses and P-wave velocities from the surface down through the crust.
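The maximum azimuthal gap used here as a location-quality criterion is simple to compute; the sketch below (illustrative Python, not the authors' code) finds the largest angular gap between station azimuths as seen from an epicenter:

```python
def max_azimuthal_gap(azimuths_deg):
    """Largest angular gap (degrees) between consecutive station azimuths."""
    az = sorted(a % 360.0 for a in azimuths_deg)
    gaps = [az[i + 1] - az[i] for i in range(len(az) - 1)]
    gaps.append(360.0 - az[-1] + az[0])  # wrap-around gap through north
    return max(gaps)
```

Adding stations on the far side of an event (here, the Saudi network to the west) shrinks this gap and stabilizes the locations.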
The crust in this model consists of five layers: a 4 km thick layer with Vp = 5.75 km/s over an 11 km thick layer with Vp = 5.95 km/s; a third, 7 km thick layer with Vp = 6.30 km/s over a fourth, 12 km thick layer with Vp = 6.60 km/s; and finally a fifth, 9 km thick layer with Vp = 7.25 km/s lying on a half-space with Vp = 8.00 km/s. The Moho depth in the area was thus determined to be 43 km. The good resolution of the final model and the acceptable results demonstrate the ability of the method to detect velocity anomalies. To compare the new model with the one used by the Institute of Geophysics, we relocated the aftershocks with both models under the same conditions; the location errors are reduced overall with the new model. A look at previous seismic activity shows that this earthquake occurred in a part of Iran where no earthquake greater than magnitude 5 had been recorded within a radius of 25 km, although earthquakes above magnitude 4 in the area point to active faults in the region. The spatial distribution of aftershock epicenters relocated with the new velocity model indicates that the causative fault of the 2013 Mw 6.3 Kaki-Busher earthquake is the Mountain Front Fault (MFF). Profiles normal to the aftershock trend show that the causative fault dips toward the northeast, and the aftershock depths of 15 to 20 km seen in these sections can be taken as the depth of the seismogenic zone in the area.
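The reported five-layer model can be written down directly; the sketch below encodes the stated thicknesses and velocities and checks that they sum to the reported 43 km Moho depth (the travel-time helper is illustrative, not part of the study):

```python
# Five-layer crustal model from the abstract: (thickness_km, Vp_km_per_s)
LAYERS = [(4, 5.75), (11, 5.95), (7, 6.30), (12, 6.60), (9, 7.25)]
HALF_SPACE_VP = 8.00  # km/s below the Moho

def moho_depth(layers):
    """Moho depth = sum of the crustal layer thicknesses (km)."""
    return sum(h for h, _ in layers)

def vertical_p_time(layers):
    """One-way vertical P travel time (s) from the surface to the Moho."""
    return sum(h / v for h, v in layers)
```

Summing the thickness/velocity ratios gives a one-way vertical P time of roughly 6.7 s for this column.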
https://jesphys.ut.ac.ir/article_54003_de73989a658f57c2f717dcfd128d0dd6.pdf

Seismic activity zoning of Zagros fold and thrust belt using fractal parameters
Somayeh Kalane (M.Sc., Department of Geology, Faculty of Sciences, Golestan University, Gorgan, Iran) and Maryam Agh-Atabai (Assistant Professor, Department of Geology, Faculty of Sciences, Golestan University, Gorgan, Iran)
Pages 363-375. DOI: 10.22059/jesphys.2015.53602. Journal article; 2014-08-31.

Earthquakes are a particular concern because of the serious hazard they present.
The existence of seismicity patterns such as foreshock and aftershock clusters, the doughnut pattern and seismic quiescence shows that earthquake occurrence in active tectonic regions is not random; clustering in space and time is a characteristic feature of these events. Fractal analysis, as a statistical tool, has been applied to describe the spatial and temporal distribution of earthquakes (Bhattacharya and Kayal, 2003; Ceylan, 2006). The well-known Gutenberg-Richter (1944) relation also implies a power-law relation between energy release and frequency of occurrence. This means that the size distribution of earthquakes is scale invariant, and the <em>b-value</em> has been suggested as a generalized fractal dimension of earthquake magnitude (Aki, 1981; Turcotte, 1997). Earthquakes therefore have a fractal structure in the distributions of size, space and time. In this paper, the spatial variations of the fractal parameters of seismicity in the Zagros fold and thrust belt, between 25° and 37° N and 44° and 58° E, have been analyzed. To this end, we extracted a homogeneous catalogue with mb ≥ 4.4 from the ISC and NEIC bulletins, covering the period 1975-2014. To investigate the spatial variations of the fractal parameters, the study area was covered by a 0.5°×0.5° grid. The fractal parameters, consisting of the <em>b-value</em> and the correlation dimensions of the epicentral (<em>D<sub>e</sub></em>) and occurrence-time (<em>D<sub>t</sub></em>) distributions, were then estimated for each sample volume, i.e. the data within a fixed radius (75 km) centered on each grid node. The ZMAP software was used to calculate the parameters at each node. Two sets of analyses were carried out, for the total and the declustered data sets. In the final stage, the spatial variations of the <em>b-value</em>, <em>D<sub>e</sub></em> and <em>D<sub>t</sub></em> parameters were drawn as maps.
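The <em>b-value</em> in studies like this is commonly estimated by maximum likelihood. The sketch below shows the standard Aki (1965) estimator with Utsu's correction for binned magnitudes; the paper uses ZMAP, which implements an equivalent estimator, so this is only an illustration:

```python
import math

def b_value_mle(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value. mc is the completeness
    magnitude; dm is the catalogue's magnitude bin width (Utsu's
    correction shifts mc by dm/2 to account for binning)."""
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

A catalogue whose mean magnitude sits far above mc yields a low b (relatively more large events), which is the sense in which the low-b anomalies below are read as zones of higher stress.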
The <em>b-value</em> map of the total data set indicates low b in the Zagros-Makran transition zone, probably as a result of increased applied stress due to a greater depth of the seismogenic crust and/or a higher convergence rate between the Arabian and Eurasian plates in this zone. The Kazerun-Borazjan faults, a transfer fault zone, are also characterized by a low <em>b-value</em>. The map likewise shows low values in the northwest Zagros, where the recent damaging Murmuri earthquake, Mw 6.2 (NEIC), occurred. An interesting result of this research is the high correlation between the <em>b-value</em>, <em>D<sub>e</sub></em> and <em>D<sub>t</sub></em> maps. Similar to the <em>b-value</em> map, the <em>D<sub>e</sub></em> and <em>D<sub>t</sub></em> maps show anomalously low values in the Zagros-Makran transition zone, along the Kazerun, Borazjan, Karebas and Sabz-Pushan faults and in the northwest Zagros, indicating strong clustering of events in space and occurrence time.
To investigate the background seismicity pattern of the Zagros, the same analysis was repeated after declustering the catalogue, i.e. removing dependent events with the Reasenberg (1985) algorithm. The <em>b-value</em> results are almost the same as before, with anomalously low values in the Zagros-Makran transition zone, along the Kazerun, Borazjan, Karebas and Sabz-Pushan faults and in the northwest Zagros. Since the epicentral and occurrence-time distributions of earthquakes are sensitive to clustering, the declustering process, as expected, alters the results for the two other parameters. The spatial variation map of <em>D<sub>t</sub></em> shows an almost uniform distribution. Despite this, the <em>D<sub>e</sub></em> map still shows lower values in the Zagros-Makran transition zone and along the Kazerun, Borazjan, Karebas and Sabz-Pushan faults than in other regions. This suggests that the mainshocks occur in clusters along these main structural trends. The occurrence of smaller earthquakes, coupled with a homogeneous distribution of epicenters over the more extensive region of the Zagros, shows that stress is released along dispersed, smaller faults. In the Zagros-Makran transition and Kazerun-Borazjan zones, by contrast, increased clustering of the epicentral distribution (decreased <em>D<sub>e</sub></em>) is associated with increased stress concentration on the main structural trends (such as the Oman line and the Qatar-Kazerun line). The temporal distribution of earthquakes in these zones indicates a high degree of clustering, possibly due to the occurrence of frequent larger earthquakes with aftershock sequences.
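The correlation dimensions <em>D<sub>e</sub></em> and <em>D<sub>t</sub></em> are usually obtained from the Grassberger-Procaccia correlation integral, C(r), the fraction of event pairs separated by less than r; the dimension is the slope of log C(r) versus log r. A minimal two-scale sketch (illustrative, not the ZMAP implementation):

```python
import math

def correlation_integral(points, r):
    """C(r): fraction of point pairs closer than r (Grassberger-Procaccia)."""
    n = len(points)
    close = sum(1 for i in range(n) for j in range(i + 1, n)
                if math.dist(points[i], points[j]) < r)
    return 2.0 * close / (n * (n - 1))

def correlation_dimension(points, r1, r2):
    """Slope of log C(r) between two scales r1 < r2 (crude two-point fit;
    a real analysis fits a line over the whole scaling range)."""
    c1 = correlation_integral(points, r1)
    c2 = correlation_integral(points, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))
```

Points spread along a line give a dimension near 1, while tightly clustered epicenters pull the dimension down, which is how a low <em>D<sub>e</sub></em> is read as strong spatial clustering.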
Overall, spatial mapping of the fractal parameters yielded valuable information about the scale-invariance properties of seismic activity in the region. These results suggest that the fractal approach can be a useful tool for assessing the distribution of seismic energy in seismotectonically active regions.
https://jesphys.ut.ac.ir/article_53602_283c716d735dfe36f0e18b83b1124bc1.pdf

Fault plane determination of the 2012 August 11 Ahar-Varzaghan earthquakes based on the H-C method
Farideh-Sadat Mirdamadi (Institute of Geophysics, University of Tehran) and Mahdi Rezapour
Pages 377-390. DOI: 10.22059/jesphys.2015.53701. Journal article; 2014-10-14.

The study area lies between the active North Anatolian fault system in Turkey and the active Alborz and Zagros mountain ranges in Iran. The total shortening resulting from the Arabia-Eurasia collision, at a rate of 22 mm/year across NE Persia, is partitioned between two fault systems: right-lateral strike-slip movements in the Turkish-Iranian Plateau, such as the North Tabriz fault system, and thrusting in the Caucasus. In other words, the study area transfers part of this northward Arabia-Eurasia relative motion to Anatolia. The region is one of the most active and tectonically young areas in the Middle East and has experienced devastating earthquakes during the past few years. One of the most important tasks in earthquake seismology, in order to characterize the earthquake source, is to understand the mechanism of the causative fault. In this study, the focal mechanisms of the double earthquakes of Mw 6.4 and 6.2 on 11 August 2012 (21 Mordad 1391) in north-west Iran, together with aftershocks of magnitude greater than 5, were determined using the ISOLA software and then compared with the solutions reported by the Harvard CMT and other agencies.
In this method, the focal mechanisms of the earthquakes are determined by full-waveform modeling and centroid moment tensor inversion. In this area, earthquakes are mostly concentrated around the Tabriz fault, whereas the immediate study region has shown no significant seismic activity. The recent activity of the south Ahar fault is therefore of great importance, since it has generated destructive earthquakes.
In this study, a geometrical method called H-C is used to identify the fault plane. The H-C method is simple and applicable whenever a reliable earthquake location and a Centroid Moment Tensor (CMT) solution are available. The CMT solution gives two planes passing through the centroid C (plane I and plane II), defined by the strike and dip angles of the moment tensor solution. Assuming a planar fault, the fault plane is then identified as the one of planes I and II that encompasses the hypocenter H (hence the name H-C).
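Geometrically, the H-C test asks which nodal plane, passed through the centroid C, comes closer to the hypocenter H. A minimal sketch in local east-north-up coordinates (the helper names and the tolerance-free "closer plane" criterion are ours, not the paper's):

```python
import math

def plane_normal(strike_deg, dip_deg):
    """Upward unit normal of a plane given strike/dip, in (east, north, up)
    coordinates, using the Aki & Richards strike convention."""
    s, d = math.radians(strike_deg), math.radians(dip_deg)
    return (math.cos(s) * math.sin(d), -math.sin(s) * math.sin(d), math.cos(d))

def hc_fault_plane(hypo, centroid, plane1, plane2):
    """Return the (strike, dip) pair whose nodal plane through the centroid
    passes closer to the hypocenter -- the H-C criterion."""
    dx = tuple(h - c for h, c in zip(hypo, centroid))
    def dist(plane):
        n = plane_normal(*plane)
        return abs(sum(a * b for a, b in zip(n, dx)))  # |n . (H - C)|
    return plane1 if dist(plane1) <= dist(plane2) else plane2
```

In practice the decision must also respect the location and CMT uncertainties; when both plane distances are comparable to those errors, the test is inconclusive.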
Data from broad-band stations of the International Institute of Earthquake Engineering and Seismology (IIEES), the Azerbaijan National Seismic Network (ANSN) and the Iranian Seismological Center (IRSC) were used in this study. To analyze these earthquakes, hypocenter locations were obtained by gathering all available data from these agencies and relocating the events with the Hypocenter program and a suitable velocity model. The study region is enclosed between 45º and 48º east longitude and 37.5º and 39º north latitude.
With this method we can achieve higher accuracy than methods that rely on teleseismic data, since local and regional seismograms, and hence higher frequencies, are used. The mechanism obtained for the first shock is Strike/Dip/Rake = 85º/89º/165º, and for the second shock Strike/Dip/Rake = 252º/64º/125º. According to the calculated focal mechanisms, the dominant mechanism of the South Ahar fault is strike-slip to strike-slip with a reverse component. The obtained focal mechanisms show that the two fault segments activated in these earthquakes both have right-lateral mechanisms, the first dipping toward the south and the second toward the north.
https://jesphys.ut.ac.ir/article_53701_b3c41a317ed1d1803b20091b43bb1d27.pdf

Prediction of near-field directivity pulse characteristics through a deterministic simulation approach and its calibration
Ali Hassankhani and Hamid Zafarani
Pages 391-402. DOI: 10.22059/jesphys.2015.55103. Journal article; 2014-12-14.

Earthquake ground motions show significant variability in both spectral and temporal characteristics. Procedures for generalizing and predicting strong ground motion can be broadly divided into three disciplines: numerical techniques based on a kinematic source description, empirical attenuation models, and semi-empirical stochastic models. Numerous studies have shown that in near-fault regions, i.e. at distances comparable to a few fault lengths, the ground motion from moderate to large earthquakes is strongly affected by the evolution of the rupture along the fault plane, producing a more complex spatial distribution of the observed values.
Moreover, since the number of near-field records of large earthquakes is usually inadequate, empirical models, even those developed specifically for the near field, show a large uncertainty that cannot be reduced until sufficient data become available. Although recent events, e.g. the 2009 L'Aquila, Italy, and 2011 Christchurch, New Zealand earthquakes, have provided some additional records in the near-fault region, fifty years of strong-motion recording worldwide is not sufficient to cover the whole range of site and propagation-path conditions, rupture processes and source-site geometries that are possible for earthquakes in near-source regions. As an alternative to records from past earthquakes, computational geophysical techniques based on a kinematic source description can be used to simulate physically based synthetic seismograms for engineering applications. Near-fault ground motions show high spatial heterogeneity due to rupture complexity, fault-to-site orientation, seismic-wave propagation and local site effects. For the time being, the use of physically based synthetic ground motions obtained by kinematic simulation approaches may partially overcome the scarcity of near-source data. Here, a discrete-wavenumber/finite-element technique is used to compute velocity time series in the low-frequency band (up to 1.5 Hz) and to investigate the variability of the ground motion as a function of different source characteristics and source-to-site geometries. The approach is well suited to studying the propagation of seismic waves in a horizontally layered medium.
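Mavroeidis and Papageorgiou (2003) describe the near-fault velocity pulse analytically as a harmonic carrier under a raised-cosine envelope, with amplitude A, prevailing frequency fp, phase ν, and an oscillatory-character parameter γ > 1 controlling the pulse duration. A minimal sketch of that form (our transcription from memory; check it against MP2003 before use):

```python
import math

def mp2003_pulse(t, A=1.0, fp=1.0, gamma=2.0, nu=0.0, t0=0.0):
    """Mavroeidis-Papageorgiou (2003) near-fault velocity pulse.
    A: amplitude, fp: prevailing frequency (Hz), gamma: oscillatory
    character (> 1), nu: phase (rad), t0: epoch of the envelope peak.
    Zero outside the envelope support |t - t0| <= gamma / (2 fp)."""
    if abs(t - t0) > gamma / (2.0 * fp):
        return 0.0
    envelope = 0.5 * (1.0 + math.cos(2.0 * math.pi * fp * (t - t0) / gamma))
    return A * envelope * math.cos(2.0 * math.pi * fp * (t - t0) + nu)
```

Calibrating the model then amounts to fitting (A, fp, gamma, nu, t0) to the simulated velocity time series.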
Ground motions from earthquakes with moment magnitudes from 6.0 to 7.5 were simulated, in 0.5-magnitude-unit increments, at three fault distances: 5, 10 and 15 km. To study the effects of near-fault velocity pulses on the dynamic response of structures, simple pulse models such as rectangular, triangular and sinusoidal shapes have been presented and used in the literature. However, recent studies show that using such models in studies of the dynamic behavior of structures may lead to incorrect conclusions. Mavroeidis and Papageorgiou (2003, hereafter MP2003), using a dataset of near-fault records, presented a mathematical expression for the velocity-pulse shape that has been widely accepted because of its simplicity and precision. The simulation results of the current study have been used within the mathematical model proposed by MP2003 to perform a robust parameterization of the model.
https://jesphys.ut.ac.ir/article_55103_4e59325815af74d886b551f1de3f2afb.pdf

S-transform with maximum energy concentration and its application to detect gas-bearing zones and low-frequency shadows
Mohammad Radad (ORCID 0000-0002-3904-1999), Ali Gholami and Hamid Reza Siahkoohi
Pages 403-412. DOI: 10.22059/jesphys.2015.53700. Journal article; 2014-11-19.

A seismic attribute is a quantitative measure of a seismic characteristic of interest, and many such attributes exist. In recent years, time-frequency (TF) attributes have been developed; obtaining them requires TF analysis of the seismic data. A high-resolution TF representation (TFR) yields more accurate TF attributes. There are several TFR methods, including the short-time Fourier transform, wavelet transforms, the S-transform, the Wigner-Ville distribution and the Hilbert-Huang transform. In this paper the S-transform is considered, and an algorithm is proposed to improve its resolution. In Fourier-based TFR methods, the width of the analysis window is the main factor affecting the resolution.
The standard S-transform (SST) employs a Gaussian window whose standard deviation, which controls the window width, varies inversely with frequency (Stockwell et al., 1996). The idea was to use a frequency-dependent window for TF decomposition. However, the TF resolution of the SST is far from ideal: it demonstrates weak temporal resolution at low frequencies and weak spectral resolution at high frequencies. Later, the generalized S-transform was proposed, using an arbitrary window function whose shape is controlled by several free parameters (McFadden et al., 1999; Pinnegar and Mansinha, 2003). Another approach to improving the resolution of a TFR is based on the concept of energy concentration (Gholami, 2013; Djurovic et al., 2008). Following this approach, an algorithm is proposed in this paper to find the optimum windows for the S-transform so as to obtain a TFR with maximum energy concentration. To reach this aim, an optimization problem is defined in which an energy concentration measure (ECM) is employed to condition the windows so that the TFR attains maximum energy concentration. Here, we utilize a Gaussian window function. Different windows are then constructed, in a non-parametric form, from a range of standard deviations, and a TFR is computed for each window; the optimum TFR is the one with maximum energy concentration. The optimization is performed for each frequency component individually, so there is an optimum window width for each frequency component. Several ECMs are used in different applications (Hurley and Rickard, 2009); in this paper we employ the Modified Shannon Entropy as the ECM. The SST algorithm needs to be implemented in the frequency domain (Stockwell et al., 1996) because of the dependence of the standard deviation of the Gaussian window on frequency.
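As a concrete illustration of the frequency-dependent Gaussian window, here is a minimal NumPy sketch of the standard S-transform in its frequency-domain form (the function name and array conventions are ours, not from the paper):

```python
import numpy as np

def s_transform(x):
    """Standard S-transform (Stockwell et al., 1996), frequency-domain form.

    The localizing Gaussian has a standard deviation proportional to 1/f,
    so it narrows in time (and widens in frequency) as frequency grows.
    Returns an (N//2 + 1, N) complex array: rows are frequency bins
    (cycles per record), columns are time samples.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    Xc = np.concatenate([X, X])           # wrap-around access to X[n + m]
    m = np.fft.fftfreq(N, d=1.0 / N)      # symmetric integer frequency shifts
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = x.mean()                    # zero-frequency voice = signal mean
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi**2 * m**2 / n**2)  # voice window (freq domain)
        S[n, :] = np.fft.ifft(Xc[n:n + N] * gauss)
    return S
```

For a pure cosine of k cycles per record, |S| concentrates along the k-th frequency row, with time resolution improving (and frequency resolution coarsening) toward higher rows, which is exactly the SST trade-off described above.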
However, the proposed method can also be implemented in the time domain, where the optimum windows are found adaptively for each time sample of the signal. We apply the proposed method to a synthetic signal to compare its performance with other TF analysis methods in providing a well-concentrated TF map; the comparison shows the superiority of the proposed method over the STFT and the SST. We also perform a quantitative experiment to evaluate the performance of the TFRs, and the results confirm that the proposed method performs best compared with the STFT and the SST. The proposed method is then employed to detect gas-bearing zones and low-frequency shadows in a seismic data set from a gas reservoir in Iran. For this purpose, several TF seismic attributes are extracted: instantaneous amplitude, dominant instantaneous frequency, sweetness factor, single-frequency sections and the cumulative relative amplitude percentile (C80). The attributes are also extracted with the SST for comparison. The results show that the attributes obtained by the proposed method have higher resolution, so that gas-bearing zones and low-frequency shadows are better localized on the corresponding attribute sections.
https://jesphys.ut.ac.ir/article_53700_947afe9c4021e1961048bd49977515f1.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, 41(3), 2015-09-23.
Real-time hybrid orbit determination using satellite-to-satellite tracking observations. pp. 413-424. DOI: 10.22059/jesphys.2015.52818. Mohammad Ali Sharifi (ORCID 0000-0003-0745-4147), Masoud Abbas Hadi, Mohammad Reza Seyf, Taghi Shojayi. Journal Article, 2014-04-22.
Three-dimensional position measurements by the Global Positioning System (GPS) provide a purely geometrical estimate of the positions of Low Earth Orbiters (LEOs).
This provides uninterrupted tracking of the LEOs in three spatial dimensions, the so-called kinematic orbit. The solution relies heavily on the observations: high-frequency observation noise, outlying observations and a low redundancy of measurements are the main obstacles to the purely observed, kinematic orbit. On the other hand, the dynamic orbit is not ideal either, owing to mis-modeling of the assumed force field. Introducing the equation of motion as a dynamic process helps to overcome the aforementioned problems to a great extent; the equation of satellite motion, based on the forces acting on the satellite, provides a dynamic solution that reduces the stated problems of the purely geometric solution. Hybrid methods combine the kinematic and dynamic determinations of the state vector with carefully selected relative weighting. At first glance, the dynamic orbit adjusts the fewest parameters, preserving maximum data strength and yielding the lowest formal error (the error due to observation noise), even when observations are sparse. On the other hand, the dynamic orbit can suffer from large systematic errors due to imperfect force models, which accumulate and recur along the orbit. The kinematic orbit eliminates modeling error, but since the orbit is determined entirely from the observations, data strength is depleted and the formal error due to the observations can grow large. The hybrid orbit optimally combines the two techniques: the goal of hybrid methods is to determine the optimal combination that leads to the lowest overall errors in the state vector and the estimated dynamic model parameters. If the parameters and observations are related linearly, several powerful linear estimators can be applied to estimate the unknown parameters.
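The relative weighting of the dynamic model against the kinematic observations can be illustrated with a toy one-dimensional linear Kalman filter. This is a sketch of the weighting principle only; the paper uses an EKF on the full orbital state, and the function name, noise values and constant-velocity model below are our assumptions:

```python
import numpy as np

def hybrid_orbit_1d(zs, dt=1.0, r=25.0, q=1e-3):
    """Toy 1-D illustration of kinematic/dynamic blending.

    Dynamic information: a constant-velocity motion model (stand-in for the
    equation of motion), with process-noise scale q representing model error.
    Kinematic information: noisy position observations zs with variance r.
    The Kalman gain supplies the optimal relative weighting of the two.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process (model-error) noise
    R = np.array([[r]])                            # observation noise
    x = np.array([zs[0], 0.0])                     # initial [position, velocity]
    P = np.eye(2) * 100.0
    out = []
    for z in zs:
        x = F @ x                                   # dynamic prediction
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain = weighting
        x = x + K @ (np.array([z]) - H @ x)         # kinematic update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

With a small q the filter trusts the dynamic model and strongly smooths the observation noise; a large q shifts the weight back toward the kinematic solution, mirroring the trade-off discussed above.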
Linearization of nonlinear models is the most frequently used scheme for applying theoretically and computationally well-developed linear estimators. Orbit determination is a well-known example of a highly nonlinear engineering problem: in general, the observations and the augmented equation of motion are nonlinear with respect to time and the parameters. Because of this strong nonlinearity, the Extended Kalman Filter (EKF) was chosen as a more appropriate filter than the standard Kalman filter. For numerical evaluation of the proposed method, GPS P-code observations of the CHAllenging Minisatellite Payload (CHAMP) and the Gravity Recovery And Climate Experiment (GRACE) twin satellites were used. The proposed method is applicable to phase observations as well as code observations; in addition, the more precise code observations promised by upcoming global navigation satellite systems such as Galileo were a further motivation for choosing code observations. The final results are compared with the Rapid Science Orbits of CHAMP and the GRACE twin satellites disseminated by the GeoForschungsZentrum (GFZ) Helmholtz Centre Potsdam. The high quality of the hybrid solution demonstrates the efficiency of the proposed method, which achieves a noise level nearly four times better than the purely kinematic method in this case study.
https://jesphys.ut.ac.ir/article_52818_28f9bf01555796792a4d4bb0022f1729.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, 41(3), 2015-09-23.
Depth estimation of salt domes using gravity data through General Regression Neural Networks, case study: Mors salt dome, Denmark. pp. 425-438. DOI: 10.22059/jesphys.2015.54814. Alireza Hajian, Mahmoud Shirazi. Journal Article, 2014-08-04.
In this paper, an intelligent method based on General Regression Neural Networks (GRNN) is presented to estimate the depth of salt domes from gravity data. Neural networks are a good tool for automatic interpretation of geophysical data, especially for depth estimation of gravity anomalies. The gravity signal is a nonlinear function of the depth, density and geometrical parameters of the buried body.
Neural networks are among the common modern tools for identifying nonlinear systems; their parallel processing and ability to learn from training data are good motivations for using them to interpret gravity data. Salt domes are a target of gravity surveys in oil exploration because, in most cases in the Middle East, America and parts of Europe such as Denmark, they are good locations for oil traps and diapirs. The structure of salt domes is far from simple, yet in most available depth-estimation methods they are reduced to simple geometrical bodies such as a sphere or a cylinder. These simplifications prevent the models from matching the real nature of salt domes. The salt-dome modeling in this paper does not follow these simplifications; instead, a near-to-real shape of the salt-dome body is modeled with the Grav2dc software. Different configurations of the salt-dome model are considered: with oil; with oil and salt water; with gas and oil; and with none of gas, oil or salt water. For all of these models, the Grav2dc and Surfer software packages are used to calculate the gravity effect of the body, and the related features are then extracted. To train the general regression neural network, the range of salt-dome depths is selected according to the available prior geological information. For example, if the possible depth of the salt dome, according to the geological properties and/or well-log data, is between 2 and 4 kilometers, the GRNN is trained with salt-dome models at depths from 1 to 4 kilometers. In this way, the gravity effects of several salt-dome models at different depths were first calculated via forward modeling, and the GRNN was trained with this set of data.
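The train-on-forward-models loop described above can be sketched with a minimal GRNN, which is mathematically a Gaussian-kernel (Nadaraya-Watson) regression over the training targets. The sphere forward model, parameter values and function names below are illustrative assumptions, not the paper's Grav2dc models:

```python
import numpy as np

GRAV = 6.674e-11  # gravitational constant, SI units

def sphere_gz(xs, depth, radius=500.0, drho=300.0):
    """Gravity anomaly (mGal) of a buried sphere along a surface profile xs (m)."""
    mass = (4.0 / 3.0) * np.pi * radius**3 * drho
    return GRAV * mass * depth / (xs**2 + depth**2) ** 1.5 * 1e5  # m/s^2 -> mGal

def grnn_predict(x, train_X, train_y, sigma=0.1):
    """GRNN (Specht, 1991): Gaussian-kernel weighted average of training
    targets; sigma is the single smoothing parameter of the network."""
    d2 = np.sum((train_X - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return float(np.sum(w * train_y) / np.sum(w))

# Training set: forward-modelled profiles over the geologically plausible
# depth range (here 1-4 km), shape-normalised so only the anomaly form matters.
xs = np.linspace(-5000.0, 5000.0, 101)
depths = np.linspace(1000.0, 4000.0, 61)
train_X = np.array([sphere_gz(xs, z) for z in depths])
train_X = train_X / train_X.max(axis=1, keepdims=True)
train_y = depths

# Depth estimate for an anomaly whose depth (2225 m) is not a training sample
obs = sphere_gz(xs, 2225.0)
obs = obs / obs.max()
depth_hat = grnn_predict(obs, train_X, train_y)
```

Because the kernel average interpolates between the nearest forward-modelled profiles, the estimate lands close to the true 2225 m even though that depth was never in the training set.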
The GRNN architecture was modified according to the root mean square error (RMSE) of the network, and the modifications were repeated until a network with an acceptable training RMSE was achieved. To test the GRNN, synthetic gravity data of a salt dome with two different noise levels, 5% as low noise and 10% as high noise, were applied to the designed GRNN and the corresponding depth was estimated. Overall, the results showed a good ability of the GRNN for depth estimation of salt domes. Finally, the GRNN was tested on real gravity data over the Mors salt dome in Denmark. The Mors salt dome is a gravity target for oil exploration and an interesting case study for many geophysicists and geoscientists. The results for the real data also demonstrated the ability of the general regression neural network to estimate the depth of salt domes with a low root mean square error.
https://jesphys.ut.ac.ir/article_54814_7972810e9c52ef0ba88304493d6b73a5.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, 41(3), 2015-09-23.
Depth and shape estimation of salt domes via interpretation of gravity data using Multi-Layer Perceptron Neural Networks. pp. 439-452. DOI: 10.22059/jesphys.2015.53697. Omid Olfati, Hamid Aghajani, Alireza Hajian. Journal Article, 2014-08-30.
In applied geophysics, especially in potential-field methods such as gravity, generalized bodies are often used to represent the distribution of underground masses: spheres, vertical cylinders, vertical prisms, horizontal cylinders, vertical faults, anticlines and synclines. In this paper, Multi-Layer Perceptron (MLP) artificial neural networks are used to find the most probable model for a given gravity anomaly of a salt dome. A neural network is therefore trained with anomalies produced by two different kinds of disturbing bodies that generate similar anomalies. These simple models, which are the shapes most commonly used for modeling salt domes, are the sphere and the vertical cylinder. The trained MLP network is then able to recognize the kind of body producing a given gravity anomaly. Through the neural-network technique, the ambiguity between similar anomalies generated by different disturbing bodies can be resolved without using densities.
No classical interpretation method can, for example, discriminate between an anticline and a syncline without any hypotheses about the shape or density contrast of the target. It is shown here that this can be done by applying multi-layer perceptron artificial neural networks to qualitative gravity interpretation. Using this kind of network, the interpreter can perform both qualitative and quantitative interpretation of gravity data. Qualitative interpretation means resolving the ambiguity between two bodies that produce similar anomalies; in quantitative interpretation with MLP networks, the model parameters (including depth and radius) can be obtained. The sphere and the vertical cylinder are the models used to represent the salt domes; since we use gravity data of the Humble salt dome as a real test of the method, these models are used for training the neural network. Using the sphere and vertical-cylinder models, we prepared, normalized and used a set of suitable features as inputs to the network. Because there is no definite rule for choosing the number of neurons in the hidden layer, we varied it and compared the sum squared errors in each case to find the best number of neurons for this layer. After fixing these neurons, the network was trained with synthetic data from the sphere and cylinder models.
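The shape-recognition step can be sketched with a tiny MLP trained on normalized sphere and vertical-cylinder anomaly shapes. The forward models (a point sphere and a vertical line-mass approximation of the cylinder), network size and learning settings below are our assumptions, not the paper's configuration:

```python
import numpy as np

def sphere_shape(xs, z):
    """Normalised gravity-anomaly shape of a sphere at depth z."""
    g = z / (xs**2 + z**2) ** 1.5
    return g / g.max()

def cylinder_shape(xs, z):
    """Normalised anomaly shape of a semi-infinite vertical cylinder
    (vertical line-mass approximation) with its top at depth z."""
    g = 1.0 / np.sqrt(xs**2 + z**2)
    return g / g.max()

def train_mlp(X, y, hidden=8, lr=0.3, epochs=5000, seed=0):
    """One-hidden-layer perceptron (tanh hidden units, sigmoid output)
    trained by full-batch gradient descent on binary cross-entropy."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))
        g = (p - y) / n                      # gradient of loss w.r.t. logits
        W2 -= lr * (H.T @ g)
        b2 -= lr * g.sum()
        gH = np.outer(g, W2) * (1.0 - H**2)  # backpropagate through tanh
        W1 -= lr * (X.T @ gH)
        b1 -= lr * gH.sum(axis=0)
    return W1, b1, W2, b2

def mlp_classify(params, X):
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(H @ W2 + b2))) > 0.5).astype(int)

# Labelled anomalies: 0 = sphere, 1 = vertical cylinder, depths 1-4 (arbitrary units)
xs = np.linspace(-10.0, 10.0, 41)
zs = np.linspace(1.0, 4.0, 40)
X = np.array([sphere_shape(xs, z) for z in zs] +
             [cylinder_shape(xs, z) for z in zs])
y = np.array([0] * 40 + [1] * 40)
params = train_mlp(X, y)
```

The two shape families decay at different rates away from the peak (roughly (1 + (x/z)^2)^-3/2 for the sphere versus (1 + (x/z)^2)^-1/2 for the cylinder), so the network separates them from the normalized profile alone, without any density information.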
It should be mentioned that the neural network was trained over the relevant range of probable depths; in particular, for the real data the prior geological information is known, so the depth range can be approximated. The training data, both inputs and outputs, were all normalized. The index used to evaluate the errors was the sum squared error for both validation and test data. Finally, using the outputs of the network used for recognizing the shape of the anomaly and the network used for estimating the model parameters, we determined the shape and parameters of the Humble salt dome. The results for real and synthetic gravity data showed a very good ability of multi-layer perceptron neural networks to estimate the shape and depth of salt domes.
https://jesphys.ut.ac.ir/article_53697_e4ff6253e063e55f28388211f0ee0f06.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, 41(3), 2015-09-23.
3D data-space gravity inversion using compactness constraint. pp. 453-462. DOI: 10.22059/jesphys.2015.53698. Zeynab Abbaszade, Saeed Vatankhah, Vahid Ebrahimzade Erdestani. Journal Article, 2015-01-28.
In this paper, the 3D inversion of gravity data is considered. The goal is to reconstruct models of the subsurface density distribution from a set of gravity observations measured on the Earth's surface. The subsurface under the survey area is divided into a large number of rectangular blocks of known sizes and positions, and the unknown density contrast within each prism defines the parameters to be estimated. This kind of parameterization is flexible for reconstructing the subsurface model, but it requires more unknown model parameters than observations (here N << M, where N is the number of data and M is the number of model parameters). The final density distribution is obtained by minimizing a global objective function consisting of a data misfit and a regularization term.
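Such an objective function is typically written in the generic Tikhonov form (notation ours, not quoted from the paper):

```latex
\Phi(\mathbf{m}) \;=\; \bigl\| \mathbf{W}_d \left( \mathbf{G}\mathbf{m} - \mathbf{d} \right) \bigr\|_2^2
\;+\; \lambda \, \bigl\| \mathbf{W}_m \, \mathbf{m} \bigr\|_2^2
```

where G is the N x M forward operator, d the observed data, W_d a data-weighting matrix, W_m the combined model-weighting matrix (depth weighting, compactness and hard constraints), and lambda the regularization parameter balancing misfit against model structure.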
The inverse problem is solved in the data space, which requires the inverse of an N×N matrix, compared with an M×M system in model-space inversion. This methodology was used by Pilkington (2009) in 3D inversion of magnetic data. The resulting set of linear equations is solved with the conjugate gradient method; combining the data-space method with conjugate gradients keeps the storage and computational time to a minimum. The iteratively defined regularization matrix used in the objective function is a combination of three diagonal matrices: depth-weighting, compactness and hard-constraint matrices. The compactness constraint, introduced by Last and Kubik (1983) and developed by Portniaguine and Zhdanov (1999) under the name "minimum support stabilizer", is used here to produce models with non-smooth features; it is a suitable and well-known constraint for identifying geologic structures whose material properties vary over relatively short distances. The depth-weighting matrix, introduced by Li and Oldenburg (1998), is used in the regularization term to counteract the natural decay of the kernel with depth, and the hard constraint allows us to incorporate prior geological and geophysical information into the inversion. While the depth-weighting and hard-constraint matrices are independent of the iteration index, the compactness matrix depends on the iteration. In order to recover a feasible image of the subsurface, realistic lower and upper density bounds are imposed during the inversion. The computer program is written in MATLAB and tested on synthetic data produced by a model consisting of two cubes with the same dimensions and density but located at different depths. The results indicate that the algorithm can handle large-scale gravity inverse problems efficiently.
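The data-space formulation can be sketched as follows, assuming a diagonal regularization matrix; the function name and test geometry are ours, and the iterative compactness reweighting and density bounds of the full algorithm are deliberately omitted from this linear sketch:

```python
import numpy as np

def data_space_inversion(G, d, w, lam=1e-2, n_iter=200):
    """Data-space regularized inversion (after Pilkington, 2009), sketched.

    Minimizing ||G m - d||^2 + lam ||W m||^2 with diagonal W = diag(w) is
    equivalent, by the push-through identity, to solving the N x N system
        (Gt Gt^T + lam I) x = d,   with Gt = G W^{-1},
    and mapping back via m = W^{-1} Gt^T x. Only an N x N (never M x M)
    matrix is formed; it is solved here by conjugate gradients.
    """
    Gt = G / w                                  # scale columns: G W^{-1}
    A = Gt @ Gt.T + lam * np.eye(G.shape[0])    # SPD, data-space size N x N
    x = np.zeros_like(d)
    r = d - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):                     # standard conjugate gradients
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-12 * np.sqrt(d @ d):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return (Gt.T @ x) / w                       # m = W^{-1} Gt^T x
```

Because the conjugate-gradient loop touches only N-dimensional vectors and the N x N operator, the memory cost stays tied to the (small) number of observations rather than the (large) number of model cells, which is the point of the data-space approach.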
For the shallow cube, the geometry and density of the reconstructed model are close to those of the original model; for the deeper body, the resolution decreases and a smoother image of the subsurface is obtained. Gravity data acquired over the Safo mining camp in north-west Iran, which is well known for manganese ores, are used as a real modeling case.
The results show a density distribution in the subsurface from about 5 m to 35-40 m in depth and about 35 m extent in the x direction, close to the results obtained by bore-hole drilling on the site.

https://jesphys.ut.ac.ir/article_53698_af0d0a99e175a3ec6524e7cb7c25b13b.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 41, No. 3, 2015-09-23. Determining the electrical conductivity of the upper mantle using Sq Field, pp. 463-471, article 55099, DOI 10.22059/jesphys.2015.55099. Authors: Shahab Izadi, Asadollah Joata Bayrami, Mansoreh Montahaei. Journal Article, received 2014-05-20.

The atmospheric electric currents that vary in daily, seasonal, and latitudinal patterns above the Earth's surface act as a source that induces currents in the conducting layers of the Earth. The magnitude, direction, and depth of penetration of the induced currents are determined by the characteristics of the source currents as well as by the distribution of electrically conducting materials in the Earth. The solar quiet (Sq) magnetic field variation is a manifestation of an ionospheric current system. Heating on the dayside and cooling on the nightside of the atmosphere generate tidal winds which drive ionospheric plasma against the geomagnetic field, inducing electric fields and currents in the dynamo region between 80 and 200 km in height. The current system remains relatively fixed to the Earth-Sun line and produces regular daily variations which are directly seen in the magnetograms of geomagnetically "quiet" days, hence the name Sq. The Sq field variations are dominated by 24-, 12-, 8-, and 6-hr spectral components and can penetrate the conductive Earth to depths between 100 and 600 km.
For the situation in which field measurements are available over a spherical surface that separates the source currents from the induced currents (and no current flows across this surface), Gauss (1838) devised a special solution of the differential electromagnetic field equations that is separable in the spherical coordinates r, θ and φ. In Gauss's solution, the field terms that represent the radial dependence appear as two series: one with increasing powers of the sphere radius, r, and one with increasing powers of 1/r. As the value of r becomes larger (outward from the sphere), the first series produces an increased field strength, as if approaching external current sources. As the value of r decreases (toward the sphere center), the 1/r series indicates increased field strength, as if approaching internal current sources. Gauss had thus devised a way to represent separately the currents that were external and internal to his analysis. At the Earth's surface, the observed mixture of fields from the source and induced currents can be separated by spherical harmonic analysis, and the relationship between the internal and external amplitudes and phases can be used to infer the Earth's conductivity profile at great depths. A spherical harmonic analysis (SHA) was applied to obtain a separation into internal and external field coefficients. The magnetic scalar potential, V, in colatitude θ and longitude φ is described at the Earth's surface by
in which the cosine (A) and sine (B) coefficients of the expansion are separated into external (ex) and internal (in) parts.
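The expansion itself did not survive extraction; the standard Gauss form of the surface potential used in Sq analyses (for example in Campbell's treatment) is, as a reconstruction rather than the paper's exact notation:

```latex
V = a \sum_{n=1}^{\infty} \sum_{m=0}^{n}
    \Big[ \big(\tfrac{r}{a}\big)^{n}
          \big(A_{n}^{m,\mathrm{ex}} \cos m\phi + B_{n}^{m,\mathrm{ex}} \sin m\phi\big)
        + \big(\tfrac{a}{r}\big)^{n+1}
          \big(A_{n}^{m,\mathrm{in}} \cos m\phi + B_{n}^{m,\mathrm{in}} \sin m\phi\big)
    \Big] P_{n}^{m}(\cos\theta)
```

where a is the Earth's mean radius, P_n^m are the Schmidt semi-normalized associated Legendre functions, and the surface fields are evaluated at r = a; the (r/a)^n terms carry the external part and the (a/r)^(n+1) terms the internal part.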
After separating the geomagnetic field into internal and external parts by SHA, we can use Schmucker's (1970) method for profiling the Earth's substructure. In the method outlined by Schmucker, formulas are developed that provide the depth (d) and conductivity (σ) of apparent layers that would produce surface-field relationships similar to the observed components.
These profile values need to be determined for each n, m pair of SHA coefficients using the real (z) and imaginary (p) parts of a complex induction transfer function.
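The defining relations did not survive extraction. One standard statement of Schmucker's substitute-conductor (ρ*-z*) relations, given here as a reconstruction and not necessarily in the paper's exact notation, is:

```latex
C_{n}^{m} = z + i\,p, \qquad
z^{*} = \operatorname{Re} C_{n}^{m}, \qquad
\rho^{*} = 2\,\mu_{0}\,\omega\,\big(\operatorname{Im} C_{n}^{m}\big)^{2}
```

where the C-response is formed from the ratio of the internal to external coefficients at angular frequency ω; z* is the depth to the center of the induced currents and ρ* = 1/σ is the resistivity of a uniform substitute half-space (for a true half-space these relations recover its resistivity exactly).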
We calculate the electrical conductivity of the upper mantle by employing the 6-, 8-, 12- and 24-hour spectral components of the quiet-day geomagnetic field variation. The Gauss coefficients obtained from a spherical harmonic analysis of the two components of the quiet daily variation field for the solar-quiet year 2009 were applied to Schmucker's model (Schmucker, 1970). The findings coincide with the results of previous solar-quiet years and demonstrate that electrical conductivity varies exponentially with depth between 150 and 530 km.
https://jesphys.ut.ac.ir/article_55099_10054f2d09f1dc06bf1b031b3843ca40.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 41, No. 3, 2015-09-23. Vertical Total Electron Content (VTEC) prediction with neural network for single station in Iran and comparison with IRI, pp. 473-485, article 55318, DOI 10.22059/jesphys.2015.55318. Authors: Farideh Sabzehee (Department of Surveying and Geomatics Engineering, College of Engineering, University of Tehran); Mohammad Ali Sharifi (1. Department of Surveying and Geomatics Engineering, College of Engineering, University of Tehran; 2. Research Institute of Geoinformation Technology (RIGT), College of Engineering, University of Tehran, Iran; ORCID 0000-0003-0745-4147); Mehdi Akhoond Zadeh (Department of Surveying and Geomatics Engineering, College of Engineering, University of Tehran); Saeed Farzaneh (Department of Surveying and Geomatics Engineering, College of Engineering, University of Tehran). Journal Article, received 2014-07-07.

The ionosphere, as the upper part of Earth's atmosphere, consists of free electrons and ions affecting signal propagation in the radio frequency domain. Nowadays, Global Navigation Satellite Systems (GNSS), like GPS, are widely used for various applications. The majority of navigation satellite receivers operate on a single frequency and experience an error due to the ionospheric delay. They compensate for the ionospheric delay using an ionospheric model which typically corrects only about 50% of the delay. An alternative approach is to map the ionosphere with a network of real-time measurements. Global Positioning System (GPS) networks provide an opportunity to study the dynamics and continuous variations of the ionosphere, complementing ionospheric measurements usually obtained by other techniques such as ionosondes, incoherent scatter radars and satellites. The ionospheric delay is characterized by the Total Electron Content (TEC) along the signal path from the satellite to the receiver. Elimination (or reduction) of ionospheric effects is possible using dual-frequency receivers through a very useful combination of dual-frequency data known as the geometry-free combination (L4):

L4 = L1 − L2
where the L4 observable is derived from the L1 (1.57542 GHz) and L2 (1.22760 GHz) carrier-phase observables. Single-frequency users, however, cannot take advantage of this combination, so they have to use a proper ionospheric model to correct the ionospheric delay. Ionosondes can determine TEC up to an altitude of about 1000 km, while GPS measurements give complementary information about the topside ionosphere. In this paper, the suitability of Neural Networks (NNs) for predicting the Total Electron Content (TEC) obtained from the Iranian Permanent GPS Network (IPGN) during the low-solar-activity period of 2006 has been investigated. TEC shows many non-linear variations, and neural networks have a significant ability to model and approximate them (Williscroft and Poole, 1996; Hernandez-Pajares et al., 1997; Xenos et al., 2003; Sarma and Mahdu, 2005; Leandro and Santos, 2007). The input space included the day number (DN, seasonal variation), hour (HR, diurnal variation), sunspot number (SSN, a measure of solar activity) and a magnetic index (a measure of magnetic activity).
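The conversion from the geometry-free observable to slant TEC can be sketched as below; the 40.3 factor and the L1/L2 frequencies are the standard values, while the function name and the example input are ours.

```python
# Hedged sketch: slant TEC from the geometry-free combination L4 = L1 - L2.
f1 = 1.57542e9   # L1 carrier frequency, Hz
f2 = 1.22760e9   # L2 carrier frequency, Hz

def stec_from_l4(l4_m: float) -> float:
    """Slant TEC (electrons/m^2) from the geometry-free observable in meters."""
    return (f1**2 * f2**2) / (40.3 * (f1**2 - f2**2)) * l4_m

# 1 m of geometry-free delay difference corresponds to roughly 9.5 TECU
tecu = stec_from_l4(1.0) / 1.0e16   # 1 TECU = 1e16 electrons/m^2
```

Phase-derived TEC is precise but carries an ambiguity bias, which is why leveling against code observations (as in PPP processing) is needed in practice.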
To make these inputs continuous, the first two parameters were each split into two cyclic components, sine and cosine, as follows:
where DNS, DNC, HRS and HRC are the sine and cosine components of DN and HR, respectively.
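The cyclic encoding can be sketched as below; the 365.25-day and 24-hour periods are the conventional choices and are assumed here, since the paper's own equations were lost in extraction.

```python
import math

# Hedged sketch of the DN/HR cyclic encoding described above.
def encode_day_hour(dn: float, hr: float):
    """Return (DNS, DNC, HRS, HRC) for day number dn and hour hr."""
    dns = math.sin(2 * math.pi * dn / 365.25)
    dnc = math.cos(2 * math.pi * dn / 365.25)
    hrs = math.sin(2 * math.pi * hr / 24.0)
    hrc = math.cos(2 * math.pi * hr / 24.0)
    return dns, dnc, hrs, hrc
```

This removes the artificial jump between hour 23 and hour 0 (and between day 365 and day 1), so the network sees adjacent times as adjacent inputs.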
In this paper, the TEC values have been estimated using the PPP (Precise Point Positioning) module of the Bernese software over Iranshahr (27°N, 60°E).
The optimum configuration of the neural network consists of a single hidden layer, with eight neurons in the input layer, fifty neurons in the hidden layer and one neuron in the output layer. To this end, a single-hidden-layer feed-forward network with a back-propagation algorithm was designed.
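A minimal NumPy version of such a network is sketched below, mirroring the 8-50-1 architecture. The synthetic data, activation choice and learning rate are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

# Single-hidden-layer feed-forward net (8-50-1) trained by back-propagation
# on toy data; hyper-parameters are illustrative.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 8))            # 8 inputs (DNS, DNC, HRS, HRC, SSN, ...)
y = np.sin(X[:, :4].sum(axis=1, keepdims=True))  # synthetic smooth target

W1 = rng.standard_normal((8, 50)) * 0.3; b1 = np.zeros(50)
W2 = rng.standard_normal((50, 1)) * 0.3; b2 = np.zeros(1)
lr = 0.05
losses = []
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)                     # hidden layer
    out = H @ W2 + b2                            # linear output neuron
    err = out - y
    losses.append(float(np.mean(err**2)))
    # back-propagate the mean-squared-error gradient
    g_out = 2 * err / len(X)
    gW2 = H.T @ g_out;           gb2 = g_out.sum(axis=0)
    g_hid = (g_out @ W2.T) * (1 - H**2)          # tanh derivative
    gW1 = X.T @ g_hid;           gb1 = g_hid.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

In practice the training set would be the encoded (DN, HR, SSN, index) inputs against GPS-derived TEC targets.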
An analysis was performed by comparing the NN-predicted TEC with TEC values from the IRI2007 version of the International Reference Ionosphere, validating GPS TEC (TEC values calculated from the GPS measurements) against the maximum electron density obtained from an ionosonde, and evaluating the performance of the NN model during equinoxes and solstices.
The results show high correlation between GPS TEC and NN TEC: the Root-Mean-Square Error (RMSE) and coefficient of determination (R<sup>2</sup>) are 1.5273 TECU and 0.9334, respectively.
RMSE is defined as:

\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\mathrm{TEC}_{i}^{\mathrm{NN}} - \mathrm{TEC}_{i}^{\mathrm{GPS}}\right)^{2}}

where <em>N</em> is the number of data points.
In Table 3, the absolute error (E<sub>abs</sub>) is defined as the magnitude of the difference between the NN-predicted TEC and the GPS TEC, while the relative error (E<sub>rel</sub>) is the ratio of the absolute error to the GPS TEC, expressed as a percentage (Habarulema et al., 2007). These errors were calculated as follows:

E_{\mathrm{abs}} = \left|\mathrm{TEC}^{\mathrm{NN}} - \mathrm{TEC}^{\mathrm{GPS}}\right|, \qquad E_{\mathrm{rel}} = \frac{E_{\mathrm{abs}}}{\mathrm{TEC}^{\mathrm{GPS}}} \times 100\%

The difference (100 − E<sub>rel</sub>)% gives the relative correction, which indicates the approximate TEC prediction accuracy of the NN model (Leandro and Santos, 2007). An average error of ~11.41% means that the NN can predict about 88.59% of the GPS TEC on average. The results show that the neural network performs better than the IRI model for Iran.
https://jesphys.ut.ac.ir/article_55318_22bd5ea38076a470183393d3d57f6303.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 41, No. 3, 2015-09-23. Estimating the Effect of Meteorological Parameters on the Decrease of Earth Received Radiation Emphasizing the Relative Humidity Changes to Set Solar Sites, pp. 487-497, article 54496, DOI 10.22059/jesphys.2015.54496. Authors: Amanollah Fathniya, Saeed Rajayi. Journal Article, received 2014-08-31.

Solar radiation, a renewable energy, is among the most effective, economical and safest sources of energy and has the potential to become a major source of energy in the near future. Optimum use of solar energy therefore requires precise siting of solar installations. The most accurate way of measuring solar radiation is with a pyranometer, which is not widely deployed around the world because of its high cost and the lack of facilities. Therefore, researchers nowadays use climatic and environmental parameters to estimate solar radiation (Belcher and DeGaetano, 2007: 10). Although meteorological parameters, namely relative humidity, cloudiness percentage, temperature and hours of sunshine, all affect the amount of radiation received at the ground, research has shown that hours of sunshine is the most effective factor in received radiation; this has been demonstrated with the Angstrom linear regression model in the manner of the FAO Penman-Monteith method.
A large part of Iran, having 240-250 sunny days yearly, is capable of producing a great deal of solar energy. The Angstrom model (1924) was built on the relationship between received radiation and sunshine hours (Angstrom, 1924: 122); it was revised in 1940 (Prescott, 1940: 115). Sabziparvar (2008: 1002) concluded that, among the various studies aiming at a simple equation to estimate solar radiation in the central arid region of Iran, the Sabagh method has the fewest errors. This study aimed at finding the effect of climatic parameters, especially relative humidity, on the amount of total received radiation at the Minab and Bandarabas stations. <br />Material and method <br />In the present research, daily data of the Minab and Bandarabas synoptic meteorological stations during the period 2006-2009 were studied in order to estimate not only the amount of received radiation but also the effect of relative humidity on the radiation decrease in real and standard modes. These stations were chosen because of their similar features (a height difference of less than 20 m in altitude and about 4° of latitude) but different distances from the moisture sources, the Oman Sea and the Persian Gulf. The amounts of solar radiation were estimated using the Bird and Hulstrom model, considering both location features (altitude, latitude and longitude) and climatic features (relative humidity, temperature, pressure, sunshine hours, solar elevation, atmospheric albedo, particle absorption, ground albedo, atmospheric mass, ozone absorption and Rayleigh scattering). Moreover, in order to estimate the effect of relative humidity on the decrease of total received radiation, the values of the other climatic variables were held fixed in standard mode. Finally, daily and monthly values were presented as tables and graphs.
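The Angstrom-Prescott relation mentioned above can be sketched as follows; the coefficients a = 0.25 and b = 0.50 are the commonly quoted FAO defaults and are assumed here, as are the example inputs.

```python
import numpy as np

# Hedged sketch of the Angstrom-Prescott relation H/H0 = a + b*(n/N):
# global radiation H from the sunshine fraction n/N and the
# extraterrestrial radiation H0.
def angstrom_prescott(H0, n, N, a=0.25, b=0.50):
    """Estimate global radiation H (same units as H0, e.g. MJ m^-2 d^-1)."""
    return H0 * (a + b * (np.asarray(n) / np.asarray(N)))

H = angstrom_prescott(H0=30.0, n=9.0, N=12.0)   # illustrative values
```

In practice a and b are fitted per station by regressing measured H/H0 against n/N, which is how the model was calibrated in the original and revised (Prescott) forms.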
<br />Discussion and Conclusion <br />The findings of this study show that the maximum total received radiation at the Minab station occurred in June, 14.48 MJ m<sup>-2</sup> d<sup>-1</sup>, while at the Bandarabas station it occurred in May, 13.97 MJ m<sup>-2</sup> d<sup>-1</sup>. This one-month difference is due to the difference in relative humidity: the high radiation of the Bandarabas station in May is the result of both its lower relative humidity compared with the warm months and the high solar elevation. As summer arrives the radiation decreases, because the amount of moisture in the atmosphere increases. The lowest total received radiation at Bandarabas is in the cold months, in which the low solar angle is the major reason for the low received radiation. In addition, the total received radiation of Minab during winter is less than that of Bandarabas, which results from greater moisture. The Minab station received its maximum solar radiation coinciding with the greatest solar elevation, and the decrease in sunshine hours leads to the minimum total received radiation during the cold months. Most notably, the total received radiation of Minab is lower than that of Bandarabas where the relative humidity of Minab exceeds that of Bandarabas. Studying the effect of relative humidity on a standard atmosphere showed that the total received radiation at the stations is not constant: it varies in January from a maximum of 6.4 MJ m<sup>-2</sup> d<sup>-1</sup> at a relative humidity of 10% to a minimum of 5.6 MJ m<sup>-2</sup> d<sup>-1</sup> at a relative humidity of 90%, and in June from a maximum of 18.4 MJ m<sup>-2</sup> d<sup>-1</sup> at a relative humidity of 10% to a minimum of 15.8 MJ m<sup>-2</sup> d<sup>-1</sup> at a relative humidity of 90%. In fact, moderate relative humidity affects total received radiation more than low or high values do.
Moreover, taking the effect of relative humidity into account, the total received radiation at different hours ranges from a minimum of 4 MJ m<sup>-2</sup> d<sup>-1</sup> to a maximum of 16 MJ m<sup>-2</sup> d<sup>-1</sup>. Overall, it seems that the Minab station has more potential than the Bandarabas station for energy production and the siting of solar installations, because the Minab station receives more radiation.
https://jesphys.ut.ac.ir/article_54496_aa60fc0c0103b2cafc706870f36ca613.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 41, No. 3, 2015-09-23. Estimation of atmospheric particulate matter (PM10) concentration based on remote sensing measurements and meteorological parameters: application of artificial neural network, pp. 499-510, article 54528, DOI 10.22059/jesphys.2015.54528. Authors: Masoud Khoshsima (Satellite Research Institute, Iranian Space Research Center, Tehran, Iran); Seyede Samane Sabet Ghadam; Abasali Aliakbari Bidokhti (Institute of Geophysics, University of Tehran). Journal Article, received 2014-10-27.

Suspended aerosols in the atmosphere have a strong impact on the global climate. They influence the Earth's radiation budget by scattering or absorbing both incoming and outgoing radiation. Aerosols in the troposphere are produced by natural sources, such as dust, sea spray and volcanoes, and by anthropogenic sources, such as the combustion of fossil fuels, biomass burning and gas-to-particle conversion processes. They have been implicated in human health effects and visibility reduction in urban and regional areas.
In this work, the aerosol optical indices were calculated using a CIMEL sun photometer, i.e. a passive measurement. These indices were monitored from December 2009 to September 2010 in a semi-urban area in the Zanjan region of Iran, which has a continental climate. Aerosol optical depth (AOD) is a dimensionless number that characterizes the total absorption and scattering of the direct or scattered sunlight by particles. The AOD was measured by a sun photometer at a ground station located at the University of Zanjan (36.7 N, 48.5 E). A description of the aerosol number distribution in terms of a wavelength exponent was introduced by Angstrom in 1929: the exponent is calculated according to the Angstrom formula, and hence may be obtained from the slope of a linear fit of <em>ln</em> AOD against <em>ln</em> λ. A value of 1.3 for α represents an average value for mean atmospheric conditions. An empirical relationship between the wavelength exponent and the dominant geometric diameter of the aerosol particles was also found by Angstrom.
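The slope-based estimate of the Angstrom exponent can be sketched as below; the two-channel form and the 440/870 nm wavelengths are illustrative assumptions (sun photometers typically report several channels, and a full linear fit over all of them would be used in practice).

```python
import math

# Hedged sketch: Angstrom exponent alpha as the negative slope of
# ln(AOD) versus ln(wavelength), from two channels.
def angstrom_exponent(aod1, aod2, lam1_nm, lam2_nm):
    """alpha = -d ln(AOD) / d ln(lambda), estimated from two wavelengths."""
    return -(math.log(aod1) - math.log(aod2)) / (math.log(lam1_nm) - math.log(lam2_nm))

alpha = angstrom_exponent(0.30, 0.15, 440.0, 870.0)   # illustrative AOD values
```

Larger alpha indicates smaller (fine-mode) particles; values near zero indicate coarse particles such as dust.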
Besides such ground-based observations of AOD, which are point measurements, aerosol optical depth measurements taken by MODIS on board the Terra and Aqua satellites are used for further analysis. Satellites can yield timely information on atmospheric conditions at regional and global scales inexpensively. The MODIS sensor on board the Terra/Aqua Earth Observing System satellites captures the radiative energy from the target in 36 spectral bands over the visible, near-infrared and infrared spectra. The raw imagery has a spatial resolution ranging from 250 m to 1 km over a ground swath of 2,330 km. Standard meteorological variables, such as air pressure, relative humidity, and wind speed and direction, are also measured at the Zanjan synoptic station. Moreover, the concentration of particulate matter smaller than 10 μm (PM<sub>10</sub>), which is measured hourly by the Zanjan environmental protection bureau, is also used.
In this study, the relationships between the suspended particulate matter (PM<sub>10</sub>) concentration and aerosol optical indices, such as AOD and the Angstrom coefficients (α, β), and meteorological parameters, such as wind speed and direction and relative humidity, were considered. Two forecasting techniques are presented in this paper for predicting the average hourly PM<sub>10</sub> concentration. The first is Multivariate Linear Regression (MLR) and the second is an Artificial Neural Network (ANN) model based on Radial Basis Functions (RBF). Multiple linear regression models were developed with several sets of data (aerosol optical properties and meteorological data as predictors). The results show that the correlation coefficients between predicted and observed values for the MLR and ANN models were 0.62 and 0.81, respectively. The impact of wind direction on the PM<sub>10</sub> concentration prediction is weak in the MLR model. The results also show that MLR could not predict the PM<sub>10</sub> concentration as well as the ANN model.
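A minimal sketch of the two techniques compared above, using synthetic stand-in data (not the paper's Zanjan measurements): an ordinary least-squares MLR against a simple RBF network whose output weights are also fitted by least squares. On a nonlinear target the RBF model typically attains the higher in-sample correlation, qualitatively mirroring the 0.62 vs 0.81 result reported:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 5 predictors (in the paper: AOD, Angstrom
# coefficients, wind speed/direction, humidity) and a nonlinear target
# standing in for hourly PM10. All values here are invented.
n = 300
X = rng.uniform(0.0, 1.0, size=(n, 5))
y = 40 + 60*np.sin(3*X[:, 0]) + 20*X[:, 1]**2 + 5*rng.standard_normal(n)

# --- Multivariate linear regression (MLR) via least squares ---
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_mlr = A @ coef

# --- RBF network: Gaussian units at fixed centres drawn from the data,
#     output weights fitted by linear least squares ---
centres = X[rng.choice(n, 60, replace=False)]
width = 0.7
d2 = ((X[:, None, :] - centres[None, :, :])**2).sum(axis=2)
Phi = np.column_stack([np.ones(n), np.exp(-d2/(2*width**2))])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_rbf = Phi @ w

# In-sample correlation of each model with the target (for illustration)
r_mlr = np.corrcoef(y, y_mlr)[0, 1]
r_rbf = np.corrcoef(y, y_rbf)[0, 1]
```

The number of centres and the kernel width are assumptions of this sketch; a real application would tune them on held-out data.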
https://jesphys.ut.ac.ir/article_54528_5bd893038253581e9a9159c8d6463099.pdf

Journal of the Earth and Space Physics, Vol. 41, No. 3 (2015-09-23)
Ability of the RegCM4 climate model to simulate precipitation in the cold period of Fars. Case study: the 1990-2010 period
Pages 511-524, DOI: 10.22059/jesphys.2015.53332
Fahime Mohammadi; Azar Zarin (ORCID 0000-0002-4542-3176); Iman Babaeiyan
Journal Article, received 2014-11-22

Owing to climate change, forecasting average precipitation on various time scales has been one of the most important challenges for specialists in recent years. The purpose of this research is to investigate the capability of the dynamical model RegCM4 to forecast precipitation in the cold period of Fars province. In this study, the six months from September to February are considered as the cold period.
The statistical period 1990-2010 is selected. Two data sets are used, with post-processing by statistical regression techniques: 1 - data needed to run the RegCM4 dynamical model, taken from the ICTP in NetCDF format, including 6-hourly atmospheric data on a 2.5 × 2.5 degree horizontal grid, sea-level data, and surface data on a 1° grid; 2 - observed monthly precipitation data (mm) of seven synoptic stations, received from the provincial Meteorological Office. To set up the dynamical model, a test of the convective scheme was performed, which showed that the Grell scheme produces smaller errors than the two other schemes, Kuo and Emanuel, in modelling rainfall during the cold season at the Fars stations. After running the model, the outputs were post-processed using multivariate regression: to enhance the efficiency of the RegCM4 model, the 20 km horizontal-resolution (dynamical) output was corrected by multiple-regression statistical post-processing (the dynamical-statistical approach). Both the raw 20 km resolution precipitation data and the post-processed data were compared with the monthly precipitation observations to determine the effect of the statistical post-processing on the RegCM4 output. The comparison showed that in autumn, at 43% of the stations, the raw precipitation output of the RegCM4 climate model and the dynamical-statistical method had the same efficiency, and in about 14% of cases neither of the two options was preferable to the other. In winter, many more stations favour the raw RegCM4 climate model: 4 out of 7 stations confirm the superiority of the raw model in this season (57 percent), while the dynamical-statistical method is superior at 2 stations (29 percent). One station (14%) shows no specific preference between the two approaches.
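The statistical post-processing step, regressing raw model precipitation (plus any auxiliary predictors) onto station observations, can be sketched as follows. All series here are synthetic placeholders, not the Fars data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly series: raw model precipitation at a station's grid
# point plus one auxiliary predictor, regressed onto observed precipitation.
months = 120
model_pr = rng.gamma(2.0, 30.0, months)      # raw model precipitation (mm)
aux = rng.uniform(0, 1, months)              # e.g. another model field
obs = 0.7*model_pr + 10*aux + 5 + 4*rng.standard_normal(months)

# Fit obs = b0 + b1*model_pr + b2*aux by least squares (the post-processing)
A = np.column_stack([np.ones(months), model_pr, aux])
b, *_ = np.linalg.lstsq(A, obs, rcond=None)
corrected = A @ b

# The corrected series should track the observations more closely than the
# raw model output does.
rmse_raw = np.sqrt(np.mean((obs - model_pr)**2))
rmse_cor = np.sqrt(np.mean((obs - corrected)**2))
```

In the study itself the regression sometimes helps and sometimes does not, station by station; this sketch only shows the mechanics of the correction.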
In the cold period of Fars Province, the numbers of stations best matched by the raw precipitation output of the RegCM4 climate model and by the output of the dynamical-statistical method are 5 (71%) and 2 (29%), respectively. For cold-period rainfall, no case was found in which neither option was superior to the other. In general, in 1.57% of cases the output of the RegCM4 model and in 3.33% of cases the output of the dynamical-statistical method had the better ability to predict the cold-period rainfall of Fars. Therefore, it can be concluded that the dynamically downscaled output at 20 × 20 km horizontal resolution does not necessarily require statistical (dynamical-statistical) post-processing to enhance its accuracy.
https://jesphys.ut.ac.ir/article_53332_c7e0e62cf48b9f276802c2b27973dba3.pdf

Journal of the Earth and Space Physics, Vol. 41, No. 3 (2015-09-23)
Assessment of the application of the Stokes parameters to investigation of inertia-gravity waves properties
Pages 525-533, DOI: 10.22059/jesphys.2015.54752
Mojgan AmirAmjadi (PhD student, Space Physics Department, Institute of Geophysics, University of Tehran); Mohammad Mirzaei (Assistant Professor, Space Physics Department, Institute of Geophysics, University of Tehran; ORCID 0000-0003-0813-3994); Ali Reza Mohebalhojeh (Associate Professor, Space Physics Department, Institute of Geophysics, University of Tehran, Iran)
Journal Article, received 2014-12-09

Advantages and disadvantages of using Stokes parameters in the exploration of inertia-gravity wave (IGW) properties are discussed in this paper. The survey is carried out on an IGW event that occurred over Iran from 7 to 9 February 2012. Applying this technique to the investigation of atmospheric IGWs was recommended by Vincent and Fritts (1987) and then developed by Eckermann and Vincent (1989). The Stokes-parameters and hodograph methods (the traditional methods) were used for estimating IGW characteristics by Lue and Kuo (2012) in an idealized simulation study and by Lue et al. (2013) in a real case study.
They showed that there were limitations and errors in the intrinsic period and horizontal propagation direction, especially when the data consist of both upward- and downward-propagating waves. Zülicke and Peters (2006) calculated the horizontal divergence field from MM5 simulations of an IGW event occurring in a poleward-breaking Rossby wave and compared the results with the application of Stokes parameters. It was shown that while using the divergence eliminates the background field, the exploitation of Stokes parameters strongly depends on the way the background is removed. The case chosen for the current survey had previously been studied using the horizontal divergence and hodograph methods.
The data used for this study were obtained from radiosondes launched from five upper-air stations in Iran during the event. IGW structures were identified from fluctuations in the temperature and wind fields after removing the background flow. The removal is performed either by fitting a sixth-degree polynomial or by applying a bandpass Lanczos filter. In order to avoid arbitrary choices of inappropriate filter parameters, the wavelet-analysis software provided by Torrence and Compo (1998) was applied to the data, interpolated to equally spaced vertical points of 100 m before background removal. Based on previous studies (Spiga et al., 2008; Serafimovich et al., 2005), the Morlet wavelet function was chosen for this purpose.
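The polynomial background-removal step can be sketched as follows, on a synthetic wind profile with the same 100 m vertical spacing; the background shape, wave amplitude and 5 km wavelength are illustrative assumptions, not the sounding data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical radiosonde wind profile: smooth background + a wave-like
# perturbation of 2 m/s amplitude and 5 km vertical wavelength, every 100 m.
z = np.arange(0.0, 27000.0, 100.0)          # height (m)
zn = z / z.max()                            # normalized height (conditioning)
background = 10 + 15*zn - 6*zn**2           # smooth mean wind (m/s)
wave = 2.0*np.sin(2*np.pi*z/5000.0)         # IGW perturbation
u = background + wave + 0.3*rng.standard_normal(z.size)

# Background removal by fitting a sixth-degree polynomial, as in the paper
coeffs = np.polyfit(zn, u, 6)
u_prime = u - np.polyval(coeffs, zn)
# u_prime retains the wave signal while the smooth background is removed.
```

Fitting in the normalized coordinate avoids the poor conditioning of raising heights of order 10^4 m to the sixth power.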
Preliminary wavelet analysis shows that a typical vertical wavelength of 4-8 km dominates the wind profiles (a wavelength of 2-4 km is also detected in the lower stratosphere in some cases). To simplify the analysis and avoid the adverse effects of the strong wind speeds in the vicinity of the jet stream, the vertical profiles were separated into a tropospheric section (altitudes of 3 to 9 km, below the jet stream) and a lower-stratospheric section (14 to 27 km, above the jet stream).
The intrinsic frequencies estimated using Stokes parameters imply that this method (like the hodograph method) is usually unable to detect the high-frequency part of the IGW spectrum. In addition, a large uncertainty appeared when polynomial fitting was used to separate the perturbations from the mean field. Furthermore, the effects of background winds prevent accurate estimation of the horizontal direction of wave propagation. Indeed, the technique can distinguish neither eastward (northward) from westward (southward) propagation, nor upward from downward energy propagation.
Nevertheless, the joint use of the Stokes parameters and the rotary spectrum gives satisfactory results. Besides reducing the amount of computation (only two fast Fourier transforms (FFTs) instead of five in the conventional calculation), calculating the rotary spectrum directly from the Stokes parameters suggests that upward energy propagation prevails over downward propagation in the stratosphere and, more importantly, in the troposphere.
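A rotary spectrum separates clockwise from anticlockwise rotation of the horizontal wind vector with height by taking a single complex FFT of u' + iv'; in the Northern Hemisphere, clockwise rotation with height is conventionally associated with upward energy propagation. A minimal sketch on a synthetic clockwise-rotating profile (illustrative amplitudes and a 4 km vertical wavelength):

```python
import numpy as np

rng = np.random.default_rng(3)

N = 128
dz = 125.0                           # vertical spacing (m); record = 4 wavelengths
z = np.arange(N) * dz
m = 2*np.pi/4000.0                   # vertical wavenumber (4 km wavelength)
u = 3.0*np.cos(m*z) + 0.2*rng.standard_normal(N)
v = -3.0*np.sin(m*z) + 0.2*rng.standard_normal(N)   # rotates clockwise with height

# One complex FFT of w = u' + i v' yields both halves of the rotary spectrum
w = u + 1j*v
power = np.abs(np.fft.fft(w)/N)**2
freqs = np.fft.fftfreq(N, d=dz)

cw = power[freqs < 0].sum()     # clockwise (negative-wavenumber) part
acw = power[freqs > 0].sum()    # anticlockwise (positive-wavenumber) part
```

For this clockwise-rotating input, virtually all the variance lands in the clockwise half of the spectrum, which is the signature the study uses to infer the prevailing direction of energy propagation.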
The energy of the wave can be determined from the fluctuation fields. The ratio of the estimated potential energy to the kinetic energy is greater than 1 in the troposphere and less than 1 in the lower stratosphere. This implies that there is a large energy source of IGWs in the troposphere, which is compatible with previous numerical simulations of this case undertaken by the authors.
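The wave energies mentioned above are conventionally estimated from the fluctuation profiles as Ek = ½⟨u'² + v'²⟩ and Ep = ½(g/N)²⟨(T'/T̄)²⟩. A sketch with synthetic perturbations and assumed values of the buoyancy frequency N and mean temperature T̄:

```python
import numpy as np

rng = np.random.default_rng(4)

g = 9.81
N_bv = 0.012                     # Brunt-Vaisala frequency (1/s), assumed
T0 = 250.0                       # mean temperature (K), assumed

# Hypothetical perturbation profiles (m/s and K); amplitudes are invented.
u_p = 2.0*rng.standard_normal(200)
v_p = 2.0*rng.standard_normal(200)
T_p = 1.5*rng.standard_normal(200)

Ek = 0.5*np.mean(u_p**2 + v_p**2)              # kinetic energy per unit mass
Ep = 0.5*(g/N_bv)**2*np.mean((T_p/T0)**2)      # potential energy per unit mass
ratio = P_over_K = Ep / Ek
```

A ratio above 1, as found in the troposphere here, points to a tropospheric energy source for the waves.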
https://jesphys.ut.ac.ir/article_54752_c5c51c2b385c903c9a9c9780239be8bc.pdf

Journal of the Earth and Space Physics, Vol. 41, No. 3 (2015-09-23)
The study of upwelling in the eastern coast of the Caspian Sea using numerical simulation
Pages 535-545, DOI: 10.22059/jesphys.2015.55105
Maryam Shiea Ali (Faculty of Marine Science and Technology); Abbas Ali Bidokhti (Institute of Geophysics, University of Tehran, Iran)
Journal Article, received 2014-12-28

Upwelling areas in the ocean are important places for fishing, as nutrients can be transported from deeper waters to the near-surface region. In this study, the upwelling in the eastern coastal area of the Caspian Sea during 2004 was studied using COHERENS, a three-dimensional ocean model. To simulate the circulation in the Caspian Sea, the grid was chosen as 0.046 × 0.046 degrees in the horizontal directions, which gives a grid size of about 5 km, with 30 sigma layers in the vertical (the layers are numbered by K, with the bottom layer K = 1 and the numbering increasing towards the sea surface). Bathymetry and coastline locations are based on GEBCO data, which have been interpolated and slightly smoothed. The model was initialized for winter (January) using the monthly mean temperature and salinity climatology obtained from Kara et al. (2010).
The model was forced by climatological six-hourly atmospheric fields: air temperature and air pressure (0.5° × 0.5°) derived from the ECMWF ERA-Interim reanalysis, wind velocity derived from modified ECMWF data (0.5° × 0.5°), and precipitation rate, cloud cover and relative humidity (2.5° × 2.5°) derived from the NCEP/NCAR reanalysis. Four rivers were included in the model, with monthly mean flows for the Volga (which discharges into the Caspian Sea at three locations in the model), Ural, Kura and Sepidrood (Sefidrood). The salinity of the river water was taken as 0‰. Monthly mean discharge values for the three major rivers (Volga, Ural, Kura) were obtained from the GRDC (Global Runoff Data Centre), and for the Sepidrood (Sefidrood) from the Water Research Institute.
Results show that in the middle and southern parts of the Caspian Sea, easterly and north-easterly winds lead to upwelling near the east coast. In summer, the eastern coast of the middle Caspian experiences an upwelling that is considered to be the most important thermal and dynamical phenomenon there. The upwelling area is a region about 20 km wide extending tens of kilometres along the coast, and the timescale of the phenomenon is a few weeks. Anticyclonic circulation in the middle basin of the Caspian Sea was another feature during the upwelling period and was found to be stronger in August, during which two strong upwelling areas are also observed in this basin, one particularly strong near the west coast. A southward current from the margin of the upwelling area was another important characteristic of upwelling off the eastern coasts of the Caspian Sea. From June to August, the advection of cold upwelled waters occurs from the eastern area of the Caspian Sea, as also noted by Tuzhilkin and Kosarev (2005). The results show that the temperature of the east coast was lower than that of the west coast by 2 to 3 degrees Celsius when the upwelling occurred. This phenomenon occurred down to depths of less than 40 m, which is nearly consistent with Tuzhilkin and Kosarev's study (2005). Due to the upwelling, the thermocline near the coast rose by about 20 m. The vertical velocities in the upwelling area were found to be about 12 and 7 m per month for July and August, respectively. In August, the horizontal extent of the upwelled area was also found to be less than that for July. Another result of the simulation is the existence of vertical velocities in the western part of the middle Caspian, which suggests topographically associated upwelling in that area, owing to the irregular shape and steep slope of the local topography.
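The 30 sigma layers of the model set-up are terrain-following: every water column carries the same number of layers between the bottom (K = 1) and the surface, which keeps vertical resolution in shallow coastal upwelling zones. A minimal sketch of the idea (not the COHERENS code; equal layer spacing is an assumption of this illustration):

```python
import numpy as np

# Mid-layer sigma values from 0 (bottom) to 1 (surface), 30 layers,
# with K = 1 the bottom layer and K = 30 adjacent to the surface.
n_layers = 30
sigma = (np.arange(1, n_layers + 1) - 0.5) / n_layers

def layer_depths(total_depth_m):
    """Mid-layer depths (m, positive downward) for a water column."""
    return (1.0 - sigma) * total_depth_m

shallow = layer_depths(40.0)    # e.g. a shelf column near the east coast
deep = layer_depths(700.0)      # e.g. a column in the middle Caspian basin
```

Both columns get all 30 layers, so the 40 m coastal column is resolved at roughly 1.3 m spacing while the 700 m basin column is resolved at about 23 m spacing.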
https://jesphys.ut.ac.ir/article_55105_f5c4316294edde4ea113e05247d83a4d.pdf

Journal of the Earth and Space Physics, Vol. 41, No. 3 (2015-09-23)
Application of the TOPSIS index in monitoring of droughts and wet periods for Golestan Province
Pages 547-563, DOI: 10.22059/jesphys.2015.53689
Abdolazim Ghanghermeh; Gholamreza Roshan
Journal Article, received 2015-01-21

Since the emergence of human civilization, drought has had extreme and sometimes catastrophic effects on human livelihoods. Although drought itself is not a disaster, its impact on people and the environment may sometimes yield disastrous consequences, so a primary requirement is to better understand the natural and social dimensions associated with drought (Wilhite, 2000). Given that drought is a gradually developing natural phenomenon, problems regularly arise when establishing drought start and end dates, as well as the spatial extent of drought, owing to the complex nature of drought and the difficulty of separating 'dry periods' from 'drought periods'. Given the importance of drought forecasting and classification for reducing the associated risks, many efforts have been undertaken over the years to calculate and understand all aspects of drought. For example, Palmer (1965) was the first to initiate statistical methods (in 1946) for establishing drought occurrence using rainfall, temperature and soil-moisture parameters; more recently, the Standardized Precipitation Index (SPI) has become a popular and widely used indicator of drought owing to its easy computation and flexibility across spatial and temporal scales. Droughts are an annual concern in Iran, seriously affecting agriculture, water resources and ecosystems in one or more regions.
Iran is exceptionally water-scarce; for instance, in 2002 approximately eight million hectares of agricultural land suffered from drought, causing revenue losses amounting to millions of US dollars (Darvishi et al., 2008). Thus, considerable scientific efforts have been made to categorize and monitor drought in the region.
The TOPSIS index has previously been used to assess drought/wetness conditions in Iran, but only with a few parameters (mean annual wind speed [km/h], total annual precipitation [mm], mean annual temperature [°C], and number of rain days) (Koshakhlagh et al., 2008; Roshan et al., 2012). In other cases, the parameters used to calculate a drought index have not been validated for their accuracy (e.g. Kazemi Rad et al., 2012). To this end, and for the first time, we use a combination of climatic/environmental parameters which are entered into the TOPSIS algorithm; years are then ranked statistically for the Golestan Province weather stations (Iran) according to dry/wet conditions. The focus of this paper is to: 1) present the TOPSIS computational method, 2) demonstrate the step-wise sequence for calculating and ranking the drought index using the method of similarity to the ideal solution (TOPSIS), and 3) validate the TOPSIS method by calculating drought/wetness values for four stations in Golestan Province using more conventional indices of drought intensity (i.e. PNPI, SPI, BMDI and RDI), and finally zone Golestan Province on the basis of the TOPSIS index.
TOPSIS, one of the multi-criteria decision-making methods, was first presented by Hwang and Yoon (1981) and soon received global interest for scientific applications as wide-ranging as the aeronautical industry (Wang and Chang 2007), engineering risk assessment (Wang and Elhag 2006) and decision making in management (Antuchevičiené et al. 2010). TOPSIS is a multi-criteria method for identifying solutions from a finite set of alternatives. Its basic principle is that the chosen alternative should have the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution (Chen et al. 2011).
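The principle just described, ranking alternatives by closeness to the positive ideal and distance from the negative ideal, can be sketched in a few lines. This is a minimal illustration with hypothetical weights and data; the paper's actual seven-parameter decision matrix is not reproduced here:

```python
import math

def topsis_rank(matrix, weights, benefit):
    """matrix: rows = alternatives (e.g. years), columns = criteria.
    benefit[j] is True if larger values of criterion j are better."""
    m, n = len(matrix), len(matrix[0])
    # 1. Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # 2. Positive and negative ideal solutions, criterion by criterion.
    cols = list(zip(*v))
    pis = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    nis = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    # 3. Euclidean distances to both ideals and the closeness coefficient.
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, pis)))
        d_neg = math.sqrt(sum((x - q) ** 2 for x, q in zip(row, nis)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

With wetness-related criteria treated as benefits, years whose closeness coefficient approaches 1 rank closest to the ideal wet year, and those near 0 rank as the driest.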
Using this method, we apply seven single and combined climatological parameters to the years 1971 to 2011, with data obtained from the Golestan Province weather stations. The parameters include average rainfall, number of days with precipitation, effective precipitation based on the U.S. Bureau of Reclamation (USBR) method, the ratio of highest daily precipitation to total monthly precipitation, evapotranspiration (Thornthwaite method), and maximum/minimum temperature. The index has no temporal-scale limitations and may thus be applicable at scales ranging from days to seasons.
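Of the listed inputs, evapotranspiration is derived rather than directly observed. A common form of the Thornthwaite estimate can be sketched as follows; this is a simplified, unadjusted version that omits the day-length correction, and is not necessarily the exact variant used in the paper:

```python
def thornthwaite_pet(monthly_temp_c):
    """Unadjusted Thornthwaite potential evapotranspiration (mm/month)
    from 12 mean monthly temperatures in degrees Celsius."""
    # Annual heat index I; months at or below 0 degrees C contribute nothing.
    I = sum((max(t, 0.0) / 5.0) ** 1.514 for t in monthly_temp_c)
    # Empirical exponent a as a cubic polynomial in I.
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    # PET for each month (assumes 12-hour days and 30-day months).
    return [16.0 * (10.0 * max(t, 0.0) / I) ** a if t > 0 else 0.0
            for t in monthly_temp_c]
```

Because PET increases monotonically with temperature for a fixed heat index, the warmest months dominate the annual evapotranspiration estimate.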
To validate the outputs, values for four stations were compared with four customary drought indices (PNPI, RDI, BMDI, SPI) and correlated well with these overall (r = 0.9), confirming the high reliability of the TOPSIS algorithm. The TOPSIS method nevertheless has a distinct advantage over the other methods, as it considers important variables influencing wetness that they have not incorporated into their models; hence also some of the differences in output between TOPSIS and the other methods. A further advantage of TOPSIS is that the required climatic variables are available for most stations, or, alternatively, variables such as evapotranspiration or effective rainfall can easily be derived from simple empirical calculations. In contrast, other frequently used reliable methods, such as the Palmer method, are spatially limited in their application because they rely on less readily available data, such as soil moisture. The results obtained from the TOPSIS algorithm are thus broadly consistent with those from the other methods, yet TOPSIS offers some distinct advantages and should be considered a reliable tool for establishing dry/wet conditions and trends.
https://jesphys.ut.ac.ir/article_53689_6b37d721a9cfbb1bcdda6a74ae467e98.pdf

Factors Affecting the Summer Rainfall in a Region with Complex Topography (Case Study: Golestan Province)
Pages 565-577, Article 53694, DOI 10.22059/jesphys.2015.53694
Amir Shabanian, Mohammad Ali Nasr-Esfahany, Frozan Arkian
Journal Article, 2015-03-18

Iran is a vast land with distinctive geographical features and quite varied climates. Several times each year, short intense rains cause flooding along various parts of the southern coast of the Caspian Sea, and the resulting river floods are destructive.
The more intense the rainfall, the more destructive the resulting floods and the heavier the damage; severe flash floods during torrential summer precipitation in this region of Iran sometimes cause large losses of life and property. The Caspian Sea, with its considerable depth, its north-south extent and a surface temperature that remains relatively constant over periods of two to three days, has the potential to feed high-impact weather systems with heat and moisture. Another factor influencing the occurrence of floods in Golestan Province is the Alborz mountain range and its effect on the atmospheric flow, which makes the problem complex: the height, width and roughness of the Alborz, and their interaction with the flow, all influence how heavy rainfall develops in the region. Here, a summer torrential rainfall event in Golestan Province is simulated using the WRF model to investigate these factors. The model uses 30 vertical sigma levels and three nested domains with horizontal resolutions of 90, 30 and 10 km; the domains are centered at 54.15 and 35.51 and contain 27 × 27, 31 × 43 and 34 × 45 grid points, respectively. The selected physics options are the WSM5 microphysics scheme, the RRTM longwave and Dudhia shortwave radiation schemes, the Kain-Fritsch cumulus parameterization and the YSU boundary-layer scheme. To investigate the roles of the vertical surface fluxes and of the Alborz mountains in the rainfall intensity of the selected system, four experiments were conducted; the first was a control simulation (CTL) with the full physical model.
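The nesting and physics configuration described above can be summarized in a short sketch. This is a hypothetical Python summary, not an actual WRF namelist; the option names loosely mirror entries in WRF's namelist.input, and anything beyond the values stated in the abstract is an assumption:

```python
# Three one-way nests as described in the abstract: 90 -> 30 -> 10 km.
domains = [
    {"id": 1, "dx_km": 90, "grid": (27, 27)},
    {"id": 2, "dx_km": 30, "grid": (31, 43)},
    {"id": 3, "dx_km": 10, "grid": (34, 45)},
]

# Physics choices named in the abstract (keys echo WRF namelist entries).
physics = {
    "mp_physics": "WSM5",          # microphysics
    "ra_lw_physics": "RRTM",       # longwave radiation
    "ra_sw_physics": "Dudhia",     # shortwave radiation
    "cu_physics": "Kain-Fritsch",  # cumulus parameterization
    "bl_pbl_physics": "YSU",       # planetary boundary layer
}

# Parent-to-child grid-spacing ratio for each nest level.
ratios = [domains[i]["dx_km"] // domains[i + 1]["dx_km"] for i in range(2)]
```

Both nest levels use the common 3:1 refinement ratio, which keeps parent and child time steps easy to synchronize.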
In the second experiment the Alborz Mountains were removed (NTO); in the third, the vertical fluxes of moisture and heat from the Caspian Sea were removed (NFL); and in the fourth, both the Alborz Mountains and the Caspian Sea's vertical moisture and heat fluxes were removed simultaneously (BOT). The results of each were compared with the first (control) experiment. From the model output, parameters relevant to the flood-producing rainfall, such as temperature advection, convective available potential energy (CAPE) and vorticity advection, were calculated and analyzed for each simulation.
The results show that the rainfall mechanisms in Golestan Province depend on location. Horizontal convergence of heat and moisture fluxes is the main cause of rainfall along the Caspian Sea coastline, while rainfall over the northern flank of the Alborz range is caused by forced ascent over the mountains. The heavy rains to the southeast of the Caspian Sea occurred due to horizontal convergence of the heat flux, intense upward vertical flux and a significant amount of CAPE. Convective instability in this area is due to warm advection at the surface and cold advection in the middle troposphere. Upward motion and precipitation start with positive vorticity advection at the 500-hPa level, which is strongly affected by the Alborz Mountains.
https://jesphys.ut.ac.ir/article_53694_66b92342819a57c2f80dc9269ef3e444.pdf