Journal of the Earth and Space Physics
https://jesphys.ut.ac.ir/
Thu, 21 Apr 2022 00:00:00 +0430

Evaluating CRUST1.0 crustal model efficiency for Moho depth estimation in Middle East region
https://jesphys.ut.ac.ir/article_81523.html
Study of the Moho in the Middle East and surrounding regions is of great importance, because the region has a rich geological history and contains parts of the Eurasian, Indian, African and Arabian plates as the main plates, together with several smaller plates. Given the complexity and varied tectonic structures of the Middle East, it is important to use a method that yields a Moho depth model consistent with these structures. In this paper we compare the Moho depths obtained from two different approaches: (1) gravity data inversion using spherical prisms (tesseroids), and (2) Moho depth evaluation using tesseroids and the CRUST1.0 crustal model. Determining Moho depth from gravity data is a nonlinear inverse problem. Given the extent of the study area, we use an efficient inversion method (Uieda's method) that accounts for the Earth's curvature by using spherical prisms instead of rectangular prisms. In this method one minimizes the cost function Γ(p) = φ(p) + μθ(p), where φ is the data-fidelity term, θ is the penalty (regularization) term and μ is the regularization parameter. In addition to the Moho depth, three hyperparameters must be estimated: the regularization parameter (μ), the Moho reference level (z_ref) and the density contrast (Δρ). They are estimated in two steps during the inversion by holdout cross-validation. To estimate the relief of the Moho from gravity data, one must first isolate the gravitational effect of the anomalous density distribution attributed to the Moho relief; this requires removing all other gravity effects from the observed data. In the first method, tesseroid modeling is used to calculate the gravity effect of the topography and sediments, and the effects of topography and crustal sediments are removed using global topography and crustal models.
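The hyperparameter selection described above, minimizing a regularized cost function and choosing the regularization parameter by holdout cross-validation, can be sketched on a toy linear problem. This is an illustrative stand-in, not Uieda's tesseroid code: the smoothing kernel `G`, the first-difference penalty, and all sizes are invented.

```python
import numpy as np

def invert(G, d, mu):
    """Minimize ||G p - d||^2 + mu ||L p||^2, with a first-difference
    smoothness penalty L playing the role of the penalty term."""
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)           # first-difference operator
    A = G.T @ G + mu * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

def holdout_mu(G, d, mus, holdout=0.3, seed=0):
    """Pick the regularization parameter that best predicts a
    held-out subset of the data (holdout cross-validation)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(d))
    n_test = int(holdout * len(d))
    test, train = idx[:n_test], idx[n_test:]
    best_mu, best_err = None, np.inf
    for mu in mus:
        p = invert(G[train], d[train], mu)
        err = np.linalg.norm(G[test] @ p - d[test])
        if err < best_err:
            best_mu, best_err = mu, err
    return best_mu

# Toy example: a smooth "Moho relief" observed through a smoothing kernel.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
p_true = 30 + 5 * np.sin(2 * np.pi * x)       # depths in km
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)
d = G @ p_true + rng.normal(0, 0.5, size=len(x))
mu = holdout_mu(G, d, [1e-4, 1e-2, 1.0, 1e2])
p_est = invert(G, d, mu)
```

In the paper's setting the same loop would also run over the reference level and density contrast, and the forward operator would be the nonlinear tesseroid model rather than a fixed matrix.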
In the second method, we first extract the Moho depth over the study region from the CRUST1.0 model, then evaluate the gravity effect arising from this anomalous Moho, and finally apply the inversion method to estimate the Moho depth implied by CRUST1.0. According to the results of the first method, the minimum Moho depth is about 12 km in parts of the Indian Ocean and the maximum is about 54 km in the west of the Tibetan plateau, in accordance with plate boundaries and correlating well with the prominent tectonic features of the Middle East region. The Moho depth obtained from the second method varies between 7.5 and 49 km, where the minimum depth occurs in parts of the Indian Ocean and the maximum appears in parts of the Zagros in Iran. Comparing the results of the two methods demonstrates the acceptable performance of the adapted inversion procedure and of spherical prisms, but the second method fails to estimate an acceptable Moho depth, especially at the divergent boundaries of the Red Sea, the Gulf of Aden and the Indian Ocean. The results indicate that the CRUST1.0 model, at least over an area of large extent, is not a suitable model for gravity inversion and Moho depth estimation.

Combination of Radio Occultation data in 3D and 4D functional model tomography for retrieving the wet refractivity indices
https://jesphys.ut.ac.ir/article_83553.html
Atmospheric wet refractivity indices, which depend on water vapor, are among the most important parameters for analyzing climate change in a region. Wet refractivity indices can be estimated from radiosonde measurements or calculated from numerical meteorological models. However, because of the low temporal and spatial resolution of radiosonde stations and the strong variations of water vapor in the lower atmosphere, current numerical meteorological models provide low accuracy for atmospheric parameters. With the growing number of stations that record global positioning satellite measurements, atmospheric parameters can now be estimated via remote sensing at fine temporal and spatial resolutions. Wet refractivity causes a delay in GPS signals, so this delay carries information about the distribution of wet refractivity in the atmosphere. Using global positioning satellites to estimate the atmospheric wet delay, together with tomography, the wet refractivity indices can be estimated. One of the growing methods for measuring atmospheric parameters is the radio occultation (RO) technique. With the increasing number of low Earth orbit satellites carrying GNSS receivers, this technique provides observations over the whole globe, obtained directly from the atmospheric parameters of interest. The aim of this study is to combine RO and GPS observations in 3D and 4D atmospheric tomography. Because the tomography problem is ill-posed due to the poor distribution of GPS observations in the network, a functional model has been implemented to estimate the wet refractivity indices from the tomography problem. By expanding the tomographic unknowns into basis-function coefficients, the number of unknowns is decreased, the problem becomes well-posed and the unknowns can be estimated from the inverse problem.
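The dimension-reduction idea above, expanding the tomographic unknowns in a small set of basis functions so that an under-determined voxel problem becomes an overdetermined coefficient problem, can be sketched as follows. A 2-D polynomial basis stands in for the spherical cap harmonic/EOF basis, and the ray geometry and all sizes are illustrative assumptions.

```python
import numpy as np

# Toy tomography: 20 rays through a 10x10 voxel grid (100 unknowns)
# is under-determined; expanding the field in a handful of smooth
# basis functions makes the least-squares problem well-posed.
rng = np.random.default_rng(0)
nx = ny = 10
xs, ys = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))

def basis(x, y):
    """A small 2-D polynomial basis standing in for spherical cap
    harmonics x EOFs: 6 coefficients instead of 100 voxels."""
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=-1)

B = basis(xs.ravel(), ys.ravel())                 # (100, 6)
true_field = (1.0 + 0.5 * xs - 0.8 * ys**2).ravel()  # lies in the basis span

# Random ray weights play the role of path-length contributions.
A = rng.uniform(0, 1, size=(20, nx * ny))
y_obs = A @ true_field + rng.normal(0, 1e-3, size=20)

# Voxel problem: 20 equations, 100 unknowns -> rank-deficient.
# Coefficient problem: 20 equations, 6 unknowns -> overdetermined.
AB = A @ B
c, *_ = np.linalg.lstsq(AB, y_obs, rcond=None)
field_est = B @ c
```

The recovered coefficients reproduce the field everywhere on the grid, even in voxels no ray samples directly, which is exactly what the functional model buys in the real GPS/RO geometry.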
In the three-dimensional functional model, a combination of spherical cap harmonics and empirical orthogonal functions is used to solve the inverse problem: spherical cap harmonics represent the horizontal distribution of the wet refractivity indices, and empirical orthogonal functions represent the vertical distribution of the unknown coefficients. B-splines are then used in the four-dimensional functional model to represent the time dependence of the coefficients. After implementing the 3D and 4D functional models, the relative weight of the RO data with respect to the GPS data was calculated using variance component estimation. The California region of the US was selected as the study network because of its high tectonic importance and the large number of GPS stations. The results for the two tomography epochs considered were validated with radiosonde station data in the network and also compared with ERA5 reanalysis data. Comparison of the tomography and ERA5 profiles with the radiosonde wet refractivity indices shows that the results obtained from the functional model tomography are better than those of the ERA5 data. The combination results illustrate that using RO data in both the 3D and 4D models decreases the RMSE, an improvement of about 7 to 10 percent over the uncombined tomographic models. Using RO data in the 4D model also gives higher accuracy than the 3D model, because the time-dependent functional model increases the model's accuracy.

Sea level anomaly prediction using Empirical Mode Decomposition and Radial Basis Function Neural Networks
https://jesphys.ut.ac.ir/article_85436.html
The sea level anomaly, the difference between the instantaneous water level and the average water level over a period of time, is of great importance in studying water level conditions in different regions. Predicting a time series requires that the series be stationary, with trends and seasonal changes removed from the observations so that the variance and mean do not depend on time. Various methods have been suggested and used for making a time series stationary. Here, Empirical Mode Decomposition, which builds intrinsic mode functions containing parts of the signal with approximately the same frequency, is used to analyze and isolate the trend and seasonal variations of the signal. The Caspian Sea, the largest lake (or largest enclosed water body) in the world, is located in northern Iran. It is one of the main sources of income for the surrounding countries: it holds important oil and gas resources and is the main source of sturgeon, one of the most expensive food sources in the world. This strategic region is also known as a corridor connecting East and West. In addition to its economic and commercial dimensions, the Caspian Sea is of great military importance, as numerous military maneuvers are held there every year by the neighboring countries. For these reasons, awareness of the water level and its changes has become increasingly important, especially over the past few decades, yet few studies have examined the water level. Therefore, in this research, satellite altimetry data are used to monitor water level changes in this area, providing coverage of the sea level anomaly and its changes from 1993 to the present.
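The prediction step used in this study, fitting a smooth component of the signal with a radial basis function network whose output weights are solved linearly, can be sketched as follows. A synthetic sine stands in for one intrinsic mode function from the EMD step; the centers, width and sample sizes are illustrative assumptions.

```python
import numpy as np

def rbf_design(t, centers, width):
    """Gaussian radial basis functions evaluated at times t."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# Fit one smooth component (as an intrinsic mode function from EMD
# would be) with an RBF network; the output weights are obtained by
# linear least squares.
t_train = np.linspace(0, 10, 200)
y_train = np.sin(2 * np.pi * t_train / 5)          # stand-in IMF

centers = np.linspace(0, 10, 25)
Phi = rbf_design(t_train, centers, width=0.4)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

# Evaluate at unseen interior times and measure the error.
t_test = np.linspace(0.2, 9.8, 50)
y_pred = rbf_design(t_test, centers, width=0.4) @ w
rmse = np.sqrt(np.mean((y_pred - np.sin(2 * np.pi * t_test / 5)) ** 2))
```

In the paper's pipeline one such network would be trained per dominant frequency component, and the component forecasts summed to predict the sea level anomaly.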
The Caspian Sea, as one of the two important water sources for Iran, is strategically important. Using the transit data of 92 satellite altimetric mission passes over the Caspian Sea region, the changes in the sea level anomaly in this region since 1993 have been observed. This quantity is then analyzed using Empirical Mode Decomposition, an efficient method for separating the constituent frequencies of a signal, and a radial basis function neural network is then built to predict the sea level anomaly. Nine dominant frequencies, together with a trend, result from the signal analysis considered in this study. The final model achieves root mean square errors of 0.029 m and 0.034 m, with correlation coefficients of 0.99 and 0.97, in the training and testing stages of the neural network, respectively.

2D reconstruction of gravity anomalies using the level set method
https://jesphys.ut.ac.ir/article_83563.html
In order to properly understand subsurface structures, the inversion of geophysical data has received much attention from researchers. Since accurate reconstruction of the shape and boundaries of a body from gravimetric data is very important in some problems, it is essential to use an effective and efficient method with a strong ability to delineate and reconstruct those boundaries. In recent years, the level set method introduced by Osher and Sethian has been widely used to solve this problem. By expanding the level set function in a set of basis functions, the effective number of parameters is greatly reduced and the resulting optimization problem behaves better than the plain least squares problem. Accordingly, a parametric level set method is presented for the reconstruction of inversion models. An advantage of the parametric level set method is that it significantly reduces the dimension of the problem and avoids many of the difficulties of traditional level set methods, such as regularization, reinitialization and the choice of basis functions. The level set parameterization is performed with radial basis functions (RBFs), which yields an optimization problem with a modest number of parameters and high flexibility, making the computation and optimization with Newton-type methods more accurate and smooth. The model is described by the zero contour of a level set function, which in turn is represented by a relatively small number of radial basis functions. This formulation includes some additional parameters, such as the width of the radial basis functions and the smoothness of the Heaviside function. The latter is of particular importance as it controls the sensitivity to changes in the model.
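A minimal sketch of the parametric level set representation described above, assuming Gaussian RBFs and an arctangent-smoothed Heaviside function. The specific kernel, the 0.5 offset and all coordinates below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def heaviside_eps(phi, eps):
    """Smoothed Heaviside: controls how sharply the zero contour of
    the level-set function switches the density on and off."""
    return 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))

def level_set_model(x, z, centers, alphas, width, eps, rho=1.0):
    """Density model: rho inside the body and ~0 outside, where
    'inside' is the region phi(x, z) > 0 and phi is a sum of
    Gaussian RBFs minus a constant offset."""
    d2 = (x[:, None] - centers[:, 0]) ** 2 + (z[:, None] - centers[:, 1]) ** 2
    phi = np.exp(-d2 / (2 * width**2)) @ alphas - 0.5
    return rho * heaviside_eps(phi, eps)

# A 2-D grid and a body described by 3 RBF coefficients instead of
# thousands of pixels.
gx, gz = np.meshgrid(np.linspace(0, 10, 60), np.linspace(0, 5, 30))
x, z = gx.ravel(), gz.ravel()
centers = np.array([[4.0, 2.0], [5.0, 2.5], [6.0, 2.0]])
alphas = np.array([1.0, 1.0, 1.0])
m = level_set_model(x, z, centers, alphas, width=1.0, eps=0.05)
```

In an inversion, the optimizer would update `alphas` (and possibly the centers and width) so that the gravity response of `m` fits the data; the smoothing parameter `eps` keeps the model differentiable with respect to those parameters.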
The algorithm adaptively chooses the required smoothness parameter, and the method is tested on a suite of idealized Earth models. In the evolutionary approach, the reduced gradient method usually requires many iterations to converge and performs poorly for low-sensitivity problems. Although quasi-Newton updates of the level set function improve the convergence rate, they are computationally challenging: for large problems and relatively fine grids, a system of equations must be solved in each iteration. Moreover, because the number of underlying parameters in a parametric approach is usually much smaller than the number of pixels resulting from the discretization of the level set function, we make use of a Newton-type method to solve the underlying optimization problem. In this research, the algorithm is coded and examined for its strengths and weaknesses on geophysical gravity data, and is tested using several two-dimensional synthetic models. Finally, the method is applied to gravity data from the Mobrun ore body, northeast of Noranda, Quebec, Canada. The results show that the optimization of the level set function leads to a comparatively accurate and realistic detection of the body's boundaries, indicating that the tested body extends from a depth of 10 meters to a depth of 160 meters.

Comparison of least squares collocation and Poisson's integral methods in downward continuation of airborne gravity data
https://jesphys.ut.ac.ir/article_85444.html
Terrestrial gravimetry in large countries such as Iran, with mountainous areas, is time consuming and costly. Airborne gravimetry can be used to fill the gravity data gaps. Airborne gravity data are contaminated with various systematic and random errors that should be evaluated before use. In this study, downward continued airborne gravity data are compared with existing terrestrial gravity data to detect probable biases and measurement errors. For this purpose, the efficiencies of two methods, least squares collocation and Poisson's integral, are compared. Collocation is an optimal linear prediction method in which the basis functions are directly related to the covariance functions; the covariance function can be derived by fitting an empirical covariance. This method can be applied to the downward continuation (DWC) of gravity data with arbitrary distribution. Homogeneous and isotropic covariance functions are usually used in collocation; in reality, however, the statistical parameters of gravity data change with location and azimuth, which is the main drawback of collocation with a stationary covariance function. Based on Dirichlet's boundary value problem for harmonic functions, the downward continuation of airborne gravity data from flight altitude to the geoid/ellipsoid surface is given by the inverse of Poisson's integral. Like collocation, this method can be applied to DWC of gravity data with arbitrary distribution. Poisson's integral as an inverse problem is unstable in continuous form; for discrete data, however, the instability depends on the amplitude of the high-frequency components in the gravity observations, such as measurement errors. Numerical computations for this study were performed in the Colorado region and northern parts of New Mexico. In this region, 524,381 airborne data points are available along 106 flight lines.
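The collocation predictor described above can be sketched in one dimension, assuming a stationary Gaussian covariance. The correlation length, variances and profile below are invented for illustration; real downward continuation works with 3-D positions and the covariance propagated between flight altitude and the surface.

```python
import numpy as np

def collocation_predict(x_obs, y_obs, x_new, corr_len, sigma2_signal, sigma2_noise):
    """Least-squares collocation with an isotropic Gaussian covariance:
    s_new = C_ns (C_ss + sigma_n^2 I)^(-1) y."""
    def cov(a, b):
        d = a[:, None] - b[None, :]
        return sigma2_signal * np.exp(-(d**2) / (2 * corr_len**2))
    C_ss = cov(x_obs, x_obs) + sigma2_noise * np.eye(len(x_obs))
    C_ns = cov(x_new, x_obs)
    return C_ns @ np.linalg.solve(C_ss, y_obs)

# Stand-in for DWC: predict a smooth gravity signal at new points from
# noisy observations along a 1-D profile.
rng = np.random.default_rng(0)
x_obs = np.linspace(0, 100, 80)                      # km
signal = lambda x: 10 * np.sin(2 * np.pi * x / 50)   # mGal
y_obs = signal(x_obs) + rng.normal(0, 1.0, size=80)  # 1 mGal noise
x_new = np.linspace(5, 95, 40)
y_pred = collocation_predict(x_obs, y_obs, x_new, corr_len=10.0,
                             sigma2_signal=50.0, sigma2_noise=1.0)
```

The same structure explains the stabilizing role of the noise variance mentioned later in the abstract: adding `sigma2_noise` on the diagonal of `C_ss` is what keeps the system well-conditioned.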
The along-track sampling is 1 Hz (about 128 meters) and the cross-track distance between lines is about 10 km. To reduce the edge effect, the final test area is reduced to a smaller region that includes 5,494 ground gravity points. To improve the efficiency of the computations, the sampling interval is decreased to about 2 km. We first demonstrate the DWC methods using simulated gravity data. Short wavelengths of the gravity disturbance, corresponding to degrees 360-2190, were generated using the experimental global gravity model 'refB' at the true positions of the airborne and ground data. White noise of 1 and 2 mGal was added to the airborne data. Using these simulated observations, the two aforementioned methods were employed to determine the terrestrial disturbances. The comparison of computed and simulated terrestrial disturbances shows that the accuracy of the Poisson method for both noise levels is about 30% better than that of collocation. For the real data, the residual gravity data are computed by subtracting the long wavelengths up to degree 360 and the corresponding residual topographic effect (RTM) from the real gravity observations. The RTM is derived from the harmonic model dV_ELL_Earth2014_5480 for spherical harmonic degrees between 360 and 5480; this model provides spherical harmonics of the gravitational potential of the upper crust. According to previous studies, the noise level of the Colorado airborne gravity data is about 2.0 mGal. Introducing this noise into the collocation makes the problem stable. In the Poisson method, the iterative LSQR method is used to solve the system of linear equations; to achieve a stable solution, the iterations were terminated using the discrepancy principle. The residual gravity anomaly at the Earth's surface can be computed directly using collocation.
In the Poisson method, however, the computation is performed in two steps: (1) the airborne gravity disturbances are downward continued to a regular grid on the reference ellipsoid, and (2) the terrestrial gravity disturbance is computed by upward continuation from the ellipsoidal disturbances. Unlike for the simulated data, the accuracy of the two methods is the same in terms of the standard deviation of the differences: the mean and standard deviation of the differences are about 2 mGal and 8 mGal, respectively. According to a study by Saleh et al. (2013), the bias of terrestrial data in parts of Colorado exceeds 2 mGal. Therefore, because of the bias in the terrestrial data, the estimated bias in the airborne data cannot be confirmed.

Investigating the Relationship between Change of Tropopause Pressure's level (TPL) and Cyclones Associated with Widespread Precipitation (WP) in Iran
https://jesphys.ut.ac.ir/article_83555.html
The study of the simultaneous occurrence of cyclones and changes in the tropopause pressure level (TPL) can provide useful insights into the characteristics of precipitation, especially widespread precipitation (WP), over Iran, as mid-latitude cyclones are one of the most critical factors associated with WP in Iran. Understanding the mechanisms and features associated with cyclones is crucial for estimating and predicting cyclones and their consequences with precision. To this end, the current study examines the relationship between the tropopause and cyclones associated with WP in the country. Two data sets were adopted: daily precipitation data from the Asfazary national data set (version 3), and atmospheric data (temperature and geopotential height (GH) from the ERA-Interim reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF)) with a spatial resolution of 0.25 degrees over the area from 0 to 80° N and 10° W to 120° E. This area and these data were selected in order to identify all cyclones that originate in or pass through the Mediterranean Sea and are associated with WP over Iran; the corresponding tropopause pressure levels were then examined. The Asfazary database from 1979 to 2015 was used to identify days with WP, defined by precipitation anomalies covering more than 10% of the country; a total of about 1,189 days with WP were extracted for this period. Regional variations of GH at the 1000 hPa level were used to identify cyclone centers: the GH of each pixel was compared with its eight neighboring pixels, and when the GH was lower than all the neighboring values and the GH gradient was at least 100 geopotential meters per thousand kilometers, the pixel was considered a cyclone center.
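The cyclone-center criterion above (a 1000 hPa GH minimum relative to its eight neighbours, with a gradient of at least 100 geopotential meters per 1000 km) can be sketched as follows. The gradient is approximated here from the four edge neighbours, and the grid spacing and synthetic field are illustrative assumptions.

```python
import numpy as np

def cyclone_centers(gh, dx_km, min_gradient=100.0):
    """Flag cells whose 1000 hPa geopotential height (gpm) is lower
    than all eight neighbours and whose rise toward the four edge
    neighbours is at least min_gradient gpm per 1000 km (a simplified
    version of the criterion in the text)."""
    centers = []
    ny, nx = gh.shape
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            block = gh[i - 1:i + 2, j - 1:j + 2].copy()
            center = block[1, 1]
            block[1, 1] = np.inf
            if center >= block.min():          # not a strict local minimum
                continue
            rise = np.mean([gh[i - 1, j], gh[i + 1, j],
                            gh[i, j - 1], gh[i, j + 1]]) - center
            if rise / dx_km * 1000.0 >= min_gradient:
                centers.append((i, j))
    return centers

# Synthetic low: a 200 gpm deep depression on a 25 km grid.
y, x = np.mgrid[0:21, 0:21]
gh = 5500.0 - 200.0 * np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 20.0)
found = cyclone_centers(gh, dx_km=25.0)
```

A production version would work on latitude/longitude grids with latitude-dependent cell sizes and would then track the flagged centers from one time step to the next.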
Cyclones were tracked with respect to the days with WP, and their characteristics were investigated on the day of cyclone activity and on the day of WP. The tropopause was identified using the thermal criterion defined by the World Meteorological Organization (WMO, 1957). The 1,189 days with WP were examined visually; since it is not feasible to present all the days in this brief paper, a few samples were selected to illustrate the association of the tropopause with cyclones on days with WP. The days were selected based on the highest percentage of area covered in different months: for the entire period, 8 days were selected to represent January, February, March, April, June, October, November and December; in May, July, August and September, days with WP were not observed. To investigate the relationship between the tropopause and cyclones in the eight WP samples, the features of the tropopause and cyclones on the starting days and on the days with WP were considered. The spatial distribution of the TPL on the day of cyclone activity and on the day with WP showed that, on the day of cyclone activity, the tropopause had distinct characteristics: the tropopause pressure level showed larger values than in the surrounding areas. Even on days when WP was observed in Iran, this anomaly was observed in the TPL within the cyclone's activity range. The tropopause condition over the country differed significantly from the day of cyclone activity: at the time of precipitation, the tropopause level showed larger values in most areas compared to the beginning of the cyclone, especially in areas with heavy precipitation. The tropopause at the time of the formation of the cyclone with WP on April 7, 2013 differed from the other cases under study.
In this case, at the beginning of the cyclone activity over the cyclone formation area, the tropopause did not show a significant anomaly, while on the day of WP in the south of Iran the anomaly was markedly prominent. This difference may be due to differences in the origin and mechanism of cyclones in different areas, which probably explains the difference in the characteristics of the tropopause on the day of cyclone activity. Across the whole study area, at latitudes above 30 degrees, the tropopause was broken at the geographic locations where cyclones emerged at 1000 hPa; at these times, tropopause pressure levels showed larger values than in the surrounding areas. Given this, there appears to be a relationship between the two phenomena, cyclones and the TPL. Based on the findings, in all eight samples of WP days, the tropopause had special characteristics over the cyclone area; in addition, tropopause pressure levels in these areas were higher than their counterparts at the same geographical locations.

Combined Estimation of Nighttime Land Surface Temperature in Jazmourian Drainage Basin Using MODIS Sensor Data of Terra/Aqua Satellites
https://jesphys.ut.ac.ir/article_83548.html
Land surface temperature (LST) estimation is widely used in applied and environmental studies such as agriculture, climate change, water resources, energy management, urban microclimate and the environment. LST, which results from atmosphere-land interaction, can capture changes in surface temperature conditions well, because of its sensitivity to land surface conditions such as soil cover, soil moisture, albedo and surface roughness, and the interaction of these factors with the atmosphere. In the present study, nighttime MODIS products from both the Terra and Aqua satellites (MOD11C3 and MYD11C3), obtained from http://reverb.echo.nasa.gov/reverb, were used for LST estimation in the Jazmourian drainage basin (southeast Iran) over the 17-year study period. After preparing the products at a monthly time step and 5 km spatial resolution, calculations were performed on two matrices: a monthly matrix of dimensions 2784 x 204 (204 is the number of observations in consecutive months of the 17 studied years (17 x 12), and 2784 is the number of grid cells in the Jazmourian drainage basin area), and a seasonal matrix of dimensions 2784 x 68 (68 being the number of observations in consecutive seasons (17 x 4)). After the relevant statistical and spatial analyses in Excel and GIS environments, the nighttime LST was estimated. The results showed that the nighttime LST increased by about 1 degree Celsius over the study period, and this increase was larger in the minimum temperatures (the cold months of the year) than in the maximum nighttime LST. The maximum nighttime LST occurred in the low altitudes of the central and southern regions, and the minimum LST occurred in the northern heights of the drainage basin.
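The monthly anomaly computation on a (cells x months) matrix like the one described above can be sketched as follows; the grid size, record length and synthetic annual cycle are illustrative assumptions.

```python
import numpy as np

# Monthly anomaly per grid cell: subtract each calendar month's mean
# over the record from every observation of that month, on a matrix
# shaped (cells, months) like the 2784 x 204 matrix in the text.
rng = np.random.default_rng(0)
n_cells, n_years = 12, 17
months = np.tile(np.arange(12), n_years)                # 204 columns
seasonal = 20 + 10 * np.cos(2 * np.pi * months / 12)    # annual cycle, deg C
lst = seasonal + rng.normal(0, 0.5, size=(n_cells, n_years * 12))

anomaly = np.empty_like(lst)
for m in range(12):
    cols = months == m
    anomaly[:, cols] = lst[:, cols] - lst[:, cols].mean(axis=1, keepdims=True)
```

Removing the per-month climatology this way is what lets a small residual signal, such as the +0.07 °C maximum anomaly reported in the abstract, stand out against the much larger annual cycle.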
The seasonal spatial distribution of nighttime LST ranges from -10°C in winter to +35°C in summer. These strong seasonal fluctuations in nighttime LST clearly show the prominent role of altitude and latitude in the temperature distribution of the Jazmourian drainage basin. The time analysis of the studied variable shows a positive trend of nighttime LST in all four seasons, with spring and winter having the steeper upward slopes. In addition, the spatial estimation of nighttime LST anomalies, while confirming the increasing trend, shows the maximum nighttime LST anomalies in the central and western parts and the minimum anomalies in the eastern parts and northern heights of the drainage basin. The analysis of monthly nighttime LST anomalies shows the maximum positive anomaly of +0.07°C in September 2016 and the minimum anomaly of -0.01°C in January 2008. In general, nighttime LST values increased significantly from 2008 onwards, especially in the cold months of the year (with a greater increase in the minimum nighttime LST than in the maximum). This indicates a shift of the cold-season nighttime LST towards a warmer pattern. These conditions can be considered an indicator of climate change and lead to changes in environmental parameters such as relative humidity, evapotranspiration, soil surface moisture, snow persistence, dew point temperature and nightly radiative energy. Considering the high agricultural potential of the Jazmourian drainage basin and its capacity for seasonal tourism in different areas, the importance of investigating nighttime LST changes is undeniable.
On the other hand, given the continuing increase in environmental sensitivities and the accelerating trend toward a continental climate in this drainage basin, it is suggested that future research estimate other climatic variables and consider their correlations with LST. This will provide more knowledge of the climatic and environmental changes that have occurred in this little-studied drainage basin.

Design and calculation of a multilayer radiation shield for replacement with Al in GEO orbit
https://jesphys.ut.ac.ir/article_83559.html
Protecting electronic components against space radiation is a basic requirement in designing and constructing satellites. One of the most common radiation shields for satellites is added aluminum thickness, sized to achieve the desired radiation levels. However, in environments such as GEO orbit, where electrons are predominant, thick aluminum walls are not the most effective shields, because they cannot attenuate the secondary X-rays generated when electrons collide with the shielding material. In general, materials with higher atomic numbers, such as tantalum, can strongly attenuate X-rays, but when used alone as electron shields they generate more secondary X-rays and add more weight to the system. Polyethylene is a well-known shielding material because of its high hydrogen content, low density, ease of use and reasonable price, and is used as a benchmark for comparing the efficiency and effectiveness of other shielding materials. A lighter approach is the multilayer (graded-Z) shield, which works well in electron-dominated environments and also protects against energetic protons. In designing and manufacturing radiation shielding, the proper selection of materials and layer thicknesses is very important for reducing the dose and optimizing the weight; this requires experimental or computational work. Although the experimental method is accurate, practical experiments are costly and time consuming, and space radiation testing laboratories are not always accessible, so computational and simulation methods can save time and budget. In this work, the influence of different shield structures on space radiation shielding was evaluated using the MCNPX Monte Carlo code, and the induced dose was calculated in a silicon component. A graded-Z shield consisting of aluminum, carbon and polyethylene was proposed.
The performance of the graded-Z shield over various dose ranges was investigated and compared with aluminum and polyethylene. Because weight is critical in the design of space systems, it was used as one of the criteria for optimizing the thicknesses of the designed shield layers, in comparison with aluminum and polyethylene shields, for low-, medium- and high-risk dose ranges. The energy and flux of space radiation for a mission in GEO orbit beginning in early 2021 and lasting 5 years were provided by the Space Environment Information System (SPENVIS). The results showed that replacing the conventional aluminum shield with the graded-Z shield in the specified dose ranges achieves a weight reduction of up to 22.12%. For the medium- and low-risk ranges, multilayer shielding is more sensible in terms of weight than aluminum shielding. In addition, if it is not necessary to use aluminum boxes to house the electronic components inside the satellite, a polyethylene shield is cost effective in terms of weight budget, with differences of 17.65% in the high-risk, 13.16% in the medium-risk and 19.23% in the low-risk mode compared to aluminum shielding. Advances in manufacturing new materials such as aerogels, and the incorporation of these lightweight materials, can lead to even lighter shields.

The QBO effect on the wave breaking over the east of Mediterranean and west Asia: Critical Latitude Aspect
https://jesphys.ut.ac.ir/article_85445.html
In the present study, using ERA-Interim reanalysis data for geopotential height, horizontal wind speed and relative vorticity at the 300, 200, 150, 100 and 50 hPa levels, the quasi-geostrophic potential vorticity, its meridional gradient, the wave activity and the wave activity flux were calculated and analyzed for cyclonic and anticyclonic Rossby wave breaking events that occurred over Europe during the winters of 1979-2018, in the westerly (QBOw) and easterly (QBOe) phases of the Quasi-Biennial Oscillation. The mechanism of Rossby wave breaking was analyzed from five days before to five days after each breaking event. The results show that for anticyclonic breaking events over west Asia in the QBOe, the poleward displacement of the jet upstream of the trough toward higher latitudes over Europe is more consistent than in the QBOw, whereas for cyclonic breaking in the westerly phase, the jet upstream of the trough over the western Mediterranean shifts to lower latitudes over Europe more than in the easterly phase. Therefore, in anticyclonic wave breaking over west Asia, the wave amplitudes are larger in the QBOe than in the QBOw. In anticyclonic breaking, the QBOe raises heights upstream of the trough over Europe, lowers heights downstream of the trough over eastern Europe and the Mediterranean, and also raises heights over the eastern Atlantic Ocean. In cyclonic breaking, the QBOe raises heights upstream of the trough over the western Mediterranean and lowers heights downstream of the trough over the eastern Mediterranean. In anticyclonic wave breaking over west Asia and the eastern Mediterranean in the QBOe, the jet velocity anomaly, and consequently the critical latitude that forms over northern Europe, is stronger than the critical latitude in the QBOw. The QBOe displaces the jets and the critical latitude poleward compared to the QBOw.
In anticyclonic wave breaking over West Asia, the formation of an extended ridge over the Atlantic Ocean and Europe leads to the establishment of a narrow trough over West Asia. In the QBOe, the jet intensifies over northern Europe and the critical latitude upstream of the trough forms more strongly than in the QBOw. The equatorward wave activity flux due to anticyclonic breaking is larger in the QBOe than in the QBOw; therefore, anticyclonic wave breaking is stronger in the QBOe. In cyclonic wave breaking, the jet upstream of the trough over Europe and the jet downstream of the trough over the eastern Mediterranean form along a northwest-southeast axis. In the QBOe, the upstream jet intensifies at higher latitudes than in the QBOw, so the critical latitude is displaced poleward. In the QBOe, the northwest-southeast tilt of the trough is greater than in the QBOw, and the trough over the Mediterranean and eastern Europe has lower heights. The poleward wave activity flux due to cyclonic wave breaking is also larger in the QBOe; therefore, cyclonic wave breaking is stronger in the QBOe than in the QBOw. By contrast, for anticyclonic wave breaking over the western Mediterranean, the meridional gradient of quasi-geostrophic potential vorticity is stronger and the meridional wave activity flux larger in the QBOw than in the QBOe; therefore, anticyclonic wave breaking over the western Mediterranean is stronger in the QBOw.
Effects of Quantum Gravity on a Vector Field Cosmological Model
https://jesphys.ut.ac.ir/article_85438.html
The modification of the laws of physics at short distances is an important consequence of the theory of quantum gravity. For instance, the commutation relations of standard quantum mechanics are modified at length scales of the order of the Planck length. These changes can be neglected at low energies, but they become considerable at high energies, such as in the early universe. Accordingly, the standard uncertainty principle is replaced by modified uncertainty relations that include an observable minimum length of the Planck order. The early moments of the universe, including the inflationary period, constituted an era in which the effects of quantum gravity were noticeable because of the high energy scale, so these effects can be studied during this period. To do so, the characteristics of the inflationary period can be examined through primordial parameters of the universe, such as the initial fluctuations that seed the formation of cosmic structure, and the spectral index. On the other hand, vector cosmological models have attracted the attention of researchers. These models include an action in which a vector field (in addition to the scalar field) is introduced to investigate the observational effects of Lorentz-invariance violation. The present paper investigated the effects of quantum gravity (through non-commutative geometry and the generalized uncertainty principle) on the parameters of a vector cosmological model. The vector model was chosen because this scenario shows acceptable agreement with post-inflationary cosmological parameters (e.g. the crossing of the phantom boundary) (Nozari and Sadatian, 2009). Furthermore, the present study could test this vector model by determining the parameters of the inflationary period under the effects of quantum gravity. 
According to the calculations in the present paper, we conclude that, first, the density of scalar perturbations decreases in the vector model under the effects of quantum gravity (the reduction in the standard model is more considerable), and second, when the effects of quantum gravity are ignored, the scalar spectral index remains scale-invariant, as observations indicate; however, for sufficiently large quantum-gravity effects (depending on the value of β), the spectral index no longer maintains its scale invariance. According to the modification obtained in the present study, quantum gravity can be tested through the density of scalar perturbations (which can be measured by observing the spectrum of the cosmic microwave background radiation). To compare our results with other studies, we can refer to Zhu et al. (2014), who examined the spectral index using a high-order correction mechanism and showed that their approximation does not lead to a considerable error in the spectral index, with scale invariance maintained. Furthermore, Hamber and Sunny Yu (2019) found the same result for the scale invariance of the spectral index using the Wilson renormalization analysis method, so there was no need for the usual assumptions about the inflationary period. Finally, it should be noted that, despite the great number of studies on the effects of quantum gravity, the model reviewed in this paper considers a setting in which these effects can be investigated at all stages of the evolution of the universe, from inflation until now.
Assessing the Performance of CMIP5 GCMs in Copula-Based Bivariate Frequency Analysis of Drought Characteristics in the Southern Part of Karun Catchment
https://jesphys.ut.ac.ir/article_85446.html
Drought is an extreme event and, compared with other natural disasters, a creeping phenomenon, with great effects on the environment and human life. During 1997 to 2001, a severe drought with a 40-year return period affected half of Iran's provinces, with losses in the agricultural sector estimated at more than US$10 billion (National Center for Agricultural Drought Management, http://www.ncadm.ir), and a Gross Domestic Product (GDP) reduction of about 4.4% was reported (Salami et al., 2009). A more severe drought period (2007-2009) devastated the country on a larger scale than the previous one. A 20% average reduction in precipitation has been reported for 2008 compared with the 30-year average (Modarres et al., 2016). It was found that the longest and most severe drought episodes have occurred in the last 15-20 years (1998-2017) (Ghamghami and Irannejad, 2019). A drought is characterized by its severity, duration and frequency. These characteristics are not independent of each other, and droughts cause significant economic, social and ecosystem impacts worldwide (IPCC, 2013). Probabilistic analysis of drought events plays an important role in the appropriate planning and management of water resources and agriculture, especially in arid and semi-arid regions. In particular, estimates of drought return periods can provide useful information for different water sectors under drought conditions. In this study, the capability of two CMIP5 GCMs in estimating the joint return period of drought severity and duration using copulas is investigated in the southern part of the Karun Basin. Three types of data have been used. 
These include monthly precipitation and temperature observed at synoptic stations and gridded data for 1975-2005 obtained from IRIMO (the Iranian Meteorological Organization) and CRU (https://crudata.uea.ac.uk/cru/data), as well as the outputs of two GCMs (HadGEM2-ES and IPSL-CM5A-MR) from CMIP5 (http://cmip-pcmdi.llnl.gov/CMIP5/) for the historical period 1975-2005. Following the Intergovernmental Panel on Climate Change (IPCC, 2013), the first ensemble member (r1i1p1) of each GCM was selected. RCPs are estimates of radiative forcing (RF): RCP2.6 and RCP4.5 represent 2.6 and 4.5 W.m-2, and RCP8.5 represents 8.5 W.m-2, at the end of the 21st century (Goswami, 2018). Defining a drought period as a consecutive number of intervals in which SPEI (Vicente-Serrano et al., 2010) values are less than −1, two characteristics are determined, namely extreme drought length and severity. Hydrological phenomena are often multidimensional and hence require the joint modeling of several random variables. Copula models have become a popular multivariate modeling tool in many fields where multivariate dependence is of interest and the usual assumption of multivariate normality is in question. Among copula-based drought frequency analyses, elliptical and Archimedean copulas have been the most popular. In this paper, we focus on copula-based multivariate drought frequency analysis considering drought duration and severity. The return period is defined as "the average time elapsing between two successive realizations of a prescribed event" (Salvadori et al., 2011). 
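The extraction of the two drought characteristics described above (a drought as a run of consecutive SPEI values below −1, with the run length as duration and the accumulated deficit as severity) can be sketched as follows; the sample SPEI series is invented for illustration:

```python
def drought_events(spei, threshold=-1.0):
    """Extract drought events from an SPEI series.

    A drought is a run of consecutive intervals with SPEI < threshold;
    duration is the run length, severity the absolute accumulated deficit.
    Returns a list of (duration, severity) tuples.
    """
    events = []
    duration, severity = 0, 0.0
    for value in spei:
        if value < threshold:
            duration += 1
            severity += abs(value)
        elif duration > 0:          # a drought run just ended
            events.append((duration, severity))
            duration, severity = 0, 0.0
    if duration > 0:                # series may end inside a drought
        events.append((duration, severity))
    return events

# Hypothetical monthly SPEI values:
spei = [0.3, -1.5, -2.0, -0.4, -1.1, 0.8]
print(drought_events(spei))  # -> [(2, 3.5), (1, 1.1)]
```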
In the univariate setting, the return period is generally defined as (Bonaccorso et al., 2003):

T = E(L) / (1 − F_X(x))          (1)

In this equation, T is the return period of a single variable X (drought duration D or severity S) being greater than or equal to a certain value, F_X(.) is the cumulative distribution function (CDF) of X, and E(L) is the expected inter-arrival time of successive droughts within the study period. The bivariate drought return periods are calculated as (Shiau, 2006):

T_{D∩S} = E(L) / P(D ≥ d, S ≥ s) = E(L) / (1 − F_D(d) − F_S(s) + C(F_D(d), F_S(s)))          (2)

T_{D∪S} = E(L) / P(D ≥ d or S ≥ s) = E(L) / (1 − C(F_D(d), F_S(s)))          (3)

where C is the copula function, T_{D∩S} denotes the joint return period for D ≥ d and S ≥ s, and T_{D∪S} denotes the joint return period for D ≥ d or S ≥ s. Results of a preliminary analysis based on Kendall's correlation and the upper-tail dependence coefficient, computed on the different datasets, show significant dependence between the considered pair. Archimedean copulas (Clayton, Frank, and Gumbel) are fitted to the joint S-D datasets (observations, CRU, HadGEM2-ES and IPSL-CM5A-MR) by the Maximum Pseudo-Likelihood Estimator (MPLE). The selected copula functions and marginal distributions were then used to calculate the joint return periods of severity and duration in the "and" and "or" cases. 
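As a sketch of how the joint "and"/"or" return periods of equations (2) and (3) are evaluated with an Archimedean copula, the following implements the Gumbel family; the copula parameter θ, the marginal probabilities and the inter-arrival time are hypothetical illustration values, not results from the paper:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel (Archimedean) copula C(u, v) for theta >= 1."""
    return math.exp(-((-math.log(u)) ** theta
                      + (-math.log(v)) ** theta) ** (1.0 / theta))

def joint_return_periods(Fd, Fs, theta, EL):
    """Joint 'and'/'or' return periods of drought duration and severity.

    Fd, Fs: marginal CDF values F_D(d), F_S(s);
    EL: expected inter-arrival time of droughts.
    """
    C = gumbel_copula(Fd, Fs, theta)
    T_and = EL / (1.0 - Fd - Fs + C)   # D >= d AND S >= s, Eq. (2)
    T_or = EL / (1.0 - C)              # D >= d OR S >= s, Eq. (3)
    return T_and, T_or

# Hypothetical values: 90th-percentile drought, theta = 2, inter-arrival 1.5 yr
T_and, T_or = joint_return_periods(0.9, 0.9, 2.0, 1.5)
print(round(T_and, 2), round(T_or, 2))  # the 'and' event is always rarer
```

Note that T_and ≥ T_or by construction, since the joint exceedance of both thresholds is rarer than the exceedance of either.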
The results showed that HadGEM2-ES has good skill in simulating the joint probability characteristics of drought. Results of the copula-based bivariate analysis showed that the study area will experience droughts of greater severity and duration in the future compared with the historical period. Projected changes in drought characteristics throughout the 21st century can help inform climate change assessments across drought-sensitive sectors. However, the ability of global climate models (GCMs) to reproduce the statistical attributes of observed drought should first be investigated. We evaluated the fidelity of the GCMs in simulating the probabilistic characteristics of drought in the southwest of the Karun Basin, where drought is a key climate impact.
Modeling and prediction of the ionospheric total electron content time series using support vector machine in 2007-2018
https://jesphys.ut.ac.ir/article_85437.html
The ionosphere is a layer of the Earth's atmosphere that extends from an altitude of 60 km to an altitude of 1,500 km. Knowledge of the electron density distribution in the ionosphere is very important and necessary for scientific studies and practical applications. Observations of global navigation satellite systems (GNSS), such as the Global Positioning System (GPS), are recognized as an effective and valuable tool for studying the properties of the ionosphere. Studies on ionosphere modeling in the Iranian region have shown that the global ionosphere maps (GIM) model, as well as empirical models such as IRI2016 and NeQuick, have low accuracy in this region. The main reason for their low accuracy is the lack of sufficient observations in the region. For this reason, this paper presents the idea of using learning-based methods to generate a local ionosphere model from the observations of GNSS stations. The main purpose of this paper is therefore to use three models, artificial neural networks (ANNs), the adaptive neuro-fuzzy inference system (ANFIS) and the support vector machine (SVM), to model and predict the time series of ionospheric TEC variations at the Tehran GNSS station. An adaptive neuro-fuzzy inference system (ANFIS) is a kind of ANN based on the Takagi-Sugeno fuzzy inference system. The technique was developed in the early 1990s (Jang, 1993). Since it integrates both neural networks and fuzzy logic principles, it has the potential to capture the benefits of both in a single framework. Its inference system corresponds to a set of fuzzy IF-THEN rules with the learning capability to approximate nonlinear functions; hence, ANFIS is considered a universal estimator. 
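The fuzzy IF-THEN inference underlying ANFIS can be sketched as a minimal two-rule first-order Sugeno system, whose output is the firing-strength-weighted average of the rule outputs; all membership parameters and rule coefficients here are hypothetical illustration values, not parameters from the paper:

```python
import math

def gauss(x, c, s):
    """Gaussian membership function centred at c with width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def sugeno(x):
    """Two-rule first-order Sugeno inference (all parameters hypothetical).

    Rule 1: IF x is LOW  THEN y = 0.5*x + 1
    Rule 2: IF x is HIGH THEN y = 2.0*x - 3
    The output is the firing-strength-weighted average of the rule outputs,
    the computation that ANFIS carries out across its layers and whose
    parameters it learns from data.
    """
    w1 = gauss(x, 0.0, 2.0)    # firing strength of rule 1 (x is LOW)
    w2 = gauss(x, 10.0, 2.0)   # firing strength of rule 2 (x is HIGH)
    y1 = 0.5 * x + 1.0
    y2 = 2.0 * x - 3.0
    return (w1 * y1 + w2 * y2) / (w1 + w2)
```

Near x = 0 rule 1 dominates and the output approaches 0.5x + 1; near x = 10 rule 2 dominates; in between the system blends the two linear models smoothly.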
The ANFIS architecture consists of five layers: the fuzzification layer, product layer, normalization layer, defuzzification layer, and total output layer. In machine learning, support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. More formally, an SVM constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks such as outlier detection (Vapnik, 1995). In the SVM method, nonlinear functions φ(x) map the input vector x from an N-dimensional space to an M-dimensional space (M > N). The number of hidden units (M) equals the number of support vectors, i.e. the learning data points closest to the separating hyperplane. The results of this paper show that the SVM has very high accuracy and capability in modeling and predicting the ionospheric TEC time series. This model has higher accuracy during periods of severe solar activity than the GIM and IRI2016 models, which are the traditional global ionospheric models. Since global models do not have acceptable accuracy over Iran owing to the lack of sufficient observations, the SVM can be used as a high-accuracy local ionosphere model. Using this model, the TEC value can be predicted with high accuracy for different times, including periods of severe solar activity. The model can be used in studies of the physics of the ionosphere as well as its temporal variations.
Near term (2021-2028) climate prediction of monthly temperature in Iran using Decadal Climate Prediction Project (DCPP)
https://jesphys.ut.ac.ir/article_85449.html
Decadal prediction is a general term that encompasses predictions on annual, interannual, and decadal timescales, for which significant progress has been made over the years. Decadal climate prediction is made using hindcasts and the latest generation of climate models, and provides two categories of data: hindcasts and predictions. The purpose of this study is to evaluate temperature from the DCPP and its prediction over Iran based on the available models of the DCPP contribution to the CMIP6 project. The study area of this research is Iran. For this purpose, daily temperature from 42 synoptic stations was used as observations to evaluate the available DCPP models. Unlike general circulation model (GCM) experiments, the DCPP runs are initialized, with a three-month time step for the implementation of each year. Air temperature from the two models BCC-CSM2-MR and MPI-ESM1-2-HR, with a horizontal resolution of 100 km, is available for the DCPP from the CMIP6 series. Three statistics, the Pearson correlation coefficient (PCC), root mean square error (RMSE) and mean bias error (MBE), were used to evaluate the selected DCPP models against the observational data (synoptic stations). Examining the relationship between the observations and the hindcasts of the two selected models, it is found that the BCC-CSM2-MR model shows a high correlation (0.99) in the mountainous areas of the Zagros and Alborz and in the arid and semi-arid regions of inland and eastern Iran, whereas the northern and southern coasts show a weaker correlation (between 0.92 and 0.97). Examination of the RMSE for the BCC-CSM2-MR model also shows maximum errors of 1.2 to 2.2 °C in the coastal areas of the country (the Caspian Sea and the Oman Sea). 
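The three evaluation statistics named above (PCC, RMSE, MBE) can be sketched as follows; the model and observation values are invented for illustration:

```python
import math

def evaluate(model, obs):
    """Pearson correlation (PCC), RMSE and mean bias error (MBE)
    between a model series and observations of equal length."""
    n = len(obs)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    pcc = cov / (sm * so)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    mbe = sum(m - o for m, o in zip(model, obs)) / n
    return pcc, rmse, mbe

# Hypothetical monthly-mean temperatures (°C), hindcast vs. station observations
model = [10.2, 14.8, 20.1, 24.9]
obs = [10.0, 15.0, 20.0, 25.0]
pcc, rmse, mbe = evaluate(model, obs)
```

A near-zero MBE with a nonzero RMSE, as in this toy case, indicates random rather than systematic error.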
The western and northern mountains of Iran show the minimum RMSE. The BCC-CSM2-MR model shows more bias than the MPI-ESM1-2-HR model in the northern regions of the country. Examination of the average monthly temperature anomaly across Iran in the prediction period relative to the hindcast period (1980-2019) showed that the monthly temperature anomaly is positive across the country, compared with the normal period, in all months of the year. The country-wide average value is 1.03 °C; in other words, the temperature in Iran will increase by about one degree in the near-term period (2021-2028) compared with the long-term period of the last 40 years (1980-2019). In this study, for the first time, a decadal climate prediction of Iran's monthly temperature is assessed using the output of the two available models BCC-CSM2-MR and MPI-ESM1-2-HR from the DCPP contribution to the Coupled Model Intercomparison Project Phase 6 (CMIP6). Evaluation of the models using the three statistical measures RMSE, MBE and PCC showed that the BCC-CSM2-MR model has the lowest performance in the coastal areas of Iran (the Caspian and Oman Seas) and the highest performance in the highlands. The output of the MPI-ESM1-2-HR model during the hindcast period (1980-2019) shows the good performance of this model in capturing the temperature patterns of the country; the minimum temperature from this model occurs in January, with a value of −6.28 °C. Examination of the predicted temperature anomaly (2021-2028) relative to the hindcast period (1980-2019) shows that the average anomaly across the country for the different months of the year is 0.99 °C.
The Effect of the type of training algorithm for multi-layer perceptron neural network on the accuracy of monthly forecast of precipitation over Iran, case study: ECMWF model
https://jesphys.ut.ac.ir/article_85443.html
Given the increasing number of atmospheric disasters in Iran, accurate monthly and seasonal forecasts of rainfall as well as temperature can help decision-makers plan better for the future. Machine learning methods are now widely used for predicting temperature and precipitation: the outputs of climate models are post-processed with the help of observational data and machine learning methods, yielding a more accurate forecast of temperature and precipitation (or other climatic variables). Among these, methods based on multilayer perceptron artificial neural networks are widely used. In a multilayer perceptron, the design of the network architecture is very important and can directly affect the ability of the network to solve the problem. In designing the architecture, questions such as the number of neurons in each layer, the number of layers, and the activation functions in each layer must be answered. In some cases there are methods for answering each of these questions, but in most cases a suitable architecture for the specific problem under study must be found by trial and error. One of the important steps in using machine learning methods in general, and the perceptron neural network in particular, is the training stage. During the training process, which amounts to solving a mathematical optimization problem, the optimal network weights are calculated as its adjustable parameters. Today, various types of artificial neural networks are used in many fields of atmospheric science and climatology for purposes such as classification, regression and prediction, but a fundamental question is how they are designed and built. One of the important points that designers should consider is choosing the right algorithm for network training. 
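The idea that training amounts to iteratively minimizing a loss over the network weights can be sketched for the simplest possible case, a single linear neuron fitted by plain gradient descent; the learning rate, iteration count and data are arbitrary illustration values, and the algorithms compared in the paper (e.g. Levenberg-Marquardt) are refinements of this basic update:

```python
# Minimal sketch: training one neuron by gradient descent on mean squared
# error. The data are hypothetical; the target relationship is y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b, lr = 0.0, 0.0, 0.05          # weights and learning rate
for _ in range(2000):
    gw = gb = 0.0
    for x, y in zip(xs, ys):       # gradient of the mean squared error
        err = (w * x + b) - y
        gw += 2 * err * x / len(xs)
        gb += 2 * err / len(xs)
    w -= lr * gw                   # gradient-descent update of the weights
    b -= lr * gb
```

After the loop, (w, b) converges to (2, 1); second-order methods such as Levenberg-Marquardt use curvature (Hessian) information to reach the same minimum in far fewer iterations.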
In this paper, six different methods for training a multilayer perceptron neural network are reviewed and compared for the monthly forecasting of precipitation: the Bayesian Regularization algorithm, the Levenberg-Marquardt algorithm, Conjugate Gradient with Powell/Beale Restarts, the BFGS Quasi-Newton algorithm, Scaled Conjugate Gradient and the Fletcher-Powell Conjugate Gradient method. In mathematical optimization methods based on derivatives and gradient vectors, the second-order derivative of the objective function, called the Hessian matrix, and its inverse play an essential role in the calculations. As the number of variables increases, the size of this matrix grows and computing its inverse becomes time-consuming; therefore, the improved optimization methods attempt to approximate the inverse Hessian of the objective function. Because the ECMWF model has six different lead times, 72 different models can be proposed for the 12 months of the year. Data for the period 1993 to 2010 were used for network training, and data for 2011 to 2016 for testing. To evaluate the performance of the different neural networks, three indices were used: the correlation coefficient, the mean square error and the Nash-Sutcliffe index. The results indicate that Bayesian Regularization, Levenberg-Marquardt and Conjugate Gradient with Powell/Beale Restarts outperform the other training algorithms.
Post Processing of WRF Model Output by Cokriging Method for Minimum and Maximum Temperature in Iran
https://jesphys.ut.ac.ir/article_85439.html
Weather forecasting and monitoring systems based on numerical weather prediction models are increasingly used to manage issues related to meteorology and agriculture, and more accurate minimum and maximum temperature forecasts can be helpful in this regard. However, systematic and random errors in the model affect the accuracy of the forecasts. In this study, the model errors over 5- and 14-day training periods are calculated, within climatically similar areas, at the grid points where observations are available. The errors are then generalized to all grid points using the cokriging interpolation method. This preserves the model forecasts at the other grid points, with only the error values applied to them. To better evaluate the model, the spatial and temporal distributions of the maximum and minimum temperature forecast errors across the country are also investigated. Observed daily maximum and minimum temperatures from 560 meteorological stations for the period 1/11/2019 to 1/2/2021 are used to evaluate the WRF model. The WRF model is run daily at 12 UTC with a forecast length of 120 hours; the first 12 hours of each run are considered model spin-up and are not used in the error calculations. To correct the maximum and minimum temperature forecast errors for the next three days (the 36-, 60- and 84-hour forecasts), the forecast for each day in the period 1/11/2019 to 1/2/2021 is extracted from the model outputs. To evaluate the error correction method, the skill score index is used. The validation results show that the mean absolute error, correlation coefficient and RMSE are all improved after the error correction compared with the uncorrected forecasts. This shows that the error correction method can be used for the other grid points that have no observational data. 
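One common definition of a mean-square-error skill score, used here as an assumed sketch of the evaluation step (the temperature values are invented, and the raw model output serves as the reference forecast), is:

```python
def skill_score(forecast, reference, obs):
    """Mean-square-error skill score: 1 - MSE(forecast) / MSE(reference).

    1 is a perfect forecast, 0 matches the reference, negative is worse
    than the reference.
    """
    def mse(f):
        return sum((fi - oi) ** 2 for fi, oi in zip(f, obs)) / len(obs)
    return 1.0 - mse(forecast) / mse(reference)

# Hypothetical Tmax values (°C): corrected vs. raw WRF output
obs = [31.0, 33.0, 30.0, 35.0]
raw = [27.0, 29.5, 24.0, 30.0]        # raw model output, cold-biased
corrected = [30.5, 33.5, 29.0, 34.5]  # after error correction
print(round(skill_score(corrected, raw, obs), 2))  # -> 0.98
```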
The results show that the RMSE of the raw model maximum (minimum) temperature forecasts for the next three days is approximately 6 °C (5 °C), which after error correction is reduced to 2 °C (4 °C). The correlation coefficient also increases significantly after correcting the model error compared with the raw model output. The average skill score of the raw minimum and maximum temperature forecasts for more than 50% of the days is above −1 and −1.9, respectively, but after correction the skill scores move closer to one and reach above zero for more than 75% of the days. Without exception, all climatic regions have a higher skill score after error correction than before, such that for most climatic regions the corrected skill score is above zero for more than 75% of the days. Before error correction, the warm semi-humid zone has the lowest average skill score among the climatic zones for forecasting maximum and minimum temperatures, but after error correction it reaches the highest value. In general, for areas with hot and dry climates, the raw-output skill score for predicting the minimum temperature is lowest in July, August and September. The 14-day error correction did not improve the skill score much compared with the 5-day correction; the two performed almost identically. In areas with a high elevation gradient, the model error increases. In general, the model underestimates the maximum and minimum temperatures in most areas. Knowing the spatial and temporal distribution of the model forecast error can help researchers form an overview of the areas (and months) where the forecast error is high.
On the design and implementation of digital filters to process meteorological signals
https://jesphys.ut.ac.ir/article_85460.html
Separation of the different frequency bands in the complex, composite signals of meteorological variables and climatic indices requires digital filtering methods, so that the information in different frequency bands can be organized and used. Given that these signals generally exhibit complex and nonlinear behavior, the use of mathematical filtering methods to identify their stochastic and periodic components leads to a better understanding of their behavior and also helps in modeling them. Therefore, the use of digital filters to recognize regular variability and facilitate statistical forecasting is one of the main goals in this field.
These filters can be designed and implemented in both the time and frequency domains. In the frequency domain, the process is based on the Fourier transform of the signals computed with the Fast Fourier Transform (FFT) algorithm, whereby the variance of the signal can be extracted at different frequencies through spectral analysis. By employing different types of non-recursive and recursive digital filters, which can be implemented as low-pass, high-pass, band-pass and band-stop filters, the filtered signal in the time domain can be constructed for each case and the corresponding spectrum studied. An isolated spectral band can be related to the effect of a particular phenomenon that influences the main signal. In addition, it is possible to remove the high-frequency components of the original signal, which include noise and may not contain important information; the original signal can also be optimally smoothed.
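The frequency-domain procedure described above can be sketched with a naive discrete Fourier transform (a plain O(N²) DFT standing in for the FFT) and an ideal low-pass that zeroes the bins above a cutoff; the two-component test signal is hypothetical:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (an FFT computes the same bins faster)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstructed signal."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def lowpass(x, cutoff):
    """Ideal low-pass: keep DFT bins k <= cutoff (and their mirror images)."""
    X = dft(x)
    N = len(X)
    X = [X[k] if (k <= cutoff or k >= N - cutoff) else 0.0 for k in range(N)]
    return idft(X)

# Hypothetical signal: a slow oscillation (2 cycles) plus a fast one (10 cycles)
N = 32
x = [math.sin(2 * math.pi * 2 * n / N) + math.sin(2 * math.pi * 10 * n / N)
     for n in range(N)]
y = lowpass(x, 4)  # only the slow, 2-cycle component survives
```

A high-pass, band-pass or band-stop filter follows the same pattern, differing only in which bins are retained.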
In this study, different digital filters were designed and then applied to meteorological data such as monthly surface temperature and precipitation. Two synoptic stations in Iran were selected and the corresponding discrete monthly signals were constructed for 504 months during 1979-2021. The moving average (MA) filter was used as the main filter, because it is the most common filter in digital signal processing (DSP) and the easiest digital filter to understand and use. In spite of its simplicity, the moving average filter is optimal for a common task such as reducing random noise while retaining a sharp step response, which makes it the premier filter for time-domain encoded signals. The filtering in this study is conducted to denoise the original signals and to examine their seasonal, annual and inter-annual components. Since the employed filters are digital, they must be applied to the initial discrete signal either as a convolution with the finite impulse response (FIR) of the filter in the time domain, or as a multiplication in the frequency domain based on the discrete Fourier transform, followed by the inverse Fourier transform to recover the desired signal.
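The time-domain convolution with the FIR impulse response of the moving average filter can be sketched as follows; the signal (a seasonal cycle plus a weak trend) is invented, and a 12-point window is chosen because it suppresses the annual cycle of a monthly series:

```python
import math

def moving_average(signal, n):
    """n-point moving-average FIR filter applied by direct convolution.

    The impulse response is n equal weights 1/n; the output has
    len(signal) - n + 1 points (no padding at the edges).
    """
    return [sum(signal[i:i + n]) / n for i in range(len(signal) - n + 1)]

# Hypothetical monthly anomaly series: an annual sinusoid plus a linear trend.
# A 12-point window averages the sinusoid over exactly one period, removing
# it and passing the slower, inter-annual component (the trend).
signal = [math.sin(2 * math.pi * m / 12) + 0.1 * m / 12 for m in range(48)]
smooth = moving_average(signal, 12)
```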
The results of this study show the importance of digital filters in analyzing the spectral content of meteorological signals. The Hamming filter, which is defined based on cosine truncation and windowing, attenuates the Gibbs oscillations in the sidelobes of the filter frequency response better than the simple moving average (MA) filter. In addition, a correlation analysis was carried out to quantify the linear relationships between the different frequency components of the signals. The highest correlations were observed in the annual frequency bands of the temperature and precipitation signals at the selected stations, reflecting the external climate forcing on both temperature and precipitation that stems from the Earth's motion around the Sun over the course of a year. Obviously, using more weights in the design of a filter can improve its performance, but more weights than necessary should be avoided.
A study of clear air turbulence by spontaneous imbalance theory
https://jesphys.ut.ac.ir/article_85450.html
Emission of inertia-gravity waves (IGWs) through imbalance is a well-known cause of clear air turbulence (CAT) in the upper troposphere. IGWs may initiate CAT by locally modifying environmental meteorological quantities such as static stability and wind shear. CAT is a micro-scale phenomenon for which there are also mechanisms other than IGWs. Accurate forecasting methods using numerical models and CAT diagnostic indices are still being studied and developed (Sharman and Lane, 2016). Following Knox et al. (2008) (hereafter KMW), the current study focuses on detecting CAT through spontaneous imbalance theory and the effect of IGWs on the flow.
For this purpose, the life cycle of the baroclinic waves, including their phases of growth, overturning and decay, as well as the generation and propagation of IGWs, are investigated by numerical simulation using the Weather Research and Forecasting (WRF) model in a channel of 4000 km length, 10000 km width and 22 km height in the zonal, meridional and vertical directions, respectively, on the f plane, with a horizontal resolution of 25 km and a vertical resolution of 0.25 km. Based on the wave–vortex decomposition (WVD) method, the unbalanced flow and the dimensional and non-dimensional IGW amplitudes have been estimated. In the next step, the non-dimensional wave amplitude has been alternatively determined for reference, based on the Lighthill–Ford theory of spontaneous imbalance in the KMW method. Then the turbulent kinetic energy (TKE) dissipation and eddy dissipation rate (EDR) have been calculated to determine the intensity and location of CAT.
The results showed that the KMW method uses a proportionality constant to make the non-dimensional wave amplitude of the order of the Rossby number, and determines the constant empirically by matching distributions of pilot reports of turbulence to the pattern of TKE dissipation. For this reason, the EDR has the best fit with the location of observed CAT and the minimum value of the Richardson number. In contrast, most values of the non-dimensional wave amplitudes calculated by the WVD and harmonic divergence analysis are less than unity and of the order of the Rossby number itself. At day 8, when the baroclinic wave and IGWs are at their peak of activity, the pattern of distribution of EDR by WVD indicates moderate turbulence all around the jet stream region, and the maximum values of EDR are located below the jet core and in the jet-exit region, similar to the location of wave activity and of CAT in previous studies. Also, minimum values of the Richardson number occur at the jet-exit region, where the maxima of EDR reveal moderate turbulence. The distribution of EDR by KMW, unlike that by WVD, shows no sign of turbulence in most areas of the flow except in a few patchy places near the jet region, where moderate turbulence is predicted. Thus, making use of an optimal WVD could improve the accuracy of detecting the unbalanced part of the flow and predicting areas of CAT in the upper troposphere in the vicinity of the jet stream.
Magnetic and IP/Res data inversion for investigation of the spatial relation between the geophysical models and mineralization in the southern Dalli Cu-Au porphyry deposit
https://jesphys.ut.ac.ir/article_85508.html
Because of declining high-grade ore deposits and increasing demand for metal resources, exploration of low-grade metal deposits, such as porphyries, has become feasible. Besides, most of the shallow metal ore deposits have been exhausted, and new prospecting projects focus on deeper deposits. Therefore, geophysical methods have gained more attention due to their ability to determine the physical properties of buried ore bodies. Hence, most countries, including Iran, make significant investments in the geophysical exploration of deep porphyry deposits. According to the widely accepted Lowell and Guilbert model for porphyry copper deposits, the ore-bearing zones mainly concentrate at the edge of the potassic alteration zone. Pyrite, a highly conductive and chargeable metallic mineral, is a significant attribute of the potassic alteration. The model also states that highly susceptible magnetite-bearing rocks mainly occur at the bottom of the pyrite shell and the ore body. Due to the presence of susceptible and conductive metallic minerals such as magnetite and pyrite in the potassic zone adjacent to the ore body in copper and gold porphyry deposits, using magnetometry, resistivity, and induced polarization methods gives reliable information about the location, depth, and shape of the deposits. In this research, we focus on the magnetic and IP/Res data of the southern Dalli porphyry deposit, a promising Cu-Au index located in the Urumieh-Dokhtar ore-bearing zone, Markazi Province. First, we applied standard processing techniques to remove the aliasing and the regional effect in the magnetic data. Then, using the analytic signal technique, we showed the concentration of the magnetic sources over the study area. We also applied the power spectrum and Euler deconvolution techniques to the magnetic data and estimated the depths of the magnetic sources.
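The power-spectrum depth estimation mentioned above can be sketched as follows. This is a Spector-Grant style estimate under the common convention ln P(k) ~ const - 2hk, with k the radial wavenumber in rad/km; the spectrum values here are synthetic stand-ins, not the Dalli data:

```python
import numpy as np

rng = np.random.default_rng(42)
h_true = 1.8                                  # assumed ensemble source depth (km)
k = np.linspace(0.05, 1.0, 40)                # radial wavenumber (rad/km)
# Synthetic radially averaged log power spectrum with a small amount of noise
ln_power = 10.0 - 2.0 * h_true * k + rng.normal(scale=0.05, size=k.size)

# Fit a straight line to ln P(k); the depth follows from the slope: h = -slope/2
slope, intercept = np.polyfit(k, ln_power, 1)
h_est = -slope / 2.0
```

In practice the fit is restricted to the low-wavenumber linear segment of the spectrum, and the depth convention depends on whether k is expressed in radians or cycles per unit length.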
The estimated depth from the power spectrum lies between the depths estimated from Euler deconvolution for possible sources with step and pillar shapes. Next, we used the average estimated depth from each depth estimation technique as the source depth in the depth weighting of a three-dimensional magnetic data inversion. We also studied the inversion results by combining cross-sections of the magnetic susceptibility model along the boreholes with the lithology and geochemical information from core sample analysis. The results indicate that the higher grades of gold and copper occur at the edge of the magnetic sources and possible magnetite mineralization zones. The inversion results using depth weighting with the depth extracted from the power spectrum show the best correlation and spatial relation with the geochemical data. Besides the magnetic data inversion, applying the Oldenburg and Li algorithm for two-dimensional inverse modeling, we extracted the resistivity and chargeability models of the underground bodies along an IP/Res profile in the study area. The resulting chargeability models show a significant relationship with the presence of gold and copper mineralization. We also compared the resulting two-dimensional resistivity and chargeability models with the corresponding magnetic susceptibility at cross-sections along the IP/Res profile. The comparison shows that the possible mineralization zones coincide with larger magnetic susceptibility values, high chargeability, and low resistivity. The results show good accordance with Lowell and Guilbert's model. Also, highly susceptible rock at shallower depth indicates that erosion has destroyed most of the possible orebody.
Paleostress Analysis and Evaluation of the Movement Potential of the Dochah Fault, Central Iran
https://jesphys.ut.ac.ir/article_85441.html
The Qom region is one of the most significant areas of Central Iran in terms of geological features. Several studies have investigated the Cenozoic strata in terms of sedimentology, stratigraphy and paleontology, but few detailed structural data are available from this area. The most important exposures of rock units west of the city of Qom belong to the Eocene volcanics and the Lower Red, Qom and Upper Red Formations. The major structures in this area are the Kamar Kuh and Mil anticlines, the Yazdan syncline, and the Dochah and Sefid Kuh faults. The Dochah Fault, with an E-W trend and a dip of ~70° to the north, is located at the northwestern termination of the Qom-Zefreh Fault, a recent sinistral strike-slip fault. This fault, with a length of ~15 km, separates the Mil anticline from the Yazdan syncline and eliminates the southern limb of the Dochah overturned anticline. In this study, we focused on the damage zone of the Dochah Fault in order to carry out a paleostress analysis using the geometric and kinematic characteristics of fault-slip data obtained from the deformed Qom and Upper Red Formations. For this purpose, 100 fault-slip data with precise geometric and kinematic characteristics were measured in the field and analyzed with the Daisy software using the Rotax method. To determine the sense of shearing of the faults, the criteria of Petit (1987) and Doblas (1998) were used. Although the trend of the major structures is east-west, most of the slip data are related to transverse oblique-slip faults, because the Dochah Fault passes through the soft materials of the Lower Red Formation, where it is difficult or impossible to find slickenlines.
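For the movement-potential evaluation that follows, the Lee et al. (1997) FMP relation can be sketched in a few lines. Only the 30°-60° branch is quoted in the text; the completion outside that interval is a plausible assumption, not taken from the source:

```python
def fault_movement_potential(theta_deg):
    """Fault Movement Potential after Lee et al. (1997): theta_deg is the angle
    between the maximum principal stress axis (sigma_1) and the fault-plane pole.
    The branches outside [30, 60] degrees are an assumed completion."""
    if 30.0 <= theta_deg <= 60.0:
        return (theta_deg - 30.0) / 30.0      # FMP = (theta - 30) / 30
    return 0.0 if theta_deg < 30.0 else 1.0   # assumption: clamp outside the range

fmp = fault_movement_potential(40.0)          # Dochah Fault: theta ~ 40 degrees
```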
Our results indicate attitudes of the maximum and minimum principal stress axes (σ1, σ3) of 030/05 and 285/05, respectively. Geometric and kinematic structural analysis of the Dochah Fault, together with the spatial arrangement of the principal stress axes, indicates dominantly left-lateral motion, especially in the western parts of the region, and an oblate field stress ellipsoid (R~0.7). Based on the shape of the field stress ellipsoid and the rotation of the fault data with respect to Anderson's theory for a compressive stress regime, a stress trajectory map has been prepared. The arrangement of the maximum stress trajectories is consistent with the general stress regime in the Iranian crust and with the activity of the Dochah Fault. Different criteria have been proposed to evaluate the activity of a fault in terms of seismicity, and empirical studies give various estimates of the portion of a fault that may be reactivated in each seismotectonic zone. Here, the movement potential of the Dochah Fault has been estimated by the method of Lee et al. (1997). In this method, the angular relationship between the maximum principal stress axis (σ1) and the pole of the fault plane is considered in order to evaluate the Fault Movement Potential (FMP) based on the relation FMP = f(G, σ). The angle between the maximum principal stress axis (σ1) and the pole of the Dochah Fault (θ) is ~40°, so FMP = 0.33 based on the equation FMP = (θ - 30°)/30° for θ ∈ [30°, 60°]. This value of FMP indicates the low seismic potential of the Dochah Fault for movement and for creating earthquakes.
Statistical modeling of the mean annual temperature at Mehrabad station, Tehran
https://jesphys.ut.ac.ir/article_85447.html
Regarding climate change and global warming, the future behavior of climate elements should be predicted and understood. Therefore, in this study, ARIMA statistical models were fitted to the time series of the mean annual temperature at Mehrabad station in Tehran during 1951-2015, using trial and error to identify the most appropriate model. Since the time series of observations had a normal distribution, modeling was performed without applying a Box-Cox transformation. First, to investigate stationarity, the time series of annual mean temperature observations was plotted. In addition, first- and second-order regression line equations were used to further verify the type of behavior of the time series. The results showed that the time series behavior of temperature at this station is linear; accordingly, the differencing order d = 1 was chosen. Second, first-order differencing was applied to the time series. In the third step, the orders p and q were determined using the autocorrelation and partial autocorrelation of the differenced values (w_t). After investigating the significance of the components of each model, the following models were selected as significant:
1) ARIMA(0,1,1)_{theta_0}
2) ARIMA(2,1,0)_{theta_0}
Since the first significant model was regarded with some suspicion, each of the components (p, d, q) of the above two models was tested up to the 3rd order. Finally, these two models were retained as significant. The Akaike information criterion (AIC) was then used to determine the most appropriate of the two; ARIMA(0,1,1)_{theta_0} had the minimum AIC. As a result, using this model, the temperature time series at this station was predicted beyond the end of the period for a length equal to ¼ of the original series. Given the concept of uncertainty underlying descriptive and inferential statistics, uncertainties should be expressed with high statistical confidence. In this regard, we applied statistical tests of autocorrelation, the Pearson correlation coefficient, standard normal homogeneity, cumulative deviations, turning points and signs to the residual time series of the ARIMA(0,1,1)_{theta_0} model, together with graphical methods for residual normality, residual independence and constant residual variance, and the portmanteau test, as further criteria to increase the statistical reliability of the applied model. The results of all statistical tests showed that the residual time series of the model is random. These tests showed that the best model for the time series of the mean annual temperature at Mehrabad station, Tehran is ARIMA(0,1,1)_{theta_0}. Since the upper and lower limits of the predicted series, as well as the predicted observations, show the same behavior as the temperature time series at Mehrabad station, the estimated values of this model remain appropriate for predicting the temperature variable at this station.
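The AIC comparison between candidate models can be sketched as follows, using the Gaussian-likelihood form AIC = 2k + n ln(RSS/n). The residual series below are synthetic stand-ins, not the fitted Mehrabad models:

```python
import numpy as np

def aic_gaussian(residuals, n_params):
    """AIC under Gaussian errors: 2k + n*ln(RSS/n)."""
    n = residuals.size
    rss = np.sum(residuals ** 2)
    return 2 * n_params + n * np.log(rss / n)

rng = np.random.default_rng(1)
# Hypothetical one-step-ahead residuals of two fitted candidate models
res_ma1 = rng.normal(scale=0.9, size=65)   # stands in for ARIMA(0,1,1) + theta_0 (k = 2)
res_ar2 = rng.normal(scale=0.9, size=65)   # stands in for ARIMA(2,1,0) + theta_0 (k = 3)

# The model with the minimum AIC is preferred
best = min([("ARIMA(0,1,1)", aic_gaussian(res_ma1, 2)),
            ("ARIMA(2,1,0)", aic_gaussian(res_ar2, 3))], key=lambda m: m[1])
```

With comparable residual variances, the extra AR parameter penalizes the second model, which matches the paper's selection of the more parsimonious ARIMA(0,1,1).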
Finally, the results showed that the mean temperature of the predicted series is likely 17.742 °C, and that the mean annual temperature will increase by 0.038 °C compared to the previous year.
Analysis and prediction of EOP time series using LSHE+ARMA method
https://jesphys.ut.ac.ir/article_85451.html
The rotation of the solid Earth with respect to inertial space is not constant, due to changes in external gravitational forces and internal dynamics. Earth orientation parameters (EOP), including the Earth's polar motion (PM), anomalies in the Earth's angular velocity, and celestial pole offsets (CPO), describe these irregularities in the Earth's rotation. Anomalies in the axis defined by the celestial intermediate pole (CIP) with respect to the Z axis of the terrestrial reference system are named PM. The CPO are expressed as the deviations, dX and dY, between the observed CIP and the conventional CIP position. The difference between the smoothed principal form of universal time UT1 and coordinated universal time UTC denotes the Earth's rotation angle, which, together with the xp, yp terrestrial pole coordinates, forms a set of Earth orientation parameters (EOP). In addition to the other EOP, the length of day (LOD) is used to model anomalies in the Earth's rotation rate. LOD is the difference between the duration of the day measured by space geodesy and the nominal day of 86,400 s duration.
Generally, EOP are the parameters that provide the rotation from the International Terrestrial Reference System (ITRS) to the International Celestial Reference System (ICRS) as a function of time. Although the EOP are computed using modern space geodetic techniques such as Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), Satellite Laser Ranging (SLR), Very Long Baseline Interferometry (VLBI) and the Global Navigation Satellite System (GNSS), they are unavailable for real-time applications due to data processing complexities. Accurate and rapid EOP predictions are required in different fields such as precise orbit determination of artificial Earth satellites, positional astronomy, space navigation and geophysical phenomena.
There are many different methods for the analysis and prediction of EOP time series, including deep learning methods, least squares (LS) combined with autoregressive (AR) models, and Singular Spectrum Analysis as a non-parametric method.
In this research, Least Squares Harmonic Estimation analysis is used to investigate the frequencies of the EOP. First, the solid and ocean tide terms are modeled based on the IERS technical notes and removed from the LOD time series; the remaining series is named the LODR time series. Univariate time series analysis is then applied to the LODR time series, and multivariate analysis is used to detect the PM periodic patterns. Applying these methods to 41 years of EOP observations (from 1 January 1980 to 31 December 2020) revealed the Chandler, annual, semi-Chandler and semi-annual signals as the main periodic signals in the EOP time series. The functional model is then formed using all detected signals in order to model the deterministic variations of the EOP time series.
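A minimal sketch of such a least-squares harmonic fit (assumed periods of ~433 days for the Chandler term and 365.25 days for the annual term; the series and amplitudes below are synthetic, not the actual EOP data):

```python
import numpy as np

periods = np.array([433.0, 365.25])            # assumed Chandler and annual periods (days)
t = np.arange(3000.0)                          # daily epochs

rng = np.random.default_rng(7)
truth = 1.5 * np.cos(2 * np.pi * t / 433.0) + 0.8 * np.sin(2 * np.pi * t / 365.25)
y = truth + rng.normal(scale=0.2, size=t.size)

# Design matrix: a bias column plus a [cos, sin] pair per detected period
cols = [np.ones_like(t)]
for p in periods:
    w = 2 * np.pi / p
    cols += [np.cos(w * t), np.sin(w * t)]
A = np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ coeffs                     # left for stochastic (e.g. ARMA) modeling
```

The residual series carries the non-deterministic variations, which the paper then models with an ARMA process.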
In order to model the remaining non-deterministic variations, an ARMA (autoregressive moving average) model is fitted to the least-squares residuals. The Akaike Information Criterion (AIC) is used to find the optimal order of the ARMA model.
The EOP are then predicted for the first 20 days of 2021, using the pre-identified functional model for the deterministic part and the ARMA model for the non-deterministic part of the time series variations. For the prediction of the LOD time series, after creating the functional model of the LODR time series, the solid and ocean tide terms are added back to the functional model of LODR.
Finally, in order to validate the accuracy of the proposed method, a comparison is made with an EOP prediction study that used the ANN (Artificial Neural Network) and ANFIS (Adaptive Network Based Fuzzy Inference System) methods for short term prediction of EOP.
The results show that the accuracy of the proposed method is better than that of the previous study, and that the method can be used for accurate prediction of EOP time series.
Interpolation of horizontal GPS velocity field in the oblique collision zone of Arabia-Eurasia tectonic plates using Green's functions
https://jesphys.ut.ac.ir/article_86899.html
One way to grid two-dimensional vector data is to grid each component separately. Alternatively, using Green's functions we can grid the two components simultaneously in a way that couples them through elastic deformation theory. This is particularly suited, though not exclusively, to data that represent elastic or semi-elastic deformation, such as horizontal GPS velocity fields. Measurements made on the surface of the Earth are often sparse and unevenly distributed. For example, GPS displacement measurements are limited by the availability of ground stations, and airborne geophysical measurements are highly sampled along flight lines but often leave large gaps between lines. Many data processing methods require data distributed on a uniform regular grid, particularly methods involving the Fourier transform or the computation of directional derivatives. Hence, the interpolation of sparse measurements onto a regular grid (known as gridding) is a prominent problem in the Earth Sciences.
In this research, sparse two-dimensional vector data of the horizontal GPS velocity field are interpolated using Green&rsquo;s functions derived from elastic constraints. The method is based on the Green&rsquo;s functions of an elastic body subjected to in-plane forces. This approach ensures elastic coupling between the two components of the interpolation. Users may adjust the coupling by varying Poisson&rsquo;s ratio. Smoothing can be achieved by ignoring the smallest eigenvalues in the matrix solution for the strengths of the unknown body forces. The study area is the oblique collision zone of Arabia-Eurasia tectonic plates, which has a GPS velocity field with sparse distribution.
Since the Green's functions were developed for a half-space, the Mercator map projection was used to create the half-space for interpolation and gridding. The data were split into training and testing sets: the gridder was fitted on the training set, and the testing set was used to evaluate how well the gridder performs. The vector gridding was done using a Poisson's ratio of 0.5 to couple the two horizontal components, and the result was then scored on the testing data. The best possible score is 1, meaning a perfect prediction of the test data. Evaluating the gridding accuracy with the mean square deviation ratio (MSDR), a score of 0.86 was obtained for this statistic.
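The scoring described above (best possible score of 1) behaves like a coefficient of determination computed on the held-out data. A minimal sketch with hypothetical velocity values, not the actual Arabia-Eurasia data:

```python
import numpy as np

def r2_score(observed, predicted):
    """Coefficient of determination: 1 means a perfect prediction of the test data."""
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical held-out east-component velocities (mm/yr) and gridder predictions
obs = np.array([2.1, 3.4, -1.2, 0.8, 2.9, -0.5])
pred = np.array([2.0, 3.1, -1.0, 1.0, 2.7, -0.4])
score = r2_score(obs, pred)
```

A score near 1 on the testing set indicates that the fitted gridder generalizes beyond the training stations.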
While this method is not new, it provides some insight into the behavior of the coupled interpolation for a wide range of Poisson&rsquo;s ratio. This approach provides improved interpolation of sparse vector data when the physics of the deforming material follows elasticity equations.
We interpolated our horizontal GPS velocities onto a regular geographic grid with 1 arc-second spacing, masked the data far from the observation points, and finally calculated the residuals between the predictions and the original input data. Interpolation of the horizontal GPS velocity fields of local geodynamic networks is proposed as a way to estimate the Poisson's ratio values that perform best in gridding validation.
In this study, two-dimensional GPS data were interpolated. Three-dimensional GPS data can also be gridded, using the Green's functions provided by Uieda et al. (2018). It is also recommended to use different Green's functions to grid different types of spatial data.
The effect of sudden stratospheric warming on the height and temperature variations of the thermal tropopause in the northern hemisphere (1979-2020)
https://jesphys.ut.ac.ir/article_86900.html
A sudden stratospheric warming (SSW) represents a large-scale perturbation of the polar winter stratosphere, which substantially influences the temperature and circulation of the middle atmosphere as well as the contents of atmospheric species. SSWs occur mostly in middle and late winter and almost exclusively in the Northern Hemisphere. During an event, the polar stratospheric temperature increases by several tens of degrees Celsius within a few days and eventually becomes warmer than that of the mid-latitudes, reversing the climatological temperature gradient. At the same time, the prevailing westerly wind speed decreases rapidly and the wind becomes easterly.
The tropopause is a transition layer between the troposphere and the stratosphere. The occasional exchange of air, water vapor, trace gases, and energy between the troposphere and the stratosphere occurs in this layer. Conceptually, two different tropopauses are defined: the thermal tropopause and the dynamical tropopause. The conventional definition is the thermal tropopause, which is detected based on the marked disruption of the vertical temperature lapse rate and on the fact that the stratosphere is more stably stratified than the troposphere. The thermal tropopause is defined as the lowest level at which the lapse rate decreases to 2 K/km or less, provided that the average lapse rate between this level and all higher levels within 2 km does not exceed 2 K/km. The original concept of the dynamical tropopause was based on the isentropic gradient of potential vorticity; the dynamical tropopause is typically determined within a thin layer with absolute PV values between 1 pvu and 4 pvu.
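The thermal tropopause criterion above translates directly into code. A minimal sketch on an idealized profile (the 6.5 K/km lapse rate and the 11 km kink are assumptions for the demo, not observed data):

```python
import numpy as np

def thermal_tropopause(z_km, T):
    """Lowest level where the lapse rate (-dT/dz) drops to 2 K/km or less and the
    mean lapse rate to every higher level within 2 km also stays <= 2 K/km."""
    lapse = -np.diff(T) / np.diff(z_km)              # K/km between adjacent levels
    for i, g in enumerate(lapse):
        if g <= 2.0:
            z0, T0 = z_km[i], T[i]
            above = (z_km > z0) & (z_km <= z0 + 2.0)
            mean_lapse = -(T[above] - T0) / (z_km[above] - z0)
            if np.all(mean_lapse <= 2.0):
                return z0
    return None

# Idealized profile: 6.5 K/km lapse up to 11 km, isothermal above
z = np.arange(0.0, 20.0, 0.25)                       # height (km), 0.25 km spacing
T = np.where(z <= 11.0, 288.0 - 6.5 * z, 288.0 - 6.5 * 11.0)
zt = thermal_tropopause(z, T)
```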
The vertical temperature stratification of the atmosphere plays a basic role in atmospheric motions. In this paper, the Brunt–Väisälä frequency (N²) is used to detect changes in stratospheric static stability.
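A minimal sketch of computing N² from pressure-level data via the potential temperature, θ = T (p0/p)^0.286 and N² = (g/θ) dθ/dz; the profile values below are hypothetical lower-stratospheric levels, not the reanalysis data:

```python
import numpy as np

g = 9.81                                   # gravitational acceleration (m/s^2)
p0 = 1000.0                                # reference pressure (hPa)

# Hypothetical pressure levels (hPa), temperatures (K) and geometric heights (m)
p = np.array([300.0, 250.0, 200.0, 150.0, 100.0])
T = np.array([228.0, 221.0, 217.0, 216.0, 217.0])
z = np.array([9160.0, 10360.0, 11780.0, 13600.0, 16180.0])

theta = T * (p0 / p) ** 0.286              # potential temperature
N2 = (g / theta[:-1]) * np.diff(theta) / np.diff(z)   # Brunt-Vaisala frequency squared (s^-2)
```

Positive N² indicates static stability; a decrease of the stratospheric N² anomaly during an SSW is the signal analyzed in the paper.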
In this paper, the NCEP/NCAR reanalysis daily data, including the temperature at different pressure levels (1000 hPa-10 hPa) and the tropopause temperature and pressure from 1 January 1961 to 31 December 2020 in the northern hemisphere, are used. The study region covers geographical longitudes 0° to 357.5° and geographical latitudes 0°N to 90°N. The northern hemisphere is divided into three non-overlapping 30° latitudinal bands, called the tropical band (0°N-27.5°N), the middle-latitude band (30°N-57.5°N) and the polar band (60°N-90°N). First of all, the potential temperature and the Brunt-Väisälä frequency (N²) at different pressure levels are calculated; then the average zonal mean temperatures at 10 hPa, the tropopause temperatures, the tropopause pressures and the values of N² in the three regions are obtained. To represent the variations of the tropopause height during sudden stratospheric warmings, the daily anomalies of these parameters in the regions are calculated and analyzed.
The daily zonal mean tropopause temperature and pressure changes in the three latitudinal regions during eighteen major and one minor sudden stratospheric warming (SSW) events are analyzed in this study. The results show that all 19 SSW events in the statistical period 1979-2020 were associated with positive anomalies of the zonal mean temperature and pressure of the tropopause, i.e. an increase of the tropopause temperature and a lowering of its height, corresponding to a downward development of the stratosphere and a thinning of the troposphere. In addition, the tropopause height reduction in the polar band was greater than in the middle-latitude band. It was also shown that the static stability in the stratosphere (positive mean N² anomaly) increased before the SSW and decreased during the SSW (negative mean N² anomaly). These changes are greater in the polar cap band than in the middle-latitude band. This result reveals that the static stability structure in the lower stratosphere and upper troposphere over the polar cap is more affected by SSWs than in other regions.
An assessment of WRF-NMM configurations in heavy rainfalls over the Bushehr Province during 2000-2020, using cumulus and boundary layer schemes
https://jesphys.ut.ac.ir/article_86901.html
The mesoscale numerical weather prediction system of Weather Research and Forecasting (WRF), with its two cores ARW and NMM, has been used for atmospheric research, operational forecasting, and dynamical downscaling of Global Climate Models. Many parameterizations for each physics option can be accessed in this model. It is noteworthy that the performance of the model depends on the selected configuration and varies between areas; therefore, choosing the configuration with the lowest error for each terrain is essential. Here, the performance of various physics schemes, including the cumulus and boundary layer schemes of the WRF-NMM model, was examined to simulate the twelve heaviest extreme rainfall events in the southwest of Iran, the Bushehr Province, during 2000-2020. These events lasted for eighteen days. Three domains with 27, 9, and 3 km resolution were used in the configuration, with no cumulus option for the smallest one. The initial and boundary conditions were taken from the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 (ERA5) datasets. One hundred and eight simulations were run using the six cumulus schemes KF, BMJ, SAS, oldSAS, NSAS, and TiedTKE, and seventy-two runs were done to evaluate the boundary layer schemes MRF, MYJ, QNSE, and YSU. The simulated precipitation patterns were assessed using two observational data sets: (I) in-situ measured data from eleven automatic weather stations and (II) grid-point data from the Global Precipitation Measurement (GPM) satellite with 0.1-degree horizontal resolution. Four statistical indices (Root Mean Square Error, Correlation Coefficient, Standard Deviation, and Bias) were applied in the evaluation process. The evaluation against the data measured at the 11 automatic weather stations was done using the outputs of the third domain, while the outputs of the second domain were used for the evaluation based on GPM data at grid points.
For a comprehensive analysis, the assessment was performed separately for rainfall events (March-April and November-December events) and for coastal and non-coastal stations. Comparison of precipitation from simulations with various cumulus schemes against the eleven in-situ datasets showed that the schemes of the SAS family performed well for the March-April events at coastal and non-coastal stations, while the KF scheme produced the least error at coastal and non-coastal stations during the November-December events. The precipitation data from the 1271 GPM grid points revealed that the oldSAS scheme generated the least error for both the March-April and November-December events. Based on the GPM grid-point results, the oldSAS scheme was chosen as the cumulus option for the subsequent runs. Evaluation of the WRF-NMM simulations with different boundary layer physics against the in-situ data indicated that the MRF scheme produced the smallest error at coastal and non-coastal stations for both the March-April and November-December events. Using the 1271 GPM grid-point data showed that the QNSE and MRF (MYJ and MRF) options performed best for the March-April (November-December) events. In conclusion, based on the GPM grid-point data compared with the in-situ measurements, it is suggested that the oldSAS cumulus scheme and the MRF boundary layer scheme can be chosen with some robustness for predicting the amount and pattern of heavy rainfall in the Bushehr Province of Iran. It is also notable that the default cumulus and boundary layer options of the WRF-NMM model produce the largest errors and are not appropriate for the selected area, which reveals the importance of adequately selecting physics options for this area.
Effect of non-thermal and trapped electrons on solitary waves and chaos in auroral acceleration regions
https://jesphys.ut.ac.ir/article_86902.html
In this paper, using the reductive perturbation method, the propagation of nonlinear solitary waves, the chaos phenomenon and its stability were studied in auroral acceleration regions in the presence of electrons with the Cairns-Gurevich distribution function. Using the continuity, momentum transfer, and Poisson equations, taking the electron density from the Cairns-Gurevich distribution function, and using two different models, the Korteweg–De Vries (KdV) and modified KdV equations were obtained. It was shown that the solutions of these equations are in the form of solitary waves. The effects of non-thermal and trapped electrons and of the wave velocity on these waves were studied. In the next section, pseudo-potentials and the total mechanical energy were obtained. Considering a quasi-periodic factor, the KdV and modified KdV equations were revisited, and chaos and its stability were studied in the auroral acceleration regions. Results showed that by increasing the wave velocity and the non-thermal and trapped parameters, the amplitude of the field increases and the depth of the potential well increases; these results confirm each other. It was indicated that in the case b=0, this distribution function reduces to the Maxwellian distribution function. In the case b&gt;0, in addition to free particles, the trapped and non-thermal particles also affect the distribution function. In this case, the width of the distribution function becomes larger, which indicates that more energetic electrons are present. It is also concluded that for both nonlinear equations, the solutions can exist in the form of rarefactive and compressive solitons. Three-dimensional graphs of the total mechanical energy were also plotted for different values of the wave velocity and the non-thermal and trapped parameters. The results for this case also showed that for the total energy E1, by increasing the b parameter, the energy deviates from the uniform function and reaches the saddle state.
It was also shown that the effect of the wave velocity is similar to that of the b parameter. It was found that for different values of the U and b parameters, the behavior of the total energy E2 differs from that of E1. Poincaré return map diagrams confirmed the existence of a closed cycle, indicating chaos in these plasmas. The results of this section also showed that for solitons with the function ψ1, increasing the U parameter enlarges the Poincaré return map cycle region, and the Poincaré return map lines become more focused. For solitons with ψ1 functions, by increasing the wave velocity, the Poincaré return map goes from a quasi-stable state to a stable state. By increasing the quasi-periodic frequency, the Poincaré return map goes from a steady state to a quasi-steady state, so that one cycle converts into two cycles with a certain overlap. Finally, using realistic parameters, the wave velocity was found to lie in the interval 13 km/s &lt; v' &lt; 52 km/s, the electric field was approximately 5 mV/m, and the Debye length was 15 m. The results of the present work are in good agreement with observations from the Viking, Freja, and S3-3 satellites.
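As a hedged illustration of the solitary-wave solutions discussed above, the single soliton of the standard KdV equation u_t + 6 u u_x + u_xxx = 0 (a generic normalization, not the paper's plasma-specific coefficients) can be verified numerically:

```python
import numpy as np

def kdv_soliton(x, t, c):
    """Single-soliton solution u = (c/2) sech^2(sqrt(c)/2 * (x - c t))
    of u_t + 6*u*u_x + u_xxx = 0, traveling at speed c."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

c, dx, dt = 1.0, 0.01, 1e-4
x = np.arange(-20.0, 20.0, dx)

u = kdv_soliton(x, 0.0, c)
u_t = (kdv_soliton(x, dt, c) - kdv_soliton(x, -dt, c)) / (2 * dt)  # central diff in t
u_x = np.gradient(u, dx)
u_xxx = np.gradient(np.gradient(u_x, dx), dx)

residual = u_t + 6 * u * u_x + u_xxx      # ~0 if u really solves the KdV equation
```

The taller the soliton (larger c), the faster and narrower it is, which mirrors the amplitude-velocity coupling described for the plasma solitons.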
Keywords: Reductive perturbation method, Soliton waves, Cairns-Gurevich distribution function, Chaos phenomenon, Poincaré return map, Auroral acceleration regions.

Post Processing of WRF Model Output by Cokriging Method for Daily Average Wind Speed and Relative Humidity on Iran
https://jesphys.ut.ac.ir/article_86903.html
Weather forecasting and monitoring systems based on numerical weather prediction models are increasingly used to manage issues related to meteorology and agriculture. More accurate forecasts of daily average wind speed (10 m) and relative humidity can be helpful in this regard, but systematic and random model errors affect forecast accuracy. In this study, the model errors over 5- and 14-day training periods were calculated, within climatically similar areas, at the grid points where observations are available. The errors were then generalized to all grid points using the cokriging interpolation method. This preserves the model forecasts at the other grid points; only the error values are applied to them. To evaluate the model further, the spatial and temporal distributions of the daily average wind speed (10 m) and relative humidity forecast errors across the country were also investigated. Observed daily wind speed and relative humidity data from 560 meteorological stations for the period 1/11/2019 to 1/2/2021 were used to evaluate the WRF model. The WRF model is run daily at 12 UTC with a forecast length of 120 hours; the first 12 hours of each run are treated as model spin-up and are not used in the error calculation. To correct the wind speed and relative humidity forecast errors for the next three days (the 36-, 60-, and 84-hour forecasts), the forecasts for each day in the period 1/11/2019 to 1/2/2021 were extracted from the model outputs. The skill score index was used to evaluate the error-correction method. The validation results showed that the mean absolute error, correlation coefficient, and RMSE all improved after the error correction, which indicates that the method can be used at other grid points that have no observational data. 
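The abstract does not spell out its skill-score formula; a common MSE-based definition (an assumption here, not necessarily the paper's exact index) measures the improvement of a corrected forecast against the raw model taken as reference:

```python
import numpy as np

def rmse(forecast, obs):
    """Root mean square error between forecast and observation."""
    return float(np.sqrt(np.mean((forecast - obs) ** 2)))

def skill_score(forecast, reference, obs):
    """MSE-based skill score SS = 1 - MSE_f / MSE_ref: 1 is a perfect
    forecast, 0 matches the reference, negative is worse than it."""
    mse_f = np.mean((forecast - obs) ** 2)
    mse_r = np.mean((reference - obs) ** 2)
    return float(1.0 - mse_f / mse_r)

obs = np.array([3.0, 5.0, 4.0, 6.0])   # station wind speeds, m/s (toy values)
raw = obs + 1.0                         # raw model with a +1 m/s systematic bias
corrected = obs + 0.3                   # the same model after a bias correction
print(skill_score(corrected, raw, obs))  # positive: the correction adds skill
```

A negative skill score for the raw forecast, as reported below, simply means the raw model is worse than the chosen reference.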
Overall, after correction, the RMSE of the wind speed and relative humidity forecasts decreases by 13% and 18%, and the skill score increases by up to 160% and 308%, respectively. The correlation coefficient also increases significantly after the error correction compared with the raw model output. In general, the skill score of the raw wind speed and relative humidity forecasts was above -0.5 and -0.3 for more than 50% of the days, but after correction it increases to 0.2 and 0.4, respectively. Without exception, all climatic regions have a higher skill score after error correction than before, so that for most climatic regions the model skill score rises above zero for more than 75% of the days. The results showed that the model error does not have a uniform distribution across months, locations, and climatic zones. In general, the model underestimates wind speed and overestimates relative humidity in most areas. The lowest skill scores for relative humidity forecasts occur in the colder months, November to February, in most climatic zones. The 14-day error-correction method did not improve the model skill score much compared with the 5-day method; the two performed almost identically. Knowing the spatial and temporal distribution of the model forecast error can help researchers form an overview of the areas (and months) where the forecast error is high.

Temporal variability analysis of measured surface ozone at the Geophysics Institute Station of the Tehran University
https://jesphys.ut.ac.ir/article_86906.html
Near-surface ozone (O$_{3}^{surf}$), i.e. tropospheric ozone at ground level, is a secondary air pollutant that harms human health and plants by damaging respiratory systems. This species is also one of the main greenhouse gases associated with global warming and climate change. Despite many efforts to study it and to establish control policies, this gas is still increasing and remains a serious threat to humans. A comprehensive understanding of its variation and controlling factors is therefore necessary for a precise regulation plan.
Here, a measured time series of O$_{3}^{surf}$ at one of the air quality monitoring sites in Iran, the Geophysics Institute of the University of Tehran, was selected to assess the O$_{3}^{surf}$ variation in more detail. Although this time series has been measured since 2007, there are many gaps in the data and a few years without data; nevertheless, the data are of high quality, as discussed in this paper. The series was prepared for a period of four years: 2007-2008 and 2019-2020.
The data series was decomposed into five spectral components, i.e. intraday (ID), diurnal (DU), synoptic (SY), seasonal (SE), and baseline (BL), by applying the Kolmogorov-Zurbenko (KZ) filter. This filter was introduced by Kolmogorov and later formalized by Zurbenko in 1997. Theoretically, the KZ filter consists of an iterated running moving average (MA), in which a simple MA of m points is computed by:
S(t) = $\frac{1}{m} \sum_{j=-(m-1)/2}^{(m-1)/2} ORG(t+j)$
where ORG and t represent the original time series and its time steps, respectively, and S is the input for each iteration. Therefore, the filter can be expressed as:
KZ$_{m,k}$ = R$_{i=1}^{k}$ {J$_{p=1}^{w_i}$ [S(t$_{i}$)$_{p}$]}
Here m and k are the window length and the number of iterations, respectively. R and J represent the iteration and the running window, respectively, and w$_{i}$ is defined as:
w$_{i}$ = L$_{i}$ - m + 1
where L$_{i}$ is the length of S(t$_{i}$). KZ$_{m,k}$ is a low-pass filter in which high-frequency (short-period) variations are removed from the time series. The frequency band and the level of suppression of this filter are controlled by m and k, respectively. Here, the ozone time series was decomposed into five spectral components as:
ORG(t) = ID(t$_{<12h}$) + DU(t$_{12h-2.5d}$) + SY(t$_{2.5d-21d}$) + SE(t$_{21d-365d}$) + BL(t$_{>365d}$)
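The iterated moving average above can be sketched directly. The window length and iteration count below are illustrative choices (not the values used for the five bands in the paper), and edge handling is simplified by edge padding:

```python
import numpy as np

def kz_filter(series, m, k):
    """Kolmogorov-Zurbenko filter: k passes of an m-point centered
    moving average (m must be odd). Edge padding keeps the output
    aligned with the input time axis (a simplification)."""
    out = np.asarray(series, dtype=float)
    half = (m - 1) // 2
    kernel = np.ones(m) / m
    for _ in range(k):
        out = np.convolve(np.pad(out, half, mode="edge"), kernel, mode="valid")
    return out

# Smooth a noisy annual cycle: sub-monthly variability is suppressed,
# roughly isolating the slower (SE + BL) part of the signal.
t = np.arange(730)                                     # two years, daily
seasonal = 10.0 * np.sin(2 * np.pi * t / 365)
ozone = seasonal + np.random.default_rng(0).normal(0, 3, t.size)
smoothed = kz_filter(ozone, m=29, k=3)
```

A band such as SE is then obtained by differencing two KZ outputs with different cutoffs, which is how a single low-pass filter yields the five-component decomposition.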
The results indicate that the contribution of each component to the O$_{3}^{surf}$ variability differs: the DU component accounts for more than 50% of the ozone variability. In fact, this component produces most of the ozone variability, which is attributed to the variation of light (daytime-nighttime). The SE component has the second-largest contribution to the O$_{3}^{surf}$ variability. The contribution of the SY component varies from year to year; for example, its relative contribution is 8.93% in 2007 and 4.84% in 2019. Only 5% of the total O$_{3}^{surf}$ variability is produced by the ID component. This implies that the contribution of each component to the total O$_{3}^{surf}$ variability is different, and this information should be considered in ozone control strategies.

Numerical solution of two-layer shallow water equations using mode splitting method
https://jesphys.ut.ac.ir/article_86907.html
In numerical models that use iterative methods to solve the momentum equations under the rigid-lid approximation, the number of iterations, and hence the processing time, increases at high resolution. An alternative is to allow a free surface and split the equations into barotropic and baroclinic modes. The surface gravity waves, which are faster than the slowly moving internal gravity waves, impose a limitation on the time step through the CFL condition. The mode splitting method is therefore computationally efficient: it handles multiple time steps by separating the barotropic and baroclinic mode equations. The barotropic mode equations are solved with small time steps consistent with the fast surface gravity wave speeds, and the baroclinic mode equations with larger time steps consistent with the slow internal gravity wave speeds. This method is used in most ocean circulation models and is an unavoidable choice for high-resolution models.
In this study, we considered the shallow water equations for a two-layer basin in vorticity-divergence formulation, using the mode splitting method with a small barotropic time step nested within a larger baroclinic time step. The primary system of equations, which contains both upper- and lower-layer variables, was rewritten in terms of new (barotropic and baroclinic) variables without any alteration or further approximation of the primary system. This procedure can be extended to multi-layer systems, so that the primary N-layer system of equations is transformed into one system of barotropic mode equations and N-1 systems of baroclinic mode equations coupled together.
For the numerical experiments, a fully baroclinic (non-barotropic) initial condition is considered in a constant-depth rectangular domain with 64, 128, and 256 grid points in each direction and periodic boundaries. For spatial differencing, a second-order centered scheme with low computational cost and a fourth-order compact scheme with high computational cost are used. For time integration, a semi-implicit discretization based on the leapfrog scheme with the Robert-Asselin time filter is applied to both the barotropic and baroclinic systems of equations.
The mode splitting method may exhibit numerical instabilities at larger baroclinic time steps, despite the time-step limitations given by the CFL condition of each system of barotropic and baroclinic mode equations taken individually. Here, this is controlled by increasing the coefficient of the time filter to some extent.
First, we solve the baroclinic mode equations to obtain all the baroclinic variables needed to solve the barotropic mode equations during a baroclinic time step. These variables can either be held constant up to the next baroclinic time level or determined by time interpolation between two successive baroclinic time levels.
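The subcycling described above can be sketched schematically. The stepper callables below are toy placeholders (simple linear-decay updates), not the paper's vorticity-divergence solver, and the interpolation weight is one plausible choice:

```python
def step_split(state_bt, state_bc, dt_bc, ratio, step_bt, step_bc):
    """One baroclinic step of a mode-split integration (schematic):
    the baroclinic mode advances once with dt_bc, while the barotropic
    mode is subcycled `ratio` times with dt_bt = dt_bc / ratio, forced
    by baroclinic fields interpolated in time between the two levels."""
    dt_bt = dt_bc / ratio
    state_bc_new = step_bc(state_bc, dt_bc)
    for n in range(ratio):
        w = (n + 0.5) / ratio                 # linear time interpolation
        forcing = (1.0 - w) * state_bc + w * state_bc_new
        state_bt = step_bt(state_bt, dt_bt, forcing)
    return state_bt, state_bc_new

# Toy linear-decay steppers standing in for the real mode equations.
toy_bc = lambda s, dt: s * (1.0 - 0.1 * dt)
toy_bt = lambda s, dt, f: s * (1.0 - 0.5 * dt) + 0.01 * dt * f
bt, bc = step_split(1.0, 1.0, dt_bc=0.2, ratio=20,
                    step_bt=toy_bt, step_bc=toy_bc)
```

Replacing the interpolated `forcing` with the constant `state_bc` reproduces the "held constant" variant discussed above.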
To assess the performance of the numerical method, the relative error of energy conservation is calculated. Results show that, for ratios of the baroclinic to the barotropic time step of up to 20, the time evolution of the barotropic and baroclinic variables corresponds well to the basic state, in which the barotropic mode has the same time step as the baroclinic mode. When this ratio increases further, the deviations from the basic state appear more clearly. With the fourth-order compact method these errors grow to the point of numerical instability, so the time-filter coefficient had to be increased, whereas the second-order scheme is not sensitive and remains stable with a small coefficient. Moreover, holding the baroclinic variables constant while solving the barotropic mode equations makes the fourth-order compact solution unstable at large baroclinic time steps; time interpolation, on the other hand, provides a more stable scheme and performs well with both spatial schemes.

Observing of Pre-flare Very Long-period Pulsations, for 12 Solar Flares, as a Sign of Flare's Onset
https://jesphys.ut.ac.ir/article_86908.html
Solar flares are sudden bursts in the solar atmosphere, with emissions ranging from radio wavelengths up to gamma rays, and are classified according to their energy into different classes (A, B, C, M, and X, respectively). The release of magnetic energy in flares proceeds by magnetic reconnection, which is often driven by a complex magnetic field. Flares accelerate many electrons and ions, raising their energy to relativistic levels; these accelerated particles play a very important role in the release of large flare energies. When flares erupt they emit radiation, mostly in the visible spectrum and sometimes in X-rays and ultraviolet, emitted mainly from the photosphere and chromosphere in concentrated sources called footpoints and ribbons. These emissions occur when the lower layers of the Sun's atmosphere heat up during a flare, and this heating due to particle collisions probably plays an important role in the flare's development. In addition, flares emit high-energy radiation such as hard X-rays (HXR) from electrons and gamma rays from ions. The main part of these emissions takes the form of electromagnetic emission (soft X-rays) and energetic particles. The emissions from a large flare or a solar mass eruption (with an energy above 10^25 J), upon reaching the Earth, can have destructive effects on the Earth's atmosphere, on satellite orbits, and on magnetic and electrical equipment in devices such as ships and airplanes. Therefore, predicting the time of flare occurrence and determining its class can help reduce these destructive effects.
One of the observable structures that can be seen before a flare occurs is oscillation with very long-period pulsations (VLPs), of the order of 8-30 minutes, which occur about one to two hours before the flare onset and were first reported by Tan et al. (2016) in the pre-flare phase. MHD oscillations and longitudinal electric currents in flare loops are suitable candidates to explain the formation of VLPs. Investigating pre-flare VLPs can also help us understand the origin of flares. With the help of soft X-ray (SXR) observations from the GOES satellite during the pre-flare phase, these pulsations can be observed on time scales similar to those of flare processes.
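A period estimate of the kind used for these VLPs (a Fast Fourier Transform of the pre-flare light curve) can be sketched as follows; the cadence and the 20-minute period of the synthetic signal are made-up illustrative values, not GOES data:

```python
import numpy as np

def dominant_period(flux, dt):
    """Estimate the dominant oscillation period (in the units of dt)
    of a detrended light curve via the peak of its FFT power spectrum."""
    detrended = flux - flux.mean()
    power = np.abs(np.fft.rfft(detrended)) ** 2
    freqs = np.fft.rfftfreq(flux.size, d=dt)
    k = np.argmax(power[1:]) + 1          # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic pre-flare light curve: a 20-minute pulsation plus noise.
dt = 12.0                                 # seconds per sample
t = np.arange(0, 7200, dt)                # two hours of data
rng = np.random.default_rng(1)
flux = np.sin(2 * np.pi * t / 1200) + rng.normal(0, 0.2, t.size)
period_min = dominant_period(flux, dt) / 60.0
```

With only one to two hours of pre-flare data, the frequency resolution is coarse for 30-minute periods, which is one reason careful detrending matters in practice.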
In this paper, using the abovementioned data, we selected nineteen flares for study, of which 7 are in class C and 12 in class M. Of these, twelve showed typical VLPs before flare onset, all of which were in class M except one. The periodicity that we calculated for the VLPs of these flares with the Fast Fourier Transform is 14 to 34.6 minutes, which agrees with the results of Tan et al. (2016) and also shows that the periodicity of VLPs can exceed 30 minutes. The number of pulses observed in each pre-flare phase is between 3 and 7. For the seven remaining flares of our selection, no typical pre-flare VLP was observed; all but one of these were in class C.

Improving diffractivity attribute to image faults using tapered local semblance in post-stack domain
https://jesphys.ut.ac.ir/article_86909.html
Diffractions carry useful and important information about subsurface features such as unconformities, faults, and pinch-outs; indeed, most such information is encoded in diffractions. Polarity reversal across diffraction moveout curves generated at fault edges is a major challenge in seismic diffraction imaging. Over the last few decades, several conventional methods in the pre- and post-stack domains have been developed to characterize and locate diffractions. But most of these methods cannot deal with polarity reversal in diffraction imaging; some are time-consuming and need corrections to handle polarity changes, especially for diffractions caused by fault edges. Despite the large amount of research on diffraction imaging, very few studies have addressed the challenge of polarity reversal across moveout surfaces. We used the semblance function along hyperbolic moveout curves of diffractions whose travel times were calculated using the double-square-root equation. Both semblance and Kirchhoff migration will fail to image diffractions from fault edges if polarity reversal is not taken into account, because equal numbers of positive and negative wavelets are present along the diffraction moveout curve. To solve this problem, we divided the global scanning window along the hyperbolic moveout surfaces into several sub-windows and performed the local semblance measurements over the sub-windows separately. Every point in the image domain is treated as a potential diffraction point; we call these points image points. The final semblance at each image point is calculated by averaging the semblance measurements from the subdivided smaller windows. We also contaminated the synthetic data with white Gaussian noise at different signal-to-noise ratios. 
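The reason sub-windowing defeats the polarity reversal can be shown on a toy gather. This is a hedged sketch of the principle only: the actual method scans along moveout curves computed with the double-square-root equation, whereas here the diffraction is pre-flattened:

```python
import numpy as np

def semblance(gather):
    """Classic semblance of a (time x trace) data window:
    coherent energy over total energy, in [0, 1]."""
    num = np.sum(gather.sum(axis=1) ** 2)
    den = gather.shape[1] * np.sum(gather ** 2)
    return float(num / den) if den > 0 else 0.0

def local_semblance(gather, n_sub):
    """Split the traces into n_sub groups and average their semblances.
    A polarity flip confined to one sub-window no longer cancels the
    coherent energy of the whole gather."""
    groups = np.array_split(gather, n_sub, axis=1)
    return float(np.mean([semblance(g) for g in groups]))

# Flattened diffraction with a polarity reversal across the apex.
wavelet = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
gather = np.tile(wavelet[:, None], (1, 8))
gather[:, 4:] *= -1.0     # polarity reversal on half of the traces
```

Here `semblance(gather)` collapses to zero because the positive and negative wavelets cancel, while `local_semblance(gather, 2)` recovers full coherence within each sub-window.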
Results showed no significant differences, because random arrivals in seismic data do not influence the semblance measurement. As a next step, to improve the diffraction imaging, we used a tapered local semblance, because diffractions interfere with dominant reflection waves, other events, and even other diffractions, especially at far offsets from the diffraction apex. We call the proposed method the tapered local semblance method. The method weights the data from top to bottom along the time axis: fewer traces are used at shallow parts and more traces at deeper parts, to reduce the harmful effect of the interference. To accomplish this, we introduced a triangular taper that takes a few traces at the early-arrival parts and more traces at the late-arrival parts, instead of using a box with a constant number of traces in the aperture from the top to the bottom of the window. We tested several tapers with different apex angles to determine the optimal one. We evaluated both methods on synthetic data as well as on a field-recorded dataset. Neither method required polarity-reversal corrections. The results demonstrate the ability of our workflow to achieve higher resolution and good localization of diffractions from fault edges in the synthetic data. On the field-recorded dataset, the tapered local semblance method showed more diffractivity than the local semblance method.

Investigation of Seasonal dust in northeastern Iran and numerical simulation of extreme dust events with WRF-CHEM model
https://jesphys.ut.ac.ir/article_86910.html
In recent years, dust storms have become a serious environmental concern and have attracted much attention among atmospheric scientists. Northeastern Iran is a large and strategic populated area. Due to its proximity to the large arid regions of Central Asia, this region has a high risk of dust events and has recently faced many problems related to dust phenomena. This study investigates seasonal dust events in northeastern Iran. To achieve this goal, a combination of station data, reanalysis, satellite products, and output of the WRF-Chem numerical model was used to improve our understanding of the seasonal dust cycle in northeastern Iran. Accordingly, the research was organized in two parts: monitoring and modeling of the dust phenomenon. The results of this study may be useful for forecasting dust storms as well as for spatial planning.
To investigate the seasonal variability of dust events, the dust surface mass concentration from the MERRA-2 dataset and the aerosol optical depth (AOD) from the combined Dark Target (DT) and Deep Blue (DB) algorithms of the MODIS sensor aboard the Terra and Aqua satellites were examined over the long-term period 2004-2018.
Since dust emission depends strongly on biophysical components, the use of numerical models is necessary; the WRF-Chem numerical model was used for this purpose. The study area includes northeastern Iran and parts of Central Asia. The child domain was run at 30 km horizontal resolution with 32 vertical levels. NCEP/FNL data, with a 3-hourly time step and 1-degree horizontal resolution, were used as boundary conditions in the model configuration. Four extreme dust events, which occurred on November 13, 2007, May 29, 2008, June 8, 2015, and October 17, 2017 in northeastern Iran, were selected to investigate dust transport to the region. These case events were simulated with a time step of 180 seconds and output every three hours, using the GOCART, AFWA, UoC_S01, and UoC_S11 schemes.
The results showed that the maximum dust activity occurs in spring, with an AOD of 0.59 and a dust surface mass concentration of 645.2 µg m⁻³; summer ranks next. Seasonal analysis of AOD and dust using satellite and reanalysis data showed that the Aralkum, Kyzylkum, Karakum, and Kara-Bogaz-Gol are the main dust sources in Central Asia and are active in all seasons.
Comparison of the simulated PM2.5 and PM10 with observational data from air-quality monitoring stations in Mashhad showed that the GOCART scheme depicts dust events well and has a low bias relative to the station data. The correlation analysis showed that the GOCART scheme explains nearly 90% of the variance of the data, and its root mean square error (RMSE) for PM2.5 is less than 20 micrograms per cubic meter. Accordingly, the GOCART scheme is suitable for dust studies in northeastern Iran, and the WRF-Chem model can be used for operational dust storm forecasting. The dust detection algorithm (DDA) of the AIRS sensor and the aerosol optical depth (AOD) of the MODIS sensor confirm the contribution of the mentioned sources to dust transport toward northeastern Iran. The results also showed that three of the case studies occurred as a result of the passage of an extratropical Rossby wave and the deepening of a trough over the territory of Turkmenistan. In contrast, the summer case study resulted from the establishment of a summer circulation pattern, with an anticyclonic circulation over southern Turkmenistan and northeastern Iran occurring simultaneously with a cyclonic circulation over the Sistan plain and the southeastern parts of the country.

A spectral approach to the origin and propagation of magnetoacoustics' oscillations in the network and internetwork areas of solar granules
https://jesphys.ut.ac.ir/article_86911.html
In this paper, a spectral study of the origin and propagation of magnetoacoustic oscillations in the network and internetwork areas of solar granules is presented.
The data used in this study are mostly from the Interface Region Imaging Spectrograph (IRIS). IRIS Slit-Jaw Images (SJIs) at wavelengths of 1400 angstroms (Si IV), 2796 angstroms (Mg II h/k), and 2832 angstroms (Mg II wing) were used to select the network and internetwork areas.
Mg II k spectra at a wavelength of 2796 angstroms, forming at a temperature of about 10,000 Kelvin, were used to construct the temporal profiles of the intensity at the h3, k3, h2r, h2v, k2r, and k2v peaks, and the corresponding profiles of the intensity temperature.
One of the common methods for analyzing temporal and frequency characteristics is wavelet analysis. This method is practical owing to the variety and flexibility of wavelet types available for different kinds of analysis. Convolving wavelets with the signal yields time, frequency, and power information. It should be noted that, due to the uncertainty principle, time and frequency resolution trade off against each other, and an optimal balance between them must be selected.
One reason for choosing the Morlet wavelet for this study is its lack of a sharp edge, which reduces ripple and improves the accuracy of detecting the properties of the fluctuations. Another, and one of the most important, reasons is that the Morlet wavelet does not degrade the temporal resolution of the signal.
For these reasons, the Morlet wavelet with parameter 5 was the most sensible and reliable choice for obtaining results with high temporal and frequency specificity in this study.
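The Morlet wavelet analysis described above can be illustrated with a minimal cross-correlation implementation in numpy; the normalization, cadence, and the 64-second test signal are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def morlet_power(signal, dt, periods, omega0=5.0):
    """Peak wavelet power of a real signal at each requested period,
    using a Morlet mother wavelet with center parameter omega0.
    The scale s is chosen so the wavelet carrier period equals p."""
    t = (np.arange(signal.size) - signal.size / 2) * dt
    power = np.empty(len(periods))
    for i, p in enumerate(periods):
        s = omega0 * p / (2 * np.pi)             # scale for this period
        psi = np.exp(1j * omega0 * t / s) * np.exp(-((t / s) ** 2) / 2)
        psi /= np.sqrt(s)                        # rough energy normalization
        coef = np.convolve(signal, np.conj(psi[::-1]), mode="same") * dt
        power[i] = np.max(np.abs(coef) ** 2)     # peak power at this scale
    return power

dt = 1.0                                         # seconds per frame
t = np.arange(0, 600, dt)
signal = np.sin(2 * np.pi * t / 64)              # a 64-second oscillation
periods = np.array([16.0, 32.0, 64.0, 128.0])
power = morlet_power(signal, dt, periods)
```

The power peaks at the 64-second scale, the same kind of spectral localization used below to identify the high-frequency oscillations.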
Using wavelet analysis, the oscillation characteristics of the intensity were obtained in the network and internetwork areas.
Investigation of the intensity profiles at the h and k peaks shows that their general behavior is the same; the only difference is in the intensity of these peaks and therefore in their temperature.
The general behavior of the intensity temperature profiles extracted from the h and k peaks also appears to be the same.
The wavelet analysis results likewise indicate that the oscillatory behavior at the h and k peaks is almost similar.
Using the results of wavelet analysis, in this study, the period of oscillation in the intensity of bright points in the network and internetwork has been obtained. According to their values, it seems that the bright points of the internetwork have a photospheric origin and the bright points of the network have a chromospheric origin.
Another result of the wavelet analysis in this study was intensity oscillations with a period of about 64 seconds. This high frequency differs from solar researchers' observations of photospheric and chromospheric oscillations, so it cannot be related to those oscillations. This appears to be the first time that this type of high-frequency oscillation has been reported.
These high-frequency oscillations may play an important role in heating the transition region (TR). For this reason, accurate study of them is necessary to understand the causes and heating mechanisms of the TR.
These high-frequency oscillations have been seen in almost all the data and areas under study. So far there is no strong evidence for their origin and cause, and we hope that more detailed and extensive studies will lead to a better understanding of their properties and origin.

Investigating the Potential of Infrared Stimulated Luminescence for Dating the Debris rocks of Fatalak Landslide
https://jesphys.ut.ac.ir/article_86918.html
Over the last decade, extensive studies have been devoted to dating rock surfaces using optically stimulated luminescence signals, and recently a model has been proposed with which rock surfaces have been successfully dated using the infrared-stimulated luminescence signal. This method is based on the resetting of the luminescence signal with depth into the rock surface. When a rock surface is first exposed to sunlight, the luminescence signal stored over time in its constituent minerals (particularly quartz and feldspar) starts to decrease. The longer the rock is exposed to sunlight, the deeper the light penetrates into the rock and the more the luminescence signal decreases; however, the rate of luminescence resetting falls with depth because daylight is attenuated within the rock surface. This differential change in bleaching rate with depth leads to the development of a sigmoidal luminescence-depth profile. Such a profile provides an internal check on inadequate daylight exposure, and therefore on incomplete resetting of the luminescence signal, and allows us to identify the samples most likely to provide reliable OSL ages. In this study, we investigated the potential of this method for dating debris rocks of the Fatalak landslide, which was induced by the Rudbar-Manjil earthquake in northern Iran in 1990. Cores ~10 cm long and 1 cm in diameter were extracted from the buried and exposed sides of the rock samples using a water-cooled, diamond-tipped drill. The cores were then cut into ~1.5 mm thick slices. The slices were gently broken into small chips and mounted in 10-mm-diameter stainless steel cups for natural luminescence signal and dose-response measurements. All sub-samples from each slice were stimulated by infrared radiation, and the blue and ultraviolet luminescence signals were measured. 
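The sigmoidal luminescence-depth profile described above is commonly modeled as a double exponential, with the bleaching rate decaying exponentially with depth; the parameter values below are illustrative assumptions, not fitted values from this study:

```python
import numpy as np

def luminescence_depth(x_mm, exposure, mu=2.0, rate=50.0):
    """Schematic luminescence-depth profile for a sunlight-exposed rock
    surface: L/L0 = exp(-rate * exposure * exp(-mu * x)). The bleaching
    rate decays with depth x through the light attenuation coefficient
    mu, so the profile is near zero at the surface, rises sigmoidally,
    and saturates at depth; longer exposure pushes the rise deeper."""
    return np.exp(-rate * exposure * np.exp(-mu * x_mm))

x = np.linspace(0.0, 8.0, 81)                 # depth into the core, mm
short = luminescence_depth(x, exposure=0.1)   # brief daylight exposure
long_ = luminescence_depth(x, exposure=10.0)  # prolonged daylight exposure
```

The depth of the sigmoid's midpoint is what carries the exposure-time information, and its absence is why the flat Fatalak profiles described below could not be dated.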
To determine whether the luminescence signals at the buried surface of the rock had been sufficiently bleached before the earthquake, we measured the natural sensitivity-corrected IR50 and pIRIR225 signals (Ln/Tn) with depth into the cores and plotted the luminescence-depth profiles. Unexpectedly, weak or no IR50 and pIRIR225 signals, and no suitable luminescence-depth profiles, were observed. In the experience of the second author, almost all sediment samples taken from Iran have generated an IRSL signal, so it was necessary to investigate the cause of the lack of a suitable IRSL signal in the Fatalak rock samples. Because the bleaching rate decreases and the luminescence signal intensity increases with depth, and because the luminescence signal is generated by only a small percentage (approximately 10%) of the dosimeter grains (mainly quartz and feldspar), different slices may produce signals (responses to the same dose) with different intensities and properties. Therefore, the potential of all slices to produce a signal, and finally to build the luminescence-depth profile, was investigated. Unfortunately, this profile did not match the profiles reported in previous studies.
To analyze whether this observation reflects the nature of the samples taken from Iran, or a defect in the luminescence measurement device or the experimental procedure, we performed similar tests on a rock surface taken from another site. The same process was then carried out for two rock-art paintings from Spain, which showed acceptable signals: the IR50 depth profile had a sigmoidal shape in which the luminescence signal is almost reset in the surface slice and increases with depth until it reaches saturation, as expected from the model. The luminescence-depth profiles from the Fatalak and Spanish sites were then compared with two previous successful studies from Italy and Denmark. The IRSL luminescence-depth profile of the Spanish rock-art sample agreed well with those of the two buried samples from Italy and Denmark, whereas no such correspondence was observed for the Fatalak sample. As the profiles derived for the Fatalak sample were consistent neither with the model nor with any previous study, we could not determine the time of the landslide event by the conventional method.

Analysis of behavioral pattern the basic parameters in foreshocks with target the prediction of big earthquakes in Iran
https://jesphys.ut.ac.ir/article_86921.html
The analysis of the basic parameters of foreshocks is one of the most applicable lines of research for earthquake risk reduction, because identifying the behavioral pattern of foreshocks can help researchers detect active-fault conditions in different areas, and accurate analysis of these parameters makes earthquake-prediction studies more effective. In this study, we examine the behavioral pattern of foreshocks in the different tectonic zones of Iran, with the aim of probabilistic prediction of earthquakes with M > 5. The basic seismic parameters of foreshocks (including the relationship between the depth and magnitude of foreshocks) were analyzed for the various zones over a ten-year period (2007 to 2017) with the prediction of large earthquakes in mind. The results suggest that the magnitude-depth models show definite similarities within a single zone and differ between zones; this behavior can therefore be used as a precursor for predicting earthquakes with magnitude > 5 in the different zones of Iran.
The important results of this article can be summarized as follows:
- Examination of the seismicity parameters of foreshocks, regarding the relationship between the focal depth of the main earthquake and the frequency of foreshocks (used in some parts of the world as an earthquake precursor), indicates that main shocks with M > 5 and shallow depth have more abundant foreshocks (Fig 2).
- Given the relationship between fault type and the occurrence or non-occurrence of foreshocks in different parts of the world, for earthquakes greater than 5 in Iran, relatively more foreshocks have been recorded for earthquakes on reverse faults than on strike-slip faults.
- The statistical results of this study show that, for earthquakes on reverse faults, the frequency of foreshocks increases with magnitude, whereas no such behavior is seen for earthquakes on strike-slip faults.
- The results also show that many earthquakes, especially in the Zagros zone near salt domes, occur without foreshocks. This is related to the effect of salt domes on fault movement, shifting it from slip to creep: creep is a gradual movement not usually accompanied by the rapid slip that produces large, recordable earthquakes.
- Based on the present study, foreshocks can be used with more confidence as an earthquake precursor in the Zagros (especially its northern and central parts), Central Iran, and Sanandaj-Sirjan zones, because earthquakes in these zones occur with more foreshocks.
- In the Zagros and Central Iran (Iran-e Markazi) zones, the relationship between the variations of the depth and magnitude of foreshocks is fruitful for predicting main shocks.
- For the other zones, a more complete data bank containing earthquakes with a higher frequency of foreshocks is needed; based on such a data bank, suitable relations and models could be developed for studying foreshocks with the aim of predicting large earthquakes.

Resistivity and IP Tomography to determine Overburden-Bedrock Interface: A case study of Ilam Embankment dam
https://jesphys.ut.ac.ir/article_86961.html
Determination of the overburden-bedrock interface beneath fine-grained sediments in a highly folded sedimentary environment is a challenging geophysical problem. Electrical Resistivity Tomography (ERT) is considered one of the most effective geophysical approaches for mapping subsurface layers based on the conductivity distribution of materials. The surveys are often performed in two dimensions to investigate lateral and depth variations of the resistivity and chargeability of subsurface layers. The resistivity method, which measures the ability of the subsurface medium to conduct charge, is influenced by the volumetric properties of the pore spaces, whereas the induced polarization method depends on the geometric properties of the pore spaces (grain surface size). Despite the advantages of geo-electrical methods in imaging subsurface structures, the strong dependence of the resistivity and induced polarization parameters on the physical and hydrogeological conditions of the layers means that geological and geo-electrical sections cannot be matched completely.
One of the applications of geophysical studies is to determine the contact zone between overburden and bedrock in engineering structures such as embankment dams. Where the conductivity contrast between the overburden and the bedrock is low, precise determination of this boundary with geo-electrical methods carries high uncertainty. In this study, the efficiency of electrical resistivity tomography and induced polarization is investigated by measuring several parallel profiles, with the aim of imaging the boundary between overburden and bedrock and assessing the possibility of a water-escape zone at the left bank of the Ilam embankment dam. According to the results obtained from inversion of the field measurements, the chargeable sections can be ascribed to the shale region as well as to marly limestone containing pyrite particles.
The main objectives of this study are to determine the general condition of the overburden relative to the bedrock, to image the geometry of the bedrock, and to identify parts of the bedrock eroded over time. The principal challenge of this geophysical study is the low conductivity contrast between the clay and silt overburden and the limestone bedrock interbedded with shale and marl. Given the size of the study area, the work was based on tomographic measurements of electrical resistivity and induced polarization. The field surveys were conducted along four nearly parallel profiles (following the topographic conditions of the area) of somewhat different lengths, using a Pole-Dipole array in forward and reverse measurements.
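For reference, apparent resistivity from a pole-dipole array on a homogeneous half-space follows the standard geometric factor K = 2πan(n+1), where a is the potential-dipole length and na the pole-to-dipole separation. A minimal sketch (the spacings and resistivity are illustrative, not from the Ilam survey):

```python
from math import pi

def pole_dipole_k(a, n):
    """Geometric factor K for a pole-dipole array on a homogeneous half-space:
    current pole C, potential dipole M-N of length a, with C-M = n*a."""
    return 2 * pi * a * n * (n + 1)

def apparent_resistivity(dv, i, a, n):
    """Apparent resistivity (ohm*m) from measured voltage dv (V) and current i (A)."""
    return pole_dipole_k(a, n) * dv / i

# Sanity check: a voltage forward-modelled over a homogeneous half-space of
# true resistivity rho must invert back to rho.
rho, a, n, i = 100.0, 10.0, 3, 1.0
dv = rho * i / pole_dipole_k(a, n)        # voltage predicted by the half-space model
print(apparent_resistivity(dv, i, a, n))  # recovers the true 100 ohm*m
```

Over real layered ground, the measured apparent resistivities deviate from this homogeneous value, and 2D inversion of those deviations yields the tomographic sections discussed in the text.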
Geological data, as well as borehole information, are used to validate the geo-electrical sections and to better interpret the models obtained from the geo-electrical measurements. Finally, given the strong topography of the area, and to better display the trend of the subsurface structures, a three-dimensional view combining the two-dimensional electrical resistivity and induced polarization models with the drilled boreholes has been prepared. Based on the models obtained from the geo-electrical data, it can be concluded that the geophysical studies (electrical tomography) successfully determined the eroded region of the bedrock surface as well as the bedrock-overburden contact, which correlates well with the boreholes drilled in the area.
Determination of 3D seismic wave velocity in Zagros collision zone
https://jesphys.ut.ac.ir/article_87004.html
The Zagros orogenic belt was formed approximately 12 million years ago by the convergence of the Arabian and Eurasian plates upon the closing of the Neo-Tethys Ocean. The Zagros is categorized as one of the youngest such settings on Earth, at an early stage of collision. Many multiscale geophysical studies have been performed in the Zagros region based on different seismic and non-seismic data. From these studies it can be concluded that the Zagros thrust belt has a crustal thickness of 45 ± 3 km, whereas beneath the Sanandaj-Sirjan zone the Moho depth increases significantly, to 65 ± 3 km. Among the many geophysical studies of the Zagros and surrounding areas, local earthquake tomography (LET), which uses travel-time data from stations and earthquakes located within the study area, had never been performed for the entire Zagros. In this research, a 3D body-wave velocity model has been derived from the arrival times of 7783 earthquakes recorded between 2006 and 2018 by the National Seismological Center and the broadband seismic network of Iran. The dataset used for tomography consists of 123,575 P- and 11,520 S-picks from 7783 events with magnitude greater than 2.5. We used the LOTOS code (Koulakov, 2009), developed for simultaneous inversion of the 3D distributions of P- and S-wave velocity anomalies and source locations. In the first step, LOTOS determines initial source locations using tabulated travel times previously calculated in a 1-D velocity model. The iterative tomographic inversion then includes the following steps: (1) source relocation in the updated 3-D velocity structure using the bending ray-tracing method, (2) calculation of the first-derivative matrix, and (3) simultaneous inversion for P- and S-wave velocity anomalies, earthquake source parameters (4 parameters per source), and station corrections. The inversion uses the LSQR method.
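The simultaneous-inversion step above amounts to solving a large sparse linear system relating travel-time residuals to slowness perturbations along ray paths, which LOTOS solves with LSQR. A toy dense version of that step (synthetic ray lengths and perturbations; ordinary least squares standing in for LSQR, and omitting the source and station terms) can be sketched as:

```python
import numpy as np

# Travel-time residuals d are linear in slowness perturbations m along each
# ray: d = G @ m, where G[i, j] is the length of ray i inside model cell j.
G = np.array([[10.0,  0.0,  5.0],    # ray path lengths (km) through 3 cells
              [ 0.0,  8.0,  7.0],
              [ 6.0,  6.0,  0.0],
              [ 4.0,  0.0, 11.0]])
m_true = np.array([0.002, -0.001, 0.0005])   # slowness perturbations (s/km)
d = G @ m_true                               # synthetic residuals (s)

# LOTOS uses damped LSQR on the sparse system; for this tiny consistent
# dense example, plain least squares recovers the perturbations exactly.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.allclose(m_est, m_true))  # True
```

In the real problem G also carries columns for the four source parameters per event and the station corrections, and the system is solved at each iteration after relocation and ray tracing.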
The distribution of the estimated 3D velocity models correlates well with the tectonic and geological conditions. The Vp and Vs anomalies, which were obtained independently, appear almost identical in the crust (depths smaller than 45 km). According to the results, the low-velocity anomaly observed in the upper crust can be attributed to the presence of Cambrian-Miocene sediments, at least 10 km thick, that are spread throughout the Zagros. In the vertical sections of the obtained velocity models, the Moho depth increases significantly in the Sanandaj-Sirjan area compared to the Zagros region. This increase in Moho depth is related to the subduction of the Arabian plate beneath the Central Iran micro-continent, which thickens the crust (double crust) in the Sanandaj-Sirjan region. Using the LOTOS code, an optimal one-dimensional velocity model for the whole Zagros collision zone is also presented. In this model we can distinguish a ~10 km thick sedimentary layer (Vp ~4.90 km/s), the upper crust down to ~30 km (Vp ~5.54 km/s), and the lower crust down to ~45 km (Vp ~6.30 km/s).
Deterministic and Fuzzy Evaluation of Human and Climate Contributions in Changing Hydrologic Regime: A Case Study of the Gorganrood Watershed at Tamar River Hydrometric Station
https://jesphys.ut.ac.ir/article_87081.html
Humans and climate are the two major socio-hydrologic drivers that determine hydrological regimes and patterns. Through Land Use and Land Cover (LULC) changes, agricultural development, and similar pressures, hydro-climatological components have been influenced on global and regional scales. The effect of each driver on the variation of hydrological components has been assessed in different studies, but these approaches are not accurate enough at the watershed scale, where the simultaneous impacts of climate dynamics and LULC changes are experienced. Various studies have considered both climate and human alterations of the hydrological cycle and quantified their contributions in such basins; their results can help decision makers in water management weigh the pros and cons of water and land-use policies. The Gorganrood watershed is an important basin in northern Iran, especially from the agricultural point of view, and has experienced considerable changes in hydrological and extreme events. While the consequences of climate change and of LULC change have each been assessed in the watershed, no study has considered the complicated interactions of these drivers. In this paper, the authors first evaluated the contributions of LULC and climate change to the variation of streamflow. Second, a modified fuzzy arithmetic method was used to obtain their fuzzy contributions. For this purpose, the computational period was first divided into two temporal spans, known as the reference and affected periods. The reference period is the first span, in which climate controls the hydrological responses; the statistical behavior of the time series then changes due to human activities, and the affected period begins. Two hydrological models, the Soil and Water Assessment Tool (SWAT) and a black-box Artificial Neural Network (ANN), were used to simulate the streamflow in the watershed.
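A common way to separate the two drivers in such reference/affected designs is the simple differential method: the model calibrated on the reference period is run over the affected period, so its simulated flow reflects the new climate only, and the residual against observations is attributed to human activities. A minimal sketch, with hypothetical mean annual flows (not the Gorganrood values):

```python
def split_contributions(q_obs_ref, q_obs_aff, q_sim_aff):
    """Differential split of a streamflow change into climate and human shares (%).
    q_obs_ref: observed mean flow, reference period
    q_obs_aff: observed mean flow, affected period
    q_sim_aff: flow simulated for the affected period by the reference-calibrated model
    """
    total = q_obs_aff - q_obs_ref
    climate = q_sim_aff - q_obs_ref   # model sees only the changed climate forcing
    human = q_obs_aff - q_sim_aff     # residual attributed to human activities
    return 100 * climate / total, 100 * human / total

c, h = split_contributions(q_obs_ref=120.0, q_obs_aff=90.0, q_sim_aff=110.0)
print(round(c, 1), round(h, 1))  # 33.3 66.7 -- the two shares sum to 100 %
```

The fuzzy extension in this study replaces these crisp flow values with fuzzy numbers and propagates them through the same arithmetic at each α-cut.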
Although the hydrological models showed generally acceptable performance in simulating the recorded streamflow at the Tamar hydrometric station, the conceptual model (SWAT) performed less well in the dry season than in the wet season. In the next step, the contributions of human activities and climate were assessed via two different methods. The first is a simple differential method, which compares the projection of the calibrated model in the second period with observations in both periods. The second set of contribution rates was calculated using the climate elasticity method, based on recorded monthly data and the implemented derivation rules. In the first method the contribution rate of human activities is significantly higher than that of climate change, whereas the second method yields the reverse; because the methods rest on different concepts, the calculated contribution rates differ. To assess the uncertainty associated with these estimates, a novel approach was developed using fuzzy mathematics. The uncertain version of the contribution rates showed that at each α-cut (fuzzy uncertainty level), the contribution of human alteration (LULC change), as the most important human intervention, is more significant than that of the climate drivers. In other words, during the simulation period the effect of LULC change on the flow was very noteworthy, while climate change had relatively less effect on the behavioral change of the flow.
Prediction of Water Saturation by FSVM using well logs in a gas field located in South of Iran
https://jesphys.ut.ac.ir/article_86905.html
Water saturation is one of the key petrophysical parameters affecting the accuracy of initial hydrocarbon-in-place estimation for a reservoir. Estimating this parameter is essential, since it strongly affects the economic development of hydrocarbon reservoirs. In this paper, we propose a two-step approach using two wells and core data to predict water saturation by means of the Support Vector Machine (SVM) algorithm in one of the gas reservoirs of the Persian Gulf. Because noise and outliers are unavoidable in measured data, the SVM is modified to a Fuzzy SVM (FSVM); Support Vector Regression (SVR) is the form of SVM used for regression purposes. Treating the data as fuzzy sets brings the machine closer to reality: the user can assign a priority to each data point, so noise and outliers can receive lower priority, which leads to better models. After membership degrees are assigned, the data points enter the algorithm for prediction of water saturation in intervals where core data are missing.
Water saturation is the fraction of water in a given pore space, expressed in volume/volume, percent, or saturation units. It is one of the most widely used petrophysical parameters for evaluating petroleum reservoirs and directly affects the success of drilling, completion, and production operations of wells. Therefore, an accurate estimate of this parameter is necessary for the exploitation of oil and gas reservoirs. There are two main ways to investigate reservoir parameters: core data analysis as a direct method, and well logs as an indirect method. Core data analysis for water saturation has been presented by different authors (Walther 1967; Morad Zadeh et al. 2011; Jia et al. 2020). Measuring this parameter in the laboratory is costly and time-consuming, and core data are not always available for all wells. Using algorithms to estimate reservoir parameters in wells that lack core data is therefore profitable.
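Where resistivity logs are available, the classic Archie (1942) relation offers a direct baseline estimate of water saturation. A minimal sketch, using typical clean-formation constants (a = 1, m = n = 2) and illustrative input values, not figures from this field:

```python
def archie_sw(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    """Archie water saturation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).
    rt: true formation resistivity (ohm*m), rw: brine resistivity (ohm*m),
    phi: porosity (fraction); a, m, n: tortuosity, cementation, saturation exponents.
    """
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Illustrative values: 25 % porosity, 0.05 ohm*m brine, 10 ohm*m formation.
print(round(archie_sw(rt=10.0, rw=0.05, phi=0.25), 3))  # 0.283
```

As the text notes, such formulas are strongly lithology-dependent, which is precisely the motivation for the data-driven SVR/FSVR approach pursued here.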
A variety of formulas estimate water saturation from other parameters such as resistivity and porosity (Luthi 1941; Archie 1942), but these formulas depend strongly on lithology and formation type, so they cannot be generalized to all situations. Over the last decade, machine learning methods have been widely used to estimate reservoir parameters (Zhang et al. 2018; Okwu et al. 2019; Li et al. 2021), and water saturation has been estimated with different algorithms (Adeniran et al. 2009; Jafari Kenari et al. 2013; Bagheripour et al. 2014), each with its pros and cons. This paper applies the SVR algorithm to well logs to obtain water saturation. The superiority of SVR over other algorithms lies in its high capability for model generalization and its low model error. In the next step, membership functions were used to assign a membership degree to each data point; in other words, the data are transformed into a fuzzy system in which each point takes a value in the (0,1) interval (Zadeh 1965). In this way, noise and outliers receive lower membership degrees and their influence on the final model decreases; better output is produced, and the modification of SVR to FSVR notably improves the results (Lim et al. 2002; Le et al. 2009). In this paper, three wells of a gas reservoir were utilized: two for training the algorithm and the third for testing. The well logs used in this study include the acoustic log (DT, interval transit time or slowness), neutron porosity (NPHI), density (RHOB), photoelectric absorption factor (PEF), gamma ray (GR, intensity of natural radioactivity), deep and shallow resistivity (LLD, LLS), and the Micro-Spherically Focused Log (MSFL). The determination coefficient calculated between core water saturation and the model predicted by FSVR shows better results than SVR.
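The membership-degree step can take many forms; one common choice in the FSVM literature (not necessarily the rule used by the authors) is a class-centre distance rule, where samples far from the feature-space centroid, which are likely noise or outliers, receive low weights. A minimal sketch on hypothetical log vectors:

```python
import numpy as np

def membership_degrees(X, delta=1e-3):
    """Class-centre membership rule: weight in (0, 1] that shrinks with
    distance from the centroid, down-weighting likely noise/outliers."""
    centre = X.mean(axis=0)
    d = np.linalg.norm(X - centre, axis=1)
    return 1.0 - d / (d.max() + delta)

# Hypothetical feature rows (e.g. NPHI, RHOB, GR per depth sample).
X = np.array([[0.20, 2.45,  60.0],
              [0.22, 2.40,  58.0],
              [0.21, 2.42,  62.0],
              [0.90, 1.10, 250.0]])   # an obvious outlier sample
s = membership_degrees(X)
print(s.argmin())  # 3: the outlier receives the smallest membership degree
```

These degrees can then weight each sample's contribution to the SVR loss, which is the essence of the SVR-to-FSVR modification described above.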
This study shows that the determination coefficient between predicted water saturation and core data is 71% for the SVR algorithm, while for FSVR it is 95%.