Journal of the Earth and Space Physics
https://jesphys.ut.ac.ir/
Tue, 23 Aug 2022
Improving diffractivity attribute to image faults using tapered local semblance in post-stack domain
https://jesphys.ut.ac.ir/article_86909.html
Diffractions carry useful information about subsurface features such as unconformities, faults, and pinch-outs; much of the fine-scale structural information in seismic data is encoded in them. Polarity reversal across the diffraction moveout curves generated at fault edges is a major challenge in seismic diffraction imaging. Over the last few decades, several conventional methods in the pre- and post-stack domains have been developed to characterize and locate diffractions, but most of these methods cannot handle the polarity reversal, some are time consuming, and others require corrections to deal with polarity changes, especially for diffractions caused by fault edges. Despite the large amount of research on diffraction imaging, very few studies have addressed the challenge of polarity reversal across moveout surfaces. We used the semblance function along hyperbolic moveout curves for diffractions whose travel times were calculated using the double-square-root equation. Both semblance and Kirchhoff migration fail to image diffractions from fault edges unless the polarity reversal is taken into account, because the diffraction moveout curve contains equal numbers of positive and negative wavelets. To solve this problem, we divided the global scanning window along the hyperbolic moveout surfaces into several sub-windows and performed the local semblance measurements over the sub-windows separately. Every point in the image domain is treated as a potential diffraction point, which we call an image point. The final semblance measure at each image point is calculated by averaging the semblance measurements from the sub-windows. We also contaminated the synthetic data with white Gaussian noise at different signal-to-noise ratios.
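The sub-window averaging described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes the moveout-corrected amplitudes along one hyperbola have already been gathered into a time-by-trace window, and the function names are our own.

```python
import numpy as np

def semblance(window):
    """Classic semblance of a 2-D window (time x traces): energy of the
    stacked trace divided by the total energy times the trace count."""
    stacked = window.sum(axis=1)
    num = np.sum(stacked ** 2)
    den = window.shape[1] * np.sum(window ** 2)
    return num / den if den > 0 else 0.0

def local_semblance(window, n_sub):
    """Average of semblance values over n_sub trace sub-windows, so a
    polarity flip between sub-windows no longer cancels the stack."""
    subs = np.array_split(window, n_sub, axis=1)
    return np.mean([semblance(s) for s in subs])

# A window whose left and right halves have opposite polarity: the
# global stack cancels, but each sub-window stacks coherently.
wavelet = np.sin(np.linspace(0, np.pi, 16))
window = np.column_stack([wavelet] * 4 + [-wavelet] * 4)
```

With this window, the global semblance collapses to zero while the two-sub-window local semblance stays at its maximum, which is the behavior the polarity-reversal argument above relies on.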
Results showed no significant differences, since random arrivals in seismic data do not influence the semblance measurement. As a next step to improve the diffraction imaging, we used a tapered local semblance to handle the interference of diffractions with dominant reflections, other events, and even other diffractions, especially at far offsets from the diffraction's apex. We call the proposed method the tapered local semblance method. The method weights the data from top to bottom along the time axis; we also use fewer traces at shallow parts and more traces at deeper parts to reduce the harmful effect of the interference. To accomplish this, we introduced a triangular taper that takes a few traces at the early-arrival parts and more traces at the late-arrival parts, instead of using a box with a constant number of traces in the aperture from the top to the bottom of the window. We tested several tapers with different apex angles to determine the optimum one. We evaluated both methods on synthetic data as well as a field-recorded dataset. Both methods required no polarity reversal corrections. The results show the ability of our workflow to achieve higher resolution and good localization for diffractions from fault edges in synthetic data. On the field-recorded dataset, the tapered local semblance method showed more diffractivity than the local semblance method.
Magnetic and IP/Res data inversion for investigation of the spatial relation between the geophysical models and mineralization in the southern Dalli Cu-Au porphyry deposit
https://jesphys.ut.ac.ir/article_85508.html
Because of declining high-grade ore deposits and increasing demand for metal resources, exploration of low-grade metal deposits, such as porphyries, has become feasible. Besides, humankind has exhausted most of the shallow metal ore deposits, and new prospecting projects focus on deeper deposits. Therefore, geophysical methods have gained more attention due to their ability to determine the physical properties of buried ore bodies. Hence, most countries, including Iran, make significant investments in the geophysical exploration of deep porphyry deposits. According to the widely accepted Lowell and Guilbert model for porphyry copper deposits, the ore-bearing zones concentrate mainly at the edge of the potassic alteration zone. Pyrite, a highly conductive and chargeable metallic mineral, is a significant attribute of the potassic alteration. The model also states that highly susceptible magnetite-bearing rocks occur mainly at the bottom of the pyrite shell and the ore body. Because susceptible and conductive metallic minerals such as magnetite and pyrite occur in the potassic zone near the ore body in copper and gold porphyry deposits, magnetometry, resistivity, and induced polarization methods give reliable information about the location, depth, and shape of the deposits. In this research, we focus on the magnetic and IP/Res data in the southern Dalli porphyry deposit, with promising Cu-Au indices, located in the Urumieh-Dokhtar ore-bearing zone, Markazi Province. First, we applied standard processing techniques to remove aliasing and the regional effect in the magnetic data. Then, using the analytic signal technique, we showed the concentration of the magnetic sources over the study area. We also applied the power spectrum and Euler deconvolution techniques to the magnetic data and estimated the depths of the magnetic sources.
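As an illustration of the power-spectrum depth estimate mentioned above, the standard relation ln P(k) ≈ c − 2zk gives the ensemble source depth from the slope of the radially averaged log power spectrum. The sketch below, with synthetic numbers, is ours and not the authors' implementation.

```python
import numpy as np

def spectral_depth(k, log_power):
    """Depth to an ensemble of magnetic sources from the slope of the
    radially averaged log power spectrum: ln P(k) ~ c - 2*z*k."""
    slope, _ = np.polyfit(k, log_power, 1)   # linear fit to ln P(k)
    return -slope / 2.0

# Synthetic spectrum for sources at 1.5 km depth (k in rad/km).
k = np.linspace(0.01, 0.5, 50)
log_p = 3.0 - 2.0 * 1.5 * k
depth = spectral_depth(k, log_p)
```

In practice the fit is restricted to the low-wavenumber, linear segment of the spectrum; the clean synthetic line here recovers the assumed 1.5 km depth exactly.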
The depth estimated from the power spectrum lies between the depths estimated from Euler deconvolution for possible sources with step and pillar shapes. Next, we used the average estimated depth from each of the depth-estimation techniques as the source depth in the depth weighting of a three-dimensional magnetic data inversion. We also studied the inversion results by combining the cross-section of the magnetic susceptibility model along the boreholes with the lithology and geochemical information from core-sample analyses. The results indicate that the higher grades of gold and copper occur at the edge of the magnetic sources and possible magnetite mineralization zones. The inversion results using depth weighting with the depth extracted from the power spectrum show the best correlation and spatial relation with the geochemical data. Besides the magnetic data inversion, applying the Oldenburg and Li algorithm for two-dimensional inverse modeling, we extracted the resistivity and chargeability models of the subsurface bodies along an IP/Res profile in the study area. The resulting chargeability models show a significant relationship with the presence of gold and copper mineralization. We also compared the resulting two-dimensional resistivity and chargeability models with the corresponding magnetic susceptibility at the cross-sections along the IP/Res profile. The comparison shows that the possible mineralization zones coincide with larger magnetic susceptibility values, high chargeability, and low resistivity. The results accord well with the Lowell and Guilbert model. Also, highly susceptible rock at shallower depth indicates that erosion has destroyed most of the possible orebody.
Determination of 3D seismic wave velocity in Zagros collision zone
https://jesphys.ut.ac.ir/article_87004.html
The Zagros orogenic belt was formed approximately 12 million years ago by the convergence between the Arabian and Eurasian plates upon the closing of the Neo-Tethys Ocean. The Zagros is one of the youngest such settings on Earth, at an early stage of this collision. Many multiscale geophysical studies have been performed in the Zagros region based on different seismic and non-seismic data. Based on these studies, the Zagros thrust belt has a crustal thickness of 45 ± 3 km, whereas beneath the Sanandaj-Sirjan zone the Moho depth increases significantly, up to 65 ± 3 km. Among the many geophysical studies of Zagros and surrounding areas, local earthquake tomography (LET), which uses travel-time data of both stations and earthquakes located in the study area, has never been performed for the entire Zagros. In this research, a 3D body-wave velocity model has been extracted using the arrival times of 7783 earthquakes from 2006 to 2018, recorded by the National Seismological Center and the broadband seismic network of Iran. The dataset used for tomography consists of 123,575 P- and 11,520 S-picks from 7783 events with magnitude greater than 2.5. We used the LOTOS code (Koulakov, 2009a), developed for simultaneous inversion of the 3D distributions of P- and S-wave velocity anomalies and source locations. In the first step, LOTOS determines initial source locations using tabulated travel times previously calculated in a 1-D velocity model. The iterative tomographic inversion algorithm includes the following steps: (1) source relocation in the updated 3-D velocity structure based on the bending ray-tracing method, (2) calculation of the first-derivative matrix, and (3) simultaneous inversion for P- and S-wave velocity anomalies, earthquake source parameters (four parameters per source), and station corrections. The inversion uses the LSQR method.
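The simultaneous inversion step reduces to a damped least-squares system G m = d, which LOTOS solves with LSQR. The toy sketch below solves the same kind of system with NumPy's lstsq as a stand-in for LSQR; the matrix, sizes, and damping value are invented for illustration.

```python
import numpy as np

# Toy travel-time tomography: rows of G hold ray-path lengths through
# model cells, m holds slowness perturbations, d the travel-time residuals.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 10.0, size=(40, 8))      # 40 rays, 8 cells
m_true = np.array([0.02, -0.01, 0.0, 0.03, -0.02, 0.01, 0.0, -0.03])
d = G @ m_true

# Damped least squares: minimize ||G m - d||^2 + damp^2 ||m||^2
# via an augmented system (LSQR solves the same problem at scale).
damp = 1e-6
G_aug = np.vstack([G, damp * np.eye(8)])
d_aug = np.concatenate([d, np.zeros(8)])
m_est = np.linalg.lstsq(G_aug, d_aug, rcond=None)[0]
```

With noise-free synthetic residuals and a well-conditioned toy matrix, the slowness perturbations are recovered essentially exactly; real tomographic systems are far larger, sparser, and noisier, which is why an iterative solver such as LSQR is used instead.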
The distribution of the estimated 3D velocity models correlates well with tectonic and geological conditions. The Vp and Vs anomalies, obtained independently, appear almost identical in the crust (depths less than 45 km). The low-velocity anomaly observed in the upper crust can be interpreted as the presence of Cambrian-Miocene sediments with a thickness of at least 10 km that are spread throughout the Zagros. In the vertical sections of the obtained velocity models, the Moho depth in the Sanandaj-Sirjan area increases significantly compared to the Zagros region. This increase in Moho depth is related to the subduction of the Arabian plate below the Central Iran micro-continent, which thickens the crust (double crust) in the Sanandaj-Sirjan region. Using the LOTOS code, an optimal one-dimensional velocity model for the whole Zagros collision zone is also presented. In this model, we can distinguish a ~10 km thick sedimentary layer (Vp ~4.90 km/s), the upper crust down to ~30 km (Vp ~5.54 km/s), and the lower crust down to ~45 km (Vp ~6.30 km/s).
Paleostress analysis and Evaluation of Movement potential of Dochah Fault, Central Iran
https://jesphys.ut.ac.ir/article_85441.html
The Qom region is one of the most significant areas in terms of geological features in Central Iran. Several studies have examined the Cenozoic strata in terms of sedimentology, stratigraphy, and paleontology, but few detailed structural data are available from this area. The most important exposures of rock units west of Qom city belong to the Eocene volcanics and the Lower Red, Qom, and Upper Red Formations. The major structures in this area are the Kamar Kuh and Mil anticlines, the Yazdan syncline, and the Dochah and Sefid Kuh faults. The Dochah Fault, with an E-W trend and a ~70° northward dip, is located at the northwestern termination of the Qom-Zefreh Fault, a recent sinistral strike-slip fault. This ~15 km long fault separates the Mil anticline from the Yazdan syncline and eliminates the southern limb of the Dochah overturned anticline. In this study, we focused on the damage zone of the Dochah Fault in order to perform paleostress analysis using the geometric and kinematic characteristics of fault-slip data obtained from the deformed Qom and Upper Red Formations. For this purpose, 100 fault-slip data with precise geometric and kinematic characteristics were measured in the field and analyzed with the Daisy software and the Rotax method. To determine the sense of shear on the faults, the criteria of Petit (1987) and Doblas (1998) were used. Although the trend of the major structures is east-west, most of the slip data are related to transverse oblique-slip faults, because the Dochah Fault passes through the soft materials of the Lower Red Formation and consequently it is difficult or impossible to find slickenlines on it.
Our results indicate the orientations of the maximum and minimum principal stress axes (σ1, σ3) as 030/05 and 285/05, respectively. Geometric and kinematic structural analysis of the Dochah Fault, together with the spatial arrangement of the principal stress axes, indicates predominantly left-lateral motion, especially in the western (Caspian-side) parts of the region, and an oblate field stress ellipsoid (R ≈ 0.7). Based on the shape of the field stress ellipsoid and the rotation of the fault data according to Anderson's theory for a compressive stress regime, a stress trajectory map was prepared. The arrangement of the maximum stress trajectories is consistent with the general stress regime in the Iranian crust and with the activity of the Dochah Fault. Different criteria have been proposed to evaluate the seismic activity of a fault. In empirical studies, there are various estimates for selecting the part of a fault on which movement may recur in each seismotectonic zone. Here, the movement potential of the Dochah Fault has been estimated by the method of Lee et al. (1997). In this method, the angular relationship between the maximum principal stress axis (σ1) and the pole of the fault plane is considered in order to evaluate the Fault Movement Potential (FMP) based on the relation FMP = f(G, σ). The angle between the maximum principal stress axis (σ1) and the pole of the Dochah Fault (θ) is ~40°, so FMP = (θ − 30°) / 30° = 0.33 for θ ∈ [30°, 60°]. This value of FMP indicates the low seismic potential of the Dochah Fault for movement and the generation of earthquakes.
Analysis and prediction of EOP time series using LSHE+ARMA method
https://jesphys.ut.ac.ir/article_85451.html
The rotation of the solid Earth with respect to inertial space is not constant, due to changes in external gravitational forces and internal dynamics. The Earth orientation parameters (EOP), including the Earth's polar motion (PM), anomalies in the Earth's angular velocity, and celestial pole offsets (CPO), describe these irregularities in the Earth's rotation. Anomalies of the axis defined by the celestial intermediate pole (CIP) with respect to the Z axis of the terrestrial reference system are named PM. The CPO are expressed as the deviations, dX and dY, between the observed CIP and the conventional CIP position. The difference between the smoothed principal form of universal time UT1 and coordinated universal time UTC denotes the Earth's rotation angle, which, together with the xp and yp terrestrial pole coordinates, forms a set of Earth orientation parameters (EOP). In addition to the other EOP, the length of day (LOD) is used to model anomalies in the Earth's rotation rate. LOD is the difference between the duration of the day measured by space geodesy and the nominal day of 86,400 s duration. Generally, the EOP are the parameters that provide the rotation from the International Terrestrial Reference System (ITRS) to the International Celestial Reference System (ICRS) as a function of time. Although the EOP are computed using modern space geodetic techniques such as Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), Satellite Laser Ranging (SLR), Very Long Baseline Interferometry (VLBI), and the Global Navigation Satellite System (GNSS), they are unavailable for real-time applications due to the complexity of the data processing.
Accurate and rapid EOP predictions are required in fields such as precise orbit determination of artificial Earth satellites, positional astronomy, space navigation, and the study of geophysical phenomena. There are many methods for the analysis and prediction of EOP time series, including deep learning methods, least squares (LS) combined with autoregressive (AR) models, and Singular Spectrum Analysis as a non-parametric method. In this research, Least Squares Harmonic Estimation (LSHE) analysis is used to investigate the frequencies of the EOP. First, the solid and ocean tide terms are modeled based on the IERS technical notes and removed from the LOD time series; the remaining series is named the LODR time series. Univariate time-series analysis is then applied to the LODR time series, and multivariate analysis is used to detect the PM periodic patterns. Applying these methods to 40 years of EOP observations (1 January 1980 to 31 December 2020) revealed the Chandler, semi-Chandler, annual, and semi-annual signals as the main periodic signals in the EOP time series. The functional model is then formed using all detected signals in order to model the deterministic variations of the EOP time series. To model the remaining non-deterministic variations, an ARMA (Autoregressive Moving Average) model is fitted to the least-squares residuals. Akaike's Information Criterion (AIC) is used to determine the optimal order of the ARMA model. The EOP are then predicted for the first 20 days of 2021, using the pre-identified functional model for the deterministic part and the ARMA model for the non-deterministic part of the time-series variations.
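The deterministic part of the modeling described above amounts to a least-squares fit of harmonic terms at the detected periods. A minimal sketch, our own and not the authors' implementation, with synthetic amplitudes and an assumed daily sampling:

```python
import numpy as np

def ls_harmonic_fit(t, y, periods):
    """Least-squares harmonic estimation: fit a bias plus cos/sin pairs
    at the given periods, returning coefficients and residuals."""
    cols = [np.ones_like(t)]
    for P in periods:
        w = 2 * np.pi / P
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, y - A @ coef

# Synthetic polar-motion-like series: annual (365.25 d) + Chandler-like
# (433 d) terms, sampled daily over ~11 years.
t = np.arange(0.0, 4000.0)
y = 0.10 * np.cos(2 * np.pi * t / 365.25) + 0.17 * np.sin(2 * np.pi * t / 433.0)
coef, resid = ls_harmonic_fit(t, y, [365.25, 433.0])
```

In the actual workflow, an ARMA model would then be fitted to `resid`, with the order chosen by minimizing the AIC.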
For the prediction of the LOD time series, after creating the functional model of the LODR time series, the solid and ocean tide terms are added back to the functional model of LODR. Finally, to validate the accuracy of the proposed method, a comparison is made with an EOP prediction study that used the ANN (Artificial Neural Network) and ANFIS (Adaptive Network-Based Fuzzy Inference System) methods for short-term prediction of EOP. The results show that the accuracy of the proposed method is better than that of the previous study and that the method can be used for accurate prediction of EOP time series.
Analysis of the behavioral pattern of the basic parameters of foreshocks with the aim of predicting big earthquakes in Iran
https://jesphys.ut.ac.ir/article_86921.html
The analysis of the basic parameters of foreshocks is one of the most applied lines of research for earthquake risk reduction, because identifying the behavioral pattern of foreshocks can help researchers detect active fault conditions in different areas. Accurate analysis of these parameters also makes earthquake prediction studies more effective. In this study, we examine the behavioral pattern of foreshocks in different tectonic zones of Iran. This research was conducted to predict the probability of earthquakes with M > 5 in Iran. Accordingly, the basic seismic parameters of foreshocks (including the relationship between the depth and magnitude of foreshocks) are analyzed with the aim of predicting big earthquakes in various zones over a ten-year period (2007 to 2017). The results suggest that there are certain similarities in the magnitude-depth models within a given zone, while the models differ between zones. Therefore, this can be used as a precursor in earthquake prediction for M > 5 in different zones of Iran. The important results of this article are as follows:
- Investigation of the seismicity parameters of foreshocks, regarding the relationship between the focal depth of the main earthquake and the frequency of foreshocks (used in some parts of the world as an earthquake precursor), suggests that main shocks with M > 5 and shallow depth have more abundant foreshocks (Fig 2).
- Given the relationship between fault type and the occurrence or non-occurrence of aftershocks in different parts of the world, for earthquakes greater than magnitude 5 in Iran, earthquakes on reverse faults have relatively more recorded aftershocks than those on strike-slip faults.
- The statistical study conducted here shows that for earthquakes on reverse faults, the frequency of foreshocks increases with magnitude, whereas no such pattern is seen for earthquakes on strike-slip faults.
- This study also shows that many earthquakes, especially in the Zagros zone and near salt domes, happened without foreshocks. This is related to the effect of salt domes on fault movement, changing it from slip to creep. Creep is a gradual movement and is not usually accompanied by the rapid slip that leads to large, recordable earthquakes.
- Based on the present study, foreshocks can be used more confidently as an earthquake precursor for the Zagros (especially its northern and central parts), Central Iran, and Sanandaj-Sirjan, because in these zones earthquakes occur with more foreshocks.
- In the Zagros and Central Iran zones, the relationship between variations in the depth and magnitude of foreshocks is fruitful for predicting main shocks.
- For other zones, a more complete data bank containing earthquakes with a higher frequency of foreshocks is needed. Based on such a data bank, suitable relations and models can be developed for studying foreshocks with the aim of predicting big earthquakes.
Observing Pre-flare Very Long-period Pulsations for 12 Solar Flares as a Sign of Flare's Onset
https://jesphys.ut.ac.ir/article_86908.html
Solar flares are sudden bursts in the solar atmosphere, with emissions from radio wavelengths up to gamma rays; according to their energy they are classified into classes A, B, C, M, and X. The magnetic energy in flares is released by magnetic reconnection, which is often driven by a complex magnetic field. Flares accelerate many electrons and ions, raising their energies to relativistic levels. These accelerated particles play a very important role in the release of large solar flare energies. Most flares emit in the visible spectrum and sometimes in X-rays and ultraviolet, radiated mainly from the photosphere and chromosphere in concentrated sources called footpoints and ribbons. These emissions occur when the lower layers of the Sun's atmosphere heat up during a flare; this heating, due to particle collisions, probably plays an important role in the flare process. In addition, flares emit high-energy radiation such as hard X-rays (HXR) from electrons and gamma rays from ions, while the main part of the emission is soft X-rays and energetic particles. The emission from a large flare or solar mass eruption (with an energy more than J), when reaching the Earth, can have destructive effects on the Earth's atmosphere, as well as on satellite orbits and on the magnetic and electrical equipment of ships and airplanes. Therefore, predicting the time of flare occurrence and determining its class can help reduce these destructive effects. One of the observable structures that can be seen before a flare occurs is oscillation with very long-period pulsations (VLPs), of the order of 8-30 minutes, occurring about one to two hours before the flare onset; these were first reported by Tan et al. (2016) in the pre-flare phase.
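A pre-flare VLP period of the kind described above can be read off the FFT power spectrum of a light curve. The sketch below is a hypothetical illustration with a synthetic 20-minute pulsation, not GOES data; the function name is ours.

```python
import numpy as np

def dominant_period(signal, dt):
    """Dominant period (same units as dt) from the FFT power spectrum,
    ignoring the zero-frequency (mean) term."""
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    return 1.0 / freqs[np.argmax(power[1:]) + 1]

# Synthetic pre-flare light curve: 2 h sampled every 12 s, with a
# 20-minute (1200 s) pulsation superposed on a weak slow trend.
t = np.arange(0, 7200, 12.0)
flux = 1.0 + 0.001 * t / 7200 + 0.05 * np.sin(2 * np.pi * t / 1200.0)
period = dominant_period(flux, 12.0)
```

The recovered period falls on an exact FFT bin here (7200 s / 6 = 1200 s); for real light curves the frequency resolution is limited by the window length, which is why pre-flare windows of one to two hours are needed to resolve 8-30 minute periods.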
MHD oscillations and longitudinal electric currents in flare loops are appropriate candidates to explain the formation of VLPs. Investigating pre-flare VLPs can also help us understand the origin of flares. With the help of soft X-ray (SXR) observations from the GOES satellite during the pre-flare phase, these pulsations can be observed on time scales similar to those seen during flare processes. In this paper, using the abovementioned data, we selected eighteen flares for study, of which 6 are in class C and 12 in class M. Of these, twelve had typical VLPs before flare onset, all of which were in the M class except one. The periodicity that we calculated for the VLPs of these flares with the Fast Fourier Transform is 14 to 28.9 minutes, in agreement with the results of Tan et al. (2016). The number of pulses observed in each pre-flare phase is between 3 and 7. For the six remaining flares of our selection, no typical pre-flare VLP was observed; all but one of them were in class C.
On the design and implementation of digital filters to process meteorological signals
https://jesphys.ut.ac.ir/article_85460.html
Separating different frequency bands in the complex, combined signals of meteorological variables and climatic indices requires digital filtering methods, so that the information in different frequency bands can be organized and used. Given that these signals generally exhibit complex and nonlinear behavior, mathematical filtering methods that identify their stochastic and periodic components lead to a better understanding of their behavior and help in modeling them as well. Therefore, using digital filters to recognize regular variability and facilitate statistical forecasting is one of the main goals in this field. These filters can be designed and implemented in both the time and frequency domains. In frequency space, the process is based on the Fourier transform of the signals, computed with the Fast Fourier Transform (FFT) algorithm, from which the variance of the desired signal at different frequencies can be extracted by spectral analysis. By employing different types of non-recursive and recursive digital filters, which can be implemented as low-pass, high-pass, band-pass, and band-stop, the corresponding time-domain signal for each case can be constructed and its spectrum studied. The isolated spectrum can be related to the effect of a particular phenomenon that influences the main signal. In addition, it is possible to remove high-frequency components from the original signal, which include noise and may not contain important information, and the original signal can be optimally smoothed. In this study, different digital filters were designed and then applied to meteorological data such as monthly surface temperature and precipitation. Two synoptic stations over Iran were selected, and the corresponding discrete monthly signals were constructed for 504 months during 1979-2021.
The moving average (MA) filter is then used as the main filter, because it is the most common filter in digital signal processing (DSP) and the easiest digital filter to understand and use. In spite of its simplicity, the moving average filter is optimal for the common task of reducing random noise while retaining a sharp step response, which makes it the premier filter for time-domain encoded signals. The filtering in this study is conducted to denoise the original signals and to examine their seasonal, annual, and inter-annual components. Since the employed filters are digital, they must be applied to the initial discrete signal as a convolution with the finite impulse response (FIR) of the filter in the time domain, or as a multiplication in the frequency domain based on the discrete Fourier transform, followed by the inverse Fourier transform to recover the desired signal. The results of this study show the importance of digital filters in analyzing the spectral content of meteorological signals. Furthermore, the Hamming filter, defined by cosine truncation and windowing, attenuates the Gibbs oscillations in the sidelobes of the filter frequency response better than the simple moving average (MA) filter. In addition, a correlation analysis was carried out to indicate the linear relationships between different frequency components of the signals. The highest correlations are observed in the annual frequency bands of the temperature and precipitation signals for the selected stations, reflecting the external climate forcing on both temperature and precipitation that stems from the Earth's motion around the Sun during a year.
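The moving-average filtering described above is just a convolution with a boxcar FIR kernel. A minimal sketch, ours rather than the study's code, using a synthetic noisy step rather than station data:

```python
import numpy as np

def moving_average(x, n):
    """FIR moving-average filter: convolution with a length-n boxcar of
    weight 1/n, keeping the same output length as the input."""
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode="same")

# Noisy step: the MA filter suppresses the random noise while the step
# itself is only smeared over about n samples.
rng = np.random.default_rng(1)
step = np.concatenate([np.zeros(100), np.ones(100)])
noisy = step + 0.2 * rng.standard_normal(200)
smooth = moving_average(noisy, 11)
```

An 11-point MA reduces white-noise standard deviation by roughly 1/sqrt(11); a Hamming-windowed kernel in place of the boxcar would trade a slightly wider transition for much lower sidelobes, which is the comparison made in the study.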
Obviously, using more weights in the design of a filter can improve the filtering performance, but using more weights than necessary should be avoided.
A study of clear air turbulence by spontaneous imbalance theory
https://jesphys.ut.ac.ir/article_85450.html
Emission of inertia-gravity waves (IGWs) through imbalance is a well-known cause of clear air turbulence (CAT) in the upper troposphere. IGWs may initiate CAT by locally modifying environmental characteristics of the flow such as static stability and wind shear. CAT is a micro-scale phenomenon for which there are also mechanisms other than IGWs. Accurate forecasting methods using numerical models and CAT diagnostic indices are still being studied and developed (Sharman and Lane, 2016). Following Knox et al. (2008) (hereafter KMW), the current study focuses on detecting CAT via spontaneous imbalance theory and the effect of IGWs on the flow. For this purpose, the life cycle of baroclinic waves, including their phases of growth, overturning, and decay, as well as the generation and propagation of IGWs, is investigated by numerical simulation using the Weather Research and Forecasting (WRF) model in a channel of 4000 km length, 10000 km width, and 22 km height in the zonal, meridional, and vertical directions, respectively, on the f plane, with a horizontal resolution of 25 km and a vertical resolution of 0.25 km. Based on the wave-vortex decomposition (WVD) method, the unbalanced flow and the dimensional and non-dimensional IGW amplitudes have been estimated. In the next step, the non-dimensional wave amplitude has been alternatively determined, for reference, based on the Lighthill-Ford theory of spontaneous imbalance in the KMW method. Then the turbulent kinetic energy (TKE) dissipation and eddy dissipation rate (EDR) have been calculated to determine the intensity and location of CAT. The results show that the KMW method uses a proportionality constant to make the non-dimensional wave amplitude of the order of the Rossby number and determines the constant empirically by matching distributions of pilot reports of turbulence to the pattern of TKE dissipation.
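One of the diagnostics used alongside the EDR here, the gradient Richardson number, can be computed directly from profiles of potential temperature and wind. The sketch below, with invented jet-like numbers, illustrates how low Ri flags a CAT-prone layer; it is our own illustration, not the study's code.

```python
import numpy as np

def richardson_number(theta, u, z, g=9.81):
    """Gradient Richardson number Ri = N^2 / S^2 on layer midpoints,
    with N^2 = (g/theta) * dtheta/dz and shear S = du/dz."""
    dth = np.diff(theta) / np.diff(z)
    du = np.diff(u) / np.diff(z)
    th_mid = 0.5 * (theta[:-1] + theta[1:])
    n2 = g / th_mid * dth
    return n2 / du ** 2

# Strong shear below a jet core with weak stratification: Ri < 0.25
# is the classical threshold for shear-instability (CAT-prone) layers.
z = np.array([9000.0, 9250.0, 9500.0])        # height, m
theta = np.array([320.0, 320.5, 321.0])       # potential temperature, K
u = np.array([30.0, 45.0, 60.0])              # wind speed, m/s
ri = richardson_number(theta, u, z)
```

With these numbers both layers give Ri well below 0.25, the kind of jet-exit signature that coincides with the EDR maxima in the simulation.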
For this reason, the EDR fits best with the location of observed CAT and the minimum of the Richardson number. By contrast, most of the non-dimensional wave amplitudes calculated by the WVD and harmonic divergence analysis are less than unity and of the order of the Rossby number itself. On day 8, when the baroclinic wave and IGWs are at their peak of activity, the EDR distribution from the WVD indicates moderate turbulence all around the jet stream region, with the maximum EDR values located below the jet core and in the jet-exit region, similar to the location of wave activity and of CAT in previous studies. The minimum values of the Richardson number are also at the jet-exit region, where the EDR maxima indicate moderate turbulence. The EDR distribution from KMW, unlike that from the WVD, shows no sign of turbulence in most areas of the flow, except in a few patchy places near the jet region, where moderate turbulence is predicted. Thus, an optimal WVD could improve the accuracy of detecting the unbalanced parts of the flow and of predicting areas of CAT in the upper troposphere near the jet stream.
Deterministic and Fuzzy Evaluation of Human and Climate Contributions in Changing Hydrologic Regime: A Case Study of the Gorganrood Watershed at Tamar River Hydrometric Station
https://jesphys.ut.ac.ir/article_87081.html
Human activity and climate are two major socio-hydrologic drivers that determine hydrological regimes and patterns. In this regard, Land Use and Land Cover (LULC) changes, agricultural development and related human activities have influenced these regimes through hydro-climatological components on global and regional scales. The effects of each driver on the variation of hydrological components have been assessed in different studies, but these approaches are not accurate enough at watershed scales that experience the simultaneous impacts of climate dynamics and LULC changes. Various studies have considered both climate and human alterations in the hydrological cycle and quantified their contributions in such basins. The results of this research can help decision-makers weigh the pros and cons of water and land use policies. The Gorganrood watershed is an important basin in the northern part of Iran, especially from the agricultural point of view, and has experienced considerable changes in hydrology and extreme events. While the consequences of climate change and of LULC change have each been assessed in the watershed, no study has considered the complicated interactions of these drivers. In this paper, the authors first evaluated the contributions of LULC and climate change to the variation of streamflow. Second, a modified fuzzy arithmetic method was used to obtain their fuzzy contributions. To this end, the computational period was first divided into two temporal spans, known as the reference and affected periods. The reference period is the first span, in which climate controls the hydrological responses; the statistical behavior of the time series then changes due to human activities, defining the affected period. Two hydrological models, the Soil and Water Assessment Tool (SWAT) and a black-box Artificial Neural Network (ANN), were used to simulate the streamflow in the watershed. 
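The reference/affected split described above underlies a simple differential attribution once a model calibrated on the reference period is available. A minimal sketch, using purely hypothetical flow values rather than the study's data:

```python
# Simple differential attribution of streamflow change (illustrative
# sketch; the flow values below are hypothetical, not the study's data).

def attribute_change(q_obs_ref, q_obs_aff, q_sim_aff):
    """q_obs_ref: observed mean flow in the reference period;
    q_obs_aff: observed mean flow in the affected period;
    q_sim_aff: flow simulated for the affected period by a model
    calibrated on the reference period (climate-only response)."""
    climate = q_sim_aff - q_obs_ref   # change explained by climate alone
    human = q_obs_aff - q_sim_aff     # residual attributed to human activity
    total = abs(climate) + abs(human)
    return {"human_share_%": 100 * abs(human) / total,
            "climate_share_%": 100 * abs(climate) / total}

# Hypothetical mean flows (m^3/s)
print(attribute_change(q_obs_ref=12.0, q_obs_aff=8.0, q_sim_aff=10.5))
```

In the fuzzy version, the three inputs would become fuzzy numbers evaluated at each &alpha;-cut.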
The hydrological models showed generally acceptable performance in simulating the recorded streamflow at the Tamar hydrometric station, although the results of the conceptual model (SWAT) showed that its performance in the dry season is not as good as in the wet season. In the next step, the contributions of human activities and climate were assessed via two different methods. The first is a simple differential method, which compares the projection of the calibrated model in the second period with observations in both periods. The second set of contribution rates was calculated using the climate elasticity method, via recorded monthly data and derivation rules. In the first method, the contribution rate of human activities is significantly higher than that of climate change; the result of the second method is the reverse. Because of differences in the methods&rsquo; concepts, the calculated contribution rates differ. To assess the uncertainty associated with the estimations, a novel approach was developed using fuzzy mathematics. The uncertain version of the contribution rates showed that at each &alpha;-cut (fuzzy uncertainty level), the contribution of human alteration (LULC change), as the most important human intervention, is more significant than climate drivers. In other words, during the simulation period, the effect of LULC change on the flow was very noteworthy, while climate change had relatively less effect on the behavioral change of the flow.
Investigation of Seasonal dust in northeastern Iran and numerical simulation of extreme dust events using WRF-CHEM model
https://jesphys.ut.ac.ir/article_86910.html
In recent years, dust storms have become a serious environmental concern and have attracted much attention among atmospheric scientists. Northeastern Iran is a large and strategic population area. Due to its proximity to large arid regions in Central Asia, this region has a high risk of experiencing dust events and has recently faced many problems regarding dust phenomena. This study investigates seasonal dust events in northeastern Iran. To achieve this goal, a combination of station data, reanalysis, satellite data and output of the WRF-Chem numerical model has been used to improve our understanding of the dust seasonal cycle in the region. Accordingly, this research is organized in two parts: monitoring and modeling of the dust phenomenon. The results may be useful for forecasting dust storms as well as for spatial planning. To investigate the seasonal variability of dust events, the dust surface mass concentration of the MERRA-2 dataset and the aerosol optical depth (AOD) of the combined Dark Target (DT) and Deep Blue (DB) algorithms of the MODIS sensor on the Terra and Aqua satellites were examined over a long-term period (2004-2018). Since dust emission is highly dependent on biophysical components, numerical models are necessary; the WRF-Chem model was used for this purpose. The study area includes northeastern Iran and parts of Central Asia. The simulation was performed with a horizontal resolution of 30 km in the child domain and 32 vertical levels. NCEP/FNL data with a 3-hourly time step and 1-degree horizontal resolution were used as boundary conditions for the model configuration. Four extreme dust events were selected to investigate the transport of dust to northeastern Iran: November 13, 2007; May 29, 2008; June 8, 2015; and October 17, 2017. 
The case events were simulated with a time step of 180 seconds and output every three hours using the GOCART, AFWA, UoC_S01 and UoC_S11 schemes. The results showed that the maximum dust activity occurred in spring, with an AOD value of 0.59 and a dust surface mass concentration of 645.2 &micro;g m-3; summer ranks second. Seasonal analysis of AOD and dust using satellite and reanalysis data showed that Aralkum, Kyzylkum, Karakum and Kara-Bogaz-Gol are the main dust sources in Central Asia and are active in all seasons. Comparison of dust simulation results for the PM2.5 and PM10 variables with observational data from air quality control stations in Mashhad showed that the GOCART scheme can depict dust events well, with a low bias relative to station data. The correlation between simulation and observation showed that the GOCART scheme explains nearly 90% of the variance of the data, and its root mean square error (RMSE) is less than 20 micrograms per cubic meter for PM2.5. Accordingly, the GOCART scheme is suitable for dust studies in northeastern Iran, and the WRF-Chem model can be used to operationally forecast dust storms. The dust detection algorithm (DDA) of the AIRS sensor and the AOD of the MODIS sensor confirm the contribution of the mentioned sources in transporting dust to northeastern Iran. The results showed that three of the case studies occurred as a result of the passage of an extratropical Rossby wave and the deepening of a trough over Turkmenistan. 
In contrast, the summer case study resulted from a summer circulation pattern, with an anticyclonic circulation over southern Turkmenistan and northeastern Iran established simultaneously with a cyclonic circulation over the Sistan plain and the southeastern parts of the country.
Statistical modeling of the mean annual temperature at Mehrabad station, Tehran
https://jesphys.ut.ac.ir/article_85447.html
Given climate change and global warming, the future behavior of climatic elements needs to be predicted. Therefore, in this study, a set of ARIMA statistical models was fitted to the time series of mean annual temperature at Mehrabad station in Tehran during 1951-2015, and the most appropriate model was identified by trial and error. Since the observations had a normal distribution, modeling was performed on the time series without applying a Box-Cox transformation. First, to investigate stationarity, the time series of annual mean temperature observations was plotted; in addition, first- and second-order regression line equations were used to further confirm the type of behavior of the series. The results showed that the behavior of the temperature time series at this station is linear, so the order d = 1 was chosen. Second, first-order differencing was performed on the time series. Third, the orders p and q were determined using the autocorrelation and partial autocorrelation of the differenced values. After investigating the significance of the orders of the components of each model, the following models were selected as significant: 1) ARIMA(0,1,1), 2) ARIMA(2,1,0). Since the first significant model was viewed with suspicion, each of the components (p, d, q) of the two models was tested up to the third order; finally, these two models were retained as significant. The Akaike information criterion (AIC) was then used to determine the most appropriate of the two, and the ARIMA(0,1,1) model had the minimum AIC. 
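For Gaussian errors, the AIC used to rank the two candidates can be computed from the residual sum of squares as AIC = n&middot;ln(RSS/n) + 2k. A minimal sketch, with synthetic stand-in residuals rather than the station's actual fits:

```python
import numpy as np

def gaussian_aic(residuals, n_params):
    """AIC for a model with Gaussian errors: n * ln(RSS / n) + 2k."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    rss = np.sum(r ** 2)
    return n * np.log(rss / n) + 2 * n_params

# Synthetic stand-ins for the residuals of the two fitted candidates
rng = np.random.default_rng(0)
res_011 = rng.normal(0.0, 0.4, 65)   # stand-in for ARIMA(0,1,1) residuals
res_210 = rng.normal(0.0, 0.5, 65)   # stand-in for ARIMA(2,1,0) residuals

aic_011 = gaussian_aic(res_011, n_params=1)  # one MA coefficient
aic_210 = gaussian_aic(res_210, n_params=2)  # two AR coefficients
best = "ARIMA(0,1,1)" if aic_011 < aic_210 else "ARIMA(2,1,0)"
print(best)
```

The model with the smaller AIC balances goodness of fit against the number of parameters.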
As a result, using this model, the temperature time series at this station was forecast beyond the end of the period for a length equal to one quarter of the original series. Given the uncertainty that underlies descriptive and inferential statistics, the reliability of the model should be established with further tests. In this regard, statistical tests of autocorrelation, the Pearson correlation coefficient, standard normal homogeneity, cumulative deviations, turning points and signs were applied to the residual time series of the ARIMA(0,1,1) model, along with graphical methods for residual normality, residual independence and constant residual variance, and the portmanteau test, to increase the statistical reliability of the applied model. All statistical tests showed that the residual time series of the model is random. These tests confirmed that the best model for the time series of mean annual temperature at Mehrabad station, Tehran, is ARIMA(0,1,1). Since the upper and lower limits of the predicted series, as well as the predicted observations, show the same behavior as the temperature time series at Mehrabad station, the estimated values of this model remain appropriate for predicting the temperature at this station. Finally, the results showed that the mean temperature of the predicted series is likely to be 17.742 &deg;C, and the mean annual temperature will increase by 0.038 &deg;C compared to the previous year.
Effect of non-thermal and trapped electrons on solitary waves and chaos in auroral acceleration regions
https://jesphys.ut.ac.ir/article_86902.html
In this paper, using the reductive perturbation method, the propagation of nonlinear solitary waves and the chaos phenomenon and its stability were studied in auroral acceleration regions in the presence of electrons with the Cairns-Gurevich distribution function. Using the continuity, momentum transfer and Poisson equations, taking the electron density from the Cairns-Gurevich distribution function, and using two different models, the Korteweg&ndash;De Vries (KdV) and modified KdV equations were obtained. It was shown that the solutions of these equations are in the form of solitary waves. The effects of non-thermal and trapped electrons and of the wave velocity on these waves were studied. In the next section, the pseudo-potentials and total mechanical energy are obtained. Considering a quasi-periodic perturbation, the KdV and modified KdV equations were revisited, and chaos and its stability were studied in the auroral acceleration regions. Results showed that by increasing the wave velocity and the non-thermal and trapped parameters, the amplitude of the field increased and the depth of the potential well also increased; these results confirm each other. It was indicated that in the case b=0, this distribution function reduces to the Maxwellian distribution function. For b&gt;0, in addition to free particles, trapped and non-thermal particles also affect the distribution function; in this case the distribution function becomes wider, indicating that more energetic electrons are present. It is also concluded that, for both nonlinear equations, the solutions can exist in the form of rarefactive and compressive solitons. Three-dimensional graphs of the total mechanical energy were also plotted for different values of the wave velocity and the non-thermal and trapped parameters. 
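As an illustration of the solitary-wave form, the textbook single-soliton solution of the standard KdV equation u_t + 6uu_x + u_xxx = 0 (generic coefficients, not the paper's plasma-specific equations) can be evaluated as:

```python
import numpy as np

def kdv_soliton(x, t, c):
    """Single-soliton solution u = (c/2) sech^2(sqrt(c)/2 * (x - c*t))
    of the standard KdV equation u_t + 6 u u_x + u_xxx = 0."""
    xi = np.sqrt(c) / 2.0 * (x - c * t)
    return c / 2.0 / np.cosh(xi) ** 2

x = np.linspace(-20.0, 20.0, 2001)
for c in (0.5, 1.0, 2.0):   # faster solitons are taller and narrower
    u = kdv_soliton(x, t=0.0, c=c)
    print(f"c={c}: peak amplitude = {u.max():.3f}")   # peak equals c/2
```

The amplitude&ndash;speed&ndash;width coupling shown here is the generic soliton behavior that the paper's rarefactive and compressive solutions share.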
Results for this case also showed that, for the total energy E1, increasing the b parameter makes the energy deviate from a uniform function and approach a saddle state; the effect of the wave velocity was similar to that of the b parameter. For different values of the U and b parameters, the behavior of the total energy E2 differed from that of E1. Poincar&eacute; return maps confirmed the existence of a closed cycle indicating chaos in these plasmas. The results of this section also showed that, for solitons with the function &psi;1, increasing the U parameter enlarges the Poincar&eacute; return map cycle region, with the return map lines more concentrated in this case. For solitons with &psi;1 functions, increasing the wave velocity takes the Poincar&eacute; return map from a quasi-stable state to a stable state, while increasing the quasi-periodic frequency takes it from a steady state to a quasi-steady state, so that one cycle converts into two cycles with a certain overlap. Finally, using realistic parameters, the wave velocity was found to lie in the interval 13 km/s &lt; v&#039; &lt; 52 km/s, the electric field was approximately 5 mV/m and the Debye length was about 15 m. These results are in good agreement with observations from the Viking, Freja and S3-3 satellites.
Numerical solution of two-layer shallow water equations using mode splitting method
https://jesphys.ut.ac.ir/article_86907.html
In numerical models that use iterative methods to solve the momentum equations under the rigid-lid approximation, the number of iterations, and therefore the processing time, increases at high resolution. An alternative is to use a free surface and split the equations into barotropic and baroclinic modes. Surface gravity waves, which are much faster than the slow-moving internal gravity waves, impose a limitation on the time step through the CFL condition. The mode splitting method is therefore computationally efficient, handling multiple time steps by separating the barotropic and baroclinic mode equations: the barotropic mode equations are solved at small time steps consistent with the fast surface gravity wave speeds, and the baroclinic mode equations are solved at larger time steps consistent with the slow internal gravity wave speeds. This approach is used in most ocean circulation models and is an unavoidable choice for high-resolution models. In this study, we considered the shallow water equations for a two-layer basin in vorticity-divergence formulation, using the mode splitting method with a small barotropic time step nested within a larger baroclinic time step. The primary systems of equations, which contain both upper- and lower-layer variables, are rewritten in terms of new (barotropic and baroclinic) variables without any further approximation of the primary systems. This procedure can be extended to multi-layer systems, so that a primary N-layer system of equations becomes one system of barotropic mode equations and N-1 systems of baroclinic mode equations coupled together. For the numerical experiments, a fully baroclinic (non-barotropic) initial condition is considered in a constant-depth rectangular domain with 64, 128 and 256 grid points in each direction and periodic boundaries. For spatial differencing, a second-order centered scheme with low computational cost and a fourth-order compact scheme with high computational cost are used. 
For time integration, a semi-implicit discretization based on the leapfrog scheme is implemented with the Robert-Asselin time filter for both the barotropic and baroclinic systems of equations. The mode splitting method may exhibit numerical instabilities at larger baroclinic time steps, despite the CFL-based time step limits of each system of barotropic and baroclinic mode equations taken individually; here, this is controlled by increasing the time filter coefficient to some extent. First, we solve the baroclinic mode equations to obtain all baroclinic variables needed to solve the barotropic mode equations during a baroclinic time step. These variables can either be held constant up to the next baroclinic time level or determined by time interpolation between two successive baroclinic time levels. To assess the performance of the numerical method, the relative error of energy conservation is calculated. Results show that for baroclinic time steps up to 20 times the barotropic time step, the time evolution of the barotropic and baroclinic variables corresponds well to the basic state, in which the barotropic mode has the same time step as the baroclinic mode. When this ratio increases further, the deviations from the basic state become more pronounced. These errors grow with the fourth-order compact scheme to the point of numerical instability, so the time filter coefficient had to be increased, whereas the second-order scheme is not sensitive and stays stable with a small coefficient. 
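The leapfrog/Robert-Asselin combination can be sketched on a single oscillation equation d&psi;/dt = i&omega;&psi;, a stand-in for one mode of the barotropic or baroclinic system (the filter coefficient &nu; below is illustrative):

```python
import numpy as np

def leapfrog_ra(omega, dt, nsteps, nu=0.1):
    """Leapfrog integration of d(psi)/dt = 1j*omega*psi with the
    Robert-Asselin time filter (coefficient nu) applied each step."""
    tendency = lambda psi: 1j * omega * psi
    old = 1.0 + 0.0j                          # filtered value at level n-1
    cur = old * np.exp(1j * omega * dt)       # exact first step
    for _ in range(nsteps - 1):
        new = old + 2.0 * dt * tendency(cur)  # leapfrog step
        # filter the middle level to damp the spurious computational mode
        old = cur + nu * (old - 2.0 * cur + new)
        cur = new
    return cur

psi = leapfrog_ra(omega=1.0, dt=0.01, nsteps=1000)
print(abs(psi))   # close to 1: the filter only weakly damps the physical mode
```

Increasing nu strengthens the damping of the computational mode, which is how the instabilities mentioned above are controlled, at the cost of slightly damping the physical mode as well.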
Moreover, holding the baroclinic variables constant when solving the barotropic mode equations makes the fourth-order compact solution unstable for large baroclinic time steps, whereas time interpolation provides a more stable solution and performs well with both spatial schemes.
Assessment of the performance of cumulus and boundary layer schemes in the WRF-NMM model in simulation of heavy rainfalls over the Bushehr Province during 2000-2020
https://jesphys.ut.ac.ir/article_86901.html
The mesoscale numerical weather prediction system Weather Research and Forecasting (WRF), with its two cores ARW and NMM, has been used for atmospheric research, operational forecasting and dynamical downscaling of global climate models. Many parameterizations are available for each physics option in this model. The performance of the model depends on the selected configuration and varies between areas, so choosing the configuration with the lowest error for each region is essential. Here, the performances of various physics schemes, including cumulus and boundary layer schemes of the WRF-NMM model, were examined by simulating the twelve heaviest extreme rainfall events in the southwest of Iran, the Bushehr Province, during 2000-2020; these events lasted eighteen days in total. Three domains with 27, 9 and 3 km resolution were used in the model configurations, with no cumulus option for the innermost one. The initial and boundary conditions were taken from the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 (ERA5) datasets. One hundred and eight simulations were run using the six cumulus schemes KF, BMJ, SAS, oldSAS, NSAS and TiedTKE, and seventy-two runs were made to evaluate the boundary layer schemes MRF, MYJ, QNSE and YSU. The simulated precipitation patterns were assessed against two observational data sets: (I) in-situ measurements from eleven automatic weather stations and (II) grid-point data from the Global Precipitation Measurement (GPM) satellite with 0.1-degree horizontal resolution. Four statistical indices, root mean square error, correlation coefficient, standard deviation and bias, were applied in the evaluation. The evaluation against the data measured at the 11 automatic weather stations used the outputs of the third domain, while the outputs of the second domain were used for the evaluation based on the GPM grid-point data. 
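The four evaluation indices can be computed as follows; the rainfall values in this sketch are hypothetical, not data from the study:

```python
import numpy as np

def eval_stats(sim, obs):
    """Root mean square error, Pearson correlation, standard deviation
    of the simulation, and bias between simulated and observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return {
        "rmse": float(np.sqrt(np.mean((sim - obs) ** 2))),
        "corr": float(np.corrcoef(sim, obs)[0, 1]),
        "std_sim": float(np.std(sim, ddof=1)),
        "bias": float(np.mean(sim - obs)),
    }

# Hypothetical event-total rainfall (mm): model vs. station
obs = [0.0, 5.2, 12.1, 30.5, 8.0, 0.4]
sim = [0.3, 4.0, 15.0, 24.8, 9.1, 0.0]
print(eval_stats(sim, obs))
```

A positive bias indicates systematic over-prediction; comparing the simulated standard deviation against that of the observations shows whether the scheme reproduces the observed variability.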
For a comprehensive analysis, the assessment was performed separately for March-April and November-December rainfall events and for coastal and non-coastal stations. Comparison of precipitation from the various cumulus schemes with the eleven in-situ records showed that the schemes of the SAS family performed well for March-April events at coastal and non-coastal stations, while the KF scheme produced the least error at both station types during the November-December events. The precipitation data from the 1271 GPM grid points revealed that the oldSAS scheme generated the least error for both the March-April and November-December events; given the number of GPM grid points, the oldSAS scheme was chosen as the cumulus option for the subsequent runs. Evaluation of the WRF-NMM simulations with different boundary layer physics against the in-situ data indicated that the MRF scheme produced the smallest error at coastal and non-coastal stations for both March-April and November-December events. The 1271 GPM grid-point data showed that the QNSE and MRF (MYJ and MRF) options performed best for the March-April (November-December) events. In conclusion, based on the GPM grid-point data compared with the in-situ measurements, the oldSAS cumulus scheme and the MRF boundary layer scheme can be chosen with some robustness for predicting the amount and pattern of heavy rainfall in the Bushehr Province of Iran. Notably, the default cumulus and boundary layer options of the WRF-NMM model produce the largest errors and are not appropriate for the selected area, which underlines the importance of adequately selecting physics options for this region.
Temporal variability analysis of measured surface ozone at the Geophysics Institute Station of the Tehran University
https://jesphys.ut.ac.ir/article_86906.html
Near-surface ozone (O$_{3} ^{surf}$), or tropospheric ozone at ground level, is a secondary air pollutant that harms human health and plants by damaging respiratory systems. This species is also one of the main greenhouse gases associated with global warming and climate change. Despite many efforts to study it and to establish control policies, this gas is still increasing and remains a serious threat to humans. A comprehensive understanding of its variation and controlling factors is therefore necessary for planning its regulation precisely.
Here, a measured time series of O$_{3} ^{surf}$ at one of the air quality monitoring sites in Iran, i.e. Geophysics Institute of the University of Tehran, was selected to assess the O$_{3} ^{surf}$ variation in more detail. Although this time series has been measured since 2007, there are many gaps in the data and a few years without data. Nevertheless, the data possess a high quality which has been discussed in this paper. The series was prepared for the period of four years, i.e. 2007-2008 and 2019-2020.
The data series was decomposed into five spectral components, i.e. intraday (ID), diurnal (DU), synoptic (SY), seasonal (SE) and baseline (BL), by applying the Kolmogorov-Zurbenko (KZ) filter. This filter was introduced by Kolmogorov and later formalized by Zurbenko in 1997. The KZ filter is a technique consisting of iterated running moving averages (MA), in which a simple MA of m points is computed by:
S(t) = $\frac{1}{m} \sum_{j=-(m-1)/2}^{(m-1)/2} ORG(t+j)$
where ORG and t represent the original time series and its time steps, respectively, and S is the input for each iteration. Therefore, the filter can be expressed as:
KZ$_{m,k}$ = R$_{i=1}^{k}$ {J$_{p=1}^{w_i}$ [S(t$_{i}$)$_{p}$]}
Here m and k are window length and number of iterations, respectively. R and J represent iteration and running window, respectively, and w$_{i}$ is defined as:
w$_{i}$ = L$_{i}$ - m + 1
where L$_{i}$ is the length of S(t$_{i}$). KZ$_{m,k}$ is a low-pass filter in which high-frequency (short-period) variations are removed from the time series. The frequency band and the level of suppression in this filter are controlled by m and k, respectively. Here, the ozone time series was decomposed into five spectral components as:
ORG(t) = ID(t$_{&lt;12h}$) + DU(t$_{12h-2.5d}$) + SY(t$_{2.5d-21d}$) + SE(t$_{21d-365d}$) + BL(t$_{&gt;365d}$)
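A minimal sketch of the KZ filter as an iterated moving average (the window length and iteration count below are illustrative for hourly data, not necessarily those used in the paper):

```python
import numpy as np

def moving_average(x, m):
    """Centered m-point simple moving average (m odd); edges become NaN."""
    half = (m - 1) // 2
    out = np.full(len(x), np.nan)
    for t in range(half, len(x) - half):
        out[t] = np.mean(x[t - half : t + half + 1])
    return out

def kz_filter(x, m, k):
    """Kolmogorov-Zurbenko low-pass filter: k iterations of an m-point MA."""
    y = np.asarray(x, dtype=float)
    for _ in range(k):
        y = moving_average(y, m)
    return y

# A pure 24-h cycle sampled hourly is almost entirely removed by a KZ
# filter whose window spans one diurnal period (m=25, k=3 illustrative).
x = np.sin(2 * np.pi * np.arange(2000) / 24.0)
smooth = kz_filter(x, m=25, k=3)
print(np.nanmax(np.abs(smooth)))   # near zero
```

Each spectral component in the decomposition above is then obtained as the difference of two KZ filters with window lengths bracketing the desired band of periods.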
The results indicate that the contribution of each component to the O$_{3} ^{surf}$ variability differs, such that the DU component constitutes more than 50% of the ozone variability. In fact, this component makes up most of the ozone variability, which is attributed to the daytime-nighttime light variation. The SE component has the second-largest contribution to the O$_{3} ^{surf}$ variability. The contribution of the SY component varies from year to year; for example, its relative contribution is 8.93% in 2007 and 4.84% in 2019. The ID component makes up only about 5% of the total variability. This implies that the components contribute unequally to the total O$_{3} ^{surf}$ variability, information that should be considered in ozone control strategies.
Interpolation of horizontal GPS velocity field in the oblique collision zone of Arabia-Eurasia tectonic plates using Green’s functions
https://jesphys.ut.ac.ir/article_86899.html
One way of gridding two-dimensional vector data would be to grid each component separately. Alternatively, using Green&rsquo;s functions we can grid the two components simultaneously in a way that couples them through elastic deformation theory. This is particularly suited, though not exclusively, to data that represent elastic or semi-elastic deformation, such as horizontal GPS velocity fields. Measurements made on the surface of the Earth are often sparse and unevenly distributed: GPS displacement measurements are limited by the availability of ground stations, and airborne geophysical measurements are densely sampled along flight lines but often leave large gaps between lines. Many data processing methods require data on a uniform regular grid, particularly methods involving the Fourier transform or the computation of directional derivatives. Hence, the interpolation of sparse measurements onto a regular grid (known as gridding) is a prominent problem in the Earth sciences.
In this research, sparse two-dimensional vector data of the horizontal GPS velocity field are interpolated using Green&rsquo;s functions derived from elastic constraints. The method is based on the Green&rsquo;s functions of an elastic body subjected to in-plane forces. This approach ensures elastic coupling between the two components of the interpolation. Users may adjust the coupling by varying Poisson&rsquo;s ratio. Smoothing can be achieved by ignoring the smallest eigenvalues in the matrix solution for the strengths of the unknown body forces. The study area is the oblique collision zone of Arabia-Eurasia tectonic plates, which has a GPS velocity field with sparse distribution.
Since the Green&rsquo;s functions were developed for a half-space, the Mercator map projection was used to create the half-space for interpolation and gridding. The data were split into a training set and a testing set: the gridder was fitted on the training set, and the testing set was used to evaluate how well it performs. The vector gridding was done using a Poisson&rsquo;s ratio of 0.5 to couple the two horizontal components, and the gridder was then scored on the testing data. The best possible score is 1, meaning a perfect prediction of the test data. Evaluating the gridding accuracy with the mean square deviation ratio (MSDR), a score of 0.86 was obtained for this statistic.
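The train/test scoring step can be sketched with a coefficient-of-determination-style score over both velocity components; the MSDR statistic mentioned above may be defined somewhat differently, and the velocity values here are hypothetical:

```python
import numpy as np

def r2_score_vec(pred_e, pred_n, obs_e, obs_n):
    """Coefficient-of-determination-style score for a two-component
    velocity field: 1 means the gridder predicts the test data exactly."""
    pred = np.concatenate([np.ravel(pred_e), np.ravel(pred_n)])
    obs = np.concatenate([np.ravel(obs_e), np.ravel(obs_n)])
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical test-set velocities (mm/yr): east and north components
obs_e, obs_n = np.array([2.0, 3.5, 1.0]), np.array([10.0, 12.0, 9.5])
pred_e, pred_n = np.array([2.1, 3.3, 1.2]), np.array([10.2, 11.7, 9.6])
print(round(r2_score_vec(pred_e, pred_n, obs_e, obs_n), 3))   # → 0.998
```

Scoring on held-out stations, rather than on the training data, guards against over-fitting the body-force strengths.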
While this method is not new, it provides some insight into the behavior of the coupled interpolation for a wide range of Poisson&rsquo;s ratios. This approach provides improved interpolation of sparse vector data when the physics of the deforming material follows the elasticity equations.
We interpolated the horizontal GPS velocities onto a regular geographic grid with 1 arc-second spacing, masked the data far from the observation points, and finally calculated the residuals between the predictions and the original input data. Interpolating the horizontal GPS velocity fields of local geodynamic networks is proposed as a way to estimate the Poisson&#039;s ratio values that perform best in gridding validation.
In this study, two-dimensional GPS data were interpolated. Three-dimensional GPS data can also be gridded using the Green&rsquo;s functions provided by Uieda et al. (2018). It is also recommended to use different Green&rsquo;s functions to grid different types of spatial data.
The effect of sudden stratospheric warming on the height and temperature variations of thermal tropopause in northern hemisphere (1979-2020)
https://jesphys.ut.ac.ir/article_86900.html
A sudden stratospheric warming (SSW) represents a large-scale perturbation of the polar winter stratosphere that substantially influences the temperature and circulation of the middle atmosphere as well as the concentrations of atmospheric species. SSWs occur mostly in middle and late winter and almost exclusively in the Northern Hemisphere. During an event, the polar stratospheric temperature increases by several tens of degrees Celsius within a few days and eventually becomes warmer than at mid-latitudes, reversing the climatological temperature gradient. At the same time, the prevailing westerly wind speed decreases rapidly and the wind becomes easterly.
The tropopause is a transition layer between the troposphere and the stratosphere. The occasional exchange of air, water vapor, trace gases and energy between the troposphere and the stratosphere occurs in this layer. Conceptually, two different tropopauses are defined: the thermal tropopause and the dynamical tropopause. The conventional definition is the thermal tropopause, detected from the marked change in the vertical temperature lapse rate; it rests on the fact that the stratosphere is more stably stratified than the troposphere. The thermal tropopause is defined as the lowest level at which the lapse rate decreases to 2 K/km or less, provided that the average lapse rate between this level and all higher levels within 2 km does not exceed 2 K/km. The original concept of the dynamical tropopause was based on the isentropic gradient of potential vorticity; the dynamical tropopause is typically placed in a thin layer with absolute PV values between 1&thinsp;PVU and 4&thinsp;PVU.
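The thermal-tropopause criterion just stated translates directly into a search over a discrete profile; a minimal sketch on an idealized sounding (the profile values are illustrative):

```python
import numpy as np

def thermal_tropopause(T, z):
    """WMO-style thermal tropopause: the lowest level where the lapse rate
    -dT/dz drops to 2 K/km or less AND the mean lapse rate between that
    level and each level within 2 km above does not exceed 2 K/km.
    T in K, z in km, ordered bottom-up."""
    lapse = -np.gradient(T, z)                     # lapse rate, K/km
    for i in range(len(z) - 1):
        if lapse[i] <= 2.0:
            above = (z > z[i]) & (z <= z[i] + 2.0)
            ok = all((T[i] - T[j]) / (z[j] - z[i]) <= 2.0
                     for j in np.where(above)[0])
            if ok and above.any():
                return z[i]
    return np.nan

# Idealized profile: 6.5 K/km lapse up to 11 km, isothermal above
z = np.arange(0.0, 20.1, 0.5)
T = np.where(z <= 11.0, 288.0 - 6.5 * z, 288.0 - 6.5 * 11.0)
print(thermal_tropopause(T, z))   # tropopause found near 11 km
```

On real soundings the centered differences near the transition smear the lapse rate slightly, so the detected level can sit one grid step above the nominal kink, as it does here.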
The vertical temperature stratification of the atmosphere plays a basic role in atmospheric motions. In this paper, the squared Brunt&ndash;V&auml;is&auml;l&auml; frequency (N&sup2;) is used to detect changes in stratospheric static stability.
In this paper, NCEP/NCAR reanalysis daily data, including temperature at different pressure levels (1000 hPa-10 hPa) and the tropopause temperature and pressure from 1 January 1961 to 31 December 2020 in the Northern Hemisphere, are used. The study region covers geographical longitudes 0&deg; to 357.5&deg; and latitudes 0&deg;N to 90&deg;N. The Northern Hemisphere is divided into three non-overlapping 30&deg; latitudinal bands, called the tropical band (0&deg;N-27.5&deg;N), the middle-latitude band (30&deg;N-57.5&deg;N) and the polar band (60&deg;N-90&deg;N). First, the potential temperature and the squared Brunt-V&auml;is&auml;l&auml; frequency (N&sup2;) at the different pressure levels are calculated; then the zonal mean temperature at 10 hPa, the tropopause temperature, the tropopause pressure and the values of N&sup2; in the three regions are obtained. To represent the tropopause&#039;s height variations during sudden stratospheric warmings, the daily anomalies of these parameters in the three regions are calculated and analyzed.
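The potential temperature and N&sup2; computation on pressure levels can be sketched as follows (the idealized profile is illustrative, not the NCEP/NCAR data):

```python
import numpy as np

G = 9.81       # gravity, m s^-2
P0 = 1000.0    # reference pressure, hPa
KAPPA = 0.286  # R/cp for dry air

def brunt_vaisala_sq(T, p, z):
    """N^2 = (g/theta) * d(theta)/dz on a vertical profile
    (T in K, p in hPa, z in m, ordered bottom-up)."""
    theta = T * (P0 / np.asarray(p, float)) ** KAPPA   # potential temperature
    dtheta_dz = np.gradient(theta, z)                  # centered differences
    return G / theta * dtheta_dz

# Idealized profile: stable troposphere topped by a more stable stratosphere
z = np.array([0.0, 5000.0, 10000.0, 15000.0, 20000.0])
T = np.array([288.0, 255.0, 223.0, 212.0, 215.0])
p = np.array([1000.0, 540.0, 265.0, 120.0, 55.0])
n2 = brunt_vaisala_sq(T, p, z)
print(n2)   # stratospheric values exceed tropospheric ones
```

Daily anomalies of N&sup2; are then obtained by subtracting the climatological mean for each calendar day.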
The daily zonal-mean tropopause temperature and pressure changes in the three latitudinal regions during eighteen major and one minor sudden stratospheric warming (SSW) events are analyzed in this study. The results show that all 19 SSW events in the 1979-2020 statistical period were associated with positive anomalies of the zonal-mean tropopause temperature and pressure, i.e., a warming of the tropopause and a lowering of its height, corresponding to a downward expansion of the stratosphere and a thinning of the troposphere. In addition, the tropopause height reduction in the polar band was greater than in the mid-latitude band. It was also shown that the increase in static stability (positive mean-N² anomaly) in the stratosphere started before the SSW, and that stability decreased during the SSW (negative mean-N² anomaly). These changes are larger in the polar band than in the mid-latitude band. This result reveals that the static stability structure in the lower stratosphere and upper troposphere over the polar cap is more affected by SSWs than in the other regions.
Post Processing of WRF Model Output by Cokriging Method for Daily Average Wind Speed and Relative Humidity on Iran
https://jesphys.ut.ac.ir/article_86903.html
Weather forecasting and monitoring systems based on numerical weather prediction models are increasingly used to manage issues related to meteorology and agriculture. More accurate forecasts of daily average 10-m wind speed and relative humidity can be helpful in this regard, but systematic and random errors in the model affect forecast accuracy. In this study, the model errors during 5- and 14-day training periods within the same climate zones were calculated at the grid points where observations are available. The errors were then generalized to all grid points using the cokriging interpolation method. This approach preserves the model forecasts at the other grid points; only the error values are applied to them. To better evaluate the model, the spatial and temporal distributions of the daily average 10-m wind speed and relative humidity forecast errors across the country were also investigated. Observed daily wind speed and relative humidity data from 560 meteorological stations for the period 1/11/2019 to 1/2/2021 were used to evaluate the WRF model. The WRF model is run daily at 12 UTC with a forecast length of 120 hours; the first 12 hours of each run are considered model spin-up and are not used in the error calculation. To correct the wind speed and relative humidity forecast errors for the next three days (the 36-, 60-, and 84-hour forecasts), the forecasts for each day in the period 1/11/2019 to 1/2/2021 were extracted from the model outputs. The skill score index was used to evaluate the error-correction method. The validation results showed that the mean absolute error, correlation coefficient, and RMSE improved after the error correction relative to the uncorrected forecasts, indicating that the error-correction method can be applied at other grid points that lack observational data. 
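A minimal sketch of the correction idea, using simple linear interpolation of station errors as a stand-in for the cokriging step (the station coordinates, the bias field, and the substitution of `griddata` for cokriging are all assumptions for illustration), together with the skill-score definition commonly used for such evaluations:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(7)

# Hypothetical station locations and a smooth, spatially varying model bias.
stations = rng.uniform(0.0, 10.0, size=(60, 2))
bias = lambda p: 1.5 + 0.3 * p[:, 0] - 0.2 * p[:, 1]     # "true" error field
err_sta = bias(stations)                                  # training-period error

# Grid points without observations: spread the station errors over them
# (linear interpolation stands in here for the cokriging interpolation).
grid = rng.uniform(1.0, 9.0, size=(200, 2))
err_grid = griddata(stations, err_sta, grid, method="linear")
ok = np.isfinite(err_grid)            # drop points outside the stations' hull
grid, err_grid = grid[ok], err_grid[ok]

truth = 5.0 + 0.1 * grid[:, 0]                            # "observed" wind speed
forecast = truth + bias(grid)                             # raw model forecast
corrected = forecast - err_grid                           # bias-corrected forecast

def skill_score(fcst, ref, obs):
    """SS = 1 - MSE(fcst)/MSE(ref); positive means better than the reference."""
    return 1.0 - np.mean((fcst - obs) ** 2) / np.mean((ref - obs) ** 2)

rmse_raw = np.sqrt(np.mean((forecast - truth) ** 2))
rmse_cor = np.sqrt(np.mean((corrected - truth) ** 2))
```

The forecasts themselves are untouched; only the interpolated error is subtracted, mirroring the approach described in the abstract.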
Overall, after correction the RMSE of the wind speed and relative humidity forecasts decreases by 13% and 18%, and the skill score increases to a maximum of 160% and 308%, respectively. The correlation coefficient after correcting the model error increases significantly compared to the raw model output. For the raw wind speed and relative humidity forecasts, the skill score on more than 50% of the days was above -0.5 and -0.3, respectively; after correction it rises to 0.2 and 0.4. Without exception, all climatic regions have a higher skill score after error correction than before, such that for most climatic regions the model skill score after correction is above zero on more than 75% of the days. The results show that the model error does not have a uniform distribution across months, locations, and climatic zones. In general, the model underestimates wind speed and overestimates relative humidity in most areas, and the lowest skill scores for relative humidity forecasts occur in the colder months, November to February, in most climatic zones. The 14-day error-correction method did not improve the model skill score much compared to the 5-day method; they performed almost identically. Knowing the spatial and temporal distribution of the model forecast error can help researchers form an overview of the areas (and months) where the forecast error is high.
A spectral approach to the origin and propagation of magnetoacoustics’ oscillations in the network and internetwork areas of solar granules
https://jesphys.ut.ac.ir/article_86911.html
In this paper, a spectral study of the origin and propagation of magnetoacoustic oscillations in the network and internetwork areas of solar granules is presented.
The data used in this study are mostly from the Interface Region Imaging Spectrograph (IRIS). Slit-Jaw Images (SJIs) from IRIS at wavelengths of 1400 angstroms (Si IV), 2796 angstroms (Mg II h/k), and 2832 angstroms (Mg II wing) are used to select the network and internetwork areas.
The Mg II k spectral data at a wavelength of 2796 angstroms, forming at temperatures of about 10,000 Kelvin, have been used to construct the temporal intensity profiles at the h3, k3, h2r, h2v, k2r, and k2v peaks, together with the corresponding intensity-temperature profiles.
A common method for analyzing temporal and frequency characteristics is wavelet analysis. It is a practical method owing to the variety and flexibility of available wavelets for different types of analysis. Convolving wavelets with the signal extracts time, frequency, and power information. It should be noted that, due to the uncertainty principle, time and frequency resolution trade off against each other, and an optimal balance between them must be chosen.
One reason for choosing the Morlet wavelet for this study is its lack of a sharp edge, which reduces ringing and improves the accuracy with which the properties of the fluctuations are detected.
Another, and one of the most important, reasons for using the Morlet wavelet was that it does not degrade the temporal resolution of the signal.
For these reasons, the Morlet wavelet with ω0 = 5 was the most sensible and reliable choice for obtaining results with high temporal and frequency specificity in this study.
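A hand-rolled Morlet (ω0 = 5) continuous wavelet transform in the spirit described above; the cadence, record length, and the 64-second test signal are illustrative assumptions, not IRIS data:

```python
import numpy as np

def morlet_cwt_power(x, dt, periods, w0=5.0):
    """Time-averaged CWT power of x for each trial period, using a Morlet wavelet."""
    # Fourier period of the Morlet wavelet: lambda = 4*pi*s / (w0 + sqrt(2 + w0^2))
    scales = periods * (w0 + np.sqrt(2.0 + w0 ** 2)) / (4.0 * np.pi)
    power = np.empty(len(scales))
    for i, s in enumerate(scales):
        tw = np.arange(-4.0 * s, 4.0 * s + dt, dt) / s          # wavelet support
        psi = np.pi ** -0.25 * np.exp(1j * w0 * tw - tw ** 2 / 2.0)
        coef = np.convolve(x, np.conj(psi)[::-1], mode="same") * dt / np.sqrt(s)
        power[i] = np.mean(np.abs(coef) ** 2)
    return power

dt = 1.5                                   # s, assumed cadence
t = np.arange(0.0, 900.0, dt)
x = np.sin(2.0 * np.pi * t / 64.0)         # 64 s oscillation, as in the text
periods = np.arange(30.0, 150.0, 2.0)      # trial periods, s
power = morlet_cwt_power(x, dt, periods)
best = periods[np.argmax(power)]           # peaks near 64 s
```

For a pure sinusoid the power spectrum peaks at the trial period matching the input oscillation, which is how the dominant periods of the intensity time series are read off.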
Using wavelet analysis, the oscillation characteristics of the intensity in the network and internetwork areas are obtained.
Investigation of the intensity profiles at the h and k peaks shows that their general behavior is the same; the only difference lies in the intensity of the peaks and therefore in their temperature.
The intensity-temperature profiles extracted from the h and k peaks likewise show the same general behavior.
The wavelet-analysis results indicate that the oscillatory behavior at the h and k peaks is almost identical.
Using the wavelet-analysis results, the oscillation periods of the intensity of the bright points in the network and internetwork have been obtained. Their values suggest that the internetwork bright points have a photospheric origin and the network bright points a chromospheric origin.
Another result of the wavelet analysis in this study was intensity oscillations with a period of about 64 seconds. This high frequency differs from previously reported observations of photospheric and chromospheric oscillations, so it cannot be related to them. This appears to be the first report of this type of high-frequency oscillation.
These high-frequency oscillations may play an important role in heating the transition region (TR). For this reason, an accurate study of these high-frequency oscillations is necessary to understand the causes and heating mechanisms of the TR.
These high-frequency oscillations have been seen in almost all of the data and areas under study. So far there is no strong evidence for the origin and cause of these high-frequency oscillations, and we hope that more detailed and extensive studies will allow a better understanding of their properties and source.
Investigating the Potential of Infrared Stimulated Luminescence for Dating the Debris rocks of Fatalak Landslide
https://jesphys.ut.ac.ir/article_86918.html
Over the last decade, extensive studies have been devoted to dating rock surfaces using optical luminescence signals, and recently a model has been proposed with which rock surfaces have been successfully dated using the infrared-stimulated luminescence (IRSL) signal. This method is based on the resetting of the luminescence signal with depth into the rock surface. When a rock surface is first exposed to sunlight, the luminescence signal stored over time in its constituent minerals (particularly quartz and feldspar) starts to decrease. The longer the rock is exposed to sunlight, the deeper light penetrates into the rock and the further the luminescence signal is reduced; however, the rate of luminescence resetting decreases with depth because daylight is attenuated within the rock. This differential change in bleaching rate with depth leads to the development of a sigmoidal luminescence-depth profile. Such a profile provides an internal check on inadequate daylight exposure, and therefore on incomplete resetting of the luminescence signal, and allows us to identify the samples that are most likely to provide reliable OSL ages. In this study, we investigated the potential of this method for dating debris rocks of the Fatalak landslide, which was triggered by the 1990 Rudbar-Manjil earthquake in northern Iran. Cores ~10 cm long and 1 cm in diameter were extracted from the buried and exposed sides of the rock samples using a water-cooled, diamond-tipped drill. The cores were then cut into ~1.5 mm thick slices. The slices were gently broken into small chips and mounted in 10-mm diameter stainless steel cups for natural luminescence signal and dose-response measurements. All sub-samples from each slice were stimulated with infrared radiation, and the blue and ultraviolet luminescence signals were measured. 
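The sigmoidal luminescence-depth profile described above is commonly modeled with a first-order expression of the form L(x)/L0 = exp(-σφ0·t·e^(-μx)), where σφ0 is the effective surface bleaching rate, μ the light attenuation coefficient, and t the exposure time; the parameter values in this sketch are purely illustrative assumptions:

```python
import numpy as np

def luminescence_depth(x_mm, t_days, sigma_phi0=1e-6, mu=0.5):
    """Residual (normalized) luminescence vs depth after daylight exposure.

    sigma_phi0 : effective bleaching rate at the surface [1/s] (assumed value)
    mu         : light attenuation coefficient in the rock [1/mm] (assumed value)
    """
    t_s = t_days * 86400.0
    return np.exp(-sigma_phi0 * t_s * np.exp(-mu * x_mm))

x = np.arange(0.0, 30.0, 1.5)          # slice depths, mm (~1.5 mm slices)
profile = luminescence_depth(x, t_days=365.0)
# Near the surface the signal is fully reset (~0); at depth it is
# untouched (~1); in between the profile is sigmoidal.
```

Fitting such a curve to the measured Ln/Tn values per slice is what provides the internal check on complete surface resetting.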
To determine whether the luminescence signals at the buried surface of the rock were sufficiently bleached before the earthquake event, we measured the natural sensitivity-corrected IR50 and pIRIR225 signals (Ln/Tn) with depth into the core and plotted the luminescence-depth profiles. Unexpectedly, weak or no IR50 and pIRIR225 signals and no suitable luminescence-depth profiles were observed. In the experience of the second author, almost all sediment samples taken from Iran have generated an IRSL signal, so it was necessary to investigate why the Fatalak rock samples lacked a suitable IRSL signal. Because the bleaching rate decreases and the luminescence signal intensity increases with depth, and because the luminescence signal is generated by only a small percentage (approximately 10%) of the dosimeter grains (mainly quartz and feldspar), different slices can produce signals (in response to the same dose) with different intensities and properties. Therefore, the potential of all slices to produce a signal, and finally to build the luminescence-depth profile, was investigated. Unfortunately, this profile did not match the profiles reported in previous studies.
To determine whether this observation reflects the nature of the samples taken from Iran or a defect in the luminescence measurement device or the experimental procedure, we performed similar tests on a rock surface taken from another site. The same process was then carried out for two rock-art paintings from Spain, which showed acceptable signals: the IR50 depth profile had a sigmoidal shape in which the luminescence signal is almost reset at the surface slice but increases with depth until it reaches saturation, as expected from the model. The luminescence-depth profiles from the Fatalak and Spanish sites were then compared with two previous successful studies in Italy and Denmark. The IRSL luminescence-depth profile for the rock-art sample from Spain was in good agreement with those of the two burial samples from Italy and Denmark. However, no such agreement was observed between the profiles of the Fatalak sample and the Italian and Danish samples. As the profiles derived for the Fatalak sample were consistent neither with the model nor with any of the previous studies, we could not determine the time of the landslide event by the conventional method.
Resistivity and IP Tomography to determine Overburden-Bedrock Interface: A case study of Ilam Embankment dam
https://jesphys.ut.ac.ir/article_86961.html
Determining the overburden-bedrock interface beneath fine-grained sediments in a highly folded sedimentary environment is a challenging geophysical problem. Electrical Resistivity Tomography (ERT) is considered one of the most effective geophysical approaches for mapping subsurface layers based on the conductivity distribution of materials. The surveys are often performed in two dimensions to investigate lateral and depth variations of the resistivity and chargeability of the subsurface layers. The resistivity method is governed by the volumetric properties of the pore space, i.e., the ability of the subsurface medium to transfer charge, whereas the induced polarization method depends on the geometric properties of the pore space (grain surface size). Despite the advantages of geo-electrical methods in imaging subsurface structures, the strong dependence of the resistivity and induced polarization parameters on the physical and hydrogeological conditions of the layers means that geological and geo-electrical sections cannot be matched completely.
One application of geophysical studies is to determine the contact zone between overburden and bedrock beneath engineering structures such as embankment dams. Where the conductivity contrast between the overburden and the bedrock is low, the exact determination of this boundary by geo-electrical methods is subject to high uncertainty. In this study, the efficiency of electrical resistivity tomography and induced polarization is investigated by measuring several parallel profiles, with the aim of imaging the boundary between overburden and bedrock and assessing the possibility of a water-escape zone at the left bank of the Ilam embankment dam. According to the results obtained from the inversion of the field measurements, the chargeable sections are ascribed to the shale zones as well as to marly limestone containing pyrite particles.
The main objectives of this study are to determine the general condition of the overburden relative to the bedrock, to image the geometry of the bedrock, and to identify parts of the bedrock eroded over time. The main challenge of this geophysical study is the low conductivity contrast between the clay-and-silt overburden and the limestone bedrock interbedded with shale and marl. Given the size of the study area, the work was based on tomographic measurements of electrical resistivity and induced polarization. The field surveys were conducted along four roughly parallel profiles (following the topographic conditions of the area) of somewhat different lengths, using a pole-dipole array in forward and reverse measurements.
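For the pole-dipole array used here, apparent resistivity follows from the standard geometric factor K = 2πn(n+1)a; a minimal sketch, with illustrative electrode spacings and a homogeneous half-space as a sanity check:

```python
import numpy as np

def pole_dipole_rho_a(dV, I, a, n):
    """Apparent resistivity for a pole-dipole array.

    dV : measured potential difference across MN [V]
    I  : injected current [A]
    a  : potential-dipole length MN [m]
    n  : separation factor (distance AM = n*a)
    """
    K = 2.0 * np.pi * n * (n + 1.0) * a   # geometric factor [m]
    return K * dV / I

# Sanity check over a homogeneous half-space of resistivity rho: a pole
# current source gives V(r) = rho*I/(2*pi*r), so dV = V(AM) - V(AN).
rho, I, a, n = 120.0, 2.0, 5.0, 3
dV = rho * I / (2.0 * np.pi) * (1.0 / (n * a) - 1.0 / ((n + 1) * a))
rho_a = pole_dipole_rho_a(dV, I, a, n)    # recovers rho for the half-space
```

The inversion then seeks a 2-D resistivity/chargeability model whose predicted apparent resistivities match such measurements along each profile.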
Geological data, as well as borehole information, are used to validate the geo-electrical sections and to better interpret the models obtained from the collected data (i.e., the geo-electrical measurements). Finally, given the strong topography of the area, and to better show the trend of the subsurface structures, a three-dimensional view combining the two-dimensional resistivity and induced-polarization models with the drilled boreholes has been prepared. Based on the models obtained from the geo-electrical data, it can be concluded that the geophysical studies (electrical tomography) successfully determined the eroded region of the bedrock surface as well as the bedrock-overburden contact, which correlates well with the boreholes drilled in the area.
Prediction of Water Saturation by FSVM using well logs in a gas field located in South of Iran
https://jesphys.ut.ac.ir/article_86905.html
Water saturation is one of the key petrophysical parameters affecting the accuracy of initial hydrocarbon-in-place estimates for a reservoir. Approximating this parameter is unavoidable, since it strongly affects the economic development of hydrocarbon reservoirs. In this paper, we propose a two-step approach using two well sets and core data to predict water saturation by means of the Support Vector Machine (SVM) algorithm in one of the gas reservoirs of the Persian Gulf. Because noise and outliers are inevitable in measured data, the SVM is modified to a Fuzzy SVM (FSVM); Support Vector Regression (SVR) is the SVM variant used for regression purposes. Treating the data as fuzzy sets brings the machine closer to reality: the user can assign a priority to each data point, so noise and outliers receive lower priority, which leads to better models. After membership degrees are assigned, the data points enter the algorithm for prediction of water saturation in intervals without core data.
Water saturation is the fraction of water in a given pore space, expressed in volume/volume, percent, or saturation units. It is one of the most widely used petrophysical parameters for evaluating petroleum reservoirs and directly affects the success of drilling, completion, and production operations. Therefore, an accurate estimation of this parameter is necessary for the exploitation of oil and gas reservoirs. There are two main ways to investigate reservoir parameters: core data analysis as a direct method, and well logs as an indirect method. Core data analysis to obtain water saturation information has been presented by different authors (Walther 1967, Morad Zadeh et al. 2011, Jia et al. 2020). Measuring this parameter in the laboratory is costly and time-consuming; moreover, core data are not always available for all well sets. Thus, using algorithms to estimate reservoir parameters in wells lacking core data is advantageous. 
A variety of formulas estimate water saturation from other parameters such as resistivity and porosity (Luthi 1941, Archie 1942), but these formulas depend strongly on lithology and formation type, so they cannot be generalized to all situations. Over the last decade, machine-learning methods have been widely applied to estimate reservoir parameters (Zhang et al. 2018, Okwu et al. 2019, Li et al. 2021), and water saturation has been estimated with several algorithms (Adeniran et al. 2009, Jafari Kenari et al. 2013, Bagheripour et al. 2014), each with its pros and cons. This paper applies the SVR algorithm to well logs to obtain water saturation. The advantages of SVR over other algorithms are its strong model generalization and low model error. As the next step, membership functions were used to assign a membership degree to each data point; in other words, the data are transformed into a fuzzy system in which each point receives a value in the (0, 1] interval (Zadeh 1965). In this way, noise and outliers receive lower membership degrees, so their influence on the final model decreases. As a result, better output is produced, and modifying SVR to FSVR notably improves the results (Lim et al. 2002, Le et al. 2009). In this paper, three well sets of a gas reservoir were utilized: two well sets for training the algorithm and the third well for testing. The well logs used in this study include sonic (DT, interval transit time or slowness), neutron porosity (NPHI), density (RHOB), photoelectric absorption factor (PEF), gamma ray (GR, intensity of natural radioactivity), shallow and deep resistivity (LLS, LLD), and the Micro-Spherical Focused log (MSFL). The coefficient of determination calculated between core-measured water saturation and the FSVR-predicted model shows better results than SVR. 
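The fuzzy weighting idea can be sketched with scikit-learn's SVR by passing membership degrees as `sample_weight`. The synthetic "logs", the residual-based membership function, and all parameter values below are assumptions for illustration, not the authors' exact scheme:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic "well logs" (3 features) with a smooth target plus a few outliers.
X = rng.uniform(0.0, 1.0, size=(300, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + 0.05 * rng.standard_normal(300)
y[:15] += 4.0                                    # outliers (e.g., bad readings)

# Step 1: preliminary fit, then map residuals to membership degrees in (0, 1].
pre = SVR(C=10.0).fit(X, y)
r = np.abs(y - pre.predict(X))
membership = 1.0 / (1.0 + (r / np.median(r)) ** 2)   # outliers -> low weight

# Step 2: refit with the memberships as sample weights (the fuzzy-SVR step).
fsvr = SVR(C=10.0).fit(X, y, sample_weight=membership)
```

Downweighting the outliers lets the final model follow the clean trend, which mirrors how the fuzzy memberships reduce the influence of noisy core or log readings on the saturation model.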
This study shows that the coefficient of determination between the predicted water saturation and the core data is 71% for the SVR algorithm, while for FSVR it is 95%.