Journal of the Earth and Space Physics
https://jesphys.ut.ac.ir/
Wed, 21 Apr 2021 00:00:00 +0430
Residual static correction Using Tunable Q Factor Discrete Wavelet Transform
https://jesphys.ut.ac.ir/article_79568.html
The derivation of datum (reference) static corrections is generally based on a fairly simple geological model of the near surface. The lack of detailed near-surface information leads to inaccuracies in this model and, therefore, in the static corrections. Residual static corrections are designed to correct small inaccuracies in the near-surface model. Their application should improve the final processed section compared with one in which only the datum static corrections are applied. For example, if the final stacked section is to be inverted to produce an acoustic impedance section, it is important that the amplitude variations along the section represent the changes in the reflection coefficient as closely as possible. This is unlikely to be the case if small residual static errors are present. In addition, datum static corrections are not a unique set of values, because a change of datum results in a different set of corrections. Owing to variations in the Earth's surface and in the velocities and thicknesses of near-surface layers, the shape of the traveltime hyperbola changes. These deviations, called statics, result in misalignments and lost events in the CMP gather, so they must be corrected during processing. After the long-wavelength statics are corrected, some short-wavelength anomalies remain. These "residual" statics are due to variations not accounted for in the low-velocity layer. The estimation of residual statics in complex areas is one of the main problems in seismic data processing, and the results of this processing step affect the quality of the final reconstructed image and of the interpretation. Residual statics can be estimated by different methods, such as traveltime inversion, stack-power maximization, and sparsity maximization, which are based on a surface-consistent assumption. 
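As a minimal illustration of the pilot-trace idea behind stack-power methods (a sketch of that general approach, not of the TQWT method this paper proposes), the residual shift of a trace can be estimated by cross-correlating it with a stacked pilot trace. All names, the Gaussian "reflection", and the lag window are illustrative assumptions:

```python
import numpy as np

def estimate_residual_static(trace, pilot, dt, max_lag=20):
    """Estimate the residual static shift of one trace (in seconds) by
    cross-correlating it with a pilot (stacked) trace; the lag with the
    largest correlation is taken as the static shift."""
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.dot(np.roll(trace, -l), pilot) for l in lags]
    best = lags[int(np.argmax(xc))]
    return best * dt

# synthetic check: a Gaussian "reflection" delayed by 5 samples
dt = 0.002
t = np.arange(200) * dt
pilot = np.exp(-((t - 0.2) / 0.01) ** 2)  # pilot trace
trace = np.roll(pilot, 5)                 # trace with a 5-sample residual static
shift = estimate_residual_static(trace, pilot, dt)  # -> 0.01 s
```

In practice the pilot is the CMP stack and the shifts are applied in a surface-consistent decomposition; this sketch only shows the per-trace time-shift estimate.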
An effective method must be able to denoise the seismic signal without losing useful data and must function properly in the presence of random noise. In the frequency domain, it is possible to separate the noise from the main data, so denoising in the frequency domain can be useful. Besides, the transform domains are data-driven and require no information about the subsurface. Frequency-domain methods generally use the Fourier transform, which is time-consuming and has certain limits. Wavelet-transform methods generally provide a faster procedure than the Fourier transform. We have found that this type of wavelet transform can provide a data-oriented method for analyzing and synthesizing data according to the oscillatory behavior of the signal. The Tunable Q-Factor Discrete Wavelet Transform (TQWT) is a new method that provides a reliable framework for residual static correction. In this transform, the quality factor (Q), which relates to the particular oscillatory behavior of the data, can be adjusted by the user, and this characteristic leads to a good correspondence with the seismic signal. The Q factor of an oscillatory pulse is the ratio of its center frequency to its bandwidth. TQWT is implemented with a two-channel filter bank. A low-pass filter eliminates the high-frequency components, which carry the effect of residual statics. After filtering, the data are smoother; the amount of this correction gives the time offset for the residual static correction. This time difference must be applied to all traces. Applying this method to synthetic and real data shows a good correction of the residual statics.
Estimation of average shear (V_sz) and compressional (V_pz) wave velocities using the wavelength-depth relation obtained from surface wave analysis
https://jesphys.ut.ac.ir/article_79635.html
Shear wave velocity (V_s), and its average based on travel time from the surface to a depth of 30 m, known as V_s30, are often used in engineering projects to determine soil parameters, evaluate the dynamic properties of the soil, and classify it. This quantity is directly related to an important property of soil and rock, namely their shear strength. The average shear wave velocity is used in geotechnics to assess soil liquefaction and in earthquake engineering to determine the soil period, the site amplification coefficient, and attenuation. Usually, the average shear wave velocity is obtained from a shear wave refraction survey, PS logging, or a shear wave velocity profile obtained by inversion of the experimental dispersion curve of surface waves. Surface wave analysis is one of the methods for estimating the shear wave velocity profile, but inversion of the dispersion curve is a time-consuming part of this process, and the inverse problem has a non-unique solution. This becomes more evident when the goal is to determine a two- or three-dimensional shear wave velocity model. This study provides a method to estimate the average shear wave velocity (V_sz) as well as the average compressional wave velocity (V_pz) directly from dispersion curves of surface waves, without the need to invert the dispersion curves. For this purpose, we exploit the relation between surface wave wavelength and investigation depth. Estimating the wavelength-depth relationship requires access to a shear wave velocity model (a reference model) in the study area, which can be obtained from well data, refraction seismic profiles, or by inverting one of the experimental surface wave dispersion curves. The V_sz is then estimated directly from the dispersion curve using the wavelength-depth relationship. 
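The core idea, mapping each dispersion point to a pseudo-depth through the wavelength and then forming a travel-time average, can be sketched as follows. The coefficient ETA and the fixed Rayleigh-to-shear conversion factor are illustrative assumptions, not the calibrated wavelength-depth relation estimated in the paper:

```python
import numpy as np

ETA = 0.5            # assumed wavelength-depth coefficient (z = ETA * lambda)
C_TO_VS = 1.0 / 0.93 # assumed Rayleigh-to-shear velocity conversion factor

def average_vs(freq, c, eta=ETA, c_to_vs=C_TO_VS):
    """Travel-time average shear velocity from a dispersion curve c(f)."""
    lam = c / freq                 # wavelength of each dispersion point
    z = eta * lam                  # pseudo-depth of investigation
    vs = c_to_vs * c               # approximate shear velocity at depth z
    order = np.argsort(z)
    z, vs = z[order], vs[order]
    dz = np.diff(np.concatenate(([0.0], z)))
    travel_time = np.sum(dz / vs)  # vertical travel time to the deepest z
    return z[-1] / travel_time     # travel-time (harmonic-style) average

# sanity check on a uniform half-space: constant phase velocity
freq = np.linspace(5, 50, 10)
c = np.full_like(freq, 186.0)
vsz = average_vs(freq, c)          # -> 186 / 0.93 = 200 m/s
```

For a layered medium the same travel-time average naturally weights slow shallow layers more heavily, which is the behavior V_sz is meant to capture.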
In addition, because V_sz depends on Poisson's ratio and the estimated wavelength-depth relationship is sensitive to this ratio, we estimate the Poisson's ratio profile and the average compressional wave velocity (V_pz) for the study area from the V_sz. For a given range of Poisson's ratio values, theoretical dispersion curves of synthetic earth models are determined by forward modeling. Then, using these dispersion curves and the estimated average shear wave velocity of the model, the wavelength-depth relationship corresponding to each Poisson's ratio is determined. In the next step, by comparing the experimental and estimated wavelength-depth relationships, one can estimate the Poisson's ratio at each depth. The average compressional wave velocity (V_pz) is then estimated using the V_sz and the Poisson's ratios. We evaluated the performance of the proposed method by applying it to both a real MASW seismic data set from the USA and synthetic seismic data. The synthetic data, collected over a synthetic earth model, showed that the average shear and compressional wave velocities are estimated with an uncertainty of less than 10% in a layered earth model with very large lateral variations in shear and compressional wave velocities. According to the results, the proposed method can be used to exploit the non-destructive advantages of the surface wave method in engineering, geotechnical, and earthquake engineering projects to obtain the average shear wave velocity (V_sz).
Evaluation of the Precise Point Positioning method with different combinations of dual frequencies of Galileo and BeiDou using PPPteh software
https://jesphys.ut.ac.ir/article_79569.html
Owing to advances in global navigation satellite systems, satellites can now transmit signals on several frequencies. For this reason, different combinations of these frequencies can be used to form ionosphere-free code and phase observations. The aim of this study is to evaluate the Precise Point Positioning (PPP) method using combinations of different frequencies. For this purpose, the PPPteh software, developed by the authors and written in MATLAB, is used. PPPteh can process observations from the four satellite systems GPS, GLONASS, BeiDou, and Galileo to perform precise point positioning. The software implements all possible combinations for forming dual-frequency ionosphere-free observations from the different frequencies: three modes for the GPS system, ten modes for the Galileo system, and three modes for the BeiDou system. To evaluate the precise point positioning method, four steps were considered in terms of position accuracy and convergence time: 1) first, use dual-frequency GPS observations and determine the position; 2) combine the GPS and Galileo satellite systems and select the best combination model; 3) combine the GPS and BeiDou systems and select the best combination; and 4) finally, determine the position using all three systems with the best frequency model and compare the results. Based on the results for the Galileo and BeiDou navigation satellite systems, one combination from each system was selected as the best for use in precise point positioning. 
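For reference, the standard dual-frequency ionosphere-free combination that such software forms is P_IF = (f1^2 P1 - f2^2 P2) / (f1^2 - f2^2), which removes the first-order ionospheric delay because that delay scales as 1/f^2. A sketch with the GPS L1/L2 pair (the same formula applies to any Galileo or BeiDou frequency pair; the simulated range and delay are illustrative):

```python
F1 = 1575.42e6  # GPS L1 carrier frequency (Hz)
F2 = 1227.60e6  # GPS L2 carrier frequency (Hz)

def iono_free(p1, p2, f1=F1, f2=F2):
    """Ionosphere-free pseudorange (metres) from a dual-frequency pair."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# simulate a geometric range plus a first-order ionospheric delay
rho = 22_000_000.0           # true geometric range (m), illustrative
I1 = 5.0                     # ionospheric delay on L1 (m), illustrative
p1 = rho + I1
p2 = rho + I1 * (F1 / F2)**2 # the delay scales with 1/f^2
p_if = iono_free(p1, p2)     # recovers rho: the delay cancels exactly
```

The price of the combination is amplified observation noise, which is one reason the choice of frequency pair (and its weighting) affects the convergence behavior discussed below.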
In the precise point positioning, adding BeiDou observations reduced the convergence time and, in most cases, increased the three-dimensional accuracy of the coordinate components; the selected signal combination has better quality than the other two combinations. The same process was followed for Galileo observations, which showed that combining Galileo observations with GPS observations increased accuracy and reduced convergence time; the selected signal combination performed better than the other three combinations. Finally, by combining all three systems with the frequency model selected in the first stage, it was concluded that the combination of the three satellite navigation systems GPS, Galileo, and BeiDou yields a significant improvement, both in reducing convergence time and in increasing the three-dimensional accuracy of the coordinates. Also, the error (the difference between the estimated coordinates and the final station coordinates from the IGS file) when using the Galileo and BeiDou systems in combination with GPS differs noticeably, both in convergence and in coordinate accuracy. Combining all three systems increases accuracy and reduces convergence time, but in a dual combination with GPS, Galileo observations give higher accuracy and a shorter convergence time. Therefore, choosing the right signals to form ionosphere-free observations in precise point positioning, and combining the different observations with the correct weight for each signal in combination with GPS, can meet the user's needs in terms of accuracy and convergence.
Investigation of the near-field and directivity effects in earthquake hazard analysis studies: a case study of the Doroud fault
https://jesphys.ut.ac.ir/article_79634.html
In this study, considering the location of the city of Doroud in the area near the active strike-slip Doroud fault, the near-field and rupture-directivity effects have been investigated in seismic hazard analysis studies. The Doroud fault is located near the cities of Doroud and Boroujerd in western Iran. Doroud and Boroujerd are among the important cities of Iran in the agricultural industry, and owing to the pristine nature of these areas they have always been of interest to tourists. The micro-earthquakes recorded in this area indicate the activity of the Doroud fault system. To prevent possible earthquake damage in this area, seismicity studies of ground acceleration that take site effects into account can be useful for strengthening civil structures. Abrahamson (2000) and Somerville et al. (1997) were among the first researchers to establish studies on this basis, and the relationships and methods they proposed are the most widely accepted today for applying the directivity effect. These researchers considered two parameters, the rupture angle and the ratio of ruptured fault length, as the controlling factors of the directivity effect and examined the results for the resulting acceleration spectrum. The directivity effect can lead to the formation of long-period pulses in the ground motion, and some proposed models (e.g., Somerville et al., 1997) can quantify this effect in earthquake hazard analysis with deterministic and probabilistic approaches (Abrahamson, 2000). In this study, the seismic hazard has been investigated, compared, and evaluated for the Doroud fault for different spectral periods and different return periods, both with and without applying the directivity effect. 
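The return periods used in such probabilistic analyses map to exceedance probabilities through the standard Poisson-model relation P = 1 - exp(-t / T_R) for an exposure time t. This is a general PSHA relation, not something specific to this study:

```python
import math

def exceedance_probability(t_years, return_period):
    """Probability of at least one exceedance in t_years, assuming a
    Poisson occurrence model with mean return period return_period."""
    return 1.0 - math.exp(-t_years / return_period)

# the familiar design levels for a 50-year exposure time
p475 = exceedance_probability(50, 475)    # ~0.10, the "10% in 50 yr" level
p2475 = exceedance_probability(50, 2475)  # ~0.02, the "2% in 50 yr" level
```

This is why the 475- and 2475-year return periods evaluated below correspond to the common 10%-in-50-years and 2%-in-50-years hazard levels.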
Near-field and directivity effects can lead to long-period pulses in the ground motion, which matter for long-period structures such as bridges and dams near faults with high activity rates. Including directivity effects in attenuation relationships, for both deterministic and probabilistic approaches, can have a great impact on the results of a realistic seismic hazard analysis. The Doroud fault is one of the most important faults in Iran, with a history of large earthquakes in the early instrumental period, and its strike-slip mechanism can intensify the long-period strong-motion parameters in the city of Doroud during earthquakes and consequently cause serious damage to long-period structures in this area. In this study, the strong ground motion parameters in the probabilistic earthquake hazard analysis have been estimated for the Doroud fault area with the directivity effect applied. In addition, by examining the disaggregation of the earthquake hazard, the effect of directivity on the contributions of distance and magnitude to the estimated strong-motion parameter has been evaluated. For short and long return periods, the directivity effect on the strong motion at different spectral periods has been estimated and evaluated by the method of Somerville et al. and Abrahamson. The estimated acceleration is calculated and evaluated for three return periods, 50, 475, and 2475 years, and at periods of 0.75, 1, 2, 3, and 4 s. The value of the strong-motion parameter was directly related to the increase of the return period and the spectral period, such that the largest increase in acceleration due to the directivity effect (17.16 percent) was calculated for the 2475-year return period and the 4-second period.
Determining the Elastic thickness of the lithosphere in the Zagros Mountains using the Admittance function
https://jesphys.ut.ac.ir/article_79582.html
The Zagros orogen is one of the most active orogenic belts, extending approximately 2000 kilometers from the Anatolian fault in eastern Turkey to the Minab fault in southern Iran. Given the importance of this region, as well as the essential role of elastic thickness in controlling the rate of deformation under applied loads, the determination of Te in the Zagros Fold-and-Thrust Belt has been conducted. The lithosphere's elastic thickness (Te) is a convenient measure of the flexural rigidity, which is defined as the resistance to bending under applied loads. To determine the elastic thickness of the lithosphere, the spectral admittance function is applied: we used the load deconvolution of the admittance function between free-air gravity and topography data to estimate Te. Free-air anomalies with a five arc-minute resolution are utilized in this study. In flexural isostatic studies, the gravity and topography data are compared with theoretical models to estimate several parameters of the lithosphere. In the simplest model, a plate is flexed by a surface load, and the magnitude of the resulting deflection is governed by Te. The lithosphere is modeled using random fractal surfaces as the initial surface and subsurface loads applied to the lithosphere, and the post-flexural gravity and topography are determined. Based on these new fields, the predicted admittance function is determined. Finally, the best-fitting Te is the one that minimizes the misfit between the observed and predicted functions. Additionally, the misfit weighted by the jackknife error is applied when estimating the observed admittance. The accuracy of the method is checked through synthetic modeling. Two fractal surfaces are used as the initial surface and subsurface loads applied to the lithosphere. After calculating the corresponding gravity and topography data by the load deconvolution method, the observed and predicted admittances are estimated. 
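The observed admittance itself is the cross-spectral ratio Z(k) = <G H*> / <H H*> between the gravity spectrum G and the topography spectrum H. A minimal 1-D sketch with a synthetic transfer function (all numbers illustrative; real estimates additionally require ensemble averaging or windowing and the jackknife weighting mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx = 512, 5000.0                       # grid points and spacing (m)
h = rng.standard_normal(n)                # synthetic topography profile
k = np.abs(np.fft.fftfreq(n, d=dx)) * 2 * np.pi  # wavenumber (rad/m)
transfer = np.exp(-k * 30000.0)           # assumed e^{-k d} response, d = 30 km
H = np.fft.fft(h)
G = transfer * H                          # "gravity" built from a known response
# admittance estimate Z(k) = Re(G H*) / Re(H H*)
Z = np.real(G * np.conj(H)) / np.real(H * np.conj(H))
# Z recovers the known transfer function wavenumber by wavenumber
```

Fitting predicted admittance curves for a range of Te values to such an observed Z(k) and minimizing the misfit is the estimation step described in the abstract.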
The best-fitting Te is obtained by minimizing the misfit between the observed and predicted functions. After confirming the accuracy of the method for Te determination, the technique is applied to the real data acquired from the NCC as follows. We consider a three-layered crust in the lithosphere modeling, in which the internal load is applied at the middle crust. To model the lithosphere, the global CRUST1.0 model is applied, treating the crust as three layers. The 2D map of Te variations in the target area is obtained by utilizing the load deconvolution of the admittance function between free-air gravity and topography data. High-precision ground gravity data, which are more accurate than satellite data, allow us to detect more details of the Te variations in the region. Based on the obtained results, the estimated range of Te in the survey region can be considered low to intermediate. This predicted range is in good accordance with the area's geological background, as it is regarded as a young, active orogenic system. The Te range, and hence the lithosphere's predicted resistance to deformation, is supported by previous studies using different geophysical and seismological methods. The mean value of Te in the area is 37±2 km, and the maximum is detected in the Sanandaj-Sirjan zone. The overall predicted trend of Te follows the geological background of the region. Additionally, the estimated trend for Te, and the resistance to the applied load and deformation, is in good agreement with previous geophysical and seismological studies conducted in the region.
An Analytical solution to the two-dimensional unsteady pollutant transport equation with arbitrary initial condition and source term in open channels
https://jesphys.ut.ac.ir/article_79571.html
Pollutant dispersion in the environment is one of the most important challenges in the world. The governing equation of this phenomenon is the Advection-Dispersion-Reaction Equation (ADRE). It has wide applications in water and atmospheric sciences, heat transfer, and engineering. This equation is a parabolic partial differential equation based on Fick's first law and the conservation equation. The application of mathematical models of pollutant transport in rivers is vital. Analytical solutions are useful for understanding the contaminant distribution, estimating transport parameters, and verifying numerical models. One of the powerful methods for solving nonhomogeneous partial differential equations analytically in one- or multi-dimensional domains is the Generalized Integral Transform Technique (GITT). This method is based on an eigenvalue problem and an integral transform that converts the main partial differential equation into a system of Ordinary Differential Equations (ODEs). In this research, an analytical solution to the two-dimensional pollutant transport equation with arbitrary initial condition and source term was obtained for a finite river domain using GITT. The equation parameters, such as velocity, dispersion, and reaction factor, were considered constant. The boundary conditions were assumed homogeneous. The source term was considered as point pollutant sources with an arbitrary emission time pattern. To derive the analytical solution, the first step is choosing an appropriate eigenvalue problem; it must be based on a self-adjoint operator and be solvable analytically. Next, the eigenfunction set was extracted by solving the eigenvalue problem with homogeneous boundary conditions using the separation of variables method. Then the forward integral transform and the inverse transform were defined. By applying the transform and using the orthogonality property, the ordinary differential equation system was obtained. 
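The transform-evolve-invert sequence can be illustrated in one dimension with pure dispersion, homogeneous Dirichlet boundaries, and no source term. This is a simplified sketch of the GITT machinery, not the paper's full two-dimensional solution:

```python
import numpy as np

L_dom, D, t = 1.0, 0.01, 2.0          # domain length, dispersion coeff., time
x = np.linspace(0.0, L_dom, 201)
dx = x[1] - x[0]
N = 50                                 # number of retained modes

def eig(n):
    """Normalised eigenfunctions of the self-adjoint Dirichlet problem."""
    return np.sqrt(2.0 / L_dom) * np.sin(n * np.pi * x / L_dom)

def project(f):
    """Trapezoidal inner product on the uniform grid (forward transform)."""
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

c0 = eig(1)                            # initial condition: the first mode
cbar0 = np.array([project(c0 * eig(n)) for n in range(1, N + 1)])
lam = (np.arange(1, N + 1) * np.pi / L_dom) ** 2
cbar_t = cbar0 * np.exp(-D * lam * t)  # each transformed mode decays independently
c = sum(cb * eig(n) for n, cb in zip(range(1, N + 1), cbar_t))  # inverse transform
exact = np.exp(-D * (np.pi / L_dom) ** 2 * t) * eig(1)          # known solution
```

With advection, reaction, and sources the transformed system is a coupled ODE system solved numerically, as described in the abstract, but the transform and inversion steps are the same.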
The initial condition was transformed using the forward transform, the ODE system was solved numerically, and the transformed concentration function was obtained. Finally, the inverse transform was applied and the main analytical solution was extracted. To evaluate the derived solution, its results were compared with the Green's Function Method (GFM) solution in two hypothetical examples. In the first example, the initial condition was an impulse at a specific point in the domain, and one point source with an exponential time pattern was considered. In the second example, the initial condition was similar to the first, and two point sources with irregular time patterns were assumed. The final results were represented as concentration contours at different times in the velocity field. The results show the conformity of the proposed solution with the GFM solution and indicate that the performance of the proposed solution is satisfactory and accurate. The concentration gradient decreases over time, and the pollution plume spreads and finally exits the domain in the resultant velocity direction owing to the advection and dispersion processes. The presented solutions have various applications; they can be used instead of numerical models under constant-parameter conditions. The analytical solution is an exact, fast, simple, and flexible tool that is stable for all conditions; with this method, difficulties associated with numerical methods, such as stability and accuracy, do not arise. Also, because of the high flexibility of the present analytical solutions, it is possible to implement arbitrary initial conditions and multiple point sources with more complex emission time patterns. 
It can therefore be used as a benchmark solution for validating numerical solutions in the two-dimensional case.
Application of Principal Component Analysis (PCA) in a Fuzzy Inference System (FIS) for Time-Series Modeling of the Ionosphere
https://jesphys.ut.ac.ir/article_79583.html
The ionosphere is a layer of Earth's atmosphere extending from an altitude of about 100 km to more than 1000 km. Typically, the total electron content (TEC) is used to study the behavior and properties of the ionosphere. TEC is the total number of free electrons along the path between the satellite and the receiver. TEC varies greatly with time and space; its temporal variations can be considered on daily, monthly, seasonal, and annual scales. Understanding these variations is crucial in space science, satellite systems, and positioning, so time-series modeling of the ionosphere is very important. Modeling the temporal variations of the ionosphere requires a large number of observations, and therefore a model with high speed and accuracy. In this paper, a new method is presented for modeling ionosphere time series: the principal component analysis (PCA) method is combined with a fuzzy inference system (FIS), and the ionosphere time series are then modeled. The advantages of this combination are increased computational speed, a shorter convergence time to the optimal solution, and increased accuracy of the results. With the proposed model, the ionosphere can be analyzed at shorter time resolutions. Principal component analysis is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance, and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables. Fuzzy inference systems (FIS) take inputs and process them based on pre-specified rules to produce outputs. 
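The PCA step described above can be sketched with a centred data matrix and a singular value decomposition; the leading component scores are what would feed the FIS. The shapes below are illustrative, not the paper's actual TEC series:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((365, 24))    # e.g. one year of hourly TEC profiles
Xc = X - X.mean(axis=0)               # centre each variable (PCA is scale-sensitive)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                  # number of retained principal components
scores = Xc @ Vt[:k].T                 # data expressed in the leading-PC basis
explained = (s[:k] ** 2).sum() / (s ** 2).sum()  # fraction of variance kept
```

Feeding `scores` (a few uncorrelated columns) to the FIS instead of the full 24-variable input is what reduces the training and convergence time reported below.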
Both the inputs and outputs are real-valued, whereas the internal processing is based on fuzzy rules and fuzzy arithmetic. The FIS is the key unit of a fuzzy logic system, with decision making as its primary task. It uses "IF…THEN" rules, along with the connectors "OR" and "AND", to draw the essential decision rules. To evaluate the proposed method, observations of Tehran's GNSS station in 2016 have been used. This station is one of the International GNSS Service (IGS) stations in Iran, so its observations are easily accessible and evaluated. The statistical indices dVTEC = |VTEC_GPS - VTEC_model|, the correlation coefficient, and the root mean square error (RMSE) are used to evaluate the new method. The statistical evaluations of dVTEC show that for the combined PCA-FIS model this index has a lower numerical value than for the FIS model without PCA, as well as for the global ionosphere map (GIM-TEC) and the NeQuick empirical ionosphere model. The correlation coefficients obtained are 0.890, 0.704, and 0.697 for the PCA-FIS, GIM, and NeQuick models, respectively, with respect to GPS-TEC as the reference observation. Using the combination of PCA and FIS, the convergence time to an optimal solution decreased from 205 to 159 seconds. The RMSE of the training and testing steps has also been significantly reduced. Analysis of the northern, eastern, and height components in precise point positioning (PPP) also shows higher accuracy for the proposed model than for the GIM and NeQuick models. The results of this paper show that the PCA-FIS method is a precise, accurate, and fast method for time-series modeling of TEC variations.
Numerical Modelling and Automatic Detection of submesoscale eddies in the Persian Gulf Using a Vector Geometry Algorithm
https://jesphys.ut.ac.ir/article_79581.html
Nowadays, marine data, comprising both observational and measured values as well as the output of numerical models, are widely available, but analyzing and processing these data is time-consuming and tedious owing to the sheer volume of information. Identifying and extracting eddies is one of the most important tasks in physical oceanography, and automatic eddy-detection algorithms are among the most basic tools for analyzing eddies. The general circulation of the Persian Gulf is a cyclonic circulation affected by tide, wind stress, and thermohaline forcing. In this study, the circulation in the Persian Gulf was modeled using the MIKE model, based on the three-dimensional solution of the Navier-Stokes equations with the assumptions of incompressibility, the Boussinesq approximation, and hydrostatic pressure. A vector geometry algorithm was then used to detect eddies in this region. In this algorithm, four constraints were derived in conformance with the definition and characteristics of the eddy velocity field, and eddy centers are determined at the points where all of the constraints are satisfied. The four constraints are as follows: (i) along an east-west (EW) section, v has to reverse in sign across the eddy center, and its magnitude has to increase away from it; (ii) along a north-south (NS) section, u has to reverse in sign across the eddy center, and its magnitude has to increase away from it, with the same sense of rotation as for v; (iii) the velocity magnitude has a local minimum at the eddy center; and (iv) around the eddy center, the directions of the velocity vectors have to change with a constant sense of rotation. The constraints require two parameters to be specified: one for the first, second, and fourth constraints and one for the third. The first parameter, a, defines how many grid points away the increases in the magnitude of v along the EW axis and of u along the NS axis are checked. 
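The first three constraints can be checked on a synthetic eddy as a minimal sketch; the grid, the Gaussian streamfunction, and the search radius a below are all illustrative assumptions, not the study's model output:

```python
import numpy as np

n = 41
y, x = np.mgrid[-20:21, -20:21] * 1.0   # grid coordinates centred on (0, 0)
psi = np.exp(-(x**2 + y**2) / 100.0)    # Gaussian streamfunction of one eddy
u = -np.gradient(psi, axis=0)           # u = -d(psi)/dy
v = np.gradient(psi, axis=1)            # v =  d(psi)/dx
ic = jc = n // 2                        # candidate eddy centre
a = 3                                   # search radius in grid points

# (i) v reverses sign across the centre along EW and grows away from it
ew_ok = v[ic, jc - a] * v[ic, jc + a] < 0 and abs(v[ic, jc + a]) > abs(v[ic, jc + 1])
# (ii) u reverses sign across the centre along NS and grows away from it
ns_ok = u[ic - a, jc] * u[ic + a, jc] < 0 and abs(u[ic + a, jc]) > abs(u[ic + 1, jc])
# (iii) the speed has a local minimum at the centre
speed = np.hypot(u, v)
min_ok = speed[ic, jc] <= speed[ic - 1:ic + 2, jc - 1:jc + 2].min()
```

In the full algorithm these tests are swept over every grid point, constraint (iv) checks the sense of rotation around the candidate centre, and the second parameter b sets the neighbourhood used for the speed minimum.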
It also defines the curve around the eddy center along which the change in direction of the velocity vectors is inspected. The second parameter, b, defines the dimension (in grid points) of the area used to define the local minimum of velocity. The main data used to detect eddies are the numerical model outputs, including the velocity components; these outputs are the result of numerical modeling with thermohaline and wind stress forcing. In total, for daily data over one year, 4308 cyclonic and 2860 anticyclonic eddies are detected at the surface, and 617 cyclonic and 329 anticyclonic eddies are found in the deepest layer, at a depth of 50 meters. The number of eddies is highest in winter and lowest in summer, and the average radius is maximum in winter for anticyclonic eddies and minimum in summer for cyclonic eddies. Most eddies have a radius of 5-10 km and a lifespan of 3-6 days. Also, as the lifespan of eddies increases, they penetrate deeper into the water. The eddy penetration percentage, i.e. the ratio of the number of eddies in the deepest layer to that in the surface layer, is 15% for cyclonic eddies and 10% for anticyclonic eddies. This indicates that the energy loss in cyclonic eddies is less than in anticyclonic eddies, probably owing to the alignment of the rotating eddies with the overall circulation of the Persian Gulf.
Elemental analysis of airborne dust in the World Heritage city of Yazd by Laser-Induced Breakdown Spectroscopy
https://jesphys.ut.ac.ir/article_79584.html
Dust and the environmental pollution caused by dust storms are a serious environmental hazard, particularly in arid and semi-arid inhabited regions of the world. Controlling and reducing the harmful or undesirable effects of dust can be achieved by accurately identifying and analyzing dust samples. To this end, various elemental analysis methods are commonly used to identify and characterize dust materials. The city of Yazd (a UNESCO World Heritage city) is located in Iran's central region and is surrounded by many industrial and mineral sites and deserts. The city's urban areas suffer air pollution due to seasonal winds, the lack of annual rainfall, and dust storms; hence, the dust concentration in this city occasionally exceeds standard limits. In this paper, a study to characterize and analyze the falling dust in the city of Yazd is reported. Initially, sampling was conducted at five different locations for two months using marble dust collectors. The size distributions and morphology of the dust samples were studied by Scanning Electron Microscopy (SEM) and the X-Ray Diffraction technique (XRD). Moreover, the samples' elemental composition was analyzed using Energy-Dispersive X-Ray Spectroscopy (EDX) and, separately, Laser-Induced Breakdown Spectroscopy (LIBS). The analysis of the SEM images and XRD patterns of the dust particles allows the size and morphology of the samples to be studied. Particle sizes of 1 to 30 microns were estimated, with the maximum of the size distribution between 2 and 7 microns. Capsular, triangular, spherical, irregular, and polyhedral shapes are also revealed by the recorded particle images. The XRD analyses show the existence of silicate, carbonate, and phosphate mineral groups, and of calcite, quartz, gypsum, magnesium carbonate, and aluminum phosphate components in the samples. 
Laser-induced breakdown spectroscopy (LIBS) is a non-contact, fast-response, high-sensitivity, real-time, multi-elemental analytical detection technique based on emission spectroscopy for measuring elemental composition. The elemental characterization of the powder samples was carried out by investigating the emission spectra of the breakdown plasma in the sample region. A 1064-nm Nd:YAG laser operating at high energy (100 mJ, 1 to 20 Hz) was focused on the surface of a tiny amount of powder sample to form an emitting plasma. The emission of the plasma produced from the sample was collected by eight optical fibers and detected by the spectrometer. The experimental setup allowed spectra to be recorded in the range of 200 to 1200 nm with a spectral resolution of 0.4 nm. In total, 74 atomic emission lines of the generated plasma were analyzed. Spectral analysis of the obtained spectra enables the identification of several elements, such as calcium, silicon, iron, magnesium, aluminum, and carbon, and of other, less abundant elements, such as potassium, sodium, strontium, manganese, titanium, cobalt, vanadium, barium, and lead, in the elemental composition of the dust samples. The results deduced using the LIBS technique agree unambiguously with the results obtained by EDX analysis of the dust samples in this work. Laser-induced breakdown spectroscopy is found to be a rapid, reliable, and powerful analytical tool for the diagnosis and detection of multiple elements in solid dust samples. This technique is also comparable with standard methods, such as atomic absorption spectroscopy (AAS) and X-Ray Fluorescence (XRF), for the chemical and elemental analysis of urban, mineral, and industrial dust.
Evaluation of cumulus schemes of the HWRF model in forecasting tropical cyclone characteristics: Gonu tropical cyclone case study
https://jesphys.ut.ac.ir/article_79578.html
The sensitivity of numerical models in predicting Tropical Cyclone (TC) characteristics has been considered in numerous research studies. In this research, the application of five cumulus schemes of the HWRF (Hurricane Weather Research and Forecasting) model, namely KF, SAS, BMJ, TiedTKE, and SASAS, has been examined for Tropical Cyclone Gonu (TCG) from 4 to 7 June 2007. The simulations have been conducted using three nests with 27, 9, and 3 km resolutions. To this aim, the performance of the schemes in predicting TCG intensity is analyzed using minimum surface pressure and maximum 10-m wind speed. Subsequently, their effect on forecasting the radius of maximum wind is evaluated. The parameters of lower-level convergence, upper-level divergence, potential temperature, potential vorticity, Convective Available Potential Energy (CAPE), wind vector (both horizontal and vertical components), wind shear, precipitation, and radar reflectivity have been analyzed. The results of the simulations have been compared with the analysis data, IMD and TRMM observational data, and routine atmospheric parameters measured at the Chabahar station. The comparison was done at different times of the TCG lifetime. To examine the performance of the HWRF cumulus schemes for the track and intensity of the TCG, the whole life cycle of the TCG was considered. To test the efficiency of the HWRF cumulus schemes in predicting some dynamical and thermodynamical parameters, the time of maximum intensity of the TCG (18 UTC on 4 June 2007) was focused on. To evaluate the behavior of the HWRF cumulus schemes in the coastal area, the outputs were discussed for the last two days of the TCG life cycle. Results showed that, with the configuration used, none of the five cumulus schemes predicted the TCG reaching the southern coast of Iran. Moreover, neither the pressure decrease nor the maximum wind speed was predicted accurately at the time of maximum intensity of the TCG.
While the TCG intensity was above category 3, neither the minimum surface pressure trend nor the maximum wind speed trend was forecast well. However, for the less intense conditions, the TiedTKE and SAS schemes produced the closest values. All five cumulus schemes predicted the radius of maximum wind similarly, except the TiedTKE scheme, which predicted the super cyclone 6 hours earlier. The analyzed and simulated vertical cross sections of potential temperature and horizontal wind were similar. The simulated values of the vertical component of the wind were considerably larger than those from the analysis data and were also closer to the TCG center. The maximum values of simulated CAPE were located off the Oman coast compared to the analysis values. Only the simulations using the SASAS cumulus scheme showed the strongest potential vorticity near the surface. The simulated updrafts and downdrafts were larger than those from the analysis data, and the major simulated updrafts and downdrafts were closer to the center of the TCG than those from the analysis data. The upper-level divergence patterns were seen both in the simulations using all five cumulus schemes and in the analysis data, while the lower-level convergence was captured neither in the simulations nor in the analysis data. The maximum simulated accumulated precipitation using all five cumulus schemes was 80 mm in a 6-hour interval, whereas the observed value from TRMM was 25 mm/h. The predicted radar reflectivity fields from the simulations were similar and their maximum values were the same, but the extents of the simulated maxima differed. All cumulus schemes predicted wind shear values lower than the analysis values.
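Scheme rankings of this kind rest on a few standard verification statistics computed between observed and simulated series at a station. A minimal sketch with made-up 6-hourly wind speeds (the station data and scheme output here are illustrative, not the study's values):

```python
# Sketch: correlation, RMSE, and standard-deviation ratio between an observed
# series and one model scheme's simulated series -- the statistics used to rank
# cumulus schemes at a station. All numbers below are made up.
import math

def verify(observed, simulated):
    """Correlation, RMSE, and std-dev ratio for one scheme vs. observations."""
    n = len(observed)
    mo = sum(observed) / n
    ms = sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated)) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in observed) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in simulated) / n)
    rmse = math.sqrt(sum((s - o) ** 2 for o, s in zip(observed, simulated)) / n)
    return {"corr": cov / (so * ss), "rmse": rmse, "std_ratio": ss / so}

# Made-up 6-hourly 10-m wind speeds (m/s) at a coastal station and one scheme:
obs = [4.0, 6.5, 9.0, 12.0, 10.0, 7.5]
kf  = [4.5, 6.0, 9.5, 11.0, 10.5, 7.0]
print(verify(obs, kf))
```

The scheme with the highest correlation, a std-dev ratio near 1, and the lowest RMSE would be preferred — the logic behind the per-variable "best scheme" choices reported below.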
At the Chabahar station, the observed values of 10-m wind speed, sea level pressure, and temperature have been compared with the values simulated using all five cumulus schemes for the period 6-7 June 2007. The statistical parameters of correlation, standard deviation, and root mean square error were used to identify the best cumulus scheme. The smallest prediction error was obtained using the KF cumulus scheme for the 10-m wind, the TiedTKE cumulus scheme for sea level pressure, and the SASAS cumulus scheme for temperature.
Cumulus Clouds from the rough surface perspective
https://jesphys.ut.ac.ir/article_79579.html
Although it has long been known that clouds show a fractal geometry, a detailed analysis is still missing in the literature. Through scattering of the radiation received from the sun, clouds play a very important role in the energy budget of the Earth's atmosphere. It has been shown that the surface fluctuations, and more generally the statistics of clouds, have a very important impact on the scattering and absorption of solar radiation. In this paper we first study the relation between the visible light intensity and the width of cumulus clouds. To this end, we suppose that the intensity of light transmitted through a column of cloud decays exponentially, with an extinction given by the sum of the absorbed and scattered contributions. Using this relation, we find a one-to-one relation between the cloud width and the intensity of the received visible light in the low-intensity regime. By calculating the Mie scattering cross sections for the physical parameters of the clouds, we argue that this correspondence works for sufficiently thin clouds, and that the width of the clouds is proportional to the logarithm of the intensity. The Mie cross section is shown to follow a simple asymptotic behavior for large enough arguments, involving the angle of the sun's radiation with respect to the Earth's surface, or equivalently the cloud's base. This allows us to map the system to a two-dimensional rough medium. Then, exploiting rough-surface techniques, we study the statistical properties of the clouds. We first study the roughness, defined for rough surfaces as the root-mean-square fluctuation of the height profile over a window of a given scale. The study of the local and global roughness exponents (α_l and α_g, respectively) shows that the system is self-similar. We also consider the fractal properties of the clouds; the roughness exponents are extracted numerically by least-squares fitting of the roughness against the scale. We also study the other statistical observables and their distributions.
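The roughness analysis described above can be sketched concretely: compute the rms height fluctuation w(l) in windows of size l and fit the slope of log w versus log l to estimate a roughness exponent. The synthetic random-walk profile below is a stand-in for a cloud-width signal (a random walk has exponent 1/2); none of this reproduces the paper's actual data or fitted values.

```python
# Sketch: local roughness w(l) of a height profile h(x) -- the rms height
# fluctuation inside windows of size l. For a self-affine profile
# w(l) ~ l**alpha, so alpha is the slope of log w(l) vs log l.
import math, random

def roughness(h, l):
    """RMS height fluctuation averaged over non-overlapping windows of size l."""
    w2 = []
    for start in range(0, len(h) - l + 1, l):
        seg = h[start:start + l]
        m = sum(seg) / l
        w2.append(sum((x - m) ** 2 for x in seg) / l)
    return math.sqrt(sum(w2) / len(w2))

# Synthetic self-affine profile: a random walk, whose exponent is 0.5.
random.seed(0)
h, x = [], 0.0
for _ in range(4096):
    x += random.gauss(0, 1)
    h.append(x)

scales = [8, 16, 32, 64, 128]
logs = [(math.log(l), math.log(roughness(h, l))) for l in scales]
# Least-squares slope of log w vs log l estimates the roughness exponent:
n = len(logs)
mx = sum(a for a, _ in logs) / n
my = sum(b for _, b in logs) / n
alpha = (sum((a - mx) * (b - my) for a, b in logs)
         / sum((a - mx) ** 2 for a, _ in logs))
print(round(alpha, 2))
```

Comparing such fits over small windows (local) and over the whole profile (global) is what distinguishes the α_l and α_g exponents mentioned above.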
By studying the distribution of the local curvature (at various scales) and of the height variable, we conclude that these functions, and consequently the system, are not Gaussian. In particular, the distribution of the height profile follows a Weibull distribution, i.e. a distribution of the form f(x) ∝ (x/λ)^(k−1) exp[−(x/λ)^k] for x ≥ 0 and zero otherwise. The reason why this relation arises is beyond the scope of the present work and is postponed to our future studies. The study of the local curvature reveals the same behavior and structure. All of this shows that the problem of the width of cumulus clouds maps to a non-Gaussian self-similar rough surface. We also show that the system is mono-fractal, which requires the local and global roughness exponents to coincide. Given these results, the authors think that the tops of the clouds are anomalous random rough surfaces that affect the albedo of cloud fields.
Statistical Evaluation of Cloud Seeding Operations in Central Plateau of Iran in the 2015 Water Year
https://jesphys.ut.ac.ir/article_79585.html
Iran is located in an arid and semi-arid region and has experienced a reduction in average rainfall in recent years. This has turned attention to new methods such as cloud seeding to secure additional water resources, and cloud seeding operations have been carried out in the country since 1998. The purpose of this study was to evaluate the cloud seeding projects of the 2015 water year (January, February, and March 2015) in the central region of Iran, including the provinces of Yazd, Kerman, Fars, and Isfahan and some adjacent provinces. The evaluation was performed statistically using stepwise multiple regression, with two different approaches. In the first approach, precipitation at stations located in the target area of the cloud seeding operations is estimated from the precipitation at stations in the control area using stepwise multiple regression; then, taking a 90% confidence interval for this estimate into account, the effectiveness or ineffectiveness of the cloud seeding operation at each station is determined. In the second approach, the volume of precipitation of each province in the target area is estimated from the precipitation at stations in the control area using stepwise multiple regression; then, by considering a 90% confidence interval for this estimate, the effect of the cloud seeding operations on the rainfall volume of each province is investigated. The target area in each month was selected based on the HYSPLIT model results. Due to the uneven spatial distribution of rain gauges in the target areas, parts of the target areas lacking enough rain gauges were excluded from further analysis. To define the boundaries of the excluded areas, the Inverse Distance Weighting (IDW) method was used to determine the radius of influence around each rain gauge.
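The first evaluation approach can be sketched in its simplest form: regress target-area precipitation on control-area precipitation over unseeded periods, then test whether seeded-period rainfall exceeds the upper bound of a ~90% prediction interval. All numbers below are made up, and a single control predictor stands in for the study's stepwise multiple regression over many stations.

```python
# Sketch of the target-vs-control evaluation with one control predictor and a
# ~90% prediction interval. Illustrative data only.
import math

# Historical (unseeded) monthly precipitation, mm: (control, target) pairs.
history = [(12, 10), (30, 26), (45, 40), (8, 7), (22, 20),
           (50, 44), (18, 15), (35, 30), (27, 24), (40, 36)]

n = len(history)
mx = sum(c for c, _ in history) / n
my = sum(t for _, t in history) / n
sxx = sum((c - mx) ** 2 for c, _ in history)
b = sum((c - mx) * (t - my) for c, t in history) / sxx   # slope
a = my - b * mx                                          # intercept
resid2 = sum((t - (a + b * c)) ** 2 for c, t in history)
s = math.sqrt(resid2 / (n - 2))                          # residual std error

def upper_90(control_mm, t_crit=1.86):   # t(0.95, 8 df) ~ 1.86
    """Upper bound of a ~90% prediction interval for target rainfall."""
    half = t_crit * s * math.sqrt(1 + 1 / n + (control_mm - mx) ** 2 / sxx)
    return a + b * control_mm + half

# Seeded month: control area received 28 mm, target area received 33 mm.
print(33 > upper_90(28))   # True suggests an increase beyond natural variation
```

Rainfall exceeding the interval's upper bound is the statistical signature of seeding effectiveness used per station; the second approach applies the same logic to provincial rainfall volumes.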
The radius-of-influence values were 93,940, 89,569, and 149,015 m for January, February, and March, respectively; the minimum value of 89,569 m was finally selected as the radius of influence. The results of both approaches indicate an impact of the cloud seeding operations in these areas in this water year. In particular, the volume of precipitation in February increased by 15 to 80 percent in all provinces located in the target area of the cloud seeding operations. The surface runoff generated by the precipitation increase due to cloud seeding was estimated by the Soil Conservation Service (SCS) method and the Rational method, giving 1318.5 and 1329.5 million m3, respectively. The groundwater recharge in the three months of January, February, and March is estimated as 105.3, 425.6, and 156.3 million m3, respectively. It is important to note that the runoff and groundwater recharge estimates obtained by the method used in this study are subject to high uncertainties; they can only represent the order of magnitude of the impacts of the cloud seeding operations, and therefore the exact numbers should not be used for water resources planning and management purposes. Further investigation in areas with more rain gauges can assist in a more accurate assessment of cloud seeding operations.
Moho Topography Estimation using Interactive Forward Modeling of Gravity Data
https://jesphys.ut.ac.ir/article_79567.html
The Moho discontinuity is the boundary between the crust and the upper mantle, marked by changes in seismic velocity, density, chemical structure, and composition. Estimating the Moho depth and studying its lateral changes is one of the important goals of geophysical studies. The current study aims to estimate the depth and topography of the Moho discontinuity in the southwestern part of the Baltic Sea, including parts of the Central European Basin System, the Trans-European Suture Zone, the Caledonian crustal suture, and the Ringkobing-Fyn High. This area has been one of the most attractive regions for geoscientists in recent decades due to its complicated geological structures, caused by different tectonic events. For this purpose, a three-dimensional model of the crustal structure of the study area, based on forward modeling of gravity data, is presented. Previous seismic and non-seismic results have been used to constrain the model and reduce its degrees of freedom. The model includes the sedimentary sequences, crustal thickness, Moho topography, and the extent of the high-velocity lower crust in the region, and it reflects the tectonic structures of the study area. This study used a combination of marine, land, and EGM2008 gravity data and modeled them with IGMAS+ (Interactive Gravity and Magnetic Application System). The interactive modeling program allows the user to change the geometry as well as the density and susceptibility of the initial model and to observe the results immediately during processing. In the software, the model is kept manageable by eliminating unnecessary detail and dividing the whole model into vertical sections. Our initial model consists of three main layers: sediments, crust, and upper mantle. The sedimentary layer is divided into two major parts, pre-Permian and post-Carboniferous, and the crustal layer is divided into the upper crust and the high-density lower crust.
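The physics behind such forward modeling can be sketched at zeroth order with the infinite-slab (Bouguer) approximation: a Moho shallower than a reference depth puts dense mantle where crust would otherwise be, raising the anomaly at the surface. IGMAS+ of course builds full 3-D polyhedral models from vertical sections; the numbers below (density contrast, reference depth) are assumptions for illustration only.

```python
# Sketch: slab-approximation gravity effect of Moho topography,
# delta_g = 2*pi*G*drho*(z_ref - z_moho), plus the RMSE misfit used to judge
# agreement between measured and modeled anomalies. Illustrative values only.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
DRHO = 400.0           # assumed crust-mantle density contrast, kg/m^3
Z_REF = 34e3           # assumed reference Moho depth, m

def forward(moho_depths_m):
    """Predicted Bouguer anomaly (mGal) for a profile of Moho depths."""
    # factor 1e5 converts m/s^2 to mGal
    return [2 * math.pi * G * DRHO * (Z_REF - z) * 1e5 for z in moho_depths_m]

def rmse(observed, predicted):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

# Trial model: Moho varying between 26 and 42 km along a profile.
moho = [26e3, 30e3, 34e3, 38e3, 42e3]
pred = forward(moho)
print([round(p, 1) for p in pred])
```

Interactive modeling iterates exactly this loop: perturb the geometry, recompute the forward response, and watch the misfit (here the RMSE) fall toward values like the 1.12 mGal reported below.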
The upper crust itself is composed of the upper crust of Baltica and the upper crust of Avalonia, and the last layer of the model is part of the upper mantle. The model space consists of 16 vertical planes stretching 385 kilometers east-west with an equal spacing of 15 kilometers, covering the entire study area. The initial model was developed based on seismic sections and previous models and was improved using interactive forward modeling of gravity data. The result shows good agreement between the measured and modeled Bouguer anomaly, with a root mean square error of 1.12 mGal. The model correlates clearly with the major tectonic units. It indicates that the Caledonian collision, which resulted in the amalgamation of Baltica and Avalonia, is the most prominent tectonic event in the area, and the Caledonian crustal suture between them is interpreted from changes in physical parameters at crustal levels. There is a relatively thick crystalline crust in the area, and the depth of the Moho discontinuity varies from 26 to 42 km. The results also indicate that the transition from the Paleozoic crust of the Central European Basin to the Precambrian crust of the East European Craton occurs within the Tornquist Zone.
Evaluating the performance of a planetary boundary layer scheme by using GABLS1 experiment in a single-column version of the global model developed based on potential vorticity
https://jesphys.ut.ac.ir/article_79587.html
Representing boundary layer processes is crucial in simulating atmospheric phenomena in operational hydrostatic weather forecast models, and evaluating the performance of different physical parameterizations across numerical models is an essential subject in its own right. This paper presents an objective assessment of a planetary boundary layer scheme based on turbulent kinetic energy in a single-column version of the atmospheric general circulation model developed at the University of Tehran based on potential vorticity, called UTGAM. Single-column models are a complementary tool to atmospheric general circulation models that provide a simple framework for investigating the fidelity of simulated physical processes. Reliable parameterization of boundary layer processes has a significant impact on weather forecasts. Most hydrostatic models have deficiencies in representing these unresolved processes, especially in stably stratified conditions, and it seems that this problem will persist for the foreseeable future. Here we have used the setup of the first GABLS intercomparison experiment as a simple tool to evaluate the performance of the diffusion scheme in UTGAM. Two different single-column grid staggerings, sigma-theta and sigma-pressure, combined with, respectively, 33 and 14 vertical levels below 3 km height, have been used for the low- and high-resolution simulations. The GABLS1 LES results have been used as a benchmark for comparison. The boundary layer scheme explored here is the same as the one in the ECHAM model, with some simplifications; for instance, the effects of tracers have been ignored to reduce the complexity of the problem.
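The core of a TKE-based diffusion scheme of this family is an eddy diffusivity of the form K = c · l · sqrt(e) · f(Ri), with a mixing length l, turbulent kinetic energy e, and a stability function f that suppresses mixing as the gradient Richardson number grows. A minimal sketch with illustrative constants and a crude stability function (not the actual ECHAM formulation):

```python
# Sketch: TKE-based eddy diffusivity K = c * l * sqrt(e) * f(Ri) with a
# Blackadar-type mixing length and a linear stable-side cutoff. Constants and
# the form of f are illustrative assumptions, not the evaluated scheme's.
import math

KAPPA, LAMBDA, C_K = 0.4, 150.0, 0.5   # von Karman const, asymptotic length (m), closure const

def mixing_length(z):
    """Blackadar-type mixing length: ~kappa*z near the ground, -> LAMBDA aloft."""
    return 1.0 / (1.0 / (KAPPA * z) + 1.0 / LAMBDA)

def stability(ri, ri_crit=0.25):
    """Crude stability function: full mixing for Ri <= 0, cutoff at Ri_crit."""
    return max(0.0, 1.0 - ri / ri_crit) if ri > 0 else 1.0

def eddy_diffusivity(z, tke, ri):
    """Eddy diffusivity (m^2/s) at height z (m) given TKE (m^2/s^2) and Ri."""
    return C_K * mixing_length(z) * math.sqrt(tke) * stability(ri)

# Stably stratified, GABLS1-like profile: TKE decays and Ri grows with height.
for z, tke, ri in [(10, 0.4, 0.05), (50, 0.2, 0.15), (200, 0.05, 0.3)]:
    print(z, round(eddy_diffusivity(z, tke, ri), 3))
```

The stable-side behavior of f(Ri) is precisely where schemes diverge from LES: a cutoff that is too permissive produces the overestimated momentum and heat diffusion coefficients discussed below.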
The results show subtle differences between the sigma-theta and sigma-pressure coordinates when the low- and high-vertical-resolution runs are compared separately, and these differences are more apparent at the lower vertical resolution. Nevertheless, the diffusion processes seem to be simulated somewhat more accurately in the high-resolution sigma-pressure vertical coordinate. The boundary layer scheme, like most of the operational models in the GABLS1 intercomparison experiment, overestimates the momentum and heat diffusion coefficients. The simulated wind profile shows a maximum higher than that of the corresponding LES profile; it is inferred that the scheme mixes momentum over a deeper layer than the LES, although the simulated wind profile compares favorably with those of the other operational models in GABLS1. The vertical profiles of potential temperature reveal that the amount of heat mixing is not suitable in this experiment, causing a negative bias in the lower part of the simulated boundary layer. The simulated surface friction velocities show significant differences from the LES results in all experiments; however, these large values seem unlikely to have a detrimental effect on forecast scores in an operational model. Moreover, the sensitivity of the scheme to the height of the lowest full level has been partially explored: decreasing the lowest full-level height while increasing the vertical resolution has only a modest influence on the simulation of the boundary layer processes. All the results confirm notable improvements with increasing vertical resolution in both sigma-theta and sigma-pressure coordinates.
Efficiency of the adaptive neuro-fuzzy inference system in tropospheric slant water vapor modeling
https://jesphys.ut.ac.ir/article_79588.html
The passage of satellite signals through the heterogeneous and variable troposphere imposes a significant delay on these signals. This effect is commonly known as tropospheric delay, and it can be divided into wet and dry components. The dry component is usually modeled using measurements of air pressure. Unlike the dry component, the wet component of tropospheric refraction cannot be modeled from air pressure measurements; it depends on the water vapor (WV) and moisture content of the troposphere. WV is one of the key parameters in climate system analysis and a major factor in atmospheric events. Using the observations of local and regional GNSS networks, it is possible to estimate the slant tropospheric delay (STD) and subsequently the slant wet delay (SWD) for each line of sight between a receiver and a satellite. The SWD observations are used to model the horizontal and vertical WV variations in the atmosphere above the study network by means of a tomography technique. In the tomography, the horizontal variations of tropospheric wet refractivity are modeled with a polynomial of degree and order 2 in latitude and longitude, and the altitude variations are modeled as discrete layers of constant height. The main innovation of this paper is the estimation of the tropospheric parameters for each line of sight between receiver and satellite by an adaptive neuro-fuzzy inference system (ANFIS). The SWD obtained from GPS observations for the different signals at each station is compared with the SWD generated by the ANFIS (SWDGPS-SWDANFIS). The square of the difference between these two values is introduced as the cost function of the ANFIS. By calculating the value of the cost function at each step, the weights of the ANFIS network are corrected by the back-propagation (BP) method.
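The cost-function idea just described can be sketched in miniature: a model predicts SWD along each receiver-satellite ray, and the squared misfit to SWD_GPS drives gradient-descent (back-propagation-style) weight updates. Here a model linear in two hypothetical per-ray features stands in for the full ANFIS, with made-up data generated from known weights:

```python
# Sketch: minimize sum((SWD_model - SWD_GPS)**2) by stochastic gradient descent
# on the model weights -- the training loop behind the ANFIS cost function.
# The two per-ray features and all data values are illustrative assumptions.

def train(rays, swd_gps, lr=0.01, epochs=500):
    w = [0.0, 0.0]                       # model weights, updated in place
    for _ in range(epochs):
        for x, d in zip(rays, swd_gps):
            pred = w[0] * x[0] + w[1] * x[1]
            err = pred - d               # gradient of 0.5*err**2 w.r.t. pred
            for j in range(2):
                w[j] -= lr * err * x[j]  # gradient step on the squared misfit
    return w

# Hypothetical rays: (feature1, feature2) per line of sight, and "observed"
# SWD values (m) generated from true weights [0.12, 0.03]:
rays = [(1.0, 2.0), (1.5, 1.0), (2.0, 3.0), (2.5, 0.5), (3.0, 2.5)]
swd = [0.12 * a + 0.03 * b for a, b in rays]
w = train(rays, swd)
print([round(x, 3) for x in w])
```

In the real system the model is the fuzzy inference network rather than a linear map, and the recovered parameters feed the wet-refractivity field from which SWV is computed in the next step.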
In the next step, the value of slant water vapor (SWV) is calculated using the estimated wet refractivity. For evaluation, GPS observations from 27-31 October 2011 and Tabriz radiosonde observations are used. For a more detailed evaluation, two test stations are selected and the ANFIS zenith wet delays (ZWDANFIS) are compared with ZWDGPS; the observations of the test stations are not used in the modeling step. To further examine the accuracy of the proposed method, the results of this study are compared with the results of the voxel-based tomography (TomoVoxel) method and of troposphere tomography using an artificial neural network (TomoANN). The relative error, root mean square error (RMSE), standard deviation, and correlation coefficient were used to evaluate the results. At the Tabriz radiosonde station, the correlation coefficients for the ANFIS, TomoVoxel, and TomoANN models were calculated as 0.9131, 0.8863, and 0.9006, respectively. The minimum relative errors for TomoANFIS, TomoANN, and TomoVoxel are 8.31%, 8.55%, and 8.71%, respectively, and the maximum RMSEs for the three models are 0.9718, 1.0281, and 1.2346 mm/km, respectively. The results of this paper indicate the very high capability of the TomoANFIS model in representing the temporal and spatial variations of SWV. This method can be used to study the behavior of the atmosphere in real-time and near-real-time applications.
The 2007 Kahak and 2010 Kazerun earthquakes: constrained non-negative least-squares linear finite fault inversion for slip distribution
https://jesphys.ut.ac.ir/article_79633.html
In this study, two moderate earthquakes from two main seismotectonic provinces of Iran are chosen for investigating slip distribution using finite-fault modeling. The first is the 18 June 2007 Mw 5.5 Kahak earthquake, located in the Central Iran seismotectonic province in the vicinity of the Kahak district of Qom province, near Tehran, the capital of Iran. The second is the 27 September 2010 Mw 5.9 Kazerun earthquake, situated in the Zagros seismotectonic province near Kazerun County in Fars province. This research aims at finite-fault modeling of the broadband three-component displacement waveforms of these earthquakes using a least-squares inversion method for the spatial and temporal slip distribution. Green's functions are calculated using the frequency-wavenumber integration code (FKRPROG) developed by Saikia (1994), and the inversion algorithm used to generate the synthetic data is based on the stabilized constrained non-negative least-squares method introduced by Hartzell and Heaton (1983). Numerous inversions are performed to obtain the optimal parameters of the process, including rupture velocity and rise time. A rupture velocity of 2.6 km/s (0.75 Vs) and a rise time of 1.4 s are used for the first event, and 2.8 km/s (0.75 Vs) and 2.1 s are chosen for the second. Results show ruptures with peak slips of 8.6 cm and 14.3 cm and total seismic moment releases of 1.59e+24 dyne-cm and 2.80e+25 dyne-cm for the Kahak and Kazerun earthquakes, respectively. Furthermore, due to the non-uniqueness of the inversion problem, a set of solutions is presented for both events. Among these models, the final solutions for both earthquakes, obtained from the ISC hypocenter and the GCMT focal mechanism, give the smoothest synthetic data with the best data fit.
For the Kahak earthquake, the ISC hypocenter provides the best fit to the observed data, with a maximum total variance reduction of 35.30% for the spatial and 54.50% for the spatiotemporal distribution. For the Kazerun earthquake, the best fit to the observed data, with a maximum total variance reduction of 54.44%, is obtained using the ISC hypocenter. The sensitivity of the slip models to influential parameters such as rupture velocity and rise time is also explored. This sensitivity test shows that increasing the rupture velocity increases the seismic moment and decreases the total variance reduction, while increasing the rise time reduces the rupture area and the seismic moment. Another result of this test is that the slip distribution is heavily influenced by the number of stations and the choice of data sets. Finally, a comparison between the slip pattern on the fault plane and its projection on the Earth's surface illustrates that the aftershocks are distributed primarily outside the region of major slip. Since aftershocks are a phase of relaxation of stress concentrations, they are expected to spread outside the slip patch. For the Kahak earthquake, this distribution occurs near the western end of the slip model, and in the Kazerun model the aftershocks likewise surround the region of major slip. Therefore, an analysis of the aftershock distribution and slip patterns supports the reliability of the solutions.
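The core of such a slip inversion is solving G m = d for non-negative subfault slips m, where G holds the Green's-function responses and d the observed waveforms, and scoring the fit with the total variance reduction VR = 1 − Σ(d − Gm)² / Σd². A toy sketch with a projected-gradient non-negative solver standing in for the Hartzell-Heaton stabilized method, on made-up G and d:

```python
# Sketch: non-negative least-squares slip inversion G m = d (m >= 0) via
# projected gradient descent, plus the variance-reduction fit measure.
# G, d, and the "true slip" are illustrative, not the study's matrices.

def matvec(G, m):
    return [sum(g * x for g, x in zip(row, m)) for row in G]

def nnls(G, d, lr=0.05, iters=2000):
    """Projected gradient descent on ||G m - d||^2 with the constraint m >= 0."""
    n = len(G[0])
    m = [0.0] * n
    for _ in range(iters):
        r = [p - o for p, o in zip(matvec(G, m), d)]      # residual G m - d
        for j in range(n):
            grad = sum(G[i][j] * r[i] for i in range(len(d)))
            m[j] = max(0.0, m[j] - lr * grad)             # step, then project
    return m

def variance_reduction(d, pred):
    """Total variance reduction: 1 - sum((d - pred)^2) / sum(d^2)."""
    return 1.0 - sum((a - b) ** 2 for a, b in zip(d, pred)) / sum(a * a for a in d)

# Two subfaults, three "stations"; data generated from true slip [1.0, 0.5].
G = [[1.0, 0.2], [0.4, 1.0], [0.3, 0.6]]
d = matvec(G, [1.0, 0.5])
m = nnls(G, d)
print([round(x, 3) for x in m], round(variance_reduction(d, matvec(G, m)), 3))
```

With noise-free synthetic data the solver recovers the true slip and VR approaches 1; with real waveforms the maximum attainable VR (e.g. the 35-55% values above) quantifies how much of the data the slip model explains.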