Journal of the Earth and Space Physics
https://jesphys.ut.ac.ir/
Residual static correction Using Tunable Q Factor Discrete Wavelet Transform
https://jesphys.ut.ac.ir/article_79568.html
The derivation of datum (reference) static corrections is generally based on a fairly simple model of the near-surface geology. The lack of detailed information near the surface leads to inaccuracies in this model and, therefore, in the static corrections. Residual static corrections are designed to correct the small inaccuracies that remain in the near-surface model. Their application should improve the final stacked section compared with one to which only datum static corrections are applied. For example, if the final stacked section is to be inverted to produce an acoustic impedance section, it is important that the amplitude variations along the section represent the changes in the reflection coefficient as closely as possible. This is unlikely to be the case if small residual static errors are present. In addition, datum static corrections are not a unique set of values, because a change of datum results in a different set of corrections. Owing to variations in the Earth's surface and in the velocities and thicknesses of the near-surface layers, the shape of the travel-time hyperbola changes. These deviations, called statics, result in misalignments and lost events in the CMP gather, so they must be corrected during processing. After correcting the long-wavelength statics, some short-wavelength anomalies remain. These "residual" statics are due to variations in the low-velocity layer that were not accounted for. The estimation of residual statics in complex areas is one of the main problems in seismic data processing, and the results of this processing step affect the quality of the final reconstructed image and of the interpretation. Residual statics can be estimated by different methods, such as travel-time inversion, stack-power maximization, and sparsity maximization, which rely on a surface-consistent assumption.
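As a minimal, hypothetical sketch of a travel-time style estimate (not the paper's implementation), the residual static shift of a trace can be measured as the lag of maximum cross-correlation against a pilot (stack) trace; the Gaussian test wavelets below are invented for illustration:

```python
import numpy as np

def residual_static_shift(trace, pilot, dt):
    """Estimate the residual static time shift (s) of a trace relative to a
    pilot trace as the lag of maximum cross-correlation."""
    xc = np.correlate(trace, pilot, mode="full")
    lag = np.argmax(xc) - (len(pilot) - 1)   # positive lag: trace is delayed
    return lag * dt

# Pilot wavelet and the same wavelet delayed by 8 ms (2 ms sampling)
dt = 0.002
t = np.arange(0, 0.2, dt)
pilot = np.exp(-((t - 0.100) / 0.01) ** 2)
trace = np.exp(-((t - 0.108) / 0.01) ** 2)   # arrives 8 ms later
print(residual_static_shift(trace, pilot, dt) * 1000.0)  # shift in ms
```

The estimated shift (here 8 ms) is the amount subtracted from the trace's times so that it aligns with the pilot.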
An effective method must be able to denoise the seismic signal without losing useful data, and must function properly in the presence of random noise. In the frequency domain it is possible to separate the noise from the main data, so denoising in the frequency domain can be useful. Besides, transform-domain methods are data-driven and require no information about the subsurface. Frequency-domain methods generally use the Fourier transform, which is time-consuming and has certain limitations; wavelet-transform methods generally provide a faster procedure. We have found that this type of wavelet transform can provide a data-oriented method for analyzing and synthesizing data according to the oscillatory behavior of the signal. The Tunable Q Factor Discrete Wavelet Transform (TQWT) is a new method that provides a reliable framework for residual static correction. In this transform, the quality factor (Q), which relates to the particular oscillatory behavior of the data, can be adjusted by the user, and this characteristic leads to a good correspondence with the seismic signal. The Q factor of an oscillatory pulse is the ratio of its center frequency to its bandwidth. The TQWT is implemented with a two-channel filter bank. The use of a low-pass filter eliminates the high-frequency components, which are the effect of the residual statics. After filtering, the data are smoother; the amount of correction gives the time shift for the residual static correction. This time difference must be applied to all traces. Applying this method to synthetic and real data shows a good correction of the residual statics.

The association of copper mineralization with magnetic data in the Saunajil area and the identification of copper mineralization areas by means of modeling and interpretation of these data
https://jesphys.ut.ac.ir/article_81526.html
The increasing demand for raw materials and energy resources has led to fast growth in geophysical studies. Geophysical methods vary according to the properties of the minerals and the geological conditions. Among these methods, magnetometry-based methods are able to explore magnetic mineralization or rocks with stronger magnetic properties; in this method, variations of the Earth's magnetic field are measured. Sonajeel is located 17 kilometers from Harris, East Azarbaijan province. The main rock units in this area, from oldest to youngest, are: volcanic and volcanoclastic rocks, the Sonajeel porphyry stock, the Incheh granitoid stock, and the Okuzdaghi volcanic rocks. In this study, the magnetometric method is used as an indirect method for the identification of copper ore. Based on the magnetometric method, information can be obtained about the gradient, depth, shape, and extension of the source of anomalies. There are several examples of the use of this method (especially airborne magnetometry) to explore for copper deposits, including the copper project in the Cadia region of Australia, as well as the use of magnetic exploration for copper and gold mineralization in the Bashmaq Hashtrood polymetal exploration area. For this aim, magnetic data were acquired along 19 lines of 1000 m length with a 20 m station spacing. The distance between lines is 50 m (except Line 19, located 30 m from Line 18), so the survey area is about 2 km2. After applying corrections (diurnal and IGRF), the data were processed by applying filters: reduction to the pole (RTP), to remove the effect of the inclination angle so that the anomaly is placed symmetrically over the causative mass; upward continuation, to study the mineralization at depth; and vertical derivatives and the analytic signal, to estimate the anomaly boundaries. 3D modeling of the data was done with the MAG3D software.
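The upward-continuation step above is, in the standard wavenumber-domain formulation, multiplication of the anomaly's 2-D spectrum by exp(-|k| h). A minimal sketch (the grid below is random stand-in data, not the survey's):

```python
import numpy as np

def upward_continue(grid, dx, dy, h):
    """Upward-continue a gridded magnetic anomaly to height h (same length
    units as dx, dy) by multiplying its 2-D spectrum by exp(-|k| h)."""
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)   # radial wavenumber
    spec = np.fft.fft2(grid)
    return np.real(np.fft.ifft2(spec * np.exp(-k * h)))

# Continuation attenuates short-wavelength (shallow) anomalies, leaving
# the smooth response of deeper sources.
rng = np.random.default_rng(0)
anomaly = rng.normal(0.0, 10.0, (64, 64))              # noisy stand-in grid, nT
smoothed = upward_continue(anomaly, 20.0, 50.0, 40.0)  # 20 m points, 50 m lines, 40 m up
print(anomaly.std() > smoothed.std())                  # continuation reduces variance
```

Continuing to successively greater heights (20, 40, 80 m, as in the study) progressively isolates the deeper part of the source.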
The results indicate mineralization in the north and from the north-east to the south-east of the study area. Upward continuation is applied to the data at heights of 20, 40, and 80 meters; the resulting maps show the anomaly's root in the southeast of the region. When these images are compared with the 3D model, the mineralization in this area is scattered but covers a large range. Also, according to the results of the three-dimensional modeling, the magnetite density in the north and northeast of the region is higher than in the south and southeast. The magnetic susceptibility contrast in the northern region is high from a depth of 100 m to 270 m, but in the eastern and southeastern parts of the region high magnetic susceptibility appears only from depths greater than 100 m. Hence, it can be concluded that in the northern parts the potassic alteration is closer to the surface, and potassic alteration is a favorable setting for copper and magnetite mineralization. By comparing and interpreting the results and assessing them against the geological data, the probability of magnetite and, consequently, copper mineralization in the Sonajeel area is strongly supported.

Estimation of average shear (V_sz) and compressional (V_pz) wave velocities using the wavelength-depth relation obtained from surface wave analysis
https://jesphys.ut.ac.ir/article_79635.html
The shear wave velocity (V_s) and its average based on travel time from the surface to a depth of 30 m, known as V_s30, are often used in engineering projects to determine soil parameters, evaluate the dynamic properties of the soil, and classify it. This quantity is directly related to an important property of soil and rock, i.e., their shear strength. The average shear wave velocity is used in geotechnics to assess soil liquefaction, and in earthquake engineering to determine the soil period, the site amplification coefficient, and the attenuation. Usually, the average shear wave velocity is obtained from a shear wave refraction survey, PS logging, or a shear wave velocity profile obtained by inversion of the experimental dispersion curve of surface waves. Surface wave analysis is one of the methods for estimating the shear wave velocity profile, but inverting the dispersion curve is a time-consuming part of this process, and the inverse problem has a non-unique solution. This becomes more evident when the goal is to determine a two- or three-dimensional shear wave velocity model. This study provides a method to estimate the average shear wave velocity (V_sz) as well as the average compressional wave velocity (V_pz) directly from the dispersion curves of surface waves, without the need to invert the dispersion curves. For this purpose, we exploit the relation between surface wave wavelength and investigation depth. Estimating the wavelength-depth relationship requires access to a shear wave velocity model (a reference model) in the study area, which can be obtained from well data, refraction seismic profiles, or by inverting one of the experimental surface wave dispersion curves. The V_sz is then estimated directly from the dispersion curve using the wavelength-depth relationship.
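The travel-time average behind V_s30 (and the V_sz targeted here) is V_sz = z / Σ(h_i / V_i), i.e., total depth divided by total vertical travel time. A minimal sketch with a hypothetical three-layer profile:

```python
def time_averaged_velocity(thicknesses, velocities):
    """Time-averaged velocity down to depth z = sum(thicknesses):
    V_sz = z / sum(h_i / V_i), the travel-time average used for V_s30."""
    z = sum(thicknesses)
    travel_time = sum(h / v for h, v in zip(thicknesses, velocities))
    return z / travel_time

# Hypothetical 3-layer profile down to 30 m
h = [5.0, 10.0, 15.0]        # layer thicknesses, m
vs = [150.0, 300.0, 600.0]   # shear-wave velocities, m/s
print(time_averaged_velocity(h, vs))  # ~327.3 m/s
```

Note that the travel-time average is dominated by the slowest layers, which is why V_s30 is sensitive to soft near-surface soils.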
In addition, due to the dependence of V_pz on Poisson's ratio and the sensitivity of the estimated wavelength-depth relationship to this ratio, we estimate the Poisson's ratio profile and the average compressional velocity (V_pz) for the study area from the V_sz. For a given range of Poisson's ratio values, theoretical dispersion curves of synthetic earth models are determined by forward modeling. Then, using these dispersion curves and the estimated average shear wave velocity of the model, the wavelength-depth relationship corresponding to each Poisson's ratio is determined. In the next step, by comparing the experimental and estimated wavelength-depth relationships, one can estimate the Poisson's ratio at each depth. The average compressional wave velocity (V_pz) is then estimated using the V_sz and the Poisson's ratios. We evaluated the performance of the proposed method by applying it to both a real MASW seismic data set from the USA and synthetic seismic data. The synthetic data were collected over a synthetic earth model and showed that the average shear and compressional wave velocities are estimated with an uncertainty of less than 10% in a layered earth model with very large lateral variations in shear and compressional wave velocities. According to the results, the proposed method can exploit the non-destructive advantages of the surface wave method in engineering, geotechnical, and earthquake engineering projects to obtain the average shear wave velocity.

Moho Topography Estimation using Interactive Forward Modeling of Gravity Data
https://jesphys.ut.ac.ir/article_79567.html
The Moho discontinuity is the boundary between the crust and the upper mantle, marked by changes in seismic velocity, density, chemical structure, and composition. Estimating the Moho depth and studying its lateral changes is one of the important goals of geophysical studies. The current study aims to estimate the depth and topography of the Moho discontinuity in the southwestern part of the Baltic Sea, including parts of the Central European Basin System, the Trans-European Suture Zone, the Caledonian crustal suture, and the Ringkobing-Fyn High. This area has been one of the most attractive regions for geoscientists in recent decades due to its complicated geological structures, caused by different tectonic events. For this purpose, a three-dimensional model of the crustal structures based on forward modeling of gravity data in the study area is presented. Previous seismic and non-seismic results have been used to constrain the model and reduce its degrees of freedom. This model includes the sedimentary sequences, crustal thickness, Moho topography, and the extent of the high-velocity lower crust in the region, and shows the tectonic structures of the study area. This study used a combination of marine, land, and EGM2008 gravity data and modeled them with IGMAS+, the Interactive Gravity and Magnetic Application System. The interactive modeling program allows the user to change the geometry, as well as the density and susceptibility, of the initial model and observe the results quickly during processing. In the software, the model is kept manageable by eliminating unnecessary detail and dividing the whole model into vertical sections. Our initial model consists of three main layers: sediments, crust, and upper mantle. The sedimentary layer is divided into two major parts, pre-Permian and post-Carboniferous. Also, the crustal layer is divided into the upper crust and the high-density lower crust.
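As a back-of-the-envelope check on the gravity signal of Moho relief (not the IGMAS+ forward modeling itself), the infinite Bouguer slab formula Δg = 2πGΔρΔh gives the expected anomaly scale; the 400 kg/m³ crust-mantle density contrast below is an assumed illustrative value:

```python
import math

def slab_gravity_mgal(delta_rho, delta_h):
    """Bouguer slab approximation for the gravity effect (mGal) of Moho
    relief delta_h (m) with density contrast delta_rho (kg/m^3):
    delta_g = 2 * pi * G * delta_rho * delta_h."""
    G = 6.674e-11                                   # m^3 kg^-1 s^-2
    return 2 * math.pi * G * delta_rho * delta_h * 1e5   # 1 mGal = 1e-5 m/s^2

# A 1 km Moho uplift with an assumed 400 kg/m^3 contrast:
print(round(slab_gravity_mgal(400.0, 1000.0), 2))   # ~17 mGal
```

This shows why kilometre-scale Moho topography produces anomalies of tens of mGal, far above the 1.12 mGal residual reported for the fitted model.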
Besides, the upper crust is composed of the upper crust of Baltica and the upper crust of Avalonia. The last layer of the model is part of the upper mantle. The model space consists of 16 vertical planes stretching 385 kilometers east-west, with an equal spacing of 15 kilometers, covering the entire study area. The initial model was developed based on seismic sections and previous models, and it was improved using interactive forward modeling of gravity data. The result shows a good agreement between the measured and modeled Bouguer anomaly, and the root mean square error of the model is 1.12 mGal. The model correlates clearly with the major tectonic units. It indicates that the Caledonian collision, which resulted in the amalgamation of Baltica and Avalonia, is the most prominent tectonic event in the area, and the Caledonian crustal suture between them is interpreted from changes in physical parameters at crustal levels. There is a relatively thick crystalline crust in the area, and the depth of the Moho discontinuity varies from 26 to 42 km. The results also indicate that the transition from the Paleozoic crust of the Central European Basin to the Precambrian crust of the East European Craton occurs within the Tornquist Zone.

Evaluation of the Precise Point Positioning method with different combinations of dual frequencies of Galileo and BeiDou using the PPPteh software
https://jesphys.ut.ac.ir/article_79569.html
Due to advances in global navigation satellite systems, it has become possible for satellites to transmit on several frequencies. For this reason, different combinations of these frequencies can be considered to form ionosphere-free code and phase observations. In this study, the aim is to evaluate the Precise Point Positioning (PPP) method using combinations of different frequencies. For this purpose, the PPPteh software, written by the authors in MATLAB, is used. PPPteh is able to process observations from the four satellite systems GPS, GLONASS, BeiDou, and Galileo to perform precise point positioning. The software implements all possible combinations for forming dual-frequency ionosphere-free observations from the different frequencies: three modes of combining frequencies for GPS, ten modes for Galileo, and three modes for BeiDou. To evaluate the precise point positioning method, four steps have been considered in terms of position accuracy and convergence time: 1) first, use dual-frequency GPS observations to determine the position; 2) combine the GPS and Galileo systems and select the best combination model; 3) combine the GPS and BeiDou systems and select the best combination; and 4) finally, determine the position using all three systems with the best frequency models and compare the results with each other. Based on the results for the Galileo and BeiDou navigation satellite systems, one combination for each system was selected as the best for use in precise point positioning.
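The dual-frequency ionosphere-free combination referred to above has the standard form P_IF = (f1²·P1 − f2²·P2) / (f1² − f2²). A minimal sketch with synthetic observables (not PPPteh code), showing that the first-order ionospheric delay, which scales as 1/f², cancels:

```python
def ionosphere_free(p1, p2, f1, f2):
    """Dual-frequency ionosphere-free combination:
    P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2).
    The first-order ionospheric delay (proportional to 1/f^2) cancels."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# GPS L1/L2 carrier frequencies (Hz); rho = geometric range, I = iono delay on L1 (m)
f1, f2 = 1575.42e6, 1227.60e6
rho, iono_l1 = 22_000_000.0, 5.0
p1 = rho + iono_l1
p2 = rho + iono_l1 * (f1 / f2) ** 2     # delay on L2 scales with (f1/f2)^2
print(ionosphere_free(p1, p2, f1, f2) - rho)   # residual iono effect ~ 0
```

The price of the combination is amplified observation noise, which is one reason the choice among the candidate signal pairs affects accuracy and convergence time.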
In precise point positioning, the addition of BeiDou observations reduced the convergence time and, in most cases, increased the three-dimensional accuracy of the coordinate components, with the best signal combination outperforming the other two. The same process was followed for the Galileo observations, according to which using Galileo observations in combination with GPS observations increased accuracy and reduced convergence time, with the best signal combination outperforming the other three. Finally, by combining all three systems with the frequency models selected in the earlier stages, it was concluded that the combination of the three satellite navigation systems GPS, Galileo, and BeiDou gives a significant improvement, both in reducing the convergence time and in increasing the three-dimensional accuracy of the coordinates. Also, the error (the difference between the estimated coordinates and the final station coordinates from the IGS file) when using the Galileo and BeiDou systems in combination with GPS is noticeably different, both in convergence and in coordinate accuracy. Combining all three systems increases accuracy and reduces convergence time, but in a dual combination with GPS, the use of Galileo satellite observations gives higher accuracy as well as a shorter convergence time. Therefore, choosing the right signals to form ionosphere-free observations for precise point positioning, and combining the different observations with the correct weight for each signal in combination with GPS, can meet the user's needs in terms of accuracy and convergence.

Study and validation of LF/VLF radio signals received at the Research Center for Earthquake Prediction (RCEP)
https://jesphys.ut.ac.ir/article_81512.html
For a long time, two very important issues have been raised for humans in relation to the phenomenon of earthquakes: 1) predicting the exact time of an earthquake, and 2) controlling the conditions caused by an earthquake. In many advanced societies, valuable work has been done on the second, which has reduced the loss of life and property, while the actions taken on the first have brought us closer to our goal. These measures include studying the tectonic activity of the Earth's crustal plates, investigating changes in wave velocity (P, S) in an area, installing sensors on the ocean floor, monitoring active faults by satellite and using the Doppler frequency shift in satellites, studying the behavior of some animals, etc. These studies have also pointed out that some of these methods were able to warn us even 15 minutes before an earthquake. However, we are looking for a method that, in addition to being efficient, accurate, and comprehensive, can cover a wider area and give us more lead time before the main earthquake. The study of changes in the characteristics of VLF/LF waves, such as the signal amplitude, the signal phase, and the temporal and spatial behavior of the signal along the transmitter-receiver path, has been pursued most seriously by Hayakawa et al. since 1995. Since most studies have used VLF wave propagation, and LF waves have been used less for investigation and precursor studies, we analyze the first VLF/LF signals received at the Tehran station in 2019 and also compare them with diagrams of the daily electron density variation in time, obtained along the signal propagation path from the empirical IRI model for each month of the year. The proposed approach in this paper allows us to examine the ability of the IRI model to explain the temporal evolution of the received signal, and offers a comprehensive way to improve IRI estimates of the current state of the ionosphere.
This technique is shown not only to validate the experimental observations of the LF and VLF signals recorded at the Tehran station, but also to propose a new approach for improving the estimate of the current state of the ionosphere using the IRI model. More observations could lead to a better estimation of the averaged ionospheric densities along the signal propagation path at the morning and evening terminator times. By examining the changes in the amplitude and phase of the signal, we examine the charge density and the condition of the lower layer of the ionosphere (the D layer) along the propagation path of the waves. We are looking for signs that could serve as precursors of future earthquakes. This approach could be used as an indicator of pre-seismic activity produced through the well-known Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) process. Such a methodology could lead to a solid, physics-based approach for earthquake prediction in Iran. Therefore, this study investigated a new technique for ionospheric remote sensing as well as a new approach for earthquake prediction in Iran.

Investigation of the near-field and directivity effects in earthquake hazard analysis studies: a case study of the Doroud fault
https://jesphys.ut.ac.ir/article_79634.html
In this study, considering the location of Doroud city near the active strike-slip Doroud fault, the near-field and rupture-directivity effects have been investigated in seismic hazard analysis studies. The Doroud fault is located near the cities of Doroud and Boroujerd, in western Iran. Doroud and Boroujerd are among the important cities of Iran for the agricultural industry, and the pristine nature of these areas has always attracted tourists. The micro-earthquakes recorded in this area indicate the activity of the Doroud fault system. In order to prevent possible earthquake damage in this area, seismicity studies of the ground acceleration that account for site effects can be useful for strengthening civil structures. Somerville et al. (1997) and Abrahamson (2000) were among the first researchers to establish studies on this basis, and the relationships and methods they proposed are the most widely accepted today for applying the directivity effect. These researchers considered the rupture angle and the ratio of ruptured fault length as the parameters governing the directivity effect, and examined the results for the resulting acceleration spectrum. The directivity effect can lead to the formation of long-period pulses in the ground motion, and some proposed models (e.g., Somerville et al., 1997) can quantify this effect in earthquake hazard analysis with deterministic and probabilistic approaches (Abrahamson, 2000). In this study, the seismic hazard from the Doroud fault has been investigated, compared, and evaluated for different spectral periods and return periods, both with and without the directivity effect.
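The return periods used in such probabilistic analyses map to exceedance probabilities through the standard Poisson assumption, P = 1 − exp(−t/T); a short illustration of the design levels involved:

```python
import math

def exceedance_probability(return_period_years, exposure_years):
    """Poisson-model probability of at least one exceedance during an
    exposure time t for a hazard level with return period T:
    P = 1 - exp(-t / T)."""
    return 1.0 - math.exp(-exposure_years / return_period_years)

# Common design levels over a 50-year exposure:
print(round(100 * exceedance_probability(475.0, 50.0), 1))   # ~10% in 50 yr
print(round(100 * exceedance_probability(2475.0, 50.0), 1))  # ~2% in 50 yr
```

These are the familiar "10% in 50 years" (475-year) and "2% in 50 years" (2475-year) hazard levels.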
Near-field and directivity effects can lead to long-period pulses in the ground motion, which matter for structures with long periods, such as bridges and dams, near faults with high activity rates. Including directivity effects in the attenuation relationships, for both deterministic and probabilistic approaches, can have a great impact on the results of a realistic seismic hazard analysis. The Doroud fault is one of the most important faults in Iran, with a history of large earthquakes in the early instrumental period; given its strike-slip mechanism, it can intensify the strong-motion parameters at long periods during earthquakes in the city of Doroud and consequently cause serious damage to long-period structures in this area. In this study, the strong ground motion parameters in probabilistic seismic hazard analysis have been estimated for the Doroud fault region with the directivity effect applied. In addition, by examining the disaggregation of the earthquake hazard, the effect of directivity on the contributions of distance and magnitude to the estimated strong-motion parameter has been evaluated. For short and long return periods, the effect of directivity on the strong motion at different spectral periods has been estimated and evaluated with the method of Somerville et al. and Abrahamson. The estimated acceleration is calculated and evaluated for three return periods (50, 475, and 2475 years) and at spectral periods of 0.75, 1, 2, 3, and 4 s. The strong-motion parameter increased with both the return period and the spectral period, such that the largest increase in acceleration due to the directivity effect (17.16%) was calculated for the 2475-year return period and the 4-second spectral period.

Efficiency of the adaptive neuro-fuzzy inference system in tropospheric slant water vapor modeling
https://jesphys.ut.ac.ir/article_79588.html
The passage of satellite signals through the heterogeneous and variable troposphere introduces a significant delay in the propagation of these signals. This effect is commonly known as the tropospheric delay. It can be divided into wet and dry components. The dry component is usually modeled using surface pressure measurements. Unlike the dry component, the wet component of tropospheric refraction cannot be modeled from pressure measurements alone: it depends on the water vapor (WV) and moisture content of the troposphere. The WV is one of the key parameters in climate system analysis and a major factor in atmospheric events. Using the observations of local and regional GNSS networks, it is possible to estimate the slant tropospheric delay (STD) and subsequently the slant wet delay (SWD) for each line of sight between a receiver and a satellite. The SWD observations are used to model the horizontal and vertical WV variations in the atmosphere above the study network. This is done with a tomography technique. In tomography, the horizontal variations of the tropospheric wet refractivity are modeled with a second-degree polynomial in latitude and longitude, and the vertical variations are modeled as discrete layers of constant height. The main innovation of this paper is the estimation of the tropospheric parameters for each line of sight between receiver and satellite by an adaptive neuro-fuzzy inference system (ANFIS). The SWD obtained from the GPS observations for the different signals at each station is compared with the SWD generated by the ANFIS (SWD_GPS - SWD_ANFIS). The square of the difference between these two values is used as the cost function of the ANFIS. By evaluating the cost function at each step, the weights of the ANFIS network are corrected by the back-propagation (BP) method.
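A minimal sketch of the kind of comparison metrics used to evaluate such a model against reference values (RMSE, mean relative error, Pearson correlation); the two short profiles below are invented numbers, not the study's data:

```python
import numpy as np

def evaluation_metrics(observed, modeled):
    """RMSE, mean relative error (%) and Pearson correlation coefficient
    for comparing modeled values against reference observations."""
    observed = np.asarray(observed, float)
    modeled = np.asarray(modeled, float)
    rmse = np.sqrt(np.mean((observed - modeled) ** 2))
    rel_err = 100.0 * np.mean(np.abs(observed - modeled) / np.abs(observed))
    corr = np.corrcoef(observed, modeled)[0, 1]
    return rmse, rel_err, corr

# Hypothetical wet-refractivity profiles (mm/km)
obs = [60.0, 55.0, 48.0, 30.0, 12.0]
mod = [58.5, 56.0, 47.0, 31.5, 11.0]
rmse, rel, r = evaluation_metrics(obs, mod)
print(round(rmse, 3), round(rel, 2), round(r, 4))
```

The squared-difference term inside the RMSE is the same quantity used as the ANFIS cost function during training.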
In the next step, using the estimated wet refractivity, the slant water vapor (SWV) is calculated. For evaluation, GPS observations from 27-31 October 2011 and Tabriz radiosonde observations are used. For a more detailed evaluation, two test stations are selected and the ANFIS zenith wet delays (ZWD_ANFIS) are compared with the ZWD_GPS; the observations of the test stations are not used in the modeling step. To further examine the accuracy of the proposed method, the results of this study are compared with the results of the voxel-based tomography (TomoVoxel) method and of troposphere tomography using an artificial neural network (TomoANN). The relative error, root mean square error (RMSE), standard deviation, and correlation coefficient were used to evaluate the results. At the Tabriz radiosonde station, the correlation coefficients for the ANFIS, TomoVoxel, and TomoANN models are 0.9131, 0.8863, and 0.9006, respectively. The minimum relative errors for TomoANFIS, TomoANN, and TomoVoxel are 8.31%, 8.55%, and 8.71%, respectively. Also, the maximum RMSEs for the three models are 0.9718, 1.0281, and 1.2346 mm/km, respectively. The results of this paper indicate the very high capability of the TomoANFIS model in capturing the temporal and spatial variations of SWV. This method can be used to study the behavior of the atmosphere in real-time and near-real-time applications.

Determining the elastic thickness of the lithosphere in the Zagros Mountains using the admittance function
https://jesphys.ut.ac.ir/article_79582.html
The Zagros orogen is one of the most active orogenic belts, extending approximately 2000 kilometers from the Anatolian fault in eastern Turkey to the Minab fault in southern Iran. Given the importance of this region and the essential role of the elastic thickness in controlling the rate of deformation under applied loads, a determination of Te in the Zagros fold-and-thrust belt has been conducted. The lithosphere's elastic thickness (Te) is a convenient measure of the flexural rigidity, which is defined as the resistance to bending under applied loads. To determine the elastic thickness of the lithosphere, the spectral admittance function is applied: we use the load-deconvolution method on the admittance between free-air gravity and topography data to estimate Te. Free-air anomalies with a five-arc-minute resolution are utilized in this study. In flexural isostatic studies, the gravity and topography data are compared with theoretical models to estimate several parameters of the lithosphere. In the simplest model, a plate is flexed by a surface load, with the magnitude of the resulting deflection governed by Te. Using random fractal surfaces as the initial surface and subsurface loads applied to the lithosphere, the lithosphere is modeled and the post-flexural gravity and topography are determined. Based on these new fields, the predicted admittance function is computed. Finally, the best-fitting Te is the one that minimizes the misfit between the observed and predicted functions. Additionally, a misfit weighted by the jackknife error is applied in estimating the observed admittance. The accuracy of the method is checked through synthetic modeling: two fractal surfaces are used as the initial surface and subsurface loads applied to the lithosphere, and after calculating the corresponding gravity and topography data by the load-deconvolution method, the observed and predicted admittances are estimated.
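The misfit-minimization step can be sketched as a simple grid search over candidate Te values. The exponential "forward model" below is a toy stand-in for the real load-deconvolution prediction and is used only to show the mechanics:

```python
import numpy as np

def best_fit_te(te_grid, observed, predict, weights):
    """Grid search for the elastic thickness Te (km) that minimizes the
    weighted squared misfit between observed and predicted admittance."""
    misfits = [np.sum(weights * (observed - predict(te)) ** 2) for te in te_grid]
    return te_grid[int(np.argmin(misfits))]

# Toy forward model: admittance amplitude decaying faster for larger Te
k = np.linspace(0.01, 0.2, 50)                   # wavenumber (rad/km)
predict = lambda te: -100.0 * np.exp(-k * te)    # stand-in predicted admittance (mGal/km)
observed = predict(37.0)                         # pretend the "true" Te is 37 km
w = np.ones_like(k)                              # real case: jackknife-error weights
print(best_fit_te(np.arange(20.0, 60.0, 1.0), observed, predict, w))
```

In the actual procedure the weights come from the jackknife error of the observed admittance, so poorly constrained wavenumber bands contribute less to the misfit.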
The best-fitting Te is obtained by minimizing the misfit between the observed and predicted functions. After confirming the accuracy of the method for Te determination, the technique is applied to the real data acquired from the NCC as follows. We consider a three-layered crust in the lithosphere modeling, in which the internal load is applied at the middle crust. To model the lithosphere, the global CRUST1.0 model is used, treating the crust as three layers. The 2D map of Te variations in the target area is obtained by applying the load deconvolution of the admittance function between free-air gravity and topography data. High-precision ground gravity data, which are more accurate than satellite data, allow us to detect more details of the Te variations in the region. Based on the obtained results, the estimated range of Te in the survey region can be considered low to intermediate. This range is in good accordance with the area's geological background, as it is regarded as a young, active orogenic system. The Te range, and hence the lithosphere's predicted resistance to deformation, is supported by previous studies using different geophysical and seismological methods. The mean value of Te in the area is 37±2 km. The maximum value is detected in the Sanandaj-Sirjan zone. The overall trend of Te follows the geological background of the region. Additionally, the estimated trend of Te, and of the strength against the applied load and deformation, is in good agreement with previous geophysical and seismological studies conducted in the region.

Doppler Oscillations in the Solar Spicules based on IRIS data
https://jesphys.ut.ac.ir/article_81536.html
In this research, we study the oscillatory properties of solar spicules in the line of sight using spectral measurements recorded by the Interface Region Imaging Spectrograph (IRIS) on August 17, 2014. The primary purpose of IRIS is the observation of the movement of material, fluctuations, energy absorption, and heat production in the lesser-known region of the solar atmosphere, which affects the behavior of the Earth's atmosphere, the performance of satellites, power transmission networks, and radio communications. The transmission of energy through waves and oscillations can play an important role in understanding solar dynamics and in addressing the problem of the sudden rise of the solar atmosphere's temperature to several million kelvin from the transition region to the solar corona. The source of the energy required to heat the solar coronal plasma to a temperature of one million kelvin above the Sun's dynamic photosphere is a matter of debate in solar physics. One mechanism of energy transfer is the propagation of magnetohydrodynamic waves. These waves can be generated in photospheric magnetic flux tubes by granular shock motions, then propagate along the chromospheric magnetic field and penetrate the corona to transfer energy in the form of heat. Therefore, observations of oscillatory motions in the chromosphere are a crucial test for theories of coronal heating. Quasi-periodic fluctuations in spicules appear mainly as displacements of these structures in imaging observations or as periodic shifts in spectral lines. We use IRIS to measure the spectrum around a narrow slit. By fitting Gaussian profiles to the Si IV line, we can calculate Doppler velocity shifts up to an altitude of 4200 km along the spicules. The Doppler velocities from the edge of the Sun to an altitude of 4200 km range from 12 to 15 km/s (blueshift) and from 10 to 15 km/s (redshift).
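The conversion from a fitted line center to a line-of-sight Doppler velocity can be sketched as below. This is an illustration, not the authors' pipeline: the rest wavelength is an assumed value for one of the IRIS Si IV lines, and the fit exploits the fact that the logarithm of a Gaussian profile is a parabola, so a polynomial fit to the line core yields the center.

```python
import numpy as np

C_KMS = 299792.458     # speed of light, km/s
LAMBDA0 = 1393.76      # assumed Si IV rest wavelength, Angstrom

def doppler_velocity(wl, inten):
    """Estimate the Doppler velocity (km/s, positive = redshift) of one
    spectral profile via a Gaussian fit around the line core."""
    y = inten - inten.min() + 1e-12       # crude constant-background removal
    core = y > 0.5 * y.max()              # fit only the line core
    # ln I = ln A - ((x - c)/w)^2 is quadratic in x; the vertex gives c
    a, b, _ = np.polyfit(wl[core], np.log(y[core]), 2)
    center = -b / (2.0 * a)
    return C_KMS * (center - LAMBDA0) / LAMBDA0
```

Applied profile by profile along the slit, this yields the Doppler time series whose periods are then extracted.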
To determine the dominant periods of the Doppler shift oscillations, the maximum-intensity positions of 150 spectral profiles are collected and assembled into a set of temporal signals. Any physical quantity that changes with an independent parameter or variable is called a signal: if the parameter is time, it is a temporal signal, and if it is position, it is a spatial signal. Such signals contain information about their sources, for example their period, so by processing signals the behavior of the sources can be studied and predicted. After processing the temporal signals, we apply wavelet analysis, a useful method for the simultaneous diagnosis of power in the time and frequency domains. The wavelet analysis revealed Doppler shift fluctuations with dominant periods of 3, 5, and 8 minutes. According to the results of this study, it is suggested that the main contribution to the Doppler shift fluctuations in solar spicules, observed perpendicular to the spicule axis, comes from kink and Alfvén waves. These waves can play an essential role in heating the solar corona to millions of kelvin.
An Analytical solution to two-dimensional unsteady pollutant transport equation with arbitrary initial condition and source term in the open channels
https://jesphys.ut.ac.ir/article_79571.html
Pollutant dispersion in the environment is one of the most important challenges in the world. The governing equation of this phenomenon is the Advection-Dispersion-Reaction Equation (ADRE), which has wide applications in water and atmospheric science, heat transfer, and engineering. It is a parabolic partial differential equation based on Fick's first law and the conservation equation. Mathematical models of pollutant transport in rivers are therefore vital in applications; analytical solutions are useful for understanding the contaminant distribution, estimating transport parameters, and verifying numerical models. One of the powerful methods for solving nonhomogeneous partial differential equations analytically in one- or multi-dimensional domains is the Generalized Integral Transform Technique (GITT). This method is based on an eigenvalue problem and an integral transform that converts the governing partial differential equation into a system of Ordinary Differential Equations (ODEs). In this research, an analytical solution to the two-dimensional pollutant transport equation with arbitrary initial condition and source term was obtained for a finite river domain using GITT. The equation parameters, such as velocity, dispersion, and reaction factor, were considered constant, and the boundary conditions were assumed homogeneous. The source term was taken as point pollutant sources with arbitrary emission time patterns. To derive the analytical solution, the first step is choosing an appropriate eigenvalue problem: it must be based on a self-adjoint operator and be analytically solvable. Next, the eigenfunction set was extracted by solving the eigenvalue problem with homogeneous boundary conditions using the separation of variables method. Then the forward integral transform and the inverse transform were defined. By implementing the transform and using the orthogonality property, the ordinary differential equation system was obtained.
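The core GITT machinery, a forward transform onto an orthonormal eigenfunction basis and its inversion, can be sketched in one dimension. This is a simplified illustration with a Dirichlet sine basis, not the authors' two-dimensional implementation:

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def trap(g):
    # simple trapezoidal quadrature over the grid x
    return float(np.sum((g[1:] + g[:-1]) * dx / 2.0))

def phi(n):
    # eigenfunctions of d^2/dx^2 with homogeneous Dirichlet boundaries,
    # normalized so that the integral of phi_n^2 over [0, L] equals 1
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def forward_transform(f, n_modes):
    # fbar_n = integral of phi_n(x) * f(x) over [0, L]
    return [trap(phi(n) * f) for n in range(1, n_modes + 1)]

def inverse_transform(fbar):
    # f(x) = sum over n of phi_n(x) * fbar_n  (the GITT inversion formula)
    return sum(c * phi(n) for n, c in enumerate(fbar, start=1))
```

In the full method, the transformed coefficients of the concentration field evolve according to the ODE system produced by applying the same transform to the ADRE, and the inverse transform recovers the concentration.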
The initial condition was transformed using the forward transform, the ODE system was solved numerically, and the transformed concentration function was obtained. Finally, the inverse transform was applied and the analytical solution was extracted. To evaluate the derived solution, its results were compared with the Green's Function Method (GFM) solution through two hypothetical examples. In the first example, the initial condition was an impulse at a specific point in the domain, and one point source with an exponential time pattern was considered. In the second example, the initial condition was similar to the first, and two point sources with irregular time patterns were assumed. The final results are presented as concentration contours at different times in the velocity field. The results show the conformity of the proposed solution with the GFM solution and indicate that the performance of the proposed solution is satisfactory and accurate. The concentration gradient decreases over time, and the pollution plume spreads and finally exits the domain in the direction of the resultant velocity due to the advection and dispersion processes. The presented solutions have various applications; they can be used instead of numerical models under constant-parameter conditions. The analytical solution is an exact, fast, simple, and flexible tool that is stable for all conditions; with this method, the difficulties associated with numerical methods, such as stability and accuracy, do not arise. Also, because of the high flexibility of the present analytical solution, it is possible to implement arbitrary initial conditions and multiple point sources with more complex emission time patterns.
Thus it can be used as a benchmark solution for validating numerical solutions in the two-dimensional case.
Evaluating the performance of a planetary boundary layer scheme by using GABLS1 experiment in a single-column version of the global model developed based on potential vorticity
https://jesphys.ut.ac.ir/article_79587.html
Representing boundary layer processes is crucial for simulating atmospheric phenomena in operational hydrostatic weather forecast models. Moreover, evaluating the performance of different physical parameterizations across numerical models is an essential subject in its own right. This paper presents an objective assessment of a planetary boundary layer scheme based on turbulent kinetic energy in a single-column version of the innovative atmospheric general circulation model developed at the University of Tehran based on potential vorticity, called UTGAM. Single-column models are a complementary tool to atmospheric general circulation models, providing a simple framework to investigate the fidelity of the simulated physical processes. Reliable parameterization of boundary layer processes has a significant impact on weather forecasts. Most hydrostatic models have deficiencies in representing these unresolved processes, especially in stably stratified conditions, and this problem seems likely to persist for the foreseeable future. Here we have utilized the first GABLS intercomparison experiment setup as a simple tool to evaluate the performance of the diffusion scheme in UTGAM. Two different single-column grid staggerings, sigma-theta and sigma-pressure, each combined with 14 and 33 vertical levels below 3 km height for the low- and high-resolution simulations respectively, have been used. The GABLS1 LES results have been used as a benchmark for comparison. The boundary layer scheme explored here is the same as the one in the ECHAM model, with some simplifications; for instance, the effects of tracers have been ignored to reduce the complexity of the problem.
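A TKE-based diffusion scheme of the kind evaluated here ultimately reduces to an eddy diffusivity of the generic first-order form K = c·S·l·√e. The sketch below is schematic, with illustrative constants and a hypothetical interface, not the ECHAM formulation itself:

```python
import math

def eddy_diffusivity(tke, mixing_length, stability_fn=1.0, c_k=0.1):
    """Generic first-order TKE closure: K = c_k * S * l * sqrt(e).

    tke           -- turbulent kinetic energy e (m^2 s^-2)
    mixing_length -- master mixing length l (m)
    stability_fn  -- dimensionless stability function S (reduced in stable air)
    c_k           -- closure constant (illustrative value, not ECHAM's)
    """
    return c_k * stability_fn * mixing_length * math.sqrt(tke)
```

In stably stratified cases such as GABLS1, the stability function shrinks with increasing Richardson number; the over-mixing reported below corresponds to diffusivities that remain too large in exactly this regime.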
Results show subtle differences between the sigma-theta and sigma-pressure coordinates when the low and high vertical resolutions are compared separately, and these differences are more apparent at the lower vertical resolution. Nevertheless, the diffusion processes appear to be simulated somewhat more accurately in the high-resolution sigma-pressure vertical coordinate. The boundary layer scheme, like most of the operational models in the GABLS1 intercomparison experiment, overestimates the momentum and heat diffusion coefficients. The wind profile with height shows maxima that are higher than in the corresponding LES profile. It is inferred that the scheme mixes momentum over a deeper layer than the LES, but the simulated wind profile compares favorably with the other operational models in GABLS1. The vertical profiles of potential temperature reveal that the amount of heat mixing is not appropriate in this experiment, causing a negative bias in the lower part of the simulated boundary layer. The simulated surface friction velocities show significant differences from the LES results in all of the experiments; however, these larger values seem unlikely to have a detrimental effect on forecast scores in an operational model. Moreover, the sensitivity of the scheme to the height of the lowest full level has been partially explored: decreasing the lowest full-level height while increasing the vertical resolution has a modest influence on the simulation of the boundary layer processes. All the results confirm notable improvements from increasing the vertical resolution in both sigma-theta and sigma-pressure coordinates.
Application of Principal Component Analysis (PCA) in Fuzzy Inference System (FIS) for Time-Series Modeling of Ionosphere
https://jesphys.ut.ac.ir/article_79583.html
The ionosphere is a layer of Earth's atmosphere extending from an altitude of about 100 km to more than 1000 km. Typically, total electron content (TEC) is used to study the behavior and properties of the ionosphere. TEC is the total number of free electrons along the path between the satellite and the receiver, and it varies greatly with time and space: its temporal variations can be considered on daily, monthly, seasonal, and annual scales. Understanding these variations is crucial in space science, satellite systems, and positioning, so time-series modeling of the ionosphere is very important. Modeling these temporal variations requires many observations, and hence a model with high speed and accuracy. In this paper, a new method is presented for modeling ionosphere time series: principal component analysis (PCA) is combined with a fuzzy inference system (FIS), and the ionosphere time series are then modeled. The advantages of this combination are increased computational speed, reduced convergence time to the optimal solution, and increased accuracy of the results. With the proposed model, the ionosphere can be analyzed at shorter time resolutions. Principal component analysis is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance, and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors form an uncorrelated orthogonal basis set; note that PCA is sensitive to the relative scaling of the original variables. Fuzzy inference systems take inputs and process them based on pre-specified rules to produce the outputs.
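The PCA step described above can be sketched with a plain SVD. This is a minimal illustration of producing the uncorrelated, variance-ordered inputs that would then feed the FIS; the function name and interface are ours, not the paper's:

```python
import numpy as np

def pca(X, n_components):
    """Project observations onto the leading principal components.

    X : (n_samples, n_features) matrix, e.g. TEC time-series features
    Returns the component scores and the fraction of total variance
    each retained component explains.
    """
    Xc = X - X.mean(axis=0)                      # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T            # uncorrelated, ordered by variance
    explained = s[:n_components] ** 2 / np.sum(s ** 2)
    return scores, explained
```

Feeding the FIS a few leading scores instead of the raw correlated inputs is what shrinks the rule base and the convergence time.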
Both the inputs and outputs are real-valued, whereas the internal processing is based on fuzzy rules and fuzzy arithmetic. The FIS is the key unit of a fuzzy logic system, with decision making as its primary task; it uses "IF…THEN" rules along with the connectives "OR" and "AND" to draw the essential decision rules. To evaluate the proposed method, observations of Tehran's GNSS station in 2016 have been used. This station is one of the International GNSS Service (IGS) stations in Iran, so its observations are easily accessible and evaluated. The statistical indices dVTEC = |VTEC_GPS - VTEC_model|, the correlation coefficient, and the root mean square error (RMSE) are used to evaluate the new method. The statistical evaluations of dVTEC show that for the combined PCA-FIS model this index has a lower numerical value than for the FIS model without PCA, as well as for the global ionosphere map (GIM-TEC) and the NeQuick empirical ionosphere model. The correlation coefficients obtained are 0.890, 0.704, and 0.697 for the PCA-FIS, GIM, and NeQuick models, respectively, with respect to GPS-TEC as the reference observation. Using the combination of PCA and FIS, the convergence time to an optimal solution decreased from 205 to 159 seconds, and the RMSE of the training and testing steps was also significantly reduced. Analysis of the northern, eastern, and height components in precise point positioning (PPP) likewise shows higher accuracy for the proposed model than for the GIM and NeQuick models. The results of this paper show that PCA-FIS is a precise, accurate, and fast method for time-series modeling of TEC variations.
Identification of precipitating clouds in the south and southwest of Iran using CALIPSO and CloudSat satellite observations
https://jesphys.ut.ac.ir/article_81519.html
The main purpose of this study is to detect precipitating clouds and analyze their vertical structure in the south and southwest of Iran using CALIPSO and CloudSat satellite observations. First, precipitating samples were selected using the daily precipitation data of the synoptic stations of the study area over the statistical period 2006 to 2016. The selection of these samples is based on two parameters: the average precipitation of the system and the number of stations involved in precipitation. The average precipitation of the system was calculated as the ratio of the total precipitation of all stations in one day to the number of stations involved in precipitation on that day. In order to eliminate light precipitating samples, thresholds were set for these parameters: on at least one day of the precipitating system's activity, the number of stations involved in precipitation must be no fewer than 15 and the average precipitation of the system no less than 15 mm. Such a day is defined as the day of peak precipitation. In total, these criteria yielded 74 precipitating systems lasting from one day to one week, from which 107 days of precipitation with the above specifications were selected. In order to confirm the occurrence of precipitation at the time of the satellite overpass of the area, TRMM level 3B precipitation data were used; these data provide precipitation values at a temporal interval of 30 minutes and a spatial resolution of 0.1 by 0.1 degrees. Considering the gridded precipitation values of the peak days, three precipitating samples in three different paths, where the precipitation occurred along the satellite track, were selected for analysis of their cloud structure. Precipitation characteristics of these systems were extracted based on station and gridded precipitation values.
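The two selection thresholds above can be sketched as a simple filter. The input layout (one dict of station rainfall values per day) is a hypothetical convenience, not the study's data format:

```python
def peak_precipitation_days(daily_records, min_stations=15, min_mean_mm=15.0):
    """Return indices of days meeting both thresholds from the text:
    at least `min_stations` stations involved in precipitation, and a
    system-average precipitation (total over all stations divided by the
    number of involved stations) of at least `min_mean_mm`."""
    peaks = []
    for i, day in enumerate(daily_records):
        involved = [mm for mm in day.values() if mm > 0.0]  # stations with rain
        if len(involved) >= min_stations:
            if sum(involved) / len(involved) >= min_mean_mm:
                peaks.append(i)
    return peaks
```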
In the next stage, three features, the total attenuated backscatter at 532 nm, the depolarization ratio, and the color ratio, were prepared using CALIOP lidar level 1B data. The radar reflectivity feature was also extracted using data from the CPR sensor of CloudSat. Then, using the layers extracted from the CALIOP and CPR sensors, the clouds of these samples were compared and analyzed in terms of cloud thickness and precipitation intensity. The analysis showed that in the first sample (Path A), despite the considerable thickness of the cloud (approximately 10 km), the amount of precipitation is less than in the other two samples. The cloud of this sample differs from the other two: its layers are not sufficiently dense and integrated in the vertical direction, and the aerosol particles and ice crystals in the cloud are fewer and smaller. In the other two samples, especially in Path C, a thick, dense cloud covers the atmosphere of the region and the concentration of aerosols and ice crystals is much higher.
Numerical Modelling and Automatic Detection of submesoscale eddies in Persian Gulf Using a Vector Geometry Algorithm
https://jesphys.ut.ac.ir/article_79581.html
Nowadays, marine data comprising both observational measurements and the output of numerical models are widely available, but analyzing and processing these data is time-consuming and tedious due to the heavy volume of information. Identifying and extracting eddies is one of the most important topics in physical oceanography, and automatic eddy detection algorithms are among the most basic tools for analyzing them. The general circulation of the Persian Gulf is a cyclonic circulation affected by tide, wind stress, and thermohaline forcing. In this study, circulation in the Persian Gulf was modeled using the Mike model, based on the three-dimensional solution of the Navier-Stokes equations with the assumptions of incompressibility, the Boussinesq approximation, and hydrostatic pressure. A vector geometry algorithm was then used to detect eddies in this region. In this algorithm, four constraints are derived in conformance with the definition and characteristics of an eddy velocity field, and eddy centers are located at the points where all of the constraints are satisfied. The four constraints are: (i) along an east-west (EW) section, v has to reverse in sign across the eddy center, and its magnitude has to increase away from it; (ii) along a north-south (NS) section, u has to reverse in sign across the eddy center, and its magnitude has to increase away from it, with the same sense of rotation as for v; (iii) the velocity magnitude has a local minimum at the eddy center; and (iv) around the eddy center, the directions of the velocity vectors have to change with a constant sense of rotation. The constraints require two parameters to be specified: one for the first, second, and fourth constraints and one for the third. The first parameter, a, defines how many grid points away the increases in the magnitude of v along the EW axis and u along the NS axis are checked.
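A simplified check of these constraints at a single grid point might look like the following sketch. It is an illustration, not the study's implementation, and constraint (iv), the rotation-sense test along a closed curve, is omitted for brevity:

```python
import numpy as np

def is_eddy_center(u, v, j, i, a=3, b=2):
    """Check constraints (i)-(iii) of the vector geometry algorithm at (j, i).

    u, v : 2-D velocity component arrays (row index j ~ north, column i ~ east)
    a    : how many grid points away sign reversal and magnitude growth are checked
    b    : half-size of the box used for the velocity-magnitude minimum
    """
    # (i) v reverses sign along the EW section and grows away from the center
    if not (v[j, i - a] * v[j, i + a] < 0 and
            abs(v[j, i - a]) > abs(v[j, i - 1]) and
            abs(v[j, i + a]) > abs(v[j, i + 1])):
        return False
    # (ii) u reverses sign along the NS section and grows away from the center
    if not (u[j - a, i] * u[j + a, i] < 0 and
            abs(u[j - a, i]) > abs(u[j - 1, i]) and
            abs(u[j + a, i]) > abs(u[j + 1, i])):
        return False
    if np.sign(u[j + a, i]) != np.sign(v[j, i - a]):  # same sense of rotation
        return False
    # (iii) local minimum of speed inside a (2b+1) x (2b+1) box
    speed = np.hypot(u, v)
    box = speed[j - b:j + b + 1, i - b:i + b + 1]
    return bool(speed[j, i] == box.min())
```

Scanning every interior grid point with such a test, for each daily field, is what yields the eddy counts reported below.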
It also defines the curve around the eddy center along which the change in direction of the velocity vectors is inspected. The second parameter, b, defines the dimension (in grid points) of the area used to define the local minimum of velocity. The main data used to detect eddies are the numerical model outputs, including the velocity components, produced by the simulation with thermohaline and wind stress forcing. In total, for daily data over one year, 4308 cyclonic and 2860 anticyclonic eddies are detected at the surface, and 617 cyclonic and 329 anticyclonic eddies are found in the deepest layer, at a depth of 50 meters. The number of eddies is highest in winter and lowest in summer; the average radius of anticyclonic eddies is largest in winter, and that of cyclonic eddies is smallest in summer. Most eddies have a radius of 5-10 km and a lifespan of 3-6 days. Also, as the lifespan of eddies increases, they penetrate deeper into the water. The eddy penetration percentage, the ratio of the number of eddies in the deepest layer to those in the surface layer, is 15% for cyclonic eddies and 10% for anticyclonic eddies. This indicates that the energy loss in cyclonic eddies is less than in anticyclonic eddies, probably due to the alignment of their rotation with the overall circulation of the Persian Gulf.
Investigation of the efficiency of methods infilling missing data in relation to the precipitation parameter in the arid regions of Iran
https://jesphys.ut.ac.ir/article_81516.html
Missing data are a common issue in climate records. Precipitation is a very important part of the hydrological cycle, and meteorological and hydrological studies of watersheds depend fundamentally on the quantity and quality of recorded rainfall data and its distribution over the area. Complete and reliable sets of climatic and hydrological data are required to plan and design such projects; therefore, various methods have been developed and applied for treating missing precipitation data. The normal ratio method, linear regression, multivariate regression, and inverse distance weighting (IDW) are widely applied in natural resources studies in Iran, so it is necessary to determine the ability of these methods, especially for the precipitation parameter, which plays a crucial role in the study of natural resources. In this study, the capability of each of these methods for infilling missing data in daily, monthly, and annual precipitation time series in the arid regions of Iran was investigated, with the proportion of missing data varying from 5 to 50% of the total. In fact, the main purpose of this study is to answer the question of which of the four methods is more effective for infilling missing precipitation data. Daily data of Iran's synoptic meteorological stations were used. Data homogeneity was examined using the run test, and outliers were identified through graphical data exploration, especially boxplot diagrams, and converted into missing data. The average annual precipitation and temperature of 400 stations were determined and used to compute their de Martonne coefficients; stations with a de Martonne coefficient of less than 10 were classified as arid. Among them, 73 stations with sufficient records from 1986 to 2017 were selected.
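Two of the four methods under comparison can be sketched in a few lines. These are illustrative implementations; the station coordinates and annual normals are hypothetical inputs, not the study's datasets:

```python
def normal_ratio(target_annual_normal, neighbor_annual_normals, neighbor_values):
    """Normal ratio method: each neighbor's value is scaled by the ratio of
    the target station's annual normal to that neighbor's annual normal,
    then the scaled values are averaged."""
    n = len(neighbor_values)
    return sum(target_annual_normal / na * v
               for na, v in zip(neighbor_annual_normals, neighbor_values)) / n

def idw(target_xy, neighbors, power=2.0):
    """Inverse distance weighting: weights fall off as distance**-power.

    neighbors : list of ((x, y), value) pairs for stations with data
    """
    num = den = 0.0
    for (sx, sy), value in neighbors:
        d2 = (sx - target_xy[0]) ** 2 + (sy - target_xy[1]) ** 2
        if d2 == 0.0:
            return value              # coincident station: take its value
        w = d2 ** (-power / 2.0)
        num += w * value
        den += w
    return num / den
```

The evaluation procedure described next (deliberately removing known values and reconstructing them) applies equally to both functions.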
To evaluate each infilling method, part of the actual data was deliberately removed from the original series and then reconstructed. Due to the high volume of calculations, this process was programmed in MATLAB. The results showed that each method performed differently depending on the conditions. Daily data are not well estimated by the normal ratio method, which underestimates the missing values. Linear regression showed higher accuracy than the normal ratio method on the daily time scale; however, the distance between the fitted line through the observed and estimated data is small at first and grows as precipitation increases, indicating that the model is less accurate in estimating extreme values. Given that the fitted line lies below the one-to-one line, linear regression underestimates the actual values; similar results were found for the IDW procedure. The multivariate regression method is more accurate for daily time series when the proportion of missing data is not considerable, but it is generally very sensitive to that proportion. The normal ratio method is not suitable for reconstructing daily missing values, although it is more stable than the other methods as the amount of missing data increases. In monthly time series, the IDW method performs best, followed by the normal ratio method; in annual series, linear regression, the normal ratio method, and IDW perform best, in that order. The findings show that, in general, the accuracy of reconstruction on annual scales is higher than on monthly scales, and on monthly scales higher than on daily scales, because the monthly and annual time series are smoother than the daily ones. It should also be noted that the scope of the present study is limited to Iran.
If data from the rain-gauge stations of the Meteorological Organization and the Ministry of Energy were added, the accuracy of the methods would be expected to increase. As the results of the present study show, the accuracy of the models decreases as the proportion of incomplete data increases; therefore, if new data are included in the treatment of missing values, better performance can be expected from each of these methods. Finally, each method should be used in accordance with the conditions, and it is therefore recommended to develop a software package for infilling missing data in Iran.
Elemental analysis of airborne dust in the World Heritage city of Yazd by Laser Induced Breakdown Spectroscopy
https://jesphys.ut.ac.ir/article_79584.html
Dust and the environmental pollution caused by dust storms are a serious environmental hazard, particularly in arid and semi-arid inhabited regions of the world. Controlling and reducing the harmful or undesirable effects of dust requires accurately identifying and analyzing dust samples, and for this purpose various elemental analysis methods are commonly used to identify and characterize dust materials. The city of Yazd (a UNESCO World Heritage site) is located in Iran's central region and is surrounded by many industrial and mineral sites and deserts. Its urban areas suffer air pollution due to seasonal winds, the lack of annual rainfall, and dust storms; hence, the dust concentration in this city occasionally exceeds the standard limits. In this paper, a study characterizing and analyzing the falling dust in Yazd city is reported. Initially, sampling was conducted at five different locations for two months using marble dust collectors. The size distributions and morphology of the dust samples were studied by Scanning Electron Microscopy (SEM) and the X-Ray Diffraction technique (XRD). Moreover, the samples' elemental composition was analyzed using Energy Dispersive X-Ray Spectroscopy (EDX) and, separately, Laser-Induced Breakdown Spectroscopy (LIBS). Analysis of the SEM images and XRD patterns allows the size and morphology of the dust particles to be studied: sizes of 1 to 30 microns were estimated, with the maximum of the size distribution between 2 and 7 microns, and capsular, triangular, spherical, irregular, and polyhedral shapes are revealed in the recorded particle images. The XRD analyses show the existence of silicate, carbonate, and phosphate mineral groups, and of calcite, quartz, gypsum, magnesium carbonate, and aluminum phosphate components in the samples.
Laser-induced breakdown spectroscopy (LIBS) is a non-contact, fast-response, high-sensitivity, real-time, multi-elemental analytical technique based on emission spectroscopy that measures elemental composition. The elemental characterization of the powder samples was carried out by investigating the emission spectra of breakdown plasma in the sample region. A 1064-nm Nd:YAG laser operating at high energy (100 mJ, 1 to 20 Hz) was focused on the surface of a tiny amount of powder sample to form an emitting plasma. The emission of the plasma produced from the sample was collected by eight optical fibers and detected by the spectrometer. The experimental setup allowed spectra to be recorded in the range of 200 to 1200 nm with a spectral resolution of 0.4 nm. In total, 74 atomic emission lines of the generated plasma were analyzed. Spectral analysis enabled the identification of several elements such as calcium, silicon, iron, magnesium, aluminum, and carbon, and of less abundant elements such as potassium, sodium, strontium, manganese, titanium, cobalt, vanadium, barium, and lead in the elemental composition of the dust samples. The results deduced using the LIBS technique agree unambiguously with the results obtained by EDX analysis of the dust samples in this work. It is found that laser-induced breakdown spectroscopy is a rapid, reliable, and powerful analytical tool for the diagnosis and detection of multiple elements in solid dust samples, and that this technique is comparable with standard methods such as atomic absorption spectroscopy (AAS) and X-Ray Fluorescence (XRF) for the chemical and elemental analysis of urban, mineral, and industrial dust.
Evaluation of Spatiotemporal Column Particulate Matter Concentration (PM2.5) Due to Dust Events in Iran Using Data from NASA/MERRA-2 Reanalysis Model
https://jesphys.ut.ac.ir/article_81525.html
Mineral suspended particles, in addition to being important components of the Earth's atmosphere, play an important role in atmosphere-Earth energy interactions and in the geochemical cycles of the Earth system. The meteorological and climatic importance of atmospheric particulate matter stems from its effects on the energy budget of the Earth-atmosphere system, on physical, dynamical, and chemical changes in the atmosphere at regional and global scales, on the absorption and emission of radiation in the atmosphere, on microphysical changes and the radiative properties of clouds, and on changes in snow and ice cover. Fine particles smaller than 2.5 microns are among the most important air pollutants, with wide variety, complexity, and diffusion, and dust events are one of the most important natural sources of particulate matter in the atmosphere. In recent decades, air pollution in many parts of the world has raised public concerns about health effects; epidemiological studies have shown that lung disease, cardiovascular disease, and the associated mortality are linked to particulate matter. Although the effects of particles on both climate and air quality have been evident over the past few decades, continuous monitoring remains important, and in recent years techniques and models based on satellite data have made significant contributions to particle monitoring. Different versions of the MERRA-based model have excellent capabilities for studying particles and analyzing their time series. The MERRA-2 model (the Modern-Era Retrospective analysis for Research and Applications, Version 2) is based on the analysis of satellite data (Moloud et al., 2012) and is one of the most reliable models for helping environmental scientists answer questions related to climate research and climate change while making optimal use of available satellite observations.
This study aimed to investigate the spatio-temporal density and dispersion of PM2.5 suspended particles due to dust events in the Iranian atmosphere during the statistical period 1980-2019, based on the MERRA-2 model. Here, column PM2.5 means the amount of dust-origin PM2.5 contained in a vertical column of atmosphere above the ground. The relevant data were prepared at monthly, seasonal, and annual time steps with a spatial resolution of 0.5° × 0.625°, and after the necessary preprocessing they were analyzed. The results show considerable fluctuations in PM2.5 particulate density during the years studied, but in general the density of PM2.5 suspended particles is increasing, with the upward trend observed especially in the last years of the statistical period. The results also showed that the MERRA-2 model performs well in monitoring the concentration of PM2.5 particulate matter in the vertical column of the Iranian atmosphere. The average column PM2.5 in the atmosphere of Iran is 61.23 mg/m2, which indicates the high concentration of these particles in the Iranian atmosphere compared to other parts of the world, including the United States (Bouchard et al., 2016), Taiwan (Provence et al., 2017), and Europe (Provence et al., 2017b). The highest concentrations of these particles occur in the southwest of Iran, the southern coastal areas, the eastern regions, the deserts of central Iran, and part of northern Iran, while the lowest is estimated over the Zagros highlands. The spatial distribution of PM2.5 suspended particles in the Iranian atmosphere depends on the frequency of dust events, the distance from emission centers, the season, rainfall, and other climatic parameters (soil surface temperature, soil moisture, etc.).
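The dependence on such climatic parameters can be quantified with Pearson's linear correlation coefficient, which can be computed directly:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson linear correlation coefficient between two series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))
```

A value near +1 indicates PM2.5 rising with the parameter, and a value near -1 indicates the opposite.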
Accordingly, in the warm months and seasons of the year, which are associated with increasing land surface temperature, decreasing rainfall and, consequently, decreasing soil surface moisture, conditions favor the formation of dust events and the release of suspended particles into the Iranian atmosphere. Among the months of the year, May and December, and among the seasons, summer and winter, had the highest and lowest column concentrations of PM2.5 suspended particles in the Iranian atmosphere, respectively. Correlation analysis based on Pearson linear regression between PM2.5 suspended particles in the Iranian atmosphere (response variable) and several meteorological parameters (independent variables), namely precipitation, soil surface moisture, and soil surface temperature over the geographical area of Iran, indicates significant relationships between this variable and the above parameters: a significant positive relationship with soil surface temperature (R = 0.81), a strong negative relationship with soil surface moisture (R = -0.76), and a significant negative relationship with monthly precipitation (R = -0.61). This means that the concentration of PM2.5 suspended particles in the Iranian atmosphere is strongly influenced by environmental parameters, and the seasonal behavior present in the time series indicates a relatively stable temporal pattern of PM2.5 distribution in the atmosphere of Iran.
Evaluation of cumulus schemes of HWRF model in forecasting tropical cyclone characteristics, Gonu tropical cyclone case study
https://jesphys.ut.ac.ir/article_79578.html
The sensitivity of numerical models in predicting Tropical Cyclone (TC) characteristics has been considered in numerous research studies. In this research, the application of five cumulus schemes of the HWRF (Hurricane Weather Research and Forecasting) model, namely KF, SAS, BMJ, TiedTKE, and SASAS, is examined for Tropical Cyclone Gonu (TCG) from 4 to 7 June 2007. The simulations were conducted using three nests with 27, 9, and 3 km resolutions. To this aim, the performance of the schemes in predicting TCG intensity is analyzed using the minimum surface pressure and the maximum 10-m wind speed. Next, their effect on forecasting the radius of maximum wind is evaluated. The parameters of lower-level convergence, upper-level divergence, potential temperature, potential vorticity, Convective Available Potential Energy (CAPE), wind vector (both horizontal and vertical components), wind shear, precipitation, and radar reflectivity are analyzed. The results of the simulations are compared with the analysis data, IMD and TRMM observational data, and routine atmospheric parameters measured at the Chabahar station. The comparison was done at different times of the TCG lifetime. To examine the performance of the HWRF cumulus schemes for the track and intensity of TCG, the whole life cycle of TCG was considered. To test the efficiency of the schemes in predicting some dynamical and thermodynamical parameters, the time of maximum intensity of TCG (18 UTC on 4 June 2007) was focused on. To evaluate the functionality of the schemes in the coastal area, the outputs were discussed for the last two days of the TCG life cycle. The results showed that, with the configuration used, none of the five cumulus schemes predicted TCG reaching the southern coast of Iran. Moreover, neither the pressure decrease nor the maximum wind speed was predicted accurately at the time of maximum intensity of TCG. 
While TCG intensity was above category 3, neither the minimum surface pressure trend nor the maximum wind speed trend was forecast well. For less intense conditions, however, the TiedTKE and SAS schemes produced the nearest values. All five cumulus schemes predicted the radius of maximum wind similarly, except the TiedTKE scheme, which predicted the super cyclone 6 hours earlier. The analyzed and simulated vertical cross sections of potential temperature and horizontal wind were similar. The simulated values of the vertical component of the wind were considerably larger than those from the analysis data and were also closer to the TCG center. The maximum values of simulated CAPE were located off the Oman coast compared to the analysis values. Only the simulations using the SASAS cumulus scheme showed the strongest potential vorticity near the surface. The simulated updrafts and downdrafts were larger than those from the analysis data, and the major simulated updrafts and downdrafts were closer to the center of TCG than those from the analysis data. The upper-level divergence patterns were seen both in the simulations using all five cumulus schemes and in the analysis data, while the lower-level convergences were captured neither in the simulations nor in the analysis data. The maximum simulated accumulated precipitation using all five cumulus schemes was 80 mm in a 6-hour interval, whereas the observational value from TRMM was 25 mm/h. The predicted radar reflectivity was similar across the simulations and the simulated maximum values were the same, but their spatial extents were different. All cumulus schemes predicted wind shear values lower than the analysis values. 
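The kind of scheme-versus-observation intensity comparison made here is typically summarized with error metrics such as RMSE and mean bias. A minimal sketch, using hypothetical 6-hourly wind speeds rather than the actual IMD or HWRF output:

```python
import math

def rmse(pred, obs):
    """Root mean square error between forecasts and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def bias(pred, obs):
    """Mean bias: negative means the scheme under-predicts intensity."""
    return sum(p - o for p, o in zip(pred, obs)) / len(obs)

# Hypothetical 6-hourly maximum 10-m wind speeds (m/s): best-track vs. two schemes.
observed = [35, 42, 51, 62, 68, 60, 48]
tiedtke  = [33, 40, 47, 55, 59, 55, 46]
kf       = [30, 36, 41, 48, 52, 50, 44]

for name, pred in [("TiedTKE", tiedtke), ("KF", kf)]:
    print(f"{name}: RMSE={rmse(pred, observed):.2f} m/s, bias={bias(pred, observed):+.2f} m/s")
```

Both toy schemes under-predict peak intensity, echoing the under-prediction of maximum wind speed reported above.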
At Chabahar station, the observed values of 10-m wind speed, sea level pressure, and temperature were compared with the simulated values from all five cumulus schemes for the period 6-7 June 2007. The statistical parameters of correlation, standard deviation, and root mean square error were used to identify the best cumulus scheme. The smallest prediction errors were obtained with the KF cumulus scheme for the 10-m wind, the TiedTKE cumulus scheme for sea level pressure, and the SASAS cumulus scheme for temperature.
Feasibility of the use of MODIS products to climatology of precipitable water vapor over Iran
https://jesphys.ut.ac.ir/article_81517.html
Introduction
Water vapor is the dominant greenhouse gas in the Earth's atmosphere and, at the same time, highly variable. Observations of its spatial and temporal variations are a major objective of climate research. It is important in several major areas of the atmospheric sciences, on scales from turbulence to synoptic-scale systems, including cloud formation and maintenance, radiation, and climate. The intent of this paper is to demonstrate the capability of MODIS PWV products at monthly and daily scales over Iran. The results are therefore presented in two sections. The first section compares the long-term (2003-2015) monthly mean MODIS Level 3 and ERA-Interim PWV data sets. The second section validates the Level 2 MODIS PWV products against radiosonde data at daily scales. For a better comparison of MODIS Level 2 PWV products with radiosonde data, we used 10 radiosonde stations over Iran and considered the sky conditions (cloudiness and visibility) in our comparison.
Materials & Methods
There are no microwave radiometer (MWR) or Global Positioning System (GPS) sites in Iran; in the absence of these data, we used radiosonde measurements and ERA-Interim as reference data for the comparison with the MODIS PWV estimates. These data were obtained at monthly and daily scales. In the first section, the long-term (2003-2015) spatial and temporal characteristics of monthly mean PWV were investigated over Iran. For this, Level 3 MODIS Terra (MOD08_M3) products and ERA-Interim data were obtained at 1-degree resolution for Iran. 
In the second section, January 2004 (a month with low PWV values and an unstable atmosphere) and July 2008 (a month with high PWV values and a stable atmosphere) were selected for comparing the MODIS daily (MOD05_L2) PWV product with radiosonde data at 10 radiosonde stations in Iran.
Results & Discussion
The annual average MODIS and ERA-Interim PWV values are 12.248 mm and 12.243 mm, respectively; these values are very close to each other, and also close to the value derived by Asakereh and Doostkamian (2014) from NCEP reanalysis data (about 14.3 mm). Ferencz and Pongra (2008) likewise concluded that the ERA-Interim and MODIS PWV fields are very similar. The maximum and minimum PWV values for both data sets are observed during July and January, respectively. Tuller (1968) indicated that February and July are the months of highest and lowest precipitable water at most stations; at some, August replaces July, and at a smaller number, January replaces February. Our result also agrees with the study of Maghrabi and Dajani (2014) over Saudi Arabia, who reported that the lowest PWV values occur in December and January, whereas the highest occur in June and July. They pointed out that during warm periods, increases in the temperature and in the height of constant-pressure levels result in an increased water vapor capacity of the air mass, keeping it away from the saturation point and consequently preserving high PWV values. In contrast, in cold periods, the decrease in the height of constant-pressure levels reduces the water vapor capacity of the air mass and facilitates the condensation process, resulting in a decrease in the amount of PWV. Topography is a key factor in the spatial distribution of PWV: PWV from both data sets has a significant negative relationship with topography in all months, meaning that the concentration of PWV is low in the highland regions and high in the lowlands. 
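The daily-scale MODIS-versus-radiosonde comparison rests on two statistics, the coefficient of determination and the RMSE. A minimal sketch of how they are computed, using hypothetical PWV values rather than the station data:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination from the Pearson correlation of two samples."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

def rmse(x, y):
    """Root mean square difference between two PWV series."""
    return float(np.sqrt(np.mean((np.asarray(x) - np.asarray(y)) ** 2)))

# Hypothetical daily PWV (mm): radiosonde vs. MODIS retrievals on clear days.
radiosonde = np.array([8.1, 10.4, 12.9, 15.2, 18.7, 21.3])
modis      = np.array([7.5, 11.0, 12.1, 16.0, 17.9, 22.4])
print(f"R^2 = {r_squared(radiosonde, modis):.2f}, RMSE = {rmse(radiosonde, modis):.2f} mm")
```

On cloudy or low-visibility days, the MODIS series would scatter more strongly around the radiosonde values, lowering R² and raising RMSE, which is the pattern reported below.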
During January 2004, the errors range from 5.53 mm in the best case (Tabriz) to 16.02 mm in the worst case (Ahwaz), and at all stations the coefficients of determination are very small. Under suitable weather conditions, however, RMSE decreased at all stations. During July 2008, at many stations, such as Zahedan, Kerman, and Esfahan, cloud cover and visibility conditions were appropriate, while at Bandar Abbas visibility was poor (less than 5 km) on all days. It appears that the cloud cover and visibility conditions explain the high coefficients of determination at Esfahan, Kerman, and Zahedan (77, 80, and 66%, respectively) and the high error at the Bandar Abbas station.
Conclusion
The annual averages of MODIS and ERA-Interim PWV are close to each other (about 12.24 mm); in addition, MODIS has a stronger negative correlation with topography than the ERA-Interim PWV data. This suggests that MODIS Level 3 monthly PWV data are valuable for the long-term monthly climatology of PWV over Iran. At the daily scale, comparisons of MODIS and radiosonde PWV data under different atmospheric conditions differ significantly: during clear days with appropriate visibility (despite the time lag between the two data sets), R2 values are higher than on cloudy days with poor visibility. The accuracy of MODIS PWV data over Iran is therefore strongly dependent on weather conditions.
Keywords: precipitable water vapor, MODIS products, ERA-Interim, Radiosonde, Iran.
Cumulus Clouds from the rough surface perspective
https://jesphys.ut.ac.ir/article_79579.html
Although it has long been known that clouds show a fractal geometry, a detailed analysis is still missing in the literature. Through scattering of the received solar radiation, clouds play a very important role in the energy budget of the earth's atmosphere, and it has been shown that the surface fluctuations, and more generally the statistics of the clouds, have a very important impact on the scattering and absorption of solar radiation. In this paper we first study the relation between the visible light intensity and the width of cumulus clouds. To this end, we suppose that the intensity of light transmitted through a column of cloud is attenuated in proportion to the optical depth, i.e. the sum of the absorbed and scattered contributions. Using this relation, we find a one-to-one relation between the cloud width and the intensity of the received visible light in the low-intensity regime. By calculating the Mie scattering cross sections for the physical parameters of the clouds, we argue that this correspondence works for thin enough clouds, and that the width of the clouds is proportional to the logarithm of the intensity. The behavior of the Mie cross section for large enough size parameters, together with the angle of the sun's radiation with respect to the earth's surface (or equivalently the cloud's base), allows us to map the system to a two-dimensional rough medium. Then, exploiting rough-surface techniques, we study the statistical properties of the clouds. We first study the roughness, defined for rough surfaces as the rms fluctuation of the height profile within a window of a given scale. This study of the local and global roughness exponents (alpha_l and alpha_g, respectively) shows that the system is self-similar. We also consider the fractal properties of the clouds, and estimate the exponents numerically by least-squares fitting of the roughness. We also study other statistical observables and their distributions. 
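The scale-dependent roughness analysis described here can be sketched numerically. The snippet below estimates a roughness exponent from the log-log scaling of w(l) on a synthetic self-affine profile, a random walk, whose known exponent is 0.5; the cloud data themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def roughness(h, window):
    """w(l): mean rms fluctuation of the height profile inside windows of size l."""
    n = len(h) // window
    segments = h[: n * window].reshape(n, window)
    return float(np.mean(np.std(segments, axis=1)))

# A random-walk profile is self-affine with roughness exponent alpha = 0.5.
profile = np.cumsum(rng.standard_normal(2 ** 16))
windows = [16, 32, 64, 128, 256, 512]
w = [roughness(profile, l) for l in windows]
# The roughness exponent is the slope of log w(l) vs. log l.
alpha, _ = np.polyfit(np.log(windows), np.log(w), 1)
print(f"estimated roughness exponent: {alpha:.2f}")  # close to 0.5
```

Applying the same fit at local and global scales gives the alpha_l and alpha_g exponents whose agreement signals self-similarity.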
By studying the distribution of the local curvature (at various scales) and of the height variable, we conclude that these distributions, and consequently the system, are not Gaussian. In particular, the distribution of the height profile follows the Weibull distribution, defined by f(x) = (k/lambda)(x/lambda)^(k-1) exp[-(x/lambda)^k] for x >= 0 and zero otherwise. The reason why this relation arises is beyond the scope of the present work and is postponed to future studies. The study of the local curvature, defined via the second difference of the height profile at a given scale, reveals the same behaviors and structure. All of this shows that the problem of the width of cumulus clouds maps to a non-Gaussian self-similar rough surface. We also show that the system is mono-fractal, i.e. a single scaling exponent governs the statistics. Given these results, we argue that the tops of the clouds are anomalous random rough surfaces that affect the albedo of cloud fields.
Verification of potential intensity relations for the northwest Indian Ocean tropical cyclones during 1990-2019
https://jesphys.ut.ac.ir/article_81521.html
The prediction of tropical cyclone (TC) intensity has been considered in numerous research studies because of the destructive effects of TCs. Hence, various parameters have been combined into potential intensity relations to express the maximum probable intensity that a TC can achieve. The potential intensity relations differ, since each was suggested on the basis of different factors affecting TC intensity. In this research, the validity of five potential intensity relations, defined by other researchers for other basins, is verified for all TCs formed over the northwest Indian Ocean from 1990 to 2019. In this period, sixteen cyclonic storms, nine severe cyclonic storms, ten very severe cyclonic storms, and ten extremely severe cyclonic storms occurred. Two data sets were used: data reported by the India Meteorological Department (IMD), and reanalysis data from the fifth generation of the European Centre for Medium-Range Weather Forecasts reanalysis (ECMWF ERA5) with a horizontal resolution of 0.25 degrees. The IMD data include the position (latitude/longitude) of the TC's eye and the maximum wind speed. The reanalysis data consist of meteorological parameters from sea level to the tropopause, including relative and specific humidity, temperature, pressure, dew point temperature, and the horizontal wind vector. The first potential intensity relation was based on the difference between convective available potential energy values at the radius of maximum wind for saturated and unsaturated air masses. The second considered the difference between the saturated entropy at sea level and the environmental value of entropy. The third consisted of the ratio of the difference between upper-level and lower-level temperatures to the outflow temperature, together with the discrepancy between saturated and unsaturated enthalpy. The fourth included the difference between the saturated and unsaturated values of equivalent potential temperature at the radius of maximum wind. 
The last relation not only used the ratio of the inflow and outflow temperatures and the discrepancy between surface and boundary layer entropy, but also emphasized surface temperature. The ratio of the enthalpy and drag coefficients was used in all relations, while thermodynamic efficiency was included in some of them. The potential intensity values obtained from the empirical relations were evaluated against the maximum wind speed reported by IMD. The comparison was based on several statistical indices and on Taylor diagrams. The statistical indices include (I) the index of agreement (IOA), (II) standard deviation, (III) root mean square deviation, and (IV) the correlation coefficient. For the depression and deep depression states, the minimum IOA value was achieved using the first relation, while the other relations produced close values of around 0.7. For the CS category, the first two relations produced the lowest IOA values. For the SCS category, the last two relations performed best, while for the VSCS and ESCS categories the second relation produced the most consistent results. The IOA results showed that the fifth relation produced the highest agreement with the IMD data. This indicates that the discrepancy between sea surface temperature and tropopause temperature, and the difference between environmental entropy and inner-core entropy, played the most important roles in intensification for the first four intensity categories. For the last two intensity categories, however, the discrepancy between the saturated entropy at the surface and the boundary layer entropy produced IOA values of 0.73 and 0.75, respectively. It is notable that the difference between the saturated equivalent potential temperature and the boundary layer potential temperature, and also the difference between the inflow and outflow temperatures, produced the same results for the beginning state. 
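As one concrete illustration of the ingredients named above (the Ck/Cd ratio, thermodynamic efficiency, and the surface enthalpy disequilibrium), the sketch below implements an Emanuel-type potential intensity formula, v^2 = (Ck/Cd) * [(Ts - To)/To] * (k* - k). This is one common form from the literature, not necessarily identical to any of the five relations verified in the study, and the input values are hypothetical.

```python
import math

def potential_intensity(sst_k, outflow_k, k_sat, k_env, ck_over_cd=0.9):
    """Emanuel-type potential intensity: v^2 = (Ck/Cd) * (Ts - To)/To * (k* - k).

    sst_k, outflow_k : sea surface and outflow temperatures [K]
    k_sat, k_env     : saturation and environmental specific enthalpy at the surface [J/kg]
    """
    efficiency = (sst_k - outflow_k) / outflow_k  # thermodynamic efficiency
    v2 = ck_over_cd * efficiency * (k_sat - k_env)
    return math.sqrt(max(v2, 0.0))

# Hypothetical values for a warm pre-monsoon Arabian Sea environment (illustrative only).
v_max = potential_intensity(sst_k=303.0, outflow_k=200.0, k_sat=3.6e5, k_env=3.5e5)
print(f"potential intensity: {v_max:.1f} m/s")
```

With zero enthalpy disequilibrium (k* = k) the formula correctly gives zero potential intensity, which is why warm, moist-deficient surface air limits intensification.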
The other statistical indices were analyzed with Taylor diagrams, considering all tropical cyclones across the various intensity categories. The results demonstrated that the last and the second potential intensity relations produced the best performance in all categories for the TCs formed over the northwest Indian Ocean during 1990-2019.
Statistical Evaluation of Cloud Seeding Operations in Central Plateau of Iran in the 2015 Water Year
https://jesphys.ut.ac.ir/article_79585.html
Iran is located in an arid and semi-arid region and has experienced a reduction in average rainfall in recent years. This has turned attention to the use of new methods, such as cloud seeding, to obtain more water resources; cloud seeding operations have been carried out in the country since 1998. The purpose of this study was to evaluate the cloud seeding projects of the 2015 water year (January, February, and March 2015) in the central region of Iran, including the provinces of Yazd, Kerman, Fars, and Isfahan and some adjacent provinces. The evaluation was performed statistically using stepwise multiple regression, with two different approaches. In the first approach, precipitation at stations located in the target area of the cloud seeding operations is estimated from precipitation at stations in the control area using stepwise multiple regression; then, taking into account a 90% confidence interval for this estimate, the effectiveness or ineffectiveness of the cloud seeding operation at each station is determined. In the second approach, the volume of precipitation of each province in the target area is estimated from precipitation at stations in the control area using stepwise multiple regression; then, by considering a 90% confidence interval for this estimate, the effect of cloud seeding operations on the rainfall volume of each province is investigated. The target area in different months was selected based on HYSPLIT model results. Due to the inconsistent spatial distribution of rain gauges in the target areas, parts of the target areas lacking enough rain gauges were excluded from further analysis. To define the boundaries of the excluded areas, the Inverse Distance Weighted (IDW) method was used to find the radius of influence around each rain gauge. 
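The target-control logic described above can be sketched as follows: regress target-area precipitation on control-area precipitation, predict the seeded period, and check whether the observed value exceeds the upper bound of the prediction interval. The snippet uses simple linear regression and hypothetical rainfall values; the study itself used stepwise multiple regression with several control stations.

```python
import numpy as np

# Hypothetical historical seasonal precipitation (mm): control-area vs. target-area
# gauges in unseeded years, used to build the target-control regression.
control = np.array([42.0, 55.0, 61.0, 70.0, 80.0, 95.0, 103.0, 120.0])
target  = np.array([38.0, 50.0, 57.0, 64.0, 75.0, 88.0,  96.0, 110.0])

slope, intercept = np.polyfit(control, target, 1)
residual_sd = np.std(target - (slope * control + intercept), ddof=2)

# Seeded season: observed control-area rainfall and observed target-area rainfall.
control_seeded, target_seeded = 66.0, 75.0
expected = slope * control_seeded + intercept
upper_90 = expected + 1.645 * residual_sd  # rough one-sided 90% bound
print(f"expected {expected:.1f} mm, observed {target_seeded} mm")
print("seeding effect indicated" if target_seeded > upper_90 else "no significant effect")
```

The 1.645 multiplier is the one-sided 90% normal quantile; a proper prediction interval would also account for the regression's estimation uncertainty.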
The radius-of-influence values were 93,940, 89,569, and 149,015 m for January, February, and March, respectively; the minimum value of 89,569 m was finally selected as the radius of influence. The results of both methods indicate an impact of the cloud seeding operations in this water year in these areas. In particular, the volume of precipitation in February in all provinces located in the target area increased by 15 to 80 percent. The surface runoff generated from the increased precipitation due to cloud seeding was estimated by two methods, the Soil Conservation Service (SCS) method and the Rational method, yielding 1318.5 and 1329.5 million m3, respectively. The groundwater recharge in January, February, and March is estimated as 105.3, 425.6, and 156.3 million m3, respectively. It is important to note that the runoff and groundwater recharge estimates obtained by the method used in this study are subject to high uncertainties; they can only represent the order of magnitude of the impacts of cloud seeding operations, and therefore the exact numbers should not be used for water resources planning and management purposes. Further investigation in areas with more rain gauges would allow a more accurate assessment of cloud seeding operations.
Exploratory analysis and in-homogeneity study of temperature and rainfall series of meteorological stations in Iran (period 1989-2018)
https://jesphys.ut.ac.ir/article_81524.html
In-situ observations underlie a wide range of planning, applied studies, and modeling in various fields and sciences, and using these data without ensuring their accuracy and homogeneity can lead to uncertainty in the results. The major problems researchers face are poor data quality, missing data, outliers, and in-homogeneity in time series. Inappropriate siting of stations, human errors in reading and recording data, errors in measuring equipment, changes in measurement tools, different observation methods, lack of maintenance and calibration of equipment, construction around the stations, changes in the type of instruments and sensors used to measure atmospheric parameters, and station relocation during the statistical period all affect the accuracy and homogeneity of meteorological data. Therefore, in this paper, the daily minimum and maximum temperature series and daily rainfall series at 134 weather stations in Iran were analyzed for outliers and homogeneity over the period 1989-2018. First, Iran was divided into 5 clusters based on climatic characteristics. After clustering, the daily maximum and minimum temperature and daily rainfall data were statistically analyzed using SPSS software, and the percentage of missing data was determined separately for each station. Then, the Climatol package in R was used to study outliers and in-homogeneity and to perform homogenization. In each cluster, the series are re-clustered based on the parameter of interest, and for each station, the other stations belonging to that cluster are considered reference stations. Based on this algorithm, the series of interest is first estimated and standardized against the reference series by type II regression. After estimating the series, the standardized anomaly series is calculated as the difference between the observed and estimated values. Outliers were then detected in two steps. 
In the first step, original data corresponding to standardized anomalies greater than the prescribed thresholds were flagged as outliers. In the second step, to ensure correct detection, the flagged temperature outliers were compared with the values of the days before and after: if they differed significantly, they were accepted as outliers and deleted. For the precipitation series, the atmospheric conditions on the dates in question were checked. For the detection of in-homogeneity, the standard normal homogeneity test (SNHT) was applied to the monthly series. If the SNHT test statistic exceeded the prescribed threshold, the series was split at the point of maximum SNHT and all the data before the break were transferred to a new series with the same geographic coordinates. This process was repeated until all series were homogeneous. Break points confirmed by metadata were accepted as non-climatic breaks. Finally, all missing data in the homogeneous series and subseries were infilled with the same estimation procedure, using only the other fragments of the same series as reference.
The maximum temperature, minimum temperature, and precipitation series of the 134 weather stations of Iran have, on average, 3%, 4%, and 2% missing values, respectively. In these time series, 63 outliers were detected for the maximum temperature, of which 53 were related to the Geophysics station of the University of Tehran. For the minimum temperature, the number reached 50, of which 11 belong to the Geophysics station, and for precipitation, 13 outliers were identified, of which 5 are related to the Geophysics station. 
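The SNHT step can be sketched as follows. This is a minimal single-break implementation on a synthetic standardized series, not the Climatol code itself: the statistic T(k) compares the means of the series before and after each candidate break, and the break is placed where T is maximal.

```python
import numpy as np

def snht(series):
    """Standard normal homogeneity test on a series (standardized internally).

    T(k) = k*mean(z[:k])^2 + (n-k)*mean(z[k:])^2; returns (max T, break index).
    """
    z = (series - np.mean(series)) / np.std(series)
    n = len(z)
    t = np.array([k * np.mean(z[:k]) ** 2 + (n - k) * np.mean(z[k:]) ** 2
                  for k in range(1, n)])
    k_max = int(np.argmax(t)) + 1
    return float(t[k_max - 1]), k_max

# Hypothetical monthly anomaly series with a +1.5 shift (e.g. a station relocation)
# injected halfway through a 10-year record.
rng = np.random.default_rng(1)
series = rng.standard_normal(120)
series[60:] += 1.5
t_stat, break_point = snht(series)
print(f"T = {t_stat:.1f}, break near index {break_point}")
```

When T exceeds the prescribed threshold, the series would be split at the detected index and the earlier fragment treated as a separate series, as described above.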
For the daily temperature series (excluding the Geophysics station), 89 stations were homogeneous and 44 stations had one or two break points, and for the precipitation series, 15 stations were identified as in-homogeneous.
Application of Siemens index of green cities for selected areas in Iraq
https://jesphys.ut.ac.ir/article_79580.html
Irregular urban design and the use of traditional resources to provide energy have a major impact on people's lives and on the future climate. The aim of this paper is to determine the best place to design a sustainable city, based on the Siemens Green City Index and a comfort factor. Four sites in Iraq were selected based on geographical location and the weather variables of each site: Um Qasr, Baghdad, Anah, and Sulaymaniyah. Weights and percentages were distributed over the categories of the Siemens index, namely renewable energy, water, air quality, and the comfort factor. For the renewable energy category, represented by solar and wind energy, the best evaluation was found at Um Qasr, at 60.88%. For the water resources category, the Anah site possesses the highest percentage of available water among the study sites, at 85%. For the air quality category, represented by the percentage of pollutants at each site, the Sulaymaniyah site has the lowest percentage of pollutants among the study sites, at 24.86%. Finally, for the comfort factor category, represented by the Temperature-Humidity Index and the Wind Chill Factor, Sulaymaniyah has the highest comfort factor value, at 94%. After distributing the weights and percentages and aggregating them for each site, the results identified the best site for designing a sustainable city in Iraq, with an overall score of 72%.
Detection of Aircraft Icing Threat pixels Using Cloud Properties of MSG Satellite Products Case Study: Tehran-Urmia Flight Route
https://jesphys.ut.ac.ir/article_79586.html
In the present study, the meteorological conditions of the plane crash on the Tehran-Urmia route on 01/19/2011 were investigated. The ultimate goal of this study is to detect pixels that threaten aircraft with icing. To achieve this goal, the physical properties of clouds over northwest Iran were evaluated using Meteosat satellite products. First, the cloud products were received in NetCDF4 format at 15-minute intervals. Then, a regular network of geographical coordinates with a grid of 101 x 165 points was prepared. After the data gridding process, cloud characteristics (cloud cover, cloud type, cloud phase, cloud optical depth, and cloud temperature) were extracted for the study day at 15-minute intervals. Finally, by combining cloud characteristics (cloud temperature below 273 K, liquid cloud phase, and optical depth less than one) through the FIT algorithm, an icing mask was modeled for the study area. Examination of the cloud characteristic maps shows that cloud temperature and cloud phase (liquid state) played the most important roles in creating icing conditions. According to the aviation authorities, there were icing pixels on the flight path and at the crash location. Examination of synoptic maps also showed unstable weather conditions with severe convection at the time of the accident in the study area. Under such conditions, with access to moisture sources in the upper layers of the atmosphere and the strengthening of supercooled water, icing conditions were provided.
The 2007 Kahak and 2010 Kazerun earthquakes: constrained non-negative least-squares linear finite fault inversion for slip distribution
https://jesphys.ut.ac.ir/article_79633.html
In this study, two moderate earthquakes from two main seismotectonic provinces of Iran are chosen for investigating slip distribution using finite-fault modeling. The first is the 18 June 2007 Mw 5.5 Kahak earthquake, situated in the Central Iran seismotectonic province in the vicinity of the Kahak district of Qom province near Tehran, the capital of Iran. The second is the 27 September 2010 Mw 5.9 Kazerun earthquake, situated in the Zagros seismotectonic province near Kazerun County in Fars Province. This research aims at finite-fault modeling of the broadband three-component displacement waveforms of these earthquakes using a least-squares inversion method for the spatial and temporal slip distribution. Green's functions are calculated using the frequency-wavenumber integration code (FKRPROG) developed by Saikia (1994), and the inversion algorithm used for obtaining the synthetic data is based on a stabilized constrained non-negative least-squares method introduced by Hartzell and Heaton (1983). Numerous inversions are implemented to obtain the optimal parameters of the process, including rupture velocity and rise time. A rupture velocity of 2.6 km/s (0.75 Vs) and a rise time of 1.4 s are used for the first event, and 2.8 km/s (0.75 Vs) and 2.1 s are chosen for the second. Results show ruptures with peak slips of 8.6 cm and 14.3 cm and total seismic moment releases of 1.59e+24 dyne-cm and 2.80e+25 dyne-cm for the Kahak and Kazerun earthquakes, respectively. Furthermore, due to the non-uniqueness of the inversion problem, a set of solutions is presented for both events. Among these models, the final solutions for both earthquakes, resulting from the ISC hypocenter and the GCMT focal mechanism, give the smoothest synthetic data with the best data fit. 
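The non-negativity constraint central to such slip inversions can be illustrated with a toy solver. The sketch below uses a simple projected-gradient NNLS on synthetic data; it is only a stand-in for the stabilized Hartzell-Heaton scheme, and the Green's function matrix and slip values are invented for illustration.

```python
import numpy as np

def nnls_projected_gradient(A, b, iters=5000, tol=1e-10):
    """Minimize ||A x - b||^2 subject to x >= 0 by projected gradient descent.

    A toy stand-in for a stabilized non-negative least-squares solver.
    """
    step = 1.0 / np.linalg.norm(A.T @ A, 2)  # safe step size from the largest eigenvalue
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x_new = np.maximum(0.0, x - step * (A.T @ (A @ x - b)))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy problem: 3 subfault slips observed at 5 "stations"; true slip is non-negative.
rng = np.random.default_rng(2)
G = rng.standard_normal((5, 3))        # synthetic Green's function matrix
slip_true = np.array([0.0, 2.0, 0.5])  # slip on each subfault
d = G @ slip_true                      # noise-free synthetic displacements
slip_est = nnls_projected_gradient(G, d)
print("estimated slip:", np.round(slip_est, 3))
```

Projecting onto the non-negative orthant at each step is what keeps every subfault slip physically one-signed, the same role the constraint plays in the waveform inversion.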
For the Kahak earthquake, the ISC hypocenter provides the best fit to the observed data, with maximum total variance reductions of 35.30% for the spatial and 54.50% for the spatiotemporal distribution. For the Kazerun earthquake, the best fit, with a maximum total variance reduction of 54.44%, is also obtained using the ISC hypocenter. The sensitivity of the slip models to some influential parameters, such as rupture velocity and rise time, is also explored. This sensitivity test shows that increasing the rupture velocity increases the seismic moment and decreases the total variance reduction, while increasing the rise time reduces the rupture area and the seismic moment. Another result of this test is that the slip distribution is heavily influenced by the number of stations and the choice of data sets. Finally, a comparison between the slip pattern on the fault plane and its projection on the earth's surface illustrates that the aftershocks are distributed primarily outside the region of major slip. Since aftershocks are a phase of relaxing stress concentrations, they are expected to spread outside the slip patch. For the Kahak earthquake, this distribution occurs near the western end of the slip model, and in the Kazerun model the aftershocks predominantly surround the region of major slip. Therefore, analysis of the aftershock distributions and slip patterns supports the reliability of the solutions.
Some physical properties of mesoscale eddies in the Caspian Sea basins based on numerical simulations
https://jesphys.ut.ac.ir/article_81510.html
This paper investigates the mechanism of eddy formation and eddy locations in the Caspian Sea using numerical simulations. The HYCOM model is used to simulate the evolution of the eddies. The model was run for 18 years, from 1992 to 2009, with river runoff and atmospheric forcing applied to the model as input files. The model output compares well with observational data. The results indicate that one cyclonic eddy in the middle basin and two cyclonic and anticyclonic eddies in the southern basin of the Caspian Sea are the main eddies in this closed sea. Herein we prepare a comprehensive map showing the exact locations of the eddies, together with important features such as their scales, for all months, using the model simulation outputs. These eddies show different behaviors in all seasons. Topographic steering seems to be very important in the formation of these mesoscale, deep-basin-size eddies.A Study on the Effects of Solar Protons on the NOy by Magnetic Storm Events from 2003 to 2012: A Comparison between the Southern and Northern Hemispheres
https://jesphys.ut.ac.ir/article_81522.html
In the study of solar-terrestrial relationships, magnetic storms and solar activity play important roles. In this paper, the intense magnetic storms accompanied by solar proton events that occurred in October and November 2003, January 2005, December 2006, and January and March 2012 have been considered. The variation of the odd nitrogen oxides (NOy) and ozone in the stratospheric layer under the effects of energetic particle precipitation is investigated. Anomaly percentages of NOy and ozone are calculated separately for the Southern and Northern hemispheres over geographic latitudes from 60 to 80 degrees. The analyzed observational data showed that the intense magnetic storms accompanied by solar energetic proton fluxes (E&gt;10 MeV) of more than 500 particles/(cm2 s sr) gave rise to an increase in NOy in the stratosphere, from the 1 mb level to 200 mb. The results also showed that in November 2003, January 2005, December 2006, and January and March 2012, NOy increased in the Northern hemisphere but decreased slightly in the Southern hemisphere. Among the magnetic storm events in the autumn and winter seasons, only the October 2003 event showed an increase in NOy in the Southern hemisphere. The results showed that the increase in NOy caused a delayed decrease of ozone at altitudes below the NOy enhancement. Since the solar zenith angle depends on the season, and is also very different in the Northern and Southern Hemisphere polar regions, it affects the background atmosphere. 
This suggests that the solar proton events that produced NOy changed ozone somewhat differently in the two hemispheres (Jackman et al., 2014). Because of the difference between the solar zenith angles in fall and winter, we expected different behavior of NOy in the Northern and Southern Hemispheres. This study covers three magnetic storm events in winter (January 2005, January 2012, March 2012) and three in fall (October 2003, November 2003, December 2006). The main conclusions can be summarized as follows: 1) We observed a large increase in the anomaly percentage of NOy over the Northern Hemisphere in November 2003, January 2005, December 2006, January 2012, and March 2012; for the Southern Hemisphere, we observed a small decrease in NOy on the same dates, except in March 2012. 2) In October 2003 and March 2012, we observed a large increase in the anomaly percentage of NOy over the Southern Hemisphere, and only in October 2003 did we observe a decrease in the anomaly percentage of NOy over the Northern Hemisphere. 3) An increase in NOy always causes a decrease in stratospheric ozone (of short duration and with a delay of several days) at levels lower than the level of the NOy increase. 4) The variation of the anomaly percentage of stratospheric ozone is directly related to the variation of NOy, but the decrease in ozone cannot be considered an exact mirror of the increase in NOy. 5) The increased proton flux and energetic particle precipitation during geomagnetic storms in the upper and middle atmosphere cause an increase in ionization. 
6) There is a delay (from a few hours to several days) between the starting time of a geomagnetic storm event and the time of the maximum proton flux, the time of the increase in NOy, and the time of the decrease in ozone.GROUNDWATER PROSPECTIVITY MAPPING USING INTEGRATED GIS, REMOTE SENSING, AND GEOPHYSICAL TECHNIQUES; A CASE STUDY FROM NORTHEASTERN NIGERIA
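The anomaly percentages discussed above are departures from a quiet-time baseline, and the 500 particles/(cm2 s sr) figure is the event threshold quoted in the abstract. A minimal sketch, assuming a simple baseline-relative definition of the anomaly (the exact baseline construction used by the authors is not stated here):

```python
def anomaly_percent(observed, baseline):
    """Percentage departure of each observation from its quiet-time baseline."""
    return [100.0 * (x - b) / b for x, b in zip(observed, baseline)]

def is_solar_proton_event(flux, threshold=500.0):
    """Flag E > 10 MeV proton fluxes above 500 particles/(cm^2 s sr)."""
    return flux > threshold
```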
https://jesphys.ut.ac.ir/article_81559.html
Integrated GIS, remote sensing, and geophysical techniques have been successfully applied to generate a previously unavailable groundwater prospectivity map for the study area. Selected thematic maps were integrated using the weighted sum tool of the spatial analyst toolbox of the ArcGIS software. The five thematic maps used are: the lithology map, drainage density map, slope map, lineament density map, and the topographic map of the area. The groundwater prospectivity map generated was reclassified into low, moderate, high, and very high potential zones on the basis of the assigned layer ranks, which in turn depend on the degree of influence of each layer on groundwater occurrence. Areas around the Gombe, Wuyo, Deba, Alkaleri, Kaltungo, Misau, Nafada, and Bajoga towns are the regions that showed very high prospects for groundwater occurrence. Data-processing filters such as horizontal derivatives, analytic signal processing, and 3D Euler depth estimation were applied to the magnetic data in order to map structures and lithologic contacts before their subsequent integration with other structural lineaments as a thematic layer. Vertical electrical sounding (VES) data were used to compute hydraulic conductivity, transmissivity, and related properties for the aquiferous layers identified. The results of the present study showed some regions classified as highly prospective to be consistent with high transmissivity and high yield values. The final outcome (the groundwater potential map) of this research demonstrates that the GIS/remote sensing and geophysical techniques employed are a very powerful tool for generating a groundwater prospectivity map, which is vital for planning groundwater exploration and exploitation.On the propagation of VLF electric field emissions associated with earthquakes in the middle layer of the earth crust
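The weighted-sum overlay and reclassification described above can be sketched as follows. This is a generic stand-in for the ArcGIS weighted sum tool; the weights, class breaks, and grid values are hypothetical, and real layer ranks would come from the influence assessment described in the abstract.

```python
def weighted_sum(layers, weights):
    """Cell-by-cell weighted sum of equally sized raster layers (nested lists)."""
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[sum(w * layer[r][c] for w, layer in zip(weights, layers))
             for c in range(cols)] for r in range(rows)]

def reclassify(score, breaks=(2.0, 4.0, 6.0)):
    """Map a suitability score to a prospectivity class (breaks are assumptions)."""
    labels = ("low", "moderate", "high", "very high")
    for b, label in zip(breaks, labels):
        if score < b:
            return label
    return labels[-1]
```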
https://jesphys.ut.ac.ir/article_81560.html
An ensemble of elementary radiators is generated on the basement rock because of the stresses applied in the earthquake preparation zone in the earth crust. Considering such an &lsquo;ensemble&rsquo; as the source of electromagnetic signals, the strength of the electric field is estimated at different distances and at frequencies lying in the range 3&ndash;27 kHz for three different conductivities of the crustal layers (10^-8, 10^-9, and 10^-10 S/m). The results of the computation are presented in this paper. Moreover, propagation distances for the seismogenic VLF emissions have also been calculated in the frequency band 3&ndash;27 kHz at conductivities lying in the range 10^-8&ndash;10^-10 S/m, within the limit of detectability of the measuring instruments (10^-7 V/m). It is observed that these distances increase with decreasing conductivity of the middle layer of the crust. Furthermore, the theoretical results of the computations are verified against the experimental observations of a seismic event that occurred at a distance of 698 km from the observing station at Chaumuhan, Mathura (geographic lat. 27.49&deg; N, long. 77.67&deg; E). In addition, the generation and propagation mechanisms of seismo-electromagnetic radiation are also discussed briefly.Contribution of source emissions in the air pollution modeling - a WRF/Chem case study
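A simple way to see why the propagation distance grows as conductivity drops is plane-wave attenuation in a conductive layer: the field decays as exp(-r/&delta;) with skin depth &delta; = sqrt(2/(&mu;0 &sigma; &omega;)). The sketch below uses that standard relation, not the authors' full model; the source field strength e0 is a hypothetical value, while the 10^-7 V/m detection threshold comes from the abstract.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(sigma, freq_hz):
    """Skin depth (m) of a plane wave in a medium of conductivity sigma (S/m)."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (MU0 * sigma * omega))

def detectable_range(e0, sigma, freq_hz, threshold=1e-7):
    """Distance (m) at which e0 * exp(-r/delta) falls to the detection threshold."""
    return skin_depth(sigma, freq_hz) * math.log(e0 / threshold)
```

With these relations, dropping the conductivity from 10^-8 to 10^-10 S/m enlarges the skin depth tenfold and with it the detectable range, consistent with the trend reported above.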
https://jesphys.ut.ac.ir/article_81756.html
Between 16 and 21 December 2017, several megacities in Iran, such as Tehran, Tabriz, and Isfahan, experienced a considerable increase in air pollutants. In this article, using the HTAP_v2 global emissions data and the WRF/Chem modeling system, the concentration time series of some important gaseous criteria pollutants, including NO2, SO2, and CO, have been simulated. The variations of the time series of the pollutants and the comparison of the results for Tehran with the measurement data showed that, although the WRF/Chem simulations for Tehran presented considerable overestimations, the model&rsquo;s performance with regard to the time variations of the concentrations of the gaseous agents over the polluted episode is acceptable and could therefore be considered in operational air quality systems. Since emission data are not available for many metropolitan areas in Iran, the HTAP_v2 global dataset could be used as the emissions data, with reliable accuracy, for numerical air quality models. The sea surface pressure from 16 to 21 December 2017 indicates the settling of high-pressure systems from the northeast of Iran, which gradually intensify over the northwest and central parts of Iran by about 4 hPa. In other words, the pressure increases from 1024 hPa on 16 December to 1028 hPa on 21 December. Variations of surface humidity over the study period are not significant. The dew point deficit over the northwest is about 2 &deg;C and over the central parts of Iran about 7 &deg;C. Furthermore, the 10 m wind speed does not show considerable variation and is generally less than 10 knots. Using the ECMWF meteorological reanalysis data as the initial and boundary conditions, the WRF/Chem model has been run with two domains. Quantitative comparisons between the WRF/Chem results and the measurements show a considerable overestimation for Tehran. 
Considering the increase of the pollutants over the beginning and middle of the simulation period and their decrease toward its end, the temporal variations of the model results, especially for Tehran, present good agreement with the variations of the pollutants for this episode. Regarding the WRF/Chem runs driven by the HTAP global emission data, this dataset is accurate enough to be used in regional models. Therefore, for mesoscale simulations (less than 1000 km), the HTAP global dataset provides reliable and valid emissions data, which are highly valuable especially for those regions and urban areas without any locally measured emissions data. Although WRF/Chem is a regional model, the model&rsquo;s grid points could be set with a high spatial resolution to simulate urban air pollutants. Since the WRF/Chem model in this study has shown good performance in estimating the variations of air pollutant concentrations over urban areas, this capability could be used to set up operational air quality models for urban areas, as an air quality warning and advisory system. If national emissions data exist, they could be used instead of the global emissions to obtain more accurate model results in air quality modeling.Investigating mode osculation phenomenon in MASW and MALW methods
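The model-measurement comparison above distinguishes a constant overestimation (bias) from agreement in the temporal variations (correlation). A minimal sketch of the standard scores that separate the two, using hypothetical concentration series rather than the study's data:

```python
import math

def mean_bias(model, obs):
    """Positive values indicate the model overestimates the measurements."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root-mean-square error of the model against the observations."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def pearson_r(model, obs):
    """Correlation of the temporal variations, insensitive to a constant offset."""
    n = len(obs)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    vm = math.sqrt(sum((m - mm) ** 2 for m in model))
    vo = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (vm * vo)
```

A series that overestimates every observation by the same amount still yields a correlation of 1, which is exactly the situation the abstract describes: a biased but temporally faithful simulation.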
https://jesphys.ut.ac.ir/article_81507.html
There are two types of seismic waves: those that propagate inside a medium (body waves) and those traveling along the Earth&rsquo;s surface (surface waves). In the last decades, a number of papers dealing with surface waves have been published, but it must be recalled that their theoretical description and first applications date back almost a century. Surface waves have in fact been used for a number of applications since the 1920s: nondestructive testing (even for medical applications), geotechnical studies, and crustal seismology. Recently, interest in their application has increased, both because of the increasing demand for efficient methodologies to apply in engineering projects and because recent regulations addressing the assessment of seismic hazard (for instance, Eurocode 8) give the necessary emphasis to the determination of the vertical shear-wave velocity profile. This parameter is commonly used in geotechnical studies for classifying soil types. Among the various methods for estimating the shear-wave velocity profile, the MASW and MALW methods are the most popular because of their fast performance, low cost, and nondestructive nature. These methods are based on analyzing the dispersive properties of Rayleigh and Love waves. In surface wave methods, a correct identification of the modes is essential to avoid serious errors in building the near-surface shear-wave velocity model. Here we consider the case of higher-mode misidentification known as &ldquo;osculation&rdquo;, where the energy peak shifts at low frequencies from the fundamental to the first higher mode. This jump occurs around a well-defined frequency where the two modes get very close to each other. This problem is known to take place in complex subsurface situations, for example at inversely dispersive sites or in the presence of a strong impedance contrast, such as a soil layer resting on top of the bedrock. 
This phenomenon can cause a misleading interpretation of the dispersion curve by the operator, which is hazardous for engineering projects. In this paper we investigated the mode osculation phenomenon for both the MASW and MALW methods using synthetic and real datasets. We showed that MALW has a far better performance in the face of this problem, while it is a main drawback of the MASW method. Generally, when we encounter a low-velocity layer in the subsurface, the identification of the Rayleigh wave&rsquo;s fundamental mode (MASW method) becomes almost impossible, while at the same time the dispersion modes of Love waves (MALW method) are well separated, even in extreme conditions. In addition, we showed that performing single-station microtremor ellipticity analysis can also be quite useful: it can warn against the presence of a strong impedance contrast, it indicates the critical frequency at which mode osculation takes place, and the HVSR data can be used as a constraint in the inversion process of the surface wave data. So performing the HVSR method alongside the MASW and MALW methods can not only predict the mode osculation frequency and the presence of strong impedance contrasts, but can also help with the joint inversion of the surface wave data, resulting in a more solid Vs profile. We evaluated the performance of the proposed methods on real and synthetic seismic data, and the results were satisfying.Investigation of a suitable geometric design for the CONT14 observation network to improve the accuracy of EOPs by construction of VLBI stations in Iran
https://jesphys.ut.ac.ir/article_81508.html
Very long baseline interferometry (VLBI) has been used since the mid-1960s as a space-geodetic tool for accurately determining coordinates on the ground, determining the Earth&#039;s rotational axis with very high accuracy, and extracting important parameters related to the Earth. The basic principle of VLBI is measuring the difference between the arrival times of a radio wave at two or more antennas, which is referred to as the time delay. To achieve this, atomic clocks must be used, and the clocks at the antennas must be synchronized. Earth orientation parameters (EOP) are a set of parameters that describe irregularities in the Earth&#039;s rotation. The VLBI method can be used to derive the EOP. These parameters can be used for the transformation between the international terrestrial reference frame (ITRF) and the celestial reference frame (ICRF) and vice versa. This transformation takes place through a sequence of rotations related to precession/nutation (NUTX, NUTY), earth rotation (DUT1), and polar motion (XPO, YPO). The geophysical effects of the Earth, as well as the effects of celestial bodies such as the moon or the sun on the Earth&#039;s rotation, lead to changes in the EOP; therefore, changes in geophysical parameters of the earth can be obtained from changes in the EOP. The purpose of this study is to investigate the accuracy of the EOP after adding new observation stations to the CONT14 observation network. These observation stations are artificially constructed in Iran, and the accuracy of the EOP before and after adding the new stations to the network is investigated. The wave received from a radio source is considered a planar wave, due to the great distance of the radio sources from the stations. The wave from the quasars reaches the antennas at different times. In this study, the CONT14 session has been used. CONT sessions are among the most famous and important sessions, in which the stations collect data continuously for two weeks. 
On average, the CONT sessions take place every three years. Due to the large amount of data in these sessions, the EOP are determined with high accuracy. Given the importance of CONT sessions, we investigate the effect of constructing stations in Iran on the accuracy of the EOP in one of the CONT sessions, by adding the stations to the CONT14 observation network. Due to the high cost of constructing a VLBI observation station, and to stay close to reality, we add at most five stations to the network. The local network resulting from the five new stations covers the whole of Iran, and the locations of these five stations have been chosen arbitrarily. By analyzing the data collected in the CONT14 session, the accuracy of the EOP is obtained. After adding the new observation stations to the CONT14 network and performing the new session, the collected data are processed again and the precision of the EOP is obtained. A comparison of the precision obtained in the new mode with the precision obtained in the CONT14 session shows the degree of improvement of the EOP accuracy. In geodesy, the precision of the results is always discussed along with the results, and high-precision data are of interest to researchers because the models obtained from the data are closer to reality. The accuracy of the results can be increased by a high number of observations as well as by improving the geometry of the network. In this study, the effect of constructing one or more VLBI observation stations in Iran on the precision of the EOP was investigated. By comparing the EOP precision in all possible observation networks, we came to the conclusion that if two observation stations are constructed in Ahvaz and Mashhad and added to the CONT14 observation network, the EOP accuracy can be improved by about 10.14%.Retracking Sentinel-3A SAR waveforms to monitor the water level of a small inland water body
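Under the planar-wave approximation stated above, the VLBI time delay is simply the baseline vector projected onto the source direction, divided by the speed of light. A minimal sketch (the baseline and source direction are hypothetical, and sign conventions vary between analysis packages):

```python
C = 299_792_458.0  # speed of light, m/s

def geometric_delay(baseline_m, source_unit):
    """Extra travel time (s) of the plane wavefront between the two antennas."""
    dot = sum(b * s for b, s in zip(baseline_m, source_unit))
    return dot / C
```

For a 6000 km baseline aligned with the source direction the delay is about 20 ms; geodetic VLBI derives the EOP from such delays measured to picosecond-level precision.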
(Case study: Doroudzan Dam Reservoir, Shiraz, Iran)
https://jesphys.ut.ac.ir/article_81509.html
In inland water bodies, the water level obtained from the Level-2 data of altimetry missions is not correct. Therefore, to correct the water level measured in these areas, it is necessary to retrack the return waveforms. In this study, Level-2 and Level-1 data of the SRAL altimeter of the Sentinel-3A mission, measured in SAR mode in the period from March 2016 to November 2019, have been used to monitor the water level of the Doroudzan Dam. The threshold retracking algorithm with different thresholds has been used to retrack the waveforms in the Level-1 data. The results showed that the OCOG retracker in the L-2 data, with an RMSE of 38.23 cm and a correlation of 99.23% with the in situ gauge data, has higher accuracy in estimating the water-level time series than the other retrackers in the L-2 data from the Doroudzan dam. The Ocean retracker also has results close to those of the OCOG retracker, indicating that these two retrackers perform well in recovering water levels. After obtaining the water-level time series from the retrackers in the L-2 data and selecting the optimal Level-2 retracker, the return waveforms from the L-1 data were retracked using the threshold algorithm. Then the water-level time series for the different thresholds were obtained and compared with the in situ gauge data, which showed that the 60% threshold, with an RMSE of 37.73 cm and a correlation of 99.30%, improved the accuracy by 1.3% and increased the correlation by 0.07% relative to the optimal L-2 retracker. Also, the results showed that, especially in the period from 2017 to 2018, the difference between the water levels resulting from the retracking of the return waveforms with the optimal threshold algorithm (60%) and the in situ gauge data is less than that of the optimal L-2 retracker (OCOG). The average water level of the Doroudzan Dam from the 60% threshold was then analyzed. 
Results showed the highest growth in water level, 4.09 m, from March 6 to April 2, 2019, which corresponds to usually rainy months. The most significant decrease in the water level, 2.80 m, occurred from April 29, 2019, to May 26, 2019, which falls in usually low-rainfall months. The results also showed a slight increase in the water level of the Doroudzan Dam during the study period. Due to the challenging shape and topography of the Doroudzan Dam and its confused waveforms, high accuracy cannot be expected in this study area, either from the retrackers in the L-2 data or from the results of the waveform retracking. Therefore, the proximity of the RMSE and correlation results goes back to the shape and topography of the Doroudzan Dam reservoir. The results of this study show the high suitability of the Sentinel-3 mission for monitoring the water level of inland water bodies, which is still a challenging area for satellite altimetry. Indeed, for a better understanding of the performance of this mission, more samples need to be analyzed.The general circulation in the North Atlantic and Pacific and its relationship with development and strengthening of Azores and Hawaiian subtropical anticyclones
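A threshold retracker of the kind applied to the L-1 waveforms picks the first range gate where the power crosses a fraction q of a reference amplitude (commonly the OCOG amplitude) and interpolates for the sub-gate epoch. The sketch below is a generic textbook version with a toy waveform, not the exact Sentinel-3 SRAL processing chain:

```python
def ocog_amplitude(power):
    """OCOG amplitude: sqrt(sum P^4 / sum P^2) over the waveform gates."""
    return (sum(p ** 4 for p in power) / sum(p ** 2 for p in power)) ** 0.5

def threshold_retrack(power, q=0.60):
    """Fractional gate where power first crosses q * OCOG amplitude, or None."""
    level = q * ocog_amplitude(power)
    for i in range(1, len(power)):
        if power[i] >= level > power[i - 1]:
            # linear interpolation between the two bracketing gates
            return (i - 1) + (level - power[i - 1]) / (power[i] - power[i - 1])
    return None  # waveform never crosses the threshold
```

The retracked gate corrects the range, and hence the water level; the threshold comparison in the study amounts to repeating this for several values of q and keeping the one (60% here) that best matches the gauge.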
https://jesphys.ut.ac.ir/article_81513.html
Subtropical anticyclones are among the large-scale atmospheric centers of action in the northern hemisphere over the eastern parts of the oceans. Clockwise flow and high surface pressure are two prominent features of these systems. These systems have an annual cycle and usually achieve maximum flow and surface pressure in the summer, especially in July. Understanding the factors influencing the development and intensification of these anticyclones has attracted many researchers. One of these factors is the general circulation of the atmosphere. In this study, a climatological study of the general atmospheric circulation, including the Hadley and Walker circulations, has been performed, and their role in the development and strengthening of subtropical anticyclones has been investigated. The research has been done in three parts: 1) the mean meridional circulation, 2) the meridional circulation in the North Atlantic and Pacific, and 3) the Walker circulation in the North Atlantic and Pacific. In this study, the meridional component of the wind, the vertical velocity (omega), and the horizontal wind divergence have been used. Data at 27 pressure levels with a horizontal resolution of 0.25 &times; 0.25&deg; were extracted from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 reanalysis. The monthly means of the data were computed over 40 years, from 1979 to 2018. The mass stream function (MSF) method has been used to quantify the meridional and Walker circulations. The mean meridional circulation showed that the meridional circulation in the equinox months consists of a pair of Hadley cells in which air rises in the tropics and subsides in the subtropics, together with a solstitial cell with ascent in the outer tropics of the summer hemisphere and subsidence in the outer tropics of the winter hemisphere. 
Although the mean meridional circulation showed that mass transfer takes place in summer from the Northern Hemisphere to the Southern Hemisphere, so that the Hadley circulation alone could not explain the maximum activity of the subtropical anticyclones, the meridional circulation in smaller cross-sections in the eastern Atlantic and Pacific showed that the Hadley cells play a vital role in mass transfer to the subtropics and mid-latitudes. The mean Walker circulation (20&ndash;40&deg; N) showed that the source of this circulation is only the latent heat released over the waters, and that the lands west of the oceans have no role in mass transfer to the east. Westerly and southwesterly winds form the mass transfer in the Walker circulation to the northeast of the oceans. Heating in northwestern Africa and North America is another phenomenon that plays a role in subsidence over the North Atlantic and Pacific. The subsidence induced by heating over the African lands is much more severe than over North America, which may depend on the climate and extent of these areas. Therefore, as a result of this research, it can be said that three processes, the Hadley circulation, the Walker circulation, and heating over the lands adjacent to the eastern oceans, are effective in mass transfer and subsidence in the eastern Atlantic and Pacific. These conditions form strong northerly winds over the eastern oceans and trade winds in the tropics, and effectively develop and strengthen the subtropical anticyclones.Projected consecutive dry and wet days in Iran based on CMIP6 bias‐corrected multi‐model ensemble
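The mass stream function used above to quantify these circulations is the zonally averaged meridional mass flux integrated in pressure, psi(lat, p) = (2 pi a cos(lat) / g) * integral of [v] dp. A minimal trapezoid-rule sketch; the wind profile in the example is a hypothetical uniform flow, not ERA5 data:

```python
import math

A_EARTH = 6.371e6  # Earth radius, m
G0 = 9.81          # gravitational acceleration, m/s^2

def mass_stream_function(lat_deg, p_levels_pa, v_mean):
    """Cumulative psi (kg/s) integrated downward in pressure, trapezoid rule."""
    coef = 2.0 * math.pi * A_EARTH * math.cos(math.radians(lat_deg)) / G0
    psi = [0.0]
    for k in range(1, len(p_levels_pa)):
        dp = p_levels_pa[k] - p_levels_pa[k - 1]
        psi.append(psi[-1] + coef * 0.5 * (v_mean[k] + v_mean[k - 1]) * dp)
    return psi
```

Positive psi in the Northern Hemisphere corresponds to a thermally direct Hadley-type cell; the same integral along a longitude band gives the Walker-circulation counterpart discussed above.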
https://jesphys.ut.ac.ir/article_81514.html
Climate change, by altering precipitation patterns around the world, can cause significant changes in the frequency, intensity, and duration of precipitation events. In the context of climate change, and with the increase of extreme climate events, irreparable consequences are imposed on the environment and the economy. Therefore, an appropriate understanding of the frequency, intensity, and spatial distribution of these extreme events is necessary in order to take a fundamental step toward preventing the damage they cause. The purpose of this study is to analyze the characteristics of consecutive dry/wet days during 1975-2014 and 2021-2100 based on the output of CMIP6 models. In this regard, the CMIP6 models have been evaluated against gauge precipitation data in Iran. In this study, historical precipitation (1975-2014) and scenario-based output of CMIP6 models under shared socioeconomic pathways (SSPs) in two future periods (2021-2060 and 2061-2100) were used. The basic statistics r, RMSE, and MBE, and the receiver operating characteristic (ROC), were used to validate the precipitation output of the selected models (GFDL-ESM4, IPSL-CM6A-LR, MPI-ESM1-2-HR, MRI-ESM2-0, UKESM1-0-LL). Then, consecutive dry and wet days were calculated using the CDD and CWD indices of the Expert Team on Climate Change Detection and Indices (ETCCDI). After examining each individual model, an ensemble model was built with the independent weighted mean (IWM) method. The results showed that among the five CMIP6 models, the IPSL-CM6A-LR model has the largest underestimation and UKESM1-0-LL the largest overestimation of Iran precipitation. The average precipitation biases over the whole country were calculated as 2.56 mm for GFDL-ESM4, 2.29 mm for IPSL-CM6A-LR, 2.89 mm for MPI-ESM1-2-HR, 2.18 mm for MRI-ESM2-0, and 2.53 mm for UKESM1-0-LL. The skill score improved significantly after applying the multi-model ensemble (MME). 
Consecutive dry days in Iran will increase by a maximum of 26.4 days under the SSP5-8.5 scenario in the period 2061-2100 for the Caspian Sea basins and Lake Urmia. In contrast, consecutive wet days will decrease in these two basins. Validation results for the period 1975-2014 showed that, compared to observations, the CMIP6 models have a high performance in estimating precipitation in Iran. However, despite the uncertainties in precipitation change, the CMIP6 results provide evidence that the anomaly of consecutive dry and wet periods is an indicator of short-term droughts under increasing climate change conditions. Consecutive dry days will increase significantly in the north and northwest of Iran in the future. The maximum changes in the CDD and CWD indices are observed under the SSP5&ndash;8.5 scenario, while the lowest frequency for both indices occurs under the SSP1&ndash;2.6 scenario. Examination of the CDD and CWD anomalies showed that even in the optimistic scenario (SSP1-2.6), drought responses to climate change are significant. Consecutive dry periods are increasing in most of the northern, northwestern, and northeastern regions of Iran. It is urgent to consider these changes in the hydrological cycle as a tool to improve water management, especially in the northern and northwestern regions of Iran. Also, in some areas, such as the southeast and the coasts of the Persian Gulf, there is a significant decrease in consecutive dry periods, which indicates an increase in precipitation on seasonal and interannual scales in the future.Calculating magnetic activity cycle of M-type dwarf stars using GLS technique and index-Hα: Proxima Centauri
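The ETCCDI indices used above are simple run lengths over a daily precipitation series: CDD is the longest run of days with precipitation below 1 mm, and CWD the longest run of days at or above 1 mm. A minimal sketch:

```python
def longest_run(precip_mm, wet, threshold=1.0):
    """Longest run of dry (< threshold) or wet (>= threshold) days."""
    best = current = 0
    for p in precip_mm:
        hit = (p >= threshold) if wet else (p < threshold)
        current = current + 1 if hit else 0
        best = max(best, current)
    return best

def cdd(precip_mm):
    """ETCCDI consecutive dry days (precipitation < 1 mm)."""
    return longest_run(precip_mm, wet=False)

def cwd(precip_mm):
    """ETCCDI consecutive wet days (precipitation >= 1 mm)."""
    return longest_run(precip_mm, wet=True)
```

Applied per year to each grid cell of the bias-corrected ensemble, these two numbers yield the CDD/CWD maps whose projected changes are summarized above.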
https://jesphys.ut.ac.ir/article_81515.html
The study of the existence of life or a habitable zone somewhere in the universe, beyond the Earth, has been one of the important lines of research in astronomy and astrophysics in the last few decades. Countless studies have been done, and are being done, theoretically and experimentally. Proxima Centauri, with a visual magnitude of 11.01 and at a distance of 1.3 pc, is the closest star to Earth after the Sun and is especially important for our knowledge of very cool stars. This M5.5V spectral-type star is the faintest member of the Alpha Centauri ternary star system, located about 1400 astronomical units closer to Earth than the other members. The physical characteristics of this star, including its radius, mass, rotational period, and age, which is about 4.85 billion years, are well determined. Despite its old age, Proxima Centauri is an active star, and like the Sun it has an activity cycle (the activity cycle of the Sun is about 11 years). Generally, M-type stars are hard to study due to their optical faintness, but studying Proxima Centauri can improve our knowledge of very cool stars, as its proximity lets us observe it with great accuracy. Moreover, its similarity to the Sun, the possibility of a system of planets around it, and consequently the study of life on these planets are of particular importance. This paper aims to determine the activity cycle of Proxima Centauri using the Hα spectral line and to evaluate the generalized Lomb-Scargle periodogram technique (GLS) for determining the period of active dwarf stars, including Proxima Centauri. The GLS is an extension of the Lomb-Scargle periodogram that takes the measurement errors into account and is more suitable for time series with a non-zero average. The GLS fits a sinusoidal model to the time series and finds the power spectrum over the frequencies. 
We consider a given periodogram peak, derived from the GLS, significant when it exceeds the one percent &ldquo;false alarm probability&rdquo; (FAP) level, which means there is 99% confidence that it is real and could not be simulated by Gaussian noise. FAP levels are calculated by performing random permutations of the data with the same observation times. For this purpose, we used HARPS spectroscopic data over the period from 2004 to 2017. HARPS, the High Accuracy Radial velocity Planet Searcher at the European Southern Observatory La Silla 3.6 m Cassegrain telescope, is dedicated to the discovery of extrasolar planets. It is a fibre-fed high-resolution echelle spectrograph. This instrument is used to measure radial velocities accurately, to the order of 1 m/s, in extrasolar planet research. Its spectral range is 378-691 nm and its resolution is 115,000. Therefore, from this point of view, we can say that our analysis is more accurate than others. The magnetic activity period of Proxima Centauri was obtained as 2349 days, which is in good agreement with the results obtained from other methods. Therefore, our results confirm the efficiency and superiority of the generalized Lomb-Scargle periodogram technique in determining the period of active cool dwarf stars.Evaluating CRUST1.0 crustal model efficiency for Moho depth estimation in Middle East region
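A minimal implementation of the generalized (floating-mean, error-weighted) Lomb-Scargle power of the kind described above. The synthetic time series in the usage below is an assumption for illustration, not the HARPS Hα index data:

```python
import math

def gls_power(t, y, sigma, freq):
    """Normalized GLS power (0..1) at one trial frequency (cycles per unit of t)."""
    w = [1.0 / s ** 2 for s in sigma]
    W = sum(w)
    w = [wi / W for wi in w]  # normalized weights
    omega = 2.0 * math.pi * freq
    c = [math.cos(omega * ti) for ti in t]
    s = [math.sin(omega * ti) for ti in t]
    Y = sum(wi * yi for wi, yi in zip(w, y))
    C = sum(wi * ci for wi, ci in zip(w, c))
    S = sum(wi * si for wi, si in zip(w, s))
    # weighted (co)variances with the mean subtracted (the "floating mean" part)
    YY = sum(wi * yi * yi for wi, yi in zip(w, y)) - Y * Y
    YC = sum(wi * yi * ci for wi, yi, ci in zip(w, y, c)) - Y * C
    YS = sum(wi * yi * si for wi, yi, si in zip(w, y, s)) - Y * S
    CC = sum(wi * ci * ci for wi, ci in zip(w, c)) - C * C
    SS = sum(wi * si * si for wi, si in zip(w, s)) - S * S
    CS = sum(wi * ci * si for wi, ci, si in zip(w, c, s)) - C * S
    D = CC * SS - CS * CS
    return (SS * YC * YC + CC * YS * YS - 2.0 * CS * YC * YS) / (YY * D)

def best_frequency(t, y, sigma, freqs):
    """Trial frequency with the highest GLS power."""
    return max(freqs, key=lambda f: gls_power(t, y, sigma, f))
```

A FAP estimate of the kind used in the study would repeat this search on many random permutations of y over the same observation times and count how often the shuffled data beat the real peak.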
https://jesphys.ut.ac.ir/article_81523.html
Study of the Moho in the Middle East and the surrounding region is of great importance, because the area has a rich geological history and contains parts of the Eurasian, Indian, African, and Arabian plates as the main plates, along with several small plates. Given the complexity and diverse tectonic structures of the Middle East, it is important to use a method that yields a Moho depth model consistent with these structures. In this paper we compare the Moho depths obtained from two different methods: 1) gravity data inversion using spherical prisms (tesseroids), and 2) Moho depth evaluation using tesseroids and the CRUST1.0 crustal model. Determining the Moho depth from gravity data is a nonlinear inverse problem. Given the extent of the study area, we use an efficient inversion method (Uieda&rsquo;s inversion method) that accounts for the Earth&rsquo;s curvature by using spherical prisms instead of rectangular prisms. In this method one minimizes the cost function &Gamma;(p) = ϕ(p) + &mu;&theta;(p), where ϕ(p) is the data-fidelity term, &theta;(p) is the penalty (regularization) term, and &mu; is the regularization parameter. In addition to the Moho depth, three hyperparameters must be estimated, namely the regularization parameter (&mu;), the Moho reference level (h_n), and the density contrast (∆&rho;); they are estimated in two steps during the inversion by a holdout cross-validation method. To estimate the relief of the Moho from gravity data, one must first obtain the gravitational effect of the anomalous density distribution attributed to the Moho relief; this requires eliminating all other gravity effects from the observed data. In the first method, tesseroid modeling is used to calculate the gravity effect of topography and sediments, which is removed using global topography and crustal models.
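The cost function &Gamma;(p) = ϕ(p) + &mu;&theta;(p) and the holdout selection of &mu; can be illustrated on a toy linear problem. This is a generic Tikhonov-style sketch, not Uieda's actual tesseroid inversion: the forward operator, noise level, smoothness penalty, and &mu; grid below are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
p_true = np.sin(np.linspace(0, 3, n))             # toy "Moho relief"
A = rng.normal(size=(n, n)) / np.sqrt(n)          # toy linear forward operator
g = A @ p_true + rng.normal(0, 0.02, n)           # toy observed gravity

D = np.diff(np.eye(n), axis=0)                    # first-difference penalty matrix

def invert(A_obs, g_obs, mu):
    """Minimize Gamma(p) = ||A p - g||^2 + mu*||D p||^2 via the normal equations."""
    return np.linalg.solve(A_obs.T @ A_obs + mu * D.T @ D, A_obs.T @ g_obs)

# Holdout cross-validation for mu: invert with half the data, score the data
# misfit phi on the held-out half, keep the mu with the smallest holdout misfit.
idx = rng.permutation(n)
train, hold = idx[: n // 2], idx[n // 2:]
scores = {mu: np.sum((A[hold] @ invert(A[train], g[train], mu) - g[hold]) ** 2)
          for mu in 10.0 ** np.arange(-6, 3)}
best_mu = min(scores, key=scores.get)
print(f"selected mu = {best_mu:g}")

# Stronger regularization trades data fit for smoothness of the recovered relief:
p_lo, p_hi = invert(A, g, 1e-6), invert(A, g, 1e2)
phi_lo = np.sum((A @ p_lo - g) ** 2)              # small misfit, rough solution
phi_hi = np.sum((A @ p_hi - g) ** 2)              # larger misfit, smooth solution
```

The same trade-off drives the real inversion: too small a &mu; fits the gravity noise into spurious Moho relief, too large a &mu; over-smooths it, and the holdout score picks the compromise.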
In the second method, we first extract the Moho depth over the study region from the CRUST1.0 model, evaluate the gravity effect arising from this anomalous Moho, and then use the inversion method to estimate the Moho depth from the CRUST1.0 model. According to the results of the first method, the minimum Moho depth is about 12 km in parts of the Indian Ocean and the maximum depth is about 54 km in the west of the Tibetan plateau, which is in accordance with the plate boundaries and correlates well with the prominent tectonic features of the Middle East region. The Moho depth obtained from the second method varies between 7.5 and 49 km, where the minimum depth occurs in parts of the Indian Ocean and the maximum depth appears in parts of the Zagros in Iran. Comparing the results of the two methods demonstrates the acceptable performance of the adapted inversion procedure and of the use of spherical prisms, but the second method failed to estimate an acceptable Moho depth, especially at the divergent boundaries in the Red Sea, the Gulf of Aden, and the Indian Ocean. The results indicate that the CRUST1.0 model, at least over an area of large extent, is not a suitable model for gravity inversion and Moho depth estimation.
Troposphere Electromagnetic Intensification in Generating Precipitation
https://jesphys.ut.ac.ir/article_81537.html
Decreased precipitation and water scarcity have been among the important challenges in most parts of Iran in recent years, and they call for a cost-effective solution based on advanced technical knowledge and equipment. To improve meteorological conditions with modern technologies, one can use high-voltage-injection air-ionization equipment, which can efficiently increase the concentration of cloud condensation nuclei and thereby help generate clouds. Recent theoretical and experimental work suggests that a charged atmosphere has a lower nucleation barrier and also helps stabilize embryonic particles. This allows nucleation to occur at lower vapor concentrations, and it shows that charged particles and molecular clusters condensing around natural air ions can grow significantly faster than the corresponding neutral clusters. The theoretical dynamic-locating model of injection also indicates that the nucleation rate of particles in non-charged regions (without injection) is limited by the ion production rate from other sources, such as cosmic rays. Thus, a stable charged-particle concentration produced by injection, resulting from condensation and growth, can survive long after the ion injection and ionization. A theoretical study of the dynamic-locating injection model establishes a relationship between the electromagnetic changing-point region of ionization and precipitation microphysics. The mechanism of tropospheric ionization and the properties of the Earth&rsquo;s electromagnetic field cannot be excluded, and there are established electrical effects on precipitation microphysics. Building on the relationship between changing points and ion injection, the observations are extended to the realm of electromagnetic-field microphysics by exploring this model. The injection produces positive/negative ions and free electrons; many of these ions are quickly lost to ion-ion recombination.
Some of the ions escape recombination, although ion concentrations are often reduced because the ionization produced by the electric field is weakened by the dust storms and winds generated at fixed changing points. As we show in this article, the dynamic locating of injection in the troposphere is very important for providing additive effects that increase cloud concentrations and generate precipitation, which is the main achievement of this analytical-simulation work. In this analytical-simulation study, based on real and experimental data from the western and southwestern regions of Iran, we first review the background results obtained from the injection process and its effect on generating clouds in the troposphere. We then obtain the results for the same data with the theoretical effect of dynamic locating, simulated with injection at the electromagnetic changing points. The results for the previous data, assuming maximization of utility, have been recalculated and compared. The injection results are optimized by a dynamic-locating technique that affects the utility indices of the maximum electromagnetic changing field between the troposphere and the ground. The increased generation of rain clouds, the maximization of their concentrations, and the increase in local precipitation achieved by the dynamic-locating method at the injection site, together with the optimal operation of the equipment, are investigated. The theoretical model presented shows that dynamic locating of the injection, by increasing the ionizing effect, leads to a 15-20% increase in precipitation, an 11% decrease in temperature, and a 10% increase in humidity.
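The ion budget invoked in this abstract, production balanced against ion-ion recombination, can be put in rough numbers. The steady state of dn/dt = q − αn² is n = √(q/α); the recombination coefficient below is a typical textbook value and the production rates are assumptions for illustration, not measurements from this study.

```python
import numpy as np

alpha = 1.6e-6        # ion-ion recombination coefficient, cm^3/s (typical value)
q_background = 10.0   # ion-pair production, e.g. cosmic rays, cm^-3 s^-1 (assumed)
q_injection = 1.0e4   # enhanced production near the injector (assumed)

def steady_state(q, alpha=alpha):
    """Equilibrium of dn/dt = q - alpha * n^2, i.e. n = sqrt(q / alpha)."""
    return np.sqrt(q / alpha)

n_bg = steady_state(q_background)     # background small-ion concentration
n_inj = steady_state(q_injection)     # concentration under injection

# After injection stops, recombination decays as n(t) = n0 / (1 + alpha*n0*t);
# a characteristic survival time for the small ions is 1/(alpha*n0).
tau = 1.0 / (alpha * n_inj)
print(f"background: {n_bg:.0f} /cm^3, injected: {n_inj:.0f} /cm^3, tau ~ {tau:.1f} s")
```

The square-root dependence is the point: a thousand-fold increase in production raises the ion concentration only about thirty-fold, and the small ions themselves recombine within seconds, which is why the abstract's longer-lived effect rests on the charged condensation nuclei the ions seed rather than on the ions themselves.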