Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics (ISSN 2538-371X), Vol. 47, No. 1, 2021-04-21

Residual Static Correction Using Tunable Q-Factor Discrete Wavelet Transform
Pages 1–12, article 79568, DOI 10.22059/jesphys.2021.303296.1007214, FA
Zahra Sadeghi (M.Sc. Student, Department of Earth Sciences, Faculty of Sciences and Modern Technologies, Graduate University of Advanced Technology, Kerman, Iran); Ali Reza Goudarzi (Associate Professor, Department of Earth Sciences, Faculty of Sciences and Modern Technologies, Graduate University of Advanced Technology, Kerman, Iran; ORCID 0000-0003-2944-2681)
Journal Article, 2020-06-28

The derivation of static reference corrections is generally based on a fairly simple geological model of the near surface. The lack of detailed information near the surface leads to inaccuracies in this model and, therefore, in the static corrections. Residual static corrections are designed to correct small inaccuracies in the near-surface model. Their application should improve the final processed section compared with one to which only static corrections are applied. For example, if the final stacked section is to be inverted to produce an acoustic impedance section, it is important that the amplitude variations along the section represent the changes in reflection coefficient as closely as possible. This is unlikely to be the case if small residual static errors are present. In addition, static reference corrections are not a unique set of values, because a change of reference results in a different set of corrections. Due to variations in the Earth's surface and in the velocities and thicknesses of the near-surface layers, the shape of the travel-time hyperbola changes. These deviations, called statics, result in misalignments and lost events in the CMP gather, so they must be corrected during processing. After correcting the long-wavelength statics, some short-wavelength anomalies remain.
These “residual” statics are due to variations not accounted for in the low-velocity layer. The estimation of residual statics in complex areas is one of the main problems in seismic data processing, and the results of this processing step affect the quality of the final reconstructed image and of the interpretation. Residual statics can be estimated by different methods, such as travel-time inversion, stack-power maximization, and sparsity maximization, which are based on a surface-consistent assumption. An effective method must be able to denoise the seismic signal without losing useful data and has to function properly in the presence of random noise. In the frequency domain it is possible to separate the noise from the main data, so denoising in the frequency domain can be useful. Besides, the transform domains are data-driven and require no information about the subsurface. Frequency-domain methods generally use the Fourier transform, which is time-consuming and has certain limits. Wavelet-transform methods generally provide a faster procedure than the Fourier transform. We have found that this type of wavelet transform can provide a data-oriented method for analyzing and synthesizing data according to the oscillatory behavior of the signal. The Tunable Q-Factor Discrete Wavelet Transform (TQWT) is a new method that provides a reliable framework for residual static correction. In this transform, the quality factor (Q), which relates to the particular oscillatory behavior of the data, can be adjusted by the user, and this characteristic leads to a good match with the seismic signal. The Q factor of an oscillatory pulse is the ratio of its center frequency to its bandwidth.

TQWT is implemented by a two-channel filter bank. The low-pass filter removes the high-frequency components, which carry the effect of the residual statics.
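The Q definition and the alignment step described above can be sketched numerically. Below is a minimal pure-Python illustration, not the paper's implementation: the spike pilot trace and the cross-correlation picker are illustrative assumptions.

```python
def q_factor(center_freq_hz, bandwidth_hz):
    # Q of an oscillatory pulse: center frequency divided by bandwidth.
    return center_freq_hz / bandwidth_hz

def residual_static_shift(trace, pilot, dt, max_lag=20):
    """Time shift (s) that best aligns `trace` with a smoothed pilot
    trace, picked from the peak of their cross-correlation."""
    n = len(pilot)
    best_lag, best_xc = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        xc = sum(trace[i] * pilot[i - lag]
                 for i in range(max(0, lag), min(n, n + lag)))
        if xc > best_xc:
            best_lag, best_xc = lag, xc
    return best_lag * dt

# Toy example: a spike delayed by 5 samples at dt = 4 ms, i.e. a 20 ms static.
dt = 0.004
pilot = [0.0] * 100; pilot[40] = 1.0
trace = [0.0] * 100; trace[45] = 1.0
print(q_factor(30.0, 10.0))                     # 3.0
print(residual_static_shift(trace, pilot, dt))  # 5 samples * 4 ms
```

The picked shift would then be applied (with opposite sign) to each trace, which is the per-trace correction step the abstract describes.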
After filtering, the data become smoother; the applied correction gives the time shift for the residual static correction. This time shift must be applied to all traces. Applying this method to synthetic and real data shows a good correction of the residual statics.

https://jesphys.ut.ac.ir/article_79568_19ff1c4091227292b106584ea647f32f.pdf

Estimation of average shear (V_sz) and compressional (V_pz) wave velocities using the wavelength-depth relation obtained from surface-wave analysis
Pages 13–26, article 79635, DOI 10.22059/jesphys.2021.305097.1007229, FA
Sasan Ghavami (M.Sc. Student, Department of Earth Physics, Institute of Geophysics, University of Tehran, Tehran, Iran); Hamid Reza Siahkoohi (Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, Tehran, Iran)
Journal Article, 2020-07-28

Shear wave velocity (V_s) and its average based on travel time from the surface to a depth of 30 m, known as V_s30, are often used in engineering projects to determine soil parameters, evaluate the dynamic properties of the soil, and classify it. This quantity is directly related to an important property of soil and rock, namely their shear strength. The average shear wave velocity is used in geotechnics to assess soil liquefaction and in earthquake engineering to determine the soil period, the site amplification coefficient, and the attenuation. Usually, the average shear wave velocity is obtained from a shear-wave refraction survey, PS logging, or a shear-wave velocity profile obtained by inversion of the experimental dispersion curve of surface waves. Surface-wave analysis is one of the methods for estimating the shear-wave velocity profile, but inverting the dispersion curve is a time-consuming part of this process, and the inverse problem has a non-unique solution. This becomes more evident when the goal is to determine a two- or three-dimensional shear-wave velocity model.
This study provides a method to estimate the average shear wave velocity (V_sz) as well as the average compressional wave velocity (V_pz) directly from the dispersion curves of surface waves, without the need to invert the dispersion curves. For this purpose, we exploit the relation between surface-wave wavelength and investigation depth. Estimating the wavelength-depth relationship requires access to a shear-wave velocity model (a reference model) in the study area, which can be obtained from well data, refraction seismic profiles, or by inverting one of the experimental surface-wave dispersion curves.

The V_sz is then estimated directly from the dispersion curve using the wavelength-depth relationship. In addition, because of the dependence of the estimate on Poisson's ratio and the sensitivity of the estimated wavelength-depth relationship to this ratio, we estimate the Poisson's ratio profile and the average compressional velocity (V_pz) for the study area from the V_sz.

For a given range of Poisson's ratio values, theoretical dispersion curves of synthetic earth models are determined by forward modeling. Then, using these dispersion curves and the estimated average shear wave velocity of the model, the wavelength-depth relationship corresponding to each Poisson's ratio is determined. In the next step, by comparing the experimental and estimated wavelength-depth relationships, one can estimate the Poisson's ratio at each depth. Then the average compressional wave velocity (V_pz) is estimated using the V_sz and the Poisson's ratios.

We evaluated the performance of the proposed method by applying it to both a real MASW seismic data set from the USA and synthetic seismic data. The synthetic data were generated over a synthetic earth model and showed that the average shear and compressional wave velocities are estimated with an uncertainty of less than 10% in a layered earth model with very large lateral variations in shear and compressional wave velocities.
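The two conversions involved, time-averaging Vs over a layer stack and passing from Vs to Vp through Poisson's ratio, follow standard isotropic-elasticity relations. A small sketch with made-up layer values (the thicknesses and velocities below are illustrative, not data from the paper):

```python
import math

def average_vs(thicknesses_m, vs_layers_ms):
    """Time-averaged shear velocity over a layer stack:
    total depth divided by total vertical travel time."""
    depth = sum(thicknesses_m)
    travel_time = sum(h / v for h, v in zip(thicknesses_m, vs_layers_ms))
    return depth / travel_time

def vp_from_vs(vs, poisson):
    # Vp/Vs ratio for an isotropic elastic medium from Poisson's ratio.
    return vs * math.sqrt(2.0 * (1.0 - poisson) / (1.0 - 2.0 * poisson))

# Two layers down to 30 m: Vs30 = 30 / (10/200 + 20/400) = 300 m/s.
print(average_vs([10.0, 20.0], [200.0, 400.0]))  # ≈ 300 m/s
# For Poisson's ratio 1/3 the Vp/Vs ratio is 2, so Vp ≈ 600 m/s.
print(vp_from_vs(300.0, 1.0 / 3.0))              # ≈ 600 m/s
```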
According to the results, the proposed method can be used to exploit the non-destructive advantages of the surface-wave method in engineering, geotechnical, and earthquake-engineering projects to obtain the average shear wave velocity V_sz.

https://jesphys.ut.ac.ir/article_79635_345a1da72c0f3914abb432b7cfb14703.pdf

Evaluation of the Precise Point Positioning method with different dual-frequency combinations of Galileo and BeiDou using the PPPteh software
Pages 27–40, article 79569, DOI 10.22059/jesphys.2021.305671.1007233, FA
Kamal Parvazi (Ph.D. Student, Department of Surveying and Geomatics Engineering, Faculty of Engineering, University of Tehran, Tehran, Iran); Saeed Farzaneh (Assistant Professor, Department of Surveying and Geomatics Engineering, Faculty of Engineering, University of Tehran, Tehran, Iran); Abdo-el Reza Safari (Professor, Department of Surveying and Geomatics Engineering, Faculty of Engineering, University of Tehran, Tehran, Iran)
Journal Article, 2020-07-12

Due to advances in global navigation satellite systems, satellites can now transmit signals on several different frequencies. For this reason, different combinations of these frequencies can be considered to form ionosphere-free code and phase observations. In this study, the aim is to evaluate the Precise Point Positioning (PPP) method using combinations of different frequencies. For this purpose, the PPPteh software, developed by the authors and written in MATLAB, is used. PPPteh can process observations from the four satellite systems GPS, GLONASS, BeiDou, and Galileo to perform precise point positioning. The software supports all possible dual-frequency ionosphere-free combinations for all the different frequencies.
There are three modes for combining different frequencies for the GPS system, ten modes for the Galileo system, and three modes for the BeiDou system to form ionosphere-free observations. To evaluate the precise point positioning method, four steps have been considered in terms of position accuracy and convergence time: 1) first, use the dual-frequency GPS observations and determine the position; 2) combine the GPS and Galileo systems and select the best combination model; 3) combine the GPS and BeiDou systems and select the best combination; and 4) finally, determine the position using all three systems with the best frequency model and compare the results. Based on the results, one best combination each was selected for the Galileo and BeiDou systems for use in precise point positioning. Regarding the precise point positioning itself, adding BeiDou observations reduced the convergence time and, in most cases, increased the three-dimensional accuracy of the coordinate components; the selected BeiDou signal combination has better quality than the other two combinations. The same process was followed for the Galileo observations, and using Galileo observations in combination with GPS likewise increased accuracy and reduced convergence time; the selected Galileo signal combination performs better than the other three combinations. Finally, by combining all three systems with the frequency model selected in the first stage, it was concluded that the combination of the three satellite navigation systems GPS, Galileo, and BeiDou yields a significant improvement, both in reducing convergence time and in increasing the three-dimensional accuracy of the coordinates.
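The dual-frequency ionosphere-free combination that all of these modes form can be sketched with the standard first-order formula; the geometric range and ionospheric delay values below are illustrative, not data from the study.

```python
def iono_free(obs1, obs2, f1_hz, f2_hz):
    """First-order ionosphere-free combination of two observations
    (code or phase, in metres) taken on frequencies f1 and f2."""
    a = f1_hz ** 2
    b = f2_hz ** 2
    return (a * obs1 - b * obs2) / (a - b)

# GPS L1/L2 carrier frequencies; only their ratio matters here.
F1, F2 = 1575.42e6, 1227.60e6
# A geometric range plus a first-order ionospheric delay that scales
# as 1/f^2: the combination recovers the geometric part.
rho, I1 = 20_000_000.0, 5.0            # metres
P1 = rho + I1
P2 = rho + I1 * (F1 / F2) ** 2
print(iono_free(P1, P2, F1, F2))       # ≈ 20,000,000 m
```

The same formula applies to any frequency pair of Galileo or BeiDou, which is what makes the combination-selection study possible.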
Also, the error (the difference between the estimated coordinates and the final station coordinates from the IGS file), when the Galileo and BeiDou systems are used in combination with GPS, differs noticeably both in convergence and in coordinate accuracy. Combining all three systems increases accuracy and reduces convergence time. In dual combination with GPS, however, using Galileo observations gives higher accuracy as well as a shorter convergence time. Therefore, choosing the right signals to form ionosphere-free observations in precise point positioning, as well as combining different observations with the correct weight for each signal in combination with GPS, can meet the user's needs in terms of accuracy and convergence.

https://jesphys.ut.ac.ir/article_79569_e077835c159613e65f8d97546ee63def.pdf

Investigation of the near-field and directivity effects in earthquake hazard analysis studies: a case study of the Doroud fault
Pages 41–57, article 79634, DOI 10.22059/jesphys.2021.307079.1007236, FA
Behzad Maleki (M.Sc. Graduate, Department of Earth Physics, Institute of Geophysics, University of Tehran, Tehran, Iran); Habib Rahimi (Associate Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, Tehran, Iran); Mohammad Reza Hosseini (M.Sc. Student, Department of Earth Physics, Institute of Geophysics, University of Tehran, Tehran, Iran)
Journal Article, 2020-08-03

In this study, considering the location of the city of Doroud in the area near the seismically active strike-slip Doroud fault, the near-field and rupture-directivity effects have been investigated in seismic hazard analysis studies.

The Doroud fault is located near the cities of Doroud and Boroujerd, in western Iran.
Doroud and Boroujerd are among the important agricultural cities of Iran and, owing to the pristine nature of these areas, have always been of interest to tourists. The micro-earthquakes recorded in this area indicate the activity of the Doroud fault system. In order to prevent possible earthquake damage in this area, seismicity studies that estimate ground acceleration while accounting for site effects can be useful for strengthening civil structures.

Abrahamson (2000) and Somerville et al. (1997) were among the first researchers to establish studies on this basis, and the relationships and methods they proposed remain the most widely accepted for applying the directivity effect. These researchers considered two parameters, the angle and the ratio of ruptured fault length, as direct factors in the directivity effect and examined the results for the resulting acceleration spectrum. The directivity effect can lead to the formation of long-period pulses in the ground motion, and some proposed models (e.g., Somerville et al., 1997) can quantify this effect in earthquake hazard analysis with deterministic and probabilistic approaches (Abrahamson, 2000). In this study, seismic hazard has been investigated, compared, and evaluated by considering the effects of the Doroud fault at different spectral periods and for different return periods, both with and without the directivity effect.

Near-field and directivity effects can lead to long-period pulses in the ground motion, which matter for long-period structures such as bridges and dams near faults with high activity rates. Including directivity effects in attenuation relationships, for both deterministic and probabilistic approaches, can have a great impact on the results of a realistic seismic hazard analysis.
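The return periods used in probabilistic hazard analysis map to exceedance probabilities under the usual Poisson occurrence assumption. A quick sketch of that standard arithmetic (not code from the study):

```python
import math

def exceedance_probability(return_period_yr, exposure_yr=50.0):
    """Probability of at least one exceedance during the exposure time,
    assuming Poisson-distributed occurrences with rate 1/return_period."""
    return 1.0 - math.exp(-exposure_yr / return_period_yr)

# The familiar design levels: a 475-year return period corresponds to
# about 10% in 50 years, and 2475 years to about 2% in 50 years.
print(round(exceedance_probability(475.0), 3))   # 0.1
print(round(exceedance_probability(2475.0), 3))  # 0.02
```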
The Doroud fault is one of the most important faults in Iran, with a history of large earthquakes in the early instrumental period; given its strike-slip mechanism, it can intensify the strong-motion parameters at long periods in the city of Doroud and consequently cause serious damage to long-period structures in this area.

In this study, the strong ground motion parameters in probabilistic earthquake hazard analysis have been estimated by applying directivity for the Doroud fault area. In addition, by examining the disaggregation of the earthquake hazard, the effect of directivity on the contributions of distance and magnitude to the strong-motion estimate has been evaluated. For short and long return periods, the directivity effect on the strong motion at different spectral periods has been estimated and evaluated by the method of Somerville and Abrahamson. The estimated acceleration is calculated and evaluated for three return periods, 50, 475, and 2475 years, and at spectral periods of 0.75, 1, 2, 3, and 4 s. The strong-motion parameter increased with both the return period and the spectral period, such that the largest acceleration increase due to directivity (17.16%) was obtained for the 2475-year return period at the 4-second spectral period.

https://jesphys.ut.ac.ir/article_79634_02974662665fc0934cc3f9b16ad789f4.pdf

Determining the elastic thickness of the lithosphere in the Zagros Mountains using the admittance function
Pages 59–75, article 79582, DOI 10.22059/jesphys.2021.309605.1007243, FA
Samira Ghalehnovi (Ph.D. Student, Department of Earth Physics, Institute of Geophysics, University of Tehran, Tehran, Iran); Vahid Ebrahimzadeh Ardestani (Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, Tehran, Iran)
Journal Article, 2020-09-09

The Zagros orogen is one of the most active orogenic belts, extending approximately 2000 km from the Anatolian fault in eastern Turkey to the Minab fault in southern Iran.
Given the importance of this region and the essential role of elastic thickness in controlling the rate of deformation under applied loads, we have determined Te in the Zagros fold-and-thrust belt. The lithosphere's elastic thickness (Te) is a convenient measure of the flexural rigidity, which is defined as the resistance to bending under applied loads. <br />To determine the elastic thickness of the lithosphere, the spectral admittance function is applied: we use the load deconvolution of the admittance function between free-air gravity and topography data to estimate Te. Free-air anomalies with a five-arc-minute resolution are used in this study. <br />In flexural isostatic studies, gravity and topography data are compared with theoretical models to estimate several parameters of the lithosphere. In the simplest model, a plate is flexed by a surface load, and the magnitude of the resulting deflection is governed by Te. <br />The lithosphere is modeled using random fractal surfaces as the initial surface and subsurface loads applied to it, and the post-flexural gravity and topography are determined. Based on these new fields, the predicted admittance function is computed. Finally, the best-fitting Te is the one that minimizes the misfit between the observed and predicted functions. Additionally, the misfit is weighted by the jackknife error estimated for the observed admittance. <br />The accuracy of the method is checked through synthetic modeling. Two fractal surfaces are used as the initial surface and subsurface loads applied to the lithosphere. After calculating the corresponding gravity and topography data by the load deconvolution method, the observed and predicted admittances are estimated, and the best-fitting Te is obtained by minimizing the misfit between them. 
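The best-fitting-Te search just described can be sketched as a grid search. The sketch below uses only the simplest end-member named in the text (a plate flexed by surface loading, no subsurface load or layered crust), with assumed elastic constants, densities, and Moho depth; the paper's actual scheme deconvolves combined surface and subsurface loads.

```python
import numpy as np

G = 6.674e-11                 # gravitational constant (SI)
E, nu = 1.0e11, 0.25          # Young's modulus, Poisson ratio (assumed)
g = 9.81                      # gravity (m/s^2)
rho_c, drho = 2800.0, 500.0   # crustal density, Moho contrast (assumed)
zm = 35e3                     # assumed Moho depth (m)

def predicted_admittance(k, te):
    """Free-air admittance (mGal/m) of a thin elastic plate flexed by
    surface loading only."""
    D = E * te**3 / (12.0 * (1.0 - nu**2))      # flexural rigidity
    phi = 1.0 / (1.0 + D * k**4 / (drho * g))   # flexural response function
    return 2.0 * np.pi * G * rho_c * (1.0 - phi * np.exp(-k * zm)) * 1e5

# wavenumbers for wavelengths between roughly 30 and 3000 km
k = 2.0 * np.pi / np.logspace(4.5, 6.5, 60)

# synthetic "observed" admittance: a plate with Te = 37 km plus 1% noise
rng = np.random.default_rng(0)
z_obs = predicted_admittance(k, 37e3) * (1.0 + 0.01 * rng.standard_normal(k.size))

# grid search: the best-fitting Te minimises the misfit to the observation
te_grid = np.arange(5e3, 80e3, 1e3)
misfit = [np.sum((z_obs - predicted_admittance(k, te))**2) for te in te_grid]
best_te = float(te_grid[int(np.argmin(misfit))])
```

In the paper the misfit is additionally weighted by the jackknife error of the observed admittance; the unweighted sum of squares above is the bare-bones version of the same minimization.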
After confirming the accuracy of the method in Te determination, the technique is applied to real data acquired from the NCC as follows. <br />We consider a three-layered crust in the lithosphere modeling, with the internal load applied at the middle crust; the global CRUST 1.0 model is used to define this three-layer crust. <br />The 2D map of Te variations in the target area is obtained by applying the load deconvolution of the admittance function between the free-air gravity and topography data. High-precision ground gravity data, which are more accurate than satellite data, allow us to resolve more detail in the Te variations across the region. <br />Based on the results, the estimated range of Te in the survey region is low to intermediate. This range is in good accordance with the area's geological background, as it is a young, active orogenic system. The Te range, and hence the lithosphere's predicted resistance to deformation, is supported by previous geophysical and seismological studies. The mean value of Te in the area is 37±2 km, with the maximum detected in the Sanandaj-Sirjan zone. The overall trend of Te follows the geological background of the region and is in good agreement with previous geophysical and seismological studies conducted there.
https://jesphys.ut.ac.ir/article_79582_f3d88f9d619888617bc90c6de23d7fda.pdf

An Analytical Solution to the Two-Dimensional Unsteady Pollutant Transport Equation with Arbitrary Initial Condition and Source Term in Open Channels
DOI: 10.22059/jesphys.2021.287486.1007153
Neda Mashhadgarme (Ph.D. Student, Department of Water Structures, Tarbiat Modares University, Tehran, Iran), Mehdi Mazaheri (Assistant Professor, Department of Water Structures, Tarbiat Modares University, Tehran, Iran), and Jamal Mohammad Vali Samani (Professor, Department of Water Structures, Tarbiat Modares University, Tehran, Iran). Journal article, 2019-08-25.

Pollutant dispersion in the environment is one of the most important challenges worldwide. The governing equation of this phenomenon is the advection-dispersion-reaction equation (ADRE), which has wide applications in water and atmospheric sciences, heat transfer, and engineering. It is a parabolic partial differential equation based on Fick's first law and the conservation equation, and mathematical models of pollutant transport in rivers are therefore vital. Analytical solutions are useful for understanding the contaminant distribution, estimating transport parameters, and verifying numerical models. One powerful method for solving nonhomogeneous partial differential equations analytically in one- or multi-dimensional domains is the Generalized Integral Transform Technique (GITT). This method is based on an eigenvalue problem and an integral transform that converts the original partial differential equation into a system of ordinary differential equations (ODEs). In this research, an analytical solution of the two-dimensional pollutant transport equation with arbitrary initial condition and source term was obtained for a finite river domain using GITT. The equation parameters (velocity, dispersion, and reaction coefficient) were considered constant, and the boundary conditions were assumed homogeneous. The source term was represented by point pollutant sources with arbitrary emission time patterns. To derive the analytical solution, the first step is choosing an appropriate eigenvalue problem: it must be based on a self-adjoint operator and be solvable analytically. 
Next, the eigenfunction set was extracted by solving the eigenvalue problem with homogeneous boundary conditions using separation of variables. The forward and inverse integral transforms were then defined. By applying the transform and using the orthogonality property, the ordinary differential equation system was obtained. The initial condition was transformed using the forward transform, the ODE system was solved numerically, and the transformed concentration function was obtained. Finally, the inverse transform was applied and the analytical solution was extracted. To evaluate the solution, its results were compared with the Green's Function Method (GFM) solution in two hypothetical examples. In the first example, the initial condition was an impulse at a specific point in the domain, together with one point source with an exponential time pattern. In the second example, the initial condition was the same and two point sources with irregular time patterns were assumed. The results are presented as concentration contours at different times in the velocity field. They show the agreement between the proposed solution and the GFM solution and indicate that the performance of the proposed solution is satisfactory and accurate. The concentration gradient decreases over time, and the pollution plume spreads and finally exits the domain in the direction of the resultant velocity due to advection and dispersion. The presented solutions have various applications; they can be used instead of numerical models under constant-parameter conditions. The analytical solution is an exact, fast, simple, and flexible tool that is stable under all conditions; difficulties associated with numerical methods, such as stability and accuracy, do not arise. 
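The GITT workflow just described (eigenvalue problem, forward transform, numerical solution of the resulting ODE system, inverse transform) can be illustrated with a simplified one-dimensional analogue; the paper's two-dimensional case uses a product of such eigenfunctions, and all parameter values below are arbitrary.

```python
import numpy as np

# Simplified 1-D analogue of the transport equation in the text:
#   dC/dt = -u dC/dx + D d2C/dx2 - kr*C  on (0, L),  C(0,t) = C(L,t) = 0
L, u, D, kr = 10.0, 0.3, 0.5, 0.05   # arbitrary illustrative parameters
N = 30                               # number of retained eigenmodes
x = np.linspace(0.0, L, 401)
dx = x[1] - x[0]

# eigenfunctions of the self-adjoint diffusion operator with homogeneous
# Dirichlet boundary conditions: psi_n(x) = sqrt(2/L) sin(n pi x / L)
n = np.arange(1, N + 1)
psi = np.sqrt(2.0/L) * np.sin(np.outer(n, x) * np.pi / L)             # (N, nx)
dpsi = np.sqrt(2.0/L) * (n[:, None]*np.pi/L) * np.cos(np.outer(n, x)*np.pi/L)
lam = (n * np.pi / L)**2

# the forward transform of the advection term couples the modes, so the
# transformed problem is a linear ODE system  dc/dt = A @ c
A = -u * (psi @ dpsi.T) * dx            # A_ij ~ -u * integral(psi_i psi_j')
A += np.diag(-D * lam - kr)             # diffusion and reaction are diagonal

# arbitrary initial condition (a Gaussian pulse); forward-transform it
C0 = np.exp(-((x - 3.0)**2) / 0.5)
c = (psi @ C0) * dx

# integrate the ODE system with classical RK4, then invert the transform
dt, nsteps = 0.01, 500                  # advance to t = 5
for _ in range(nsteps):
    k1 = A @ c
    k2 = A @ (c + 0.5*dt*k1)
    k3 = A @ (c + 0.5*dt*k2)
    k4 = A @ (c + dt*k3)
    c = c + (dt/6.0) * (k1 + 2*k2 + 2*k3 + k4)
C = psi.T @ c                           # concentration profile at t = 5
```

As in the text, the plume decays, spreads, and drifts downstream; a point source with an arbitrary emission pattern would enter the same ODE system as a transformed forcing term on the right-hand side.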
Also, because of the high flexibility of the present analytical solutions, arbitrary initial conditions and multiple point sources with more complex emission time patterns can be implemented, so the solution can serve as a benchmark for validating two-dimensional numerical solutions.
https://jesphys.ut.ac.ir/article_79571_0ce8705497654c0f9ac07cfc13d72df4.pdf

Application of Principal Component Analysis (PCA) in a Fuzzy Inference System (FIS) for Time-Series Modeling of the Ionosphere
DOI: 10.22059/jesphys.2021.296272.1007190
Mir Reza Ghaffari Razin (Assistant Professor, Department of Geoscience Engineering, Faculty of Surveying, Arak University of Technology, Arak, Iran; ORCID 0000-0002-5579-5889). Journal article, 2020-01-21.

The ionosphere is a layer of Earth's atmosphere extending from an altitude of 100 km to more than 1000 km. The total electron content (TEC), the total number of free electrons along the path between a satellite and a receiver, is typically used to study the behavior and properties of the ionosphere. TEC varies greatly in time and space, with temporal variations on daily, monthly, seasonal, and annual scales. Understanding these variations is crucial in space science, satellite systems, and positioning, so time-series modeling of the ionosphere is very important. Modeling these temporal variations requires many observations and hence a model with high speed and accuracy. In this paper, a new method is presented for modeling ionosphere time series: principal component analysis (PCA) is combined with a fuzzy inference system (FIS). 
The advantage of this combination is increased computational speed, reduced convergence time to the optimal solution, and improved accuracy of the results. With the proposed model, the ionosphere can be analyzed at shorter time resolutions. <br />Principal component analysis is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. This transformation is defined so that the first principal component has the largest possible variance, and each succeeding component in turn has the highest possible variance under the constraint that it is orthogonal to the preceding components. The resulting vectors form an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables. A fuzzy inference system (FIS) takes inputs and processes them based on pre-specified rules to produce outputs. Both the inputs and outputs are real-valued, whereas the internal processing is based on fuzzy rules and fuzzy arithmetic. The FIS is the key decision-making unit of a fuzzy logic system; it uses "IF…THEN" rules with the connectives "OR" and "AND" to form the essential decision rules. <br />To evaluate the proposed method, observations from the Tehran GNSS station in 2016 were used. This station belongs to the International GNSS Service (IGS) network in Iran, so its observations are easily accessible. The statistical indices dVTEC = |VTEC<sub>GPS</sub>-VTEC<sub>model</sub>|, the correlation coefficient, and the root mean square error (RMSE) are used to evaluate the new method. Statistical evaluation of dVTEC shows that the PCA-FIS combination yields a lower value of this index than the FIS model without PCA, as well as the global ionosphere map (GIM-TEC) and the NeQuick empirical ionosphere model. 
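The PCA stage described above can be sketched with a synthetic stand-in for the TEC design matrix; the retained, mutually uncorrelated component scores are what would be fed to the FIS (the fuzzy system itself is omitted here, and the input features are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stand-in for a TEC design matrix: 200 epochs of 8 correlated
# features (a real model would use time, solar and geomagnetic indices)
t = np.linspace(0.0, 4.0*np.pi, 200)
latent = np.column_stack([np.sin(t), np.cos(t), np.sin(2.0*t)])  # 3 signals
X = latent @ rng.standard_normal((3, 8)) + 0.05*rng.standard_normal((200, 8))

# PCA by singular value decomposition of the centred data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)          # variance explained per component

# keep the fewest components explaining 95% of the variance; these
# mutually uncorrelated scores are what would be fed to the FIS
ncomp = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1
scores = Xc @ Vt[:ncomp].T
```

Feeding the FIS a handful of uncorrelated scores instead of eight correlated inputs is what shrinks the rule base and the training time, which is the speed-up the text reports.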
The correlation coefficients with respect to the GPS-TEC reference observations are 0.890, 0.704, and 0.697 for the PCA-FIS, GIM, and NeQuick models, respectively. Using the combination of PCA and FIS, the convergence time to an optimal solution decreased from 205 to 159 seconds, and the RMSE of the training and testing steps was also significantly reduced. Analysis of the northern, eastern, and height components in precise point positioning (PPP) likewise shows higher accuracy for the proposed model than for the GIM and NeQuick models. The results of this paper show that PCA-FIS is a precise, accurate, and fast method for time-series modeling of TEC variations.
https://jesphys.ut.ac.ir/article_79583_50df2bf94e3a5a214c88b3a55645a7f1.pdf

Numerical Modelling and Automatic Detection of Submesoscale Eddies in the Persian Gulf Using a Vector Geometry Algorithm
DOI: 10.22059/jesphys.2021.307109.1007237
Omid Mahpeykar (Ph.D. Student, Department of Physical Oceanography, Faculty of Marine Science and Oceanography, Khorramshahr University of Marine Science and Technology, Khorramshahr, Iran), Amir Ashtari Larki (Assistant Professor, Department of Physical Oceanography, Faculty of Marine Science and Oceanography, Khorramshahr University of Marine Science and Technology, Khorramshahr, Iran), and Mohammad Akbarinasab (Associate Professor, Department of Marine Physics, Faculty of Marine and Oceanic Sciences, University of Mazandaran, Babolsar, Iran). Journal article, 2020-08-03.

Nowadays, marine data, comprising both observations and measurements as well as the output of numerical models, are widely available, but analyzing and processing these data is time-consuming due to the large volume of information. Identifying and extracting eddies is one of the most important tasks in physical oceanography, and automatic eddy-detection algorithms are among the most basic tools for analyzing eddies. 
The general circulation of the Persian Gulf is cyclonic and is driven by tides, wind stress, and thermohaline forcing. In this study, the circulation of the Persian Gulf was modeled using the Mike model, based on a three-dimensional solution of the Navier-Stokes equations under the assumptions of incompressibility, the Boussinesq approximation, and hydrostatic pressure. A vector geometry algorithm was then used to detect eddies in this region. In this algorithm, four constraints were derived in conformance with the definition and characteristics of the eddy velocity field, and eddy centers are identified at the points where all of the constraints are satisfied. The four constraints are: (i) along an east–west (EW) section, v has to reverse in sign across the eddy center, and its magnitude has to increase away from it; (ii) along a north–south (NS) section, u has to reverse in sign across the eddy center, and its magnitude has to increase away from it, with the same sense of rotation as for v; (iii) the velocity magnitude has a local minimum at the eddy center; and (iv) around the eddy center, the directions of the velocity vectors have to change with a constant sense of rotation. The constraints require two parameters: one for the first, second, and fourth constraints, and one for the third. The first parameter, a, defines how many grid points away the increases in the magnitude of v along the EW axis and of u along the NS axis are checked; it also defines the curve around the eddy center along which the change in direction of the velocity vectors is inspected. The second parameter, b, defines the dimension (in grid points) of the area used to define the local minimum of velocity. The main data used to detect eddies are the numerical model outputs, including the velocity components, obtained from the modeling with thermohaline and wind-stress forcing. 
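A minimal sketch of the detection scheme, applying constraints (i)–(iii) above to a synthetic vortex on a regular grid (constraint (iv), the monotonic rotation of vectors around the center, is omitted for brevity):

```python
import numpy as np

def eddy_centers(u, v, a=3, b=3):
    """Vector-geometry eddy detection: a point is a center if
      (i)  v reverses sign east-west of it and grows away from it,
      (ii) u reverses sign north-south, with the same rotation sense,
      (iii) the speed has a local minimum there (within b grid points)."""
    ny, nx = u.shape
    speed = np.hypot(u, v)
    centers = []
    for j in range(b, ny - b):
        for i in range(b, nx - b):
            # (i) east-west section of v
            if not (v[j, i-a] * v[j, i+a] < 0 and
                    abs(v[j, i-a]) > abs(v[j, i-1]) and
                    abs(v[j, i+a]) > abs(v[j, i+1])):
                continue
            # (ii) north-south section of u, same sense of rotation as v
            if not (u[j-a, i] * u[j+a, i] < 0 and
                    abs(u[j-a, i]) > abs(u[j-1, i]) and
                    abs(u[j+a, i]) > abs(u[j+1, i]) and
                    v[j, i+a] * u[j+a, i] < 0):
                continue
            # (iii) local speed minimum within a (2b+1)^2 window
            if speed[j, i] == speed[j-b:j+b+1, i-b:i+b+1].min():
                centers.append((j, i))
    return centers

# synthetic cyclonic (counter-clockwise) vortex centred on the grid
y, x = np.mgrid[-20:21, -20:21].astype(float)
r2 = x**2 + y**2
u = -y * np.exp(-r2 / 100.0)
v = x * np.exp(-r2 / 100.0)
found = eddy_centers(u, v)   # recovers the single center at (20, 20)
```

Run on the model's daily velocity fields, the list of centers per layer is what the eddy counts, radii, and lifespans in the text are derived from.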
In total, for daily data over one year, 4308 cyclonic and 2860 anticyclonic eddies were detected at the surface, and 617 cyclonic and 329 anticyclonic eddies were found in the deepest layer, at a depth of 50 meters. The number of eddies is highest in winter and lowest in summer, and the average radius is largest for anticyclonic eddies in winter and smallest for cyclonic eddies in summer. Most eddies have a radius of 5-10 km and a lifespan of 3-6 days, and as their lifespan increases, they penetrate deeper into the water column. The eddy penetration percentage, i.e., the ratio of the number of eddies in the deepest layer to that in the surface layer, is 15% for cyclonic eddies and 10% for anticyclonic eddies. This indicates that energy loss in cyclonic eddies is less than in anticyclonic eddies, probably because their rotation is aligned with the overall circulation of the Persian Gulf.
https://jesphys.ut.ac.ir/article_79581_66d2c7441b4a8994da0366fcd5a7ab19.pdf

Elemental Analysis of Airborne Dust in the World Heritage City of Yazd by Laser-Induced Breakdown Spectroscopy
DOI: 10.22059/jesphys.2021.308120.1007242
Nafise Sedighi (M.Sc. Student, Department of Physics, Yazd University, Yazd, Iran) and Mohammad Ali Haddad (Assistant Professor, Department of Physics, Yazd University, Yazd, Iran; ORCID 0000-0003-2542-0485). Journal article, 2020-09-12.

Dust and the environmental pollution caused by dust storms are a serious environmental hazard, particularly in arid and semi-arid inhabited regions of the world. Controlling and reducing the harmful or undesirable effects of dust requires accurate identification and analysis of dust samples, for which various elemental analysis methods are commonly used. <br />The city of Yazd (a UNESCO World Heritage site) is located in Iran's central region and is surrounded by many industrial sites, mineral sites, and deserts. Its urban areas suffer air pollution due to seasonal winds, the lack of annual rainfall, and dust storms; hence, the dust concentration occasionally exceeds standard limits in this city. In this paper, a study characterizing and analyzing the falling dust in the city of Yazd is reported. 
Initially, sampling was conducted at five different locations over two months using marble dust collectors. The size distributions and morphology of the dust samples were studied by Scanning Electron Microscopy (SEM) and the X-Ray Diffraction (XRD) technique. Moreover, the samples' elemental composition was analyzed using Energy Dispersive X-Ray Spectroscopy (EDX) and, separately, Laser-Induced Breakdown Spectroscopy (LIBS). The analysis of SEM images and XRD patterns of the dust particles allows the size and morphology of the samples to be studied. The dust particle sizes were estimated at 1 to 30 microns, with the maximum of the size distribution between 2 and 7 microns. Capsular, triangular, spherical, irregular, and polyhedral shapes are also revealed in the recorded particle images. The XRD analyses show the existence of silicate, carbonate, and phosphate mineral groups, including calcite, quartz, gypsum, magnesium carbonate, and aluminum phosphate, in the samples. <br />Laser-induced breakdown spectroscopy (LIBS) is a non-contact, fast-response, high-sensitivity, real-time, multi-elemental analytical detection technique based on emission spectroscopy that measures elemental composition. The elemental characterization of the powder samples was carried out by investigating the emission spectra of the breakdown plasma in the sample region. A 1064-nm Nd:YAG laser operating at high energy (100 mJ, 1 to 20 Hz) was focused on the surface of a tiny amount of powder sample to form an emitting plasma. The emission of the plasma produced from the sample was collected by eight optical fibers and detected by the spectrometer. The experimental setup allowed spectra to be recorded in the range of 200 to 1200 nm with a spectral resolution of 0.4 nm. In total, 74 atomic emission lines of the generated plasma were analyzed. 
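Assigning recorded emission lines to elements amounts to matching peak wavelengths against a table of known lines. The sketch below is a hypothetical illustration, not the authors' processing code; the line table, threshold, and 0.2-nm tolerance are assumptions, though the reference wavelengths themselves (e.g. Ca II 393.37 nm) are standard values.

```python
import numpy as np

# A few strong emission lines (nm) commonly used for identification;
# this is an illustrative table, not the paper's line list.
KNOWN_LINES = {393.37: "Ca II", 588.99: "Na I", 285.21: "Mg I", 396.15: "Al I"}

def identify_lines(wavelength, intensity, threshold=0.1, tol=0.2):
    """Label local maxima above `threshold` with the nearest known line
    if it lies within `tol` nm; otherwise leave the peak unassigned."""
    hits = []
    for k in range(1, len(intensity) - 1):
        if (intensity[k] > intensity[k - 1] and intensity[k] > intensity[k + 1]
                and intensity[k] >= threshold):
            wl = wavelength[k]
            nearest = min(KNOWN_LINES, key=lambda w: abs(w - wl))
            label = KNOWN_LINES[nearest] if abs(nearest - wl) <= tol else None
            hits.append((round(wl, 2), label))
    return hits

# Synthetic spectrum on the instrument's 200-1200 nm range, 0.1 nm sampling:
# two Gaussian peaks placed at the Ca II and Na I wavelengths
wl = np.arange(200.0, 1200.0, 0.1)
spec = np.exp(-((wl - 393.37) / 0.15) ** 2) + 0.5 * np.exp(-((wl - 588.99) / 0.15) ** 2)
peaks = identify_lines(wl, spec)
print(peaks)  # → [(393.4, 'Ca II'), (589.0, 'Na I')]
```

Real LIBS spectra would additionally need background subtraction and a check of several lines per element before an assignment is accepted.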
Spectral analysis of the obtained spectra enables the identification of several elements, such as calcium, silicon, iron, magnesium, aluminum, and carbon, and other elements of lower abundance, such as potassium, sodium, strontium, manganese, titanium, cobalt, vanadium, barium, and lead, in the elemental composition of the dust samples. The results deduced using the LIBS technique agree unambiguously with the results obtained by EDX analysis of the dust samples in this work. It is found that laser-induced breakdown spectroscopy is a rapid, reliable, and powerful analytical tool for the diagnosis and detection of multiple elements in solid dust samples. This technique is also comparable with standard methods such as Atomic Absorption Spectroscopy (AAS) and X-Ray Fluorescence (XRF) for the chemical and elemental analysis of urban, mineral, and industrial dust.
https://jesphys.ut.ac.ir/article_79584_78633e4f244f69e628ef201d241d3b09.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X47120210421Evaluation of cumulus schemes of HWRF model in forecasting tropical cyclone characteristics, Gonu tropical cyclone case studyEvaluation of cumulus schemes of HWRF model in forecasting tropical cyclone characteristics, Gonu tropical cyclone case study1451747957810.22059/jesphys.2021.310820.1007250FANafisehPegahfarAssistant Professor, Atmospheric Science Center, Iranian National Institute for Oceanography and Atmospheric Science, Tehran, Iran0000-0003-4885-7428Journal Article20200929The sensitivity of numerical models in predicting Tropical Cyclone (TC) characteristics has been considered in numerous research studies. In this research, the application of five cumulus schemes of the HWRF (Hurricane Weather Research and Forecasting) model, namely KF, SAS, BMJ, TiedTKE, and SASAS, has been examined for Tropical Cyclone Gonu (TCG) from 4 to 7 June 2007. The simulations have been conducted using three nests with 27, 9, and 3 km resolutions. To this aim, the performance of the schemes in predicting TCG intensity is analyzed using the minimum surface pressure and the maximum 10-m wind speed. Then, their effect on forecasting the radius of maximum wind is evaluated. 
The parameters of lower-level divergence, upper-level convergence, potential temperature, potential vorticity, Convective Available Potential Energy (CAPE), wind vector (both horizontal and vertical components), wind shear, precipitation, and radar reflectivity have been analyzed. The results of the simulations have been compared with the analysis data, IMD and TRMM observational data, and routine atmospheric parameters measured at the Chabahar station. The comparison was done at different times of the TCG lifetime. To examine the performance of the HWRF cumulus schemes for the track and intensity of TCG, the whole life cycle of TCG was considered. To test the efficiency of the HWRF cumulus schemes in predicting some dynamical and thermodynamical parameters, the time of maximum intensity of TCG (18 UTC on 4 June 2007) was focused on. To evaluate the functionality of the HWRF cumulus schemes in the coastal area, the outputs were discussed for the last two days of the TCG life cycle. <br />Results showed that, with the configuration used, none of the five cumulus schemes predicted TCG reaching the southern coast of Iran. Moreover, neither the pressure decrease nor the maximum wind speed was predicted accurately at the time of maximum intensity of TCG. While TCG intensity was above category 3, neither the minimum surface pressure trend nor the maximum wind speed trend was forecast well. However, for the less intense conditions, the two schemes TiedTKE and SAS produced the nearest values. All five cumulus schemes predicted the radius of maximum wind similarly, except the TiedTKE scheme, which predicted the super cyclone 6 hours earlier. The analyzed and simulated vertical cross sections of potential temperature and horizontal wind were similar. The simulated values of the vertical component of the wind were considerably larger than those from the analysis data and were also closer to the TCG center. 
The maximum values of simulated CAPE were located off the Oman coast, in contrast to the analysis values. Only the simulations using the SASAS cumulus scheme showed the strongest potential vorticity near the surface. The simulated updrafts and downdrafts were larger than those from the analysis data, and the major simulated updrafts and downdrafts were closer to the center of TCG than those from the analysis data. The upper-level divergence patterns were seen both in the simulations using all five cumulus schemes and in the analysis data, while the lower-level convergences were captured neither in the simulations nor in the analysis data. The maximum simulated accumulated precipitation using all five cumulus schemes was 80 mm over a 6-hour interval, whereas the observational value from TRMM was 25 mm/h. The predicted radar reflectivities from the simulations were similar and the simulated maximum values were the same, but the extents of the simulated maximum values differed. All cumulus schemes predicted wind shear values smaller than the analysis values. At the Chabahar station, the observational values of the 10-m wind speed, sea level pressure, and temperature were compared to the values simulated using all five cumulus schemes for the period of 6-7 June 2007. The statistical parameters of correlation, standard deviation, and root mean square were used to identify the best cumulus scheme. The least prediction error was obtained using the KF cumulus scheme for the 10-m wind, the TiedTKE cumulus scheme for sea level pressure, and the SASAS cumulus scheme for temperature.
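The station comparison described above (correlation, standard deviation, and root-mean-square difference between observed and simulated series) is a standard model-evaluation computation; a minimal sketch follows, with made-up series rather than the study's data.

```python
import numpy as np

def evaluation_stats(obs, sim):
    """Correlation, standard deviations, and RMSE between an observed
    and a simulated time series (e.g., 10-m wind at a station)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    return {"r": r, "std_obs": obs.std(), "std_sim": sim.std(), "rmse": rmse}

# Hypothetical 6-hourly wind speeds: the simulation tracks the observation
# perfectly except for a constant +1.5 m/s bias
obs = np.array([6.0, 8.0, 12.0, 15.0, 11.0, 9.0, 7.0, 6.5])
sim = obs + 1.5
stats = evaluation_stats(obs, sim)
print(stats)  # r = 1 for a pure bias; rmse equals the bias, 1.5
```

A scheme can score well on one metric and poorly on another (here r is perfect while the RMSE is nonzero), which is why the study uses all three together to rank the cumulus schemes.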
https://jesphys.ut.ac.ir/article_79578_4d20b0003b557a8fc73f0c7b07bb97ed.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X47120210421Cumulus Clouds from the rough surface perspectiveCumulus Clouds from the rough surface perspective1751867957910.22059/jesphys.2021.311393.1007254FAJafarCheraghalizadehPh.D. Student, Department of physics, University of Mohaghegh Ardabili, Ardabil, IranMortezaNattagh NajafiAssociate Professor, Department of physics, University of Mohaghegh Ardabili, Ardabil, Iran0000-0001-8949-6855AhadSaber TazehkandAssistant Professor, Department of physics, University of Mohaghegh Ardabili, Ardabil, IranJournal Article20201011Although it has long been well known that clouds show a fractal geometry, a detailed analysis is still missing in the literature. Through scattering of the received solar radiation, clouds play a very important role in the energy budget of the Earth's atmosphere, and it has been shown that the surface fluctuations, and more generally the statistics of the clouds, have a very important impact on the scattering and absorption of solar radiation. In this paper, we first study the relation between the visible light intensity and the width of cumulus clouds. To this end, we supposed that the transmitted intensity of light from a column of cloud decays exponentially with the width of the column, with an extinction given by the sum of the absorbed and the scattered contributions. Using this relation, we find a one-to-one relation between the cloud width and the intensity of the received visible light in the low-intensity regime. 
By calculating the Mie scattering cross sections for the physical parameters of the clouds, we argue that this correspondence works for thin enough clouds, and also that the width of the clouds is proportional to the logarithm of the intensity. The Mie cross section is shown to take a simple asymptotic form for large enough size parameters, in terms of the angle of the sun's radiation with respect to the Earth's surface, or equivalently the cloud base. This allows us to map the system to a two-dimensional rough medium. Then, exploiting rough-surface techniques, we study the statistical properties of the clouds. We first study the roughness, defined for rough surfaces as the root-mean-square fluctuation of the height about its mean. This study of the local and global roughness exponents (α_l and α_g, respectively) shows that the system is self-similar. We also consider the fractal properties of the clouds; by least-squares fitting of the roughness, we estimate the two exponents numerically. We also study other statistical observables and their distributions. By studying the distributions of the local curvature (at various scales) and of the height variable, we conclude that these functions, and consequently the system, are not Gaussian. In particular, the distribution of the height profile follows the Weibull distribution, f(x) = (k/λ)(x/λ)^(k−1) exp[−(x/λ)^k] for x ≥ 0 and zero otherwise. The reasoning behind how this relation arises is beyond the scope of the present work and is postponed to our future studies. The study of the local curvature reveals the same behavior and structure. All of this shows that the problem of the width of cumulus clouds maps to a non-Gaussian, self-similar rough surface. We also show that the system is mono-fractal, which requires the local and global roughness exponents to coincide. Given these results, the authors think that the tops of the clouds are anomalous random rough surfaces that affect the albedo of cloud fields.
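The roughness analysis of this kind can be illustrated on a synthetic self-affine profile. The sketch below is illustrative only, not the authors' code: it computes the window-averaged width w(l) and fits the roughness exponent α in w(l) ~ l^α, using a random-walk profile whose exponent is known to be 1/2.

```python
import numpy as np

def roughness_exponent(h, window_sizes):
    """Fit alpha in w(l) ~ l**alpha, where w(l) is the width (standard
    deviation of the height about its local mean) averaged over
    non-overlapping windows of size l along the profile h."""
    widths = []
    for l in window_sizes:
        n = len(h) // l
        segments = h[:n * l].reshape(n, l)
        widths.append(np.mean(segments.std(axis=1)))
    slope = np.polyfit(np.log(window_sizes), np.log(widths), 1)[0]
    return slope, widths

# Synthetic self-affine profile: a random walk has roughness exponent 1/2
rng = np.random.default_rng(0)
h = np.cumsum(rng.standard_normal(200_000))
alpha, _ = roughness_exponent(h, [8, 16, 32, 64, 128, 256])
print(round(alpha, 2))  # close to the exact value 0.5
```

Comparing such a fit over small windows (local exponent) with the scaling of the full-profile width against system size (global exponent) is what distinguishes self-similar, mono-fractal behavior, where the two coincide, from anomalous scaling.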
https://jesphys.ut.ac.ir/article_79579_ff3b4374cb41fe6ab594f774b10ffe16.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X47120210421Statistical Evaluation of Cloud Seeding Operations in Central Plateau of Iran in the 2015 Water YearStatistical Evaluation of Cloud Seeding Operations in Central Plateau of Iran in the 2015 Water Year1872037958510.22059/jesphys.2021.312050.1007255FABanafshehZahraieAssociate Professor, School of Civil Engineering, College of Engineering, University of Tehran, Tehran, Iran0000-0003-3557-9254HamedPoursepahy SamianPost-Doc Researcher, Water Institute, University of Tehran, Tehran, Iran0000-0002-4300-2511MohsenNasseriAssistant Professor, School of Civil Engineering, Faculty of Engineering, University of Tehran, Tehran, Iran0000-0002-7584-7631S. MahmoodTaheriProfessor, School of Engineering Science, College of Engineering, University of Tehran, Tehran, IranJournal Article20201104Iran is located in an arid and semi-arid region and has experienced a reduction in average rainfall in recent years. This has turned attention to the use of new methods, such as cloud seeding, to obtain more water resources; in this regard, cloud seeding operations have been carried out in the country since 1998. 
The purpose of this study was to evaluate the cloud seeding projects of the 2015 water year (January, February, and March 2015) in the central region of Iran, including the provinces of Yazd, Kerman, Fars, and Isfahan and some adjacent provinces. The evaluation was performed statistically using stepwise multiple regression, with two different approaches. In the first approach, precipitation at stations located in the target area of the cloud seeding operations is estimated from the precipitation at stations in the control area using stepwise multiple regression, and then, taking into account a 90% confidence interval for this estimate, the effectiveness or ineffectiveness of the cloud seeding operation at each station is determined. In the second approach, the volume of precipitation of each province in the target area is estimated from the precipitation at stations in the control area using stepwise multiple regression, and then, by considering a 90% confidence interval for this estimate, the effectiveness of the cloud seeding operations on the rainfall volume of each province is investigated. The target area in each month was selected based on the HYSPLIT model results. Due to the inconsistent spatial distribution of rain gauges in the target areas, parts of the target areas lacking enough rain gauges were excluded from further analysis. To define the boundaries of the excluded areas, the Inverse Distance Weighted (IDW) method was used to find the radius of influence around each rain gauge. The radius-of-influence values obtained were 93940, 89569, and 149015 m for January, February, and March, respectively; finally, the minimum value of 89569 m was selected as the radius of influence. The results of both methods indicate the impact of the cloud seeding operations in these areas in this year. 
In particular, the volume of precipitation in February increased by 15 to 80 percent in all provinces located in the target area of the cloud seeding operations. The surface runoff generated from the increased precipitation due to cloud seeding was estimated by two methods: the Soil Conservation Service (SCS) method and the Rational method. The estimated surface runoff generated by the SCS and Rational methods was 1318.5 and 1329.5 million m<sup>3</sup>, respectively. The groundwater recharge in the three months of January, February, and March is estimated as 105.3, 425.6, and 156.3 million m<sup>3</sup>, respectively. It is important to note that the runoff and groundwater recharge estimates obtained by the method used in this study are subject to high uncertainty; the estimates can only represent the order of magnitude of the impacts of the cloud seeding operations, and therefore the exact numbers should not be used for water resources planning and management purposes. Further investigation in areas with more rain gauges can assist in a more accurate assessment of cloud seeding operations.
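The control-versus-target evaluation can be sketched as a regression with a prediction band. The code below is a simplified illustration with made-up numbers, not the study's data or code: it uses plain multiple regression (the stepwise variable-selection step is omitted) and a normal-approximation quantile for the ~90% band, where a Student-t quantile would be appropriate for few historical periods.

```python
import numpy as np

def seeding_effect(X_hist, y_hist, x_seed, y_seed, z=1.645):
    """Regress target-area precipitation on control-area precipitation over
    historical (non-seeded) periods, predict the seeded period, and test
    whether the observed value exceeds the upper ~90% prediction bound."""
    n, k = X_hist.shape
    A = np.column_stack([np.ones(n), X_hist])        # intercept + control stations
    beta, *_ = np.linalg.lstsq(A, y_hist, rcond=None)
    resid = y_hist - A @ beta
    s2 = resid @ resid / (n - k - 1)                 # residual variance
    a0 = np.concatenate([[1.0], x_seed])
    pred = a0 @ beta
    half = z * np.sqrt(s2 * (1.0 + a0 @ np.linalg.inv(A.T @ A) @ a0))
    return pred, (pred - half, pred + half), y_seed > pred + half

# Hypothetical example: 12 historical periods, 2 control stations (mm)
rng = np.random.default_rng(1)
Xh = rng.uniform(20, 80, size=(12, 2))
yh = 5 + 0.6 * Xh[:, 0] + 0.3 * Xh[:, 1] + rng.normal(0, 2, 12)
x0 = np.array([50.0, 40.0])
y_obs = 70.0  # observed seeded-period value, well above the expected ~47 mm
pred, band, effective = seeding_effect(Xh, yh, x0, y_obs)
print(round(pred, 1), effective)
```

An observation inside the band is consistent with natural variability; only an exceedance of the upper bound is counted as evidence of a seeding effect, which mirrors the station-level test described in the abstract.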
https://jesphys.ut.ac.ir/article_79585_f75d52faa7f2d49e89c1366105db91a1.pdf