Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 44, No. 3, 2018-10-23
Simulation of tsunami generation, propagation and run-up in the western Makran, Part 1: Simulation of the generation
pp. 495-508, DOI: 10.22059/jesphys.2018.246921.1006949
Amin Rashidi, Ph.D. Student, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran
Zaher Hossein Shomali, Associate Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran (ORCID 0000-0001-6254-7560)
Nasser Keshavarz Farajkhah, Assistant Professor, Geoscience Division, Research Institute of Petroleum Industry (RIPI), Tehran, Iran
Journal Article, 2017-12-12

A tsunami is an oceanic gravity wave generated by the displacement of a huge volume of water. There are three main types of disturbance: underwater earthquakes, submarine landslides and sudden earth-surface movements adjacent to the ocean (volcanoes, meteorites, rock falls, sub-aerial landslides and ship sinkings). Most tsunamis are caused by large shallow earthquakes in subduction zones (Satake and Tanioka, 1999). The Sumatra–Andaman (2004) and Honshu, Japan (2011) tsunamis, with their widespread damage and tragic consequences, demonstrated the need for worldwide attention, awareness and preparedness for tsunami hazard mitigation. Although tsunamis in the Indian Ocean have drawn global attention, most of it has been directed to the eastern Indian Ocean near Indonesia. Western Makran, located in the northwestern Indian Ocean basin, has received less attention as a potential tsunamigenic zone.
The Makran region is a 1000-km-long section of the Eurasian-Arabian plate boundary located offshore Pakistan in the northwestern Indian Ocean, where the oceanic crust of the Arabian plate has been subducting beneath the Eurasian plate since the Early Cretaceous along a north-dipping subduction zone (Byrne et al., 1992; Smith et al., 2013). Following the great Pasni-Ormara earthquake of November 27, 1945 (Mw 8.1; Byrne et al., 1992), the coastline was uplifted by about 2 m (Page et al., 1979). This event was accompanied by a significant regional tsunami, with run-up in the 5–10 m range, which caused about 4000 deaths along the very sparsely populated Makran coast (Heck, 1947; Ambraseys and Melville, 1982; Okal and Synolakis, 2008). The Makran may be seismically segmented along its length into a western and an eastern segment, distinguished by different levels of seismicity (lower in the west). Moderate- to large-magnitude earthquakes are either related to the downgoing slab at intermediate depths or shallow in the eastern Makran (e.g. the 1765, 1851 and 1945 earthquakes), while the western Makran shows almost no seismicity in the coastal area at present but might have experienced a strong earthquake in 1483 (Byrne et al., 1992; Zarifi, 2006).
The lack of earthquakes over many years raises the possibility that the western Makran segment is locked. This means that it could generate a tsunami in the future that would threaten the Gulf of Oman and the Makran coastlines. Given the tsunamigenic potential of the Makran subduction zone, as well as the strategic geographic location and economic role of the Makran coast in Iran, its access to international waters and to other countries, its cultural, natural and historical tourism potential, and the establishment of ports and coastal and offshore installations in the region, a tsunami is a real threat. Consequently, accurate studies and estimates are indispensable for tsunami risk mitigation. The aim of this study is to simulate tsunami generation in the western Makran numerically in order to estimate the initial condition for tsunami propagation. The generation mechanism must be modeled as the first step in the process of tsunami modeling; it has to be studied geophysically and geologically, and is therefore a vital stage of the simulation. To estimate the static uplift of the seafloor, we can use fault models, e.g. Okada (1985) and Mansinha and Smylie (1971), which give the analytical solution of the deformation field caused by instantaneous rupture on a finite elastic fault plane. The theory was proposed originally by Mansinha and Smylie (1971) and later improved by Okada (1985). To compute the deformation, we need the fault parameters: hypocenter (latitude, longitude and depth), length and width of the fault plane, dislocation (slip), strike direction, dip angle and rake (slip) angle. A tsunami scenario with defined source parameters was constructed in the Gulf of Oman to compute the deformation field based on the Okada algorithm. The source model was based on Okal and Synolakis (2008) and Smith et al.
(2013), with a length of 450 km, a width of 100 km and a dislocation of 10 m, corresponding to a moment magnitude (<em>M<sub>w</sub></em>) of 8.7. The result of this study represents the initial profile of the tsunami, including the uplift and subsidence in the study area. The earthquake scenario predicted a maximum seafloor uplift of 3.5 m and a maximum subsidence of 2.4 m. The deformation field covered an area from 23.5° N to 27.5° N and from 56° E to 63° E. The southern coastal areas of Iran and Pakistan experienced subsidence, and the northern coastlines of Oman experienced uplift. The outcome can be used as the input for the simulation of tsunami propagation.
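As a consistency check, the stated magnitude of the scenario can be reproduced from the standard seismic-moment relations M0 = μLWD and Mw = (2/3)(log10 M0 - 9.1). A minimal sketch, assuming a rigidity of 3×10^10 Pa (a typical crustal value; the rigidity is not given in the abstract):

```python
import math

# Seismic moment M0 = mu * L * W * D: rigidity times fault area times average slip.
mu = 3.0e10   # rigidity in Pa (assumed typical crustal value; not stated in the abstract)
L = 450e3     # fault length, m
W = 100e3     # fault width, m
D = 10.0      # average slip (dislocation), m

M0 = mu * L * W * D                          # seismic moment, N*m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)    # moment magnitude (Hanks-Kanamori scale)
print(f"M0 = {M0:.2e} N*m, Mw = {Mw:.1f}")   # Mw = 8.7
```

With these values M0 = 1.35×10^22 N·m, which gives Mw ≈ 8.7, matching the scenario.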
https://jesphys.ut.ac.ir/article_65880_9e0a59d76acca38a2066721bc07f9bbd.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 44, No. 3, 2018-10-23
Simulation of tsunami generation, propagation and run-up in the western Makran, Part 2: Simulation of the propagation and run-up
pp. 509-521, DOI: 10.22059/jesphys.2018.247914.1006955
Amin Rashidi, Ph.D. Student, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran
Zaher Hossein Shomali, Associate Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran (ORCID 0000-0001-6254-7560)
Nasser Keshavarz Farajkhah, Assistant Professor, Geoscience Division, Research Institute of Petroleum Industry (RIPI), Tehran, Iran
Journal Article, 2017-12-26

Tsunami numerical modeling is a mathematical description of the tsunami life cycle, including generation, propagation and run-up. Numerical simulation is a powerful tool for understanding the impacts of past and future events. It is critical to use the results of tsunami simulation, such as wave propagation patterns, time series, amplitudes and run-up along coastlines, to mitigate the tsunami hazard of possible future events.
Tsunami waves propagate at speeds of 700 to 950 km/h in the open ocean without losing much energy. As they reach shallow water, their amplitude grows through wave shoaling. Nonlinear shallow water equations are often used to model tsunami wave propagation and run-up. <br />The aim of this study is to simulate tsunami wave propagation and run-up in the western Makran for a tsunamigenic scenario of magnitude <em>M<sub>w</sub></em> 8.7. The initial condition for modeling the tsunami propagation is computed using Okada's algorithm. The COMCOT hydrodynamic model is used for the numerical tsunami simulation. COMCOT solves the nonlinear shallow water equations in both spherical and Cartesian coordinates using explicit staggered leap-frog finite difference schemes and a nested grid configuration. <br />Tsunami propagation is highly influenced by the bathymetry. A three-level nested grid system with different resolutions is used for the tsunami simulation in this study. Configuring a nested grid system is necessary to compute tsunami run-up and inundation on dry land. The simulation is performed for a total run time of 90 minutes, with a time step of 0.5 min for the parent grid and 0.0625 min for the finest grid. Numerical modeling of tsunami run-up and inundation is performed for the western (C1), central (C2) and eastern (C3) parts of the Makran coastline in southern Iran. <br />The trapping of tsunami waves inside the Gulf of Oman causes greater impacts on the coastlines of Iran and Oman than on other areas. To investigate the time histories of the tsunami waves generated by the tsunamigenic scenario, we placed 18 virtual gauges near and along the southeastern coastline of Iran. Generally, it takes about 20 minutes for the maximum tsunami wave amplitudes to be observed at the southeastern coastlines of Iran.
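The smaller time step on the finest grid reflects the stability limit of the explicit leap-frog scheme: the time step must satisfy a CFL-type condition Δt ≤ C·Δx/√(g·h_max). A sketch with illustrative numbers (the grid spacing, depth and Courant number below are assumptions, not values from the paper):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def max_stable_dt(dx_m, h_max_m, courant=0.5):
    """CFL-type stability bound for an explicit shallow-water scheme:
    dt <= C * dx / sqrt(g * h_max)."""
    return courant * dx_m / math.sqrt(g * h_max_m)

# Example: a ~2 arc-minute grid (~3.7 km spacing) over 3500 m deep water.
print(f"max stable dt = {max_stable_dt(3700, 3500):.1f} s")  # ~10.0 s
```

Finer nested grids have smaller Δx, so their stable time step shrinks proportionally, which is why the finest grid uses a time step eight times smaller than the parent grid.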
The maximum tsunami wave heights computed for the gauges near Jask and Chabahar are 2.8 and 3.3 m, respectively. The entire southeastern coastline of Iran is impacted by such tsunami waves. The maximum computed tsunami wave height along the southeastern coastline of Iran is 11 m. <br />The maximum tsunami wave field exhibits a significant local hazard inside the Gulf of Oman posed to the shores of Iran and Oman. The maximum tsunami amplitude reaches up to 11 m and 6 m inside the Gulf of Oman and the Arabian Sea basins, respectively. Run-up modeling shows that the maximum computed run-up for the C1, C2 and C3 areas is 10, 17 and 19 m, respectively, and the maximum tsunami inundation distance for those areas is 6, 6 and 4 km, respectively. These considerable inundation distances are due to the low-elevation topography of the affected coasts. Computed inundation distances can be used for choosing evacuation routes during possible future tsunamis and for finding safer locations along the coastal areas. Accurate tsunami simulations are required to develop a tsunami early warning system and to estimate inundation on dry land. To perform more accurate simulations, high-resolution local bathymetric/topographic maps are required, especially for the major ports in southeastern Iran.
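The 700-950 km/h open-ocean propagation speeds quoted above follow directly from the long-wave (shallow-water) phase speed c = √(g·h); a minimal check (the depths below are illustrative, not taken from the paper):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def phase_speed_kmh(depth_m):
    """Long-wave (shallow-water) phase speed c = sqrt(g*h), converted to km/h."""
    return math.sqrt(g * depth_m) * 3.6

# Illustrative open-ocean depths; the results span the 700-950 km/h range.
for h in (4000, 5000, 7000):
    print(f"h = {h} m -> c = {phase_speed_kmh(h):.0f} km/h")
```

Depths of roughly 4-7 km reproduce the quoted speed range, and the same formula explains the shoaling slowdown (and amplitude growth) in shallow coastal water.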
https://jesphys.ut.ac.ir/article_67770_89239005b72f9a4e2afcd1252ccfc447.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 44, No. 3, 2018-10-23
Local gravity field modeling using basis functions of harmonic nature and vector airborne gravimetry, Case study: Gravity field modeling over the north-east of Tanzania region
pp. 523-534, DOI: 10.22059/jesphys.2018.247842.1006953
Mohsen Feizi, Ph.D. Student, Department of Geodesy, Faculty of Surveying and Geomatic Engineering, K. N. Toosi University of Technology, Tehran, Iran
Mehdi Raoofian Naeeni, Assistant Professor, Department of Geodesy, Faculty of Surveying and Geomatic Engineering, K. N. Toosi University of Technology, Tehran, Iran
Journal Article, 2017-12-24

Many different methods for gravity field modelling have been investigated, among which harmonic expansion has been widely used because the gravity potential is harmonic, satisfying the Laplace equation in empty space. This method, however, cannot reach high resolution and omits the high-frequency gravity signals, so it is not appropriate for local gravity field modelling. To overcome this drawback and recover the high-frequency features of the gravity field, appropriate basis functions with local support should be used. One method for local gravity field modeling based on local harmonic functions is spherical cap harmonic analysis. In this method, the Dirichlet boundary value problem for the Laplace equation is solved with boundary conditions on the surface of a spherical cap, which results in an eigenfunction expansion of the solution in terms of associated Legendre functions of non-integer degree and integer order. Another method that can be used for local gravity field modeling is rectangular harmonic analysis. In this method, the Laplace equation is solved in a local Cartesian coordinate system and the boundary conditions are applied on a planar area. In this approach, trigonometric functions are used as basis functions.
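The rectangular harmonic basis pairs horizontal trigonometric functions with a vertical exponential whose decay rate is tied to the horizontal wavenumbers; that coupling is exactly what makes each basis function harmonic. A small numerical check (the wavenumbers are chosen arbitrarily for illustration):

```python
import math

# One rectangular-harmonic basis function: trigonometric in the horizontal,
# exponential in the vertical, with decay rate sqrt(kx^2 + ky^2) so that the
# function satisfies Laplace's equation.
kx, ky = 2.0, 3.0
kz = math.sqrt(kx**2 + ky**2)

def f(x, y, z):
    return math.sin(kx * x) * math.cos(ky * y) * math.exp(-kz * z)

def laplacian(x, y, z, h=1e-4):
    """Central-difference Laplacian of f; should vanish up to truncation error."""
    return ((f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z))
            + (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z))
            + (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h))) / h**2

print(abs(laplacian(0.3, 0.7, 0.1)))  # ~0 (discretization error only)
```

The horizontal second derivatives contribute -(kx² + ky²)·f while the vertical one contributes +kz²·f, so the Laplacian cancels exactly when kz² = kx² + ky².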
In this study, local gravity field modeling based on both spherical cap and rectangular harmonic expansions is investigated, and a numerical study is conducted to show the performance of each method. To do so, vector airborne gravimetry observations over the highland region in the northwest of Tanzania are used to derive the coefficients of each model. The low-frequency part of the observed gravity field is removed from the data using the EGM2008 geopotential model, and the resulting residual gravity field is used for local modelling. Since the governing equations for the determination of the coefficients suffer from ill-conditioning, some regularization scheme must be applied to find the optimum solution; here, the Tikhonov regularization method is used to obtain a regular solution. The edge effect of each model is also analyzed: the model results are compared with gravity observations at control points distributed both within the study area and on its margin. It should be noted that the maximum degree of expansion of the harmonic series plays an important role in fitting the local gravity field models to the gravity data and strongly affects the computational cost of determining the coefficients of each model. For this reason, the local gravity field modelling is carried out with different values of the maximum degree of expansion, and the appropriate value for each model is then determined from the results (the accuracy of the local gravity model at the control points).
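Tikhonov regularization replaces the unstable normal-equations solve with a damped one. A toy sketch (the random matrix stands in for the harmonic design matrix built from the airborne observations, and the damping value is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned design matrix with two nearly dependent columns, standing in
# for the harmonic design matrix built from the airborne gravity observations.
A = rng.standard_normal((100, 20))
A[:, 1] = A[:, 0] + 1e-8 * rng.standard_normal(100)
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(100)

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 * ||x||^2 via damped normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

x_reg = tikhonov(A, b, lam=0.1)  # lam chosen arbitrarily for the demo
print(np.linalg.norm(A @ x_reg - b))  # small residual despite ill-conditioning
```

The damping term lam² on the diagonal bounds the solution norm, which is what tames the noise amplification caused by the nearly dependent columns.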
Finally, the results of the two models are compared to show the performance of each in local gravity field modeling. The results reveal that ASHA can model the local gravity field with an accuracy of about 1 mGal, while the RHA method in the best case achieves only about 3 mGal, although the RHA model converges faster than the ASHA model. Comparing the edge effects, they occur in both models and in all directions, but are most significant in the Z direction of the RHA model; one may conclude that the edge effects of RHA are much larger than those of ASHA. Overall, the results show that the ASHA model gives better results for local gravity modelling.
https://jesphys.ut.ac.ir/article_67773_b74ba17c06428627615fe362e063915c.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 44, No. 3, 2018-10-23
Water content inversion of MRS data, a case study of Nineh Mahallat, central Iran
pp. 535-555, DOI: 10.22059/jesphys.2018.248427.1006957
Mahdi Fallah Safari, Ph.D. Student, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran
Mohammad Kazem Hafizi, Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran (ORCID 0000-0002-5634-1141)
Reza Ghanati, Assistant Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran (ORCID 0000-0003-1138-7442)
Journal Article, 2018-01-10

Magnetic resonance sounding (MRS) is a relatively new approach and the only geophysical method that is directly sensitive to subsurface water molecules. MRS is based on the principle of nuclear magnetic resonance (NMR). A wire loop, whose diameter depends on the depth of the target aquifer, is laid out on the ground.
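In NMR-based sounding, the transmitted field oscillates at the Larmor frequency of protons in the local geomagnetic field, f = (γ/2π)·B. A one-line check (the 47,000 nT field strength is an illustrative value, not from the paper):

```python
# Proton gyromagnetic ratio over 2*pi, in Hz per nanotesla.
GAMMA_OVER_2PI = 0.042577

def larmor_hz(b_nt):
    """Larmor frequency (Hz) of protons in a magnetic field of b_nt nanotesla."""
    return GAMMA_OVER_2PI * b_nt

print(f"{larmor_hz(47000):.0f} Hz")  # 2001 Hz: an audio-frequency signal
```

The resulting kilohertz-range frequency is why a simple surface wire loop can serve as both transmitter and receiver.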
The wire loop is used both to transmit the oscillating magnetic field and to receive the MRS signal. The method has proved sufficiently accurate and has a high resolving capability. In the geophysical application of magnetic resonance, groundwater is the target of investigation. Inverting MRS data provides significant information on the depth and thickness of the aquifer, the distribution of water content and, under favorable conditions, the hydraulic conductivity. In this method, water content is defined as the portion of the total subsurface volume occupied by free water, which is unattached to grain walls and can be extracted from the rock; the signal of bound water captured by grains is not included. That is to say, signals related to the bound water absorbed by the grains of the medium are excluded from the calculation. The method is sensitive to the noise level, so the estimation of signal parameters and the inversion play an important role. The inverse problem of MRS is ill-posed, meaning that the solution is not unique. Moreover, within a certain depth range, two layers with different thicknesses and water contents but the same product of the two can return the same theoretical sounding curve. The inversion is carried out with the well-known Tikhonov method. The solution of the MRS inverse problem, like other inverse problems in geophysics, is not a continuous function of the data: a small perturbation of the input data can cause a large perturbation of the model parameters. Consequently, regularization methods should be employed to tackle possible instabilities in the solution process. Besides the kind of regularization, a proper choice of the regularization parameter is essential; various methods are available, and in this paper the L-curve is used.
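The L-curve criterion picks the regularization parameter at the corner of the trade-off curve between residual norm and solution norm. A sketch on a toy problem (the matrix is an illustrative stand-in for the MRS kernel, and the corner is located with a simple nearest-to-corner proxy rather than the usual curvature criterion):

```python
import numpy as np

rng = np.random.default_rng(1)

# A small ill-posed test problem standing in for the MRS kernel
# (the real kernel is an integral over the subsurface; this is illustrative).
A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-3 * rng.standard_normal(40)

# Sweep the regularization parameter; record the L-curve points
# (residual norm ||Ax - b|| versus solution norm ||x||).
lams = np.logspace(-8, 0, 30)
rho, eta = [], []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(12), A.T @ b)
    rho.append(np.linalg.norm(A @ x - b))
    eta.append(np.linalg.norm(x))

# Pick the "corner": the point closest to the lower-left of the
# normalized log-log curve, a proxy for the curvature-based criterion.
r = (np.log(rho) - np.log(rho).min()) / np.ptp(np.log(rho))
e = (np.log(eta) - np.log(eta).min()) / np.ptp(np.log(eta))
best = int(np.argmin(r**2 + e**2))
print(f"L-curve corner at lambda = {lams[best]:.1e}")
```

Very small lambdas fit the noise (large solution norm), very large ones over-smooth (large residual); the corner balances the two.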
From the model-space point of view, there are various schemes for inverting MRS data, including fixed-geometry and variable-geometry approaches, in conjunction with different methods of optimizing the objective function. In the fixed-geometry approach, the model is assumed to have fixed layers whose thickness increases with depth, and only the water content is allowed to vary; the variable-geometry approach assumes a small number of layers in which both water content and layer thickness can vary. To numerically demonstrate the performance of the proposed inversion algorithm, we used a seven-layer model containing three horizontal, homogeneous water-bearing layers with 30% water content. In this paper, a stable and unique solution is sought through the fixed-geometry approach and the imposition of Tikhonov regularization with constraints. After testing the inversion algorithm on synthetic data, field data from Iran and Germany were used to illustrate its use in the field and to verify the model results. The estimated water contents for the synthetic, Iranian and German data show the reasonable efficiency of the proposed strategy.
https://jesphys.ut.ac.ir/article_67779_2b5ce87bd82e9d7d7a254f878ede5415.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X44320181023High resolution Kirchhoff seismic migration via 1-norm regularized least-squaresHigh resolution Kirchhoff seismic migration via 1-norm regularized least-squares5575736778110.22059/jesphys.2018.250955.1006968FAToktamZandPh.D. Student, Department of Earth Physics, Institute of Geophysics, University of Tehran, IranHamid RezaSiahkoohiProfessor, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran0000-0002-9227-2972AliGholamiAssociate Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, IranJournal Article20180218For decades, Kirchhoff migration has been one of the simplest migration algorithms and the most frequently used migration method in industry, owing to its relatively low computational cost and its flexibility in handling acquisition and topography irregularities. The standard seismic migration operator can be regarded as the adjoint of a seismic forward modeling operator, which acts on a set of subsurface parameters to generate the observed data. Such adjoint operators provide only an approximate inverse of the forward modeling operator and recover only the times of the events (Claerbout, 1992). 
They cannot retrieve the amplitudes of reflections, which decreases the resolution of the final migrated image. The standard seismic migration (adjoint) operators can, however, be modified to better approximate the inverse operators. Least-squares migration (LSM) techniques have been developed to fully invert the forward modeling procedure by minimizing the difference between observed and modeled data in a least-squares sense. LSM is able to reduce (Kirchhoff) migration artifacts, enhance the resolution and retrieve seismic amplitudes. Although implementing LSM instead of conventional migration leads to resolution enhancement, it also brings new numerical and computational challenges which need to be addressed properly. Due to the ill-conditioned nature of the inverse operator and the incompleteness of the data, the method generates unavoidable artifacts which severely degrade the resolution of the migrated image obtained by the non-regularized LSM method. The instability of LSM methods motivates a regularized algorithm capable of including reasonable physical constraints. With the seismic wavelet included in the migration operator, migration generates the earth reflectivity image, which can be considered a sparse image; applying a sparseness constraint, e.g. via minimization of the 1-norm of the reflectivity model, therefore helps to regularize the model and prevent noisy artifacts (Gholami and Sacchi, 2013). <br />In this article, based on Bregmanized operator splitting (BOS), we propose a high resolution migration algorithm that applies sparseness constraints to the solution of least-squares Kirchhoff migration (LSKM). Bregmanized operator splitting is employed as the solver of the resulting sparsity-promoting LSKM for its simplicity, efficiency, stability and fast convergence. Independence of matrix inversion and a fast convergence rate are the two main properties of the proposed algorithm. 
Numerical results from field and synthetic seismic data show that migrated sections generated by this 1-norm regularized Kirchhoff migration method are more focused than those generated by conventional Kirchhoff/LS migration. <br />Regular spatial sampling of the data at the Nyquist rate is another major challenge, which may not be achieved in practice due to coarse source-receiver distributions and possible gaps in the recording lines. The proposed model-based migration algorithm is able to handle these incompleteness issues and is stable in the presence of noise in the data. In this article, we tested the performance of the proposed method on synthetic data with coarse sampling and acquisition gaps. The results confirm that the proposed sparsity-promoting migration is able to generate accurate migrated images from incomplete and inaccurate data.
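As a concrete, highly simplified illustration of sparsity-promoting least-squares migration, the sketch below replaces the Kirchhoff operator with a toy convolution and the authors' BOS solver with plain iterative soft-thresholding (ISTA); every operator, size and threshold here is an assumption, not the paper's implementation.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the 1-norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, d, mu, n_iter=500):
    """Sparsity-promoting least squares  min_x ||Ax - d||^2/2 + mu*||x||_1.
    ISTA is used only as a simple stand-in solver; the paper employs
    Bregmanized operator splitting instead."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from Lipschitz bound
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + t * A.T @ (d - A @ x), t * mu)
    return x

# Toy 'modeling operator': convolution with a short symmetric wavelet.
n = 120
w = np.array([-0.1, -0.3, 1.0, -0.3, -0.1])
A = np.zeros((n, n))
for i in range(n):
    for k, wk in enumerate(w):
        j = i + k - 2
        if 0 <= j < n:
            A[i, j] = wk
r_true = np.zeros(n)
r_true[[30, 60, 90]] = [1.0, -0.7, 0.5]        # sparse reflectivity model
d = A @ r_true
r_adj = A.T @ d               # plain 'migration' (adjoint): blurred image
r_lsm = ista(A, d, mu=0.01)   # regularized LSM: sharper, sparser image
```

The adjoint image smears each spike over the wavelet's autocorrelation, while the 1-norm solution recovers a few isolated spikes, mirroring the focusing effect described in the abstract.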
https://jesphys.ut.ac.ir/article_67781_dad2a03e6770921976d649dcbac4a4d5.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X44320181023Estimation of regularization parameter by active constraint balancing for 2D inversion of gravity dataEstimation of regularization parameter by active constraint balancing for 2D inversion of gravity data5755836784610.22059/jesphys.2018.252876.1006978FAMeysamMoghadasiM.Sc. Student, Department of Petroleum and Geophysics, Faculty of Mining, Petroleum and Geophysics Engineering, Shahrood University of Technology, Shahrood, IranAliNejati KalatehAssociate Professor, Department of Petroleum and Geophysics, Faculty of Mining, Petroleum and Geophysics Engineering, Shahrood University of Technology, Shahrood, IranMohamadRezaieAssistant Professor, Faculty of Civil Engineering, Malayer University, Malayer, IranJournal Article20180227Inversion is very common in the interpretation of practical gravity data. The goal is to estimate the density distribution of an unknown subsurface model from a set of gravity observations measured on the surface. As in the inversion of other geophysical data, the regularization parameter is one of the parameters that most affect obtaining an optimal model in gravity inversion. For estimating the optimum regularization parameter, the statistical criterion of Akaike's Bayesian Information Criterion (ABIC) is usually used; in most inversion methods this parameter is estimated empirically. 
The choice of the regularization parameter, which balances the minimization of the data misfit against the model roughness, can be critical for achieving both resolution and stability. In this paper, Active Constraint Balancing (ACB) is used as a new method for estimating the regularization parameter in two-dimensional (2D) inversion of gravity data. The technique is built on smoothness-constrained least-squares inversion: introducing the Lagrangian multiplier as a spatially dependent variable in the regularization term, we can balance the constraints used in the inversion. The spatially varying Lagrangian multipliers (regularization parameters) are obtained from a parameter resolution matrix and a Backus-Gilbert spread function analysis. To estimate the regularization parameter by the ACB method, the resolution matrix R must first be computed. The parameter resolution matrix R is obtained in the inversion process as the pseudo-inverse multiplied by the kernel G:
R = G⁻ᵍ G,    (1)
where G⁻ᵍ denotes the generalized (pseudo-) inverse of G.
The spread function, which measures the degree to which the ith model parameter is not resolvable, is defined as
SP(i) = Σⱼ₌₁ᴹ [a(i,j) w(i,j) R(i,j)]²,    (2)
where M is the total number of inversion parameters, w(i,j) is a weighting factor defined by the spatial distance between the ith and jth model parameters, and a(i,j) is a factor which accounts for whether a constraint or regularization is imposed on the ith parameter and its neighboring parameters. In other words, the spread function defined here is the sum of the squared, spatially weighted spread of the ith model parameter with respect to all of the model parameters, excluding those upon which a smoothness constraint is imposed. In this approach, the regularization parameter λ(x,z) is set by log-linear interpolation:
log λ(i) = log λ_min + [(log SP(i) − log SP_min) / (log SP_max − log SP_min)] (log λ_max − log λ_min),    (3)
where SP_min and SP_max are the minimum and maximum values of the spread function SP, respectively, and λ_min and λ_max are the minimum and maximum values of the regularization parameter λ(x,z), which must be provided by the user. With this method, a smaller value of λ(x,z) is automatically assigned to a highly resolvable model parameter, which corresponds to a smaller value of the spread function in the inversion process, and vice versa. Users choose these bounds by setting the variables LambdaMin and LambdaMax. To this end, an algorithm that estimates this parameter was developed. The validity of the proposed algorithm was evaluated with gravity data generated from a synthetic model; the algorithm was then used for the inversion of real gravity data from the Matanzas Cr deposit. The results of the 2D inversion of the gravity data from this mine show that the algorithm can provide good estimates of anomalous density structures in the subsurface.
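The ACB rule (small spread, small λ; large spread, large λ) can be sketched as follows. The uniform weights, the toy resolution matrix and all sizes are assumptions for illustration only.

```python
import numpy as np

def acb_lambda(R, lam_min, lam_max, W=None, A=None):
    """ACB rule: map each parameter's spread to a regularization parameter
    by log-linear interpolation between user-chosen bounds (LambdaMin and
    LambdaMax in the text).  W (spatial-distance weights) and A (constraint
    mask) are taken as uniform here, which is an assumption."""
    M = R.shape[0]
    W = np.ones((M, M)) if W is None else W
    A = 1.0 - np.eye(M) if A is None else A      # exclude the parameter itself
    sp = np.sum((A * W * R) ** 2, axis=1)        # spread function SP(i)
    t = (np.log(sp) - np.log(sp.min())) / (np.log(sp.max()) - np.log(sp.min()))
    return lam_min * (lam_max / lam_min) ** t    # small spread -> small lambda

# Toy resolution matrix: rows further down the model are noisier, i.e. the
# deeper cells are less well resolved and should receive a larger lambda.
rng = np.random.default_rng(1)
M = 8
R = np.eye(M) + 0.02 * np.arange(1, M + 1)[:, None] * rng.standard_normal((M, M))
lam = acb_lambda(R, lam_min=1e-3, lam_max=1e-1)
```

By construction the best-resolved cell receives exactly λ_min and the worst-resolved cell exactly λ_max, with the rest spaced log-linearly in between.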
https://jesphys.ut.ac.ir/article_67846_72be5c510b34b759aa6d738a1dc0728c.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X44320181023Determination of earthquake inter-event empirical Green’s function using virtual seismometer method, Case study: Kahurak Fault in the Rigan area, southeastern IranDetermination of earthquake inter-event empirical Green’s function using virtual seismometer method, Case study: Kahurak Fault in the Rigan area, southeastern Iran5855946778010.22059/jesphys.2018.255351.1006998FATaghiShirzadPost-Doc, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran0000-0002-8382-4990MahsaAfraM.Sc. Graduated, Department of Earth Physics, Institute of Geophysics, University of Tehran, IranJournal Article20180505Analysis of earthquake events provides an efficient tool for extracting the empirical Green's function (EGF) between pairs of earthquakes by an interferometry approach. 
Because of the sparse distribution of stations or low seismicity, many classical seismic studies (earthquake-receiver systems, ambient seismic noise, etc.) may yield poor or noisy tomographic maps. However, inter-event EGFs between two earthquake locations can be retrieved by treating the events as virtual stations, as first outlined by Curtis et al. (2009). These EGFs are equivalent to the waveform that an impulse at one receiver location would produce at another receiver. Several researchers (e.g. Hong and Menke, 2006; Curtis et al., 2009) used a source-receiver reciprocity theorem to show that inter-event EGFs can be retrieved when the events' waveforms are recorded by a set of receivers surrounding them. According to this theorem, the stacked cross-correlations of the event-pair waveforms recorded by this set of receivers are equivalent to the estimated EGFs (Curtis et al., 2009). The technique can therefore provide new insight and a useful tool for studying fault planes and the Earth's interior where real receivers cannot be installed. The fault plane on which earthquake ruptures occur at depth is often interpreted as a transitional zone characterized by asperities and barriers (Aki, 1984); the aftershock-event interferometry approach can thus be applied to study the fault plane by retrieving accurate, stable and reliable inter-event EGFs. After the Rigan earthquake of 20 December 2010 (Mw 6.5) in Kerman province (southeastern Iran), aftershock events extended along the hidden part of the Kahurak Fault. In this paper, cross-correlation of aftershock events was applied to retrieve the inter-event EGF on the hidden part of the Kahurak Fault plane in the Rigan area. The event pair used as an example was selected based on several criteria, the most important being the approximately equal depth of the two events, which eases the processing. 
Aftershock event-pair projection and data processing are similar to those explained in detail by Bensen et al. (2007). The mean and trend were removed and the data were decimated to 10 sps. Time- and frequency-domain normalizations were then applied to suppress the influence of instrument irregularities and high-energy events. After the cross-correlation and stacking procedure, the event-pair EGF signal was extracted. Then, 1D and 2D synthetic signals were generated using Computer Programs in Seismology (Herrmann and Ammon, 2013) and SPECFEM, respectively. The horizontal velocity at a depth of ~4 km calculated by Shirzad et al. (2017) was used as input for both the 1D and 2D synthetic modeling. Comparison between the inter-event EGF and the synthetic signals indicates that the inter-event EGF agrees with the synthetic models, and that the inter-event EGF signal propagates on the hidden part of the Kahurak Fault plane. The correlation coefficients of the 1D and 2D synthetic signals with the inter-event 031-163 EGF signal are of the order of ~75% and ~80% within the signal window. In conclusion, these inter-event EGFs can be used for investigating lateral variations in 2D maps of surface-wave group and/or phase velocities.
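The core operation, stacking event-pair cross-correlations over a set of surrounding receivers, can be sketched with synthetic records. The wavelet, the delays and the geometry-free 1D setup are illustrative assumptions, not the study's data.

```python
import numpy as np

def inter_event_egf(rec_a, rec_b):
    """Stack, over all receivers, the cross-correlations of the waveforms of
    two events A and B recorded at each receiver (the virtual-seismometer
    idea: the stack approximates the Green's function between the events)."""
    n = rec_a.shape[1]
    stack = np.zeros(2 * n - 1)
    for a, b in zip(rec_a, rec_b):
        stack += np.correlate(b, a, mode="full")
    return stack / len(rec_a)

# Toy setup: a common wavelet arriving at each receiver with receiver-dependent
# delays; event B is consistently 15 samples 'farther' than event A.
rng = np.random.default_rng(2)
n, n_rec = 200, 24
wavelet = np.exp(-0.5 * ((np.arange(21) - 10) / 2.0) ** 2)
rec_a = np.zeros((n_rec, n))
rec_b = np.zeros((n_rec, n))
for k in range(n_rec):
    ta = 40 + rng.integers(0, 40)          # arrival time of event A at receiver k
    rec_a[k, ta:ta + 21] = wavelet
    rec_b[k, ta + 15:ta + 36] = wavelet    # same path plus the inter-event delay
egf = inter_event_egf(rec_a, rec_b)
lag = int(np.argmax(egf)) - (n - 1)        # recovers the 15-sample delay
```

Although each receiver sees a different arrival time, the A-to-B differential time is common to all, so the stacked correlation peaks at the inter-event travel time while receiver-specific contributions average out.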
https://jesphys.ut.ac.ir/article_67780_d3e7cf4f915357c6ae9244f92603ce31.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X44320181023Efficiency investigation of tesseroid based methods for computing gravimetric terrain correctionEfficiency investigation of tesseroid based methods for computing gravimetric terrain correction5956066777810.22059/jesphys.2018.256945.1007004FAMehdiGoliAssistant Professor, Civil Engineering Faculty, Shahrood University of Technology, IranJournal Article20180506The gravitational effect of topographical masses is one of the important components of the gravity field and plays a key role in geophysical and geodetic studies. For geophysical interpretation, it is necessary to remove the effect of topography, as a disturbing factor, from the observed gravity data. In geodetic applications, the solution of a geodetic boundary-value problem such as Stokes's requires mass-free space above the geoid. 
In the present study, the efficiency of different tesseroid-based methods is compared with that of the well-known rectangular prism for evaluating gravimetric terrain corrections up to a distance of 1.5 arc-degrees, known as the Hayford-Bowie zone. For this purpose, the vertical derivative of the Newton integral is used as the mathematical formula and a digital elevation model (DEM) as the data. In computing the topographic effect, two factors are involved: 1- the integral element (point, line, plane, rectangular prism, tesseroid, etc.) and 2- the geometry of the topography (planar, spherical or ellipsoidal), which complicates the subject. The finite element method is a general, standard method for estimating the terrain correction: the gravitational effect of the topography is evaluated as the sum of the gravitational effects of the smaller elements.
A tesseroid is the geometrical body bounded by two concentric spheres, two meridional planes and two parallels. This element uses the spherical geometry of the topography, which introduces a relative error of about 1% (Novak and Grafarend, 2005). With this element, the Newton integral and its radial derivatives have no analytic solution, and numerical integration must be applied. The rectangular prism element has been used frequently to compute the terrain correction in various studies; it uses planar geometry and has an analytical solution for Newton's integral and its derivatives. Recently, many studies have investigated tesseroid-based methods for computing the potential and attraction of topographic masses; see Fukushima (2017), Grombein et al. (2013), Heck and Seitz (2007) and Uieda et al. (2016). Fukushima's method utilizes 3D numerical double-exponential integration, the HS method uses the Taylor series up to second-order terms, and the PM method is the zero-order approximation of the HS method. Simulation studies in the literature have demonstrated the higher accuracy of tesseroid-based methods compared with the prism method; however, their performance had not been tested for the gravimetric terrain correction. The main goal of this study is to investigate the efficiency, in terms of speed and accuracy, of four tesseroid methods, Fukushima, Martinec-Vanicek (MV), Heck-Seitz (HS) and point mass (PM), compared with the prism in the Hayford-Bowie zone.
To investigate the computational accuracy, we used a bounded spherical shell with constant thickness and density, for which an exact analytical solution exists. The thickness of the shell was chosen as 1000 m, and the computation point is located at the origin of the bounded spherical shell on the equator, at spherical coordinates (0, 0, 1000). The terrain-correction computation is discretized into innermost, inner and outer zones with different element sizes; the contribution of the innermost zone is over 75% of the total effect. The numerical results indicate that the prism outperforms the tesseroid-based methods for the topographic effect in all three zones, especially for masses in the neighborhood of the computation point. To account for the effect of the Earth's curvature, the elevation of the computation point is corrected using a simple formula. Our calculations also show that, in the innermost zone, the topography should be discretized into 30-m elements to achieve the 10 µGal level of accuracy.
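The spherical-shell benchmark described above can be reproduced in miniature: a homogeneous shell has a closed-form external attraction against which a point-mass (zero-order tesseroid) quadrature can be checked. The shell thickness follows the abstract; the density, the computation-point height and the grid size are assumptions chosen so that the point-mass rule is adequate (the paper evaluates much nearer points, where this approximation degrades and the prism performs better).

```python
import numpy as np

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)

def shell_attraction_exact(r1, r2, rho, r):
    """Exact attraction of a homogeneous spherical shell at an external point."""
    m = 4.0 / 3.0 * np.pi * (r2**3 - r1**3) * rho
    return G * m / r**2

def shell_attraction_point_mass(r1, r2, rho, r, n_lat=90, n_lon=180):
    """Point-mass (zero-order tesseroid) rule: each tesseroid of the shell is
    condensed into a point mass at its geometric centre.  The computation
    point is put on the polar axis purely for symmetry/simplicity."""
    rm = 0.5 * (r1 + r2)
    lat = np.linspace(-np.pi / 2, np.pi / 2, n_lat + 1)
    lon = np.linspace(0.0, 2 * np.pi, n_lon + 1)
    latc = 0.5 * (lat[:-1] + lat[1:])[:, None]
    lonc = 0.5 * (lon[:-1] + lon[1:])[None, :]
    dlat, dlon = lat[1] - lat[0], lon[1] - lon[0]
    mass = rho * rm**2 * np.cos(latc) * dlat * dlon * (r2 - r1)  # cell masses
    x = rm * np.cos(latc) * np.cos(lonc)
    y = rm * np.cos(latc) * np.sin(lonc)
    z = rm * np.sin(latc) + 0.0 * lonc
    dist = np.sqrt(x**2 + y**2 + (z - r) ** 2)
    return np.sum(G * mass * (r - z) / dist**3)  # axial attraction component

r1, r2, rho = 6371000.0, 6372000.0, 2670.0       # 1000 m shell, crustal density
r = 1.2 * 6371000.0                              # point well above the shell
g_exact = shell_attraction_exact(r1, r2, rho, r)
g_pm = shell_attraction_point_mass(r1, r2, rho, r)
rel_err = abs(g_pm - g_exact) / g_exact
```

For a distant point the midpoint quadrature agrees with the closed form to well under a percent; moving the point to within a kilometre of the shell, as in the paper's test, is exactly where the near-zone error of the point-mass rule blows up.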
https://jesphys.ut.ac.ir/article_67778_30eb075e7341c934fce8d786f51035d2.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X44320181023Study of tropopause folding frequency and its seasonal changes during 2013-2015 emphasizing over Southwest AsiaStudy of tropopause folding frequency and its seasonal changes during 2013-2015 emphasizing over Southwest Asia6076246485210.22059/jesphys.2018.234992.1006909FARezaBorhaniPh.D. 
Student, Department of Space Physics, Institute of Geophysics, University of Tehran, Iran; Farhang Ahmadi-Givi, Associate Professor, Department of Space Physics, Institute of Geophysics, University of Tehran, Iran; Sarmad Ghader, Associate Professor, Department of Space Physics, Institute of Geophysics, University of Tehran, Iran (ORCID 0000-0001-9666-5493); Alireza Mohebalhojeh, Professor, Department of Space Physics, Institute of Geophysics, University of Tehran, Iran (ORCID 0000-0002-5906-8486). Journal Article, 2017-06-18.
This research aims to study the global distribution of tropopause folding frequency and its seasonal changes, with emphasis on Southwest Asia, for a 3-year period from January 2013 to December 2015. For this purpose, the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis data set, including wind, temperature and geopotential height, was used. The horizontal resolution of the initial fields is 1×1 degrees in the longitudinal and latitudinal directions, prepared operationally every six hours at 60 levels. From the initial fields, secondary fields such as potential vorticity and potential temperature were calculated. Of the 60 vertical levels, about 19 levels extending from 600 to 100 hPa cover the depth of all tropopause folding events studied here. In this research, we define the 2-PVU potential vorticity surface as the dynamical tropopause (1 PVU corresponds to 10<sup>-6</sup> m<sup>2</sup> s<sup>-1</sup> K kg<sup>-1</sup>). Identification of tropopause folding is based on the algorithm developed by Sprenger et al. (2003) and Gray (2003) and refined by Škerlak et al. (2014), using pseudosoundings at each grid point. A 3-D labeling algorithm is used to distinguish between stratospheric and tropospheric air masses and to label them according to their PV values. After labeling, tropopause folds are identified at every grid point from the vertical profile of the label field, as regions of multiple crossings of the dynamical tropopause.
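The pseudosounding criterion, a fold wherever the vertical profile crosses the 2-PVU surface more than once, can be sketched as a simple crossing count. This toy 1-D version is only an illustration of the criterion, not the Škerlak et al. labeling code, and the sample profile values are invented:

```python
def tropopause_crossings(pv_profile, threshold=2.0):
    """Count crossings of the PV threshold along a vertical pseudosounding
    (PV in PVU, ordered bottom to top). More than one crossing of the
    dynamical tropopause indicates a fold."""
    signs = [pv - threshold >= 0 for pv in pv_profile]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def has_fold(pv_profile, threshold=2.0):
    return tropopause_crossings(pv_profile, threshold) > 1

# Invented example profiles (bottom -> top), values in PVU:
normal = [0.3, 0.5, 0.8, 1.5, 3.0, 6.0]        # single crossing: no fold
folded = [0.3, 0.5, 2.5, 1.2, 0.9, 3.5, 7.0]   # stratospheric tongue: 3 crossings
print(tropopause_crossings(normal), has_fold(normal))   # 1 False
print(tropopause_crossings(folded), has_fold(folded))   # 3 True
```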
The frequency of folds at each grid point over a chosen period is calculated as the number of folding instances divided by the total number of 6-hourly instances in the corresponding season, and finally expressed as a percentage. According to this algorithm, tropopause folds are classified into three categories: shallow, medium and deep.
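The frequency bookkeeping reduces to a count over the 6-hourly analyses, and the depth classes can be sketched with pressure-depth cut-offs. The 200 and 350 hPa thresholds below are illustrative assumptions for this sketch, not values quoted in the text:

```python
def fold_frequency_percent(fold_flags):
    """Fold frequency at one grid point: number of 6-hourly analyses with
    a detected fold divided by the total, expressed as a percentage."""
    return 100.0 * sum(fold_flags) / len(fold_flags)

def classify_fold(depth_hpa, shallow_max=200.0, deep_min=350.0):
    """Classify a fold by its pressure depth. The 200/350 hPa cut-offs
    are assumptions of this sketch, not values from the text."""
    if depth_hpa < shallow_max:
        return "shallow"
    if depth_hpa < deep_min:
        return "medium"
    return "deep"

# One 90-day season at one grid point: 90 x 4 = 360 six-hourly instances
flags = [False] * 342 + [True] * 18
print(fold_frequency_percent(flags))               # 5.0 (percent)
print(classify_fold(120.0), classify_fold(400.0))  # shallow deep
```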
The analysis of the spatio-temporal distribution of tropopause folds shows that the frequency of folding events over subtropical and mid-latitude regions (between 20° and 40° north and south) is higher than at other latitudes in both the Northern and Southern Hemispheres, and that their frequency increases markedly in winter. Tropopause foldings in the Northern Hemisphere winter appear as a relatively narrow band at subtropical latitudes that zonally encircles the whole hemisphere, while in summer the foldings are concentrated in the subtropical region of the Eastern Hemisphere. Also, tropopause foldings occur mainly as the shallow type in the subtropical region but as medium or deep types at higher latitudes. Foldings at high latitudes are attributed to large-scale deformation fields, as noted by Holton and Hakim (2013) and confirmed by water vapor satellite images, while ageostrophic frontal circulations affect the tropopause deformation at mid latitudes.
The other noticeable point is that the Southwest Asia region has positive anomalies of tropopause folding frequency annually, relative to the Northern Hemisphere mean. This can be partly due to Rossby wave breaking, as pointed out by Martius et al. (2007) and Gabriel and Peters (2008). These anomalous values of folding frequency change with season and reach their maximum in summer. Two regions with maximum folding frequency, more than 5 times the Northern Hemisphere mean, are seen over Iran-Afghanistan and the eastern Mediterranean Sea in June. The increase of folding frequency in Southwest Asia during summer can be related mainly to the formation and persistence of the monsoon anticyclone over the subtropical region of the Indian Ocean (Tyrlis et al., 2013) and partly to baroclinic instability events. Results of the case study of tropopause foldings in June 2015 show the existence of two strong jet streams in the aforementioned regions. Also, in meridional cross-sections of the wind and PV fields, two principal areas of tropopause folding are seen to the west of and below the jet stream locations. As expected, the potential temperature maps indicate the existence of marked baroclinic regions associated with the tropopause foldings.
https://jesphys.ut.ac.ir/article_64852_3c47cff7310c9be6c17790a4cd19c3a3.pdf

World Teleconnection and Regional Teleconnections of Iran
pp. 625-640, article 66079, DOI: 10.22059/jesphys.2018.238493.1006920
Reza Doostan, Assistant Professor, Department of Geography, Ferdowsi University of Mashhad, Mashhad, Iran (ORCID 0000-0003-1436-7295). Journal Article, 2017-09-27.
Teleconnection is defined as a meaningful relationship between the time variations of two meteorological patterns that are far from each other, and it is an important principle in climate science for the explanation of meteorological phenomena. The term was first used in climate studies by Angstrom (1935). Later, Wallace and Gutzler (1981) defined the concept as a meaningful correlation between time series, of a month or longer, of climate parameters at distant locations. Teleconnections are associated with anomalies of the large-scale atmospheric and hemispheric circulation. Therefore, it is important to recognize the teleconnections affecting the regional climate. In this regard the question is: what are the most important global teleconnections for the cold seasons in Iran? To find an answer, the monthly time series of teleconnections close to Iran and around the world from 1950 to 2010 were selected and the Pearson correlation method was used. The correlations indicated that the global phenomena associated with regional teleconnections are established in the North Atlantic, Europe and Western Siberia, while the Pacific oceanic centers have a weak connection with the regional teleconnections in the cold period.
The most important phenomena in the middle layers of the atmosphere are the North Sea-Caspian Pattern (NCP) and the North Atlantic Oscillation (NAO), associated with the regional teleconnection of Western Europe-North Caspian. In the positive phase of the North Atlantic Oscillation and the North Sea-Caspian Pattern, precipitation increases and temperature decreases in Iran, while in the negative phase, temperature increases and precipitation decreases. These two phenomena are important and reliable predictors for Iran. At ground level, the Arctic Oscillation (AO), Scandinavia (SCAND), East Atlantic-West Russia (EA-WR) and East Atlantic (EA) patterns, related to the regional teleconnections of Northern Europe, Northern Siberia and Central Asia, are the most important surface phenomena for the climate of Iran. In the negative phase of the Arctic Oscillation and Scandinavia patterns, precipitation increases and temperature decreases in Iran, and in the positive phase of these two, the climate of Iran experiences drier conditions. Also, in the positive phase of the East Atlantic and East Atlantic-West Russia patterns, temperature decreases and precipitation increases, and in the negative phase, temperature increases and precipitation decreases. It appears that the global and regional teleconnections in Eurasia and the North Atlantic are not completely separate. Therefore, the global teleconnections associated with the climate of Iran are: the North Atlantic Oscillation, the North Sea-Caspian Pattern, the Arctic Oscillation, Scandinavia, East Atlantic-West Russia and East Atlantic. Often, the cold-season teleconnections of Iran are related to meridional westerlies, along with blocking and cut-off lows established from Western Europe and Central and Western Siberia toward lower and higher latitudes in the same geographic locations.
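The screening step used here is a plain Pearson correlation between monthly index series. A minimal sketch, with invented short anomaly series standing in for a teleconnection index and a regional climate series, is:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Invented monthly anomalies: a teleconnection index vs. a regional series
nao  = [1.2, -0.5, 0.3, -1.1, 0.8, -0.2]
rain = [30.0, 8.0, 18.0, 2.0, 26.0, 12.0]
r = pearson_r(nao, rain)
print(round(r, 2))   # strongly positive for this toy pair
```

In practice the sign and magnitude of r over 1950-2010 monthly series is what distinguishes the positive-phase wet/cool response from the negative-phase dry/warm response described above.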
In climate studies, one should not ignore the role of geographic features on the surface of the Earth while considering the above-mentioned issue.
https://jesphys.ut.ac.ir/article_66079_fbec7ad6e4b45d34fcd09083ec7cc7f1.pdf

Avalanche potential forecast using a numerical weather prediction model (Case study: Shahrestanak zone, 26-28 December 2016)
pp. 641-658, article 65884, DOI: 10.22059/jesphys.2018.246101.1006945
Sahar Tajbakhsh, Assistant Professor, Atmospheric Survey Research Group, Atmospheric Science and Meteorological Research Center (ASMERC), Tehran, Iran; Amirhosein Nikfal, Meteorology expert, Atmospheric Science and Meteorological Research Center (ASMERC), Tehran, Iran. Journal Article, 2017-11-22.
Avalanches are likely to happen in winter in the mountainous regions of Iran, and their timely prediction can help to improve road traffic safety. The aim of this study is to provide an avalanche first guess using Numerical Weather Prediction (NWP) model outputs. Since meteorological observations in mountainous areas are very scarce and access to ground-measured snow data is not feasible, using numerical models to simulate the associated data and predict the avalanche first guess may be a reliable approach. For this purpose, three avalanche events which occurred on the Chalous Road (Kanduan-Gachsar) were investigated synoptically as case studies. Then the precipitation (snow water equivalent, snow thickness), temperature and wind outputs of the Weather Research and Forecasting (WRF) model were analysed based on the Spangler classification tables in order to determine the potential for avalanche events.
Snow density, snow water equivalent and snow depth are the most important snow-cover variables, with fundamental applications in predicting avalanche and flood events. The predictions in this study are based on WRF model numerical simulations. Spangler et al. (2009) presented a model for estimating avalanche potential based on three components: the region's climatic conditions, a survey of present weather, and forecast avalanche indices. For verification of the model, threshold values for precipitation, temperature and wind were calculated in Colorado. In the present study, only the third part of the Spangler method (prediction of avalanche occurrence potential) was applied to make a first guess of avalanche occurrence potential in the mountainous regions of the country. The period 26-29 December 2016, with heavy snow conditions and multiple avalanches, was considered as a case study. The area under study is Shahrestanak (36.10°N, 51.31°E) on the Chalous road, which experienced more than 10 avalanche events during winter 2016. Its elevation is about 2230 to 2240 meters and it is located in Alborz Province. The two regions of Colorado and Alborz have similar climatic conditions based on the Gutten climate classification, but their temperature and type of snowpack differ according to the Sturm snowpack classification. The snowpack type in the Colorado area is prairie (thin, moderately cold snow cover with substantial wind drifting and a maximum depth of about 1 meter), while the snowpack type in the Alborz area is ephemeral (thin, warm snow that melts soon, with a depth between zero and 50 cm). Hence, it seems that the thresholds of the meteorological indicators related to avalanche potential in the Alborz region could be slightly lower than those in the Colorado area. The synoptic study was done using mean daily ERA-Interim data.
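The link between the three snow quantities is simple: snow water equivalent expressed in millimetres of liquid water equals snow depth in metres times bulk snow density in kg m^-3, because the conversion to millimetres cancels the water density of 1000 kg m^-3. A minimal sketch (the 200 kg m^-3 density value is an illustrative assumption, not from the text):

```python
def swe_mm(depth_m, density_kg_m3):
    """Snow water equivalent in mm of liquid water.
    SWE[m] = depth * rho_snow / rho_water; converting to mm (x1000)
    cancels rho_water = 1000 kg/m^3, so SWE[mm] = depth[m] * rho_snow."""
    return depth_m * density_kg_m3

# 0.5 m of snow at an assumed bulk density of 200 kg/m^3
print(swe_mm(0.5, 200.0))   # 100.0 mm of water
```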
In the present study, the patterns of sea level pressure, 1000-500 hPa thickness, 500 hPa geopotential height and temperature with relative vorticity, 700 hPa vertical velocity, and 850 hPa relative humidity and winds were studied in order to identify the weather conditions during the avalanche period. Also, using the WRF mesoscale model (ver. 3.9), the atmospheric conditions were simulated for 4 days (26-29 December 2016). Temperature, wind, cloudiness, snow thickness, snow water equivalent and snow cover (the most important avalanche indicators) were determined using the WRF numerical prediction model. Then the potential for avalanche occurrence was investigated according to the Spangler model.
Here, we attempt to investigate the avalanche potential in a case study using an NWP model, and to determine the probability of avalanche occurrence based on one of the existing methods. The case study was selected such that no significant atmospheric conditions were observed in the area; there was only one avalanche over 24 hours on the Chalous Road, at the Shahrestanak position. The results showed that the weather-forecast patterns in this study agree with the actual weather conditions, and there is no specific signature of avalanche occurrence in these patterns. The presented thresholds for avalanche potential match well in this case study. Winds of less than 9 m/s, snow depth of less than 30 cm in 24 hours, temperature of less than 8 °C in 12 hours, unstable temperature in the range of -4 to -10 °C, and a cloudy sky during the day without rain indicate the probability of an avalanche, not its certainty. To predict a high potential for avalanche, the atmospheric thresholds should be higher than these values.
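The threshold values quoted in the text could be wrapped as a rule-based first guess. This is a sketch of the decision logic only: the variable names, the subset of thresholds encoded, and the choice to require all conditions simultaneously are assumptions of this sketch, not the paper's procedure.

```python
def avalanche_first_guess(wind_ms, snow_24h_cm, temp_c, cloudy_no_rain):
    """Rule-based first guess of avalanche potential from NWP output,
    using threshold values quoted in the text. Requiring all conditions
    to hold at once is an assumption of this sketch."""
    return (wind_ms < 9.0                 # winds below 9 m/s
            and snow_24h_cm < 30.0        # 24-h snow depth below 30 cm
            and -10.0 <= temp_c <= -4.0   # 'unstable temperature' band
            and cloudy_no_rain)           # cloudy daytime sky, no rain

print(avalanche_first_guess(5.0, 20.0, -6.0, True))    # True: potential flagged
print(avalanche_first_guess(12.0, 20.0, -6.0, True))   # False: wind too strong
```

As the text stresses, a positive first guess flags probability, not certainty; the flag would be a trigger for closer inspection, not a forecast.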
https://jesphys.ut.ac.ir/article_65884_7be8675a6f9d57080bcbb8cb5e32da7d.pdf

Study of the Turbulence Characteristics of the Bottom Ekman Layer in the Persian Gulf using Large Eddy Simulation Approach
pp. 659-670, article 67782, DOI: 10.22059/jesphys.2018.249600.1006965
Mohamad Farmanara, Ph.D.
Student, Department of Marine and Atmospheric Science (non-Biologic), Faculty of Marine Science and Technology, University of Hormozgan, Bandar Abbas, Iran; Hossein Malakooti, Associate Professor, Department of Marine and Atmospheric Science (non-Biologic), Faculty of Marine Science and Technology, University of Hormozgan, Bandar Abbas, Iran (ORCID 0000-0003-2208-1238); Smaeyl Hassanzadeh, Professor, Department of Physics, Faculty of Sciences, University of Isfahan, Isfahan, Iran. Journal Article, 2018-01-13.
In this study, the turbulent properties of the bottom Ekman layer of the Persian Gulf are studied using the Parallelized Large-Eddy Simulation Model (PALM). Three numerical experiments were carried out with emphasis on stratification effects: a reference experiment (EXP C) without vertical gradients of potential temperature and salinity, and two experiments with vertical gradients of potential temperature and salinity. The initial values of the surface potential temperature and salinity and their vertical gradients, obtained from in situ data, were chosen according to the August (EXP A) and November (EXP N) conditions of the Persian Gulf. The eastern part of the Iranian coastal area of the Persian Gulf near the Strait of Hormuz was chosen because a considerable westward current persists throughout the year in this area, and the sea there is deep enough to observe a distinctive pycnocline layer separating the surface and bottom mixed layers. The domain size is 200 m × 200 m × 100 m in the x, y and z directions, respectively. A pycnocline layer 40 m and 20 m deep was considered for August and November, respectively. A geostrophic current with a speed of 0.15 m s^(-1) is assumed to flow in the x direction over the rough sea bed. The bottom boundary condition for the momentum flux was set to Dirichlet in order to create a no-slip condition at the sea bed. The simulations were carried out for 48 h to include at least two inertial periods and avoid inertial oscillations.
The results showed that stratification limits the bottom Ekman layer depth, which does not grow with time, whereas in EXP C, where the fluid is neutral, a rapid growth of the bottom Ekman layer is evident during the first 20 h and its maximum depth reaches 60 m. The Ekman cross-stream current component cannot entrain into the pycnocline layer and vanishes at the bottom of that layer. In autumn, when the pycnocline layer is thinner, the Ekman spiral is broadened and the magnitude of the Ekman cross-stream current component is 25% larger compared to summer. The maximum value of the Ekman cross-stream current component is about 0.04 m s^(-1) in EXP C and EXP A, while it is about 0.05 m s^(-1) in EXP N. The hodograph of the horizontal velocity in EXP C is closest to the theoretical Ekman solution. The stream-wise component of the horizontal velocity decreases at the same rate near the sea bed in all experiments, which implies that the stratification does not have much effect on the bottom stress. It is concluded that when an intense pycnocline exists and the bottom mixed layer is thin, less time is needed to trigger the turbulence. The bulk turbulent kinetic energy is the same in all experiments. Since the bottom boundary is assumed adiabatic and there is no heat flux from the bottom, the heat budget in the neutral BBL is approximately conserved (molecular diffusion is negligible in comparison); an intensified pycnocline is therefore necessary to maintain heat conservation after the formation of a mixed layer. These intensified pycnoclines can be observed as distinctive peaks in the vertical profiles of the buoyancy frequency near the top and bottom of the stratified layer. The thickness of the intensified pycnocline grows with time and reaches a few meters by the end of the simulation. These intensified pycnocline layers are thicker in November than in August.
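For comparison, the classical laminar bottom-Ekman solution over a flat bed with geostrophic flow U in x is u = U(1 - e^(-z/δ) cos(z/δ)), v = U e^(-z/δ) sin(z/δ), with δ = sqrt(2ν/f). Its cross-stream component peaks at z = πδ/4, giving v_max = U e^(-π/4) sin(π/4) ≈ 0.32 U, i.e. about 0.048 m s^(-1) for U = 0.15 m s^(-1), which is in the range of the simulated 0.04-0.05 m s^(-1). A sketch (the eddy viscosity value and the 26.5°N latitude are illustrative assumptions):

```python
import math

def ekman_bottom(z, U, f, nu):
    """Classical laminar bottom-Ekman solution: along-stream u and
    cross-stream v at height z above the bed (no-slip at z = 0)."""
    delta = math.sqrt(2.0 * nu / f)
    zh = z / delta
    u = U * (1.0 - math.exp(-zh) * math.cos(zh))
    v = U * math.exp(-zh) * math.sin(zh)
    return u, v

U = 0.15                                            # geostrophic speed, m/s
f = 2 * 7.2921e-5 * math.sin(math.radians(26.5))    # Coriolis, ~Strait of Hormuz
nu = 1.0e-2                                         # assumed eddy viscosity, m^2/s
delta = math.sqrt(2.0 * nu / f)
_, v_max = ekman_bottom(math.pi / 4 * delta, U, f, nu)  # cross-stream maximum
print(round(v_max, 4))   # about 0.048 m/s, cf. the simulated 0.04-0.05 range
```

The deviation of the simulated hodographs from this idealized spiral is what the stratified experiments quantify.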
Also, at the bottom interface of the stratified layer, the shear stress increases.
In autumn in which the pycnocline layer is thinner, the Ekman spiral is broadened and the magnitude of the Ekman cross-stream current component is 25% larger in compare to summer. The maximum value of the Ekman cross-stream current component is about 0.04m s^(-1) in EXP C and EXP A while it is about 0.05m s^(-1) in EXP N. The hodograph of the horizontal velocity in EXP C is more similar to Ekman theoretical solution. The stream-wise component of the horizontal velocity decrease with the same rate near the sea bed in all experiments which implies that the stratification does not have much effect on bottom stress. It is concluded that when an intense pycnocline exists and the bottom mixed layer is thin, less time is needed to trigger the turbulence. The bulk turbulent kinetic energy in all experiments is the same. Since the bottom boundary is assumed adiabatic and there is no heat flux from the bottom, the heat budget in the neutral BBL is approximately conserved (molecular diffusion is not considerable compared). Then a pycnocline is necessary to maintain heat conservation after the formation of a mixed layer. These intensified pycnocline can be observed as distinctive peaks in vertical profiles of the buoyancy frequencies near the top and bottom of the stratified layer. The thickness of the intensified pycnocline grows with time and at the end of simulation reaches about few meters. These intensified pycnocline layers in November is thicker than that of August. Also at the bottom interface of the stratified, the shear stress increases.https://jesphys.ut.ac.ir/article_67782_be32d3d971c4c30b0dce82b6017fbdfd.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X44320181023Distribution of plasma flow velocity field at the base of chromospheric jetsDistribution of plasma flow velocity field at the base of chromospheric jets6716776778410.22059/jesphys.2018.250417.1006966FAElnazAmirkhanlouM.Sc. 
Student, Department of Physics, Payame Noor University, Zanjan, IranEhsanTavabiAssociate Professor, Department of Physics, Payame Noor University, Tehran, Iran0000-0003-3210-9716SimaZeighamiAssistant Professor, Department of Physics, Tabriz Branch, Islamic Azad University, Tabriz, Iran0000-0002-1246-4473Journal Article20180203The Sun constantly throws ionized particles out of its surface, causing solar storms. For 50 years, investigations have sought the origin of these storms in the photosphere, chromosphere, and corona. Large-scale coronal holes are generally accepted as a source of these storms, but research continues in other regions. Network jets are one such candidate; here we investigate the distribution of their velocity field, their structure, and their role in plasma flows. Oscillations and transverse displacements of a jet axis can be interpreted as the signature of transverse waves propagating along the axis. The two types of waves responsible for these fluctuations are magnetohydrodynamic waves and Alfvénic waves. In this paper, we study the transverse displacement of the network jet axes using the FLCT algorithm under IDL. The FLCT method is widely used to obtain the speed of moving features. The observed area is large enough that many network jets can be identified. After selecting the coordinates of a feature with the FLCT algorithm, we obtain the Alfvén velocity at those coordinates. The FLCT algorithm is a mathematical procedure for constructing a two-dimensional velocity field from successive images. The speed calculation involves three steps: (1) isolating a point on the image, (2) computing the cross-correlation function between the two images, and (3) locating the peak of the cross-correlation function, from which a velocity is calculated for each pixel.
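The three steps above can be illustrated with a minimal correlation-tracking sketch. This captures the spirit of the method, not the actual FLCT code: the frame size, the known synthetic shift, and the peak-wrapping convention are illustrative assumptions.

```python
import numpy as np

# Minimal local-correlation-tracking sketch: the displacement between two
# frames is recovered from the peak of their cross-correlation (via FFT).
rng = np.random.default_rng(0)
N = 64
frame1 = rng.random((N, N))                        # first image
true_shift = (3, 2)                                # known displacement (rows, cols), px
frame2 = np.roll(frame1, true_shift, axis=(0, 1))  # second image, shifted copy

# Step 2: cross-correlation of the two images, computed with the FFT.
cc = np.fft.ifft2(np.fft.fft2(frame2) * np.conj(np.fft.fft2(frame1))).real

# Step 3: the peak location gives the displacement between the frames;
# indices above N/2 wrap around to negative shifts.
peak = np.unravel_index(np.argmax(cc), cc.shape)
dy, dx = [p - N if p > N // 2 else p for p in peak]
print(f"recovered shift: ({dy}, {dx}) px")  # -> (3, 2)
```

A velocity then follows by multiplying the recovered pixel shift by the image pixel scale and dividing by the frame cadence.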
The FLCT algorithm uses interpolation to remove the complication of a fixed angle at the center of the images. In the results, the images analyzed in IDL are displayed with MATLAB, showing the torsional velocity vectors that indicate the Alfvén speed. The images are in pixels, each pixel corresponding to about 0.3 arcsec. Using FLCT, we estimated a chromospheric mass velocity of about 20 km s<sup>-1</sup>. Some of the network jets in the images appear to be distinct from type II solar spicules. However, we noticed that the speed of the jets is generally twice that of type II spicules, which indicates the large contribution of these jets to the mass and energy budget of the solar atmosphere. We have noticed that network jets are important regardless of their relationship to type II spicules. A bunch of network jets is considered as an example of a jet. The network jet mechanism demonstrates the dynamics of jets with high speeds (close to the Alfvén speed in the interface region), driven by magnetic reconnection between small magnetic loops and the background field. Based on observational findings, several theoretical models and numerical simulations have been developed to describe the mechanism of these structures. Despite the remarkable progress enabled by very accurate observations and by the expansion of theory and numerical simulation, the mutual relationship of these jets, their physical parameters, their formation mechanism, and their possible role in coronal heating remain unclear. These ambiguities are mainly due to the differences in the appearance of these phenomena when viewed in different spectral lines.
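The Alfvén speed invoked above can be estimated from v_A = B/√(μ₀ρ). The field strength and density in this sketch are assumed order-of-magnitude chromospheric values, not figures from the study.

```python
import numpy as np

# Order-of-magnitude Alfven speed in the chromosphere.
# B and rho below are assumed illustrative values.
mu0 = 4e-7 * np.pi  # vacuum permeability (T m / A)
B = 1e-3            # magnetic field, ~10 G (assumed)
rho = 1e-10         # chromospheric mass density (kg/m^3, assumed)

v_A = B / np.sqrt(mu0 * rho)  # Alfven speed (m/s)
print(f"v_A ~ {v_A / 1e3:.0f} km/s")
```

With these assumptions the result is a few tens to about a hundred km/s, the range within which the jet speeds quoted above sit.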
https://jesphys.ut.ac.ir/article_67784_44c67792da362396318d0b495883dd37.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X44320181023Assessment and Calibration of Gilan Radar Precipitation intensity using ground station dataAssessment and Calibration of Gilan Radar Precipitation intensity using ground station data6796956607810.22059/jesphys.2018.252113.1006974FASaharTajbakhshAssistant Professor, Atmospheric Survey Research Group, Atmospheric Science and Meteorological Research Center (ASMERC), Tehran, IranFurughMomenpoorExpert in the Meteorological Office of Gilan, Rasht, IranHosseinAbedHead of the Applied Meteorological Research Group of Gilan, Rasht, IranSamanehNegahExpert in the Meteorological Office of Gilan, Rasht, IranNimaFarid MojtahediExpert in Department of Applied Meteorology, Gilan, Rasht, IranJournal Article20180214Radar is a remote sensing instrument that sends electromagnetic waves of a specific power into the atmosphere and evaluates the returned power. By measuring the difference between the transmitted and returned powers, it detects atmospheric phenomena such as clouds, providing wide, continuous, and integrated monitoring of the atmosphere. Like data from any remote sensing device, weather radar data can contain errors. One of the most important measures to eliminate or minimize radar data errors is calibration, i.e. correction of the radar index coefficients. The purpose of this paper is to derive an appropriate relationship between precipitation intensity and radar reflectivity for Gilan. The Gilan ground-based radar, installed at Kiashahr Port, is a German-made GEMATRONIK METEOR 1600C operating as a dual-polarization Doppler radar in the C-band.
In general, calibrating a weather radar requires adjusting the “a” and “b” coefficients of the original Marshall–Palmer formula, Z = aR^b, for the target area. For this purpose, we estimated the coefficients of this relationship (between precipitation intensity and radar reflectivity) over a three-year period (2012–2015) to find new coefficients. In this study, an IIR Doppler filter, installed in the signal processor, was used to remove clutter. To calibrate the Gilan radar, the rain gauge data of the Rasht airport synoptic station were compared with the radar data. The precipitation statistics of the meteorological station were extracted for the selected period using the available meteorological and scdata software, and were classified in two ways: by season and by precipitation intensity.
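The calibration step described above, re-fitting a and b of Z = aR^b against collocated gauge intensities, amounts to a linear regression in log–log space. The sketch below uses synthetic radar/gauge pairs in place of the real data, and the starting values a = 200 and b = 1.6 (the classic Marshall–Palmer coefficients) are illustrative, not the Gilan values.

```python
import numpy as np

# Sketch of the Z-R calibration: fit a and b in Z = a * R**b by ordinary
# least squares on log Z = log a + b log R, using synthetic data that
# stand in for collocated radar/gauge observations.
rng = np.random.default_rng(1)

R = rng.uniform(0.5, 50.0, 200)   # gauge rain rate (mm/h)
a_true, b_true = 200.0, 1.6       # illustrative "true" coefficients
Z = a_true * R**b_true * rng.lognormal(0.0, 0.1, R.size)  # reflectivity (mm^6/m^3)

# Linear fit in log-log space; polyfit returns [slope, intercept].
b_fit, loga_fit = np.polyfit(np.log(R), np.log(Z), 1)
a_fit = np.exp(loga_fit)

# Re-estimate rain rate from Z with the fitted coefficients and score it.
R_radar = (Z / a_fit) ** (1.0 / b_fit)
rmse = np.sqrt(np.mean((R_radar - R) ** 2))
print(f"a = {a_fit:.0f}, b = {b_fit:.2f}, RMSE = {rmse:.2f} mm/h")
```

The same comparison against the gauge, via the root mean square error, is what the study uses to judge the default versus the re-fitted coefficients.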
Then the precipitation intensity was calculated from the radar data. Because of the large volume of raw data in the Rainbow software, the data format was converted from binary to text. In the next step, a power regression was made between the radar data and the automatic rain gauge data (in mm/h), based on the existing default coefficients. The new coefficients (a′ and b′) were then determined from the fitted linear equation. In the third step, the precipitation intensity was re-calculated by applying the new coefficients to the radar measurements. This yields two precipitation intensity values, obtained with the default and the new coefficients. These values were compared with the meteorological observations using the root mean square error, first by season regardless of intensity, and then by precipitation intensity regardless of season. The most important result of this study is the relative improvement of the radar estimates of precipitation intensity after correction of the coefficients: 38% in March to May (spring) and 22% in December to February (winter), while during July to November (summer and autumn) there was almost no improvement. Also, by precipitation intensity (regardless of season), the accuracy for average precipitation increased by 25% and for severe precipitation by 47%, while for gentle precipitation the method yielded no improvement.https://jesphys.ut.ac.ir/article_66078_df21ef5de4bf19afb365f2abf9a102a2.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X44320181023Spatial variations of runoff, sediment and runoff threshold of Gharehshiran watershed in Ardabil ProvinceSpatial variations of runoff, sediment and runoff threshold of Gharehshiran watershed in Ardabil Province6977136778310.22059/jesphys.2018.253440.1006980FAEbrahimAsgariM.Sc.
Graduated, Department of Natural Resources, Faculty of Agriculture and Natural Resources, University of
Mohaghegh Ardabili, Ardabil, Iran0000-0003-3838-2166AbazarEsmali-OuriAssociate Professor, Department of Natural Resources, Faculty of Agriculture and Natural Resources, University of
Mohaghegh Ardabili, Ardabil, IranRaoofMostafazadehAssistant Professor, Department of Natural Resources, Faculty of Agriculture and Natural Resources, University of
Mohaghegh Ardabili, Ardabil, Iran0000-0002-0401-0260GholamrezaAhmadzadehAssistant Professor, Department of Geology, Faculty of Basic Sciences, University of Mohaghegh Ardabili, Ardabil, IranJournal Article20180303Diverse factors affect watershed characteristics and lead to spatial and temporal variations in runoff and sediment production. Runoff and sediment are key elements of the hydrological cycle: their changes directly affect river systems and sedimentary environments, and their spatial and temporal variations alter river morphology. Because soil characteristics, parent materials and geological formations, vegetation, and slope differ across a region, the amounts of runoff and sediment produced also vary spatially. The purpose of this study is to evaluate the spatial variations of runoff, sediment, and runoff threshold using rainfall simulation data in the Gharehshiran watershed in Ardabil Province. Given the importance of the spatial distribution of sampling points across the catchment, sampling locations were selected within the delineated study area boundaries so as to cover the different formations, taking access paths into account.
The field experiments and rainfall simulations were carried out with a 1×1 m rainfall simulator at 45 points in the different geological formations of the watershed. The runoff and sediment amounts were measured in each experiment, along with the threshold time of runoff generation. The measured variables were interpolated and mapped over the study area using the kriging method. To assess the accuracy of the interpolation, 7 samples were selected randomly and the Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Bias Error (MBE) were calculated by comparing observed and estimated values. The correlations between the studied variables in the various geological formations were then evaluated using Pearson correlation analysis. The relationships between sediment and runoff amount, and runoff threshold time, were also evaluated using a triple diagram model.
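The three accuracy measures named above have the standard definitions below. The seven observed/estimated pairs are made-up numbers standing in for the held-out validation samples.

```python
import numpy as np

# Standard validation measures for interpolated maps: RMSE, MAE, and MBE
# over a held-out sample of observed vs. kriging-estimated values.
# The numbers below are illustrative, not the study's data.
obs = np.array([2.1, 3.4, 5.0, 6.2, 1.8, 4.4, 3.0])  # observed runoff (lit/m^2)
est = np.array([2.4, 3.1, 4.6, 6.8, 2.0, 4.1, 3.3])  # kriging estimates

err = est - obs
rmse = np.sqrt(np.mean(err**2))  # Root Mean Square Error
mae = np.mean(np.abs(err))       # Mean Absolute Error
mbe = np.mean(err)               # Mean Bias Error (sign shows over/underestimation)
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  MBE={mbe:+.3f}")
```

RMSE penalizes large errors most heavily, MAE reports the typical error magnitude, and MBE reveals systematic over- or underestimation that the other two measures hide.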
The interpolated maps showed that the lowest runoff time thresholds (1.99–3.17 min) occurred in the geological formations of the upper part of the watershed, composed of dacite and trachyte igneous and volcanic rocks, whereas the thresholds increased (6.13–7.25 min) in the lowland areas with old alluvial terraces. The runoff generated on the upper hillslopes with dacite and trachyte rocks was high (6.07–7.25 lit/m<sup>2</sup>) while the sediment content was low (1.25–1.66 g/l). Conversely, in the lower parts with old alluvial terraces, runoff production was low (2.20–3.50 lit/m<sup>2</sup>) and the produced sediment was higher (2.25–3.5 g/l). Correlation analysis showed that the correlation between runoff threshold and runoff volume was significant at the 0.01 level (r = -0.802). A significant negative correlation (r = -0.672) was also observed between runoff and sediment values.
The relationship between the runoff time threshold and the sediment content was positive and significant at the 0.01 level (r = 0.900). The interdependency of the sediment, runoff, and runoff time threshold values, examined with triple diagram models, showed a sediment content of about 2 g/l at runoff time thresholds above 4 minutes with runoff amounts of 2.5–5.5 lit/m<sup>2</sup>.
In general, sediment production in the study area is strongly controlled by runoff amount at the lower runoff time thresholds. The results also indicate that the interplay of runoff and sediment production is affected by a variety of factors, so comprehensive studies are required before a final conclusion can be reached.https://jesphys.ut.ac.ir/article_67783_2bcf903f41e24d8ec837ee370c59efee.pdf