%0 Journal Article %T The Effect of the type of training algorithm for multi-layer perceptron neural network on the accuracy of monthly forecast of precipitation over Iran, case study: ECMWF model %J Journal of the Earth and Space Physics %I Institute of Geophysics, University of Tehran %Z 2538-371X %A Pakdaman, Morteza %D 2022 %\ 04/21/2022 %V 48 %N 1 %P 213-226 %! The Effect of the type of training algorithm for multi-layer perceptron neural network on the accuracy of monthly forecast of precipitation over Iran, case study: ECMWF model %K Bayesian Regularization %K Levenberg-Marquardt algorithm %K multi-layer perceptron neural network %K ECMWF %R 10.22059/jesphys.2022.327450.1007341 %X Due to the increasing frequency of atmospheric disasters in Iran, accurate monthly and seasonal forecasts of precipitation and temperature can help decision makers plan better for the future. Machine learning methods are now widely used to predict temperature and precipitation: the outputs of climate models are post-processed with the help of observational data and machine learning methods to provide more accurate forecasts of temperature, precipitation, and other climatic variables. Among these, methods based on multi-layer perceptron artificial neural networks are widely used. In a multi-layer perceptron artificial neural network, the design of the network architecture is very important and directly affects the network's ability to solve the problem. Designing the architecture requires answering questions such as the number of neurons in each layer, the number of layers, and the activation functions used in each layer. For some of these questions systematic methods exist, but in most cases a suitable architecture for the specific problem under study must be found by trial and error.
One of the important steps in using machine learning methods in general, and the perceptron artificial neural network method in particular, is the training stage. The neural network training process amounts to solving a mathematical optimization problem in which the optimal network weights, the network's adjustable parameters, are calculated. Today, various types of artificial neural networks are used across atmospheric science and climatology for purposes such as classification, regression, and prediction, but a fundamental question in their use is how they are designed and built. One important consideration for designers is choosing the right algorithm for network training. In this paper, six different methods for training a multi-layer perceptron neural network for monthly precipitation forecasting are reviewed and compared: the Bayesian Regularization algorithm, the Levenberg-Marquardt algorithm, Conjugate Gradient with Powell/Beale Restarts, the BFGS Quasi-Newton algorithm, Scaled Conjugate Gradient, and the Fletcher-Powell Conjugate Gradient method. In derivative- and gradient-based mathematical optimization methods, the second-order derivative of the objective function, called the Hessian matrix, and its inverse play an essential role in the calculations. As the number of variables increases, however, the size of this matrix grows and computing its inverse becomes computationally time consuming. Improved optimization methods therefore try to approximate the inverse Hessian of the objective function. Because the ECMWF model has six different lead times, 72 different models can be proposed for the 12 months of the year. Data for the period 1993 to 2010 were used for network training, and data for the period 2011 to 2016 for testing.
To evaluate the performance of the different neural networks, three indices were used: the correlation coefficient, the mean squared error, and the Nash-Sutcliffe index. Results indicated that the Bayesian Regularization, Levenberg-Marquardt, and Conjugate Gradient with Powell/Beale Restarts algorithms outperform the other training algorithms. %U https://jesphys.ut.ac.ir/article_85443_b497a54b04457615abf2bc852612c253.pdf