Article Title [English]
Many steps of the seismic data processing sequence assume that data sets are sampled uniformly in both time and space. Today, this assumption holds only along the time axis: modern seismic exploration equipment records data uniformly and densely in time, but uniform, dense spatial sampling is often impossible because of operational constraints, equipment failure, topographic conditions, or cost. It is well established that the results of most seismic processing techniques depend on the regularity, adequacy, and density of the input sampling. The need to interpolate seismic data has therefore given rise to several new approaches. In most available seismic processing software this task is handled by 'binning' the data, an operation that is itself a source of error in seismic sections. Several other computational techniques exist for interpolating and reconstructing seismic data on a regular grid. Some of them reconstruct the data at the given points from the physical principles of wave propagation by solving Kirchhoff's formula. Although practicable, these methods require accurate prior information about the velocity model and geological properties and demand heavy computation, which restricts their range of application. Various mathematical methods are now available instead, based on prediction filters, mathematical transforms, or rank reduction of the data matrix. Depending on the assumptions they employ, their computational cost, the noise level, and the type and density of the input sampling, each of these methods has its own performance constraints and introduces its own artifacts into the final result, which should be recognized. A well-known signal-processing algorithm across science and engineering is Matching Pursuit (MP).
MP was originally introduced as a time-frequency transformation for finding the frequency content of signals. This transformation represents a signal as a linear combination of vectors drawn from a complete bank of time-frequency atoms (also called a dictionary). MP is an iterative algorithm: at each iteration it finds the dictionary vector that best matches the signal, subtracts the projection of the signal onto that vector, and updates the residual. The process continues until the remaining residual is negligible. For a good decomposition, the dictionary should contain a large number and variety of wavelets, such as Gabor functions, each with its own dilation, modulation, and translation. In geophysics, MP has so far been used to produce single-frequency seismic attributes. For seismic data reconstruction and interpolation, sine functions have been applied as basis vectors; interpolation by MP with a sine dictionary requires solving a Lomb-Scargle periodogram at each iteration, which can be computationally expensive. Thanks to the considerable work done on this subject, multi-dimensional and multi-component seismic data sets can today be interpolated using sine functions in MP. Fourier coefficients are another choice of MP basis vectors. Here, after a brief explanation of the MP algorithm and its formulation, we use Fourier coefficients as the basis vectors of MP to interpolate and reconstruct synthetic and real two- and three-dimensional seismic data. Despite some random noise arising from the calculations and other estimations, the traces are reproduced acceptably. The results show that the amplitude and frequency content of the events are well preserved. Notably, the traces reproduced at the original sampling points are nearly identical to the original traces.
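The greedy loop described above (find the best-matching atom, subtract its projection, repeat until the residual is negligible) can be sketched in a few lines of NumPy. This is a minimal illustration only, not the authors' implementation: the toy dictionary of unit-norm sine/cosine atoms, the test signal, and all function names are assumptions made for the example.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=50, tol=1e-6):
    """Greedy MP: each iteration picks the unit-norm atom (row of
    `dictionary`) most correlated with the residual, subtracts its
    projection, and accumulates the coefficient."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_iter):
        corr = dictionary @ residual          # atom/residual inner products
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[k]   # remove the projection
        if np.linalg.norm(residual) < tol * np.linalg.norm(signal):
            break
    return coeffs, residual

# Toy dictionary of unit-norm sine/cosine atoms on a uniform grid.
n = 256
t = np.arange(n) / n
atoms = []
for f in range(1, 20):
    for basis_fn in (np.sin, np.cos):
        a = basis_fn(2 * np.pi * f * t)
        atoms.append(a / np.linalg.norm(a))
D = np.array(atoms)

# A signal built from two dictionary atoms decomposes in two iterations.
sig = 3.0 * np.sin(2 * np.pi * 5 * t) + 1.5 * np.cos(2 * np.pi * 12 * t)
coeffs, res = matching_pursuit(sig, D, n_iter=10)
print(np.linalg.norm(res) / np.linalg.norm(sig))  # residual shrinks toward zero
```

Because the two components of `sig` are themselves atoms of this orthogonal toy dictionary, the residual collapses almost immediately; with the coherent, redundant dictionaries discussed in the text, convergence is slower and the stopping tolerance matters.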
This property, together with the ability to interpolate data on a completely non-uniform sampling grid, separates Fourier MP from many earlier interpolation methods. Careful picking of several basis functions simultaneously is proposed to reduce the number of iterations and speed up the algorithm. Windowing the input data and applying an antialiasing mask are proposed to satisfy the assumptions of sparse frequency content and linearity of events and to remove aliasing effects.
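To make the non-uniform-grid claim concrete, the following is a hedged 1-D sketch of the Fourier-coefficient MP idea: complex-exponential atoms are evaluated at irregular trace positions, the dominant coefficients are picked greedily, and the recovered Fourier model is then evaluated on a regular output grid. The positions, frequency range, and iteration count here are assumptions for illustration, not the authors' algorithm or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_signal(p):
    # Toy amplitude along the spatial axis: two spatial frequencies.
    return np.sin(2 * np.pi * 3 * p) + 0.5 * np.cos(2 * np.pi * 7 * p)

# Irregular spatial positions in [0, 1): mimics non-uniform trace spacing.
x = np.sort(rng.uniform(0.0, 1.0, 60))
d = true_signal(x)

# Complex-exponential (Fourier) atoms evaluated at the irregular positions.
freqs = np.arange(-15, 16)
A = np.exp(2j * np.pi * np.outer(freqs, x))   # shape (n_freqs, n_samples)
norms = np.linalg.norm(A, axis=1, keepdims=True)
An = A / norms                                # unit-norm rows

# Greedy MP over the Fourier dictionary: pick, project, subtract.
resid = d.astype(complex)
c = np.zeros(len(freqs), dtype=complex)
for _ in range(30):
    corr = An.conj() @ resid                  # match atoms to residual
    k = int(np.argmax(np.abs(corr)))
    c[k] += corr[k]
    resid = resid - corr[k] * An[k]

# Evaluate the recovered model on a regular output grid (the "interpolation").
xg = np.linspace(0.0, 1.0, 101)
recon = np.real((c / norms.ravel()) @ np.exp(2j * np.pi * np.outer(freqs, xg)))
```

Because `resid` is exactly the data minus the current model at the sample positions, the reconstructed values at the original (irregular) points stay close to the input data, mirroring the abstract's observation that traces reproduced at the original sampling points are nearly identical to the originals.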