Article type: Research article
Authors
1 Department of Geophysics, Faculty of Earth Sciences, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran.
2 Exploration Directorate, National Iranian Oil Company, Tehran, Iran.
Abstract [English]
Jittered sampling of seismic data is one of the acquisition methods developed recently to reduce seismic data acquisition costs. In this method, the number of seismic sources and receivers is smaller than the number required by the Nyquist–Shannon theorem, which states that the sampling rate of a digital signal must exceed twice the signal's bandwidth to avoid aliasing. To circumvent aliasing, jittered sampling relies on compressed sensing. This technique exploits the sparsity of a signal to recover it from fewer samples than the Nyquist–Shannon theorem requires, under two conditions: first, the signal must be sparse in some transform domain, such as the frequency domain; second, the signal must be sampled randomly in the original domain, such as the time or space domain. With this kind of sampling, the irregularity of the sample positions appears as white noise in the transform domain, so compressed sensing effectively plays the role of a denoising technique there.

Conventional compressed sensing methods assume that the data are undersampled on a regular grid; the Fourier, curvelet, and wavelet transforms are among the transforms used in such methods. In real seismic acquisition, however, the shots and receivers often cannot follow a regular geometry because of natural and man-made obstacles, so sampling on a regular grid is not always possible. This means that conventional compressed sensing is not an appropriate choice for regularizing such data. To address this issue, some geophysicists have proposed using the discrete Fourier transform, which does not require sampling on a uniform grid, as the data transformation in compressed sensing.
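The claim that randomized undersampling turns aliasing into incoherent, noise-like energy in the transform domain can be illustrated with a small sketch (our own illustration, not code from the paper; the signal, decimation factor, and peak indices below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# A signal that is sparse in the frequency domain: two cosines.
n = 256
t = np.arange(n) / n
x = np.cos(2 * np.pi * 12 * t) + 0.5 * np.cos(2 * np.pi * 31 * t)

# Regular decimation (keep every 4th sample, zero the rest):
# coherent alias peaks appear at shifts of n/4 in the spectrum.
reg_mask = np.zeros(n)
reg_mask[::4] = 1.0
X_reg = np.fft.fft(x * reg_mask)

# Random decimation with the same sample budget: the missing samples
# show up as a weak broadband noise floor rather than alias peaks.
idx = rng.choice(n, size=n // 4, replace=False)
rnd_mask = np.zeros(n)
rnd_mask[idx] = 1.0
X_rnd = np.fft.fft(x * rnd_mask)

true_peak, alias_peak = 12, 12 + n // 4
print(abs(X_reg[alias_peak]) / abs(X_reg[true_peak]))  # ~1: coherent alias
print(abs(X_rnd[alias_peak]) / abs(X_rnd[true_peak]))  # well below 1
```

Because the alias energy of the random mask is spread as a low noise floor, a sparsity-promoting (denoising) step in the transform domain can separate the true spectral peaks from the sampling artifacts, which is exactly the role compressed sensing plays here.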
However, the discrete Fourier transform is slow and computationally expensive. In this paper, we use the nonuniform fast Fourier transform (NUFFT) instead. The NUFFT does not require a sampling scheme on a regular grid and is much faster than the discrete Fourier transform; it combines the conventional fast Fourier transform with an interpolation technique. The method can be applied to multidimensional pre-stack seismic data, so it can exploit the correlation between traces in different dimensions while interpolating the missing traces.

A further problem with fully random sampling is that there is no control over the locations of the samples: some parts of the signal may be oversampled while others receive too few points, which can degrade the regularized result when the signal varies erratically. To avoid this situation, we introduce a sampling protocol that improves control over random sampling: the samples are picked randomly within small windows along the length of the signal. In this scheme, the window size and the number of random samples per window can be controlled easily; moreover, the sampling need not lie on a regular grid, and the samples can be placed anywhere along the signal. A set of 2D and 3D synthetic data and 2D real seismic data were used to examine the performance of the proposed method, and the results show that it can regularize irregular seismic data properly.
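The windowed random sampling protocol described above can be sketched as follows (a minimal illustration under our own assumptions; the function name, signature, and parameter values are ours, not the paper's):

```python
import numpy as np

def windowed_random_samples(length, window, per_window, rng=None):
    """Pick `per_window` random sample positions inside each consecutive
    window of size `window`, so that no stretch of the signal is left
    unsampled while the positions within each window stay random."""
    rng = np.random.default_rng(rng)
    picks = []
    for start in range(0, length, window):
        stop = min(start + window, length)
        k = min(per_window, stop - start)
        picks.append(rng.choice(np.arange(start, stop), size=k, replace=False))
    return np.sort(np.concatenate(picks))

# 1000-sample signal, 50-sample windows, 10 random picks per window.
idx = windowed_random_samples(1000, window=50, per_window=10, rng=0)

# Every window contributes exactly 10 samples, so the largest gap between
# consecutive picks is bounded, unlike fully random sampling.
print(len(idx), np.max(np.diff(idx)))
```

The design trade-off is that the randomness needed for incoherent (noise-like) aliasing is preserved inside each window, while the window size caps the maximum gap between samples, preventing any part of the signal from being left undersampled.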