Seismic data interpolation via a series of new regularizing functions

Authors

Department of Earth Physics, Institute of Geophysics, University of Tehran

Abstract

In exploration seismic surveys, data are often not acquired on a regular, uniform grid because of natural or man-made obstacles or simply to reduce costs. We therefore need mathematical methods to interpolate and reconstruct the missing traces. Unfortunately, many current methods are unable to fill the locations of the missing traces correctly and accurately. In recent years, the theory of compressive sampling has proven very effective in solving the seismic data interpolation and reconstruction problem. Based on this theory, common-source seismic records can be reconstructed and interpolated in a suitable sparse domain (for example, the curvelet domain) by solving an optimization equation.
In this paper we use a family of potential functions to solve this problem with the help of compressive sampling theory. In addition, we introduce a method for determining the regularization parameter in such problems. We then compare the results obtained with different, commonly used potential functions, and finally we identify the potential function that leads to the most accurate solutions.

Article title [English]

Seismic data interpolation via a series of new regularizing functions

Authors [English]

  • B. Tavakoli
  • A. Gholami
  • H.R. Siahkoohi
Abstract [English]

Natural signals are continuous; therefore, digitizing them is essential if we are to process them with computing tools. According to the Nyquist/Shannon sampling theory, the sampling frequency must be at least twice the maximum frequency contained in the signal being sampled; otherwise, some high frequencies may be aliased and lead to a poor reconstruction. Sampling at the Nyquist rate makes it possible to reconstruct the original signal exactly from its acquired samples.
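For reference, the criterion just described can be written as follows (standard textbook statement, not a formula taken from the paper):

```latex
% Nyquist/Shannon criterion: a signal band-limited to f_max can be
% recovered exactly from uniform samples taken at rate f_s satisfying
f_s \ge 2\, f_{\max}
```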
To enhance the efficiency of the sampling process, one option is to use a high sampling rate. However, the huge volume of data generated by this approach is a major challenge in many fields, such as seismic exploration; moreover, the sampling equipment sometimes cannot handle the required broad frequency band.
Seismic data acquisition consists of sampling, in time and in the spatial directions, a wavefield generated by sources such as dynamite. Sampling should follow a regular pattern of receivers. Nevertheless, due to acquisition obstacles, seismic data sets are generally sampled irregularly in the spatial direction(s). This irregularity leads to low-quality seismic images that contain artifacts and missing traces.
One of the approaches that has been developed to deal with this defect is interpolation of the acquired data onto a regular grid. Through interpolation we can obtain an estimate of the fully sampled desired signal. This approach can also serve as a tool for designing a sparser acquisition geometry, resulting in a more cost-effective survey.
Compressive sensing (CS) theory has been developed to allow data to be sampled below the Nyquist rate while still being reconstructible by solving an optimization problem. This theory states that signals/images that can be represented sparsely under a pre-specified basis or frame can be reconstructed accurately from a small number of their samples. The principle of CS is based on a Tikhonov-like regularization equation (eq. 1) that employs sparsity-promoting regularization terms. In equation (1), the CS sampling operator contains three elements: (i) a sparsifying transform C, which provides a sparse representation of signals/images in the chosen basis, (ii) a measurement matrix M, which for the seismic problem is the identity matrix, and (iii) an under-sampling operator S, which is incoherent with the sparsifying operator C.
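Equation (1) itself is not reproduced on this page; a plausible form, consistent with the description above and with the operator names C, M, and S (the symbols λ for the regularization parameter and φ for the potential function are assumptions of this sketch), would be:

```latex
% Assumed form of the sparsity-regularized problem: x are curvelet
% coefficients, y the observed (under-sampled) data, and S M C^H plays
% the role of the CS sampling operator described in the text.
\hat{x} = \arg\min_{x}\ \tfrac{1}{2}\,\lVert y - S M C^{H} x \rVert_{2}^{2}
          + \lambda\,\varphi(x)
```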
The curvelet transform provides a frame whose elements correlate strongly with the curve-like reflection events present in seismic data, and it can therefore give a sparse representation of seismic images. The under-sampling scheme used in this paper is jittered sampling, which allows the maximum gap size between known traces to be controlled. Other commonly used under-sampling schemes are Gaussian random and binary random sampling. Since, in the frequency domain, under-sampling appears as Gaussian random noise, the interpolation problem can be treated as a nonlinear de-noising problem, and curvelet frames are an optimal choice for this purpose.
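As an illustration of the jittered scheme described above (a minimal sketch, not the authors' code; the block size `keep_every` and the 1D trace mask are assumptions of this example):

```python
import numpy as np

def jittered_mask(n_traces, keep_every, seed=0):
    """Illustrative jittered under-sampling mask.

    One trace is kept in each block of `keep_every` consecutive traces, at a
    random position inside the block, so the maximum gap between kept traces
    is bounded (roughly 2*keep_every - 1), unlike purely random sampling.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_traces, dtype=bool)
    for start in range(0, n_traces, keep_every):
        block = np.arange(start, min(start + keep_every, n_traces))
        mask[rng.choice(block)] = True   # keep exactly one trace per block
    return mask

# Example: keep roughly one trace in three (i.e. remove about 67% of traces,
# comparable to the >60% removal mentioned in the text).
mask = jittered_mask(200, keep_every=3)
print(mask.mean())  # fraction of traces kept
```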
Sparsity regularization plays a leading role in CS theory, and it has also been applied effectively to other problems such as de-noising and de-convolution. A wide range of functions can impose sparsity in the regularization equation, and their performance in interpolating incomplete data depends on how well they match the properties of the initial model. Among the many potential functions, the l1-norm is the best known and most commonly used, but a comprehensive study of which of them is most efficient for seismic image reconstruction is still needed; this gap stems from the absence of a general potential function. Here we use a general potential function that enables us to compare the efficiency of a wide range of potential functions and find the optimal one for our problem. This regularization function includes lp-norm functions and others, presented in Table 1, as its special cases, and it covers both convex and non-convex regularization functions. In this paper we use this potential function to compare the efficiency of different approaches within the CS algorithm.
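The general potential function of Table 1 is not reproduced on this page; the sketch below covers only the familiar lp special cases named in the text, together with the soft-thresholding operator associated with the l1-norm:

```python
import numpy as np

def lp_potential(x, p):
    """Illustrative l_p potential, phi(x) = sum_i |x_i|^p (p <= 1 promotes sparsity).

    This covers only the l_p special cases mentioned in the text, not the
    paper's full general potential function.
    """
    return np.sum(np.abs(x) ** p)

def soft_threshold(x, t):
    """Proximity operator of t * ||x||_1, the building block for the p = 1 case."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```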
A controversial part of solving regularization problems is setting the best regularization parameter. Here, because of the redundancy of the curvelet transform, assigning a proper parameter is difficult: many approaches, such as the L-curve, Stein's unbiased risk estimate (SURE), and generalized cross validation (GCV), have trouble finding this parameter. We therefore turn to nonlinear approaches, namely NGCV (nonlinear GCV) and WSURE (weighted SURE).
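NGCV and WSURE are not spelled out on this page; as a stand-in, the toy hold-out search below shows the general shape of such a parameter selection, scoring candidate values on known traces withheld from the reconstruction (the `reconstruct` callable and all names here are hypothetical, and this plain cross-validation is not the paper's criterion):

```python
import numpy as np

def pick_lambda(lams, reconstruct, data, known_cols, holdout_cols):
    """Toy hold-out grid search for the regularization parameter.

    `reconstruct(lam, cols)` is assumed to solve the regularized problem using
    only the traces in `cols` and return a full (nt, nx) image; each candidate
    lambda is scored by its misfit on the held-out known traces.
    """
    train_cols = [c for c in known_cols if c not in holdout_cols]
    errors = [
        np.linalg.norm(reconstruct(lam, train_cols)[:, holdout_cols]
                       - data[:, holdout_cols])
        for lam in lams
    ]
    return lams[int(np.argmin(errors))]
```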
The efficiency of the mentioned methods for estimating the regularization parameter and choosing the best potential function is evaluated on a synthetic noisy seismic image. By under-sampling this image and removing more than 60% of its traces, we obtain the initial/observed model; this imperfect image serves as our acquired seismic data, from which the complete image is then reconstructed. To solve equation (1) we use an iterative forward-backward splitting algorithm. Through this algorithm we identify the optimal potential function and a method to estimate the regularization parameter.
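A generic forward-backward splitting iteration for a problem of the form assumed above might look as follows (an illustrative sketch, not the authors' implementation; `A`, `At`, and `prox` are placeholder callables):

```python
def forward_backward(A, At, y, lam, prox, n_iter=100, step=1.0):
    """Forward-backward splitting sketch for min_x 0.5*||y - A x||^2 + lam*phi(x).

    A / At apply the sampling operator and its adjoint; `prox` is the proximity
    operator of lam*phi. For convergence the step size should satisfy
    step <= 1 / ||A||^2.
    """
    x = At(y)                                    # start from the adjoint image
    for _ in range(n_iter):
        grad = At(A(x) - y)                      # forward (gradient) step on the data misfit
        x = prox(x - step * grad, step * lam)    # backward (proximal) step on the penalty
    return x
```

With the l1 penalty, `prox` would be the soft-thresholding function sketched earlier, and `A`/`At` would combine the jittered restriction with the inverse/forward curvelet transform.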

Keywords [English]

  • Seismic data interpolation/reconstruction
  • Compressive sensing
  • Sparsity
  • Curvelet transform