It is sometimes easy to forget, amidst the panoply of recent trends now enlivening geophysics – fractures, microseismic, reservoir monitoring, etc. – that much of our work relies on one main idea: extracting signal from data. In this issue, we revisit this basic principle and explore its implications in different contexts. Two of the articles are focussed on the very topical subject of data reconstruction. Of course, though the term data reconstruction is widely accepted, what we really mean by this is signal reconstruction. After all, the data is full of stuff, broadly described as noise, that we would rather eliminate than reconstruct. Well, perhaps. What we regard as noise today may be tomorrow’s signal. An example is multiple energy: currently we still try to remove multiples as much as possible, but recently there have been efforts in the marine processing world to make use of multiples within imaging, with sometimes impressive results (Lu et al., 2013). So, watch that space.

Generally it is helpful to make a distinction between random noise and coherent noise – though sometimes what appears as random noise is nothing more than poorly sampled signal or coherent noise! The best understanding of random noise is provided by statistical methods, involving distributions, signal-to-noise ratios, and the like, whereas the best understanding of coherent noise is via the physics of wave propagation, multipathing, dispersive modes, and so on. Naturally this leads to different approaches when trying to extract the signal, depending on the type of noise in which it is embedded. On the other hand, it is sometimes the characteristics of the signal that are the most useful in separating it from the noise. Which brings us back to reconstruction: the basic principle in all reconstruction methods is to recognize some characteristic of the signal, such as spatial continuity, which allows us to jump over the gaps where it has not been recorded.
Continuity in the spatial domain is naturally related to sparsity in the wavenumber domain. The first article on data reconstruction, by Naghizadeh and Sacchi, is an analysis of this fundamental principle behind reconstruction. Sparsity is the assumption that we have more measurements in the data than are required to represent the signal alone. For example, in seismic inversion, if it is assumed that there are potential reflections on each sample, then there isn’t really much that can be done about the limit imposed by the Nyquist theorem. On the other hand, if it can be confidently asserted that the number of reflections is small compared to the number of samples, then there is a chance to locate these reflections with great accuracy (van Riel and Berkhout, 1985). The same principle extends to multiple dimensions and can often be exploited in the frequency or wavenumber domain to great effect. This naturally leads into considerations of regular versus random sampling. The authors construct some very thought-provoking and illuminating synthetic examples, which provide some clarity on this vexed question.
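The reconstruction idea can be illustrated with a toy experiment. The sketch below is not the authors’ algorithm; it is a minimal POCS-style iteration, assuming a signal known to occupy only a few wavenumbers, which recovers randomly decimated samples by alternating between a sparsity projection in the Fourier domain and reinsertion of the recorded values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
# A signal that is sparse in the wavenumber domain: two cosines,
# i.e. only four nonzero Fourier coefficients
signal = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.cos(2 * np.pi * 12 * t / n)

# Randomly discard about half the samples (irregular sampling)
mask = rng.random(n) < 0.5
observed = np.where(mask, signal, 0.0)

# Alternate two projections: keep only the k largest Fourier
# coefficients (sparsity), then reinsert the recorded samples
# (data consistency)
k = 4
x = observed.copy()
for _ in range(100):
    X = np.fft.fft(x)
    keep = np.argsort(np.abs(X))[-k:]
    X_sparse = np.zeros_like(X)
    X_sparse[keep] = X[keep]
    x = np.real(np.fft.ifft(X_sparse))
    x[mask] = signal[mask]

print("max reconstruction error:", np.max(np.abs(x - signal)))
```

With roughly half the samples missing at random, the iteration typically recovers the signal essentially exactly; with a regular every-other-sample decimation the aliased wavenumbers would be indistinguishable, which is one reason random sampling is attractive for sparsity-based reconstruction.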
The second article on reconstruction, by Stanton and Sacchi, also emphasizes the principle of sparsity, or “parsimony”, and shows that a variety of superficially dissimilar approaches, ranging from norm minimization to tensor rank reduction (with catchy acronyms such as POCS, ALFT, and MWNI), are all fundamentally based on the same principle. Perhaps we should not be surprised, then, if they all lead to similar results – hence the roads leading to Rome of their title. They demonstrate this using 3-D reconstruction of a synthetic shot gather and 5-D reconstruction of a land dataset. It has to be said, though, that the authors found one algorithm – SEQSVD, which is based on tensor reconstruction – had a slightly better ability to reconstruct curved events. To find out why, you must read their paper. So, perhaps one of the roads does lead to a nicer, or at least different, part of Rome!
Cary and Nagarajappa discuss the impact of noise on surface-consistent scaling. This paper is based on the talk which Peter Cary presented at the CSEG symposium earlier this year, and re-examines a step which has been largely taken for granted. The insight here is that conventional surface-consistent scaling theory is not properly formulated for noisy data. Like many of the best insights, it came from persistently encountering a problem with data “misbehaving” (it can be quite annoying when data ignores the theory). They identify the source of the problem as an amplitude estimation step based on RMS measurement, which provides biased estimates in the presence of random noise. They go on to explore different options for performing an unbiased estimation of signal amplitudes to improve surface-consistent scaling, and provide examples to illustrate the improvement obtained using their proposed approach.
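The bias in question is easy to reproduce numerically. The following sketch is only an illustration of the statistical effect, not the authors’ correction scheme: the RMS amplitude of signal plus random noise systematically overestimates the signal amplitude, since the expected squared RMS is the sum of signal power and noise variance, and subtracting an estimate of the noise variance removes the bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sigma = 0.5  # noise standard deviation

# A unit-amplitude sinusoidal "signal" plus additive random noise
signal = np.sin(np.linspace(0.0, 400.0 * np.pi, n))
trace = signal + rng.normal(0.0, sigma, n)

rms_signal = np.sqrt(np.mean(signal**2))  # ~ 0.707
rms_trace = np.sqrt(np.mean(trace**2))    # biased high: ~ sqrt(0.5 + 0.25)

# An unbiased alternative: subtract an estimate of the noise power.
# Here sigma is simply assumed known; in practice it would have to be
# estimated, e.g. from a window containing no signal.
rms_unbiased = np.sqrt(np.mean(trace**2) - sigma**2)

print(rms_signal, rms_trace, rms_unbiased)
```

Note that the bias does not average away with more samples: the noisier the trace, the larger the overestimate, which is exactly the kind of behaviour that corrupts surface-consistent scale factors derived from noisy records.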
So, whether it is the sparsity of signal in the wavenumber domain or the different characteristics of signal and noise in surface-consistent processing, your authors today are all providing new insights into the fundamental distinction between signal and noise, and helping us to better extract the signal from the data. I hope you were able to extract a little signal from the noisy, partially coherent data I provided in the paragraphs above. Enjoy the read, and happy holidays!
About the Author(s)
Richard Bale is VP of Research and Development with Key Seismic Solutions. After a B.A. in mathematics from Cambridge in the UK, Richard joined GECO (subsequently Schlumberger) and pursued seismic R&D in the UK and Norway. After 17 years in the industry, he moved to Canada where he completed his Ph.D. in geophysics at the University of Calgary. Richard then returned to industry with research and management roles in Veritas and now Key. Richard is interested in seismic processing and imaging, and especially in multicomponent processing, and has written a number of papers on these topics.