Contributed by Jason Noble
Introduction by Session Chair
Mauricio Sacchi and Mike Perz
The Present State of Deconvolution of Land Seismic Data
Cosmetic Enhancement of Seismic Data By Loop Reconvolution
Paul Young and Andrew Wild
Gabor deconvolution: real and synthetic data experiences
Mike Perz, Larry Mewhort, Gary Margrave and Laurie Ross
Sparseness-constrained seismic deconvolution with Curvelets
G. Hennenfent, F. Hermann and R. Neelamani
Phase property of the Vibroseis wavelet
Linping Dong, Gary Margrave and Larry Mewhort
The afternoon of Tuesday, May 17th, 2005 at the CSEG convention saw a series of talks on deconvolution presented in a workshop format. In previous conventions this format has proven to work very well, and given the variety of talks, the afternoon looked to be very informative.
The session was chaired by Mike Perz and Mauricio Sacchi, who did an excellent job of keeping things going. In his introduction, Mike Perz observed that while preparing a paper for the upcoming SEG convention, he had found no mention of decon as a presentation topic; it was, apparently, a “dead issue”.
Peter Cary was first up with “The Present State of Deconvolution of Land Seismic Data”. This talk provided a broad overview of what could be considered the “state of the art” in deconvolution. While there wasn’t an earth-shattering breakthrough, it set the stage for the rest of the afternoon. As always, Peter did an excellent job in presenting what was certainly an excellent refresher of all the things we tend to take for granted.
The second talk was by far the most entertaining I had the pleasure of seeing at this year’s convention. A very self-effacing Andrew Wild spoke on “Cosmetic Enhancement of Seismic Data by Loop Reconvolution”. It was an excellent example of a practical and pragmatic solution applied to a real problem, with no pretence of being more than that. The presentation of the process and its results was straightforward, and very well done. It did lead to some interesting questions about how such a simple procedure can actually work that well.
Next up was Mike Perz discussing some current research in “Gabor deconvolution: real and synthetic data experiences”. As if to prove that deconvolution is not a dead issue, we were shown that Gabor decon is working its way towards production seismic processing. While the results were qualified as very preliminary on experimental software, and judged as showing mixed success, they were real data results using a process which was, until very recently, purely experimental. It was good to see a new process making its way, however slowly, into the mainstream.
In an update to a 1997 SEG paper, Brian Link discussed “Wavelet Stability: Raising the Bar”. Brian discussed the growing need to maintain lateral phase stability, and a methodology not only for attempting to ensure it, but for determining whether problems exist. This re-examination, eight years later, showed continued research into techniques for improving control over phase, techniques that have also proven their worth on their own.
If the tone of the afternoon seemed to be moving from practical to theoretical, the next talk fell perfectly into place. “Sparseness-constrained seismic deconvolution with Curvelets” was discussed by Felix Herrmann. Admittedly, I wish I had been more prepared for this talk; I hadn’t even heard the word “curvelet” for over a year. That said, it was a fascinating look at the front end of research, and it was qualified as just that: a novel method of looking at a problem. I found it very interesting, and at first a little alarming, that there could be such a thing as curvelet deconvolution. By the end of the talk I was left wondering what curvelets can’t do!
The final talk of the afternoon should have been the most controversial and, debatably, was. Linping Dong gave us “Phase property of the Vibroseis wavelet”. If one of the fundamental assumptions of the vibroseis method is that the wavelet is zero phase, this talk threw things for a loop. It began with the observation that vibroseis wavelets had been observed to behave as minimum phase wavelets, and proceeded to look for a justification for this to be so. Several real data examples highlighted this curious dilemma, and I think the controversy was averted by way of the authors admitting their own puzzlement with it. I breathed a sigh of relief when I overheard “it shouldn’t be but it is, and we don’t know why”.
The discussion period afterward never materialized as time constraints closed in on the attendees. Several very informal small discussions erupted, but quickly worked their way into the hallway and beyond. All in all it was an excellent afternoon, touching on deconvolution at every level: from what we are doing, to what we perhaps should be doing, to what we might be doing down the road. The session chairs did a terrific job of sneaking a little time here and there for extra questions as the need arose, and of keeping the flow of the afternoon moving. I was most impressed by the overall high attendance throughout the afternoon; it was refreshing to see a good number of people present for every talk on this “dead issue”.
Geophysical Inversion Workshop
Contributed by Andrew Marshall
Introduction by Session Chair
Chaired by John Logel and Mike Burianyk
Inversion 2005 – yesterday, today, and tomorrow
Constrained Potential Field Inversion for Oil & Gas Applications
G. Perron, J. McGaughey, P. Fullagar and G. Pears
Seismic Traveltime Inversion
EM geophysics for hydrocarbons: inversion applications and current research at UBC-GIF
S. Napier, D. Oldenburg, C. Farquharson and J. Cristall
Simultaneous inversion of prestack seismic data
Dan Hampson, Brian Russell and Brad Bankhead
Delineating resistive paleochannels with Transient Audio-Magnetotellurics: Implications for oil/gas exploration
The Seismic Inversion Workshop was held on the afternoon of Wednesday, May 18th at the 2005 CSEG National Convention. It was quickly noted in the introduction that there had actually been a Seismic Inversion session the previous day, and that this workshop would be more properly named the Geophysical Inversion Workshop. Although geophysicists in Calgary typically think of inversion as some form of seismic impedance derivation, the session included presentations discussing inversion of potential field data, electrical and electromagnetic data, and non-traditional approaches to the inverse modelling of seismic data. Consequently, the various session talks covered a wide range of geophysical inversion topics.
Larry Lines started out the presentations by giving an overview of the past, present and future of geophysical inversion. He defined inversion as a procedure for obtaining earth models which adequately describe geophysical datasets, or in the seismic case, going “from waves into rocks”. He noted how inversion is closely related to, and essentially the reverse of, forward modelling, and how in both cases selecting physical parameters and identifying uncertainties are extremely important in producing useful results. Larry proceeded to discuss some of the inversion work, past and present, that has been done at the University of Calgary and even pointed out how many common seismic processing steps could be described as forms of inversion, with migration being an example of “structural inversion”.
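Larry’s definition can be sketched in the usual generic notation; the symbols below (data d, model m, forward operator G) are illustrative conventions, not taken from the talk itself:

```latex
% Forward modelling: predict data from an earth model
d = G(m)
% Inversion: recover the model that best explains the observed data,
% for example by minimizing a data misfit
\hat{m} = \arg\min_{m}\; \bigl\lVert d_{\mathrm{obs}} - G(m) \bigr\rVert^{2}
```

The close, reversed relationship between the two operations is plain in this form: forward modelling evaluates G for a given model, while inversion attempts to undo it.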
Near the end of his talk, Larry stressed the importance of involving geologists and engineers in a more cooperative process to obtain more grounded and helpful inversions. These results may even be directly tied into reservoir production modelling, especially when time lapse seismic data is available. Larry also noted that more attempts should also be made to perform joint inversion where all the varied geophysical and geological data available be used and combined to produce more accurate earth models. He cited the combined use of seismic and gravity data as one example.
Gervais Perron was up next with his presentation on potential field data inversion for oil and gas applications. Gervais discussed how magnetic and gravity data may be used on a regional basin scale where and when other geophysical methods may not be as useful or cost effective. Two important potential field examples are approximating geological basement structure with magnetic data and defining the shape, size and depth of salt bodies with gravity data. Gervais stated that although potential field inversions are non-unique and give ambiguous solutions, defining proper data constraints and geometrically building a common earth model using all the geological and geophysical data available greatly assists in producing meaningful results. He noted how inversion is an iterative process in which the proper weighting of data constraints, in combination with robust inversion engines, can greatly reduce the data misfit and ambiguity. Gervais showed a synthetic case study in which a complex salt dome was created within a sedimentary sequence whose density gradient increased with depth. By constraining forward-modelled gravity data with modelled seismic data in the inversion process, he was able to re-create, in three dimensions, the approximate size and shape of the complex salt dome. It was a good example of a case where, individually, neither gravity nor seismic data could properly define the salt dome, but where, combined, they produced an approximate earth model.
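The balance Gervais described between fitting the data and honouring constraints is commonly expressed as a regularized objective function; the form below is a generic textbook sketch, not the specific algorithm used in the talk:

```latex
\phi(m) \;=\;
  \underbrace{\bigl\lVert W_{d}\,\bigl(d_{\mathrm{obs}} - G(m)\bigr) \bigr\rVert^{2}}_{\text{data misfit}}
\;+\; \beta\,
  \underbrace{\bigl\lVert W_{m}\,\bigl(m - m_{\mathrm{ref}}\bigr) \bigr\rVert^{2}}_{\text{model constraint}}
```

Here the weighting matrix W_d expresses confidence in the data, while W_m and the reference model m_ref encode the geological and geophysical constraints; the trade-off parameter beta controls how strongly those constraints pull the non-unique solution toward a plausible earth model, which is the iterative weighting Gervais described.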
The following presentation was on seismic traveltime inversion by Emil Blias. He broke this inversion procedure down into three main steps. Emil’s first step was to determine traveltimes, which he explained is usually done using velocity analysis, as autopicking is very difficult given the noise levels on prestack seismic data. Traveltimes are required for each CDP location, so high-density velocity analysis must be performed, with great care taken to account for possible shallow velocity anomalies. Emil’s next step was the determination of the initial depth velocity model, where the number of layers generally depends on the number of reliable reflections that can be picked on the poststack data and found on the prestack data using the velocity analysis. The final step in Emil’s procedure was to improve the initial model using either layered traveltime inversion or tomography. Emil is in favour of the layered traveltime inversion as it uses vertically homogeneous layers between the reflectors. He has found that the tomographic approach (of breaking the velocities into a grid of cells) is not connected to the various reflectors and cannot reliably determine any vertical velocity changes; it only hides the problem and does not act to solve it.
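The velocity analysis in Emil’s first step rests on the familiar hyperbolic moveout approximation for a horizontally layered earth, shown here only for reference:

```latex
t^{2}(x) \;\approx\; t_{0}^{2} + \frac{x^{2}}{v_{\mathrm{rms}}^{2}},
\qquad
v_{\mathrm{rms}}^{2} \;=\;
  \frac{\sum_{k=1}^{n} v_{k}^{2}\,\Delta t_{k}}
       {\sum_{k=1}^{n} \Delta t_{k}}
```

where x is source-receiver offset, t_0 the zero-offset two-way time, v_k the interval velocity of layer k, and Delta t_k the vertical two-way time through that layer. Shallow velocity anomalies violate the assumptions behind this hyperbola, which is why the picking must be done with the care Emil stressed.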
Scott Napier gave the next presentation on electrical and electromagnetic (EM) inversions for hydrocarbon applications. After covering some introductory theory and background, Scott went into detail on the benefits and shortcomings of two specific geophysical methods. He noted how airborne electromagnetic methods are fast, cost effective, have minimal environmental impact and provide regional information helpful in isolating target areas for future examination. Some disadvantages of airborne EM surveys are that they may have difficulty detecting resistive targets in the overwhelming presence of conductive material, and that they have limited depths of penetration even in resistive environments: perhaps 100-150 metres with frequency domain electromagnetics (FDEM) and 100-250 metres with time domain electromagnetics (TDEM). The second geophysical method Scott covered was DC resistivity, which is a ground-based type of survey and consequently slower, but still relatively inexpensive in relation to seismic methods. The resistivity method is more sensitive to resistive targets and has good depth penetration in excess of 300 metres. Scott proceeded to show two case studies. The first example showed how both airborne EM data and DC resistivity data were inverted to discriminate shallow gas channel targets, and the second example illustrated the value of TDEM data inversion in an oil sands area. Scott finished off the presentation by providing an introduction to some of the current research being done with marine 3D controlled source electromagnetic (CSEM) surveys, with an emphasis on inverting to recover reservoir resistivities and geometry. The CSEM surveys currently use a horizontal electric dipole towed behind the ship in combination with seafloor receivers.
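The depth-of-penetration figures Scott quoted are governed by the standard EM skin-depth relation, a textbook result rather than a formula from the talk:

```latex
\delta \;=\; \sqrt{\frac{2}{\omega \mu \sigma}}
\;\approx\; 503\,\sqrt{\frac{\rho}{f}} \quad \text{metres}
```

with resistivity rho in ohm-metres and frequency f in hertz. Higher frequencies and more conductive ground both shrink delta, which is consistent with the limited airborne depths in conductive environments, and the deeper penetration in resistive ones, that Scott described.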
“Simultaneous inversion of prestack seismic data” was the name of the next talk, given by Brian Russell. Brian explained that this was a new approach to simultaneously invert prestack PP and PS angle gathers to model P-impedance, S-impedance and density. His process is based on three assumptions: “The first is that the linearized approximation for reflectivity holds. The second is that PP and PS reflectivity as a function of angle can be given by the Aki-Richards equations. The third is that there is a linear relationship between the logarithm of P-impedance and both S-impedance and density.” After explaining his process with a series of equations, Brian showed a gas sand model example for which, after several iterations, the cooperative inversion procedure succeeded in providing good matches to the original dipole sonic well curves. This talk was especially well attended and a number of questions were asked at the end of the presentation.
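For reference, the second and third assumptions can be written out. The PP form of the Aki-Richards approximation below is the standard textbook expression, and the log-linear trends are sketched in a generic form that may differ in detail from the authors’ exact parameterization:

```latex
% Linearized Aki-Richards PP reflectivity (alpha = Vp, beta = Vs)
R_{PP}(\theta) \;\approx\;
  \tfrac{1}{2}\Bigl(1 - 4\tfrac{\beta^{2}}{\alpha^{2}}\sin^{2}\theta\Bigr)
    \frac{\Delta\rho}{\rho}
  \;+\; \frac{1}{2\cos^{2}\theta}\,\frac{\Delta\alpha}{\alpha}
  \;-\; 4\frac{\beta^{2}}{\alpha^{2}}\sin^{2}\theta\,\frac{\Delta\beta}{\beta}
% Assumed log-linear background trends linking the inverted parameters
\ln Z_{S} \;=\; k \,\ln Z_{P} + k_{c} + \Delta L_{S},
\qquad
\ln \rho \;=\; m \,\ln Z_{P} + m_{c} + \Delta L_{D}
```

where Z_P and Z_S are P- and S-impedance, the coefficients k, k_c, m, m_c would be fit from well-log data, and the Delta L terms carry the deviations from the background trends that an inversion of this kind actually seeks.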
The last presentation was on transient audio-magnetotellurics (TAMT) by David Goldak. He spent some time detailing the basic theory behind the method. David explained that, unlike most electrical and electromagnetic methods, which use human-made sources or transmitters, TAMT relies on distant electrical activity such as lightning and its resultant resonances in the ionosphere as a natural EM transmitter. As with most geophysical methods, the power of stacking dramatically improves the signal-to-noise ratio of the recorded data, and in the case of TAMT, numerous transient events are stacked. David proceeded to show a southern Manitoba example where a TAMT survey was conducted over a buried channel system. His two-dimensional inversion of the TAMT data estimated the width and depth of the resistive channel located within the more conductive Cretaceous sediments. David noted the usefulness of this method for both paleochannel oil and gas exploration and oil sands applications. He also explained that there are new developments in the near future that should help remove noise from the data, which would improve the data inversion results. In response to the curious crowd, David answered several questions. Summer is preferable for data acquisition due to the increased number of signals or lightning events, and the thawed ground makes it easier to set up and remove the electrodes. Night-time recording is better due to improved ionosphere charging after the sun goes down.
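The power of stacking that David invoked is the familiar statistical result that, for N stacked transients with coherent signal and incoherent random noise, the signal-to-noise ratio grows as the square root of N:

```latex
\mathrm{SNR}_{\text{stack}} \;=\; \sqrt{N}\;\mathrm{SNR}_{\text{single}}
```

Signal amplitudes add linearly with N while uncorrelated noise grows only as the square root of N, hence the improvement, and hence the value of recording as many lightning transients as possible.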
After the very diverse range of inversion presentations, an informal discussion was held. Perhaps due to a combination of the presentations running a little late and the Wrap Up Party starting a little early just outside, the presentation room emptied out pretty quickly prior to the workshop discussion period. To be certain, inversion was the main topic of conversation for everyone over drinks at the Wrap Up Party. In any case, the official discussion was carried out by a small group consisting of the session chairs, the majority of the presenters, and Richard Kellett (Pioneer Natural Resources) and Colleen Whelehan (EnCana), survivors from the audience.
It was noted that the seismic talks were better attended, to which Brian Russell pointed out that the oil and gas geophysical community is very seismically focused. Michael Burianyk noted that some of the major oil and gas companies are actually starting to develop potential fields departments.
John Logel asked the group whether inversion in the future is going to move toward being more data-driven versus model-driven. Gervais Perron thought that potential field inversion already is generally data-driven, but that the process and results always have to be carefully examined. Richard Kellett stressed that without applying proper parameters and restrictions, joint inversions of multiple data sets can reduce resolution and interpretable value. Richard also pointed out that with all field-based geophysical methods, the uncertainty and repeatability of the data (i.e. random error) is the most poorly determined parameter, so inversion schemes that are driven by the error bars are missing the point. He asked, “What is the error bar on a seismic wave as measured by a geophone array?”
It was interesting to observe that although the discussion participants were largely involved with vastly different aspects of geophysics, they were all able to understand and share ideas quite easily. Although the data they work with is different, the basic inversion theory, terminology, equations and in some cases, even the algorithms are the same.
Ideas were floated amongst the group to participate in a joint inversion exercise akin to the EAGE Marmousi experiment, involving model-based data and perhaps a follow-up real-world example. The search is on for a geological scenario to fit the wide range of geophysical inversions.
In summary and to steal the words of Larry Lines, “The future years of inversion will involve attempts to describe the changing physical properties of the reservoir by matching geological, geophysical and engineering data.” Perhaps this could be expanded to include attempts to describe various geological and geophysical problems by the use and combination of various types of data inversions.
Thanks to the workshop participants not only for the presentations but also for their valuable contributions in compiling this summary.