The search for oil and gas has now moved into complex frontier areas such as the foothills onshore and deeper water and subsalt traps offshore, where geological risks are higher. The need to maximize the economic production of hydrocarbons in the current low-price environment requires geoscience technology applications that are not only more advanced but also more timely. The increased speed and capacity of modern supercomputers helps us in three ways: (1) to compute more accurate solutions that were previously intractable, (2) to compute state-of-the-art solutions in near real time, thereby impacting business decisions while a well is being drilled, and (3) to compute state-of-the-art solutions for multiple reservoir scenarios rather than just the most likely one, to better quantify the risk of a given prediction. While such endeavors require greater computing power, they also generate enormous quantities of data. Processing these huge data volumes has always been demanding, pushing the limits of processor speed, system memory, data storage, and input/output operations per second. Consequently, the oil and gas industry and meteorology have long been, and continue to be, the two leading consumers of scientific supercomputing resources.

The supercomputers of the 1970s and 80s were large, $20 million single machines manufactured by CDC, Cray, Fujitsu, and Thinking Machines. Today, supercomputing in the petroleum industry most commonly uses the same hardware that sits on your desktop, repackaged into large distributed clusters of thousands of processors. Such ‘high-performance computing’ (HPC) achieves its speed by solving multiple problems, or multiple components of a problem, in parallel.

Here, we list several of the more important compute-intensive petroleum applications.

1. Applications that until recently were too computationally intensive

  1. Seismic imaging, or “migration”, places seismic measurements made at the surface in their correct location in the subsurface, thereby creating a more accurate image of the subsurface. The simpler migration methods that have evolved since the 1970s have limitations in the presence of complex, steeply dipping seismic reflections, such as those found on the flanks of salt domes, or when large lateral velocity variations are encountered. Although the wave-equation-based reverse time migration (RTM) technique was introduced in the late 1970s, it was, until quite recently, too computationally intensive to use for 3D surveys. Thanks to HPC, RTM is now run by many processing shops and is the method of choice for imaging below salt (a toy sketch of the RTM imaging condition follows this list). An added level of complexity (and computational cost) is seismic migration in the presence of dipping (anisotropic) shale layers, where the imaging velocity changes with the angle of wave propagation. It is not uncommon for large 3D marine RTM jobs to run for one or more months on thousands of compute nodes. Least-squares migration, which accounts for irregular subsurface illumination, requires multiple iterations of RTM.
  2. Full waveform inversion (FWI) is simply modeling in a loop: the seismic response is forward modeled, and the velocities are then perturbed until the modeled data approximate the data measured at the surface of the earth (a second toy sketch below illustrates this modeling-in-a-loop idea). At present, almost all FWI algorithms exploit the low-frequency component of the P-wave field, but they can include the effects of ghosting, multiples, reverberations, head waves, and tunneling that are not handled by ray-based tomography or iterative migration algorithms. Since the problem is highly nonlinear, effective solutions often use genetic or simulated annealing algorithms. The resulting high-resolution velocity models serve as accurate input for depth imaging, which in turn provides better-defined subsurface images that are amenable to more meaningful interpretation. While not yet routine, FWI is commonly used to estimate the velocity model for structurally complex (e.g. subsalt) 3D marine surveys.
  3. Seismic modeling by itself is less commonly used than in other industries, such as civil or aerospace engineering, where the goal is to evaluate the response of a suite of alternative designs. While we cannot change the design of the earth, we can change the design of the acquisition system, and thereby quantify the value of innovative geophone deployment, simultaneous multisource sweeps, and multicomponent acquisition on subsurface illumination. Most often, modeling algorithms are internal to more sophisticated processing algorithms such as RTM (cross-correlation of forward- and backward-modeled wavefields) and FWI (modeling in a loop). The importance of seismic modeling in oil and gas exploration can be gauged from the SEG Advanced Modeling (SEAM) project carried out by the Society of Exploration Geophysicists (SEG), Tulsa. While seismic migration typically images longer-wavelength P-wave data, seismic modeling requires the accurate simulation of shorter-wavelength S-wave and ground-roll events as well, increasing the cost by a factor of 36 (if VP = 2VS) and taking months on HPC machines. Future advances in RTM and FWI will be based on future advances in modeling that include the effects of shear waves, attenuation, and more realistic surface conditions involving complex topography or shallow layers with higher VP/VS ratios.
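
The sketch below illustrates the RTM imaging condition mentioned in item 1 on a toy 1D, two-layer model: the source wavefield is forward propagated, the recorded reflection is propagated backward in time, and the two are cross-correlated at zero lag to place the reflector. The grid, velocities, wavelet, and single co-located source-receiver pair are hypothetical toy choices, not anything from a real survey; production RTM works in 3D, handles anisotropy, and runs on thousands of nodes.

```python
# Minimal 1-D acoustic RTM sketch (illustration only; all parameters are toy values).
import numpy as np

nx, dx = 300, 5.0                 # grid points and spacing (m)
nt, dt = 1200, 0.001              # time steps and step length (s); CFL = 0.5, stable
src_ix = rec_ix = 2               # co-located source and receiver near the surface

v_true = np.full(nx, 2000.0)      # "true" earth: one reflector at 750 m depth
v_true[150:] = 2500.0
v_mig = np.full(nx, 2000.0)       # smooth background (migration) velocity

def ricker(f0):
    t = np.arange(nt) * dt - 1.0 / f0
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def propagate(vel, injection, ix):
    """Second-order acoustic finite differences; returns all snapshots (nt, nx)."""
    c2 = (vel * dt / dx) ** 2
    u_prev, u_cur = np.zeros(nx), np.zeros(nx)
    snaps = np.zeros((nt, nx))
    for it in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u_cur[2:] - 2.0 * u_cur[1:-1] + u_cur[:-2]
        u_next = 2.0 * u_cur - u_prev + c2 * lap
        u_next[ix] += injection[it]
        u_prev, u_cur = u_cur, u_next
        snaps[it] = u_cur
    return snaps

wavelet = ricker(25.0)

# "Acquire" a shot record in the true model, then isolate the reflection by
# subtracting the direct wave modeled in the smooth background.
d_obs = propagate(v_true, wavelet, src_ix)[:, rec_ix]
d_dir = propagate(v_mig, wavelet, src_ix)[:, rec_ix]
d_refl = d_obs - d_dir

# RTM: forward-propagate the source, back-propagate the time-reversed data,
# then apply the zero-lag cross-correlation imaging condition.
S = propagate(v_mig, wavelet, src_ix)
R = propagate(v_mig, d_refl[::-1], rec_ix)[::-1]
image = np.sum(S * R, axis=0)

# the peak of the band-limited image should fall close to the 750 m reflector
print(f"strongest image point at {np.argmax(np.abs(image)) * dx:.0f} m (true depth 750 m)")
```

In production, the same recipe is applied per shot in 3D for tens of thousands of shots, which is why run times of a month or more on thousands of nodes, as noted above, are common.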
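
Complementing the RTM sketch, the next toy example illustrates the modeling-in-a-loop idea behind FWI in item 2, using a simulated annealing search of the kind mentioned above. The convolutional forward model, two-interface geometry, low-frequency wavelet, and annealing settings are hypothetical simplifications chosen only to keep the loop visible; production FWI solves the full 3D wave equation for every model update.

```python
# Toy "modeling in a loop": a convolutional forward model plus simulated
# annealing inverts for three layer velocities (all values hypothetical).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(600) * 0.002                 # 1.2 s of data at 2 ms
depths = (400.0, 900.0)                    # interface depths (m), assumed known
f0 = 8.0                                   # low-frequency wavelet, as in FWI

def ricker(tau):
    a = (np.pi * f0 * tau) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def forward(v):
    """Synthetic trace: one Ricker arrival per interface, scaled by the
    normal-incidence reflection coefficient."""
    t1 = 2.0 * depths[0] / v[0]
    t2 = t1 + 2.0 * (depths[1] - depths[0]) / v[1]
    r1 = (v[1] - v[0]) / (v[1] + v[0])
    r2 = (v[2] - v[1]) / (v[2] + v[1])
    return r1 * ricker(t - t1) + r2 * ricker(t - t2)

v_true = np.array([1800.0, 2200.0, 2600.0])
d_obs = forward(v_true)                    # "observed" data from the true model

def misfit(v):
    return np.sum((forward(v) - d_obs) ** 2)

# Simulated annealing: perturb, keep improvements, and occasionally accept
# worse models so the search can escape local minima.
v = np.full(3, 2000.0)                     # starting guess
cur, T = misfit(v), 0.05
for _ in range(5000):
    trial = np.clip(v + rng.normal(0.0, 20.0, 3), 1500.0, 3000.0)
    new = misfit(trial)
    if new < cur or rng.random() < np.exp(-(new - cur) / T):
        v, cur = trial, new
    T *= 0.999                             # cooling schedule
print("recovered velocities (m/s):", np.round(v, -1), " true:", v_true)
```

Even this three-parameter toy requires thousands of forward models; replacing each convolution with a 3D wave-equation simulation is what pushes FWI onto HPC clusters.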

2. Applications that need to run in near real time

Real-time applications such as weather prediction and missile defense have long been major users of supercomputer cycles; predicting where a hurricane will hit or a missile will strike is of little value after the fact. In seismic acquisition, very fast real-time computing and processing through stack onboard the acquisition ship was first used almost 20 years ago, allowing the acquisition company to go back and reshoot seismic lines that had problems. More recently, fast broadband telecommunication and high-speed processing of microseismic events generated during hydraulic fracturing allow the driller to recognize problems, estimate stimulated volumes, and adjust the completion process on site in "real time".

3. Applications that are run multiple times to generate statistically valid estimates

  1. Geostatistical inversion addresses uncertainties in our subsurface estimates. Well logs have a vertical resolution on the order of 1 ft and a lateral resolution equal to the well spacing. In contrast, seismic data have a vertical resolution on the order of 10-20 m and a lateral resolution on the order of a seismic bin (say 30 m by 30 m). While not interactive, model-based prestack inversion is routinely applied to both 3D marine and land seismic surveys worldwide. However, the vast majority of applications provide the “most likely” solution, which mathematically is the impedance that, when averaged to the 10-20 m seismic resolution, best fits the measured seismic data. As the name implies, geostatistical inversion uses well log statistics within a geometric context, i.e. how a rock property is likely to vary with respect to a neighboring (1 ft away) or more distant (100 ft away) measurement, to construct a variogram. Instead of generating a single model averaged to the seismic resolution, geostatistical inversion uses HPC to generate hundreds of models at well log resolution, all of which fit the seismic data, honor the logs at the wells, and honor the well log statistics away from the wells. These high-resolution models can then be used to better quantify risk, providing P10, P50, and P90 estimates of fluid volumes as well as alternative models of reservoir plumbing (a toy sketch of summarizing such an ensemble follows this list).
  2. Stochastic reservoir simulation propagates the uncertainties in our static model into uncertainties in our dynamic model. Ideally, our “best” static reservoir model is mathematically the “most likely” static reservoir model. If fluid flow were a linear process, the distribution of dynamic reservoir models would be similar to that of the static models. Such is not the case, since the distribution of thin shale barriers and other baffles may not change the net porosity but may radically change the ability to produce a reservoir. Today, most simulations are performed on an upscaled grid, since fine-grid simulations are computationally infeasible. Upscaling permeability is different from upscaling porosity: porosity can simply be volume-averaged, whereas effective permeability depends on how the flow paths are connected (the second short sketch below illustrates this). Some operators therefore run simulations on a suite of hundreds of models to better quantify uncertainties in reservoir performance. Ever since reservoir simulation was first attempted in the 1960s, the level of complexity sought by the reservoir engineer has outpaced the capability of computing resources; the highest-end machines of the day have always been used for the purpose, and today these are HPC clusters. Such reservoir simulation exercises seem never ending, since each new well drilled provides data that must be included in the model and its effect assimilated.
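
The first sketch below mimics the ensemble idea in item 1 in a deliberately simplified way: many spatially correlated porosity maps are drawn from a variogram-like Gaussian model, each honoring two hypothetical well measurements, and the resulting pore volumes are summarized by their 10th, 50th, and 90th percentiles (the kind of P10/P50/P90 summary mentioned above, with the exact labeling depending on the probability convention used). The grid size, correlation range, mean porosity, and well values are all assumed for illustration; a commercial geostatistical inversion additionally constrains every realization with the seismic data, which this sketch omits.

```python
# Toy ensemble of conditioned, spatially correlated porosity maps (illustration only).
import numpy as np

rng = np.random.default_rng(1)
n, dx = 25, 50.0                              # 25 x 25 cells, 50 m spacing
thickness, mean_phi = 10.0, 0.15              # reservoir thickness (m), mean porosity
std_phi, corr_range = 0.04, 300.0             # porosity std dev, correlation range (m)

# cell centres and an exponential covariance model (a common variogram choice)
xy = np.stack(np.meshgrid(np.arange(n) * dx, np.arange(n) * dx), -1).reshape(-1, 2)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
C = std_phi**2 * np.exp(-dist / corr_range)

# two hypothetical wells with measured porosity
obs_idx = np.array([5 * n + 5, 18 * n + 20])
obs_phi = np.array([0.18, 0.12])

# exact conditioning: conditional mean/covariance of a multivariate Gaussian
unk = np.setdiff1d(np.arange(n * n), obs_idx)
Coo, Cuo = C[np.ix_(obs_idx, obs_idx)], C[np.ix_(unk, obs_idx)]
W = Cuo @ np.linalg.inv(Coo)
cond_mean = mean_phi + W @ (obs_phi - mean_phi)
cond_cov = C[np.ix_(unk, unk)] - W @ Cuo.T
L = np.linalg.cholesky(cond_cov + 1e-8 * np.eye(unk.size))

volumes = []
for _ in range(200):                          # hundreds of realizations
    phi = np.empty(n * n)
    phi[obs_idx] = obs_phi                    # every realization honors the wells
    phi[unk] = cond_mean + L @ rng.standard_normal(unk.size)
    volumes.append(np.sum(phi) * dx * dx * thickness)   # pore volume (m^3)

p_low, p_mid, p_high = np.percentile(volumes, [10, 50, 90])
print(f"pore volume 10th/50th/90th percentiles: {p_low:.3e} / {p_mid:.3e} / {p_high:.3e} m^3")
```

It is the spread between these percentiles, rather than any single realization, that quantifies the risk in the volumetric estimate.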
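
The second, even shorter sketch uses hypothetical layer values to illustrate the upscaling point in item 2: a 0.2 m shale in a 10 m sand package barely changes the thickness-weighted average porosity, but the harmonic mean that governs vertical (series) flow collapses the effective vertical permeability by more than three orders of magnitude, while the arithmetic mean for horizontal (parallel) flow is almost unchanged.

```python
# Why upscaling permeability differs from upscaling porosity (toy layer values).
import numpy as np

h   = np.array([5.0, 0.2, 5.0])        # layer thicknesses (m): sand, shale, sand
phi = np.array([0.25, 0.05, 0.25])     # porosity
k   = np.array([100.0, 0.001, 100.0])  # permeability (mD)

phi_avg  = np.sum(h * phi) / np.sum(h)    # thickness-weighted arithmetic mean
kv_harm  = np.sum(h) / np.sum(h / k)      # vertical (series) flow: harmonic mean
kh_arith = np.sum(h * k) / np.sum(h)      # horizontal (parallel) flow: arithmetic mean

print(f"average porosity          : {phi_avg:.3f}  (0.25 without the shale)")
print(f"effective vertical perm   : {kv_harm:.3f} mD (100 mD without the shale)")
print(f"effective horizontal perm : {kh_arith:.1f} mD")
```

This is why a suite of static models with different barrier distributions, not just the most likely one, must be flowed through the simulator.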

To address the challenges faced by the oil and gas industry through advanced technology applications and to enhance drilling success, supercomputing needs have grown and will continue to grow. According to the Top500 list of the world’s most powerful supercomputers, the fastest machine currently has a rating of 33.86 petaflops. Future supercomputers will probably have exaflop capability (1 exaflop = 1000 petaflops), which will significantly reduce the turnaround time of the number crunching required by the oil and gas industry.

End

About the Author(s)

Kurt J. Marfurt joined The University of Oklahoma in 2007, where he serves as the Frank and Henrietta Schultz Professor of Geophysics within the ConocoPhillips School of Geology and Geophysics. Marfurt’s primary research interest is in the development and calibration of new seismic attributes to aid in seismic processing, seismic interpretation, and reservoir characterization. Recent work has focused on applying coherence, spectral decomposition, structure-oriented filtering, and volumetric curvature to mapping fractures and karst, with a particular focus on resource plays. Marfurt earned a Ph.D. in applied geophysics at Columbia University’s Henry Krumb School of Mines in New York in 1978, where he also taught as an Assistant Professor for four years. He worked 18 years on a wide range of research projects at Amoco’s Tulsa Research Center, after which he joined the University of Houston for 8 years as a Professor of Geophysics and the Director of the Allied Geophysics Lab. He has received SEG best paper (for coherence) and SEG best presentation (for seismic modeling) awards and, as a coauthor with Satinder Chopra, best SEG poster (for curvature) and best AAPG technical presentation awards. Marfurt also served as the EAGE/SEG Distinguished Short Course Instructor for 2006 (on seismic attributes). In addition to teaching and research duties at OU, Marfurt leads short courses on attributes for the SEG and AAPG.

Satinder Chopra has 30 years of experience as a geophysicist specializing in processing, reprocessing, special processing, and interactive interpretation of seismic data. He has rich experience in processing a variety of data types, including VSP, well log, and seismic data, and strong communication skills, as evidenced by the many presentations and talks he has delivered and the books, reports, and papers he has written. He has been the 2010/11 CSEG Distinguished Lecturer, the 2011/12 AAPG/SEG Distinguished Lecturer, and the 2014/15 EAGE e-Distinguished Lecturer. He is a member of SEG, CSEG, CSPG, EAGE, AAPG, and APEGA (Association of Professional Engineers and Geoscientists of Alberta).
