
DecisionSpace Immersion Appendices

Copyright 2002 by Landmark Graphics Corporation

Part No. 160674 Rev. A. © 2001, 2002 Landmark Graphics Corporation. All Rights Reserved Worldwide.

August 8, 2002

This publication has been provided pursuant to an agreement containing restrictions on its use. The publication is also protected by Federal copyright law. No part of this publication may be copied or distributed, transmitted, transcribed, stored in a retrieval system, or translated into any human or computer language, in any form or by any means, electronic, magnetic, manual, or otherwise, or disclosed to third parties without the express written permission of:

Landmark Graphics Corporation
Building 1, Suite 200, 2101 City West Blvd
P.O. Box 42806
Houston, TX 77042, U.S.A.
Phone: 281-560-1000
FAX: 281-560-1401
Web: www.lgc.com

Trademark Notice Landmark, the Landmark logo, 3D Drill View, 3D Drill View KM, 3DVIEW, Active Field Surveillance, Active Reservoir Surveillance, ARIES, Automate, BLITZ, BLITZPAK, CasingSeat, COMPASS, Contouring Assistant, DataStar, DBPlot, Decision Suite, Decisionarium, DecisionDesktop, DecisionSpace, DepthTeam, DepthTeam Explorer, DepthTeam Express, DepthTeam Extreme, DepthTeam Interpreter, DESKTOP-PVT, DESKTOP-VIP, DEX, DFW, DIMS, Discovery, Drillability Suite, DrillModel, DrillVision, DSS, Dynamic Surveillance System, EarthCube, EdgeCa$h, eLandmark, EPM, e-workspace, FastTrack, FZAP!, GeoDataLoad, GeoGraphix, GeoGraphix Exploration System, GeoLink, GES, GESXplorer, GMAplus, GrandBasin, GRIDGENR, I2 Enterprise, iDims, IsoMap, LandScape, LeaseMap, LMK Resources, LogEdit, LogM, LogPrep, Make Great Decisions, MathPack, Model Builder, MyLandmark, MyWorkspace, OpenBooks, OpenExplorer, OpenJournal, OpenSGM, OpenTutor, OpenVision, OpenWorks, OpenWorks Well File, PAL, Parallel-VIP, PetroBank, PetroWorks, PlotView, Point Gridding Plus, Pointing Dispatcher, PostStack, PostStack ESP, PRIZM, PROFILE, ProMAX, ProMAX 2D, ProMAX 3D, ProMAX 3DPSDM, ProMAX MVA, ProMAX VSP, pStaX, QUICKDIF, RAVE, Real Freedom, Reservoir Framework Builder, RESev, ResMap, RMS, SafeStart, SCAN, SeisCube, SeisMap, SeisModel, SeisSpace, SeisVision, SeisWell, SeisWorks, SeisXchange, SigmaView, SpecDecomp, StrataMap, Stratamodel, StratAmp, StrataSim, StratWorks, StressCheck, STRUCT, SynTool, SystemStart, T2B, TDQ, TERAS, Total Drilling Performance, TOW/cs, TOW/cs The Oilfield Workstation, Trend Form Gridding, Turbo Synthetics, VIP, VIP-COMP, VIP-CORE, VIP-DUAL, VIP-ENCORE, VIP-EXECUTIVE, VIP-Local Grid Refinement, VIP-POLYMER, VIPTHERM, WavX, Web OpenWorks, Well Editor, Wellbase, Wellbore Planner, WELLCAT, WELLPLAN, WellXchange, wOW, Xsection, ZAP!, Z-MAP Plus are trademarks, registered trademarks or service marks of Landmark Graphics Corporation. 
All other trademarks are the property of their respective owners.

Note The information contained in this document is subject to change without notice and should not be construed as a commitment by Landmark Graphics Corporation. Landmark Graphics Corporation assumes no responsibility for any error that may appear in this manual. Some states or jurisdictions do not allow disclaimer of expressed or implied warranties in certain transactions; therefore, this statement may not apply to you.

Landmark

DecisionSpace Immersion

The Oilfield Lifecycle ................................................................................................. A - 1


Source Rocks ............................................................. A - 2
Reservoir Rock ........................................................... A - 3
Traps .................................................................... A - 4
The Oil Field Life Cycle ................................................. A - 5
Reconnaissance ........................................................... A - 6
    Reconnaissance and seismic exploration ............................... A - 6
Prospect generation ...................................................... A - 8
Discovery ................................................................ A - 9
Reservoir Delineation .................................................... A - 10
Facilities ............................................................... A - 11
Primary Production ....................................................... A - 12
Enhanced Recovery ........................................................ A - 13
Divestiture .............................................................. A - 14

Basic Well Log Analysis .......................................................................................... B - 1


Rock Properties .......................................................... B - 2
The Drilling Environment ................................................. B - 5
Common Logging Measurements .............................................. B - 7
Archie's Water Saturation Equation ....................................... B - 9
Interpretive Workflows ................................................... B - 11

Basic Seismic Analysis............................................................................................ C - 1


Time-to-depth Conversion Uncertainties ................................... C - 2
    Velocity Background .................................................. C - 3
        Rock Density ..................................................... C - 3
        Rock Velocity .................................................... C - 4
        Formation Velocity ............................................... C - 6
        Seismic Velocity ................................................. C - 7
        Velocity Accuracy ................................................ C - 18
        Velocity Frequency ............................................... C - 20
Migration Uncertainties .................................................. C - 22
        Two Main Uncertainties of Depth Conversion ....................... C - 24
    Ray Tracing Background ............................................... C - 25
        Terminology ...................................................... C - 25
        Forward Ray Tracing .............................................. C - 26
Seismic interpretation uncertainties ..................................... C - 33

Basic Geostatistical Analysis ............................................................................... D - 1


Introduction to Geostatistics Short Course ............................... D - 2
Lesson 1: Purpose of Geostatistics ....................................... D - 4
    Introduction ......................................................... D - 4
    Qualitative / Quantitative Reasoning ................................. D - 4

R2003.2 Contents


    Reservoir Planning ................................................... D - 6
    Elimination of Surprises ............................................. D - 7
    The Need For Decision Making ......................................... D - 8
    Quantification of Uncertainty and Risk Qualified Decision Making ..... D - 9
Lesson 2: Basic Concepts ................................................. D - 11
    Definitions .......................................................... D - 11
    Histograms ........................................................... D - 19
    Probability Distributions ............................................ D - 20
    Monte Carlo Simulation ............................................... D - 21
    Bootstrap ............................................................ D - 22
    Geostatistical and Other Key Concepts ................................ D - 23
        Petrophysical Properties ......................................... D - 23
        Modeling Scale ................................................... D - 23
        Uniqueness, Smoothing, and Heterogeneity ......................... D - 23
        Analogue Data .................................................... D - 24
        Dynamic Reservoir Changes ........................................ D - 24
        Data Types ....................................................... D - 24
    Numerical Facies Modeling ............................................ D - 25
Lesson 3: Geological Principles for Reservoir Modeling ................... D - 26
    Reservoir Types ...................................................... D - 26
        Modeling Siliciclastic Reservoirs ................................ D - 27
        Modeling Carbonate Reservoirs .................................... D - 29
    Modeling Principles .................................................. D - 30
    Transforming Areal Coordinates ....................................... D - 35
    Transforming Z Coordinates ........................................... D - 37
        Cell Size ........................................................ D - 39
    Workflow ............................................................. D - 39
Lesson 4: Data Analysis .................................................. D - 42
    Data Analysis ........................................................ D - 42
    Outliers and Erroneous Data .......................................... D - 42
    Lumping Populations .................................................. D - 43
    Declustering ......................................................... D - 45
    Trends ............................................................... D - 47
Lesson 5: Spatial Data Analysis .......................................... D - 49
    Introduction ......................................................... D - 49
    Variograms ........................................................... D - 49
        Components of the Variogram ...................................... D - 53
        Choosing variogram directions and lag distances .................. D - 55
    Variogram Interpretation ............................................. D - 59
        Variogram Interpretation ......................................... D - 59
        Anisotropy ....................................................... D - 60
        Cyclicity ........................................................ D - 61
        Large Scale Trends ............................................... D - 62
    Variogram Modeling ................................................... D - 62
        Variogram Models ................................................. D - 63
    Workflow ............................................................. D - 66


Lesson 6: Geostatistical Algorithms ...................................... D - 67
    Introduction ......................................................... D - 67
    Kriging .............................................................. D - 68
        Discussion ....................................................... D - 72
    Implementing Kriging ................................................. D - 73
    The Kriging Variance ................................................. D - 75
    Sequential Simulation ................................................ D - 77
Lesson 7: Structural Modeling ............................................ D - 80
    Introduction ......................................................... D - 80
    Velocity Uncertainty ................................................. D - 81
    Surface Based Modeling ............................................... D - 83
    Surface Flapping ..................................................... D - 89
    Fault Handling ....................................................... D - 93
Lesson 8: Seismic Data Integration ....................................... D - 94
    Introduction ......................................................... D - 94
    Calibration of Data .................................................. D - 94
    Cross Spatial Variability ............................................ D - 96
    Cokriging ............................................................ D - 97
    Colocated Cokriging .................................................. D - 97
    Simulation Alternatives .............................................. D - 98
        Annealing ........................................................ D - 98
Final Thoughts ........................................................... D - 101

Basic Flow Analysis................................................................................................... E - 1


Basic Reservoir Mechanics ................................................ E - 2
    Dissolved-Gas Drive .................................................. E - 2
    Free-Gas Cap Expansion Drive ......................................... E - 3
    Water Drive .......................................................... E - 4
    Gravity Drive ........................................................ E - 5
    Combination Drive .................................................... E - 5
    Gas Reservoirs ....................................................... E - 5
Fundamentals of Reservoir Simulation ..................................... E - 6
    Step 1: Build an initial reservoir model ............................. E - 7
    Step 2: Define Wells and How They Operate ............................ E - 11
    Step 3: Run the Simulation ........................................... E - 12
        Benefits of Simulation ........................................... E - 17
        Simulation Applications .......................................... E - 17
        Types of Reservoirs VIP Simulates ................................ E - 18
        Types of Processes VIP Simulates ................................. E - 19
        Reservoir Management Decisions Influenced by Modeling/Simulation . E - 20
        Basic Equations, Volumetrics ..................................... E - 22
    Run the simulation ................................................... E - 22
        Traditional Simulation ........................................... E - 22
        Upgridding/Upscaling Example ..................................... E - 23



    Step 4: Analyze the Results .......................................... E - 24
    Reservoir production analysis ........................................ E - 25
        Required Simulation Plotting Capabilities in a Probabilistic World E - 25
    History matching ..................................................... E - 25

Basic Risk Analysis ................................................................................................... F - 1


Measuring value .......................................................... F - 2
    Net Present Value (NPV) .............................................. F - 2
    Efficiency Measures .................................................. F - 2
    Breakeven Discount Rate (BDR) ........................................ F - 2
    Discounted Payout, Payout (DPO) ...................................... F - 3
    Deciding which is the better project ................................. F - 3
    Accelerating the investment .......................................... F - 5
Measuring variability .................................................... F - 6
    Types of Risk ........................................................ F - 6
        Political Risk (strategic management team) ....................... F - 6
        Economic Risk (corporate economist) .............................. F - 6
        Engineering Risk (engineer) ...................................... F - 6
        Geologic Risk (geologists / geophysicist) ........................ F - 6
    Risk vs. Uncertainty ................................................. F - 7
    Methods to Quantify Risk ............................................. F - 7
        Decision Trees ................................................... F - 8
        Simulation ....................................................... F - 11
        Spreadsheet ...................................................... F - 13
    Sensitivity analysis ................................................. F - 16
Final Thoughts ........................................................... F - 18

Well Planning Basics ................................................................................................. G - 1


Components of Horizontal Wells ........................................... G - 2
Directional Drilling Considerations ...................................... G - 4
Well Plan Types .......................................................... G - 5
Redline Parameters ....................................................... G - 10

Glossary............................................................................................................................ H - 1
Glossary of terms ............................................................................................................. H - 2


Appendix A

The Oilfield Lifecycle

In this section you will learn about source rocks, reservoir rocks, and traps; you'll be introduced to the eight phases of the oil field life cycle (Reconnaissance, Prospect Generation, Discovery, Reservoir Delineation, Facilities, Primary Production, Enhanced Recovery, and Divestiture); and you'll see where Landmark products (including DecisionSpace) fit in the cycle.

Topics covered in this chapter:


• Source Rocks
• Reservoir Rocks
• Traps
• The Oil Field Life Cycle
• Reconnaissance
• Prospect Generation
• Discovery
• Reservoir Delineation
• Facilities
• Primary Production
• Enhanced Recovery
• Divestiture


Source Rocks
The rocks we drill in the upper part of the crust, our source and reservoir rocks, are sedimentary rocks. They are called sedimentary because they are composed of sediments: materials that were buried beneath the surface of the ground and have since hardened into fairly solid rock.

Where did these sedimentary rocks come from? Throughout geologic time, sea level has not been constant. Many times, the oceans rose to cover the land with shallow seas, and each time large volumes of sediment were deposited: sand grains along ancient beaches, mud particles in shallow offshore regions, and biologic deposits of sea shells. These muds, sands, and shells accumulated to form what we call sedimentary rocks.

Where do the gas and oil come from? Gas and oil come from ancient organic material preserved in the sedimentary rock layers. Along with the sand, mud, and shells, there is also abundant dead plant and animal material, and it is this organic material that becomes gas and oil. The sedimentary rock that contains the most organic material, and so makes the best source rock for generating gas and oil, is black shale. In fact, it is black because of its high organic content.

As newer sediments are deposited over it, the source rock is buried deeper and deeper below the surface and begins to cook, because the deeper into the earth we go, the hotter it gets. When the temperature reaches about 150°F (at roughly 7,000 ft), oil begins to form, and it continues to form until the temperature reaches about 300°F (roughly 18,000 ft). As the temperature gets even hotter, any remaining organic material generates natural gas.
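The depth and temperature pairs quoted above imply a roughly linear geothermal gradient. As a minimal sketch, assuming a hypothetical surface temperature of 60°F and a gradient of about 1.3°F per 100 ft (real gradients vary widely from basin to basin):

```python
def temperature_at_depth(depth_ft, surface_temp_f=60.0, gradient_f_per_ft=0.013):
    """Estimate formation temperature (deg F) at a given depth using a
    simple linear geothermal gradient. The defaults are illustrative
    assumptions, not measured values."""
    return surface_temp_f + gradient_f_per_ft * depth_ft

# These assumed values roughly reproduce the oil window in the text:
# ~150 deg F near 7,000 ft and ~300 deg F near 18,000 ft.
print(temperature_at_depth(7000))   # ~151 deg F
print(temperature_at_depth(18000))  # ~294 deg F
```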


Reservoir Rock
Once oil and gas form, they are light compared to the water also present in the subsurface, so they tend to rise. They flow through cracks in the source rock and, with luck, reach a reservoir rock. A reservoir rock is a rock with abundant pore space. Pores are common in sedimentary rocks such as limestone and sandstone: sandstone is made of little balls of quartz that do not fit together perfectly, leaving plenty of pores, and limestone is made of old shell beds and coral reefs, which also have abundant pore space. Once gas and oil get into the reservoir rock, they tend to stay there, because it is their path of least resistance. The pores are often interconnected, so the gas and oil can flow from pore to pore, up the angle of the reservoir. The images below show examples of porosity and permeability.
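The idea that imperfectly fitting quartz grains leave pore space can be made concrete with idealized sphere packings. A quick sketch, assuming uniform spherical grains (real sands are less porous because grains vary in size and shape):

```python
import math

# Porosity = fraction of bulk volume NOT occupied by grains.
# For uniform spheres, the solid fraction depends only on the packing.
cubic_porosity = 1.0 - math.pi / 6                          # loosest regular packing
rhombohedral_porosity = 1.0 - math.pi / (3 * math.sqrt(2))  # tightest regular packing

print(f"cubic packing porosity:        {cubic_porosity:.1%}")        # ~47.6%
print(f"rhombohedral packing porosity: {rhombohedral_porosity:.1%}") # ~26.0%
```

Real reservoir sandstones typically fall well below these ideal figures once sorting, cementation, and compaction are accounted for.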


Traps
As gas and oil flow up the angle of the reservoir, with luck they reach a trap: a high point on the reservoir that allows the gas and oil to concentrate. The fluids separate by density: gas on top, oil in the middle, water on the bottom. To complete the trap, we need a cap rock overlying it; this is the seal. Without a seal there is no trap, because the gas and oil will leak to the surface. In the very early days of exploration, oilmen looked for and drilled seeps, places where oil had leaked to the surface of the ground, and they were usually very successful. But those days of easy exploration are long over. Shale and salt, because of their very low permeability, make good cap rocks.


The Oil Field Life Cycle


The eight phases of the Oil Field Life Cycle are: Reconnaissance, Prospect Generation, Discovery, Reservoir Delineation, Facilities, Primary Production, Enhanced Recovery, and Divestiture. Although companies can enter and exit the cycle at any phase, they are discussed here from the first phases of exploration and preparation through the final phase of divestiture. Not all companies are involved in all phases, and some specialize in only one or two. Most integrated oil companies, however, are involved in all phases somewhere within their operations at all times. For example, they may be doing reconnaissance at one location while bringing a field somewhere else into production.


Reconnaissance
A basin is nothing more than a large area of sedimentary deposition. This is where the source and reservoir rocks that generate and accumulate hydrocarbons were deposited. We have identified about 600 basins worldwide, and of these, about 200 have had little or no exploration.

Companies must continually use reconnaissance to identify new reserves to replace the hydrocarbons they produce. If you think of an oil company as a bucket of oil, its goal is to keep that bucket full. A full bucket is worth more than a depleted one, but the bucket is constantly being drained as the company produces its oil and gas. Replacing reserves is therefore important for maintaining the long-term value of the company, and for demonstrating this ability to its stockholders. Reconnaissance is the first step toward identifying these new reserves, and seismic exploration is the primary reconnaissance technique.

Reconnaissance and seismic exploration


Seismic exploration uses an acoustic source (such as a dynamite explosion), receivers, and a recorder. The acoustic waves are reflected by subsurface formations (sedimentary rocks) back to geophones on the ground and transmitted to the recorder. It is the variations in the travel times of these reflected waves that produce the time-related records we correlate to build an image of the subsurface.
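Because the records are measured in two-way travel time (down to the reflector and back up), converting a reflection time to depth requires an average velocity. A minimal sketch; the 10,000 ft/s figure is a hypothetical round number, not a measured value:

```python
def reflector_depth(two_way_time_s, avg_velocity_ft_per_s):
    """Convert two-way travel time to reflector depth.

    The wave travels down and back, so only half the recorded time
    corresponds to the one-way trip: depth = v * (t / 2).
    """
    return avg_velocity_ft_per_s * two_way_time_s / 2.0

# A reflection recorded at 2.0 s through rock averaging 10,000 ft/s:
print(reflector_depth(2.0, 10000.0))  # 10000.0 ft
```

Appendix C discusses why the average velocity, and hence the converted depth, carries significant uncertainty.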


Landmark plays a major role in reconnaissance, particularly through the use of SeisWorks. Briefly, SeisWorks:

• supports seismic interpretation in either time or depth
• includes full multi-survey merge capabilities, allowing us to combine 2D and 3D projects and merge multiple 3D projects
• allows us to correct differences in amplitude, phase, and frequency across multiple surveys
• lets us interpret faults on either vertical seismic sections or time slices


Prospect generation
This phase creates an inventory of potential opportunities and prioritizes them based on risk. This step and the first are really nothing more than the preparation needed before drilling.


Discovery
Once preparation is complete, the Discovery phase converts a prospect from a scientific novelty into an economic success or failure by drilling. You are never certain what is filling the pore spaces of your rocks until you spend your money and punch a hole into the earth. In the figure below, floormen are doing just that as they spin a new stand of pipe into the hole. Landmark sells many products that are useful in this phase. For example: SeisWorks and EarthCube to help determine where to target; Wellbore Planner to help determine how to get to the target; and our large suite of drilling products (including WellPlan and DrillModel) that do everything from helping us design the optimum bit and drilling fluid combination, to modeling fluid response when a kick is encountered, to reliably costing our well.


Reservoir Delineation
If our discovery well was successful, we need to determine how big the reservoir is. The Delineation phase involves more drilling to ascertain reservoir quality, trap volume, drive mechanism, and cost to extract. This information will in part drive the size of the production, pipeline, and processing facilities we need to construct. StratWorks is one of Landmark's most useful products here. The image shown below is from StratWorks, and shows correlated well logs and interpreted faults. In addition to rapidly correlating well logs, StratWorks also allows us to build cross sections, create subsurface maps, and compute volumes, all very useful for helping us determine the extent of our reservoir. Our new DecisionSpace offering also fits nicely into the Reservoir Delineation workflow by allowing us to take limited amounts of data and play what-if scenarios to determine whether we move forward in the cycle, or divest early and iterate back to the preparation phases of reconnaissance and prospect generation.
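One standard way the delineation data feed a size estimate is the volumetric original-oil-in-place calculation (7,758 barrels per acre-foot). The sketch below uses made-up inputs and is not tied to any particular Landmark product:

```python
def ooip_stb(area_acres, thickness_ft, porosity, water_saturation, oil_fvf):
    """Volumetric original oil in place, in stock-tank barrels:
    OOIP = 7758 * A * h * phi * (1 - Sw) / Bo,
    where 7758 converts acre-feet to barrels and Bo is the oil
    formation volume factor (reservoir barrels per stock-tank barrel)."""
    return 7758.0 * area_acres * thickness_ft * porosity * (1.0 - water_saturation) / oil_fvf

# Hypothetical delineation-stage inputs: 640 acres, 50 ft net pay,
# 20% porosity, 30% water saturation, Bo of 1.2 RB/STB.
print(f"{ooip_stb(640, 50, 0.20, 0.30, 1.2):,.0f} STB")  # ~29 million STB
```

Running a range of inputs through a formula like this is the simplest form of the what-if analysis described above.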


Facilities
Once we know how big our reservoir is, we can design and construct facilities to extract our hydrocarbons from the reservoir at the lowest possible cost, and to process and treat them for the highest market value. The image below shows a cracker used to process the produced fluids; this cracker produces gasoline from heavy crude oil. Cracking uses controlled heat and pressure, and sometimes a chemical catalyst, to break heavier hydrocarbon molecules down into lighter, more valuable products, like gasoline. Most traditional Landmark products do not fit very well into this phase of the OFLC; however, some drilling products, such as CasingSeat, StressCheck, and WELLCAT, are capable of designing optimal casing, liner, and tubing configurations. This is important for maximizing production while minimizing costs.


Primary Production
In the Primary Production phase, the natural energy present in the reservoir may be sufficient to move the fluids. The image below shows the Lucas well drilled at Spindletop in 1901. Some historians estimate that this well sprayed as many as 100,000 barrels of oil per day for 9 days before it could be brought under control.


Enhanced Recovery
At some point, the natural propulsive energy within the reservoir can no longer move the hydrocarbons, and the Enhanced Recovery phase is initiated. As the natural energy declines, artificial lift may be required. Below is a cartoon of a sucker rod pump. Its up-and-down motion activates a downhole pump that raises fluids to the surface.

If natural gas is economically available, a type of artificial lift called gas lift is commonly used. In gas lift, gas is injected directly into the fluid column of a well. The injected gas aerates the fluid so that it exerts less pressure than the formation, and the formation pressure can once again force oil up the wellbore. Another enhanced recovery technique is waterflooding, in which water is injected into the formation through injector wells in order to push the oil toward the production wells. Similarly, gas may be injected back into the reservoir in order to maintain (or regain) formation pressure, and so drive production. A variety of Landmark products are useful during this lifecycle phase: Stratamodel for building structural and stratigraphic 3D frameworks that can be passed to VIP for full-physics flow simulation, DSS for monitoring production and injection rates, and TOW/cs for the collection and analysis of our production data.

Divestiture
Divestiture occurs when the profitability of the field no longer achieves expectations, and the decision is made to either sell or abandon the field. Landmark's DecisionSpace offering is capable of forecasting future production through decline curve analysis, and allows you to evaluate the economics of your field, which of course is the driving factor in the decision of whether or not to divest.


Appendix B

Basic Well Log Analysis

Well logs are one of the geoscientist's most important tools. They are used to: help correlate formations; help define lithology, porosity, and permeability; distinguish between gas, oil, and water in a reservoir; and estimate hydrocarbon reserves. This section provides an overview of well log analysis.

Topics covered in this chapter:


Rock Properties
The Drilling Environment
Common Logging Measurements
Archie's Water Saturation Equation
Interpretive Workflows


Rock Properties
Log responses are affected by various rock properties, including porosity, lithology, fluid saturation, permeability, and resistivity.

Porosity

A rock is porous when it has many tiny spaces between its grains, as illustrated by the drawing below. These pores are always filled with something, usually water, sometimes oil or gas.

Porosity is defined as the ratio of void space in a rock to the total volume of rock. It is expressed as a fraction of 1.0 or, on older logs, in percent. Porosity is measured by sonic, density, neutron, and/or nuclear magnetic resonance logs. It is represented by the Greek letter phi (φ), or just as PHI with either a suffix or prefix to distinguish a particular porosity type; for example, DPHI or PHID for density porosity.

Lithology

Lithology describes the solid part of the rock. In the context of well log interpretation, lithology can be a simple description (sandstone, limestone, or dolomite), or, given the proper combination of logging measurements, a complex estimation of the major mineralogies. Lithology is especially important in well log interpretation because formation lithology greatly influences porosity log responses.
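The density porosity mentioned above (DPHI/PHID) follows the standard density-porosity relation; a minimal sketch in Python, where the 2.65 g/cc matrix and 1.0 g/cc fluid densities are assumed example parameters for a clean, fresh-water-filled quartz sandstone, not values from this manual:

```python
def density_porosity(rhob, rho_matrix=2.65, rho_fluid=1.0):
    """Density porosity: PHID = (rho_matrix - rhob) / (rho_matrix - rho_fluid).

    rhob is the bulk density log reading in g/cc; the matrix and fluid
    densities are interpretation parameters chosen for the assumed
    lithology and mud filtrate.
    """
    return (rho_matrix - rhob) / (rho_matrix - rho_fluid)

# A 2.32 g/cc bulk density in a clean quartz sandstone implies 20% porosity.
phid = density_porosity(2.32)
```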


Fluid Saturation

Fluid saturation is the percentage of pore space in a rock which is filled with a particular fluid (gas, oil, or water). Fluid saturation is usually expressed in terms of water saturation (Sw), because water saturation is the direct result of the defining equations. Water saturation is generally expressed as a fraction of 1.0; on some older logs, Sw is expressed in percent.

Permeability

A rock is permeable when its pores are connected, allowing fluid to flow through the rock. Permeability depends upon the size and shape of the pores, and the size, shape, and extent of their interconnections. A magnified illustration of good permeability is shown in the drawing below.

Permeability is measured in darcies or millidarcies, and is represented by the symbol K. Permeability is not directly measurable by logs; it is instead determined in the laboratory from core samples. Permeability can also be estimated from log-derived porosity and water saturation curves by the use of empirical equations.

Resistivity

Resistivity is the ability of a rock to resist the flow of an electric current. In general, the solid framework of the rock (the matrix) and the hydrocarbons (gas and oil) in the pore space are so resistive that essentially all electric current is forced to flow through the water present in the formation. The resistivity of the whole formation is therefore dependent on the amount and salinity of the water present in the formation, and on the complexity of the pore connections forming the paths the electric current must follow (the tortuosity).
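As noted above, permeability can be estimated from log-derived porosity and water saturation through empirical equations. A hedged sketch of the generic form k = a * phi^b / Swirr^c follows; the coefficients a, b, and c here are placeholders, not values from this manual, and must be calibrated against core measurements for a real field:

```python
def permeability_md(phi, sw_irr, a=1.0e4, b=4.5, c=2.0):
    """Generic empirical permeability estimate, k = a * phi**b / sw_irr**c,
    in millidarcies. phi and sw_irr (irreducible water saturation) are
    fractions; a, b, c are placeholder coefficients to be fit to core data.
    """
    return a * phi ** b / sw_irr ** c

# Higher porosity and lower irreducible water saturation both imply
# better-connected pore space, hence a higher permeability estimate.
```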


Resistivity is measured in ohm-meters squared per meter, usually shortened to ohm-meters. Logs can measure either resistivity or its reciprocal, conductivity, but the measurement is often displayed only in resistivity units. Resistivity is represented by the symbol R, usually with a lowercase suffix which indicates a specific resistivity type; for example, Rw for water resistivity, or Rt for true resistivity.


The Drilling Environment


When a formation (particularly a porous and permeable one) is drilled, the borehole and the rock surrounding it are contaminated by drilling mud. When drilling mud filtrate invades the formation, it displaces the originally present gas, oil, or water, and consequently affects logging measurements. A diagram showing the drilling environment, terminology, and symbols used in log interpretation is shown below.

This invasion of the formation by drilling mud filtrate creates two zones around the borehole: a flushed zone and a transition zone. In the flushed zone, formation water is completely displaced by mud filtrate, and hydrocarbons may be up to 95% displaced. In the transition zone, formation fluids are only partially displaced.


Beyond the transition zone, in the undisturbed formation, pores are not contaminated with mud filtrate, and the formation remains saturated with its original fluids.

Even though logging measurements are made in, or through, the flushed and transition zones of the formation, the petrophysical interpretation process attempts to estimate parameters in the undisturbed part of the formation. Some tools, like the sonic, neutron, and density porosity tools, make their measurement in this disturbed zone. During interpretation, we assume that those properties remain constant through the disturbed zone to the undisturbed formation. Other tools, notably the resistivity tools, make several measurements at differing distances from the borehole. The near-borehole measurements are then used to correct the far measurements for drilling induced effects on the formation.


Common Logging Measurements


The following table shows the most common log types, their curve names and units, and their primary uses.


Log Type: Spontaneous Potential (SP)
Curve name and units: SP, in millivolts (mv)
Primary uses: correlation and gross lithology; qualitative indication of permeability; estimate of formation water resistivity; estimate of formation shale volume.

Log Type: Gamma Ray (GR)
Curve name and units: GR, in API units
Primary uses: correlation and gross lithology; estimate of formation shale volume.

Log Type: Resistivity, Dual Induction
Curve names and units: ILD, ILM, SFL, in ohm-meters
Primary uses: correlation; qualitative indication of permeability; estimate of drilling fluid invasion; estimate of formation fluid saturation.

Log Type: Resistivity, Dual Laterolog
Curve names and units: LLD, LLS, MSFL, in ohm-meters
Primary uses: same as the dual induction.

Log Type: Porosity, Sonic
Curve name and units: DT, in usec/ft or usec/m
Primary use: formation porosity.

Log Type: Porosity, Neutron
Curve name and units: NPHI, decimal, referenced to a specific lithology
Primary use: formation porosity; NPHI is output directly from the tool.

Log Type: Porosity, Density
Curve names and units: RHOB, in gm/cc or kg/m^3; PEF, in barns/electron
Primary uses: formation porosity (from RHOB); lithology (PEF values indicate lithology type). Lithology can also be estimated from combinations of 2 or 3 porosity measurements.


Archie's Water Saturation Equation


In 1941, G. E. Archie of Shell Oil Company presented a paper in which he empirically showed that the water saturation of a fluid-filled formation can be determined by relating water resistivity, formation resistivity, and porosity. Archie's equation states that:

Sw^n = (a * Rw) / (PHI^m * Rt), or equivalently, Sw = [(a * Rw) / (PHI^m * Rt)]^(1/n)

Parameter   Definition                     Logging Measurement
Sw          Formation water saturation     Computed from the equation.
PHI         Porosity                       From sonic, density, and/or neutron logs.
Rt          True formation resistivity     From deep induction or deep laterolog readings.
Rw          Formation water resistivity    From water samples, or computed from the SP.
a           Cementation factor             From core or other techniques.
m           Cementation exponent           From core, Pickett plots, or other techniques.
n           Saturation exponent            From core or other techniques.

All present methods of interpretation involving resistivity are derived from Archie's equation. This equation was based on measurements from clean sandstone core over limited ranges of porosity and formation water salinity. Over the years, widespread industry usage has expanded the application of this equation beyond the original porosity and water resistivity ranges, and into carbonate reservoirs, but Archie's equation still generally serves quite well. However, one environment in which the equation tends to produce pessimistic results is shaly sands.
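Archie's relation described above can be sketched in Python. The default values a = 1, m = 2, and n = 2 are common textbook starting points, not parameters from this manual; as the table notes, they should properly come from core or Pickett plots:

```python
def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Archie water saturation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).

    rw: formation water resistivity (ohm-m), rt: true formation
    resistivity (ohm-m), phi: porosity (fraction); a, m, n are the
    Archie parameters, defaulted to common textbook values.
    """
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Example with hypothetical inputs: Rw = 0.05 ohm-m, deep resistivity
# Rt = 20 ohm-m, porosity 25% -> Sw = 0.20 (80% hydrocarbon saturation).
sw = archie_sw(rw=0.05, rt=20.0, phi=0.25)
```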


Since Archie's equation was originally published, many new shaly sand equations have been developed. Many of these have seen some success, but none have proved universal. Some of these equations in general current use are: Simandoux, Modified Simandoux (Indonesian), Waxman-Smits, and the Dual Water Model.


Interpretive Workflows
Well log analysis is often a multi-step process. The number of steps involved depends, at least in part, on your general familiarity with the location where your data was acquired, and your familiarity with your specific data.

Getting Started Workflow

In the case where you are new to an area and the data, the getting-started workflow shown on the following page illustrates two reconnaissance-style approaches. These approaches are intended to allow you to quickly scan large volumes of data and locate zones of interest. The scanning process differs slightly between clastic and carbonate environments. In clastic environments, shales usually have properties which vary slowly with depth, and both wet and hydrocarbon-bearing reservoirs tend to stand out against this fairly consistent background. Resistivity is usually the property where these contrasts are most significant, and is therefore the best measurement to use for the first scan.


In carbonate environments, there are often few consistent shales, and the properties of the individual units may vary widely. In this case, scanning for good porosity, rather than resistivity, will tend to pinpoint the reservoirs, and is therefore the best measurement to use for the first scan.


Detailed Analysis Workflow

Detailed log analysis involves choosing parameters unique to individual wells, and combining them with other regional or field-wide parameters. A sample detailed analysis workflow is presented on the following page. The shaly interpretation parameters, such as GRclean, SPclean, GRshale, and SPshale, can vary among wells due both to geologic changes with well location and to inconsistencies in log measurement calibration and acquisition conditions. If a single measurement is used to measure porosity, the lithology must be estimated so that the proper matrix parameters can be chosen for the calculation. If two measurements are used in concert (usually neutron and density), lithologic parameters are generated as a by-product of the porosity calculation. True formation resistivity, Rt, is usually adequately estimated by the deep induction and/or deep laterolog measurements. However, given some drilling environment conditions (large borehole, extremely fresh or saline muds, deep invasion), corrections may have to be applied to these measurements. The Archie parameters (a, m, and n) are best determined from core measurements, a lengthy and expensive, and therefore rare, undertaking. Their values can also be estimated from logs through graphical techniques, or predicted from local knowledge.
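The shale-volume step above can be illustrated with the linear gamma-ray index, one common first-pass Vsh estimator (the clean and shale baselines are the per-well parameters discussed above; the numbers in the example are hypothetical):

```python
def gamma_ray_vsh(gr, gr_clean, gr_shale):
    """Linear gamma-ray index, a simple first-pass shale volume estimate:
    Vsh = (GR - GRclean) / (GRshale - GRclean), clipped to [0, 1].

    gr_clean and gr_shale are the per-well clean-sand and shale
    baselines, picked from the log by the interpreter.
    """
    vsh = (gr - gr_clean) / (gr_shale - gr_clean)
    return min(max(vsh, 0.0), 1.0)

# With hypothetical baselines GRclean = 20 API and GRshale = 120 API,
# a reading of 60 API gives Vsh = 0.4.
vsh = gamma_ray_vsh(60.0, 20.0, 120.0)
```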


Calculation of flushed zone saturation, Sxo, follows the same steps as for Sw calculations, except that mud filtrate resistivity, Rmf, is substituted for Rw, and flushed zone resistivity, Rxo, is substituted for Rt.


Appendix C

Basic Seismic Analysis

The structural model usually comes from a seismic interpretation of faults and horizons. At present, DecisionSpace does not account for the impact of fault and horizon uncertainties in the simulation run. A workaround is to provide the simulation with three structural models: a minimum, a median, and a maximum. Although these are not equally probable realizations of the structural model, this workaround allows you to run the economics on the extremes of the rock volume estimation. This appendix details some of the key uncertainties in developing the structural model from seismic data.

Topics covered in this chapter:


Time-to-depth conversion uncertainties
Migration uncertainties
Seismic interpretation uncertainties


Time-to-depth Conversion Uncertainties


The time-to-depth conversion is the transformation of the Z-axis from seismic measurement in time to depth. In many cases this corresponds to a vertical stretch from time to depth (or image ray tracing) plus residual corrections, or some other technique to tie seismic depth to well depth. This is one of the geophysicist's best-known uncertainties. The uncertainty related to this step is generally large, and its impact on the rock volume is usually large because it shifts the whole reservoir up or down. Since the water-oil contact (WOC) remains at a constant depth, a small variation in the depth conversion implies a huge difference in the volume of oil in place. In many basins around the world this uncertainty can represent 50% or more of the total uncertainty related to rock-volume estimation. A typical depth conversion error is illustrated below. In this example the geologist used a velocity function from the discovery well and the structural section below. The step-out well was drilled and, much to the geologist's surprise, the top of pay came in 1000 feet below prognosis. This happened because the depth conversion used an average velocity of 9000 ft/s down to 2.0 seconds, while in reality, due to the structure, the average velocity is 10000 ft/s. The geologist's simplified velocity model thus led to the 1000-foot depth conversion error at 2.0 seconds.
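The 1000-foot miss described above can be reproduced with simple vertical depth conversion (depth = average velocity times one-way time):

```python
def depth_from_twt(v_avg, twt):
    """Vertical depth conversion: depth = v_avg * (two-way time / 2).

    v_avg in ft/s, twt (two-way travel time) in seconds, depth in feet.
    """
    return v_avg * twt / 2.0

# The step-out well example at 2.0 seconds two-way time:
prognosis = depth_from_twt(9000.0, 2.0)   # discovery-well velocity function
actual = depth_from_twt(10000.0, 2.0)     # true average velocity over the structure
error = actual - prognosis                # 1000 ft deeper than prognosis
```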


Velocity Background
The uncertainty around the depth conversion is centered on the velocity model. This section provides some basic velocity information essential for evaluating a velocity model. First of all, the basic velocity equation is Distance = Time * Velocity. All of the rest of the geophysical velocity equations are a function of the travel path over which the velocity is calculated. Other than material properties, the two main factors affecting velocity are temperature and pressure: the higher the temperature, the slower the velocity and the lower the density; the higher the pressure, the faster the velocity and the greater the density.

Rock Density

[Figure: shale density as a function of depth (or pressure), increasing from about 1.8 to 2.6 g/cc with depth.]

The above graph shows the general soft rock trend of increasing density with depth or pressure.


Rock Velocity

[Figure: crossplot of density (1.0 to 3.0 g/cc) versus velocity (5,000 to 25,000 ft/s) for common materials: anhydrite, dolomite, limestone, sandstone, shale, gypsum, salt, salt water, water, and oil.]
As the density increases, the velocity also increases.


The table below lists some typical acoustic velocities of certain materials.

Material                      Velocity (feet/second)
air (dry or moist)            1,100
anhydrite                     20,000
calcite                       20,000 to 22,000
cement (cured)                12,000
dolomite                      23,000
dolomite (5)                  20,000
dolomite (20)                 15,000
drilling mud                  6,000
granite                       19,700 to 20,000
gypsum                        19,000
hydrogen (gas)                4,250
limestone                     21,000
limestone (5)                 18,500
limestone (20)                13,000
methane (gas)                 1,500 to 1,600
petroleum (oil)               4,200
quartz                        18,000 to 18,900
salt                          15,000
sandstone                     19,000
sandstone (5)                 16,000
sandstone (20)                11,500
sandstone (35)                9,000
shales                        6,000 to 17,000
steel                         20,000
water (pure)                  4,800
water (100,000 mg NaCl/L)     5,200
water (200,000 mg NaCl/L)     5,500

*Source: Petroleum Engineering Handbook

*Some of the velocity measurements are at standard temperature and pressure, and not necessarily at reservoir conditions.

You may have noticed the large velocity range for shales. The shale velocity depends on the amount of compaction and the pore fluid pressure within the shale matrix. When a shale becomes overpressured, the fluid becomes the key velocity component, and the shale velocity can drop as low as 6,000 feet/second.

Formation Velocity

[Figure: a formation composed of Matrix 1 (quartz) grains, Matrix 2 (shale), and pore fluid (water and oil).]

The velocity of the above formation is a function of its matrix (percent quartz versus percent shale), its porosity (φ), and its water saturation (Sw). In the overpressured case the grains of the matrix may not even be in contact, and the velocity slows toward the fluid velocity. In the case of a hard streak, the grains of the matrix are cemented together and the rock velocity takes on the velocity of the cement (usually a calcite cement).


Finally, the velocity is also a function of the direction in which the acoustic wave travels through the formation. Rock physics experiments show that the horizontal velocity of shale can be as much as 20% faster than the vertical velocity. This phenomenon is known as anisotropy; in fact, almost all rock formations have a certain degree of anisotropy. Because a large component of the seismic travel path is horizontal, traveling at the horizontal rock velocity, the seismic velocities are usually different than the vertical depth conversion velocities.

Seismic Velocity

The initial seismic velocity derived from surface acquisition is the NMO (normal moveout) or stacking velocity. You may be asking: what the heck is NMO? NMO is the hyperbolic correction made to seismic traces to make them appear as if they were recorded at zero offset. The NMO correction is displayed on the following page. Once all the trace times are corrected to zero offset, the traces are stacked together to get a more robust statistical solution. The NMO velocities are initially used to create the unmigrated or stacked image. Dix (1954) showed how interval velocities might be determined from NMO velocities.


[Figure: field recording geometry ("Sounds like oil to me!") above a raw seismic display, showing the NMO time needed to move a reflector recorded at offset up to its zero-offset time t0.]


The following diagram helps define the parameters for calculating geophysical velocities.

[Figure: a three-layer earth model below the surface, with interval velocities V1, V2, V3, depths Z1, Z2, Z3 to the base of each layer, and one-way times T1, T2, T3 from the surface.]

Vi = interval velocity of layer i
Zi = depth to the base of layer i
ΔZi = thickness of layer i
Ti = one-way travel time from the surface to the base of layer i
ΔTi = one-way transit time through layer i


Based on the previous definitions, the following geophysical velocities can be defined.

Interval velocity:  V_i = ΔZ_i / ΔT_i

RMS velocity:  V_rms = sqrt( Σ_i ΔT_i V_i² / Σ_i ΔT_i )

Average velocity:  V_ave = Σ_i ΔZ_i / Σ_i ΔT_i = Σ_i ΔT_i V_i / Σ_i ΔT_i

The NMO (Vnmo) or stacking velocity can also be defined, from the moveout hyperbola for a reflector with zero-offset two-way time T0 recorded at offset x:

T_x² = T_0² + x² / V_nmo²

[Figure: raypath geometry for a reflector, showing the zero-offset time T0 and the travel time Tx at offset x.]
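The velocity definitions above can be checked numerically; a small sketch assuming a simple flat-layered model (the layer thicknesses and velocities are made-up example values):

```python
import math

def interval_to_rms_avg(dz_list, v_list):
    """Vrms and Vave for a stack of flat layers, from layer thicknesses
    (ft) and interval velocities (ft/s), via one-way times dT_i = dZ_i / V_i."""
    dt = [dz / v for dz, v in zip(dz_list, v_list)]
    total_t = sum(dt)
    v_rms = math.sqrt(sum(t * v * v for t, v in zip(dt, v_list)) / total_t)
    v_ave = sum(dz_list) / total_t
    return v_rms, v_ave

def nmo_time(t0, offset, v_nmo):
    """Two-way time at offset x on the NMO hyperbola: Tx^2 = T0^2 + x^2 / Vnmo^2."""
    return math.sqrt(t0 * t0 + (offset / v_nmo) ** 2)

# Example: 5000 ft at 5000 ft/s over 5000 ft at 10000 ft/s. Vrms weights
# the faster layer more heavily, so Vrms is slightly higher than Vave.
v_rms, v_ave = interval_to_rms_avg([5000.0, 5000.0], [5000.0, 10000.0])
```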


Vnmo is the velocity which, when substituted into the normal moveout equation, best fits the observed moveout times to a hyperbola. See the figure below.

[Figure: velocity semblance display with picks, showing the interval velocity trend along with a slow-velocity hyperbola and a fast-velocity hyperbola at offset X.]


Applying the NMO velocities flattens the seismic data to make it appear as if all the traces were recorded at zero offset.

Data Flattened on NMO curves

For a single horizontal reflector:

V_i = V_ave = V_rms = V_nmo


You may receive a file with stacking velocity functions in it from the people who processed the data. DepthTeam Express uses the Dix equation to convert stacking velocities to interval velocities. Before executing the conversion from Vnmo to Vint, you need to be aware of where the assumptions about stacking velocities break down. For a single reflector dipping at angle α, with interval velocity Vi:

V_i = V_ave = V_rms = V_nmo cos α

[Figure: a single reflector dipping at angle α, with interval velocity Vi.]


As shown above, the stacking velocity is not equal to the true rock velocity. To compensate for the cosine component of Vnmo, a process called dip moveout (DMO) is applied to the data. If you have stacking velocities computed after the application of DMO, these will better reflect the real rock velocities. Another potential pitfall is pictured below:

[Figure: a flat event and a 60-degree dipping event intersecting at the same CDP, both in a medium with V = 5000 ft/s.]

T0 for the flat event = T0 for the dipping event
Vnmo for the flat event = 5000 ft/s
Vnmo for the dipping event = (5000 ft/s) / (cos 60°) = 10000 ft/s

This situation cannot be resolved without DMO*; only one velocity can be applied.

* Special thanks to Dave Hale for his award-winning work on DMO.
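The 60-degree pitfall above is easy to verify with the single-dipping-reflector relation Vnmo = Vi / cos(dip):

```python
import math

def vnmo_dipping(v_interval, dip_degrees):
    """Stacking velocity observed over a single dipping reflector:
    Vnmo = Vi / cos(dip). At zero dip, Vnmo equals the interval velocity."""
    return v_interval / math.cos(math.radians(dip_degrees))

flat = vnmo_dipping(5000.0, 0.0)      # 5000 ft/s for the flat event
dipping = vnmo_dipping(5000.0, 60.0)  # 10000 ft/s for the 60-degree event
```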


For multiple dipping reflectors, accurate velocity calculations get much more complicated.

In complex geology, NMO tends to be hyperbolic over short offsets. Over large offsets, however, reflections on CDP gathers may not be hyperbolic. Another seismic velocity problem interpreters need to be aware of is diagrammed on the next page.


NMO/Stacking velocities are often picked in such a way that they give unreasonable interval velocities. Consider the velocity picks below.


The CDP gather looks like the velocity semblance display shown earlier, but this semblance display has more detailed picks. The additional picks have added more detail to the interval velocities. Although the Vnmo velocities look reasonable, the interval velocities calculated via the Dix equation are unreasonable.

V_i = sqrt[ (V²_rms,i · T_i - V²_rms,i-1 · T_i-1) / (T_i - T_i-1) ]

Because the Dix equation is so sensitive to changes in the stacking velocities, we recommend that you smooth your stacking velocities before converting them to interval velocities. Multiples and noisy seismic gathers also contribute to errors in the stacking velocities, which cause additional errors in the Dix interval velocity calculation. One of the beauties of DepthTeam is that you will be able to visually QC your interval velocities before you run the depth conversion. The following table defines the meaning of Seismic Stacking Velocity (Vs) when referenced to the CMP domain:
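The Dix conversion and the recommended smoothing can be sketched as follows; the running-mean smoother here is just one simple choice for illustration, not the specific algorithm DepthTeam uses:

```python
import math

def smooth(values, half_window=1):
    """Running-mean smoother, applied to stacking velocities before Dix
    because the Dix equation amplifies small Vnmo fluctuations."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half_window), min(len(values), i + half_window + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def dix_interval_velocity(times, vrms):
    """Dix inversion over (two-way time, Vrms) pick pairs:
    Vi = sqrt((Vrms_i^2 T_i - Vrms_{i-1}^2 T_{i-1}) / (T_i - T_{i-1})).

    Raises ValueError when noisy picks make the numerator negative,
    which would imply a physically meaningless interval velocity.
    """
    vints = []
    for i in range(1, len(times)):
        num = vrms[i] ** 2 * times[i] - vrms[i - 1] ** 2 * times[i - 1]
        if num < 0:
            raise ValueError("unreasonable picks: negative Dix numerator")
        vints.append(math.sqrt(num / (times[i] - times[i - 1])))
    return vints
```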
Seismic Stacking Velocity

Model                                            Travel Time Equation   Velocity Interpretation
Single flat layer                                Hyperbolic             Vs = VI (formation or interval velocity)
Single dipping layer                             Hyperbolic             Vs = VI / cos(dip)
Flat multi-layered earth                         Hyperbolic             Vs = VRMS (root mean square velocity)
Uniformly dipping layered earth                  Hyperbolic             Vs = VRMS / cos(dip)
Arbitrarily dipping layered earth                Hyperbolic             Vs = f(Vk, dip k, dk): a ray tracing problem
Layered earth with arbitrary curved interfaces   Non-hyperbolic         Vs = ? Seismic velocity is no longer a useful concept.


Unfortunately, non-hyperbolic moveout is the case for many geologic settings.


Non-Hyperbolic Moveout

[Figure: "What's going on down there?" A velocity anomaly bends the true seismic ray path (found by ray tracing), while NMO assumes a straight ray path with t² = t0² + (x/v)².]


A major deficiency of the NMO equation is that it assumes straight ray paths with no lateral velocity variation. This is one among many of the reasons for prestack depth migration. You should also be aware that the amount of NMO correction that moves Tx to T0 decreases with increasing time, which magnifies the NMO velocity error at later times. A poor signal-to-noise ratio also increases the seismic velocity error.
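The time-dependent error magnification mentioned above can be demonstrated numerically: the NMO correction Tx - T0 shrinks as T0 grows, so the same picking error maps to a progressively larger velocity error at later times. The offset and velocity below are arbitrary example values:

```python
import math

def nmo_correction(t0, offset, v):
    """NMO correction Tx - T0 (seconds) for a given zero-offset time t0 (s),
    offset (ft), and velocity (ft/s). The correction shrinks as t0 grows."""
    return math.sqrt(t0 ** 2 + (offset / v) ** 2) - t0

# Same offset and velocity, increasing zero-offset time:
shallow = nmo_correction(1.0, 3000.0, 8000.0)  # large, easy-to-measure moveout
deep = nmo_correction(3.0, 3000.0, 8000.0)     # much smaller moveout
```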

Velocity Accuracy

Uses of Seismic Velocity (Vs)

Use                                                    Required Accuracy   Velocity Type
Signal enhancement: NMO and stack                      2 - 10%             Vs
Structure: migration, depth conversion                 1 - 5%              VRMS, VAVG, VI
Stratigraphy: stratigraphic correlation or
  lithologic variation (sand/shale ratio)              1 - 2%              VI
Physical properties: porosity, density,
  fluid content (water/oil/gas)                        1 - 2%              VI


Typical Depth Error

Method                                                Typical Depth Error
Vertical depth conversion / Dix velocities            -5 to 10% (Dix velocities typically too fast)
Map migration / CMP coherency inversion               +/- 0 to 3%
Geostatistical pseudo-interval velocity integration   +/- 0 to 1%

If 5 to 10 percent depth error is acceptable, you may be able to reach this goal with a DepthTeam Express technique, which is significantly faster than the more accurate methods found in DepthTeam Explorer and DepthTeam Extreme. For example, say your prospect is an anticline at an approximate depth of 3000 meters, with an estimated structural relief of 500 meters. If the objective of your depth conversion project is to determine that the structure is present in depth, a DepthTeam Express workflow could achieve this objective: the depth relief of the prospect is 17 percent (500 / 3000), so an accuracy of +/- 5 to 10% is sufficient. However, if the estimated structural relief is only 30 meters, the depth relief of the prospect would be one percent (30 / 3000), requiring a more accurate solution than the +/- 5 to 10% accuracy that DepthTeam Express can provide. Large, thick targets can typically tolerate greater depth uncertainty than small, thin ones. Allowable depth uncertainty, however, is always tied to drilling economics, so it is helpful to have some idea of the minimum allowable field size. The closer you are to this economic limit, the greater your need for an accurate depth conversion solution.
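The relief-versus-accuracy reasoning above reduces to a simple comparison; this decision rule is a simplification for illustration, not a DepthTeam feature:

```python
def relief_resolvable(relief_m, depth_m, worst_error_fraction=0.10):
    """Can a depth-conversion method with the given worst-case fractional
    depth error confirm a structure of this relief? Simplified rule: the
    relief-to-depth ratio must exceed the worst-case error fraction."""
    return relief_m / depth_m > worst_error_fraction

big_structure = relief_resolvable(500.0, 3000.0)   # 0.167 > 0.10
small_structure = relief_resolvable(30.0, 3000.0)  # 0.01 < 0.10
```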


Velocity Frequency

Different measurements of velocity occupy different frequency bands.

Measurement                  Sample Rate                      Frequency Band
Sonic log                    ~1 foot (~0.0001 second)         0 - 5000 Hz
Check shot data              ~250 feet (~0.025 second)        0 - 20 Hz
NMO velocity                 ~0.25 second                     0 - 2 Hz
Seismic R.C. calculations    seismic data at 10 to 50 Hz      10 - 50 Hz


The more velocity frequency you have going into your velocity model the better the velocity resolution.

[Figure: a velocity model combining frequency bands of 0 - 2 Hz, 2 - 10 Hz, and 10 - 50 Hz.]


Migration Uncertainties
The two main goals of migration are to return the reflectors to their true positions and to collapse diffractions. In severe cases of lateral velocity variation, you may need to run a ray-tracing simulation to predict the magnitude of migration uncertainty. The following examples display how depth conversions often laterally miss the top of structure. This type of depth conversion problem is as old as prehistory, when the first spear fisherman tried to catch a fish.


For the fisherman, the bending of light waves at the air/water interface causes the fish to appear offset from where it really is. Similarly, for the dry hole driller, the bending of the seismic energy at the fast/slow layer interface causes the image of the geologic structure to appear offset from the actual structure. Many interpreters mistakenly believe that seismic Migration corrects this problem. In fact, seismic Time Migration does not account for ray bending at layer interfaces! Only a Depth Migration can properly account for this phenomenon.


Two Main Uncertainties of Depth Conversion
Converting from time to depth consists of two major parts: velocity estimation and depth conversion. DepthTeam is not a simple solution or workflow. Rather, DepthTeam is a collection of tools configured to provide a scalable series of depth conversion workflows. There are six main velocity estimation techniques in the DepthTeam solution:

1. Time/Depth Function Interpolation (DTExpress)
2. Well-Based Pseudo Velocity Estimation (DTExpress)
3. Seismically Derived Interval Velocity Estimation, Dix Inversion (DTExpress)
4. CMP Coherency Inversion (DTExplorer)
5. GeoStatistical Velocity Integration (DTExplorer)
6. Migration Velocity Analysis with Pre-Stack Depth Migration (DTExtreme)

One of these techniques must be applied to your data before you can build your DecisionSpace structural framework.
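Technique 3 relies on Dix inversion, which converts a pair of RMS (stacking) velocities into the interval velocity of the layer between two reflectors. A minimal sketch of the standard Dix equation (the velocity and time values are invented for illustration):

```python
import math

def dix_interval_velocity(v_rms1, t1, v_rms2, t2):
    """Dix equation: interval velocity between two-way times t1 and t2 (t2 > t1),
    given the RMS velocities down to each reflector."""
    num = v_rms2 ** 2 * t2 - v_rms1 ** 2 * t1
    return math.sqrt(num / (t2 - t1))

# RMS velocity grows from 2000 m/s at 1.0 s to 2200 m/s at 1.5 s:
v_int = dix_interval_velocity(2000.0, 1.0, 2200.0, 1.5)
print(round(v_int))   # ~2553 m/s for the layer between the two reflectors
```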

Ray Tracing Background


Terminology
The following picture visually describes some of the zero-offset ray tracing terminology.

[Figure: zero-offset ray tracing terminology. Image rays travel down from the seismic datum or surface; normal rays travel up from the reflector. Poststack time migration swings the stack data along a simple operator to the migrated position. Labels: reflector in depth; reflector on stacked time section; velocities V1 and V2.]

The Image Rays are shot down normal to the surface. The Normal Rays are shot up from the reflector of interest at depth. Both types of ray tracing honor the physics of the earth model; that is to say, the rays will bend when a velocity contrast is encountered. If you want to do some research, the basic equation governing this process is called Snell's law. (When a density contrast is encountered, the amplitude will change.)

Forward Ray Tracing
If you want to ray trace from surfaces picked from a time migration, you need to use Image Ray tracing instead of Normal Ray tracing. The following diagram further explores the terminology.

Advanced Map Migration Terminology

[Figure: advanced map migration terminology, for the case V2 < V1. Image rays travel down from the seismic datum or surface; normal rays travel up from the true reflector in depth. The poststack time migration operator ignores the thin lens term, which handles the effects of lateral velocity changes and structural dip, so the time migration reflector is incorrectly positioned relative to the true reflector in depth and the reflector on the stacked time section. Labels: velocities V1, V2, and V3.]

The diagram below is for the case where V2 is greater than V1.

[Figure: the same terminology for V2 > V1. Labels: seismic datum or surface; image ray; normal ray; true reflector point in depth; incorrectly positioned time migration reflector point; reflector point on stacked time section; velocities V1, V2, and V3.]

Stack vs. Time Migration vs. Depth Migration

The pre-processing workflow of DecisionSpace must use DepthTeam to do the time-to-depth conversion. DepthTeam offers a scalable solution to fit the degree of geologic complexity, and comes in three levels: DepthTeam Express, DepthTeam Explorer, and DepthTeam Extreme. DepthTeam Express handles the simpler time-to-depth conversions, whereas Explorer provides a more rigorous solution by means of tools such as ray tracing. DepthTeam Extreme is for the most difficult imaging and mapping problems, such as around salt and sub-salt; it is much more time consuming because it works through a full 3D prestack depth migration. The decision tree and geologic model below should serve as a guide to the most appropriate depth conversion method for a particular geologic setting.

Geologic Setting / Migration Method / Depth Conversion Method / Appropriate Workflow:

1. No dip, no lateral velocity variations. No migration; conversion of time axis to depth axis along vertical rays. Appropriate workflow: vertical depth conversion (TDQ).
2. Mild dipping events, mild lateral velocity variations. Time migration; conversion of time axis to depth axis along vertical rays. Appropriate workflow: vertical depth conversion (DepthTeam Express).
3. Dipping events, moderate lateral velocity variations. Time migration; conversion of time axis to depth axis along image rays. Appropriate workflow: map migration (DepthTeam Explorer).
4. Severely dipping events, strong lateral velocity variations. Depth migration; conversion of time axis to depth axis done during the depth migration. Appropriate workflow: prestack depth migration (DepthTeam Extreme).
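This guidance amounts to a small decision rule. A hypothetical sketch (the function name, input vocabulary, and exact mapping are illustrative only):

```python
def choose_workflow(dip, lateral_velocity_variation):
    """Map a qualitative geologic setting to the depth-conversion workflow
    suggested above. Inputs: 'none', 'mild', 'moderate', 'severe'/'strong'."""
    if dip == "severe" or lateral_velocity_variation == "strong":
        return "Prestack depth migration (DepthTeam Extreme)"
    if dip == "moderate" or lateral_velocity_variation == "moderate":
        return "Image-ray map migration (DepthTeam Explorer)"
    if dip == "mild" or lateral_velocity_variation == "mild":
        return "Vertical depth conversion (DepthTeam Express)"
    return "Vertical depth conversion (TDQ)"

print(choose_workflow("none", "none"))       # TDQ
print(choose_workflow("severe", "strong"))   # DepthTeam Extreme
```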

[Decision tree: TDQ for no dip and no lateral velocity variations; DT Express + TDQ for mild dipping events and mild lateral velocity variations with moderate faulting; DT Explorer for dipping events and moderate lateral velocity variations; DT Extreme for severely dipping events and strong lateral velocity variations.]

Remember, the more input you put into building the velocity model, the more benefit you will get out of it. The velocity model can become an interpretation asset that allows you to address interpretation problems, some of which were previously addressable only with the drill bit.

The method of depth conversion you choose is also a function of the data and the resources you have available. Once you are aware of what you have available, you can make a better informed decision on what type of depth conversion you need.

Data Available
Check shot surveys, time-depth curves, sonic logs, VSP surveys, well tops, synthetic seismograms, seismic horizons, seismic velocities, DMO velocities, migration velocities, analytic velocity functions, time migrated seismic, depth migrated seismic, prestack depth migrated seismic.

Software Available
Which of the following do you have?
Tie geologic and geophysical data tools: Simple (SeisWorks); Advanced (SynTool)
Velocity model building tools: Linear based (TDQ); Structural based (DepthTeam Express); Well top based (DepthTeam Express); Analytic/Gradient based (DepthTeam Express); Seismic velocity based (TDQ, DepthTeam Express); Coherency inversion based (DepthTeam Explorer); Geostatistical based (DepthTeam Explorer); Tomographic based (DepthTeam Extreme)
3D visualization tools: EarthCube/OpenVision
Velocity model calibration and analysis tools: Simple (DepthTeam Express); Advanced (DepthTeam Express & DepthTeam Extreme)
Depth conversion tools: Vertical stretch (TDQ); Map migration (DepthTeam Explorer); Seismic depth migration (DepthTeam Extreme)

Time Available
There is always a trade-off between time and accuracy. If you have more time available, you can normally achieve a more accurate depth conversion.

Geologic Setting
How severe is the dip? How severe is the faulting? Are you in a basin where the velocity is a function of the depth of burial? Are velocities strictly a function of rock type? Are there different velocity gradients associated with various geologic formations?

Graphically, the accuracy of depth conversion follows the DepthTeam Continuum as shown below.

[Figure: the DepthTeam Continuum. Depth conversion accuracy plotted against time, energy, and expertise. From lowest to highest accuracy: vertical stretch with calibrated T/D curves; vertical stretch with pseudo velocities or Dix velocities; map migration with Dix velocities; normal ray and image ray map migration; map migration with coherency inversion velocities; map migration with geostatistical inversion velocities; poststack depth migration; prestack depth migration with migration velocity analysis.]

Seismic interpretation uncertainties


As seismic acquisition, processing, and imaging technologies have improved, the interpretation of seismic images has become much more accurate. In fact, when the seismic data are of high quality, automatic picking algorithms can be used, and the uncertainty of the interpretation can approach nil. In poor data areas, however, the interpretation plays a major role, and the uncertainty depends on many local factors (for example, distance to wells, structural complexity, and quality of marker horizons). The interpretation uncertainty in poor data areas is never negligible and must be addressed when building the structural model.

Appendix D

Basic Geostatistical Analysis

This section comprises an eight-lesson short course that takes you from the purpose behind geostatistics and basic concepts, through data analysis techniques, variogram construction, kriging, and surface mapping.

Topics covered in this chapter:


Introduction to geostatistics short course
Lesson 1: Purpose of Geostatistics
Lesson 2: Basic Concepts
Lesson 3: Geological Principles for Reservoir Modeling
Lesson 4: Data Analysis
Lesson 5: Spatial Data Analysis
Lesson 6: Geostatistical Algorithms
Lesson 7: Structural Modeling
Lesson 8: Seismic Data Integration

Introduction to Geostatistics Short Course


A geostatistics course was developed for, and presented to, Landmark Graphics in Austin, Texas in the summer of 1999. It was intended to be an informal training seminar for employees involved in the development, testing, and documentation of software that implements geostatistics. A scaled-down version of the original course is included here as a reference resource. A full description of geostatistics is far beyond the scope of this manual. Landmark employees can get a quick description of geostatistics at http://longhorn.zycor.lgc.com/geostats/default.html. Non-Landmarkers should consult their favorite geostatistical textbook.

Lesson 1: Purpose of Geostatistics
This first lesson discusses the need for geostatistics and its applications to reservoir management, decision making in the face of uncertainty throughout the life cycle of a reservoir, and portfolio management.

Lesson 2: Basic Concepts
Lesson 2 deals with some of the basic concepts required for understanding geostatistics. Basic statistical tools, probability distributions, Monte Carlo simulation, and stochastic modeling concepts are explained in this lesson.

Lesson 3: Geological Principles for Reservoir Modeling
Lesson 3 discusses different geological structure types and associated modeling concepts, along with illustrative examples.

Lesson 4: Data Analysis
Reservoir data derived from wells and/or seismic are often unreliable and biased, and therefore require pre-processing. Lesson 4 discusses pre-processing issues such as declustering, trends, reconciliation of data, inference, and calibration of soft data.

Lesson 5: Spatial Data Analysis
Geostatistics differs from regular statistics in that it deals with spatially correlated data. The most common tool used for describing spatial correlation (variability) is the variogram. Lesson 5 discusses variograms, their interpretation, and modeling.

Lesson 6: Geostatistical Algorithms
One application of geostatistics is making accurate maps. Lesson 6 discusses using estimation and simulation algorithms for map making.

Lesson 7: Structural Modeling
Lesson 7 discusses various aspects of structural modeling, such as velocity uncertainty, thickness uncertainty, and how to handle faults.

Lesson 8: Seismic Data Integration
This lesson discusses calibration of seismic and well data, inference of cross correlation, and various (multivariate) simulation techniques including cokriging, collocated cokriging, kriging with external drift, and annealing.

Lesson 1: Purpose of Geostatistics


Introduction
From the viewpoint of reservoir characterization, geostatistics can be defined as a collection of tools for quantifying geological information, leading to the construction of 3D numerical geological models for the assessment and prediction of reservoir performance. Geostatistics deals with spatially distributed and spatially correlated phenomena. It allows quantification of spatial correlation and uses this to infer geological quantities from reservoir data at locations where there are no well data (through interpolation and extrapolation). In addition, the main benefits of geostatistics are: (1) modeling of reservoir heterogeneity, (2) integrating different types of data, perhaps of different support and different degrees of reliability, and (3) assessing and quantifying uncertainty in the reservoir model. There is no cookbook recipe for geostatistics. Each geostatistical study requires a certain degree of user interaction, interpretation, customization, and iteration for a robust solution. These lessons offer a minimal guide to understanding the fundamentals of geostatistics and the processes associated with the construction of numerical geological models. This material requires an understanding of basic statistical concepts, basic calculus, and some understanding of linear algebra notation.

Qualitative / Quantitative Reasoning


Geostatistics presents a probabilistic approach to the study of natural phenomena that vary in space. It was developed in the mining industry from a need to cope with earth science data in an intelligent and mathematically robust manner. It has, since its creation, been the preferred method for dealing with large data sets, integration of diverse data types, the need for mathematical rigor and reproducibility, and the need to make decisions in the face of uncertainty. In the oil industry, there is, first of all, a need for reliable estimates of the original hydrocarbon volumes in a reservoir. These in situ volumes are important for: (1) determining the economic viability of the reservoir, (2) allocating equity among multiple owners, (3) comparing the relative economic merits of alternative ventures, and (4)
determining the appropriate size of production facilities [Deutsch, 1999].

A strength of geostatistics, as compared to more traditional interpolation techniques such as inverse-squared distance and triangulation, is the quantitative use of spatial correlation / variability models (e.g. variograms). Basically, the geostatistical interpolation routines account for and reproduce geological interpretation information when estimating quantities at unsampled locations, rather than blindly interpolating between known data values.

For some there is significant confusion regarding the use (and abuse) of geostatistics. Geostatistics consists of a set of mathematical tools, comprising data analysis components and interpolation / extrapolation routines. Like all problems requiring a solution, the solution will not arise by blindly throwing tools at it. The tools must be used intelligently to extract a solution. A geostatistical study consists of a series of subjective (and interpretative) decisions. Many geoscientists are confused by the geostatistical solution because they do not understand how to cope with it.

Geostatistics in recent years has come away from the idea of a single deterministic answer to earth science problems. Rather, it focuses on the uncertainty associated with that answer. Geostatistics will not tell you to drill a well two feet to your left, but rather to drill between 0 and 10 feet to your left, with the best chance of hitting oil at 2 feet to your left. Geostatistics will not yield the precise volume of oil in a reservoir. It will estimate this volume and the uncertainty associated with this estimate. Geoscientists and engineers must still make the educated decision about potential well locations. However, they are now armed with tools allowing them to quantify the uncertainty and risks associated with the decisions they have to make.
The shift in philosophy is accepting the fact that there is tremendous uncertainty in most reservoir-related quantities, that we will never know the true answer, and that any deterministic suggestion is very likely to be wrong (no matter how much physics is behind it). We have to learn to make decisions in the face of uncertainty and start thinking in terms of the probability of an outcome. Traditionally, decisions have often been made by visual estimate, e.g. by looking at a contour map and deciding which is the best part of the reservoir, or by eye-balling correlation across two wells. Difficulties arise when moving from 1D or 2D to 3D data sets. Qualitative decision making rapidly becomes impractical and unreliable. A quantification framework becomes necessary to deal with various (possibly correlated) 3D data types, sampled over different volumes, with different levels of precision and reliability, relating to different earth attributes. Geostatistics is a toolbox to do just this.

Like all toolboxes, geostatistics has appropriate uses and limitations. Geostatistics is very useful throughout the life of a reservoir, but it has the most impact early on, when there exists tremendous uncertainty in geological quantities. As a reservoir matures and additional data / information become available (e.g. well logs, 3D seismic, production data), the uncertainty shifts focus and likely decreases. There is usually less uncertainty as more data come in: additional data impose more constraints on the model, yielding fewer degrees of freedom, thus less uncertainty. This is known as the information effect. As time progresses, more information becomes available, further constraining the model and reducing uncertainty. Taken to the extreme, the reservoir will eventually be completely known and there will be no uncertainty; geostatistics will have reached the limit of its usefulness, as shown in Figure 1.1.

Figure 1.1: Effect of acquisition of new data on the uncertainty remaining on the geological model.

Reservoir Planning
The motivation behind reservoir planning is maximizing the net present value of the reservoir by getting as much oil out as possible in the least amount of time. It consists of determining the best strategy for development of a reservoir or a field. In the early years of oil field exploration and reservoir production, a wide variety of different approaches were used to ascertain the best well locations and best exploitation strategies. Some were effective and others were not, depending on the type of reservoir at hand and the complexity of the decisions to be taken, but mostly it was a hit-or-miss operation. Exploration and reservoir planning decisions were, at best, based on historical data from mature fields with similar properties and some geological information from outcrops. Around the time frame of World War II, sonic tools and electric logs for geological exploration were developed. The data derived from these tools required different data management practices and paved the way for more rigorous workflows for reservoir exploitation, leading to reservoir planning.

There are three distinct phases to reservoir exploitation: (1) the reservoir exploration phase, (2) the exploitation phase, and (3) the enhanced recovery / abandonment phase. The use of geostatistics spans them all.

At the beginning of the exploration phase, an oil reservoir has been discovered but its extent is not known. The aim of the exploration phase is to gain as much insight as possible about the size and boundaries of the reservoir. Past delineation practice was to drill step-out wells in regular patterns until the well data showed that the limit of the reservoir had been reached. By enabling more accurate estimates of rock / petrophysical properties between wells, the use of geostatistics has helped reservoir delineation by allowing us to increase the distance between step-out wells.

The exploitation phase is the strategic placement of additional wells to maximize the net present value. In the past, most wells were located using a single interpretation of the data. The use of geostatistics can help optimize well placement through sensitivity studies: considering the effects of infill wells on other existing wells, maximizing oil-bearing lithofacies connectivity, determining the optimal number of infill wells to drill, and so on. By enabling multiple realizations of the reservoir and its heterogeneity, geostatistics provides tools to evaluate the probable impact of various drilling options and recovery schemes, thus quantifying the risks associated with different development scenarios.

In the enhanced recovery / abandonment phase, the reservoir is evaluated for the effectiveness of an enhanced recovery scheme such as steam injection, chemical injection, and so on. If no enhanced recovery scheme proves fruitful, then the reservoir is abandoned.

Elimination of Surprises
Before we can discuss how geostatistics can be used to eliminate unwanted surprises, these surprises must first be identified. Historically, reservoir modeling was performed with a deterministic mindset. In other words, a single model was created based on a unique interpretation of earth science data, yielding a single deterministic response / solution to geological problems. In reality, due to the large uncertainty existing beyond the data, and even within the data itself, one cannot express full confidence in a single deterministic response. Sources of uncertainty within the data arise not only from measurement or acquisition errors but also from interpretation errors. This applies to both direct (e.g. cores) and indirect measurements (e.g. logs, seismic). In addition, the resolution of some measurements (e.g. seismic) may be limited in that it either hides, blurs, or confuses important features. Also, well
measurements represent a very limited sample of the reservoir. Wells are separated by very large distances compared to the volume they sample, leading to tremendous uncertainty when interpolating geological quantities between wells. Above all, the biggest source of uncertainty may yet come from the high-level choice of the structural model and depositional environment. The uncertainty due to data, and the subsequent surprise realized by putting too much faith in the data, can be limited through rigorous data analysis exercises and proper calibration. Some reservoir modeling packages allow for the addition of trends and other geologic information, but only geostatistical methods enable modeling of heterogeneity and the inclusion of uncertainty in both the data and the choice of model. Geostatistics provides stochastic techniques to generate alternative models, each being a plausible (equiprobable) solution. The value in modeling this way is the ability to quantify uncertainty in the model and to generate models that have a realistic level of heterogeneity. Heterogeneity is a component of uncertainty, defined as the magnitude of diversity in the reservoir. If the heterogeneity is poorly modeled, the resulting reservoir model may be too smooth and provide an inaccurate assessment of the uncertainty in the model and predictions of the reservoir behavior. The surprises include, but are not limited to, errors in oil-in-place estimates, flow characteristics, and water breakthrough times. These surprises can cause catastrophic economic results. Using geostatistics prevents unwanted surprises from creeping into the modeling exercise in several ways. Firstly, a geostatistical study forces practitioners to perform thorough data analysis and quality assurance steps before any modeling decisions take place. Furthermore, rigorous model validation is also recommended.
But most importantly, surprises in engineering / business decisions are reduced through the probabilistic approach taken by geostatistics: a full probability distribution of outcomes is available, rather than only deterministic and subjective best, worst, and most likely scenarios.

The Need For Decision Making


A geostatistical study, or any reservoir modeling exercise for that matter, should not happen unless it is dictated by a key business decision aimed at increasing the net present value of a reservoir, an asset, or even a whole portfolio. Though major investment / divestment decisions must be made in the presence of significant uncertainty, geostatistics aims at providing a framework for working with uncertainty and managing it.

Quantification of Uncertainty and Risk Qualified Decision Making


Uncertainty is the lack of assuredness about the truth of a statement or about the exact magnitude of a measurement. Uncertainty is the central concept in the decision making that follows geostatistical studies. Uncertainty must be carried into decision making because estimates never agree exactly with reality. For instance, an estimate with low uncertainty leads to easier decision making than an estimate with high uncertainty. Figure 1.2 shows three histograms, each with a different degree of uncertainty. The left histogram has a mean of 0 with high uncertainty about the mean, the middle histogram shows decreased uncertainty about the mean, and the right histogram indicates little uncertainty about the mean.

Figure 1.2 Three histograms of decreasing uncertainty, the greatest uncertainty being on the left, and the least uncertainty being on the right.
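The idea behind Figure 1.2 can be reproduced numerically: three samples with the same mean but shrinking spread. The spread values below are purely illustrative:

```python
import random
import statistics

def sample_std(sigma, n=10000, seed=42):
    """Draw n values with mean 0 and spread sigma; return the sample std."""
    rng = random.Random(seed)
    return statistics.pstdev(rng.gauss(0.0, sigma) for _ in range(n))

# Three histograms with the same mean (0) and decreasing uncertainty:
for sigma in (3.0, 1.0, 0.3):
    print(f"spread {sigma}: estimated std {sample_std(sigma):.2f}")
```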

Risk-qualified decision making requires (1) a quantification of uncertainty, and (2) quantification of the loss associated with the decision. By way of example, Figure 1.3 illustrates the concept of risk qualified decision making. A distribution of uncertainty is generated, and using a loss function, the risk is assessed and an optimal estimate (the estimate that incurs the least loss) is determined. Different loss functions can be used for pessimistic and optimistic estimates.

Figure 1.3: An illustration of the concept of risk-qualified decision making. Note that the loss function is scenario specific, and that the histogram of possible costs is in addition to the costs incurred if the estimate were correct.
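The workflow of Figure 1.3 (generate a distribution of outcomes, apply a loss function, pick the estimate that incurs the least expected loss) can be illustrated as follows. The reserve realizations and the asymmetric loss costs are invented for illustration:

```python
def expected_loss(estimate, outcomes, under_cost=3.0, over_cost=1.0):
    """Average loss of an estimate over equiprobable outcomes.
    Here underestimating costs more per unit than overestimating."""
    total = 0.0
    for true_value in outcomes:
        err = estimate - true_value
        total += over_cost * err if err >= 0 else under_cost * (-err)
    return total / len(outcomes)

def optimal_estimate(outcomes, **kw):
    """Use the outcomes themselves as candidate estimates; pick the least-loss one."""
    return min(outcomes, key=lambda e: expected_loss(e, outcomes, **kw))

reserves = [10, 12, 15, 20, 30]   # equiprobable realizations (e.g. MMbbl)
best = optimal_estimate(reserves)
print(best)   # 20 -- penalizing underestimation 3x pushes the optimal estimate high
```

Swapping in a different loss function (pessimistic or optimistic) moves the optimal estimate, which is exactly the point made in the text.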

Lesson 2: Basic Concepts


This lesson covers some fundamental geostatistical and statistical concepts.

Definitions
Statistics
Statistics is the science of collecting, processing, analyzing, and interpreting numerical data. Statistics dilutes numerical information to provide (sometimes) clearer insights into a population.

Geostatistics
Geostatistics originally started as the study of phenomena that vary in space, but the science has evolved into a suite of mathematical tools for application to many other earth science problems. The strength of geostatistics is its stochastic approach to numerical modeling. While not all of the tools in the geostatistical toolbox are stochastic in nature, most of them (or at least the workflows they describe) are, and it is in this arena that geostatistics has enjoyed the most success. In some ways geostatistics is the antithesis of traditional statistics; geostatistics takes sample data and infers a population rather than diluting sample information into more digestible forms such as the mean and variance. Geostatistics, unlike statistics, focuses on natural phenomena which are correlated in space. Typical features of importance are spatial continuity (or variability), spatial anisotropy, and trends.

Variable
A variable is a symbol which can take any one of a prescribed set of values. A variable that can assume any real number value is called a continuous variable (often denoted z in geostatistical jargon); a variable that can only assume integer values is called a discrete or categorical variable. Porosity and permeability are continuous variables. Lithofacies classifications are categorical and commonly denoted with the indicator variable i, where i is 1 if the category is present and 0 if not (more on this later). When a variable is distributed
in space it is called a regionalized variable, often denoted Z in geostatistics. Permeability and porosity are two examples of regionalized variables. The value of the attribute is, as mentioned above, a particular realization of the regionalized variable and is denoted by z. The regionalized variable is simply a function f(x) which takes on a possibly different value (z, or i) at any location in space.

One would think it possible to find a single mathematical function that characterizes a reservoir. More often than not, the variable varies so irregularly in space as to preclude any direct mathematical study of it [Journel, 1978]. Because earth science phenomena involve complex processes, and because the regionalized variable is so erratic, the regionalized variable is considered a random variable. A random variable is a variable which takes a certain number of numerical values according to a certain probability distribution. The set of permissible values that the random variable can take is called the random function. Instead of attempting to model the regionalized variable analytically (mathematically), the regionalized variable is modeled as a random function.

For example, the result of casting an unbiased die can be considered a random variable which can take one of six equiprobable values. The set of values that the die can take is called the random function. If one result is 5, then this value is called a particular realization of the random variable "result of casting the die". Similarly, consider the permeability z(u) = 4 md, where u denotes a location in space. This measurement of permeability can be considered a particular realization of a certain random variable. Thus the set of permeabilities z(u) for all points u inside the reservoir (the regionalized variable z(u)) can be considered a particular realization of the random function Z(u), for all locations u in the reservoir.
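The die example reads naturally as a Monte Carlo experiment: draw many realizations of the random variable and verify that the six outcomes are (approximately) equiprobable. Illustrative code only:

```python
import random
from collections import Counter

def cast_die(n, seed=7):
    """Draw n realizations of the random variable 'result of casting a die'."""
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(n)]

counts = Counter(cast_die(60000))
for face in sorted(counts):
    print(face, counts[face] / 60000)   # each frequency is close to 1/6
```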
Minimum
The smallest data value in the data set.

Maximum
The largest data value in the data set.

Mean or Expected Value
The mean, or expected value, is the weighted average of a random variable (or sample data), where the weights represent the probability of occurrence of each sample. If the sampling technique is unbiased, that is, it samples without preference, the data all have an equiprobable chance of being selected, the weights are all equal, and the mean is obtained by adding all of the data and dividing by the number of observations. The expected value is denoted by E[X], or more simply m, and is defined by:

    E[X] = m = (1/n) * sum_{i=1..n} x_i    (2.1)

where x_i are the data values, n is the number of samples, E[X] is the expected value, and m is the mean.

Median
The midpoint of the ranked (i.e. sorted from smallest to largest) data. If there were 25 data, the median would be the 13th value. It also represents the 50th percentile on a cumulative histogram.

Mode
The mode is the most commonly occurring data value in the data set.

Variance
The variance is a measure of spread. It can be thought of as the average squared distance of the data from the mean. It can be found using the equation below:

    s^2 = (1/n) * sum_{i=1..n} (x_i - m)^2    (2.4)

Standard Deviation The standard deviation is the square root of the variance. It is sometimes the preferred measure of spread because it has the same units as the mean whereas the variance has squared units.

sigma = sqrt(sigma^2)    (2.5)

Coefficient of Skewness The coefficient of skewness is the average cubed difference between the data values and the mean, scaled by the cube of the standard deviation. If a distribution has many small values and a long tail of high values then the skewness is positive, and the distribution is said to be positively skewed. Conversely, if the distribution has a long tail of small values and many large values then it is negatively skewed. If the skewness is zero then the distribution is symmetric. For most purposes we will only be concerned with the sign of the coefficient, and not its value.

CS = (1/n) * sum_{i=1..n} (x_i - m)^3 / sigma^3    (2.6)

Coefficient of Variation The coefficient of variation is the ratio of the standard deviation to the mean. While the standard deviation and the variance are measures of absolute variation from the mean, the coefficient of variation is a relative measure of variation and gives the standard deviation as a fraction (or percentage) of the mean. It is much more frequently used than the coefficient of skewness. A coefficient of variation (CV) greater than 1 often indicates the presence of some erratic high values (outliers).

CV = sigma / m    (2.7)
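The spread and shape measures above (equations 2.5 through 2.7) can be sketched in a few lines; the permeability values are hypothetical, chosen so that one erratic high value drives a positive skew and a CV above 1:

```python
import math

def moments(data):
    """Mean, variance, and standard deviation (equations 2.1, 2.4, 2.5)."""
    n = len(data)
    m = sum(data) / n
    var = sum((x - m) ** 2 for x in data) / n
    return m, var, math.sqrt(var)

def skewness(data):
    """Equation 2.6 (as reconstructed): average cubed deviation scaled by sigma^3."""
    m, var, sd = moments(data)
    return sum((x - m) ** 3 for x in data) / len(data) / sd ** 3

def coeff_variation(data):
    """Equation 2.7: standard deviation relative to the mean."""
    m, _, sd = moments(data)
    return sd / m

perm = [5.0, 8.0, 10.0, 12.0, 120.0]  # hypothetical permeabilities (md)
print(skewness(perm) > 0)         # long tail of high values -> True
print(coeff_variation(perm) > 1)  # CV > 1 hints at erratic highs -> True
```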

Quantiles (Quartiles, Deciles, Percentiles...) Quartiles, deciles, and percentiles are used to break the data into quarters, tenths, and hundredths, respectively. Quantiles generalize this to any other user-defined fraction. It is common to compare quantiles from two different distributions. Inter-Quartile Range The difference between the 75th and 25th percentiles. Quantile-Quantile Plots Quantile-quantile plots (Q-Q plots) are useful for comparing two distributions. A Q-Q plot takes the quantile values from one distribution and crossplots them against the corresponding quantiles of another. The result is a straight line along the 45° line if the two distributions have the same shape. A change in slope indicates a difference in variance, and a parallel shift in any direction indicates a difference in the mean. Some uses of the Q-Q plot include core-to-log relations, comparing the results from different drilling campaigns, comparing the effects of declustering (to be discussed later), and comparing distributions by lithofacies.

Fig. 2.1 A quantile-quantile plot.
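A hedged sketch of the Q-Q comparison described above: pair the p-quantiles of two distributions; identical shapes fall on the 45° line, and a constant offset between paired quantiles signals a shift in the mean. The data values are hypothetical:

```python
def quantile(data, p):
    """Empirical p-quantile by linear interpolation on the sorted data."""
    s = sorted(data)
    pos = p * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def qq_pairs(a, b, ps=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Paired quantiles of two distributions: the points of a Q-Q plot."""
    return [(quantile(a, p), quantile(b, p)) for p in ps]

core = [0.10, 0.12, 0.15, 0.18, 0.22]   # hypothetical core porosities
log_ = [x + 0.02 for x in core]         # same shape, mean shifted by 0.02
pairs = qq_pairs(core, log_)
# A parallel shift off the 45-degree line indicates a difference in the mean:
print(all(abs((y - x) - 0.02) < 1e-9 for x, y in pairs))  # True
```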

Covariance The covariance is the expected value E[(X - m_X)(Y - m_Y)] and is a measure of the linear relationship between the random variables X and Y; the two variables are called covariates. The thing to notice is that the covariance reduces to the variance when X and Y are the same variable. The covariance function can also be written as:

C(h) = E[Z(x) * Z(x+h)] - m^2    (2.8)

where the first covariate is Z(x), a data value at one location, and the second is Z(x+h), a data value at a location separated from the first by the vector h; the variable m is the drift component (mean). The distinction to be noted here is that the covariates can be the same variable at different places, as indicated by Z(x) (read: the random variable at location x), where x indicates the location-specific nature of the covariates. Thus equation 2.8 can be read as the covariance between two covariates of the same attribute separated by a distance h.

Crossplot (or Scatterplot) The crossplot is a bivariate display of two covariates, or of the same variable at locations separated by a distance. The values from one distribution are used as the X coordinates and the values from the other as the Y coordinates to plot the points of the crossplot.

Fig. 2.2a A crossplot uses a value from one distribution as the x coordinate and the value of another as the y coordinate to plot a point.

Correlation Coefficient Correlation is the characteristic of having linear interdependence between random variables or any two data sets. In general two sets of data can be positively correlated, negatively correlated, or not correlated. A useful tool for determining how two data sets are correlated is the crossplot. The following diagram shows data sets that are positively correlated, not correlated, and negatively correlated:

Fig. 2.2b The crossplot on the left illustrates positive correlation, the middle one shows no correlation, and the right one shows negative correlation. One measure of the extent of correlation is the correlation coefficient. The correlation coefficient can be calculated using the formula below:

rho_XY = Cov(X, Y) / (sigma_X * sigma_Y)    (2.9)

where Cov is the covariance and sigma is the standard deviation. A correlation coefficient of +1 means that the two data sets are perfectly positively correlated. Negative correlation coefficients indicate negative correlation and positive coefficients indicate positive correlation. A correlation coefficient of 0 indicates no linear correlation.
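Equation 2.9 can be sketched directly from the covariance definition above; note that Cov(X, X) is simply the variance, which the sketch exploits. The porosity/permeability pairs are hypothetical and deliberately perfectly linear:

```python
import math

def covariance(x, y):
    """Cov(X, Y) = E[(X - m_X)(Y - m_Y)] over paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

def correlation(x, y):
    """Equation 2.9: covariance scaled by the two standard deviations."""
    sx = math.sqrt(covariance(x, x))  # Cov(X, X) is the variance of X
    sy = math.sqrt(covariance(y, y))
    return covariance(x, y) / (sx * sy)

phi = [0.10, 0.15, 0.20, 0.25]   # hypothetical porosities
k = [1.0, 2.0, 3.0, 4.0]         # perfectly linear with phi
print(round(correlation(phi, k), 6))  # 1.0 for a perfect positive linear relation
```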

Uncertainty All numerical models would be found in error if we were to excavate the inter-well volume and take exhaustive measurements; there is uncertainty. Uncertainty is defined as the lack of assurance about the truth of a statement or about the exact magnitude of a measurement or number. It is impossible to establish the unique true distribution of petrophysical properties between widely spaced wells. The uncertainty regarding the distributions we model is due to our lack of knowledge, or ignorance, about the reservoir. Geostatistical techniques allow alternative realizations (possible models) to be generated, providing a method for the quantification of uncertainty [Deutsch, 1999]. Uncertainty is the central concept behind the decision making that usually follows any geostatistical study [Olea, 1991]. It is important to note that uncertainty is not an inherent feature of the reservoir; it is a product of our ignorance.

Histograms
A histogram is a bar chart comparing a variable to its frequency of occurrence. It is the most common way of graphically presenting a frequency distribution. The variable is usually organized into class intervals called bins. An example of a histogram is shown below.

Fig. 2.3 This figure diagrammatically illustrates the essential components of a histogram.

Probability Distributions
A probability distribution summarizes the probabilities that a random variable will take particular values. Probability can be defined as the relative frequency of an event in the long run: if we repeat an experiment many times, the relative frequencies of the outcomes approach the probabilities in the random variable's distribution. The distribution can equivalently be summarized by the cumulative distribution function (cdf) of the random variable Z, from which a cumulative frequency plot is drawn. Some important features of the cdf include: (1) its value is always between 0 and 1, (2) it is a non-decreasing function, and (3) the values are not classified into bins. There are many different probability distributions, each having different properties. The Gaussian distribution, or normal distribution, has qualities (among others, it integrates easily) that make it especially practical for use in geostatistics.

Fig. 2.5 A cumulative probability plot and a cumulative frequency plot are the same thing.

Monte Carlo Simulation


Monte Carlo simulation is any procedure that uses random numbers to obtain a probabilistic approximation to a solution. Monte Carlo simulation proceeds in three steps:
1. A uniformly distributed random number between 0 and 1 is drawn.
2. The random number is taken to represent a cumulative probability.
3. The corresponding quantile is identified from the cdf.
Figure 2.9 shows how Monte Carlo simulation is performed.

Fig. 2.9 Monte Carlo simulation consists of drawing a uniformly distributed number and reading the corresponding value from the cdf. Monte Carlo simulation is the foundation of all stochastic simulation techniques. Much care should be taken to ensure that the parent cdf is a representative distribution, as any biases will be translated into the results during the transformation.
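The three steps above can be sketched as follows, assuming a tabulated cdf of permeability classes; the class boundaries and values are hypothetical:

```python
import random

# Tabulated cdf: (cumulative probability, permeability value) -- hypothetical.
cdf = [(0.25, 10.0), (0.50, 50.0), (0.75, 120.0), (1.00, 400.0)]

def monte_carlo_draw(rng):
    p = rng.random()          # step 1: uniform random number in [0, 1)
    for prob, value in cdf:   # steps 2-3: read p as a cumulative probability
        if p <= prob:         # and return the matching quantile
            return value
    return cdf[-1][1]

rng = random.Random(42)
draws = [monte_carlo_draw(rng) for _ in range(10000)]
frac = draws.count(10.0) / len(draws)
print(frac)  # in the long run, close to the class probability of 0.25
```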

Bootstrap
The bootstrap is a method of statistical resampling that allows uncertainty in a statistic to be assessed from the data themselves. The procedure is as follows:
1. Draw n values from the original data set, with replacement.
2. Calculate the required statistic, which could be any of the common summary statistics. For example, we could calculate the mean of this first set of n values.
3. Repeat L times to build up a distribution of uncertainty about the statistic of interest. For the example above, we would find the mean of n resampled values L times, yielding a distribution of uncertainty about the mean.

Fig. 2.10 The bootstrap is used to determine the uncertainty in the data itself. This diagram shows how the uncertainty in the mean is found. First randomly draw n values from the data set and calculate the mean. Repeat this many times, and the distribution of the mean quantifies the uncertainty about the mean.
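The three bootstrap steps can be sketched as below; the porosity data, the number of resamples L, and the seed are hypothetical choices for illustration:

```python
import random

def bootstrap_mean(data, L=1000, seed=7):
    """Resample n values with replacement L times; return the L bootstrap means."""
    rng = random.Random(seed)
    n = len(data)
    means = []
    for _ in range(L):
        resample = [rng.choice(data) for _ in range(n)]  # step 1: draw with replacement
        means.append(sum(resample) / n)                  # step 2: compute the statistic
    return means                                         # step 3: distribution of the mean

porosity = [0.08, 0.11, 0.13, 0.16, 0.19, 0.22]  # hypothetical data
dist = bootstrap_mean(porosity)
center = sum(dist) / len(dist)
# The spread of `dist` quantifies the uncertainty about the mean; its center
# sits inside the range of the data.
print(min(porosity) < center < max(porosity))  # True
```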
Geostatistical and Other Key Concepts


Petrophysical Properties There are three principal petrophysical properties discussed in this course: (1) lithofacies type, (2) porosity, and (3) permeability. Hard data are the lithofacies assignments and the porosity and permeability measurements taken from core (and perhaps logs). All other data types, including well logs and seismic data, are called soft data and must be calibrated to the hard data. [Deutsch, 1998] Modeling Scale It is neither possible nor optimal to model the reservoir at the scale of the hard core data. The core data must be scaled to some intermediate resolution (typical geological modeling cell size: 100 ft * 100 ft * 3 ft). Models are built at the intermediate scale and then possibly scaled further to coarser resolutions for flow simulation.

Core Data -> Geological Model -> Flow Model

Uniqueness, Smoothing, and Heterogeneity Conventional mapping algorithms were devised to create smooth maps that reveal large-scale geologic trends. For fluid flow problems, however, the extreme high and low values often have a large impact on the flow response, and smoothing dilutes them (e.g. time to breakthrough would be systematically under-estimated during a water flood). These algorithms remove the inherent variability, the heterogeneity, of the reservoir. Furthermore, they provide only one unique representation of the reservoir.

Analogue Data There are rarely enough data to provide reliable statistics, especially horizontal measures of continuity. For this reason analogue data from outcrops and from similar, more densely drilled reservoirs are used to help infer spatial statistics that are impossible to calculate from the available subsurface reservoir data. Dynamic Reservoir Changes Geostatistical models provide static descriptions of the petrophysical properties. Time-dependent processes such as changes in pressure and fluid saturation are best modeled with flow simulators.

Data Types The following list represents the most common types of data used in the modeling of a reservoir:
- Core data (porosity and permeability by lithofacies)
- Well log data (stratigraphic surfaces, faults, measurements of petrophysical properties)
- Seismic-derived structural data (surface grids / faults, velocities)
- Well test and production data (interpreted permeability, thickness, channel widths, connected flow paths, barriers)
- Sequence stratigraphic interpretations / layering (defining the continuity and the trends within each layer of the reservoir)
- Spatial patterns from regional geological interpretation
- Analogue data from outcrops or densely drilled similar fields (size distributions and measures of lateral continuity)
- Knowledge of geological processes / principles established through widely accepted theories (forward geologic modeling) [Deutsch, 1998]

Numerical Facies Modeling


At any instant in geological time there is a single true distribution of petrophysical properties in a single reservoir. This true distribution is the result of a complex succession of physical, chemical, and biological processes. Although some of these depositional and diagenetic processes are understood quite well, we do not completely understand all of the processes and the associated boundary conditions in sufficient detail to provide the unique true distribution. Numerical geological modeling in a reservoir characterization context means arriving at a gridded picture of the reservoir, each grid cell containing a numerical value. Reservoir characterization proceeds sequentially: modeling large-scale structures first (i.e. time-boundaries, major faults...), followed by the internal architecture due to lithofacies variations, finishing with petrophysical properties such as porosity and permeability. Petrophysical properties are often highly correlated with lithofacies type. Two ways to model lithofacies are object-based modeling and cell-based modeling.

Lesson 3: Geological Principles for Reservoir Modeling


Numerical geological models are built in an iteratively refining fashion: we model the coarsest features first and revise the model by modeling progressively finer features. Modeling starts with a definition of the reservoir type, then significant geological structures, followed by petrophysical properties. This lesson is organized in the same fashion as a model would be constructed, coarse features followed by finer features: it starts with reservoir types, moves on to the relevant structures, and follows with the modeling of petrophysical features.

Reservoir Types
There are two types of reservoirs to be concerned with: siliciclastic reservoirs and carbonate reservoirs. Siliciclastic reservoirs are reservoirs with sandstone as the host rock. Carbonate reservoirs are composed of either skeletal or non-skeletal debris from calcium-carbonate-secreting organisms. Siliciclastic reservoirs make up about 80% of the world's known reservoirs and about 30% of the oil production worldwide. Carbonate reservoirs make up the balance.

Modeling Siliciclastic Reservoirs

Figure 3.1, Seven different types of siliciclastic reservoirs.

Other factors governing the modeling of siliciclastic reservoirs:
- Net to gross ratio. The higher the percentage of reservoir quality sand, the less important it is to model individual objects.
- Diagenetic cements. The pore spaces may be filled in or cemented by later deposits.

Table 3.1 summarizes seven different siliciclastic reservoirs.


Reservoir Type: Braided, high energy fluvial
Characteristic Shapes: Braided stream channels
Examples / Importance: Majority of North Sea, excellent reservoirs. Important.
Modeling Technique: Object based, unless high N/G, then cell based; model remnant shales.

Reservoir Type: Eolian, windblown
Characteristic Shapes: Overlapping dune shapes, directional
Examples / Importance: Some in UK North Sea, US and Canada onshore. Less important.
Modeling Technique: Sometimes classic dune shapes modeled as objects.

Reservoir Type: Meandering fluvial
Characteristic Shapes: Meandering stream channels
Examples / Importance: Australia, Saudi onshore. Important.
Modeling Technique: Object-based, unless high N/G, then cell-based for remnant shales.

Reservoir Type: Estuarine / Bay
Characteristic Shapes: Tidal channels or bars, sometimes more deltaic
Examples / Importance: China. May be very important in future.
Modeling Technique: Objects (tidal channels, sand bars), or cell-based.

Reservoir Type: Classical shoreface
Characteristic Shapes: Facies change horizontally: foreshore (beach), upper shoreface, lower shoreface, offshore
Examples / Importance: East Texas Field, Gulf of Mexico onshore and shallow water. Important.
Modeling Technique: Cell-based techniques that can handle ordering: truncated Gaussian or transition probability.

Reservoir Type: Deltaic
Characteristic Shapes: Less ordered, lower energy, fans
Examples / Importance: Gulf of Mexico onshore. Less important.
Modeling Technique: Cell-based indicator approach, though some clearly defined objects.

Reservoir Type: Deep water
Characteristic Shapes: Turbidites, fans, storm-caused slumps; look like braided channels that evolve into sheets
Examples / Importance: Gulf of Mexico, offshore California. Currently very important.
Modeling Technique: Customized object-based or hybrid approaches.

Modeling Carbonate Reservoirs By definition, carbonate (limestone) rocks are those that have greater than 50% carbonate material. The carbonate material is either derived from organisms that secrete carbonate as skeletal material or as fecal matter, or precipitated out of solution. Limestone is chemically unstable and is easily converted to dolomite when hydrothermal fluids rich in magnesium pass through it. Limestone is also easily metamorphosed into other rock types such as marble. Most carbonate reservoirs can be modeled using cell-based indicator simulation to model the limestone / dolomite conversion. Dolomitization often has a directional trend: fluid flow in rock is almost always directional, and the magnesium required for dolomitization is carried by hydrothermal fluids. The fluid flows through the rock, and magnesium replaces calcium, creating dolomite. The trends can be seen with seismic (dolomitized limestone has different acoustic properties than limestone). Because there are at least two rock types (limestone and dolostone), we must use estimation methods that make use of multiple variables. Trends such as the conversion of limestone to dolostone may also show up in geologic contour maps from the wells. In other cases, reefs may sometimes be modeled as objects, and there may be areal trends associated with these as well (a change in sea level may cause a reef to die out and another to form further in or out). Table 3.2 summarizes carbonate reservoirs.

Reservoir Type: Carbonate
Characteristic Shapes: Elongated reefs, lagoons, and platforms
Examples / Importance: Most of Russia's reservoirs, North Danish Sea. Important now and in the future.
Modeling Technique: Sometimes modeled as objects with areal trends (death and regeneration of reefs), but usually modeled with cells.

Fig. 3.2 Examples of different carbonate structures.

Modeling Principles
The first step in modeling an oil-bearing geological structure is to define the topology. The topology defines the coordinate system, the grid dimensions, the orientation of the axes, and the cell dimensions. Standard practice is to use the Cartesian coordinate system, which defines a location in space by x, y, and z coordinates. In terms of notation, geostatistics uses u to denote location. The grid dimensions are the maximum and minimum coordinates that the grid must cover to post all of the data. A good first approach is to plot the data points on a location map. In order to do this, the minimum and maximum data locations are required so that the extents of the plot can be set. One way to determine these parameters is to plot a histogram and calculate the summary statistics for each of the

coordinate axes. The advantage of this approach is that the minimum, maximum, and mean location for each axis are posted, allowing you to determine the required parameters for the location map and get a feel for the sampling scheme. The distribution of locations can reveal a biased sampling scheme. Consider the following 2-D example:

Figure 3.3 A histogram of the X and Y data.

Notice that the x-axis data seem well distributed while the y-axis data seem a little skewed. This implies that the sampling scheme was somewhat biased toward the northerly end of the map. The corresponding location map is shown below.

Figure 3.4 A location map of a sample data set.

The assessment of the sampling scheme was correct: there is a northerly bias in the sampling scheme. It is also useful to draw a contour map of the data. A contour map helps one gain insight into the nature of the data and can sometimes reveal important trends. The map in Figure 3.5 shows that most of the sampling occurred in areas of high potential; it does not reveal any trends, but it illustrates the value of a contour map.

Figure 3.5 A contour map using the sample data set. The accuracy of the map is not critical; its purpose is simply to illustrate trends. The contour map illustrates that the areas of high potential (red areas) are heavily sampled. This is a biased sampling procedure. The contour map also suggests that we may want to extend the map in the east direction.

It is common practice to use the Cartesian coordinate system and corner-point grids for geological modeling. The corner-point grid system is illustrated in Figure 3.6.

Fig. 3.6 The standard grid system used for geological modeling. Notice that the Z dimension b in Figure 3.6 is not the same as the dimension a in the areal grid, but the XY dimensions of the areal and vertical grids are the same. For the sake of computational efficiency, the stacked areal grids are aligned with the Z axis; but, for flexibility, the Z cell dimension need not be the same as the areal grid dimensions. This technique proves valuable for:
1. Modeling the hydrocarbon-bearing formation as a stack of stratigraphic layers. A model should be built layer by layer, with each layer derived from a homogeneous depositional environment. Although each layer was deposited over a long span of time by human standards, it represents only a brief period of geological time, so for our purposes it can be classified as a homogeneous depositional environment.
2. Volume calculations. The model must conform to the stratigraphic thickness as closely as possible; modeling the formation as a "sugar cube" model leads to poor estimates.
3. Flow calculations. Flow nets must have equipotentials across facies; a sugar cube model would yield erroneous results.

This permits modeling the geology in stratigraphic layers. The stratigraphic layers are modeled as 2-D surface maps with a thickness and are then stacked for the final model. Having a non-regular grid in the Z direction thus allows conformity to thickness, permitting accurate volume calculations.

Transforming Areal Coordinates


Geological events are rarely oriented with longitude and latitude; there is usually some azimuth, dip, or plunge to the formation. If the angle between the formation and the coordinate axes is large, there will be an error a' in the cell dimensions, as indicated in Figure 3.7. Also, it is confusing to deal with the angles associated with azimuth, dip, and plunge, so we remove them and model in a more easily understood coordinate system.

Figure 3.7, Notice that with large deviations in dip there will be some cell dimension error. It is common practice to rotate the coordinate axes so that they align with the direction of maximal continuity, which can be derived from the contour map. A note about continuity: it is assumed that the direction of maximal continuity is the direction in which the formation has the greatest continuity, and the

direction of minimal continuity is perpendicular to it. The rotations are performed in two steps: the first removes the azimuth, the second removes the dip. In the event that the formation also plunges, the procedure for removing dip is repeated. The procedure is illustrated in Figure 3.8.

Figure 3.8 Illustrates the process of rotating the coordinate axes to align with the major axis of the reservoir. First the axes are rotated about the z axis to accommodate the azimuth of the reservoir, then the axes are rotated about the y axis to accommodate the dip of the reservoir.
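The two-step rotation can be sketched as below. The angle sign conventions and ordering are illustrative assumptions; real modeling packages differ in how they define azimuth and dip:

```python
import math

def rotate_to_principal(x, y, z, azimuth_deg, dip_deg):
    """Rotate a point: first about the z axis to remove azimuth,
    then about the (new) y axis to remove dip. Conventions are illustrative."""
    az = math.radians(azimuth_deg)
    # Step 1: rotation about the z axis (removes azimuth).
    x1 = x * math.cos(az) + y * math.sin(az)
    y1 = -x * math.sin(az) + y * math.cos(az)
    # Step 2: rotation about the y axis (removes dip).
    d = math.radians(dip_deg)
    x2 = x1 * math.cos(d) - z * math.sin(d)
    z2 = x1 * math.sin(d) + z * math.cos(d)
    return x2, y1, z2

# A point lying along a formation axis with azimuth 30 degrees maps onto the
# new x axis (y and z components vanish):
p = (math.cos(math.radians(30)), math.sin(math.radians(30)), 0.0)
x2, y1, z2 = rotate_to_principal(*p, 30.0, 0.0)
print(abs(y1) < 1e-9 and abs(z2) < 1e-9)  # True
```

For a formation with plunge, the dip-removal rotation would simply be repeated with the plunge angle, as described in the text.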

Sometimes fluvial channels can be difficult to model because they deviate significantly. In this case it is possible to straighten the channel using a mathematical transform. Figure 3.8 diagrammatically illustrates the straightening transform.

Figure 3.8, Transforming a twisty channel into a straight channel.

Transforming Z Coordinates
Reservoirs often consist of stratigraphic layers separated by surfaces that correspond to some sequence of geologic time events, much like growth rings in a tree. The bounding surfaces that differentiate the strata are the result of periods of deposition, or periods of deposition followed by erosion. The surfaces are named according to these geologic events:
Proportional: The strata conform to the existing top and base. The strata may vary in thickness due to differential compaction, lateral earth pressures, or different sedimentation rates, but there is no significant onlap or erosion (Deutsch, 1999).
Truncation: The strata conform to an existing base but have been eroded on top. The stratigraphic elevation in this case is the distance up from the base of the layer.
Onlap: The strata conform to the existing top (no erosion) but have filled the existing topography, so a base correlation grid is required.
Combination: The strata conform to neither the existing top nor the existing base. Two additional grids are required.

Figure 3.9 Illustrates proportional, truncation, onlap, and combination type correlation surfaces. The stratigraphic layers must be moved so that they conform to a regular grid. This is done by transforming the z coordinate to a relative elevation.
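The z transform can be sketched for the proportional case, assuming correlation grids for the existing top and base of the layer; the elevations are hypothetical:

```python
def relative_elevation(z, z_base, z_top):
    """Map a true elevation z to a relative elevation in [0, 1] between the
    base and top correlation surfaces (proportional case)."""
    return (z - z_base) / (z_top - z_base)

def restore_elevation(z_rel, z_base, z_top):
    """Back-transform from modeling space to true elevation -- the transform
    is temporary, so it must invert exactly."""
    return z_base + z_rel * (z_top - z_base)

z = 1520.0                      # hypothetical elevation inside the layer
z_base, z_top = 1500.0, 1580.0  # hypothetical correlation surfaces
z_rel = relative_elevation(z, z_base, z_top)
print(z_rel)                                    # 0.25
print(restore_elevation(z_rel, z_base, z_top))  # 1520.0 -- back to reality
```

For the truncation case, the text notes that the stratigraphic elevation is instead measured as the distance up from the base of the layer.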

Figure 3.10 shows how strata are moved to a regular grid. Note that features remain intact; only the elevation has been altered to a relative elevation.

Figure 3.10 Illustrates the result of transforming the z coordinate to a regular grid. It is important to note that these are temporary coordinate transforms: the data are transformed to a modeling space and then transformed back to reality. There are no property or distance changes, just the movement from reality to a virtual space, then back to reality. Cell Size The cell size used in the model is a serious issue. If the cell size is too small, an enormous number of cells will be required to populate the model; too many cells makes the model difficult to manipulate and very taxing on the CPU. If the cells are too large, important geological features will be removed from the model. As processing power increases, model size becomes less and less of a constraint. Generally, models that range from 1 million to 5 million cells are appropriate.

Workflow
The specific process employed for 3-D model building will depend on the data available, the time available, the type of reservoir, and the skills of the people available. In general, the following major steps are required:

- Determine the areal and vertical extent of the model and the geological modeling cell size
- Establish a conceptual geological model and define zones for modeling
- For each zone:
  - Define stratigraphic correlation
  - Define the number of rock types, the data, and the spatial correlation
  - Generate 3-D rock type model
  - Establish porosity and permeability values and the spatial correlation
  - Generate 3-D porosity models
  - Generate 3-D permeability models
  - Merge and translate back to real coordinates
  - Verify the model
- Combine zones into a single model

Figure 3.11 illustrates the modeling concepts discussed above.

Fig. 3.11 A flow chart showing the reservoir modeling workflow.

Lesson 4: Data Analysis


Data Analysis
Data analysis is the gathering, display, and summary of data. It is an important step in building reliable numerical models: important features of the data are realized, and erroneous data and outliers are revealed. In recent years the growth of geostatistics has made itself felt more in the petroleum industry than in any other, and an important feature of this growth is the shift in philosophy from deterministic response to stochastic inference. Stochastic inference concerns generalizations based on sample data, and beyond sample data. The inference process aims at estimating the parameters of the random function model from the sample information available over the study area. The use of sample statistics as estimates of the population parameters requires that the samples be volumetrically / areally representative of the underlying population. Sampling schemes can be devised to ensure statistical representativity, but they are rarely used in practice. It is up to the geoscientist to repair the effects of biased sampling, integrate data of different types, cope with trends, and in general ensure that the data truly are representative of the population.

Outliers and Erroneous Data


Outliers and erroneous data can affect the summary statistics and subsequently the geostatistical model. Therefore it is sometimes necessary to remove or modify outliers. The following steps indicate one strategy for finding outliers:
- Plot a histogram of the data and zoom in on extreme values.
- Compute the summary statistics with and without the extreme values.
- Plot the cdf and examine extreme values.
- Look at the probability plot for extreme values.
- Search the location map for the outliers. Do they appear to be all in the same location? Do they appear to be inappropriate?

- Show a crossplot of the local averages versus the data (every point is plotted against the average of the surrounding data).

Figure 4.0a: A mean that differs from the mode significantly, a maximum that is significantly higher than the mean, or even a posting that sticks out indicates the presence of outliers. Figure 4.0a shows some of the things that indicate an outlier: a mean that deviates significantly from the median, a maximum that deviates significantly from the mean or the median, or a single posting way out in the middle of nowhere. There are three possible solutions for coping with outliers: (1) leave them as they are, (2) remove them from the data set, or (3) alter the value to something more appropriate to the surrounding data. The choice is left to professional judgement.
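A simple screen along the lines above can be sketched as follows; the threshold of two standard deviations is an illustrative assumption, and the final call remains a matter of professional judgement:

```python
def flag_outliers(data, k=2.0):
    """Flag values more than k standard deviations from the mean -- a crude
    first screen, not a substitute for inspecting the histogram and maps."""
    n = len(data)
    m = sum(data) / n
    sd = (sum((x - m) ** 2 for x in data) / n) ** 0.5
    return [x for x in data if abs(x - m) > k * sd]

perm = [4.0, 5.0, 6.0, 5.5, 4.5, 95.0]  # hypothetical; one suspicious value
print(flag_outliers(perm))  # [95.0]
```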

Lumping Populations
Sometimes it is wise to lump all of the data together as a single data set, and sometimes it is not. When performing a back-of-the-envelope reservoir characterization it is often wise to lump all of the data together. In a full-blown geostatistical study with multiple distinguishable lithofacies it is not. How do we decide when to lump and when not to lump? The following points illustrate one strategy:
1. Plot a histogram and look for multiple modes (peaks).
2. Examine a probability plot and look for kinks or breaks in the distribution.
3. Plot conditional histograms for each of the suspect distributions and overlay them for comparison.
Look at the spatial location of the suspect distributions. Is it feasible that there is more than one distribution?

Figure 4.0b: Should two distributions be separated? It depends on the study. Decisions made in a geostatistical study must be backed up by sound practice and good judgement. The strategies indicated serve only as an aid in justifying your decisions. You must also document your decisions, and may need to justify them.

Declustering
Earth science data are rarely collected with statistical representativity in mind. Data are often taken preferentially from areas that are high or low in value. There is nothing wrong with collecting data in this fashion; one would prefer to have areas of high or low value delineated. Unfortunately this sampling practice leads to location-biased sampling. In Figure 4.1, the location map of the sample data illustrates location-biased sampling: the areas of low potential are not as well represented as the areas of high potential.

Figure 4.1: Note that the areas are not sampled evenly. Some areas are heavily sampled and others are poorly sampled.

Declustering corrects the distribution for the effect of location-biased sampling. Declustering assigns a weight to each datum and calculates the summary statistics using the weighted data.

Figure 4.2: The single datum in Area 1 informs a much larger area than the 5 data in Area 2.

Figure 4.2 illustrates location-biased sampling. The single datum in Area 1 informs a larger area than the 5 data of Area 2. Intuitively one would weight each of the data in Area 2 by one fifth and the datum in Area 1 by one. Calculating the weights this way is effective, but a more efficient way is to overlay a grid and weight each datum relative to the number of data in its grid cell using Function 4.1 below:

w_i(c) = 1 / (n_i · L_0)    (4.1)

where w_i(c) is the weight, n_i is the number of data appearing in the cell, and L_0 is the total number of cells containing data. Figure 4.3 shows how geometric, or cell, declustering works.

Figure 4.3: The declustering weights for three different cells have been calculated; all other declustering weights are found in the same way.
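The cell-declustering weights of Function 4.1 can be sketched as follows. This is an illustrative example, not DecisionSpace code; the coordinates, values, and cell size are invented, with five clustered data in one cell and a lone datum in another.

```python
import numpy as np

def cell_decluster(x, y, cell):
    """Weights per Function 4.1: w_i = 1 / (n_i * L0)."""
    ix = np.floor(x / cell).astype(int)
    iy = np.floor(y / cell).astype(int)
    cells = list(zip(ix.tolist(), iy.tolist()))
    counts = {c: cells.count(c) for c in set(cells)}
    L0 = len(counts)                                # cells containing data
    return np.array([1.0 / (counts[c] * L0) for c in cells])

# 5 clustered data in one cell and 1 lone datum in another
x = np.array([1.0, 1.2, 1.1, 1.3, 1.4, 9.0])
y = np.array([1.0, 1.1, 1.2, 1.0, 1.3, 9.0])
z = np.array([8.0, 9.0, 8.5, 9.5, 9.0, 2.0])        # clustered high values
w = cell_decluster(x, y, cell=5.0)
naive_mean = z.mean()                               # biased upward
declustered_mean = (w * z).sum()                    # weights sum to 1
```

With these weights the clustered high values no longer dominate: each clustered datum gets weight 1/(5·2) = 0.1 and the lone datum gets 1/(1·2) = 0.5, so the declustered mean (5.4) sits well below the naive mean (about 7.67).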

Trends
Virtually all natural phenomena exhibit trends. Gravity works; vertical profiles of permeability and porosity fine upward within each successive stratum (Deutsch, 1999). Since almost all natural phenomena exhibit a trend, it is not always appropriate to model using a stationary RV. Two methods for coping with trends are reducing the study area to a size where the assumption of stationarity is appropriate, or restricting the assumption of stationarity to the search radius. Universal kriging, an adaptation of the ordinary kriging system, produces good local estimates in the presence of a trend. Universal kriging can also be used to calculate a trend automatically, but its use should be tempered with good judgement and sound reasoning rather than simply accepting the result. The best method for coping with a trend is to determine the trend

(as a deterministic process), subtract it from the observed local values, estimate the residuals, and add the trend back in for the final estimate (Mohan, 1989). Often it is possible to infer areal or vertical trends in the distribution of rock types and/or petrophysical properties, and inject this deterministic information into the model.
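The remove-trend / work-with-residuals / add-back workflow can be sketched as below. The linear trend in depth and the noise values are assumptions for illustration; in practice the middle step would be kriging or simulation of the residuals rather than the identity placeholder used here.

```python
import numpy as np

depth = np.linspace(0.0, 100.0, 11)
noise = np.array([0.003, -0.002, 0.001, 0.0, -0.001,
                  0.002, -0.003, 0.001, 0.0, -0.002, 0.001])
porosity = 0.25 - 0.001 * depth + noise          # porosity fining with depth

# 1. determine the trend as a deterministic process (least-squares line)
a, b = np.polyfit(depth, porosity, 1)
trend = a * depth + b
# 2. subtract it; the residuals should look stationary (they sum to zero)
residual = porosity - trend
# 3. estimate/simulate the residuals, then add the trend back in;
#    the "estimated" residual here is just the residual itself
final = residual + trend
```

The point of the sketch is the bookkeeping: whatever estimation is done on the residuals, adding the trend back reproduces a field that honors the large-scale deterministic behavior.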

Lesson 5: Spatial Data Analysis


Introduction
Geostatistics focuses on natural phenomena that are correlated in space, a feature of virtually all natural phenomena. Geostatistical modeling requires quantitative measures of spatial correlation for estimation and simulation, and the most commonly used tool for measuring spatial correlation is the semivariogram. For variogram calculation, it is essential to work with data that are free of outliers and trends, and that are oriented in an appropriate coordinate system. This lesson begins with a qualitative look at the variogram, followed by the parameters of the variogram, then a quantitative look at variograms, and concludes with variogram interpretation and modeling.

Variograms
We start with a qualitative understanding of variograms. What is important here is to gain an understanding of the concept of spatial correlation. Recall that a variogram is a chart that converts distance to correlation. Figure 5.1 shows an experimental variogram. Crossplot 5.1a in Figure 5.1 shows that at short distances the correlation is high. Crossplot 5.1b shows that as distance increases the correlation decreases, and crossplot 5.1c shows that at some distance there is no correlation among the data.

Figure 5.1 Each point in the experimental variogram relates to the crossplot of two data separated by a distance h.
Consider the variogram and the resulting maps in Figure 5.2 below:

Figure 5.2: Two variograms and the corresponding maps. The variogram on the left shows no spatial correlation and the resulting map is random. The variogram on the right is very continuous, showing extensive spatial correlation, and the corresponding map shows good spatial correlation.

Figure 5.2 shows a map that was made using variogram 5.2a. The variogram indicates that the data have no correlation at any distance, and hence image 5.2a is a random map. Image 5.2b was made using variogram 5.2b, which indicates that the data are well correlated at long distances, and image 5.2b shows correlation at long distance. Figure 5.3 shows a close-up of the two images in Figure 5.2. Notice that in figure 5.3b the colors gradually change from blue to

green to orange then red, and that this is not the case in figure 5.3a. In Figure 5.3a the colors change randomly with no correlation from one pixel to the next. A pixel in image 5.3a is not well correlated with its neighboring pixels, whereas in image 5.3b neighboring pixels are well correlated.

Figure 5.3: A close-up of an area on each map shows that map a, which used the variogram with no spatial correlation, is random, whereas the map on the right used a continuous variogram and thus shows spatial correlation.

Correlation is the characteristic of having linear interdependence between random variables or between sets of numbers. Between what variables is the variogram measuring correlation? In the variograms presented so far, correlation is measured between values of the same variable separated by a distance approximately equal to h. Figure 5.4 shows conceptually how an experimental variogram is calculated. The lag distance, or distance h, is decided upon by the practitioner. The two variables are (1) the data at the head of the vector, and (2) the data at the tail of the vector. The data at the tail of the vector (the circled end in Figure 5.4) is called z(u) (the random variable at location u) and the data at the head of the vector is called z(u+h) (the random variable at location u+h). Starting with the smallest lag distance the algorithm

visits each datum and determines whether there are any data approximately one lag away. If there are, the algorithm computes a variogram value for one lag. After every datum has been visited, the algorithm doubles the lag distance and repeats the calculation. In this way the experimental variogram quantifies the spatial correlation of the data.

Figure 5.4 The variogram is not calculated from one single point over varying distances h, rather it moves from point to point and calculates the variogram for each distance h at each data location.
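The pair-gathering procedure just described can be sketched as a small routine. This is an illustrative implementation, not the DecisionSpace algorithm; the four collinear data and the lag settings are invented for the example.

```python
import numpy as np

def experimental_variogram(xy, z, lag, tol, nlags):
    """Average half squared difference of all data pairs near each lag."""
    d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gammas = []
    for k in range(1, nlags + 1):
        pairs = np.triu(np.abs(d - k * lag) <= tol, 1)  # pairs near lag k
        gammas.append(sq[pairs].mean() if pairs.any() else np.nan)
    return np.array(gammas)

xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
z = np.array([1.0, 2.0, 4.0, 8.0])
gamma = experimental_variogram(xy, z, lag=1.0, tol=0.5, nlags=3)
```

Every data location contributes to every lag it can pair at, exactly as in Figure 5.4; for these increasingly dissimilar data the semivariogram values grow with lag (3.5, 11.25, 24.5).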

Components of the Variogram

There are a few parameters that define some important properties of the variogram:
1. Sill: the sill is equal to the variance of the data (if the data are normal scores the sill will be one).
2. Range: the range is the distance at which the variogram reaches the sill.
3. Nugget Effect: the nugget is the sum of all of the short-scale variability and measurement errors.
Figure 5.5 illustrates the variogram parameters.

Figure 5.5a: The components of the variogram; the sill is the variance of the variable under study, the range is the distance at which the variogram plateaus, and the nugget effect is the short-scale variability.

The nugget effect is a measure of short-scale variability; any error in the measurement value or in the location assigned to the measurement contributes to the nugget effect. The range shows the extent of correlation, and the sill indicates the maximum variability, or the variance of the data. Figure 5.5b shows what happens when we change two parameters: the nugget effect and the range. Recall that the sill is a fixed value: it is the variance of the data. Images a, b, and c in Figure 5.5b show the effect of different ranges. A variogram with no range is

shown in image a, image b has an intermediate range, and image c has a long range. Images d, e, and f show the effect of an increasing nugget effect. Image d shows no nugget effect, or no short-scale variability; image e shows an intermediate nugget effect; and image f shows pure nugget effect, or complete short-scale variability.

Figure 5.5b: Some variogram examples and images to show the effect of different nugget and range parameters.

Quantitative Spatial Data Analysis

In probabilistic notation the variogram is written:

2γ(h) = E{ [Z(u) − Z(u+h)]² }    (5.1)

D-54

Lesson 5: Spatial Data Analysis

R2003.2.0.1

Landmark

DecisionSpace Immersion

which says that the variogram is the expected value of the squared difference of Z(u) and Z(u+h). The semivariogram is defined as:

γ(h) = ½ E{ [Z(u) − Z(u+h)]² }    (5.2)

To be precise, the semivariogram is one half the variogram. In this lecture we will treat the variogram and the semivariogram as synonymous. Before the variogram is calculated some data preparation must be performed. Data must be free from outliers and systematic trends. Also, since geostatistical simulation requires normally distributed data, the data must be transformed into normal space. Estimation and simulation of indicator data such as lithofacies require that the data be transformed into indicator space. For convenience and ease of understanding it is useful to transform the coordinate axes to be aligned with the reservoir, or in some cases with the direction of maximal continuity.

Choosing variogram directions and lag distances

Spatial correlation is rarely isotropic; that is, it is rarely the same in all directions. When a property changes with direction it is said to be anisotropic. Since geostatistics is performed in 3D, we require a definition of the spatial correlation in all three directions, and most reservoirs exhibit 3D anisotropy. For this reason variogram analysis is performed iteratively. The first variogram calculated should be omnidirectional (not considering directions of anisotropy) and in the horizontal plane. The calculation of the experimental omnidirectional variogram requires a lag distance, a lag tolerance, and a number of lags. A good first estimate for the lag distance is the average distance between samples. The variogram is loosely defined as the average squared difference between data separated by a distance approximately equal to h. It is nearly impossible to calculate the

variogram for data separated by precisely the distance h, so we include a lag distance tolerance. A good starting point for the lag tolerance is between one half of and the full lag distance. Figure 5.6 illustrates the concept of lag tolerance. The lags should not extend beyond about two thirds of the extent of the field of study.

Figure 5.6: An illustration of the lag, lag tolerance, azimuth, azimuth tolerance, and bandwidth parameters for variogram modeling.

Even calculating the omnidirectional experimental variogram is not an easy task. An acceptable lag distance for an omnidirectional experimental variogram requires an iterative approach; the lag distance and lag tolerance must be tweaked. After calculating the omnidirectional experimental variogram we must determine the directions of maximal and minimal continuity so that a 3D definition of the spatial correlation can be found. To define the 3D spatial continuity we require variograms in three directions: the direction of maximal continuity, the direction of minimal continuity, and one other direction. We calculate these variograms and combine them to define the 3D spatial correlation. In geostatistics, the direction of minimal continuity is defined as perpendicular to the direction of

maximal continuity. This defines the spatial continuity for 2D geostatistics. For 3D geostatistics the remaining direction is defined as perpendicular to the 2D plane. Figure 5.7 illustrates this point. This is an adequate means of defining the 3D spatial continuity of the reservoir.

Figure 5.7: Defining 3D spatial continuity requires calculating variograms in 3 directions.

There are three parameters required for a 3D definition of the spatial continuity: (1) the direction of spatial continuity, (2) the directions of the variograms, and (3) the azimuth tolerance. One useful tool for determining the directions of maximal and minimal continuity is the variogram map. The variogram map calculates the variogram from the center of the location map radially outward in a clockwise/

counterclockwise direction. The result is a map illustrating directions of minimal and maximal continuity as in Figure 5.8. Additionally the direction of maximal continuity can be found by searching for the variogram offering the greatest range, or by referring to a contour map of petrophysical properties.

Figure 5.8

As with the definition of the lag distance and the lag tolerance, it is difficult to calculate the variogram along a single direction, so we define a directional tolerance, or azimuth tolerance. Figure 5.6 illustrates the concept of azimuth tolerance. A good starting point is an azimuth tolerance of 22.5 degrees, giving a total angular window of 45 degrees. As with the omnidirectional variogram, a good set of 3D variograms requires an iterative approach; the lag distances and tolerances may be different in each direction, the azimuth tolerance may require tweaking to get a good experimental variogram, and the direction of maximal continuity may require tweaking.
NOTE: The other two directions are fixed. The direction of minimal continuity is always perpendicular to the direction of maximal continuity and the third direction is always perpendicular to the plane of continuity.
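The azimuth-tolerance test can be sketched as a small predicate that decides whether a pair of points belongs to a given direction. This is an illustration only; the convention of measuring azimuth from north and folding it into the 0 to 180 degree interval is an assumption of the sketch.

```python
import numpy as np

def in_direction(tail, head, azimuth_deg, tol_deg=22.5):
    """True if the pair's separation vector lies within the azimuth window."""
    dx, dy = head[0] - tail[0], head[1] - tail[1]
    az = np.degrees(np.arctan2(dx, dy)) % 180.0   # azimuth measured from north
    diff = abs(az - azimuth_deg % 180.0)
    return bool(min(diff, 180.0 - diff) <= tol_deg)

# a due-east pair belongs to the 90 degree direction, not the 0 degree one
east = in_direction((0.0, 0.0), (5.0, 0.0), 90.0)
north = in_direction((0.0, 0.0), (5.0, 0.0), 0.0)
```

With a 22.5 degree tolerance each directional variogram collects pairs from a 45 degree wedge, as described above; widening the tolerance admits more pairs at the cost of directional resolution.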

Variogram Interpretation
The experimental variogram is only one third of a variogram analysis. The remaining pieces are interpretation of the spatial correlation and modeling, combining the models into a licit variogram model that defines the spatial correlation of the reservoir. Variogram interpretation is important. The calculated variogram points are not directly usable since (1) noisy results should be discounted, (2) geological interpretation should be used in the final variogram model, and (3) we need a licit variogram measure for all directions and distances. For these reasons, the variogram must be understood and then modeled appropriately (Deutsch, 1999). There are some important pointers for modeling the variogram:
1. The sill is the variance.
2. If the data are normal scores then the sill is 1.0.
3. Variogram values below the sill imply positive correlation, variogram values at the sill imply no correlation, and variogram values above the sill imply negative correlation.
4. The range is the point where the variogram meets the sill, not the point where the variogram appears to flatten out, or plateau.
5. A nugget effect of greater than 30% is unusual and should be investigated.

Figure 5.9a

Anisotropy

If a petrophysical property has a range of correlation that depends on direction, the property is said to exhibit geometric anisotropy. If the petrophysical property reaches the sill in one direction and not in another, it is said to exhibit zonal anisotropy. In other words, a variogram exhibits zonal anisotropy when it does not reach the expected sill. Most reservoir data exhibit both geometric and zonal anisotropy. Figure 5.9 shows first geometric anisotropy, second zonal anisotropy, and lastly both forms of anisotropy.

Figure 5.9b: Zonal anisotropy can be the result of two different reservoir features: (1) layering, where the horizontal variogram does not reach the expected sill because layer-like trends exist and the variogram does not reach the full variability; and (2) areal trends, where the vertical variogram does not reach the expected sill due to a significant difference in the average value in each well.

Cyclicity

Geological phenomena often form in repeating cycles, that is, similar depositional environments occurring over and over. A variogram will show this feature as cyclicity. As the variogram measures the spatial correlation it will pass through regions of positive then negative correlation while still trending toward no correlation. A cyclic variogram can be seen in Figure 5.10.

Figure 5.10 Gray scale image of an Eolian sandstone and the corresponding vertical and horizontal semivariograms. The semivariogram was calculated on the normal score transform of the gray scale level (finer grained low permeability sandstone appears darker). Note the cyclic behavior in the vertical direction, and the long range correlation of the horizontal variogram (Deutsch, 1999).

Large Scale Trends

Virtually all geological processes impart a trend in the petrophysical property distribution. Dolomitization resulting from hydrothermal fluid flow, the upward fining of clastics, and so on, are large-scale trends. Figure 5.11 shows how a large-scale trend affects the variogram: trending causes the variogram to climb up and beyond the sill.

Figure 5.11

Variogram Modeling
All directional variograms must be considered simultaneously to understand the 3D spatial correlation.
1. Compute and plot experimental variograms in what are believed to be the principal directions of continuity based on a priori geological knowledge.
2. Place a horizontal line representing the theoretical sill. Use the value of the experimental (stationary) variance for continuous variables (1 if the data have been transformed to normal scores) and p(1 − p) for categorical variables, where p is the global proportion of the category of interest. In general, variograms are systematically fit to the theoretical sill, and the whole variance below the sill must be explained in the following steps.

3. If the experimental variogram clearly rises above the theoretical sill, then it is very likely that there exists a trend in the data. The trend should be removed as detailed above before proceeding to interpretation of the experimental variogram.
4. Interpretation. Short-scale variance: the nugget effect is a discontinuity in the variogram at the origin corresponding to short-scale variability. It must be chosen to be equal in all directions; pick it from the directional experimental variogram exhibiting the smallest nugget. At times, one may choose to lower it or even set it to 0.0. Intermediate-scale variance: geometric anisotropy corresponds to a phenomenon with different correlation ranges in different directions. Each direction encounters the total variability of the structure. There may exist more than one such variance structure. Large-scale variance: (1) zonal anisotropy, characterized by directional variograms reaching a plateau at a variance lower than the theoretical sill, or (2) hole effects, representative of a periodic phenomenon (cyclicity) and characterized by undulations on the variogram. The hole effect does not actually contribute to the total variance of the phenomenon; however, its amplitude and frequency must be identified during the interpretation procedure, and it can only exist in one direction.

5. Once all the variance regions have been explained and each structure has been related to a geological process, one may proceed to variogram modeling by selecting a licit model type (spherical, exponential, Gaussian) and correlation ranges for each structure. This step can be referred to as the parameter estimation part of variogram analysis. Constraining the variogram model by a prior interpretation step, with identification of structure types, can lead to a reliable automatic fit of the experimental variogram.

Variogram Models

There are four common model types:
1. The nugget effect. The nugget effect should normally explain no more than 30% of the variance. The nugget effect is that portion of the variance that is due to error and small-scale variability. The nugget

effect is numerically modeled using formula 5.3, and Figure 5.13 shows a nugget effect variogram.

γ(h) = 0 for h = 0;  γ(h) = c for h > 0    (5.3)

Figure 5.13

2. The spherical model. The spherical model is the most common variogram model type. It is mathematically defined by formula 5.4, and Figure 5.14 shows a spherical model.

γ(h) = c · [1.5(h/a) − 0.5(h/a)³] for h ≤ a;  γ(h) = c for h > a    (5.4)

Figure 5.14

3. The exponential model. The exponential model is similar to the spherical model but it approaches the sill asymptotically. It is mathematically defined by formula 5.5 and shown as a variogram in Figure 5.15.

γ(h) = c · [1 − exp(−3h/a)]    (5.5)

Figure 5.15

4. The Gaussian model. The Gaussian model is typically used for modeling very continuous experimental variograms. It is mathematically defined by formula 5.6 and shown as a variogram model in Figure 5.16.

γ(h) = c · [1 − exp(−3h²/a²)]    (5.6)

Figure 5.16
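The four model types can be written as short functions of the lag h, with sill c and range a. These are illustrative sketches; the factor of 3 in the exponential and Gaussian models assumes the practical-range convention, under which the model reaches about 95% of the sill at h = a.

```python
import numpy as np

def nugget(h, c=1.0):
    return np.where(np.asarray(h) > 0, c, 0.0)                  # formula 5.3

def spherical(h, c=1.0, a=1.0):
    s = np.minimum(np.asarray(h) / a, 1.0)
    return c * (1.5 * s - 0.5 * s ** 3)                         # formula 5.4

def exponential(h, c=1.0, a=1.0):
    return c * (1.0 - np.exp(-3.0 * np.asarray(h) / a))         # formula 5.5

def gaussian(h, c=1.0, a=1.0):
    return c * (1.0 - np.exp(-3.0 * (np.asarray(h) / a) ** 2))  # formula 5.6
```

The spherical model reaches the sill exactly at the range, while the exponential and Gaussian models approach it asymptotically; nested combinations of these structures (plus a nugget) make up a full licit variogram model.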

Workflow
Modeling the spatial correlation is the most difficult and important step in the geostatistical modeling process. Great care should be taken.

Lesson 6: Geostatistical Algorithms


Introduction
At any instant in time there is a single true distribution of a geological attribute. The true distribution is not available, but we do the best we can to map it given some sample data. From the need to map the true distribution as accurately as possible, many interpolation algorithms were developed. The most common, and the most useful, is kriging. Kriging is a locally accurate and smooth interpolator, appropriate for visualizing trends but inappropriate for flow simulation, where preservation of the heterogeneity of the reservoir is important. An extension of the kriging algorithm is sequential simulation. Sequential simulation is appropriate for simulation, and allows an assessment of uncertainty with alternative realizations. The first part of this lesson is devoted to the kriging algorithm and the kriging variance. The lesson concludes with simulation of petrophysical properties.

Kriging
Consider the problem of estimating the value of an attribute at any unsampled location u, denoted z*(u), using only sample data collected over the study area A, denoted by z(un) as illustrated in Figure 6.1.

Figure 6.1

The algorithm used to solve this problem was pioneered by Danie Krige, and in recognition of his efforts the algorithm is called kriging. The kriging algorithms are a family of generalized least-squares regression techniques that estimate z*(u) using the sample data z(un). There are several different flavors of kriging, each addressing different needs. In general the kriging estimator is written:

z*(u) − m(u) = Σ_{a=1..n} λ_a [z(u_a) − m(u_a)]    (6.1)

where z*(u) is the estimator at location u, m(u) is the mean at location u, z(u_a) is one of the n data used in the estimate, m(u_a) is the mean at location u_a, and λ_a are the weights. The kriging equations state that the estimate is a weighted linear combination of the sample data, or more generally:

z*(u) = Σ_{a=1..n} λ_a z(u_a) + [1 − Σ_{a=1..n} λ_a] m    (6.2)

Reconsider figure 6.1 with equation 6.2 in mind. Equation 6.2 indicates that the estimator z*(u) is the weighted sum of the data, or mathematically:

z*(u) = Σ_{a=1..n} λ_a z(u_a)    (6.3)

There are a few goals that we strive for when choosing the weights:
1. Closeness to the location being estimated. In Figure 6.2 the estimator is equidistant from both knowns.
2. Redundancy between the data values. The knowns lie on either side of the estimator; if the knowns were both on the same side of the estimator it would be more difficult to make the estimate.
3. Anisotropic continuity (preferential direction).
4. Magnitude of continuity / variability.

Figure 6.2. The kriging weights must consider redundancy of the data, the closeness of the data, and the direction and magnitude of continuity.

There is one other goal when estimating the unknown attribute: minimize the error variance. If the error variance is minimized then the estimate will be the best estimate. The error variance is the expected squared difference between the estimate and the true value and is defined by:

σ²_E(u) = E{ [z*(u) − z(u)]² }    (6.4)

where z*(u) is the estimator and z(u) is the true value. One obvious question raised by this equation is: how can we determine the error if we do not know the true value? True, we do not know the true value, but we can still choose the weights that minimize the error. To minimize the estimation variance, take the partial derivative of the error variance (equation 6.4) and set it to zero; but before taking the derivative, equation 6.4 is expanded:

σ²_E(u) = C(0) + Σ_a Σ_b λ_a λ_b C(u_a, u_b) − 2 Σ_a λ_a C(u, u_a)    (6.5)

The result is an equation that refers to the covariance between the data points, C(u_a, u_b), and between the data and the estimator, C(u, u_a). At first this may seem like a problem, because we have not discussed the covariance, but we did discuss the variogram, and the variogram and the covariance are related. Recall that the variogram is defined by:

2γ(h) = E{ [Z(u) − Z(u+h)]² }

and note that the covariance is defined by (the covariance is not a squared difference, whereas the variogram is):

C(h) = E{ [Z(u) − m] [Z(u+h) − m] }

Expanding the squared difference in the variogram gives

2γ(h) = 2C(0) − 2C(h)

so the variogram and the covariance are linked by:

γ(h) = C(0) − C(h)    (6.6)

where γ(h) is the variogram, C(0) is the variance of the data, and C(h) is the covariance. This makes it possible to perform kriging in terms of the variogram instead of the covariance. Continuing with the derivation of the kriging equations, formula 6.5 must be minimized by taking the partial derivatives with respect to the weights:

∂σ²_E(u)/∂λ_a = 2 Σ_b λ_b C(u_a, u_b) − 2 C(u, u_a),  a = 1, ..., n

and setting each to zero:

Σ_{b=1..n} λ_b C(u_a, u_b) = C(u, u_a),  a = 1, ..., n    (6.7)

The result of the derivation in terms of the variogram is the same because both the variogram and the covariance measure spatial correlation, mathematically:

C(u_a, u_b) = C(0) − γ(u_a, u_b)    (6.8)

and the system of equations in terms of the variogram is:

Σ_{b=1..n} λ_b γ(u_a, u_b) = γ(u, u_a),  a = 1, ..., n    (6.9)

This is known as simple kriging. There are other types of kriging, but they all use the same fundamental concepts derived here.

Discussion

There are a couple of motivations behind deriving the kriging equations in terms of the covariance:
1. It's easier. Solving the kriging equations in terms of the variogram requires that the mean be carried throughout the derivation. It is easier to simplify in terms of the covariance.
2. The variogram is zero at h = 0, which makes the matrix very unstable. The covariance at h = 0 is always large (it equals the variance of the data), and hence the main diagonal of the matrix will always be large. Matrices with small main-diagonal elements, as when using the variogram, are difficult for solution algorithms to solve due to truncation errors and so on.
3. It's easy to convert the variogram to covariance.

Implementing Kriging
Once again, consider the problem of estimating the value of an attribute at an unsampled location u, denoted z*(u), using only the sample data collected over the study area A, denoted z(un), as illustrated in Figure 6.3. Figure 6.3 shows the estimator (the cube) and the data z(un). To perform kriging, just fill in the matrices. For example, to fill in entry 1,1 of the left-hand matrix, consider the variogram between points 1 and 1: the distance between a point and itself is 0, and thus the first entry is the nugget effect. For entry 1,2, consider the distance h between points 1 and 2, read the appropriate variogram measure, and enter it into the matrix. Repeat for all of the variogram entries and solve for the weights λn.

Figure 6.3

Figure 6.4

Figure 6.5
The estimate is then calculated as:

z*(u) = Σ_{a=1..n} λ_a z(u_a)    (6.10)

The result is an estimate of the true value and the error associated with the estimate, as Figure 6.6 illustrates.

Figure 6.6
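The fill-in-and-solve procedure described above can be sketched for simple kriging. This is an illustration, not the DecisionSpace implementation; it assumes a stationary mean of zero and an isotropic exponential covariance, and the data locations and values are invented.

```python
import numpy as np

def cov(h, c0=1.0, a=10.0):
    """Covariance C(h) = C(0) - gamma(h); an exponential model is assumed."""
    return c0 * np.exp(-3.0 * np.asarray(h) / a)

def simple_krige(data_xy, data_z, u):
    d = np.sqrt(((data_xy[:, None, :] - data_xy[None, :, :]) ** 2).sum(-1))
    K = cov(d)                                      # data-data covariances
    k = cov(np.sqrt(((data_xy - u) ** 2).sum(-1)))  # data-estimator covariances
    lam = np.linalg.solve(K, k)                     # solve for the weights
    est = float(lam @ data_z)                       # z*(u), for a mean of 0
    var = float(cov(0.0) - lam @ k)                 # kriging variance (6.11)
    return est, var

data_xy = np.array([[0.0, 0.0], [4.0, 0.0]])
data_z = np.array([1.0, -0.5])
est, var = simple_krige(data_xy, data_z, np.array([2.0, 0.0]))
```

Kriging at a data location returns the datum itself with zero kriging variance, and the variance grows toward C(0) as the estimation point moves away from the data, which is the behavior derived in the next section.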

Kriging provides the best estimate, but there are some issues.

The Pros and Cons of Kriging

Pros:
- The best linear unbiased estimator
- Uses the variogram
- Gets the covariance between the data and the estimate correct

Cons:
- Smooths
- Does not honor the variogram
- Does not honor the histogram
- Does not quantify global uncertainty

Since kriging is a linear estimator it smooths, and thus reduces the heterogeneity in the model. This is acceptable for attributes that are already smooth, but in most cases kriged outputs are not acceptable for mapping because the true heterogeneity is removed. Another issue is the failure of kriging to honor the histogram and the variogram; this is also a result of the smoothing effect of kriging. Kriging offers one solution to the estimation problem, but it offers the mean estimate at every point. There are other possibilities, and they are expressed in the error variance.

The Kriging Variance


Recall that the expanded kriging variance is:

(6.5) and that the kriging equation is:

(6.7)


substituting equation 6.7 into equation 6.5:

(6.11) Equation 6.11 is the kriging variance. Kriging may be considered spatial regression, that is, it creates an estimate that is smooth. There are many interpolation / estimation algorithms that construct smooth estimates. The advantage of kriging over other algorithms is that it provides quantification of how smooth the estimates are. The variance of the kriging estimate may be calculated as:

(6.12) substituting equation 6.7 into equation 6.12,

(6.13) Equation 6.13 gives the variance of the kriging estimator at location u. However, the variance should be stationary, that is, the same everywhere. Therefore there is a missing variance equal to:

(6.14) which is exactly the kriging variance. Thus the missing variance is the kriging variance. When kriging at a data location, the kriging variance is zero and there is no missing variance; when kriging with no local data, the kriging variance is C(0) and all of the variance is missing.
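The bodies of equations (6.12) through (6.14) were likewise lost in extraction; the relations the surrounding text describes are conventionally written as follows (a hedged reconstruction):

```latex
\operatorname{Var}\{Z^{*}(u)\} \;=\; C(0) - \sigma^{2}_{SK}(u),
\qquad
C(0) - \operatorname{Var}\{Z^{*}(u)\} \;=\; \sigma^{2}_{SK}(u)
```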


Sequential Simulation
Sequential simulation addresses the cons of kriging while still making use of the kriging algorithm and its strengths. One of the important things that kriging does correctly, and that provides the motivation for sequential simulation, is that it gets the covariance between the data and the estimate right. The issue that leads to sequential simulation is that, although kriging gets the covariance between the data and the estimate correct, it fails to get the covariance between the estimates: rather than use previous estimates as additional data, the kriging algorithm simply moves on to the next location without including the covariance between the new estimate and the last. Sequential simulation does exactly that. The first location is kriged with only data, because that is all there is: just data, no other estimates. The next location is kriged with the previous estimate included as data, so the covariance with it enters the algorithm.

This is sufficient motivation to proceed sequentially with estimation, but there is still the issue of the missing variance. Recall that the missing variance is the kriging variance:

(6.15)


This missing variance must be added back in without changing the variogram reproduction properties of kriging. This is done by adding an independent component with a zero mean and the correct variance to the kriged estimate:

(6.16)

(6.17) The sequential simulation workflow is as follows: 1. Transform the original Z data to a standard normal distribution (all work will be done in normal space). We will see later why this is necessary. 2. Go to a location u and perform kriging to obtain the kriged estimate Y*(u) and the corresponding kriging variance σ²SK(u). 3. Draw a random residual R(u) that follows a normal distribution with a mean of 0.0 and a variance of σ²SK(u). 4. Add the kriged estimate and residual to get the simulated value: Ys(u) = Y*(u) + R(u). Note that Ys(u) could equivalently be obtained by drawing from a normal distribution with mean Y*(u) and variance σ²SK(u). 5. Add Ys(u) to the set of data to ensure that the covariance between this value and all future predictions is correct. As stated above, this is the key idea of sequential simulation, that is, to consider previously simulated values as data so that we reproduce the covariance between all of the simulated values.


6. Visit all locations in random order (to avoid artifacts of a limited search). 7. Back-transform all data values and simulated values once the model is populated. 8. Create another equiprobable realization by repeating with a different random number seed. (Deutsch, 1999)
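The eight steps above can be sketched on a toy 1-D grid. This is a hedged illustration using simple kriging with an assumed exponential covariance; the normal-score transform of step 1 is taken as already done, and all locations and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed exponential covariance model (sill and range are placeholders).
def cov(h, sill=1.0, crange=5.0):
    return sill * np.exp(-3.0 * np.abs(h) / crange)

# Conditioning data in normal-score space (hypothetical).
locs = [2.0, 8.0]
vals = [0.5, -1.0]

# Step 6: visit the remaining grid nodes in random order.
path = rng.permutation(np.setdiff1d(np.arange(11.0), locs))

for u in path:
    # Step 2: simple kriging of Y*(u) from all data,
    # including previously simulated nodes.
    L = np.array(locs)
    C = cov(L[:, None] - L[None, :])
    c0 = cov(L - u)
    lam = np.linalg.solve(C, c0)
    est = lam @ np.array(vals)                 # kriged estimate Y*(u)
    var = cov(0.0) - lam @ c0                  # kriging variance (the missing variance)
    # Steps 3-4: draw a residual and add it to the estimate.
    sim = est + rng.normal(0.0, np.sqrt(max(var, 0.0)))
    # Step 5: treat the simulated value as data from now on.
    locs.append(u)
    vals.append(sim)
```

Appending each simulated value to the conditioning set before visiting the next node is what reproduces the covariance between the simulated values themselves.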
The Pros and Cons of Simulation. Pros: honors the histogram; honors the variogram; quantifies global uncertainty; makes available multiple realizations. Cons: multiple images; conceptually difficult to understand; not locally accurate.


Lesson 7: Structural Modeling


Introduction
It is estimated that hydrocarbon reserves recoverable through improved reservoir management exceed the new reserves that can be added through exploration. Increasingly, it is recognized that 3D seismic data analysis is a critical reservoir management technology and plays a key role in reservoir detection, delineation, characterization, and monitoring. However, 3D seismic alone is inadequate for many applications due to its limited resolution and the indirect and/or weak relationships between seismic attributes and critical reservoir parameters such as permeability, porosity, and water saturation. As a result, it is generally recognized by reservoir scientists that proper reservoir description and monitoring require full integration of 3D seismic with engineering, geological (including geochemical and geostatistical), petrophysical, and borehole geophysical methods. Seismic is very good at resolving large-scale structural features, but it is not good at resolving fine-scale features such as petrophysical properties. This task is left to geostatistics and well seismic fusion. Petrophysical property sensing tools such as logs offer fine-scale measures of petrophysical properties but offer no insight into what lies beyond the tool's range. This task is also left to geostatistics. Core data is an even more finely scaled measure of petrophysical properties, but its range of insight is even less than that of log data. Geostatistics is left to fill the knowledge gap. Geostatistics uses the coarse-scale structural information offered by seismic, the mid-scale information offered by electric logs, and the fine-scale information of core data to generate high-resolution models of oil reservoirs. A reservoir model is built starting with the large-scale features, the structural features, first. This lesson discusses the uncertainty in interpreted seismic surfaces and, in the event there is no reliable seismic data, how to simulate the surfaces that would define structure, as well as fault handling.


Velocity Uncertainty
The geophysical branch of exploration science is primarily concerned with defining subsurface geological features through the use of seismic techniques, or the study of energy wave transmissions through rock. Seismic techniques can be used to make subsurface maps similar to those developed by standard geological methods. The three main rock properties that the geophysicist studies are 1) elastic characteristics, 2) magnetic properties, and 3) density. Although studies of density and magnetic properties provide useful information, elastic characteristics are considerably more important since they govern the transmission of energy waves through rock. It is this elastic characteristic that is studied in seismic surveys. The word seismic pertains to earth vibrations which result from either earthquakes or artificially induced disturbances. Reflection seismic surveys record the seismic waves that return or reflect from subsurface formation interfaces after a seismic shock wave has been created on the surface. By measuring the time required for different waves to be reflected from different formations, the geophysicist can identify structural variations of the formations. Figure 7.1 illustrates this process in a typical land survey operation.

Figure 7.1, a typical land survey operation. The objective of seismic work is to develop maps that indicate structures which might form traps for oil or gas from the data provided on the record cross sections. The geophysicist makes maps by calibrating the seismic attribute to core data, well log data, and analogue data. Any feature that causes a change in the propagation of sound in rock shows up in the seismic survey. Changes in dip, different rock types, possible faults, and other geological features are among the features indicated in the sections. These features are not immediately evident in the seismic data. Several steps are necessary to


convert seismic data into useful structural and stratigraphic information. Even the best data processing techniques cannot completely filter out all the unwanted noise and distortion, so making good, reliable interpretations of seismic data requires judgment and experience. Some sources of uncertainty include:
Survey Information: seismic information measures the time it takes for energy waves to propagate through rock, that is, the velocity of the waves traveling through rock. Velocity is the distance traveled divided by the time it takes to traverse that distance. The frequency of the energy source is known, as is the distance between the energy source and the geophone on the present-day surface. This is sufficient information to resolve the location of structures below the present-day surface. If the distance between the source and receiver is not known, there is uncertainty in the location of the subsurface. Density: one of the rock properties that affects the velocity of the waves is density. One of the things that seismic is very good at is capturing distinct changes in wave velocity, and wave velocity is a function of density. Very dense rock has higher wave velocities than less dense rock. The density of rock is a result of the rock type, the porosity, the contents of the pore spaces (water is nearly incompressible and thus transmits energy waves, whereas gas is compressible and attenuates them), and the overburden pressure (the deeper you go, the greater the overburden pressure and the greater the velocity of energy waves). The density must be assumed or derived from core data. Hopefully the core information intersects all possible rock types in the area of study. Imagine a shale (of highly variable density) in the middle of the deposit yet unaccounted for in the well data; the results would be detrimental to the interpretation of the subsurface. Oil, water, and gas have very different densities, and hence oil/water and oil/gas interfaces are easily distinguished. Human Interpretation: there are elaborate algorithms for deciphering seismic information, but human interaction is still required. The decisions made by humans have uncertainty embedded in them. There is no way of knowing the truth.

The uncertainties derived from velocity can permeate every aspect of a geostatistical reservoir study. The result is uncertainty in the structural surfaces used for modeling the reservoir. Some of the repercussions include:


Gross Volume Estimates: there is a need for accurate estimates of the original volume of hydrocarbon in the reservoir. These estimates are used for determining the economic viability of the reservoir, comparing the economic merits of other ventures, determining the appropriate size of production facilities, and more. Uncertainty in velocities translates to uncertainty in the location, extent, and magnitude of the reservoir in question, which in turn translates to uncertainty in volume estimates. Flow Simulation: the prediction of reservoir performance under different production scenarios is a critical step in reservoir evaluation. The optimal number and location of production/injection wells can be significantly affected by even one shale structure (an impedance to flow). Velocity uncertainty can make important structures indiscernible and thus significantly impact flow simulation. 3-D Connectivity: recovery strategies strive for the greatest possible connectivity between producing structures. Velocity uncertainty can hide important structures that might impact well location and the value of infill wells.

Surface Based Modeling


There are a few geostatistical methods for simulating surfaces that use well data for conditioning and seismic data as soft data. Keep in mind that the purpose behind surface-based modeling is not to get the surface per se, but to define the sediment packages. One method models the geology as simple parametric surfaces. The approach is to simulate construction of the reservoir in the same way that it was created: as a series of depositional events. Reservoirs are characterized by large-scale geologic events. Some of these events are depositional and some are erosional, but the interface for each of these events marks a time surface. In the case of a depositional environment, the material between the base time surface and the top surface indicates a period of homogeneous deposition. That is, the sediment package sandwiched between the top and base surfaces contains material that is homogeneous in terms of its genesis and hence its petrophysical features. These large-scale events occur over long periods of time. During the large-scale events, small-scale events occur within the time surfaces of the large-scale events. Figure 7.2 shows that a reservoir is constructed in a hierarchical manner; large-scale features followed by small-scale features, each bound by a time surface.

Figure 7.2 An illustration of the hierarchical nature of a typical reservoir. Large-scale time surfaces are usually visible with seismic data, and sometimes small-scale time surfaces are discernible, but this is not always the case. Sometimes there is no good seismic data and the time surfaces are not available. The goal behind surface modeling is not to get the surfaces, but to provide constraints for modeling facies and petrophysical properties. Therefore, the constraints that define a surface can include not just time but other properties such as grain size and trends (such as fining/coarsening upward). Once these surface parameters have been defined, the properties within the sediment package, such as permeability or porosity, can be modeled bound by


surfaces. The method alluded to earlier, the use of simple parametric surfaces to mimic geologic time surfaces, is particularly useful for modeling both large- and small-scale geologic events. Figure 7.3 illustrates a simple parametric surface and the parameters for its definition.

Figure 7.3 A simple parametric surface used for simulation of geologic event surfaces. The surface is parameterized by a center point (x0, y0), an inner and outer radius tolerance, a length tolerance, an orientation angle, and a surface height. The simulated surfaces are parameterized by a center point (x0, y0), an inner and outer radius tolerance for the surface, a length tolerance, an orientation angle, and a maximum surface height. The center point is used by the algorithm to determine the location of the simulated surface. The inner and outer radius tolerances are user-defined and constrain the aspect of the surface to those found in the reservoir. The length and height of the surfaces are also entered by the user as tolerances so that the dimensions of the surfaces may be limited to those found in the reservoir. The angle of orientation


allows the user to orient the surface to the direction of deposition indicated by seismic or core data. Geologic surfaces are rarely smooth in reality so another parameter called undulation is added so that the surface better approximates reality. Figure 7.4 illustrates the concept of undulation.

Figure 7.4 The parametric surface after undulation has been added. The model is built surface by surface, and each surface is deposited on top of the existing surfaces (if there are any) using the following protocol: 1. The central location of the new surface (x0, y0) is selected stochastically. The distribution used for selection of the location is derived from the distribution of possible locations given the thickness of the reservoir. At the beginning of the simulation, all locations have the same probability of selection, but as the simulation continues the reservoir builds up thickness and there are fewer permissible surfaces that will comply with the selection and conditioning criteria. 2. The length of the surface X is randomly selected from a triangular pdf whose minimum and maximum parameters are user selected. 3. The inner and outer widths, the height of the surface, and the orientation of the surface are selected from triangular distributions with parameters provided by the user.


4. The surface is dropped onto the reservoir. Any existing surfaces will truncate the new surface. Figure 7.5 shows the dropping principle in action.

Figure 7.5 The dropping principle used in the simulation of surfaces. 5. Condition the surface to the data. All of the surfaces are dropped to the same datum as indicated in figure 7.5. There are two solutions if the surface does not conform to the intersections provided by the well data: (1) raise the surface to meet the intersection, and (2) lower the surface to meet the intersection. If the surface is raised, it could be rejected if it is too short and will not be truncated by


existing surfaces. Instead, the surface is lowered to the intersection, as in Figure 7.6.

Figure 7.6 Conditioning of the surface to a single well data.


6. Repeat until the reservoir is fully populated.
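Steps 1 through 3 of the protocol reduce to stochastic draws from user-specified triangular distributions. A minimal sketch follows; every range below is an invented placeholder, not a recommended setting:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed areal extent of the model and number of surfaces to draw.
nx, ny, n_surfaces = 100.0, 60.0, 5

surfaces = []
for _ in range(n_surfaces):
    surf = {
        "x0": rng.uniform(0.0, nx),                  # step 1: stochastic center point
        "y0": rng.uniform(0.0, ny),
        "length": rng.triangular(20.0, 35.0, 60.0),  # step 2: length X from a triangular pdf
        "inner_r": rng.triangular(2.0, 4.0, 8.0),    # step 3: inner radius tolerance
        "outer_r": rng.triangular(10.0, 15.0, 25.0), # step 3: outer radius tolerance
        "height": rng.triangular(0.5, 1.0, 2.0),     # step 3: maximum surface height
        "azimuth": rng.triangular(40.0, 45.0, 50.0), # step 3: orientation angle (degrees)
    }
    surfaces.append(surf)
```

Each dictionary is one candidate surface ready for the dropping and conditioning steps (4 and 5); in a full implementation the center-point distribution would also be updated as the reservoir builds up thickness.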

Figure 7.7 shows an example of simple parametric surface simulation.

Surface Flapping
Surface flapping is synonymous with surface uncertainty. Figure 7.8 shows surface flapping. The pink vertical lines are well data to which the surfaces must be conditioned. The light blue horizontal line is the gas-oil contact and the pink horizontal line is the oil-water contact. The dark blue lines are top surface realizations and the green lines are bottom surface realizations. There is uncertainty in the true location of the top and bottom surfaces everywhere except at the wells. The blue and green lines illustrate the extent of uncertainty about these surfaces. The green lines do not flap as wildly as the blue lines. There is

sound reasoning for this. Surface uncertainty cannot be assessed independently. Once the uncertainty in the top surface with respect to the present-day surface has been established, all remaining surfaces will have less uncertainty. The uncertainty in the remaining surfaces is accounted for in the uncertainty of the distance between layers. One could imagine modeling each of the surface uncertainties independently, but in doing so negative volumes could be created (the surface lines could cross). Also, the distribution of thickness would be ignored. In those locations where the top and bottom surfaces cross there would be zero thickness, yet the seismic-derived distribution of thickness might suggest a very low probability of zero thickness. This is why the top surface uncertainty is modeled first, conditioned to the uncertainty of the surface with respect to the present-day surface, and all remaining surfaces are modeled conditional to the uncertainty of the thickness between surfaces.

Figure 7.8 A diagram of surface flapping (uncertainty). Note that velocity uncertainty and how this uncertainty relates to depth / surface determination is not considered here. This is a simplified model meant only to illustrate surface uncertainty with respect to the well data. Assessing the uncertainty in surfaces is important for the determination of pore volume and hence predicted oil in place volumes. For example consider the calculation of the gross pore volume:
Pore Volume = Gross Rock Volume * Net-to-Gross Ratio * Net Porosity


The net-to-gross ratio and net porosity are inferred from the well data, available seismic data and geological interpretations. There are uncertainties existing in the determination of the net-to-gross ratio and the net porosity due to limited well data and uncertainty in the calibration of soft seismic and geological data. Uncertainties in all factors propagate to uncertainty in the final calculation of pore volume. The uncertainty in pore volume is a function of the multivariate distribution of the three contributing factors: GRV, net-to-gross ratio, and net porosity. Inference of this multivariate distribution is difficult due to the poorly known dependencies such as the relationship between porosity and surface interpretation. A particular model of this multivariate distribution can be built assuming that the three factors are independent. We will adopt such a model. The distributions of uncertainty in the three controlling variables must be determined. The top and bottom surfaces will be stochastically modeled to quantify the distribution of uncertainty in the GRV. This modeling is guided by well data and the best estimate of the surface from seismic. The uncertainty of the average net-to-gross ratio and the net porosity are determined by bootstrap resampling from the best distribution that can be inferred from limited well data and supplementary seismic and geologic data. The gross rock volume is the reservoir volume above the oil/water contact (OWC) constrained by the top and bottom surfaces of reservoir. A gas-oil contact is needed for reservoirs with gas. Figure 7.9 shows a cross section view of a reservoir.

Figure 7.9. A cross-section of a hypothetical oil reservoir. The reservoir is constrained by top and bottom surfaces (black curves). The OWC is represented by a red horizontal line and the gas-oil contact (GOC) is denoted by a green horizontal line. The oil-containing volume of the reservoir is the portion of the reservoir constrained by both top/ bottom surfaces and the OWC/GOC levels (light blue shaded area),

whereas the gas containing volume is the portion of reservoir constrained by top/bottom surfaces and above the GOC level (pink shaded area). The oil and gas containing volumes of the reservoir are of economic significance. The volume below the hydrocarbon contacts is also of importance in accounting for historical production data and in the prediction of aquifer drive. Once the distributions of the three factors are available, the uncertainty of pore volume is estimated by Monte Carlo simulation. Values of gross rock volume, net-to-gross ratio, and net porosity are sampled by Monte Carlo, and the value of pore volume is calculated. The procedure may be repeated many times and the distribution of pore volume is thus estimated. Figure 7.10 shows the surface uncertainty given three wells.

Figure 7.10 The bottom right hand image is an unconditioned surface section of a hypothetical oil reservoir. All other images are the result of multiple simulations with conditioning data.
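The Monte Carlo procedure for pore-volume uncertainty can be sketched as follows. The three input distributions are invented stand-ins for the surface-derived GRV distribution and the bootstrap distributions of net-to-gross and porosity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of Monte Carlo draws

# Illustration-only distributions; in practice GRV comes from stochastic
# surface modeling and NTG/porosity from bootstrap resampling of well data.
grv = rng.lognormal(mean=np.log(5.0e8), sigma=0.15, size=n)  # gross rock volume, m^3
ntg = rng.triangular(0.4, 0.55, 0.7, size=n)                 # net-to-gross ratio
phi = rng.normal(0.18, 0.02, size=n).clip(0.01, 0.35)        # net porosity

# The three factors are assumed independent, as in the text.
pore_volume = grv * ntg * phi

# Summarize the uncertainty with low/mid/high percentiles (P10/P50/P90).
p10, p50, p90 = np.percentile(pore_volume, [10, 50, 90])
```

The spread between the P10 and P90 pore volumes is the quantity carried forward to economic evaluation; each draw applies the formula Pore Volume = GRV * NTG * Net Porosity.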


Fault Handling
Faults can be considered as surfaces. Faults, in comparison to the surrounding rock, have no appreciable thickness and are thus appropriately modeled as surfaces. One can imagine modeling faults in the same fashion as indicated above: model a surface conditioned to well data and to the seismic distribution of uncertainty regarding the location of the fault. As with the aforementioned surfaces, fault surface uncertainty bears significant importance to reservoir characterization and flow simulation. This is especially the case because all reservoirs have undergone structural deformation and have faults/joints/deformation bands, stress-release features which can (1) enhance permeability, (2) decrease permeability as a result of cements filling fractures and disconnecting once-connected structures, and (3) occur at all scales. Small-scale faults or fault sets are usually of low priority because (1) the big faults are the most important and are identified by seismic, (2) small-scale faults are poorly understood and their input statistics (spacing, density, size, characteristics) are very difficult to infer, and (3) small-scale fractures can be approximately handled by effective flow properties. In general, faults are difficult to work with and present problems that are beyond the scope of this lesson. Faults tend to disappear and reappear, and have significant discontinuities. These issues are difficult to deal with in geostatistics. Normal faults are generally handled by a simple coordinate transformation, but this may not be an appropriate solution in all cases; the model may have to be broken up into smaller pieces where this assumption is valid.


Lesson 8: Seismic Data Integration


Introduction
Data integration is a fundamental principle of geostatistics and reservoir modeling. Its goal is to explicitly account for all of the available data. Seismic provides abundant data compared to sparse well data. Seismic data, however, does not give any directly usable information and must be calibrated to the limited well data. There are numerous techniques that permit the direct use of seismic data, such as cokriging and collocated cokriging. There are also many methods for determining the uncertainty in estimated maps, such as cosimulation and collocated cosimulation. Other simulation alternatives include annealing.

Calibration of Data
Usually the most difficult step in a geostatistical study is finding a relationship between the well parameter of interest and some aspect of the seismic data (Wolf et al., 1994). This step is handled quite elegantly with DecisionSpace's Well Seismic Fusion application. A relationship between seismic data and well data is often quite difficult to infer, and once found it is typically summarized as a simplistic linear regression; none of the scatter is preserved. Figure 8.1 shows a calibration between sand thickness and a seismic attribute. This is a simplistic approach, and is not always appropriate for some petrophysical properties such as permeability.

Figure 8.1


Sometimes there are too few data to infer a relationship, and the user must infer one based on previous experience or analogue data. Figure 8.2 shows a scenario where there are too few data to infer a useful relation between porosity and depth. The depth is exhaustively known throughout the reservoir as a result of seismic interpretation, yet there are only five well data from which to infer a relation between porosity and depth.

Figure 8.2 Instead of abandoning the calibration, a relationship has been manually inferred (the grey ellipses) from analogue data retrieved from a reservoir that is similar. The proposed methodology is to sum the inferred distributions of porosity conditional to the depth, yielding an inferred global distribution for porosity. In terms of probabilistic notation:

(8.1) where C is a normalizing constant.


In summary, the calibration is performed by: (1) mapping the secondary variable X at all locations, (2) developing a bivariate relationship between X and the variable of interest Y, and (3) generating a distribution of Y by combining the conditional distributions. Beware that the user must always justify the calibration routine.
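The three-step calibration can be sketched numerically: map X (here, depth) everywhere, assume conditional distributions of Y (porosity) given X from analogue data, and sum them into a global distribution as in equation (8.1). All numbers below are hypothetical:

```python
import numpy as np

# Step 1: the secondary variable (depth) mapped everywhere, here reduced
# to a few representative depths and their areal proportions (assumed).
depths = np.array([1500.0, 1520.0, 1540.0, 1560.0])
weights = np.array([0.1, 0.4, 0.3, 0.2])   # fraction of reservoir at each depth

phi_grid = np.linspace(0.0, 0.4, 401)      # porosity values at which to evaluate

# Step 2: assumed analogue relation - porosity | depth is normal with a
# mean that decreases with depth (purely illustrative numbers).
def conditional_pdf(phi, depth):
    mean = 0.30 - 1.0e-3 * (depth - 1500.0)
    sd = 0.03
    return np.exp(-0.5 * ((phi - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Step 3 / equation (8.1): sum the conditional distributions, weighted by
# how much of the reservoir sits at each depth, then normalize (constant C).
global_pdf = sum(w * conditional_pdf(phi_grid, d) for w, d in zip(weights, depths))
global_pdf /= global_pdf.sum() * (phi_grid[1] - phi_grid[0])
```

The result is an inferred global porosity distribution that reflects both the exhaustively known depth map and the analogue-derived conditional scatter.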

Cross Spatial Variability


In the univariate case (one variable), the model variogram must be a combination of variogram models that are known to be permissible, so that the solution to the system of kriging equations is unique. As well, the model variogram that defines the spatial correlation in 3-D must be modeled in such a way that each variogram structure is of the same type and has the same contribution in all directions; only the range may change for each structure in each direction. In the multivariate case, there is a primary variable of interest, a secondary correlated variable, and an equivalent requirement called the linear model of coregionalization. Just as the univariate variogram is constructed from permissible structures, the linear model of coregionalization provides a method for modeling the auto- and cross-variograms of two or more variables so that the variance of any possible combination of these variables is always positive. This ensures a unique solution to the cokriging system of equations. The criteria are: (1) the determinant must be greater than zero, and (2) all diagonals must be greater than zero.

The linear model of coregionalization is a technique that ensures that estimates derived from cokriging have a positive or zero variance.
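For two variables and a single variogram structure, the permissibility criteria stated above (positive diagonals, positive determinant) can be checked directly. A small sketch with invented sill values:

```python
import numpy as np

def lmc_is_permissible(b11, b22, b12):
    """Check the two-variable linear-model-of-coregionalization criteria
    for one variogram structure: positive diagonals, positive determinant."""
    B = np.array([[b11, b12],
                  [b12, b22]])
    diagonals_ok = np.all(np.diag(B) > 0.0)
    determinant_ok = np.linalg.det(B) > 0.0
    return bool(diagonals_ok and determinant_ok)

# Hypothetical sill contributions: auto-sills 1.0 and 0.8.
print(lmc_is_permissible(1.0, 0.8, 0.5))   # True:  1.0*0.8 - 0.5**2 = 0.55 > 0
print(lmc_is_permissible(1.0, 0.8, 1.2))   # False: 1.0*0.8 - 1.2**2 < 0
```

In a full linear model of coregionalization the same check is applied to the coefficient matrix of every nested structure; if any structure fails, the cross-variogram model must be revised.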


Cokriging
Often there is more than one type of available data, and often these data are correlated. Also, the available data are not always sampled at the same density; core samples and seismic data are good examples, because coring samples only a small volume of the population whereas seismic can sample nearly the entire volume. When there is more than one type of available data, the data of interest is called the primary data and all other data are called secondary data. In kriging, the spatial correlation of a single data type is used to make the best estimate at locations where there is no data. Cokriging differs from kriging in that it uses both the spatial correlation of the primary data and the cross-correlation between the primary and secondary data to fortify the estimate of the primary variable. Cokriging is particularly useful when there are fewer primary data than secondary data.

Colocated Cokriging
The cokriging system of equations becomes unstable when the secondary variable is sampled much more densely than the primary variable, because the closely spaced secondary data are more strongly correlated with one another than the distant primary data are with each other. Colocated cokriging is a reduced version of cokriging that retains only the colocated secondary variable, which must be available at all locations u being estimated. The colocated cokriging estimator is written:

(8.4) The requirement that the secondary variable be known at all locations is not as limiting as one might think. Typically the secondary data is seismic, and seismic data is often available at all required locations; if not, it is easy enough to estimate or simulate the attribute at all locations.
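The body of equation (8.4) was lost in extraction; the colocated cokriging estimator is conventionally written as follows, where m_Z and m_Y denote the stationary means of the primary and secondary variables (a hedged reconstruction):

```latex
Z^{*}(u) - m_{Z} \;=\; \sum_{\alpha=1}^{n} \lambda_{\alpha}\,\bigl(Z(u_{\alpha}) - m_{Z}\bigr)
\;+\; \lambda_{Y}\,\bigl(Y(u) - m_{Y}\bigr)
```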


Simulation Alternatives
Annealing Annealing is an optimization algorithm based on an analogy with the physical process of annealing. The central idea behind simulated annealing is an analogy with thermodynamics, specifically with the way liquids freeze and crystallize, or metals cool and anneal. At high temperatures the molecules can move freely. As the temperature is slowly lowered, the molecules line up in crystals which represent the minimum energy state for the system (Deutsch, 1999). Here, the objective of the system is to get the energy as low as possible: the molecules all become optimally oriented so that the energy of the system is as small as possible. In simulated annealing the goal is to optimize some objective function. Instead of temperature, simulated annealing uses a probability of acceptance. For example, before a liquid freezes, the molecules are free to move anywhere. As the liquid cools toward freezing, the probability of the crystal accepting a molecule's move decreases. When the liquid becomes solid there is an even lower probability of the crystal accepting a move, but it is never zero. In simulated annealing, nodes are perturbed, that is, assigned a new value or moved to a different location in space. This is the analogue of molecules mobilizing to prepare for solidification. The simulated annealing algorithm calculates the energy of the system including the newly perturbed node and compares the value of the new objective function to the value of the objective function before the perturbation. If the perturbation leads the model toward optimality, it is accepted. If the perturbation moves the model away from optimality, it may be accepted or rejected based on the probability of acceptance (the temperature of the model). If the temperature is high, there is a high probability of acceptance; if it is low, there is a low probability of acceptance. Perturbations are repeated until some stopping criterion is met, such as the objective function being met or too many perturbations.
The algorithm chooses the nodes for perturbation randomly, so it is possible to create multiple realizations. The algorithm can be summarized as follows: 1. Establish an initial guess that honors the data by assigning a value to each cell, drawing from the conditional distribution of the petrophysical property.

D-98

Lesson 8: Seismic Data Integration

R2003.2.0.1

Landmark

DecisionSpace Immersion

2. Calculate the initial objective function: a numerical measure of the mismatch between the desired objective and the initial guess.
3. Consider a change to the model: randomly choose a non-data cell and draw a new candidate value for the petrophysical property from the conditional distribution.
4. Evaluate the new objective function:
better? - accept the change
worse? - consider the temperature and possibly reject the change

5. Is the objective function close enough to zero? Have there been too many swaps?
yes - done
no - go to step 3
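Steps 1 through 5 above can be sketched as a generic annealing loop. The code below is a minimal illustration, not the DecisionSpace implementation: the cooling schedule, the two candidate facies values, and the toy one-lag-correlation objective are all assumptions chosen for the example.

```python
import math
import random

def anneal(values, candidates, objective, t0=1.0, cooling=0.995,
           n_iter=5000, tol=1e-6):
    """Generic simulated-annealing loop following steps 1-5 above."""
    current = objective(values)
    t = t0
    for _ in range(n_iter):
        i = random.randrange(len(values))      # step 3: pick a random cell
        old = values[i]
        values[i] = random.choice(candidates)  # propose a new value
        new = objective(values)
        # step 4: always accept an improvement; accept a worsening move
        # with probability exp(-(new - current) / t), the "temperature"
        if new <= current or random.random() < math.exp(-(new - current) / t):
            current = new
        else:
            values[i] = old                    # reject the perturbation
        t *= cooling                           # slowly cool the system
        if current < tol:                      # step 5: close enough to zero?
            break
    return values, current

# Toy objective: honor a target one-lag correlation in a 1-D facies model.
def lag1_mismatch(v, target=0.6):
    n = len(v)
    mean = sum(v) / n
    var = sum((x - mean) ** 2 for x in v) / n
    if var == 0.0:
        return 1.0
    cov = sum((v[k] - mean) * (v[k + 1] - mean) for k in range(n - 1)) / (n - 1)
    return (cov / var - target) ** 2

random.seed(0)
start = [random.choice([0.05, 0.25]) for _ in range(100)]  # step 1: initial guess
start_obj = lag1_mismatch(start)                           # step 2
model, final_obj = anneal(start, [0.05, 0.25], lag1_mismatch)
```

Because the cells to perturb are drawn at random, rerunning with a different seed yields a different realization that honors the same objective.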

Annealing seems like a very attractive technique, but it does have issues. It is difficult to implement because there are so many parameters, and choosing too many objectives, or the wrong ones, can make the problem intractable. It can also take a significant amount of time to converge. Despite these issues, simulated annealing is a robust simulation algorithm with a variety of applications.

An example showing simulated annealing in action is shown in Figure 8.3. The figure shows a starting image and the objective, the variogram. Initially, the starting image is a random array of the lithofacies in the correct proportions, and the variogram reflects that randomness. Halfway through the algorithm, the image starts to take shape and the variogram starts to be honored. The final image shows the simulated structure and the honored variogram after the objective, honoring the variogram, has been met.

Figure 8.3 The figure shows the results from simulated annealing at three different stages: initial, halfway, and the final image. The objective is to honor the variogram shown at the bottom.

Final Thoughts
Variograms are rarely isotropic; geologic continuity, and therefore variogram continuity, is direction-dependent. In sedimentary structures, continuity in the vertical direction is typically less than in the horizontal direction. Moreover, horizontal continuity depends on the direction of deposition and subsequent diagenetic alteration. A critical first step is to identify the vertical direction: it is perpendicular to the time-stratigraphic correlation and often has the least continuity. The horizontal anisotropy in petroleum reservoirs is usually defined by a single angle that identifies the major and minor horizontal directions of continuity; the vertical direction is then assumed perpendicular to the horizontal directions. The main use of the variogram map is to detect the major and minor directions of continuity. For reservoir modeling, geostatistics faces a unique problem: most wells (particularly exploration wells) are vertical. This makes it straightforward to infer a vertical variogram, but difficult to infer a reliable horizontal variogram.
NOTE:
The variogram map will be very noisy and of little use in the presence of sparse data. This is precisely the case when the directions of anisotropy are poorly understood. In fact, the single biggest problem in variogram interpretation is a lack of data to calculate a reliable variogram; there is too little to interpret. The use of analogue data and familiarity with other reservoirs of similar depositional setting is critical to make up for too few data. You must also be aware of clustered data: most wells are drilled on the top of the structure, in areas believed to be of high reservoir quality, leaving little data on the flanks of the structure.

Virtually all geological processes impart a trend in the petrophysical property distribution, for example, fining- or coarsening-upward vertical trends or the systematic decrease in reservoir quality from proximal to distal portions of the depositional system. Such trends cause the variogram to show a negative correlation at large distances. If the data show a systematic trend, that trend must be modeled and removed before variogram modeling and geostatistical simulation; it is added back to the estimated or simulated values at the end of the study.
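The remove-the-trend, simulate-the-residuals, add-the-trend-back workflow can be sketched in a few lines. The linear-in-depth trend model and the porosity numbers below are illustrative assumptions, not values from any particular reservoir.

```python
def fit_linear_trend(depths, values):
    """Least-squares line v = a + b*z (the systematic vertical trend)."""
    n = len(depths)
    zm = sum(depths) / n
    vm = sum(values) / n
    b = (sum((z - zm) * (v - vm) for z, v in zip(depths, values))
         / sum((z - zm) ** 2 for z in depths))
    a = vm - b * zm
    return a, b

depths = [0.0, 1.0, 2.0, 3.0, 4.0]
porosity = [0.30, 0.27, 0.25, 0.22, 0.20]   # systematic decrease with depth

a, b = fit_linear_trend(depths, porosity)
# remove the trend before variogram modeling / simulation ...
residuals = [v - (a + b * z) for z, v in zip(depths, porosity)]
# ... model and simulate the residuals, then add the trend back at the end:
restored = [r + (a + b * z) for r, z in zip(residuals, depths)]
```

Adding the fitted trend back to the residuals recovers the original values exactly, which is the consistency check to run before trusting the detrended simulation.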


Appendix E

Basic Flow Analysis

Reservoir simulation is a process where a computer program is used to model fluid movement within a reservoir. Its two main components are building the reservoir model and defining the wells and how they will be operated. The simulator then models reservoir performance over specified time increments. This section covers basic reservoir mechanics and the fundamentals of reservoir simulation.

Topics covered in this chapter:


Basic reservoir mechanics
Fundamentals of reservoir simulation

Basic Reservoir Mechanics


Fluids flow through the pores of a reservoir rock and into a well because of pressure. This pressure is called the reservoir drive. There are several different types of reservoir drives, and every oil reservoir displays at least one of them.
Dissolved-Gas Drive
Free-Gas Cap Expansion Drive
Water Drive
Gravity Drive
Combination Drive
Gas Reservoirs

Dissolved-Gas Drive
At depth, oil has a significant amount of natural gas dissolved in it. A Dissolved-Gas Drive reservoir (also called Solution-Gas or Depletion Drive) is driven by this dissolved gas. When a well is drilled into a reservoir and production begins, gas dissolved in the oil phase at reservoir temperature and pressure is liberated as pressure declines. Some oil moves with the gas toward the production wells as the gas expands and moves to the lower pressure zones. This type of reservoir shows a rapid drop in reservoir pressure and production rate as fluids are produced. These types of wells usually require pumping at an early stage in their life.

Reservoir Characteristics
Reservoir Pressure: Declines rapidly and continuously
Surface Gas-Oil Ratio: First low, then rises to a maximum and then drops
Water Production: None
Well Behavior: Requires pumping at an early stage
Expected Oil Recovery: 5% to 30% of original oil in place

Free-Gas Cap Expansion Drive


A gas cap is a large volume of gas at the top of a reservoir. When production wells are completed in the oil zone below the gas cap, a drop in pressure causes gas to move from the higher pressure cap region down toward the producing wells. The gas movement drives oil to the wells. Recoveries as high as 60% can occur in steeply dipping reservoirs with enough permeability to allow oil to drain to downstructure production wells.

Reservoir Characteristics
Reservoir Pressure: Falls slowly and continuously
Surface Gas-Oil Ratio: Rises continuously in upstructure wells
Water Production: Absent or negligible
Well Behavior: Long flowing life depending on size of gas cap
Expected Oil Recovery: 20% to 40% of original oil in place


Water Drive
The most effective drive mechanism is Water Drive. In a Water Drive reservoir, water displaces oil as oil flows to production wells. (The expansion of water adjacent to or below the reservoir drives oil production.) This type of drive can maintain almost constant reservoir pressure and oil production throughout the life of a well. An effective reservoir management strategy for a water drive reservoir is to balance oil withdrawal with the rate of water influx. The well goes to water when the expanding water reaches the well, and the amount of water produced from the well shows a sharp increase.

Reservoir Characteristics
Reservoir Pressure: Remains high
Surface Gas-Oil Ratio: Remains low
Water Production: Starts early and increases to appreciable amounts
Well Behavior: Flows until water production gets excessive
Expected Oil Recovery: 35% to 75% of original oil in place


Gravity Drive
In a Gravity Drive reservoir, gravity causes oil to migrate upward by pulling the heavier water down beneath it. The weight of the oil column causes the oil to flow into the well. A thick oil column and a permeable reservoir are required for this mechanism to be efficient. The rate of recovery from a gravity drive reservoir is usually low compared to the other drives. It can, however, be effective in shallow, highly permeable, steeply dipping reservoirs.

Combination Drive
Combination Drive reservoirs are oil reservoirs that display several reservoir drive mechanisms. The most efficient reservoirs have a Combination Drive system of free-gas and water drives. In this case, the two drives push the oil into the well from both above and below.

Gas Reservoirs
Although the above comments apply to oil reservoirs, similar drives apply to gas reservoirs. Water drive and gas expansion with pressure depletion are most common. Recovery can be as high as 70% to 90% of original gas in place due to the relatively high mobility of gas.

Fundamentals of Reservoir Simulation


Reservoir simulation is the process of modeling fluid movement within a reservoir. It typically involves building one or more reservoir models, defining wells, and specifying how the wells will be operated. The simulator then models reservoir performance over specified time increments. Reservoir simulation attempts to answer the following questions:
1. How big is the reservoir (bulk rock volume, BRV)? This includes the areal extent, structure, and thickness as determined by seismic data and well calibration. This question is not directly answered by simulation, but rather by whatever pre-simulation software is used to build the simulation inputs (perhaps PowerModel). (Basic equation: BRV = Thickness * Area)
2. How much fluid is in the reservoir (pore volume, PV)? This is defined by the BRV combined with the porosity distribution. This question is not directly answered by simulation either; like BRV, PV is estimated by the pre-simulation software used to build the simulation inputs. (Basic equation: PV = Porosity * BRV)
3. How much of the reservoir fluid is hydrocarbon (hydrocarbon pore volume, HCPV)? This is the portion of the reservoir that is filled with recoverable oil or gas. It depends to a large extent on the fluid contacts, and to a lesser extent on petrophysical properties. This question is directly answered by reservoir simulation. (Basic equation: HCPV = (1 - Water Saturation) * PV)
4. How fast can the hydrocarbon be produced? This question is directly answered by reservoir simulation and is a function of rock properties, fluid properties, petrophysical properties, initial pressure, saturation pressure, aquifer properties, the number of wells, and well properties and constraints. This question is the driving motivation behind the need for simulation, as reservoir simulators allow you to assign "best guesses" for each of these variables and quickly run multiple simulations.
5. What are the costs and risks of development and production? This question is best answered by business management system software, such as DecisionSpace.
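The three basic equations in questions 1 through 3 chain together directly. A minimal sketch, with hypothetical field numbers used only as placeholders:

```python
def bulk_rock_volume(area, thickness):
    """BRV = Thickness * Area."""
    return thickness * area

def pore_volume(brv, porosity):
    """PV = Porosity * BRV."""
    return porosity * brv

def hcpv(pv, water_saturation):
    """HCPV = (1 - Water Saturation) * PV."""
    return (1.0 - water_saturation) * pv

# hypothetical field: 2 km^2 areal extent, 30 m average thickness
brv = bulk_rock_volume(area=2.0e6, thickness=30.0)   # m^3
pv = pore_volume(brv, porosity=0.22)                 # m^3
oil_in_place = hcpv(pv, water_saturation=0.35)       # m^3
```

Each answer feeds the next: the pre-simulation model supplies BRV and PV, and the simulator works with the resulting HCPV.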

The traditional reservoir simulation process can be broken into the following four steps:
1. Build an initial reservoir model
2. Define wells and how they operate
3. Run the simulation
4. Analyze the results

Step 1: Build an initial reservoir model


Reservoir simulation begins by building an initial representation of the reservoir at a time before any production has occurred. At this point you are only concerned with how big the potential reservoir is, that is, its bulk rock volume (BRV). To do this you delineate the reservoir by determining its areal extent, structure (anticline, etc.), and thickness. This step is important, as it determines whether your reservoir is big enough to justify modeling or simulation at all. Seismic is the primary data source, but well data may be used for calibration.

Once you have a good understanding of how big the "box" is, you are ready to define its internal structure. This layering sequence is primarily the result of stratigraphic correlations. The model may be unfaulted, or display simple or complex faulting. It may be unfractured or naturally fractured. Data sources:
Maps
Seismic Data
Well Data

Now you are ready to define simulation gridblocks. These are the elements over which simulation calculations are performed. Within a layer, gridblocks do not have to be uniform, but each gridblock has 6 faces, and each face has 4 sides. The result is a volumetric representation of the reservoir which honors the layering and geology.

At this point, reservoir properties (porosity, permeability, etc.) are geostatistically assigned to each gridblock. The goal is the creation of a detailed numerical 3-D model that simultaneously honors a wide range of relevant geological, geophysical, and engineering data of varying degrees of resolution, quality, and certainty. This data may come from well logs, seismic attributes, and analogs. Data sources:
Seismic Data
Well Data

Analogs

To complete the reservoir model, you (mathematically) fill the reservoir with fluid by specifying a pre-production (time = 0) hydrocarbon contact. The hydrocarbon might be heavy oil, black oil, volatile oil, condensate, wet gas, or dry gas. The position of this contact to a large extent controls how much hydrocarbon can ultimately be produced. One early source of uncertainty is that you may not have enough data to confidently identify the location of this contact. Data sources:
Well Data
Core Data

Special tests, correlations, etc.

The examples below show the same geologic model, but with different oil/water contacts (OWC). The reservoir with the deeper OWC obviously contains much more oil.
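The effect of the contact depth on in-place volume can be sketched directly: sum the hydrocarbon pore volume of the cells above the contact. The 20-layer column and all its properties below are hypothetical, chosen only to make the comparison concrete.

```python
# hypothetical column of 20 layers from 1500 m to 1700 m:
# (top_depth_m, thickness_m, area_m2, porosity, water_saturation)
cells = [(1500.0 + 10.0 * i, 10.0, 1.0e6, 0.20, 0.30) for i in range(20)]

def hcpv_above_contact(cells, owc_depth):
    """Sum HCPV over cells lying entirely above the oil/water contact."""
    total = 0.0
    for top, thick, area, phi, sw in cells:
        if top + thick <= owc_depth:      # cell is fully in the oil column
            total += area * thick * phi * (1.0 - sw)
    return total

shallow = hcpv_above_contact(cells, 1600.0)   # shallow OWC: 10 cells
deep = hcpv_above_contact(cells, 1680.0)      # deeper OWC: 18 cells
```

With the same geologic model, lowering the contact by 80 m sweeps eight more layers into the oil column, nearly doubling the in-place volume.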

Step 2. Define Wells and How They Operate


Once you have a completed model, you are ready to define schemes for how you might produce the reservoir. A scheme may consist of one or more wells, perhaps grouped into one or more platforms. Well planning and automatic well-targeting tools are useful here. Some wells may be defined as producers, while others may be defined as injectors. Most simulators allow you to specify and simulate a variety of processes, such as primary production, water injection, gas injection, enhanced recovery, steam injection, and polymer injection.

In the scheme below, primary depletion will be accomplished by multiple producers spaced on and around the structural high.

Step 3: Run the Simulation


Prior to being run, 3-D reservoir models are typically upscaled; that is, the data is scaled up (averaged) to a coarser resolution. Determining the appropriate scale is an important, case-specific issue. Too fine a choice leads to large models and inefficient computer use, which restricts the number of alternative scenarios and sensitivity runs that can be considered. Too coarse a choice can lead to incorrect flow simulation results due to inadequate representation of important geologic heterogeneities. Because simulators provide so much flexibility, many parameters are assigned to the reservoir model that you cannot realistically know very well. This uncertainty exists because of our lack of knowledge; it is not an inherent feature of the reservoir. Business management software systems, like DMS, can take these uncertainties into account by allowing you to draw samples from distributions of probable values. These probable values are often based on history matching with other nearby wells that have been on production for some period of time. An actual simulation run can take from minutes to days depending on the size and complexity of the model, and may be automated for multiple realizations and multiple scenarios. With even one realization you have the flexibility to model how the reservoir will produce under a number of different scenarios. A good simulator can numerically optimize fluid flow assuming a varying number of wells of various types (producers/injectors), various well locations, various recovery processes (water injection, gas drive, drainage areas), and various facilities considerations.
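In its simplest form, upscaling is block averaging of the fine-grid properties. The sketch below arithmetically averages porosity along one column; it is an illustration only. Real simulators use more sophisticated (e.g., flow-based) methods, and permeability usually calls for geometric or harmonic averaging rather than arithmetic.

```python
def upscale_porosity(fine, factor):
    """Arithmetic average of `factor` consecutive fine cells per coarse cell."""
    assert len(fine) % factor == 0, "fine grid must divide evenly"
    return [sum(fine[i:i + factor]) / factor
            for i in range(0, len(fine), factor)]

# six fine cells averaged 3:1 into two coarse gridblocks
fine = [0.10, 0.20, 0.30, 0.40, 0.25, 0.15]
coarse = upscale_porosity(fine, 3)
```

Arithmetic averaging preserves total pore volume, which is why it is the natural choice for porosity; it would be the wrong choice for permeability, where the averaging rule depends on the flow direction relative to the layering.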

Shown below is one potential simulation for a reservoir with multiple wells.

Reservoir Simulation, Predict Reservoir Performance

Primary depletion followed by water injection

As initial production depletion proceeds, a secondary gas cap forms. To maintain reservoir pressure, some wells are converted from producers to injectors. The gas cap shrinks.

The fundamental equation for calculating fluid flow rate in a porous medium is Darcy's Law. It governs much of the flow between gridblocks in a simulation run. It states that, given a piece of rock of length "L" with barriers that allow flow in only one direction, and a pressure drop across that length, the flow rate is proportional to the permeability and the pressure gradient, and inversely proportional to the fluid's viscosity. In short, the higher the permeability, the higher the flow rate; and the bigger the pressure drop, the higher the flow rate.

If two fluids are flowing together (say, not just water, but oil and water), an extra term is added to Darcy's equation for the "relative permeability" of each phase. This "fudge factor" reduces a phase's flow rate relative to what it would be if the rock were totally filled with that single phase.
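Darcy's Law with the relative-permeability multiplier can be written directly. A minimal sketch in consistent SI units; the rock and fluid values, and the kr of 0.3, are illustrative assumptions.

```python
def darcy_rate(k, area, dp, mu, length, kr=1.0):
    """Volumetric rate q = kr * k * A * dp / (mu * L), in consistent units."""
    return kr * k * area * dp / (mu * length)

# single-phase flow: k = 1e-13 m^2 (~100 mD), A = 10 m^2,
# dp = 1 MPa over L = 100 m, water viscosity mu = 1e-3 Pa.s
q_single = darcy_rate(k=1e-13, area=10.0, dp=1.0e6, mu=1e-3, length=100.0)

# same rock and pressure drop, but water flowing at kr = 0.3
q_water = darcy_rate(k=1e-13, area=10.0, dp=1.0e6, mu=1e-3,
                     length=100.0, kr=0.3)
```

The relative permeability simply scales the single-phase rate: with kr = 0.3, the water phase moves at 30% of the rate it would have if it filled the rock alone.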

For three phases, correlations based on combinations of the two-phase curves are used to calculate what happens when all three phases are flowing together. Water relative permeability is a function of the water saturation. Gas relative permeability is a function of the gas saturation. Oil relative permeability is a function of a combination of the oil relative permeability for water/oil and the oil relative permeability for gas/oil. Based on Darcy's equation, others have theoretically derived relationships between fluid flow and pressure drop, and between permeability and average grain size.

Visual Integration: 3DVIEW/OpenVision

View seismic data and simulation results
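Returning to the relative-permeability discussion above: two-phase curves of the kind simulators tabulate are often parameterized with Corey-type power laws. The endpoints and exponents below are illustrative assumptions, not defaults from VIP or any other simulator.

```python
def corey_krw(sw, swc=0.2, sor=0.2, krw_max=0.3, n=2.0):
    """Corey-type water relative permeability versus water saturation."""
    s = (sw - swc) / (1.0 - swc - sor)   # normalized mobile saturation
    s = min(max(s, 0.0), 1.0)            # clamp outside the mobile range
    return krw_max * s ** n

def corey_kro(sw, swc=0.2, sor=0.2, kro_max=0.8, n=2.0):
    """Corey-type oil relative permeability for the water/oil pair."""
    s = (1.0 - sw - sor) / (1.0 - swc - sor)
    s = min(max(s, 0.0), 1.0)
    return kro_max * s ** n

# endpoints: at connate water (sw = 0.2) water is immobile, oil is at
# its maximum; at residual oil (sw = 0.8) the reverse holds
krw_at_swc = corey_krw(0.2)
kro_at_swc = corey_kro(0.2)
```

Three-phase treatments then combine curves like these, taking the water curve from the water/oil pair, the gas curve from the gas/oil pair, and interpolating the oil curve between the two, as described above.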

Benefits of Simulation
Better Answers: Can solve problems which simpler methods cannot solve
Increased Productivity: Can be automated, with multiple realizations and multiple scenarios
Risk Management
Decreased Cycle Time: Better answers, quicker

Simulation Applications
History Matching
Evaluation of different development scenarios
Optimization ("what ifs") of:
Number of wells
Types of wells
Well locations
Recovery processes
Facilities

Types of Reservoirs VIP Simulates
Types of containers:
Simple or Complex Faulting
Unfractured or Naturally Fractured

Types of Fluids:
Heavy Oil
Black Oil
Volatile Oil
Condensates
Wet and Dry Gas

Types of Processes VIP Simulates
Primary production
Water injection
Gas injection
Enhanced recovery (e.g., CO2 flooding)
Steam injection
Polymer injection

Reservoir Management Decisions Influenced by Modeling/Simulation


How big is the reservoir (Bulk Rock Volume)?

Areal Extent
Structure
Thickness
Data sources:
Seismic Data
Well Data
(done in PowerModel)
How much fluid can be in the reservoir (Pore Volume)?

BRV
Porosity distribution
Data sources:
Seismic Data
Well Data
Analogs
(done in PowerModel)
How much oil/gas (Hydrocarbon Pore Volume)?

Fluid Contacts
Rock Properties
Petrophysical Properties
Pressure
Other properties
Data sources:
Well Data
Core Data
Special tests, correlations, etc.


(Simulation, done in DMS)


How fast can the oil/gas be produced?

Rock properties
Fluid properties
Petrophysical properties
Initial pressure
Saturation pressure
Aquifer properties
Number of wells
Well properties & constraints
Data sources:
Well Data
Core Data
Special tests, correlations, etc.
(Simulation, done in DMS)
At what costs & risks? - DMS

Basic Equations, Volumetrics
BRV = <Thickness> * <Area>
PV = <Porosity> * BRV
HCPV = (1 - <Water Saturation>) * PV

Fluid Flow: Darcy's Law, Single-Phase Flow
A relationship for the fluid flow rate q through a porous medium, expressed as:

q = (k A Δp) / (μ Δx)

where: k = permeability, A = cross-sectional area, μ = viscosity, and Δp = pressure difference across the distance Δx.

Fluid Flow, Darcy's Law: Multiphase Flow (figure placeholder)
Relative Permeability Example (figure placeholder)

Run the simulation


Traditional Simulation, Simplified View:
Take all the data; QA and rationalize the data
Build a model
Run simulations (including sensitivities)
Sometimes you might build more than one model
If history matching, make changes to your model to match past performance

Traditional emphasis on reservoir description, with trends toward:
Increasingly complex models
Increasingly large geologic and simulation models
Size of static geologic models growing faster than the ability to simulate

This Generally Means:
Dealing with a few models (or variations)
Possibly 10 - 20 scenarios
Longer simulation runs
More disk space
More memory
More processors for parallel runs
Upgridding and upscaling of complex models

Upgridding/Upscaling Example:
Initial geologic model: 200,000 gridblocks
Upscaled model: 15,000 gridblocks
3 days to simulate the big model; 20 minutes to simulate the upscaled model.

Moving Targets for Large Models, Order of Magnitude Estimates

Big Models mean Longer Simulation Runs: CPU time increases with more gridblocks.
Large Simulation Models (Gridblocks):
Early 1980s: 1,000 - 5,000
Late 1980s: 20,000 - 50,000
Early 1990s: 200,000 - 500,000
Present (Serial / Parallel): 1 - 5 Million

Large Geologic Models (Cells): ???; 100,000 - 500,000; 20 - 50 Million

Big Models Mean Larger Disk Utilization

Step 4: Analyze the Results


Simulation attempts to answer the questions, "How much of the reservoir fluid is hydrocarbon?" and "How fast can this hydrocarbon be produced?" A primary tool for answering these questions is the production plot. Based on the above simulation, a production plot might look like this:

The plot shows that initial production of oil and gas decreases quickly. At about 900 days, water injection begins, and new peaks of oil and gas production emerge. Water production also increases. At some point (after 3000 days) the water production surpasses oil production and the well "goes to water".

Reservoir production analysis


Required Simulation Plotting Capabilities in a Probabilistic World
Need statistical tools to analyze results based on parametric inputs. More generic, non-reservoir-specific software? Required capabilities:
Ability to simultaneously analyze multiple iterations
Add or remove parameter sensitivities between iterations
Crossplots with color posting of attributes
Histograms
Linked data views
Ability to subset based on parameters

History matching
A process where you match the past behavior of the reservoir:
Flow Rates
Pressures

Usually involves changes to almost any reservoir property:
Reservoir model, PVT, relative permeabilities, aquifer, fault connectivities

Not unique: just because you match the past does not mean you will predict the future.

History Matching Example: Porosity Distribution
History Matching Example: Initial Simulation Model
History Matching Example: Simulation Production Data

So What's the Problem?
1. Something wrong with the data?
2. Something wrong with the model?
3. Something wrong with the simulation?
4. All of the above?

Option 1: Change the Relative Permeabilities

Relative permeability (Krw, Krow, modified Krw) versus water saturation

History Matching Example: Simulation Production Data

Option 2: Change the Model

Insert a channel that does not intersect any well.

Reservoir Simulation Production Data

Initial History Matching Model Saturation History with Cutoffs

Early Time

Later Time

Channel History Matching Model Saturation History with Cutoffs

Early Time

Later Time

History Match Example, Which is Correct?


Relative permeability (Krw, Krow, modified Krw) versus water saturation

It depends!

Gas Injection

Appendix F

Basic Risk Analysis

This appendix describes how to convert a myriad of estimated values into a format that managers can use to achieve their investment objectives. This is critical because errors tend to compound rather than cancel out. With DecisionSpace we try to reduce the complex interaction of variables to a manageable size.

Topics covered in this chapter:


Measuring value
Measuring variability

Measuring value
All of the key performance indicators below have one thing in common: CASH FLOW. In the business world, cash is king, and if your ending bank balance is not greater than your beginning bank balance, you are in trouble. Keep in mind that when a project exceeds 5 to 10% of your company's capital budget, considerations other than simple value measurements must be discussed.

Net Present Value (NPV)
NPV provides a consistent platform for making decisions among alternative investments. Be aware that NPV gives the greatest weight to cash flows received early in the project life cycle. NPV is also biased in favor of larger projects because it does not adjust for project size.

Efficiency Measures
Some managers believe an efficiency ratio is the only project measure needed to make sound investment decisions. One such measure is NPV/Inv, which adjusts the magnitude of NPV for the project size. A project with a higher efficiency ratio is said to use capital more efficiently. This measure is most useful when comparing alternative investment decisions; it favors investments with low initial capital outlay and large NPVs. Be careful when using this measure when you are leasing or selling equipment, as the denominator becomes artificially small, producing unrealistic efficiencies. According to this measurement, selling generates revenue with no cost, so the metric is distorted.

Breakeven Discount Rate (BDR)
The BDR is also referred to as the internal rate of return (IRR): the discount rate that sets the net present value of the cash flows to zero. Normally you would accept a project when the IRR is greater than your discount rate; an IRR of 20% is often read as "the project earns 20%."
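NPV and the breakeven discount rate can be computed in a few lines. The cash-flow stream below is hypothetical, and the bisection search assumes a conventional project (one sign change in the cash flows, so NPV falls as the discount rate rises).

```python
def npv(cash_flows, rate):
    """NPV of yearly cash flows, with cash_flows[0] at time zero."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def breakeven_discount_rate(cash_flows, lo=0.0, hi=10.0, n=200):
    """BDR/IRR by bisection: the rate at which NPV crosses zero."""
    for _ in range(n):
        mid = (lo + hi) / 2.0
        if npv(cash_flows, mid) > 0.0:
            lo = mid          # NPV still positive: the root lies higher
        else:
            hi = mid          # NPV negative: the root lies lower
    return (lo + hi) / 2.0

# hypothetical project: $100MM outlay, then four years of revenue
project = [-100.0, 40.0, 45.0, 50.0, 30.0]
value = npv(project, 0.10)               # NPV at a 10% discount rate
bdr = breakeven_discount_rate(project)   # the rate where NPV = 0
```

Since the BDR here comes out above 10%, the project would normally be accepted at a 10% corporate discount rate; the same routine also makes it easy to trace the full NPV-versus-discount-rate profile discussed later.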

This measurement provides a profit indicator independent of the absolute size of the investment. Like the previous measurements, BDR is biased in favor of projects with low initial investment and early cash flows.

Discounted Payout (DPO)
The length of time (years or months) needed to recover the initial outlay of capital. This defines the point where investment risk changes in the project. It is used as a crude risk measure, as most investors prefer shorter payout times. Unfortunately, this measurement ignores cash flows after payout and thus fails to provide a metric of the total value.

Deciding Which Is the Better Project
When evaluating and comparing projects, it is very useful to look at graphs of their NPV distributions. Project X and Project Y both have the same expected value: approximately $100 million. Project X has a narrow range of possibilities, and Project Y has a wide range of possibilities. Project X doesn't have as large an upside potential as Project Y, but it also doesn't have the downside exposure. Which asset would you want in your portfolio?

NPV distributions for two competing projects

There's no one correct answer. It depends on:
Which one gives the greatest chance of meeting your financial goals? You can't know this just from looking at the probability distributions.
How robust is the analysis behind each? Maybe there is more technical analysis behind one project.
How well does it fit into the entire portfolio? One project may have different synergies than the other, and might impact the portfolio accordingly.
Are there any better alternatives? Maybe there are other projects that are better alternatives.
Can you afford the larger downside loss? If you're a small company, you might not be able to afford the large potential downside loss of Project Y. It could put you out of business.
What investment is required for each? You only know the values of each project; one might require a much larger investment than the other, impacting the project's return.
Which one is more aligned with corporate strategy?

The point is that lots of things go into this type of decision. You cannot simply look at one measure (in this case, NPV) and make a judgment. For example, when evaluating and comparing projects it is also very useful to look at their NPV signatures with respect to the discount rate, because single-value measurements do not tell the complete story about the relative risk of each project.
NPV profiles for competing projects (NPV in $MM versus discount rate for Project X and Project Y)

If your company is blessed with good cash flows and a large treasury, then Project Y is the logical selection because of its higher NPV. If your company needs immediate cash flow, then Project X is the logical selection. Project X is more robust and more attractive to risk-averse companies. If your partner is using a different risk metric, for example NPV/Inv, they may rank the projects differently in their final decision.
Accelerating the investment

Oftentimes you may make a decision to allocate funds with the objective of accelerating an investment. Yet, for any number of reasons, your decision may not achieve your objective. This is because after you make a decision, uncertainties come into play. Uncertainties are uncontrollable elements that we sometimes call luck. Different alternative decisions subject you to different uncertainties, and thus a different range of outcomes. It is important to note that good decisions do not always lead to good outcomes, and bad decisions do not always lead to bad outcomes. After all, there are a few people who actually win the lottery or win money in Las Vegas. (See example 5.7, page 169.)


Measuring variability
Types of Risk
Political Risk (strategic management team) Nationalization Terrorism Government Regulations Environmental Movements Scientific Innovations

Economic Risk (corporate economist) Price per Barrel Price per MMCF Supply and Demand Inflation and Recession

Engineering Risk (engineer) Drilling Costs (drilling engineer) Production Costs (production engineer) Production Forecast (reservoir engineer)

Geologic Risk (geologists / geophysicist) Structure Reserves (net sand, net pay) Source (must have) Migration and Maturity (basin tectonic evolution) Seal (proximity to existing seal) Petrophysical Parameters (porosity, permeability, Sw, etcetera) Subsidence


Risk vs. Uncertainty


Risk and uncertainty are often used as synonyms. In this text, however, uncertainty is used when no gains or losses depend on the outcome. Risk exists when money is gained or lost depending on the outcome. For example, there is uncertainty about whether or not there is oil under block 2/4 of the North Sea. If your company decides to drill for oil in block 2/4, there is a risk that there is no oil in the block and your company will lose several million dollars by drilling dry holes. The uncertainties are represented by continuous or discrete distributions. When insufficient data are available to develop an uncertainty distribution, a standard or generic distribution is used. These standard distributions have evolved over time. Examples include:

Normal
Lognormal
Binomial
Hypergeometric
Uniform
Triangular
Gamma
Chi-Square

Methods to Quantify Risk


Many companies stop after calculating expected returns and watching performance indicators. Some may go a little farther and do a few sensitivity studies to account for higher- or lower-than-expected prices or expenses. But few companies are concerned with their ability to determine the range of uncertainty or the probability of achieving their asset- or portfolio-level goals. This is where the real value in DecisionSpace comes from. Decision trees and simulations are the most commonly used tools to evaluate risk. When a series of decisions needs to be made downstream (throughout a project), the decision tree is similar to the Latin Hypercube sampling of simulation: it will always take the low-probability events into account. If built properly, the decision tree can handle complex dependencies. An advantage of simulations is that they can effectively use more uncertainties than decision trees.


Decision Trees

The following three decision trees evaluate a progressively more complex drilling decision.

[Figure: decision tree. The Drill branch (expected value $0.1) leads to a 30% chance of success and a 70% chance of a dry hole (-$20); success leads to outcomes of $160 (High/Optimistic, 25%), $40 (Base/Most Likely, 50%), and $28 (Low/Pessimistic, 25%). The Don't Drill branch leads to $0. Nodes are labeled as decisions, uncertainties, and outcomes.]

Figure 1: Simple Single Well Geologic Risk and Cash Flow Decision Tree

Evaluating the decision tree in Figure 1 leads to a decision to drill, because the expected Drill Net Present Value (NPV) is $0.1, while the Don't Drill NPV is $0. A spreadsheet model is used to calculate the NPV at every outcome/node of the tree. An influence diagram is useful when generating the deterministic spreadsheet model and the decision tree. A sensitivity analysis is also useful when evaluating the magnitude of risk of each uncertainty. Examples of a spreadsheet, an influence diagram, and a sensitivity analysis are shown in later sections. Oil companies must use the risked Expected Value (EV) NPV from the decision tree to run their economics, rather than the base case (median, P50) value. The EV is a single number that can represent an entire probability distribution (it is also known as the probability-weighted average).
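The rollback arithmetic behind a tree like this is a recursive probability-weighted sum. A minimal sketch follows; the payoff values are hypothetical stand-ins chosen so the rollback reproduces a $0.1 expected value (the figure's own labels are only partly recoverable, so these are not its actual numbers).

```python
# A chance node is a list of branches; each branch is (probability, value-or-subtree).
# A plain number is a terminal outcome value.
def expected_value(node):
    """Roll a decision-tree chance node back to its probability-weighted EV."""
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_value(sub) for p, sub in node)

# Hypothetical NPVs in $MM: 30% success, 70% dry hole at -$20.
success_payoffs = [(0.25, 100), (0.50, 40), (0.25, 8)]   # high / base / low
drill = [(0.30, success_payoffs), (0.70, -20)]
dont_drill = 0

best = max(expected_value(drill), expected_value(dont_drill))  # drill wins, barely
```

With these numbers the Drill EV is $0.1 versus $0 for Don't Drill, so the tree recommends drilling, but only just.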


[Figure: decision tree. The Drill branch (-$$) passes through chance nodes for Reserves (High 30%, Base 40%, Low 30%), Oil Price (25/50/25%), Well Cost (25/50/25%), and Operating Cost (25/50/25%), ending at outcome values; the Don't Drill branch leads to $0. One decision (Drill or Not), 4 uncertainties each with 3 outcomes, and 82 possible outcomes in this decision tree.]

Figure 2: More Complex Drilling Decision Tree

As before, each outcome/node in Figure 2 must be evaluated with a spreadsheet. The spreadsheet will calculate NPV based on the uncertainties from the decision tree. For gas wells, uncertainties for initial production and the decline curve must also be added. The more uncertainties added, the bigger the range of possible values and the smoother the cumulative NPV curve (or probability distribution curve). These decision trees are excellent for making field development and secondary recovery type decisions. For example, a decision to waterflood should never be made without running through this type of analysis. The tree format can easily add production scenarios and other uncertainties. Hybrid strategies are also modeled with decision trees. These trees can also display the value of information. For example, assume a company has a piece of structural pattern recognition software that can reduce the uncertainty of structure by 10%. The tree can quite easily calculate the value of that information, by evaluating the tree with and without the reduced uncertainty.
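The value-of-information comparison just described can be sketched directly: compute the tree's EV when you must decide on the prior alone, then again when a survey first splits the prior into posterior cases you can react to. Every number below is hypothetical.

```python
def ev(p, win=47.0, lose=-20.0):
    """Expected NPV ($MM, hypothetical) of drilling at success probability p."""
    return p * win + (1 - p) * lose

# Without information: one decision, made on the prior success probability.
p_prior = 0.30
ev_without = max(ev(p_prior), 0.0)          # drill vs. don't drill ($0)

# With (imperfect) information: a survey yields a favorable or unfavorable
# signal, and we may decide differently in each case. The posteriors are
# chosen so they average back to the prior: 0.5*0.45 + 0.5*0.15 = 0.30.
p_favorable, p_unfavorable = 0.45, 0.15
w = 0.50                                    # chance of the favorable signal
ev_with = (w * max(ev(p_favorable), 0.0)
           + (1 - w) * max(ev(p_unfavorable), 0.0))

value_of_information = ev_with - ev_without
```

The information has value because the unfavorable case lets you walk away (take $0) instead of drilling a likely dry hole.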

Figure 3 shows an example of a decision tree aiding in a decision to drill a single well.

[Figure: decision tree with two decisions — Drill or Not, and, after extra drilling costs, Continue or Stop (P&A) — and chance nodes for Initial Drilling Cost (25/50/25%), Reserves, Price, Mechanical Problems, Completion & Operating Cost (25/50/25%), Extra Drilling Costs, and the success of the continued hole (50/50%). 2 decisions, 11 uncertainties, and 327 possible outcomes in this decision tree.]

Figure 3: Even More Complex Drilling Decision Tree

Simulation

Simulation generates random numbers for each risk factor, based on its risk distribution. The distribution is usually triangular, rising from pessimistic (Low) to most likely (Base) and then falling back down to optimistic (High). Simulations evaluate a greater number of possibilities than decision trees.

[Figure: the Figure 2 tree redrawn for simulation. The Drill branch (-$$) carries distributions for Reserves, Oil Price, Well Cost, and Operating Cost; the Don't Drill branch leads to $0.]

Figure 4: More Complex Drilling Decision Simulation


Typically around 5000 simulations are run and then sorted based on the NPV output. The frequency distributions versus MMBO and NPV are then plotted, as in Figure 5. The probability distribution is then integrated to create a cumulative probability distribution, as in Figure 6. The cumulative probability distribution is then used to make the drilling decision based on the expected value of NPV.
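The procedure just described — sample each uncertainty, evaluate the model, sort, and read off percentiles — fits in a few lines. The NPV model and the triangular ranges below are hypothetical stand-ins, not values from the course.

```python
import random

random.seed(7)  # repeatable run

def npv_model(reserves, price, well_cost, op_cost):
    """Toy NPV in $MM: revenue less costs (a hypothetical stand-in model)."""
    return reserves * price - well_cost - op_cost

# One trial per pass: sample each uncertainty from a triangular distribution.
# Note that random.triangular takes (low, high, mode).
trials = []
for _ in range(5000):
    trials.append(npv_model(
        reserves=random.triangular(0.5, 3.0, 1.2),    # MMBO
        price=random.triangular(10.0, 30.0, 18.0),    # $/bbl
        well_cost=random.triangular(4.0, 12.0, 6.0),  # $MM
        op_cost=random.triangular(2.0, 8.0, 3.5),     # $MM
    ))

trials.sort()                        # sorted on the NPV output
p10 = trials[int(0.10 * len(trials))]
p50 = trials[int(0.50 * len(trials))]   # median
p90 = trials[int(0.90 * len(trials))]
mean = sum(trials) / len(trials)        # the expected value
```

Plotting a histogram of `trials` gives a Figure 5-style probability density; the sorted list itself is the Figure 6-style cumulative curve.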

[Figure: frequency distribution over 0 to 1,000 MMBO, with the Mode (Most Likely), Median (P50), and Mean (Expected Value) marked.]

Figure 5: Probability Density of 5000 Simulations


[Figure: cumulative probability curve rising from 0% to 100% over 0 to 1,000 MMBO, with the Mode (Most Likely), Median (P50, at 50%), and Mean (Expected Value) marked.]

Figure 6: Cumulative Probability that MMBO is Less than or Equal to the Value

Spreadsheet

The spreadsheet that evaluates the economics/NPV for every node of the decision tree must be designed properly. Organization, neatness, and clarity are very important. Parametric relationships (formulas) should be used rather than numbers, to aid in deterministic analysis. Some of the key variables in the economics run are: net pay, recovery factor, and productive area (number of wells). A general outline for the spreadsheet should look like the following:

Strategy
Inputs
Calculations (for each year)
Income Statement/Cash Flow/Value Measures (for each year)
Summary/Debugging

See Figure 7 for a partial expansion of the drilling model spreadsheet. See Figure 8 for a general influence diagram.


Section 1 - STRATEGY
Drill/Do Not Drill
Continue after Shallow Reserve? (yes/no)
Continue after Mechanical Failure? (yes/no)
Continue after 2nd Failure? (yes/no)
Number of Wells Drilled? (#)

Section 2 - CONSTANTS
Inflation Rate: 3%
Cost of Capital: 15%
Daily Rig Rate: $40,000/day

Section 3 - UNCERTAINTIES (each Low/Base/High)
Total Reserves: Shallow Reserves (MMBO); Deep Reserves (MMBO)
Investment Costs: Initial Drilling ($MM); Delay after Mechanical Failure ($MM); Delay after 2nd Failure ($MM); Supplementary Costs ($MM); Completion Costs ($MM)
Operating Costs ($/bbl)
Economic Variables: Initial Oil Price ($/bbl); Oil Price Growth (%); Discount Rate (%)

Section 4 - CALCULATIONS (year 1 to year N)
Production
Operating Costs
Investment and Capital Costs

Section 5 - INCOME STATEMENT (year 1 to year N)
Gross Revenues
Operating Costs
Investment and Capital Costs
Profits

Section 6 - SUMMARY / NPV

Figure 7: The Drilling Model Spreadsheet

[Figure: influence diagram. The Drill/No Drill strategy decision and the downstream Continue decisions (after Shallow Reserves, after Mechanical Failure, after a failure not resolved) link through the uncertainties — Initial Oil Price, Growth in Oil Price, Shallow and Deep Reserves, Success of Drilling, Mechanical Failure, Resolution After One Try and After Two Tries — and the cost nodes (Initial Drilling, Completion, Supplemental, Operating, Transportation, Delay, Royalties, Capital Expenditure, Taxes) to Oil Production, Revenues, Profits, After Tax Cash Flow, and finally NPV.]

Figure 8: Drilling Decision Influence Diagram


Sensitivity analysis
You should only consider projects that meet your company's hurdle rate (NPV > 0). You should also only consider projects that provide your company with the desired risk/return portfolio. In general, five to six variables contain about 95% of the uncertainty. Therefore, additional variables make the analysis quite complex and do not add much to the predicted outcome. The main variables are called Critical Uncertainties and are displayed in the Sensitivity Analysis tornado chart, Figure 9.

[Figure: tornado chart centered on the Base Case NPV of $300. Horizontal bars show the NPV swing from each uncertainty's Low to High value, widest on top: Reserves, Price ($7/bbl to $25/bbl), Expense, Cost, Subsidence, Inflation. The widest bars are the Critical Uncertainties.]

Figure 9: Sensitivity Analysis: Evaluating the Extremes of each Uncertainty

Deterministic sensitivity analysis charts often take the shape of a tornado, and are thus referred to as Tornado Charts. The tornado chart correlates the outputs with the inputs. The chart above shows the NPV swing associated with each uncertainty variable. Reserves are usually at the top of the tornado, then price per barrel, then expense, followed by cost, then facility capital, etcetera.
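A tornado chart is built by swinging one input at a time between its Low and High values while holding everything else at base. A sketch with a toy NPV model — all numbers hypothetical except the $7/bbl and $25/bbl price extremes shown in Figure 9:

```python
def npv(reserves=1.2, price=18.0, expense=3.5, cost=6.0):
    """Toy deterministic NPV model ($MM); all base-case inputs hypothetical."""
    return reserves * price - expense - cost

# (low, high) setting for each uncertainty; the rest stay at base case.
ranges = {
    "reserves": (0.5, 3.0),
    "price": (7.0, 25.0),     # the $/bbl extremes from Figure 9
    "expense": (10.0, 1.0),
    "cost": (9.0, 5.0),
}

base = npv()
swings = {}
for name, (low, high) in ranges.items():
    swings[name] = abs(npv(**{name: high}) - npv(**{name: low}))

# Sorting by swing gives the tornado order: widest bar on top.
tornado = sorted(swings, key=swings.get, reverse=True)
```

With these ranges the ordering comes out reserves, price, expense, cost — the typical shape the text describes.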

As the variability of reserves causes most of the uncertainty in the economics spreadsheet, it is the geologist's/geophysicist's job to reduce the variability in the volume of reserves as much as possible. One way to reduce this variability is through Seismic Inversion and Geostatistics. These charts, however, should not replace experience or sound judgment, because a poorly applied or misapplied procedure, or bad input data, can produce garbage results. Keep in mind that applying risk principles makes decision-making tougher, not easier, because it forces decision-makers to formally recognize the trade-off between risk and return. Decision makers will not be able to hide behind superficial indicators. Those seeking to make all decisions using one simplistic variable will become quite frustrated with DecisionSpace.


Final Thoughts
The goal of risk analysis is to improve the quality of Strategic Decisions for the organization. The goal is not to tell management how to run their operations. In general, oil companies use risk analysis for the following functions:

R&D Priorities
New Ventures (exploration, development, secondary recovery)
Capital Investments
Portfolio Management

The strategy decisions are based on a set of scenarios generated by experts:

production scenarios
oil price scenarios
gas price scenarios
reserves (% of projected) scenarios
subsidence scenarios
drilling costs scenarios
well operating costs scenarios
facility capital scenarios
other uncertainty scenarios significant enough to change the base case

When the experts are assessing probabilities/uncertainties, be careful to avoid their biases and hidden assumptions. For example, they may have assumed two months of good weather for drilling during hurricane season. Also, make sure the experts are not afraid to think about extremes. For example, could there be a 1000-foot pay sand in the reservoir? Often a Risk Committee works best to assign the uncertainties. Finally, oil companies can reduce exploration risk uncertainties via:

Experienced crews, explorationists, and engineers
3D seismic (and other geophysical surveys)
Presence of direct hydrocarbon indicators
Spectral decomposition
Depth migrations
Computer mapping
Computer log analysis
Seismic inversion and confidence analysis
Hiring a DecisionSpace consultant

Appendix G

Well Planning Basics

Since directional wells are more complicated and costly to drill than straight holes, it makes sense to spend more time planning these wells, and more time evaluating and refining your plans with your engineers. The principal goals of your well planning team are to:

optimize a wellpath to hit key targets with a minimum total measured depth
minimize the risk of rendering a well undrillable
shorten well planning cycle time

The rewards of proper well planning and execution include:

higher production rates
greater reservoir sweep efficiencies
lower total field development costs

In a given field area, you generally know a number of your drilling constraints in advance. For example, you may know that you will have to drill from a particular platform location, or that it will be more economical to extend an existing well than to drill a new one. You will choose different well planning options in Wellbore Planner depending on the distance and depth of your targets from the surface location. Other factors, such as lithology, pore pressure, lease issues, and completion assemblies may also influence well design.

Topics covered in this chapter:


Components of horizontal wells
Directional drilling considerations
Well plan types
Redline parameters


Components of Horizontal Wells


A simple horizontal well profile consists of the following sections:

a vertical section dropping from the surface or platform location
the kickoff point (KOP), the point on the wellbore at which the wellpath first departs from vertical
a curved, or build, section to steadily gain lateral distance with depth

The build section is generally designed at a constant build rate, expressed in degrees (from vertical) per 100 feet or per 30 meters. Long, medium, and short radius wells are defined by the length of the radius of an imaginary circle defined by a continuous arc at the specified build rate. As shown in the diagram below, lower build rates yield longer radius, longer reach wells.

[Figure: three wellpath profiles, depth scale roughly 400-2000 ft. Long radius: 2-6 deg/100 ft, 3000-1000 ft radii. Medium radius: 8-50 deg/100 ft, 700-125 ft radii. Short radius: 1.5-3 deg/ft, 40-20 ft radii.]

Horizontal well radius definitions


the end of build point (EOB; sometimes end of curve, EOC), at which straight hole drilling resumes (the angle of the wellbore at the end of the first build section is called the hold angle)
a straight, or tangent, section, sometimes added here to correct for drift and to provide a soft landing into the target
the horizontal section through the target bed
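The radius ranges quoted for long-, medium-, and short-radius wells follow from simple arc geometry: at a constant build rate of B degrees per 100 ft, the radius is 100 ft divided by B expressed in radians. A quick check (the function name here is ours):

```python
import math

def build_radius_ft(build_rate_deg_per_100ft):
    """Radius (ft) of the circular arc drilled at a constant build rate."""
    angle_per_100ft = math.radians(build_rate_deg_per_100ft)
    return 100.0 / angle_per_100ft      # arc length / swept angle = radius

# Long-radius (2-6 deg/100 ft) and medium-radius (8-50 deg/100 ft) rates:
for rate in (2, 6, 8, 50):
    print(f"{rate:>2} deg/100 ft -> radius {build_radius_ft(rate):6.0f} ft")
```

The computed radii (roughly 2,900 ft down to about 115 ft over that range of rates) line up with the approximate figures in the radius-definition diagram.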


Directional Drilling Considerations


Well plans may become more complicated as the number of targets you have to hit increases, or as geologic conditions vary. Economics may also play a part in how a well is designed. Since straight hole drilling is less expensive than directional drilling, straight tangent sections may be added in the middle of a build section when possible. Rarely are multiple targets all in the same vertical plane, which creates the need to turn the wellbore sideways as you continue to drill downwards. This gives us two more parameters to consider:

turn rate, the rate of azimuthal increase or decrease in the wellpath, measured in degrees per 100 feet or per 30 meters
drop rate, the rate of inclination decrease of the wellbore (going toward vertical), measured in degrees per 100 feet or per 30 meters

During drilling, directional drift from the planned wellpath is always a possibility, and a certain amount of directional uncertainty is inherent in the survey measurements. The drill bit may walk for some distance along a tight streak or an inclined bed, causing an unplanned dogleg, or kink in the wellpath. A wellpath that is too tightly curved or twisted can exceed the flexibility limits of the drillstring, and render the well undrillable. Landmark's Wellbore Planner (found within DecisionSpace) uses a parameter called dogleg severity (DLS) to identify tight spots like this if they occur in your well plans. The DLS calculation takes into consideration build rate, turn rate, and drop rate. The higher the DLS, the sharper the bend in the wellbore.
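The dogleg severity idea can be sketched with the standard dogleg-angle formula, which combines the build and turn between two survey stations. This particular implementation is ours, not Wellbore Planner's.

```python
import math

def dogleg_severity(inc1, azi1, inc2, azi2, course_len_ft):
    """Dogleg severity in deg/100 ft between two survey stations.

    inc = inclination from vertical, azi = azimuth; both in degrees.
    """
    i1, i2 = math.radians(inc1), math.radians(inc2)
    da = math.radians(azi2 - azi1)
    # Angle between the two wellbore direction vectors:
    cos_dl = (math.cos(i1) * math.cos(i2)
              + math.sin(i1) * math.sin(i2) * math.cos(da))
    dogleg = math.degrees(math.acos(max(-1.0, min(1.0, cos_dl))))
    return dogleg * 100.0 / course_len_ft   # normalize to a 100-ft course

# Pure build, no turn: 3 degrees gained over 100 ft is about 3 deg/100 ft.
pure_build = dogleg_severity(10, 0, 13, 0, 100)
```

When both inclination and azimuth change over a short course, the computed DLS exceeds either rate alone, which is exactly the "tight spot" condition the text describes.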


Well Plan Types


Wellbore Planner allows you to define seven different types of well plans, which can be used singly or combined to address more complex situations. If you know in advance what type of well plan you will be using, you can set it before picking your targets; otherwise, let it default. The seven well plan types differ in terms of where their starting and ending points are in relation to the ground surface and to existing wells and platforms. They are:

Unknown
Surface Well
Sidetrack
Look Ahead
Complex Extension
Platform Location
Platform Well

Unknown

A well plan with an Unknown starting point location consists of a series of targets that are joined together by a wellbore, but the wellbore is not linked to a surface location or to an existing well.

Well Plan with Unknown Starting Point

You would not actually drill a well like this, of course, but it is often helpful to group targets in this manner for future planning sessions. In Wellbore Planner, Unknown is the default selection. You can change the well plan type from unknown to another type during your planning session.

Surface Well

A surface well plan begins at a particular x, y, z location on the ground surface, and connects a series of targets at depth. The plan typically has a kickoff point (KOP), the point along the wellbore where the plan first deviates from vertical.

[Figure: well plan dropping from a surface location and deviating at the KOP]

Surface Well Plan

Sidetrack

A sidetrack well plan begins at a point (the Mill Out Depth, MOD) along an existing borehole or well plan, and veers off in another direction. You define it as a separate entity from the borehole or well plan to which it is joined. In the example below, Well A could be an existing OpenWorks well.

[Figure: sidetrack well plan veering off Well A at the MOD]

Sidetrack Well Plan


Look Ahead

A Look Ahead well plan extends the path of an existing borehole. It is used for planning from the currently drilled section to the remaining targets.
[Figure: Look Ahead well plan extending the existing path of Well A]

Look Ahead Well Plan


Complex Extension

The Complex Extension option is similar to Look Ahead in that it allows you to append one plan to another. However, with Complex Extensions you can take a well with multiple sections and use different calculation methods for each section (varying dogleg severity, build, drop, and turn rates, etc.). Any changes you make to the base well plan are also applied to the appended plan. This plan type is useful if your drilling parameters need to change along the well path due to changes in hole size (at casing points), mud properties, lithology, or other subsurface conditions.

[Figure: Well Plan 2 appended to Well Plan 1]

Complex Extension Well Plan


Platform Location

A Platform Location well plan is simply an abbreviated well plan that you use to simulate a well platform. You assign it an x, y, z surface location, but no targets. You then use the Platform Well option to route wellbores through this location. Once you have attached other wells to the platform location, you can change the platform location coordinates and the associated wells will move with it.

Platform Location

Platform Well

The Platform Well option allows you to attach multiple wells to a particular platform location (see previous section). A platform well plan extends to a platform location well plan that you have defined with the Platform Location option. Platform wells kick off at a specified measured depth below the selected platform location.

Platform Well


Redline Parameters
Redline Parameters helps you visually assess whether or not the drilling options you have specified can result in a drillable well. The current calculated well plan is compared to the maximum acceptable values set for the six standard drilling parameters used to evaluate the drilling feasibility of a directional well. A drilling engineer can typically supply reasonable values for these parameters. They include:

Build Rate (rate of increase in well angle towards horizontal)
Drop Rate (rate of increase in well angle towards vertical)
Turn Rate (rate of increase in well azimuth)
Dogleg Severity (a composite of the above)
Inclination (angle of wellbore from vertical)
Cumulative Directional (total feet or meters of directional hole down to a given depth)

In Wellbore Planner, a Redline View graphical display actually shows you where the rough spots are in your well plan, at what depths they occur, and gives you some idea of what must be corrected. If any of your curve rates (build, drop, turn rates and dogleg severity) cross the red line, the proposed wellpath could exceed the flexibility of the

drillstring to make the curve, or cause oversteering and drift from your plan. If inclination crosses the red line, the well plan may require the drill string to defy gravity. If the cumulative (total) directional crosses the red line, the well plan contains too much directional footage at the expense of straight hole segments, and may be too costly to drill.



Appendix H

Glossary

This Appendix provides a basic glossary of terms used throughout the DecisionSpace Immersion training.

Topics covered in this chapter:


Glossary of terms used in this course


Glossary of terms
3D Viewer

A 3D viewing tool that allows multiple objects from multiple sources to be displayed in the same geometric space.
abandon

To cease producing oil or gas from a well when it becomes unprofitable. A wildcat may be abandoned after it has been proved nonproductive. Sometimes, before a well is abandoned, some of the casing is removed and salvaged. Usually, one or more cement plugs are placed in the borehole to prevent migration of fluids between the various formations.
acre-foot

The volume generated by a surface one acre in area and one foot deep. An acre-foot of volume can hold 7,758 barrels of oil.
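The 7,758-barrel figure follows from the conversions 1 acre = 43,560 square feet and 1 barrel (42 US gallons) ≈ 5.6146 cubic feet; a quick check:

```python
SQFT_PER_ACRE = 43560       # one acre in square feet
CUFT_PER_BBL = 5.614583     # one 42-gallon barrel in cubic feet

# One acre-foot is 43,560 cubic feet; converting to barrels:
acre_foot_bbl = SQFT_PER_ACRE * 1.0 / CUFT_PER_BBL
```

The result rounds to 7,758 barrels per acre-foot, the value quoted in this entry.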
active object

The object in the 3D Viewer, to which single object operations will be applied. Also referred to as AO. While many objects may be selected for operations such as hiding, only one object may be the target of operations such as individual object shifting.
animation

Animation is the sequenced presentation of visual images. For example, in OpenVision, you can specify a volume of seismic data to display in OpenVision 3D Viewer. You can then use the mouse to rapidly move through the volume, causing an animation effect.
anisotropy

Having different physical properties in different directions. Variograms or other measures of spatial correlation often exhibit different ranges of correlation in different directions. The property of a rock which allows it to show different responses or measurements when measured along different axes.
a priori

Involving reasoning prior to investigation or experience; deduced rather than observed.


aquifer

A water-bearing stratum of permeable rock, sand, or gravel capable of producing water. In a petroleum reservoir with a natural water drive, the aquifer is that part of the reservoir containing water.
arithmetic mean

The sum of values divided by the sample size.


attribute

Same as Property. Numerical value / coding of rock characteristics (Phi, Sw, K, Pressure, etc.)
authorization for expenditure

(AFE) An approved authority for a capital expenditure on a physical asset, such as a platform, or an estimate of the costs of drilling and completing a proposed well. The AFE is approved by the controlling authority within a company; it can also be a document the operator provides to each working interest owner, before the well is commenced, for approval by a majority of the working interests.
azimuth angle

Rotation of view around the vertical axis. Measured in degrees clockwise from the positive Y-axis (North).
azimuth

Angle of rotation about the z axis, measured from the north.


base case

The interpretation scenario. An average or most likely case.


bed

A subdivision of a stratified sequence of rocks, lower in rank than a member or formation, internally composed of relatively homogeneous material exhibiting some degree of lithologic unity, and separated from the rocks above and below by visually or physically more or less well defined boundary planes.
bedding planes

In sedimentary or stratified rocks, the division planes that separate the individual layers, beds, or strata.

bedrock

A general term for the rock, usually solid, that underlies soil or other unconsolidated, superficial material.
bed thickness

True bed thickness is the thickness of the stratigraphic unit measured along a line normal to the direction of extension of the unit. True thickness can be derived from information provided by a dipmeter. The bed thickness determined from some well logs is an apparent bed thickness corresponding to the distance the borehole remained in the bed. The borehole may not have penetrated the bed normal to its upper or lower boundary surface because of hole deviation and formation dip.
bootstrap

A statistical resampling procedure (with replacement) whereby the uncertainty in a calculated statistic is derived from the data itself. Monte Carlo simulation is used for sampling from the data distribution. (Deutsch, 1999)
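The resampling loop this entry describes is short to write out. The data below are hypothetical porosity measurements; the spread of the resampled means is the uncertainty estimate the bootstrap delivers.

```python
import random

random.seed(42)  # repeatable run

# Hypothetical porosity measurements from a handful of wells.
data = [0.12, 0.18, 0.15, 0.21, 0.09, 0.17, 0.14, 0.20]

def mean(xs):
    return sum(xs) / len(xs)

# Resample with replacement many times; each resample is the same size as
# the original data set, per the standard bootstrap recipe.
boot_means = sorted(
    mean(random.choices(data, k=len(data))) for _ in range(2000)
)
lo90, hi90 = boot_means[100], boot_means[1899]   # ~90% interval on the mean
```

Note that `random.choices` samples with replacement, which is exactly the Monte Carlo sampling from the data distribution that the definition mentions.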
boundary conditions

In the context of determining the effective properties of a grid block, boundary conditions are the pressure and flow rate conditions surrounding the block of interest.
breakeven discount rate, BDR, IRR

This measurement, BDR, is also referred to as the internal rate of return, IRR. The IRR equals the discount rate that sets the net present value of the cash flows to zero. Normally you would accept a project when the IRR is greater than the discount rate; quoting an IRR of, say, 20% amounts to saying that the project earns 20%.
0 = Σ (t = 1 to N) NCF_t · DF_t

where NCF_t is the net cash flow in period t and DF_t is the corresponding discount factor at the breakeven rate.
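The breakeven rate can be found numerically: for a conventional invest-then-receive cash flow, NPV declines monotonically as the rate rises, so bisection locates the root. A sketch (function names ours):

```python
def npv(cash_flows, rate):
    """Net present value of annual cash flows at the given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=10.0, tol=1e-9):
    """Bisect for the rate where NPV = 0 (assumes exactly one sign change)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(cash_flows, lo) * npv(cash_flows, mid) <= 0:
            hi = mid          # root lies in [lo, mid]
        else:
            lo = mid          # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Invest 100 now, receive 120 in one year: the breakeven rate is 20%.
rate = irr([-100, 120])
```

Accepting the project whenever this rate exceeds the cost of capital is the decision rule stated in the definition.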

bubble point

The temperature and pressure at which part of a liquid begins to convert to gas. For example, if a certain volume of liquid is held at constant pressure, but its temperature is increased, a point is reached at which bubbles of gas begin to form in the liquid. This is the bubble point.

capital asset

The assets intended for long, continued use or possession. May be further classified as tangible (e.g., land, buildings) or intangible (e.g., the costs of drilling a well other than materials).
capital budget

Money allocated for financing long-term outlays for assets such as plants, plant expansion, research and development and advertising.
capital costs

For Federal or National income tax purposes, those costs of capital expenditures which may be recovered by deduction against income (through depreciation and depletion).
capital expenditure

An expenditure intended to benefit the future activities of a business, usually by adding to the assets of a business, or by improving an existing asset.
capital funds

Money invested in a business for use in conducting the operations of the business.
capital investment

Synonym to Capital Costs. Funds spent to acquire additions to assets for the betterment of the operation. Depreciation is taken on such expenditures rather than charging them off as an expense or operating cost.
capitalize

To treat certain expenditures as capital expenditures for Federal income tax computations. These are usually depreciated costs.
case

A set of specific models, comprising one of each type of model (such as fiscal models; reservoir characteristic models covering saturation and rock type; facilities characterization), used as input for a run.
cash flow

A series of relevant cash receipts and cash expenditures along with the time that the receipt or expenditure occurred.

CE

Capital efficiency, see Efficiency ratio


center of interest

The 3D point in the scene about which rotation occurs. OpenVision depicts this point with a + symbol in the center of the viewport. Viewing angles are defined around this point.
certainty equivalent

The monetary amount at which you would be indifferent between 1) having that amount for certain and 2) having the alternative with its uncertain outcome.
CGM

Computer Graphics Metafile: a standardized file format used to transfer computer graphics between systems, devices or programs.
CIV

Classical investment valuation


compressibility

The volumetric change in a unit volume of fluid (usually) when the pressure on that volume is changed.


coning

Coning can occur in an oil reservoir with both gas and water. Because all oil reservoirs have water in them, and most have gas, coning is a serious concern. Water and gas have a lower viscosity than oil and therefore, if allowed, flow preferentially instead of the oil. If a preferential path of flow for water or gas opens, the well may become unsalvageable, because the water or gas may form a cone about the well, displacing the oil as in the figure below:

constraint

A condition that restricts, limits, or regulates; a check.


continuous random variable

An attribute that has a continuous range of possible outcomes with a natural ordering.
convergence

The point at which additional runs of the Monte Carlo algorithm do not change the overall results. A feature of some software is to stop the simulation automatically when convergence is achieved.
coordinate space

A frame of reference for locating and positioning objects. Defined by an origin and direction vectors. Example: world coordinates (X, Y, Z) survey coordinates (Line, Trace, Time)
correlation

Given a pair of related measures (X and Y) on each of a set of items, the correlation coefficient (r) provides an index of the degree to which the paired measures co-vary in a linear fashion. In general r will be positive when items with large values of X also tend to have large values of Y whereas items with small values of X tend to have small values of Y. Correspondingly, r will be negative when items with large values of X tend to have small values of Y whereas items with small values of X tend to have large values of Y. The value of r is calculated by first converting the Xs and Ys into their respective Z scores and, keeping track of which Z score goes with which item, determining the value of the mean Z score product. Numerically, r can assume any value between -1 and +1 depending upon the degree of the relationship. Plus and minus one indicate perfect positive and negative relationships whereas zero indicates that the X and Y values do not co-vary in any linear fashion.
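The z-score recipe above can be written directly. This is a minimal sketch using population (divide-by-n) statistics, which the mean-of-products definition implies; the function names are illustrative:

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson_r(xs, ys):
    """Correlation as the mean product of paired z-scores."""
    mx, my = mean(xs), mean(ys)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    zx = [(x - mx) / sx for x in xs]
    zy = [(y - my) / sy for y in ys]
    return mean([a * b for a, b in zip(zx, zy)])

# A perfect linear relationship gives r = +1.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```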
correlogram

Plot showing that the covariance between paired points tends to decrease with each corresponding increase in lag.
cost of capital

Rate of return that a business could earn if it chose another investment with equivalent risk.


covariance

The covariance is a measure of the joint variation of two random variables about their respective means.
critical value

The value in a test that separates the rejection region from the acceptance region.
crossplot

User-defined plot that lets you assign a specific parameter to each of the X and Y axes.
cumulative probability distribution

A chart with probability on the y-axis and value on the x-axis, which describes the entire range of probable outcomes resulting from a course of action. The chart is always read from right to left, stating the probability Y and the value as X or less. The 100% probability value occurs at the right side of the plot.
curtailment

A reduction, imposed by government or by limited pipeline capacity, in the amount of gas/oil that can be produced.
cutting plane

A plane used to sculpt away part of a 3D scene. The portion of the scene on one side of the cutting plane is not rendered.
daemon

Process running in the background and waiting to be activated or used by another process to perform a specific service.
darcy

A unit of measure of permeability. The permeability of a porous medium which will allow a flow of one milliliter per second of fluid of one centipoise viscosity through one square centimeter under a pressure gradient of one atmosphere per centimeter. The commonly used unit is the millidarcy or 1/1000 darcy.


Darcy's equation

Sometimes referred to as Darcy's law. A relationship for the fluid flow rate q through a porous medium, expressed as: q = (k A / μ) × (Δp / Δx), where: k = permeability, A = cross-sectional area, μ = viscosity, and Δp = pressure difference across the distance Δx.
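In the darcy unit system (k in darcys, A in cm², μ in centipoise, Δp in atm, Δx in cm) the equation returns q in ml/s, which ties back to the definition of the darcy itself. A sketch, with a hypothetical function name:

```python
def darcy_flow_rate(k_darcy, area_cm2, visc_cp, dp_atm, dx_cm):
    """Volumetric flow rate q (cm^3/s) from Darcy's law,
    q = k * A * dp / (mu * dx), in darcy units."""
    return k_darcy * area_cm2 * dp_atm / (visc_cp * dx_cm)

# The defining case of the darcy unit: 1 ml/s for k = 1 darcy, A = 1 cm^2,
# mu = 1 cP, and a gradient of 1 atm/cm.
print(darcy_flow_rate(1.0, 1.0, 1.0, 1.0, 1.0))  # → 1.0
```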
dataflow

Describes how, and which, data moves from one step of a workflow to another.


Decision Desktop

The web-enabled entry point into DecisionSpace for launching applications. The Desktop lets users capture and share knowledge, and make decisions through collaboration.
Decision Workspace

The set of decisions in an evaluation plus what is needed to make the decision. The Decision Workspace could contain OW projects, SeisWorks projects, Wellbore Planner projects, etc.
decision maker

Person or team with the responsibility and authority to allocate resources, decide on a course of action, and appoint those who implement the decision.
decision node

A point in a decision tree where a decision must be made.


decision tree

A sequential graphical representation of individual decisions and their associated uncertainties which represent all paths the decision maker might follow through time. There are four basic elements from which a decision tree is constructed: decision nodes, alternative branches, probability nodes and outcome branches.


decision tree analysis

Used for sequential decision-making processes. A diagram resembling a branching tree is constructed to show all the subsequent possible events and decision options that follow from previous decisions. This analysis method is used only for simple cases in which the anticipated events and the probability for each event are already known. Computations involved in this analysis are relatively simple and can be handled with calculators.
decline curve

A plot of oil or gas production with time for a single well or an entire field. Production will decline with time as reservoir pressure decreases.
deterministic

A deterministic model is a model with properties that are so well and completely known that it can be modeled with no uncertainty. An example would be dropping a ball of known physical properties from a known height, velocity, and direction. Given these parameters we can model the path of the ball. Unfortunately, the physics involved in most geological processes are not completely known, so deterministic modeling is not often an option. However, it is usually possible to inject some deterministic knowledge into the model, such as known fault patterns.
deterministic analysis

Calculates only one possible outcome, using a single value chosen for each of the uncertain data parameters and decision variables.
development well

A well drilled in an area in a field to complete a pattern of production. An exploitation well.


DF

Discount factor
DCF

Discounted cash flow


dip

The angle that a structural surface (e.g., a bedding or fault plane) makes with the horizontal, measured perpendicular to the strike of the structure. See also strike.

directional drilling

The technique of intentional, controlled drilling at an angle from the vertical by deflecting the drill bit. Although wellbores are normally drilled vertically, it is sometimes necessary or advantageous to drill at an angle from the vertical. Controlled directional drilling makes it possible to reach subsurface areas laterally remote from the point where the bit enters the earth.
discount rate

A number used to discount future cash flows to a present value. This number usually reflects both the time value of money and the risk associated with the project. The discount rate for one project will generally be near that of other alternative projects with similar risk.
discounted cash flow

DCF. Value of future expected cash receipts and expenditures discounted from a common date, using a discount rate. The present value of a cash flow.
discovery well

The first oil or gas well drilled in a new field. The well that reveals the presence of a petroleum-bearing reservoir. Subsequent wells are development wells.

diversification

A system of owning different investments in a portfolio, whose return patterns are negatively correlated, with the intent of having relatively stable earnings over time. When one investment is yielding a low or negative rate of return, another should have above normal returns.
drainage

The migration of oil or gas in a reservoir toward a wellbore due to pressure reduction caused by production of reservoir fluids by the well. A drainage point is a wellbore (or in some cases several wellbores) which drains the reservoir.
drainage radius

The radius, measured from a wellbore, of a circular area of a reservoir which is drained by a single well.
drawdown

The difference between static and flowing bottom-hole pressures. The distance between the static level and the pumping level of the liquid in the annulus of a pumping well.
DROI

Discounted return-on-investment, see Efficiency ratio


DPO, Discounted Payout, Payout

The length of time (years or months) needed to recover the investment or initial outlay of capital.
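A minimal undiscounted sketch of the payout calculation; for discounted payout each flow would first be multiplied by its discount factor. The yearly cash flows and the function name are hypothetical:

```python
def payout_time(cash_flows):
    """Years until cumulative net cash flow first recovers the outlay.
    cash_flows[0] is the (negative) initial investment; one entry per
    year thereafter. Assumes end-of-year flows, no discounting."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None  # the project never pays out

# A $100 outlay recovered by $40/year flows pays out in year 3.
print(payout_time([-100, 40, 40, 40]))  # → 3
```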
earth model

A consistent representation of the reservoir including structural framework and properties.


economic limit

When production costs equal production revenue. It depends on how deep the well is, how much water it produces, where the well is located, and several other factors. The well is either plugged and abandoned or put into waterflood or enhanced oil recovery when the economic limit of the well is reached.


economic valuation

The process of costing and forecasting value metrics of investment opportunities, usually summarized in measures like NPV, DCF, IRR, NPV/INV, etc.
edit

The ability to alter the original interpretations from the Classic applications including the ability to add new points or delete old points.
efficient frontier

EF. A graph of value versus risk for a suite of investment portfolios. The EF signifies the most profit that can be achieved for any given level of risk.
efficiency ratio, PI, DROI, CE
Efficiency ratio = NPV / Σ(t=1..N) (Inv_t × DF_t), where NPV = Σ(t=1..N) (NCF_t × DF_t), Inv_t is the investment in period t, NCF_t the net cash flow in period t, and DF_t the discount factor for period t.

A project with an efficiency ratio of 2 would recover all project costs, plus an additional $2 for every $1 invested.
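A sketch of the ratio, assuming the common end-of-year convention DF_t = 1/(1+r)^t (the entry itself does not fix a discounting convention); the cash-flow figures are hypothetical:

```python
def discount_factor(rate, t):
    # End-of-year discount factor DF_t = 1 / (1 + rate)^t (an assumption).
    return 1.0 / (1.0 + rate) ** t

def efficiency_ratio(net_cash_flows, investments, rate):
    """Efficiency ratio (PI/DROI/CE): NPV over discounted investment.
    Lists are indexed by period, starting at t = 1."""
    npv = sum(cf * discount_factor(rate, t)
              for t, cf in enumerate(net_cash_flows, start=1))
    disc_inv = sum(inv * discount_factor(rate, t)
                   for t, inv in enumerate(investments, start=1))
    return npv / disc_inv

# $100 invested in year 1, net flows of -100, 60, 60 over years 1-3, at 10%.
print(efficiency_ratio([-100.0, 60.0, 60.0], [100.0, 0.0, 0.0], 0.10))
```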
elevation angle

Rotation of view around the horizontal axis measured in degrees.


equilibration

Computation that is done in a simulator to distribute oil, gas, and water so that they are in capillary equilibrium. Equilibration is done before any wells are put onstream. Equilibration establishes the initial conditions of fluid distributions and volumes in each reservoir. It uses the fluid contacts, reference pressures, and capillary pressure curves for each rock type.
equiprobable

Of the same probability as any other result or realization.


expected monetary value

EMV. Weighted average of possible monetary returns from a particular decision.


expected value

A single number that can represent an entire probability distribution (aka "probability-weighted average"). The arithmetic mean of a probability distribution function (PDF).
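For a discrete distribution the probability-weighted average is a one-liner; the outcome values below are hypothetical:

```python
def expected_value(outcomes):
    """Probability-weighted average of a discrete distribution.
    outcomes: list of (value, probability) pairs summing to probability 1."""
    return sum(value * p for value, p in outcomes)

# A 30% chance of a 50 MM$ success against a 70% chance of a -5 MM$ dry
# hole has an expected value of 11.5 MM$.
print(expected_value([(50.0, 0.3), (-5.0, 0.7)]))
```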
expenses

Expenditures for business items that have no future life (such as rent, utilities, or wages) and are incurred in conducting normal business activities. If running After Tax Cash Flows, expenses are usually written off in the year they occur as opposed to being capitalized over the life of the tangible item.
exploratory well

Well drilled to find the limits of a hydrocarbon-bearing formation that is only partially developed.
facies

A rock unit distinguished on the basis of geological variables such as grain size or mineralization; for example, limestone and dolomite in a carbonate setting, or channel sandstone and shale in a siliciclastic setting.


fault

Fracture or break in subsurface strata. Strata on one side of the fault line have been displaced (upward, downward, or laterally) relative to their original positions.

fault block

A mass bounded on at least two opposite sides by faults. It may be elevated or depressed relative to the adjoining regions, or it may be elevated relative to the region on one side and depressed relative to that on the other.
fault trap

A subsurface hydrocarbon trap created by faulting, which causes an impermeable rock layer to be moved to a location opposite the reservoir bed.

fiscal regime

The complete set of fiscal parameters, e.g. taxes, royalties, depreciation schedules, and investment credits, which govern how post-tax income is computed from revenues and costs.
flood

To drive oil from a reservoir into a well by injecting water, etc. under pressure into the reservoir formation.
flowing pressure

The pressure registered at the wellhead of a flowing well.


flowing well

A well that produces oil or gas by its own reservoir pressure rather than by use of artificial means (such as pumps).
fluid properties

See Phase Behavior, Saturation pressure.


fold

A flexure of rock strata into arches and troughs, produced by earth movements.

formation

A general term applied in the well-logging industry to the external environment of the drilled well bore without stratigraphic connotation. The basic or fundamental rock-stratigraphic unit in the local classification of rocks, consisting of a body of rock (usually a sedimentary stratum or strata, but also igneous and metamorphic rocks) generally characterized by some degree of internal lithologic homogeneity or distinctive lithologic features (such as chemical composition, structures, textures, or gross aspect of fossils). Formations may be combined into groups or subdivided into members and beds. A formation name should preferably consist of a geographic name followed by a descriptive lithologic term (usually the dominant rock type) or by the word formation if the lithology is so variable that no single lithologic distinction is appropriate.
formation evaluation

The analysis and interpretation of well-log data, drill-stem tests, etc. in terms of the nature of the formations and their fluid content. The objectives of formation evaluation are (1) to ascertain if commercially producible hydrocarbons (or other forms of energy and minerals) are present, (2) to determine the best means for their recovery, and (3) to derive lithology and other information on formation characteristics for use in further exploration and development.
formation pressure

The pore pressure existing within reservoir rock or non-reservoir rock at a specified time. The pressure exerted by fluids in a formation, recorded in the hole at the level of the formation with the well shut in. It is also called reservoir pressure or shut-in bottomhole pressure.
fossil fuel

A deposit of organic material containing stored solar energy that can be used as fuel. The most important are coal, natural gas, and petroleum.
fracture

A break, parting, or separation in brittle rock.


frequency distribution diagram

A histogram that plots probability on the y axis against outcome value on the x axis. It provides a quick indication of whether the simulation of the model produced plausible results. If the distribution of the population of outcomes is skewed in an unexpected direction or to an unexpected degree, or if there are multiple humps (modes), the simulation may need to be run again. The number of iterations and/or the model inputs may need to be reviewed and carefully revised.
Gaussian transform

A mathematical transformation of grades to a normal (Gaussian) distribution.


gas cap

A free-gas phase overlying an oil zone and occurring within the same reservoir as the oil.
gas drive

The use of the energy that arises from gas compressed in a reservoir to move crude oil to a well bore. Gas drive is also used in a form of secondary recovery, in which gas is injected into input wells to sweep remaining oil to a producing well.
gas lift

The process of raising or lifting fluid from a well by injecting gas down the well through tubing or through the tubing-casing annulus. Injected gas aerates the fluid to make it exert less pressure than formation pressure; consequently, the higher formation pressure forces the fluid out of the wellbore. Gas may be injected continuously or intermittently, depending on the producing characteristics of the well and the arrangement of the gas-lift equipment.
gas-oil contact

GOC. The lowest depth (deepest depth in a well) opposite a formation at which virtually 100% gas can be produced. This depth is at the top of the gas-oil transition zone.
gas-oil ratio

GOR. A measure of the volume of gas produced with oil, expressed in cubic feet per barrel or cubic meters per metric ton.
geobody

A gOcad term. Similar to a Voxbody: a view restricted to cells with certain attribute values.
geologic success ratio

The number of wells whose geological characteristics were as predicted (e.g., the nature of the fluids may make a well uneconomic, but fluids are an engineering property, not a geological one).
geostatistics

The branch of applied statistics which deals with spatially located data.


goodness of fit

The probability of the data given the input parameters.


grid

A partitioning of space into intervals (cells) in either 1D, 2D, or 3D.


gross income

Revenues attributed to your net interest prior to any operating cost, investment, or tax deductions.
gross rock volume

GRV. The total rock volume contained between two or more surfaces, including horizons, faults, and/or contacts (OWC, GOC, etc.), and/or artificially bounded by some user-defined areal extent, whether arbitrary or representing lease boundaries or a drainage radius.
gusher

An oil well which has come in with such great pressure that the oil jets out of the well like a geyser. In reality, a gusher is a blowout and is extremely wasteful of reservoir fluids and drive energy. In the early days of the oil industry, gushers were common, and many times were the only indications that a large reservoir of oil and gas had been found. (what you want to find with DecisionSpace)
hard data

In the absence of direct core measurements, well log data is considered the hard data. All other data types (mainly the seismic) are considered the Soft Data and must be calibrated to the hard data.
histogram

A histogram is a frequency diagram constructed by dividing the data into categories and tallying the frequency of occurrence of the data in each category. The categories or bins need not span the same range of values. The bin frequencies are usually indicated by rectangles, whose areas are proportional to the frequencies.
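The tallying described above takes only a few lines; the porosity values and bin edges below are hypothetical:

```python
def histogram(data, bin_edges):
    """Tally frequencies into bins defined by ascending bin_edges.
    A value x lands in bin i when bin_edges[i] <= x < bin_edges[i+1]."""
    counts = [0] * (len(bin_edges) - 1)
    for x in data:
        for i in range(len(counts)):
            if bin_edges[i] <= x < bin_edges[i + 1]:
                counts[i] += 1
                break
    return counts

porosity = [0.08, 0.12, 0.15, 0.11, 0.22, 0.19, 0.13]
print(histogram(porosity, [0.0, 0.1, 0.2, 0.3]))  # → [1, 5, 1]
```

Note the bins here need not be equal-width, matching the definition above.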
horizon

A plane defined by seismic data representing a geologic boundary in space.


horst

A block of the earth's crust that has been raised between two faults. The opposite of a graben.
hurdle rate

Term used in the budgeting of capital expenditures, meaning the required rate of return in a discounted cash flow analysis. If the expected rate of return on an investment is below the hurdle rate, the project might not be undertaken.
hydraulic fracturing

The breaking or parting of reservoir rock through the use of injected fluids. Hydraulic fracturing is a method of stimulating production or injection at a specific depth in a formation of low permeability by inducing fractures and fissures in the formation by applying high fluid pressure to its face. Fluids (liquids, gases, foams, emulsions) are injected into reservoir rock at pressures which exceed the strength of the rock and overcome internal stresses of the rock. The fluid enters the formation and parts or fractures it. Sand grains, aluminum pellets, glass beads, or similar materials are carried in suspension by the fluid into the fractures. These are called propping agents or proppants. When the pressure is released at the surface, the fracturing fluid returns to the wellbore as the fractures partially close on the proppants, leaving paths with increased permeability for fluid flow.
hydrocarbon pore volume

HCPV. The pore volume of a reservoir occupied by hydrocarbons in either the vapor or liquid phase.
hydrocarbon saturation

Fraction of the pore volume filled with hydrocarbons (oil or gas).


IJK

A coordinate reference system assigned to a 3D geocellular model. In many ways I, J, and K are analogous to Column, Row, and Layer. A specific IJK coordinate identifies a single cell within a model.
IAT (Income After Tax)

Income after tax is defined as cash flow spread over time.


initial production

The initial production (IP) of a well is the first 24 hours of production, and is usually the highest.
interpolation

Resource estimation techniques in which samples falling within a specified search neighborhood are weighted to form an estimate, e.g. kriging and inverse distance weighting.
inverse distance weighting

Non-geostatistical method of interpolation which assumes that grades vary in a deposit according to the inverse of their separation (raised to some power). This method does not account for nugget variance or other aspects of the variogram (such as anisotropy, short scale structure, etc.).
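A minimal 2D sketch of the method, assuming Euclidean distance and the conventional power of 2; the function name and sample points are illustrative:

```python
def idw_estimate(samples, target, power=2.0):
    """Inverse distance weighting: estimate the value at `target` (x, y)
    as a weighted average of samples [(x, y, value), ...], with weights
    equal to the inverse of separation distance raised to `power`."""
    num, den = 0.0, 0.0
    for x, y, v in samples:
        d = ((x - target[0]) ** 2 + (y - target[1]) ** 2) ** 0.5
        if d == 0.0:
            return v  # estimate at a sample point is the sample itself
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# A point equidistant from two wells gets their simple average.
wells = [(0.0, 0.0, 10.0), (2.0, 0.0, 20.0)]
print(idw_estimate(wells, (1.0, 0.0)))  # → 15.0
```

As the entry notes, nothing in these weights accounts for anisotropy or the variogram, which is what separates IDW from kriging.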
impermeable

Preventing the passage of fluid. A formation may be porous yet impermeable if there is an absence of connecting passages between the voids within it.
immiscible

Not capable of mixing or being permanently mixed (as oil and water).
inclination

Deviation angle. Measured in directional surveys and used in calculating true vertical depths. It is the angle between the axis of the measuring tool (hence, borehole axis) and true vertical. The inclination can also be measured with respect to true horizontal.
independent events

All events (outcomes) are independent if the occurrence of one event has no effect on the occurrence of other events.
influence diagram

A graphical representation of decisions and uncertainties which shows what is known and uncertain at the time of each decision and the dependence and independence of each uncertainty on all other decision and uncertainties.


injection well

A well through which fluids are pumped and injected into an underground stratum to increase or maintain reservoir pressure.
in situ

In place. In its natural location.


interface

The contact surface between two boundaries of immiscible fluids, dissimilar solids, etc.
internal rate of return, IRR

IRR. The discount rate that sets the net present value equal to zero. The internal rate of return may have multiple values when the cash flow stream alternates from negative to positive more than once. see Break Even Discount Rate.
irreducible water saturation

The fraction of the pore volume occupied by water in a reservoir at maximum hydrocarbon saturation. In water-wet rock, it represents the layer of adsorbed water coating solid surfaces and the pendular water around grain contacts and at pore throats. Irreducible water saturation is an equilibrium situation. It differs from residual water saturation, measured by core analysis, because of filtrate invasion and the gas expansion that occurs when the core is removed from the bottom of the hole and brought to the surface.
isochore map

A map showing thickness of a unit measured vertically.


isopach map

A geological map of subsurface strata showing the various thicknesses of a given formation normal to the stratigraphic thickness. It is widely used in calculating reserves and in planning secondary-recovery projects.

isotropy

The property of homogeneity or uniformity of a rock which allows it to show the same responses or characteristics when measured along different axes.
iteration

A single step in an algorithmic procedure in which multiple steps must be repeated in a sequence of operations to yield results.
iterative

Describes a procedure which repeats until some condition is satisfied. Successive approximations, each based on the preceding approximations, are processed in such a way as to converge onto the desired solution.
kriging

Geostatistical method of interpolation. Kriging is a weighted average where the weights are a function of the variogram.
lag distance

Separation distance (in 3D space). The x-axis of the variogram is in units of lag. Denoted h by convention.
Latin hypercube

A method that ensures the sampling of a given distribution is representative of that distribution, i.e. no particular region of the distribution is under-sampled or over-sampled.
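For a single uniform variable the idea can be sketched in a few lines: split the range into n equal-probability strata, draw once per stratum, and shuffle. The function name is hypothetical, and real implementations also pair strata across multiple input variables:

```python
import random

def latin_hypercube_uniform(n, low, high, rng=random):
    """Latin hypercube sampling of a uniform(low, high) distribution:
    one random draw from each of n equal-probability strata, shuffled
    so that sample order carries no pattern."""
    width = (high - low) / n
    samples = [low + (i + rng.random()) * width for i in range(n)]
    rng.shuffle(samples)
    return samples

draws = latin_hypercube_uniform(10, 0.0, 1.0)
# Exactly one draw falls in each decile: no region of the distribution
# is under-sampled or over-sampled.
```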


Lattix iQ

Lattix iQ is a web-enabled, thin-client application that serves as the Landmark common desktop. It is the first thing users see when they enter the DecisionSpace environment. The desktop provides support for launching applications, managing workflows, and journaling. It is within Lattix iQ that the asset team will see the decisions they need to make and expose workflows that support those decisions.
lithology

The physical character of a rock generally determined without the aid of a microscope. Sometimes used with the general meaning of rock type.
lease

A legal document executed between a landowner, or lessor, and a company or individual, as lessee, that grants the right to exploit the premises for minerals or other products. The area where production wells, stock tanks, separators, and other production equipment are located.
least-squares fit

An analytic function which approximates a set of data such that the sum of the squares of the distances from the observed points to the curve is a minimum. One must determine the functional form of the fit (whether linear, quadratic, etc.) in order to define the problem.
lithology

The physical character and composition of the rock. Refers to the different rock strata within the formations penetrated by the borehole. The study of rocks, usually macroscopic.
log

Well log. A record containing one or more curves related to properties in the well bore or some property in the formations surrounding the well bore.
logging tool

An openhole or cased-hole tool for performing downhole well log data gathering services for determining properties of the formation, or characteristics of the well bore environment.


matrix

The solid framework of rock which surrounds pore volume. In a rock in which certain grains are distinctly larger than the others, the grains of smaller size comprise the matrix. In mathematics, a rectangular array of numbers which obey certain rules.
mean

The average of the scores in the population. Numerically, it equals the sum of the scores divided by the number of scores. It is of interest that the mean is the one value which, if substituted for every score in a population, would yield the same sum as the original scores, and hence it would yield the same mean. In equations, it is customary to use the Greek letter mu to represent the mean of a population. See Arithmetic Mean
measured depth

MD. Depth measured along the drilled hole. Reported in drilling records, measured by well-logging cables, and shown on well logs. This depth has not been corrected for hole deviation.
median

The middle observation or midpoint after the data have been ordered from low to high. Also known as the 50th percentile or P50.
metric ton

A measurement equal to 1000 kg or 2204.6 lb avoirdupois. In many oil-producing countries, production is reported in metric tons. One metric ton is equivalent to about 7.4 barrels (42 U.S. gal = 1 bbl) of crude oil with specific gravity of 0.84, or 36 API gravity.
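The quoted barrel figure can be checked directly. The litres-per-barrel constant (42 US gal × 3.78541 l/gal ≈ 158.987 l) is an assumption, since the entry states only gallons:

```python
# One metric ton of crude with specific gravity 0.84, in 42-US-gallon barrels.
kg_per_tonne = 1000.0
density_kg_per_l = 0.84            # SG 0.84 -> 0.84 kg per litre
litres_per_bbl = 158.987           # 42 US gal (assumed conversion)

litres = kg_per_tonne / density_kg_per_l
barrels = litres / litres_per_bbl
print(round(barrels, 2))  # → 7.49, i.e. "about 7.4 barrels" as quoted
```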
migration

The movement of oil from the area in which it formed to a reservoir rock where it can accumulate.
miscible drive

A method of enhanced recovery in which various hydrocarbon solvents or gases (as propane, LPG, natural gas, carbon dioxide, or a mixture thereof) are injected into the reservoir to reduce interfacial forces between oil and water in the pore channels and thus displace oil from the reservoir rock.

mode

The value or group that occurs most frequently in the dataset. Also known as Most Likely.
model

(1) An analysis tool that represents a business prospect with combinations of data, formulas, and functions. An explicit approximation of reality, typically expressed as a series of mathematical relationships. (2) A numerical or mathematical representation of a financial or economic situation. Accordingly, a cash flow or investment model is specifically designed to evaluate the effects of making an investment.
Monte Carlo risk analysis

An iterative procedure whose outcome is a probability distribution function for each desired uncertain outcome variable. The procedure consists of a process that generates random numbers, which are used to draw values from the PDFs of uncertain data parameters; those values are then used as inputs to a mathematical model. Each random draw from the set of parameters produces one value in the outcome PDF. A quantitative simulation technique used in many different types of decision analysis models. The first step in Monte Carlo risk analysis is to define the capital resources by developing the deterministic model of the estimate. The second step is to identify the uncertainty in the estimate by specifying the possible values of the variables in the estimate with probability ranges (distributions). The third step is to analyze the estimate with simulation. The model is run (iterated) repeatedly to determine the range and probabilities of all possible outcomes of the model. During each run, a value for each variable is selected randomly based upon its specified probability distribution (just like the ball dropping in the roulette wheel at the casino in Monte Carlo). As the Monte Carlo simulation is run, the model calculates and collects the results. The population of results is then presented as the overall probability distribution for the simulation. The fourth and final step is to make a decision based upon the results of the Monte Carlo analysis.
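The simulation loop can be sketched with a toy oil-in-place model. The OIP formula, triangular input ranges, and function name here are illustrative assumptions, not taken from this manual:

```python
import random

def monte_carlo_oip(n_trials, seed=1):
    """Monte Carlo sketch: draw each uncertain input from its distribution,
    evaluate the model OIP = GRV * N/G * porosity * (1 - Sw), and collect
    the population of outcomes."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        grv = rng.triangular(100e6, 300e6, 180e6)   # gross rock volume, m^3
        ntg = rng.triangular(0.3, 0.8, 0.6)         # net-to-gross ratio
        phi = rng.triangular(0.10, 0.25, 0.18)      # porosity
        sw = rng.triangular(0.2, 0.5, 0.3)          # water saturation
        results.append(grv * ntg * phi * (1.0 - sw))
    results.sort()
    return results  # the outcome distribution, ready for percentile read-off

oip = monte_carlo_oip(5000)
p50 = oip[len(oip) // 2]  # a rough P50 of the simulated population
```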
mutually exclusive events

Events are mutually exclusive when the occurrence of one event precludes the occurrence of all other events.


NCF

Net cash flow


net pay

Within the limitations of given cutoffs for porosity, water saturation, etc., it is that portion of reservoir rock which will produce commercial quantities of hydrocarbon.
net present value, NPV

NPV. The sum of all revenues discounted to current dollars. The difference between the discounted present value of benefits and the discounted present value of costs. Net present value (after tax). PV is used before-tax.
NPV = Σ(t=1..N) (NCF_t × DF_t), where NCF_t is the net cash flow in period t and DF_t the discount factor for period t.
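A minimal sketch of the summation, assuming end-of-year discounting with DF_t = 1/(1+rate)^t; the cash flows are hypothetical:

```python
def npv(net_cash_flows, rate):
    """NPV = sum over t of NCF_t * DF_t, with DF_t = 1/(1+rate)^t.
    net_cash_flows[0] is the year-1 flow (end-of-year convention assumed)."""
    return sum(cf / (1.0 + rate) ** t
               for t, cf in enumerate(net_cash_flows, start=1))

# A -100 outlay followed by 60 per year for two years, discounted at 10%.
print(round(npv([-100.0, 60.0, 60.0], 0.10), 2))  # → 3.76
```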

net to gross ratio

The amount of pay sand in the reservoir. If the lithofacies is quoted as having 20% sand and 80% shale (nonproducing material), then the net-to-gross ratio is 20%.
normalize

To adjust two log curves (or any other pairs of data) for environmental differences in order that one value may be compared with others.
nugget variance

The y-intercept of a variogram. The nugget variance represents the chaotic or random component of grade variability. Also referred to as nugget effect.
oil field

The surface area overlying an oil reservoir or reservoirs. Commonly, the term includes not only the surface area but may include the reservoir, the wells, and production equipment as well.


oil-water contact

OWC. The highest (shallowest) depth in a well, opposite a formation, at which virtually 100% water can be produced. This depth is at the bottom of the oil-water transition zone. Also called the oil-water interface.
operating expenses

OPEX. These are usually the costs of keeping the wells and platforms running. This does not include any large capital expenditures.
optimization

A mathematical process of determining the best solution given constraints. A user defines the metric for best, which is represented as an objective function.
original oil in place

OOIP. The amount of crude oil that is estimated to exist in a reservoir and which has not been produced.
output range

In statistics, the simplest measure of the dispersion of the population. The minimum and maximum values of the population of outcomes establish the 100% interval of all possible values of the simulation. Most simulation software will also supply the data points in 5 or 10 percent steps through the population, and use these data to present a confidence interval for the results, or quartiles, of the population.
object

A single entity displayed in the scene. Represents data from a project. Can be highlighted (selected) with a single click when in selection mode. Examples: seismic slice, horizon, well
orthographic mode

See projection.
perspective mode

See projection.


PI

Profitability index, see Efficiency ratio


P10

The 10th percentile of a probability distribution: P10 is the value at which the cumulative probability is 10 percent, i.e., there is a 10 percent probability of getting a value LESS than or equal to P10.
P50

The 50th percentile of a probability distribution: P50 is the value at which the cumulative probability is 50 percent, i.e., there is a 50 percent probability of getting a value LESS than or equal to P50.
P90

The 90th percentile of a probability distribution: P90 is the value at which the cumulative probability is 90 percent, i.e., there is a 90 percent probability of getting a value LESS than or equal to P90.
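A minimal sketch of this percentile convention (nearest-rank lookup on a sorted population; software packages differ in how they interpolate, and some organizations use the opposite, exceedance convention):

```python
def percentile(sorted_values, p):
    # Value below which a fraction p of the population falls
    # (the "less than or equal" convention used in this glossary).
    # One simple index rule; commercial tools may interpolate differently.
    idx = int(p * (len(sorted_values) - 1))
    return sorted_values[idx]

data = sorted(range(1, 101))  # hypothetical population: 1..100
print(percentile(data, 0.10), percentile(data, 0.50), percentile(data, 0.90))
```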
pay

The part of a formation which produces or is capable of producing oil or gas, or other economic product.
permeability

A measure of the ability of a rock to conduct a fluid through its interconnected pores when that fluid is at 100% saturation. Measured in darcies or millidarcies. Absolute permeability is the permeability of a rock measured when only one fluid phase is present in the rock. Usually measured in millidarcies or darcies. Effective permeability is the ability of the rock to conduct a fluid in the presence of another fluid, immiscible with the first. It not only depends on the permeability of the rock itself, but also upon the relative amounts of the two (or more) different fluids in the pores. Usually measured in millidarcies, or darcies. Relative permeability is the ratio between the effective permeability to a given fluid at a partial saturation and the permeability at 100% saturation. The ratio of the amount of a specific fluid that will flow at a given saturation, in the presence of other fluids, to the amount that would flow at a saturation of 100%, other factors remaining the same. It ranges in value from zero at low saturation to 1.0 at 100% saturation of the specific fluid. Since different fluid phases inhibit the flow of each other, the sum of the relative permeabilities of all phases is always less than unity.


permeability K

The measure of the ease with which the rock allows fluids to flow through it. Be aware that permeability is a tensor (with values in X, Y, and Z directions).
petroleum

Oil or gas obtained from the rocks of the earth.


petrophysical properties

Relative Permeabilites, Capillary Pressure, Endpoint saturations


point set

A collection of x, y, z data points.


population of results

In statistics, the number of outcomes created by a simulation. The size of the population, or number of outcomes (iterations), is based on the number required to achieve convergence. Generally, 1,000 or more iterations through the model will create a meaningfully sized population of results.
pore

An opening or void within a rock or mass of rock, usually small and filled with fluid (water, oil, gas, or all three).
pore pressure

Pressure exerted by fluids contained within the pores of rock.


porosity

The ratio of void space to the bulk volume of rock containing that void space. Porosity can be expressed as a fraction or percentage of pore volume in a volume of rock. (1) Primary porosity refers to the porosity remaining after the sediments have been compacted but without considering changes resulting from subsequent chemical action or flow of waters through the sediments. (2) Secondary porosity is the additional porosity created by chemical changes, dissolution, dolomitization, fissures, and fractures. (3) Effective porosity is the interconnected pore volume available to free fluids, excluding isolated pores and pore volume occupied by absorbed water. In petroleum engineering practices, the term porosity usually means effective porosity. (4) Total porosity is all void space in a rock and matrix whether effective or noneffective. Total porosity
includes that porosity in isolated pores, absorbed water on grain or particle surfaces, and associated with clays. It does not include water of crystallization wherein the water molecule becomes part of the crystal structure.
porosity

The fraction of void space in the rock that may contain fluid. DecisionSpace is only concerned with the effective porosity that contributes to fluid flow rather than the total porosity, which includes small isolated pores.
portfolio

A combination of alternatives, usually investments, which can be combined in different ways to achieve specified objectives, such as maximizing shareholder value.

portfolio management

The process of managing investments; for example, when you have a number of good investments but cannot afford all of them, portfolio management helps you prioritize your opportunities.


portfolio theory

A body of theory relating to how investors optimize portfolio selections.


possible reserves

Those unproved reserves which analysis of geological and engineering data suggests are less likely to be recoverable than probable reserves. In this context, when probabilistic methods are used, there should be at least a 10% probability that the quantities actually recovered will equal or exceed the sum of estimated proved plus probable plus possible reserves.
pressure maintenance

A method for increasing ultimate oil recovery by injecting gas, water, or other fluids into the reservoir before reservoir pressure has dropped appreciably, usually early in the life of the field, to reduce or prevent a decline in pressure.
primary recovery

Recovery of petroleum oil from underground reservoirs by taking advantage of the natural energy (pressures) in these reservoirs. The most common types of these sources of reservoir pressures are solution-gas drive, gas-cap-expansion drive, and water (aquifer) drive. More than one of these natural drives may occur in a single reservoir.


probabilities

Probabilities are the basic units of statistics and are expressed as a fraction between 0 and 1 (or as a percentage ranging from 0 to 100%). Probabilities provide a standardized way of describing any variable from any population, regardless of the units of the variable. A probability of 1 is absolute certainty; a probability of 0 is absolute impossibility; a probability of 0.5 is a 50-50 chance (like flipping a coin). Depending on the type of data you have, you may use a different type of probability.

Types of probability:

A Priori: The population elements are known beforehand. Examples: deal of cards, roll of dice, flip of a coin.

Subjective: Little or no data exists and the population is unknown; applied to unique events. Examples: outcome of a baseball game, possibility of rain, discovering oil in a frontier area.

Empirical: Calculated from historical or current data. Examples: batting average, winning percentage, decline curve.

probability

The relative likelihood of a particular outcome among all possible outcomes. A probability of 1 indicates that the person believes that the event will happen while 0 represents there is no chance that the event will occur.
probability density

A function (curve) describing the relative likelihood of the occurrence (probability) of the possible values of an uncertain quantity.


probability distribution function

PDF. This function describes a statistical distribution: at each possible outcome it gives the probability (or probability density) of receiving that outcome. A pdf is usually denoted in lower case letters; for example, for a continuous random variable with density f(x), x a real number, f(x) describes the relative likelihood of drawing the value x. A particular form of f(x) describes the normal distribution, or any other one-dimensional distribution. The pdf is the probability function for a continuous random variable. The area under the probability density curve equals 1; therefore probability density can have a value greater than 1, whereas probability cannot.
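The last point is easy to verify numerically: a uniform distribution on [0, 0.5] has a density of 2 everywhere on the interval, yet the area under the curve is still 1 (a standard-library sketch):

```python
def uniform_pdf(x, a=0.0, b=0.5):
    # Density of a uniform distribution on [a, b]: 1/(b-a) inside, 0 outside
    return 1.0 / (b - a) if a <= x <= b else 0.0

# The density value is 2.0 (> 1), yet the area under the curve is 1
n = 100_000
dx = 1.0 / n
area = sum(uniform_pdf(i * dx) * dx for i in range(n))
print(uniform_pdf(0.25), round(area, 3))
```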
probability of success

POS. The chances of being successful with a particular search, in a given area.
probable reserves

Those unproved reserves which analysis of geological and engineering data suggests, as a best estimate, are more likely than not to be recoverable. In this context, when probabilistic methods are used, best estimate is a measure of central tendency, such as P50: there should be at least a 50% probability that the quantities actually recovered will equal or exceed the sum of estimated proved plus probable reserves.
production

The amount of oil or gas produced in a given period. That phase of an energy related industry which deals with bringing the formation fluids to the surface and preparing the product for pipeline or other transportation.
production log

A well log run in a production or injection well. Small diameter tools are used so that they can be lowered through tubing. Services and devices include continuous flowmeter, packer flowmeter, gradiomanometer, manometer, densimeter, watercutmeter, thermometer, radioactive-tracer logs, temperature logs, calipers, casing collar locator, fluid sampler, water entry survey, etc.
production sharing agreement

An agreement between a Government or National oil company with one or more private companies spelling out the terms of the Fiscal Regime.

project

A collection of data / processes / workflows.


projection

The transformation of an object in a coordinate system of dimension n into a coordinate system of dimension less than n is called projection. In OpenVision 3D Viewer, objects are projected from 3D space into the 2D coordinate system of the viewing screen. OpenVision 3D Viewer provides two types of projection: perspective and orthographic. The visual effect of a perspective projection is similar to that of photographic systems and of the human visual system. This visual effect is referred to as perspective foreshortening. Objects closer to the viewer appear larger than objects farther away. While visually realistic, perspective projection may not be particularly useful for recording the exact shape and measurements of objects. With orthographic or parallel projection, parallel lines remain parallel and exact measurements may be taken.
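The two projections differ in whether screen coordinates are divided by depth. A minimal sketch of the idea, not the OpenVision implementation:

```python
def project(x, y, z, mode="perspective", d=1.0):
    # Perspective: divide by depth z, so distant objects shrink (foreshortening).
    # Orthographic: drop z entirely, so parallel lines stay parallel and
    # exact measurements are preserved. d is the distance to the view plane.
    if mode == "perspective":
        return (d * x / z, d * y / z)
    return (x, y)

near = project(2.0, 2.0, 1.0)  # object close to the viewer
far = project(2.0, 2.0, 4.0)   # same object farther away: appears smaller
print(near, far, project(2.0, 2.0, 4.0, mode="orthographic"))
```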
proved behind-pipe reserves

Estimates of the amount of crude oil or natural gas recoverable by recompleting existing wells.
proved developed reserves

Estimates of what is recoverable from existing wells with existing facilities from open, producing pay zones.
proved reserves

Those quantities of petroleum which, by analysis of geological and engineering data, can be estimated with reasonable certainty to be commercially recoverable, from a given date forward, from known reservoirs and under current economic conditions, operating methods, and government regulations. Proved reserves can be categorized as developed or undeveloped.
radial flow

The flow of fluids into a wellbore from the surrounding drainage area. Also, could be descriptive of the flow from an injection well to the surrounding area.


random function

A variable which takes a certain number of numerical values according to a certain probability distribution. For instance, the result of casting an unbiased die can be considered as a random variable which can take one of six equiprobable values. If one result is 5 then we say that this value is a particular realization of the random variable result of casting the die.
random sample

A set of items that have been drawn from a population in such a way that each time an item was selected, every item in the population had an equal opportunity to appear in the sample. In practical terms, it is not so easy to draw a random sample: the only factor operating when a given item is selected must be chance.
random variable

A random variable is a rule that assigns a value to each possible outcome of an experiment. For example, if an experiment involves measuring the height of people, each person who could be a subject of the experiment has associated value, his or her height. A random variable may be discrete (the possible outcomes are finite, as in tossing a coin) or continuous (the values can take any possible value along a range, as in height measurements).
range

Statistics - an index of variability that characterizes the dispersion among the measures in a given population; the range is the distance between the highest and lowest score (numerically, the highest minus the lowest score). Geostatistics - the distance at which the variogram reaches the sill; the range typically depends on direction, with horizontal directions showing greater continuity (larger ranges). 3D viewer - the distance (in project units) between the viewpoint (user position) and the center of interest.
range estimates

Estimates that calculate three scenarios: the best case, the worst case, and the most likely case. These types of estimates can show you the range of outcomes, but not the probability of any of these outcomes.

rate of return

ROR. Return on equity or return on invested capital. Be careful with this one: there are many different ways to calculate a rate of return. The one used most often in the oil and gas business is what finance calls the internal rate of return, defined earlier in this glossary.
ratio

A mathematical relationship between two values, where one value is divided by the other, commonly expressed as a fraction.
realization

A set of values [ z(u1), z(u2), z(u3), ..., z(un)] that may arise from a random function Z(u). This realization may be regarded as a member of the random function in the same way that an individual observation is regarded as a member of a population. A single run of a Monte Carlo simulation.
recovery

The amount of core recovered compared to the amount cut. The height of fluid in the drill pipe on a drill-stem test which did not produce enough fluid to rise to the surface. The total volume of hydrocarbons that has been or is anticipated to be produced from a well or field.
recovery factor

The percentage of oil or gas in place in a reservoir that ultimately can be withdrawn by primary and/or secondary techniques.
relative frequency

The number of events in a bin divided by the total number of events.


relevance

The extent to which one uncertain event depends on another uncertainty or on a decision, i.e., X is relevant to Y.
reserves

The amount of oil or gas that can be produced from a well or field in the future under current economic conditions using current technology.


RESCUE

RESCUE is a Joint Industry Project managed by the Petrotechnical Open Software Corporation (POSC). The acronym 'RESCUE' stands for REServoir Characterization Using Epicentre. The RESCUE model was created to establish an open standard for representing fine scale 3D earth models. RESCUE models are often used for the purposes of upscaling and simulation. Detailed information on the RESCUE project may be found at the following internet site: http://www.posc.org/rescue/.
reservoir

A subsurface, porous, permeable rock body in which oil or gas or both can be stored. Most reservoir rocks are limestones, dolomites, sandstones, or a combination of these. The three basic types of hydrocarbon reservoirs are oil, gas, and condensate. An oil reservoir generally contains three fluids: gas, oil, and water, with oil the dominant product. In the typical oil reservoir, these fluids occur in different phases as a result of the variation in their specific gravities. Gas, the lightest, occupies the upper part of the reservoir rocks; water, the lower part; and oil, the intermediate section. In addition to occurring as a cap or in solution, gas may accumulate independently of the oil. If so, the reservoir is called a gas reservoir. Associated with the gas, in most instances, are salt water and some oil. In a condensate reservoir, the hydrocarbons may exist as a gas, but, when brought to the surface, some of the heavier gases condense to a liquid or condensate. At the surface, the hydrocarbons from a condensate reservoir consist of gas and a high-gravity crude (i.e., the condensate). Condensate wells are sometimes called gas-condensate reservoirs.
reservoir-drive mechanism

The natural energy by which reservoir fluids are caused to flow out of the reservoir rock and into a wellbore. Solution-gas drives depend on the fact that, as the reservoir is produced, pressure is reduced, allowing the gas to expand and provide the driving energy. Water-drive reservoirs depend on water pressure to force the hydrocarbons out of the reservoir and into the wellbore.


reservoir pressure

Formation pressure. The pressure found within a reservoir at a specific point in time. Sometimes reservoir pressure is thought of as original pressure or geopressure (prior to disturbance) but at other times is thought of as pressure existing after disturbance. Reservoir or formation pressure should be qualified as to time, condition, and place.
resource management

The process of allocating resources properly to achieve the specified objectives.


return on capital

Distribution of cash resulting from depreciation tax savings, the sale of a capital asset or of securities in a portfolio, or any other transaction unrelated to retained earnings.
return on equity

ROE. Indicator of profitability. Determined by dividing net income for the past 12 months by common stockholder equity (adjusted for stock splits). Result is shown as a percentage. Investors use ROE as a measure of how a company is using its money. ROE may be decomposed into return on assets (ROA) multiplied by financial leverage (total assets/total equity).
return on invested capital

Amount, expressed as a percentage, earned on a company's total capital. A useful means of comparing companies, or corporate divisions, in terms of efficiency of management and viability of product lines.
return on investment

ROI. Same as return on invested capital. This is by far the more common term.
risk

Measurable possibility of losing or not gaining value. Risk is differentiated from uncertainty, which is not measurable. Possibility of loss, damage, or any other undesirable event. Generally risk is measured as an explicit quantification of the probability and magnitude of loss.


risk averse

Term referring to the assumption that, given the same return and different risk alternatives, a rational investor will seek the security offering the least risk. A risk averse person values alternatives at less than their expected value.
risk management

The process of identifying and evaluating risks and selecting and managing techniques to adapt to risk exposures.
risk neutral

Someone who is risk neutral is willing to play the long-run odds when making decisions, and will evaluate alternatives according to their expected values. For example, such a decision maker would be indifferent between receiving $1 for certain and an alternative with equal chances of yielding $0 and $2.
risk tolerance

Describes your attitude toward risk. The greater your risk tolerance, the closer the certain equivalent of a gamble will be to its expected value. In general, the greater your wealth, the greater your risk tolerance.
ROCE (Return on Capital Employed)

The return earned on capital employed, where capital employed is all invested capital minus its depreciation.
rock properties

Porosity, Permeability
rose diagram

A polar plot or diagram in which radial distance indicates the relative frequency of an observation at a certain azimuth. Used in dipmeter interpretation. Compare azimuth frequency diagram.
run

Set of rule definitions, execution setup, and corresponding results for a specific case.


run rules

Rules that define how values are treated during the execution of a run.
salt dome

A dome that is formed by the intrusion of salt into overlying sediments. A piercement salt dome is one that has pushed up so that it penetrates the overlying sediments, leaving them truncated. Formations above the salt plug are usually arched so that they dip in all directions away from the center of the dome.

saturation

The fraction or percentage of the pore volume occupied by a specific fluid (e.g., oil, gas, water, etc.). The fluids in the pore spaces may be wetting or nonwetting. In most reservoirs, water is the wetting phase, but a few reservoirs are known to be oil wet. The wetting phase exists as an adhesive film on the solid surfaces. At irreducible saturation of the wetting phase, the nonwetting phase is usually continuous and is producible under a pressure gradient to the well bore.


scenario

A specific set of variables and their associated probabilities that are used for an economic calculation. For example, you could run three different GRV PDFs based on three different assumptions, and then run three different economic evaluations, one for each assumption; each evaluation would be a different scenario. Should be used with a qualifier, e.g., development scenario, subsurface scenario, geological scenario, drilling scenario, facilities scenario, economic scenario, etc.
scenario analysis

Appraises the impact of discrete deviations in forecasted conditions.


scene

The collection of objects visible in the OpenVision window.


secondary recovery

Recovery of petroleum oil from underground reservoirs by using secondary sources of energy, such as injected natural gas or water to increase producing rates and to improve ultimate oil recovery. Water injection, commonly known as water flooding, usually affords higher recovery than gas injection. Gas injection is generally limited to those reservoirs which have a gas cap and in which gas cap expansion is expected to be an efficient natural drive mechanism. Although the terms primary and secondary imply a sequence of use, both mechanisms might work concurrently once secondary recovery is implemented. See also primary recovery and tertiary recovery.
seed

Believe it or not, random is not truly random on a computer: randomness is generated from a seed value, and the same sequence can be repeated by using the same seed.
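A quick demonstration with Python's standard library:

```python
import random

random.seed(123)
first_run = [random.random() for _ in range(3)]

random.seed(123)  # same seed -> identical "random" sequence
second_run = [random.random() for _ in range(3)]

print(first_run == second_run)  # prints True: the sequence repeats exactly
```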
sensitivity chart

A chart that identifies which factor in a forecast is most responsible for the uncertainty surrounding the outcome.
session

A particular work session that can be saved and restored for later use.

sill

The naive equal-weighted variance of all the data entering variogram calculations. The sill is not where the variogram appears to plateau.
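The sill, together with the nugget variance and the range defined elsewhere in this glossary, is read from the experimental variogram, which can be computed with a few lines of code using the standard estimator gamma(h) = (1 / 2N(h)) × Σ [z(x) - z(x+h)]². The 1-D, regularly spaced data below are hypothetical:

```python
def semivariogram(z, lag):
    # gamma(h) = (1 / (2 * N(h))) * sum of squared differences over all
    # pairs of samples separated by the given lag (regular 1-D spacing assumed)
    pairs = [(z[i] - z[i + lag]) ** 2 for i in range(len(z) - lag)]
    return sum(pairs) / (2 * len(pairs))

# Hypothetical, spatially correlated sample values along a line
z = [3.1, 2.9, 3.4, 4.0, 3.8, 4.4, 5.0, 4.7, 5.3, 5.1]
for h in (1, 2, 3):
    print(h, round(semivariogram(z, h), 3))
```

For correlated data like these, the semivariance rises with lag distance; the nugget is the extrapolated value at h = 0, and the range is the lag at which the curve levels off.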
simplified integrated asset model

SIAM. A single model that represents the whole system under consideration including G&G, production prediction, drilling, facilities, scheduling, economics, and the relationships/dependencies between them. Each of the sub-models (G&G, drilling, etc.) is a simplified representation compared to the models used in our classic applications, and can be derived from those detailed applications. This is so that they can be run within a Monte Carlo simulation.
simulation

An organized sensitivity analysis based on distributions of inputs, which are randomly sampled to produce statistical distributions of outputs that can be evaluated using probabilities. Comes in two versions, Monte Carlo and Latin Hypercube, which use different sampling procedures. Should be used with a qualifier, e.g., Monte Carlo simulation, geostatistical simulation, reservoir simulation, etc. Any analytical method meant to imitate a real-life system, especially when other analyses are too mathematically complex or too difficult to reproduce.
skewness

In statistics, a measure of how much the population of outcomes distribution deviates from being symmetrical. Distributions which are skewed to the right (most of the outcomes are near the low end of the range) are said to be positively skewed, while left-skewed distributions are said to be negatively skewed. A skewness value greater than 1 or less than -1 indicates a highly skewed distribution.
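A sketch of the third-standardized-moment calculation behind this measure, using a hypothetical right-skewed sample (population form; sample estimators add small corrections):

```python
def skewness(data):
    # Third standardized moment: positive for right-skewed distributions,
    # negative for left-skewed, zero for symmetric ones
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    return m3 / var ** 1.5

right_skewed = [1, 1, 2, 2, 2, 3, 3, 4, 6, 10]  # long tail to the right
print(round(skewness(right_skewed), 2))
```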
smoothing

Reduction of variability. To avoid conditional bias, it is necessary to smooth when interpolating. Contrast with smearing.
soft data

In the absence of direct core measurements, well log data is considered the Hard Data. All other data types (mainly the seismic) are considered the Soft Data and must be calibrated to the hard data.


spatial correlation

The property of having linear interdependence between random variables in space. The random variables may or may not be different attributes.
standard conditions of temperature and pressure

STP. According to the American Gas Association, 60 degrees Fahrenheit and 1 atmosphere pressure (14.7 psia).
standard deviation

A direct measure of variability. The positive square root of the expected value of the square of the difference between a random variable and its mean. With a normal distribution of data, 68.3% of the data fall within one standard deviation from the mean. Two deviations cover 95.45% of the data and three deviations cover 99.7% of the data.
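The 68.3/95.45/99.7 coverage figures can be checked empirically with simulated normal data (standard library only; the seed and sample size are arbitrary choices for the sketch):

```python
import random

random.seed(7)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean = sum(sample) / len(sample)
sd = (sum((x - mean) ** 2 for x in sample) / len(sample)) ** 0.5

# Fraction of the data within one and two standard deviations of the mean
within_1sd = sum(abs(x - mean) <= sd for x in sample) / len(sample)
within_2sd = sum(abs(x - mean) <= 2 * sd for x in sample) / len(sample)
print(round(within_1sd, 3), round(within_2sd, 3))  # close to 0.683 and 0.954
```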
standard variation

A measure of the dispersion (variation or scatter) of the outcomes about the mean of the population. The standard deviation spans the central 68% of a normal distribution, centered on the mean, and is useful in describing the average deviation.
statistics

Statistics is the science of collecting, processing, analyzing and interpreting numerical data. Statistics distills numerical information to provide more easily (sometimes) understood insights into the population.
step-out well

A well drilled adjacent to a proven well, but located in an unproven area, in order to determine the extent and boundaries of a producing formation.
stochastic

Analysis that assumes input values are not known with certainty. The uncertainty may be the result of incomplete data, limited models, changes over time, and forecasting errors. Some of these sources may be eliminated by further analysis, while others result from limited knowledge. Stochastic refers to the use of statistics to develop the properties that a model will be populated with.


stochastic simulation

A simulation method, such as Monte Carlo simulation, in which the inputs are probability distributions; the output of such stochastic methods is also given in terms of distributions.
stock tank barrel

STB. A 42-gallon barrel of crude oil at standard conditions of temperature and pressure (STP).
stock tank original oil in place

STOOIP. The volume of crude oil, expressed as its volume after separation from gas at the surface, estimated to be present within a geologic volume prior to production.
strategic value

An indirect source of value that must be included in the NPVs if you are to think appropriately about values. It is better to put a rough value on these indirect sources (so they can be discussed and evaluated) than to assume they are worth precisely zero. For example, drilling in politically unstable areas, such as offshore California, should be assigned a negative strategic value for the NPV calculation.
stratigraphic grid

SGRID. 3D grid for reservoir modeling or flow simulation with a corner-point or block-centered geometry. SGRIDS are often derived from the TSURF grids. The SGRIDS are defined by the user to represent divisions in the subsurface and to retain as much as possible the geometry of the natural stratigraphy.
strike

The direction or bearing of a horizontal line drawn on the plane of a structural surface; e.g., inclined stratum, fault plane, etc. The strike is perpendicular to the direction of dip. The bearing can be referred to south or north; e.g., N 30 E or S 30 W. See also dip.
subjective probability

A number between zero and one (inclusively) representing the degree of belief a person attaches to the occurrence of an event.


sunk costs

Costs, usually capital costs, already incurred in a project that cannot be changed by present or future actions. Sunk costs should be ignored in determining whether a new investment is worthwhile.
surface

A plane (2D Grid or Triangulation) representing a geologic boundary in space.


texturing

Texturing maps a pattern to the surface of an object. The pattern used is selected to either enhance the realistic visualization of the object or to provide an additional informational dimension to the rendering.
technical-to-business

T2B. A Landmark trademark process that represents the integration and optimization of technical and business processes across all phases of the upstream life cycle.
tertiary recovery

Recovery methods which increase ultimate oil production beyond that achievable with primary and secondary methods. These methods are usually employed in the latter stages of conventional secondary flooding applications, but may be implemented early in the secondary application or in combination with the secondary method. These oil recovery methods enhance the production of oil by increasing the proportion of the reservoir affected, reducing the amount of residual oil in the swept zones and reducing the viscosity of thick oils. The tertiary methods usually are divided into three broad groups: thermal, miscible and chemical.
time-series forecasting

A forecasting method that uses a set of historical values to predict an outcome. These historic values, often referred to as a time series, are spaced equally over time and can represent anything from monthly sales data to daily electricity consumption to hourly call volumes.


topology

A term used to define the relationships of components of a framework such as surfaces and faults. Generally used in various ways to represent either the form of a surface (topographic as in a topographic map) or used as an adverb to represent the state of a model as in topologically correct.
tornado chart

Tornado charts show which variables have the greatest impact on whatever is plotted on the x axis (usually NPV). The variable with the greatest impact has the largest bar; the variable with the smallest impact has the smallest bar. The bars are ordered from highest to lowest impact, giving the resulting graph a tornado shape. Strictly, a tornado chart is produced by independently varying each input parameter by, say, plus or minus 10%.
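
The procedure above can be sketched directly: vary each input plus or minus 10% while holding the others at base case, and record the swing in the output. The one-line "NPV" model and base-case figures here are made-up illustrations, not Landmark's model.

```python
# Hedged sketch of building tornado-chart data. The toy one-period
# "NPV" function and the base-case values are hypothetical.

def npv(price, production, opex):
    return price * production - opex        # toy one-period "NPV"

base = {"price": 20.0, "production": 1000.0, "opex": 5000.0}

swings = {}
for name in base:
    lo = dict(base); lo[name] = base[name] * 0.9
    hi = dict(base); hi[name] = base[name] * 1.1
    swings[name] = abs(npv(**hi) - npv(**lo))

# Largest bar first, giving the tornado shape.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(name, swing)
```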
triangulation

Also known as Delaunay triangulation. A 2D subdivision of space based on a set of points. The perpendicular bisectors of the lines joining neighboring points define polygons of influence, each taking the attribute value of its enclosed point; the Delaunay triangulation is the dual of this subdivision, connecting points whose polygons of influence share an edge.

true vertical depth

TVD. The vertical distance between a specific location in a borehole and a horizontal plane passing through the depth datum. It is determined from directional surveys.
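
A simplified sketch of deriving TVD from directional-survey stations follows, using the average-angle method (commercial directional software typically uses minimum curvature instead). The survey stations are hypothetical.

```python
import math

# Average-angle TVD calculation: each measured-depth interval contributes
# its length times the cosine of the mean inclination of its endpoints.
# Survey values are hypothetical.

def tvd_average_angle(stations):
    """stations: list of (measured_depth, inclination_deg), MD increasing."""
    tvd = 0.0
    for (md0, inc0), (md1, inc1) in zip(stations, stations[1:]):
        avg_inc = math.radians((inc0 + inc1) / 2.0)
        tvd += (md1 - md0) * math.cos(avg_inc)
    return tvd

survey = [(0, 0.0), (1000, 0.0), (2000, 30.0), (3000, 60.0)]
print(tvd_average_angle(survey))
```

For a perfectly vertical well (inclination zero throughout), TVD equals measured depth, which makes a handy sanity check.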

TSURF grid

A representation of the subsurface model using triangular/tetrahedral geometry. TSURF grids are very flexible for building a structural framework that conforms to horizons and faults, retaining as much as possible of the geometry of the natural structure.

uncertainty

The lack of assuredness about the truth of a statement or about the exact magnitude of an unknown measurement or number. Uncertainty is the central concept in the decision making that follows any geostatistical study, in which case it is related to the risk of making an incorrect decision because the estimates do not agree with reality. Uncertainty is often measured by parameters such as confidence limits, probability intervals, and standard errors. More broadly, any event for which the outcome is not known at the time a decision is made; ignorance about the past, present, or future state of some system. Uncertainty can be quantified using probabilities.
uncertainty bounds

The ranges of uncertainty for each surface or fault within the scenario.
Uncertainty Collator

UC. A set of functionality that allows the user to gather information on uncertainties, interpret that information to generate PDFs for particular parameters, describe the uncertain nature of the relationships between parameters, and pass the PDFs to a Monte Carlo simulation or decision-making algorithm.
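
The Monte Carlo step that such parameter PDFs feed can be sketched as follows. The distributions (triangular area, normal thickness, uniform recovery factor) and the volumetric model are assumptions for demonstration only, not the Uncertainty Collator's actual internals.

```python
import random

# Illustrative Monte Carlo simulation: parameter PDFs are sampled and
# combined into a distribution of an outcome, summarized by percentiles.
# All distributions and the reserves model are hypothetical.

random.seed(42)

def sample_reserves():
    area = random.triangular(100, 300, 180)      # hypothetical areal extent
    thickness = random.normalvariate(50, 5)      # hypothetical thickness
    recovery = random.uniform(0.15, 0.35)        # hypothetical recovery factor
    return area * thickness * recovery

trials = sorted(sample_reserves() for _ in range(10_000))
p10, p50, p90 = (trials[int(len(trials) * p)] for p in (0.10, 0.50, 0.90))
print(p10, p50, p90)
```

The spread between the P10 and P90 outcomes is one common way to report the uncertainty that the PDFs encode.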
value

Value = Σ from t = 1 to n of (Price × Production − Opex − Capex − Taxes)_t / (1 + WACC)^(t − 0.5)

The discounted sum of annual net cash flows (revenue less operating expenditure, capital expenditure, and taxes), discounted at the weighted average cost of capital (WACC) with mid-year timing.
variance

In statistics, a measure of the dispersion of the outcomes about the mean of the population. The variance is calculated as the square of the standard deviation. Variance is an indication of the risk or uncertainty of the distribution. When the population of outcomes is close to the mean of the population distribution, the variance is small; when the outcomes are widely scattered, the variance is large.
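
The relationship stated above, variance as the square of the standard deviation, can be confirmed on a small population. The outcome values are hypothetical.

```python
import statistics

# Population variance computed directly and as the square of the
# population standard deviation; the two agree. Data are hypothetical.

outcomes = [8.0, 10.0, 12.0, 10.0]
var = statistics.pvariance(outcomes)
std = statistics.pstdev(outcomes)
print(var, std ** 2)   # the two values agree
```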

variogram

A chart that relates distance to correlation by showing the moment of inertia at increasing lag distances. An important tool for geostatistical modeling, which attempts to correlate natural phenomena in space.
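
An experimental semivariogram on a 1D transect can be sketched as below: for each lag h, gamma(h) is half the mean squared difference between values separated by h samples. The porosity values are made up for illustration.

```python
# Experimental semivariogram for equally spaced 1D data. For each lag h,
# gamma(h) = 0.5 * mean of (z(x+h) - z(x))^2. Sample data are hypothetical.

def semivariogram(values, max_lag):
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [(values[i + h] - values[i]) ** 2
                 for i in range(len(values) - h)]
        gamma[h] = 0.5 * sum(diffs) / len(diffs)
    return gamma

porosity = [0.10, 0.12, 0.11, 0.15, 0.14, 0.18, 0.17, 0.20]
for h, g in semivariogram(porosity, 3).items():
    print(h, round(g, 5))
```

Plotting gamma(h) against h and fitting a model (nugget, sill, range) is the interpretation step the glossary's variogram-related index entries refer to.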
vertical exaggeration

The vertical scale factor. The heights of displayed objects are exaggerated (or stretched) when this factor is greater than 1.0. Controlled in the OpenVision 3D Viewer window by the vertical exaggeration slider control.
view

A window within the common desktop or parent application.


viewing angles

The angles that define the viewer's position relative to the center of interest, expressed as azimuth, elevation, and roll angles.
viewpoint

The point from which the scene is viewed: the viewer's position or, more specifically, the virtual location of the viewer's eye. Point of view.
viewport

The portion of the OpenVision 3D Viewer window in which the scene is displayed.
voxbody

An irregular 3D polygonal shape defined by seismic attributes.


voxel

A single cell within a 3D seismic cube.


voxet

A cube of voxels.
VRML (Virtual Reality Modeling Language)

A file format specifying a means to store a 3D scene. VRML files can be displayed with VRML-enabled Web browsers.

WACC

Weighted average cost of capital
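
The standard textbook form of WACC (not specific to Landmark's software) weights the cost of equity and the after-tax cost of debt by their shares of total capital. The capital structure and rates below are hypothetical.

```python
# Standard WACC formula: (E/V) * Re + (D/V) * Rd * (1 - tax_rate).
# All figures are hypothetical illustrations.

def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    total = equity + debt
    return ((equity / total) * cost_of_equity
            + (debt / total) * cost_of_debt * (1 - tax_rate))

print(wacc(equity=600.0, debt=400.0,
           cost_of_equity=0.12, cost_of_debt=0.07, tax_rate=0.30))
```

This is the discount rate that appears in the value formula earlier in the glossary.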


water-drive

The reservoir-drive mechanism in which oil is produced by the expansion of the volume of the underlying water, which forces the oil into the wellbore. In general, there are two types of water drive: bottom-water drive, in which the oil is totally underlain by water, and edgewater drive, in which only the edge of the oil is in contact with the water. Bottom-water drive is more efficient.
watered-out

Of a well, having gone to water.
water encroachment

The movement of water into a producing formation as the formation is depleted of oil and gas by production.
water flooding

A method of secondary recovery in which water is injected into a reservoir in order to move additional quantities of oil toward producing wells.
water saturation

The fraction or percentage of the pore volume of a rock occupied by water. The occupation may take different forms; i.e., funicular, insular, or pendular saturation.
well completion

The activities and methods necessary to prepare a well for the production of oil and gas; the method by which a flow line for hydrocarbons is established between the reservoir and the surface. The method of well completion used by the operator depends on the individual characteristics of the producing formation or formations. These techniques include open-hole completions, conventional perforated completions, sand-exclusion completions, tubingless completions, multiple completions, and miniaturized completions.
well properties

KH (permeability × thickness) and skin.


well constraints

Bottom-hole pressure and minimum/maximum rates.

well spacing

The regulation of the number and location of wells over a reservoir as a conservation measure.
what-if scenarios

Scenarios that characterize what might happen if different conditions occur in a hypothetical business climate. These scenarios are based on a range of estimates. Examples: What if sales are best case but expenses are the worst case? What if sales are average, but expenses are the best case? What if sales are average, expenses are average, but sales for the next month are flat? What-if scenarios are extremely time-consuming to construct. They produce large amounts of data, but they do not quantify the probability of achieving different outcomes.
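
Enumerating the scenario combinations described above is mechanical: every pairing of a sales case with an expense case yields one scenario. The best/average/worst figures are hypothetical.

```python
import itertools

# Build a what-if scenario grid: each combination of a sales case and
# an expense case gives one margin outcome. Figures are hypothetical.

sales = {"best": 1200, "average": 1000, "worst": 800}
expenses = {"best": 600, "average": 700, "worst": 850}

scenarios = {
    (s, e): sales[s] - expenses[e]
    for s, e in itertools.product(sales, expenses)
}
print(len(scenarios))                    # 9 combinations
print(scenarios[("best", "worst")])      # best-case sales, worst-case expenses
```

Note that, exactly as the definition warns, the grid grows multiplicatively with each added variable and assigns no probability to any cell.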
workflow

A series of decision-centered steps.


workspace

A collection of views relating to a step in a workflow.


Z-axis

A third dimension added to a crossplot of two parameters in an X-Y plane. The z-axis is perpendicular to both x- and y-axes.
Z score

Scores that indicate how far and in what direction a statistical value deviates from its distribution's mean, expressed in units of its distribution's standard deviation.
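
The definition translates directly to (value − mean) / standard deviation of the distribution. The depth values used here are illustrative.

```python
import statistics

# Z score: signed deviation from the distribution's mean, in units of
# the distribution's standard deviation. Sample data are hypothetical.

def z_score(x, population):
    return (x - statistics.mean(population)) / statistics.pstdev(population)

depths = [1000.0, 1100.0, 1200.0, 1300.0, 1400.0]
print(z_score(1400.0, depths))   # positive: above the mean
print(z_score(1000.0, depths))   # negative: below the mean
```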

A
Analogue Data D - 24 Anisotropy C - 7, D - 60 Annealing D - 98

B
Breakeven Discount Rate F - 2 build section G - 2

C
Cell Size D - 39 Coefficient of Skewness D - 14 Coefficient of Variation D - 14 Cokriging D - 97 Colocated Cokriging D - 97 Components of the Variogram D - 53 Correlation Coefficient D - 18 Covariance D - 16 Crossplot (or Scatterplot) D - 17 Cyclicity D - 61

D
Data Types D - 24 Declustering D - 45 Density C - 3 Depth Error C - 19 Discounted Payout F - 3 DMO C - 13

E
Efficiency Measures F - 2 end of build point G - 3 Erroneous Data D - 42

F
Formation Velocity C - 6

G
Geostatistics D - 11

H
horizontal section G - 3

I
Image Rays C - 26 interpretation uncertainties C - 33 Inter-Quartile Range D - 15

K
kickoff point G - 2

L
Large Scale Trends D - 62 Lumping Populations D - 43

M
Maximum D - 12 Mean or Expected Value D - 13 Median D - 13 http://longhorn.zycor.lgc.com/geostats/default.html D - 2 Migration Uncertainties C - 22 Minimum D - 12 Mode D - 13 Modeling Carbonate Reservoirs D - 29 Modeling Scale D - 23 Modeling Siliciclastic Reservoirs D - 27

N
NMO C - 7 Normal Rays C - 26 NPV F - 2 Nugget Effect D - 53

O
Outliers D - 42

P
Petrophysical Properties D - 23

Q
Quantile-Quantile Plots D - 15 Quantiles (Quartiles, Deciles, Percentiles...) D - 15

R
Range D - 53

Ray Tracing C - 25

R2003.2

Index

I-1

S
Seismic Velocity C - 7 Sequential Simulation D - 77 Sill D - 53 Stacking Velocity C - 16 Standard Deviation D - 14 Statistics D - 11

T
tangent section G - 3 Time-to-depth Conversion C - 2 Trends D - 47

U
Uncertainty D - 19

V
Variable D - 11 Variance D - 13 variogram directions and lag distances D - 55 Variogram Interpretation D - 59 Variogram Models D - 63 Variograms D - 49 velocities C - 5 Velocity C - 4 Velocity Accuracy C - 18 Velocity Frequency C - 20 vertical section G - 2
