
A Reservoir Simulation Uncertainty Modelling Workflow Tool

Brennan Williams
Geo Visual Systems Ltd

Contents
1. Introduction
2. Simulation model overview
3. What are the problems?
4. What do we need?
5. The tool
6. Case study
7. What comes next?

1. Introduction

describe the ongoing development of a software tool for uncertainty modelling and history matching in the reservoir simulation discipline of the oil and gas industry.

brief overview of simulation models over the last decade

what are the problems? data management etc.

What do we need? aims of the tool: run manager, uncertainty modelling, data analysis

case study: uncertainty modelling example

what comes next? history matching, support for other simulators

2. Simulation Model Overview


computer model used to predict the flow of fluids (typically oil, water, and gas) through porous media
dominated by finite difference simulators (Eclipse, VIP)
some finite element and streamline simulators
Output
grid geometry
initial grid property data: one value per cell in the grid (porosity, permeability, etc.)
recurrent grid property data: one value per cell in the grid for each report timestep (pressure, oil/water/gas saturations)
plot vectors: e.g. production rate for each well for each plot timestep

2. Simulation Model Overview


Then:

built a single (incorrect) simulation model to describe the reservoir.

manually history match this single model

use the matched model in a series of prediction runs to compare different field
production scenarios.

Now:

Build multiple models to gauge uncertainty

history match multiple models

prediction runs using multiple history matched models

2. Simulation Model Overview

1992
small model, 13000 cells, 15 year simulation, 300 plot vectors, 20MB

2. Simulation Model Overview

1998
medium model, 200,000 cells, 150 wells, 2,000 plot vectors, 70MB

2. Simulation Model Overview

1998
small model, 60,000 cells, 600 wells, 18,000 plot vectors, 127MB

2. Simulation Model Overview

2002
very large model, 3,000,000 cells, 800MB

2. Simulation Model Overview

2003
medium model with coarsening and local grid refinement
120,000 cells, 200MB

2. Simulation Model Overview

2006
large model with nested LGR, 300,000+ cells,
12,000 plot vectors, 1.3GB

2. Simulation Model Overview

2006
large model, 300,000 cells, 300+ wells, 360,000 plot vectors, 1.8GB

3. What are the problems?


Single Model

only one representation out of an unlimited number of possible models that match the known history reasonably well

no understanding of uncertainty in a single model

3. What are the problems?


Multiple Models.

Models are getting bigger

file size issues

runtime issues

Data analysis

How do we model uncertainty in as few simulation runs as possible?

3. What are the problems?

Model size of 200,000 cells

40 wells

2,000 plot vectors

500MB output files per run

9 uncertainty variables with 3 values each (low, mid, high) = 3^9 ≈ 20,000 runs

40,000 hours (4 years) runtime @ 2 hours per run, and 10 TB of data

2*9+1 = 19 Tornado runs = 38 hours runtime, 10 GB of data (see the sketch below)
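
A back-of-the-envelope sketch of that arithmetic in Python (the 2-hour runtime and 500 MB per run are the figures quoted on this slide; nothing here is tool specific):

# Run-count arithmetic for 9 variables with 3 levels, 2 hours and 500 MB per run.
n_vars, n_levels = 9, 3
hours_per_run, mb_per_run = 2, 500

full_factorial = n_levels ** n_vars   # 3^9 = 19,683, i.e. roughly 20,000 runs
tornado = 2 * n_vars + 1              # low & high per variable plus the base case = 19

for name, runs in (("full factorial", full_factorial), ("tornado", tornado)):
    print(f"{name}: {runs} runs, {runs * hours_per_run} hours, "
          f"{runs * mb_per_run / 1000:.1f} GB")
# -> full factorial: 19683 runs, 39366 hours, 9841.5 GB
# -> tornado: 19 runs, 38 hours, 9.5 GB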

4. What do we need?

multiple models to model uncertainty

Engineer/user designed workflow - capture steps in the simulation study process

select/define independent/control variables

Choice of algorithms to use to change our control variables

run/deck generation and submission i.e. a run manager

data analysis tools

assisted history matching

simulator independent

5. The tool - rezen


Phase 1 - Run Manager

open box

Phase 2 - Data Analysis

Phase 3 - Uncertainty Modelling

Phase 4 - History Matching (ongoing)

5. The tool - data hierarchy terminology

ensemble : a set of related simulation decks varying around a core model

deck : an individual simulator dataset or run (both input & output files)

variable : a simulation parameter whose uncertainty or sensitivity we wish to investigate

deck vector : plot vector imported from simulation output files (e.g. FOPT)

ensemble vector : set of related deck vectors, one for each deck (a minimal data-structure sketch follows below)
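
A minimal Python sketch of this hierarchy; class and field names are illustrative only, not Rezen's actual data model:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Variable:
    """A simulation parameter whose uncertainty or sensitivity we investigate."""
    name: str
    low: float
    mid: float
    high: float

@dataclass
class Deck:
    """An individual simulator dataset/run (input & output files)."""
    name: str
    variable_values: Dict[str, float]                               # value of each variable in this deck
    vectors: Dict[str, List[float]] = field(default_factory=dict)   # deck vectors, e.g. "FOPT"

@dataclass
class Ensemble:
    """A set of related decks varying around a core model."""
    name: str
    variables: Dict[str, Variable] = field(default_factory=dict)
    decks: Dict[str, Deck] = field(default_factory=dict)

    def ensemble_vector(self, key: str) -> Dict[str, List[float]]:
        """One deck vector (e.g. FOPT) collected from every deck: an ensemble vector."""
        return {d.name: d.vectors[key] for d in self.decks.values() if key in d.vectors}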

5. The tool - run manager

Run Manager
manage multiple reservoir simulation runs
supports Eclipse
manage the submission of decks to the simulator queues
provides convenience tools such as:

scan simulation input & output files (including binary)

conversion tools (binary to text)

built-in diff between related .DATA files (see the sketch below)
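
The built-in diff compares related .DATA files; a minimal sketch of the same idea using Python's standard difflib (file names are hypothetical):

import difflib

def diff_decks(path_a: str, path_b: str) -> str:
    """Return a unified diff of two related simulator input decks."""
    with open(path_a) as fa, open(path_b) as fb:
        lines_a, lines_b = fa.readlines(), fb.readlines()
    return "".join(difflib.unified_diff(lines_a, lines_b, fromfile=path_a, tofile=path_b))

# Hypothetical file names:
# print(diff_decks("BASE_CASE.DATA", "D0001.DATA"))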

5. The tool - uncertainty modelling

Uncertainty Modelling
supports the engineer/user designed uncertainty modelling workflow
select uncertainty modelling algorithm, define ensemble variables and build an ensemble control file with directives
generate simulation decks and submit decks to the simulator queues
built-in diff utility between related decks
plots of objective function vs generated ensemble variable values

5. The tool - uncertainty modelling

Uncertainty modelling algorithms (sketched in code below)

Discrete cases: e.g. create n runs by specifying n values of each variable.
Combination: e.g. create n1*n2*... runs by specifying ni values for the i-th variable.
Tornado method, or one-at-a-time uncertainty analysis. Requires 3 discrete values (low, mid, high) of each variable.
Monte-Carlo simulation. User can specify continuous or discrete distributions which are randomly sampled for each run. User specifies the number of runs to do.
Plackett-Burman experimental design. Requires "+1" and "-1" values of each variable, but it's usually a good idea to test for curvature, so run an all-zero case too.
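
A minimal Python sketch of how the combination, tornado and Monte-Carlo designs above turn variable values into a list of runs (the variable names and ranges are illustrative, and the uniform sampling in the Monte-Carlo case is just one possible choice of distribution):

import itertools
import random

# Illustrative low/mid/high values for three ensemble variables
var_defs = {
    "aqu_size":       (0.000001, 1.0, 2.0),
    "res_cont":       (1.0, 2.0, 3.0),
    "highperm_leman": (1.0, 35.0, 70.0),
}

def tornado(defs):
    """One-at-a-time: base case (all mids) plus a low and a high run per variable = 2n+1 runs."""
    base = {k: v[1] for k, v in defs.items()}
    runs = [base]
    for k, (lo, _, hi) in defs.items():
        runs.append({**base, k: lo})
        runs.append({**base, k: hi})
    return runs

def combination(defs):
    """Full combination: n1*n2*... runs over all specified values of each variable."""
    keys = list(defs)
    return [dict(zip(keys, combo)) for combo in itertools.product(*(defs[k] for k in keys))]

def monte_carlo(defs, n_runs, seed=0):
    """Randomly sample each variable for every run (here uniform between low and high)."""
    rng = random.Random(seed)
    return [{k: rng.uniform(v[0], v[2]) for k, v in defs.items()} for _ in range(n_runs)]

print(len(tornado(var_defs)), len(combination(var_defs)), len(monte_carlo(var_defs, 50)))
# -> 7 27 50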

5. The tool - uncertainty modelling

Multiple Scenario Approach

need to identify most significant reservoir uncertainties and ranges

varying one-parameter-at-a-time (Tornado) is a starting point

then a reliable method for combining parameters in an efficient set of simulations is needed (experimental design)

and then the ability to use statistics to gauge the impact of the results

5. The tool - data analysis

Data Analysis
import output vector data from simulator output files (for all decks)
partially filter the data that is imported
plot deck vectors (i.e. vectors for a specific deck)
plot simulated with history (e.g. FOPT & FOPTH)
plot ensemble vector against deck number or against ensemble variable
simple statistics on individual vectors (see the sketch below)
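
A minimal sketch of the "simple statistics" step, summarising the final value of one ensemble vector (e.g. FOPT) across all decks; the data and the rank-based percentile estimate are illustrative only:

import statistics

def summarise(ensemble_vector):
    """ensemble_vector: {deck_name: [values per timestep]} -> summary of the final values."""
    finals = sorted(v[-1] for v in ensemble_vector.values())
    n = len(finals)
    def pick(frac):                                   # crude rank-based percentile
        return finals[min(n - 1, int(frac * n))]
    return {
        "min": finals[0], "max": finals[-1], "mean": statistics.mean(finals),
        # oil-industry convention: P90 is the low outcome, P10 the high outcome
        "P90": pick(0.10), "P50": pick(0.50), "P10": pick(0.90),
    }

# Made-up FOPT end points for three decks:
print(summarise({"D0001": [0.0, 5.1], "D0002": [0.0, 6.3], "D0003": [0.0, 4.8]}))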

5. The tool - workflow description

1. create ensemble
2. define core model
3. define variables & generate range of values
4. generate & submit decks, import output data
5. plot vectors & analyze results

differences between ensembles and/or decks may be:

subsurface unknowns e.g. geological realizations etc.

development scenarios e.g. infill location, water injection start date

numerical issues e.g. model sizes, computational parameters etc.

5. The tool - ensemble vector plot 1

5. The tool - ensemble vector plot 2

5. The tool - ensemble vector plot 3

5. The tool - ensemble vector table

5. The tool - ensemble vector plot vs deck

5. The tool - ensemble vector plot vs ensemble variable

5. The tool - ensemble variable plot

5. The tool - ensemble variable table

6. Case Study - Infill Well

Infill well vs no infill well

11 different geological realisations (perm & poro distribution)

Want to identify which model variables are most significant

Tornado algorithm on 9 variables to reduce to 4 variables

Combine algorithm on 4 variables

Allocate probabilities to variable values to generate an S-curve

Select a number of models to use in future runs

6. Case Study - Infill Well Project Workflow

Task Workflow (flow diagram; each branch is repeated for the prediction case):

create base case -> copy base case -> define tornado values / define combined values -> run hmatch cases -> visually inspect hmatch -> calculate incrementals -> identify most significant variables -> generate S-curve -> determine P90-50-10

Stage 1 - Identify Key Variables and Valid Cases
Stage 2 - In Depth Analysis of Key Variables

6. Case Study - Infill Well Project Workflow

[Same workflow diagram, with Stage 1 highlighted: Identify Key Variables and Valid Cases]

6. Case Study - Setup Ensemble

Create ensemble variables and assign low-mid-high values to each

Variables will be used in tornado analysis to identify the big hitters

6. Case Study - Setup Ensemble

Directives

{formula 1-$residual_gas}

enclosed in curly braces
contain one statement, or statements separated by semi-colons
a statement is a command word optionally followed by arguments
variables in the argument must be preceded by a $

value directive:

MULTIPLY PERMX {value $highperm_leman} 1 70 1 100 4 4 /

formula directive:

MAXVALUE SWL {formula 0.999-$residual_gas} 1 70 100 1 98 /
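
A minimal sketch of how {value ...} and {formula ...} directives could be expanded into concrete deck lines; the regex-and-eval mechanism is an assumption for illustration, not Rezen's actual implementation:

import re

def expand_directives(template: str, values: dict) -> str:
    """Replace {value $x} and {formula <expr>} directives with evaluated numbers."""
    def repl(match):
        cmd, _, arg = match.group(1).strip().partition(" ")
        if cmd not in ("value", "formula"):
            raise ValueError(f"unsupported directive: {cmd}")
        # substitute $variables with their numeric values, then evaluate the expression
        expr = re.sub(r"\$(\w+)", lambda m: repr(values[m.group(1)]), arg)
        return f"{eval(expr):g}"          # eval is fine for a sketch, not for production use
    return re.sub(r"\{([^{}]+)\}", repl, template)

deck_line = "MAXVALUE SWL {formula 0.999-$residual_gas} 1 70 100 1 98 /"
print(expand_directives(deck_line, {"residual_gas": 0.25}))
# -> MAXVALUE SWL 0.749 1 70 100 1 98 /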

6. Case Study - Setup Ensemble

Edit the ensemble control file

{value $residual_gas}

insert directives, i.e. instructions for Rezen about how to use the variables

6. Case Study - Generate Decks

Decks Created

Each deck corresponds to one .DATA simulator input file.

6. Case Study - Setup Ensemble Vectors

simulation output files imported

6. Case Study - Setup Ensemble Vectors


View deck vectors

6. Case Study - Setup Ensemble Vectors


View ensemble vectors
Ensemble vector is a collection of related deck vectors
Time Plot

6. Case Study - Setup Ensemble Vectors


View ensemble vectors
Tornado Plot

6. Case Study - Visual History Match

Identify cases that don't history match FGPR
Left mouse button to drag the graph
Right mouse button to drag the legend
Press z in the plot to zoom

Decks Case
D0001 aquifer size = 0
D0002 aquifer strength = 0
D0003 reservoir cont, polygons = A+B only
D0004 carboniferous leman transmissibility = 0
D0006 Facies proportion = 30:70
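
That visual screening could also be approximated programmatically; a minimal sketch that flags decks whose simulated FGPR deviates from history by more than a chosen tolerance (the 10% threshold and the data layout are assumptions):

def bad_history_match(sim, hist, rel_tol=0.10):
    """True if any timestep deviates from history by more than rel_tol (where history is non-zero)."""
    return any(h != 0 and abs(s - h) / abs(h) > rel_tol for s, h in zip(sim, hist))

def screen_decks(fgpr_by_deck, fgprh, rel_tol=0.10):
    """fgpr_by_deck: {deck_name: [FGPR per timestep]}, fgprh: historical FGPR at the same timesteps."""
    return [name for name, sim in fgpr_by_deck.items() if bad_history_match(sim, fgprh, rel_tol)]

# Made-up rates for two decks against a flat 100-unit history:
print(screen_decks({"D0001": [95.0, 80.0], "D0002": [101.0, 99.0]}, fgprh=[100.0, 100.0]))
# -> ['D0001']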

6. Case Study - repeat for infill well

Repeat same process as before

Data check ensemble control file
Generate decks
Submit decks
Create ensemble vectors

6. Case Study - Identify most significant variables

[Tornado plots: R2 FGPT, R2P1 FGPT, R2P1 FGPT_F, R2 FGPT_F]

These are only results from R2; in reality we need to consider all 11 realisations together.

6. Case Study - Infill Well Project Workflow

[Same workflow diagram, with Stage 2 highlighted: In Depth Analysis of Key Variables]

6. Case Study - Stage 2

In depth analysis of key variables

Key variables were found to be:
Reservoir continuity (res_cont)
Aquifer size (aqu_size)
Carboniferous facies proportion (carb_facies)
High perm streak in Leman (highperm_leman, client suggestion)

Aquifer strength was also significant, but it is directly related to aquifer size so it was discarded from further analysis.

Full factorial analysis of the 4 variables = 3^4 = 81 cases/realisation.
But, from the history match in Stage 1, the aqu_size downside, res_cont downside and carb_facies downside can be discarded.
So the full factorial analysis of the 4 variables = 2x2x2x3 = 24 cases/realisation (see the sketch below).
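
A minimal sketch of that reduction: build the full 3^4 factorial and drop the downside levels ruled out by the Stage 1 history match (level labels are illustrative):

import itertools

levels = {
    "res_cont":       ["low", "mid", "high"],
    "aqu_size":       ["low", "mid", "high"],
    "carb_facies":    ["low", "mid", "high"],
    "highperm_leman": ["low", "mid", "high"],
}
# Downside levels ruled out by the Stage 1 history match
discarded = {"res_cont": "low", "aqu_size": "low", "carb_facies": "low"}

keys = list(levels)
cases = []
for combo in itertools.product(*levels.values()):
    case = dict(zip(keys, combo))
    if all(case[k] != level for k, level in discarded.items()):
        cases.append(case)

print(3 ** 4, len(cases))   # -> 81 24 (full factorial vs cases remaining per realisation)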

6. Case Study - Stage 2

Load output files and import vectors

6. Case Study - Generating S-Curve

Ensemble Variable       low        prob    mid    prob    high    prob
Reservoir Continuity    1          0.30    2      0.40    3       0.30
Aquifer Size            0.000001   0.25    1      0.50    2       0.25
Facies proportion       L          0.30    M      0.35    H       0.35
High Perm Streak        2          0.20    1      0.40    1       0.40
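
A minimal sketch of how such a table of level probabilities can be turned into an S-curve: each combined case gets the product of its level probabilities, and the cumulative probability is accumulated over the cases sorted by outcome. The outcome function standing in for the per-case simulation result (cumulative oil per well) is hypothetical:

from itertools import product

# Level probabilities from the table above
probs = {
    "res_cont":       {"low": 0.30, "mid": 0.40, "high": 0.30},
    "aqu_size":       {"low": 0.25, "mid": 0.50, "high": 0.25},
    "carb_facies":    {"low": 0.30, "mid": 0.35, "high": 0.35},
    "highperm_leman": {"low": 0.20, "mid": 0.40, "high": 0.40},
}

def s_curve(outcome):
    """outcome(case) -> e.g. cumulative oil per well for that case, taken from the simulation runs.
    Returns (outcome value, cumulative probability) points for plotting an S-curve."""
    keys = list(probs)
    points = []
    for combo in product(*(probs[k] for k in keys)):     # iterate all level combinations
        case = dict(zip(keys, combo))
        p = 1.0
        for k in keys:
            p *= probs[k][case[k]]                       # joint probability of this case
        points.append((outcome(case), p))
    points.sort()
    curve, cum = [], 0.0
    for x, p in points:
        cum += p
        curve.append((x, cum))                           # P(outcome <= x)
    return curve

# Hypothetical outcome just to exercise the function: the number of 'high' levels in the case
print(s_curve(lambda case: sum(v == "high" for v in case.values()))[-1])
# last point: highest outcome, cumulative probability ~1.0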

6. Case Study - Generating S-Curve

[S-curve plot: cumulative probability vs cumulative oil per well (MMstb), comparing Monte Carlo (250 runs), ED + Tornado (23 runs) and Plackett-Burman ED (10 runs)]

7. What comes next?

run manager
support for additional simulators
integration with load balancers

data analysis
s-curve generation & display
response surface plots
user defined objective/goodness-of-fit functions

history matching
user defined history match variables, ranges, objective functions
optimisation algorithms

7. What comes next?...data analysis - response surface

7. What comes next?...data analysis - objective function

Weighted, normalised misfit over all wells and report times:

F = Σ_{j=1..nwells} Σ_{t=1..ntimes} w_j · w_t · |s_t - h_t|^n
    ---------------------------------------------------------
    Σ_{j=1..nwells} Σ_{t=1..ntimes} w_j · |w_t · h_t|^n

where s_t and h_t are the simulated and historical values at report time t for well j, w_j is a well weight, w_t a time weight and n an exponent.
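
A minimal Python sketch of a misfit of that form; the normalisation by the weighted history term follows the reconstruction above and should be treated as an assumption:

def objective(sim, hist, well_weights, time_weights, n=2):
    """Weighted, normalised misfit between simulated and historical well vectors.

    sim, hist: {well_name: [value per report time]}; well_weights: {well_name: wj};
    time_weights: [wt per report time]; n: exponent (n=2 gives a least-squares style measure).
    """
    num = den = 0.0
    for well, wj in well_weights.items():
        for wt, s, h in zip(time_weights, sim[well], hist[well]):
            num += wj * wt * abs(s - h) ** n
            den += wj * abs(wt * h) ** n
    return num / den if den else float("inf")

# Made-up vectors for two wells over three report times:
sim  = {"W1": [10.0, 12.0, 15.0], "W2": [5.0, 6.0, 8.0]}
hist = {"W1": [11.0, 12.0, 14.0], "W2": [5.0, 7.0, 8.0]}
print(objective(sim, hist, {"W1": 1.0, "W2": 0.5}, [1.0, 1.0, 1.0]))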

7. What comes next?...history matching

no history match is unique

aim is to get a model with good predictive capability

define and implement workflow

goodness-of-fit / objective functions

selecting history match variables & defining value ranges

defining algorithms for adjusting the history match variables in the model to
improve the match
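
As a hedged illustration of that last point, a minimal random-search loop that samples history-match variables within their ranges and keeps the values with the best objective; the run_and_score callback standing in for deck generation, simulation and misfit calculation is hypothetical:

import random

def random_search(ranges, run_and_score, n_iter=50, seed=0):
    """ranges: {variable: (low, high)}; run_and_score(values) -> objective value (lower is better)."""
    rng = random.Random(seed)
    best_values, best_obj = None, float("inf")
    for _ in range(n_iter):
        values = {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
        obj = run_and_score(values)   # in practice: build the deck, run the simulator, compute the misfit
        if obj < best_obj:
            best_values, best_obj = values, obj
    return best_values, best_obj

# Toy stand-in for a simulation plus misfit calculation:
print(random_search({"aqu_size": (0.0, 2.0), "res_cont": (1.0, 3.0)},
                    lambda v: (v["aqu_size"] - 1.2) ** 2 + (v["res_cont"] - 2.0) ** 2))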
