
5D Seismic Exploration

OVERVIEW
We continuously develop new technology to ensure that our customers have access to the solutions
that they need to address their geophysical requirements. Key Seismic technology developments
are guided by our customers' needs as well as our awareness of new and promising areas of
geophysical research. Herein, we catalog some of Key Seismic’s most recent developments that are
being used in production processing workflows. New technology is implemented by our in-house
software development team and all processing is carried out on our scalable processing system.

MWD
We have developed a model-based water-layer demultiple (MWD) technique specifically designed to
attenuate multiples in shallow water. This technique is similar to Surface Related Multiple Elimination
(SRME) which has become the de facto standard for marine demultiple. SRME predicts multiples by
convolving the data with successive estimates of the primaries in a recursive estimation procedure. It
is very effective for both 2D and 3D multiple attenuation, in moderate to deep water. However,
SRME can struggle with shallow water multiples, especially in the presence of a hard water bottom.
In shallow water, the water-layer multiples can be recorded with significant amplitudes up to high
orders, where the order of a multiple refers to the number of downward bounces from the sea
surface. The high order water-layer multiples often overlap peg-leg multiples from deeper events.
The subsequent adaptive subtraction must simultaneously match all orders of multiples and this
often leads to poor multiple attenuation results.

What is MWD?

Figure 1. Types of free-surface multiple.


Figure 1 illustrates two different types of free-surface multiple. On the left is a water-layer multiple
which is a multiple with at least one upwards bounce at the water bottom and one downward bounce
at the surface. It is a special case of the more general free-surface multiple, which must have a
bounce from the free-surface but may or may not include an upward bounce at the water bottom.
SRME addresses both of these types of multiples. MWD on the other hand only seeks to attack the
water-layer multiples and defers the remaining free-surface multiples for subsequent attenuation by
SRME.

Figure 2. First and second order water-layer multiples.


Figure 2 shows two different orders of water-layer multiple for illustration. Each is constructed by
combining the water layer Green’s function (shown in red) with a general raypath (shown in blue).
The blue raypaths represent events in the recorded data. Applying the Green’s function to any order
of water-layer multiple predicts the next order of water-layer multiple. In Figure 2, the second order
multiple (right) is constructed by combining a water layer raypath (red) with the blue raypath which is
a first order multiple (blue). Thus, all orders of water-layer multiple are predicted by operating on the
data with the Green’s function.
To describe this in more detail we will examine shot-side multiple removal. Similar to SRME, the
MWD method relies upon cross-convolution, in this case cross-convolution of the water-bottom
Green’s function with the data. The water bottom Green's function was represented by the red
raypath in Figure 2. The Green’s function used is the wavefield recorded at various points on the
surface due to an impulse generated at the shot, with reflection from only the water bottom.
Given any input trace with shot position S and receiver position R, the water-layer multiples are then
predicted by convolving the Green’s function from shot position S and receiver position X with the
data from shot position X and receiver position R, where X is the downward reflection point (DRP)
for the multiple. This is repeated for all sensible positions of X (based on aperture considerations),
and the results are summed to produce our multiple estimate. This is illustrated for two such DRPs,
X1 and X2, in Figure 3. Note that in Figure 3, position X2 would correspond to a non-specular
downward reflection for this relatively flat geology. However, this cannot be known a priori, and it
might correspond to a specular reflection for some other dipping reflector. The MWD method relies
on Fermat’s principle, such that the summation will naturally select the specular events by
constructive interference and remove others by destructive interference.
Figure 3. Construction of the water-layer multiple by using Green’s functions (red) convolved with
traces (blue) for all possible DRPs. Just two (X1 and X2) are shown here.
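The prediction described above amounts to a convolution-and-sum over candidate DRPs. The sketch below is illustrative only: the function name, data layout (one trace stored per candidate DRP position X) and parameters are hypothetical, not Key Seismic's implementation.

```python
import numpy as np

def predict_water_layer_multiple(green, data, nt):
    """Shot-side water-layer multiple prediction for one output trace.

    green : dict mapping DRP position X -> Green's function trace g(S, X)
    data  : dict mapping DRP position X -> recorded trace d(X, R)
    nt    : number of time samples per trace

    Each convolution g(S, X) * d(X, R) predicts the multiple for one
    candidate downward reflection point X; summing over all X lets
    Fermat's principle pick out the specular contributions.
    """
    nfft = 2 * nt                    # pad to avoid wrap-around
    estimate = np.zeros(nfft)
    for x in green:
        g_f = np.fft.rfft(green[x], nfft)
        d_f = np.fft.rfft(data[x], nfft)
        estimate += np.fft.irfft(g_f * d_f, nfft)   # time-domain convolution
    return estimate[:nt]
```

For impulsive inputs the predicted spike lands at the sum of the two traveltimes, which is exactly the kinematics of the water-layer multiple being constructed.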

East Coast Data Example

Figure 4 shows a comparison of a 2D marine line from the North Flemish Pass on the east coast of
Canada. These data were provided by Jebco Seismic (Canada) Company. The data without multiple
attenuation show many orders of shallow water multiples. The multiple attenuated data is the result
of applying MWD, SRME and a pass of Radon multiple attenuation. The multiple attenuated result is
a significant improvement over the original section and the primary energy is more readily
interpreted.

Figure 4. Comparison of 2D marine line without and with multiple attenuation. Multiple attenuation is
the result of applying MWD, SRME and Radon.

5D INTERPOLATION
Our 5D interpolation method is based on Fourier reconstruction by Minimum Weighted Norm
Interpolation (MWNI). This method is called 5D interpolation because it operates on 5 dimensions of
the seismic data, one temporal dimension and four spatial dimensions. The four spatial dimensions
are either inline/crossline/inline-offset/crossline-offset, or, inline/crossline/offset/azimuth. Choosing
inline-offset and crossline-offset as spatial dimensions leads to a Common Offset Vector (COV) or
Offset Vector Tile (OVT) description of the data.
5D interpolation uses a neighbourhood of acquired seismic data to predict the missing data. Ideally,
data that are missing in one or two of the spatial dimensions can be reconstructed using data that are
present and well sampled in the other spatial dimensions. Interpolation has become an important
part of our 3D processing flow for a number of reasons:

• It corrects for the effects of an irregular acquisition geometry by regularizing the geometry and
filling in small gaps within the geometry.
• It equalizes geometries from multiple 3D datasets in 3D merges. In a merge project, the acquisition
geometries can have different shot and receiver line spacings, different CDP bin sizes, or azimuths of
acquisition that vary from survey to survey. As well, some surveys may be acquired with an
orthogonal geometry and others with Megabin. Minimizing these differences is an important step
towards an optimally merged dataset.
• It increases the offset/azimuth fold of the data for subsequent azimuthal analysis.
• For cases where baseline and monitor surveys are acquired with different geometries, the
geometries can be regularized and equalized to a common grid for improved time-lapse seismic
processing and subsequent 4D analysis.
• 5D interpolation is also applied to PS datasets by the same technique, but with conversion point
locations used as the inline and crossline coordinates.

The output of 5D interpolation is well sampled COV gathers that can be pre-stack time migrated, in
the COV domain, to preserve offset and azimuth for subsequent azimuthal analysis, including
velocity variation with azimuth (VVAZ). If an offset plane pre-stack time migration is required, the
increased fold of the data will allow us to migrate to more offset planes which should improve
subsequent AVO analysis.
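As a rough illustration of Fourier-based reconstruction, the sketch below fills dead traces by iteratively thresholding the 2D spectrum. This is a simplified POCS-style stand-in for the minimum weighted norm machinery of MWNI, operating on only two of the five dimensions; all names and parameters are hypothetical.

```python
import numpy as np

def fourier_reconstruct(data, live, n_iter=100, decay=0.95):
    """Fill missing traces by iterative Fourier thresholding.

    data : 2D array (time x traces), zeroed at missing trace positions
    live : boolean array marking the acquired traces

    Each pass keeps only the strongest wavenumber components (the
    sparsity assumption that also underlies MWNI), inverse transforms,
    then re-inserts the acquired traces unchanged.
    """
    model = data.copy()
    thresh = np.abs(np.fft.fft2(data)).max()
    for _ in range(n_iter):
        spec = np.fft.fft2(model)
        thresh *= decay                     # gradually relax the threshold
        spec[np.abs(spec) < thresh] = 0.0
        model = np.real(np.fft.ifft2(spec))
        model[:, live] = data[:, live]      # honour the recorded data
    return model
```

Because the acquired traces are re-inserted every pass, the reconstruction is constrained to agree with the data it was given and only invents samples in the gaps.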
The following figures show interpolation results where a survey was acquired with a coarser bin grid
than the grid required for the merged survey.
Figure 1. A CDP gather before and after 5D interpolation.

Figure 2. An inline stack before and after 5D interpolation.

Figure 3. A time slice before and after 5D interpolation.


SCME
We have developed a method to adapt surface related multiple elimination (SRME) to land data,
which accounts for the near surface by including differential static corrections during the prediction
phase. We call this surface corrected multiple elimination (SCME). An advantage of the SCME
method, as with SRME, is that the multiple predictions are data driven and do not rely on velocity
differences to identify multiples. As shown in the heavy oil example in Figure 1, SCME attenuates the
small-moveout multiple and leaves the primaries relatively untouched.

Figure 1. No Demultiple, Radon and SCME applied to 3D data acquired over a heavy oil reservoir.
On the left are migrated data and a superbinned gather is shown on the right. Data are provided by
OSUM Oil Sands Corp.
The multiple that is being attenuated is a free surface multiple. While SRME has been widely used
on marine data, it has only recently been used on land data. This may in part be due to the difficulty
of recognizing surface related multiples on land data. These multiples are readily visible on marine
data, but they can be difficult to see on land data because the multiple has been downward reflected
from a non-planar surface, so that the multiple has a shape influenced by both the topography and
the multiple generator.
This is illustrated in Figure 2, which shows a stacked section, at a flat processing datum, taken from
a 3D seismic dataset acquired over a heavy-oil reservoir. The multiple generating reflector is shown
as the yellow horizon. The green line at the top represents zero time, as measured from the surface.
Based upon a simple approach using just the zero offset times, we double the time between the
surface and the horizon to predict a multiple, as shown by the red horizon. This tracks an event on
the data that we believe is a multiple.
Figure 2. Identification of a free surface multiple on CMP stacked data from a heavy oil survey. The
multiple (red horizon) results from a bounce between the multiple generator (yellow) and the surface
(green).
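The zero-offset construction above is one line of arithmetic. The sketch below adds a hypothetical static term to indicate where SCME's differential surface correction enters; the function and its arguments are illustrative, not the production algorithm.

```python
def predict_multiple_time(t_generator, surface_static=0.0):
    """Zero-offset time of the first-order free-surface multiple.

    t_generator    : two-way time (s) of the multiple generator,
                     measured from the surface (green line, Figure 2)
    surface_static : differential static (s) for the extra bounce at the
                     true, non-planar surface rather than at the flat
                     processing datum -- the correction SCME accounts for

    Doubling the surface-referenced time predicts the multiple
    (red horizon in Figure 2).
    """
    return 2.0 * t_generator + surface_static
```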
Field Example
Figure 3 shows the application of SCME to data from a 3D seismic dataset acquired over a heavy-oil
reservoir (data provided by OSUM Oil Sands Corp). The multiples present in the data, indicated by
the arrow, are problematic because they stack with velocities very similar to those of the primaries
and are not easily removed by velocity-based methods such as Radon (as shown in Figure 1). The multiples
could have given rise to erroneous interpretation of the data if not identified. SCME has been
successful in attenuating these multiples resulting in a more interpretable seismic section.

Figure 3. Stack of 3D inline with no multiple attenuation, after SCME and the difference. The blue
ellipse highlights the location of the free surface multiple.
TSKK FOOTPRINT ATTENUATION
Most acquisition geometries have regularly spaced source lines and receiver lines. This regular
spacing (or sampling) can lead to periodic amplitude striping that mimics the source line and receiver
line intervals. This periodic amplitude striping is called an “Acquisition Footprint”. Surface consistent
shot and receiver scaling cannot compensate for this amplitude banding, as it is a function of the
offset distribution. The amplitude banding can be subtle when viewed in a section view, but is more
detectable when viewing the data in the time slice or horizon slice domain. Key Seismic uses a
proprietary program called TSKK which first transforms time slices into the kx-ky domain. Once in
the kx-ky domain, the amplitude banding appears as localized peaks or spikes. The program
determines the localized amplitude peaks over a limited range of inlines, crosslines and time
windows, then suppresses the amplitudes based on a scaled value of the neighbouring amplitudes.
Since a subset of inlines and crosslines is used to determine the amplitude footprint, the kx-ky filter
is adaptive in both time and space. This is especially useful when dealing with a 3D merge,
where the acquisition geometries may vary. Footprint attenuation can be applied post stack, or on
single fold pre-stack data such as Common Offset Vectors (COV) or offset planes.
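The workflow can be caricatured in a few lines: transform a slice to kx-ky, flag amplitudes that stand far above their surroundings, and scale them back. This is a simplified, numpy-only sketch (a global median rather than TSKK's local, windowed statistics; the threshold values are made up).

```python
import numpy as np

def footprint_filter(slice2d, spike_factor=5.0, protect=0.1):
    """Suppress periodic footprint spikes in the kx-ky domain.

    slice2d : time slice as a 2D array (inlines x crosslines)

    A spectral amplitude exceeding spike_factor times the median is
    treated as footprint and scaled down to the median level; phases
    are untouched, and wavenumbers below `protect` (cycles/bin) are
    left alone since the primary energy sits near kx = ky = 0.
    """
    spec = np.fft.fft2(slice2d)
    amp = np.abs(spec)
    med = np.median(amp)
    spikes = amp > spike_factor * med
    kx = np.abs(np.fft.fftfreq(slice2d.shape[0]))
    ky = np.abs(np.fft.fftfreq(slice2d.shape[1]))
    # never touch the low-wavenumber region that holds the signal
    spikes &= ~((kx[:, None] < protect) & (ky[None, :] < protect))
    scale = np.where(spikes, med / np.maximum(amp, 1e-30), 1.0)
    return np.real(np.fft.ifft2(spec * scale))
```

Because only flagged wavenumbers are rescaled, a periodic stripe collapses to the background spectral level while the smooth geology passes through unchanged.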

Figure 1. The kx-ky transform of a time slice. Most of the primary energy is concentrated at the
centre of the transform (kx = ky =0).
Figure 2. The same kx-ky transform as in Figure 1, but now the detected spikes in the transform are
shown as yellow dots.

Figure 3. Time Slice Before/After/Difference of TSKK Footprint Attenuation


VVAZ
We perform Velocity Variation with AZimuth (VVAZ) analysis to estimate and correct for azimuthal
velocity variations. One cause of azimuthal velocity variations is the presence of horizontal
transverse isotropic (HTI) media.

When seismic waves travel through these media, the acquired data can exhibit an azimuthal
dependence of arrival time. These variations can be observed by constructing a Common Offset
Common Azimuth (COCA) display as shown in Figure 1. The sinusoidal variations observed at larger
offsets on the right of the figure are indicative of azimuthal anisotropy.
An HTI medium is characterized by the azimuth of Vfast, ϕfast, and the percentage of anisotropy, which
is a measure of the intensity of the anisotropy and is defined as ((Vfast – Vslow) / Vfast * 100). To calculate
these parameters we scan over possible azimuths and percent anisotropy. For each trial value we
correct for the azimuthal effect and calculate the semblance in a window around a horizon. Figure 2
shows an example of the semblance from such a scan over a superbin at one CDP location for six
horizons. Each of the six panels shows the windowed semblance as a function of two parameters
related to HTI anisotropy. If no azimuthal anisotropy is present, the peak of the semblance will occur
at the centre of the square. Figure 2 shows that the largest amount of azimuthal anisotropy is
present at the fourth horizon, in the lower left corner of the display.
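The scan can be sketched as follows, here for a single gather with one trace per azimuth and a simple elliptical (2-theta) residual-moveout model. The parameterization by a time-shift amplitude stands in for the percent-anisotropy axis of the real scan; everything here is illustrative.

```python
import numpy as np

def vvaz_scan(traces, azimuths, dt, phis, amps):
    """Scan for the fast azimuth and magnitude of azimuthal moveout.

    traces   : 2D array (n_traces, nt), one trace per azimuth
    azimuths : trace azimuths in radians
    dt       : sample interval (s)
    phis     : trial fast azimuths (radians)
    amps     : trial residual-moveout amplitudes (s), standing in for
               the percent-anisotropy axis of the full scan

    Returns the semblance panel over (phi, amp); the peak identifies
    the correction that best flattens the gather.
    """
    nt = traces.shape[1]
    t = np.arange(nt, dtype=float)
    panel = np.zeros((len(phis), len(amps)))
    for i, phi in enumerate(phis):
        for j, a in enumerate(amps):
            # trial elliptical (2-theta) time shift per trace, in samples
            shifts = (a / dt) * np.cos(2.0 * (azimuths - phi))
            corr = np.array([np.interp(t + s, t, tr)
                             for s, tr in zip(shifts, traces)])
            stack = corr.sum(axis=0)
            panel[i, j] = (stack ** 2).sum() / (
                traces.shape[0] * (corr ** 2).sum() + 1e-12)
    return panel
```

A semblance of 1 means the trial correction aligned every azimuth perfectly; the isotropic case peaks at zero amplitude, which is the "centre of the square" in Figure 2.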
Figure 3 shows the same COCA as in Figure 1, but with and without the VVAZ correction. The
correction has, for the most part, removed the azimuthal effect. Figure 4 compares an individual
CDP gather with and without the VVAZ correction and shows that the many small trace-to-trace
variations observed in the uncorrected gather were due to azimuthal velocity variation.
The deliverables from a VVAZ analysis are corrected gathers and volumes of ϕfast, percent
anisotropy, Vfast, and Vslow. The VVAZ corrected gathers are good input to AVO analysis and inversion
as the data are better behaved at far angles. The volumes of ϕfast and percent anisotropy can be
used to create vector plots as shown in Figure 5 that can be used to identify areas of large
anisotropy. The analysis is typically carried out on pre-stack time migrated data, where the migration
is run in the common offset vector domain.
Figure 1. Common Offset Common Azimuth (COCA) display of seismic data.

Figure 2. Plot of stack power at six horizons. The x and y axes are related to the degree of azimuthal
anisotropy. If the layer is isotropic the semblance peak will be at the centre of the square.

Figure 3. Comparison of a COCA gather with and without VVAZ correction.


Figure 4. Comparison of a CDP gather with and without VVAZ correction.

Figure 5. Plot of percent anisotropy for a layer with vectors that point in the azimuth of Vfast and have
a length proportional to percent anisotropy.
COV
The Common Offset Vector (COV), also known as the Offset Vector Tile (OVT), domain is important
in 3D land seismic processing as it provides a natural binning of offsets and azimuths for datasets
acquired with a regular acquisition geometry. With an appropriate choice of sampling for the inline
and crossline offset, an individual COV is a single-fold dataset with a limited offset and azimuth
range. These COVs are an attractive domain in which to apply noise attenuation, FXY interpolation
and pre-stack footprint attenuation. Applying pre-stack time migration (PSTM) on COVs allows us to
preserve offset and azimuth through migration so that we can perform azimuthal velocity analysis in
the migrated location.
A COV is characterized by the sampling of the inline and crossline offset increments. With a regular,
orthogonal, 3D acquisition geometry these increments are twice the shot line spacing and twice the
receiver line spacing. For merged datasets, or datasets with a great deal of irregularity, we may
need to carry out additional analysis to determine the optimal COV parameters. We can perform this
analysis interactively and Figure 1 shows the fold of a single COV with inline/xline increments
340/540 m versus 270/540 m. For the nominal acquisition parameters of this survey 340/540 m is
the correct choice, but due to variability of the shot and receiver locations 270/540 m is also a good
choice. The fold pattern will vary from COV to COV so these must be examined interactively.
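Binning a trace into a COV tile is a matter of rounding its signed inline and crossline offsets by the chosen increments. The helper below is an illustrative sketch using the 340/540 m increments quoted above; the coordinate conventions and function name are assumptions, not Key Seismic's code.

```python
import numpy as np

def cov_tile(sx, sy, rx, ry, d_inline=340.0, d_xline=540.0):
    """Assign a trace to a Common Offset Vector tile.

    (sx, sy), (rx, ry) : shot and receiver coordinates, with x taken
    along the inline direction. For a regular orthogonal geometry the
    tile increments are twice the shot-line and receiver-line spacings;
    340/540 m matches the Figure 1 example. Binning the signed offsets
    by these increments gives each tile a narrow offset/azimuth range.
    """
    ox = rx - sx                               # signed inline offset
    oy = ry - sy                               # signed crossline offset
    ix = int(np.floor(ox / d_inline + 0.5))    # round to nearest tile
    iy = int(np.floor(oy / d_xline + 0.5))
    return ix, iy
```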
This 3D survey is part of a merge and the majority of the data were acquired at a different azimuth
and a smaller bin size. Binning these data with the merge geometry results in the fold, for a single
COV, shown in Figure 2.1. After 5D interpolation the fold of the same COV, in Figure 2.2, is now
generally single fold for all CDP locations which is ideal for COV PSTM.
When choosing COV processing parameters we can also interactively examine the fold for all COVs
for a given inline/xline offset choice as shown in Figure 3. Additionally we can examine the azimuths
(Figure 4) and offsets (Figure 5) for the COV geometry. Using these QC plots we can quickly
determine the optimal COV parameters for a single 3D survey or a group of surveys in a merge.

Figure 1. A comparison of fold for an individual COV with inline/xline offset increments of 340/540 m
and 270/540 m.
Figure 2. Data from the same survey as shown in Figure 1, but now in the desired grid for
processing of a 3D merge. Shown is the fold for a single COV with and without interpolation.

Figure 3. Fold for each COV in a survey as a function X-Offset (inline offset) and Y-Offset (crossline
offset). The skewed appearance is a result of the processing grid not being aligned with the
acquisition grid.

Figure 4. Nominal azimuth for each COV.


Figure 5. Nominal offset for each COV.
ORIENTATION ANALYSIS
When PP seismic data is processed, a careful analysis of the surveyed shot and receiver positions is
carried out to ensure that they are consistent with the acquired data. For PS data, accurate field
positioning of source and receivers is required, but also accurate orientation of the inline direction of
the 3C receivers is required. Without reliable orientation information, estimation of azimuthally
dependent quantities required for splitting analysis is impossible, and even simple radial PS output is
adversely affected.
A typical practice in the field is to orient the receivers towards magnetic north, and apply the
magnetic declination correction in processing. Sometimes practical considerations (e.g. difficult
ground conditions) can compromise this ideal. In the survey illustrated in Figure 1 the recorded
orientations of the receivers are shown as a line segment. The orientations of some of the receivers
were not measured in the field and these are the receivers with no line segments. There was enough
variance between the nominal orientation (magnetic north) and the measured values that we decided
to estimate the orientations of all the receivers from the data. This provides an opportunity to compare
measured and estimated values where available.
The assumption used is that first breaks measured on both vertical and horizontal components are
radially oriented P-wave arrivals. This assumption warrants some scrutiny, as it neglects near
surface effects such as anisotropy and scattering. However, we found the analysis to be fairly robust
when applied over many shots for each receiver. We also estimated a standard deviation of the
individual shot-based measurements for each receiver. This gives an error estimate which is useful
to identify suspect locations. Comparison of the calculated orientations with the measured
orientations in Figure 1 shows that many of the calculated orientations were consistent with those
measured in the field; however, some locations are not consistent. To determine which
set of orientations should be used in processing we compared the radial and transverse rotated data
using the two different orientations. This showed that in general the orientations estimated from first
breaks were at least as plausible as the recorded ones. Ultimately, we processed the entire dataset
with the estimated values.
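A minimal sketch of the estimation, assuming first-break amplitudes have been picked on the two horizontal components for many shots into one receiver; all names are hypothetical, and the near-surface effects noted above are ignored.

```python
import numpy as np

def estimate_orientation(h1_amps, h2_amps, shot_azimuths):
    """Estimate a 3C receiver's H1 orientation from first breaks.

    h1_amps, h2_amps : first-break amplitudes on the two horizontal
                       components, one value per shot
    shot_azimuths    : receiver-to-shot azimuths (radians from north)

    If the first break is a radially polarized P arrival, the hodogram
    angle equals the shot azimuth minus the H1 orientation. Doubling
    the angles removes the pick-polarity ambiguity before averaging,
    and the circular spread gives a per-receiver error estimate.
    """
    hodo = np.arctan2(np.asarray(h2_amps), np.asarray(h1_amps))
    orient = np.asarray(shot_azimuths) - hodo
    z = np.exp(2j * orient).mean()            # mean of doubled angles
    mean = (0.5 * np.angle(z)) % np.pi
    spread = 0.5 * np.sqrt(-2.0 * np.log(min(abs(z), 1.0)))
    return mean, spread
```

Averaging unit vectors rather than raw angles keeps the estimate stable across the 0/360-degree wrap, and the spread flags the suspect receivers mentioned above.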
Figure 1. A comparison of the orientations measured in the field to those that are calculated.
CANA NOISE ATTENUATION
CANA is our amplitude preserving pre-stack noise attenuation workflow. It is most commonly used in
our AVO compliant processing flow. In an AVO compliant processing flow we do not perform any
single-trace processes so many conventional noise attenuation methods cannot be used. CANA
models seismic noise by its temporal and spatial characteristics. It is especially effective at removing
ground roll, spikes and other noise bursts. The noise is adaptively removed and the underlying
signal is preserved. We verify this by carefully examining difference plots to confirm that only noise
is removed.
Figure 1 shows a portion of an input shot gather, the result of CANA and the difference between the
input and CANA. Figure 2 shows a similar display on an inline stack. For both Figures 1 and 2 the
difference is exclusively noise with no signal attenuation. With ground roll and large spikes
attenuated we can apply an additional FXY deconvolution for random noise attenuation. Figure 3
shows this comparison for a shot. The FXY process is typically run in the cross-spread or COV
domain.
CANA is part of our AVO compliant processing flow that allows us to produce seismic volumes that
are ready for subsequent AVO inversion and time lapse analysis.

Figure 1. Input shot gather, shot after CANA, and the difference.
Figure 2. Input CDP stack, stack after pre-stack CANA, and the difference.

Figure 3. Input Shot gather after CANA, shot after FXY in the cross-spread domain, and the
difference.
WEATHERING STATICS (WXS)
Modeling near-surface velocities and correcting for near-surface velocity variations are among the
first steps in seismic data processing. Refraction analysis uses first arrival times of refracted seismic
energy to compute near surface velocities and layer thicknesses. Using a combination of proprietary
automatic and manual first break picking algorithms we first determine and QC the first break times.
The first break times are used in our WXS interactive/batch software that computes and analyzes
weathering models and static corrections. Our method is based on a delay time method of refraction
analysis. Within WXS we iteratively compute a weathering model and monitor the fit of the refraction
model to the observed data. We interactively QC the model and examine the effect of parameter
choices on the resulting model. Figure 1 shows the main window of our interactive statics analysis
program. Here we are looking at a single cross-section of a weathering model. In this window we
perturb parameters and then examine the effect of changes on the model and the computed statics
in map view.
As an example, if the geometry information is correct then the modeled first arrival times should
match the observed arrival times. A mismatch between the two can represent an error in the
near-surface model or a placement error of the seismic source or receiver. Figure 2 shows such a
mismatch, in this case due to a shot “skid”. When the shot is moved to the correct position, in
Figure 3, the modeled and observed pick times match well.
The velocity-depth model from WXS, along with the first break times, can be used as input to a
subsequent tomographic inversion for near-surface velocity-depth models.
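The delay-time model at the heart of the analysis is compact enough to state directly. The function below is a sketch of the forward model only; the parameter names are illustrative.

```python
def refraction_time(delay_s, delay_r, offset, v_refractor):
    """Delay-time model of a first-arrival refraction time.

    t = delay(shot) + delay(receiver) + offset / V_refractor

    The delay terms absorb the slow travel through the weathering
    layer at each end of the raypath. A systematic mismatch between
    these modelled times and the picks points to a model error or a
    mispositioned shot (a "skid"), as in Figures 2 and 3.
    """
    return delay_s + delay_r + offset / v_refractor
```

For example, a 20 m shot skid over a 3000 m/s refractor shifts every modelled time for that shot by roughly 7 ms, the kind of systematic offset visible in Figure 2.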

Figure 1. Display of near-surface velocity model cross-section from WXS interactive refractions
statics analysis program.
Figure 2. A portion of a shot where the picked times, displayed as "x", do not match the theoretical
times shown as a solid yellow line.

Figure 3. The same data as shown in Figure 2, with a shot skid applied. Now the picked times and
modeled time match well.
PRE-STACK TIME MIGRATION
We use our proprietary pre-stack Kirchhoff time migration algorithm, PSTM3D, to image both PP and
PS data for both 2D and 3D datasets. The flexibility of the Kirchhoff algorithm in terms of ordering
and density of input and output makes it ideal for the migration of land seismic data. Commonly we
migrate COV ordered data so that azimuthal analysis, such as VVAZ, can be carried out with data in
the migrated position. As well, we can migrate to fixed offset bins or to “accordion” bins where offset
spacing varies to accommodate natural fold variations.
We migrate both land and marine data with PSTM3D. For land data, we typically migrate from the
topographic surface at which the data were acquired. Marine data is migrated from the sea surface.
Velocities for pre-stack time migration are determined using our proprietary interactive
velocity analysis software. Velocities can be optimized by examining migrated stacks, migrated
gathers or both stacks and gathers simultaneously. In addition to velocities, we can also estimate
eta, which can correct for observed VTI anisotropy or “hockey sticks”. Flattening the far offsets is
important for subsequent AVO analysis. Additionally, we can apply 4th and 6th order corrections
based on the vertical variation of the migration velocities.
The efficient parallel implementation of PSTM3D, adapted to our in-house clusters of CPUs, ensures
optimal turnaround time for even the largest projects.
SPLITTING ANALYSIS
Historically, shear-wave splitting analysis of PS data has focussed on the detection of fracturing in
the subsurface, typically assumed to be represented as a horizontal transverse isotropy (HTI)
medium as in Figure 1. An HTI medium is one with a single horizontal axis of symmetry: for example,
a single set of vertical fractures. As shown in Figure 1 on the left, the converted wave does not
undergo splitting in an isotropic medium; on the right, however, the shear-wave is split into S1 and S2
modes.
In addition to splitting caused by fracturing, we can also observe shear-wave splitting caused by
stress anisotropy. For shallow heavy oil reservoirs the most likely cause of shear-wave splitting is
stress anisotropy. There is a great deal of interest in analyzing these stress variations because this
analysis may provide valuable input into caprock integrity studies.

Figure 1. Diagram illustrating the concept of converted-wave splitting in an HTI medium. On the left,
without HTI anisotropy the wave propagates without splitting. On the right, with HTI anisotropy the
converted wave splits into two modes, labelled S1 and S2. The layering can be thought of as vertical
fracturing or the regional direction of maximum horizontal stress in the absence of fractures. (From Wikel
et al., 2012)

The Shear-wave Splitting Signature

A convenient domain for detection of shear-wave splitting, and analysis of the fast and slow
directions or “principal axes”, is after a rotation to radial and transverse directions. Radial refers to
the direction aligned with the source receiver azimuth, and transverse is the direction perpendicular
to radial. The radial and transverse data can be analysed prestack or after partial stacking into
azimuthal sectors. The analysis can also be applied after prestack migration provided that azimuthal
information is retained, for example by using a common offset vector (COV) or offset vector tile
(OVT) based migration.
In the absence of any azimuthal anisotropy, and assuming a layered medium, there will be no
coherent signal present on the transverse component. Therefore, one preliminary indication that
shear-wave splitting is present will be observable signal on the transverse component. It should be
noted that signal can also arise on the transverse component for other reasons such as structure.
The characteristic of splitting related signal is that it will have a 2θ periodicity, with polarity reversals
every 90°, and this is generally not the case with other causes of transverse signal. So it is natural to
find methods that search for this kind of periodic signal on the transverse data as an indication of
probable anisotropy.
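A minimal model of that periodicity, assuming weak splitting so the transverse leakage is proportional to sin 2θ; the `strength` factor is a hypothetical stand-in for the delay-time and wavelet-derivative terms of the full expression.

```python
import numpy as np

def transverse_splitting_amplitude(azimuths, phi_s1, strength=1.0):
    """Model the 2-theta transverse signature of shear-wave splitting.

    For weak splitting, radial energy leaks onto the transverse
    component with amplitude proportional to sin(2*(azimuth - phi_s1)):
    the polarity reverses every 90 degrees and the leakage vanishes
    along the S1 and S2 directions themselves.
    """
    return strength * np.sin(2.0 * (np.asarray(azimuths) - phi_s1))
```

The nulls along S1 and S2, and the sign flips a quarter-turn apart, are exactly the pattern searched for on the transverse data in Figure 3.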
To illustrate shear-wave splitting we have constructed a very simple synthetic which consists of a
constant medium with 2% difference between S1 and S2 velocities, and a constant S1 azimuth of
30° from north (Figure 2). Four equally spaced reflections have been modelled, and the total amount
of time delay between S1 and S2 is simply proportional to reflection time. Thus we can see the
progressive change in the nature of the shear-wave splitting signature as the S1-S2 delay time
increases. Figure 3 shows a display of the radial and transverse from a dataset acquired over a
heavy oil reservoir that displays similar behaviour to the synthetic data.

Figure 2. Synthetic gather used to illustrate splitting amplitude effects. Radial component (left) and
Transverse component (right) displayed as a function of azimuth from shot to receiver. The S1
direction is 30° and the medium has a constant 2% velocity difference between S1 and S2.

Figure 3. Real data example to illustrate splitting amplitude effects. Radial component (left) and
Transverse component (right) displayed as a function of azimuth from shot to receiver. The apparent
sinusoidal behaviour of the radial on the left and the polarity flips of the transverse on the right are
indicative of shear-wave splitting.
Shear-wave splitting analysis can be viewed as a two-step process: first, estimate the orientation of
the S1 direction; second, estimate the time delay between S1 and S2. Typically we use both the Radial
and Transverse components to estimate the S1 direction based on amplitude fitting.
Having determined the S1 direction, we next determine the time delay by first applying a rotation to
S1 and S2 directions, performing a weighted stack to obtain S1 and S2 traces, and then using a
cross-correlation technique. At this point, we have an estimated S1 direction Φ, and time delay Δt,
for each CCP location in the survey, at the current analysis depth.

Layer Stripping

Since the earth may contain layers with different stress or fracture regimes, leading to different S1
directions, the recorded shear-waves may have been split multiple times, with both S1 and S2 from
the deepest layer being split again into new S1 and S2 directions from the layer above, and so on.
This results in considerable complexity of waveform in which the directions for layers are masked by
layers above, for all but the shallowest layer. It is important to unravel this effect to find the splitting
characteristics at the target depth.
To take into account the existence of several different layers with different S1 directions, we perform
layer stripping. In each layer-stripping step the estimated time delay is computed as above and then
applied to the data after rotation to S1-S2 coordinates, by first shifting the P-S2 data to match P-S1
and then rotating back to the radial-transverse coordinate system. The resulting data may be
referred to as radial-prime and transverse-prime (R' and T'). This procedure is illustrated in Figure 4.

Figure 4. Illustration of transformation from original Radial and Transverse (R-T) coordinates through
splitting correction to derive R' and T'. The input common asymptotic conversion point (ACP) gather
is at the left, the middle figure shows the gather after rotation to S1 and S2 coordinates and at the
right the data displayed after S2 is time shifted and the data are rotated back to R-T. These are now
referred to as R' and T'.
At the left the common asymptotic conversion point (ACP) gathers are indexed first by offset range,
and then by shot-receiver azimuth, (see profiles above the gathers). The gathers are superbinned
using 7x7 bins to improve S/N. Using S1 azimuth derived from the analysis, the data are rotated to
the coordinate system defined by S1 and S2 direction. The data in the middle figure are shown after
this rotation and with the application of a time delay that is estimated from the analysis (about 10ms)
so that the S2 data is shifted upwards to align with the S1 data. Finally in the rightmost figure, the
data are rotated back to the R-T coordinate system where the removal of the anisotropy effect is
evident. These are referred to as R' and T'.
In order to layer strip through multiple layers, it is necessary to compute R' and T' for each layer in
succession, in order to perform analysis for the subsequent layer. Usually the R' dataset is the most
suitable for the final goal of imaging the reservoir, although it is also quite possible to generate P-S1
and P-S2 images after the final layer stripping step.

Data Example

An example of the application of shear-wave splitting analysis is shown in Figure 5. In this case,
shear-wave splitting analysis over the reservoir interval of an active in-situ combustion project
highlights the stress and/or matrix change with time as the combustion front progresses. In this
specific case, it is known that the operation is well below the fracture pressure of the interval, and extensive core samples and logging indicate that the reservoir itself is not fractured. The implication is that the splitting observed in the reservoir (7 ms maximum over a 50 ms layer window), after layer stripping of the layers above, is substantial and most likely due to stress and/or matrix changes within the formation (for more complete information on this example, please see Wikel and Kendall, 2012, or Bale et al., 2012).
This type of analysis is useful for gauging the progression of the combustion front along with PP time
lapse data and other reservoir engineering information.
Figure 5. Overburden analysis at left and caprock analysis at right from shear-wave splitting. The
colour underlay represents the time delay between S1 and S2 modes, with a range from 0 to 15ms,
as shown in the histograms. The line segments represent both time delay (length) and S1 azimuth.
REVIVE 5D Interpolation
Wide-Azimuth Interpolation

The CGG REVIVE 5D algorithm is of particular benefit to land, seabed and wide-azimuth surveys. A global multidimensional
interpolator fills gaps in coverage and increases spatial sampling density, while preserving original recorded data.

Up to five interpolation dimensions can be used to infill gaps, increase fold and improve offset-azimuth distribution, especially in
areas of complex structure. More dimensions make the model of the data more accurate and the interpolation more effective. This
results in improved images as geometry-related migration artefacts are minimised. It also gives cleaner, regularly populated pre-
stack gathers with preserved AVO and AVAZ characteristics for reservoir and fracture characterization.
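The principle behind this class of interpolator can be illustrated in one dimension: the data are assumed to be sparse in the Fourier domain, so iteratively thresholding the spectrum and re-inserting the recorded samples fills the gaps. The POCS-style sketch below is a generic stand-in, not the REVIVE algorithm itself; the slow textbook DFT, iteration count and threshold are illustrative choices.

```python
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)).real / n
            for k in range(n)]

def pocs_interpolate(data, known, n_iter=50, n_keep=2):
    """Fill gaps by keeping only the strongest Fourier components, then
    re-inserting the recorded samples (the 'preserve original data' step)."""
    x = [d if k else 0.0 for d, k in zip(data, known)]
    for _ in range(n_iter):
        X = dft(x)
        keep = set(sorted(range(len(X)), key=lambda j: -abs(X[j]))[:n_keep])
        x = idft([X[j] if j in keep else 0.0 for j in range(len(X))])
        x = [d if k else xi for d, xi, k in zip(data, x, known)]  # honour recorded data
    return x

# Demo: a single sinusoid with four irregularly missing samples
n = 32
true = [math.sin(2 * math.pi * 3 * k / n) for k in range(n)]
known = [k not in (5, 11, 17, 28) for k in range(n)]
recovered = pocs_interpolate(true, known)
```

The real algorithm works in up to five dimensions at once, which is what lets sparsely recorded events that are aliased in one dimension remain predictable in another.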

FEATURES

 High-fidelity pre-stack interpolation in five dimensions - for example inline, crossline, offset, azimuth and frequency
 Preserves original recorded data while predicting new shots and receivers at desired locations
 Honours amplitude variations with offset (AVO) and azimuth (AVAZ)
 Applications for:
o Infilling gaps
o Decreasing bin size
o Improving azimuth-offset distribution
o Regularisation
o Survey merging to a defined grid and bin size
o Wide-azimuth surveys

BENEFITS

 Mitigates effects of acquisition constraints


 Higher fold and reduced gaps for improved imaging results
 More complete and uniform offset-azimuth distribution for improved pre-stack AVO and AVAZ analysis
 Enhances 4D analysis with regularization to common grid and bin size

Plots of shot (red) and receiver (blue) locations for a land dataset with irregular acquisition. Note the river running north-south through the
centre of the survey area. After interpolation of additional shot and receiver locations the gaps have been filled and the fold has been boosted
and equalised. This will result in more uniform amplitudes after migration and allow a decrease in bin size to improve resolution.

Data courtesy of GDF SUEZ, ExxonMobil, Wintershall and EWE.


AGORA Adaptive Ground Roll Attenuation
AGORA Adaptive Ground Roll Attenuation provides accurate noise removal by automatically adapting to spatially-varying
ground roll characteristics in 2D and 3D domains, giving improved results over fixed parameter techniques such as FK.

APPLICATIONS

 Ground roll and guided waves on land data


 Similar dispersive noise (e.g. "mud roll") on shallow water data
 Ice- and permafrost-related noise on Arctic data
 Prior to digital array forming (DAF) on point source/ point receiver data
 Unwanted shear waves and associated down-going multiples on multicomponent VSP data
 Both narrow and wide azimuth data

AGORA (right) removes the ground roll more effectively than FK methods (centre) and also
gives better preservation of the amplitudes of the body waves in the near offset cone

FEATURES

 Adaptive attenuation of spatially-variant ground roll and guided wave energy


 Fully data-driven
 Flexible non-linear attenuation using FX modeling with a Wavelet Transform scheme
 Uses true (irregular) source/receiver positions, does not require regular spatial sampling
 Effective against aliased and dispersive surface waves
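The adaptive idea can be illustrated with a toy that estimates the dominant ground-roll slowness of each gather directly from the data by slant-stacking along candidate linear moveouts, then mutes a corridor around the detected noise cone. This is a hypothetical stand-in for illustration only, not AGORA's FX / wavelet-transform modelling, and all names and parameters are assumptions.

```python
def attenuate_ground_roll(gather, offsets, dt, slowness_scan, half_width):
    """Data-driven ground-roll mute: pick the slowness (s/m) whose linear
    moveout from the shot captures the most amplitude, then zero a corridor
    of +/- half_width samples around t = p * x on every trace."""
    def stack_amp(p):
        total = 0.0
        for trace, x in zip(gather, offsets):
            i = int(round(p * x / dt))
            if i < len(trace):
                total += abs(trace[i])
        return total
    p_est = max(slowness_scan, key=stack_amp)
    cleaned = []
    for trace, x in zip(gather, offsets):
        centre = p_est * x / dt
        cleaned.append([0.0 if abs(i - centre) <= half_width else a
                        for i, a in enumerate(trace)])
    return cleaned, p_est

# Demo: three traces with a flat reflection and ground roll at 500 m/s
nt, dt = 256, 0.004
offsets = [100.0, 200.0, 300.0]
gather = [[0.0] * nt for _ in offsets]
for tr in gather:
    tr[40] = 1.0                               # flat reflection (body wave)
for tr, x in zip(gather, offsets):
    tr[int(round(0.002 * x / dt))] = 5.0       # ground roll, slowness 0.002 s/m
cleaned, p_est = attenuate_ground_roll(gather, offsets, dt,
                                       [0.001, 0.0015, 0.002, 0.0025], 3)
```

Because the slowness is re-estimated for every gather, the mute follows spatial variations in ground-roll velocity rather than using one fixed cone for the whole survey, which is the essential difference from a fixed-parameter FK rejection.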

Left: stack of raw data. Right: stack with AGORA.

BENEFITS

 Preserves the amplitudes of both P and S body waves


 Preserves bandwidth of data
 More accurate ground roll attenuation recovers better quality near-offset data for improved stack response
 Guided wave attenuation recovers far-offset data allowing:
o wider mute
o more accurate velocity picking
o more meaningful AVO analysis

Left: spatial variation of ground roll characteristics across a survey (map). Right: gathers with different ground roll velocities at the marked locations.

Local variation of ground roll characteristics is common and is clearly visible on these shot gathers from different locations of the same
survey. Frequency content, group and phase velocity, amplitude and degree of aliasing vary spatially between these locations. AGORA
automatically adapts to the ground roll characteristics, so this flexible approach enables accurate modelling even where there are local
variations.
Land SRME (Surface-Related Multiple Elimination)
Complex surface-related multiples from out-of-plane reflectors can only be accurately modeled and attenuated with a full 3D SRME
approach. In the past, 3D SRME has been used very successfully in marine data processing but less so for land data due to
difficulties created by surface topography, statics, spatial sampling, noise and azimuthal limitations.

Bring the Power of 3D SRME Onshore

Land 3D SRME uses primary reflections to predict multiples. Once the predicted multiple model is created, it is adaptively
subtracted from the original data. Adaptive subtraction is a 4D approach incorporating inline, crossline, offset and time in order to
preserve primary energy. As a result, Land 3D SRME maximizes multiple removal and minimizes damage to the primaries.
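The prediction-plus-subtraction idea reduces, on a single trace, to a simple sketch: convolving an estimate of the primaries with the data extends every raypath by one surface bounce, so a primary at time t0 predicts a multiple at 2·t0, and the prediction is then matched and subtracted. The scalar least-squares match below is a stand-in for the windowed 4D matching filters described above.

```python
def predict_multiples(primaries, data):
    """Convolve the primary estimate with the data: each event is extended
    by one surface bounce, predicting surface-multiple arrival times (the
    predicted wavelet and scale are wrong, hence the adaptive subtraction)."""
    n = len(data)
    model = [0.0] * n
    for i, a in enumerate(primaries):
        if a:
            for j, b in enumerate(data):
                if b and i + j < n:
                    model[i + j] += a * b
    return model

def adaptive_subtract(trace, model):
    """Least-squares scalar match of the predicted multiples to the data,
    then subtraction."""
    num = sum(d * m for d, m in zip(trace, model))
    den = sum(m * m for m in model) or 1.0
    alpha = num / den
    return [d - alpha * m for d, m in zip(trace, model)]

# Demo: a primary spike and its polarity-reversed first-order surface multiple
n = 128
primaries = [0.0] * n; primaries[30] = 1.0     # primary at t0
data = list(primaries); data[60] = -0.5        # plus its multiple at 2*t0
model = predict_multiples(primaries, primaries)
result = adaptive_subtract(data, model)
```

The multiple is removed while the primary is untouched; real land 3D SRME performs the convolution over surface source/receiver positions and the subtraction over inline, crossline, offset and time windows.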

Land 3D SRME removes dipping multiples whilst preserving underlying primary energy,
resulting in an improved subsurface image.

Improve Your Velocity Analysis

Removing multiples directly overlaying primaries brings clarity and focus to velocity analysis,
as seen on these semblance and gather displays.
No magic from 5D interpolation on a coarse 3D
oilsands dataset – a case history
Sylvestre Charles* and Jim Hostetler, Jiwu Lin, Xiaoming Luo, Kevin Roberts**

*SUNCOR ENERGY INC., CALGARY, ALBERTA, CANADA


**SCHLUMBERGER GEOSOLUTIONS, CALGARY, ALBERTA, CANADA

ARTICLE COORDINATOR(S): MIKE PERZ / MOSTAFA NAGHIZADEH

CSEG RECORDER | MAR 2014 | VOL. 39 NO. 03


This case history is based on a coarse 3D seismic dataset that was acquired for a commercial Steam Assisted Gravity Drainage (SAGD) project in the Athabasca oil sands in Alberta. In conjunction with a dense core-hole drilling program, this legacy 3D seismic dataset was acquired to better understand the geological setting, assess the thickness and quality of the reservoir and decide how to orient the SAGD pads that were used to develop the field.

The efficiency of the SAGD process is a function of reservoir properties, e.g. lithology, stratigraphy, relative permeability, fluid mobility and fluid saturations. 4D seismic has historically been the method of choice for characterizing the evolution of the reservoir, and in particular for monitoring the growth of the steam chambers around the SAGD well pairs. 4D seismic surveys based on the same acquisition parameters were acquired above specific well pads over the years.

To assess future developments in the area surrounding the project, a new 3D seismic survey will need to be acquired in the near future. Beforehand, a review of the legacy 3D seismic surveys was conducted. Some of the questions that were raised include:

 Are there any obvious limitations with the acquisition parameters of the legacy 3D survey in light of today's technology?
 Given that the source line spacing to primary target depth ratio is between 3 and 4, would 5D interpolation be of any help?
 With the advances made in seismic imaging since these data were acquired, would the pre-stack time imaging results be better now?
 As all of the data lies within a 400ms two-way travel time window, would pre-stack depth imaging improve the spatial resolution of the seismic images?
 Could we quantify the amount and type of anisotropy that affects the data, and could we take it into account in the depth imaging processing flow?
 Finally, would the reprocessed data be more suitable for a quantitative interpretation?
Geological setting
The geological history of the area is extremely complex and not yet fully understood. The subsurface can be divided into five layers, each with its own internal complexity (Figure 1).
At the top, there is a thick Quaternary section that represents most of the overburden. It is made up of glacial tills, sediments of former glacial lakes, etc. It was subjected to multiple glacial and interglacial intervals that left internal unconformities. Periodic ice sheets of variable thickness carried and deposited glacial drift and till. The structural framework of the Quaternary shows gravity-induced slumps, duplexes and internal small-displacement faults.


Figure 1. Main geological packages.
At the base of the Quaternary there is a major unconformity that corresponds to a period of erosion of a thick package of sediments dating from the late Cretaceous to the early Quaternary. Below this unconformity, the Clearwater and Grand Rapids formations are mostly shaley and form the caprock. The caprock is finely layered and exhibits a very uniform lithology over the project area. It has an undulatory shape that was induced by the differential compaction of the underlying McMurray reservoir.
Below the caprock, the notoriously complex McMurray reservoir is made of inter-braided channel systems and inclined heterolithic stratification (IHS) that vary from clean sand beds to silty muds. At the base of the reservoir, there is a major unconformity that spans more than 200 million years, resulting in an extremely rough and weathered surface with paleo-topographic highs and lows. The Devonian formation is highly faulted and karsted, and the paleo-surface is riddled with sink holes (Altosaar 2013). Some of the faults in the Devonian have clearly controlled the sedimentation of the McMurray reservoir. The Devonian section lies unconformably on the Precambrian basement. The contact between these two formations is another very rugged and weathered unconformity. Numerous fault-bound rhomboidal blocks can be identified at the top of the Precambrian.

Acquisition parameters of the legacy 3D dataset


The legacy 3D survey was shot in 2000 and covers a 14km² rectangular area. The source line and receiver line spacing are 90m in an orthogonal geometry. The source station and receiver station spacing are 30m, creating a 15m x 15m CMP bin. The dynamite source has a charge size of 1/8kg at a depth of 9m. The patch consists of 10 lines x 28 receiver stations (900m x 840m). The receiver stations consist of a string of 6 Oyo 30CT, 10Hz geophones that are clumped together (no array). The sample rate is 0.5ms. The fold taper is standard (i.e., a quarter of the patch size). The migration aperture includes the fold taper.

This geometry results in extremely low fold in the Quaternary, with only 2 or 3 traces per bin. At a depth of 300 to 350 meters, corresponding to the base of the McMurray reservoir and the top of the Devonian, the fold increases to about 23 traces per bin. Assuming an average velocity of 2200m/s down to the McMurray reservoir, the maximum un-aliased frequency is around 73Hz for a minimum opening angle of 30 degrees (diffraction energy).
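The 73Hz figure follows from the standard spatial-aliasing relation f_max = v / (2 Δx sin θ). A quick check with the paper's numbers:

```python
import math

def max_unaliased_frequency(v_mps, dx_m, opening_angle_deg):
    """Spatial-aliasing limit f = v / (2 * dx * sin(theta)) for energy
    arriving at opening angle theta across a trace spacing dx."""
    return v_mps / (2.0 * dx_m * math.sin(math.radians(opening_angle_deg)))

f30 = max_unaliased_frequency(2200.0, 30.0, 30.0)   # legacy 30m spacing
f15 = max_unaliased_frequency(2200.0, 15.0, 30.0)   # halved spacing
```

With the 30m station spacing the limit is about 73Hz, matching the text; halving the spacing doubles the limit, which is the motivation for the denser-geometry emulation discussed in the 5D interpolation section.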

A quick review of these acquisition parameters concluded that:

 The source line spacing and receiver line spacing are too coarse to capture the heterogeneities that lie within the shallow portion of the Quaternary section.
 The frequencies that are required to achieve the resolution that we desire today are much higher and would require a tighter source and receiver spacing, i.e. a smaller bin size.

To complicate matters further (and this is a concern specific to the project area), the velocity profile is atypical. Specifically, it does not continuously increase as a function of depth but shows strong velocity inversions within the Quaternary, as later delineated by tomographic depth velocity model building and shown in Figure 2. These velocity inversions are likely due to the transport and deposition of faster material by the ice sheets during the periods of glaciation. They will impact both depth imaging and reservoir characterization and would require a patch that is larger than usual.


Figure 2. Velocity model along a traverse from PSDM tomographic modeling.

5D Interpolation
The need for interpolating or regularizing inadequately sampled seismic data to minimize migration artefacts (migration is based on the principle of constructive and destructive interference) and to preserve amplitudes was identified long ago (Ronen 1987, for instance). It is of particular importance for 3D land surveys, as lakes, rivers, setback distances from pipelines, wellheads and other infrastructure will induce holes in the seismic coverage that are difficult to fill in during processing. Various obstacles at surface will alter the regularly sampled acquisition survey geometry (orthogonal, mega-bin, bricks, etc.) and introduce some randomness in source and receiver spacing and in source line and receiver line spacing. These perturbations will clearly affect the uniformity of the coverage.


Numerous tools have been developed over the years to try to address the aforementioned issues. Multi-dimensional interpolation of

the seismic data (so-called 5D interpolation for 3D datasets) has established itself as one of the most robust. 5D interpolation (Wang

2004, Liu and Sacchi 2004, Trad 2008, Downton et al 2012, Cary 2012, etc.) comes in various flavors as each geophysical

contractor has now implemented its proprietary approach, however it is beyond the scope of this paper to discuss their respective

strengths and weaknesses. 5D interpolation remains an area of active research both in academia and in the industry as issues such

as aliasing continue to be addressed by increasingly sophisticated technology.

For the legacy oil sands data, 5D interpolation offered a potential way of improving the spatial sampling such that pre-stack time migration (PSTM) could be used to provide a higher resolution image. This dataset had been processed with a post-stack time migration flow in the past. While PSTM flows were evaluated and brought some limited uplift, they also introduced a lot of migration artefacts. This process was therefore considered of limited value by the geophysicists and interpreters at that time. The post-stack time processing flows were optimized over the years and became standard for the processing of the monitor surveys (4D).

Because this legacy 3D survey is quite coarse and because we wanted to assess the potential of this dataset for pre-stack imaging and reservoir characterization, we opted for an emulation of a denser seismic data acquisition where both the source and receiver line spacing and the in-line source and receiver spacing would be divided by two. This reduced the CMP bin size to 7.5m by 7.5m. The main goal was to improve the azimuthal coverage and reduce migration artefacts. Small holes were filled in, but larger holes (lakes for example) remained when the gaps between data points were too large to be handled reliably by the interpolator.

Time and Depth Imaging


For the initial part of our analysis, the legacy 3D survey was reprocessed in 2012 with an amplitude-friendly multi-azimuth PSTM processing flow. As outlined above, the pre-stack data that were input to the migration were 5D interpolated. In the study area, all of the geological layers are extremely shallow. Because the velocity is around 2000m/s on average, there is almost a 1:1 equivalence between the time and depth scales. The entire geological section lies within the first 400-450ms of the seismic image. Such a short time window presents considerable challenges in processing, and care was taken to allow the new sequence to provide as much uplift as possible.

When completed, the visual differences between the legacy post-stack and new pre-stack migrated volumes were subtle when comparing individual in-lines and cross-lines, confirming earlier findings. However, differences were more pronounced on time slices and attribute volumes (coherency cubes, etc.). In particular, on the pre-stack time volume, the topography of the Devonian unconformity is much more detailed than on the post-stack volume. This is of primary importance for designing the optimal SAGD horizontal well pairs that will be located just a few meters above it or that will be running through some portion of it.
Figure 3. 3D traverse image comparisons after new processing:
Top – Legacy Time Migration
Bottom – Anisotropic Pre-Stack Depth Migration stretched back to time.
The two sections have a different datum.
To take the work further, a pre-stack depth migrated (PSDM) image was produced, again using a 5D interpolated input. The benefits of the pre-stack depth migration volume over the pre-stack time migration volume were obvious. Not only did the wells tie in depth at the caprock and at the Devonian unconformity, but the structural positioning and definition were improved. Figures 3 and 4 clearly show the benefit of the reprocessing.

Figure 4. An anisotropic PSDM traverse with well ties and horizontal well pad.

The pre-stack depth migration flow is multi-azimuth and anisotropic, with the data being migrated from the base of the weathering layer as described by Charles et al (2008). As in that paper, an anisotropic model provided better focusing and well ties than an isotropic model. Because each geological macro layer may be affected by different types of spatially variable anisotropy, evaluating the anisotropic parameters was very difficult for this dataset. Figure 5 shows common-offset common-azimuth (COCA) displays for two different areas of the survey – the variation in azimuthal effects is clear. However, it proved impossible with this dataset to dissociate the shallow azimuthal velocity variations within the Quaternary from potential orthorhombic anisotropy effects. The fold was just too low in the upper part of the Quaternary section. As a result, we limited the anisotropy symmetry to tilted transverse isotropy. Imaging differences in the individual azimuthal volumes showed that the data were affected by both illumination issues and azimuthal variations. Additional azimuthal corrections were therefore applied to the individual azimuthal volumes using a non-rigid matching technique prior to stacking.


Figure 5. COCA plots at two different locations with very different azimuthal responses.

We concluded that the anisotropic pre-stack depth migration volume provided the best answer in terms of resolution and data conditioning, confirming common knowledge. Pre-stack depth migration gathers were therefore used in reservoir characterization in subsequent work.

High resolution 2D test lines


While the reprocessing efforts on the 3D dataset using 5D interpolation, PSTM and A-PSDM had yielded significant improvements over the legacy product, it was felt that the resulting image still lacked the resolution necessary to optimize the SAGD production of the area. In particular, the Quaternary lacked detail, and the Devonian surface still showed evidence of smearing.
To test this hypothesis, several high-effort, high-resolution 2D lines were acquired in the project area to assess the Earth response and determine what acquisition parameters would be required to achieve the desired imaging results. Two 2D walkaway VSPs were also acquired to get an estimate of attenuation and anisotropy as well as to locate multiples and converted modes. For all of these lines, the source and receiver spacing are both 5m, with sources located at the receiver midpoints. All receivers were single 3C analog geophones. Some lines were shot live three times, with source depths of 6m, 9m and 12m. Test shots were fired at both ends of one line with varying charge sizes and depths. Examples are shown in Figure 6.
Figure 6. 2D source tests and autocorrelations from a deep time window.

Comparing shot records, spectra and the stack sections in particular, 1/8kg at 6m was considered to be the best source for these lines. As expected, increasing the charge size shifted the dominant frequency of the source wavelet towards the lower end of the spectrum (Figure 6). Although a higher frequency source-ghost notch is more desirable, there is a trade-off between reducing the effect of the source ghost (i.e. by decreasing the hole depth) and the amount of source-generated noise (ground roll, air waves, etc.) that will be found in the shot records, while minimizing the risk of blowouts. The frequency notch due to the source ghost is at about 125Hz for the 6m shot points, at about 95Hz for the 9m shot points, and at about 80Hz for the 12m ones.
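These notch frequencies are consistent with the vertical-incidence ghost relation f_notch = v/(2d), which can be inverted to give the velocity implied at each shot depth. The increase of that velocity with depth is what one would expect from a near-surface gradient; note the velocities below are back-calculated from the quoted notches, not measured values.

```python
def implied_velocity(shot_depth_m, notch_hz):
    """The first source-ghost notch at vertical incidence sits at
    f = v / (2 * d), so the implied velocity at the shot depth is
    v = 2 * d * f."""
    return 2.0 * shot_depth_m * notch_hz

# (depth in m, quoted notch in Hz) pairs from the source tests
velocities = [implied_velocity(d, f) for d, f in [(6, 125), (9, 95), (12, 80)]]
```

The implied velocities rise from 1500 m/s at 6m to 1920 m/s at 12m.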
Figure 7. 2D source FK spectra for different receiver spacing, from left to right: 5m, 10m, 20m, 30m.

F-K spectra (Figure 7) illustrate aliasing wraparounds of the most powerful events for different receiver spacings. There is almost no aliasing for a receiver spacing of 5m, some mild aliasing for a receiver spacing of 10m, and a fair amount of aliasing wraparound for receiver spacings of 20m and 30m.
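The wraparound behaviour follows from the Nyquist wavenumber k_N = 1/(2Δx): a linear event with apparent velocity v aliases above f = v·k_N. Taking a hypothetical ground-roll velocity of 500 m/s (an assumed value for illustration; the actual velocities vary across the survey):

```python
def alias_onset_hz(velocity_mps, dx_m):
    """A linear event with apparent velocity v wraps in f-k above
    f = v / (2 * dx), where its wavenumber reaches Nyquist 1/(2*dx)."""
    return velocity_mps / (2.0 * dx_m)

# Onset of aliasing for each tested receiver spacing, assumed v = 500 m/s
onsets = {dx: alias_onset_hz(500.0, dx) for dx in (5, 10, 20, 30)}
```

At 5m spacing such an event stays unaliased up to 50Hz, while at 30m it wraps above roughly 8Hz, consistent with the progression seen in Figure 7.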

While the 5D interpolation and PSTM / A-PSDM work had yielded some imaging improvements, the high resolution 2D proved to be spectacularly better, confirming the suspicion that 5D interpolation and depth imaging on the existing dataset fell short of the potential for seismic to provide detailed images for SAGD production. Figures 8 and 9 compare two different 2D lines (L1 and L2) with both the legacy 3D and the new PSTM 3D respectively.


Figure 8. Comparison between the legacy 3D (left) and the high resolution 2D line L2 (5m x 5m). Note that one can
now distinguish the architecture of the reservoir at about 250ms on the 2D line.
Figure 9. Comparison between the new 3D PSTM (left) and the high resolution 2D line L1 (5m x 5m). Although the
PSTM improved on the legacy 3D the uplift provided by the high resolution 2D line is compelling.

Since the legacy 3D and the high resolution 2D lines have almost identical source and geophone characteristics, with the processing flows being very similar, the improvement in seismic resolution that we see can only be due to the increase in surface sampling. To test this further, decimation was used to assess the degradation of the 2D seismic image as the source and receiver spacing increases. Different decimation combinations were compared: 5m x 5m versus 10m x 10m, 20m x 20m and 30m x 30m (Figures 10-13). Source-only decimation results were also derived: 5m x 5m versus 10m x 5m, 20m x 5m and 30m x 5m. It should be noted that no effort was made to process each combination from scratch – while this is likely to result in slightly improved images with, for example, the removal of some low frequency migration artifacts, the spatial resolution losses would remain.
Figure 10. Raw 2D PSTM stack with a 5m source spacing and a 5m receiver spacing.
Figure 11. Decimated raw 2D PSTM stack with a 10m source spacing and a 10m receiver spacing.
Figure 12. Decimated raw 2D PSTM stack with a 20m source spacing and a 20m receiver spacing.
As expected, when the 2D source and receiver spacing were both 30m (Figure 13), the resolution deteriorated to produce an image that is similar to the legacy 3D. Even the 30m x 5m combination (Figure 14) clearly shows a deterioration of the illumination of the Devonian unconformity in particular, as well as of the caprock and the shallow part of the Quaternary, compared to the original full-resolution image.
Figure 13. Decimated raw 2D PSTM stack with a 30m source spacing and a 30m receiver spacing.
As a final test, the decimated 2D datasets were 5D interpolated back to the original geometry to assess how well the process could recover the missing data. Figure 15 shows the most extreme case, from the 30m x 30m decimation back to 5m x 5m. Comparing this with Figure 10, the original 2D image, clearly shows the limitations of the method, even with the latest anti-aliasing algorithms. The missing information, in particular the heterogeneities in the subsurface, results in an image notably poorer than that produced from real high resolution seismic. This is consistent with the result seen from running 5D interpolation on the 3D legacy data.
Figure 14. Decimated raw 2D PSTM stack, with a 30m source spacing and a 5m receiver spacing.
Although 5D interpolation is a very useful tool to interpolate un-aliased 3D data, regularize the seismic data coverage and reduce migration artifacts (Trad 2009, Downton et al 2012, Cary 2012), it is clearly not a substitute for high density, high resolution seismic data acquisition in this geological setting. It is simply not possible to obtain, or even come close to, the imaging quality of the high resolution 2D lines using current processing technology on the existing sparse 3D acquisition data. New 3D acquisition parameters were proposed by Charles et al in 2013.


Figure 15. Raw 2D PSTM stack decimated to a 30m source spacing and a 30 m receiver spacing and then
interpolated back to a 5m source spacing and 5m receiver spacing. Compare this section with figure 10.
The 2D converted wave data has proven problematic in this area, and work is continuing on understanding the results. The PS section, scaled to the PP section, is of much lower spatial and temporal resolution. The existence and location of the conversion points vary along the reflectors, making the registration and statics estimation challenging. Attenuation on the S-waves was clearly visible but difficult to compensate for. We concluded that acquiring adequately sampled 3C data at the study area would be extremely costly and leave an unacceptable environmental footprint. While a joint inversion of PP/PS data to recover the density parameter more accurately is highly desirable, it may just not be possible within the project area.

Conclusions
Despite such a short time window, anisotropic pre-stack depth migration provided substantial imaging improvements over pre-stack and post-stack time imaging. Strong azimuthal variations were observed at far offsets, but a much denser dataset would be required to dissociate azimuthal anisotropy effects from shallow velocity variations. Nevertheless, even tilted transverse isotropy (TTI) improved the results over those derived with an isotropic model.

For this project area, we have established that 5D interpolation used as a regularization tool was beneficial, as it improved the azimuthal coverage and data conditioning and reduced migration artefacts, enabling pre-stack imaging. However, both the decimation and interpolation tests of the 2D data, and the comparison of the interpolated 3D data to the high resolution 2D data, demonstrated that there is no magic from 5D interpolation. Interpolation simply cannot correctly recover the heterogeneities in the subsurface when they have been insufficiently illuminated by poor spatial sampling. With the current processing technology, 5D interpolation is not a substitute for high density, high resolution seismic data acquisition in the project area.

Acknowledgements
We would like to thank the following people and organizations: Suncor Energy Inc. for the permission to publish these results, all of those at Schlumberger Geosolutions who helped with this project, Josef Heim in particular, Peter Vermeulen and Andres Altosaar for their geological insights, and Eileen Charles, Michael Webb, Graham McFarlane, Karen Primrose and Mike Perz for reviewing and improving this paper.

Disclaimer
Suncor Energy Inc. and its affiliates (collectively "Suncor") do not make any express or implied representations or warranties as to the accuracy, timeliness or completeness of the statements, information, data and content contained in this presentation and any materials or information (written or otherwise) provided in conjunction with this presentation (collectively, the "Information"). The Information has been prepared solely for informational purposes only and should not be relied upon.
