
GRAVITY AND MAGNETICS
IN TODAY'S OIL & MINERAL INDUSTRY
(Updated in 2011)























Course notes prepared by
Professor J. Derek Fairhead


All course notes are copyright protected. These notes are for the sole use of
individuals attending the training course and cannot be reproduced for third parties
in any form without the written approval of Prof. J. D. Fairhead

Prof J Derek Fairhead, GETECH Group plc
Email: jdf@getech.com
Web: www.getech.com





GRAVITY AND MAGNETICS
IN TODAY'S OIL & MINERAL INDUSTRY

Copyright

The hard copy materials making up the course notes and the digital PDF product are copyright protected and cannot be copied or given in any way to third parties without the written approval of Prof. Fairhead.


About the Author: J Derek Fairhead
Biography
Derek Fairhead took a Joint Honours degree in Physics and Geology at Durham University, followed by an MSc and PhD in Geophysics at Newcastle upon Tyne University on the seismicity of Africa and the crustal structure of the East African Rift System based on gravity and magnetic data. He joined the Department of Earth Sciences, University of Leeds in 1972 as a Lecturer in Geophysics, was promoted to Senior Lecturer, and in 1996 to Professor of Applied Geophysics. He founded GETECH in 1986 and has been its Managing Director since then. GETECH originally stood for Geophysical Exploration Technology Ltd; it was spun out from the University of Leeds in 2000 before successfully floating on AIM in 2005. GETECH offices are now located at Elmete Hall, Roundhay, Leeds. GETECH has compiled the world's largest gravity and magnetic database and provides a range of services to the oil and mineral industries: traditionally the provision of data, data processing, data integration and integrated interpretation studies. Since 2004 GETECH has also developed a Petroleum Systems Evaluation Group (PSEG) headed by internationally recognised geoscientists. This range of non-seismic services thus provides a set of integrated exploration solutions enabling the quantitative evaluation of sedimentary basin structure and architecture and the evolution of its petroleum systems.

His main academic interests lie in Applied Geophysics: improving interpretation theory; understanding the geological and geophysical controls on sedimentary basin development within and along continental margins; and crust/mantle processes related to the rifting and break-up of continents and the influence that plate tectonics has on continental tectonics. These studies by their very nature require an integrated approach. In 1999 the SEG honoured him at their annual meeting in Houston for his services to the oil industry and academia by presenting him with the Special Commendation Award.

Table of Contents
Title: GRAVITY AND MAGNETICS IN TODAY'S OIL & MINERAL INDUSTRY (Updated 2011)

INTRODUCTION
Section 1 - Introduction
Section 2 - General Properties of Gravity & Magnetic (Potential) Fields
Section 3 - Gravity & Magnetic Units and Rock Magnetism

GRAVITY
Section 4 - Gravity Anomalies
Section 5 - GPS in Gravity Surveys (Land, Marine & Air)
Section 6 - Land Gravity Data: Acquisition and Processing
Section 7 - Marine Gravity Data: Acquisition & Processing
Section 8 - Airborne Gravity Data: Acquisition & Processing
Section 9 - Gravity Gradiometer Data
Section 10 - Satellite Altimeter Gravity Data: Acquisition & Processing
Section 11 - Global Gravity Data & Models
Section 12 - Advances in Gravity Survey Resolution

MAGNETICS
Section 13 - Magnetic Data: Geomagnetic Field and Time Variations
Section 14 - Magnetometers and Satellite & Terrestrial Global Magnetic Models
Section 15 - Aeromagnetic Surveying

MAPPING
Section 16 - Geodetic Datums and Map Projections
Section 17 - Gridding and Mapping Point and Line Data

DATA ENHANCEMENT
Section 18 - Understanding the Shape of Anomalies and Classic Enhancement Methods to Isolate Individual Anomalies
Section 19 - Data Enhancement

INTERPRETATION
Section 20 - Interpretation Methodology/Philosophy: General Approach
Section 21 - Structural (Edge) Mapping: Detection of Faults and Contacts
Section 22 - Estimating Magnetic Depth: Overview, Problems & Practice
Section 23 - Quantitative Interpretation: Manual Magnetic Methods
Section 24 - Quantitative: Forward Modelling
Section 25 - Quantitative: Semi-Automated Profile Methods
Section 26 - Quantitative: Semi-Automated Grid Methods - Euler
Section 27 - Quantitative: Semi-Automated Grid Methods - Local Phase (or Tilt), Local Wavenumber, Spectral Analysis and Tilt-Depth

APPENDIX
USGS Map Projections




INTRODUCTION

Section 1 - Introduction
Section 2 - General Properties of Gravity & Magnetic (Potential) Fields
Section 3 - Gravity & Magnetic Units and Rock Magnetism





SECTION 1: INTRODUCTION




1.1 Geophysical Methods and their
related Rock Physical Properties

Geophysical exploration methods exploit the fact that as
lithology varies, so do the physical properties of the rock
concerned. The physical properties exploited are as
follows:


Gravity Method: determines the sub-surface spatial distribution of rock density, ρ, which causes small changes in the earth's gravitational field strength.

Magnetics Method: determines the sub-surface spatial distribution of rock magnetisation properties, J (or susceptibility and remanence), which cause small changes in the earth's magnetic (geomagnetic) field strength and direction.


Electrical Method: determines the sub-surface spatial distribution of rock conductivity (= 1/resistivity) using artificially stimulated electrical fields (or time-varying magnetic fields) and measuring their effects.

The above methods are POTENTIAL FIELD methods and give non-unique solutions.


Seismic Method: determines the sub-surface spatial distribution of seismic elastic properties (acoustic impedance, ρV). Reflection seismics allows high-resolution imaging of subsurface structures down to a certain depth; however, this method is very expensive. The first three methods are thus rapid methods for evaluating the existence of sub-surface structures and help to identify areas for seismic exploration.

Since physical property boundaries do not always
coincide with geological boundaries (chrono-
stratigraphic or lithological), geophysical surveys always
have to be treated with an understanding of what they
can and cannot tell you. Nevertheless, great insights,
obtained no other way (apart from saturation drilling),
can result.

Since it is seldom oil industry practice to log for susceptibility or to measure remanence, it is normally necessary to estimate (or guess) magnetic properties, or to derive them as part of the interpretation process. The mineral industry, by contrast, will often log susceptibility.

1.2 Role of Grav/Mag Methods in Oil
Exploration

Gravity and magnetic methods are used as follows:

Pre-seismic stage: Grav/Mag surveys help to evaluate depth to basement and to map structural and basin configuration, and thus provide major input to seismic survey design.

During the seismic exploration stage: Gravity is collected along and between seismic reflection lines to allow interpolation/extrapolation of structures between and away from the seismic lines. (Ground magnetic data are rarely collected.)

During seismic processing stage: Since gravity
data can be processed rapidly, they can be used to
define structures, particularly faults which can help
seismic processing decisions. Gravity can be used with
seismic data to improve velocity models of the near
surface which will allow better imaging of deeper
structures.

Post-seismic stage: Model studies check whether the seismic interpretation is correct and help to model the deeper parts of sedimentary basins not imaged by the seismic data.


1.3 Role of Grav/Mag methods in
mineral exploration

Aeromagnetic surveys are used to assist in basic mapping and to identify target areas for follow-up studies. Since about 2000, airborne gravity surveys have become more common for identifying geological structures and targets. Combined airborne gravity and magnetics can be a powerful means of differentiating rock types.

Follow-up high-resolution aeromagnetic surveying or ground magnetic profiling is used to define targets better.

Grav/Mag modelling, with drilling, to identify the size and subsurface shape of an ore body.



1.4 Basic differences between gravity
and magnetic methods

Gravity and magnetic fields follow very similar physical laws, yet the geological information they yield is very different. This is because their sources are commonly of quite different geological and physical nature. Rock density seldom goes outside the range 1.8 to 3.0 g/cc (often 2.2 to 2.7 g/cc), i.e. a range of less than an order of magnitude, whereas magnetisation shows a very large range, up to several orders of magnitude, since it is controlled by the magnetic mineral content (normally magnetite) of a rock.

Gravity Method: Here the object is to determine the
spatial variation in the acceleration due to gravity (small
g) which depends on the mass (density and volume) of
the rocks underlying the survey area.

The force of attraction F between the gravity meter mass M and a body of mass m is given by

F = G m M / r²

where G is the gravitational constant and r is the distance between the masses m and M.

Since F = Mg, then g = G m / r²

so g is proportional to the mass m, and hence proportional to the density ρ.

Density is a scalar quantity (has only magnitude, not
direction). This makes the shape of gravity anomalies
simpler and generally easier to interpret than magnetic
anomalies. Density boundaries tend to be associated
with: porosity changes, faults, unconformities, basin
edges or basin floor, limestones, dolomite or evaporite
occurrences, salt occurrences and major lithologic
boundaries.

Magnetic Method. Here the object is to determine the
spatial variation of the geomagnetic field within the
survey area and use these magnetic field variations to
say something about the geometry, depth and magnetic
properties of subsurface rock structures. The
magnetisation of rocks has both direction and
magnitude (thus magnetisation is a vector quantity) and
can be a combination of both remanent and induced magnetisation. The induced magnetisation depends on the rock's susceptibility, while the remanent magnetisation (remanence) depends on the history of the rock.

These factors tend to make the shape of the magnetic
anomalies complex and in general more difficult to
interpret than gravity anomalies.

1.5 Scaling Properties of Gravity and
Magnetic Data

Gravity Field: Due to its monopole source nature, the amplitude of g is proportional to scale change. That is, if a structure is double the size of another (or the density contrast doubles) then the gravity effect will also double; this can be viewed as a doubling of the mass. Gravity maps are therefore often dominated by the gravitational effects of large regional density structures, and the gravity effects due to the shallow, small-scale structures that are of interest may represent only a small percentage of the gravity signal (often less than 10%).

Magnetic Field: Due to its dipole source nature, the magnetic field scales differently: the amplitude of a magnetic anomaly is unaffected by a physical scale change. This is partly because the magnetic effect arises not from the bulk volume of the magnetic material but from the surface area of the magnetic interface, and partly because magnetic fields decay more rapidly with distance. This causes magnetic maps to appear to favour the effects of shallow sources over deep ones. That can be a problem when volcanics (generally strongly magnetic) occur within the sedimentary section of a basin, since their magnetic signal will tend to dominate and make it difficult or impossible to determine the depth of the basin from the magnetic signal arising from the crystalline basement interface. If no shallow volcanics are present, the effects of the crystalline basement can usually be seen in magnetic maps.
Figure 1/1 illustrates the scaling effect over a simple 2D body.


Figure 1/1: Scaling relations. Solid anomaly lines relate to the effects of the larger body and dashed lines to those of the smaller body.
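The scaling contrast can be checked numerically. The sketch below (our own illustrative Python, with assumed geometry; not from the notes) doubles every length of a buried-sphere model and compares the monopole (gravity) and dipole (magnetic) peak responses:

```python
# Scale every length of a model by s and compare the peak fields:
# gravity (monopole) grows with s; the dipole amplitude does not.
import math

G = 6.674e-11                  # m^3 kg^-1 s^-2
MU0 = 4 * math.pi * 1e-7       # H/m

def sphere_gravity(radius, depth, drho=300.0):
    """Peak gravity (m/s^2) above a uniform sphere: a monopole-like source."""
    mass = (4.0 / 3.0) * math.pi * radius**3 * drho
    return G * mass / depth**2

def sphere_dipole_field(radius, depth, magnetisation=1.0):
    """Peak axial field (T) of the equivalent dipole of a magnetised sphere."""
    moment = (4.0 / 3.0) * math.pi * radius**3 * magnetisation   # A m^2
    return MU0 * 2.0 * moment / (4.0 * math.pi * depth**3)

for s in (1.0, 2.0):           # s = 2 doubles radius and depth together
    g = sphere_gravity(500.0 * s, 2000.0 * s)
    b = sphere_dipole_field(500.0 * s, 2000.0 * s)
    print(f"scale {s:.0f}: gravity {g:.3e} m/s^2, magnetic {b:.3e} T")
```

Running this shows the gravity amplitude doubling while the magnetic amplitude is unchanged, exactly the behaviour sketched in Figure 1/1.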

1.6 Lateral Density and Magnetisation
contrasts

Lateral, rather than vertical, density contrasts cause gravity anomalies. The magnetostatic charge of the body will control the magnetic anomaly (see Figure 1/2).

Table: Gravity vs Magnetic Effects

Anomaly detection
  Gravity:  vertical component of the local anomalous gravity
  Magnetic: components of the local anomalous flux density along the local field direction

Means of measurement
  Gravity:  spring balance
  Magnetic: atomic/nuclear effects

Source type
  Gravity:  monopole
  Magnetic: dipole

Scaling
  Gravity:  increases with scale
  Magnetic: independent of scale

Body physical property
  Gravity:  mass
  Magnetic: magnetic dipole moment

Rock physical property
  Gravity:  density, i.e. mass per unit volume
  Magnetic: magnetisation (induced or remanent), i.e. dipole moment per unit volume

Range of physical property
  Gravity:  rocks are 1.8 to 3.0 g/cc, often 2.0 to 2.8 g/cc (less than an order of magnitude)
  Magnetic: 0 to 40 A/m (k = 0 to 1 SI; up to 5 orders of magnitude)

Lateral changes
  Gravity:  gradual, with well defined boundaries (faults, contacts)
  Magnetic: chaotic in basement rocks, but can be similar to gravity

Low values
  Gravity:  water (1.0 g/cc, less for ice, up to 1.03 g/cc for sea water); halite; unconsolidated or porous sediments
  Magnetic: sediments

Mid values
  Gravity:  shales
  Magnetic: acid igneous and metamorphic rocks

High values
  Gravity:  consolidated sediments, carbonates, anhydrite, igneous rocks
  Magnetic: basic igneous and metamorphic rocks, iron ore, banded iron formation

Geological information provided
  Gravity:  faults, basin location, sediment thickness, porosity, structure of sediments (HRA)
  Magnetic: extrusive and igneous rocks; depth and structure of metamorphic basement



Figure 1/2 Lateral density and susceptibility
contrasts cause anomalies.

For magnetic anomalies, the anomaly response is more
complex than just lateral changes in susceptibility since
magnetisation is a vector and we need to consider
magnetic flux and magnetostatic charge. The magnetic
response of a geological structure depends on the
orientation of the body with respect to the inducing field
and the top surface of a structure often contributes
significantly to the shape and size of a magnetic
anomaly. (see Section 18 for understanding the shape
of magnetic anomalies).
1.7 Geological Context

Rocks tend to be more uniform in their density than in
their magnetisation. Different rock types tend to have
different densities and magnetisations.

In general:

High density (3.0 g/cc)  ----------------->  Low density (1.8 g/cc)
Ultrabasic > Basic > Metamorphic > Acid Intrusive > Sediments
Strong magnetisation  ----------------->  Weak magnetisation

General trend in density and magnetisation properties
of rocks is similar with high density rocks (e.g. Gabbro)
having strong magnetisation and low density rocks (e.g.
sediments) having weak magnetisation. Thus it is to be
expected that gravity and magnetic anomaly maps will
show some degree of correlation.

1.8 Reference Books

M. B. Dobrin, Introduction to Geophysical Prospecting, McGraw-Hill.

J. Milsom, Field Geophysics, Geological Society of London.

D. S. Parasnis, Principles of Applied Geophysics, Chapman and Hall.

G. D. Garland, The Earth's Shape and Gravity, Pergamon Press.

C. Tsuboi, Gravity, George Allen and Unwin.

W. Torge, Gravimetry, Walter de Gruyter.

W. M. Telford et al., Applied Geophysics, Cambridge University Press.

F. S. Grant and G. F. West, 1965, Interpretation Theory in Applied Geophysics, McGraw-Hill.

R. J. Blakely, 1995, Potential Theory in Gravity and Magnetic Applications, Cambridge University Press.

AGSO Journal of Australian Geology & Geophysics, Vol. 17, No. 2, 1997, Thematic Issue: Airborne Magnetic and Radiometric Surveys, ed. P. Gunn.

SEG Geophysical References No. 8 / AAPG Studies in Geology No. 43, 1998, Geologic Applications of Gravity and Magnetics: Case Histories, eds. R. I. Gibson and P. S. Millegan.



SECTION 2: GENERAL PROPERTIES OF GRAVITY
& MAGNETIC (POTENTIAL) FIELDS




2.1 General Properties

Potential field theory can be applied to a broad class of
force fields in which no dissipative losses of energy
occur when a body moves from one point to another.
Such fields are conservative and the potential energy of
the body depends only on its position and not on the
path along which the body moved.

For simplicity let us only consider the gravity field.

Field of a monopole (Gravity): The force between two monopoles of strength (mass) m1 and m2 situated a distance r apart is

F = G m1 m2 / r²

Figure 2/1: Field of a monopole (Gravity)

Let m1 go to 1 (a unit pole) and m2 go to m. The unit pole moves a distance dr against the field of m.

External work dU = -F dr (force × distance), where U = potential energy, so

dU/dr = -F = -G m / r²

Integrating:

U = G m / r + constant

To make the constant = 0 we define U as the work done in bringing the unit pole from infinity to the point, i.e. when m is at infinity with respect to the unit mass, U = 0, so constant = 0.

So the potential energy is U = G m / r
In vector notation F = grad U = dU/dr = ∇U, where grad is short for gradient:

F = Fx + Fy + Fz = i ∂U/∂x + j ∂U/∂y + k ∂U/∂z

where i, j and k are unit vectors in the x, y and z directions.

Question 1: Does the gravitational force field of a
mass interact with the gravitational force field of
another mass?


Figure 2/2: Hall Effect

If you apply current and magnetic fields in perpendicular directions through a material, it is possible to generate a force field in the third perpendicular direction. Thus the current and magnetic force fields have interacted, giving rise to a voltage potential in the mutually perpendicular direction. So do two gravitational force fields interact to give rise to a new vector field?

To answer this one needs to solve for Curl F:

∇ × F = i (∂Fz/∂y - ∂Fy/∂z) + j (∂Fx/∂z - ∂Fz/∂x) + k (∂Fy/∂x - ∂Fx/∂y)

For a potential field such as gravity and magnetics, Curl F = 0.

Answer: There is no interaction between gravity force fields. Therefore the gravity force is a continuous function of the space co-ordinates.


Question 2: How does the gravity force field diverge
or change with distance?

One needs to evaluate the divergence, div F.

Figure 2/3: Divergence of force field

Consider the force in terms of lines of force (flux). We know F is proportional to 1/r², and at distance r the surface enclosing the mass m has an area proportional to r².

The force corresponds to the number of lines of force per unit area, so it is possible to express the divergence of the force field (∂F/∂r) in terms of surface or volume integrals.

i) Consider a surface S that encloses no attracting matter

Figure 2/4: Surface enclosing no attracting matter

∫v div F dv = ∫s Fn ds = 0

where Fn is the force normal to the surface S, so

∫v (∂F/∂r) dv = ∫s Fn ds = 0

Since F = dU/dr = grad U, therefore div grad U = 0, or

∇²U = ∂²U/∂x² + ∂²U/∂y² + ∂²U/∂z² = 0        Laplace's Equation

ii) Consider a surface S that encloses some attracting matter

Figure 2/5: Surface enclosing some attracting matter

∫s Fn ds = ∫v div F dv = -4πGM

where this is the total flux crossing the surface S, which does not depend on the position of the masses, and M = total mass = m1 + m2 + m3

(in magnitude: (GM/r²)(4πr²) = 4πGM)

Thus it can be shown that

∇²U = div F = -4πGρ        Poisson's Equation





2.2 Implications of Laplace's and Poisson's Equations

i. Interpretation of gravity (or magnetic) fields is non-unique

Figure 2/6: Interpretation non-uniqueness

Both masses give rise to the same gravity force field at the surface. Thus one needs to use all available geological information to constrain the interpretation.

ii. Use Laplace's Equation to obtain 2nd vertical derivatives

∇²U = ∂²U/∂x² + ∂²U/∂y² + ∂²U/∂z² = 0

so

∂²U/∂z² = -(∂²U/∂x² + ∂²U/∂y²)

For a gravity or magnetic anomaly A,

∂²A/∂z² = -(∂²A/∂x² + ∂²A/∂y²)

where A is the anomaly field and the horizontal derivatives can be calculated from the map data.
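A minimal numerical sketch of this result (our own Python, assuming the anomaly is sampled on a regular grid; the synthetic input is purely illustrative):

```python
# 2nd vertical derivative from the horizontal 2nd derivatives via
# Laplace's equation: d2A/dz2 = -(d2A/dx2 + d2A/dy2).
import numpy as np

def second_vertical_derivative(anomaly, dx, dy):
    """Central finite differences; edges wrap, so treat border cells with care."""
    d2x = (np.roll(anomaly, -1, axis=1) - 2 * anomaly + np.roll(anomaly, 1, axis=1)) / dx**2
    d2y = (np.roll(anomaly, -1, axis=0) - 2 * anomaly + np.roll(anomaly, 1, axis=0)) / dy**2
    return -(d2x + d2y)

# Illustrative use on a synthetic 100 m-cell grid (values assumed):
ny, nx, dx = 128, 128, 100.0
yy, xx = np.mgrid[0:ny, 0:nx] * dx
anomaly = np.exp(-((xx - 6400.0)**2 + (yy - 6400.0)**2) / (2 * 2000.0**2))
svd = second_vertical_derivative(anomaly, dx, dx)
```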

iii. Upward or downward continuation of the field
The flux crossing a horizontal surface at different heights will be the same. Since the flux is diverging, the flux density changes with height, which causes an increase in the wavelength and a decrease in the amplitude of an anomaly with height. This provides a means of predicting the change in anomalies (wavelength and amplitude) with distance from their source: upward continuation acts like a low-pass filter, while downward continuation acts like an amplifier, which also amplifies noise.
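The sketch below (an illustrative Python implementation under the usual FFT assumptions of a periodic, regularly sampled grid; not the notes' own software) applies the standard upward-continuation operator exp(-|k| dz) in the wavenumber domain:

```python
# Wavenumber-domain upward continuation: each Fourier component is
# attenuated by exp(-|k|*dz), the low-pass behaviour described above.
import numpy as np

def upward_continue(grid, dx, dy, dz):
    """Continue a gridded field upward by dz metres (dz > 0)."""
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)   # radial wavenumber |k|
    spec = np.fft.fft2(grid) * np.exp(-k * dz)     # attenuate with height
    return np.real(np.fft.ifft2(spec))

# e.g. continue a 100 m-cell grid up by 500 m:
# grid_up = upward_continue(grid, 100.0, 100.0, 500.0)
```

Downward continuation would use exp(+|k| dz) instead, which is why it amplifies short-wavelength noise.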



Figure 2/7: Upward continuation of the field.
Implications: Spectral content of anomaly
systematically changes with wavelength increasing
and amplitude and gradients decreasing with height.

This systematic change in spectrum is clearly seen when anomalies resulting from sources at different depths are visualised in terms of their power spectrum. The example in Figure 2/8 is a dipole at 500 m and at 1000 m depth.

Figure 2/8: Power spectrum of a dipole at 500 m and at 1000 m depth.

The wavelength components making up an anomaly are related to each other such that they form a linear (log-power) spectrum whose slope is a function of depth (the steeper the slope, the deeper the top of the body). These spectral properties are utilised later in interpretation.
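A short sketch of the depth-slope relation (illustrative Python of our own, with an assumed source depth): for a buried 2D line source at depth h the log power spectrum falls off as -2hk, so depth is recovered as minus half the fitted slope:

```python
# Recover source depth from the slope of the log power spectrum.
import numpy as np

h, dx, nx = 1000.0, 100.0, 512              # depth (m), sample interval, samples (assumed)
x = (np.arange(nx) - nx / 2) * dx
g = h / (x**2 + h**2)                       # profile over a buried 2D line source

k = 2 * np.pi * np.fft.rfftfreq(nx, d=dx)   # angular wavenumber, rad/m
power = np.abs(np.fft.rfft(g))**2           # ln(power) ~ const - 2*h*k

sel = (k > 1e-4) & (k < 2e-3)               # fit the linear low-wavenumber part
slope = np.polyfit(k[sel], np.log(power[sel]), 1)[0]
print(f"depth estimate = {-slope / 2:.0f} m (true value {h:.0f} m)")  # close to h
```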
2.3 Variation of Density and Seismic
Velocity with Depth
(see Sheriff and Geldart, Exploration Seismology: Data Processing and Interpretation, Vol. 2)

The density of a rock depends directly upon the density of the minerals making up the rock and upon its porosity. Density variations play a significant role in velocity variations, with high density normally corresponding to high velocity. Gardner et al. (1974) introduced the relationship

ρ = aV^(1/4)

where ρ is in g/cc, V is in m/s and a = 0.31.

The density of the minerals making up sedimentary rocks has a range of 2.7 g/cc ± 4%. This is close to the density of quartz, 2.68 g/cc. A sedimentary rock, e.g. sandstone, is mainly made up of quartz but has a bulk density of 2.4 g/cc ± 10%. The difference is mainly due to the effects of porosity. To understand why, note that sedimentary rocks are either clastic or chemically deposited. Clastic rocks are composed of fragments of minerals and thus have appreciable void space. Chemically deposited rocks can be formed by recrystallisation and/or the effects of percolating solutions. The void spaces are usually filled with fluids and the bulk density, ρ, is given exactly by

ρ = φρf + (1 - φ)ρm

where φ = porosity, ρf = fluid density and ρm = matrix density.

Seismic velocity is affected directly by the bulk density and porosity, since voids are volumes of low-velocity fluids. Thus density and velocity increase with depth as rocks compact, which reduces void space and drives out the void fluids. The relation of density and velocity is shown in Figure 2/9, and in Figure 2/10 as density versus transit time (normally obtained from well logs).

Figure 2/9: P-wave velocity versus density for different lithologies. The dashed line shows constant acoustic impedance (kg/(s·m²) × 10⁶). The dotted line is ρ = aV^(1/4).
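Both relations above are straightforward to apply; a minimal sketch (our own Python, with assumed input values):

```python
# Gardner et al. (1974) velocity-density relation and the porosity
# mixing formula quoted above. Inputs are illustrative assumptions.
def gardner_density(v_mps, a=0.31):
    """Bulk density (g/cc) from P-wave velocity (m/s): rho = a * V**(1/4)."""
    return a * v_mps**0.25

def bulk_density(porosity, rho_fluid=1.03, rho_matrix=2.68):
    """rho = phi*rho_f + (1 - phi)*rho_m, densities in g/cc."""
    return porosity * rho_fluid + (1.0 - porosity) * rho_matrix

print(gardner_density(3000.0))   # ~2.29 g/cc for an assumed 3000 m/s sandstone
print(bulk_density(0.25))        # ~2.27 g/cc at 25% porosity, quartz matrix, sea-water fill
```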




Figure 2/10: Formation density versus sonic transit time. The effect of porosity is indicated by the decreasing percentage values with increasing compaction.





SECTION 3: GRAVITY & MAGNETIC UNITS &
ROCK MAGNETISM




3.1 SI, cgs and Practical Units

The units used in the petroleum industry have traditionally been, and remain, a horrible mixture of:
foot, pound, second (FPS)
centimetre, gramme, second (CGS)
In these notes practical units will be used.

In an attempt to bring order to physical units the Système International d'Unités (SI) has been introduced. The system uses the metre, kilogram, second and ampere as primary units. Where possible the relation between practical and SI units is given.


3.2 Gravity Units

i. Acceleration g
Working formula: F = G m1 m2 / r²

where F = g m1, g = acceleration due to gravity,
m1 = mass in the measuring system (gravity meter),
m2 = mass of the Earth, which is a function of rock density,
r = radius of the Earth (not a constant, since there are topography and latitude effects),
G = gravitational constant.

Small g is the pull of, or acceleration due to, gravity and is measured as follows:

1 cm/s² = 1 Gal = 0.01 m/s²
1 mGal = 10⁻³ Gal = 10⁻⁵ m/s²
SI unit: 1 GU = 10⁻¹ mGal = 10⁻⁶ m/s²

where the Gal is named after Galileo (1564-1642) and GU is the gravity unit.

The mGal is the practical unit in common use, whereas the GU is the SI unit (Système International d'Unités).

(The SI unit for g is the m/s²; 10⁻³ = milli, 10⁻⁶ = micro, 10⁻⁹ = nano.)

Earth's Gravity Field:
Mean g for the Earth is about 981,000 mGal, so 1 mGal is about 10⁻⁶ of the Earth's g.

Gravity meters can read to about 10⁻⁹ of g (0.001 scale divisions for a LaCoste & Romberg meter). In oil exploration, gravity variations of the order of 0.2 mGal (and less) can be important locally over a structure, with variations of tens of mGal being more common over sedimentary basins.

ii. Gravity Gradients
Because the practical unit of acceleration for exploration gravity surveys is the mGal, it makes sense to use gradient units that are easily related to our unit of convenience. For historical and practical reasons the Eötvös (E) is defined in terms of the Gravity Unit (GU):

1 E = 1 GU/km = 10⁻⁹ s⁻²
or 1 E = 0.1 µGal/m
or 1 E = 1 nGal/cm

iii. Density, ρ

Figure 3/1: Range of sediment densities with depth based on well log data

Density, ρ, is a scalar quantity (it has only magnitude, no direction).

1 gram per cubic centimetre = 1 g/cc = 1000 kg/m³

The SI unit is the kg/m³; the practical unit is the g/cc (g/cm³ or g·cm⁻³).

The density of sea water is a function of salinity and temperature but is generally taken as 1.03 g/cc (practical unit) or 1030 kg/m³ (SI unit).


Density can be measured in a number of different ways
e.g. Dry, Saturated, Grain and Bulk density.

Density of sediments increases with depth due to:
(i) compaction (ii) lithification (iii) metamorphism



Figure 3/2: Range in bulk densities for various rock
types


3.3 Magnetic Units

These are more complex than gravity units and there are various ways of defining them.

There are four fundamental terms that can be used to
describe how magnetised a material (or a region) is.
These are

B the magnetic induction;
H the magnetic field;
J the magnetic polarisation (or simply, the
magnetisation);
M the magnetic dipole moment per unit volume

These quantities are related in different ways in the two
systems (cgs and SI units) by the following equations:

cgs:  B = H + 4πJ,   J = M
SI:   B = μ0H + J,   J = μ0M

where μ0 = 4π × 10⁻⁷ H/m is the permeability of free space.

The above equations show that all four fundamental
terms have the same units in the cgs system. In the
cgs system, however, it has been customary to
designate B in terms of Gauss (G), H in Oersteds (Oe),
and J and M in electromagnetic units (emu) per cubic
centimetre. Incidentally, it has also been customary in
the cgs system to express the magnetisation of
materials in terms of the magnetic polarisation J and to
call this quantity the intensity of magnetisation, the
magnetisation per unit volume or simply the
magnetisation. Because J and M are the same in the
cgs system, this introduces no ambiguity. In SI units, B
and J are expressed in Tesla (T), while M and H are
expressed in amperes (A) per metre.

The conversion factors for the four terms are

(B) 1 T = 10⁴ G
(H) 1 A/m = 4π × 10⁻³ Oe
(J) 1 T = (10⁴/4π) emu/cm³
(M) 1 A/m = 10⁻³ emu/cm³

Note that the conversions for B and M involve only powers of 10, while those for H and J involve factors of 4π.

A fifth important term is used to describe how magnetised an object may become under the influence of a field. This is called the magnetic susceptibility (k), for which the defining equations are

cgs:  M = J = kH
SI:   M = J/μ0 = kH

In both systems, susceptibility is a pure number. It follows from the conversions given above that 1 cgs unit of susceptibility equals 4π SI units of susceptibility.
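A compact sketch of these conversion factors (the helper names are our own, not from any standard library):

```python
# cgs <-> SI conversions for the magnetic quantities listed above.
import math

def b_tesla_to_gauss(b_t):   return b_t * 1e4                  # (B) 1 T = 10^4 G
def h_am_to_oersted(h_am):   return h_am * 4 * math.pi * 1e-3  # (H) 1 A/m = 4pi x 10^-3 Oe
def m_am_to_emu_cc(m_am):    return m_am * 1e-3                # (M) 1 A/m = 10^-3 emu/cm^3
def k_cgs_to_si(k_cgs):      return k_cgs * 4 * math.pi        # 1 cgs unit of k = 4pi SI

print(k_cgs_to_si(0.00045))  # ~0.0057 SI: the worked example quoted later in this section
```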

For these study notes the following units and
terminology are generally used in exploration.

i. Geomagnetic Field Strength, T
Measured in terms of the geomagnetic flux density B; all exploration instruments measure this.

1 Gauss (G) = 10⁵ gamma (γ) = 10⁵ nT = 10⁻⁴ Wb/m²

The SI unit is the nT.

You may be more familiar with the magnetic field given in units of the Oersted (Oe), which is the magnetic intensity H. Russian aeromagnetic maps were commonly contoured in intervals of 1 mOe = 100 nT, where 1 Oe = (10³/4π) A/m.

H is computed in the same way as B but differs in that B depends on the permeability of the material. The relation between B and H is

B = μ0H + μ0kH = μ0(1 + k)H

where μ0 is the permeability of free space and k is the susceptibility. In free space (a vacuum), and to a very good approximation in air, k = 0 and so B = μ0H.



ii. Magnetisation J
(or magnetic moment per unit volume): This is a vector magnetic property of a rock. Magnetisation J is often quoted in electromagnetic units per unit volume (emu/cm³), where 1 emu/cm³ = 10³ A/m = 4π × 10⁻⁴ Wb/m² (e.g. if k = 0.0057 SI then k = 0.00045 emu).

J = Ji + Jr

There are two components of magnetisation:

Remanent Magnetisation (or Remanence) Jr
This is a natural property of some rocks, independent of T, and is the magnetisation that remains if T is removed. The direction of Jr is often the Earth's field direction at the time of rock formation. Since rocks deform, and/or continental movements may have occurred since the time of formation, the direction of magnetisation need not be the same as the present direction of T.

Induced Magnetisation Ji (where Ji = kT)
Since Ji and T are in nT, k (the susceptibility) is dimensionless. The magnetic susceptibility of rocks is normally almost entirely dependent on the volume percentage of magnetite (Fe3O4) in the rock (Figures 3/3a, 3/3b).

Figure 3/3a: Magnetic susceptibility of rocks as a function of magnetic mineral type and content. Solid lines: magnetite; thick line = average trend; a = coarse-grained (>500 µm), well crystallised magnetite; b = finer grained (<20 µm), poorly crystallised, stressed, impure magnetite. Broken lines: pyrrhotite; thick line = monoclinic pyrrhotite-bearing rocks, average trend; a = coarser grained (>500 µm), well crystallised pyrrhotite; b = finer grained (<20 µm) pyrrhotite (after Clark and Emerson, 1991).

Figure 3/3b: Typical values of rock susceptibility (where 1 cgs = 4π SI units)

Magnetite: a fair approximation is k = 0.0025 × (% magnetite), or % Fe3O4 = 400 × k.
Pyrrhotite is the second most common source of magnetic anomalies, but its susceptibility is only about one-tenth that of magnetite: k = 0.00025 × (% pyrrhotite), or % po = 4000 × k.
Ilmenite, maghemite and titanomagnetite are less important contributors to magnetic anomalies.

Ferromagnesian minerals (e.g. amphiboles and
pyroxenes) do not contribute to susceptibility or
remanence

Since mafic rocks usually contain more magnetite than felsic rocks, their magnetic susceptibilities are correspondingly higher.
Total Magnetisation J
J is the vector sum of Jr & Ji

J = Jr + Ji

The SI unit of J is the A/m. Susceptibility k is
dimensionless but normally quoted as susceptibility per
unit volume.

Koenigsberger Ratio, Qn:
Remanence is often measured by the ratio of remanent to induced magnetisation,

Qn = Jr/Ji

Qn is not strongly lithology-dependent, but is more a function of age and metamorphic history. In old rocks (Precambrian), Qn is generally less than 1 and often < 0.2 (see Section 3.4.6); here Ji dominates. In Tertiary and younger rocks Qn may be at least 3 and possibly 10 or more, indicating that remanence dominates.
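A small sketch (our own Python; the field strength, magnetite content and Qn values are assumed for illustration) combining the strict SI form of induced magnetisation, Ji = kH = kB/μ0, with the magnetite rule of thumb above:

```python
# Induced magnetisation and Koenigsberger ratio for assumed values.
import math

MU0 = 4 * math.pi * 1e-7                 # permeability of free space, H/m

def induced_magnetisation(k_si, b_nt):
    """Ji in A/m from susceptibility (SI) and flux density B (nT): Ji = k*B/mu0."""
    return k_si * (b_nt * 1e-9) / MU0

k = 0.0025 * 1.0                         # ~1% magnetite, using k = 0.0025 x %magnetite
ji = induced_magnetisation(k, 50000.0)   # mid-latitude field ~50,000 nT (assumed)
jr = 0.2 * ji                            # Precambrian-style remanence, Qn = 0.2 (assumed)
print(f"Ji = {ji:.2f} A/m, Qn = {jr / ji:.2f}")   # Ji ~ 0.10 A/m
```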

IMPORTANT: The range of susceptibility k spans several orders of magnitude more than that of density ρ.


3.4 Rock Magnetism

(Notes provided by Dr Nicola Pressling)

Theoretical Background Atomic magnetic moments
are generated by the motion of electrons spinning about
their axes of rotation. Electron orbits are arranged in
shells and pairs of electrons with opposite spins cancel
each other out. Consequently, full electron shells will
have zero net magnetic moment and incomplete shells,
with unpaired electron spins, will have a net atomic
magnetic moment.

3.4.1 Induced and Remanent Magnetisation

The magnetic moment of an atom and any interaction
with surrounding atoms defines the three types of
magnetisation:


Diamagnetism (Not important in exploration)
Materials with no magnetic moment will still respond to
the presence of an external magnetic field. An
additional torque is exerted on the electrons causing
them to precess about the applied field direction and
induce a weak, negative magnetization that is
proportional to the applied field strength. Diamagnetism
vanishes when the applied field is removed and does
not depend on temperature.


Paramagnetism (Not important in exploration) In
materials with a net magnetic moment, thermal energy
favours random orientation of the unpaired spin.
However, an external magnetic field will exert an
aligning torque and induce a weak, positive magnetic
field that is proportional to the strength of the applied
field. Paramagnetism is inversely dependent on
temperature and also vanishes in the absence of an
applied field.

Ferromagnetism (Prime carrier type and thus
important in exploration) Atoms occupy fixed
positions in crystal lattices and electrons with unpaired
spins can be exchanged between neighbouring atoms.
The process requires large amounts of energy and
produces a strong spontaneous magnetisation orders of
magnitude greater than either dia- or paramagnetism in
the same external magnetising field. During thermal
expansion, inter-atomic distances increase and the
exchange coupling becomes weaker. The Curie
temperature, Tc, is defined as the temperature at which
the distance between atoms is such that they act
independently and paramagnetically; when thermal
energy overcomes magnetic energy. Tc is different for
different minerals and can be diagnostic. Ferromagnetic
materials retain a net magnetisation even after the
removal of the applied magnetic field.



Figure 3/4: Various types of spin alignment possible
in ferromagnetic materials. (a) Ferromagnetism
sensu stricto. (b) Anti-ferromagnetism. (c)
Ferrimagnetism. (d) Canted anti-ferromagnetism. (e)
Defect ferromagnetism.


The term ferromagnetism strictly applies only when
spins are aligned perfectly in parallel. However,
alignment of the unpaired electron spins can occur in a
variety of ways within a crystal lattice and also between
lattice layers (Figure 3/4). Anti-ferromagnetism is when
spins are anti-parallel and there is zero net magnetic
moment. When spins are antiparallel but not equal in
magnitude, the resulting net magnetic moment is
referred to as ferrimagnetic. A weak magnetic moment
is also generated in canted anti-ferromagnetic
materials, where anti-ferromagnetic spins are obliquely
aligned. Additionally, lattice defects in ferromagnetic
materials can displace the magnetic structure and
cause a net magnetic defect moment as a result of any
subsequent unpaired spins.

Hysteresis Loops: Ferromagnets exhibit hysteresis: changes in the magnetisation of a ferromagnet lag behind changes in the magnetising field. Figure 3/5 shows a typical ferromagnetic hysteresis loop. The magnetisation of the
Figure 3/5: Typical ferromagnetic hysteresis loop
illustrating the changes in the magnetisation of a
ferromagnet, M, lagging behind the changes in the
magnetising field, H. Saturation magnetisation, Ms,
remanent magnetisation, Mrs, coercivity, Hc, and
coercivity of remanence, Hcr, are defined in the text.

ferromagnet, M, increases as the applied strong
magnetic field, H, increases. When all the individual
magnetic moments are aligned with the applied field,

the magnetisation is said to be saturated (Ms). A
remanent magnetisation, Mrs, remains after the removal
of the saturating applied field. Increasing the strong
magnetic field in the opposite direction progressively
remagnetises the material, cancelling out the original
remanence. The coercive force, Hc, is the reverse field
required to reduce the magnetic remanence to zero
whilst still in the presence of the reverse applied field.
The coercivity of remanence, Hcr, by comparison, is the
reverse field required to reduce the magnetic
remanence to zero after the removal of the reverse
applied field.

3.4.2 Magnetic Domain Theory

Ferromagnetic properties depend heavily on grain size and shape, defining three magnetic domain regimes (Figure 3/6).

Single Domain (SD): SD grains are the ideal magnetic remanence carriers, acting as isolated


Figure 3/6: Magnetic domains. (a) SD grain aligned
with the applied field; magnetostatic energy
accumulating on the grain surface creates a self-
demagnetising field. (b) Larger PSD grain. (c) Larger
still MD grain. (d) Domain wall with sequentially
rotating magnetic moments. (e) MD grain where the
walls have moved to enlarge the domain aligned
with the applied field, compared to the MD grain in (c).

magnetic dipoles. They are very stable, requiring a
strong magnetic field to rotate the entire magnetic
moment against the grain anisotropy. The theoretical
range of SD grain sizes is very narrow: from 0.030.1
m in equant grains and up to 1 m in elongated grains,
where magnetic anisotropy has a greater effect.

Multi-Domain (MD) As grain size increases it becomes
energetically favourable to divide a grain into a number
of uniformly magnetised regions or domains, thereby
reducing the magnetostatic (self-demagnetising) energy
at the surface of the grain by 1/N, where N is the
number of domains. Each region is separated by thin
(~0.1 µm) domain walls, within which the magnetic spins
sequentially rotate between the adjacent domain
directions. MD grains are less stable than SD grains as
it is energetically easier to move domain walls
compared to switching the magnetic moment of a whole
domain. The magnetisation of a MD grain is therefore
changed by enlarging domains with a specific magnetic
moment at the expense of any domains oppositely
magnetised. The number of domains in a grain depends
on factors such as grain size and shape, internal stress
and the number of defects present. True MD behaviour
should occur in grain sizes exceeding 15 µm.

Pseudo-Single Domain (PSD) These grains are of
intermediate size and thus contain only a limited
number of domains. PSD grains essentially act like SD
grains as the movement of domain walls is restricted by
strong interactions at the grain surface.

3.4.3 Magnetic Mineralogy

The most important and most commonly occurring
ferromagnetic minerals are iron-titanium (Fe-Ti) oxides. The various possible compositions of Fe-Ti oxides are
displayed on the ternary diagram in Figure 3/7.
Compositions varying from bottom to top have
increasing Ti content and compositions varying from left

Figure 3/7: Ternary diagram illustrating the
composition of the iron-titanium oxides. Names and
compositions of the important minerals and solid-
solution series are labelled. Increasing oxidation is
in the direction of the dashed arrows. Colour
shading represents the variation in Tc with
composition: increasing Ti content decreases Tc;
increasing oxidation increases Tc. The Curie
temperatures, Tc, of the end-member minerals of
the two solid-solution series are noted explicitly.

to right have increasing ratios of ferric (Fe³⁺) to ferrous (Fe²⁺) iron.

Titanomagnetite and titanohaematite are primary
crystallising phases in igneous rocks. In the
titanomagnetite solid-solution series (Fe3-xTixO4), Ms and Tc both depend linearly on the composition, x, decreasing with the addition of Ti⁴⁺, which is relatively larger than the Fe²⁺ and Fe³⁺ cations and has no net magnetic moment. When x > 0.8, titanomagnetites behave paramagnetically at room temperature. In the

titanohaematite solid-solution series (Fe2-xTixO3), Tc also decreases linearly with increasing Ti⁴⁺ content.
However, the mode of ferromagnetism varies
significantly with composition: when 0 < x < 0.45 the
magnetism is the canted anti-ferromagnetism of
haematite; when 0.45 < x < 0.8 the Ti and Fe are no
longer equally distributed and the magnetism is
ferrimagnetic; above x = 0.8 titanohaematite behaves
paramagnetically.

High temperature oxidation High temperature
oxidation is also known as deuteric oxidation. This
process occurs during initial cooling at temperatures
above Tc and when there is enough oxygen present in
the melt. The composition moves to the right in the
ternary diagram. However, the resulting grains are often
not homogeneous, but instead composite grains, for
example, ilmenite lamellae in a titanomagnetite host.
Movement along the dashed arrows in Figure 3/7
reflects this oxidation process: primary titanomagnetite
of intermediate composition is replaced by Fe-rich
titanomagnetite, which results in increased Tc and Ms.
In addition, the grain size of the oxidation product is
reduced by the introduction of lamellae that break up
the original grain matrix. Deuteric oxidation almost
always occurs unless the lava has been quenched or
cooled very rapidly. The extent of its effect depends on
the cooling rate and oxygen fugacity; end-member
products being rutile, pseudobrookite or haematite.

Low temperature oxidation Also known as maghaematisation, this process occurs at lower temperatures, generally < 200°C, and is caused by processes such as weathering or hydrothermal circulation. Titanomagnetite alters into titanomaghaematite, a cation-deficient product of the diffusion of Fe out of the rock. Maghaemite is metastable and inverts with time and temperature to the more compact, but chemically equivalent, structure of haematite. Inversion to haematite has the effect of decreasing Ms and increasing Tc.

3.4.4 Lava as a Recording Media

It is the small proportion of ferromagnetic minerals present in some rocks that records ambient magnetic fields. However, if the applied, ambient magnetic field is removed, the net magnetisation will eventually decay to zero according to

M(t) = M0 exp(-t/τ)          (1.1)

where M0 is the initial magnetisation, t is time and τ is the relaxation constant, defined empirically as the time for the remanence to decay to 1/e of its initial value. τ is inversely proportional to temperature and proportional to coercivity and grain volume. The blocking temperature, TB, of a grain is the point below which τ is large and the magnetisation can be considered stable. At one extreme, τ can be unstable on laboratory timescales of the order of 10² to 10³ seconds: so-called superparamagnetic (SP) grains will rapidly become random in the absence of an applied field, effectively behaving like paramagnets. At the other extreme, τ can be stable on the order of 10⁹ years and retain information on the geomagnetic field over geological timescales.
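A tiny numeric sketch of relation (1.1), with illustrative relaxation constants:

```python
# Fraction of remanence left after time t: M(t)/M0 = exp(-t/tau).
import math

def remanence_fraction(t, tau):
    """t and tau must be in the same units."""
    return math.exp(-t / tau)

print(remanence_fraction(600.0, 100.0))  # SP-like grain, tau ~ 10^2 s: ~0.25% remains
print(remanence_fraction(1e6, 1e9))      # stable grain, tau ~ 10^9 yr: ~99.9% remains
```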

The natural remanent magnetisation (NRM) carried by a
rock may be composed of several different magnetic
components acquired at different times during its
history. The primary component is that acquired at the
time of initial formation. Secondary components are any
remanence acquired post-formation. The primary NRM
component in lavas is thermal remanent magnetisation
(TRM); the magnetisation acquired as the lava cools
and solidifies, held by grains with TB < Tc.

3.4.5 Sources of Secondary Magnetisation

Secondary remanent magnetisations overprint the
primary component, masking information about the
geomagnetic field recorded instantaneously at the time
of formation. However, since secondary components are
acquired at different times and usually in different
magnetic conditions, they can often be identified and
separated. The various sources of secondary remanent
magnetisation include:

Secondary TRM Reheating to elevated temperatures
can reset the remanent magnetization in grains with low
TB. For example, secondary TRM could occur in baked
margins, close to dyke intrusions.

Chemical Remanent Magnetisation (CRM) Chemical
reactions can form new ferromagnetic minerals, or
cause phase changes in existing ones. For example,
new minerals that are the product of oxidation or
exsolution will acquire a CRM describing the magnetic
field present during their growth.

Drilling Induced Remanent Magnetisation (DIRM)
The heat and motion generated by drilling can result in
the alignment of magnetic moments parallel to the
drilling direction. This causes a diagnostic bias in the
original NRM direction.

Viscous Remanent Magnetisation (VRM) VRM is
acquired in low coercivity grains by exposure to weak
fields over long time periods at constant temperatures.
VRM is often aligned with the present day geomagnetic
field or a laboratory field, i.e. the most recent field to
which the rock was exposed.

Isothermal Remanent Magnetisation (IRM) This form
of secondary magnetisation results from short-term
exposure to strong magnetic fields at a constant
temperature, such as magnetic fields generated in the
vicinity of a lightning strike in the field, or an
electromagnet in the laboratory. IRM has the capacity to

affect all magnetic grains and completely eradicate any
primary magnetic remanence.

Detrital Remanent Magnetisation (DRM) DRM refers
to the combination of depositional and post-depositional
magnetisation processes in sedimentary rocks.
Alignment of magnetic minerals occurs during sediment
deposition, but can be modified by bioturbation and
compaction before consolidation.

3.4.6 Summary of Magnetic Properties of
Common Crustal Rocks (after Reeves, 2005
Aeromagnetic Surveys, Principles, Practice &
Interpretation, Geosoft website)

(a) Sedimentary rocks can be considered as non-
magnetic or very weakly magnetic. This is the basis for
many applications of aeromagnetic surveying in that the
interpretation of survey data assumes that magnetic
sources must lie below the base of the sedimentary
sequence. This allows rapid identification of hidden
sedimentary basins in petroleum exploration. The
thickness of the sedimentary sequence may be mapped
by systematically determining the depths of the
magnetic sources (the magnetic basement) over the
survey area. Exceptions that may cause difficulties with
this assumption are certain sedimentary iron deposits,
volcanic or pyroclastic sequences concealed within a
sedimentary sequence and dykes and sills emplaced
into the sediments.

(b) Metamorphic rocks probably make up the largest
part of the earth's magnetic crust shallower than the
Curie isotherm and have a wide range of magnetic
susceptibilities. These often combine, in practice, to
give complex patterns of magnetic anomalies over
areas of exposed metamorphic terrain. Itabiritic rocks
tend to produce the largest anomalies, followed by
metabasic bodies, whereas felsic areas of
granitic/gneissic terrain often show a plethora of low
amplitude anomalies imposed on a relatively smooth
background field.

Processes of metamorphism can be shown to radically
affect magnetite and hence the magnetic susceptibility
of rocks. Serpentinisation, for example, usually creates
substantial quantities of magnetite and hence
serpentinised ultramafic rocks commonly have high
susceptibility (Figure 3/10). However, prograde
metamorphism of serpentinised ultramafic rocks causes
substitution of Mg and Al into the magnetite, eventually
shifting the composition into the paramagnetic field at
granulite grade. Retrograde metamorphism of such rock
can produce a magnetic rock again. Other factors
include whether pressure, temperature and composition
conditions favour crystallisation of magnetite or ilmenite
in the solidification of an igneous rock and hence, for
example, the production of S-type or I-type granites.
The iron content of a sediment and the ambient redox
conditions during deposition and diagenesis can be
shown to influence the capacity of a rock to develop
secondary magnetite during metamorphism.

(c ) Fracture Zones Oxidation in fracture zones during
the weathering process commonly leads to the
destruction of magnetite which often allows such zones
to be picked out on magnetic anomaly maps as narrow
zones with markedly less magnetic variation than in the
surrounding rock. A further consideration is that the
original distribution of magnetite in a sedimentary rock
may be largely unchanged when that rock undergoes
metamorphism, in which case a 'paleolithology' may be
preserved - and detected by magnetic surveying. This is
a good example of how magnetic surveys can call
attention to features of geological significance that are
not immediately evident to the field geologist but which
can be verified in the field upon closer investigation.

(d) Igneous and plutonic rocks. Igneous rocks also
show a wide range of magnetic properties.
Homogeneous granitic plutons can be weakly magnetic, often conspicuously featureless in comparison with the magnetic signature of their surrounding rocks, but this is by no means universal. In relatively recent years,
two distinct categories of granitoids have been
recognised: the magnetite-series (ferrimagnetic) and the
ilmenite-series (paramagnetic). This classification has
important petrogenetic and metallogenic implications
and gives a new role to magnetic mapping of granites,
both in airborne surveys and in the use of susceptibility
meters on granite outcrops. Mafic plutons and lopoliths
may be highly magnetic, but examples are also
recorded where they are virtually nonmagnetic. They
generally have low Q values as a result of coarse grain
size. Remanent magnetisation can equally be very high
where pyrrhotite is present.

(e) Hypabyssal rocks. Dykes and sills of a mafic
composition often have a strong, remanent
magnetisation due to rapid cooling. On aeromagnetic
maps they often produce the clearest anomalies which
cut discordantly across all older rocks in the terrain.
Dykes and dyke swarms may often be traced for
hundreds of kilometres on aeromagnetic maps - which
are arguably the most effective means of mapping their
spatial geometry. Some dyke materials have been
shown to be intrinsically non-magnetic, but strong
magnetic anomalies can still arise from the contact
aureole of the encasing baked country rock. An
enigmatic feature of dyke anomalies is the consistent
shape of their anomaly along strike lengths of hundreds
of kilometres, often showing a consistent direction of
remanent magnetisation. Carbonatitic complexes often
produce pronounced magnetic anomalies.

(f) Banded iron formations/itabirites. Banded iron
formations can be so highly magnetic that they can be

unequivocally identified on aeromagnetic maps.
Anomalies recorded in central Brazil, for example,
exceed 50000 nT, in an area where the earth's total field
is only 23000 nT. Less magnetic examples may be
confused with mafic or ultramafic complexes.

(g) Ore bodies. Certain ore bodies can be significantly
magnetic, even though the magnetic carriers are
entirely amongst the gangue minerals. In such a case
the association with magnetic minerals may be used as
a path-finder for the ore through magnetic survey. In
general, however, the direct detection of magnetic ores
is only to be expected in the most detailed
aeromagnetic surveys since magnetic ore bodies form
such a very small part of the rocks to be expected in a
survey area.

In an important and pioneering study of the magnetic
properties of rocks, almost 30 000 specimens were
collected from northern Norway, Sweden and Finland
and measured for density, magnetic susceptibility and
NRM (Henkel, 1991). Figure 3/8 shows the frequency
distribution of magnetic susceptibility against density for
the Precambrian rocks of this area. It is seen here that
whereas density varies continuously between 2.55 and
3.10 t/m
3
, the distribution of magnetic susceptibilities is
distinctly bimodal. The cluster with the lower


Figure 3/8: Frequency distribution of 30 000
Precambrian rock samples from northern
Scandinavia tested for density and magnetic
susceptibility (after Henkel 1991).

susceptibility is essentially paramagnetic ('non-magnetic'), peaking at k = 2 × 10⁻⁴ SI, whereas the higher cluster peaks at about k = 10⁻² SI and is ferrimagnetic. The bimodal distribution appears to be
somewhat independent of major rock lithology and so
gives rise to a typically banded and complex pattern of
magnetic anomalies over the Fenno-Scandian shield.
This may be an important factor in the success of
magnetic surveys in tracing structure in metamorphic
areas generally, but does not encourage the
identification of specific anomalies with lithological units.

It should be noted from Figure 3/8 that, as basicity (and
therefore density) increases within both clusters, there
is a slight tendency for magnetic susceptibility also to
increase. However, many felsic rocks are just as
magnetic as the average for mafic rocks - and some
very mafic rocks in the lower susceptibility cluster are
effectively non-magnetic.

Figure 3/9 shows the relationship between magnetic
susceptibility and the Koenigsberger ratio, Q, determined in the same study. The simplifying assumption, often
forced upon the aeromagnetic interpreter, that
magnetisation is entirely induced (and therefore in the
direction of the present-day field) gains some support
from this study where the average Q for the
Scandinavian rocks is only 0.2.

The possible effects of metamorphism on the magnetic
properties of some rocks are illustrated in Figure 3/10.
The form of the figure is the same as Figure 3/8 and
attempts to divide igneous and metamorphic rocks
according to their density and susceptibility. Two
processes are illustrated. First, the serpentinisation of
olivine turns an essentially non-magnetic but ultramafic
rock into a very strongly magnetic one; serpentinites are
among the most magnetic rock types commonly
encountered. Second, the destruction of magnetite
through oxidation to maghemite can convert a rather
highly magnetic rock into a much less magnetic one.

Figure 3/9: Frequency distribution of 30 000
Precambrian rock samples from northern
Scandinavia: Koenigsberger ratio versus magnetic
susceptibility (after Henkel 1991).



Figure 3/10: Some effects of metamorphism on rock magnetism. In blue, the serpentinisation of olivine turns an ultramafic but non-magnetic rock into a highly magnetic one. In red, the oxidation of magnetite to maghemite reduces magnetic susceptibility by two orders of magnitude.


3.5 Methods of measuring magnetic
properties


3.5.1 In the laboratory and at outcrops.

A 'susceptibility meter' may be used on hand specimens
or drill-cores to measure magnetic susceptibility (Clark
and Emerson, 1991). This apparatus may also be used
in the field to make measurements on specimens in situ
at outcrops. Owing to the wide variations in magnetic
properties over short distances, even within one rock
unit, such measurements tend to be of limited
quantitative value unless extremely large numbers of
measurements are made in a systematic way.

3.5.2 In a drillhole.

Susceptibility logging and magnetometer profiling are
both possible within a drillhole and may provide useful
information on the magnetic parameters of the rocks
penetrated by the hole.


3.5.3 From aeromagnetic survey interpretation.

The results of aeromagnetic surveys may be 'inverted'
in several ways to provide quantitative estimates of
magnetic susceptibilities, both in terms of 'pseudo-
susceptibility maps' of exposed metamorphic and
igneous terrain, and in terms of the susceptibility or
magnetisation of a specific magnetic body below the
ground surface (see Section 27.4).
3.5.4 Paleomagnetic and rock magnetic measurements.

The role of paleomagnetic observations has been
important in the unravelling of earth history. Methods
depend on the collection of oriented specimens - often
short cores drilled on outcrops with a portable drill.
Studies concentrate on the direction as well as the
magnitude of the NRM. Progressive removal of the
NRM by either AC demagnetisation or by heating to
progressively higher temperatures (thermal
demagnetisation) can serve to investigate the
metamorphic history of the rock and the direction of the
geomagnetic field at various stages of this history.
Components encountered may vary from a soft, viscous
component (VRM) oriented in the direction of the
present day field, to a 'hard' component acquired at the
time of cooling which can only be destroyed as the
Curie point is passed. Under favourable circumstances
other paleo-pole directions may be preserved from
intermediate metamorphic episodes. Accurate
radiometric dating of the same specimens vastly
increases the value of such observations to
understanding the geologic history of the site. The
reader is referred to McElhinny and McFadden (2000)
for further information on these methods.


3.6 References / Further Reading

Clark, D.A., 1997. Magnetic petrophysics and
magnetic petrology: aids to geological
interpretation of magnetic surveys. AGSO Journal of
Australian Geology & Geophysics, 17(2), 83-103.

Clark, D.A., and Emerson, D.W., 1991. Notes on
rock magnetisation in applied geophysical studies.
Exploration Geophysics, 22(4), 547-555.

Hanneson, J.E., 2003. On the use of magnetics and
gravity to discriminate between gabbro and iron-rich
ore-forming systems. ASEG 16th Adelaide
Extended Abstracts.

Henkel, H., 1991. Petrophysical properties (density
and magnetization) of rocks from the northern part
of the Baltic Shield. Tectonophysics, 192, 1-19.


Butler, R., 1992. Paleomagnetism. Blackwell
Scientific Publications.

Dunlop, D. and Özdemir, Ö., 1997. Rock
Magnetism: Fundamentals and Frontiers. Cambridge
University Press.

McElhinny, M.W., and McFadden, P.L., 2000.
Paleomagnetism: Continents and Oceans.
Academic Press, 386 pp.

Tauxe, L., 2002. Paleomagnetic Principles and
Practice. Kluwer Academic Publishers.





GRAVITY

Section 4 Gravity Anomalies
Section 5 GPS in Gravity Surveys (Land, Marine & Air)
Section 6 Land Gravity Data: Acquisition and Processing
Section 7 Marine Gravity Data: Acquisition & Processing
Section 8 Airborne Gravity Data: Acquisition & Processing
Section 9 Gravity Gradiometer Data
Section 10 Satellite Altimeter Gravity Data: Acquisition & Processing
Section 11 Global Gravity Data & Models
Section 12 Advances in Gravity Survey Resolution


SECTION 4: GRAVITY ANOMALIES



4.1 Gravitational Potential Energy, U

Potential Energy, U. The acceleration due to gravity, g,
is the first vertical derivative (gradient) of the potential energy:

g = - dU/dr

where g is the acceleration due to gravity, U the potential
energy and r the radial distance (for more details see
Section 2).

Figure 4/1: Potential energy surfaces and gravity
response

Concept of Equipotential Surface: This is a
continuous surface that is everywhere perpendicular to
lines of force. No work is done against the field when
moving along such a surface. Mean sea level is an
equipotential surface with respect to gravity. For the
Earth there is an infinite set of equipotential surfaces
surrounding the Earth, with mean sea level being but
one. As you go away from the Earth the equipotential
surface becomes smoother due to divergence of the
gravitational field with height (see Figure 2/7).

The equipotential surface is a smooth surface. If a ball
were placed on this surface (assuming the surface to be a
physical surface) the ball would stay where it was put. If
the surface is in space around the Earth and the ball is
pushed, then the ball will continue to move at a constant
velocity along this surface without stopping. This is how
satellites move around the Earth, since above ~500 km
altitude there is little or no atmospheric drag to slow
down the satellite. There are an infinite number of
equipotential surfaces (4 shown in Figure 4/1) on which
a satellite can travel; each higher surface is slightly
smoother than its neighbour. The surface becomes
smoother due to the divergence of the gravity field (see
Section 2, Figure 2/7). The geoid is an
equipotential surface (height measured in metres) that
represents the mean sea level surface, or mean
ellipsoidal shape, of the Earth.

Note: the gravity field on an equipotential surface is not
constant, since g is the gradient perpendicular to
the surface, which varies spatially (see Figure 4/1). A
high density structure on the Earth's surface will
generate a positive deflection of the equipotential
surface as well as of its derived gravity (the gradient
across the equipotential surface).

The Geoid: This is the sea-level equipotential surface,
which can be measured in marine areas by satellite
radar altimeters (see Section 11) after correcting the
surface heights for transient effects (tides, currents,
wind, temperature variations). An example of the geoid
is shown in Figure 4/2 for an area centred on the Gulf of
Thailand.



Figure 4/2: Geoid surface over SE Asia. Resolution
over marine areas is good, from satellite radar altimetry,
whereas for land areas the geoid is derived from the
much sparser coverage of gravity measurements.

The resolution of the equipotential surface is being
improved all the time by new satellite data (GOCE,
see Section 11) and by new and improved coverage of
gravity data.

4.2 Gravity (g) dependence

The observed pull of gravity, g, is a function of

g = function(ρ, r, V, lat, ht, topo, time)

Geology controls:
1 & 2. ρ, r: density of the subsurface mass distribution and its distance from the point of measurement
3. V: rock volume of the mass distribution

Other controls:
4. lat: latitude - position on the Earth's surface
5. ht: height above or below sea level, which is used as the reference height
6. topo: topography surrounding the measurement site
7. time: time-variable effects, e.g. lunar tides

If we did not correct for items 4 - 7 then we would be
unable to use gravity data to investigate items 1 - 3,
since the gravity effects of items 4 - 7 are generally
much larger than those of items 1 - 3.

There are various ways of correcting (reducing or
processing) gravity data for items 4 - 7, and the resulting
variations in gravity are called:

Equipotential, U (see Section 4.1)
Free air Anomaly
Bouguer Anomaly
Isostatic Anomaly
Decompensative Anomaly

These anomaly types are inter-related. See below and
Sections 6, 7 & 8 for more details of each anomaly
type.


4.3 Free Air Anomaly (FAA)

FAA = gobs - gth + Free air correction (FAC)

where

gobs = vertical component of gravity measured with a
gravity meter

gth = theoretical or normal value of gravity at sea level
at the measuring site (sometimes called the latitude
correction). This correction removes the major
component of gravity, leaving only local effects (see
later)

FAC = correction for height above sea level
(0.3086 mGal per metre); for airborne gravity a more
accurate correction is used, see Section 8.4.6

Since gravity decreases with the inverse square of
distance from the centre of the Earth, and gobs is
measured at various heights above sea level along a
profile, there is a need to reduce the data to a common
reference surface (datum).

Height Reference Datum: normally taken as mean
sea level, but can be any defined height, e.g. the lowest
point in the survey area.

FAA = gobs - gth + 0.3086h mGal

where h is measured in metres.

There are two ways of viewing the data processing. The first
is to assume, wrongly, that you are moving all
measurements to the sea level datum. The correct way
to view the correction is that it is being applied at the
observation location.

FAA = gobs - gth + 0.3086h    (moving gobs to sea level: the wrong way to consider this correction)

FAA = gobs - (gth - 0.3086h)    (correcting at gobs: the correct way to consider this correction)

Both equations are the same, just with different emphasis.
To actually move gobs to another height requires upward or
downward continuation of the field.

In Fig 4/3 the Free air correction appears to over-correct
gobs. This is because it does not take into account
the gravity effect of the rock mass between the
measurement site and the sea level datum. Thus the free
air anomaly is not normally used for land-based gravity
studies. Its main use is at sea (see later).

Gravity Reference Datum: since gravity meters are
relative measuring instruments, their values need to be
tied and adjusted to an international network of known
gravity values called IGSN71 (see later).

Assumptions made to generate the FAA:
- no assumptions are made
- the FAA is strongly influenced by topography/bathymetry
- at long wavelengths the FAA varies about zero due to
isostatic processes

Implications of the FAA for exploration:
- not generally used for land based surveys
- more generally used in marine surveys, where the water
layer/bathymetry is used as the first layer of the model
(this layer can be 2D or 3D)
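
To make the reduction concrete, the FAA calculation might be sketched as follows (a minimal sketch, not survey-grade processing: it assumes the closed-form Somigliana normal gravity formula for the WGS84 ellipsoid and ignores tidal, drift and atmospheric terms; the function names are illustrative only):

```python
import numpy as np

def normal_gravity_wgs84(lat_deg):
    """Normal gravity gth on the WGS84 ellipsoid (Somigliana formula), in mGal."""
    s2 = np.sin(np.radians(lat_deg)) ** 2
    g_ms2 = 9.7803253359 * (1.0 + 0.00193185265241 * s2) / np.sqrt(1.0 - 0.00669437999013 * s2)
    return g_ms2 * 1e5  # convert m/s^2 to mGal

def free_air_anomaly(g_obs_mgal, lat_deg, h_m):
    """FAA = gobs - (gth - 0.3086 h): the correction is applied to normal
    gravity at the observation height h (metres), not to gobs itself."""
    return g_obs_mgal - (normal_gravity_wgs84(lat_deg) - 0.3086 * h_m)
```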

4.4 Bouguer Anomaly (BA)

BA = gobs - gth + 0.3086h - Bouguer Correction

Thus the BA equation is simply the FAA with an
additional correction called the Bouguer Correction:

i.e. BA = FAA - Bouguer Correction (BC)


Figure 4/3: Relations between topography, observed gravity, Free air anomaly, latitude correction and Bouguer
anomaly. Note gobs shows a negative correlation with topography whereas the FAA gives a positive correlation with
topography.

The Bouguer Correction corrects for the rock mass
between the measuring site and the height datum (sea
level). The correction assumes the rock to be a flat
infinite slab of thickness h (metres) and constant density
ρ (g/cc). This is not strictly true since the top of the
infinite slab is the topography.

BC = 2πGρh = 0.04191ρh mGal

where G = gravitational constant = 6.672 x 10⁻¹¹ N m² kg⁻²

In Fig 4/3 the Bouguer Correction has successfully
removed the main effects of topographic correlation.
The Bouguer anomaly where no terrain
correction is applied is called the Simple Bouguer
Anomaly (SBA), where

SBA = gobs - gth + 0.3086h - 0.04191ρh
SBA = gobs - (gth - 0.3086h + 0.04191ρh)

or, when terrain corrections are applied, it is called the
Complete Bouguer Anomaly (CBA), where

CBA = Simple Bouguer Anomaly + Terrain Correction

Always define COMPLETE (e.g. corrected out to 22 km).
The Terrain Correction, TC, is the correction made to
the Bouguer Correction since the top of the flat infinite
slab is not flat but has topography (see Section 5.5).

BA = gobs - gth + 0.3086h - 0.04191ρh + TC
BA = gobs - (gth - 0.3086h + 0.04191ρh - TC)
The Bouguer anomaly is extensively used for both land
and marine studies.

Assumptions made:
- the BC assumes the rock mass between the
measurement point and the height datum used
(normally sea level) to be of constant density (referred
to as the reduction density)
- the BC assumes a flat Earth model; 3D topographic
corrections, combining the BC and terrain correction,
are now common (see Sections 7 & 8)
- there are large regional variations in anomaly amplitude
between oceanic and continental areas due to crustal
thickness variations and density structure (see Fig. 7/1)

Implications:
- better imaging of sub-surface geology and structure
than the FAA at all depths, for marine and land surveys
- the long wavelength field closely correlates with crustal
thickness
- can be more easily interpreted than the FAA
- difficult to use in regions of major crustal thickness
variation, e.g. continental edges and subduction-related
areas; in these high Bouguer gradient areas the
Isostatic residual anomaly is used for interpretation
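
Extending the free air sketch in Section 4.3, the simple and complete Bouguer reductions might look like this (again illustrative only; the reduction density of 2.67 g/cc and any terrain correction are supplied by the user):

```python
def bouguer_anomaly(g_obs_mgal, lat_deg, h_m, rho=2.67, terrain_corr_mgal=0.0):
    """SBA = FAA - 0.04191*rho*h (infinite slab; rho in g/cc, h in metres).
    Adding a terrain correction (mGal) gives the Complete Bouguer Anomaly."""
    sba = free_air_anomaly(g_obs_mgal, lat_deg, h_m) - 0.04191 * rho * h_m
    return sba + terrain_corr_mgal
```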

4.5 Isostatic Residual Anomaly
Isostatic Residual Anomaly = BA - Isostatic Correction

Classical studies of isostasy considered the depth to the
Moho (base of crust) to depend simply on the mass of
the crust, such that at a certain depth below the Moho
the weight of simple columns of crust and mantle is
constant; e.g. in mountain belts the weight of the crust
will cause the crust to subside until it is in isostatic
equilibrium (similar to blocks of wood floating on water).
This implies that the higher the topography, the thicker the
crust and the deeper the Moho (the roots of mountains). In
reality the crust is not a series of independent columns
(blocks) of rock but a continuous structure, which can
support mass excesses or deficiencies on or below the
crust. Thus loads of small dimension (less than ~200
km) can be out of local isostatic equilibrium. However,
for loads with dimensions in excess of about 200 km
(the distance depends on the strength of the
crust/lithosphere) the crustal loads will be totally
isostatically compensated by flexure.

4.5.1 Airy-Heiskanen Isostatic Model

For land areas the computation of Moho depth (i.e.
base of crust) is

t = hρ/Δρ + T

where
t = depth below sea level to the crust/mantle boundary
h = elevation of the station
ρ = crustal density of the topography
Δρ = density contrast across the Moho
T = crustal thickness at sea level

(Program AIRYROOT, a USGS program, can be used
to determine depths to the base of the crust using
topography; it then determines the 3D effect of the
resulting Moho model out to a certain radius.)



Figure 4/4a Airy-Heiskanen Model


Figure 4/4b: Another way of visualising the Airy-Heiskanen
model: as a series of independent blocks
of wood floating in water, i.e. the crust is assumed
to be very weak.

Only h is known; ρ is often taken to be 2.67 g/cc
(Chapin, 1996 determined a better density of 2.6 g/cc
for the Andes), Δρ is generally a highly variable parameter
ranging from 0.2 to 0.6 g/cc (Chapin, 1996 determined
0.45 g/cc for the Andes), and T was originally assumed, and
still holds up, to be 30 km.
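
A one-line sketch of the Airy-Heiskanen root computation for a land station, using the parameter values just quoted (an illustration of the formula only, not the USGS AIRYROOT program):

```python
def airy_moho_depth_km(h_m, rho=2.67, delta_rho=0.4, T_km=30.0):
    """t = h*rho/delta_rho + T: Moho depth (km) below a land station of
    elevation h (metres); rho and delta_rho in g/cc, T in km."""
    return T_km + (h_m / 1000.0) * rho / delta_rho

# e.g. a 3000 m plateau: t = 30 + 3*2.67/0.4, i.e. ~50 km, a ~20 km root.
```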

4.5.2 Pratt Isostasy

Figure 4/5: The Pratt Isostatic model

where

ρc hc + ρm hm = constant

i.e. the total mass of each column of rock down to the
common compensation depth is the same.

4.5.3 Infinite Strength Crust/Lithosphere





Figure 4/6: Infinite Strength Crust - no flexure

4.5.4 Crustal/Lithosphere Flexure



Figure 4/7a: Crustal Flexure under load

In oil exploration we are not generally interested in
these large wavelength structures, so it is possible to
remove all anomalies with wavelengths greater than
about 300 km (this can be done by frequency filtering). The
Isostatic Correction can be considered to be the long
wavelength Bouguer anomaly component.

Isostatic Residual Anomaly = Bouguer Anomaly -
Isostatic Correction



Figure 4/7b: Topography/bathymetry of Hawaii


Figure 4/7c: Satellite derived Free air gravity
showing crustal flexure.

The isostatic correction for the Airy case (no crustal
strength) can be determined from the grid of
topographic heights by assuming the topography
reflects the relief of the Moho. Good results can be
obtained by considering topography effects out to about
400 km. The isostatic correction at a grid node is thus
the 3D gravity attraction of the regional effect of the
density contrast across the Moho based on an AIRY-
HEISKANEN model out to 400 km.

What remains after applying the isostatic correction are
the small wavelength anomalies which are due to near
surface geological structures such as sedimentary
basins.

Warning some large sedimentary basins (Fig 4/8)
with widths in excess of 150 km will be partly or
nearly wholly isostatically compensated so that part
of the isostatic correction will be due to the
sedimentary basin itself. This will mean the
Isostatic Anomaly will be less than would be
normally expected which can lead to
underestimating the sedimentary thickness since
part of the sedimentary anomaly has been removed
by the isostatic correction (a classic case of the
baby being thrown out with the bath water!)

When we talk about isostatic anomalies we generally
mean the isostatic residual anomaly i.e. that part of
the gravity field not locally compensated. Crustal
structures are in fact normally compensated at much
longer wavelengths due to lithosphere flexure.


Figure 4/8: Isostatic Residual Anomaly over a large
basin. Since the topography is flat, the shape of the
Isostatic Residual anomaly will be similar to the
Bouguer anomaly, so the observed Isostatic residual
anomaly will be the thick black line and will totally
underestimate the true gravity response of the
sediments (thin black line).



Figure 4/9: Isostatic Compensation can be
considered at some depth in the mantle where the
two columns of rock shown above have the same
overall mass.

Isostatic Compensation: A point source at Moho
depth will give rise to an anomaly at the surface of
wavelength ~70 km or more, thus BAs with wavelengths
greater than ~70 km could result from depth variations
of the Moho. At the Moho there is a large positive
density contrast of 0.2 < Δρ < 0.6 g/cc (the crust has lower
density than the underlying mantle). In an oil exploration
environment, where one is focused on the sedimentary
section of the upper crust, an ideal situation would be to
isolate the sedimentary anomaly by removal of all other
anomaly effects, particularly the response of the Moho.

An example of an ideal situation is shown in Fig. 4/10.
The profile crosses the Viking Graben of the North Sea
at 59°50'N and coincides with a BIRPS deep seismic
reflection line, so there is good knowledge of both the
sedimentary layers and the variable depth to the Moho
(bottom panel of Fig 4/10). Here the crust has been
stretched by plate tectonic forces, resulting in the brittle
upper crust fracturing and the lower crust deforming by
ductile flow. Crustal stretching results in

- crustal thinning (necking)
- surface subsidence (due to isostasy)

thus creating a depression into which water and
sediments have been deposited. The weight of water
and sediments will also aid the subsidence, i.e. tectonic
and sedimentary subsidence.


Figure 4/10: Isostatic compensation of the Viking
Graben based on a seismic model of the sedimentary
basin and Moho. If bathymetry had been used to
estimate the Moho variation, then the Isostatic residual
anomaly would be very similar to the Bouguer
anomaly, since the variation in the water depth of the
North Sea is very small. Thus the anomaly due to
the sediments (red profile, top panel) is
significantly underestimated.

The gravity effect of the raised Moho generates a major
positive gravity anomaly (centre panel of Fig 4/10)
which, when removed from the Bouguer anomaly, leaves
a perfect residual negative anomaly, or isostatic residual
anomaly. Generally we do not know the Moho's depth,
its structure or its density contrast. However, we do have
the surface elevation (bathymetry) from which to
estimate it. We can thus treat the Moho as an
equivalent layer, and assume its average depth and
density contrast. Sea level crustal thickness is generally
assumed to be 30 km and the crust/mantle density
contrast to be 0.4 g/cc. If you look back at the simple
Airy-Heiskanen model, it considers the upper crust to
have a uniform density. For the Viking Graben, as with all
sedimentary basins, this is not the case. The low-density
sediments both dampen out the bathymetric effects
(making the water depth shallower) and generate a negative
gravity anomaly. In addition, this negative sedimentary
anomaly is symmetrically superimposed on the positive
anomaly due to the elevated Moho, resulting in the two
anomalies tending to cancel each other out. This is not
surprising, since it is the thinned crust and resulting
subsidence that forms the sedimentary
basin. If we compress all the water and sediments in
the North Sea basin to the same density as crystalline
crust, then the basin relief will closely correlate with the
shape of the Moho (assuming Te = 0).

Assumptions
- The same assumptions are used as for BA
- The crust is of uniform density (this is not the case
over sedimentary basins)
- The long wavelength part of the BA (the isostatic
correction), if it can be resolved, particularly over a large
sedimentary basin, closely relates to the gravity response of
the Moho. Removal of the isostatic correction from the
BA leaves (or isolates) the upper crustal or near surface
geology not locally compensated at the Moho. Crustal
masses are compensated either locally or regionally.
Regional compensation normally occurs at
wavelengths greater than about 150 km. At shorter
wavelengths the crust is strong enough to support mass
excesses or deficiencies, thus generating isostatic
residual anomalies. Plate tectonic dynamic processes
at constructive and destructive plate edges can also
contribute to a region not being in isostatic
equilibrium.

4.5.5 Closer Look at Isostatic Residual
Anomalies

Because of this non-uniform density problem, estimates
of the isostatic correction using the known bathymetry
will generally tend to underestimate the gravity effects of
the raised Moho.

4.5.6 Varied approaches to calculating the
Isostatic Correction

Method 1, Quick and Dirty: The calculation employs
the Airy hypothesis (Garland, 1965, p. 121; Blakely, 1995,
p. 148), assuming that crustal rocks and mantle rocks
have constant (but different) densities and that
equilibrium is attained by crustal rocks floating on a fluid
mantle. The procedure below then follows Archimedes'
principle.

a. Assume a depth to the Mohorovicic Discontinuity
(i.e. crust/mantle interface, or Moho) of 30 km for sea
level topography and a density change across the Moho
of 0.4 g/cc.
b. Produce a grid of topography (ht) and bathymetry
(hb) in metres.
c. Calculate the undulations del(hm) in the Moho depth
hm.
For topography: del(hm) = ht * 2.20/0.4 metres (assumes
near surface rocks are 2.20 g/cc and the Airy model for
isostasy).
For bathymetry: del(hm) = -hb * 1.20/0.4 metres
(assumes water density is 1 g/cc).
Model the gravity effect (at a Moho depth of 30 km) of the
Moho variations using the Bouguer slab approximation:
g30 = -0.04191*0.4*del(hm)
d. Upward continue the gravity effect by 30 km. This
produces the isostatic regional gravity field at the
surface.
e. Subtract the isostatic regional from the Bouguer
gravity to produce the isostatic residual gravity field.

This method works best when the variation in Moho
depth is not large with respect to the crustal thickness.
Upward continuation of the anomaly from 30 km depth
to the surface smears out the gravity effect, i.e. a point
source at 30 km depth has a gravity anomaly
wavelength at the surface of about 70 km.
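
Steps a-e might be sketched as below for a regular grid (an illustrative implementation under the stated assumptions: grid spacing dx_km, densities of 2.20 and 1.0 g/cc, a Moho contrast of 0.4 g/cc, and the standard exp(-kz) wavenumber-domain upward continuation; no edge padding is applied, which a production code would need):

```python
import numpy as np

def isostatic_regional_mgal(topo_m, dx_km, T_km=30.0, drho=0.4):
    """'Quick and dirty' Airy isostatic regional at the surface, in mGal.
    topo_m: 2D grid of heights in metres (negative values = bathymetry)."""
    # c. Moho undulations: roots under topography, anti-roots under oceans
    dhm = np.where(topo_m >= 0.0,
                   topo_m * 2.20 / drho,           # land: rocks at 2.20 g/cc
                   topo_m * (2.20 - 1.0) / drho)   # sea: water at 1.0 g/cc
    g30 = -0.04191 * drho * dhm   # Bouguer-slab effect at 30 km depth
    # d. upward continue by T_km to get the regional field at the surface
    ny, nx = g30.shape
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx_km)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx_km)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(g30) * np.exp(-k * T_km)))

# e. isostatic residual = Bouguer grid - isostatic_regional_mgal(topo, dx_km)
```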

Method 2, Exact calculation: The same method as above,
with the same assumptions, but with a more exact
calculation of the gravity effect of the Moho variations.
The calculation involves determining the gravity effect of the
Moho variations out to 400 km from each surface grid
node using the grid node heights; i.e. for a 1 km grid
there are < 400² Moho depth variation elements to
be calculated per surface grid node. To speed up the
calculations the line mass formula is used.



Figure 4/11: Line mass calculation

Δg = G Δρ A [ 1/√(r² + h1²) - 1/√(r² + h2²) ]

where
A = area of the grid cell
h1 = depth to the top of the line mass
h2 = depth to the bottom of the line mass
r = horizontal distance from the calculation point to the line mass
Δρ = density contrast across the Moho
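
A sketch of the line mass formula in code (SI inputs, output in mGal; the conversion factor and the value of G follow the notes above):

```python
import numpy as np

G = 6.672e-11  # gravitational constant used in these notes, N m^2 kg^-2

def line_mass_effect_mgal(r_m, h1_m, h2_m, cell_area_m2, drho_kg_m3):
    """Vertical gravity effect (mGal) at horizontal distance r of a vertical
    line mass representing a grid cell of area A, between depths h1 and h2."""
    dg = G * drho_kg_m3 * cell_area_m2 * (1.0 / np.hypot(r_m, h1_m)
                                          - 1.0 / np.hypot(r_m, h2_m))
    return dg * 1e5  # m/s^2 -> mGal
```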

Alternatively, the exact gravity effect at 30 km depth is
determined at each grid node for each Moho variation
(del(hm)), rather than using the simple Bouguer slab
approximation, and the field is then upward continued by
30 km. The resulting isostatic correction will vary, subject to
the method used, by up to 10%; but remember your
correction is an estimate and assumes Airy isostasy,
which is unlikely to be perfect since the crust has finite
strength and will tend to flex under load.

Method 3, Filter Method: Determine the isostatic
response function, or coherence, to help shape a filter
to high-pass the Bouguer anomaly (giving the isostatic
residual anomaly). This method also helps to determine
the lithosphere strength of the region. However, this can
only be done over large areas, e.g. 500 km x 500 km or
greater (see Fig 4/12).

Figure 4/12: Coherence of gravity and topography
over the East European Craton.

For wavelengths up to ~100 km (wavenumber of 0.01)
there is little to no correlation between gravity and
topography. For wavelengths greater than ~150 km
(wavenumber of ~0.007) the correlation begins to be
established. If the correlation starts at longer
wavelengths then this indicates that the lithosphere is
stronger and has a higher value of Te (where Te is the
effective elastic thickness). In Fig. 4/12 the theoretical
coherence curves for Te = 5 to Te = 160 are shown.

4.6 Decompensative Anomaly

The Isostatic residual anomaly attempts to
remove the gravitational effects of both the
topography (Bouguer component) and the Moho,
leaving only gravity effects due to the upper crust.
However, the simplicity of the Airy model
assumptions:
- constant crustal density,
- constant lateral density within the crust,
- constant spatial density contrast across
the Moho, and
- an effective elastic strength Te = 0 of the
crust/lithosphere
results in the isostatic residual anomaly being less
than perfect for mapping the gravity response of the
upper crust. The Decompensative anomaly partly
addresses this problem.



Figure 4/13: 40 km upward continued Isostatic
residual anomaly for Western Australia



Figure 4/14: Decompensative Anomaly

The Decompensative Anomaly, originally defined by
Cordell et al (1991), attempts to remove the
anomalies associated with sources deep in the
lithosphere. The technique makes use of the
result of Jacobsen (1987) that the optimum filter,
in the least squares sense, for separating two
layers is upward continuation.

Decompensative = (Isostatic Residual) - (40 km
upward continued Isostatic Residual)

To generate the Decompensative Anomaly, the Isostatic
Residual Anomaly is upward continued by 40 km. This
regional field estimates sources located deeper than
40 km. The Decompensative Anomaly is then
determined by simply subtracting the upward continued
field from the Isostatic Residual Anomaly.
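
A sketch of this two-step construction, reusing the wavenumber-domain upward continuation from Section 4.5.6 (a regular grid of spacing dx_km is assumed; edge padding is omitted for brevity):

```python
import numpy as np

def decompensative_mgal(iso_residual, dx_km, z_km=40.0):
    """Decompensative anomaly = isostatic residual minus its z_km upward
    continuation (Cordell et al., 1991; separation filter of Jacobsen, 1987)."""
    ny, nx = iso_residual.shape
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx_km)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx_km)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    regional = np.real(np.fft.ifft2(np.fft.fft2(iso_residual) * np.exp(-k * z_km)))
    return iso_residual - regional
```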

The ambiguity inherent in potential field
interpretation means that it is impossible to
guarantee that the upward continued field contains
only signal from the deeper sources. Nevertheless,
the anomalies displayed in the
Decompensative Anomaly map will generally result
from structures located within the upper crust, and
the map will better reflect upper crustal geological
structure.

Where there is a large sedimentary basin, it will
generally generate a long wavelength negative
gravity response, which will tend to be removed
from the Decompensative anomaly by the 40 km upward
continuation filter. Thus, in any analysis of
the decompensative anomaly, the upward
continued field should always be evaluated as well,
since it will contain anomalies that reflect
sedimentary basins, flexure, plate edges and
deeper structures.
These are all important to a full understanding of a
region.

Examples of the Decompensative Anomaly,
from Lockwood (2004), are shown for Western
Australia in Figures 4/13 and 4/14.


4.7 Gravity Effects at Constructive
Plate Margins


Mid-oceanic ridges (or constructive plate margins) are
not hydrocarbon target areas. The Free air gravity is
generally close to 0 mGal, with a long wavelength
positive free air anomaly of ~+20 mGal centred over
the ridge crest. This is probably due to dynamic forces
associated with upwelling upper mantle.


Figure 4/15: Mid Atlantic Ridge (after Bott, 1971, The
Interior of the Earth, published by Arnold)




Figure 4/16: Free air anomaly map of the South
Atlantic generated from satellite altimeter data. The
features are mainly the result of the bathymetry, since
the sea bed is a major density boundary.


The Bouguer anomaly is generally very positive due to
the water in the ocean being replaced by rock at
2.67 g/cc and to the shallow Moho compared to
continental areas. Over the ocean ridge axis the
Bouguer anomaly exhibits a negative anomaly
compared to the rest of the oceanic areas. This is due to
hot, low density upper mantle at shallow depth beneath
the ridge crest, as well as to the bathymetry being
shallower. Figure 4/16 shows the nature of the free air
gravity field over the South Atlantic as deduced from
satellite altimeter measurements.

4.8 Gravity Effects at Continental
Margins

Within the plate structure of the Earth there are major
changes in crustal thickness from normal to thinned
continental crust, to even thinner and denser oceanic
crust (Fig 4/9). Over these transitions (continental
margins) the Free air, Bouguer and Isostatic anomalies
show very distinctive anomaly changes. Now that oil
companies are working in ever-deeper water, close to or
on the continental edges, an understanding of these
anomalies is very important. The isostatic residual
anomaly essentially removes the large
amplitude Free air and Bouguer anomaly effects (see
Figure 4/17). Thus the higher frequency gravity effects of
shallow geological structure within the sedimentary
section and top basement can be better imaged and
interpreted.

The Free air edge anomaly is clearly seen in Figures
4/16 and 4/18 and is similar in character to that shown
in Fig 4/17. The positive edge anomaly (red) flanks the
outer part of the continental margin, with a negative
anomaly on the seaward side. Why, then, is there a positive
anomaly over the Amazon cone and Niger delta, in
areas of low density sediment accumulation?


Figure 4/17: Model of Bouguer, Free air and Isostatic
anomalies over an ideal structure with 100% Airy
compensation. Note: the large edge effects of the Free air
anomaly and the large gradients of the Bouguer
anomaly are removed when viewing gravity in the
form of the Isostatic residual anomaly. In reality the crust
has strength, so the Isostatic correction does not
completely remove the edge effect. (After Bott, 1971)

Figure 4/19 shows a simplistic profile AB across the
Amazon Cone (Fig 4/18). As sediments are deposited
they displace water; low density water is thus replaced
by higher density sediments and a positive
gravity anomaly results. The weight of the sediments
will force the crust to flex downwards. How rapidly
this happens depends on the strength (Te) of the crust.
Normally the sediments are deposited more rapidly than
isostatic processes can act, thus there is a positive
anomaly. As regional flexure occurs, the areas around
the delta will exhibit negative anomalies (see around the
Niger delta in Fig 4/18). This means that in areas of
negative anomaly the flexure has locally over-compensated
for the amount of sediment present.


The nature (amplitude and shape) of the Free air
anomaly depends on how strong (cold) or weak (hot)
the crust/mantle is beneath the margin. In some
locations single and multiple positive Free air anomalies
are found. Continental margin anomalies are
complicated due to the complex interplay of the
sedimentary and structural evolution and the thermal
processes present.

Figure 4/18: Free air gravity anomalies over the continental margins of Equatorial Africa and northern Brazil


Figure 4/19: Sedimentary model of the Amazon cone
(for location see profile AB in Fig. 4/18). Sediment
replaces water to generate a mass excess.

Strong Crust: The following model (after Prof A B
Watts, Univ. of Oxford) uses an effective elastic
thickness of Te = 25, which is about normal for many
continental margins.

Figure 4/20: Sedimentary loading of a continental
margin having a high Te of 25. Note the maximum of
the positive anomaly is over the thickest sediments.

The Equatorial margins shown in Fig 4/18 show these
types of anomalies. A strong margin could be the result
of cooler crust, which could have implications for
maturation processes acting along these margins.

Weak Crust: The next example, Fig 4/21, is of the same
initial continental margin but now with weak crust (Te =
0). This is extremely weak and can be considered a
limiting case. An example of a margin exhibiting
features relating to a weak or thermally hot crust is
shown in Fig. 4/22.



Figure 4/21: Sedimentary loading of a
continental margin having a low Te of 0 (i.e. the Airy
model). Note the thickest sediments are located
partly over and between the maxima of the positive
anomalies, which are now significantly smaller in
amplitude than the positive anomaly over the
stronger crust (Fig. 4/20).



Figure 4/22: The Free air anomaly of the West
African continental margin (Niger delta to Walvis
Ridge) showing a significant change in anomaly
character from that seen in Fig 4/18.

Figure 4/23 summarises the gravity response effects of
crustal thinning and sedimentary loading for a range of
Te values. The range in total gravity response is shown
by the blue profiles (lower panel) for the range of crustal
Te values.


Figure 4/23: The components of anomalies making
up the final composite (sum) free air anomaly
shown in blue. The variability of the final anomaly in
this case is totally due to the strength of the crust,
shown as the light brown anomalies in the central
panel for Te ranging from 5 to 35.
(see Watts and Fairhead, 1999)

4.9 Gravity Effect over Deltas

This is covered in Section 4.8 and Figures 4/18 and 4/19,
with examples over the Amazon and Niger deltas. Deltas
normally have positive anomalies since the sediments
have replaced the water and thus represent an excess
weight on the surface of the crust. This gives rise to a
large amplitude positive gravity anomaly. Usually, the
crust responds to the excess weight by flexing, which
reduces the anomaly; but because the sediments are
nearer the surface than the flexure, the anomaly is still
positive overall.


4.10 Gravity Effects at Destructive
Plate Margins

Subduction zones (or convergent plate margins) are
areas of active and successful hydrocarbon exploration.
These convergent margins are sites of major sediment
accumulation and deformation, as well as sites of major
gravity anomalies due to the subduction processes. The
isostatic residual anomaly tends to remove the effects
of the deeper structures so that the shallower structures
can be better imaged.



Figure 4/24: The Bouguer, Free air and Isostatic
anomalies over the Eastern Alps where the long
wavelength gravity effects of the deep subducted
structures are effectively removed from the isostatic
anomaly.

If the Bouguer anomaly is converted into the isostatic
residual anomaly over a small area, then the main effect
is a DC shift in the anomalies, making the resulting
isostatic residual anomaly values closer to zero (see
Figs 4/24 and 4/25).



Figure 4/25: Bouguer and Isostatic residual
anomalies over Western Alps (from Bott, 1971).

The gravity effects of mountains that lie above active
subduction zones are large. Figure 4/26 shows a
topography and gravity profile over the Andes. The
profile is made up of the free air anomaly over the sea
and the Bouguer anomaly over the land. At the coast the
height is zero, so the two anomaly types should agree.

Figures 4/27 to 4/30 show the spatial extent of the
anomalies over the Peru-Chile trench and Andes.



Figure 4/26: Gravity and topography profile across
the Andes. Crustal structure also shown.



Figure 4/27: Topography of the Peru/Chile region



Figure 4/28: Free air anomaly of the Peru/Chile
region



Figure 4/29: Bouguer anomaly over Peru/Chile
region




Figure 4/30: Isostatic Residual anomaly over the
Peru/Chile region

4.11 Comparison of Anomaly Types

4.11.1 West Africa

This region is currently a major area of oil exploration
and exploitation. The thinned continental margin is
broad in extent (see the line in the centre panel that
demarcates the Continent-Ocean Boundary, COB). This
is an important line, since the hot thinned continental
crust allows sediments to mature earlier, whereas west
of this line the sediments overlie oceanic crust
with generally lower heat flow due to the lack of radioactive
minerals within the crust. Thus over oceanic crust
significantly thicker sediments are needed (deltas) to
generate maturation within the sediments.

The continental margin is imaged by the Free air
anomaly, but its response is dominated by the bathymetry
rather than by subsurface structures. The Bouguer anomaly
has high gradients over the margin due to the crustal
thickness change. The Isostatic residual anomaly, which has
attempted to remove the bathymetry effect
(Bouguer correction) and the Moho effects (isostatic
correction), shows better the geological structure of
the upper crust. Even at the scale shown here, it is
clear the Isostatic residual anomaly has done a
good job.






Figure 4/31: Comparison of Free air, Bouguer and Isostatic anomalies offshore West Africa. The resolution of
the grids is the same, but due to the dynamic range of values, which controls the colour stretch, the detail in each
figure appears different.


4.11.2 South America



Figure 4/32: Topography and Geoid maps of S
America plus their respective histograms.

For South America it is interesting to see the histograms
of values for the region shown. The dynamic ranges of
the Free air and the Isostatic residual anomalies are
very similar, since for the latter the Isostatic correction
removes the long wavelength effect introduced by the
Bouguer correction in the Bouguer anomaly.

Figure 4/33: Free air and Bouguer anomaly maps of
S America plus their respective histograms.




Figure 4/34: Isostatic Correction and Isostatic
Residual anomaly maps of S America plus their
respective histograms. Note the dynamic range of
the Isostatic is similar to Free air anomaly map.

4.12 Further Reading

Continental Isostasy and Mountain Belts
Chapin, D.A., 1996. The deterministic approach towards
isostatic gravity residuals: a case study from South
America. Geophysics, 1022-1033.

Hartley, R., Watts, A.B., and Fairhead, J.D., 1995.
Isostasy of Africa. EPSL, 137, 1-18.

Kogan, M.G., Fairhead, J.D., Balmino, G., and
Makedonski, E.L., 1994. Tectonic fabric and lithospheric
strength of northeast Eurasia based on gravity data.
Geophys. Res. Lett., 21(24), 2653-2656.

Watts, A.B., Lamb, S.H., Fairhead, J.D., and Dewey, J.F.,
1995. Lithospheric flexure and bending of the Central
Andes. EPSL, 134, 9-21.

Continental Margins and Deltas
Walcott. Geol. Soc. Amer. Bull., 83, 1845-1848.

Watts, 1988. EPSL, 89, 221-238.

Watts and Fairhead, 1999.





SECTION 5: THE USE OF GPS IN GRAVITY
SURVEYS: the relation between the Geoid and the
Ellipsoid, and the Indirect Effect

Based on Fairhead, Green, and Blitzkow (2003)

5.1 Introduction

Global Positioning System (GPS) technology provides a
cost effective surveying method that is replacing
traditional methods of precise levelling in exploration.
Using satellite derived GPS coordinates can, however,
generate problems, since they will need to be translated
to national systems if these are the preferred local systems
being used by the explorationists. In all cases, consistent
coordinates should be used to compute gravity
corrections. This contribution reviews how GPS derived
coordinates are used for different types of gravity
surveys and how different coordinate systems generate
significant differences in the location, height and derived
gravity values. The use of GPS technology also leads to
the term gravity disturbance, which may be new to
many, but turns out to be conceptually the more
straightforward expression for the anomalous part of the
Earth's gravity field. This section also draws the reader's
attention to recent detailed articles on this subject and
on our GPS experience in South America.

5.2 The Global Positioning System
(GPS)

GPS was designed to provide instantaneous absolute
positioning using two codes, P and C/A, transmitted by a
constellation of satellites. The P code has certain
characteristics that allow an accuracy of decimetres in
the coordinates, but is restricted by the US DoD to
military applications. The C/A code is a free civil code,
and provides an accuracy of a few tens of metres in the
worst case. An alternative to these codes for determining
precise 3D position, now used extensively in geophysical
surveys, is to measure the phase of the carrier wave,
which does not require knowledge, or use, of the modulations
of the signal or the codes transmitted by the satellites.
Each satellite transmits two frequencies, with the
terrestrial receiver designed to receive either one or both
frequencies. In the second case, the receiver system
can correct for ionospheric refraction by using the
correlation of this effect with frequency. By using the so-called
"carrier beat frequency" measurements, a
centimetre accuracy is achieved in (X, Y, Z) or (φ, λ, h)
for distances greater than ~25 km from the base station,
while for distances less than this, receivers using just one
frequency are sufficient. In many countries fixed
networks of GPS receivers are being established (e.g.
the CORE network in South America) and, if available,
these save the need to establish your own base station.
Figure 5/1 shows the Antuco CORE station in Chile.



Fig. 5/1: Gravity observations at the Antuco CORE
station in Chile.

It is important to emphasize that the phase
measurements are only applied for positioning in the
differential (relative) mode. This means that two
receivers are needed, one remaining fixed in a known
position, so that differences in coordinates are
determined with the roving receiver. The reason for
using the differential mode with the phase
measurements is that it cancels correlated errors and
reduces the number of unknowns. By differencing with
respect to two stations, two satellites and two epochs,
known as single, double and triple differences
respectively (Figure 5/2), it is possible to limit the
unknowns in a progressive way.

Fig. 5/2: Single, Double and Triple differences, where the
satellites are in two different positions.

The triple difference is always used as a first step
because it offers the possibility of providing preliminary
coordinates for a point with just four unknowns.
However, the coordinates derived in this way are
unreliable. The best alternative is to derive the
coordinates using the double difference. In this case the
complication to be solved for is the ambiguities, i.e., the
integer number of cycles between the satellite and the
station at the first epoch of observations.

Country | Latitude | Longitude | Altitude | Reference System
Brazil | -19 45 43.3459 | -48 06 05.6732 | 754.1502 | WGS-84
Brazil | -19 45 41.6527 | -48 06 04.0639 | 763.2821 | SAD-69
Argentina | -35 10 29.0200 | -59 15 46.6400 | 45.9100 | WGS-84
Argentina | -35 10 27.3038 | -59 15 44.4610 | 31.9277 | SAD-69
Venezuela | 5 16 42.0800 | -61 08 04.8600 | 1254.5400 | WGS-84
Venezuela | 5 16 43.2237 | -61 08 03.0228 | 1271.0418 | SAD-69
Table 1: Coordinates of the same stations in two different reference systems

5.3 Co-ordinate Terminology

5.3.1 Horizontal Datum:

The local horizontal datum, used for geophysical
surveying and mapping, is normally a nationally
accepted system that uses a pre-determined geodetic
datum and ellipsoid. This will normally be different from
the WGS84 datum used by the GPS system, such that
the GPS-derived latitudes, longitudes and ellipsoidal
heights (or ellipsoidal coordinates) will need to be
transformed to an acceptable national or continental
datum. This implies that a single point can have more
than one set of coordinates by virtue of the effect of the
geodetic datum used. It is thus important to ensure that
the correct datum and ellipsoid parameters are used.
For example, coordinates in some countries can differ
from WGS84 by up to 1 km. This can be significant
when computing the normal gravity, thus we suggest
that WGS84 coordinates should be used for gravity data
processing, especially when using the WGS84 gravity
formula (gth, see Figure 5/3).

Table 1 shows an example of a simple horizontal
translation for three widely spaced South American
stations from WGS-84 to the South American Datum
1969 (SAD-69). The following translation parameters
were used:

ΔX = +66.87 m; ΔY = -4.37 m; ΔZ = +38.52 m

Often seven transformation parameters are used,
allowing incorporation of axis rotations and a scale factor.
These translations are applied to the geocentric
Cartesian coordinates before the transformation back to
geodetic coordinates using the appropriate ellipsoid
parameters.

The ellipsoidal height resulting from this horizontal
transformation (shown in bold italics in Table 1) should
not be used, since the transformation is horizontal only.
The vertical transformation is achieved using the
ellipsoidal height and a geoid model.
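
The translation might be sketched as follows (illustrative only: the GRS-1967 ellipsoid constants assumed here for SAD-69 and the simple fixed-point Cartesian-to-geodetic iteration are assumptions of this sketch; production work should use a tested geodetic library):

```python
import numpy as np

WGS84 = (6378137.0, 1.0 / 298.257223563)   # semi-major axis a (m), flattening f
SAD69 = (6378160.0, 1.0 / 298.25)          # GRS-1967 ellipsoid (assumed)

def geodetic_to_xyz(lat_deg, lon_deg, h_m, ell):
    """Geodetic (lat, lon, h) to geocentric Cartesian (X, Y, Z)."""
    a, f = ell
    e2 = f * (2.0 - f)
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    N = a / np.sqrt(1.0 - e2 * np.sin(lat) ** 2)  # prime-vertical radius
    return ((N + h_m) * np.cos(lat) * np.cos(lon),
            (N + h_m) * np.cos(lat) * np.sin(lon),
            (N * (1.0 - e2) + h_m) * np.sin(lat))

def xyz_to_geodetic(X, Y, Z, ell, iters=10):
    """Cartesian back to geodetic by simple fixed-point iteration."""
    a, f = ell
    e2 = f * (2.0 - f)
    lon = np.arctan2(Y, X)
    p = np.hypot(X, Y)
    lat = np.arctan2(Z, p * (1.0 - e2))  # first guess
    for _ in range(iters):
        N = a / np.sqrt(1.0 - e2 * np.sin(lat) ** 2)
        h = p / np.cos(lat) - N
        lat = np.arctan2(Z, p * (1.0 - e2 * N / (N + h)))
    return np.degrees(lat), np.degrees(lon), h

def wgs84_to_sad69(lat_deg, lon_deg, h_m, dX=66.87, dY=-4.37, dZ=38.52):
    """Three-parameter translation, applied in geocentric Cartesian space."""
    X, Y, Z = geodetic_to_xyz(lat_deg, lon_deg, h_m, WGS84)
    return xyz_to_geodetic(X + dX, Y + dY, Z + dZ, SAD69)
```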

5.3.2 Vertical Datum

Orthometric Heights: National height systems, used to
determine heights of bench marks and topographic
maps, are traditionally based on a reference datum of
H=0 representing mean sea level at a given location.
For mainland UK, for example, the height reference
system used is based on the mean sea level at Newlyn
in Cornwall.

For inland areas of continents, using sea height
references is not without its problems due to the
propagation of precise spirit levelling errors, reference
system biases and other temporal effects such as glacial
rebound etc. In central Eastern Europe the Baltic
(Kronshtadt) height reference system gives heights that
are up to 2 meters different from the Adriatic (Trieste)
height reference system.

For a country such as Brazil, having a single reference
tide gauge is impractical and the introduction of GPS
geometric levelling has highlighted the problems with the
older orthometric height system. If orthometric or
precise levelling is used to link two tide gauges, then for
various reasons the difference is not necessarily zero.
First, due to hydrodynamic effects (currents,
temperature), the mean sea level at the two tide gauges
will not necessarily be on the same equipotential surface
resulting in spatial differences of the mean sea level (sea
surface topography). This is not necessarily a function
of distance. Second, errors in orthometric levelling tend
to increase with distance. Third, and conceptually more
complicated, is the fact that the equipotential surfaces
are not parallel in a geometrical sense, i.e. what two
different equipotential surfaces have in common is their
difference in potential and not the difference in the
distance or separation between the two surfaces. So,
the results of orthometric levelling are dependent on the
paths taken, and as such orthometric corrections seek to
make the levelling path-independent.

Ellipsoidal Heights: Satellite positioning systems
(GPS) are increasingly being used to determine the
vertical coordinate. The accuracy using GPS can range
from a few metres to a few centimetres and is in
general 2-5 times worse than the horizontal accuracy. In
the first case, the C/A code is used with the DGPS
correction and gives an accuracy of 1 to 5 meters on
baselines shorter than ~25 km. Accuracies of between a
few centimeters and one meter can be achieved using
single frequency receivers with short periods of
observation, of about 20 minutes, using the triple
difference solution. This means that the ambiguities will
not be solved for and the base line distance is restricted
to within 25 km. If the carrier beat phase methodology
is used with periods of observation increased to 1-2
hours with single frequency receivers (or double
frequency for base lines longer than 25 km), an accuracy
of a few centimeters is achieved. This improved
accuracy uses the double difference method to solve
for the ambiguities and is made possible by the longer
observational period and error modelling. The above set
of timing requirements are likely to decrease significantly
when the current GPS (Global Positioning System-US
system) is upgraded and supplemented by Galileo
(European system) in ~2007.

GPS directly provides the geometric or ellipsoidal height
h (height above the ellipsoid defined by WGS84). To
convert the ellipsoidal height h to an orthometric height H
requires the height of the geoid N to be known, where N
is the separation between the geoid and the ellipsoid
(see Figure 5/3), so H = h - N. This has its problems
since the geoid surface is not precisely known
everywhere and its calculation is continually being
updated.



Figure 5/3: Definitions of terms used at the Earth,
Geoid and Ellipsoid surfaces

5.4 Gravity Terminology

Gravity corrections are often, and incorrectly, referred to
as correcting the surface gravity observation, gobs, down
to a datum. The correct way of viewing such corrections
is as correcting normal gravity to the observational point
at the Earth's surface. The magnitude of the gravity
acceleration, gobs, measured at the Earth's surface, is a
scalar quantity (see Figure 5/3). If gobs is corrected
using ellipsoid- and/or geoid-based corrections, then
different gravity terminology needs to be used and the
resulting gravity values will be numerically different.

Using H, the height in metres relative to the geoid
surface (traditional processing), the free air and Bouguer
gravity anomalies, expressed in mGal, can be simply
expressed (ignoring the second-order free air correction
effect, the curvature (Bullard B) correction and terrain
corrections) as:

Free air anomaly = Faa = gobs - (gth - 0.3086H)
Bouguer anomaly = Faa - 0.04191ρH

where gobs is the observed gravity, gth is normal
gravity based on the WGS84 ellipsoid formula, the free
air correction is 0.3086H, the Bouguer correction for an
infinite slab is 0.04191ρH, where H is in metres and ρ is
density in g/cc.

Using h, the height in metres relative to the ellipsoid
surface (new GPS processing), the above
expressions become:

Free air disturbance = Fad = gobs - (gth - 0.3086h)
Bouguer disturbance = Fad - 0.04191ρh

The magnitudes of these anomaly and disturbance
values will be different since H and h have different
values. Thus merging old surveys using H with modern
gravity surveys using h introduces a further level of
complexity to be resolved.
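
The distinction can be made concrete with a small sketch, reusing the WGS84 normal gravity helper sketched in Section 4.3 (illustrative only): the formula is identical, so whether the result is an anomaly or a disturbance is set purely by which height is supplied.

```python
def free_air_value_mgal(g_obs_mgal, lat_deg, height_m):
    """Free air *anomaly* if height_m = H (orthometric);
    free air *disturbance* if height_m = h (ellipsoidal)."""
    return g_obs_mgal - (normal_gravity_wgs84(lat_deg) - 0.3086 * height_m)

# With a geoid-ellipsoid separation N = h - H of, say, 30 m, the
# disturbance differs from the anomaly by 0.3086 * 30, i.e. ~9.3 mGal.
```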

Which formula is the more correct? From the
above equations and Figure 5/4, the gravity disturbance
is the difference between gobs at the observational
surface and gth at the ellipsoid surface, upward
continued to the measurement surface by the use of
the free air correction, 0.3086h, and the Bouguer
correction, 0.04191ρh.

Traditional geophysical processing has determined
gravity anomalies using H, since H (not h) was only
available in the past. Using H under-corrects for the
hatched areas in Figure 5/4, where the geoid surface is
above the ellipsoid surface, and over-corrects for the
part of the geoid surface that is below the ellipsoid
surface.



Fig. 5/4: Geoid and ellipsoid surfaces.




Fig. 5/5: Indirect Effect for the free air values generated from the height differences (EGM96 geoid minus ellipsoid).
The gravity effect for these two different reference
surfaces is shown in Figure 5/5 and is known as the
Indirect Effect. The difference between the geoid and
ellipsoid surfaces can be as large as ~100 m and
generates a maximum Indirect Effect of ~30 mGal in the
Free air value or ~20 mGal in the Bouguer value. So
should geophysicists change their terminology and
procedures and introduce the more correct term gravity
disturbance?

5.5 Implications to Gravity Exploration

For solid earth geophysical studies covering large
regions of the Earth, the traditional method of calculating
gravity anomalies has ignored the Indirect Effect
(Figure 5/5), and thus the sub-surface mass distribution
may have been under- or over-estimated. The spectral
plot of the Indirect Effect for South America is shown in
Figure 5/6 and indicates that for small-scale surveys,
less than a few hundred km in dimension, the variation
within the survey area is likely to be small. This is
quantified by using an azimuthally averaged power
spectrum for South America in Figure 5/6.

The ratio between the Indirect Effect Free air and Free
air amplitudes decreases from ~10⁻² at long wavelengths;
nevertheless, large gravity anomalies can still produce
significant Indirect Effects.
The global free air representation of the Indirect Effect
(Figure 5/5) is produced from EGM96 and hence
represents the regional or long wavelength component
of this correction.


Fig. 5/6: Power spectrum of Free air and Indirect
Effect Free air for a 10 minute grid of South America
covering the area 85W to 34W and 56S to 15N.

Satellite Altimeter derived Gravity: The satellite
altimeter method to derive marine gravity uses the
principle that the mean sea surface is an equipotential
surface of the gravity field. Free air gravity is derived
from either converting along track sea surface gradients
or directly from the mean sea surface shape. The
gravity values relate to the sea surface (i.e. the geoid)
and hence are clearly gravity anomalies rather than
gravity disturbances. To derive the gravity disturbance
requires the application of the Indirect Effect.

Airborne Gravity: Scalar and vector gravity
measurements are now routinely observed in
exploration. This has been made possible by the use of
GPS, which tracks the geometric 3D motion and position
of the gravity sensor/aircraft. All 3D positional measurements
onboard the aircraft are made by GPS, thus determining
gravity disturbances would be less error prone than
trying to correct with orthometric heights. Airborne
gravity gradiometer measurements (gradients and
tensors) also rely on GPS but are insensitive to long
wavelength gravity variations and since such data
undergo different forms of processing compared to
conventional gravity, there are assumed to be no similar
problems with the Indirect Effect.

Marine Gravity: Ship-borne gravity measurements are
collected at the sea surface and tidal effects have either
been corrected for or minimized at cross-over correction
and micro-levelling stages of processing. To our
knowledge, no attempt has been made to output
accurate ship based GPS heights due possibly to the
ship motion noise. In light of such problems, the free air
gravity anomaly can at present only be converted to the
free air disturbance by applying the Indirect Effect. For
many marine surveys imprecise bathymetry still remains
a major source of error. Depth errors up to 10% are
common (i.e. 300 m in 3 km of water!). This results from
using the wrong velocities to convert the two-way transit
times to depth; it significantly affects the calculation
of Bouguer and Isostatic anomalies and can be a greater
problem than any geoid/ellipsoid differences.

Land Gravity: For stand-alone gravity surveys, the
speed and efficiency of GPS based surveying is
replacing conventional surveying methods, such that
surveys can be reduced to the free air disturbance without
any assumptions being made. Converting the data to
Bouguer and Isostatic disturbances is straightforward
using traditional methods, with h rather than H. If the
gravity survey is regional in character (Figure 5/7) and
involves the integration of existing surveys, then working
with orthometric heights, H, is recommended and
requires the appropriate geoid to be used.



Fig. 5/7: Gravity fieldwork in Paraguay and Rio
Negro, Brazil using GPS methods

5.6 Conclusions

The term gravity disturbance, which is familiar in geodesy, is unlikely to be readily accepted by geophysicists until a body such as the SEG provides clear guidelines for the acquisition, processing and documentation of gravity surveys using GPS measurements. The terms 'anomaly' and 'disturbance' will have to co-exist, since not all surveys use ellipsoidal heights. The amplitude variation of the Indirect Effect at the scale at which exploration surveys normally operate (<100 km) will in general have an insignificant effect on any resulting interpretation, since long wavelength or DC terms are often removed from Bouguer data during modeling to eliminate unwanted long wavelength regional gravity effects. Figure 5/5 indicates that the spatial variation of the free air Indirect Effect could be as large as 30 mGal, or 20 mGal for the Bouguer Indirect Effect. Such shifts between the Bouguer anomaly and disturbance of adjacent surveys could generate problems when integrating old and new surveys. The gravity differences between overlapping surveys and the global variation of the Indirect Effect could, however, provide a fingerprint of the type of processing that has occurred if full processing details are not readily available.

Good survey practice should always dictate that a full set of metadata for both data acquisition and processing be included with the listing of the principal facts of a survey and reproduced on all map legends. This will only be achieved if clients insist on such conditions/specifications as part of their standard survey contract.

Clearly ship, satellite and older land data naturally
produce gravity anomalies, while newer land and
airborne data will most readily be gravity disturbances.
Possible problems envisaged by having these two
gravity systems working side by side are in the
calculation of terrain corrections using GPS station
heights with digital terrain maps based on a national
coordinate system, and in data compilations where there
is potential for mixing gravity anomalies and
disturbances as well as surveys with different coordinate
systems.

5.7 Current GPS Working Practice 2010

To convert GPS geodetic heights (relative to the ellipsoid) to orthometric heights (relative to the geoid), the geodetic heights are transformed into orthometric (strictly quasi-orthometric) heights using EGM08 (Earth Geopotential Model 2008) restricted to degree and order 150, i.e. that part of the EGM08 model that is totally controlled by the satellite solution. The higher order part of the EGM08 field is influenced by the surrounding terrestrial data, which may change with time and could thus be considered unreliable where terrestrial data coverage is poor. Using EGM08 restricted to degree and order 150 also means that when the local geoid is better defined the orthometric heights can easily be recalculated. This threshold of degree and order 150 is likely to be improved by the gravity results of the GOCE satellite, which should move the threshold to degree and order ~300 (see Section 11).
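As a minimal sketch of this working practice in code (Python), the conversion is simply H = h - N, where N is the geoid undulation; the function egm08_undulation below is a hypothetical placeholder for interpolation of an EGM08 (degree and order 150) undulation grid, and the constant it returns is purely illustrative.

def egm08_undulation(lat_deg: float, lon_deg: float) -> float:
    """Geoid undulation N (metres) above the WGS84 ellipsoid.
    Placeholder: a real implementation would interpolate an
    EGM08 grid restricted to degree and order 150."""
    return 12.3   # hypothetical N for the survey area

def orthometric_height(h_geodetic_m: float, lat_deg: float, lon_deg: float) -> float:
    """Quasi-orthometric height H = h - N from a GPS geodetic height h."""
    return h_geodetic_m - egm08_undulation(lat_deg, lon_deg)

print(orthometric_height(250.0, -15.0, -47.5))   # 237.7 m

If the local geoid is later better defined, only egm08_undulation needs replacing and the heights can be recalculated, as noted above.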


5.8 Suggestions for further reading

General text: GPS - Theory and Practice by B. Hofmann-Wellenhof, H. Lichtenegger and J. Collins (Springer, New York / Wien).

Articles:

Hackney and Featherstone (2003) 'Geodetic versus geophysical perspectives of the gravity anomaly', Geophysical Journal International, 154, 35-43.

Li and Götze (2001) 'Ellipsoid, geoid, gravity, geodesy, and geophysics', Geophysics, 66, 1660-1668.

Fairhead, Green and Blitzkow (2003)


SECTION 6: LAND GRAVITY DATA - Acquisition, Processing & Quality Control


The Bouguer Anomaly (BA) is defined as:

BA = g_obs - g_th + 0.3086h - 0.04191ρh + T

or

BA = g_obs - (g_th - 0.3086h + 0.04191ρh - T)

where h is the station height (m), ρ the reduction density (g/cc) and T the terrain correction. The reason for showing two equations that are numerically identical is that the second, with the bracket, implies that the corrections are made at the observation point, which is technically correct.
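A minimal sketch of this reduction in code (Python), assuming all terms are already in consistent units (mGal for gravity, metres for h, g/cc for the density ρ); the numerical values in the example are hypothetical.

def bouguer_anomaly(g_obs, g_th, h, rho=2.67, terrain=0.0):
    """BA = g_obs - g_th + 0.3086*h - 0.04191*rho*h + T (all mGal)."""
    free_air_corr = 0.3086 * h          # fall-off of gravity with height
    bouguer_corr = 0.04191 * rho * h    # infinite flat slab attraction
    return g_obs - g_th + free_air_corr - bouguer_corr + terrain

# Hypothetical station at 250 m where g_obs - g_th = -40 mGal, T = 0.8 mGal:
print(bouguer_anomaly(979000.0, 979040.0, 250.0, terrain=0.8))   # ~9.98 mGal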

Let's critically look at each term of the Bouguer anomaly equation.

6.1 Observed Gravity (g_obs)

Since the gravity meter only measures differences in
gravity (rather than being an absolute instrument)
gravity measurements need to be tied into places of
known gravity.

Up to 1971 all gravity measurements were tied via
intermediate base stations to the absolute value at
Potsdam (981274 mGal) which was originally derived in
1906.

Since Potsdam's value was found to be about 14 mGal
too large (1971 value at Potsdam is 981260.19 mGal) a
new world network of base stations was established in
1971 and called IGSN(71) the International Gravity
Standardisation Net 1971. One measurement was
made in every country (total number of stations 1854).

National base station networks established since 1971 have been tied to IGSN71. For example, in the United Kingdom the National Gravity Reference Net 1973, NGRN(73), was established, with stations located at Ordnance Survey fundamental benchmarks. This national network of gravity base stations (see Fig 6/3) allows gravity surveys to be tied into an absolute frame of reference.

Note: details of national IGSN(71) stations can be obtained from the appropriate national organisations

or
BUREAU GRAVIMETRIQUE INTERNATIONAL (BGI)
http://www.bgi.cnes.fr

&
National Geospatial Intelligence Agency (NGA)
http://www.nga.mil
6.1.1 Land Gravity Meters

The LaCoste-Romberg gravity meter was originally designed in 1934 and is currently one of the main instruments used. It works on the principle of a 'zero length' spring which supports a beam and mass in a horizontal position. The 'zero length' spring has its tension proportional to the physical length of the spring s, not to its change in length (as in Hooke's law).



Figure 6/1: LaCoste and Romberg gravity meters (for operation and calibration of the meter see the instrument manual).



Figure 6/2: Zero length spring







Figure 6/3: The National Gravity Reference Net 1973 for the UK
Table 6/1: Details of LaCoste instruments

                  Model G             Model D
Range             7000 mGal           200 (300) mGal
Drift             <1 mGal per month   <1 mGal per month
Calibration       Stable              Stable
Repeatability     0.01 mGal           0.005 mGal
Accuracy          0.04 mGal           0.01 mGal
Power             3W @ 12V            3W @ 12V



Figure 6/4: Sketch of the mechanism of the LaCoste
and Romberg gravity meter



Figure 6/5: Actual interior of the L & R meter

With reference to Figure 6/2:

Sensitivity of the meter, where the spring length is s and z is the finite length of the spring under zero tension:

tension = k(s - z)

Taking moments at the null position:

Mga cos θ = k(s - z) b sin α
          = k(s - z) b (y cos θ)/s

using the sine rule, sin α = (y cos θ)/s. Hence

g = (k b y / M a)(1 - z/s)

When g increases to g + δg the spring length increases by δs, where

δs = (M a / k b y)(s²/z) δg

We can make δs as large as we like by making the bracketed factor large, i.e. by making z (the unstretched 'zero length') as small as possible.
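A short numerical sketch (Python) of this sensitivity result, using entirely hypothetical beam and spring parameters; the point is that the response δs for a fixed δg grows as z, the unstretched spring length, approaches zero.

def beam_response(dg, M, a, k, b, y, s, z):
    """Spring-length change ds for a gravity change dg:
    ds = (M*a/(k*b*y)) * (s**2/z) * dg, from g = (k*b*y/(M*a))*(1 - z/s)."""
    return (M * a / (k * b * y)) * (s ** 2 / z) * dg

for z in (0.01, 0.001, 0.0001):     # residual 'zero length' (m), hypothetical
    print(z, beam_response(dg=1e-7, M=0.01, a=0.05, k=50.0,
                           b=0.05, y=0.05, s=0.05, z=z))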


The CG-5 Gravity Meter


Figure 6/6: CG-5 Gravity Meter
The CG-5 (2009) is the latest gravity meter from Scintrex Ltd. It offers all of the features of the low noise, industry standard CG-3M micro-gravity meter, but is lighter and smaller and has a larger screen giving a superior user interface. The CG-5 can be operated with minimal operator training, and automated features significantly reduce the possibility of reading errors.
Data download bottlenecks have been eased with the provision of a fast USB interface and flexible data formats, and noise rejection has been improved.
By constantly monitoring electronic tilt sensors, the CG-5 can automatically compensate for errors in gravity meter tilt. Due to the low mass and excellent elastic properties of fused quartz, tares are virtually eliminated: the CG-5 can be transported over very rough roads and the residual drift remains low, and it can withstand a shock of more than 20 g with a resulting tare of no more than 5 microGal.
The CG-5 offers the best possible repeatability: over many tens of field readings it will repeat to within a standard deviation of 0.005 mGal. By entering the time, date and location, the meter will automatically apply the solid Earth tide correction.
Specifications of the CG-5 meter:

Resolution                    1 microGal
Residual drift                0.02 mGal/day
Sensor type                   Fused quartz using electrostatic nulling
Automatic tilt compensation   +/- 200 arc sec
Memory                        1 MByte
Data I/O port                 USB
Display                       1/4 VGA, 320 x 240 pixels
Dimensions and weight         31 x 22 x 21 cm, 8 kg incl. battery
Operating temperature range   -40 C to +45 C
Automated compensations       Temperature, instrument tilt, tide, noisy sample, seismic noise filter
Operating range               8000 mGal without resetting



The ZLS Meter:

This meter is based on the Automated Burris Gravity Meter with the UltraGrav Control System and is basically similar to the L & R meter. The Burris Gravity Meter is built around a hand-made, metal, zero-length spring. This spring system has extremely low hysteresis and drift rates: when new, the spring drifts approximately 1.0 mGal per month; when mature, the drift correction is less than 0.5 mGal per month, and data have shown that the drift rate improves with age to approximately 0.030 mGal per month. Calibration values are stable over time as they are determined by a metal micrometer screw. The Burris Gravity Meter has consistently yielded standard deviations of 0.003 mGal or better during routine field tests.

Further details from <www.zlscorp.com>



Figure 6/7: The ZLS automated Burris Gravity
Meter


6.1.2 Survey Procedure

i. Check Instrument is in good working order (see
manual for procedures to carry out sensitivity and level
checks). Are batteries in good condition and battery
charge working?

ii. Instrument calibration check by measuring repeatedly at two sites of known gravity difference (for the UK the BGS has set up a calibration check line - Hatton Heath to Prees)

iii. Establish a base station network (e.g. one station per 50 km² is normal in the UK)

iv. Tie base station network into IGSN71 datum

v. Tie each survey loop of measurements into the base station network (best to use the same base station at the beginning and end of a loop). Check that the base station measurements at the start and end of the loop (after applying tidal corrections - see next sub-section) are within acceptable limits of instrument drift (i.e. 0.01 mGal/hr); a minimal sketch of this loop drift correction is given after the list below.

vi. At each station: record location and station number
on map and in field notebook the following:
a) Station number
b) Location - grid co-ordinates
c) Time of day to 1 minute
d) Gravity meter measurement (repeated) to 0.01
mGal
e) Height
f) Sketch of station location
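A minimal sketch (Python) of the loop drift correction referred to in step v, assuming tide-corrected readings and a linear drift between the two base station occupations; all values are hypothetical.

def drift_corrected(readings, times, base_start, t_start, base_end, t_end):
    """Remove linear instrument drift from tide-corrected loop readings.
    readings in mGal, times in decimal hours."""
    rate = (base_end - base_start) / (t_end - t_start)   # mGal per hour
    return [r - rate * (t - t_start) for r, t in zip(readings, times)]

# Base read at 08:00 (1234.56) and 11:00 (1234.53): drift = -0.01 mGal/hr
print(drift_corrected([1235.10, 1236.42], [9.0, 10.0],
                      1234.56, 8.0, 1234.53, 11.0))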

6.1.3 Tidal effect on g_obs

If gravity measurements are made at the same spot at hourly intervals over several days, they will show a ±0.2 mGal sinusoidal variation with a period of approximately 12.5 hours. This is due to solid Earth tides (surface movements of up to ~1 m) caused by the moon's gravitational attraction as it rotates about the Earth every ~25 hours. The same effect also causes the ocean tides. The effect is predictable, and g_obs can be corrected using computed tables for the region being surveyed. Input to the program is the date, time (relative to GMT) of the measurement, and location. Usually a centre point of the survey is sufficient to enable tables showing the time variation of the tidal correction to be produced for the survey period.


6.2 Theoretical or Normal Gravity or Latitude Correction (g_th)

The Earth is elliptical in shape with mean equatorial
radius 6,378.16 km and mean polar radius 6,356.18 km
(difference of about 22 km). Thus there is a large
change in gravity from equator to pole due to:

i) Shape of Earth
ii) Spin of Earth

The spheroid is a mathematical figure which approximates the sea level Earth with all irregularities removed (i.e. no lateral variations in density, only vertical variations). Under such conditions the spheroid is an equipotential surface of its gravitational field.

The Spheroid is used to provide the sea level predicted
value of gravity at a station. The Spheroid has been
redefined on numerous occasions to reflect the
improvements in its determination (see next page).

Gravity variation with latitude: the force due to gravity at a point on the Earth's surface is a vector resulting from the attraction of the Earth and the centrifugal force. For simplicity let us assume the Earth is perfectly spherical and revolves about its axis at angular velocity ω. With F = GM/R² and f = ω²R cos θ at latitude θ:

g² = F² + f² - 2Ff cos θ
   = (GM/R²)² + (ω²R cos θ)² - 2(GM/R²)ω²R cos²θ

so, neglecting the small f² term,

g ≈ (GM/R²)[1 - (ω²R/(GM/R²)) cos²θ]

To a first approximation for the Earth, GM/R² = 980 Gal and ω²R/(GM/R²) = 1/300, so

g ≈ 980(1 - 0.0033 cos²θ) Gal

or, referred to the equatorial value,

g ≈ 977(1 + 0.0032 sin²θ) Gal

The centrifugal force f is a function of latitude and ω. Thus if g is measured on a moving platform (i.e. a ship or plane) then f will change (see the Eotvos effect).


Figure 6/8: Global gravity forces



Figure 6/9: Gravity variation with latitude

6.2.1 Theoretical Gravity prior to the Introduction of IGSN71

The 1930 International Gravity Formula originally used to describe theoretical gravity (g_th) was

g_30 = 978049 (1 + 0.0052884 sin²θ - 0.0000059 sin²2θ) mGal

where θ = latitude in degrees. This uses g_Potsdam(1930) = 981274 ± 3 mGal.

The equation only attempts to correct for the elliptical (2nd harmonic) shape of the Earth. There is no variation of g_th with longitude, only latitude.

6.2.2 Theoretical Gravity with the Introduction of IGSN71

The International Gravity Formula was updated in 1967 (IGF67 or GRS67, where GRS is Geodetic Reference System) to take account of the increased accuracy of the spheroid from satellite studies. The formula keeps to, but updates, the 2nd harmonic shape of the Earth and corrects for the Potsdam error:

g_67 = 978031.8 (1 + 0.0053024 sin²θ - 0.0000058 sin²2θ) mGal

where g_Potsdam(1971) = 981260.19 ± 0.017 mGal, or equivalently

g_67 = 978031.85 (1 + 0.005278895 sin²θ + 0.000023462 sin⁴θ) mGal

6.2.3 Updates and Current Theoretical Gravity

WGS72: the World Geodetic System 1972 (WGS72) Ellipsoidal Gravity Formula is

g_72 = 978033.27 (1 + 0.005278992 sin²θ + 0.000023461 sin⁴θ) mGal   (not used)

1980 Latitude Correction formula (Normal Gravity, GRS80):

g_80 = 978032.7 (1 + 5.3024×10⁻³ sin²θ - 5.8×10⁻⁶ sin²2θ) mGal

where θ is the geodetic latitude.

Currently used is the WGS84 formula (accepted by the IUGG as the best available):

g_84 = 978032.67714 (1 + 0.00193185138639 sin²θ) / (1 - 0.00669437999013 sin²θ)^0.5 mGal

where θ = geodetic latitude and g_84 = normal gravity at sea level.

WGS84 differs from all other formulae in taking into account the weight of the atmosphere. Thus if one uses WGS84 one needs to add an additional term to the gravity anomaly formula:

Gravity Anomaly = g_obs - g_84 + δg_A + ... etc.

where

δg_A = 0.87 e^(-0.116 H^1.047) mGal

and H is the elevation of the point in kilometres. The variation of δg_A with elevation is:

H (km)        0.0    1.0    2.0    3.0    4.0
δg_A (mGal)   0.87   0.77   0.68   0.60   0.53

Note: all formulae from IGF67 to WGS84 use the IGSN71 datum for observed gravity (see North American Gravity Database Committee, 2005).
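A sketch (Python) evaluating the WGS84 closed-form normal gravity and the atmospheric term quoted above; the printed values are computed, not taken from the notes.

import math

def g_wgs84(lat_deg):
    """WGS84 normal gravity at sea level, mGal (Somigliana-type closed form)."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return 978032.67714 * (1 + 0.00193185138639 * s2) / math.sqrt(
        1 - 0.00669437999013 * s2)

def dg_atmosphere(H_km):
    """Atmospheric correction dg_A = 0.87*exp(-0.116*H**1.047) mGal, H in km."""
    return 0.87 * math.exp(-0.116 * H_km ** 1.047)

print(g_wgs84(0.0), g_wgs84(90.0))            # ~978032.7 and ~983218.5 mGal
print(dg_atmosphere(0.0), dg_atmosphere(2.0)) # 0.87 and ~0.68 mGal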

6.3 Free air Correction (0.3086h)

This correction, FAC = 0.3086h, assumes a linear vertical gradient representing the fall-off of gravity with height over the range of topography/bathymetry encountered on Earth. This linear assumption is generally accepted, although more precise equations are now used in airborne gravity surveys (see Section 8).

A more complete expression is now available from BGI (G. Balmino, Toulouse) which gives the Free Air Correction (FAC) as a function of latitude and height:

FAC = (dg/dh)h + (1/2)(d²g/dh²)h² + (1/6)(d³g/dh³)h³

of which only the first two terms are significant:

FAC = (0.3083293357 + 0.0004397732 cos²θ)h - 7.2125×10⁻⁸ h²

(see North American Gravity Database Committee, 2005).

For GRS80:

FAC = -(0.3087691 - 0.0004398 sin²θ)h + 7.2125×10⁻⁸ h² mGal
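A sketch (Python) evaluating the two second-order FAC expressions; because cos²θ = 1 - sin²θ they are the same formula to rounding, and the GRS80 form is negated here so both are evaluated in the same correction-added sign convention.

import math

def fac_cos_form(lat_deg, h):
    c2 = math.cos(math.radians(lat_deg)) ** 2
    return (0.3083293357 + 0.0004397732 * c2) * h - 7.2125e-8 * h ** 2

def fac_sin_form(lat_deg, h):     # GRS80 expression with the sign reversed
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return (0.3087691 - 0.0004398 * s2) * h - 7.2125e-8 * h ** 2

for lat in (0.0, 45.0, 90.0):     # agreement to ~1e-4 mGal at h = 1000 m
    print(lat, fac_cos_form(lat, 1000.0), fac_sin_form(lat, 1000.0))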

6.4 Bouguer Correction (0.04191ρh)

The correction 0.04191ρh is for a flat infinite slab of thickness h metres (above sea level) and density ρ g/cc.

Figure 6/10: Infinite flat slab approximation

Why do we consider the Earth and sea level flat? In reality the Earth is close to being spherical, and a spherical shell correction may appear at first sight to be more appropriate.




Figure 6/11: Spherical shell model for the Bouguer Correction?

Bouguer correction (flat Earth) = 2πGρh = 0.04191ρh
Bouguer correction (spherical shell) = 4πGρh = 0.08382ρh

Reasons for using the flat Earth: since the height datum is normally sea level,

i) the major part of the world is covered by sea, not land above sea level, so a spherical shell would over-correct;

ii) continental-size masses approximate to an infinite flat Earth after terrain correction, i.e. a spherical cap of 167 km radius is equivalent to an infinite flat slab

(see North American Gravity Database Committee, 2005).

6.5 Terrain Correction (T)


Figure 6/12: Terrain corrections

6.5.1 Inner Zone Terrain Correction (T)

This corrects for topographic relief out to 53 m from the gravity station (i.e. Hammer zones A, B & C). Sometimes zone D is included when there is a lack of detailed topographic maps.



Figure 6/13: Calculating the terrain correction

Theory
Gravity effect of an annulus of inner and outer radii r1 and r2:

surface element = r dφ dr
volume element dv = r dφ dr dz
mass element dm = ρ dv = ρ r dφ dr dz
sin θ = z/R and g = G dm/R², so

Δg = G ∫ ρ sin θ dv / R²

Δg = Gρ ∫(φ=0→2π) ∫(r=r1→r2) ∫(z=0→h) z r dz dr dφ / (r² + z²)^(3/2)

When r1 = 0 and r2 = ∞ (i.e. an infinite slab) this reduces to

Δg = 2πGρh = 0.04191ρh mGal

Divide annulus up into segments and use tables based
on equation above to determine terrain corrections.

Figure 6/14: Inner zone terrain corrections
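Evaluating the annulus integral between finite radii and dividing by the number of compartments gives the working formula behind Hammer's tables; the sketch below (Python) assumes zone-B-like geometry with illustrative numbers.

import math

def hammer_compartment(r1, r2, dh, n_segments, rho=2.67):
    """Terrain effect (mGal) of one compartment of an annulus of density rho
    (g/cc); r1, r2 and the height difference dh are in metres.
    Full ring: 0.04191*rho*(r2 - r1 + sqrt(r1^2+dh^2) - sqrt(r2^2+dh^2))."""
    ring = 0.04191 * rho * (r2 - r1 +
                            math.sqrt(r1 ** 2 + dh ** 2) -
                            math.sqrt(r2 ** 2 + dh ** 2))
    return ring / n_segments

# One of the 4 compartments of zone B (2 - 16.6 m) with a 3 m height difference:
print(hammer_compartment(2.0, 16.6, 3.0, 4))   # ~0.037 mGal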

Terrain corrections can be calculated out to any distance; normal distances are ~22 km, but they can extend as far as 167 km.
Note: for land based stations, terrain corrections are always positive, both for the effects of valleys below and of topography above the station height.

How to calculate inner zone terrain correction effects (from estimates made in the field):

i) try to make sure zone A is flat out to 6 feet (2 metres), then TC = 0
ii) measure the mean height difference between the station and each segment of zone B
iii) convert the height difference to mGal using Hammer's tables (see the Dobrin or Parasnis text books for tables)
iv) sum the mGal effects for the 4 segments of zone B
v) repeat ii) to iv) for the 6 segments of zone C
vi) sum all the effects of A, B & C together = Inner zone T

NOTE: use symmetry and two dimensionality of the topography to minimise the number of measurements in the field.

6.5.2 Intermediate Terrain Corrections (zones D to F)

These distances are normally too far to estimate in the field from a station, so they are calculated from topographic maps of scale 1:10,000 and 1:25,000.

Template method: use a Hammer zone template for D, E & F at the same scale as the map it overlays.

Figure 6/15: Intermediate terrain corrections

i) Place the centre of the template on the station location

ii) Estimate the topographic height in each segment of zone D and determine the height difference Δh with the gravity station

iii) Convert Δh to mGal using Hammer's tables

iv) Repeat ii) & iii) for all segments of D, then sum their gravity effects together

v) Do ii) to iv) for zones E & F

vi) Sum all gravity effects for zones D, E & F together to give the Intermediate T

6.5.3 Outer Terrain Corrections (from zone F to
22 km or 167 km)

You could continue to use the template method, but it is prone to human error of up to 20%.

To minimise human error, use a computer method which involves generating a digital terrain model (DTM). If you have access to a high resolution DTM then there may be no need to do the D, E & F zones by the template method. Since such high resolution DTMs are not normally available, the following method is normal practice.

For a rectangular area containing the survey area, digitise the topographic heights onto a 0.5 km grid. The area of the grid should be the survey area extended by 22 km on all sides.

Computer program
For a given gravity station:

i) calculate the distance from the station (UTM coordinates) to the first grid node in metres

ii) determine the height difference between the station and the grid node in metres

iii) knowing the grid size, calculate the exact terrain correction gravity effect at the station using the rectangular prism formula. A standard closed form for the vertical attraction of a prism (e.g. Nagy, 1966) is

Δg = Gρ ||| x ln(y + r) + y ln(x + r) - z tan⁻¹(xy / (zr)) |||

evaluated between the prism limits x1..x2, y1..y2, z1..z2, with r = (x² + y² + z²)^(1/2)

Figure 6/16: Exact prism formula

iv) Repeat i) to iii) for each grid node for distances out to
x(e.g. 20 km)

v) sum gravity effects from all grid nodes to give Terrain
Correction out to 20 km

Repeat i) to v) for each gravity station
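A sketch (Python) of step iii) for a single grid-node prism, using the Plouff/Nagy closed form quoted above; coordinates are relative to the station with z positive down, and the grid dimensions are illustrative.

import math

G = 6.674e-11           # m^3 kg^-1 s^-2;  1 m/s^2 = 1e5 mGal

def prism_gz(x1, x2, y1, y2, z1, z2, rho):
    """Vertical attraction (mGal) at the origin of the prism
    x1<x<x2, y1<y<y2, z1<z<z2 (metres, z positive down, rho in kg/m^3)."""
    total = 0.0
    for i, x in ((1, x1), (2, x2)):
        for j, y in ((1, y1), (2, y2)):
            for k, z in ((1, z1), (2, z2)):
                mu = (-1) ** (i + j + k)
                r = math.sqrt(x * x + y * y + z * z)
                total += mu * (z * math.atan2(x * y, z * r)
                               - x * math.log(r + y)
                               - y * math.log(r + x))
    return G * rho * total * 1e5

# 500 m x 500 m prism, top 100 m below the station, 200 m thick, 2670 kg/m^3:
print(prism_gz(200.0, 700.0, -250.0, 250.0, 100.0, 300.0, 2670.0))

Summing prism_gz over every node of the DTM grid out to the chosen distance, for each station, implements steps iv) and v).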

6.5.4 Terrain corrections for land, marine and air based measurements

Land: for land based stations the terrain correction is positive both for hills above the station and for valleys and marine areas below the station (see Fig 6/17).
Marine: since measurements are normally made at the sea surface, the resulting terrain correction can be both positive and negative (see Fig 6/18).
Airborne: similar to marine but at variable height above sea level; the terrain correction can be both positive and negative.

Ref: for Leeds based students see Anna Dyke MSc
dissertation 1997



Figure 6/17: Land based gravity stations

The gravity effects of valleys and hills are such that they reduce the observed gravity value at the station; hence the terrain correction is positive. Marine terrain effects for land stations can be divided into two parts, both of which are positive: the effect of the void above sea level in a marine area is similar to the effect of a valley, while the Bouguer correction assumes the water is rock at 2.67 g/cc whereas it is water of density 1.03 g/cc. The terrain correction for sea water therefore assumes an effective density of 1.64 g/cc (= 2.67 - 1.03).


Figure 6/18: Marine based terrain corrections

The computation of the marine Bouguer correction assumes an infinite slab of water underlying the sea surface station, the slab's thickness being equal to the bathymetry at the station. Both land and seamounts increase the density, so their correction is negative and is computed using the density difference between sea water and rock (1.64 g/cc); land areas and deeper parts of the marine area both give positive corrections.

Since digital terrain models are commonly available
from the survey itself and from public domain sources,
there is a tendency to derive a 3D terrain correction
which can also take into account the curvature of the
Earth i.e. several independent corrections (flat slab,
terrain correction & curvature correction) can be
combined into one correction.

6.6 Curvature Correction (not
normally applied)

The Bouguer correction for the rock between the land
surface and the datum (taken as sea level) or between
the sea surface and the sea bed is computed as if the
rock or the sea water were an infinite horizontal slab.
The curvature correction, which was first applied by
Bullard (1936), is a modification of the slab
approximation to take account of the curvature of the
earth. Here the infinite slab is replaced by a spherical cap of radius equal to the outer radius of the Hayford O2 zone (1° 29' 58" or 166.735 km).

For land based stations the following expression, taken from Cordell et al (1982), has been used for the curvature correction (LCC):

LCC = -1.4639108×10⁻³ h + 3.532715×10⁻⁷ h² - 4.449648×10⁻¹⁴ h³ mGal

where h is the elevation of the observation point in metres.

For marine based stations the curvature correction cannot be approximated by the same expression, because of the lower density contrast between rock and water and the different geometric situation. In the land case, the radius of the earth is the inner radius of the cap, while in the marine case the radius of the earth is the outer radius of the cap. For marine gravity stations the following expression has been used for the curvature correction (MCC):

MCC = -6.40427×10⁻⁴ h - 1.54751×10⁻⁷ h² - 4.06303×10⁻¹⁴ h³ mGal

where h is negative and represents the water depth in metres.
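A sketch (Python) evaluating the two curvature-correction polynomials quoted above.

def lcc(h_m):
    """Land curvature correction (mGal); h = station elevation in metres."""
    return -1.4639108e-3 * h_m + 3.532715e-7 * h_m ** 2 - 4.449648e-14 * h_m ** 3

def mcc(h_m):
    """Marine curvature correction (mGal); h = negative water depth in metres."""
    return -6.40427e-4 * h_m - 1.54751e-7 * h_m ** 2 - 4.06303e-14 * h_m ** 3

print(lcc(1000.0))    # station at 1 km elevation: ~ -1.11 mGal
print(mcc(-3000.0))   # station over 3 km of water: ~ +0.53 mGal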


6.7 Density Determination using Gravity Data

BA = g_obs - g_th + 0.3086h - 0.04191ρh + T

In the above equation g_obs, g_th, h & T are known; the unknowns are the density ρ and the BA.

If a profile of gravity measurements is constructed over a topographic feature within the same geology, then the bulk density of the rocks making up the topography can be determined. Ideally use a hill or a valley (the latter without sedimentary infill) in a region of low regional gradient (i.e. where the wavelength of the hill << the wavelength of the regional gravity). Given the choice of a hill or a valley, choose the hill, since valleys can be erosion features controlled by a change in geology from one side of the valley to the other, and can contain sediment infill which will distort the gravity field.

Two methods are available to calculate the density.

6.7.1 Parasnis's Method

Rearranging the Bouguer Anomaly (BA) equation we get

(g_obs - g_th + 0.3086h) = ρ(0.04191h - T) + BA

where T is the terrain correction computed for unit density. This equation is in the form of a straight line (y = mx + c) whose slope is ρ. It assumes that the BA is constant, subject to random error, which will be the case if the assumption about the regional gradient is correct.
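A minimal sketch (Python) of the regression, using a hypothetical profile constructed so that the recovered slope is ~2.5 g/cc and the intercept (the BA) is ~5 mGal; T_unit is the terrain correction per unit density.

import numpy as np

g_minus = np.array([-15.51, -19.61, -25.77, -30.89, -36.04])  # g_obs-g_th, mGal
h = np.array([100.0, 120.0, 150.0, 175.0, 200.0])             # heights, m
T_unit = np.array([0.05, 0.06, 0.08, 0.09, 0.11])             # mGal per (g/cc)

y = g_minus + 0.3086 * h            # left-hand side of the equation above
x = 0.04191 * h - T_unit            # multiplies the unknown density
rho, ba = np.polyfit(x, y, 1)       # slope = density, intercept = BA
print(f"density ~ {rho:.2f} g/cc, Bouguer anomaly ~ {ba:.2f} mGal")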


Figure 6/19: Density determination using the relation between the gravity variation and a topographic feature.



Figure 6/20: Regression analysis

6.7.2 Nettleton's Method

Figure 6/21: Nettleton's Method

Calculate Bouguer anomaly values at each station along the gravity profile for an assumed density ρ. If the density is less than the bulk density of the rocks making up the topography, the BA profile will have a positive correlation with the topography (ρ = 0 gives the maximum positive correlation, since then BA = free air anomaly). The converse is true: if ρ is higher than the bulk density there will be a negative correlation. Thus zero correlation occurs when ρ = bulk density.

Thus just by plotting the same profile with a range of different densities you can 'eyeball' the approximate density. A more precise estimate of the density can be determined by calculating the correlation coefficient:

Corr. Coef. = Σ(Δg_i - Δḡ)(H_i - H̄) / [Σ(Δg_i - Δḡ)² Σ(H_i - H̄)²]^(1/2)

where Δg_i and H_i are the BA and elevation at station i, and Δḡ and H̄ are the averages (arithmetic means) of the BA and H station values along the profile.


Figure 6/22: Correlation coefficient
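A minimal sketch (Python) of Nettleton's density scan: compute the BA for a range of trial densities and pick the one whose BA profile is least correlated with topography. The profile here is hypothetical, built with a bulk density of 2.4 g/cc plus small noise, so the scan should return ~2.4.

import numpy as np

def nettleton_density(g_fa, h, T1, densities):
    """g_fa = g_obs - g_th + 0.3086*h; T1 = terrain corr. per unit density."""
    best, best_r = None, np.inf
    for rho in densities:
        ba = g_fa - rho * (0.04191 * h - T1)
        r = abs(np.corrcoef(ba, h)[0, 1])     # correlation with topography
        if r < best_r:
            best, best_r = rho, r
    return best

h = np.array([100.0, 140.0, 180.0, 150.0, 110.0])       # hill profile (m)
noise = np.array([0.02, -0.01, 0.03, -0.02, 0.01])      # mGal
g_fa = 10.0 + 2.4 * 0.04191 * h + noise                 # built with rho = 2.4
print(nettleton_density(g_fa, h, np.zeros_like(h), np.arange(1.8, 3.0, 0.1)))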


6.8 Oil company specification for high-resolution land gravity surveys for the UK

Bouguer Anomaly = g_obs - g_th + 0.3086h - 0.04191ρh + T

Location (g_th): latitude to ±10 m for 0.01 mGal
Height (h): ±5 cm in FAC + BC for 0.01 mGal
Gravity meter: LaCoste Romberg
Gravity meter calibration: before or after survey, to check the manufacturer's value
Meter drift: <3 hr survey loops, with drift for any loop < 0.01 mGal per hour after tidal correction
Base station network: 1 per 50 sq. km tied to IGSN71, with error in g_obs less than 0.02 mGal
Gravity datum & formula: normally specified by the oil company
Terrain correction: specified by the oil company
Repeat readings (g_obs): at 5% to 10% of stations, with standard deviation ±0.02 mGal
Station density: 2 per sq. km

What is the minimum wavelength anomaly that such a survey will preserve in the data?

This specification will enable the Bouguer anomaly to be calculated to an accuracy of ~0.05 mGal, thus permitting contours to be drawn at 0.2 mGal intervals. Often the greatest error is in the determination of the terrain correction.

Reconnaissance surveys: helicopter-type surveys covering large areas - specifications are relaxed. Heights are determined using barometric and/or DGPS methods, where the height accuracy can be better than 1 m, equivalent to 0.2 mGal (remember DGPS gives geodetic heights relative to the ellipsoid, not orthometric heights). The accuracy of height, position and terrain correction are normally the main parameters which limit the accuracy of a survey.


6.9 Micro Gravity Surveys

6.9.1 Instruments

A modified LaCoste & Romberg gravity meter is generally used. LaCoste have a model D meter with analogue electrostatic feedback which automatically nulls the gravity meter and gives readings to 1 microGal (i.e. to 0.001 mGal). EDCON Inc. have also modified the LaCoste meter and call their meter SUPER G; the newest L & R meter, the EG meter, now replaces the G and D meters (Figure 6/1). The basic design changes/upgrades include:

- automated electrostatic nulling system

- PC-based software to streamline field operations

- electrostatic levelling

- improved thermostat circuitry

- data logging

- continuous tide-corrected station gravity

These modifications allow repeatability of readings to 2.0 microGal (0.002 mGal) and resolution to 0.1 microGal (0.0001 mGal). This allows the instrument to be used to investigate civil engineering sites for sub-surface density variations (e.g. cavities, backfill, etc.).

6.9.2 Field Operations

Base Station: Need a reliable sheltered base station
(allowing for bad weather)

Survey Procedures: To achieve microgal accuracy

- allow meter to settle (45 min.) to external temperature
with beam unclamped.

- readings taken in loops for 20 to 40 min. starting and
finishing at base station.

- two repeat readings made in every loop to check on
repeatability and quality of data

- orientation of meter, carrying case and observer kept
constant

- height of meter above gravity station measured at
same point on the meter

- read gravity at field stations in semi random order so
that local anomalies, due to incorrect drift adjustment,
are spatially de-correlated.

- handle instrument with extreme care

6.9.3 Operation of Meter:

- switch off the nulling and feedback circuits at the base station and take measurements as normal (as model G)

- enter the counter value for the nulling position into the PC that is connected to the meter

- this will allow SUPER G to have a range of 3 mGal (= 9.7 m of free air correction), which is normally OK for most surveys; if not, a second base station would be needed

- enter the station name and occupation number into the PC. In general 3 readings are taken (i.e. 3 averages of successive 60 second periods of sampling, done automatically by the PC). If the readings vary by more than a few microGal, the instrument is re-levelled and the reading repeated.

- Clamp beam and record the meter reading and time.

- Measure height of meter to +/-5 mm

- return meter to carrying case and move to next
station.


6.9.4 Data Reduction

Same as for conventional land gravity data but far more precise. This includes using a digital terrain model (DTM) of the survey site and an extended area to calculate the terrain correction accurately.


6.11 Changes due to GPS

Since GPS measures locations and heights relative to the WGS84 ellipsoid, and not relative to sea level (the geoid), see Section 5, there is a serious move to reprocess gravity data to the ellipsoid (North American Gravity Database Committee, 2005). See also Section 5 for the differences between the ellipsoid and the EGM96 geoid.

6.12 Land Survey Design

Land based surveys can be stand-alone, and often their original purpose was to investigate the subsurface geology by obtaining a uniform coverage of gravity measurements. The British Geological Survey aimed at 1 gravity station per km², while the Polish and Czech surveys went for 7 stations per km². Generally, in land oil exploration 2 stations per km² is satisfactory for initial evaluation.

Gravity surveys specifically for oil exploration are
normally undertaken along seismic reflection lines at spacings of 50 to 200 m. Figure 6/23 shows an example of some Vibroseis™ lines along roads and tracks, and the necessity to infill around the seismic lines to determine structural closures so that more detailed seismic (possibly with explosives) can be designed. In Fig 6/23 the gravity was used to correlate gravity highs with structural anticlines, and then the gravity method was used (being cheaper than seismic) to map out the possible extent of the anticlines.



Figure 6/23: Land gravity survey, initially undertaken
along seismic lines then spatially to map out extent
of anticlines.

6.13 Quality Control (QC) of Land
Gravity Surveys

Many of the factors that control a land gravity survey have been covered in Sections 4 and 5 of the course notes. Here an attempt is made to bring together the important points, since an oil company will wish the survey to be collected to the highest possible accuracy and will generally employ a consultant to work on its behalf to ensure the acquisition contractor carries out the survey within the specification set. Remember that acquisition contractors wish to do a very professional job, but sometimes, for various reasons, the specifications do not quite meet the specific problems found within a given survey area. The QC consultant's job is not normally to wield a big stick over the acquisition contractor, but rather to resolve practical field work problems on behalf of the client, thus helping the contractor and making the survey as efficient as possible without compromising the quality of the data.

Quality control often means visiting the survey site at the start of the survey, possibly at 50% completion, and at the end (or decommissioning) of the survey. At all other times the QC work can be done remotely by email/ftp, resulting in weekly reports.

6.13.1 Pre Survey QC

Base station tie: prior to the survey, or at a suitable time, the base station established in the survey area is linked to an IGSN71 gravity base station. A primary survey base station (PB) will be established and tied to the IGSN71 datum by multiple loops (at least 3) of measurements between an IGSN71 station and the primary survey base station. Successive absolute gravity values at PB (for each loop) should not differ by more than 0.05 mGal after the application of all corrections.

Static gravity meter test (Fig 6/24): all gravity meters used in the survey need to undergo a static test to ensure all are performing correctly. This normally involves measuring on all instruments for 24 hrs at 30 minute intervals to ensure that, after tidal corrections, the drift of all instruments is <0.1 mGal/hr (see Fig 6/25). With the CG-5 meters the tidal correction is applied within the instrument, so long as the location, time and date have been input.

Calibration test: this is to measure all survey instruments at two stations with a large gravity difference (normally specified to span the working range within the survey area), say >30 mGal. The sequence of measurements is A-B-A-B (where A and B are the two stations). All meters need to agree to < 0.0005 mGal/s.d. (where s.d. is scale division). If not, the calibration of individual instruments is adjusted to the mean of the other instruments.

6.13.2 Production Survey QC

Station location: if the station is on a grid and pre-specified, then the location has to be < 25 m from the planned location. However this is generally relaxed to 100 m to avoid problem locations (e.g. rivers, lakes and marshland for gravity, and human cultural areas for magnetic measurements). Remember the client wants the best quality data. Station locations are pegged for precise relocation. Figure 6/26 shows a graphical display of the location requirement where specifications were relaxed to <100 m, with agreed exceptions.

Gravity meter instrument performance: drift of the gravity meter (after tidal corrections) is a major problem that needs correcting for. The survey specifications will indicate the time period of any survey loop (the period between repeat measurements at a base station). A gravity survey loop can be from ~3 hours to <1 day depending on the difficulty of the area. For the case shown in Fig 6/27, loops were one day with a maximum permitted drift of <0.2 mGal. Drift within a survey loop is generally considered to be linear with time. To minimise drift the gravity meter needs to be carried very carefully; when transported by vehicle it should be carried on the lap or in a harness to protect it from vibration. Placing the meter on the vehicle floor will subject it to vibrations that will adversely affect the drift correction.



Figure 6/24: Six CG-5 instruments undergoing a
static test prior to a survey.

[Plot: drift of gravimeter CG5-081 against time of day over 24 hours; vertical scale -0.01 to 0.06 mGal]


Figure 6/25: Drift curve over 24 hrs showing total
drift of <0.06 mGal



Figure 6/26: Station location: horizontal shift from the planned km-grid station location. Of the stations shown, the average shift is 26 m and the maximum is 129 m; only two stations exceeded 100 m, for acceptable reasons.



Figure 6/27: Drift corrections for each gravity station. Since consecutive numbers represent a gravity survey loop, each saw tooth is a loop and the maximum drift is shown.

Normally drift is negative, since solid state creep tends to lengthen the spring with time. Thus when the drift is close to zero and/or positive, something may have occurred to the instrument, and the survey loop and the spatial distribution of its values on the map need to be examined more closely. It is best to colour code the instruments in Figure 6/26 and the station locations on maps to check for potential problems.

Repeat measurements: normally 5% of stations are repeated to check the repeatability of the gravity measurement (<0.05 mGal). This is about one station per day, re-occupied from a previous loop by the same instrument. The position and height should also be re-measured to check their repeatability.


Figure 6/28: Gravity differences at Repeat stations

6.13.3 Spatial QC of the data

Once per week the new data (heights and Bouguer gravity) are gridded up with the existing data so that spatial problems, if any, can be identified. This can be done by generating simple residuals (centre point value less the mean of the surrounding 4 measurements).

Figure 6/29: Left: Bouguer anomaly map of new stations; Right: residuals (centre point value less the mean of the surrounding 4 measurements). Any poor reading or processing would show up.

SECTION 7: MARINE GRAVITY - Acquisition & Processing (Dynamic and Static)


7.1 Observed Marine Gravity (g_obs)

Collecting gravity on a moving boat is dynamic, whilst lowering a meter to the sea bed and measuring there is static.

To fully appreciate the recent advances in technology it is important to understand how marine gravity data have been, and are, conventionally collected on a moving vessel that is subjected to a range of horizontal and vertical accelerations due to sea state and the movement of the vessel. The vertical acceleration of gravity is the signal that we wish to separate from the background accelerations of the vessel. The gyro-stabilised mounting of the gravity sensor minimises the effects of horizontal accelerations. The vertical accelerations perceived by the stabilised gravity sensor are the superimposed effects of signals resulting from:

Geology -- variations in which constitute the geologic signal that we wish to isolate and measure. The geologic signal variations are small: often less than 1.0 mGal and rarely more than 50 mGal. The shortest wavelengths of geologic anomalies, at typical seismic survey boat speeds (5 knots), amount to a few minutes.

Wave motion -- vertical accelerations caused by wave
motion are rarely less than 10,000 mGal and in rough
weather can exceed 100,000 mGal. Fortunately,
vertical wave motion is normally confined to periods
shorter than about 60 seconds. At wavelengths of
geological interest (2 or 3 minutes and more), vertical
wave motion is generally much less than 1 mGal. This
large difference in signal and noise amplitudes poses a
substantial filter design problem for both analogue and
digital systems.

Eotvos effect -- the Eotvos effect is the change in
vertical acceleration f that will act on any moving object
or ship on a spinning earth. The correction for this
effect is proportional to the ship's eastward component
of velocity. Relatively small variations in velocity result
in changes of several mGal that can easily be confused
with geologic signal because the changes in ship's
velocity can occur at wavelengths similar to those of
geologically caused anomalies. Relative variations in
the Eotvos effect increase with rougher weather.

Assume the velocity of a ship travelling west to east is v (relative to the earth), so that the effective angular velocity of the ship about the earth's axis increases by dω. Because f = ω²r and v = r dω:

df = 2ωr dω = 2ωv

Since the contribution of f to g depends on latitude φ, the change in g for due-east travel is

dg = df cos φ = 2ωv cos φ

and for a general heading only the eastward velocity component v sin α contributes, giving

dg = 7.487 v cos φ sin α mGal

where v is in knots and α is the heading of the ship w.r.t. north. Thus the maximum change in dg occurs at the equator when travelling E-W (α = 90°).

The size of the Eotvos effect for typical ship or helicopter speeds is shown in Table 7.1.

Table 7.1: Eotvos Corrections

Latitude   Speed       Heading   Correction (mGal)
0 deg      10 km/hr    North       0.1
0 deg      10 km/hr    East       44.6
0 deg      100 km/hr   North      12.1
0 deg      100 km/hr   East      415.9
60 deg     10 km/hr    North       0.1
60 deg     10 km/hr    East       20.2
60 deg     100 km/hr   North      12.1
60 deg     100 km/hr   East      214.0

The heading and speed need to be known very well to keep errors in the Eotvos correction low. An absolute reference system such as GPS allows good control and keeps these errors small.

The uncertainty in this correction has been one of the largest sources of error in the calculation of the Bouguer and free air anomaly, since a 3% error in speed can produce an error of ~1 mGal.
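A sketch (Python) of the correction, using the linear term derived above together with the standard second-order v² term (0.004154 v², v in knots), which the north-heading entries of Table 7.1 evidently include.

import math

def eotvos_mgal(speed_knots, heading_deg, lat_deg):
    """Eotvos correction: 7.487*v*cos(lat)*sin(heading) + 0.004154*v**2 mGal."""
    linear = 7.487 * speed_knots * math.cos(math.radians(lat_deg)) \
                   * math.sin(math.radians(heading_deg))
    return linear + 0.004154 * speed_knots ** 2

v = 100.0 / 1.852                       # 100 km/hr in knots
print(eotvos_mgal(v, 90.0, 60.0))       # ~214 mGal, cf. Table 7.1
print(eotvos_mgal(v, 0.0, 60.0))        # ~12 mGal (v**2 term only)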

Cross-coupling effect -- although some meter designs (e.g. the Bell gravity meter) minimise this effect, all types of ship borne gravity meter can have errors caused by the interaction of the effects of vertical and horizontal accelerations on the gravity meter or stabilised platform. These cross-coupling errors can occur only when the accelerations have the same periods and there are systematic phase relations between them. The effects are more severe in rough weather, and their variations typically have wavelengths and amplitudes similar to geologically caused anomalies.


7.2 Marine Gravity Meters (Dynamic)


7.2.1 Bell Aerospace BGM Gravity Meter

This is a forced feedback vertical accelerometer or
inertial navigation-grade accelerometer mounted on a
gyrostabilised platform. See Fig 7/1 for basic physics of
the design. The accelerometer is a proof mass
wrapped in a coil that is constrained to move vertically
between two permanent magnets. The physical
principle of the sensor design is that a balance exists
between the gravitational force acting on the proof mass
and the electromagnetic force induced in the coil. This
force balance maintains the proof mass in a constant
(null) position. The current in the coil varies
proportionally to changes in vertical accelerations.
Changes in the position of the proof mass are detected
by a second-order servo loop which regulates the
current in the coil and drives the proof mass back to its
null position.

The output from the accelerometer is a current proportional to vertical acceleration in the range 0 to 200 Gal. By summing the output with a reference bias of 880 Gal, the system can respond to the range of vertical accelerations normally encountered at sea. The current is then digitally filtered (this replaces the older R-C filter, which had a 4.5 s time constant to prevent leakage of high frequencies).

Figure 7/1: BGM Accelerometer

7.2.2 LaCoste & Romberg Sea Gravity Meter

(Old System) The same mechanism as used for the
land gravity meter is used. Since the zero length spring
is inclined to the vertical, then the mass and zero length
spring will respond to horizontal accelerations. This is
known as the cross coupling effect.

Marine gravity systems have until recently applied a 3
minute RC filter (see Fig 7/2) to computed gravity and
the resulting signal has been sampled at 10 sec
interval. This sampling rate is more than adequate to
define geologic features with wavelengths greater than
0.5 km (i.e. a vessel moving at about 5 knots will cover
0.5 km in about 3 minutes, giving 18 samples).
However, changes in the Eotvos effect are often quite sharp and require higher sampling rates for their definition (e.g. compare Figures 7/3 and 7/4). A common result of inadequate sampling is known as spectral leakage or aliasing; the result of leakage is that the low frequency signal is contaminated by higher frequency noise.

Another limitation of the data from conventional systems
is the broad range of anomaly wavelengths attenuated
by RC filters. The response of the 3-minute RC filter
typically used in analogue systems is shown in Fig 7/2.
The nomenclature for an RC filter refers to its time
constant which is only loosely related to frequency
response. By contrast, where a 4-minute, FFT, high-cut
filter is defined to attenuate 4-minute wavelengths by
50%, the 50% attenuation point for a 3-minute RC filter
is at an anomaly wavelength of about 6.5 minutes.
Moreover, the 3-minute RC filter attenuates anomaly
wavelengths for periods much longer than the nominal 3
minutes: 25% attenuation at 10-minutes and 10% at 17
minutes. The 4-min FFT filter causes no anomaly
distortion at all for periods longer than 8 minutes. The
flexibility of digital filter design allows a cleaner
separation of signal from noise with minimal signal
distortion.
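A sketch (Python) of the separation problem on synthetic 1 Hz data: a small long-period 'geologic' signal buried in large short-period wave accelerations, recovered with a zero-phase FFT high-cut. A brick-wall cut is used for brevity; production filters use tapered cut-offs such as those in Fig 7/2.

import numpy as np

fs, n = 1.0, 4096                                 # 1 Hz samples, ~68 minutes
t = np.arange(n) / fs
signal = 2.0 * np.sin(2 * np.pi * t / 256.0)      # 2 mGal, ~4 min period
noise = 20000.0 * np.sin(2 * np.pi * t / 8.0)     # wave-motion accelerations
spec = np.fft.rfft(signal + noise)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spec[freqs > 1.0 / 120.0] = 0.0                   # remove periods < 120 s
recovered = np.fft.irfft(spec, n)
print(np.max(np.abs(recovered - signal)))         # tiny residual, no phase shift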

7.2.3 Modified LaCoste & Romberg System

The design limitations of the analogue filters used in the
original LaCoste and Romberg S meters were originally
overcome by a redesign by EDCON Inc. to incorporate
the use of high sampling rates and digital filtering. The
system uses the proven LaCoste and Romberg sensor.
The sensor itself is unmodified, although the design of
the control and recording electronics eliminates
relatively short period noise sources that were present
in the old systems. The result is a signal that is quieter
at the marginal wavelengths of the old system and
retains meaningful resolution at shorter wavelengths
that the old systems missed entirely. The advantages
of these design improvements are most noticeable with
the high-accuracy position and velocity measurements
that are obtainable with satellite-based navigation
systems.

Some of the more effective modifications that result in a
fundamentally quieter system include isolation of critical
analogue signals prior to sampling and an improved
thermostat control system. To reduce aliasing, the
fundamental outputs of the sensor are filtered prior to
sampling using an analogue filter with a time constant of
about 1 sec. The initial sampling rate is 100 Hz, and
anti-aliasing digital filters are applied prior to recording
at 1 Hz i.e., 1 sample per second.

Figure 7/5 shows the modified system mounted in an instrument room near the centre of pitch and roll of a seismic vessel. Where the instrument room is far from the optimal sensor location at the centre of pitch and roll, cable links of up to 150 feet allow both optimal placement of the sensor and convenient placement of the control and recording electronics.

Figure 7/2: R-C and FFT Filter Responses. The
speed of the boat is normal seismic survey boat
speed 3-4 knots


Figure 7/3: Analogue gravity (after 3 min RC
filtering) and high resolution 1 second Eotvos
derived from satellite positioning (Starfix & GPS)


Figure 7/4: Same data as Fig 7/3 but collected with the modified LaCoste system, sampling 1 sec beam motion with no 3-min RC filtering. Note the strong negative correlation between the beam and Eotvos signals.

The Control Electronics Console consists of a PC
computer and the gyroscope power supply. Between
the two is a utility drawer containing the keyboard. At
the rear of the unit (not seen) is the System Junction
Box. The PC monitor is mounted on top of the
electronics console or other desirable location.
A continuous graphics printout of selected data traces is
provided as a QC and auxiliary data record.
Data are recorded on the PC computer hard disk
and archived onto 3.5" floppy disk approximately
two times per day.


Figure 7/5: Modified L &R system installed in ship's
instrument room

In addition to the 1 sec data file, the system also records: a 10 sec data file which emulates the standard 3-minute, RC-filtered, LaCoste and Romberg output; a message file which records ASCII text messages related to system changes and operation; and a monitor file which records key system monitor values, such as temperature and pressure, once per minute. The system is also extremely flexible when interfacing to external systems: four serial ports and four parallel ports accommodate the recording of navigation, magnetometer, water depth and other inputs, in a nearly arbitrary range of input formats.



Figure 7/6: The new LaCoste Romberg Air Sea
System II

The gravity recording systems are now also able to record the DGPS data giving the boat's position sampled once per second. The DGPS data come from the seismic contractor, or are provided for the gravity survey if it is a stand-alone survey.

7.2.4 LaCoste & Romberg Air-Sea System

LaCoste & Romberg introduced in 1999 a new meter (Air-Sea System II) which incorporates many of the design features introduced by EDCON Inc and in addition includes 10 Hz sampling, fibre optic gyros, etc. (see Fig. 7/6).

7.2.5 The ZLS Dynamic Gravity Meter

Figure 7/7: The ZLS Dynamic Gravity Meter

Features of the ZLS Dynamic Meter: externally it looks like a L & R Air-Sea System, but it has major design advances in the sensor. The new sensor eliminates the cross-coupling errors inherent in older beam-type L & R gravity meters by constraining the proof mass to vertical linear motion, and damper adjustments are no longer required. The sensor utilizes liquid damping, which virtually eliminates the sensitivity to vibration common in air-damped sensors.
Residual imperfection errors, due to minute variations in manufacture of the system, are typically three to five times smaller than those for beam type sensors. Unlike beam type meters, imperfection errors are stable with time and do not require regular testing to track changes. The new design also eliminates the 'slope error' prevalent in beam meters, which causes reading errors with beam position under dynamic conditions.
Raw data from the accelerometers, gyros and gravity transducer are digitized 200 times a second with a 16-bit A/D converter and processed by the embedded computer. Analog signal outputs to control the motor and gyros are provided by a 16-bit D/A converter, which is also updated at 200 Hz. Slightly filtered data are transmitted to the host computer once per second.
For further details see <http://www.zlscorp.com/>



7.3 Processing of Marine Gravity Data

The radical redesign and operation of the L&R gravity
meter has been matched by a major redesign of the
processing software to maximise the use of 1-second-
sampled gravity, navigation, horizontal accelerometers
and other auxiliary data.

7.3.1 Base Constant Determination and
Removal

Repeat still readings in port establish the base constants, provide an accurate basis for the meter drift correction, and serve as a quality control check on instrument performance.

7.3.2 Cross-coupling Corrections

The method used to correct for cross-coupling effects is
based on the cross-correlation method published by
LaCoste (1973). The basic premise of the method is
that gravity should not correlate with any variations in
the motion of the ship including the interactions
between vertical and horizontal accelerations (i.e., cross
coupling). Any apparent correlation between observed
gravity and one of the monitors of ship motion, such as
the product of vertical and horizontal acceleration, must
be false. The objective of the cross-correlation method
is to find a gain factor which when applied to the cross-
coupling monitor will result in minimum correlation. The
correction procedure is applied to the 1 sec. sampled
data. Each line is evaluated as part of the entire set.
Occasionally, subsets of lines acquired in exceptionally bad weather and on certain headings will benefit from gain factors different from the average. The criteria for improved cross-coupling corrections are a more coherent representation of observed gravity and improved survey line mis-ties.
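A minimal sketch (Python) of the idea with synthetic data: given a cross-coupling monitor and observed gravity containing an unknown leaked fraction of it, the least-squares gain that minimises the residual correlation is cov(g, m)/var(m).

import numpy as np

rng = np.random.default_rng(0)
monitor = rng.standard_normal(3600)                  # 1 Hz monitor, 1 hour
gravity_true = np.sin(np.linspace(0.0, 2 * np.pi, 3600))
observed = gravity_true + 0.15 * monitor             # 0.15 = unknown leakage

gain = np.cov(observed, monitor)[0, 1] / np.var(monitor, ddof=1)
corrected = observed - gain * monitor
print(round(gain, 3))                                # ~0.15, recovered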

7.3.3 Eotvos Correction

The Eotvos correction is proportional to the eastward velocity component of the ship; the effect, along with the related centripetal acceleration caused by the earth's rotation, diminishes with increasing latitude. Small errors in position and time can result in substantial errors in the corrected gravity, so proper determination of the Eotvos correction is critical to final data accuracy. Velocities determined directly from DGPS have proven to be the best source for Eotvos corrections on the 1-second data.


Figure 7/8: The relationship between the Eotvos
correction and ship direction and latitude.

The correction is refined using a time-varying cross-
correlation between the observed Eotvos and observed
gravity. The final derived Eotvos should have minimal
correlation with corrected gravity. One measure of the
quality of the observed Eotvos correction is close
agreement with the derived Eotvos correction (from
correlation filtering).

Figure 7/4 is a good example of close agreement
between observed and computed Eotvos. Note the
inverse correlation between observed gravity and
Eotvos is a good indication that both signals are valid.

7.3.4 Filtering

Digital filtering parameters are selected on a line by line basis according to apparent noise content and data quality. The processing philosophy is to make every effort at all stages of processing to maximise the retention of geological signal. Figure 7/2 demonstrates some of the advantages of digital filters over RC filters. Digital filtering can achieve attenuations of 100 to 150 dB for periods shorter than about 1 minute, while minimising side lobes and passing signals with periods greater than 2 to 4 minutes with minimal distortion. Such sharp cut-offs cannot be achieved with analogue filters.
7.3.5 Free-Air/Bouguer/Terrain Corrections

These are standard corrections, sometimes specified by the client, and can include full 3D Bouguer corrections to remove 3D variations in bathymetry over the study area; reference should be made to the previous land section for further details. Terrain corrections in this environment remove the effects of variable bathymetry and are normally done using 3D terrain corrections out to a certain distance (see Section 6.5.4).

7.3.6 Line Levelling

Despite the most rigorous attention to the retention of geologic signal and suppression of identifiable noise, small differences still remain between survey lines which cannot be accounted for even after tie line adjustment. Figure 7/9 gives an example of data from a 3D survey after tie line adjustment.



Figure 7/9: High-resolution marine gravity survey with ship track spacing ~150 m after tie line adjustment. Note the strong line-orientated noise; any filtering to remove this noise will degrade the signal.



Figure 7/10: Same survey as shown in Fig 7/9 after
GETECH's proprietary micro-levelling methods
applied.


Innovative and proprietary software has been designed
to minimise these differences (a process sometimes called micro-
levelling) so that the final Bouguer anomaly maps
contain the best possible visualisation of the short
wavelength data. Application of colour and shaded
relief can help to image the subtlety of gravity features.
Figure 7/10 shows the same survey (Fig. 7/9) after
GETECH proprietary micro-levelling.

7.4 High Resolution and Repeatability
of Marine Data

High resolution means the ability to identify low-
amplitude, short-wavelength signals that are
geologically meaningful. The noisy marine environment
obscures the resolution of subtle, but important,
geological features. For a given vessel and instrument
system, the noise level can vary greatly within a single
survey and is influenced by sea-state and the direction
the vessel is moving with respect to sea-state. Careful
attention to acquisition and processing using DGPS now
enables the new marine systems to measure anomaly
wavelengths down to 0.5 km to an accuracy of 0.2 mGal.

The following example is from the Gulf of Mexico
(Figures 7/11 to 7/13) and shows the location and a
comparison, under very good conditions, of gravity
collected over a salt dome by both an analogue
LaCoste & Romberg gravity meter and the new system.
The two meters were installed side by side and
operated simultaneously. Bottom measurements
provide the standard for comparison. Note that in
Figure 7/13 the new system data honour the shape of
the anomaly remarkably well.

Figure 7/11: location of test survey in Gulf of Mexico

The analogue shipborne gravity data (Fig. 7/12), in
contrast, lack the short-wavelength character and are
unable to match the gradients, amplitude and the subtle
anomaly over the top of the dome. The implication for
subsequent modelling is that deeper structure would be
inferred from the analogue data.


Figure 7/12: Test between analogue data and
bottom data over a salt dome in the Gulf of Mexico.
The analogue data do not match the gradients.


Figure 7/13: Same as Fig. 7/12 but with the modified
L&R meter replacing the analogue meter. Note
the ability of the modified L&R to honour the high
frequency anomalies and gradients.


Claims of high resolution are somewhat academic
without the ability to prove repeatability of observations
to the level claimed. This can be demonstrated by
repeating lines within a survey or, when profiles are
separated by 150 m, by comparing adjacent lines. The
repeatability should be between 0.2 to 0.3 mGal, but
this will depend on sea conditions since data quality
decreases with sea state.

7.5 Sea Bed Gravity Measurements
(Static)

Gravity is measured on the sea bed using a remote
underwater gravity meter. This meter is housed in a
specially designed capsule and lowered to the sea bed
(Figs 7/14 to 7/16). Once on the sea bed the gravity
meter automatically levels itself and readings are taken
remotely and logged on the boat. Since the gravity meter
is stationary, as with land instruments, the complex
corrections that are necessary for a boat-mounted
gravity meter recording in a dynamic mode are not
needed. However, the data reduction differs from land-
based measurements: since the water column is now
above the meter rather than below it, special care
has to be taken in calculating the Bouguer correction.
The Bouguer anomaly is now

BA = g_obs − g_th + 0.3086h − 0.04191Δρh + T mGal

where h is below sea level and is thus negative (i.e.
height in this equation is positive upwards from sea
level, so measurements on the sea bed are at negative
heights) and Δρ is (reduction density − water density),

e.g. for a reduction density of 2.20 g/cc and a water density of 1.03
g/cc, Δρ = 1.17 g/cc.
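A minimal Python sketch of this sea-bed reduction, using the constants and example densities above (the gravity values in the example call are hypothetical):

def seabed_bouguer_anomaly(g_obs, g_th, h, terrain=0.0,
                           rho_rock=2.20, rho_water=1.03):
    # Bouguer anomaly (mGal) for a sea-bed station; h is in metres and
    # is negative below sea level. The slab term uses
    # delta_rho = rho_rock - rho_water because the water column is
    # above the meter.
    delta_rho = rho_rock - rho_water          # e.g. 2.20 - 1.03 = 1.17 g/cc
    return g_obs - g_th + 0.3086 * h - 0.04191 * delta_rho * h + terrain

# Station on the sea bed in 100 m of water (h = -100 m):
print(seabed_bouguer_anomaly(g_obs=981000.0, g_th=981020.0, h=-100.0))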



Figure 7/14: The Underwater gravity meter (U meter)

7.6 Survey Design and Maximising
Data Recovery

Undertaking a gravity survey on land, at sea or in the air
provides some opportunities for survey design and/or
maximising data recovery.

In marine surveys the most common way to collect
gravity is on board a seismic survey boat during the
acquisition of the seismic data. Thus survey design is
very much controlled by the seismic acquisition group.
Combining surveys keeps costs to a minimum and
provides gravity along each line of the seismic survey.
Stand-alone surveys (other than gradiometer surveys;
see Section 9) are rare due to the costs involved in
boat and GPS navigation hire.

However, since oil companies funding a proprietary
marine gravity survey are paying for the gravity system
and operator for the duration of the survey, they can by
agreement make sure the instrument and associated
navigation and bathymetry systems are kept switched
on throughout the survey period, with the possible
exception of when the ship is turning, since many marine
systems use a feed-back system from the horizontal
accelerometers located on the gyro platform (the platform
for the gravity meter) to the gyros themselves. This
feedback is not used in airborne survey systems. When
the boat turns, the true horizontal can be biased by this
feed-back due to the velocity changes of the ship.
Getting the gyro platform truly horizontal again then takes a
few tens of minutes of straight-line navigation to remove
the bias.



Figure 7/15: The gimbal device for levelling the
meter



Figure 7/16 LaCoste & Romberg model U gravity
meter being lowered over the side of a boat.


In Fig. 7/17 the contracted open 2D survey area is
shown in RED, but by keeping the gravity meter
switched on more than 90% of extra data, the
YELLOW lines, have been acquired free of charge.
These extra data have the ability to extend the study
area and provide additional tie lines. The main reason
these extra lines can be collected is that a
seismic boat towing 3-6 km of streamer
has to go well beyond the survey limits before the full
fold of data has been collected and the boat can start
turning. The boat also has to straighten up well before
the start of the next line. Because of the large turning
circle of the boat, the survey is done in panels. This can
be seen in the outline of the green area in the 3D survey in
Fig. 7/18, where the NW end of the survey is highlighted in green
to show this panelling. There is also survey down-time
for testing and replacing equipment, or time sharing with
other survey ships in the vicinity, that generates these
extra lines.



Figure 7/17: The benefit of having close control on
marine surveys: in this case up to 90% of additional
data were generated for this 2D survey.

The next example (Fig 7/18) is of a gravity survey
collected on a 3D seismic survey where the ship track
separation is 150 m. This survey, like many such
surveys, contains no programmed tie lines since these
are not considered necessary for seismic studies.
Again, keeping the instrument switched on shows the
amount of extra data that can be collected: in this case
in excess of 100%. Tie lines (RED) have also been
generated from the extra data.



Figure 7/18: Geometry of a 3D survey showing the
turning circles and the panelling of data acquisition.

Another way of putting the proprietary survey into its
regional setting is by windowing the survey onto the
satellite-derived gravity. This is shown in Fig. 7/19 using
pastel colours for the satellite-derived data.



Figure 7/19: Merging a proprietary survey with
satellite data to show regional gravity setting of
study area


SECTION 8: AIRBORNE GRAVITY DATA-
Acquisition & Processing

8.1 Static Measurements

The use of helicopters to transport a gravity meter and
observer rapidly from station to station is a well
proven method of collecting ground-based gravity
readings in areas of poor access. Radio navigation and
barometric heights have been replaced by DGPS,
although barometric levelling is still often used as a
height backup.

A new development in 1997 by Scintrex is the HeliGrav
method. Just as a boat can take remote readings on the sea
bed with a fully automated levelling and reading gravity
meter system, Scintrex has developed a helicopter-
borne system to take readings remotely from the air by
lowering a gravity module onto the ground.

Figure 8/2 shows the module suspended from the
helicopter. Position and height are determined from a
GPS antenna on the helicopter. This antenna is
visible to the satellites, so the Lat./Long. location of the
station presents no problem. The station height can be
accurately determined by taking the GPS height
measurement of the antenna at the time the module
touches the ground and subtracting the known cable
length. When the module is on the ground it self-levels
and takes a reliable reading within about 2 minutes.

To provide a reliable reading from the module (see Figure
8/1), the cable is allowed to rest on the ground with the
helicopter off to one side. Clearly there are limitations in
thickly forested areas, but in many open areas this
method is claimed to be much more rapid than
conventional ground surveying. Work rates are claimed
to be approx. 70 stations a day compared to 30 by
conventional land-based methods.



Figure 8/1: HeliGrav gravity module



Figure 8/2: HeliGrav method of airborne gravity
surveying.

Specifications: The HeliGrav uses the Autograv gravity
meter (made by Scintrex) and can work under water, so
measurements can be made over lakes with water
depths down to a maximum of 10 m; the sensor reads to 5
microGal (0.005 mGal) with a reading accuracy of <25
microGal; self-levelling to 50 arc seconds; levelling time
17 seconds; weight of module 50 kg.

8.2 Dynamic Measurements - Intro

Taking gravity measurements in an aircraft (Fig. 8/3) is
intrinsically more difficult, but is in many respects similar
to taking gravity measurements at sea.

Similarities are:
- Similar types of gravity meter are used; since the
measurements are taken on a moving platform there is the
need for a damped instrument, a gyro-stabilised platform
to keep the instrument vertical, a cross-coupling correction
and an Eotvos correction
- DGPS is needed for high accuracy 3D locations
- The major similarity is that resolution decreases with
increasing swell (marine) and turbulence (air); turbulence
is not considered such a problem when using INS
technology




Figure 8/3 Airborne gravity acquisition (from GFZ)

Differences are:
- The aircraft travels at much higher speeds, so the
Eotvos corrections are larger (common speeds are ~>100
knots)
- Need to measure the height of the aircraft and its
clearance above topography, and register these to a known
datum. Height clearance is usually measured by radar. The
datum can be determined from the DGPS plus knowledge of the
local geoid (remember satellite DGPS heights are
measured relative to the reference ellipsoid used)
- Aircraft generally have smaller cross-coupling effects
than marine surveys
- Resolution as of 2001 (wavelengths used by
contractors):
Marine gravity surveys can measure geological
signals with wavelengths equal to and greater than 0.5
km at 0.1 to 0.2 mGal.
Airborne gravity resolution needs to be sub-divided
into:
a) fixed wing flying at 100 knots: as low as 2
km at 0.2 mGal
b) helicopter flying at ~50 knots: down to 0.6-
1.0 km at 0.2-0.3 mGal
(In the mid 1990s the resolution was 3-5 mGal at 7 km, so
there have been massive improvements in the last 5 years)

With the above resolutions:
- Costs ($80 to $90 per line km) vary with
location, size of survey and competition.
- Cost effective over jungle, swamps etc.
- Since the observing platform is more distant from the
gravitational source the signal will be attenuated (longer
wavelength and smaller in amplitude).
The resolutions quoted by airborne contractors are under
ideal (stable air) conditions. Air stability is always a
problem and affects resolution. Test lines are generally
undertaken within or adjacent to the survey area, and
flying height will be controlled by safety factors and
acceptable air turbulence; generally the higher you fly
the more stable the air conditions, but this decreases
the signal amplitude you can observe.

Although airborne gravity has been available as a
technique for more than two decades, it has only been
widely accepted in oil exploration since the mid
1990s. From the late 1990s to 2001 the emergence of
a number of competing contractors offering airborne
gravity services with innovative acquisition
developments had a significant effect on improving
resolution. Since airborne methods have inherent
advantages over ground-based acquisition in difficult
terrain, particularly their uniformity of
coverage, it is likely there will be significant continued
growth in this acquisition method.

The accelerations measured by an airborne gravity
meter are:

A_m = A_FAA + A_aircraft + A_Eotvos + A_theo + A_FAC

where
A_m = measured acceleration,
A_FAA = free air anomaly,
A_aircraft = aircraft vertical accelerations,
A_Eotvos = Eotvos correction,
A_theo = theoretical gravity,
A_FAC = free air correction.
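Isolating A_FAA from the budget above is, in principle, a simple rearrangement; a minimal sketch (assuming all terms are supplied as 1 Hz series in mGal) is:

import numpy as np

def free_air_anomaly(a_measured, a_aircraft, a_eotvos, a_theo, a_fac):
    # Rearrange A_m = A_FAA + A_aircraft + A_Eotvos + A_theo + A_FAC.
    # a_aircraft is the GPS-derived vertical acceleration; in practice
    # the result is then heavily low-pass filtered, as described below.
    return np.asarray(a_measured) - a_aircraft - a_eotvos - a_theo - a_fac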

Isolating the earth's gravity signal from the
accelerations introduced by the measuring
environment is a major task in gravity surveys made
from a moving platform. The task is to extract A_FAA from
A_m. In the marine environment the gravity sensor is on
the equipotential surface and any vertical acceleration
due to waves can be filtered out since their periodicity is
smaller than the equivalent period of the geological
signal (period is used rather than wavelength since the
gravity measurements are dynamic, made from a moving
platform). In the airborne environment, vertical
accelerations are not periodic and have wavelengths
that correspond to geological features. Vertical
accelerations of 600 to more than 2,500 mGal with
periods of 0.1 to 300 s must be carefully recorded
and removed from the measured accelerations.
Differential GPS, measuring velocity directly, is the most
accurate method of determining vertical acceleration.

The Eotvos effect (a gravity meter moving at a different
rotational angular velocity to the Earth generates
changes in the vertical acceleration) must also be
corrected for. Harlan (1968, Eotvos corrections for
airborne gravimetry, J. Geophys. Res., 73: 4675-
4679) derived the full Eotvos correction, which can be written

e = 2Ω V_e cos φ + V_e²/(R_E + h) + V_n²/(R_N + h)

where
e = Eotvos correction,
Ω = the Earth's rotation rate,
φ = latitude,
V_e = the easterly component of the platform velocity,
V_n = the northerly component of the platform velocity,
h = the height of the aircraft above the geoid,
R_E ≈ a(1 + ε sin²φ) and R_N ≈ a(1 − ε(2 − 3 sin²φ)) are the
east and north curvature radii of the ellipsoid,
ε = the earth's flattening for the reference ellipsoid,
a = semi-major radius of the earth.

In marine surveys h = 0 and e is < ±75 mGal. For
airborne gravity e can reach 2500 mGal.
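The first-order (spherical) form of the correction is easily evaluated; the sketch below uses the standard knots-based constants (7.503 and 0.004154 for e in mGal) and omits the small h/a and flattening terms of the full formula:

import numpy as np

def eotvos_correction(speed_knots, heading_deg, lat_deg):
    # e = 7.503 V cos(lat) sin(heading) + 0.004154 V^2  (mGal, V in knots)
    v = speed_knots
    a, lat = np.radians(heading_deg), np.radians(lat_deg)
    return 7.503 * v * np.cos(lat) * np.sin(a) + 0.004154 * v ** 2

print(eotvos_correction(10.0, 90.0, 45.0))    # ship, due east at 45 deg: ~53 mGal
print(eotvos_correction(120.0, 90.0, 0.0))    # aircraft at the equator: ~960 mGal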

As has been seen with marine gravity using
conventional (LaCoste & Romberg) meters, the
resolution breakthrough of the mid to late 1980s has
not been improved on substantially since; the limit to
marine gravity resolution may be the prevailing sea state
and weather conditions.

With airborne gravity the resolution breakthrough came
in the very late 1990s to early 2000s, and it is likely this
new higher resolution will be adequate for most oil
exploration purposes, but acquisition costs are high.

Further improvements in resolution are likely to come
from helicopter surveys (slower speeds) and advances
in equipment design, such as the Airborne Inertially
Referenced Gravimeter (AIRGrav) by Sander
Geophysics using accelerometers (Fig 8/5), whose
resolution is claimed to be relatively unaffected by turbulence.



Figure 8/4: Installation by GFZ (Potsdam) of the L &
R and INS gravity instruments within a Twin Otter
aircraft.


Figure 8/5: Inertial Navigation System (INS) high
precision accelerometers used in the latest
airborne gravity meters, e.g. AIRGrav

8.3 Where is Airborne Gravity in 2010?

Taken in part from Wooldridge (2010)

By far the largest market for airborne gravity is the
petroleum sector, where regional gravity surveys play an
important role in identifying and mapping sedimentary
basins. Gravity gradiometer systems using Lockheed
Martin technology (see Section 9) are used by Arkex,
Bell Geospace and Fugro, initially for mineral exploration but
now increasingly for oil exploration. Gradiometer
surveys, however, are very much
more expensive (US$100-200 per km) than
conventional airborne gravity (US$50-80 per km), but
can image the gravity field to a higher accuracy for
structures down to 1-2 km. Beyond this range good
quality airborne gravity is just as good as gradiometer
data.


Airborne gradiometer systems, in principle,
measure the gradient of the Earth's gravitational field,
which is independent of the aircraft accelerations, whereas
airborne gravity systems measure the combination of
aircraft accelerations and the Earth's gravitational field.
As a result most of the design and processing is aimed
at maintaining the gravity sensing unit in a vertical
orientation and accurately measuring the aircraft's
corresponding vertical movement (accelerations) using
differential GPS velocities. Current commercial gravity
meters utilise gyro-stabilised platforms to maintain the
vertical orientation, with any residual platform
misalignment errors recorded either using dynamically
tuned gyros or via a control loop which is used to
measure horizontal accelerations. In simple terms,
subtracting the GPS-derived vertical accelerations of the
aircraft from the total vertical gravity measured by the
instrument will provide residual gravity (in practice
additional corrections are required, such as corrections
for platform misalignment, horizontal accelerations,
the Eotvos effect, drift and minor temperature
variations).
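A minimal sketch of the GPS step, assuming 1 Hz post-processed GPS heights (operational systems work from carrier-phase-derived velocities and filter heavily):

import numpy as np

def vertical_acceleration_mgal(gps_height_m, dt=1.0):
    # Twice-differentiate GPS heights (m) to estimate platform vertical
    # acceleration; 1 m/s^2 = 1e5 mGal. np.gradient keeps series length.
    velocity = np.gradient(gps_height_m, dt)      # m/s
    return np.gradient(velocity, dt) * 1.0e5      # mGal

# 0.5 m amplitude, 30 s period motion gives ~2,200 mGal accelerations,
# within the 600 to 2,500 mGal range quoted earlier:
t = np.arange(0.0, 120.0, 1.0)
heights = 0.5 * np.sin(2 * np.pi * t / 30.0)
print(np.max(np.abs(vertical_acceleration_mgal(heights))))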

As the dynamic range of aircraft accelerations is several
orders of magnitude greater than the geological
anomalies of interest, all airborne gravity systems rely
on relatively long down-line filtering to improve the
accuracy of the calculated residual gravity. The down-
line filters are often complex in nature; for example, the
GT-1A processing uses non-stationary predictive
Kalman filters to generate residual gravity. The reliance
on long wavelength down-line filters to reduce the
gravity data introduces a fundamental limitation to the
resolution achieved with airborne gravity systems and is
key to understanding the accuracy/resolution attributed
to the data.

Current Airborne Gravity Systems: There are four
commercial airborne gravity systems available for
survey.

1. The LaCoste & Romberg modified marine Air II
meter: This is a highly damped zero-length spring
gravity sensor mounted on a two-axis stabilised platform,
which was developed in the 1990s and commenced
operations in 1995. Accuracy under survey conditions
is poorer than 2 mGal, and the meter has thus become
redundant with the introduction of the more accurate AIRGrav
and GT-1A instruments, and possibly the new Scintrex
TAGS system (Air III).

2. The AIRGrav system (Sander Geophysics): This
system consists of a three-axis gyro-stabilised inertial
platform with three orthogonal accelerometers (Figure
8/6). A Schuler-tuned inertial platform is used to
maintain the vertical orientation of the gravity meter
independent of the aircraft's accelerations. One of the
major advances in this type of system was the
improvement in the INS platform and the use of accurate
three-axis accelerometers rather than a spring-type
sensor, removing the reliance on a control loop to
measure horizontal accelerations. As a result the
instrument is capable of operating in the typical flying
conditions experienced in aeromagnetic surveys and
has been demonstrated to consistently deliver results of
better than 0.6 mGal for a 100 s full wavelength down-
line filter.



Figure 8/6: Electronics rack in front of the AIRGrav
system

Flying closer line spacing and repeat lines helps to
improve resolution and accuracy: as shown in Figure 8/7,
noise decreases with a greater number of lines averaged
relative to the length of the Kalman filter.


Figure 8/7: Improving resolution

3. The GT-1A system developed by Gravimetric
Technologies in the Russian Federation




Figure 8/8: The GT-1A system marketed by
Canadian Micro Gravity. The instrument is housed in a
hermetically sealed case for environmental extremes.

The GT-1A system (Figure 8/8) relies on a Schuler-
tuned three-axis inertial platform with a vertically
constrained gravity sensing element, allowing
operation in more turbulent conditions compared with the
Air II system. Unlike the AIRGrav system, the quality of
the GT-1A results is impacted by increased turbulence,
preventing the possibility of tight drape flying with the
instrument and often necessitating a requirement for
night flying when conditions are less turbulent. Under
ideal conditions the system is capable of accuracies
better than 0.5 mGal for a 100 s down-line filter length,
and consistently delivers results of better than 1 mGal for a
100 s full wavelength down-line filter, with an overall
average of better than 0.7 mGal.

4. The TAGS system of Scintrex: This system (Figure 8/9)
has been recently introduced and replaces the L&R Air
II gravity meter, using the same zero-length spring concept.
Improvements have been made to the spring tension
tracking loop and the stabilised platform control loop.

Improving accuracy/resolution: As the resolution of
the gravity system is directly proportional to survey
speed, the choice of aircraft platform can make a
significant difference to results. This has encouraged the
use of helicopters, or of slow flying aircraft
such as the Pilatus PC6, as survey platforms, which
result in improvements of more than 30% compared to
surveys with more typical survey aircraft. Figure 8/10
illustrates the effect of different aircraft platforms and
survey line spacing on the overall accuracy/resolution of
the gravity survey. Both these parameters have
important implications when detailed basin mapping is
required.



Figure 8/9: TAGS system installed in aircraft



Figure 8/10: Accuracy vs resolution for different
aircraft speeds. The effect of oversampling the
dataset using tighter line spacing is illustrated (see
also Fig 8/7).

8.4 Airborne Gravity and Quality
Control (QC)

Airborne acquisition of gravity data is often
carried out alongside aeromagnetic data
collection, so the reader is advised to also read
Section 15.4. The role of the QC person, who is
acting on behalf of the client, is to ensure that the
best possible data are collected within the agreed
specifications of the contract. Often the QC person
resolves problems that cannot easily be written
into the specifications and are generally local and
logistic in nature. The QC person should be seen
as someone who helps the acquisition contractor
achieve acceptable results for the client as well as
keeping the contractor honest. QC operations are not
restricted to just acquisition but extend to the post-
survey processing to generate the final products
for the client. See www.sgl.com/quality for more
information.

8.4.1 Gravity Base Station Tie and Recording

For GPS specifications see Section 15.4.4

The base station location and its tie arrangements into
an IGSN71 station are provided in the form of
documentation and tabulated results, together with the
calibration of the airborne gravity system, before data
acquisition commences. A gravity base station will be
established at the aircraft base by looping with a hand
meter between the selected point and a known base in
the region. This is not necessarily carried out before the
survey commences, but it is important to clarify
arrangements for this.

The survey gravity base station is tied to a station on
the IGSN71 network (see Section 6.1), which may be
many hundreds of kilometres from the survey area, so
special tie flights may need to be organised to transfer
the absolute gravity value to the airstrip base station.
Can the results of this tie be checked? The answer is
YES. If there is an error in the tie then the survey will
have a DC bias or shift in its readings. One way this
could happen is that the base station used to tie in the
survey was in fact tied to the old Potsdam 1930 datum
rather than the IGSN71 datum. This would generate a
positive bias of ~+17 mGal in the observed gravity for
the survey (see Sections 6.2.1 to 6.2.3 to determine the
differences between the 1930 equation and more recent
formulations). The DC bias can
be checked by taking the average free air anomaly for
the whole survey at a given elevation and determining the
equivalent GRACE satellite free air gravity field at the
same location and elevation. GRACE has a minimum
wavelength response of ~150 km but is incredibly
accurate.

This survey base station point will normally be located
on the airfield so that the aircraft gravity system can tie
into it at the start and end of each sortie to establish
system drift and (ultimately) the relationship between
the airborne system and ground gravity data throughout
the area.

8.4.2 Auto-Calibration of Airborne Gravity Meter

Prior to the survey beginning, the airborne gravity meter
needs to undergo an auto-calibration. This is done whilst
the plane is stationary. It requires the gravity meter to
be turned 90° about the Z axis on the gyro-
stabilised platform and the platform to be tilted +3° and
then -3°. This is followed by the gravity meter being turned a
further 90° and the tilt test done again. Measurements from
this test allow the necessary parameters to be
determined. This test can be considered similar to a tilt
test carried out on a LaCoste & Romberg using the
air-bubble level.

8.4.3 Dynamic Test Line

A short section (20 km) of a flight line is chosen to test
the operation of the sensors (gravity and magnetic). The test
line need not necessarily be a part of the survey but
could be positioned close to the operations airstrip for
convenience. This test line is repeated for each sortie and
examined separately from the production data. It should
be similar in character (frequency content, amplitude)
day to day. See Figure 8/11 for typical Geosoft outputs
for the gravity repeat lines.

The data for this line will be treated in exactly the same
fashion as the actual survey data. A Kalman filter is
normally applied in order to remove most of the
engine/vibration generated noise from the signal. The
contract will usually specify the length of the filter,
although the contractor may use a slightly
different one as appropriate (for example 107 s is
specified and 100 s is used). Any changes like this need
to be agreed with the client before commencing the
survey, or the data supplied for a range of Kalman
filters.

Level shifts between repeats of the processed data,
using a contract-specified 107 s filter, should not
exceed 1.0 mGal. Based on the 107 s filtered final
processed gravity data (equivalent to a 4 km low-pass, where 4
km is the half-wavelength), line crossing statistics will be
specified to be within, i.e. less than, 1.0 mGal for 1 sigma
r.m.s. after distribution of errors by 1/2 and application
of both raw and first order line adjustment.



Figure 8/11: Dynamic test line results. The magnetic
equivalent of this line is shown in Fig. 15/15.

8.4.4 Survey Error Specification

Cross-over tie line errors: Survey design generally
requires that tie lines be flown at intervals of about 5
times the survey line spacing, to help identify and
constrain survey errors. For cross-over errors to be
within specifications they need to be < 1 mGal for 1
sigma r.m.s. after raw and first order line adjustment of
the survey block has been undertaken.

Gravity meter drift: This is normally < 0.3 mGal per
hour over the flight period.

Gravity tares: These are sudden changes in the gravity
reading output and should be < 1 mGal. Such tares
need reporting and appropriate action agreed.



Figure 8/12: Gravity tare

8.4.5 Navigation and Altitude

The reader is referred to sections 15.4.5 and 15.4.6 in
addition to that below.

QC also has to ensure that all aspects of navigation and
altitude conform to specifications. Navigation QC
normally operates on the basis that if navigation
measurements lie outside tolerance for more than 60 s
then the data will be rejected. For shorter periods (<60 s), data
may be accepted if the aircraft continues on a last-
heading basis and the line path is within tolerance.

Quality of navigation is dependent on the GPS and the
constellation of GPS satellites. A horizontal position
(latitude, longitude) is calculated from the real-time
differentially corrected (OMNISTAR) GPS position. This
means that it can be affected by the OMNISTAR
correction: if, for example, the GPS receives an
erroneous correction from OMNISTAR, the HDOP
(Horizontal Dilution Of Precision) will increase. The
lower the HDOP, the better the estimate of the latitude
and longitude. A better measure of the overall position
is the PDOP, which is the combination of both the HDOP
and the VDOP (Vertical Dilution Of Precision). Some
contracts will specify HDOP, although PDOP is more
usually used. The PDOP is calculated from the post-
processed GPS position (this is the position used for
gravity processing) and is therefore far more accurate
than that calculated using the OMNISTAR real-time
differential, and a far better measure of the accuracy of
the position. A typical profile of the HDOP is shown in
Fig. 8/13.



Figure 8/13: Along profile line plot of HDOP
(Horizontal Dilution Of Precision)

Typical specification:

- Line location maintained within ±100 m over a
distance of 2 km; no deviation > 150 m.
Details of deviations (number, amplitude etc.)
within specification?

- Number of GPS satellites > 4.
Range = 8 to 12.

- HDOP < 5 (see table below).
Mean = # # s.d.

- PDOP < 2.5 (often no contract specification).
Range = # #

Geosoft has built tools for the above analysis. Figures
8/14 and 8/15 show how the analyses are displayed.
Figure 8/14 shows the profile display of the GPS height
along a flight line, with the green line the GPS height and
the red lines either side showing the acceptable
bounds, in this instance ±15 m. Figure 8/15 shows the
Geosoft output when the survey line QC routine is run.
The software will examine the data and, where the actual
flying location (black line) varies from the planned
location (red line) by more than the specified amount, it
will create a flag (green line) and plot it parallel to
where the location is out of specification. In this
example, as the survey line and planned line coincide for
most of the lines being analysed, only the black lines
can be seen.



Figure 8/14: GPS heights









DOP value  Rating     Description
1          Ideal      The highest possible confidence level, to be used for applications
                      demanding the highest possible precision at all times.
2-3        Excellent  At this confidence level, positional measurements are considered accurate
                      enough to meet all but the most sensitive applications.
4-6        Good       Represents a level that marks the minimum appropriate for making business
                      decisions. Positional measurements could be used to make reliable en-route
                      navigation suggestions to the user.
7-8        Moderate   Positional measurements could be used for calculations, but the fix quality
                      could still be improved. A more open view of the sky is recommended.
9-20       Fair       Represents a low confidence level. Positional measurements should be
                      discarded or used only to indicate a very rough estimate of the current location.
21-50      Poor       At this level, measurements are inaccurate by as much as 300 metres with a 6
                      metre accurate device (50 DOP x 6 metres) and should be discarded.

Table: HDOP values





Figure 8/15: Line location QC.

8.4.6 Processing Requirements

All the GPS data are post processed to obtain final
accuracy in locations and heights.

Equations used for correcting the gravity data include:

Free air correction:

The simple correction 0.3086 mGal/m is replaced by the second-order formula

FAC = (0.3083293357 + 0.0004397732 cos 2φ) h − 7.2125 × 10⁻⁸ h² mGal

(from the North American Gravity Database Committee,
2005), where φ is latitude and h is height in metres.
Written as the change in theoretical gravity with height,
the same term is

γ_h − γ = −(0.3083293357 + 0.0004397732 cos 2φ) h + 7.2125 × 10⁻⁸ h² mGal
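A minimal Python sketch of this second-order free-air correction (the height and latitude in the example call are illustrative):

import numpy as np

def free_air_correction(h_m, lat_deg):
    # FAC (mGal): latitude-dependent gradient plus a second-order h^2 term.
    lat = np.radians(lat_deg)
    gradient = 0.3083293357 + 0.0004397732 * np.cos(2.0 * lat)
    return gradient * h_m - 7.2125e-8 * h_m ** 2

print(free_air_correction(1000.0, 45.0))   # vs simple 0.3086 * 1000 = 308.6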




Eotvos correction:
The 2ωv cos φ form used in marine surveys (Section 7.1) is
replaced by the full formula below.

The full Eotvos correction term is

f_E = V_E²/R_E + V_N²/R_N + 2ω V_E cos φ

where R_N and R_E are the curvature radii of the ellipsoid in
the north and east directions, ω is the Earth's rotation rate,
and V_E and V_N are the eastern
and northern components of the relative linear velocity of
the point M.

References

Argyle, M., Ferguson, S., Sander, L. and Sander, S.,
2000. AIRGrav results: a comparison of airborne
gravity data with GSC test site data. The Leading
Edge, Oct: 1134-1138.

Gumert, W. and Phillips, D., 2000. Advanced helicopter
aerogravity surveying system. The Leading Edge,
Nov: 1252-1255.

Van Leeuwen, E.L., 2000. BHP develops airborne
gravity gradiometer for mineral exploration. The
Leading Edge, Dec: 1296-1297.

Wooldridge, A., 2010. Review of modern airborne
gravity focusing on results from GT-1A surveys. First
Break, 28 (May): 85-92.







SECTION 9: GRAVITY GRADIOMETERS




9.1 Gravity Gradiometers for Oil
Exploration

9.1.1 Old Technology: Eotvos Torsion Balance

Gradiometers (Fig 9/1) have a long history. In 1786
Charles-Augustin de Coulomb measured distortion of the earth's
gravitational field, specifically the differential curvature
or warping of the equipotential field. The instrument
design was developed by Baron von Eotvos (Univ. of
Budapest) in 1890. The instrument became known as
the Torsion Balance and was first used in 1915 to map
the Egbell oil field. It was not until 1922 that it was
introduced into the USA. In its heyday, 1935, there were 40 crews
each recording 2-3 measurements a day (see Fig 9/2).
In 1936 gravity meters were introduced and replaced
Torsion Balances since they could record ~50
measurements a day. In 1997 the gradiometer came back
into favour. Why?

The Torsion Balance instrument (Fig. 9/1) measures the
horizontal gradient (∂g/∂x) of the Earth's gravity field.


Single Double

Figure 9/1: The Eotvos - Torsion Balances

Horizontal gradients and differential curvature are
measured in Eotvos units.

The milliGal (mGal) = 10⁻⁵ m/s², so gravity gradients in
mGal/m = 10⁻⁵ s⁻².

1 Eotvos (E) = 10⁻⁹ s⁻²,

which is 1/10th of a microGal/m or 1/10th of a mGal/km.


Figure 9/2: Measuring with the Torsion Balance in the
field. The instrument needed to be protected from
the elements. This shows Eotvos at the telescope in
1891.

Geological sources are typically in the ±200 E range.
Salt domes in Texas are 50 to 100 E.

Fig 9/3 shows the relationship between the gravity
(vertical component) we normally measure, the horizontal
gradient and the differential curvature.


Figure 9/3: Gravity components over a salt dome

The torsion balance configuration is shown in Fig 9/4.
Measurements were generally only made in flat terrain
(slopes within ±5°) because the effects of topography are
severe. Measurements were taken in portable huts/tents at about
0.25 to 0.5 mile separation. Results from a survey are shown
in Fig 9/5, where the direction and magnitude of the
horizontal gravity gradient are shown as variable-length
arrows. These were used to map the isogams, contours
of relative gravity.



Figure 9/4: (a) Configuration of the Torsion Balance
double beam system and (b) plan view of the three
double beam positions, at each of which the beams'
angular positions are determined optically.




Figure 9/5: Gradient map in the region of Egbell,
Slovakia, the first successful oil exploration project
by Torsion Balance, 1916. Further details are given
in Heiland's book.



9.1.2 New Technology: Tensor Gradiometer

The US Navy has spent hundreds of millions of dollars
to develop a system to measure gravity gradients. In
1994 this technology started to be used in exploration
(Bell, R. et al., 1997. Gravity gradiometry
resurfaces. The Leading Edge, 16: 55-59).

The vertical component of gravity Uz can have a vertical
gradient Uzz as well as horizontal gradients Uzx and
Uzy. Similarly the horizontal components of gravity Ux
and Uy can have their own gradients, giving in total 9
tensor components.

Uxx  Uxy  Uxz
Uyx  Uyy  Uyz
Uzx  Uzy  Uzz

Figure 9/6: Gravity tensor components

where Uz is the vertical component of gravity, more
commonly known as the free air anomaly. Newton's
theory of gravity implies that only five of the nine
components are independent and four are redundant.
The gravity gradiometer shown in Fig 9/7 is able to
recover five of these gravity gradients (bold in Fig 9/6),
plus the Uzz component, which is determined through Laplace's
relationship from Uxx and Uyy. These gradients provide
an important tool in the detection of edges and locations
of bodies. This is precisely why the system was developed in the
first place: to detect submarine volcanoes and
topography in front of the submarine in a passive mode,
as part of the stealth technology developed by Bell
Aerospace for the Navy Trident submarine programme.
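The Laplace relationship referred to above is simply that the tensor is trace-free in source-free space (Uxx + Uyy + Uzz = 0), so Uzz follows directly from the two measured horizontal components; a one-line sketch (values illustrative):

def uzz_from_laplace(uxx, uyy):
    # Uxx + Uyy + Uzz = 0 in free space, so:
    return -(uxx + uyy)

print(uzz_from_laplace(30.0, 20.0))   # -50 E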

Figure 9/7: Modern rotating gravity gradiometer
system made by Lockheed Martin


The instrument (Fig 9/7) consists of 12 separate gravity
meters (accelerometers) measuring the differences in
the earth's gravity over a distance of 1 metre as the meters
tumble in a binnacle. This takes the form of three arms
with accelerometer pairs attached, rotating perpendicular to
each other, with the whole system slowly rotating (see
Fig 9/8).

Figure 9/8: The three spinning elements of the
gradiometer that also tumble slowly.

The design of each element is shown in Fig 9/9, with two
pairs of accelerometers. These accelerometers find the
acceleration difference between opposite pairs and
output a sinusoidal signal. As shown in Fig 9/10,
accelerations due to the ship or aircraft in which the
instrument is located are cancelled out.



Figure 9/9: Configuration of each spinning disk

Gravity meter: one accelerometer; cannot cancel accelerations.
Gradiometer: two accelerometers; cancels common accelerations.

Fig. 9/10: Ability of the gradiometer to cancel non-
geologically derived accelerations
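A minimal numerical sketch of this common-mode cancellation, with a hypothetical 0.5 E gradient signal over a 1 m baseline buried in platform accelerations many orders of magnitude larger:

import numpy as np

rng = np.random.default_rng(0)
platform = 50000.0 * rng.standard_normal(600)   # mGal, ship/aircraft motion
gradient_signal = 0.5 * 1.0e-4                  # 0.5 E over 1 m, in mGal
acc_1 = platform + 0.5 * gradient_signal        # accelerometer at one end
acc_2 = platform - 0.5 * gradient_signal        # accelerometer at other end
diff = acc_1 - acc_2                            # platform term cancels exactly
print(np.mean(diff) / 1.0e-4, 'E recovered')    # ~0.5 E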
The 6 tensor components that are measured provide 6 images of
the field. The complexity of these signals is illustrated in
Fig 9/11 for a thin prism and in Fig 9/12 for a
complex salt dome model.



Figure 9/11: Tensors for a thin prism model


Figure 9/12: Tensors for a salt dome model with
structure shown by white contours in Tz window.



Figure 9/13: Example of real tensor data, courtesy of
Bell Geospace, in decimated form.

These data by their very nature provide new challenges
for interpretation. They provide the most accurate gravity
response and can see small gradient changes down to
about 1 km. Beyond this the advantage of having
tensors greatly reduces compared to good quality
conventional gravity data.



Figure 9/14: Power spectrum of standard marine
gravity data and the enhanced version using gravity
gradients. The standard spectrum flattens at wavelengths
shorter than 6.6 km, whereas the gradiometer
is noise free down to much shorter wavelengths.

The accuracy of this system, based on a programme of
studies on a surface ship in the Gulf of Mexico, is 0.5 E
over 1 km, or 0.05 mGal/km. The instrument, albeit
expensive, could play a major role in detecting the edges of
salt structures, including their base, which is difficult to do
with seismic data. Gradiometer data also provide an
opportunity to improve velocity functions in the shallow
section, which can lead to improved seismic imaging of
the deeper section.

9.2 Gradiometers for Mineral
Exploration

To be written


SECTION 10: SATELLITE ALTIMETER GRAVITY
DATA - Acquisition and Processing


10.1 Introduction

The mean sea surface, to a first approximation, can be
considered to equal the marine geoid, thus representing
an equipotential surface of the earth's gravity field; the
gravity response is the vertical derivative of this
equipotential surface. The sea surface is only an
equipotential surface if the sea is in a static state, i.e.
has no spatial (lateral) changes in water temperature
and/or salinity, no currents (e.g. the Gulf Stream), no air-
sea interaction (e.g. wind causing waves), no
differences in air pressure, no tidal forces, etc. The
difference between the geoid and the mean sea surface
is the sea surface topography (SST), which can reach 1 to 2
metres. Corrections are needed to remove the SST
effects.

The mean sea surface (geoid) is the orthometric reference
height datum that we have traditionally used to adjust
gravity measurements to (see Section 5 for the GPS
alternative reference system). The gravity field
measured at sea level, or on the sea surface, is the free
air gravity (height = 0 m).



Figure. 10/1: Simplified principle of why the sea
surface is not a flat surface.

Satellite altimetry (Fig. 10/2) can thus be used to
accurately measure and systematically map the sea
surface topography. There have been a number of
satellites with this capability. The satellite travels around
the Earth on an equipotential surface at a height of
~800 km above the Earth's surface. Its orbit is very
smooth (due to the inverse square law) and is known to a high
degree of accuracy. The equipotential surface at sea
level is closer to the causative bodies and thus its shape
contains much higher frequencies and amplitudes. It is this
equipotential surface at sea level, and not that at the satellite
height, that is being measured. A pulse-limited altimeter
directs a short pulse of microwave radiation towards the
earth; this signal fans out in the atmosphere to
illuminate a footprint (a few km in diameter) on the
ocean surface. The altimeter detects the return signal,
and hence estimates its own height above the sea
surface to a (quoted) typical precision of 10 cm (better
for the Topex satellite). As the satellite orbits the Earth, it
does so in exact repeat orbits, thus generating repeated
measurements at different times for the same orbital
path. This repeat nature of orbits benefits
oceanographers measuring continuous change in ocean
processes. For geodesists these stacked data have
orbital track spacings of 100s of km and are not good for
mapping the Earth's sea surface at high resolution. To date
only two satellite geodetic missions, to map the Earth's
sea surface at high resolution, have been
undertaken: those by Geosat and ERS-1. These missions
when combined give an average orbital spacing of about
3 km.




Figure 10/2: Satellite Altimeter to map the sea
surface and thus the gravity field

10.2 Satellites with Altimeters

Satellites used for mapping the gravity field of the
oceans are:

10.2.1 GEOS-3 (1975-1978)
(data from this mission are not now used due to low
accuracy)
Height 840 km, i = 115°, altimeter accuracy ±1 m, covers
latitudes ±72°.

10.2.2 SEASAT(1978)
(data from this mission are not now used due to low
accuracy)
Seasat was launched June 26, 1978 and stopped
functioning on October 10, 1978, after 3 months; the
failure was due to electrical short circuiting on part of
the satellite. Height 800 km, i = 108°, covers latitudes
±72°.

This was a 17 day repeat mission and was used in the
mid-1980s to revolutionise our understanding of plate
tectonics. This was due to the sea bed being the most
significant density contrast boundary, resulting in a
strong gravity response at the sea surface. Thus free
air gravity is very good at mapping sea floor
bathymetry/topography and lateral density changes
immediately beneath the sea bed (Haxby, W. F., 1987).

10.2.3 GEOSAT(1985-1990)
Geosat was launched March 12, 1985 and stopped
operating in 1990; height 800 km, i = 108°,
covered latitudes ±72°.

This satellite had an initial military Geodetic Mission
(GM) from March 31, 1985 through September 1986 with
a 168 day orbit. This was changed in October 1986 to a
17 day Exact Repeat Mission (ERM) to continue the
SEASAT mission, which had failed. This resulted in 66 repeat
orbits before the satellite stopped working. As of the
summer of 1995 all the data for the ERM and GM
missions were declassified and are available.

10.2.4 ERS-1(1991-1995)
ERS-1 was launched in 1991 and stopped operating in the
early 2000s; height 780 km, i = 98.5°, covers latitudes
±84°. See Fig. 10/2.

This satellite commenced a 35 day repeat orbit mission
before changing to a Geodetic Mission (GM) 168 day
orbit in the spring of 1994. In October 1994 it entered a
second GM 168 day orbit which was an infill orbit, i.e.
the 168 day orbit gives a track spacing of 16 km at the
Equator, so the infill data give an 8 km track spacing
at the Equator; this spacing decreases both north and
south. See Fig 10/2 for an image of the satellite.

10.2.5 TOPEX - POSEIDON (1992-99)
Launched in 1992; latitude range ±66°.

This altimeter satellite was launched into a 10 day ERM.
Onboard it had a GPS location system, a DORIS system
that measures atmospheric drag (a dominant
component in radial orbit error) and the ability to
measure signal propagation through the atmosphere.
For previous satellites these drag and propagation
errors had to be modelled from ground-based
measurements.

As of 2002 there are many more satellites with
altimeters, but none undertaking geodetic missions.
Figure 10/3 shows the latitude range of the satellite
orbits referred to above for northern Europe. Figures
10/4 to 10/6 show the GEOSAT and ERS-1 coverage for
part of the Central Atlantic offshore Senegal. The
combined Geosat + ERS-1 tracks (Fig 10/6) give an
orbital spacing of approximately 3 to 4 km.

10.2.6 Geosat and ERS-1 Data
Since these are the two satellites having geodetic
missions, it is worth saying something more about them.
The radar pulse rate was 1000 per second.
Of these, 50 (ERS-1) and 100 (Geosat) return
waveforms were stacked onboard the satellite before
being transmitted to Earth. Once back on Earth they
were initially released for scientific investigation as 1
second sea surface height picks; one second sampling
represents 7 km along track. Eventually the ERS-1 20
Hz data (equating to ~300 m along-track spacing) and the
Geosat 10 Hz data (~600 m along-track spacing) sea
height picks were released. The raw unpicked ERS-1
return data are also available (see Section 10.7 for
more information).


Figure 10/3: Satellite tracks for the offshore area of
Europe. ERS-1 maps the Arctic to 82°N while
GEOSAT only extends to 72°N and TOPEX-
POSEIDON to 66°N. (GEOS-3 and SEASAT not used.)

10.3 Altimeter Errors

The altimeter sea surface height measurements contain
two kinds of error:

measuring errors, which include altimeter noise,
calibration, tidal corrections, atmospheric pressure
correction, tropospheric correction and the way the
agencies measured the onset signal of the radar pulse
(see Section 10.7). Of these, the tidal correction is
improving with time due to the predictive nature of the
correction: in 2003 we had far better tidal
corrections for 1985-1996 than were available at the
time of the Geosat and ERS-1 missions. The example
(Fig. 10/7) shows the ability of different tidal models to
correct for tidal effects to remove slope bias from the
sea surface heights.


Figure 10/4: GEOSAT GM data coverage for
offshore Senegal


Figure 10/5: ERS-1 GM data coverage for offshore
Senegal
radial orbit errors, which until 1994 could add a
metre of uncertainty to the estimates of sea surface
height.

To minimise these errors the following
procedures are used:


Figure 10/6: Combined effect of Figs. 10/4 & 10/5





Figure 10/7: Tidal corrections for the Irish Sea using,
from left to right, no model, a global model and local
tide models.

10.4 Data Stacking:

This is easy to do for Topex-Poseidon due to its quality
and available GPS positioning. For the earlier
satellites, with no GPS positioning, the ERMs
(Exact Repeat Missions) were more difficult due to the
large orbit error, which can be considered as a DC shift
in height values between orbits for the same ERM orbit
track section (the error is in fact more complex than a simple
DC shift). The effect of stacking is to minimise random
noise by √n, where n is the number of orbits used.
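A minimal sketch of the √n effect, stacking synthetic repeat passes with 10 cm of random noise (all values hypothetical):

import numpy as np

rng = np.random.default_rng(1)
true_height = 1.25                               # m, sea surface height
for n in (1, 4, 16, 64):
    passes = true_height + 0.10 * rng.standard_normal((n, 1000))
    stacked = passes.mean(axis=0)                # stack n repeat orbits
    print(n, 'orbits: rms error = %.4f m' % np.std(stacked - true_height))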


10.5 Orbital Correction

Once the data are stacked they can be cross-over
adjusted to remove most of the non-geographically
correlated part of the orbit error (see Fig 10/8). This is
done by determining the cross-over errors and
minimising them by least squares.



Figure 10/8: Cross-over error: where two orbits
cross, the sea surface heights should be the same.

The results of the cross-over analysis are:

GEOSAT/GEOSAT 10.6 cm for 22 repeat orbits (2 years)
ERS-1/ERS-1 6.5 cm for 11 repeat orbits (1 year)
TOPEX/TOPEX 1.9 cm for 37 repeat orbits (1 year)
TOPEX/GEOSAT 6.8 cm
TOPEX/ERS-1 6.2 cm
GEOSAT/ERS-1 8.8 cm

All cross overs 7.8 cm
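Treating the orbit error as a per-track DC shift (a simplification, as noted in Section 10.4), the least-squares cross-over adjustment described above can be sketched as follows (track indices and mistie values are hypothetical):

import numpy as np

def crossover_dc_adjust(track_i, track_j, misties, n_tracks):
    # mistie_k = h_i - h_j at a crossing of tracks i and j; solve for
    # per-track shifts c with mistie_k ~ c_i - c_j, fixing c_0 = 0 to
    # remove the datum ambiguity.
    A = np.zeros((len(misties), n_tracks))
    A[np.arange(len(misties)), track_i] = 1.0
    A[np.arange(len(misties)), track_j] = -1.0
    c, *_ = np.linalg.lstsq(A[:, 1:], np.asarray(misties), rcond=None)
    return np.concatenate(([0.0], c))

# Three tracks with true shifts (0, +0.08, -0.05) m and three crossings:
print(crossover_dc_adjust([0, 0, 1], [1, 2, 2], [-0.08, 0.05, 0.13], 3))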


10.6 Conversion of Altimeter Heights to
Free Air Gravity Anomalies

There are two methods available




10.6.1 Slopes to Gravity Method

This method (Fig. 10/9) converts geoid gradients or
deflection-of-the-vertical grids into FAA. In this method
the along-track gradients are determined after fitting
each arc with a smoothing spline. This results in vector
gradient quantities using 6 or more track orientations.
These data are used to obtain north-south and west-
east vector gradient grids, which can be combined using
a method similar to Laplace's equation (Section 2.2) to
obtain the vertical gradient (gravity). Sandwell (1992)
developed a method where north-south and west-east
deflection grids were iteratively obtained from ascending
and descending along-track deflection grids.

10.6.2 Geoid to Gravity Method

This method (Fig. 10/10) relies on levelling (draping) of
track height data. This method, developed by GETECH,
is found to be more accurate than the
gradient method above since it provides a better
interpolator of values between orbital tracks. However,
the method requires the ability to cross-over level and
micro-level (drape) the track data. For details of
why the Geoid to Gravity method has advantages over the
Slopes to Gravity method see page 517 of Fairhead et al.,
First Break, 2001 and Section 17.3.1c.


Figure 10/9: The Slopes to Gravity method used by
Sandwell and Smith (1997)

Using agency-picked data, the resolution of 30-40 km
achieved by Sandwell and Smith (1997) can be
improved to 15-20 km by processing the data using the
Geoid to Gravity method. Other advantages of the Geoid
to Gravity method are discussed in Section 10.7.

Figure 10/10: The Geoid to Gravity method

To convert the geoid field (mean sea surface) into
gravity, a reference model is removed to make all values
oscillate about zero. The vertical gradient of the field is
then determined by an FFT method. The FAA grid from the
reference model is restored to obtain the final geoid
gravity grid.
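A schematic Python sketch of the FFT step, using the flat-earth wavenumber relation dg(k) = γ|k|N(k) between a residual geoid grid N and the free-air anomaly (the grid here is synthetic, and the remove/restore of the reference model and edge tapering are omitted):

import numpy as np

def geoid_to_gravity(N_grid, dx, dy, gamma=9.81):
    # Convert residual geoid N (m) to free-air gravity (mGal) via
    # dg(k) = gamma * |k| * N(k), with |k| in radians per metre.
    ny, nx = N_grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.hypot(*np.meshgrid(kx, ky))
    dg = np.fft.ifft2(gamma * k * np.fft.fft2(N_grid)).real
    return dg * 1.0e5                            # m/s^2 -> mGal

# A 1 m geoid undulation of 128 km wavelength on a 5 km grid -> ~48 mGal:
x = np.arange(256) * 5000.0
N = np.sin(2 * np.pi * x / 1.28e5)[None, :] * np.ones((64, 1))
print(np.max(geoid_to_gravity(N, 5000.0, 5000.0)))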

10.7 Improving the Accuracy of
Satellite Data for oil exploration

For satellite gravity data to be used to image subsurface
geological structures beneath the continental margins,
a resolution of better than 20 km (or 10 km at half
wavelength) is required. At this resolution large
hydrocarbon-trapping structures in deep water can be
identified. These are the sorts of structures oil
companies are looking for, since exploiting
deep-water hydrocarbon reserves is very expensive
and such structures are the only ones that can provide an
economic return. Satellite data with 30-40 km resolution
(Sandwell and Smith, 1997) have a noise envelope that is
too large to reliably see anomalies of 20 km or smaller
in size. Inspection of Fig. 12/1 shows that in the last few
years the resolution of both airborne gravity and satellite
gravity has moved from the grey region of the
resolution diagram (resolution less than 10 km half wave-
length) into the yellow region where such data become
of serious interest to oil companies. Currently GETECH-
derived satellite gravity is seeing anomalies down to 5
km (half wavelength). How is this achieved?
In Sections 10.6.1 and 10.6.2 the Geoid to Gravity
method was found to improve the gravity resolution from
30-40 km to 15-20 km. To improve the resolution further,
the radar data need repicking and the application of better
corrections. The repicking of the leading edge of the
radar waveform turned out to be the most significant
factor in improving data resolution. Better tidal and
propagation corrections improved the overall stability of
the resolution.



Figure 10/11: The radar wavefront (blue)
progressively impacting the sea surface, causing the
footprint of the reflection to grow in diameter. This
is equivalent to the period of time between A and B
in Fig. 10/12 (~10-15 nanoseconds).

The return reflection from the sea surface is shown in
Fig. 10/11 as a series of images with nanosecond
intervals between them. As the radar wave front touches
the sea surface, the reflection footprint increases in size.
This can be viewed in Fig 10/12 as a plot of the returned
radar energy as a function of time. The return radar
pulse builds up from zero to a maximum (equivalent to
Fig 10/11) before the energy falls off in the tail of the
return radar pulse.

The slope of the onset of the radar pulse is dependent
on the sea state. A calm sea generates the purple pulse
with a steep (incident) onset, where A is the time when
the radar first reflects from the tops of the waves and B
from the troughs of the waves. In large wave
conditions the crests and troughs are more widely
separated and the slope is more gradual (emergent).
The mean sea level (MSL) in both cases is at
approximately the 50% energy level of the onset slope.


Figure 10/12: The differing onsets (incident and
emergent) due to sea state
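A minimal sketch of a simple threshold retracker that picks the onset at the 50% power level of the leading edge and never uses the noisy tail (this is not GETECH's algorithm; the waveform and gate spacing are illustrative):

import numpy as np

def retrack_onset(waveform, gate_ns=3.125):
    # Pick the time the leading edge crosses 50% of peak power,
    # interpolating linearly between gates; returns time in ns.
    w = np.asarray(waveform, dtype=float)
    half = 0.5 * w.max()
    i = int(np.argmax(w >= half))                # first gate above 50%
    frac = (half - w[i - 1]) / (w[i] - w[i - 1])
    return (i - 1 + frac) * gate_ns

ramp = np.concatenate([np.zeros(10), np.linspace(0.0, 1.0, 11), np.ones(10)])
print(retrack_onset(ramp))                       # midway up the leading edge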
The Geosat mission was designed as a geodetic
mission, and GETECH determined that it has not been
possible to improve the resolution of its onset times.
However, this was not the case for the ERS-1 mission data.
ERS-1 was designed for oceanography studies and 5
parameters were used to pick the radar pulse, some
of which measure the decay rate of the tail of the
pulse (used to measure air-sea interaction). Since the
tail signal is exceedingly noisy, any least squares
estimate of the five parameters will degrade the onset
time parameter. Repicking (or retracking, in geodetic
terminology) the onset time independently of the other
parameters has significantly improved the reliability and
accuracy of the onset times.



Figure 10/13: Power spectrum of Agency and
GETECH data showing the improvements in the
resolution below 30 km wavelength

This can be quantified in Fig 10/13 in the form of a
power spectrum of agency versus GETECH picked
data. White noise commences in the GETECH repicked
data at significantly shorter wavelengths than in the agency
data. In profile form (Fig. 10/14) the repicked data are
smoother; the differences between the two profiles are
probably noise and result in the dimple effect (or orange
skin effect) of satellite gravity maps. Since ERS-1 data
represent up to 45% of the combined Geosat-ERS-1
solution, such noise can significantly degrade the
high-resolution part of the spectrum for the combined
solution.



Figure 10/14: The difference in the sea surface height profiles. The difference is the cause of the noise seen in the Sandwell and Smith solution, often referred to as the 'orange skin' effect, at wavelengths below 50 km.


Figure 10/15: Comparison between the satellite-derived free air anomalies offshore Yucatan Peninsula (Mexico). Sandwell and Smith (left) shows the dimple effect (or 'orange skin' effect) whereas GETECH (right) looks clearer.


Figure 10/16: Difference plot between the solutions shown in Fig. 10/15. The difference plot shows the dimple noise and anisotropic effects contained in the Sandwell and Smith solution.


Figure 10/17: Total horizontal derivative of the Sandwell and Smith free air gravity solution. When compared to Fig. 10/18, the strong continental margin anomaly outlined in black is ragged and there is a lack of coherent anomalies in the deep water or on the continental shelf.



Figure 10/18: Total Horizontal derivative of the
GETECH Free air gravity solution. When compared
to Fig 10/17, the strong continental margin anomaly
is more coherent and small wavelength coherent 2D
anomalies are seen in the deep water (solid white
arrows) and on the continental shelf (open white
arrows).

The advantages of using Geoid (sea surface) to Gravity over the Slopes to Gravity method can be summarised as follows:

1. Least smoothing along track: Obtaining slope values requires filtering, which will degrade or destroy short wavelength signals.



2. Gridding process more stable: Deriving slopes after gridding is an inherently more stable process.



3. No anisotropy effects: In equatorial areas the orbital tracks are very N-S in orientation, so the Slopes to Gravity method introduces a strong bias in the grids. Geoid to Gravity relies only on the spatial track coverage, which adequately samples the surface at 10 km wavelength.



4. Repicking allows the solution to be determined to within 5 km of the coast: This is in part the benefit of repicking individual waveforms rather than relying on Agency masked data (a poor coastline mask was used by the Agency). For example, see Figs. 10/19 & 10/20.


5. Line orientated artefacts: Although the slopes method minimises problems associated with slopes and DC biases, it does not eliminate them. Fig. 10/7 shows the problems that can arise if tidal corrections are not applied. Figure 10/19 illustrates the problems, showing the Sandwell and Smith solution for northern Sulawesi. The problems of line noise and lack of data close to the coast are evident, especially when viewed in derivative mode.

These problems are not seen in the GETECH solution (Fig. 10/20) due to better corrections, micro line levelling and repicking of the data to within a few km of the coast.


Figure 10/19: Free air anomaly (left) and total
horizontal derivative (right) of the Sandwell and
Smith solution. Problems of line noise and lack of
data close inshore are evident.




Figure 10/20: Free air anomaly (left) and total
horizontal derivative (right) showing the significantly
improved GETECH solution.



10.8 Satellite Gravity in Arctic Regions

In polar regions the sea is covered, even in summer time, with thin sea ice. This ice surface is not the true sea level; nevertheless the Slopes to Gravity method has been shown by Laxon & McAdoo (1994) to work well, although the resolution of the final gravity maps is ~50 km due to the larger noise levels.



Figure 10/21: Sea/ice surface response to the two different methods.

Fig. 10/22 shows the extent of open sea (red) during the northern hemisphere summer.


Figure 10/22: Arctic Ocean, north of Russia,
covered by open sea water in red during northern
hemisphere summer of 1995



10.9 Relation between sea surface and
gravity spectra

How does the height of the sea surface relate to the gravity anomaly in mGal? Assuming the gravity and sea surface are sinusoidal in character, then:

Sea surface height (cm) = 0.016 x wavelength (km) x gravity anomaly (mGal)
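
As a quick check of the magnitudes involved, the rule of thumb can be coded directly; a minimal sketch, with the function name and example values chosen purely for illustration:

def sea_surface_height_cm(wavelength_km, gravity_mgal):
    # Approximate sea-surface (geoid) relief for a sinusoidal gravity
    # anomaly, using the rule of thumb above.
    return 0.016 * wavelength_km * gravity_mgal

# A 10 mGal anomaly of 20 km wavelength deflects the sea surface by
# only ~3.2 cm, hence the need for centimetre-level radar altimetry.
print(sea_surface_height_cm(20.0, 10.0))  # -> 3.2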

10.10 Exploring for Oil by Satellite
(taken from Technical Review Middle East, by J D Fairhead, May/June 1997)

Modern satellite technology is showing that the sea surface contains information that enables oil companies to image sub-surface geology and structure in new and remarkable ways.

Satellite methods are being used on a daily basis by oil and mineral companies for communications, position location, monitoring the environment and mapping geological structure. However, over marine areas these mapping techniques are inoperable, except for the detection of oil seeps. This contribution looks at how oil companies are getting over this problem by using topographic maps of the sea surface to image the geological structure. Examples are drawn from the central Red Sea which clearly demonstrate the methods and techniques being used.

The gravity method has played an important historic role in the search for hydrocarbons. In the 1950s seismic reflection methods became the primary exploration tool, sometimes to the exclusion of all other methods. However, recent trends within oil companies to make them leaner and more cost efficient have encouraged explorationists to adopt a multi-disciplinary approach to exploration, where gravity (and magnetic) methods play a role at all stages of the exploration programme.

What is the gravity method? The method maps small distortions in the Earth's gravitational field to accuracies as small as 1 part in 10^9 of the Earth's field. These distortions are caused by the spatial variation in the subsurface mass distribution, or density changes due to the presence of differing rock types. Land gravity measurements are made by instruments called gravity meters, which contain a small mass suspended from the end of a very sensitive spring system. If the pull of gravity (g) decreases then the gravity meter mass (m) will change its weight (weight = mg) and the spring holding the mass will shorten. By carefully measuring the change in spring length, the change in gravity (g) can be measured. Over a salt dome, the low density salt will result in a decrease in gravity, or gravity low. The width and amplitude of the gravity low will be a function of the size (volume) of the salt body, the density difference compared to the surrounding rocks, and its depth. Gravity effects decrease with distance according to the inverse square of the distance (Newton's gravitational law).
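
The decay of the gravity effect with depth and distance can be illustrated with the textbook expression for a buried sphere, a crude model of the salt dome just described; a minimal sketch, with the radius, depth and density contrast invented purely for illustration:

import numpy as np

G = 6.674e-11                           # m^3 kg^-1 s^-2

def sphere_gz_mgal(x_m, depth_m, radius_m, drho):
    # Vertical gravity anomaly (mGal) along a profile over a buried
    # sphere of density contrast drho (kg/m^3), centre at depth_m.
    mass = (4.0 / 3.0) * np.pi * radius_m**3 * drho
    gz = G * mass * depth_m / (x_m**2 + depth_m**2) ** 1.5   # m/s^2
    return gz * 1.0e5                                        # -> mGal

# Salt dome sketch: radius 1 km, centre at 2 km depth, -250 kg/m^3
x = np.linspace(-10e3, 10e3, 201)       # profile positions (m)
gz = sphere_gz_mgal(x, 2000.0, 1000.0, -250.0)
print(gz.min())                         # ~ -1.7 mGal low over the dome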

Land gravity surveys over the major sedimentary basins in the Middle East were completed in the 1960s. Although these data are more than 30 years old, they are of excellent quality and are nearly as good as if they were collected today. This is not the case for seismic data, where the technologies of data acquisition and processing have steadily improved with time. Thus many oil companies are often unaware of the important gravity data asset they hold in their archives. This asset can be realised by careful computerising and reprocessing of the data.

In marine areas of the Gulf, Arabian Sea and Red Sea there is generally poor coverage of gravity measurements, due in part to the lower priority put on these areas in the past. Since 1995, satellite-derived gravity has revolutionised the imaging of the gravity field of oceanic basins, resulting in oil companies now taking more notice of this satellite method as an exploration tool.

So what is satellite gravity?

The satellite-based method of determining gravity is totally different in concept to collecting gravity measurements with a




Figure 10/23: Satellite Gravity of the central Red Sea
A) tracks B) Sea Surface Heights C) Free air
anomaly D) Isostatic Residual anomaly.

gravity meter. Since 1979 a number of satellites (Seasat, Geosat, ERS-1, Topex-Poseidon) have used radar altimeters to measure sea surface heights. Radar signals are sent at approximately 1000 pulses per second towards the sea surface. The signals are reflected back to the satellite such that their transit time, and hence distance, can be measured to centimetre accuracy.

If sea water were static, i.e. devoid of tides, currents, temperature variations and air-sea interactions, then it would represent an equipotential surface of the Earth's gravity field. This surface can vary in topographic relief by as much as 100 metres worldwide relative to the best fitting ellipsoid. By measuring the sea surface heights and applying the corrections hinted at above, the sea surface can be mapped and mathematically transformed into the gravity field by determining the vertical gradient of the sea surface. Although the satellite is 800 km above the Earth, it is the sea surface gravity that is measured, not the gravity field at the satellite height.
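
In the flat-Earth approximation, the 'determining the vertical gradient' step is a multiplication in the wavenumber domain: each Fourier component of the geoid height N is scaled by gamma*|k| (gamma = normal gravity, k = angular wavenumber) to give the free air anomaly. A minimal 1-D sketch, assuming an evenly sampled, periodic geoid profile:

import numpy as np

def geoid_to_gravity_mgal(N_m, dx_m, gamma=9.81):
    # Convert a 1-D geoid-height profile N (metres, sample spacing
    # dx metres) to free air gravity (mGal) via
    #   delta_g(k) = gamma * |k| * N(k)   (flat-Earth approximation)
    n = len(N_m)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx_m)   # angular wavenumber
    dg = np.fft.ifft(gamma * np.abs(k) * np.fft.fft(N_m)).real
    return dg * 1.0e5                             # m/s^2 -> mGal

# A 3.2 cm geoid undulation of 16 km wavelength maps to a ~12 mGal
# anomaly, consistent with the rule of thumb given in Section 10.9.
x = np.arange(2048) * 500.0                       # 500 m samples
N = 0.032 * np.sin(2.0 * np.pi * x / 16e3)
print(geoid_to_gravity_mgal(N, 500.0).max())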

To illustrate the principles of the method the radar sea
surface height map of the central portion of the Red Sea
is used.

Figure 10/23 A shows the coverage of Geosat and ERS-
1 Geodetic Mission satellite track data that became
available in the mid 1990s to map the sea surface
heights.

Figure 10/23 B shows the resulting map of sea surface
heights while Figure 10/23 C has transformed the sea
surface into the gravity field. This transformation
enhances the high frequency components of the sea
surface map.


What can the gravity map tell us about geology?

The variations in gravity values imaged in Figure 10/23 C result from changes in density structure at the sea bed and deeper. Near-surface bathymetry generates a major density (or mass) boundary between rock and water, and this tends to dominate the gravity signal. Gravity signals from density contrasts between low-density sediments and higher density basement rocks superimpose on the bathymetry-induced gravity signal. Deeper crustal density variations will also affect the gravity response; however, these deeper structures will tend to be isostatically compensated, if they extend over distances greater than about 150 km, by increases or decreases in the bathymetric depth.

To image the sub-sea bed geology, the effects of the bathymetry and deep crustal structure can be removed by transforming the gravity field into the isostatic anomaly (Figure 10/23 D), which better images geological structure at the depths of interest (i.e. down to 10 km).

The Red Sea is an ideal place to show the power of the satellite technique in action. The Red Sea is considered an embryonic oceanic basin where the Nubian (African) plate is moving south-westwards away from the Arabian plate. In so doing, the crust has been stretched, causing it to thin and subside due to isostatic forces. This has generated a marine sedimentary environment. Plate movements have continued and the crust has now ruptured, allowing submarine volcanism to develop and infill the gap between the plates, forming new oceanic crust (see Figure 10/24).



Figure 10/24: Schematic cross section of the crustal
structure of the Red Sea

Since the oceanic crust has the ability to retain the direction of the Earth's magnetic field at the time of emplacement, it has, like a slow moving tape recorder (~1 cm per year), recorded the numerous reversals of the Earth's magnetic field. The timing of these reversals is well established, and thus airborne magnetic surveys over the Red Sea can map the opening of the oceanic basin and determine the age of the oceanic crust (Figure 10/25).

Figure 10/25: Magnetic anomaly map of central
portion of the Red Sea

The oil industry has thus used gravity and magnetic survey data to help map the regional setting of sedimentary basins and the tectonics that control them. Profile and spatial grid data are used to construct 2- and 3-dimensional geological models to test seismic models and/or to refine them. Currently the wavelength resolution of satellite data lies between 15 and 30 km, but GETECH has shown that by careful processing this can be improved to sub-10 km resolution. At such resolutions satellite gravity provides a rapid, cheap means of evaluating large areas of the continental margins in the search for hydrocarbon trapping structures.

10.11 General References

Haxby, W. F., 1987. MAP: Gravity Field of the World's Oceans. National Geophysical Data Center, NOAA, Boulder, CO.

Laxon, S. and McAdoo, D., 1994. Arctic Ocean gravity field derived from ERS-1 satellite altimetry. Science, v265, p621-624.

Maus, S., Green, C. M. and Fairhead, J. D., 1998. Improved ocean geoid resolution from repicking ERS-1 satellite altimeter waveforms. Geophys. J. Int., v134, p243-253.

Olgiati, A., Balmino, G., Sarrailh, M. and Green, C., 1995. Gravity anomalies from satellite altimetry: comparison between computation via geoid heights and via deflection of the vertical. Bull. Geod., v69, p252-260.

Sandwell, D. T. and Smith, W. H. F., 1997. Marine gravity anomaly from Geosat and ERS-1 satellite altimetry. J. Geophys. Res., v102, p10,039-10,054.

Fairhead, J. D., Green, C. M. and Odegard, M. E., 2001. Satellite-derived gravity having an impact on marine exploration. The Leading Edge (SEG publication), Aug 2001, p873-876.

Fairhead, J. D., Green, C. M. and Fletcher, K. M. U., 2004. Screening of the deep continental margins using non-seismic methods. First Break, v22, p59-63.


10.12 Status of Satellite Gravity in 2010
Taken from: 'Trident: A New Satellite Gravity Model for the Oceans', EAGE Amsterdam 2009 extended abstract by J.D. Fairhead* (GETECH / University of Leeds), S.E. Williams (GETECH), K.M.U. Fletcher (GETECH), C.M. Green (GETECH) & K. Vincent (University of Leeds)
Summary
This EAGE extended abstract had two aims: to discuss the resolution of currently available gravity solutions for the world's oceans derived from satellite altimetry, and to present a new gravity solution that has superior resolution to these solutions. We use comparisons with marine gravity survey data to evaluate and compare the public domain satellite-derived gravity solutions and GETECH's proprietary solution. Both qualitative and quantitative analysis shows that the resolution of the GETECH solution (originally generated in 2002) was clearly superior to that of the public domain datasets then available. Versions of the public domain gravity solutions available in 2008 (S&Sv16, DNSC08) show significant improvements over their predecessors, with a resolution comparable with the GETECH solution. A stacked solution, combining all three datasets, shows an even greater level of resolution based on qualitative and quantitative analysis. We call this stacked solution 'Trident'. This solution is now considered to approach the limit of resolution possible from available satellite altimetry data.
10.12.1 Summary
This contribution has two aims: to discuss the resolution of currently available gravity datasets for the world's oceans derived from satellite altimetry, and to present a new gravity dataset with superior resolution to other existing solutions. In 2002, GETECH developed a new satellite gravity model for the continental margins of the Earth using ERS-1 and Geosat altimeter data. This gravity model covers all the continental margins out to 500 km from the shoreline (Figure 10/26). The GETECH gravity model significantly outperformed both the Sandwell and Smith (S&Sv11) and Danish (KMS02) satellite gravity model solutions that were then available as public domain models.



Figure 10/26: Coverage of GETECH's Global Continental Margins Gravity Study (GCMGS).

In 2008 both the Sandwell and Smith (S&S v16) and Danish (DNSC08) solutions had significantly improved in terms of resolution, such that they now appear comparable with GETECH's 2002 solution. Differences between the solutions appear to be random and of low amplitude. Residual orbital track noise is observed at some locations along the continental margins in the Sandwell and Smith solution. Quantitative analysis shows the solutions have comparable spatial resolution, and that by stacking the three solutions a further measurable improvement in resolution is obtained. We call this new three-solution stack 'Trident'. When compared to a number of high resolution marine gravity surveys, the stacked solution outperforms each of the individual solutions.
10.12.2 Methodology to convert altimeter
measurements to gravity
If the sea were perfectly still, and thus unaffected by wind, temperature, currents and tides, then the sea surface would be an equipotential surface of the Earth's gravitational field, and its vertical gradient would provide a measure of the free air gravity. To measure the sea surface height variations, satellites use radar altimeters which can measure height changes of the order of 1 cm from an altitude of 800 km (satellite height). To generate the free air gravity field from these altimeter measurements requires a set of precise corrections to remove the perturbing factors listed above (see Fairhead et al., 2001a & b for these corrections). In addition, radar propagation corrections through the dry and wet troposphere need to be made. To date there have been only two geodetic missions, those of Geosat and ERS-1, which collected altimeter data with track spacings of 5 km and 8 km respectively. Since the orientation of their orbits is different, combining the results of these two missions has allowed an effective track spacing of 3-4 km to be obtained.
Two methods have been used to convert altimeter data to gravity: that of the along-track gradient, favoured by Sandwell and Smith (1997), and that of the geoid to gravity, used by GETECH and DNSC (Danish National Space Center, formerly KMS). In the latter method sea-surface heights can be considered as geoid heights, and mapping this surface from the satellite data allows a more robust method of spatial interpolation compared to interpolating a derivative component (gradients) as used by Sandwell and Smith (1997), especially in the presence of noise and where data coverage is irregular.
10.12.3 Evolution of Satellite Gravity Models (2002-2008)
In 2002: At this time there were two major public domain satellite-derived free air gravity models for the oceans, those of Sandwell and Smith (1997, v11) and Andersen and Knudsen (1998, KMS02), and one proprietary solution by GETECH (Fairhead et al., 2001b, 2004). The GETECH solution significantly outperformed the public domain solutions. The reason for this was that GETECH had recognized that the agency responsible for processing, particularly of the ERS-1 data, had not tracked the radar waveforms very accurately. GETECH thus developed a robust adaptive method of re-tracking (or re-picking the radar signal onset) that was able to respond automatically along orbital tracks to changes in the quality of the radar signal. This resulted in a 5-fold increase in resolution and an ability to track the radar waveforms to within 2-5 km of the coastline. In addition, the method involves levelling orbital tracks by cross-over-error corrections followed by GETECH's proprietary micro-levelling methods. The important aspect of this levelling technology is that the process does not result in any low pass filtering, thus retaining all short wavelength signals present in the raw sea-surface height data.

Qualitative comparison of the 2002 solutions revealed that orbital track noise and areas within 50 km of the shoreline were a major problem. This was clearly seen when the solutions were viewed in the form of Total Horizontal Derivatives.

Quantitative comparison of the solutions was undertaken by Vincent (2008) over selected marine areas covered by higher resolution marine gravity data. Figure 10/28 shows the results from one of these areas in the East Java Sea (marine survey data courtesy of BP). Three analytical methods were used: 1) correlation coefficients (space domain), using a series of high pass filters from 1.0 down to 0.2 degrees in steps of 0.1 degree, which were applied to both the satellite solutions and the marine data prior to correlation; 2) the same method, but determining the Root Mean Square (RMS) error between grids, with the marine dataset acting as the reference dataset; 3) frequency domain coherence, equivalent to the space domain correlation method. The methods are all valid, but evaluating a broader range of wavelengths in methods 1) and 2) separates out the resolution of the datasets in a more convincing manner. Figure 10/28 clearly shows that the GETECH solution showed the greatest consistency with the marine survey data, followed by the Danish KMS02 and Sandwell and Smith (v11) solutions (coherence plots for these early solutions are not shown in Figure 10/28).
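
Methods 1) and 2) are straightforward to reproduce; a minimal sketch, assuming the satellite and marine data have already been interpolated onto a common grid, with a simple Gaussian high-pass standing in for whatever filter was actually used by Vincent (2008):

import numpy as np
from scipy.ndimage import gaussian_filter

def highpass(grid, sigma_cells):
    # Crude high-pass: subtract a Gaussian-smoothed version.
    return grid - gaussian_filter(grid, sigma_cells)

def compare(satellite, marine, sigma_cells):
    # Correlation coefficient and RMS difference between high-pass
    # filtered versions of two co-registered grids; the marine grid
    # acts as the reference dataset.
    a = highpass(satellite, sigma_cells).ravel()
    b = highpass(marine, sigma_cells).ravel()
    r = np.corrcoef(a, b)[0, 1]
    rms = np.sqrt(np.mean((a - b) ** 2))
    return r, rms

# Repeat over a range of cutoffs (cf. 1.0 down to 0.2 degrees) and
# rank the satellite solutions by their consistency with marine data.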

In 2008: The newest and current public domain solutions are those of Sandwell and Smith (v16) and the Danish National Space Center (DNSC08); DNSC incorporated the activities of KMS. Both solutions have used independent methods to re-track both the ERS-1 and Geosat data, which has resulted in major improvements to their input datasets; however, both have retained their preferred transformation methods of gradients to gravity and geoid to gravity respectively. Qualitative comparisons by Vincent (2008) revealed a significant improvement in the public domain solutions. When these solutions are viewed in derivative mode they look very similar to the GETECH-2002 solution, with little orbital track noise and far better quality of solution within 50 km of the shoreline. An exception was found in the region of the mouth of the Amazon River (Figure 10/27), a notorious region with mixing of seasonal fluxes of fresh water. The three models shown in Figure 10/27 reveal that the Sandwell and Smith solution performed the poorest and still has significant orbital track noise. This is a general global problem for their model at near-shore locations (Andersen, pers. comm., 2008).


A: S&S (v16)   B: DNSC08 solution   C: GETECH-2002 solution

Figure 10/27: Qualitative comparison between the free air anomaly (FAA) datasets for the mouth of the Amazon. Subsets show the total horizontal derivative (THD), which is a powerful means of identifying systematic noise within a given dataset.


Figure 10/28: Quantitative comparison: statistical analysis of the 2002 & 2008 datasets for the East Java Sea, with 1) correlation coefficients and 2) Root Mean Square (RMS) computed on the same spatial domain grids, whereas 3) evaluated the data using the spectral coherence method.



Figure 10/29: A: Satellite gravity spatial resolution past, present and future (after Fairhead et al., 2001b) and B: GEBCO bathymetry (light blue) and modifications based on available higher resolution bathymetry.

Quantitative comparisons are shown in Figure 10/28 and reveal that over the East Java Sea comparison site there is little to separate the resolution of the three solutions in both wavelength and amplitude content. This was confirmed at other comparison sites (not shown). Inspection of the solutions, particularly their derivative maps, reveals small scale differences in amplitude which are generally random in nature. In individual solutions, small amplitude systematic noise can be identified which does not correlate with similar signals in the other solutions, so in practice this can be considered as random noise between solutions. On this basis it was decided to stack the three solutions to see if this would improve the resolution. Spectral coherency between the marine and stacked satellite datasets shows a 0.5 coherency at ~13 km full wavelength, compared to 14-15 km for the individual solutions. Comparison of high-pass filtered grids (i.e. focusing on short wavelength correlation) shows an RMS difference between marine surveys and the stacked solution close to ~2 mGal.
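
Because the systematic noise in one solution does not correlate with that in the others, a straight average suppresses it while preserving the common geological signal; a minimal sketch, in which the three grid variable names are hypothetical and the grids are assumed co-registered:

import numpy as np

def stack_grids(grids):
    # Average co-registered FAA grids, ignoring NaN gaps; uncorrelated
    # noise falls by roughly 1/sqrt(n) while the common signal is kept.
    return np.nanmean(np.stack(grids), axis=0)

# trident = stack_grids([sandwell_smith_v16, dnsc08, getech_2002])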
10.12.4 Conclusions
It appears that a major factor in the improvement in satellite gravity resolution has been the re-tracking of the data, which Fairhead et al. (2001b & 2004) identified in 2001. The public domain solutions have thus undergone a resolution catch-up process in the intervening period. The resolutions of the three solutions are now remarkably similar, suggesting that little additional improvement will occur until new radar or laser acquisition methodologies are introduced. This could include scanning systems that could provide altimeter swathes (see Figure 10/29A). The new GETECH stacked solution, called here 'TRIDENT', has the best resolution compared to the individual solutions.

For satellite-derived free air gravity datasets to have maximum use in the Earth sciences (e.g. to evaluate crustal structure), there is a need to image the bathymetry to at least a similar resolution or better (Figure 10/29B). This then allows the free air gravity to be accurately converted to the Bouguer and Isostatic Residual gravity fields. Global bathymetry is currently one of our least well-known global parameters. GETECH continues to update the GEBCO bathymetry model whenever better resolution data are available, as shown by the colour insets in Figure 10/29B.
References
Andersen, O. B. and P. Knudsen, 1998. Global marine gravity field from the ERS-1 and Geosat geodetic mission altimetry. Journal of Geophysical Research, v103, p8129-8137.

Fairhead, J. D., C. M. Green and W. G. Dickson, 2001a. Oil exploration from space: fewer places to hide. First Break, v19, p514-519.

Fairhead, J. D., C. M. Green and M. E. Odegard, 2001b. Satellite-derived gravity having an impact on marine exploration. The Leading Edge, v20, p873-876.

Fairhead, J. D., C. M. Green and K. M. U. Fletcher, 2004. Hydrocarbon screening of the deep continental margins using non-seismic methods. First Break, v22, p59-63.

Sandwell, D. T. and W. H. F. Smith, 1997. Marine gravity anomaly from Geosat and ERS-1 satellite altimetry. Journal of Geophysical Research, v102, p10,039-10,054.

Vincent, K., 2008. Evaluating the accuracy and resolution of satellite-derived gravity and its role in exploration. MSc thesis, University of Leeds.


SECTION 11: GLOBAL GRAVITY DATA AND
MODELS


11.1 EGM08 Geopotential model

The current public domain gravity model of the Earth is known as EGM08 (Earth Gravity Model 2008).


Figure 11/1: Free air gravity field of EGM08 with grid cell size of 5′ (~10 km).

The data used to construct the global grid are shown in Figure 11/2.



Figure 11/2: Data used to construct EGM08

The long wavelength part of the EGM08 model contains GRACE satellite data up to degree n=150 (or down to wavelengths of ~270 km). In marine areas, satellite altimeter data (blue and green in Figure 11/2) extend the solution to harmonic degree and order m=n=2,159 (or down to ~18 km).

In land areas, the coverage, and thus the resolution, is highly variable. The dark grey areas have resolutions down to ~18 km and the light grey areas down to 30′ (or ~60 km) resolution.

The EGM08 model is provided as a grid of free air gravity (FAA) or as a set of harmonic coefficients. The most useful for exploration is the free air anomaly grid.





However, a health warning ('fit for purpose') needs to be stressed for EGM08. In marine areas the best possible grid cell size is 5′, which is close to the 10 km resolution claimed for marine data. Thus the 1′ or 2′ grids provided by Sandwell and Smith, the Danish Space Center and GETECH, or the combined solution called 'Trident' (see Section 10.12.3), provide the best resolution marine data suitable for exploration purposes. To convert marine FAA to Bouguer or Isostatic Residual gravity anomalies, the spatial coverage of bathymetry needs to be known. The best public domain bathymetry is GEBCO (see Section 10, Figure 10/29, for the GEBCO bathymetry updated by GETECH), which is based on depth soundings. Beware: Sandwell and Smith have generated a predicted bathymetry grid from the FAA data, but this should never be used in converting FAA to Bouguer or Isostatic Residual anomalies, since the bathymetry is basically derived from the FAA and thus anomalies from subsurface geological structures will be compromised (i.e. they will be seen in part or in whole as bathymetric expressions).

In land areas no information other than Figure 11/2 indicates the data coverage for EGM08. To know whether an area is covered by gravity data requires knowledge of the station coverage, which is not available in the EGM08 model. In areas of no data, an estimated gravity field is provided from the longer wavelength part of the EGM solution, corrected by the free air correction based on the best available topography. This makes the no-data areas appear to have data at first glance. An example is Angola, where for most of the country no data coverage was used in EGM08 (see Figure 11/3).



Figure 11/3: Land gravity station coverage for Angola used in EGM08.

The FAA from EGM08 for the area shown in Fig. 11/3 is shown in Figure 11/4. The complexity of the field over eastern Angola derives from the free air correction of the topography shown in Figure 11/5, not from the complexity of the sub-surface geology.



Figure 11/4: Free air anomaly for Angola based on
EGM08 model



Figure 11/5: Topography map of Angola reflected in
the FAA shown in Figure 11/4.

Another example is from SW Sudan, where the true 5′ grid of gravity is seen in Figure 11/6 and the EGM08 grid in Figure 11/7, which is based on a decimated grid of at least 15′ cell size.

Thus grid cell size indicates the limit of the grid resolution but does not necessarily reflect the data resolution, which can be much greater than twice the cell size.



Figure 11/6: True resolution of a 5′ grid for Southern Sudan (minimum wavelength ~10 km) (GETECH).


Figure 11/7: EGM08 anomaly field for the same area as shown in Figure 11/6. Resolution is 30 km, based on the 15′ grid supplied by GETECH to EGM08.

New updates of EGM08 will incorporate the long wavelength components of the geoid surface measured by the GOCE satellite mission, which was launched in 2009 (see next section). GOCE will extend the GRACE component of EGM08 from n=m=150 (or 270 km) to about n=m=300 (or 135 km). To see the short wavelength features of oil exploration interest, terrestrial gravity data are still needed over the continents and satellite altimetry over the oceans.

11.2 GOCE mission

The GOCE satellite, like the CHAMP satellite (see Section 14), is aerodynamically designed to reduce the effects of atmospheric drag. CHAMP operated at a height of ~400 km, which gradually decreased over 10 years to <300 km in late 2010 before it burnt up in the Earth's atmosphere. It had only a limited booster propulsion system to move it back into higher orbits. The GOCE




Figure 11/8: The aerodynamic GOCE satellite



Figure 11/9: Image of GOCE satellite launched in
2009 and its payload.


Figure 11/10: GOCE's first gravity model, from 2 months of data. The gradiometer is likely to see anomalies to m=n=300. This compares to the EGM08 potential field model, which has resolution down to degree and order m=n=2,159 (~18 km) and uses previous satellite data (mainly the GRACE satellite) out to degree n~150 (or 270 km); this component will be replaced by GOCE data out to n~300.

mission, on the other hand, is at 254.9 km altitude and has ion engines to keep it stable at this altitude against atmospheric drag. The UK-built engine ejects xenon ions at velocities exceeding 40,000 m/s; the engine throttles up and down to keep GOCE at a steady altitude.

The first gravity model from GOCE is shown in Figure 11/10.

GOCE carries a three-axis gravity gradiometer measuring Txx, Tyy and Tzz (where Tz = g). With its three pairs of accelerometers, this state-of-the-art instrument measures gravity with unprecedented accuracy. Within its measurement band, each accelerometer can detect accelerations to within 1 part in 10,000,000,000,000 (10^13) of the Earth's surface gravity.

For further reading go to
http://www.esa.int/SPECIALS/GOCE/SEMZQ2VHJCF_0.html



SECTION 12: ADVANCES IN GRAVITY SURVEY RESOLUTION

(from Fairhead and Odegard, 2002)


Figure 12/1: Resolution Trends of Gravity Systems: as of mid 2001. Arrow points represent 2001

Major technological advances have been made over the last few years in gravity resolution for many of the acquisition systems currently available for exploration. These advances have been due to better instrumentation, better use of GPS, and better processing methods (see Sections 4-9 and 11). This in turn has led to a renaissance in the use of gravity in modern multi-disciplined, cost-effective oil and mineral exploration. This introduction shows how gravity resolution has improved with time, rather than how improved resolution is being used to investigate and map subsurface density structures.

Resolution is the ability to separate two features that are very close together. For gravity, this can be expressed in terms of the accuracy of the measuring system (in mGal) at the shortest resolvable signal wavelength (km). Current practice defines gravity wavelength as the half sine wave distance (1/2 wavelength) and this definition is used here. Because gravity measurements are generally collected along profile lines, a survey's spatial resolution largely depends on profile line spacing.

Resolution for conventional land and/or seabed gravity surveys (static surveys), using high performance gravity meters, is simply a function of the spatial coverage of observation points. However, resolution for shipborne and airborne surveys (dynamic surveys) is influenced by a range of noise components induced by uncertainties and variability in speed, position, sea state and air turbulence. Thus, resolution claims are compared to the best possible obtained under ideal survey conditions. Resolution of marine gravity surveys degrades significantly with worsening sea state. In airborne surveys, flying straight and level with no turbulence (ideal conditions) is generally not achieved. On the other hand, recent improvements in airborne gravity resolution have revealed the inadequacy of the ground static measurements used to quantify this resolution.



Figure 12/1 shows the time-trend plot of resolution for the range of systems currently used by the oil industry. Static measurements have been discussed above. The other commonly available techniques are satellite-derived gravity, shipborne and airborne gravity, and gravity gradiometry.

Satellite-derived gravity relies on satellite radar altimetry mapping of the marine geoid surface and then transforming it, essentially by determining the vertical gradient, to free air gravity (Fairhead et al., 2001). The trend of resolution with time has primarily been controlled by improvements in spatial coverage and better picking of radar reflections. This has improved resolution from 20 mGal @ 25 km in the mid-1980s, to about 5 mGal @ 12 km in the mid-1990s, to ~3 mGal @ 5 km today. Swathe radar mapping may be a way to achieve even higher resolution.

The breakthrough in shipborne gravity resolution in the mid-to-late 1980s is credited to Edcon, which used GPS to monitor Eötvös effects and upgraded the LaCoste & Romberg (L&R) S-meter to the SAGE meter, which could make the 1-s sampling necessary to track the Eötvös effect. Prior to this, in the 1970s, 1-minute and then 10-s sampling were the norm and resolution varied from 2 mGal @ 1 km to 0.5 mGal @ 0.5 km (the latter for stand-alone surveys in calm weather). After the introduction of the SAGE meter in the mid-to-late 1980s, resolution dramatically improved to 0.2 mGal @ 0.25 km. Since then, the slump in oil prices has reduced the principal contractors to two (Fugro-LCT and AEI) and only minor resolution improvements have been achieved since.
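
The Eötvös effect that GPS made trackable is given by the classical correction formula (speed V in knots, latitude and heading in degrees, result in mGal); a small sketch:

import numpy as np

def eotvos_mgal(speed_knots, latitude_deg, heading_deg):
    # Classical Eotvos correction for a moving platform:
    #   7.503 * V * cos(lat) * sin(heading) + 0.004154 * V**2
    # with V in knots and heading measured clockwise from north.
    lat = np.radians(latitude_deg)
    hdg = np.radians(heading_deg)
    return (7.503 * speed_knots * np.cos(lat) * np.sin(hdg)
            + 0.004154 * speed_knots**2)

# A 5-knot ship heading due east at 30 N gives ~32.6 mGal, far larger
# than the sub-mGal anomalies sought, hence the need for 1-s sampling.
print(eotvos_mgal(5.0, 30.0, 90.0))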

Since the mid-1990s, there has been considerable effort to convert dynamic gravity R&D research in airborne gravity into high-resolution commercial systems. These new systems are likely to dictate future trends in potential field acquisition. Airborne gravity can be traced back to early tests in the 1970s; Carson Services introduced helicopter-mounted gravity systems in 1977 (claimed resolution was ~1 mGal @ 5-8 km) and then fixed-wing systems in the mid-1980s. Carson was the unrivaled champion of airborne gravity until the mid-to-late 1990s. Mastering of DGPS, upgrading of the Carson system, and the introduction of rival acquisition systems by Edcon, Fugro, and Sander (to name but three) resulted in resolution claims of 0.3 mGal @ 1 km for helicopter-mounted systems traveling at ~50 knots and 0.2-1 mGal @ 2 km for fixed-wing systems traveling at ~100 knots (Argyle et al., 2000 and Gumert and Phillips, 2000).

Sander's AIRGrav, the newest of these systems, uses accelerometers instead of the L&R devices used by the other contractors. The AIRGrav system uses the same principles as an inertial navigation system with gyros, but with no attempt to null the horizontal accelerations. The accelerometers have lower noise (by a factor of 2-3) and higher resolution because they do not have the nonlinearity of the L&R system; as a result, variations in vertical acceleration can be more accurately tracked and removed during processing. A unique feature of the AIRGrav system is that it is unaffected by air turbulence, which makes survey costs lower and allows the system to be drape flown. Gravity gradiometers are essentially Lockheed Martin instruments that can measure 3-D gravity gradients and tensors. Accuracy claims based on Gulf of Mexico surface ship measurements by Bell Geospace put the system at 0.5 Eötvös (0.05 mGal/km). The modified airborne system, developed by BHP and flown by Sander Geophysics, generated impressive vertical gravity gradient data (Van Leeuwen, 2000).

Recommendations: Resolution will generally be below that shown in Figure 12/1 due to non-ideal survey conditions. Thus, monitoring resolution (i.e., checking resolution claims to see if a survey remains within specifications) is a challenge for an oil company. For marine gravity surveys undertaken as part of 2-D or 3-D seismic surveys, one would appear limited to monitoring the occasional repeat lines, adjacent line comparisons, and evaluating crossover levels. Best practice suggests that gravity/navigation/bathymetry should be continuously recorded throughout the survey. By so doing, data coverage can increase by 100% and a significant number of cross-lines can be generated at no extra cost. Standalone surveys (marine and airborne) have greater opportunities to follow best practice procedures. For airborne surveys, this would be to fly the same test line at the start and end of each flight during a survey. The test line could be compared to ground-based measurements if available. Such a procedure gives comfort that the gravity system and processing are producing consistent results from flight to flight that are within specification.

Resolution Update to 2010

The main developments have been in:
Ground gravity: none
Marine gravity: none
Airborne gravity: yes, see Section 8.
Satellite gravity: yes, due to the Trident dataset, see Section 10.






MAGNETICS

Section 13 Time Variations of the Geomagnetic Field

Section 14 Magnetometers & Satellite Data &Models

Section 15 Aeromagnetic Surveying



SECTION 13: GEOMAGNETIC FIELD AND TIME
VARIATIONS

13.1 Introduction

Magnetic Anomaly (due to geology) = Tobs - Tth - Ttv

where
Tobs = absolute field measured during the survey
Tth = smoothed Geomagnetic field derived from a mathematical model of the Geomagnetic field
Ttv = transient variations of the Geomagnetic field

Before one tries to measure Tobs it is important to fully understand Tth and Ttv, since both terms vary with time.
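
This reduction is simple enough to express directly; a minimal sketch, with the example values (observed field, model field and diurnal correction) invented purely for illustration:

def magnetic_anomaly_nt(t_obs, t_th, t_tv):
    # Crustal (geological) anomaly left after removing the modelled
    # main field (Tth) and the transient variation (Ttv), all in nT.
    return t_obs - t_th - t_tv

# e.g. 33,410 nT observed, 33,350 nT model field, 25 nT diurnal
print(magnetic_anomaly_nt(33410.0, 33350.0, 25.0))  # -> 35.0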

13.2 Smoothed Geomagnetic Field (Tth)

Assuming no transient variations (Ttv), the Earth's magnetic field (or Geomagnetic field) has two components:

i) the main dynamo field, which represents ~99% of the Tobs field;

ii) the crustal field, which represents ~1% of the Tobs field.

Removing Tth from Tobs therefore leaves just the small crustal component (~1% of the main field) that we are interested in.



Figure.13/1: The geomagnetic dipole field

In 1980 Magsat (a satellite with a 3-component fluxgate and an optically pumped magnetometer on board) was used to initially determine the main field Tth. An update of this power spectrum study using CHAMP is shown in Fig. 13/2. As previously, it shows a very clear division between the dynamo field located at the core/mantle boundary and the crustal field. The main (dynamo) field is now known (via satellite observations) to be important up to spherical harmonic degree 16.

Why is there such a sudden change of slope at harmonic 16?

First we need to define harmonic degree in terms of wavelength:

Wavelength (km)                          Degree n
40,000 (circumference of Earth)          1
2,500                                    16
200                                      200

i.e. wavelength of field = (circumference of Earth) / n
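
The conversion between harmonic degree and wavelength is a one-liner; a minimal sketch implementing the relation above:

def harmonic_wavelength_km(n, circumference_km=40000.0):
    # Surface wavelength corresponding to spherical harmonic degree n.
    return circumference_km / n

for n in (1, 16, 150, 2159):
    print(n, round(harmonic_wavelength_km(n)))
# prints 40000, 2500, 267, 19 (km); compare the ~270 km and ~18 km
# figures quoted for the EGM08 model in Section 11.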

The main cause of magnetisation in crustal rocks is magnetite. As one goes deeper into the crust the temperature increases, and magnetite becomes non-magnetic at about 600°C; this is the Curie temperature for magnetite. The depth at which the temperature reaches 600°C (often referred to as the Curie depth) depends on the heat flow: at mid-oceanic ridges it is a few kilometres, increasing away from the ridges, whereas beneath continents it can reach, and in places exceed, 30 km. Only rocks above the Curie isotherm (the temperature at which rocks lose their ability to retain magnetism, approximately 550-600°C for most rocks) are magnetised, and this isotherm occurs at crustal depths of less than about 30 km.

The Curie isotherm occurs at a range of depths depending on the heat flow, but can be as deep as 20 km or more below continents and as shallow as 2 km at oceanic ridges. Between the Curie depth and the core-mantle boundary (at 2,900 km) the rocks have no magnetic properties. The slope of the power spectrum gives the depth of the magnetic sources: harmonics 1 to 16 give the core-mantle boundary depth, and harmonics 16 and greater give crustal depths. Thus the main dipole part of the Geomagnetic field can be very well described using spherical harmonics up to degree 16. A mathematical model of the Earth's magnetic field to harmonic degree and order 16 is now available. This field was originally known as the International Geomagnetic Reference Field (IGRF). This field is however not static but changes with time.
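
For local survey grids the depth-from-slope idea is usually applied in its flat-Earth form (the Spector and Grant approach), where the log power spectrum of data from sources at depth z falls off as ln P(k) ~ c - 2kz. A minimal sketch, assuming a radially averaged spectrum is already available:

import numpy as np

def spectral_depth_km(k_rad_per_km, log_power):
    # Estimate the average source depth (km) from the slope of the
    # log power spectrum: ln P(k) ~ c - 2*k*z (flat-Earth form), with
    # k the angular wavenumber in rad/km.
    slope, _ = np.polyfit(k_rad_per_km, log_power, 1)
    return -slope / 2.0

# Synthetic check: sources at 5 km depth give a slope of -2*5 = -10.
k = np.linspace(0.05, 0.5, 20)
print(spectral_depth_km(k, 3.0 - 10.0 * k))   # -> 5.0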

Secular Variation: Since the Geomagnetic field changes with time (secular variation, see Fig. 13/3), the IGRF model has had to be redefined at intervals, for 1965, 1975, 1980, 1985 and 1990, with each model being able to predict changes in the field up to 5 years ahead due to the slow, regular change in the Geomagnetic field (see Figs. 13/3 & 13/4).

One model that replaces all these IGRF models, and takes into account all past time variations, is called the Definitive Geomagnetic Reference Field (DGRF). This model has now evolved into the Comprehensive Model (see Section 13.6).

Figure 13/2: Power spectrum of CHAMP data (red line) showing a clear spectral divide at about n=16 (previously thought to occur at n=13) that separates crustal anomalies (n > 16) from the Earth's main dipole field (n < 16) located at the core-mantle boundary.


Figure 13/3: Diurnal and secular variations in T at a fixed point recorded over a number of weeks. Each tick-mark on the time axis is one day.


Figure 13/4: Secular variation at Melbourne Observatory, 1860 to 1980. Over any five-year period the change can be considered approximately linear.

13.3 Satellite Determination of the
Geomagnetic field

13.3.1 IGRF 1980.0 based on Magsat

To determine the Geomagnetic field requires a uniform coverage of measurements over the entire globe. Prior to satellite observations, scientists had to rely


Figure 13/5: Selection of 2 days of Magsat data during quiet solar activity used to generate the geomagnetic model.

Figure 13/5a: IGRF 1980.0 Total Magnetic Field
Intensity based on Magsat.

Figure 13/5b: IGRF 1980.0 Declination based on
Magsat



Figure 13/5c: IGRF 1980.0 Inclination based on
Magsat.



Figure 13/6a: Total Magnetic Intensity for Epoch 2010.0. For more information on its construction go to:
http://www.ngdc.noaa.gov/geomag/WMM/data/WMM2010/WMM2010_TR_preliminary.pdf




Figure 13/6b: Secular or annual change of the Total Magnetic Intensity field. This image is very accurate since the
satellites have been monitoring the field for a long period of time and have been able to measure the change in
the field strength. Figs 13/6a&b are just two images of the field parameters.

on observatory measurements, which were far from ideal. Magsat was launched on 30 Oct 1979 and burnt up in June 1980 due to its low orbital altitude, which varied from 352 km to 578 km. The inclination of the orbit was I = 97° and the ground track spacing was 150 km. The orbit was synchronous with dawn-dusk and elliptical, making processing of the returned magnetic data more difficult.

Magsat was among the first satellites to measure the Earth's magnetic field at low orbit, so it could see both the dipole field and the crustal magnetic field. Due to its short life (8 months) it was not able to fully identify and map the secular change.

13.3.2 IGRF 2010.0 based on Oersted, SAC-C and CHAMP satellites

The availability of a number of satellites (Oersted, SAC-C and CHAMP; see Section 14) has allowed the most precise measurement of the Geomagnetic field and its secular variation. Figures 13/6a & b provide images of the field for the epoch 2010.0. This means users can accurately determine the field back to 2005 and also predict the field up until 2015.

13.4 Transient Variations (Ttv)



Figure 13/7: Sun, Earth, the solar wind and the Earth's magnetic field.

The Earth's magnetic field extends into outer space and interacts with the solar wind (see Figs. 13/8 and 13/9). The solar wind distorts the Earth's magnetic field, and changes in the solar wind can cause variations in surface measurements of the Geomagnetic field ranging from small amplitude (a few nT) and short period (a few minutes to hours) to large scale (100s of nT over several days).

The spectrum of transient variations is large (see Fig. 13/10), ranging from thousandths of a second to millions of years. These variations can be sub-divided into causes internal and external to the Earth.



Figure 13/8: 3D model of the interaction between the Earth's magnetic field and the solar wind.



Figure 13/9: 2D model of the magnetosphere. The solar wind is a neutral plasma made up of equal amounts of low energy electrons and protons thrown out by flares from the Sun's surface. The velocity of the plasma is ~1500 km/s and its density ~10^10 ions/m^3.



Figure 13/10: Spectrum of time changes of
the Geomagnetic field.




13.4.1 Internal Causes (ref to Fig.13/10)

Epochs/Reversals/Events Changes in the state of
the Earth's dynamo gives rise to magnetic polarity
changes in the Earth's field. Such changes have
occurred at approximately 0.5 million-year intervals
since the end of the Cretaceous (70 million years
ago). How do we know this? By counting up the
number of times the filed has reversed since the
Cretaceous using magnetic strips in the oceans. The
last reversal was 0.7 million years ago (or 0.7 Ma),
so we are due for another at any time. Luckily the
reversal process, as determined by Palaeomagnetic
studies, takes 10,000 years to complete. The field
decreases to a low value, then slowly rotates by
180
0
and grows back to its normal value in a
reversed state.


We are currently well into the next reversal, with the field strength decreasing at a steady rate (Fig. 13/11). The weakened field appears to move along well-defined longitudinal paths (one pole goes through South America and the other through Indonesia) before increasing back to normal strength in its reversed polarity. Studies of the strength of the Earth's central dipole (see Fig. 13/11) indicate that the field is decreasing at a rate of 27 nT per year, such that, if linearly extrapolated, the field would go to zero in about 1,200 years' time! Palaeomagnetists have given names to the long and short periods of polarity change (epochs and events; see Fig. 13/10).
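
The 1,200-year figure follows directly from this rate: taking the present strength of the dipole field at the Earth's surface as roughly 32,000 nT (an illustrative round value), 32,000 nT / 27 nT per year is approximately 1,200 years.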

Are we starting to see the effects of this decreasing
field strength? The answer is yes (see Section 14.6)


Figure 13/11: Strength of the central dipole since
1900. The slope indicates that the field is
decreasing by -27 nT per year

Secular Change: This is the name given to the above slow process, which is measurable in magnetic observatories. As Fig. 13/4 shows, the field changes can be predicted up to about five years in advance. There are sufficient magnetic observatories and other measurements (e.g. the Magsat to CHAMP satellites) to define secular change on a global scale to n=16, and these changes are incorporated into the modern IGRF and the DGRF and Comprehensive models (see Fig. 13/6b for epoch 2010.0).

If two magnetic surveys were flown over the same area several years apart, then they should be very similar to one another if the processing of the surveys has been done in a similar manner, but one survey would have a constant bias due to the change in the dipole field. By applying the appropriate IGRF field corrections this bias (secular change or variation) can be removed (see Section 13.5 for the application of such corrections).

13.4.2 External Causes (ref to Fig.13/10)

These transient variations are caused in the main by the interaction of the solar wind plasma with the Earth's magnetosphere.

WHAT IS PLASMA?: Matter can be classified in four
states: solid, liquid, gaseous, and plasma. The basic
distinction between solids, liquids and gases lies in the
difference between the strength of the bonds that hold
their constituent particles together. The equilibrium
between particle thermal (=random kinetic) energy and
the interparticle binding forces determines the state.
Heating of a solid or liquid substance leads to phase
transition to a liquid or gaseous state, respectively. This
takes place at a constant temperature for a given
pressure, and requires an amount of energy known as
latent heat. On the other hand, the transition from a gas
to an ionized gas, i.e., plasma, is not a phase
transition, since it occurs gradually with increasing
temperature. During the process, a molecular gas
dissociates first into an atomic gas which, with
increasing temperature, is ionized as the collisions
between atoms are able to free the outermost orbital
electrons. The resulting plasma consists of a mixture of neutral particles, positive ions (atoms or molecules that have lost one or more electrons), and negative electrons. In a weakly ionized plasma the charge-neutral interactions are still important, while in a strongly ionized plasma the multiple Coulomb interactions are dominant.

Sunspot cycle and Magnetic Storms: Sunspots are cooler areas on the Sun's surface and are the sites of major solar flares throwing plasma out from the Sun. This has the effect of altering the solar wind's density and energy, which will interact with the Earth's magnetosphere approximately two days later, since that is the time it takes for the plasma to travel the 93 million miles to Earth. Scientists have been counting sunspots for centuries, and in recent times there is a very clear 11-year cycle in the sunspot number count (see Figs. 13/12 and 13/13).

The interaction of the enhanced solar wind with the Earth's magnetic field causes the magnetic field to change very rapidly and results in poor short-wave radio reception. Thus the phenomenon is often referred to as a magnetic storm.


Figure 13/12: Historic evidence of the sunspot cycle, 1840 to 1995.

The inset to Figure 13/14 indicates that sunspots originate in mid-latitudes on the Sun and move with time towards the Sun's equator, where they disappear coincident with the sunspot minimum count. The process then restarts, but with the magnetic polarity of the sunspots reversed. Thus the physical phenomenon has a period of 22 years.

Figure 13/13: The current predicted sunspot cycle.

The effect of magnetic storms (when plasma of enhanced energy and velocity interacts with the Earth's magnetic field) is to generate large changes to the Earth's magnetic field that can take days to recover. Normally during such storms survey data cannot be accurately processed to remove the storm's effects, so surveys are temporarily halted. Fig. 13/15 shows a base station record at the commencement of a magnetic storm.

The effect of a typical storm on magnetic records is a very sudden onset, followed by a highly variable signal that dies away over a period of one to three days. Since the Earth rotates every 24 hours and the maximum interaction occurs on the daytime side of the Earth, the effects of the storm will repeat each day at about the same time.



Figure 13/14: Processes that originate within and on the Sun's surface cause many of the transient magnetic variations.

Since the Sun rotates every 25 days, magnetic storms will have a 25-day repeat cycle. Why the effects are variable throughout the day and night will be easier to understand by reference to the section on Diurnal Effects below.


Figure 13/15: The onset of a magnetic storm is sudden, as seen on a base station record that records the last 2 digits of the absolute field reading.

Semi-Annual Effects: The Earth's magnetic field axis is similar to its spin axis, resulting in the equatorial plane being inclined at 23° to the solar plane. This results in the solar wind distorting the magnetosphere on a seasonal basis. It is caused by the inability of the Sun to ionise as many molecules in the Ionosphere during the northern hemisphere winter as it does in the northern hemisphere summer, i.e. similar to the Sun's ability to warm the Earth. These effects result in seasonal variations in the amplitude of the diurnal variation by a factor of two (see Fig. 13/16).




Figure 13/16: Typical diurnal total field record from a base station in the UK. This base station effectively sees the response of the Ionospheric currents along the W-E tracks shown as dark blue dashed lines in Figs. 13/18a and 13/18b.

Diurnal Effects: The Earth rotates once every 24 hours and thus experiences maximum interaction between the Geomagnetic field and the solar wind at local mid-day. The diurnal effects can be sub-divided as follows:

a. Solar: The Sun's energy ionises molecules in the ionosphere, causing an electrical current to flow in the ionosphere (Figs. 13/17 & 13/18). This current flow generates its own magnetic field through the E-M effect. Maximum variations normally occur at local mid-day. The variation from the quiet night-time value to midday can be ~50 nT. Figs. 13/18a and 13/18b show the variations in the Ionospheric current systems between the northern hemisphere summer and winter.

Figure 13/17: Plasma density of the Earth's Ionosphere.

For the UK, a diurnal total field base station record (Fig. 13/16) is generally steady at a constant value from midnight to about 4 a.m. At dawn the field starts to decrease due to the Sun's action on the Ionosphere: the ionosphere's current flow acts in a way that decreases the magnetic field at the Earth's surface. Maximum effects occur at local noon before decreasing. Since the current system is balanced over the whole Earth, the effect on magnetic recording stations at different latitudes will change (see Fig 13/19).

Figure 13/18a: Ionosphere current system for the Northern hemisphere summer. The blue dashed line represents what a base station in the UK would tend to see.

Figure 13/18b: Ionosphere current system for the
Northern hemisphere winter


Figure 13/19: Variation of the diurnal variation with latitude. To understand what is going on, remember that the Earth's field changes with latitude, as do the magnetic effects due to the Ionosphere's current system (i.e. the current flow in the southern hemisphere is reversed).



Figure 13/20: Location of the magnetic equator.

b. Lunar: Gravitational attraction on the Ionosphere causes magnetic effects no larger than 2 nT. This cannot normally be seen on records, since its period is about 13 hours and its amplitude only 2 nT. The only way to see this effect is by signal processing (i.e. take one month of data for quiet solar activity and perform a spectral analysis on it).
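A minimal sketch of that spectral check, assuming a month of quiet-time base station readings at 1-minute spacing (the file name, variable names and band limits are illustrative, not from any specific survey):

import numpy as np

# Assumed: ~1 month of quiet-time base station total field readings (nT),
# sampled once per minute, loaded from an illustrative file.
total_field = np.loadtxt("base_station_month.txt")
n = total_field.size

# Remove the mean and linear drift before the spectral analysis.
x = np.arange(n)
detrended = total_field - np.polyval(np.polyfit(x, total_field, 1), x)

# Amplitude spectrum; frequencies in cycles/hour (samples 1/60 hour apart).
amp = np.abs(np.fft.rfft(detrended)) * 2.0 / n
freq = np.fft.rfftfreq(n, d=1.0 / 60.0)
period_h = np.divide(1.0, freq, out=np.full_like(freq, np.inf), where=freq > 0)

# The lunar line should appear as a small (~2 nT) peak near the quoted
# ~13 h period, distinct from the 24 h and 12 h solar harmonics.
band = (period_h > 11.5) & (period_h < 14.0)
i = np.argmax(amp[band])
print("Peak in lunar band: %.2f h, %.2f nT" % (period_h[band][i], amp[band][i]))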

c. Electrojet Phenomena: In areas close to the magnetic (dip) equator (Fig 13/20) a large magnetic variation occurs about Solar noon. The amplitudes of the variations are much greater than the normal diurnal (solar) effects and can reach 100 to 200 nT. The magnetic equator in NE Africa lies close to 10°N (i.e. through Ethiopia). The width of the zone affected by the Electrojet is about 5.5° (600 km).

The cause of the Electrojet phenomenon appears to be the convergence (pinching) of the ionosphere's current flow lines from the northern and southern hemispheres at local (solar) noon. Figures 13/18a & 13/18b show the Ionosphere current distribution for the northern hemisphere summer and winter.

The rapid change of current density at local (solar) noon
generates a large magnetic field. Normally
aeromagnetic field studies are suspended during this
period of the day due to the rapid change in the field i.e.
it is not possible to accurately remove the Electrojet
effects from the total field measurements, unless there
is an adequate set of base stations monitoring the
spatial form and temporal change of the Electrojet field.

Magnetic recording stations located N-S across the path of the Electrojet, using 3-component vector magnetometers, show how the phenomenon builds in time and space. The H component is equivalent to the Total field and peaks at local noon. Recording of the phenomenon day after day indicates that the location of the Ionosphere current peak changes latitude by small amounts, hence the need for a series of base stations to adequately monitor it.

Results from the Brazil array are found in Rigoti et al. (1999).

d. Bays (see Fig 13/10): Caused by small scale surges in the Solar wind, with periods of 20 minutes to 2 hours and amplitudes from 5 to 20 nT. It is important to be able to monitor Bays in high accuracy surveys.

e. Polar Sub-Storms (see Fig 13/10): These
disturbances are observed in high latitudes and
decrease in magnitude away from the poles. The
amplitudes are variable and need to be spatially
recorded to remove their effects from survey data.

f. Micropulsations (see Fig 13/10): Micropulsations are very small amplitude variations, up to ~1 nT, at frequencies of 0.001 to 1000 c/s, caused by lightning strikes somewhere on Earth. They are important to monitor and correctly remove from high resolution surveys. If surveys are using less sensitive magnetometers (instrument accuracy 1 nT) then micropulsations are generally not seen.

Figure 13/21: Variations in T recorded at stations a few tens of kilometers apart, showing the nature of micro-pulsations and their variation from place to place on the Earth's surface.

While their amplitudes may be only a few nT, their effect on magnetic records made in an aircraft or at a base station on the ground is significant. Unfortunately, the exact shape of a recorded sequence of micro-pulsations may change (in amplitude and phase) from place to place over a few tens of kilometers (Figure 13/21), so even subtraction of time variations observed at a fixed location from the anomalies recorded in an aircraft is not without its limitations, and effective elimination of such short-term geomagnetic variations is not achieved.

The example shown below of how to deal with micro-pulsations is taken from O'Connell (2001).

Figure 13/22 shows high-pass filtered airborne and base station data during a short period (1500 seconds) when micro-pulsations are present. Standard cross correlation (Fig 13/23, left) using windows of a given size shows there is correlation. However, a better correlation (Fig 13/23, right) is obtained if a slight time shift is allowed, due

possibly to small phase changes resulting from differing ground responses beneath the base station and the field location.


Figure 13/22: High-pass aeromagnetic and base station records showing possible micropulsations.

Application of the time-shifted correlation correction (Fig 13/24) yields a cleaner airborne record than if one simply calculated the difference, since the amplitudes and phase of the field and base records differ.
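A minimal sketch of the time-shifted correlation idea (this is not O'Connell's actual code; the shift range and the least-squares amplitude scaling are assumptions):

import numpy as np

def best_shift(air, base, max_shift=30):
    # Search small time shifts (in samples) for the highest correlation
    # between an airborne high-pass window and the base station record.
    # np.roll wraps around, which is acceptable for a short-window sketch.
    shifts = list(range(-max_shift, max_shift + 1))
    corrs = [np.corrcoef(air, np.roll(base, s))[0, 1] for s in shifts]
    return shifts[int(np.argmax(corrs))]

def remove_micropulsations(air, base, max_shift=30):
    # Shift the base record to the best lag, scale it by least squares
    # (amplitudes differ between base and field), then subtract.
    shifted = np.roll(base, best_shift(air, base, max_shift))
    scale = np.dot(air, shifted) / np.dot(shifted, shifted)
    return air - scale * shifted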


Figure 13/23: (Left) Standard cross correlation between base and aeromagnetic data. (Right) Time-shifted cross correlation of the same data.



Figure 13/24: (Left) After correction using the time-shifted correlation results; (Right) simple difference between field and base readings.

g. Atmospheric Effects (see Fig 13/10): Pressure systems will change the height of the Ionosphere and thus generate small amplitude (<1 nT) effects on diurnal records over a number of days. These are not important.

h. Other Effects: These include the magnetisation of the observing aircraft (see Section 15.4) and man-made effects, e.g. power lines, direct current railways, industrial sites, drill rigs, well heads, pipelines, ships etc.


13.5 Magnetic Data Processing
(Ground)

All the transient variations Ttv in Section 13.4 will be recorded by the base magnetometer and, so long as the changes are normal with time (i.e. no magnetic storms), they can be removed by either a single correction or, if there is more than one base station, a spatial correction. Corrections are referenced to the period during the day when there is least interaction between the Solar wind and the Geomagnetic field. This occurs between 1 a.m. and 4 a.m. (local time). See the base station charts in Figures 13/16 & 13/25.

Processing of ground observations to magnetic anomaly values is more straightforward than for gravity:

Magnetic Anomaly = Tobs - Tth - Ttv

Tobs - Since magnetometers measure absolute values of the Geomagnetic Total field, there is no need to calibrate instruments or survey in loops.

Tth - This is derived from public domain software. Inputs are the latitude, longitude, height, time and date of the observation point. Full field components are then output from the DGRF or IGRF mathematical model. The NASA and BGS websites have such software.
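(For orientation only: the sketch below evaluates a first-order axial dipole approximation to the total field, not the full IGRF/DGRF model that the software above provides; the B0 value and function name are assumptions.)

import math

def dipole_total_field(mag_lat_deg, height_km=0.0):
    # Axial dipole approximation: F = B0 (a/r)^3 sqrt(1 + 3 sin^2(lat)).
    # B0 ~ 30,000 nT is an assumed round equatorial surface value; the
    # full IGRF expands to spherical harmonic degree 13.
    a = 6371.2                      # reference Earth radius, km
    r = a + height_km
    lam = math.radians(mag_lat_deg)
    return 30000.0 * (a / r) ** 3 * math.sqrt(1.0 + 3.0 * math.sin(lam) ** 2)

print(dipole_total_field(0.0))   # ~30,000 nT at the magnetic equator
print(dipole_total_field(90.0))  # ~60,000 nT at the poles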

Ttv - All the transient variations can be removed by recording their effects at base stations using magnetometers of the same sensitivity as used to measure Tobs. A typical diurnal record is shown (Fig 13/25) for 3 days with some of the variations labelled. Base stations normally have some form of chart recorder for instant inspection to indicate whether the system is working correctly. A base station could also have a digital output for rapid data processing.



Figure 13/25: Correcting Tobs for Ttv

All measurements are relative to the Quiet Night Time Value (QNTV). This QNTV occurs between about 1 a.m. and 4 a.m., when there is least interaction of the solar wind with the Geomagnetic field. The QNTV is monitored over 3 to 4 days by the base station and the mean value is taken to represent the base station reference field from which all variations Ttv are measured. If there is a steady decrease or increase in the QNTV over these 3 to 4 days then a longer period is used to determine the reference field. A base station in a different location will have a different reference value for the QNTV but will show identical variations Ttv relative to this QNTV value. (There may be slight changes to Ttv depending on the conductivity of the crust/upper mantle at the different location.)

Important: Base station records are only used to

measure Ttv, i.e. the variations or departures from the QNTV, and these values are added to or subtracted from Tobs to make all Tobs appear to have been recorded at the same quiet night time (i.e. at the QNTV).

Example

QNTV value for 19 July 1972 is 48422 nT ± 1 nT
QNTV value for 20 July 1972 is 48416 nT ± 4 nT

(Need to record for several more nights to obtain the best estimate of the QNTV.)
Assume the best estimate of the QNTV is 48420 nT, based on 6 days of QNTV values.
We can now determine all survey Ttv values relative to 48420 nT.

For 19 July:
At 10.00 hours
Tobs (say) = 48764 nT
Ttv = +17 nT (base station = 48403 nT, i.e. 17 nT below the QNTV)
Tobs + Ttv = 48781 nT
At 20.00 hours
Tobs (say) = 48812 nT
Ttv = -15 nT (base station = 48435 nT, i.e. 15 nT above the QNTV)
Tobs + Ttv = 48797 nT
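The same bookkeeping as a minimal sketch, using the values from the example above (function and variable names are illustrative):

QNTV = 48420.0  # best estimate of the Quiet Night Time Value, nT

def diurnal_correct(t_obs_nt, base_reading_nt):
    # Ttv is the departure of the base station from the QNTV; removing it
    # makes the observation appear to have been made at quiet night time.
    ttv = base_reading_nt - QNTV
    return t_obs_nt - ttv

print(diurnal_correct(48764.0, 48403.0))  # 48781.0 nT (10.00 hrs case)
print(diurnal_correct(48812.0, 48435.0))  # 48797.0 nT (20.00 hrs case)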


13.6 Comprehensive Model

The smoothed geomagnetic field model (Tth) has
evolved with time. Originally it was described by the

13.6.1 International Geomagnetic Reference Field (IGRF)

This provides the 6 components (X, Y, Z, T, Declination and Inclination) with Secular Variation, to harmonic order 8 or 13, for periods of +/-5 years (e.g. 1980 +/-5 years); it is now updated every 5 years.

then

13.6.2 Definitive Geomagnetic Reference Field (DGRF)

This is basically the IGRF, but with the successive IGRF models seamlessly joined together from 1960 to the present day.

Now

13.6.3 Comprehensive Model (CM3)

This model extends the components defined in the IGRF and DGRF. The new model incorporates data from the magnetic field satellites POGO (1965-1970), Magsat (1979-1980), Ørsted (1999-) and CHAMP (2000-) and from magnetic observatories from the early 1960s to the present. To isolate the magnetic effects of geological sources, the CM3
model defines in a continuous manner (space and time)
many long wavelength magnetic fields that users of
crustal magnetic anomaly datasets aim to remove from
their magnetic observations. These include the main
field, quiet time external magnetic fields from the
magnetospheric and ionospheric sources. These fields
generate base level variations in aeromagnetic and
marine magnetic observations.
CM4: The CM4 model is a natural extension of CM3. In addition to the data incorporated into the CM3 model (see above), scalar data from CHAMP and vector and scalar data from Ørsted have been incorporated, along with all available observatory data through 2000. Slight modifications have been made to the CM3 parameterisation in order to accommodate these data. These include:
1) an extension of the main field secular variation (SV) basis functions through 2010;
2) in situ quasi-dipole (QD) meridional poloidal currents in the Magsat sampling shell; and
3) in situ QD meridional poloidal currents in the Ørsted sampling shell, which are continuous in diurnal time.
The power spectrum based on these modern data (Fig 13/26, basically the same as Fig 13/2) shows that the changeover from the Earth's internal/core field to crustal anomalies occurs at harmonic degree n = 13 to n = 16.


Figure 13/26: Lowes-Mauersberger (Rn) spectra for CM4 (red line) and CM3 (symbols) at the Earth's surface.
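For reference, the Lowes-Mauersberger spectrum plotted in Fig 13/26 is the mean-square field contribution per spherical harmonic degree n; at the Earth's surface it takes the standard form

R_n = (n + 1) Σ_{m=0}^{n} [ (g_n^m)^2 + (h_n^m)^2 ]

where g_n^m and h_n^m are the Gauss coefficients of the field model. The break in slope of R_n near n = 13-16 marks the core/crust changeover noted above.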

Further reading

Ravat et al., 2003. New way of processing near-surface magnetic data: the utility of the Comprehensive Model of the magnetic field. The Leading Edge, Aug: 784-785.
NASA Website: http://core2.gsfc.nasa.gov/CM/
13.7 Ground magnetic QC work

Requirements of Quality Control for:

Diurnal variation: should not change by more than 5 nT in 5 minutes. Figure 13/27 shows a section of a base station diurnal record, and Figure 13/28 shows a redisplay of this diurnal record to demonstrate whether or not the diurnal variation exceeds the 5 nT/5 minute specification (a minimal sketch of this check follows Figure 13/28 below).



Figure 13/27: Typical quiet diurnal record; total field reading in nT, time in minutes from the start of the day.
[Plot: Magnetic variations (nT), 0 to 5, against time in minutes, 400 to 1200]
Figure 13/28: Variation in the diurnal per 5 minute period. Good data are < 5 nT per 5 minute period.
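A minimal sketch of the 5 nT per 5 minute check referred to above, assuming the base station readings arrive as a plain 1-D array at 1-minute spacing (array and function names are illustrative):

import numpy as np

def diurnal_qc(readings_nt, window=5, limit_nt=5.0):
    # Flag the start minute of any 5-sample (5-minute) window whose
    # peak-to-peak change exceeds the 5 nT / 5 minute specification.
    r = np.asarray(readings_nt, dtype=float)
    return [i for i in range(r.size - window + 1)
            if r[i:i + window].max() - r[i:i + window].min() > limit_nt]

Windows flagged here would be excluded, or the corresponding survey lines re-flown.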

Spatial application of quality control: This helps to determine if there are unacceptable values between individual surveys, which can be picked up by spatial validation of the data.




Figure 13/29: Ground magnetic measurements. Left - Total field in nT; Right - residual (centre point value less the mean of the surrounding 4 measurements). Any poor reading or processing would show up.
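The residual in Fig 13/29 (right) is easily reproduced; a minimal sketch assuming the readings sit in a regular 2-D array:

import numpy as np

def spatial_residual(grid_nt):
    # Centre value minus the mean of its four edge neighbours;
    # large residuals flag suspect readings or processing errors.
    res = np.full(grid_nt.shape, np.nan)
    res[1:-1, 1:-1] = grid_nt[1:-1, 1:-1] - 0.25 * (
        grid_nt[:-2, 1:-1] + grid_nt[2:, 1:-1] +
        grid_nt[1:-1, :-2] + grid_nt[1:-1, 2:])
    return res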







SECTION 14: MAGNETOMETERS AND
SATELLITE & TERRESTRIAL
GLOBAL MAGNETIC MODELS



There are two basic types of magnetometer:

i. Relative measuring instruments
ii. Absolute measuring instruments

Both measure B, the flux density


14.1 Relative Measuring Instruments

14.1.1 Schmidt Magnetometer

There are two types: a horizontal vector, AH, and a
vertical vector, AZ (Fig 14/1). These instruments are no
longer used and are now museum items. (where A
means difference)
Reference: Dobrin and Telford text books



Figure 14/1: Schmidt vertical balance Magnetometer

14.1.2 Torsion Magnetometer



Figure 14/2: Torsion Magnetometer without its tripod.

This instrument (Fig 14/2) replaced the ΔZ Schmidt magnetometer and is still used today in mineral exploration, especially in areas of high magnetic gradients such as within mines (it only measures ΔZ).

Reference: Haalk, 1956. Geophys. Prosp., Vol. 4, p. 424-441.

14.1.3 Fluxgate Magnetometer

This instrument (Fig. 14/5) has been widely used for land, air and marine magnetic surveys but has now been superseded to a major extent by absolute measuring magnetometers.

Accuracy : ground 1-10 nT;
airborne 0.1-1 nT.
Advantages : wide range and continuous reading.
Disadvantages : mechanically fragile, needs orientation
and subject to thermal drift

The previous instruments (Schmidt and Torsion) used magnets, which are magnetically hard materials, i.e. high coercivity, Hc (retains magnetism), and low permeability (does not pick up additional magnetism easily). The fluxgate instead uses Ferromagnetic rods (or elements) of high permeability, where permeability = B/H, B is the flux density and H is the strength of the external field.


Figure 14/3: Basic concept of Fluxgate instrument

Since the rods are of high permeability they allow the earth's magnetic field to induce a magnetisation that is a substantial proportion of its saturated value (see Fig. 14/3, left). So if we have a Ferromagnetic element surrounded by a coil, it is possible, by passing a current through the coil, to push the element into a saturated state. If the element is parallel to the earth's total intensity, T, then saturation will occur early (a smaller current in the Primary coil is required to generate the additional field) when the induced field is in the same direction as the earth's field. If the inducing field is in the opposite direction then saturation will occur later and require a larger current.

Initially consider a single element with no ambient
field


Figure 14/4a: Fluxgate element with no external field present.

Now consider the coils in the Earth's magnetic field:

Figure 14/4b: Fluxgate Magnetometer in the presence of the Earth's magnetic field.

The primary coils are wound identically around each element in series (i.e. the current flows in the same direction). Thus the flux density in both elements is equal but opposite in direction (assuming no external field). The secondary coil (red in Fig 14/3) is wound about the two elements in opposite directions and connected to a voltmeter, Vr. When an AC current is applied to the Primary coil (black), the Earth's magnetic field is reinforced in one element and opposed in the other (Fig 14/4b).

let c = geometric factor
T = ambient (Earth's) field
f = field generated by the drive current

then the total field along an element = T + f
and the magnetic flux B = c(T + f)

The voltage induced in the secondary by element 1 is

V1 = d/dt [c(f + T)] = c (df/dt + dT/dt)

and, since the secondary is wound in the opposite sense around element 2 (in which the drive field is reversed),

V2 = -c (df/dt - dT/dt)

so the net secondary output is

Vr = V1 + V2 = 2c dT/dt

The drive term df/dt (a known frequency) cancels, leaving an output from which T can be determined. (In practice the elements are driven into saturation, so the effective factor c varies through the drive cycle and the even-harmonic content of Vr is proportional to the ambient field T.)
Fluxgate as a Field Instrument

Land: hand held, light and portable, uses rechargeable batteries, held in a vertical position (measures ΔZ); accuracy 1-10 nT (see Fig 14/5).
Airborne: to measure the total field the elements need to be kept parallel with T. Two auxiliary fluxgates are mounted mutually perpendicular to a third fluxgate and connected to a servo-motor; there is zero output from the auxiliaries when the third fluxgate is parallel to T.
Satellite: used in the Magsat and CHAMP satellite missions (see Section 13) as 3-component vector magnetometers, because of their reliability and ability to monitor rapidly varying fields to an accuracy of at least 1/1000 nT.



Figure 14/5: Hand-held Fluxgate magnetometer, hung around the neck with the instrument on your belly; the instrument is levelled by a spirit bubble.



A third coil (not shown), co-axial with the second, carries
a current which is adjusted by way of the amplifier
output to null-out the earth's field exactly and so reduce
the observed pulses to zero. The magnitude of the
current in the third or compensating coil necessary to
maintain this condition may then be used, with suitable
calibration, to assess continuously the magnitude of the
external field component along the axis of the system.
Reference: most geophysics text books.

14.2 Absolute Magnetometers

14.2.1 Proton Magnetometer

This is an absolute measuring instrument, introduced in the late 1950s, and is normally adequate for most types of land survey.
Accuracy - ground 0.1 - 1 nT
- airborne 0.01 - 0.1 nT
Advantages - self orienting and relatively simple to
use and is robust
Disadvantages - intermittent readings (0.2 -1.0 sec)
- limited range without tuning
- measures total field only
- cannot measure very high gradients

Packard and Varian (1954, Phys. Rev. 93: 941) found that after a strong magnetic field is removed from a sample of water, an audio-frequency signal can be detected from the water for a second or so. Reason: water ionises into OH- and H+ ions. The H+ ion is a hydrogen atom less its electron, i.e. a proton. The proton has a spin momentum whose axis will align with the external magnetic field (Fig. 14/6). The proton moves into the external magnetic field direction by precessing at an angular frequency, ω, known as the Larmor precession frequency (ω = 2πf, where f is in Hertz), which is directly proportional to the external magnetic field strength T. This frequency is in the audio signal range.

Figure 14/6: Principles of the Proton magnetometer

Larmor precession frequency: ω = 2πf (f in Hz)

ω = γp T, where γp is the gyromagnetic ratio of the proton (an immutable atomic constant):

γp = 2.67520 ± 0.00002 x 10^8 T^-1 s^-1

so that

T = ω/γp = 2πf/γp = 23.4874 f (T in nT, f in Hz)

For a field of 50,000 nT, f ≈ 2100 Hz.



Figure 14/7: Flow diagram showing principle of
operation of a Proton precession magnetometer.

To determine T one just has to measure the frequency of precession, f. The sensor head (the bottle in Fig. 14/7) contains a fluid rich in protons, e.g. water (problem: it freezes); alcohol and/or a hydrocarbon fluid is better. The bottle is surrounded by a coil used to generate a magnetic field along the axis of the bottle, so that all protons align. When the current is switched off, all the protons precess en masse at frequency f. The coil is used to pick up f (electromagnetic effect) and the electronics measure its frequency.

In the simplest instrument, a signal-detector detects the precession frequency by counting the number of cycles, N, in a timed interval, t:

f = N/t = γp T / 2π

If t is chosen such that 1/t = γp x (1 nT)/2π (i.e. t ≈ 23.5 s), then N = T and the number of cycles counted is numerically equal to the measured field in nT. Since T is about 50,000 nT and f about 2 kHz, a counting time of about 25 seconds would be required. This is inconveniently slow, even if the precession signal were still detectable after such a long time period. An early sophistication was to compare the signal to a high frequency oscillator (see Fig 14/7) which locks onto a multiple of the precession frequency, giving an accuracy of one part in 50,000 with a counting period of less than 1 second.
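The frequency-to-field conversion above is one line of arithmetic; a minimal sketch (the test frequency is illustrative):

import math

GAMMA_P = 2.67520e8  # proton gyromagnetic ratio, T^-1 s^-1

def field_from_frequency(f_hz):
    # T = 2*pi*f / gamma_p, converted from tesla to nT (~23.4874 nT per Hz)
    return 2.0 * math.pi * f_hz / GAMMA_P * 1e9

print(field_from_frequency(2129.0))  # ~50,000 nT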

Note that there is no need to orient the sensor other
than to make sure that the field of the coil and that of the
earth are not coincident. For airborne installations it is
sufficient for the sensor axis to be horizontal at high
magnetic inclinations and vertical at low inclinations. A
mounting transverse to the axis of the aircraft may be
preferable in middle latitudes.

Examples of systems currently available are shown in
Figs. 14/8 and 14/9




Figure 14/8: As either a single or gradiometer
system www.scintrexltd.com




Figure 14/9: The use of a proton magnetometer in Mongolia. www.geometrics.com

14.2.2 Overhauser Magnetometer

This is a high sensitivity magnetometer/gradiometer
(Fig. 14/10) designed to be hand held. Resolution is
0.01 nT and 0.2 nT absolute accuracy. For gradiometer
work the sensors are 56 cm apart (could be larger).

Overhauser Effect: In contrast to the standard proton
magnetometer sensor, where only a proton rich liquid is
required to produce a precession signal, the Overhauser
Effect sensor has in addition a free radical added to the
liquid. A free radical is defined as an atom or group of
atoms containing at least one unpaired electron and
existing for a brief period of time before reacting to
produce a stable molecule. The free radical ensures the
presence of free, unbounded electrons that couple with
protons producing a two-spin system. A strong RF (radio
frequency) magnetic field is used to disturb the electron-
proton coupling. By saturating free electron resonance
lines, the polarisation of protons in the sensor liquid is
greatly increased. The Overhauser effect offers a more
powerful method of proton polarisation than standard
DC polarisation, i.e. stronger signals are achieved from
smaller sensors and with less power.


Figure 14/10: Overhauser magnetometer used as a
gradiometer www.gemsys.ca

Measurement of the magnetic field with the Overhauser magnetometer (GEM GSM-19) consists of the following steps:

Polarisation: A strong RF current is passed through the sensor, creating polarisation of the proton rich fluid. Polarisation can be very fast, comparable to the time of measurement; keeping the RF on all the time increases the maximum data sampling rate to 5 Hz (5 times a second).
Deflection: A short pulse deflects the proton
magnetisation into the plane of precession.

Counting: the proton precession frequency is measured
and converted into magnetic field units

Storage: The results are stored in memory together with the date, time and co-ordinates of the measurements.

14.2.3 Optical Pumped Magnetometer

Although the proton magnetometer is the most popular instrument for ground and airborne surveys, the optically pumped magnetometer is becoming popular for aeromagnetic surveys.
Accuracy - ground 0.01 -0.1 nT
- airborne 0.005 - 0.05 nT

Advantages - wider range without tuning
- shorter sampling interval (0.1 sec
possible)
- compatible with most ancillary instruments


Figure 14/12: Sketch of Optical Pumped Magnetometer

Disadvantages - expensive
- relatively fragile and complicated
- needs some orientation

The optically pumped magnetometer uses the electron rather than the proton. The electron used is the valence electron (outermost orbit) of alkali metals (Rb, Cs). The electrons are moved into a low energy ground state and then moved en masse to the next energy level in the ground state (Rb has 8 ground states, Fig 14/13). Since the energy gap between ground states increases linearly with the external magnetic field (Zeeman Effect), all one needs to do is measure the energy gap ΔE to determine T:

ΔE = cT = hf

where h = Planck's constant, c = a constant and f = frequency. Thus measurement of f will permit T to be determined.



Figure 14/11: Principles of the Zeeman Effect and
ground and excited energy levels of a valence
electron. See Fig 14/13 for energy levels for Rb87.

If an alkali vapour is illuminated by ordinary light of appropriate frequencies, the electrons in the outer orbit will be excited from either the B1 or B2 ground states to the A1 or A2 excited states (levels) and then decay back to the appropriate lower energy states (giving out light in the visible part of the spectrum, see Fig 14/11). At any given time, the population of electrons in a given energy level will normally follow a (Maxwell-Boltzmann) distribution. These different energy states occur since the magnetic moment of the electrons is caused by the electron spin, which can be parallel or anti-parallel.

It is possible to preferentially induce transitions from level B to A of one spin state by orienting the spins in one direction, by illuminating them with circularly polarised light. This is achieved with the optical filters in Fig 14/12. The polarised light aids only one transition, i.e. B1 to A2 but not B2 to A2. When electrons are in A2 they can fall back to B2 or B1. Electrons in B2 cannot move to A2 (no appropriate energy available), so we can selectively fill up the B2 energy level. This is what is meant by Optical Pumping. If we now apply an electromagnetic field (a resonant RF signal) with frequency appropriate to the energy transition ΔE (electron resonance), we can force electrons en masse to move from ground state B2 to ground state B1. The instrument (Fig 14/12) uses these principles to measure the Earth's magnetic field strength.

All the action takes place in the Absorption Cell. The set
up to the left of the Absorption cell is always trying to
optically pump the alkali vapour in the Absorption cell.

During optical pumping the Absorption cell will be
absorbing light (energy) from the lamp (Rb or Cs light)
and giving out light (Rb or Cs) as electrons move from
excited state to ground state. Since the latter form of
light (photons) are travelling in all directions then the
light that can be focused by the lens (to the right of the


cell) is small and the output of the photo-cell will be
small.

When the cell is in an optically pumped state then the
Absorption cell will not absorb (energy) light from the
lamp and the cell will not give out light since all the
electrons are trapped in level B2. In this state the photo-
cell will have maximum output since all light from lamp
passes through vapour cell without being absorbed and
will focus via the lens onto the photo-cell.

The photo-cell is connected to a feedback circuit containing a coil surrounding the vapour cell. If the frequency (RF signal) in the coil is increased so that it matches f = ΔE/h, then the electrons can move from B2 to B1. When they are in level B1 there is no restriction on the electrons absorbing energy and moving to their excited state, from where they can fall back to the ground state giving out light. So 'opening the trap door' with the RF signal gives the frequency f at which the photo-cell output is a minimum. Tracking this minimum (or frequency f) is fairly easy to do; thus you are effectively tracking the change in the magnetic field strength.

Figure 14/13: Energy levels for Rb87 which has up
to 8 spin states on each level

Examples of land-based systems are shown in Figs. 14/14 and 14/15.


Figure 14/14: Optical Pumped magnetometers in
survey mode with GPS and VLF

Other optical absorption magnetometers are the Caesium Vapour magnetometer and the Helium magnetometer.


14.3 Absolute Magnetic Gradiometers

14.3.1 Land Vertical Gradiometer

Fluxgate, Proton and Optically Pumped magnetometers are all used as hand-held gradiometers. The principle is that they measure the total field or a vector component (Fluxgate) at two points in a vertical plane separated by 2 metres. The difference in readings, measured within a split second of each other, is divided by the separation distance, thus giving dT/dz.

The type of instrument used will control the resolution of what can be measured. Normally, being close to ground level means you are close to the magnetic source, and thus the vertical gradients of the total field will be large. Using Overhauser and optical sensors, with their high absolute accuracy, allows the gradiometer to have the sensors close to each other, as seen in Figs 14/8, 14/10 and 14/15. This is why such magnetometers are used as gradiometers on aircraft (Section 14.3.2).

It is important to note that the gradients being measured are those of the Total Magnetic Intensity in the horizontal or vertical directions.


Figure 14/15: Cesium gradiometers

Reference: Hood, Holroyd and McGrath, 1979. Magnetic methods applied to base metal exploration. In: Geophysics and Geochemistry in the Search for Metallic Ores. Geol. Surv. Canada, Economic Report 31: 77-104.

14.3.2 Airborne Gradiometers

In the 1960's magnetic gradient measurements were attempted using two proton magnetometer sensors separated by about 100 ft. This was dangerous and control on the separation was poor. The optically pumped magnetometer changed this, since such sensors measure the total magnetic field more accurately and only need to be separated by 2 metres.
Aircraft can have a range of measuring systems on-


board, as shown in Figs. 14/16 and 14/17, ranging from a single tail sensor up to 4 sensors.



Figure 14/16: Various options for instrumentation.



Figure 14/17: Old method of towing a proton magnetometer, and the original method of measuring vertical TMI gradients using optically pumped magnetometers with a specially built tail section. This setup affects the plane's flying performance (safety), so gradiometers are now built in a retractable mode below the tail fin.

By placing the magnetometers at the ends of the wings
and behind the tail fin (3 or 4 magnetometers), it is
possible to generate in line and cross line gradients
which are considered to be more useful at delineating
edges of structures.

Gradiometer measurements have the following
advantages

i) Eliminates the need to monitor diurnal variations, since diurnal variations affect both instruments by the same amount. By measuring the difference in the magnetometer outputs the diurnal effect is eliminated (see the sketch after this list).

ii) Vertical and now horizontal gradients are a better aid
to geological mapping since local geology rather than
regional geology gives rise to stronger changes in
gradient.

iii) Fixed birds means plane can fly in all weathers

iv) Sensitivity to gradient of 0.004 nT/m
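As a minimal sketch of advantage (i), assuming simultaneous readings from two sensors 2 m apart (the values are illustrative):

def vertical_gradient(t_upper_nt, t_lower_nt, separation_m=2.0):
    # dT/dz in nT/m; any time variation common to both sensors
    # (diurnal, storm) cancels in the difference.
    return (t_upper_nt - t_lower_nt) / separation_m

print(vertical_gradient(48752.4, 48748.0))                # 2.2 nT/m
# A 30 nT diurnal offset on both sensors leaves the gradient unchanged:
print(vertical_gradient(48752.4 + 30.0, 48748.0 + 30.0))  # still 2.2 nT/m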

14.3.3 Marine Gradiometers

A major problem in marine surveys is the lack of any form of base station to correct for diurnal effects. If there is one, it could be hundreds of km away and not accurately monitor the corrections that should be made. This is particularly the case at high latitudes, where the major transient variations of the geomagnetic field, Ttv (or noise), are polar sub-storms, which can cause departures in Tobs lasting minutes to tens of minutes and, if not corrected for, look like geological signal (see Fig 14/19). The amplitude of these polar sub-storms varies rapidly with latitude.



Figure 14/18: Marine Gradiometer system

The solution is to use a horizontal gradiometer. This is easier to do on a stand-alone survey but very much more difficult on a seismic survey. The marine magnetometer system (Fig. 14/18) uses two total field sensors (proton magnetometers) that are towed on a single cable and separated by 150 m. The forward sensor is as far as 600 m behind the vessel to minimise the effects of the ship's magnetic field. The difference between simultaneously measured field values at the two sensors is essentially free of the effects of time variations in the Earth's magnetic field. By dividing this difference by the distance between the sensors, an approximation of the magnetic gradient is obtained. The gradient of the magnetic field between the sensors can then be used to determine the corrected (time-variation-free) total field anomalies by numerical integration of the gradients along the ship's track (a sketch of this integration follows the list below). This proves to be simpler in theory than in practice, for various reasons:

i. Feathering of cable will mean the two sensors will not
follow the same track over the geology

ii. Location inaccuracies--now possibly minimised by
DGPS



iii. Small changes in field and the field resolution of the magnetometers, which will tend to amplify noise.
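A minimal sketch of the along-track integration described above, using the trapezoidal rule (array names and the anchoring value are assumptions, not part of any vendor's processing flow):

import numpy as np

def integrate_gradient(along_track_m, grad_nt_per_m, t0_nt):
    # Recover a time-variation-free total field profile by integrating
    # the measured horizontal gradient along the ship track; t0_nt anchors
    # the profile, e.g. a tie to a known absolute field value.
    dx = np.diff(along_track_m)
    increments = 0.5 * (grad_nt_per_m[1:] + grad_nt_per_m[:-1]) * dx
    return t0_nt + np.concatenate(([0.0], np.cumsum(increments)))

# Caveats i-iii above enter this integral cumulatively, which is why the
# method is simpler in theory than in practice.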



Figure 14/19: Example of results from Antarctica by Wold & Cooper (1989), showing Tobs (solid lower line), the computed time variation Ttv (solid upper line) and the resulting featureless magnetic anomaly along track.

Reference: Wold, R.J. and Cooper, A.K., 1989. Marine magnetic gradiometer - a tool for the seismic interpreter. Geophysics: The Leading Edge, August, p. 22-27.

SeaQuest gradiometer
A new design by Marine Magnetics that can generate 3-dimensional gradient vectors (Fig 14/20). This has applications for cable and pipeline tracking, unexploded ordnance, marine archaeology, wreck detection and environmental surveying. It uses Overhauser magnetometer technology, which provides low noise, high accuracy and repeatability.


Figure 14/20: SeaQuest marine gradiometer





14.4 Global Magnetic Fields

14.4.1 Satellite Crustal Field- CHAMP satellite
(2000-2003)

Taken in part from The Leading Edge article 'CHAMP satellite and terrestrial magnetic data help define the tectonic model for South America and resolve the lingering problem of the pre break-up fit of the South Atlantic Ocean' by J.D. Fairhead and S. Maus, 2003.

This article was published in 2003, 3 years into the 5 year planned CHAMP mission (limited by its low altitude). However, the satellite only re-entered the atmosphere in the Fall of 2010, so Section 14.4.2 covers the period 2003-2010.

The Geomagnetic field was first imaged from space by the POGO (1965-71) and then the Magsat (1979-80) satellites (Ravat and Purucker, 1999). Since Magsat, there had been a 20-year gap in space magnetic observations until the launching of CHAMP in July 2000, the higher orbiting Ørsted, launched in Feb. 1999 with an elliptical orbit at altitudes between 650 and 850 km, and SAC-C, launched in Nov. 2000 with a sun-synchronous circular orbit at 702 km.

Since the main field, making up 99% of the Geomagnetic field, is so well determined by the CHAMP/Ørsted/SAC-C satellites, this has allowed the determination of the long wavelength component of the crustal magnetic field seen at CHAMP satellite altitude.


This section concentrates on this 1% of the
Geomagnetic field that originates in the crust.

In July 2000, a new potential fields satellite was launched and is currently measuring the Earth's gravity and magnetic fields, the latter down to 0.1 nT. The satellite is called CHAMP (CHAllenging Mini-satellite Payload, Fig 14/21) and until about 2005 will be providing high accuracy crustal magnetic field data of the Earth for wavelengths of ~300 km to 3,000 km. Previous magnetic satellites have been reviewed by Ravat & Purucker (TLE, March 1999), who found that an excellent comparison could be made between the long wavelength (>500 km) component of the DNAG magnetic field covering North America and the Magsat satellite field. Magsat was limited to a 6-month operating period due to its low orbital altitudes (352 km and 578 km) before burning up in the atmosphere in June 1980. Its large variation in orbital altitude was one of the factors that made the generation of the crustal field component difficult.

The CHAMP crustal field, on the other hand, has been
derived from solar night data, for which the magnitude
and complexity of the transient variation corrections
have been significantly reduced making for a more
stable crustal field solution with significantly greater
resolution. The resolution of the satellite magnetic field
over the Atlantic Ocean is able to map subtle coherent
features (<1 nT in amplitude) which are interpreted as
bulk changes in the remanence of the oceanic crust
associated with the linear magnetic reversals pattern of
the geomagnetic field during and since the Mesozoic
period (see Figs.14/22 and 14/24). Such resolving
power of these N-S striking anomalies, which sub
parallel the orbital tracks, provides confidence that the
larger magnetic variations imaged over the continents
are being reliably mapped and can be used to aid the
identification of geological provinces of interest in
petroleum and mineral exploration.

The CHAMP satellite. CHAMP is similar to the POGO
and Magsat satellites in having low orbital altitudes
capable of imaging the crustal component of the
Geomagnetic field. However the CHAMP satellite
measures the magnetic field in a circular orbit, which
decreases due to atmospheric drag, from 450 km to 300
km over its 5-year mission. This compares to the short
6-month active life span of Magsat, such that the longer
CHAMP mission will yield a much denser data coverage
and will better map the time variant parts of the field.
Furthermore, the fluxgate and Overhauser
magnetometers improve the measurement accuracy by
an order of magnitude to about 0.1 nT. Fig. 14/21
shows the CHAMP satellite with its aerodynamic shape
to minimise atmospheric drag and thus prolong its life at
these altitudes.



Figure 14/21: The aerodynamically designed CHAMP satellite, built to combat atmospheric drag in low altitude flight.

Unlike the Magsat satellite, which had a dawn/dusk
synchronous orbit, CHAMP samples both the solar day
and night parts of the field. CHAMP completes about
6,500 full orbits per year. Only night time data are
suitable for crustal field studies, since during the
daytime the ionosphere is a highly conductive source of
noise. Data were further discarded for periods of strong
solar activity, which reduced the clean data to about
2,000 half-orbits or 5 tracks per degree longitude per
year.


Figure 14/22: Images of the Ørsted, SAC-C and CHAMP satellites. For further details of CHAMP and other geomagnetic space missions readers are recommended to visit the GFZ Potsdam website www.gfz-potsdam.de and the NASA website www.nasa.gov

In deriving the crustal magnetic field model (MF2-400
model) a key issue was the separation of the crustal
field from other contributions to the measured field,
including the core field, ionospheric and
magnetospheric fields and induced fields. This
separation has been greatly aided by the presence of
two additional magnetic satellites, Ørsted and SAC-C, both in higher orbits (see Fig. 14/22). The Ørsted and
CHAMP data have been successfully combined to
compute an accurate model of both the core field and
its secular variation. In deriving the MF2-400 crustal
field model (by Stefan Maus, where MF2-400 represents
Crustal Magnetic Field version 2 at 400km altitude), a
core field to spherical harmonic degree 15 was
subtracted and time varying magnetospheric fields of



degree-2 were fitted and removed on a track-by-track
basis. Since the orbit is polar, the crossovers occur at
very high latitudes and cannot be used for line levelling
(see Fig. 14/23).

Figure 14/23 CHAMP polar orbit.
Furthermore, differences in flight altitude also
complicate any kind of line levelling. Therefore, no line
levelling is done. Instead we rely on rigorous data
selection, use a very precise model of the main field
and secular variation and use a filter to remove any
remaining spurious long wavelength noise. After final
data screening, the spherical harmonic coefficients of
the crustal field map were derived by least squares.

The resulting MF2-400 model, with magnetic data at
400 km altitude, is shown in Figs.14/24 and 14/25 as
both an anomaly map and as a 3D pseudo relief
orthographic projection, centred on the Atlantic, of the
crustal scalar or Total Magnetic Intensity (TMI).



Figure 14/24: An initial crustal TMI field, MF2-400, with anomaly amplitudes varying within +/-14 nT. Latitude and longitude lines at 60° intervals are superimposed.



Figure 14/25: Orthographic projection of the crustal
scalar or total magnetic intensity (TMI) field. Topo/
bathy fabric and continent and plate outlines
superimposed.


Figure 14/26: Vertical derivative (VDR) of the TMI at
400km altitude for the South Atlantic. The magnetic
isochrons are superimposed on the image and
closely correlate with the VDR anomalies.


The amplitude range of the MF2-400 field at 400 km altitude is up to ~18 nT (over the Kursk anomaly). The large amplitude anomalies tend to be located within and about the continents, increasing in amplitude with latitude. This can be seen in Figures 14/25 and 14/xx. An exception is the Bangui magnetic anomaly, centred in the Central African Republic (see Fig 14/25), which appears as a strong dipole anomaly. The majority of the anomalies are induced. Over the oceanic crust, particularly for ages less than 84 Ma, the anomalies are generally due to remanence and have amplitudes < 1 nT. Although these oceanic crustal anomalies are of small amplitude, they do have a close correlation with the symmetrical set of magnetic anomalies (albeit highly filtered, low-pass 400 km) paralleling the mid ocean ridges. Figure 14/26 shows this for the vertical derivative of the TMI field.

14.4.2 Satellite Crustal Field- CHAMP satellite
(2003-2010)

Due to the design of the satellite, its aerodynamics helped to overcome atmospheric drag, and a booster propulsion system allowed scientists to boost the satellite into a higher orbit on several occasions. The decay of the orbit with time is shown in Figure 14/27.


Figure 14/27: Decay of the CHAMP orbit, with predicted re-entry and burn-up in the Earth's atmosphere in late 2010.

The boosters and the aerodynamic performance of the satellite allowed a 5 year extension to its life, which came when the solar flux and sunspot activity were at their minimum (see Figures 13/12 and 13/13), thus providing the least storm-affected data precisely when the CHAMP satellite orbit had decayed to less than 350 km.

Since 2003 and the MF2 model, a series of MF models have been derived from the data. The stability of the continental anomalies between successive models is clearly seen in Fig 14/29 for profiles over the Kursk anomaly, the largest anomaly on Earth. Figure 14/29 shows a N-S profile across the Kursk anomaly, whose location is shown in Fig 14/28 together with the 2D magnetic NW-SE trend identified by the two arrows.



Figure 14/28 Location of the Kursk anomaly north
of the Black Sea

Figure 14/29: The Kursk anomaly and its stability
between successive MF models.


Figure 14/30: The terrestrial TMI data for Poland showing the NW-SE magnetic trend to the NW of the Kursk anomaly.



This NW-SE trend is the SW boundary of the East European Platform (see Fig 14/31) against the Trans-European Suture Zone (TESZ), a deep suture zone with 10+ km of sediment. This zone is clearly defined geophysically in Figure 14/30.



Figure 14/31: Geological location of both the Kursk anomaly and the TESZ.

What causes the Kursk magnetic anomaly? Look it up via Google. In fact the Kursk anomaly is not a single anomaly, as seen at ~400 km altitude, but is made up of a number of very intense magnetic anomalies (Fig. 14/32) as mapped by aeromagnetic surveys (terrestrial data).



Figure 14/32: A 3D image of the Kursk anomaly
based on EMAG2 terrestrial magnetic data.

Currently the best MF model is MF6, with MF7 being compiled (July 2010) and a final version planned (December 2010) once CHAMP has burnt up.

Part of the MF6 model is shown in Fig. 14/33 in the form of the vertical component at 100 km altitude (Ref: Maus et al., G3, 2008). As the CHAMP satellite progressively orbits at lower altitude, the wavelength resolution is improving all the time, such that wavelengths down to <300 km may be resolvable in MF7 and MF8.



Figure 14/33: Crustal magnetic field of the CHAMP
satellite model MF6 for the Vertical magnetic
component downward continued to 100 km altitude
(Maus et al., G3, 2008)


14.4.3 Terrestrial Global Magnetic Field
WDMAM-World Digital Magnetic Anomaly Map


Figure 14/34: World Digital Magnetic Anomaly Map
(WDMAM) based on a 5km grid at 5km height.

The WDMAM model is based on land, aeromagnetic and ship-borne magnetic data and was an initiative of IAGA. For many continental areas the resolution is similar to EMAG2 (next Section, 14.4.4), whereas the oceanic areas have sparse ship track coverage.





14.4.4 Terrestrial Global Magnetic Field
EMAG2 - Earth Magnetic Anomaly Grid

EMAG2 is a further improvement of WDMAM, compiled by Stefan Maus (NOAA), gridded at ~4 km resolution and upward continued to 4 km above the geoid.


Figure 14/35 EMAG2

EMAG2 was released in 2010 and is specified as a global 2-arc-minute resolution grid of the anomaly of the magnetic intensity. EMAG2 significantly updates the first global magnetic anomaly grid, EMAG3, which provided the base grid for the World Digital Magnetic Anomaly Map for the Commission for the Geological Map of the World. As reflected in the name, the resolution has been improved from 3 arc minutes to 2 arc minutes and the altitude has been reduced from 5 km to 4 km above the geoid. As can be seen from Figures 14/34 and 14/35, EMAG2 has been compiled from satellite, marine, aeromagnetic and ground magnetic surveys. EMAG2 has additional grid and trackline data included, both over land and the oceans. Interpolation between sparse tracklines in the oceans was improved by directional gridding and extrapolation, based on an oceanic crustal age grid. The longest wavelengths (larger than 330 km) were replaced with the latest CHAMP lithospheric field model, MF6.

What resolution does this 2 arc minute grid actually have? GETECH has contributed decimated (15 minute, or ~30 km) grids for some of the continents to help improve the coverage without jeopardising the commercial value of these grids. This is shown in the following figures.

Figure 14/36 shows an area in Brazil covered by CPRM and Petrobras aeromagnetic survey data, integrated into a national coverage by GETECH. The EMAG2 grid is shown in Fig. 14/37 with a zoom-in area. By contrast, the full resolution GETECH grid (1 km grid) is shown for comparison in Fig 14/38.


Figure 14/36: Example area in Brazil showing
resolution of WDMAM/EMAG2 products vs GETECH
full resolution dataset

Figure 14/37: The visual resolution of the WDMAM and EMAG2 grids for part of Brazil. The grid cell size is ~4 km but the resolution of the data is a 50 km grid, or a 60 km shortest wavelength.



Versions of the WDMAM and EMAG2 3D dynamic visualisation grids can be downloaded from the GETECH website for use in NASA World Wind software.

Figure 14/38: GETECH's 1 km grid compilation for Brazil, showing full resolution over the data contained in WDMAM & EMAG2 (Fig 14/37).


Figure 14/39: Amplitude variation in marine areas, i.e. the crustal magnetic field grows from South to North.

EMAG2 for the area containing North America (Fig. 14/39) shows clearly that in the marine areas the magnetic field strength grows from south to north, since the oceanic magnetic crust is dominantly remanent and has acquired its remanence on formation from an Earth's field that varies from ~35,000 nT at the equator to 70,000 nT at the poles.

For continental areas the field is mainly induced, without this distinct S-N increase in amplitude seen in marine areas. This is due to the old basement geology having variable spatial composition and susceptibilities. If one restricts the analysis to large individual anomalies then there is an increase in amplitude at higher latitudes, as can be seen in Figs. 14/25 and 14/28. This is due in part to anomalies close to the Equator having a dipole character (+ve and -ve components to the anomaly), whereas closer to the poles the amplitude of the anomaly is concentrated in the positive monopole character of the anomaly.

14.5 Effects of the decreasing
Geomagnetic field strength

In Section 13.4.1 and Figure 13/11 (reproduced in Fig 14/40) the geomagnetic field is decreasing in strength at 6% per 100 years, or about -27 nT per year. The most likely reason is that the Earth's geomagnetic field has commenced a reversal, which will take ~10,000 years to complete (see Section 13.4.1).


Figure 14/40: Decline of the Geomagnetic field over the last 100 years.

The weakest part of the Geomagnetic field is located
over the Brazil/Argentina section of the South Atlantic
as shown in Fig. 14/41

Figure 14/41: Strength of the Earth's magnetic field, showing that the weakest part is located over the South Atlantic.

A cross-section of the radiation belts surrounding the Earth (Fig. 14/42) shows that the Van Allen Belt comes within 200 km of the Earth's surface over the South Atlantic. The location of this weak spot is partly because the axial dipole is offset from the centre of the Earth, making this area furthest from the dipole (see Fig. 14/44).




Figure 14/42: A cross-sectional view of the radiation belts about the Earth, showing where the Van Allen radiation belt is located with respect to the minimum of the Geomagnetic field over the South Atlantic (the South Atlantic Anomaly).

Satellites passing through this area can suffer from memory upsets, as shown in Fig 14/43. The Earth's eccentric dipole is protecting us from cosmic particles.

The BAD news: the effect is small at present but will grow as the Geomagnetic field weakens. As previously indicated, the field weakens to a low value before rotating and growing back to its original strength. When the Geomagnetic field is at its weakest, cosmic radiation is able to reach the surface; this is a trigger for species extinction and mutation and may have a significant effect on animal evolution. This state is likely to occur from 1,000 to 5,000 years from now!

Figure 14/43: The black spots represent locations of memory upsets on UoSat-2 between Sept 1988 and 1992. To allow the memory to recover, the system needs rebooting. EOS, Vol 84, No 5, Feb 2003, page 42.

The GOOD news: humans have evolved on Earth for more than 3 million years. During this time there have been multiple geomagnetic reversals and we have survived.






SECTION 15: AEROMAGNETIC SURVEYING




15.1 Review Articles

The following articles provide a good historic overview:
i) Reford and Sumner, 1964. Review article: aeromagnetics. Geophysics 29: 482-516.

ii) Grant, 1972. Review of data processing and interpretation methods in gravity and magnetics, 1964-1971. Geophysics 37(4): 647-661.

iii) Paterson and Reeves, 1985. Applications of gravity and magnetic surveys: the state of the art in 1985. Geophysics 50: 2885.

iv) AGSO Journal of Australian Geology and Geophysics, Vol 17, No 2, 1997.

15.2 Aeromagnetic System



Figure 15/1: Methods of flying magnetometers

Use a single or twin engine aircraft with the magnetometer fixed to an extension of the tail plane, remote from the magnetic noise of the aircraft, or trail the magnetometer from a helicopter. Old methods trailed the magnetometer from the aircraft on a long cable (see Fig 14/17).

15.2.1 Navigation

- flight-path film or video camera
- doppler
- ground based radar or UHF transponder
- Global Positioning System (satellite
method of triangulation GPS and DGPS)

Figure 15/2: Helicopter multi sensor survey
(Scintrex)

DGPS is now the main form of navigation system (where D = Differential, i.e. a fixed reference GPS receiver is used) and gives high precision in latitude, longitude and geometric or ellipsoidal height (i.e. height above the ellipsoid, not height above the geoid; the latter is the orthometric height).



15.2.2 Data Recording

- analogue charts
- digital recording system (magnetic tape or disk)

Modern survey systems have highly sophisticated digital
recording systems able to rectify, display and store
literally billions of records from a variety of sensors
during a sortie of up to 10 to 12 hours.

15.2.3 Ancillary sensors (Scintrex 1997)

VLF: two channels, frequencies ranging 15 kHz to 30 kHz
Gamma ray spectrometer (radiometrics): 256 channels, upward and downward looking sensors, 16.8 litre (1024 in³) detectors
EM: 2 vertical coaxial coil-pairs and 3 horizontal coplanar coil-pairs
DGPS
Magnetic gradiometer


15.3 Survey Design

15.3.1 Height of Sensor

- constant height above ground
- constant barometric altitude
- loose drape



Figure 15/3: Three types of airborne surveys

Height is maintained and recorded by radar &/or
barometric altimeter &/or DGPS. Choice of height is
dictated by

a. the type of terrain (e.g. in mountainous areas it is difficult to maintain an exact height above ground); contour flying using helicopters is sometimes done;
b. the purpose of the survey (for hydrocarbon surveys a constant barometric height is normally preferred); and
c. the line spacing (the height should not be less than about 1/4 to 1/3 of the line spacing or anomalies will be aliased).

In oil exploration the target is usually mapping depth to
'magnetic basement' beneath sediments. To achieve
this, surveys are flown at a safe constant altitude above
sea level. Deviations of height are recorded. Height
depends on terrain but generally flown at lowest safe
flying height.

In mineral exploration, surveys are often over exposed crystalline basement, which has a strong magnetic response. Such areas are often mountainous. Thus surveys try to fly at a constant height above ground to minimise the magnetic response due to the topography and so pick up signals that relate more to the change in geology. The best results are obtained using a helicopter, which can keep a constant height above the ground; this reduces ground coverage and increases the cost per km. A compromise could be a loose drape using a fixed wing aircraft at a slightly higher elevation.

15.3.2 Line Spacing

The choice of line spacing is based on the scale of
mapping the survey wishes to produce or based on the
exploration target size being investigated.

e.g.
Regional scale (greater than about 1:200,000): 2 - 4 km
Semi-detailed (1:25,000 - 1:100,000): 0.25 - 1.0 km
Detailed (less than about 1:25,000): 100 - 250 m
Ground surveys: usually 50 - 100 m

A useful rule of thumb is that the line spacing should be roughly 1 cm at the final working map scale (see the sketch below).
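A minimal sketch tabulating these rules of thumb (the 1 cm rule and the 1/4 to 1/3 height band quoted above; the example scale is illustrative):

def survey_design(map_scale_denominator):
    # Line spacing: roughly 1 cm at the final working map scale.
    spacing_m = map_scale_denominator * 0.01
    # Sensor height: not less than about 1/4 to 1/3 of the line spacing.
    return spacing_m, (spacing_m / 4.0, spacing_m / 3.0)

# 1:100,000 mapping -> 1,000 m line spacing, minimum height ~250-333 m
print(survey_design(100000))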

'Magnetic basement', which hopefully defines the base of the sedimentary basin, generally needs to be in excess of 2 km deep for oil maturation processes. Magnetic basement at depths greater than 2 to 3 km will generate anomalies with wavelengths of about 4 km and greater; thus a flight line spacing of 2 km is commonly chosen. Flight line spacings of 4 km up to 30 km are sometimes used for reconnaissance surveys. Sometimes doublets or triplets of lines are flown with 2 km spacing and with a spacing between these doublets/triplets of, say, 10 km. This gives some detail plus regional coverage (see Fig 15/4). Spacing etc. all affect the overall cost of the survey, and thus surveys are often a compromise to get the maximum useful data within budget limitations.

15.3.3 Line Direction

Use known geology to determine trend of structures to
be surveyed. Fly perpendicular to 2D structural trend to
maximise aeromagnetic information. If structural trends


change within the concession area, then compromise the
flight direction to fly as near to perpendicular to both
trends as possible; or, if the trends do not overlap, fly
lines in two different directions; or, probably better,
reduce the line spacing. In low latitudes (near the
magnetic equator) it is preferable to have survey lines in
a north-south direction as this produces better resolution
of the anomalies. If the geological formations strike
north-south then a 45 degree orientation may be
preferable in some cases to east-west.
Flying in the wrong direction can result in bad sampling
of the anomaly field (see Figure 15/5).


Figure 15/4: a) Upper figure: the original
aeromagnetic survey design used to map the Malay
basin, striking SE offshore Peninsular Malaysia.
Flight lines are NE-SW at 5 km spacing and tie lines
NW-SE at 15 km spacing. b) Lower figure: survey
offshore southern Kalimantan in the Java Sea. The
direction of the geology extended the survey to the SW
and NW with 2 lines close together but with 12 km
separation between line pairs.

15.3.4 Tie Lines

Tie- or control-lines are surveyed at right angles to the
main survey lines in order to provide control on the
magnetic diurnal variation, which causes the magnetic
field to change up and down with time. By matching the
magnetic field values at flight-line/tie-line crossings,
adjustments (Cross Over Errors, COE) are made to
remove the diurnal variation effect. A better way is
always to remove the diurnal from the flight lines first
using the base station record(s) and then carry out
residual COE adjustments.


Figure 15/5: TOP LEFT: computer plotting of
gradiometer data with line spacing 150 m; TOP
RIGHT: same area but mapped with half the line
data (line spacing 305 m), note the change in trend of
the anomalies; BOTTOM LEFT: same area but mapped
with a quarter of the line data (line spacing 610 m),
note the change to bull's-eye anomalies.

The distance between tie lines depends on the location
of the survey (diurnal effects are least near the magnetic
equator (except for Electrojet effect) and greatest near
the poles), the flight line spacing and the speed of the
aircraft.
e.g. Regional surveys: 2 - 4 times the line spacing
Detailed surveys: 5 - 10 times the line spacing
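
As an illustrative sketch of the residual COE levelling step described above (diurnal already removed using the base station record), the following Python fragment assigns each flight line a constant correction from its crossover mis-ties; the crossing data structure and its values are hypothetical, not any contractor's production algorithm.

import numpy as np

# Zero-order crossover (COE) adjustment. Each crossing is
# (flight_line_id, tie_line_id, flight_value, tie_value) in nT.
def level_flight_lines(crossings):
    misties = {}
    for fl_id, _, v_flight, v_tie in crossings:
        misties.setdefault(fl_id, []).append(v_tie - v_flight)
    # One DC correction per flight line: the mean residual mis-tie.
    return {fl_id: float(np.mean(d)) for fl_id, d in misties.items()}

corrections = level_flight_lines([
    ("L10", "T1", 43512.4, 43513.1),
    ("L10", "T2", 43498.0, 43498.9),
    ("L20", "T1", 43520.5, 43519.8),
])
# Add corrections[fl_id] to every reading along that flight line.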

15.3.5 Relation between flying height and line
spacing

Figure 15/6 is taken from the Hemlo area of Ontario,
Canada the site of a major gold discovery. Three
different aeromagnetic surveys were flown as follows
a) Part of Geological Survey of Canada national
coverage flown at an altitude of 300m above mean
terrain with flight line spacing of 800m.
b) Survey flown at 100m altitude with flight line spacing
of 100m.
c) Survey flown at 50m altitude with flight line spacing of
100m.


The contrast between the 300 m and the 100 m altitude is
striking in terms of the added detail. The further reduction
in survey altitude to 50 m adds some more detail, and an
increase in amplitude of the small and narrow magnetic
features is evident.


Figure 15/6: Flying height versus flight line spacing.

A general 'rule of thumb', suggested by Reid (1980), is
that the line spacing should equal the source depth below
the magnetometer (Reid, A.B., 1980, Aeromagnetic
survey design: Geophysics, 45, 973-976). The table
below is after Reid (1980).


15.4 Airborne Acquisition and Quality
Control (part input from Reeves, 2005)

This section is subdivided into the equipment, methods
and special needs of an airborne aeromagnetic survey,
followed by the requirements of pre-survey QC,
production survey QC and post-survey QC. Since
airborne magnetic and gravity surveys are often flown
together, see also section 9.3 for further information.

15.4.1 Equipment and Methods used

Birds and Stingers: It is clear that great pains must
be taken to eliminate spurious magnetic signals that
may be expected to arise in an aeromagnetic survey
from the aircraft itself. Standard tests must also be
defined to measure the success with which this has
been achieved for any given survey aircraft and
magnetometer system. When monitoring a survey
carried out by a contractor it is essential to ensure that
these tests are performed before the acquisition of
survey data commences, at the end of the survey to
check that nothing has changed, periodically during
survey operation if that extends over a number of
months and whenever a major modification - such as an
engine change - is carried out to the aircraft.

The airframes of modern aircraft are primarily
constructed from aluminum alloys which are non-
magnetic; the main potential magnetic sources are the
engines. As a first approach, then, magnetometer
sensors have always been mounted as far away as
possible from the aircraft engines. The earliest
magnetometer configurations simply involved placing
the sensor in a 'bird' which was towed behind and below
the aircraft to reduce the magnetic effect by simply
increasing the distance as much as possible. This is still
often the preferred arrangement for helicopter
installations (Figure 15/7) for which a great advantage is
usually the ability to mount and de-mount equipment
quickly in an available aircraft at the survey locality.

Apart from being an inelegant arrangement for fixed-
wing aircraft, bird operation has potential dangers and
additional sources of noise and error are evident
through manoeuvering of the bird itself during flight.



A rigid extension of the airframe - usually to the aft in the
form known as a 'stinger' (Figure 15/7) - solves many of
these problems but necessitates closer attention to the
sources of magnetic effects on board the aircraft. There
are principally three sources of aircraft noise:


Figure 15/7: Left, a magnetometer slung below a
helicopter as a bird installation and, right, a
magnetometer in a stinger behind a fixed-wing
aircraft.

(a). Permanent magnetisation of the aircraft which
will be unchanging unless engines are changed or
magnetic objects (such as toolboxes) are brought on
board.

(b). Magnetisation induced in engine (or other)
components by the earth's magnetic field. The
magnitude and direction of the magnetisation will be
dependent on the relative orientation of the aircraft and
the geomagnetic field and so will change with survey
location (different magnetic inclinations), direction of
flight and even with small adjustments to flight direction
in three dimensions during normal manoeuvre along a
single flight line.

(c). Magnetic fields set up by electrical circuits within
the aircraft and any eddy currents induced - according to
the Biot-Savart law (see http://webphysics.davidson.edu/physlet_resources/bu_semester2/c14_biotsavart.htm)
- in the airframe as a result of the motion of the
conductive airframe through the earth's magnetic field.

Old methods of compensation: For many years these
effects were dealt with sequentially. The permanent
magnetic field of the aircraft at the magnetometer
sensor was compensated ('backed-off') by passing
appropriate DC currents through each of three
orthogonal coils in the vicinity of the sensor. The
induced component was offset by mounting pieces of
highly-permeable material close to the sensor in a
position (found by trial-and-error) such that their
magnetic effect was always equal and opposite to that
of the engines. The eddy-current effects were similarly
mimicked but in opposite sign by coils of wire placed
close to the sensor. Since the success of all these
measures could only be fully proved in flight and most of
them could only be adjusted on the ground, the
compensation of an aircraft was a tedious and
time-consuming business which often had to be repeated
to ensure ongoing system integrity.
New methods of compensation: In more recent times,
active magnetic compensators have been developed to
address these problems 'on-line' during survey flight
(see next section for details).

Active magnetic compensation: The principle of
the active magnetic compensator is that the magnetic
effects of different heading directions and aircraft
manoeuvres are first measured during a calibration flight
in the absence of magnetic anomalies and then
subtracted in real-time during survey operation as
magnetic anomalies are recorded. During calibration
and survey operation the attitude of the aircraft with
respect to the geomagnetic field is continuously
monitored using three orthogonal flux-gate sensors.
These fluxgate sensors are in addition to and housed
close to the Stinger magnetometer. During survey
operation, the recorded aircraft attitude is used to apply
an appropriate correction to each reading of the
magnetometer.

15.4.2 Pre Survey QC: Performance Testing

(A). Magnetic Compensation: The following sequence
of procedures is carried out before a survey
commences.




Figure 15/8: Pitch, roll and yaw manoeuvres of an
aircraft about three orthogonal axes: (a) transverse
horizontal, (b) longitudinal horizontal and (c) vertical
respectively.

In the vicinity of the survey area, an area of known low
magnetic anomaly relief is chosen and the survey
aircraft flown to high altitude over it. At an altitude of,
say, 3,000 to 4,000m it can be assumed that any
variations due to the local geology will be vanishingly
small and, consequently, the effect of the geomagnetic
field will not change significantly with x,y position (within
a suitably confined area) and any magnetic variations
recorded can be safely attributed to effects of heading
and manoeuvre. The aircraft then flies around the four
sides of a square oriented north-east-south-west (i.e.
with sides flown on the cardinal headings) or parallel
and perpendicular to the chosen flight-line direction of
the survey, should that differ from north-south or
east-west. With the compensator in 'calibration'
mode, along one side of the square the aircraft executes
manoeuvres of 10 degrees in roll, 5 degrees in pitch


and 5 degrees in yaw, each within a period of a few
seconds (see Figures 15/8 & 15/9). This exercise is then
repeated on each of the other three sides of the square
in turn. The results are stored in the memory of the
compensator and applied automatically when the
instrument is in 'survey' mode.


Figure 15/9: Top trace: magnetometer output
recorded during the execution of pitch, roll and yaw
manoeuvers. Bottom trace: the same output after
application of the automatic compensation. The
noise level is reduced to a small fraction of 1 nT.

For example, a value is now known by the compensator
for the magnetic effect of a 3 degree roll to starboard
when the aircraft is heading west and is recalled and
applied as a correction whenever that attitude is
encountered in flight. The results of the recorded
magnetometer output before and after application of
automatic compensation are shown in Figure 15/9. It is
necessary to assume that the compensation remains
unchanged after the compensation flight. Since this
assumption could be doubted, the compensation is
done in its entirety when surveying commences and is
checked periodically (every 30 days) during the flying of
a single survey. The compensation test is then repeated
in its entirety at the end of the survey and repeated at a
new locality with a different magnetic field direction or
when equipment and systems in the aircraft are
changed.

The achievement of suitable compensation is the
responsibility of a survey contractor or other
organisation carrying out a survey. The survey client - or
a technical consultant - needs to have the quality of the
compensation demonstrated and this is usually a
requirement in airborne survey contracts. The following
tests are carried out periodically to demonstrate the
success of compensation.


(B). The 'clover-leaf' or heading Test: The 'clover-leaf'
or heading test is designed to demonstrate that the
aircraft and system have no significant 'heading effect',
i.e. that the same magnetic field value will be recorded
at a given location in x,y, regardless of the direction in
which the location is over flown (once corrections for
temporal variations of the magnetic field have been
applied). A visible point on the ground in an area of few
magnetic anomalies is chosen and overflown at survey
altitude in, say, a northerly direction. The aircraft then
turns and flies over the same point again in an easterly
direction, then in a southerly direction, a westerly
direction and finally in a northerly direction again to
check for any diurnal variation since the first overflight.
Figure 15/10 demonstrates the resulting flight-pattern
that gives rise to the name of the test.


Figure 15/10: Flight path in plan for a typical 'clover
leaf' test.

(C). The 'figure-of-merit' Test: A 'figure of merit' for a
system is obtained by carrying out the roll, pitch and
yaw movements while flying at high altitude (2000 - 3000
m above terrain) in each of the four cardinal compass
directions with the compensator (where fitted) in survey
mode, and adding the peak-to-peak amplitudes of the
magnetic signal obtained for each manoeuvre. That is,
the difference between the magnetometer reading when
rolled 10 degrees to port and when rolled 10 degrees to
starboard when heading north is added to the
equivalent numbers for the 5 degree pitch and the 5
degree yaw, and in turn added to the three nT values
obtained for these three manoeuvres on each of the
other three cardinal directions, making 12 terms in all.

In the 1970s, a figure of merit of 12 nT was typical for
regional surveys employing a proton magnetometer with
sensitivity of 1 nT. With improved compensation, this
was reduced to 4 nT for a 0.25 nT noise level. Current
optical-pumping magnetometer systems with the
currently standard automatic compensation equipment
routinely achieve figures-of-merit of a fraction of 1 nT.
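
As a minimal sketch of the 12-term sum just described (the trace dictionary and its keys are hypothetical), the figure of merit could be computed as:

import numpy as np

# Figure of merit: sum of 12 peak-to-peak amplitudes (roll, pitch, yaw
# on each of the four cardinal headings), compensator in survey mode.
# `signals` holds the compensated magnetometer trace (nT) per manoeuvre.
def figure_of_merit(signals):
    fom = 0.0
    for heading in ("N", "E", "S", "W"):
        for manoeuvre in ("roll", "pitch", "yaw"):
            trace = np.asarray(signals[(heading, manoeuvre)], dtype=float)
            fom += trace.max() - trace.min()  # peak-to-peak, nT
    return fom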

(D). Lag Test: The differing positions of magnetometer (or
other) sensor and positioning equipment within the
aircraft and possible electronic delays in recording one
or both values are checked by overflying a magnetic
object such as a bridge twice, the second time in the
direction opposite to the first. The displacement between
the two anomalies relative to the source is twice the shift


that must be applied to bring magnetic and positional
information into synchronisation. A lag of 0.1 to 0.2
seconds - equivalent to about 10 metres on the ground -
is not uncommon. An example of lag is shown in Figure
15/11.


Figure 15/11: Lag test. This shows two flightlines
flown in opposite directions over a target but plotted
on the same x axis. The displacement of the signal
profiles can be identified and measured.

Since survey lines are often flown alternately in opposite
directions (i.e. after completion of flying one line east-to-
west, the aircraft turns around and flies the next line
west-to-east), failure to correct adequately for lag can
result in values being shifted systematically east on
lines flown east-west and west on lines flown west-east.
This is one possible cause of the so-called 'herringbone'
effect sometimes seen on contour maps of surveys
which have not been reduced correctly. However, in
modern surveys such effects are more often due to
incomplete levelling of the flight lines.
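
A minimal numerical sketch of deriving the lag correction from the two-pass test (the peak positions and ground speed below are illustrative only):

# Two passes over the same target in opposite directions; the anomaly
# displacement is twice the lag shift that must be applied.
peak_pass1_m = 1510.0        # anomaly peak position, first pass
peak_pass2_m = 1490.0        # same target, opposite direction
ground_speed_ms = 51.4       # ~100 knots

shift_m = abs(peak_pass1_m - peak_pass2_m) / 2.0
lag_s = shift_m / ground_speed_ms
print(f"shift = {shift_m:.1f} m, lag = {lag_s:.2f} s")
# The magnetic record is then shifted by lag_s relative to the positional
# record before the two data streams are merged.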

(E). Noise level monitoring Noise experienced while
recording a magnetometer profile can be divided into
discontinuous and continuous noise. The former causes
spikes to appear on the profile which may be attributed
to a plethora of sources, internal and external to the
aircraft. These include lightning, DC trains and trams,
powerlines, radio transmission, electrical switching, and
so forth. Such effects usually demand manual
elimination - or non-linear filtering - during data
reduction. Such effects are commonly referred to as
cultural noise (see section 15.5 for Deculturing). The
continuous effects will be largely eliminated by the
compensation system in a modern installation, but there
will be detectable residuals which still set the limit to the
sensitivity of the system.

Figure 15/12: Top: magnetic dynamic test line output
showing repeat profiles over Line 3, which is 20 km
long; amplitude range 300 nT. Bottom: 4th difference
with range +0.02 to -0.015 nT.
The noise level is conventionally monitored in flight by
calculating in real-time the 'fourth difference' of
successive readings of the magnetometer. This is the
numerical equivalent of the fourth derivative of the
recorded profile and may be calculated readily from the
relationship:

4th difference = (T-2 - 4T-1 + 6T0 - 4T+1 + T+2) / 16

where T-2, T-1, T0, T+1 and T+2 are five consecutive
readings centred on the current reading, T0. When
plotted continuously during flight the width of the fourth
difference trace is characteristic of the noise level being
encountered. Spikes, DC level shifts and other
extraneous effects are also readily apparent to the
survey operator in an on-line presentation of the fourth-
difference.
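
A minimal sketch of this fourth-difference monitor in Python (the sample values are illustrative):

import numpy as np

# Fourth difference of a magnetometer series T (nT), as defined above:
# (T[i-2] - 4*T[i-1] + 6*T[i] - 4*T[i+1] + T[i+2]) / 16 for each sample i.
def fourth_difference(T):
    T = np.asarray(T, dtype=float)
    return (T[:-4] - 4*T[1:-3] + 6*T[2:-2] - 4*T[3:-1] + T[4:]) / 16.0

d4 = fourth_difference([43500.10, 43500.12, 43500.11, 43500.15, 43500.13])
# The peak-to-peak width of d4, plotted in flight, is the conventional
# noise measure, e.g. checked against a contractual limit such as 0.05 nT.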

In practice it is found empirically that the noise level is
linearly dependent on the figure of merit, such that
noise level = FOM/15, or slightly less than the average
of the figure-of-merit manoeuvre signals (Teskey et al.,
1991). After compensation the residual noise will
contractually need to be below a specified level (e.g.
0.05 nT). This again would be taken as the peak-to-peak
value of the 4th difference as described above.

15.4.4 Base stations and Repeat lines

(a) Base station recording: The aircraft should return to
exactly the same location on completion of each sortie.
This is normally a specially marked location at the
airstrip. Gravity and GPS readings will be taken at these
times as well. A typical GPS static base output is shown
in figure 15/13.


Figure 15/13: GPS statics display of start and return
GPS location pairs. Area 1m by 0.8m



The contract will more often than not specify the
acceptable variation in GPS location in X, Y and Z;
usually this will be of order 0.1 m. Usually a GPS station
will be set up at the base station to allow differential
GPS to operate.

(b) Magnetometer and GPS base station: A
magnetometer of similar sensitivity to the field
instrument, should be stationed at the base station and
run continuously to allow diurnal corrections to be made.

Figure 15/14: 9hr sections of diurnal magnetic
recording.

Any magnetic base station being used for diurnal
measurements, should be free of cultural effects. Before
airborne operations begin data should be collected and
plotted for a period of up to 12 hours to allow the local
character of the diurnal field to be established. The base
station should be operated either on a continuous basis
or from significantly before to significantly after any
productive sortie. Failure of the base station during
productive flying will result in rejection of the airborne
magnetic data related to the period of malfunction. A
window of acceptable variation will be set in the
contract. For example, anomalies should not generally
exceed a tolerance of 10 nT in 10 minutes.

(c) Repeat Lines or Dynamic Test Line: A short
section (20 km) of a flight line, or a suitable profile close
to the airstrip, is flown to test the operation of the
sensors (gravity and magnetic). This is repeated for each sortie and
examined separately from the production data. It should
be similar in character (frequency content, amplitude)
day to day. Figures 15/12 and 15/15 show the magnetic
and gravity repeat lines over the same 20km flight line.



Figure 15/15: Gravity dynamic test line output.
15.4.5 Navigation and position fixing systems

Navigation and position-fixing systems have to fulfill
three functions in the execution of airborne surveys:

a). To assist the pilot in flying as closely as possible to
the prescribed flight-path along each survey line;

b). To enable accurate recovery of the path actually
flown in this attempt, and;

c). To enable the observed anomalies to be plotted on a
map and recovered, where necessary, on the ground,
preferably in relation to visible topographic features.

Achieving these seemingly simple objectives has been
one of the most tedious, time consuming and labour-
intensive parts of airborne survey operations throughout
most of the history of airborne geophysics and the
inadequacies of the various methods employed have
been a major factor in limiting the ultimate quality of the
data gathered. All 'surveys' presuppose the collection of
data and the relation of those data to their x,y
coordinates. Where the survey is carried out from a
moving vehicle, the simultaneous capture of accurate x
and y values assumes an importance no less than that
of the geophysical parameters being surveyed. (In the
case of airborne surveys, the capture of a vertical
position parameter is also a concern.)

Perhaps the single most important technical
development in airborne survey practice occurred in the
early 1990s with the advent of global positioning
systems (GPS). GPS relies on the simultaneous
reception of signals from a number of earth-orbiting
satellites from which a geo-centric position for the
survey vehicle can be derived in real time. The
positioning of sufficient dedicated first-generation GPS
satellites to enable such a system to be fully operational
at all places on the earth's surface at all times signaled
the start of the GPS era. The universality of the system
allows GPS receivers to be mass produced and
therefore available to potential users at a cost that is
modest compared to alternative methods of positioning
with comparable accuracy. Since the accuracy
achievable very simply with GPS has matched or
exceeded that possible with earlier, more complicated
and more costly positioning systems, GPS quickly
became adopted as the principal method of navigation
and position fixing used by almost all operators in
airborne geophysical surveys. GPS will therefore be
described first, with subsequent reference to systems
used in the past which are, in some cases, still of
ancillary importance, and of real interest to those who
have to deal with airborne data acquired during the pre-
GPS era where accurate position-fixing is often a limit to
data quality.



(a). Global positioning system (GPS). The US
Department of Defense's NAVSTAR system comprises
a constellation of 24 satellites, of which 21 are in use,
the remainder on stand-by. Each satellite transmits
coded signals at two microwave frequencies known as
L1 and L2 (1575.42 MHz and 1227.6 MHz respectively)
that enable the receiver to calculate the satellite's
precise position at the time of transmission and the
distance from the satellite to the receiver. The L1 band
was originally available for civilian use and provides an
accuracy better than 100 m for 95 percent of readings.
Greater accuracy is achievable using the precise (or P)
code which is transmitted on both the L1 and L2 bands.
The principle of positioning using the NAVSTAR
satellites is no different from that used in other forms of
positional surveying. The distance from a known point
(in this case a satellite) to the receiver is calculated by
'ranging', i.e. calculating the distance from satellite to
aircraft by measuring the time taken for the signal to
travel and knowing the velocity of propagation. To do
this requires both transmitter and receiver to have
carefully synchronized clocks - a microsecond error
between the clocks is equivalent to 300 m in range. All
satellites have clocks which can be considered perfectly
synchronized with each other, while the error in
synchronization of the receiver clock is treated as one of
the unknowns. Range (or 'pseudo-range', since the
receiver clock-time is uncertain) is measured by
comparing the time-shift between identical step coded
signals generated by the satellite and by the receiver.
This can be achieved with an accuracy of about 1 per
cent of the pulse period (which is 1 millisecond for the
civilian code) equivalent to 3 metres at the speed of
light. Information on the satellite orbit - included in the
transmitted signals - allows its instantaneous position to
be calculated for the time of the pulse transmission.
Simultaneous ranging of four satellites gives four
pseudo-ranges that can be resolved for the four
unknowns - the x, y and z coordinates of the receiver
and the error in the receiver clock-time. This is done
automatically in the receiver.
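
The four-unknown solve can be sketched as an iterative least-squares problem; a minimal, simplified Python version is below. Real receivers apply many further corrections, and all the structures here are illustrative assumptions, not a receiver's actual firmware.

import numpy as np

C = 299_792_458.0  # propagation velocity, m/s

# Simplified pseudo-range solution for receiver x, y, z and clock bias b:
# observed pseudo-range = |sat - p| + C*b, solved by Gauss-Newton.
def solve_position(sat_xyz, pseudoranges, iters=8):
    sat_xyz = np.asarray(sat_xyz, dtype=float)   # (4, 3) satellite positions
    pr = np.asarray(pseudoranges, dtype=float)   # (4,) measured ranges, m
    state = np.zeros(4)                          # x, y, z (m) and b (s)
    for _ in range(iters):
        rho = np.linalg.norm(sat_xyz - state[:3], axis=1)
        residual = pr - (rho + C * state[3])
        # Jacobian: line-of-sight terms for x, y, z plus the clock column.
        J = np.hstack([-(sat_xyz - state[:3]) / rho[:, None],
                       np.full((len(pr), 1), C)])
        state += np.linalg.lstsq(J, residual, rcond=None)[0]
    return state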

The satellites are placed in orbits such that at least six
of them are always visible from any point on the earth.
Monitoring of satellites from accurately positioned
ground stations allows precise details of their orbits to
be updated and the latest ephemeris details are up-
loaded periodically from the ground stations to each
satellite for onward transmission to the receivers. The
accuracy of a fix, however, depends on the geometrical
arrangement of the four satellites used and is best when
they define the apexes of a tetrahedron with the largest
possible volume. Readings made instantaneously with a
single receiver in an aircraft have proved accurate to
±20 m in x and y. As a result of the geometrical
configuration, height information is subject to errors
about three times greater than this.

Some of the sources of error can be eliminated or
reduced by operating in a so-called 'differential mode'
(DGPS). In this mode, a second receiver is operated at
a fixed point on the ground and is observed to display
minor variations in the x and y values obtained for this
fixed point. These variations are attributable to various
causes (such as the ionosphere) but may be assumed
to be equally applicable to the mobile receiver in the
aircraft, if it is ensured that both receivers are using the
same four satellites. The output of the mobile receiver
can then be corrected for the variations observed at the
fixed receiver. This was originally achieved in post-
processing of the data but now more usually by
transmitting fixed-station x,y information by UHF radio to
the aircraft in real time. Single station GPS accuracy
(±20 m) is already adequate to enable a pilot to follow a
desired flight-path with accuracy adequate for most
surveys; differential mode can be demonstrated to
achieve an accuracy of ±5 m, which is adequate for
almost all airborne survey purposes.
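
In sketch form (the coordinates are invented for illustration), the differential correction amounts to:

# The drift observed at the fixed station is subtracted from the mobile
# receiver, assuming both receivers track the same four satellites.
base_known = (430211.0, 4581234.0)      # surveyed x, y of the fixed station
base_observed = (430226.0, 4581241.0)   # its GPS-reported x, y right now

dx = base_observed[0] - base_known[0]
dy = base_observed[1] - base_known[1]

aircraft_observed = (452310.0, 4590102.0)
aircraft_corrected = (aircraft_observed[0] - dx, aircraft_observed[1] - dy)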

A bonus is that the accurate GPS time signal can be
used as a time-base for all airborne data recording, as
well as for the precise synchronization of a ground
magnetometer base station with the airborne system.
GPS aboard the aircraft offers additional benefits for the
pilot, such as the capability to display the direction and
distance to the start of a pre-programmed flight line and,
while on line, the off-track error (left or right) and the
distance to the next way-point or line end. A sequence
of consecutive line start-points and end-points may be
pre-programmed for a single flight or sortie.

(b). Flight-path recovery in the pre-GPS era In the
many years of survey operation before the advent of
GPS, problems surrounding position-fixing occupied a
large part of the effort of survey execution. This was
particularly true in remote areas of the world where
maps of high quality were unavailable. In these cases,
the advent of satellite imagery in the 1970s provided a
new type of base map that offered advantages. A
downward-looking camera in the survey aircraft,
exposing one frame of 35 mm film every few seconds,
recorded the ground locations that were actually flown
over, while the pilot had a strip of maps, aerial
photographs or satellite images from which to steer the
aircraft along each desired flight line.

Such systems evolved from the earliest airborne
geophysical surveys, perhaps with the addition of some
more modern electronic support until the advent of GPS.
Electronic support systems included the Doppler
navigator, inertial navigation and range-range radar,
depending on survey location and the need for
precision. A summary of these techniques, written at a
time when such techniques were about to reach the end
of their useful lives, is given by Bullock and Barritt
(1989). One survivor of the earlier era is the advantage
of having pictures from a downward-looking camera.
These days, where such a camera is used, it is
invariably a digital video camera and its use is mostly for
identifying man-made metallic objects such as barns,


power-lines, railways, etc that can be associated with
local magnetic anomalies (often referred to as culture)
when the acquired data is first being inspected for data
quality. With the advent of digital data acquisition on
board the aircraft in the 1970s, the flight-path recovery
information was the last item to remain in the analogue
world and at the end of the time-consuming and labour-
intensive flight-path recovery operation, the flight-path
map was digitized to give the x,y positions of the aircraft
that could be matched with the digitally recorded data by
way of a system of fiducial numbers that was common
to both analogue and digital data sets. These days,
fiducial numbers are derived from the GPS clocks.

It should be noted that geophysical readings are always
made on a time basis and that the distance covered in
unit time will vary, not only on account of any variations
in the air-speed of the aircraft, but also on account of
any wind which will cause air-speed to differ
systematically from ground-speed.

15.4.6 Altimeters and digital elevation models
(DEM)

The purpose of flight-altitude measurements as part of
an airborne geophysical survey is twofold: first, to give
the necessary data for post-flight survey altitude
verification, and second, to give a possibility to make
flight-altitude corrections for the primary geophysical
data. Such corrections can significantly improve the
accuracy and usefulness of primary data.

The standard instrument for the measurement of survey
altitude has been the radar altimeter. In addition, a
barometric altimeter is part of standard avionics in any
aircraft. In recent years two new alternatives have
emerged: a laser altimeter, and the utilization of the
GPS navigation signal for a full 3D flight path recovery.
(An essential part of any altimeter system in low-altitude
surveying is an automatic warning/alarm-signalling
system for crew safety).

(a) Radar altimeter: The two-way distance from the
aircraft to the ground and back is measured based on
the known velocity of electromagnetic waves in vacuum
(air) and on the recording of the small but finite time
difference Δt that elapses between the transmission
and the reception of the ground-reflected EM signal. It is
technically advantageous to convert the measurement
of Δt into a proportional frequency difference Δf in a
frequency-modulated sweep signal. The carrier-wave
frequency is typically about 4300 MHz, the modulation
sweep 100 MHz, and the modulation rate 100 cycles per
second. When a certain frequency f1 is transmitted to the
ground from the transmitter antenna, this signal will be
recovered at the receiver antenna after a time difference
of Δt. Meanwhile, the transmitted signal frequency has
increased to a value f2 because of the modulation.
This direct reference signal is compared with the
ground-reflected signal f1, and the difference Δf can be
accurately determined. The value of Δf can be calibrated
to give a direct display of ground clearance.

The advantages of the radar altimeter are its good
accuracy and compact size. An accuracy of 2 per cent
or 1 meter is routinely achieved with modern radar
altimeters. This is sufficient in most airborne surveys.
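
Using the nominal figures quoted above and assuming, for simplicity, a linear (sawtooth) sweep, the beat frequency Δf converts to ground clearance roughly as follows; this is a sketch of the relation, not a vendor formula.

C = 3.0e8              # EM velocity in air, m/s
SWEEP_HZ = 100e6       # modulation sweep
RATE_HZ = 100.0        # modulation rate, cycles per second
SWEEP_RATE = SWEEP_HZ * RATE_HZ   # Hz of beat per second of delay

def clearance_from_beat(delta_f_hz):
    delta_t = delta_f_hz / SWEEP_RATE   # two-way travel time, s
    return C * delta_t / 2.0            # one-way ground clearance, m

print(clearance_from_beat(6670.0))      # ~100 m ground clearance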

In special airborne surveys like sea-ice thickness or
bathymetric measurements, the accuracy of a radar
altimeter may not be sufficient. If the accuracy goal is in
the decimeter range, then a laser altimeter is a solution.

(b) Barometric altimeter This type of altimeter is very
seldom relied upon as a single device for measuring
flight altitude. It operates on the precise measurement of
atmospheric pressure differences between a known
reference level (e.g. the base airfield), and survey
altitudes. The pressure difference can be calibrated into
relative values of altitude differences.

hb - ha = k T log (pa / pb)

where ha is the altitude of the lower point, hb the altitude
of the higher point, pa and pb are the air pressures at the
lower and upper points respectively, k is a constant and
T the temperature (Biddle, undated).

The main drawback of the barometric altimeter is its
inadequate long-term accuracy, about 10 meters in
favorable conditions, although a good short-term
precision can be achieved with quality instrumentation.
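
A minimal numerical sketch of the relation hb - ha = k T log(pa/pb) given above; the value of k used here (for log base 10, T in kelvin, heights in metres) and the single-temperature treatment are simplifying assumptions.

import math

K = 67.4   # assumed constant for log10, kelvin and metres

def height_difference(p_a_hpa, p_b_hpa, temp_k):
    # Height of the upper point b above the lower point a.
    return K * temp_k * math.log10(p_a_hpa / p_b_hpa)

# Base airfield at 1013 hPa, aircraft level at 950 hPa, T = 288 K:
print(height_difference(1013.0, 950.0, 288.0))   # roughly 540 m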

(c) GPS altimetry
The GPS navigation signal can be solved for all three
spatial co-ordinate values of the on-board receiver,
including the geocentric distance of the aircraft, i.e. its
distance from the centre of the earth. At least four
satellite signals must be continuously available, the
satellite geometry must be good, and a stationary
reference receiver must be utilized for real-time or post-
flight signal processing. If good-quality radar altimeter
data is also recorded on board the survey aircraft, the
combination of the two data sources makes it possible
to subtract the altimeter height from the geocentric
distance to calculate the topographic elevation of the
ground surface along the survey lines to an accuracy of
<2 m in topographic elevation data. Such a digital
elevation model (DEM) of the survey area may be a very
useful survey product, in addition to the geophysical
survey results. The model can be calibrated and bench-
marked against points of known height on the ground
surface and gives a uniformity of coverage that cannot
be achieved by digitizing published contour maps, even
in the well-mapped areas where such maps are
available.
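
In its simplest form (the function and field names are hypothetical), the along-line DEM sample is just the difference of the two records, both reduced to a common vertical datum:

# Topographic elevation = aircraft GPS height minus radar ground
# clearance; both must refer to (or be reduced to) the same datum.
def ground_elevation(gps_height_m, radar_clearance_m):
    return gps_height_m - radar_clearance_m

# One along-line sample: aircraft at 612.4 m, radar reads 153.1 m:
print(ground_elevation(612.4, 153.1))   # ground surface at ~459.3 m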





15.4.7 Recording systems, production rates

Typical flying speed is 100 knots (~185 km per hour),
with magnetic sampling (optically pumped
magnetometers) at 50 Hz and gravity at 2 Hz.

It should be clear from the foregoing that an airborne
survey aircraft must carry a wealth of sophisticated
equipment in addition to the geophysical sensors
selected for a particular survey which, with the
exception of the magnetometer, have so far not been
mentioned. The nerve centre is invariably a computer
system. The computer has a screen that serves to
inform the operator of current system functionality, and a
keyboard that enables the operator to input instructions
or respond to requests from the system. The computer
runs a dedicated data acquisition software package that
requires a minimum of human interaction under normal
circumstances. Some systems are now in use where the
equipment operation can be carried out by the pilot as
the only person on board - with consequent saving of
weight which can be translated into longer aircraft
endurance through extra fuel capacity.

The main concern of the operator should be that all
systems are functioning correctly, that all necessary
calibrations have been carried out prior to, during or
after each flight, and that the data are being correctly
stored during flight.

Average survey production rates with fixed-wing aircraft
are typically in the range of 100 to 200 flying hours per
month, depending on survey area and specifications.
The efficiency (ratio of productive survey-line mileage
divided by total mileage flown) is typically from 50 to 70
per cent depending largely on the ferry time necessary
to travel from the operational base airfield to the first
survey line on each given flight or sortie. Each flight will
be designed to utilize the aircraft endurance and other
factors (weather conditions and hours of daylight, for
example) to acquire as many full survey lines as
possible. A fixed-wing aircraft carrying out aeromagnetic
(and gamma-ray spectrometer) survey for geological
reconnaissance typically acquires 20 000 line km of
useful data per month, though this may increase
considerably under favourable circumstances.

15.5 Deculturing Aeromagnetic Data

There is an increasing use of high resolution
aeromagnetic surveys for hydrocarbon exploration in
areas already under production and for environmental
hazard detection. High quality data, besides imaging
subtle near-surface geological structure, may now make
it possible to detect bacterially and chemically produced
magnetic anomalies relating to petroleum occurrences.
However, the main problem in hydrocarbon producing
basins is the ability to separate these magnetic
anomalies from man-made cultural anomalies.
15.5.1 Deculturing after Wilson et al. (1997)

An example of deculturing aeromagnetic data is shown
below from Wilson et al. (1997), A high precision
aeromagnetic survey near the Glen Hummel Field in
Texas; identification of cultural and sedimentary
anomaly sources: The Leading Edge, 16, 37-42.

High resolution survey means in this case: navigational
precision of 2 m; flightline separation of 400 m; elevation
of 100 m drape; tie lines at 800 m; stinger-mounted
optically pumped TI helium vapour magnetometer with a
recording rate of 432 Hz, digitally compensated for
aircraft motion using a 3-axis fluxgate magnetometer.
Data processing was at a final 10 Hz, resulting in 4 m
sampling along track; diurnal recorded and removed
from the field record, IGRF removed, etc.

To identify high frequency (strictly, short wavelength)
anomalies, the data have had a low order (basement)
surface removed. The figure below shows a residual line
profile map with cultural anomalies of ~10 nT well imaged.
These are due to petroleum production equipment,
buildings, well casings, pipelines, etc. Anomalies of
~1 nT have geological meaning (see arrow in figure
pointing to a NE-trending sedimentary unit boundary).



Figure 15/16: Levelled flight line profiles after a low
order surface has been removed. Cultural noise is
clearly identifiable. The arrow points to a lineation that
correlates across profiles; this is a ~1 nT anomaly due
to a sedimentary boundary. Wilson et al. (1997).

Much of the petroleum equipment (wells, pipes etc.) is
recorded in petroleum digital databases and correlates
well with the isolated anomalies. The large amplitude
anomalies cannot be removed by simple linear filtering
since they possess spatial scales similar to near surface
sedimentary anomalies. An added problem arises in that
cultural noise will follow geological and structural trends


since that is often where the oil will accumulate.
Cathodic protection of pipelines (making pipelines
electrically negative with respect to the ground to protect
them from corrosion) gives rise to large, distinct
linear anomalies. Pipes not protected can have major
remanence anomalies generated at the time the pipe
was cast and cooled. This results in a linear set of
anomalies randomly changing their amplitude/phase at
each pipe section.

Figure 15/17: Observed and computed model dipole
anomaly

Where cultural noise has a consistent dipole character
then it is possible to:
1. Locate the source by cross correlation between a
model anomaly and the data.
2. Reduce the cultural anomaly to zero amplitude by
multiplying by an appropriate function; in the next figure
a [1 - Hanning] filter with a window of 940 m was used.
This is done to suppress, in order, the largest to the
smallest anomaly in a profile. When the cross correlation
falls below a specified percentage (threshold) the process
is terminated; in this example 25% was chosen. A sketch
of the suppression step is given below.
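
A minimal sketch of the [1 - Hanning] suppression of a single located anomaly; the conversion of the 940 m window to samples and the edge handling are implementation assumptions, not Wilson et al.'s code.

import numpy as np

# Multiply the residual profile by (1 - Hanning) centred on the located
# cultural anomaly, tapering it to zero amplitude at its centre.
def suppress_anomaly(profile, centre_idx, window_m, dx_m):
    profile = np.asarray(profile, dtype=float).copy()
    n = int(round(window_m / dx_m)) | 1     # odd window length in samples
    taper = 1.0 - np.hanning(n)             # 0 at centre, 1 at the edges
    half = n // 2
    lo = max(0, centre_idx - half)
    hi = min(len(profile), centre_idx + half + 1)
    profile[lo:hi] *= taper[half - (centre_idx - lo): half + (hi - centre_idx)]
    return profile

# e.g. 940 m window on a 4 m sampled profile, anomaly at sample 500:
# cleaned = suppress_anomaly(residual, 500, 940.0, 4.0)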


Figure 15/18: A profile with the suppression applied
one anomaly at a time; a 25% threshold was used.

15.5.2 Deculturing using Equivalent Source
Approach

From the extended abstract: Removal of cultural noise
from high-resolution aeromagnetic data using the
equivalent source approach, by Ahmed Salem, Kaxia
Lei, Chris Green, Derek Fairhead and Gerry Stanley,
Proceedings of the 9th SEGJ International Symposium -
Imaging and Interpretation, Sapporo, Japan, 12-14
October 2009.

SUMMARY: High-resolution aeromagnetic surveys are
commonly used to locate subtle anomalies that are
important in mineral and oil exploration. However, such
anomalies, especially in highly populated areas, are
often masked by undesirable magnetic signals from
manmade objects or cultural noise, making post
processing and interpretation difficult. Magnetic data
need to be cleaned before applying any analysis to
estimate source parameters for geologic structures or
bodies of interest. Conventional algorithms of cultural
noise removal are based on Fast Fourier Transform
(FFT) operations either on their own or together with
identification and removal of noise signals, either
manually or using non-linear filters. These algorithms
usually introduce artificial anomalies, have difficulty
interpolating across edited sections and rarely yield
cleaner data. For this reason, we have developed a
semi-automated method based on the equivalent source
approach to recover magnetic responses of subtle
anomalies and ignore the unwanted cultural noise.
Theoretical examples of combined subtle magnetic
anomalies and cultural noise were used to test the
effectiveness of the proposed method, which is shown
to provide results that are much closer to the original
magnetic data without the cultural noise. We
demonstrate the practical utility of the approach using
high-resolution aeromagnetic data from Ireland.

INTRODUCTION: The main objective of high-resolution
aeromagnetic surveys is to locate subtle anomalies that
are important in mineral and oil exploration (Hassan and
Peirce, 2005). However, such anomalies, especially in
highly populated areas, are often masked by
undesirable magnetic responses from cultural noise of
manmade objects. Removing these cultural noise
effects from the observed magnetic data is an important
step to both processing and interpreting magnetic data.
Cleaning the data from such noise can improve the
calculated differences between survey and tie lines in
the processing stage. As a result, a more accurately
leveled data set will be produced. Furthermore, many
data sets can be more rigorously interpreted to provide
a good picture of the subsurface structures in the
absence of cultural noise (Muszala et al., 2001).

Generally, cultural noise is characterized as being high
frequency (short wavelength) and high amplitude. Its


bandwidth frequently overlaps signals attributed to
subtle and shallow geologic features. Based on these
characteristics, several filters have been developed to
remove such noise from magnetic data - both space
domain (e.g. moving average filters) and frequency
domain (low pass and band pass filters (Hassan and
Peirce, 2005)). A significant difficulty arises when using
these techniques because the cultural objects are
randomly located and oriented (Muszala et al., 2001).
As a result, the success of these filters is limited,
because they will remove naturally occurring magnetic
anomalies that happen to have wavelengths comparable
to the magnetic anomalies from the cultural objects.
Another difficulty for those filters that work in the
frequency domain is that they usually introduce artificial
anomalies, have difficulty interpolating across edited
sections and rarely yield cleaner data.

A solution to this problem may exist in working with the
equivalent source concept. The idea of the equivalent
source is that the observed magnetic field is
represented by a set of equivalent magnetic sources
(Blakely, 1995). This technique has been studied widely
in the literature (see for example Dampney, 1969).
Medeiros and Silva (1995) developed methodologies to
interpolate potential field data utilizing an equivalent
layer. Davis and Li (2007) presented an improved
equivalent source method to grid and interpolate 3-D
potential field data. Jia and Groom (2007) showed the
potential advantage of using the equivalent source
approach in calculating the derivative of potential field
data; derivatives were found to be more stable using the
equivalent source approach than traditional grid
approaches, such as FFT. Major concerns regarding the
equivalent source technique are the computation time
taken to generate the sources and the potential for
instability in the magnetic sources.

In this paper, we present a new, semi-automated
method using the equivalent source to remove the
effects of cultural anomalies without affecting the
magnetic signal from the surrounding geology. The
method consists of two stages. In the first stage, we
calculate the analytic signal of the observed magnetic
data and use the equivalent source technique to identify
cultural noise locations along the profile. In the second
stage, the equivalent source approach is used again to
produce clean magnetic data.

BACKGROUND OF THE EQUIVALENT SOURCE
APPROACH: We start with the mathematical
relationship between the observed magnetic field and
the equivalent sources. The relation between the
observed total-field d and the magnetic sources is given
by the linear relationship:

d = Gm,    (1)

where m is a vector of magnetization values for the
equivalent sources and G is the sensitivity matrix,
describing the geometry between the sources and the
observation points. To calculate a sensitivity matrix for
the total field is trivial using a model consisting of a
single layer of magnetic sources located a certain depth.
Then using standard linear inversion techniques m is
optimized in such a way that the model objective
function is minimized. The optimal solution is found
when minimizing a global objective function φ, such that

φ = ||W(Gm - d)||² + λ ||Cn m||²    (2)

where W is a diagonal matrix assigning weights to each
data point, Cn is an n-order finite-difference operator
used to quantify the model roughness, and λ is a
Lagrange multiplier. Once equation (2) is solved, a new
data set is calculated using the following equation:

d_new = Gm    (3)
At present, we are working with profile data to reduce
the amount of time required for the inversion but our
work will extend to deal with gridded magnetic data.

The important question when working with the
equivalent source approach is: what is the magnetic
source? As our strategy is applied to profile data, we
use horizontal cylinders as sources. Our inversion
methodology requires estimation of the magnetic
moment of each horizontal cylinder. Note that these
moments are arbitrary values and can be either negative
or positive to fit the observed data.

STRATEGY FOR CULTURE REMOVAL: We need to
assign values for the weighting matrix before inverting
the magnetic data. We first calculate the analytic signal
of the observed magnetic data and estimate the depth of
equivalent horizontal cylinder sources. Based on a
shallow depth estimate and a high amplitude for each
analytic signal anomaly, the area around each detected
anomaly may be assigned a zero value for the weighting
matrix, where it is one elsewhere. Once the weight
values are determined, inversion of equation (2) is
applied to estimate the magnetic moment of the
equivalent sources.

The optimization problem is solved using the linearized
least-squares approach described in Sasaki et al.
(2008). The inverted magnetic moments are then used
to calculate the new magnetic data at each observation
point using equation (3).
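
A minimal dense-matrix sketch of inverting equation (2) and reconstructing equation (3) is given below; building G for the horizontal-cylinder sources is assumed done separately, and no claim is made that this matches the authors' implementation (they use the linearized scheme of Sasaki et al., 2008).

import numpy as np

# Solve min ||W(Gm - d)||^2 + lam*||C m||^2 via the normal equations,
# then return the cleaned data d_new = G m (equations 2 and 3).
def equivalent_source_clean(G, d, w_diag, lam, n_order=2):
    n_src = G.shape[1]
    W = np.diag(np.asarray(w_diag, dtype=float))  # 0 near culture, 1 elsewhere
    C = np.eye(n_src)
    for _ in range(n_order):                      # n-order finite differences
        C = np.diff(C, axis=0)
    A = G.T @ W.T @ W @ G + lam * (C.T @ C)
    m = np.linalg.solve(A, G.T @ W.T @ W @ d)     # equivalent-source moments
    return G @ m                                  # data at all observation points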

SYNTHETIC EXAMPLE: In order to test the proposed
approach, a theoretical data set (Figure 15/19a) was
created over a dike body located at a depth of 500 m
with an arbitrary magnetization having an inclination of
60° and a declination of 0°. These data were contaminated
with three cultural magnetic noise sources (Figure


15/19b) produced by three horizontal cylinders (Table
1). All the cultural noise was produced by an induced
magnetization in the same direction as the magnetic field
over the dike model. In applying the technique, the
analytic signal profile was calculated (Figure 15/19c)
and was used to assign the weighting values.
Calculated total field data for about 200 m around each
detected analytic signal peak were assigned weighting
values of zero, based on the amplitude of the analytic
signal peak and on the depth estimate from the analytic
signal. Other calculated data were assigned weighting
values of 1 (green areas in Figure 15/19c). A Lagrange
multiplier value of 0.0001 was used. Figure 15/19d
shows the output of the inversion methodology, which
indicates that this approach has successfully removed
the cultural noise from the three cylinders.



Figure 15/19: (a) Theoretical magnetic data over a
dike body located at horizontal location 2000 m and
at depth 500 m. (b) Magnetic noise produced by
horizontal cylinders C1-C3 (see Table 1 for details).
(c) Analytic signal signature of (b); the green areas
show the locations assigned a weight of 1. (d) The
results of the equivalent source technique.

Table 1: List of magnetic anomalies of horizontal
cylinders generated to simulate cultural magnetic
noise.

Cylinder   Horizontal location (m)   Depth (m)   Magnetic moment (A.m²)
C1         1000                      50          300
C2         2000                      45          800
C3         3000                      40          200

APPLICATION TO REAL DATA: We have tested the
method on a high-resolution aeromagnetic survey in the
area of Harberton Bridge, Ireland. The survey was flown
in 1995 with 300 m line spacing. For each line, analytic
signal data were calculated and used to estimate the
depth of equivalent horizontal cylinder sources. These
sources were interpreted as cultural noise when the
estimate of the depth was shallower than 100 m and the
analytic signal anomaly was greater than 0.02 nT/m.
The technique was then applied using an equivalent
layer located at a depth of 200 m to reconstruct the
magnetic data. A Lagrange multiplier value of 0.0001
was used.

Figure 15/20: (a) Magnetic data over profile 14080
from an aeromagnetic survey conducted in Ireland
(plotted south to north). (b) Analytic signal signature
of (a); the green areas show the locations assigned a
weight of 1. (c) The results of the equivalent source
technique (black) superimposed on the original
magnetic data (red).



As can be seen from Figure 15/20, the most obvious
(i.e. highest amplitude) noise features have been
correctly identified in the first stage and accurately
removed in the second stage. On the other hand, two
smaller noise features at around 7000 m and 8000 m
have not been identified in the first stage; the second
stage has still largely succeeded in removing these
apparent noise features, but not quite so accurately as
for the features identified in the first stage.

Figure 15/21 shows the original magnetic grid over
Harberton Bridge. Figure 15/22 shows a grid of the
magnetic survey after cultural noise removal using
manual editing, while Figure 15/23 shows a grid of
the magnetic survey after cultural noise removal using
the present approach. The present approach has clearly
been successful in removing large amounts of the
cultural noise, and does so better than manual editing,
where the lack of precision of the interpolation in the
manual process remains a residual concern.


Figure 15/21: The original magnetic data collected
over Harberton Bridge, Ireland.


Figure 15/22: The resultant magnetic data produced
by manual editing of cultural noise over Harberton
Bridge, Ireland.



Figure 15/23: The resultant magnetic data produced
using the equivalent source approach over
Harberton Bridge, Ireland.

DISCUSSION AND CONCLUSION: We have developed
a strategy for removing cultural noise from magnetic
data using the equivalent source approach. Our strategy
is based on using one layer of equivalent sources and
utilizes weighted inversion to estimate the magnetization
of the sources. Prior to inversion of the magnetic data,
the depth of apparent sources from the analytic signal
data is calculated. Based on the amplitude of the
analytic signal and an estimate of the depth, the
anomaly is evaluated and a value of zero is given to the
data around each detected cultural anomaly and a value
of one is given elsewhere. Then the equivalent source
approach is used to invert the weighted magnetic data
to produce data reflecting the geology and suppressing
the noise effect. This process is important for
interpretation since a more realistic picture of the earth's
magnetic field can be obtained, and automatic depth
determination techniques such as the Euler method
(Reid et al., 1990) and the Tilt-depth method (Salem et
al., 2007) can be applied to these de-cultured data to
give more accurate results.

The consistency of the presented approach is
demonstrated using theoretical data over a dike model
contaminated with sources of cultural noise. The
practical utility of the approach is also demonstrated
using high-resolution data over Harberton Bridge,
Ireland. In both cases, the approach could produce
clean data that are ready for interpretation. The results
of the method are in general promising and indicate that
the equivalent source approach is suitable for
processing magnetic data for cultural noise removal.
However, initial results showed that some noise was not
detected; analytic signal peaks exist over these noise
sources, but were not identified because their depth
estimates and/or their analytic signal amplitude lie
outside the predefined range for cultural noise. Work is
still required to fully automate the detection of cultural
noise sources. Also in this work we focus on processing
magnetic profiles and use 2D magnetic sources to
simulate the effect of the culture and the geology.
Extension to grid magnetic data using 3D sources
should be implemented and tested.



REFERENCES

Blakely, R. J., 1995, Potential theory in gravity and
magnetic applications: Cambridge Univ. Press.

Dampney, C.N.G., 1969, The equivalent source
technique: Geophysics, 34, 39-53.

Davis, K. and Li, Y., 2007, Joint processing of total-field
and gradient magnetic data using equivalent sources:
SEG Expanded Abstracts, 775-779.

Hassan, H.H. and Peirce, J.W., 2005, SAUCE: A new
technique to remove cultural noise from HRAM data:
The Leading Edge, 24, 246-250.

Jia, R. and Groom, R.W., 2007, Processing gradients of
magnetic data utilizing an equivalent source technique:
SEG Expanded Abstracts, 785-789.

Medeiros, W.E. and Silva, J.B.C., 1995, Simultaneous
estimation of total magnetization direction and 3-D
spatial orientation: Geophysics, 60, 1365-1377.

Muszala, S., Stoffa, P.L. and Lawver, L.A., 2001, An
application for removing cultural noise from
aeromagnetic data: Geophysics, 66, 213-219.

Ravat, D., 1996, Magnetic properties of unrusted steel
drums from laboratory and field-magnetic
measurements: Geophysics, 61, 1325-1335.

Reid, A.B., Allsop, J.M., Granser, H., Millett, A.J. and
Somerton, I.W., 1990, Magnetic interpretation in three
dimensions using Euler deconvolution: Geophysics, 55,
80-91.

Salem, A., Williams, S., Fairhead, J.D., Ravat, D. and
Smith, R., 2007, Tilt-depth method: A simple depth
estimation method using first-order magnetic
derivatives: The Leading Edge, 26, 1502-1505.

Sasaki, Y., Son, J., Kim, C. and Kim, J., 2008, Resistivity
and offset error estimations for the small-loop
electromagnetic method: Geophysics, 73, F91-F95.








MAPPING

Section 16 Geodetic Datum and Map Projections
Section 17 Gridding and Mapping Point and Line Data





SECTION 16: Geodetic Datums and Map
Projections



16.1 Geodetic Map Datum
After Grodecki, 1999

The subject of geodetic datums is perhaps the most
misunderstood and confusing issue in the mapping
community. Nonetheless, because datums provide the
underlying framework for coordinate transformations
and map projections, a thorough understanding of the
subject is a prerequisite for accurate mapping and
geographic information systems (GIS). Oil and mining
companies routinely use geo-referenced data images
to view the relations between different data sets
by overlaying them electronically. This can be
done easily using applications such as ArcView™, and
companies are beginning to insist that deliverable
products from internal and external consultancies are in
a standard digital format. This means that products for
the same area constructed by a range of people can be
electronically overlain and merged in a highly efficient
manner. Knowledge of geodetic datums provides the
means of tying together different map sources, global
positioning system (GPS) positioning and navigation.

So what is the geodetic datum? In short, the geodetic
(horizontal) datum is the reference ellipsoid plus its
position and orientation with respect to a terrestrial
reference frame. The terrestrial reference frame is a 3-
D Cartesian geocentric coordinate system, with the
origin at the centre of the Earth's mass, the Z-axis
intersecting the (geographic) North Pole and the XZ-
plane intersecting the Greenwich observatory
(longitude 0°).

To translate from one coordinate system to another
requires seven parameters to be defined. Fig. 16/1
defines in Cartesian coordinates the seven parameters
where:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix}_{new} = \begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix} + (1+m)\begin{pmatrix} 1 & \varepsilon_z & -\varepsilon_y \\ -\varepsilon_z & 1 & \varepsilon_x \\ \varepsilon_y & -\varepsilon_x & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

where m is the scale factor, x0, y0 and z0 are translations,
and εx, εy and εz are rotations about the three axes. For
most datum transformations m, εx, εy and εz are zero.
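
As a concrete illustration, a minimal Python sketch of this seven-parameter (Bursa-Wolf) transformation is given below. The function name and the example input point are assumptions for illustration; only the translation shifts are taken from the NIMA values quoted later in this section.

import numpy as np

def helmert_transform(xyz, x0, y0, z0, m=0.0, ex=0.0, ey=0.0, ez=0.0):
    """Seven-parameter (Bursa-Wolf) datum transformation.

    xyz        : geocentric Cartesian coordinates in the source datum (m)
    x0, y0, z0 : translations of the datum origin (m)
    m          : scale factor (dimensionless, typically a few ppm)
    ex, ey, ez : rotations about the X, Y and Z axes (radians)
    """
    R = np.array([[1.0,  ez, -ey],
                  [-ez, 1.0,  ex],
                  [ ey, -ex, 1.0]])
    return np.array([x0, y0, z0]) + (1.0 + m) * (R @ np.asarray(xyz, float))

# Hong Kong 1963 -> WGS84 using only the translation shifts quoted below
# (the input point is arbitrary and purely illustrative):
p_wgs84 = helmert_transform([-2427000.0, 5384000.0, 2371000.0],
                            x0=-156.0, y0=-271.0, z0=-189.0)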
Failure to consider differences between various datums
may lead to large errors when utilising or inputting
satellite-derived data into your GIS system. For example,
a point in Colorado, USA, in the North American Datum
of 1983 (NAD 83) at latitude (φ) 40°, longitude (λ) -105°
and 1,700 m ellipsoidal height would be seen in the
North American Datum of 1927 (NAD 27) as having
φ = 40° 00' 0.04846", λ = -104° 59' 58.06839",
with an orthometric height in the National Geodetic Vertical
Datum of 1929 (NGVD 29) of 1,716.382 metres. The differences
are about 1 metre in latitude, 60 metres in longitude and 16
metres in elevation.



Figure 16/1: Transformation parameters

In the current World Geodetic System 1984 (WGS84)
the point would be seen as having φ = 40° 00' 0.02333",
λ = -105° 00' 0.03616" and h = 1,699.113 metres. The
differences are Δφ ≈ 1 metre, Δλ ≈ 1 metre and Δh ≈ 1
metre. Thus the differences between NAD 83 and
WGS 84 are not very great, but between NAD
27/NGVD 29 and NAD 83 they are significant in any
high-accuracy mapping application.

The Earth's shape to a first approximation is spherical;
however, in detail it has a flattening at the poles and a
bulge at the equator (the equatorial radius is greater than
the polar radius by 21 km). To facilitate mathematical
computations, one needs a mathematical surface that
closely approximates the Earth's shape, and the closest
approximation is the geoid, the mean sea level surface
extending under the land masses (Figure 16/2).

A geoid is mathematically expressed with spherical
harmonic coefficients. For example, the WGS84 geoid
(not to be confused with the WGS84 ellipsoid as used in
section 5.2) and the more recently defined Earth
Gravitational Model (EGM96) use spherical harmonic
coefficients up to degree and order 360. The complete EGM96
geoid equation requires more than 60,000 coefficients.
Clearly this is too complicated as a computational
surface.



Figure 16/2: The Earths shape

The preferred reference surface is a biaxial reference
ellipsoid (see Fig 16/3), since it has a simple
mathematical form. The most widely used reference
ellipsoids are GRS80, WGS84, Clarke 1866, Clarke
1880, Bessel 1841, International 1924 and Krassovsky
1940.



Figure 16/3 Biaxial Reference Ellipsoid

The most common mistake is to assume that the
reference ellipsoid alone defines the horizontal
datum. In fact, even though two geodetic datums may
share the same Earth ellipsoid, the latitudes and
longitudes can be significantly different. For example, the
Hong Kong 1963 Datum and the Hu-Tzu-Shan Datum of
Taiwan both use the International 1924 ellipsoid.
However, a point with φ = 22°, λ = 115° coordinates in
the Hong Kong 1963 datum has φ = 21° 59' 59.8298",
λ = 114° 59' 40.7070" coordinates in the Hu-Tzu-Shan
Datum. The difference is 5 metres in latitude and 552
metres in longitude. The reason for this difference is
that the reference ellipsoids are translated and rotated
with respect to each other, resulting in coordinate
differences of as much as 500 m. The National Imagery
and Mapping Agency (NIMA) Technical Report 8350.2,
third edition in Appendix B.3 lists transformation
parameters of both datums to WGS84. For the Hong
Kong 1963 these are : Xo = -156m, Yo = -271m and Zo
= -189m. For the Hu-Tzu-Shan datum the transformation
parameters to WGS84 are Xo = -637m, Yo = -549m and
Zo = -203m. The sources of these differences are:

(1) different definition of the origin point causing the two
datums to be translated with respect to each other, and
(2) different orientation of the datums with respect to the
ECEF coordinate system due to different definitions of
the deflections of the vertical at the origin, and
incompatible astronomic azimuths.

Another example of this common fallacy arises when
two ellipsoids are virtually identical (e.g. WGS84 and
GRS80 differ by no more than 0.11 metres) and
cartographers erroneously assume that the coordinates in
both systems are the same. Locations can nevertheless
differ because the ellipsoids are referenced
differently: WGS 84 is referenced to a 3-D Cartesian
geocentric frame, whereas older datums were defined with
less precise information, giving rise to errors (their
reference frames being both offset and rotated).


16.2 Map Projections

16.2.1 Basics

When a company commences an exploration
study/project, one of the first decisions it has to make is
the scale and projection it is going to work with for the
production of intermediate and final maps/products. This
will include a decision on the most appropriate geodetic
datum/ellipsoid, so that work by contractors and
partners is consistent in terms of hard copy
maps/overlays as well as digital grids and files for
inputting into standard imaging or mapping packages
such as ArcView™, ERMapper™ and GMT.

Please Note: It is normal practice within the oil and
mineral industries to use national projections and map
parameters when working onshore in a country.
Offshore the Universal Transverse Mercator projection is
often used for small concession areas.

Map projections are attempts to portray the surface of
the Earth, or a portion of it, on a flat surface. This
inevitably results in some distortion of properties such as
conformality, distance, direction, scale and area. Careful
choice of a projection can help to minimise distortions in
some of these properties at the expense of maximising
errors in others, whereas some projections attempt
to distort all of these properties only moderately. It is
important to clearly understand the terminology that is
used:

- Conformality: when the scale of a map at any point on
the map is the same in any direction, the projection is
conformal. In such situations meridians (lines of
longitude) and parallels (lines of latitude) intersect at
right angles, and shape is preserved locally on conformal
maps.
- Distance: a map is equidistant when it portrays
distances from the centre of the projection to any other
place on the map.
- Direction: a map preserves direction when azimuths
(angles from a point to another point) are portrayed
correctly in all directions.
- Scale: scale is the relationship between a distance
portrayed on a map and the same distance on the Earth.
- Area: when a map portrays areas over the entire map
so that all mapped areas have the same proportional
relationship to the areas on the Earth that they
represent, the map is an equal-area map.

The choice of map projection will be different if you are
mapping a small area, such as a concession-scale
block, a country-scale area or a continent. The choice of
projection can also depend on what part of the
Earth's surface you wish to map. Traditional rules
indicate that:

- a country in the tropics tends to use the cylindrical
projection (distortion increases toward the poles)

- a country in the temperate zone tends to use the conical
projection (distortion increases away from the standard
parallel)

- in polar areas azimuthal projection tend to be used
(distortion worst at the map edges)

Implicit in these rules of thumb is the fact that these
global zones map into areas in each projection where
distortion is lowest.

There is a large range of projections available (USGS
Professional Paper 1395, Map Projections - A Working
Manual, 1987), many of which are provided as options,
with transformations between projections, within the
range of mapping and GIS computer packages now
available. This text limits itself to examples of cylindrical,
conic and azimuthal projections.

16.2.2 Cylindrical Map Projections

For these and more details on map projections visit
http://erg.usgs.gov/isb/pubs/MapProjections/projections.html
(poster forms Appendix to Course notes)


Mercator or Equatorial Mercator (after Gerardus
Mercator, Flemish mathematician 1512-1594): Probably
the most commonly used projection for map atlases of
the world. It is used for navigation and for maps in
equatorial regions. Any straight line on the map is a
rhumb line (line of constant direction). Direction along a
rhumb line is true between any two points on the map,
but the rhumb line usually is not the shortest distance
between the two points. Distances are only true along
the Equator but are reasonably correct within ±15° of
the Equator. Thus the Equator is the standard parallel. It
is also possible to have another parallel with true scale.
In this case the map will look the same, with only the
scale being different, since the map scale will be true for
the stated parallels. For example, if 15°N is made a
standard parallel then 15°S will also be a standard
parallel and the stated scale will be correct for these
parallels.

Areas and shapes are distorted. Distortion increases
away from the Equator and is extreme in polar regions.
However, the map is conformal in that angles and
shapes within any small area are essentially true. The
map is not perspective, equal-area or equidistant.

The projection is shown in figure 16/4, where the
features of the Earth are mathematically projected onto
a cylinder tangential to the Equator.



Figure 16/4: Features of Mercator (or equatorial
Mercator) map projection

Transverse Mercator (invented in 1772 by Johann
Heinrich Lambert, German physicist 1728-1777): In this
case (figure 16/5) the cylinder has been rotated 90°, with
the Earth's surface mathematically projected onto the
cylinder tangential to a meridian. Distance is true only
along the central meridian selected by the map maker, or
else along two lines parallel to it, but all distances,
directions, shapes and areas are reasonably accurate
within 15° of the central meridian. Distortion of distances,
directions and sizes of areas increases rapidly outside
the 15° band. Because the map is conformal, shapes
and angles within any small area are essentially true.
Graticule spacing increases away from the central meridian.
The Equator is straight. Other parallels are complex curves
concave towards the pole. The central meridian and the
meridian 180° from it are straight. Other meridians are
complex curves concave toward the central meridian.



Figure 16/5: Features of Transverse Mercator



Figure 16/6: UTM zones

Universal Transverse Mercator: Same as the Transverse
Mercator projection but with the world divided into meridian
zones so that the variation of scale within any one zone is
held below a preset level. The Earth between 84°N and
84°S is divided into 60 zones, each 6° wide in longitude.
Bounding meridians are evenly divisible by 6° and the
zones are numbered from 1 to 60 proceeding east from
the meridian 180° from Greenwich (see Fig. 16/6). The
geographical location is given x and y coordinates, in
metres, according to the Transverse Mercator
projection, using the meridian halfway between the
bounding meridians as the central meridian and
reducing its scale to 0.9996 of true scale (a 1:2,500
reduction). This reduction was chosen to minimise scale
variation within a given zone; the variation reaches 1 part in
1,000 from true scale at the Equator.
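
The zone arithmetic is simple enough to sketch in a few lines of Python (the function names are illustrative; the special wider zones used around Norway and Svalbard on some maps are ignored):

def utm_zone(lon_deg):
    """UTM zone number (1-60) for a longitude in degrees east (-180..180).
    Zone 1 spans 180W-174W and zones proceed eastwards in 6-degree strips."""
    return int((lon_deg + 180.0) // 6.0) % 60 + 1

def utm_central_meridian(zone):
    """Central meridian (degrees east) of a UTM zone."""
    return -183.0 + 6.0 * zone

assert utm_zone(-105.0) == 13              # the Colorado point of section 16.1
assert utm_central_meridian(13) == -105.0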

Other Cylindrical Projections: Oblique Mercator,
Space Oblique Mercator, Modified Transverse Mercator,
Cylindrical Equal-Area, Miller Cylindrical, Equidistant
Cylindrical and Cassini, to name a few.

16.2.3 Conical Map Projections

In temperate zones conic projections are usually
preferable to cylindrical projections.

Lambert Conformal Conic: One of the most widely
used map projections; it looks like the Albers Equal-Area
Conic, but the graticule spacing differs. As the name
suggests, it retains conformality. Distances are true only
along the standard parallels but reasonably accurate
elsewhere in limited regions. Directions are reasonably
accurate. Distortion of shapes and areas is minimal at
the standard parallels but increases away from them. Shapes
on large-scale maps of small areas are essentially true. The
projection can have two standard parallels where scale
is true. The shape of the cone is controlled by the
definition of the standard parallels used: the cone is
defined by the tangent at a single standard parallel or,
conceptually, by the secant through the two parallels. In figure
16/7 the USGS Base Map series for the 48
conterminous States uses parallels 33°N and 45°N,
which results in a maximum scale error for the 48 states of
2.5%.


Figure 16/7: Features of the Lambert Conformal
Conic for North America

Other Conic Map Projections: Albers Equal-Area,
Equidistant Conic, Bipolar Oblique Conic, Polyconic, Bonne.

16.2.4 Azimuthal Map Projections

Orthographic: Used for perspective views of the Earth,
Moon and other planets. The Earth appears as it would if
photographed from deep space (infinity).

Figure 16/8: Features of the Orthographic projection

Azimuthal Equidistant: Azimuths and distances
measured from the centre of the projection are true. This
type of projection (figure 16/9) is used for earthquake
studies where the epicentre is the centre of the
projection.



Figure 16/9: Features of the Azimuthal Equidistant
projection

Other Azimuthal Map Projections: Stereographic,
Gnomonic, and Lambert Azimuthal Equal-Area for polar
maps.


16.3 Map Coordinates

Calculations to determine the latitude and longitude of
points on a map can be quite involved. This has been
simplified by the development of rectangular grids. In
this way, a point on the map may be designated merely
by its distance from two perpendicular axes on the flat
map. The Y axis normally coincides with the chosen
central meridian, y increasing north. The X axis is
perpendicular to the Y axis at a latitude of origin on the
central meridian, with x increasing east. Frequently the x
and y coordinates are called eastings and northings
respectively, and to avoid negative coordinates they may
have false eastings and false northings added. The
grid lines usually do not coincide with any meridians and
parallels except for the central meridian and the
Equator. The standard way of quoting a coordinate or
map reference is easting (x) followed by northing (y).


16.4 Geophysical Data Datums

One of the technical problems when merging
geophysical maps or digital survey data, or any other data
types of different vintages, is that old and new data may
have been referenced to different datums. For example:

Height is referenced to a sea level datum that can be
defined differently between countries, or its definition
within a country can be changed;

Gravity is referenced to a gravity datum that can be
refined with time;

Maps can have different projections and geodetic
datums (sections 16.1 & 16.2).

All these differences can have significant effects when
merging geophysical measurements. For example, as
indicated in section 16.1, the same location on a map
could have a difference in its map coordinates of
100 m or more. Producing an accurate unified geophysical
dataset requires a good knowledge of how the data
were originally collected and processed, as well as of the
geodetic datum and projection used. This applies not
just to map data but to all digitally geo-referenced data.

16.4.1 Height Datum

A datum is a reference system from which quantities are
measured. The height datum is normally mean sea level,
defined locally. For the UK the zero height is
defined as mean sea level at Newlyn in Cornwall on a
certain date. Ireland and Belgium, prior to adopting
a mean sea level reference, had their height datums
referenced to the low water mark on a certain date. These
different datums can result in differences in height in
excess of 2 m.

The height datums for the former Yugoslavia, Hungary and
Italy are tied to the Trieste mean sea level datum in the
Adriatic. The heights in Hungary were changed to the
Baltic reference in 1963; the height difference between
the Adriatic and Baltic datums for Hungary is 0.675 m.

Country Height Difference
The Netherlands 0.02 m
Former East Germany 0.16 m
Italy 0.33 m
Switzerland 0.08 m
France 0.27 m
Belgium 2.31 m
Denmark 0.11 m
Austria 0.27 m

Height differences between countries in Europe relative
to Former West Germany



It should be remembered that sea level is slowly
changing with time due to tectonic processes and/or
global warming. Thus the datums are generally defined
in stable areas and have a date associated with
them. A common mean sea level datum in different
countries does not mean that the radial distances from
the centre of mass of the Earth are the same. The mean
sea level reference system defines an equipotential
surface of the Earth's gravity field (see section 10).
Figure 16/10 shows schematically the relationship
between the height datum, the geoid, sea level, the
reference ellipsoid and the orthometric height.

Figure 16/10: Heights relative to Ellipsoid and Geoid

Remember that GPS gives heights relative to the reference
ellipsoid, which relate directly to the distance from the
centre of the Earth. To determine the orthometric height
(the normal height we geophysicists work with) from
GPS measurements requires knowledge of the geoid
height (obtained from published maps). Conversely, if you
measure GPS heights at bench marks then you can determine
the geoid height.
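
This relation is worth writing down explicitly (standard geodesy, using the quantities shown in Figure 16/10):

    H = h - N

where h is the GPS (ellipsoidal) height, N is the geoid height and H is the orthometric height. At a bench mark of known H, the rearrangement N = h - H recovers the local geoid height.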

The North American Gravity Database Committee has
implemented new standards for reducing North
American gravity observations (the revised North
American gravity database), such that gravity can be
mapped with respect to geocentric heights measured from
the ellipsoid.

16.4.2 Gravity Datum

For observed gravity (gobs) measurements we currently
use the International Gravity Standardised Network
1971 (IGSN71) as our reference system (or datum)
whereas prior to 1971 the 1930 Potsdam datum and
associated network of gravity base stations were used
(see section 5.1). The difference in these datum values
at Potsdam is about 14 mgal. For the UK the network of
gravity base stations used is the National Gravity
Reference Network 1973 (NGRN73, see Fig 5/2) which
is tied to the IGSN71 reference system.

Our ability to define the Theoretical gravity (or Normal
Gravity or Latitude Correction, see section 5.2) has
improved with time and as such the ellipsoidal definition
has changed (see section 5.2).

Please note that the internationally accepted gravity
formulae have been IGF1930, GRS1967 and WGS84,
the last of which is in current use.


Section 17: GRIDDING AND MAPPING POINT AND
LINE DATA



With gravity surveys one is often dealing with an uneven
distribution of field measurements along roads, tracks,
seismic lines, etc. The accuracy of the gravity anomaly
values and the spatial separation of the observation
points will dictate the accuracy and resolution of the final
map. For line-based gravity data (marine and airborne
gravity), line-levelling methods need to be employed to
remove systematic noise between lines (see section
17.3, Mapping Profile Data).

17.1 Hand Contouring

General Dos and Don'ts: This may appear to be old
fashioned, but it makes you appreciate the limitations on
the control and the ambiguity of anomalies.

i. Construction of base (work) map: All observation
points plotted and Bouguer anomaly values posted to
their full accuracy.

ii. Linear interpolation between points to construct
contour positions: Use a copy of the base map to
linearly interpolate values between data points.



Figure 17/1: Data distribution / Linear Interpolation /
Contouring

iii. Triangulate between data points: Triangulate
contour positions between data points, thus providing a
means of drawing in contours (this often isolates an
incorrect observation, which can be removed or
reprocessed).

iv. Contour intervals should be constant.

v. Consistent gradient across a contour: The gradient
across a given contour must always be of the same sign
(easy error to make).

vi. Make sure the curvature of contour lines is consistent
with the data distribution.
vii. Always show station distribution on final contour
maps: Why?



Figure 17/2: Simple rules of contouring

The station distribution is the dataset which has been
used to construct the map. Thus if you have a closure of
contour lines it is important to know the control that the
dataset has on the closure. This is especially true for
computer-generated contours (see later). An uneven
station distribution will give rise to a biased or uneven
high-frequency content of the map, concentrated in areas
of high station density. If no station locations are plotted,
this frequency variation may be wrongly attributed to the
geology. These aspects are often the cause of poor
interpretation, especially when using computer methods.

This variable data coverage is illustrated in the gravity
map for southern Algeria (Fig 17/4), where the NE
quadrant has fewer high-frequency anomalies due to poor
data coverage.

17.2 Computer Contouring of Point
Data
(see Geosoft Gridding)

There are various methods of contouring irregularly
distributed data using computer programs, but they all
have three steps (Fig 17/3):


Figure 17/3: Computer contouring


Figure 17/4: Upper: Bouguer contour map of
southern Algeria without station locations. Lower:
station locations, showing the low density of
stations in the NE quadrant.

Figure 17/3 shows the three stages of computer
contouring. These are:

i. Regularise the data onto a grid (see Figure 17/5)
ii. Interpolate within the grid along diagonals and edges
iii. Track contours through the grid

The most robust approach is to initially generate a regular
grid of values from your irregularly spaced data
set. In the hand-drawn map, linear interpolation between
data points was used; this can be improved on by
computer methods. Data gaps are generally the
problem area for any interpolation, since you are having
to assume the type of interpolation.

17.2.1 Minimum Curvature Gridding

When dealing with two dimensional or planimetric data it
is often more convenient to convert the randomly
positioned data onto the nodes of a regular grid. Such
data are now suitable for a range of two dimensional
processes e.g. image enhancement, computer
contouring and filtering.

Figure 17/5: Example of original data points (green
dots) and nodes of a regular grid (small black
crosses).

Minimum curvature gridding is based on the method
originally described by Briggs (Briggs, I. C., 1974,
Machine contouring using minimum curvature:
Geophysics, 39, 39-48) and Swain (Swain, C. J., 1976,
A FORTRAN IV program for interpolating irregularly
spaced data using the difference equations for
minimum curvature: Computers and Geosciences, 1).
A minimum curvature surface is defined here as the
smoothest possible surface that will fit the given data
values. Computer methods based on this approach
estimate grid values at the nodes of a coarse grid
(usually 8 times the final grid cell size) by using inverse-
distance averages of the actual data within a specified
search radius. If there are no data within the search
radius that meet a pre-defined specification (e.g. 3 data
values), then some software increases the
search radius until the requirement is satisfied, while other
software uses the average of all data values. An
iterative method is then used to adjust the grid to fit the
actual data points nearest to the coarse grid nodes.
Once an acceptable fit is achieved, the coarse cell size
is divided by 2 and the same process repeated using the
coarse grid as the starting surface. This is continued
until the surface is derived for the final grid cell size.
Figures 17/6 to 17/8 demonstrate the principles of the
method.
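
A toy sketch of this iterative idea is given below. It is purely illustrative, with several assumptions: a single grid level, a simple Laplacian smoother standing in for the true biharmonic (minimum curvature) operator, and no tolerance tests, all of which production gridders handle properly.

import numpy as np

def min_curvature_toy(x, y, z, nx, ny, n_iter=500):
    """Toy iterative gridder in the spirit of Briggs (1974) / Swain (1976):
    repeatedly smooth the surface while forcing the grid nodes nearest to
    each observation to honour the data."""
    xi = np.linspace(x.min(), x.max(), nx)
    yi = np.linspace(y.min(), y.max(), ny)
    # index of the grid node nearest to each data point
    ix = np.clip(np.searchsorted(xi, x), 0, nx - 1)
    iy = np.clip(np.searchsorted(yi, y), 0, ny - 1)
    grid = np.full((ny, nx), z.mean())
    for _ in range(n_iter):
        grid[iy, ix] = z                        # honour the data values
        pad = np.pad(grid, 1, mode="edge")      # free (natural) edges
        grid = 0.25 * (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                       pad[1:-1, :-2] + pad[1:-1, 2:])
    grid[iy, ix] = z
    return xi, yi, grid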



Figure 17/6: Four iterations for Minimum Curvature
Gridding (from red dots to open red dots, green
open dots to finally black x)



Figure 17/7: First iteration showing the importance
of working with an extended area to constrain outer
grid nodes



Figure 17/8: If search radius does not yield a value
then previous grid interpolation onto data point used

Getting the best out of Minimum Curvature
Size of grid cell: Assuming the station distribution is
of uniform density, the station density is the average
number of data points per unit area. The grid cell size is
normally half the average station spacing, e.g. if the station
density is 1 station per square km (an average spacing of
1 km), then the optimum grid cell size is 0.5 km. This grid
will best honour the original data.

If the cell size is too small then gridding will take more
computer time without any real improvement in the result.

A small cell size may be required if the data are to be
contoured. The general rule is 2 mm or 0.1 inch at plot scale
for contouring. This can be achieved by specifying a
small cell size or by regridding using bi-directional
filtering (see next section).

A large cell size, i.e. one that is much larger than the
optimum cell size described above, will need a low-pass
(de-aliasing) filter applied to the data. This is difficult,
since the data are not in a grid form to which a simple digital
filter can be applied. To get over this problem, a method
successfully used by GETECH is to de-sample the
dataset by calculating weighted averages: e.g. for a
large cell size of 10 km, average your data in 5 km
boxes (average amplitude, latitude and longitude) and use
these averages to construct your minimum curvature
grid.

Overcoming poorly sampled data: In theory, the sampling
interval should be half the size of the smallest feature
present in the data. Gridding under-sampled data can
thus result in a grid having a frequency content
that reflects the data distribution used rather than the
underlying geology. This problem can be minimised to
some extent by:

i. Always showing the data distribution on all maps
produced, or on an accompanying map.
ii. Controlling the overshoot of values that can occur in
poorly constrained areas.

The overshoot can be controlled by applying an internal tension
(see Fig. 17/9) to the surface (Smith and Wessel, 1990,
Gridding with continuous curvature splines in tension:
Geophysics, 55, 293-305). This extension to the method
reduces overshoots. Tension can be thought of as
adding springs to the edges of a flat sheet that is
stretched. Infinite tension produces a straight line in the
surface between data points.



Figure 17/9: Effects of tension on grids

Always check computer results: Always include in
your plot the original station locations and, if possible,
posted Bouguer values. This will help you spot errors and
relate them to station errors, e.g. bull's-eye anomalies
(single-station anomalies causing circular contours about
the station), herringbone problems and anomalous values
forced to go through each data point (see the section on
gridding line data). Increasing tension flattens the surface,
as illustrated in Fig 17/10. Negative anomalies occur in
the no-tension map (left) that are controlled by no data
points.



Figure 17/10: The spatial effects of poor data
coverage, showing the effects of no tension and 0.8
tension.

Effects of gridding: If the station distribution is uneven
then a regular grid will under-sample areas of high
station density and over-sample areas of poor station
density (hence the need to show the station distribution on
final maps). The grid size should, as previously indicated,
honour the highest-density data used. This will not add
high frequencies to low station density areas.

17.2.2 Natural Neighbour or Tinning Gridding
Tinning is generally not recommended for closely
spaced along-track data, e.g. aeromagnetic data, due to
the large number of triangles generated. It is ideal for
land gravity data and is a rival method to minimum
curvature. When data points are uniformly
distributed over a survey area there is often little
difference between the Tinning and minimum curvature
methods. Differences are generally found where
there are data gaps, since each method handles
interpolation differently.

Tinning (NN) Method (Sambridge, M., Braun, J. and
McQueen, H., 1995)
Tinning is the name used by Geosoft Oasis montaj for the
Natural Neighbour (NN) interpolation/gridding method.
This method was developed by M. Sambridge et al.
(published 1995) based on a Voronoi-cell-generating
algorithm devised by S. Fortune (published
1987).

The NN method is based upon a particular form of TIN
mesh generated on the x,y distribution of data points.
This mesh is in turn used to generate a pattern of
Voronoi cells in x,y, from which weights are generated to
effect an interpolation of z-value data onto a regular grid.

A TIN mesh is an irregular triangular network of lines
joining each data point within (for example) a random
distribution of observed data points to its neighbouring
data points, with no cross-cutting of lines. Clearly, many
alternative triangulation meshes are possible within any
one distribution of data. Making the triangular network
(of so-called Delaunay triangles) unique requires a set of
Voronoi cells to be generated first (fig. 17/11). A
conjugate set of Voronoi cells is generated from the
TIN mesh based on the observed data points. A second
set of cells is also generated (Fig 17/12), in effect by the
addition of the grid nodes (to which the data are to be
interpolated) to the observed data points. In this way the
ratio of the area of overlap of the grid node's Voronoi cell to
the entire neighbouring data cell area is calculated and used
as a weighting factor in calculating that data point's z-value
contribution to the overall interpolated grid node
value.
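
The two geometric structures involved are readily available in scipy, as the sketch below shows; it builds the TIN mesh and the conjugate Voronoi cells for a random scatter of points, though the area-weighting interpolation step itself is not implemented here.

import numpy as np
from scipy.spatial import Delaunay, Voronoi

# A random scatter of observation points (e.g. gravity stations).
pts = np.random.default_rng(0).uniform(0.0, 10.0, size=(50, 2))

tin = Delaunay(pts)   # the TIN mesh of Delaunay triangles
vor = Voronoi(pts)    # the conjugate Voronoi cells

# tin.simplices holds one row of three point indices per triangle;
# vor.point_region[i] indexes the Voronoi region surrounding point i,
# and each region lists the indices of its bounding vertices.
first_cell = vor.regions[vor.point_region[0]]
print(tin.simplices.shape, first_cell)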

The distinctive quality of a Voronoi cell surrounding each
node is that: (i) it is formed by straight-line segments that
lie at right angles to the lines of the TIN mesh connecting
each point to its surrounding neighbours, and, that (ii)
these segments intersect these lines at the half way
point. Cell closure is achieved by extending the
segments outwards until they have all met their
neighbouring segments. (See Geosoft Technical Note on
Tinning at www.geosoft.com).


Figure 17/11: Construction of Voronoi cells


Figure 17/12: To determine the value at the grid
node X, a second Voronoi cell is generated from its
natural neighbours, and the value at X is a weighted
sum based on the percentages of the second Voronoi
cell covered by the original cells.
A feature of the NN approach, unlike minimum curvature,
is its inherent trend-reinforcement characteristic in any
direction along which there is a localised, systematic
long-axis Voronoi cell orientation. This can vary from
locality to locality within any one dataset, as it is
dependent on the data distribution pattern. It could be
argued, however, that this inherent characteristic of trend
reinforcement may in some instances be a liability, i.e.
where data are acquired or interpolated along the
'wrong' orientation.

17.2.3 Pros and Cons of Natural Neighbour
(NN) and Minimum Curvature (MC)

The MC approach is theoretically true to the data values
(depending on the tolerance setting and grid interval, of
course) but appears to show little inherent ability to
follow trends in an intelligent fashion where data
become sparse.



Figure 17/13: Hand contouring



Figure 17/14: Same data used to generate NN and
MC grids

See Li and Gotze (1999) for comparisons of the NN, MC and
equivalent source (ES) gridding methods.

The NN approach is not always true to the original data
values; however, it appears to have an inherent ability to
deal intelligently with sparse data. Reference to the
accompanying images shows that it generates a map
conforming much more closely to the hand-contoured
version of the original data, with all the non-linear
eyeballing/trend recognition that the human approach
brings to the process.

The correct choice of gridding method appears ultimately
to depend on the use to which the data are going
to be put, e.g.:

(i) A contour map tends to be used more for qualitative
interpretation purposes (i.e. for drawing lines on maps)
rather than for quantitative purposes, and so the NN
approach appears to be the clear choice.

(ii) The parent grid, however, may of course be needed
for the more quantitative purposes of gravity stripping or
3D inversion, in which case closer trueness to the
original data points should be de rigueur. Having said that,
if the interpolation of data between these
points leaves something to be desired, as may often be
the case with MC for less than ideal data
distributions, then it could be argued that the uniformly
distributed qualitative fidelity associated with the NN
approach is on balance more desirable than the islands
of quantitative fidelity associated with the MC approach.

Natural Neighbour or Tinning References

Fortune, S., 1987, A sweepline algorithm for Voronoi
diagrams: Algorithmica, 2, 153-174.

Geosoft, Tinning - Triangular Irregular Network
Gridding, for Oasis montaj v5.0: Technical Note,
www.geosoft.com

Sambridge, M., Braun, J. and McQueen, H., 1995,
Geophysical parametrization and interpolation of
irregular data using natural neighbours: Geophysical
Journal International, 122, 837-857.

Li, X. and Gotze, H.-J., 1999, Comparison of some gridding
methods: The Leading Edge.

17.3 Mapping Profile Data

Aeromagnetic, ship-track and airborne gravity data
present some additional problems in generating final map
products, which can take the form of contour maps,
profile maps and colour images/maps.

Colour images are becoming more widely used than
traditional contour maps at the larger scales (1:250,000
and greater). At the smaller scales both contours and
colour images are common. Profile maps are only
necessary when a wide line spacing makes contouring
ambiguous, or to facilitate quantitative interpretation.
Remember that gridding (and map construction) is a form of
filtering, and de-sampling line data reduces gradients,
especially for anomalies with wavelengths less than 2 x
the grid size.

Aeromagnetic data are measured continuously along the
profile and thus contain all frequencies, whereas
perpendicular to the profile direction the minimum
resolvable wavelength is twice the flight-line spacing, i.e. if
the flight-line separation is 1 km then the minimum
wavelength perpendicular to the flight lines is 2 km. Thus data
along flight lines need to be filtered to remove
wavelengths less than 2 km to allow the contour map to
accurately represent the field at wavelengths greater
than 2 km. In practice the grid cell is often 1/3, 1/4 or 1/5 of
the line spacing in an attempt to interpolate the high
frequencies across lines.

Sometimes flight-line spacings are in bands (doublets,
triplets, etc.), i.e. a band of 2 or 3 flight lines
separated by 1 or 2 km, then a 10 km gap before the next
band of flight lines. In this case only the bands of closely
spaced flight lines are contoured in any detail.


17.3.1 Line Levelling Profile Data

Marine and airborne gravity, magnetic and radiometric
profile/line-based data all suffer, for one reason or
another, from levelling problems due to inaccuracies in
acquisition and in the removal of data-related corrections,
such that data in the cross-line direction do not vary as
smoothly as in the in-line direction. This results in
significant distortion of any resulting grid/map, such that
the resolution of the geological signal is degraded.
Figures 17/15 & 17/16 show before and after line
levelling.

Levelling profile/line-based data is normally done in two
stages.

Stage 1. Tie-Line Levelling or Cross-Over Analysis:
Normally line-based surveys have tie lines orientated
perpendicular to the survey lines, with a tie-line spacing
of the order of 5 to 10 times the survey line
spacing. After all corrections have been applied to the
along-line data, the data values at the survey line/tie line
intersections should be the same. Rarely is this the case:
poor navigation and the residual effects of
diurnal corrections (magnetic data) are often the major
sources of the mis-ties.



Figure 17/15: Before line levelling, showing strong
correlation of the noise with the flight lines and their direction

Now that base-station magnetometry and GPS
navigation are standard for surveys, these errors have
been greatly reduced. Differences at cross-over
points still occur due to:

- magnetic diurnal corrections rarely being accurate
enough to remove this temporally/spatially varying effect
- variations in flying heights
- magnetic compensators doing a good, but not perfect,
job of minimising aircraft noise, and being prone to
directional biases
- processing differences between lines.



Figure 17/16: Same data after line levelling

This last point is seen in marine gravity data when ship
tracks run either with or against the sea. In such cases
the quality of the gravity data deteriorates with sea
roughness, which is a function of ship direction and time.
As the sea state deteriorates, all dynamically related
corrections tend to grow larger and become less precise.

Tie-line cross-over error analysis can use least squares
to minimise and remove major errors. Since the tie lines are
subject to error, as are the survey profile data, one cannot
assume that the tie-line data are error free. Various
methods have been devised to overcome this problem.
Geosoft tie-line levelling works as follows:

a. Cross-over difference values are computed at each
line intersection.
b. The cross-over difference values are loaded as a new
channel on the tie lines.
c. A low-order polynomial curve is fitted to the cross-
over difference values along each tie line. This is, in
effect, a low-pass filter.
d. The fitted curve is added to the tie-line values. This
procedure is known as statistical levelling. It is based on
the principle that if the tie line is consistently higher or
lower than the traverse lines crossing it, then the
difference is most likely due to zero-level drift on the line.
e. The fitted curve values are subtracted from the
cross-over difference values to give residual cross-difference
values. This is a high-pass filter complementary to the
low-pass filter above.
f. The residual cross-difference values are copied to
the traverse lines, interpolated and subtracted from the
traverse-line values.

Thus a fraction of the difference at each intersection is
added to the tie line and the remainder is subtracted
from the traverse line, bringing the two lines together.
This has the effect of removing zero-level drift on both
the tie lines and the traverse lines.
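
A minimal numpy sketch of steps c-e for a single tie line is given below; the array names and the simple polynomial choice are assumptions for illustration, not the Geosoft implementation.

import numpy as np

def level_tie_line(tie_dist, tie_vals, cross_dist, cross_diff, order=2):
    """Statistical levelling of one tie line.

    tie_dist, tie_vals     : distance along the tie line and its field values
    cross_dist, cross_diff : distances of the traverse-line intersections and
                             the mis-tie (traverse minus tie) at each of them
    Returns the adjusted tie-line values and the residual mis-ties that are
    then interpolated onto, and subtracted from, the traverse lines.
    """
    # c. fit a low-order polynomial (a low-pass filter) to the mis-ties
    coeffs = np.polyfit(cross_dist, cross_diff, order)
    # d. add the fitted long-wavelength part to the tie line
    tie_levelled = tie_vals + np.polyval(coeffs, tie_dist)
    # e. the residual short-wavelength part goes back to the traverses
    residual = cross_diff - np.polyval(coeffs, cross_dist)
    return tie_levelled, residual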

The above procedure is good at removing large errors
but is often unable to remove small systematic errors;
hence the need to carry out micro-levelling.

Stage 2. Micro-Levelling: This is necessary to remove
residual line-to-line noise. If these small levelling errors
remain then they generate locally high gradients that will
distort derivative grids. The errors are seen as long-
wavelength biases along individual flight lines, causing
short-wavelength variations perpendicular to the flight-line
direction.


Figure 17/17: Examples of flight-line and ship-track
orientated noise due to residual levelling problems
that cross-over levelling has been unable to correct.

In the examples in Figure 17/17 the errors in the
aeromagnetic data will often be associated with incorrect
diurnal corrections or different flying heights
between flight lines, whereas for shipborne gravity data
it is often a bias in the Eötvös correction.

The last thing we wish to do is to remove the problem by
low-pass filtering, since it is often the short-wavelength
signals that are of interest and such filtering would remove
these geological signals.

The best way to quality-control micro-levelling results is
simply to inspect the difference grid between the input
grid and the output grid. A perfect micro-levelling difference
grid is one that shows only profile-orientated noise, with
little or no geological signal, particularly where the geology
sub-parallels the line direction.

There are a number of proprietary methods used by
contractors to solve this. Three very different
methods are compared here using the following test data
(Figure 17/18), which clearly contain flight-line noise and
geological signal closely paralleling the flight lines.

Also seen in this old dataset are clear changes at the
edges of map sheets. This is due to the limited
computing power of the time, which prevented the whole
survey being processed in one go; instead it was done one
map sheet at a time.



Figure 17/18: Test map sheet showing W-E flight-line
noise (red arrow), N-S sheet-edge problems (white
arrows) and geological signal sub-paralleling the
flight lines (yellow arrows). 2 km flight-line separation.

a) Median Levelling (uses digital profile data)
(From Mauring, E. and Kihle, O., 2006, Levelling
aerogeophysical data using a moving differential
median filter: Geophysics, 71, L5,
doi:10.1190/1.2163912)

Figure 17/19: The concept of the median

The method uses the following equation:

Znew = Zorg + Areamedian (Am) - Linemedian (Lm)

A median operation is carried out about each digital point
within a specified search radius. The Linemedian value, Lm,
is assumed to contain a small line bias, whereas the
Areamedian value, Am, is not.

The operator is then moved to the next digital data point and
the process repeated for the whole map.
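
The per-point correction is easy to sketch in numpy; this is a simplified illustration of the moving differential median idea, not the Mauring & Kihle code, and the iterative shrinking of the search radius described below is omitted.

import numpy as np

def median_level_point(i, x, y, z, line_id, radius):
    """Differential median correction for the i-th data point.

    x, y, z : coordinates and field values of all survey points
    line_id : flight-line identifier of each point
    """
    inside = (x - x[i])**2 + (y - y[i])**2 <= radius**2
    area_median = np.median(z[inside])                            # Am, all lines
    line_median = np.median(z[inside & (line_id == line_id[i])])  # Lm, own line
    return z[i] + area_median - line_median                       # Znew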



Figure 17/20a: Generation of Linemedian Lm


Figure 17/20b: Generation of Areamedian Am

This is an iterative process that gradually reduces the
search radius: in this example from 20,000 m to 10,000 m
to 4,500 m and finally to 2,000 m. The final result is
shown in Figure 17/21 and the difference plot (Fig. 17/18
minus Fig. 17/21) is shown in Figure 17/22.



Figure 17/21: Median micro-leveled map.



Figure 17/22: Difference grid between 17/18 and
17/21

At first sight the results in Figure 17/21 look good, but
the process has in fact removed a lot of short-wavelength
geological signal that parallels and sub-parallels the flight-
line direction.

b) Geosoft GX developed by PGW Ltd (using grid
data only)
(From Minty, B. R. S., 1991, Simple micro-levelling for
aeromagnetic data: Exploration Geophysics, 22(4),
591-592, doi:10.1071/EG991591)

Sometimes flight-line data are not available, so can we
improve matters using just the original grid? The method
proposed is a two-stage method using the GXs decorr.gx
and miclev.gx.

The method generates a noise channel that is defined as

noise = line-levelling noise + geological signal

The noise channel is then processed to leave only the
line-levelling noise, which can then be removed from the
original grid data.

GX decorr.gx


Figure 17/23: The De-corrugation filter

This GX applies a directional filter plus a 6th-order high-pass
Butterworth filter with a cut-off wavelength of 4 times the
flight-line spacing. This produces the noise channel defined
above.
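
In the wavenumber domain such a de-corrugation can be sketched as the product of a directional cosine-type filter and a high-pass Butterworth filter. The snippet below is an illustrative stand-in (assuming E-W flight lines and a square-cell grid), not the decorr.gx code itself.

import numpy as np

def decorrugation_noise(grid, cell, line_spacing, order=6):
    """Extract a corrugation 'noise' grid for E-W flight lines.

    grid         : 2-D array with rows running W-E (cell spacing 'cell')
    line_spacing : flight-line separation, same units as 'cell'
    """
    ny, nx = grid.shape
    ky = np.fft.fftfreq(ny, cell)[:, None]   # cross-line wavenumbers
    kx = np.fft.fftfreq(nx, cell)[None, :]   # along-line wavenumbers
    k = np.maximum(np.hypot(kx, ky), 1e-12)
    directional = (np.abs(ky) / k) ** 2      # keep energy varying across lines
    kc = 1.0 / (4.0 * line_spacing)          # cut-off at 4 x line spacing
    highpass = 1.0 / (1.0 + (kc / k) ** (2 * order))
    spec = np.fft.fft2(grid) * directional * highpass
    return np.real(np.fft.ifft2(spec))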



A. decorr noise grid at wavelength <10 km
(5 x line spacing)




B. decorr noise grid at wavelength < 20 km
(10 line spacing)

Figure 17/24: Map B is preferred

Miclev.gx

In this GX a low-pass Naudy filter is applied to the decorr
noise grid (high-passed at 20 km) to leave just the line-
levelling noise. The Naudy amplitude limits applied were the
default (the standard deviation of the noise grid) and open
(i.e. no limit).



Figure 17/25: Final micro-levelling result after
applying 10 km decorr and 1000 miclev.



Figure 17/26: Difference grid between final (fig.
17/25) and the original (fig. 17/18)

The method preserves the sub-parallel features well, but
generally smooths the final grid by removing short-
wavelength geological signal. Nevertheless this is a
good result, since some smoothing is inevitable when
only a grid is available.

c) GETECH Draping Method (uses profiles and
grids)

This method can be considered a remove-restore
method, where the spectral content of the data is
separated into a long-wavelength component (grid) and a
short-wavelength component (profile).

The flow diagram is shown in Figure 17/27


Figure 17/27: Flow diagram for the GETECH draping
method

Long-wavelength component (grid): The micro-levelling
problem is all due to short-wavelength variations
perpendicular to the flight lines (although along an individual
line it manifests as a bias). So a low-pass (LP) filter of the
map will generate a good, clean regional map without
any micro-levelling problem. One uses the shortest LP
cut-off that does not show the corrugations. This grid is then
used to interpolate regional values onto the profile lines.

Short-wavelength component (profile): Each profile is
filtered with a high-pass (HP) filter of ~100 km to remove
any profile bias.

The results of these two processes are added together
and gridded. The result should be better than the starting
grid.

The process is then repeated, but now the long-wavelength
component (grid) can be defined to shorter wavelengths,
since corrugation is less of a problem, and the short-
wavelength component (profile) is recomputed with a shorter
HP cut-off.
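
One iteration of this remove-restore loop might look as follows in Python. This is a sketch only: the filter choices, helper names and the Gaussian low-pass are assumptions, not GETECH's implementation.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates
from scipy.signal import butter, filtfilt

def drape_iteration(grid, xi, yi, profiles, lp_sigma, hp_cut_km, d_km):
    """One remove-restore iteration.

    grid     : current grid on ascending coordinates xi, yi (km)
    profiles : list of (x, y, z) arrays sampled every d_km along each line
    lp_sigma : low-pass smoothing length for the regional, in grid cells
    """
    regional = gaussian_filter(grid, lp_sigma)        # corrugation-free regional
    b, a = butter(4, d_km / (hp_cut_km / 2.0), btype="high")
    levelled = []
    for x, y, z in profiles:
        # interpolate the regional grid onto the profile positions...
        cols = np.interp(x, xi, np.arange(xi.size))
        rows = np.interp(y, yi, np.arange(yi.size))
        reg = map_coordinates(regional, [rows, cols], order=1)
        # ...and restore the bias-free short wavelengths of the profile
        levelled.append((x, y, reg + filtfilt(b, a, z)))
    return levelled   # re-grid these values to start the next iteration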

The resulting grid (Fig 17/28) and the difference grid show
that the maximum amount of line noise and the least amount
of short-wavelength signal have been removed, making
the GETECH approach a robust method that can work in
most situations (i.e. variable flight-line configurations).



Figure 17/28: Final (left) after 3 iterations (f20d100,
f10d40, f5d8). Difference (right) showing map change
and nearly pure line noise and very little geological
signal.


17.3.2 Gridding Profile Data

General rules on grid cell size: Data were traditionally
provided to clients as contour maps and stacked flight-
line profiles (Fig 17/29). Nowadays colour shaded-relief
images and contours are the standard products. A general
rule for determining the grid size best suited to a particular
data set is to look at the point spacing along line and
across line; the general rule is to grid the data at 1/4 or 1/5
of the line spacing. The cell size is then optimum for
retaining, during gridding, the short-wavelength information
seen along line.

A further general rule is that the flying height should not
be greater than the distance between the magnetic bodies to
be resolved, where 'resolved' means being able to see the
edges of the body. For a survey flown at 500 m line
spacing, at an average of 230 m above topography and
with magnetic targets up to 200 m below the surface,
a 100 m cell size is very good for data
which are on average about 400 m above the magnetic sources.

Random Gridding (see minimum curvature
gridding, section 17.2.1): This method does not
enhance data across lines (i.e. in the predominant
geological trend). It has the advantage of utilising data
from both flight lines and tie lines. The tendency to retain
high-frequency content along lines produces an
inhomogeneous grid, where short-wavelength anomalies
are clustered along the data lines.

Bi-directional gridding: Bi-directional gridding is
usually preferable for line-orientated data, as it tends to
enhance trends perpendicular to the direction of the
survey lines. As the optimum line direction is
perpendicular to the main magnetic lineation within a
survey area (Barlow, 1991, Types of surveys -
definitions and objectives, in Teskey, D. J. (Ed.),
Guide to aeromagnetic specifications and contracts:
Geological Survey of Canada, open file report), this
method is ideal for aeromagnetic survey data.

The gridding process is carried out in two principal steps,
as illustrated in Fig 17/30.

STEP 1: Each line is interpolated along the original
survey line to yield data values at the intersection of
each required grid line with the observed line.

STEP 2: The intersected points from each line are then
interpolated perpendicular to the original survey line to
produce a value at each grid node. This second pass of
interpolation creates grid lines; a grid line is a series of
numbers that represents all the values along a single grid
row.

Geological trends in the data can be emphasised by
appropriate orientation of the grid, so that the second
interpolation is in the direction of strike. In addition to
trend enhancement, the method allows the interpolation to
be selected independently for the down-line and across-
line directions. The interpolation can be linear, cubic
spline (minimum curvature) or Akima spline.

Filtering of line data before interpolation is also possible.
This method is also a good means of re-gridding into a
new direction, with the same or a different projection. A
minimal sketch of the two interpolation passes follows.
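
The sketch below implements the two passes for roughly north-south survey lines; the data layout and the cubic-spline choice are assumptions, and commercial packages add Akima splines, grid rotation and pre-filtering.

import numpy as np
from scipy.interpolate import CubicSpline

def bidirectional_grid(lines, xi, yi):
    """Two-pass bi-directional gridding of N-S survey lines.

    lines  : list of (x, y, z) arrays, one per line, y strictly increasing
    xi, yi : ascending grid-column and grid-row coordinates
    """
    # STEP 1: interpolate along each survey line onto the grid rows yi
    cols = sorted(((np.mean(x), CubicSpline(y, z)(yi)) for x, y, z in lines),
                  key=lambda c: c[0])
    xline = np.array([c[0] for c in cols])    # across-line positions
    vals = np.vstack([c[1] for c in cols])    # (n_lines, ny)
    # STEP 2: interpolate across the lines, row by row, onto the columns xi
    grid = np.empty((yi.size, xi.size))
    for j in range(yi.size):
        grid[j, :] = CubicSpline(xline, vals[:, j])(xi)
    return grid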



Figure 17/29: Examples of (i) a magnetic profile map
with the structural trend at 30° to the perpendicular
to the flight lines, and (ii) the magnetic contour
map version with flight lines shown.

A problem common to all gridding methods is that
linear trends at acute angles to the flight lines tend to
produce corrugated patterns (see Figs. 17/29 and 17/31)
or bull's-eyes (Fig. 17/32) where they encounter flight lines.
In the bi-directional gridding method this is because
the technique acts as an x-y filter, while for
minimum curvature it is due to the inherent nature of the
method: the surface with minimum curvature in the
absence of constraints is a sphere, or a circle in 2-D, so
linear 2-D features will tend to break up (see Fig.
17/32). The source of the problem is the under-sampling
of the data in the direction perpendicular to the flight
lines or, in the gravity case of point measurements, there
being not enough points defining the feature.


Figure 17/30: Bi-directional gridding

A general requirement in aeromagnetic data processing is
that the interpolated surface honours the data values.
When bi-directional gridding is used, a rotation of the co-
ordinate system can reinforce the regional trend. Unlike
Fig. 17/31, the case shown in Fig. 17/33 has an initial north-
south grid orientation which is at an angle to the flight
lines, so the dominant 2-D trend (assumed
perpendicular to the flight lines) will generate a corrugation,
similar to Fig. 17/31, with a wavelength equivalent to the line
spacing. Fig. 17/33 then rotates the initial bi-directional grid
perpendicular to the 2-D trend, which allows the
interpolation between profiles to be more accurate
and results in a de-corrugated map. Fig. 17/34 shows
the result of regridding Fig. 17/31 perpendicular to the
geological trend; note that the corrugation effect has been
removed. If the 2-D lineament were a high (or low) rather
than a steep gradient, then the initial north-south grid
would have resulted in a set of bull's-eye anomalies at
flight-line locations, similar to Fig. 17/32, and the rotated
case in a more continuous feature.

Figure 17/31: Corrugated pattern due to bi-directional
gridding N-S and E-W (see Fig. 17/24).

Rotated grids, as in Fig. 17/33, can be re-gridded into a
conventional north-south co-ordinate grid by the re-
application of bi-directional gridding (lower diagram in
Fig 17/33). This is done by treating each grid column as if
it were a flight line. The resulting grid will not have the
original corrugations or bull's-eyes and will be ready for
further processing.


An anti-aliasing filter can also be applied to the line data,
such that the shortest wavelength is the same as the
one determined from the line spacing. This solves the
problem, but at the cost of removing useful short-
wavelength geological information along flight lines.




Figure 17/32: Bull's-eye effect along a 2D lineament.





Figure 17/33: Bi-Directional grid in original north-
south co-ordinate system, rotated and back to
standard projection orientation




Figure 17/34: De-corrugation of Fig. 17/33

17.4 Extensions to Gridding Methods

Most of the methods so far described have problems
when two competing trends are present. Different
software packages have varying methods of enhancing such
trends.

Geosoft method: If one major trend is present, as shown
on the left-hand side of Fig 17/35, then Geosoft can improve
the trend enhancement by evaluating the correlation of
anomaly highs and lows between flight lines and
constructing trend lines, as shown in Fig 17/36.



Figure 17/35: Geosoft Minimum Curvature before
and after trend enhancement



Figure 17/36: Geosoft Trend enhancement looks for
correlations between lines and draws interpolation
lines

These trend lines allow additional data points to be
generated along the trends between flight lines to
control the interpolation. The results are shown on the right-
hand side of Fig 17/35.

Strike Interpretive (SI) Gridding: Scott Hogg & Associates
Ltd illustrate the problem on their web site
(http://www.sha.on.ca) in Figs. 17/37 to 17/39. Neither
minimum curvature nor bi-directional gridding provides an
elegant solution. It should be noted that this example given
by SHA is somewhat biased, but it illustrates the point;
biased, since SHA have not tried to optimise any particular
method to minimise the problems, and Geosoft has
recognised the problem and partly solved it.


Figure 17/37: Minimum curvature shows up the
bull's-eyes




Fig. 17/38: Bi-directional gridding at about N30°. This
reinforces trends in the direction of 30° but
destroys the N315° trend.

The SI grid (Fig 17/40) is generated by loading the
profile line data into the gridding program, from which a
gridded image is produced. A coarse grid of vector markers
is displayed over the image and the user selects the
orientation of the vectors, aligning them along trends seen in
the data. These vectors are then used to control the
interpolation direction between flight lines.


Figure 17/39: Bi-directional gridding at about N315°. This reinforces trends in the direction of 315° but destroys the N30° trend.



Figure 17/40: The SI grid that reinforces both trends

Method after Keating (1997): A technique, implemented in some bi-directional gridding software packages, is to allow the user to manually introduce new lines that join the local trends to be reinforced. Data for these trend lines are then linearly interpolated between measured data points. Keating (1997. Automated trend reinforcement of aeromagnetic data. Geophysical Prospecting 45:521-534) has proposed a method to automate this procedure so that it can be used with most gridding methods.

Step One: locate all maxima and minima on a
preliminary gridded map, then the data point located
nearest to each of these gridded points is found.

Step Two: associate each of these points with its nearest maximum or minimum. Selection among those pairs of points of the ones that should be joined to reinforce the local trends is based on simple criteria: points joined together should be on adjacent flight lines, and the angle between the line joining two points and the flight line should not be more than 30°. This last criterion avoids linking together data points that are unrelated.
Step Three: new data values are then linearly
interpolated between each pair of points and added to
the original data file for subsequent re-gridding.
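The pairing logic of Steps Two and Three can be sketched in a few lines of Python. This is a schematic illustration only, not Keating's published implementation: it assumes N-S flight lines identified by an integer line index, takes as input the data points already matched to gridded extrema (Steps One and Two), and applies the 30° angle criterion quoted above; the function and variable names are hypothetical.

import numpy as np

def reinforce_trends(pts, max_angle_deg=30.0):
    # pts: iterable of (x, y, value, line_id) data points nearest to
    # gridded maxima/minima; flight lines assumed N-S (y = line direction).
    new_pts = []
    for a in pts:
        for b in pts:
            if b[3] != a[3] + 1:          # only join points on adjacent lines
                continue
            dx, dy = b[0] - a[0], b[1] - a[1]
            # angle between the join and the flight-line (y) direction
            angle = np.degrees(np.arctan2(abs(dx), abs(dy)))
            if angle > max_angle_deg:     # reject unrelated point pairs
                continue
            # Step Three: linearly interpolate a new value on the join
            new_pts.append(((a[0] + b[0]) / 2.0,
                            (a[1] + b[1]) / 2.0,
                            (a[2] + b[2]) / 2.0))
    return np.array(new_pts)

The new points are appended to the original line data before re-gridding; a production version would interpolate several points along each join rather than just the midpoint.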

Trend Spline Gridding (after Fitzgerald, D., Yassi, N. & Dart, P., 1997. A case study on geophysical gridding techniques: INTREPID perspective. Exploration Geophysics 28:204-208.)

To overcome trends at oblique angles to the flight direction, which as seen generate bull's-eye artefacts, they have used the method of Brindt and Hauska (1985. Directional dependent interpolation of aeromagnetic data. 11th Int. Symp. on Machine Processing of Remotely Sensed Data, Purdue Univ., Indiana, USA). This method adds interpolated data points between the line data to control and enhance trends. These points and values are determined by moving a 9-point window over three lines of data (3 points on each line) and determining the sum of the variances for the 9 points on the 3 lines. The direction of minimum variance is then taken as the strike direction at that location. The maximum strike angles evaluated are within the range -54° to +54°. Interpolation is then carried out along this trend to determine values at locations between the 3 lines, perpendicular to the direction of the moving window. The window moves to the next location and the process is repeated. Solutions can be rejected if there is no support from adjacent lines.
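A minimal sketch of the minimum-variance strike search, assuming three adjacent N-S flight lines resampled to a common along-line coordinate; the names, the angle step and the use of one sample per line per strike-parallel row are illustrative simplifications of the 9-point window.

import numpy as np

def local_strike(prev, curr, nxt, j, dy, dx):
    # prev, curr, nxt: values along three adjacent flight lines, sampled
    # every dy along-line; dx: flight-line spacing; j: centre sample index.
    # Tests strike angles of -54..+54 degrees from the across-line direction
    # and returns the angle giving the minimum summed variance.
    y = np.arange(len(curr)) * dy
    best_angle, best_var = 0.0, np.inf
    for ang in np.arange(-54.0, 54.1, 6.0):
        shift = dx * np.tan(np.radians(ang))   # along-line offset per line
        total = 0.0
        for k in (-1, 0, 1):                   # three strike-parallel rows
            yc = y[j] + k * dy
            samples = [np.interp(yc - shift, y, prev),
                       np.interp(yc, y, curr),
                       np.interp(yc + shift, y, nxt)]
            total += np.var(samples)
        if total < best_var:
            best_angle, best_var = ang, total
    return best_angle

Interpolated in-fill values are then placed along the winning strike direction between the lines, and the window is stepped along the survey.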


17.5 Linking Aeromagnetic Surveys

Aeromagnetic surveys may be acquired in different forms, from constant barometric (constant height above

sea level), to loose drape, to drape (constant height above ground). See section 13.3.

Before aeromagnetic surveys can be linked together they have to be reprocessed into a common set of survey parameters. If the x, y position and elevation of all the points are known, then this set of measurement points may be thought of as defining a 3-dimensional surface (or survey datum). According to the surface equivalence theorem of electromagnetics, the magnetic field values measured on the survey datum may be used to compute the magnetic field on any other arbitrary datum above the ground. This provides the basis for reprocessing surveys with different survey datums. Thus it is possible to convert between barometric and drape surveys and to change the observational height.

If the survey datums for two or more surveys are the same and all other corrections such as the IGRF have been applied, then it is likely that the two surveys will fit together well except for residual adjustments. Fitting together two or more surveys with unknown processing histories can be more difficult, since although you have adjusted the surveys to the same survey datum, the different methods of removing the regional trend of the geomagnetic field (now normally removed by the IGRF) can give rise to DC and slope adjustments. In addition, the poor location accuracy of old surveys can give rise to errors.

The procedure generally used is to choose your best survey as the starting grid. Best here often refers to the survey which is the most recent, with the best acquisition parameters (highest resolution), GPS location and a known processing history. This primary survey will be fixed and the others joined to it, at all times preserving the higher resolution survey. Adjustment of surveys to fit the primary survey is done by extracting common profiles along the overlapping borders of the surveys. The long wavelength component of the differences between these profiles is used as a basis for constructing a correction grid by which the survey is adjusted to the primary survey. The merged survey is now part of the primary grid. The process is repeated for the next survey and so on.

If the final linked survey is large enough (greater than about 800 km across) then the grid can be draped onto the satellite-derived crustal field; the CHAMP satellite field resolves the crustal field at wavelengths greater than 500 km (see section 13.3.2). In this way grids that cannot be linked, because they do not overlap, can be brought to a common datum. Such a process minimises long wavelength biases that can arise from linking a large number of small surveys.


17.5.1 Height Continuation and Draping

Let us assume the required end product is a 1 km grid at 500 m above topography, i.e. a draped survey at 500 m above terrain. Each survey to be linked needs to be reprocessed to satisfy these parameters.

Continuing a drape survey to a higher terrain clearance is known as upward continuation. In this case the magnetic field on the new surface is given by applying the Fourier-domain filter

exp(-h*k)

where h is the difference in height and k is the radial wavenumber. Upward continuation is preferred to downward continuation since the latter amplifies noise.
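A minimal numpy sketch of this filter, assuming a regular, gap-free grid with square cells; in practice the grid is first expanded and tapered to suppress FFT edge effects.

import numpy as np

def upward_continue(grid, h, dx):
    # Continue a grid upward by h (same length units as the cell size dx)
    # by applying exp(-h*k) in the wavenumber domain.
    ny, nx = grid.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    k = 2.0 * np.pi * np.hypot(*np.meshgrid(fx, fy))   # radial wavenumber
    spectrum = np.fft.fft2(grid)
    return np.real(np.fft.ifft2(spectrum * np.exp(-h * k)))

Downward continuation would use exp(+h*k), which grows with k and so amplifies short-wavelength noise, as noted above.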

In order to solve the more general problem, where the difference between the two datum surfaces is not a constant, the chessboard method devised by Cordell is used (L. Cordell, Techniques, applications, and problems of analytical continuation of the New Mexico aeromagnetic data between arbitrary surfaces of very high relief, Institut de Geophysique, Universite de Lausanne, Switzerland, Bulletin No. 7, p96-99, 1985). The procedure is illustrated in Figures 14/41 a-d, where:

a. Barometric to Drape - Direct

The barometric survey datum is upward and downward continued to a set of continuation levels, and the desired drape datum surface is interpolated from these continuation levels. The example shown is mainly downward continuation, which will amplify noise.




Figure 14/41: a. Barometric to Drape - Direct

b. Barometric to Drape-Indirect

In this case the barometric Survey Datum is upward
continued to a set of continuation levels and an
intermediate continuation datum is interpolated. This
intermediate datum is then downward continued to
desired drape datum height.




Figure 14/41: b. Barometric to Drape - Indirect

c. Loose Drape to Drape - Indirect

Here the loose drape survey datum is upward continued and an intermediate datum interpolated, before downward continuing the intermediate datum to the desired drape datum height.




Figure 14/41: c Loose Drape to Drape-Indirect


d. Drape to Barometric - Direct


Figure 14/41: d. Drape to Barometric - Direct

This could help to reduce the effects of anomalies
resulting from the sea bed bathymetry.

These continuation types enable two or more surveys to be adjusted to have the same survey datum.


17.5.2 Grid Linking

The following example shows two surveys that have been processed to the same parameters, but on joining them together there is some mismatch along the edges of the surveys.



Figure 14/42: Two surveys that nearly match

To correct for this mismatch, the long wavelength component of the difference profile, constructed along the edges around the two surveys (red line in Fig. 14/42), is used to derive a long wavelength correction grid, shown in Fig. 14/43 (left). This grid is then used to correct Grid B to generate the final linked grid shown in Fig. 14/44.
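As a minimal stand-in for this correction-grid workflow, the sketch below fits only the DC and slope terms mentioned in section 17.5 to the differences in the overlap zone and removes them from Grid B; real merging software smooths the boundary difference profile and grows a full long-wavelength correction grid. The names are illustrative.

import numpy as np

def level_to_primary(grid_a, grid_b, overlap, x, y):
    # grid_a: primary grid; grid_b: grid to be adjusted; overlap: boolean
    # mask of cells where both grids are valid; x, y: coordinate grids.
    d = (grid_b - grid_a)[overlap]
    A = np.column_stack([np.ones(d.size), x[overlap], y[overlap]])
    c, *_ = np.linalg.lstsq(A, d, rcond=None)      # DC + slope fit
    correction = c[0] + c[1] * x + c[2] * y        # long-wavelength surface
    return grid_b - correction                     # adjusted Grid B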



Figure 14/43: Correction grid to be applied to Grid B



Figure 14/44: Final merged grids





DATA ENHANCEMENT

Section 18 Understanding the Shape of Anomalies and classic
methods to isolate individual anomalies.
Section 19 Data Enhancement






SECTION 18: UNDERSTANDING THE SHAPE OF ANOMALIES &
CLASSIC ENHANCEMENT METHODS TO ISOLATE INDIVIDUAL ANOMALIES


Gravity anomalies are probably the easiest to understand, since rock density is a scalar quantity. If the body under investigation has a uniformly higher density with respect to its surrounding rock, then there tends to be a simple relation between the amplitude and shape of the resulting gravitational anomaly and the shape and volume of the sub-surface high density body. This is not necessarily the case with magnetic anomalies, since the magnetisation of a rock, which controls the shape of the anomaly field, is a vector quantity. The same body and susceptibility will generate different magnetic anomalies depending on the inclination of the inducing field and the orientation of the body in that inducing field. Gravity and magnetic fields are a mix of shallow short wavelength anomalies and regional, generally deeper, long wavelength anomalies. To identify and interpret the shallower anomalies it was the custom to separate the anomalies by regional-residual separation.

18.1 Understanding the Shape of
Magnetic anomalies

18.1.1 How to predict the shape of a magnetic anomaly knowing the shape and magnetisation of the causative body

We initially look at the vertical and horizontal vector responses. To visualise the total magnetic field effect, the magnetic inclination and declination need to be factored in (see later in this section).

i. Shape of the magnetic field due to an isolated magnetic pole (monopole; this is similar to the gravity case, i.e. the red Z curve represents the vertical component). In the absence of an external geomagnetic field, the lines of magnetic force (flux) are radial to the monopole. In the case shown in Fig. 18/1 the magnetic pole is negative, so the lines of force point inwards, since it is convention to consider the force acting on a proton (positive charge). Thus -m can be considered as the south pole of a magnet and +m as the north pole of a magnet, with the arrow representing a magnetic compass with its north pole pointing towards the -m monopole.

Figure 18/1: Vertical and horizontal components
profiles across a monopole (flux model)
Now consider the interaction force that a proton (positive particle) will experience at different positions along the surface, in the X and Z directions.

i. Z Plot: There are two factors to consider when describing the Z force field, which is positive downwards.

The direction of the flux lines: the flux lines are in this case all pointing down through the surface, so the complete anomaly will be positive (reinforcing the Earth's field). However, the angular relationship of the flux changes with distance, so that at large distances from the body the field is nearly horizontal and thus there will be little vertical component to the field (cosine effect).

The distance from the causative body (i.e. inverse square law): over the body will be the shortest distance and thus the strongest effect on the proton. In flux terms, the divergence of the flux lines represents the decrease in force (flux per unit area).

These two effects have to be vector summed in the vertical component to get the shape of the anomaly.

ii. X Plot: Do the same as above but now consider the component of force on the surface proton to be in the X positive direction.

The direction of the flux lines changes over the centre of the body, so the anomaly changes sign (or phase). Or take the cosine of the angle between the flux and the component being measured: from 0° to 90°, cos goes from 1 to 0; from 90° to 180°, cos goes from 0 to -1.

Distance effect: same as before.

ii. Shape of the magnetic field due to a vertical dyke. One can use lines of force or magnetostatic charge to determine the anomaly field.

In the following examples only the fields due to X and Z have been shown. What does the total field T look like, since this is what is measured by magnetometers?

Does T = √(X² + Z²)? This is only true if there is no Earth's magnetic field.




Figure 18/2: Magnetic response of dyke with
horizontal magnetisation (close to equator)



Figure 18/3: Magnetic response of dyke with vertical
down magnetisation (close to N Pole)


Figure 18/4: Magnetic response of dyke with inclined
down magnetisation (European area/North America)

Size of the geomagnetic field: 35,000 nT (Equator) to 70,000 nT (Poles).
Size of a magnetic anomaly: T ~ 100 nT.

Thus magnetic anomalies T are roughly 1/500 of the geomagnetic field, so we effectively measure only the component of the anomaly field in the direction of the geomagnetic total field:

T = Z sin I + X cos I cos(C - D)

where I = inclination, C = traverse direction (X is measured in this direction) and D = declination.

To determine T with a computer program: first calculate Z and X, then calculate T (see later for more details of computer programs).
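For example, a direct sketch of the projection above (angles in degrees; array inputs work element by element):

import numpy as np

def total_field_anomaly(Z, X, inc_deg, dec_deg, traverse_deg):
    # T = Z*sin(I) + X*cos(I)*cos(C - D), projecting the component
    # anomalies onto the geomagnetic total-field direction.
    I = np.radians(inc_deg)
    C = np.radians(traverse_deg)
    D = np.radians(dec_deg)
    return Z * np.sin(I) + X * np.cos(I) * np.cos(C - D)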

iii. Shape of dyke anomalies with latitude

a) South Pole (the total field now has Z positive upwards, whereas for the northern hemisphere it is positive downwards)


Figure 18/5: Vertical magnetic response of dyke with
vertical up magnetisation (close to S Pole). What
does the horizontal component look like?


b) Mid southern latitude


Figure 18/6: Magnetic response of dyke with inclined
up magnetisation (Southern Africa /Australia)

Note: Use the same logic as Fig. 18/4, but remember Z is positive upwards in the southern magnetic hemisphere.
18.1.2 The Size and Shape of Magnetic
Anomalies

(You figure out whether or not gravity anomalies depend on the same parameters.)

Figure 18/7: Magnetic response of an induced
dipole as function of latitude

The size and shape of magnetic anomalies depend on:

The way the survey was carried out:
- altitude of sensor (barometric or constant height above terrain)
- line spacing
- direction of lines
- station distribution

Parameters and location of the body:
- shape of body (3D, 2D or 1D)
- volume of body
- susceptibility contrast (density contrast for gravity)
- latitude (no effect on gravity)
- depth of body
- remanence of body (no equivalent in gravity)

Question: What is the magnetic response along a W-E profile across a dyke located at the equator with dyke direction (strike) N-S? See the next section.
18.1.3 Flux, Magnetostatic charge and
anomalies (or lack of anomalies) at the
Magnetic Equator
When there is no magnetic body the flux lines, representing the Earth's magnetic field, remain unaltered. When a magnetic body is present the flux generates a magnetostatic charge about the surface of the body. If we assume the magnetite crystal domains align with the flux, then one can visualise the domains as small magnets with north and south poles. Aligning these domains brings north poles in line with south poles and thus the field cancels out, leaving only the monopole charge of the south (-) and north (+) poles on the surface. It is this charge that dictates the shape of the resulting magnetic field.

Figure 18/8: The concept of magnetic flux and resulting magnetostatic charge

The previous 2D examples of dykes show how well the flux and magnetostatic charge concept allows us to visualise the shape of the magnetic field. Let's use this magnetostatic concept in 3D.

By using the magnetostatic charge concept it becomes easier to understand how and why the induced magnetic field interacts with magnetised bodies and results in structural anisotropy close to and over the magnetic equator.

We look at three scenarios: when the geomagnetic field has an inclination of I = 90° (or RTP), Fig. 18/9; at 45°, Fig. 18/10; and at 0° (or RTE), Fig. 18/11.

a. Inclination I = 90°

For a magnetic field pointing vertically down (Fig. 18/9), only the top and bottom surfaces have magnetic flux passing through them, thus generating a magnetostatic charge. On the surface only isolated south poles are seen, giving rise to a -ve charge.

Figure 18/9: Two isolated magnetic bodies
showing the magnetostatic charge from vertically
induced magnetic field.
The important points to draw from this are:
- the vertical sides are not charged (due to no flux cutting these vertical surfaces);
- the upper surface of the bodies is charged, as are the lower surfaces at some depth;
- in plan view there is a good distribution of charge on the surfaces of the magnetic bodies to allow for a strong magnetic response immediately above the source, defining the location of the body and its edges.

This response is similar to inclinations of +/-90° seen for the dipole source (Fig. 18/7). There is no dependence of the anomaly on azimuth, so all structural contacts are equally seen.

b. Inclination I = 45°

As with the dipole at 40° (Fig. 18/7), the field is asymmetrical, with the negative anomaly component on the north side of the anomaly. The magnetostatic charge is also asymmetrical, as shown in Fig. 18/10. The plan view of the two structures shows that the top and the north and south sides of the structure are charged, but the N-S vertical contact carries no charge. The structure is thus well imaged, albeit with the complexity of the inclined inducing field. The anomaly generated from the inclined field can be easily reduced to the pole to allow the anomalies to have a simpler relation to the underlying causative body.

Figure 18/10: Two isolated magnetic bodies showing the magnetostatic charge from a 45° inclined induced magnetic field.

c. Inclination I = 0°

The problems start to manifest themselves at or close to the magnetic equator. The horizontal inducing field will only allow north- and south-facing vertical edge structures to be charged. Thus there is no charge on the surface nor on edges striking north-south. So when the charge is viewed in plan view only W-E edges are seen, and the major part of the magnetised body and its N-S striking edges are not seen. Thus, structurally, as one rotates the well imaged W-E striking interface towards a N-S strike position, the anomaly associated with the structural


Figure 18/11 Same as Figs 18/09 and 18/10 but no
charge on the surface or N-S striking sides of the
body. The charge on a W-E striking dyke is also
shown.
edge will decrease and not be seen. This can clearly
be seen with the dipole model in Fig. 18/9 where the
north and south edges of the dipole source can be
identified by rapid gradients, and the west and east
contacts are associated with a slow gradient field
allowing the anomaly to elongate to the west and
east.
See section 19.2.6 for ways to help minimise this
problem in areas close to the magnetic equator.

18.2 Regional-Residual Anomaly
separation - a classic early method

Prior to quantitative interpretation of gravity and magnetic anomalies to determine the geological source structure, there is often a need to isolate that part of the field caused by the structure of interest. The anomaly associated with this structure will generally have an areal extent that can be up to two or three times the size of the geological structure.

Regional-residual separation is only justified if there is a clear separation of these anomaly types and if the study area contains the complete effects of the residual structure under investigation. If not, then it is possible to remove part of the residual field with the regional, resulting in an incorrect interpretation. When in doubt, do not apply this anomaly separation. Often concession areas only represent small sections of a sedimentary basin, and it is likely the so-called regional associated with the concession area will be caused by the sedimentary basin itself. Thus the oil industry often does not apply any anomaly separation techniques prior to quantitative interpretation.

18.2.1 Hand Method.

The old-fashioned graphical method is still used, since a skilled interpreter can use his experience and knowledge of the given area in ways that are not possible on a computer.
Need to construct:
(i) Anomaly map
(ii) A smoothed version of (i) - the regional
(iii) Remove (ii) from (i), leaving the residual map.

i. Anomaly Map

Figure 18/12: Gravity anomaly map

This method of first generating the regional and then removing it from the observations to generate the residual anomaly is common to most newer computer methods.

ii. Generate Smoothed Regional Map

a) Construct profiles perpendicular to the regional trend, close to traverses.

b) Draw a smoothed regional for each profile.

c) Always construct profiles close to real data points.

In this example the 5 profiles (Figure 18/13), used to construct the regional, have been generated independently of each other. This can result in herring-bone type distortion of the regional field due to incorrectly defining the regional along individual profiles (see Fig. 18/14).



Figure 18/13: Profiling method


Figure 18/14: Bad Regional (no tie lines used)

Remedy: always construct tie lines and generate a smooth regional so that cross-overs match.


Figure 18/15: Good Regional



iii. Generate Residual Map

Remove the regional from the original data set and construct the residual map, which shows an isolated gravity low.

Figure 18/16: Residual Anomaly

18.2.2 Computer Based Methods

There are many analytical methods to carry out
regional-residual separation. These are less
subjective than the hand methods but not
necessarily any better. If you have two or more
computer methods to choose from then the choice
of which is the best method may be subjective!

Most computer methods are based on the data
being in a regular grid format.

Polynomial Surface Fitting Method

The basic assumption made in this method is that the regional field fits a polynomial surface of degree n:

A(x, y) = Trend Surface = Σ (i = 0..n) Σ (j = 0..n-i) Cij x^i y^j

when degree:

n = 0: Trend surface A(x, y) = C00, where C00 is a coefficient which is a constant or DC term (as in DC & AC voltages)

n = 1: Trend surface A(x, y) = C00 + C01 y + C10 x; this is a linear trend surface in the x & y directions

n = 2: Trend surface A(x, y) = C00 + C01 y + C02 y² + C10 x + C11 xy + C20 x², where the surface has curvature and the C coefficient values are determined by least squares.


The weaknesses of this method are:

i) The regional can be biased by the residual field.

Figure 18/17: Polynomial surface fitting

ii) The C coefficients are all interdependent, so the equation cannot simply be truncated, e.g. for

n = 2: A(x, y) = C00 + C01 y + C02 y² + C10 x + C11 xy + C20 x²

and n = 1: A(x, y) = K00 + K01 y + K10 x

then K00 ≠ C00, K01 ≠ C01 and K10 ≠ C10.

Thus one needs to determine first which order of surface best represents the regional. Normally n > 4 is not used.
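A minimal least-squares sketch of the method for scattered data, assuming x and y have been centred and scaled to avoid ill-conditioning; the names are illustrative.

import numpy as np

def polynomial_regional(x, y, g, n):
    # Fit A(x, y) = sum of C_ij x^i y^j for i + j <= n by least squares
    # and return the regional surface and the residual at the data points.
    terms = [(i, j) for i in range(n + 1) for j in range(n + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)
    regional = A @ coeffs
    return regional, g - regional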

Iterative Polynomial Method

Figure 18/18: Iterative Polynomial method

The weaknesses (i) and (ii) above can be minimized by reducing the bias that residual features have in defining the regional field. This method removes the data points of residual anomalies above a threshold value. The polynomial method is repeated without these data points. Additional anomalous data points are then removed and the process is repeated. It is important not to remove more than 1/3 of the original data points or the regional solution becomes unstable.
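A compact sketch of this iteration, re-using the least-squares fit above; the quantile-based rejection threshold and the 2/3 retention floor are illustrative choices.

import numpy as np

def iterative_regional(x, y, g, n=2, n_iter=3, keep_frac=0.9):
    # Refit the polynomial surface, each pass discarding the largest
    # residuals so that residual anomalies do not bias the regional;
    # never discards more than 1/3 of the original points.
    terms = [(i, j) for i in range(n + 1) for j in range(n + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    keep = np.ones(g.size, dtype=bool)
    min_keep = int(np.ceil(2.0 * g.size / 3.0))
    for _ in range(n_iter):
        c, *_ = np.linalg.lstsq(A[keep], g[keep], rcond=None)
        resid = np.abs(g - A @ c)
        new_keep = resid <= np.quantile(resid[keep], keep_frac)
        if new_keep.sum() < min_keep:
            break
        keep = new_keep
    return A @ c     # the regional evaluated at all data points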

Orthogonal Polynomial Method

(Known as the Chebyshev frequency filter; P. L. Chebyshev (1821-1894) was a Russian mathematician.)

This method can also investigate the frequency content of a data set, similar to Fourier analysis methods. Reference: Grant, 1957. A problem in the analysis of geophysical data. Geophysics 22:309-344.

The method consists of approximating a series of mutually orthogonal polynomials of increasing order (in the x and y directions) to a regular grid of data covering an extended area (this avoids edge effects and/or uncertainties).
The series can be represented as follows:

GG(x, y) = B00 + B10 E1(x) + B01 D1(y) + B11 E1(x) D1(y) + ... + Bpq Ep(x) Dq(y) + ...

where GG(x, y) = the series representing the data field (the more accurate, the greater the number of higher order terms used);
B = coefficients to be evaluated;
Er(x) = polynomial of order r in the x direction;
Ds(y) = polynomial of order s in the y direction.

To determine the B values from the data, use least squares to obtain the best fit of the gridded data to the series. This means solving the following relationships:

Brs = grs / ( Σj Er²(j) · Σk Ds²(k) )

where

grs = Σ (j = 1..N) Σ (k = 1..M) G(j, k) Er(j) Ds(k)

and the double sum Σ (j = 1..N) Σ (k = 1..M) G(j, k) runs over all data grid points (N by M grid).

The length of the series (number of terms) is at the discretion of the user. Therefore one can truncate the series at any point and convert back to the field, e.g. one can truncate after order 2 and reform the field. The field will then only contain wavelengths from DC (or constant) down to (width of data field)/2. This could be a good representation of the regional field. How can we quantitatively determine this? The method is similar to Fourier series, where shorter wavelengths relate to higher order terms.

To decide on truncation, look at say the first 100 terms (i.e. polynomials up to and including order 9 in both directions). A plot of the (Brs)² terms will show how the coefficients decrease with higher order terms. If the terms show a sharp cut-off or change in slope then this could give information on the residual-regional cut-off.

In the frequency domain the log plot of the amplitude or power (proportional to A²) provides a means of estimating depth and where the source structures are located. This can provide a powerful means to determine regional-residual separation and allows a range of techniques such as depth slices to be determined (see section 19.4.3).
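A minimal sketch using numpy's built-in 2-D Chebyshev basis: fit up to order 9 in each direction (the "100 terms" above), then truncate the series and reconstruct the long-wavelength regional. The truncation order is the user's choice, as discussed; the names are illustrative.

import numpy as np
from numpy.polynomial import chebyshev as cheb

def chebyshev_regional(grid, order_keep, order_fit=9):
    ny, nx = grid.shape
    x = np.linspace(-1.0, 1.0, nx)       # Chebyshev natural domain
    y = np.linspace(-1.0, 1.0, ny)
    X, Y = np.meshgrid(x, y)
    V = cheb.chebvander2d(X.ravel(), Y.ravel(), [order_fit, order_fit])
    coef, *_ = np.linalg.lstsq(V, grid.ravel(), rcond=None)
    coef = coef.reshape(order_fit + 1, order_fit + 1)
    coef[order_keep + 1:, :] = 0.0       # truncate: zero the higher-order
    coef[:, order_keep + 1:] = 0.0       # terms in both directions
    return cheb.chebval2d(X, Y, coef)    # regional (long wavelengths only)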




SECTION 19: DATA ENHANCEMENT




19.1 What is Enhancement?
There are three stages to interpretation

i. Data Enhancement

ii. Qualitative interpretation

iii. Quantitative interpretation

Data enhancement can be undertaken on line- or grid-based data with the aim of making the interpretation stage easier. This can take the form of transforming and/or filtering the data and generating a range of derivatives. The prime objective is to enhance or isolate features that you wish to identify better prior to qualitative and quantitative analysis.

Since magnetic and gravity anomalies are always broader than the body causing them, this creates problems of anomaly interference and makes delineation of the individual sources difficult.



Figure 19/1: Combined anomalies

Also, an anomaly due to a deep, large body can mask that of a shallow, less magnetic (or less dense) body.


Figure 19/2: Large and small anomalies combining

The solution is to calculate anomaly derivatives of either the magnetic or gravity fields to isolate and/or separate these anomalies in some way.

The following set of profiles (Fig. 19/4) shows the magnetic anomaly and its derivatives over the Cleveland Dyke, Yorkshire. Note how the derivatives shrink in width and have their maxima, distance between maxima, and zero crossing values correlating closely with the position of the edges of the dyke, whereas the total field anomaly does not readily give this information.


Figure 19/3: Location of the Cleveland Dyke and the
field study area


Figure 19/4: Magnetic Components over dyke in N
England (for more information on edge detection
see Phillips, 2000)

Thus derivatives tend to resolve the edges of bodies better and make qualitative interpretation easier. Derivative profiles/maps (especially 2nd derivatives) will by their very nature amplify the noise in the data, which


can be either geological noise (e.g. at shallow depth) or
instrument or data processing noise.

Commonly used derivatives are:
1st order derivatives: dT/dx, dT/dy, dT/dz.
2nd order derivatives: d²T/dx², d²T/dy² & d²T/dz².

These derivatives are used in various ways and combinations to help delineate the edges of, and depths to, structures.


19.2 Derivatives

19.2.1 Horizontal Derivative

In Cartesian co-ordinates, the 1st horizontal derivatives are dT/dx and dT/dy, where T = anomaly. The 2nd horizontal derivatives are d²T/dx² and d²T/dy².


Figure 19/5: Directional horizontal derivative generating +ve and -ve derivative anomalies (red is the anomaly T and black the horizontal derivative dT/dx)

dT/dx can be useful in delineating magnetic contacts. It is severely affected by the inclination of the inducing geomagnetic field and is therefore not a good indicator of the true location of a contact until the magnetic data have been reduced to the pole. The horizontal derivative can be simply calculated in the space domain.

By itself this is not too useful a derivative, since in profile form it is directional (i.e. depending on the direction of +x the gradient could be either negative or positive). This limits its use for mapping.

19.2.2 Total Horizontal Derivative (THDR)
Good for locating contacts in gravity and magnetics.

THDR = √( (dT/dx)² + (dT/dy)² )

As the name suggests, it measures the full horizontal gradient. The values are all positive, thus this derivative is easy to map. In the case of a gravity anomaly the maxima will lie close to (not necessarily precisely over) the boundary of the structure causing the gravity anomaly. In the case of magnetic anomalies the THDR will give an indication of the boundary, but due to the complexity of the anomaly it will not make mapping of the edges of structures very easy. To make it easy the magnetic anomaly needs first to be transformed to RTP (Reduced To Pole) or to pseudo-gravity (see sections 19.3.1 & 19.3.2).
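A two-line numpy sketch for a regular grid with cell size dx (names illustrative):

import numpy as np

def thdr(T, dx):
    # THDR = sqrt((dT/dx)^2 + (dT/dy)^2); always positive.
    dTdy, dTdx = np.gradient(T, dx)   # gradients along rows, then columns
    return np.hypot(dTdx, dTdy)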

Figure 19/6: Full horizontal derivative generating
positive local maxima over high gradients for profile
where dT/dy=0

The following figures show some grid examples



Figure 19/7: Location of example in the Gulf of
Thailand



Figure 19/8: This grid example shows the isostatic
residual anomaly. The area is dominated by N-S
buried horsts and grabens that are clearly imaged in
the data.




Figure 19/9: Full (or Total) Horizontal derivative
gravity anomaly showing the relation of fault
bounded grabens to the derivative maxima. Difficult
to say from this derivative whether the faults are
east or west bounding

19.2.3 First Vertical Derivative (VDR)
Good for locating structures from gravity and magnetic data.

In Cartesian co-ordinates, the first vertical derivative is -dT/dz. The vertical derivative (vertical gradient) is a good method for resolving anomalies over individual structures in total magnetic intensity data and, importantly, suppresses the regional content of the data (Paine, 1986).

Figure 19/10: The first vertical derivative (red, VDR)
makes anomalies smaller in width and match more
closely the causative body. The effect of long
wavelength regional gradients is minimised so
vertical derivatives oscillate about zero. The VDR is
the negative component so that it is in phase with
the parent field. For gravity and magnetic RTP data
the zero crossing of VDR is a further method of
locating edges

The first vertical derivative cannot be calculated in the space domain as in the case of the horizontal derivatives, since it requires information on how the field diverges and decreases with height, and this can only be determined by analysing the spectral content of the data. This can be estimated in the frequency domain. Thus the first vertical derivative is calculated using the 1D Fast Fourier Transform (Gunn, 1975).
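A minimal wavenumber-domain sketch: the spectrum is multiplied by the radial wavenumber k (raised to n for the n-th derivative). As usual, the grid should be padded and tapered first; this sign convention returns the derivative in phase with the anomaly.

import numpy as np

def vertical_derivative(T, dx, order=1):
    ny, nx = T.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    k = 2.0 * np.pi * np.hypot(*np.meshgrid(fx, fy))   # radial wavenumber
    return np.real(np.fft.ifft2(np.fft.fft2(T) * k**order))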



Figure 19/11: First vertical derivative (-dT/dz) of the gravity field


19.2.4 Second Vertical Derivative (SVDR)

The second vertical derivative (d²T/dz² or SVDR) has long been used in gravity and magnetic interpretation (Fuller, B.D., 1967. Two-dimensional frequency analysis and design of grid operators. In: Mining Geophysics, v. II, Theory. SEG, Tulsa, p658-708).

It has the property of taking a zero value over contacts. Contour maps or images of SVDR can be noisy, because you are dealing with a 2nd order derivative, which magnifies noise. In addition, zero values are not necessarily confined to regions over contacts, so SVDR maps should be used with care. The second vertical derivative can be calculated using the 1D Fast Fourier Transform (Gunn, 1975; Brigham, 1974) or in the space domain using Laplace's equation:

d²T/dx² + d²T/dy² + d²T/dz² = 0 (Laplace's equation)

or

d²T/dz² = -( d²T/dx² + d²T/dy² )



Figure 19/12: Space domain determination of a
simple second vertical derivative


The second vertical derivative (SVDR) can be simply calculated from maps by calculating d²T/dx² and d²T/dy². From Fig. 19/12:

b1 - c = Δg1
c - b3 = Δg2

Thus

d²T/dx² = (Δg1/r - Δg2/r)/r = ([b1 - c]/r - [c - b3]/r)/r

so

d²T/dx² = (b1 + b3 - 2c)/r²

Similarly

d²T/dy² = (b2 + b4 - 2c)/r²

Thus

d²T/dz² = 4( c - [b1 + b2 + b3 + b4]/4 )/r²

where (b1 + b2 + b3 + b4)/4 is the mean value about the circle.

Only considering the nearest 4 points does not sample the true curvature of the field (see Fig. 19/13).

Figure 19/13: Linear gradients between grid nodes are a simple means of determining curvature and in general are effective.

Thus more grid points need to be used in practice.
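The 4-point scheme derived above translates directly into numpy; a sketch for a grid with cell size r = dx (production software uses larger operators, as noted):

import numpy as np

def svdr(T, dx):
    # d2T/dz2 = (4c - (b1 + b2 + b3 + b4)) / r^2 at interior cells
    d2 = np.zeros_like(T)
    d2[1:-1, 1:-1] = (4.0 * T[1:-1, 1:-1]
                      - T[:-2, 1:-1] - T[2:, 1:-1]
                      - T[1:-1, :-2] - T[1:-1, 2:]) / dx**2
    return d2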
Important:
i) Quantitative interpretation of SVDR is limited to determining the location of edges by mapping the zero contour. Units of SVDR are mGal/km² or nT/km², and the amplitudes and gradients are very sensitive to the method used to calculate them.

ii) The nature of SVDR maps is dependent on the grid size, r, and on the data distribution. Too small a grid (less than the data point spacing) will generate second derivative anomalies that are the direct result of the interpolation method used. Too large a grid (greater than the data point spacing) will be sampling the lower frequency component of the field, and thus the second derivative will be giving information on structures at greater depth. Thus the method could be used with spectral depth slice methods (see section 19.4.3).

iii) The SVDR method, used correctly, is an aid to regional-residual separation.

iv) The first vertical magnetic derivative can be measured using magnetic gradiometers, whereas in gravity this derivative has not until recently been directly measured (see section 10). Good estimates of the first vertical derivative can be obtained from total field measurements by Fourier methods.

19.2.5 Dip-Azimuth Derivative (Grid Data)
Good for Gravity and Magnetic Data and Derivatives

This method uses a range of false colour shading to
generate an image that identifies all the 2D structural
trends in a given area.

A weakness of conventional uni-directional shaded relief imaging (see section 19.14) is the directional nature of the shading. This is illustrated in Fig. 19/14, where the same gravity data for the Gulf of Thailand used in Fig. 19/8 are now illuminated in two different directions, from the west (left) and from the south (right). The illumination has a dramatic effect on identifying trends. The N-S trend is poorly seen when the illumination is parallel to the trend (i.e. no shadow is generated). The single NW-SE trend is seen in both, since both illuminations affect it equally.




Figure 19/14: Same grid but illuminated from different
directions to illustrate the problems of shading an image


Figure 19/15: Principles of the dip-azimuth derivative. The anomalies are illuminated from four different directions with red, white, blue and black light. This allows all anomaly trends to be equally illuminated and identified.

Thus a single direction of shading strongly biases the visualisation of trends to those that are perpendicular to it. To overcome this problem four different colour lighting directions are used, from the N, S, E and W, as shown in Fig. 19/15. East-facing slopes will be illuminated by red light, SE-facing slopes will be illuminated by red and white light, etc. An example for the whole of the Gulf of Thailand, using free air anomaly data, is shown in Fig. 19/16.






Figure 19/16: Application of the Dip Azimuth
illumination to the Gulf of Thailand satellite gravity
data. The data reveal tectonic controls that are
controlling/offsetting the dominant N-S horst and
graben structures already seen in Figs.19/8 to 19/11

The N-S and the NW-SE trends are all clearly seen. This method can be used on any data set or derivative and is strictly a display method.


19.2.6 Analytic Signal (AS) (Profile and Grid Data)
Good for Gravity and Magnetic Data

Magnetic Bodies: The Analytic Signal method (or AS) is also known as the total gradient method.

For the profile case in direction x the expression is:

A(x) = |A| = √( (∂T/∂x)² + (∂T/∂z)² )

For the grid (x, y) case the expression is:

A(x, y) = |A| = √( (∂T/∂x)² + (∂T/∂y)² + (∂T/∂z)² )

To understand the complexity of the magnetic anomaly, the 2-D case is shown where the magnetic anomaly has been deconvolved into an amplitude function A(x) and a phase function φ(x).

Figure 19/17: Cross section view of 2-D dyke model showing components of a magnetic anomaly (Rao et al., 1991)

For the profile case the complex Analytic signal A(x) can be expressed in terms of the amplitude function |A| and phase function φ as follows:

A(x) = |A| exp(jφ)

where the Analytic signal amplitude is:

|A| = √( (∂T/∂x)² + (∂T/∂z)² )

and the local phase is:

φ = tan⁻¹( (∂T/∂z) / (∂T/∂x) )

This section will focus on |A|, while section 19.2.7 will focus on φ.

Nabighian (1972; 1974; 1984) developed the original notion of the Analytic signal, or energy envelope, of magnetic anomalies. An important characteristic of the Analytic signal is that it is essentially independent of the direction of magnetisation of the source. Thus the Analytic signal will peak over the magnetic structure, with local maxima over its edges (boundaries or contacts).

The total gradient is simply the Pythagorean sum of the
along line horizontal derivative and vertical derivative
(the resulting sum is positive). It peaks directly over the


top of contacts, but is somewhat noisier and has lower
resolution than the horizontal derivative. Because it
requires the vertical derivative, it has to be calculated
using 1D Fast Fourier Transform methods (Gunn,
1975).
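A grid sketch combining the two: horizontal derivatives in the space domain, the vertical derivative in the wavenumber domain (assumes a regular, tapered grid; names illustrative).

import numpy as np

def analytic_signal(T, dx):
    # AS = sqrt(Tx^2 + Ty^2 + Tz^2), the total gradient amplitude.
    ny, nx = T.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    k = 2.0 * np.pi * np.hypot(*np.meshgrid(fx, fy))
    Tz = np.real(np.fft.ifft2(np.fft.fft2(T) * k))   # vertical derivative
    Ty, Tx = np.gradient(T, dx)                      # horizontal derivatives
    return np.sqrt(Tx**2 + Ty**2 + Tz**2)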

The width of the highs is an indicator of the depth to the contact, as long as the signal arising from a single contact has been resolved. The rule is that the Half Width at Half Maximum (HWHM) = depth. However, depths to features which are not perpendicular to the flight lines are overestimated.

Roest, Verhoef and Pilkington (1992) demonstrated in Figure 19/18 that the Analytic signal peaks over the edges of magnetic bodies. One can also consider that the amplitude of the Analytic signal is simply related to the amplitude of magnetisation.

Figure 19/19 shows a good example of how the Analytic
signal defines the dykes and contacts for a shallow
basement area in Tanzania, Africa.


Figure 19/18: Plot of the absolute value of the Analytic signal, which is also known as the energy envelope. This envelope can be obtained by phase shifting an anomaly, in this case over a magnetic contrast, over a range of 360°. The individual curves inside the envelope represent the anomaly shifted in steps of 30°.

Macleod et al. (1993) also showed that for qualitative interpretation it may be preferable to have a function that produces highs over magnetic bodies. This can be partially achieved by first integrating the total field anomaly, which can be done in the frequency domain. The results, particularly at low latitudes, can enable apparent magnetisation maps to be drawn and N-S contacts to be tracked.

Practice also shows that the best Analytic signal results are obtained if the TMI data are first Reduced to Pole (RTP).
are obtained if the TMI are first Reduced to Pole (RTP)
Gravity Bodies
The equivalent expressions for the gradients in the
gravity case are:

Mag. Grav.
oT/ox o
2
g/oxoz
oT/oy o
2
g/oyoz
oT/oz o
2
g/oz
2

This results in

2
2
2
2
2
2
2
z
g
z y
g
z x
g
) y A(x,
|
|
.
|

\
|
+
|
|
.
|

\
|
+
|
|
.
|

\
|
=
c
c
c c
c
c c
c


This is basically replacing T the magnetic field by the
first vertical derivative of the gravity field (point source
for mag falls off by 1/r
3
where as grav falls off by 1/r
2
).
To use this the above expression requires a good data
distribution to start with to obtain reliable derivatives. A
simpler expression is to use

2 2 2
z
g
y
g
x
g
) y Ag(x, |
.
|

\
|
+
|
|
.
|

\
|
+ |
.
|

\
|
=
c
c
c
c
c
c





Figure 19/19: Example of Analytic signal (AS)
derived from the TMI for a basement area in
Tanzania showing locations of high susceptibility
dykes and contacts and intrusives (Kimberlites?).
Best results are obtained by using AS on RTP data.


Equatorial Areas - special case for magnetics
(see also Section 18.3)

Interpretation of magnetic anomalies close to the magnetic equator is complicated for several reasons:
- the ambient field is horizontal;
- the ambient field is weak (~35,000 nT compared to up to 70,000 nT at higher latitudes);
- structures striking N-S are difficult to identify.

Magnetic anomalies are generated when the flux density cuts the boundary of a structure (see Fig. 19/20). If the structure strikes parallel with the field, then in equatorial areas the flux stays within the structure and no anomaly is generated. An anomaly will only be generated if the flux lines cut the edge of a body, generating free positive (north) or negative (south) poles. Figure 19/20 demonstrates this.
demonstrates this.



Figure 19/20: Shows a rectangular body at the magnetic equator with its sides parallel and perpendicular to the Earth's magnetic field (Main Field Direction). Where the Main Field cuts the W-E boundaries of the body it generates free south poles (-) and free north poles (+). For N-S contacts paralleling the Main Field no free poles are generated.

A similar effect is seen when a magnetic field is reduced to the equator (RTE) instead of to the pole (RTP): N-S structures are difficult or impossible to identify in RTE maps. This is demonstrated in Figure 19/21 for 3D model data, showing how magnetic data at the equator lack definition of N-S trending structures whereas W-E structures are satisfactorily imaged.

This effect is shown in Fig. 19/22. Please note: in Fig. 19/22 the TMI has Inclination = 62° and Declination = 12°. This allows both stable RTP and RTE anomalies to be derived. In magnetic equatorial regions where the inclination is within the range +15° to -15°, the RTP is generally unstable and cannot be derived.

However structures are never perfectly aligned with the
main field and often have an en echelon form as shown
in Figure 19/23. This allows the flux to leak out at the
offsets to generate small anomalies. When dealing with




Figure 19/21: 3D model data for magnetic fields
transformed to the pole (RTP) and to the Equator
(RTE) and their respective Analytic Signal
anomalies. The degrading of N-S structures is
clearly seen in the Analytic Signal of the RTE.



Figure 19/22: A: RTP of data B: RTE of same data.
The N-S striking structures are indistinct or absent
in B.

a 2D feature, this often shows as a line of small circular anomalies, often called a "string of pearls". The 2D feature can then be image-enhanced by determining the Analytic signal.

A real TMI data example is taken from Namibia (Fig. 19/24) and reduced to the Pole and to the Equator. The Analytic signal is certainly superior for the RTP data, but the RTE does not do too bad a job.




Figure 19/23: En Echelon structure at the Equator
paralleling the main magnetic field. At structural
offsets the flux can leak and generate small dipole
anomalies often referred to as String of Pearls



Figure 19/24: Example of the Analytic signal for north-trending contacts.

Beard (2000) shows that the Analytic signal is the best derivative to recover the N-S contacts. This is shown in Fig. 19/24, where the Analytic signal does a good job of identifying N-S edges.

Oil and gas pipelines often appear to have a string of dipole anomalies due to remanence changes where the pipes have been welded together, since no account was taken of the differing remanence orientation of each pipe before welding. Cathodic protection of pipelines (see section 15.4) can result in suppressing the remanent component and generating an induced linear 2D magnetic anomaly due to the current within the pipe.

19.2.7 Local Phase or Tilt Derivative (φ or TDR)

As shown in section 19.2.6, the local phase is defined as:

φ = tan⁻¹( (∂T/∂z) / (∂T/∂x) )
When analysing potential field (gravity and magnetic)
data the vertical (VDR) and the total horizontal (THDR)
derivatives of the gravity and reduced to pole magnetic
data are used to map the lateral extent of anomalous
density and magnetisation bodies and their edges. The
derivatives work well but have limitations in that the
magnetic and gravity responses are dependent on the
density and susceptibility contrasts present. If the
contrast is large the anomaly will be large, if the
contrasts are small the anomaly will be small. This is
true for VDR and THDR and AS. As such it may be
difficult to image subtle anomalies due to the presence
of large amplitude anomalies. The dynamic range of the
map data is dominated by the anomalously high/low
field values. The dynamic range in turn is normally used
to control colour map fill by colour equalisation,
particularly in computer based displays. Thus small
amplitude signals are sometimes difficult to visualise.

The local phase, φ, or Tilt derivative (TDR) is used in seismic data analysis and was first reported for potential field studies by Miller and Singh (1994), who used the name Tilt. Recent work has been done by Bruno Verduzco (MSc Exploration Geophysics dissertation, Univ. of Leeds, 2003) and published as Verduzco, Fairhead, Green and McKenzie (2004).


Figure 19/25: Determination of the tilt derivative or
angle using Reduced to pole data

The TDR is defined as follows:

Profile in the x direction:

TDR = ATAN( (∂T/∂z) / (∂T/∂x) )

Grid (x, y):

TDR = ATAN( VDR / THDR )

where ATAN is ARCTAN or tan⁻¹.
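A grid sketch, assuming the VDR is computed in the wavenumber domain as in section 19.2.3; arctan2 is used so that a zero THDR does not cause division by zero.

import numpy as np

def tilt_derivative(T, dx):
    # TDR = arctan(VDR / THDR), bounded within +/- pi/2 (+/-1.57 rad).
    ny, nx = T.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    k = 2.0 * np.pi * np.hypot(*np.meshgrid(fx, fy))
    vdr = np.real(np.fft.ifft2(np.fft.fft2(T) * k))
    dTdy, dTdx = np.gradient(T, dx)
    thdr = np.hypot(dTdx, dTdy)                 # always >= 0
    return np.arctan2(vdr, thdr)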




Figure 19/26: Derivatives over a vertical contact for RTP data. The derivative anomaly units are nT/m and the TDR is in radians. Since the numerator and denominator have the same units, the TDR is dimensionless and limited to within +/-1.57. The TDR behaves like the VDR but without the overswing.

The relation between the TDR and other standard derivatives is shown in profile form in Figs. 19/26 (contact model), 19/27 (block or thick dyke model) and 19/28 (thin dyke model) with vertical magnetisation (RTP). The zero crossings of the Tilt derivative closely delineate the edges of structures.

Figure 19/27: Derivatives over a block model with RTP showing the location of the maxima and the block edges. Again the TDR does not exhibit the overswing seen in the VDR.

Figure 19/28: Dyke model with RTP showing similar features to the block model, except that the maxima on the Analytic Signal (AS) have been replaced by a single maximum.

The major advantages of the Tilt derivative in grid form are:

- its ability to normalise the signal field to within +/-1.57 (or +/-π/2), which are the limits of the ARCTAN (ATAN) function;
- its effect is similar but superior to AGC (see section 19.7), amplifying weak signals and suppressing strong signals. This is illustrated clearly in profile form in Fig. 19/29 by the dark blue profile, where all other derivatives have large dynamic ranges and prevent the more subtle anomalies being observed;
- its sign is controlled by the vertical derivative (VDR), since THDR is always positive. This allows easy comparison between the TDR and VDR derivatives (see Fig. 19/28).

Since the Tilt derivative is limited to within +/-1.57 radians, colour equalisation provides additional help in visualising what were previously subtle anomalies. Thus the Tilt derivative has major uses in structural mapping in the mineral exploration environment to define subtle trends and geological fabric.

Figure 19/29: The important property of the Tilt derivative (TDR) is that it is normalised between +/-1.57 radians, thus providing a simple and elegant AGC method. The location of the large anomaly is at A in Fig. 19/31 (see Figure 27/2 for a better illustration).





Figure 19/30: The RTP data are used in following
example (Fig. 19/31).





Figure 19/31: Comparison between the Tilt derivative (TDR) and the vertical derivative (VDR) for RTP data. The strength of the VDR signal is related to the amplitude of the RTP field, whereas the TDR is evenly modulated and preserves the same phase relation as the VDR. The white line is the profile location shown in Fig. 19/29. A is the location of the large TMI magnetic anomaly of amplitude 1200 nT!


19.2.8 Local Wavenumber (K)

Profile Method for K

Nabighian (1972) defined the complex analytic signal for a two-dimensional model in two ways:
- in terms of horizontal and vertical derivatives
- in terms of the total field and its Hilbert transform

A(x, z) = ∂T(x, z)/∂x - j ∂T(x, z)/∂z

where T(x, z) is the magnitude of the total magnetic field, j is the imaginary number, and z and x are Cartesian co-ordinates for the vertical direction and the direction perpendicular to strike. The above equation is equivalent to

A(x, z) = |A| exp(jφ)

where

|A| = √( (∂T/∂x)² + (∂T/∂z)² )

and

φ = tan⁻¹( (∂T/∂z) / (∂T/∂x) )

By definition, the Analytic signal amplitude and the local phase are given above. Rao et al. (1991) described a manual method using profile-based data to determine the analytic-signal amplitude and phase. Thurston and Smith (1997) have gone one step further and introduced the local frequency, denoted f, and the local wavenumber, K.

The local frequency, f, is defined as the rate of change of the local phase with respect to x. This quantity is given by

f = (1/2π) ∂/∂x [ tan⁻¹( (∂T/∂z) / (∂T/∂x) ) ]

In the analysis of potential fields it is often more convenient to use the local wavenumber, denoted by K, rather than f, where K = 2πf.

Making this substitution, and using the differentiation rule

d(tan⁻¹ φ)/dx = (1/(1 + φ²)) dφ/dx

gives

K = (1/A²) [ (∂²T/∂x∂z)(∂T/∂x) - (∂²T/∂x²)(∂T/∂z) ]

Figure 19/32 shows the derivatives of a dipping contact. The dip is 135°, the depth to the top is 100 m, the ambient field strength is 60,000 nT, the susceptibility contrast is 0.01 SI, and the declination is zero degrees (for this particular model 2Kfc sin(d) = 848 nT and 2I - d - 90 = 75°).



Figure 19/32: Source Parameter Imaging (SPI method, after Thurston and Smith, 1997). Shows the local phase and local wavenumber and how they relate to the contact edge. (Note: their vertical derivative is -∂T/∂z and not ∂T/∂z; the vertical derivative should be in phase with the TMI anomaly.)





Figure 19/33: Phase and local wavenumber for the Cleveland Dyke (see Figs. 19/3 and 19/4)

Fig. 19/33 gives the phase and local wavenumber derived from the magnetic profile data over the Cleveland Dyke. Because the data are effectively differentiated three times, the local wavenumber can only be determined using very clean, high resolution data.

The local wavenumber thus has important features:

- it provides a high definition visualisation of contacts;
- it is independent of the inclination and declination of the main field (and thus independent of remanence);
- the depth to the contact can be calculated from the inverse of the local wavenumber values. This is covered in section 27.

The first feature, that the local wavenumber defines contacts, is illustrated in Figure 19/34. In fact it significantly outperforms the Analytic signal. As the dyke model becomes narrower, the two small maxima of the Analytic signal coalesce into a single positive anomaly. This does not happen as rapidly for the local wavenumber.

The anomalies are independent of induced or remanent
effects so that edges are accurately mapped.

Grid Method for K

Thurston and Smith (1997) provide the profile equation as

K(x) = (1/A²) [ (∂²T/∂x∂z)(∂T/∂x) - (∂²T/∂x²)(∂T/∂z) ]

Using subscripts to indicate differentiation, the Thurston and Smith (1997) formulation becomes:

K(x) = ( Txz Tx - Txx Tz ) / ( Tx² + Tz² )


Figure 19/34: The local wavenumber and the Analytic signal response to a thin dyke. Although maxima are located over the edges of the block, the local wavenumber derivative retains maxima definition well beyond the width at which the AS maxima coalesce.

Using Laplace's equation for a two-dimensional source, Mxx + Mzz = 0, we can write

K(x) = ( Txz Tx + Tzz Tz ) / ( Tx² + Tz² )

To express this equation in terms of x and y coordinates, the horizontal derivatives are resolved along the direction of the total horizontal gradient Mh = √(Mx² + My²), using the identity

Mhz = ( Mxz Mx + Myz My ) / √( Mx² + My² )

Therefore,

K(x, y) = ( Mxz Mx + Myz My + Mzz Mz ) / ( Mx² + My² + Mz² )

or in our standard notation

K(x, y) = (1/A(x, y)²) [ (∂²T/∂x∂z)(∂T/∂x) + (∂²T/∂y∂z)(∂T/∂y) + (∂²T/∂z²)(∂T/∂z) ]
Using this expression provides a clean local wavenumber compared to deriving it via the total horizontal derivative of the Tilt derivative, which results in singularities. Figure 19/35 illustrates the improvements over the total horizontal derivative of the RTP, and Figure 27/4 (section 27) compares it with the Analytic Signal and the total horizontal derivative of the pseudo-gravity.
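A sketch of the grid formula, with the vertical derivatives computed in the wavenumber domain; since the data are effectively differentiated twice more than the TMI, the result is very noise-sensitive, so clean data (and usually some smoothing) are assumed. Names are illustrative.

import numpy as np

def local_wavenumber(T, dx):
    # K = (Txz*Tx + Tyz*Ty + Tzz*Tz) / (Tx^2 + Ty^2 + Tz^2)
    ny, nx = T.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    k = 2.0 * np.pi * np.hypot(*np.meshgrid(fx, fy))
    vderiv = lambda G: np.real(np.fft.ifft2(np.fft.fft2(G) * k))
    Ty, Tx = np.gradient(T, dx)
    Tz = vderiv(T)
    Tyz, Txz = np.gradient(Tz, dx)
    Tzz = vderiv(Tz)
    A2 = Tx**2 + Ty**2 + Tz**2
    return (Txz * Tx + Tyz * Ty + Tzz * Tz) / np.maximum(A2, 1e-12)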



Figure 19/35: Comparison between the local wavenumber derivative and the Total Horizontal Derivative (THDR) for the RTP data. The strength of the THDR signal is related to the amplitude of the RTP field, whereas the local wavenumber is evenly modulated and preserves the same phase relation as the THDR.

For further insights into normalised derivatives see section 19.8.


19.3 TRANSFORMS


19.3.1 Reduction-To-the-Pole (RTP) or Equator (RTE) - Magnetics only
(Analytic signal and local wavenumber anomalies are alternative methods.)

Because the appearance of a magnetic map is heavily
dependent on the inclination of the ambient
geomagnetic field, a commonly employed procedure is
to transform the map to that which would be observed in
a vertical geomagnetic field situation. This has the
effect of centring anomaly highs over their causative
bodies and (in many cases) minimises the attendant
lows. The RTP transformation is fairly easily achieved
by wavenumber domain filtering provided that:

- the ambient field direction is known (the IGRF or CM
is normally used);

- there are no appreciable remanence effects;

- the local inclination is greater than 15°.

- For inclinations smaller than 15°, RTE should be
applied or some modification to the RTP method
applied.

Summary: Since magnetic anomalies result from the
variation in rock magnetisation, which is a vector
quantity, anomalies will normally have positive and
negative components. The amplitude and phase of
these components will depend on the orientation of the
source structure, the latitude of the source, etc. At the
north pole the induced magnetisation anomalies tend to
take on a simple geometry, losing their dipolar nature.
The anomaly is centred over the
causative structure in a similar way to gravity anomalies.
Thus transforming the magnetic anomaly field from
a given latitude (with given inclination) to the pole makes
interpretation easier. Within 15° of the magnetic
equator the method of reduction to the pole is often
unstable and fields can instead be reduced-to-the-equator.

The reduction to the pole method works on the principle
that the anomalies are all induced. If an anomaly has a
component of remanence then its dipole character will
still remain after reduction-to-the-pole has taken
place. Further anomaly enhancement methods can be
applied to reduced-to-the-pole data to produce
derivatives, residuals etc.

Experience shows that most, if not all, derivative
methods perform best if the TMI data are first
transformed to the pole.

At the north or south poles the magnetic anomaly is
positive and lies directly over the magnetic body. Why is
the anomaly at the south pole positive? The answer: a
positive anomaly is one that enhances the
geomagnetic field. The geomagnetic field at the south
pole points vertically up and at the north pole vertically
down.






Figure 19/36: Magnetic anomaly at the N or S pole
with profile.



At the magnetic equator the anomaly is negative and
lies directly over the magnetic body but also has
troublesome positive anomalies on either side.





Figure 19/37: Magnetic anomaly at the Equator and
profile for Declination 10° and Inclination 0°


At all other magnetic latitudes the anomalies are
asymmetric and do not correspond simply with the
magnetic bodies. This makes it hard for the interpreter
to delineate the boundaries of the bodies causing the
anomalies. The solution is to reduce the total field
anomaly to the north or south pole, or to use the Analytic
signal or local wavenumber, which are both inclination
independent.



Figure 19/38: Inclined field in the northern and
southern hemispheres.

To do RTP requires the data to be in digital form, a
computer and appropriate software (e.g. GETgrid). As
indicated, near the magnetic equator it is impossible to
reduce to the pole and maintain complete magnetic
field strength integrity. Reduction-to-the-pole is
recommended before qualitative interpretation for
regions at magnetic latitudes less than 75°. For
latitudes greater than 75° the field is essentially already RTP.

A number of authors have addressed the problem of
reduction to the pole. The methods either modify the
amplitude correction in the magnetic North-South
direction using frequency domain techniques (Hansen
and Pawlowski, 1989; Mendonca and Silva, 1993),


Figure 19/39: Example of TMI and its RTP equivalent
from Tanzania (Inclination = -35°). Note the
sharpness of the RTP anomalies.



Figure 19/40: Examples of magmatic intrusions with
remanence due to the intrusions being of different ages.
The RTP does not give clean solutions, whereas the
Analytic Signal (AS) does a good job of identifying the
individual intrusions.

or calculate an equivalent source in the space domain
(Silva, 1986). In all cases, induced magnetisation of
magnetic sources is assumed. MacLeod et al.
(1993) have found that the simplest and
most effective technique that addresses the amplitude
problem is that developed by Grant and Dodds in the
development of the MAGMAP FFT processing system of
GEOSOFT Ltd in 1972.
The reduction to the pole operator can be expressed as

L(\theta) = \frac{1}{\left[ \sin(I) + i \cos(I) \cos(D - \theta) \right]^2}

where \theta is the wavenumber direction, I is the magnetic
inclination and D is the magnetic declination. It can be


seen that as I approaches 0 (the magnetic equator) and
(D - \theta) approaches \pi/2, the operator approaches infinity.
Grant and Dodds (1972; see MacLeod et al., 1993)
addressed this problem by introducing a second
inclination (I') that is used to control the amplitude of the
filter near the equator:

L(\theta) = \frac{1}{\left[ \sin(I') + i \cos(I) \cos(D - \theta) \right]^2}

In practice, I' is set to an inclination greater than the true
inclination of the magnetic field. By using the true
inclination in the i cos(I) term, the anomaly shape will be
properly reduced to the pole (induced magnetisation
only), but by setting I' > I, unreasonably large
amplitude corrections are avoided. Controlling the RTP
then becomes a matter of choosing the smallest I' that
still gives acceptable results. This will depend on the
quality of the data and the amount of non-induced
magnetisation present in the area under study.
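As an illustration, a numpy sketch of this amplitude-controlled RTP operator is given below. This is a hedged reading of the formula above, not GEOSOFT's MAGMAP implementation; the function and parameter names are assumptions.

import numpy as np

def rtp(tmi, dx, inc_deg, dec_deg, inc_amp_deg=None):
    # Reduce a TMI grid to the pole; inc_amp_deg is the amplitude
    # inclination I' (defaults to the true inclination I)
    I = np.radians(inc_deg)
    Ia = np.radians(inc_amp_deg if inc_amp_deg is not None else inc_deg)
    D = np.radians(dec_deg)
    ny, nx = tmi.shape
    kx = np.fft.fftfreq(nx, d=dx)
    ky = np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    theta = np.arctan2(KY, KX)                      # wavenumber direction
    # L(theta) = 1 / [sin(I') + i cos(I) cos(D - theta)]^2, so we divide
    # the spectrum by the bracketed term squared
    denom = (np.sin(Ia) + 1j * np.cos(I) * np.cos(D - theta)) ** 2
    denom[0, 0] = 1.0                               # leave the mean level untouched
    return np.real(np.fft.ifft2(np.fft.fft2(tmi) / denom))

Choosing inc_amp_deg larger than the true inclination caps the North-South amplitude correction, exactly as described above for I'.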

Although the amplitude correction of the RTP operator
can be controlled, results will still be invalid for
remanently magnetised bodies and where anisotropy is
present. Furthermore, such bodies are difficult to
interpret even when not distorted by reduction to the
pole. It would seem to be preferable to produce a result
that simply provides a measure of the amount of
magnetisation, regardless of direction (hence Analytic
signal).

Further developments in RTP allow for variable inclination
and declination. Often large survey areas (e.g. Thailand)
cover a range of inclinations. Most commercially
available software only handles constant inclination and
declination. This can be overcome by overlapping
bands with progressively changing inclination and
compiling the map from the central portions of each band. An
RTP method for spatially varying inclination and
declination has been developed by Swain (XXXX).

19.3.2 Pseudo-Gravity and Pseudo Magnetics
A means of rectifying derivative problems as a
result of RTP

In Section 1 it was stated that there is a general
relationship between density and susceptibility of rock
bodies such that high density rocks are more often than
not associated with high susceptibility (e.g. they contain
a high percentage of magnetite). Assuming that there is
such a relationship, then a magnetic grid can be
transformed to a grid of pseudo-gravity (Bott et al., 1966;
Gunn, 1975) via the wavenumber domain. The
process requires a pole reduction, but adds a further
procedure which converts the essentially dipolar nature
of a magnetic field to its equivalent monopolar form.
The result, assuming the relationship holds and with
suitable scaling, can be directly comparable with a
gravity map. It shows the gravity map that would have
been observed if density were directly proportional to
magnetisation (or susceptibility). As for pole reduction,
the process is meaningless (or too complex to perform
routinely) if there is significant remanence present.

Comparison of gravity and pseudo-gravity maps reveals
a good deal about the local geology. Where anomalies
coincide, the source of the gravity and magnetic
disturbances is probably the same geological structure.
In that case the ratio of density contrast to susceptibility
contrast may be calculated from the relative amplitudes
and the nature of the anomalous body deduced. A very
dense, very magnetic body is likely to be an ultra basic
intrusion. A dense, somewhat magnetic body of
appropriate form may be a basement horst. A low
density, low susceptibility body at basement level could
be a granitic intrusion into more basic basement rock, or
a fault-bounded basement low.

The pseudo-gravity transform exploits Poisson's relation
(written out below), which connects the gravity and magnetic
effects of a single source. A pseudo-gravity map will normally be
annotated with the assumed ratio of density to
susceptibility contrasts. For any given feature, this
figure is just scaled in the ratio of the observed
amplitudes to obtain the ratio for the body of interest.
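As a reminder of the underlying theory (a standard statement, with sign and unit conventions varying between texts; in SI units a factor of \mu_0 / 4\pi appears), Poisson's relation for a body of uniform magnetisation M and density \rho can be written

V = \frac{M}{G \rho} \frac{\partial U}{\partial \hat{m}} = \frac{M}{G \rho} g_{\hat{m}}

where V is the magnetic potential, U the gravitational potential of the same source, G the gravitational constant, \hat{m} the unit vector in the magnetisation direction and g_{\hat{m}} the component of gravity in that direction. The pseudo-gravity transform in effect inverts this relation, integrating the RTP field once vertically (dipolar to monopolar form) and rescaling by the assumed density to magnetisation ratio.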

Gravity changes fastest in the neighbourhood of
contacts (faulted or otherwise) between rocks of
different density. Therefore the total horizontal gradient
of pseudo-gravity (Cordell & Grauch, 1985) may be
employed in exactly the same way as the Total
horizontal gradient of gravity to delineate contacts. The
peak horizontal gradient occurs slightly down dip of the
top edge of a shallowly dipping contact, but marks it
clearly. It occurs directly over any steeply dipping
contact. A grid of Total horizontal gradient of the
pseudo-gravity may therefore be contoured or imaged
and the maxima observed on such a presentation would
mark the contacts (Cordell & Grauch, 1985; Grauch &
Cordell, 1987).

In Section 19.4.4 the pseudo gravity transform is
shown to be an important intermediate step (Fig
19/42) in tracking the edges of structures. RTP
signals have a small negative component (see
Fig 19/41) which, when converted into the Total
horizontal derivative, generates multiples (ringing)
of the anomaly and thus detracts from the
structural signal being traced. This problem is
illustrated in Fig 19/41 by converting a dipole source
to RTP and Pseudo Gravity and then deriving their
respective THDR. The edge geometry of the
Pseudo Gravity THDR is simple since it only
delineates the maxima associated with the source,




Figure 19/42: Example of generating a Total Horizontal Derivative grid via Pseudo Gravity and via RTP. Plotting the
maximum values (ridge grid) of the Total horizontal derivatives for both the Pseudo Gravity and RTP reveals the
ringing effect of the RTP-THDR; structural lineaments are thus more clearly defined via the Pseudo gravity
route (see Fig. 19/60 for an enlarged version of the ridge grid).

whereas the maxima of the THDR of the RTP delineate
both the source and a concentric maximum resulting from
the maximum gradient associated with
the surrounding negative anomaly.




Figure 19/41: Ringing of the THDR for RTP data



19.4 FILTERS

Filters are devices which pass, or fail to pass,
information based on some measurable discriminator.

Usually the filter discriminant is frequency, and the filter
alters the amplitude and/or phase spectra of signals
which pass through it. The RC filters used originally in
marine data acquisition (see Section 8) had the undesirable
effects of affecting the amplitude at long periods (e.g. a 3
minute RC filter reduces amplitudes out to 15 minutes
period) and changing the phase. Digital filters can
correct for all these effects and provide an output that has
not been distorted in any significant way. A reference
worth reading is Sheriff and Geldart (1983), Vol. 2, pp. 185-194,
Cambridge University Press.

In potential field studies there is a tendency to define the
parameters of the filters by initial analysis of the data
being filtered. This leads to the concept of geological
filters rather than plain mathematical filters. Basically
this is using the filters intelligently with the data rather than
applying them blindly.

All filters are normally Band-pass or Band-stop types
with an upper and a lower cut-off wavelength.


19.4.1 Convolution or Space Domain Filters

Simple 1D and 2D Filters or Simple Mathematical
Profile filters

Consider a line of data points A1 to A6.







Figure 19/43: Simple profile running means. Note that the
larger the filter operator, the greater the loss of data
at the start and end of the profile.



Figure 19/44: The simple running mean seen as a
profile operator

Instead of simple averaging, the points closest to the centre of
the filter can be weighted, such that:

3 point weighted mean:

\frac{A_1 + 2A_2 + A_3}{4}, \quad \frac{A_2 + 2A_3 + A_4}{4}, \quad \frac{A_3 + 2A_4 + A_5}{4}, \quad \frac{A_4 + 2A_5 + A_6}{4}

5 point weighted mean:

\frac{A_1 + 2A_2 + 4A_3 + 2A_4 + A_5}{10}



Figure 19/45: Weighted running means

The longer the filter, the smoother the resulting
profile; there is no change in the overall mean value. The above
method can now be applied to a spatial grid, where the 3
point running mean becomes a 9 point grid operator.

The simplest grid filter is a 2 x 2 mean filter.


Figure 19/46: A simple 2 x 2 filter

Applying the 2 x 2 mean filter again to the filtered grid
is equivalent to a 3 x 3 filter on the original data. A 2 x 2 filter
has the undesirable effect of creating new grid
locations (offset by half a cell), so a 3 x 3 filter is preferred
(see Hanning Filter).

If the 2 x 2 mean filter is operated 4 times then the
resulting filter is a 25 point weighted filter.



Figure 19/47: Applying the 2x2 filter 4 times to a grid

Such spatial filters can be shaped (by changing the
weighted values) using Fourier methods to remove
frequencies of any wavelength.
These include high pass, low pass and band pass filters.

Hanning Filter: A Hanning filter is defined as a 3 x 3
point operator with coefficients on the central row and
column in the ratio 1:2:1 before normalization:

0 1 0
1 2 1     (divide output by 6)
0 1 0

This is a simple smoothing filter; multiple passes of the
filter can be combined in one operation. It can be considered
a very light cosmetic filter to clean up data.
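A minimal Python sketch of this 3 x 3 Hanning convolution follows (numpy assumed; the edge handling and names are illustrative choices):

import numpy as np

HANNING = np.array([[0., 1., 0.],
                    [1., 2., 1.],
                    [0., 1., 0.]]) / 6.0             # coefficients above

def convolve3x3(grid, kernel):
    padded = np.pad(grid, 1, mode="edge")            # replicate edges
    out = np.zeros(grid.shape)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i:i + grid.shape[0],
                                         j:j + grid.shape[1]]
    return out

smoothed = convolve3x3(np.random.rand(100, 100), HANNING)
twice = convolve3x3(smoothed, HANNING)   # two passes = Binomial filter of order 2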

Gaussian Filter: The Gaussian response is a fixed
bell-shaped response curve (which also describes the
impulse response). The cut-off slope is not very steep,
and no ringing is produced. Only the cut-off
wavelength(s) need to be specified. It can also be
implemented as a frequency domain filter.

LaPlacian Bi-Directional Filter: This filter uses
curvature as the discriminator between long and short
wavelength anomalies. The method works on both line
and gridded data.

Line data: This can be viewed as a 3 point filter with
coefficients proportional to the second difference (1, -2, 1):
the output is zero over linear gradients (inflection points),
negative over negative curvature and positive over
positive curvature.


Figure 19/48: Response of the LaPlacian filter

Grid Data: The same result as with the line filter, but now
using a 3 x 3 second-difference (Laplacian) operator, e.g.
with coefficients 0,1,0 / 1,-4,1 / 0,1,0.

This is the basis of image enhancement, since you can
add the output back onto the data; the effect is basically
a stretching of the amplitude about inflection points:

Enhanced Image = T + Factor x L

Binomial Filter The coefficients of the central row and
column of the convolution array for binomial filters are
the coefficients of a Binomial expansion of the order
selected. (The local convention is that order 1 gives a
three point filter, for compatibility with the Hanning filters
above. For order n, the coefficients of the central row or
column of the operator are then the (n+1)th row of the
Pascal triangle, and a Binomial filter of order n is
equivalent to n passes of a Hanning filter).

19.4.2 Frequency Domain or FFT Filters

FFT filters are frequency domain filters and include
Butterworth, Gaussian and Cosine types; there are four
frequency pass types: Low-pass, High-pass, Band-pass
and Band-stop.



Figure 19/49: Frequency and Space Domain Filters

By convention, a filter is designated as low-pass if it
passes frequencies lower than its cut-off frequency and
attenuates frequencies higher than this. Low-pass and
high-pass filters require only one cut-off wavelength to
be supplied, whilst band-pass and band-stop filters
require two. For filter specifications, wavelengths are
used in preference to frequencies as they are more
convenient to work with. Filter cut-off points are defined
as the wavelengths at which the amplitude response of
the filter is 0.5.

The sharp cut-off of the frequency filter (upper red solid
line in Fig. 19/49) has a spatial equivalent (lower red solid line)
which generates a ringing effect due to the oscillatory
nature of the secondary lobes. To prevent this
undesirable effect the frequency cut-off has to be made more
gradual.

Butterworth Filter: The Butterworth response is flat
in the pass region and the roll-off is a function of frequency
determined by the filter order, a positive integer
value. Larger orders result in sharper cut-offs, leading to
greater 'ringing' of narrow anomalies. The order and cut-
off wavelength(s) are required to specify the filter. This
filter is very popular in potential field studies.
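As an illustration, a wavenumber-domain Butterworth low-pass consistent with the conventions above (amplitude response 0.5 at the cut-off wavelength) might look like the following numpy sketch (names are assumptions):

import numpy as np

def butterworth_lowpass(grid, dx, cutoff_wavelength, order=4):
    ny, nx = grid.shape
    kx = np.fft.fftfreq(nx, d=dx)                     # cycles per unit distance
    ky = np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    kc = 1.0 / cutoff_wavelength                      # cut-off wavenumber
    response = 1.0 / (1.0 + (k / kc) ** (2 * order))  # equals 0.5 at k = kc
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * response))

A high-pass of the same order is simply the grid minus this output (or 1 minus the response); band-pass and band-stop filters follow by combining two cut-offs.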

Cosine Filter Cosine filters require a parameter
determining the width of the roll off region. This is
expressed as a fraction of the cut-off wavelength.
Together with the cut-off wavelength(s) this determines
the filter response. With the cosine filter it is possible to
absolutely restrict the range of wavelengths in the
filtered grid


Figure 19/50: Amplitude response of a Butterworth
Filter

Strike Filter: The filter operates in the frequency
domain to selectively remove frequencies from a sector
of the X-Y frequency diagram (and the corresponding
diagonally opposite sector). This sector is defined by its
central direction and angular width. Within the sector,
the frequency operation is equivalent to a Butterworth
low-pass filter operation of the specified order and cut-
off wavelength. At the edges of the sector, the filter is
faded using a cosine roll off over the roll off angle
specified, which is centred on the sector edges. Outside
this range the frequency components are not modified.

For example, to remove ripple from a grid created from
survey lines aligned in a NW/SE direction, you would
need to specify a filter direction of 45 degrees (across
the ripple: identical to specifying to 135 degrees). For a
width of 30 degrees and angular roll off of 10 degrees,
the full filter specified would then be applied over the
angular range 35 to 55 degrees, rolled off at the edges
to zero effect outside the range 25 to 65 degrees. (All
directions are specified clockwise from North).

This filter is used for de-corrugation of surveys, but
microlevelling is generally more effective, since strike
filtering can also remove genuine signal in the direction being
filtered.

19.4.3 Physically Meaningful Filters (or
Geological filters)

These filters can be convolution and/or FFT type filters
but they use knowledge about the field that is to be
filtered so that the product of the filtered field has
geological meaning.

Analysis of a grid
Spectral Analysis: The power spectrum method is
used for depth estimation and for designing filters to
separate regional and residual fields (or deeper from
shallower sources).

A magnetic anomaly map may be considered to be
made up of a series of interfering groups of waves of
different wavelength, caused by magnetic sources at
different depths. The 2D Fourier analysis of a grid can
be represented in graphical form in terms of the
azimuthally averaged power (squared amplitude) plotted
against wavelength (or wavenumber). This type of plot
is known as a power spectrum. It is common practice to
plot the natural log (ln to base e) of power against
wavenumber (inverse wavelength) expressed in
cycles/km. Spector & Grant (1970) have shown that if
this is done, source ensembles (see Fig. 19/51) at
various average depths may be identified as linear
segments of the power spectrum (see later in this
section for examples).

This is a very useful procedure for determining average
depths, because the slope of any straight line segment
on such a diagram is simply twice the depth to the top of
that source ensemble when wavenumber is expressed in
radians per unit distance (for cycles per unit distance the
slope is 4π times the depth), be it basement, intrasedimentary
volcanics or anything else. Thus the power spectrum
method is useful in identifying broad regional trends and
in the design of filters to provide maximum
discrimination between such trends and between source
ensembles at different depths.
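The following numpy sketch (the binning choices and names are illustrative) shows how the azimuthally averaged power spectrum and a Spector and Grant style depth estimate can be computed; with wavenumber in radians per unit distance the depth is -slope/2:

import numpy as np

def radial_power_spectrum(grid, dx, nbins=50):
    ny, nx = grid.shape
    power = np.abs(np.fft.fft2(grid)) ** 2
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)         # radians per unit distance
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2).ravel()
    p = power.ravel()
    edges = np.linspace(k[k > 0].min(), k.max(), nbins + 1)
    idx = np.digitize(k, edges)
    k_mean, lnp = [], []
    for i in range(1, nbins + 1):                     # skip the k = 0 term
        if np.any(idx == i):
            k_mean.append(k[idx == i].mean())
            lnp.append(np.log(p[idx == i].mean()))
    return np.array(k_mean), np.array(lnp)

def depth_from_segment(k_mean, lnp, kmin, kmax):
    # Fit a straight line to the chosen linear segment; depth = -slope/2
    sel = (k_mean >= kmin) & (k_mean <= kmax)
    slope, _ = np.polyfit(k_mean[sel], lnp[sel], 1)
    return -slope / 2.0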



The results of such filtering should be used with care,
because even shallow sources have long wavelength
anomaly components. Any such filtering process will
leave remnants of the undesired signal and may
remove/suppress parts of the desired signal.

For a single source (e.g. a dipole) buried at 500 m, shown
in Fig 19/52, the spectral plot clearly indicates a depth of
500 m for all wavelengths.


Figure 19/51: Ensemble of magnetic bodies at different
depth levels


When there are two or more source ensembles at different
levels, the power spectrum reveals two (or more) straight
line segments (cf. Fig 19/51). This is illustrated for the Tanzania
example in Fig 19/53, where the basement surface is ~171 m
below the aircraft and deeper basement bodies are at
~565 m.


Figure 19/52: Model dipole with Inclination 45°,
Declination -10°, and depth 500 m





Figure 19/53: Power Spectrum of part of the Gold
Belt of northern Tanzania

The power spectrum now gives an idea of the limits of
the high pass filter to use, where k = ~0.4. This allows the high
pass filter to retain the maximum information on the
magnetic signal coming from 171 m below the aircraft.
The parameters for a band pass filter can be determined
from Fig 19/53, where wavelength = (2 x cell size)/k.



Figure 19/54: Band pass filter to recover the central
part of the spectrum. For k = 1.3 and 0.4.

The power spectrum can be used directly to shape the
filters by using Swing tail and Swing head filters (see
next)




Swing-Tail & Swing Head Filters

A Swing-Tail filter (Fig. 19/55) is an example of a filter
designed to adjust the signal to enhance the longer
wavelength part of the spectrum by suppressing the
response of the short wavelength part of the spectrum.
A frequency domain filter can be defined which swings
the high-frequency tail of the spectrum down such that it
follows the linear trend established by the low frequency
part of the spectrum (Cordell and Grauch 1985). The
effect of this filter is to suppress high frequency signal,
which will be near surface and may lie within the noise
envelope of the data.

The total field data after application of the swing tail filter
can be used in subsequent processing and automated
interpretation methods, i.e. local wavenumber, 3D Euler,
edge detection etc.

This has the potential advantage of attempting to isolate
anomaly effects at different depth ranges so that there is
least interference from other overlapping anomalies.
The terms depth slice, pseudo depth slice and
spectral depth slice have been coined for such products. When
these terms are used it should be remembered that
assumptions are being made based on the theory of
Spector and Grant (1970).

The Swing-Head filter (Fig 19/55) is the opposite to the
Swing tail filter in that it suppresses the long wavelength
signal to allow the short wavelength signal to be seen.
This can only be done if two slices have been defined.

Example: The power spectrum shown in Fig 19/56 has
three linear segments: at short, intermediate and long
wavelengths (or high, intermediate and low wavenumber
k).




Figure 19/55: Swing Head and Swing Tail filters as
applied to Fig 19/51



Figure 19/56: Depth Slice concept superimposed
on the power spectrum (from GETgrid, GETECH
software)

The filter design is such that if the intermediate depth
slice (slope gives depth estimate of source ensemble of
335m) is chosen, then the response of long and shorter
wavelengths, defining the other two depth slices, are
suppressed (filtered) such that they align with the
intermediate depth slice. In other words, if a power
spectrum plot is made of the resulting depth slice, the
slope of the entire power spectrum plot would be a
straight line with slope equivalent to 335 m.

Whether or not the user should subsequently band pass
the depth slice data depends on how the data are to be
used. Not applying a band pass filter retains the
(suppressed) shallow and deeper sources, making for a
potentially cleaner image that may be used for lineament
and depth analysis. The solution is often to try, see, and then
decide.


The following figures 19/57 and 19/58 show the two
slices for the Tanzanian data shown in Fig. 19/53, for slice
depths 171 m and 565 m.




Figure 19/57: The slice at 171 m is effectively a high
pass, showing the geology at the ground surface; it
removes the long wavelength signals coming from
greater depths, leaving the signal from shallow depths
below the surface. This is the type of operation needed
before mapping of surface structures is embarked on.




Figure 19/58: The slice at 565 m. Clearly it is also
seeing similar structures, but the power spectrum
suggests that this signal is coming from ~565 m
depth.


See Section 27.2 for more details on Spectral
analysis

Upward Continuation Filter

(see Section 14.7.1) This calculates the potential field at
an elevation higher than that at which the field was
originally measured. The continuation involves the
application of Green's theorem and is unique if the field
is completely known over the lower surface. Since
anomaly amplitudes of shallow sources (short
wavelength anomalies) decrease faster with height than
anomalies associated with deeper, larger sources
(longer wavelength anomalies), upward
continuation will smooth out near surface effects.

This type of filter is used to bring two aeromagnetic
surveys to a common elevation so that the two surveys
can be tied/joined accurately together. An upward
continued field can also be used as the regional field and
removed from the original field to produce a residual
field.

Downward Continuation Filter

(see Section 14.7.1) This process determines the
value of the potential field at a lower elevation, moving
the surface of imaging closer to the source. As
the depth from which an anomaly originates is
approached, its potential field expression becomes
sharper and tends to outline the mass better, until the
depth of the mass is reached. Beyond this point the field
computed by continuation becomes erratic. Noise in the
data (short wavelength noise) often precludes
successful application of downward continuation. Thus
there is a need to downward continue in stages and at
each stage apply careful filtering to minimize noise
effects.

The slope of the power spectrum goes to zero as the
depth is reached.
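Both operations amount to multiplying the spectrum by exp(-|k| dz); a minimal numpy sketch follows (names are illustrative). Downward continuation (dz < 0) amplifies short wavelengths exponentially and, as noted above, should be done in stages with careful filtering:

import numpy as np

def continuation(grid, dx, dz):
    # dz > 0: upward continuation by dz; dz < 0: downward (unstable)
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.exp(-k * dz)))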

19.4.4 Combining Filters, Transformations and
derivatives

Since it is possible to filter, transform and derive
derivatives of your data, there is tremendous
flexibility in processing data to enhance the attributes
you are seeking to highlight. Fig 19/42 is an example
taken from Tanzania, applying separately an RTP
and a Pseudo Gravity transformation prior to deriving the
horizontal derivative to map structural edges. The RTP-
THDR contains more of a ringing character than the Pseudo
Gravity, due to the RTP having negative lobes flanking the
central positive (see Fig. 19/41).

Fig 19/60 then takes the maxima and plots them as a
ridge grid. The ringing effect of the RTP
approach compared with the Pseudo gravity approach is clearly
evident when delineating the structural edges.

19.4.5 Automatic Lineament Detection
(Lineament Mapping)

The automatic lineament detection algorithm requires
the data to have been processed so that the edge of a
body lies at a peak. Several methods satisfy this
requirement. Horizontal derivative of the RTP and
pseudo-gravity (Cordell and Grauch 1985), the Analytic
signal (Nabighian, 1972; Nabighian, 1974; Nabighian,
1984) and local wavenumber are obvious choices.

These methods are readily available on a grid and
therefore a method to automatically locate maxima
edges from a grid (Blakely and Simpson 1986) can be
applied.



Cordell and Grauch (1985) discussed techniques to
determine the location (x, y) of abrupt lateral changes in
magnetization or mass density. The final stage of their
method of picking the derivative maxima from gridded
data has been formulated into an automated method by
Blakely and Simpson (1986).

Assuming the derivative field is on a rectangular grid with
value g_{i,j} at each grid intersection, their method uses a 3 x 3
grid operator, in such a way that marginal rows and
columns are not considered as intersections. The
method compares each intersection (grid node) with its
eight nearest neighbours in four directions (x, y and the
two diagonal directions); see Fig. 19/59 below.



Figure 19/59: Location of grid intersections used to
test for a maximum near g_{i,j}. Curved lines represent
contours of horizontal gradient values (THDR) of
magnetic or gravity anomalies.

The following comparison tests are made to identify
inequalities:

N-S:    g_{i,j-1} < g_{i,j} > g_{i,j+1}

W-E:    g_{i-1,j} < g_{i,j} > g_{i+1,j}

SE-NW:  g_{i+1,j-1} < g_{i,j} > g_{i-1,j+1}

SW-NE:  g_{i-1,j-1} < g_{i,j} > g_{i+1,j+1}

A counter N is increased by one for each satisfied
inequality. N can range from 0 to 4 and gives a quality
(significance level) of the maximum.

For each satisfied inequality, the horizontal location and
magnitude are found by interpolating a second-order
polynomial through the trio of points, e.g. if

g_{i-1,j} < g_{i,j} > g_{i+1,j},

the horizontal location of the maximum relative to the
position of g_{i,j} is given by

x_{max} = -db/2a

where

a = g_{i-1,j} - 2 g_{i,j} + g_{i+1,j}

b = g_{i+1,j} - g_{i-1,j}

and d is the distance between grid intersections. The
value of the horizontal gradient at x_{max} is given by

g_{max} = a x_{max}^2 + b x_{max} + g_{i,j}

If more than one inequality is satisfied, the largest g_{max}
and its corresponding x_{max} are chosen as the
approximate maximum for that grid intersection.

A record of x_{max}, g_{max}, N and g_{i,j} is kept for each
grid intersection where N > 0. The user can then display
maxima in geodetic co-ordinates at any of the four
significance levels. Blakely and Simpson find N = 2 or
3 produce the most useful maps. Ridge grid examples
for Fig. 19/42 are shown in Fig. 19/60.
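A compact Python sketch of this maxima test and sub-cell interpolation is given below; it is an illustrative reading of the Blakely and Simpson (1986) scheme summarised above, with assumed names:

import numpy as np

def blakely_simpson(g, d=1.0, nmin=2):
    # g: THDR-style grid; d: grid spacing; nmin: minimum significance N
    picks = []                                        # (i, j, N, x_max, g_max)
    dirs = [((0, -1), (0, 1)), ((-1, 0), (1, 0)),     # N-S, W-E
            ((1, -1), (-1, 1)), ((-1, -1), (1, 1))]   # SE-NW, SW-NE
    for i in range(1, g.shape[0] - 1):                # skip marginal rows/cols
        for j in range(1, g.shape[1] - 1):
            n, best = 0, None
            for (di1, dj1), (di2, dj2) in dirs:
                g1, g0, g2 = g[i + di1, j + dj1], g[i, j], g[i + di2, j + dj2]
                if g1 < g0 > g2:                      # inequality satisfied
                    n += 1
                    a = g1 - 2.0 * g0 + g2            # a as defined above
                    b = g2 - g1                       # b as defined above
                    xm = -d * b / (2.0 * a)           # x_max = -db/2a
                    gm = a * xm ** 2 + b * xm + g0    # g_max
                    if best is None or gm > best[1]:
                        best = (xm, gm)
            if n >= nmin and best is not None:
                picks.append((i, j, n, best[0], best[1]))
    return picks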



Figure 19/60: Plotting the maximum values of the
Total horizontal derivatives for both the Pseudo
Gravity and RTP shown in Fig 19/42. The ringing
effect of the RTP-THDR is very evident and
structural lineaments are more clearly defined via the
Pseudo gravity route.



19.5 Comparison of Edge Detection
Methods

Phillips (2000) found that using a combination of the
three methods is generally the best approach.
Horizontal gradient contacts (of RTP for magnetics) that
overlie analytic signal or local wavenumber contacts
indicate the edges of horizontal sheets or vertically
dipping contacts. Horizontal gradient contacts that are
offset from analytic signal or local wavenumber contacts
indicate dipping contacts, with the true location close to
the analytic signal or local wavenumber solution and the
dip in the direction of the horizontal gradient solution.
Paired horizontal gradient contacts can be resolved as
single analytic signal or local wavenumber contacts over
non-horizontal sheet or pipeline sources.

The structural index estimated by the local wavenumber
method can be used to resolve conflicting
interpretations. Source depths should be close to the
local wavenumber depths unless the local wavenumber
depth falls outside the minimum/maximum ranges of the
horizontal gradient depths and the analytic signal
depths.


19.6 Colour and Shaded Relief
Mapping

The colour shading derivative is commonly used for
image enhancement in most grid based processes. In
cross section (profile form), as shown in Fig 19/61, it has
a similar effect to sunlight on topography. It generates a
variation in the illuminated surface and picks out
features that cannot be seen if the image is coloured
with a simple colour representation, as in the
upper image of Fig 19/61. When the variations in
illumination are superimposed on the simple colour
representation a dramatic improvement in visualisation
results (lower image in Fig. 19/61).

Shaded relief has the advantage of preserving the anomaly
map with all its frequency components while enhancing
the high frequency components, by artificially
illuminating the topography of the anomaly surface as if it
were terrain lit by sunshine with associated
shadows.

This method of analysing the anomaly field is applied to
colour fill or grey scale maps; a monochrome
example is shown in Fig 19/62. Images of the data
field can be generated on computer screens by varying
the inclination and azimuth of the illumination and
scaling the amplitude of the anomaly field. The shading
will enhance certain topological directions: e.g. a N-S
elongate ridge (anomaly high) will not be seen with the sun
from the N, but will be seen with the sun from the W or E.
Changing the inclination of the sun will alter the strength of
the shading.




Figure 19/61: Shading of gravity and magnetic
anomalies, brings out features that lie within colour
bands.



Figure 19/62: Grey scale version of Fig 19/61

The angle between Sun direction and the normal of the
local surface is used to control intensity of shading.
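A short numpy sketch of this shading calculation is given below; the axis orientation (rows increasing northwards), the azimuth convention (clockwise from north) and the names are illustrative assumptions:

import numpy as np

def hillshade(grid, dx, sun_azimuth_deg=45.0, sun_inclination_deg=30.0,
              vertical_exaggeration=1.0):
    az = np.radians(sun_azimuth_deg)
    inc = np.radians(sun_inclination_deg)
    # Unit vector pointing toward the sun (x east, y north, z up)
    sun = np.array([np.cos(inc) * np.sin(az),
                    np.cos(inc) * np.cos(az),
                    np.sin(inc)])
    gy, gx = np.gradient(grid * vertical_exaggeration, dx)
    # The surface normal of z = f(x, y) is (-df/dx, -df/dy, 1), normalized;
    # the shading intensity is its dot product with the sun vector
    norm = np.sqrt(gx ** 2 + gy ** 2 + 1.0)
    intensity = (-gx * sun[0] - gy * sun[1] + sun[2]) / norm
    return np.clip(intensity, 0.0, 1.0)               # fully shadowed -> 0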

Shaded relief can be used to improve readability of
derivative maps. However, it should be remembered that
illuminating from one direction will preferentially
enhance trends perpendicular to that lighting direction.
To preserve all structural trends the Dip-Azimuth
shading is often used (see section 19.2.5)



HSV model

Some commonly used terms, with reference to Fig 19/63
(after Milligan & Gunn 1997), are:

Hue is the combination of red, green and blue primary
additives (measured around the vertical axis of the figure
from 0° to 360°).
Value is the intensity (or energy) of the colour (varies
from 0 (black) through shades of grey along the central
axis to 1 (white) at the top).
Saturation is the relative lack of white in the colour (or
the spectral purity) (varies from 0 on the vertical axis to 1
on the triangular surfaces of the hex-cone).



Figure 19/63: The HSV colour model

Histogram equalisation: Finding the max and min field
values and equally dividing the colour interval table between
these limits often generates problems in colour balance,
due to extreme values taking up only a small areal extent.
In histogram equalisation, colour intervals are related to
equal area (or other non-linear criteria), thus giving
better colour balance and definition to features. The use
of shaded relief and histogram equalisation is a powerful
combination to improve image resolution and readability.
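A minimal numpy sketch of equal-area colour levelling (histogram equalisation) as described above; the level count and names are illustrative:

import numpy as np

def equalise(grid, nlevels=256):
    # Map grid values to 0..nlevels-1 so that each colour level
    # covers (approximately) the same number of grid cells
    flat = grid.ravel()
    ranks = np.searchsorted(np.sort(flat), flat)      # rank of each value
    levels = (ranks * nlevels) // flat.size
    return np.clip(levels.reshape(grid.shape), 0, nlevels - 1)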

Histogram normalisation: As above, but with the colours
distributed as though following a Gaussian distribution (see Fig.
19/64).

Figure 19/64: Histogram Equalisation and
Normalisation


19.7 Automatic Gain Control (AGC)

Automatic gain control (AGC) consists of varying the
input signal such that the output signal is of constant
RMS amplitude within a moving window along a profile
(size X) or within a grid (size X x X). The root mean
square of the input window is calculated; the gain
function is then taken as the inverse of the root mean
square value, and the central point of the window is
multiplied by the gain function. Weak anomalies are
amplified and strong anomalies reduced. The output
signal amplitude can be made independent of the input
signal amplitude.
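A hedged numpy sketch of this moving-window AGC follows (the window handling, edge padding and names are illustrative choices, not a specific commercial implementation):

import numpy as np

def agc(grid, window=8, eps=1e-10):
    # Output = input / RMS of a centred neighbourhood of ~window cells
    half = window // 2
    padded = np.pad(grid, half, mode="reflect")
    out = np.empty(grid.shape)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            win = padded[i:i + 2 * half + 1, j:j + 2 * half + 1]
            rms = np.sqrt(np.mean(win ** 2))
            out[i, j] = grid[i, j] / (rms + eps)      # gain = 1/RMS
    return out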

The AGC method gives best results when applied to a
function which has constant wavelength and varying
amplitude (Rajagopalan, 1987). This is not true of
total magnetic intensity data (TMI), but vertical gradient
data are better suited due to a smaller range of
amplitude and wavelength (Rajagopalan and Milligan
1994).

The AGC method can be applied to various derivatives
in order to determine the best enhancement for the total
field data. Various window sizes need to be applied to
determine the one best suited for the purpose in hand.
The variation in window size enhances differing
wavelengths within the data. The 2nd vertical derivative
is often chosen as it has a lot of very subtle features,
which can be enhanced using this method.

The problem with large amplitude TMI anomalies (Fig
19/65) is that they control the AGC value within the
window, preventing the amplification of small scale effects
around and superimposed on the large amplitude TMI
anomalies. Using derivatives, which have smaller
dynamic ranges, minimises this problem with the display.

As the window size increases, short wavelength features
are less evident in the enhancement. The larger window
size thus increases the prominence of larger and
possibly deeper anomaly sources.

Examples of AGC are shown in Fig 19/65



Figure 19/65: Plot of AGC of TMI and VDR data for
window size 8 (cell size is 1 km; 1% of
grid size is about 4 cells). Resolution over the large
TMI anomaly is better when AGC is done on the vertical
derivative.

Although AGC is a commonly used method to enhance
seismic data, it is not the preferred option in potential
field studies since the resulting image can only then be
qualitatively investigated. The preferred equivalent
method to AGC is signal normalisation as described in
the following section 19.8.



19.8 Normalized Magnetic Derivatives

This section is based on the article by Fairhead,
J.D. and Williams, S.E., 2006, Evaluating Normalized
Magnetic Derivatives for Structural Mapping:
SEG New Orleans, EMGM P1, 4 pages.

19.8.1 Introduction

As discussed in the previous section (19.7), Automatic Gain
Control (AGC) has been a classic method within the seismic
industry for enhancing weak amplitude signals. The results
of AGC are highly dependent on the size of the operator
(X x X) and cannot be used for
further stages of processing, since the potential field
amplitude values have been changed by the gain
applied.

Magnetic maps contain signals with a wide range of
amplitudes, reflecting the varying depth, geometry and
susceptibility contrasts of sources. Such maps are often
dominated by large amplitude anomalies, which can
obscure more subtle anomalies. Modern methods of
colour display such as histogram colour equalization
and false shaded relief (section 19.6) help to some
extent to enhance these subtle anomalies.

In the last few years a number of methods have been
proposed to help normalize the signatures in
images of magnetic data, so that weak, small amplitude
anomalies can be amplified relative to stronger, larger
amplitude anomalies. Examples of normalized
derivatives discussed here include the Tilt derivative
(Miller and Singh, 1994; Verduzco, Fairhead, Green
and McKenzie, 2004; see section 19.2.5), the Theta
derivative (Wijns et al., 2005) and the TDX derivative
(Cooper and Cowan, 2006).

In this section I critically evaluate these normalized
derivative methods, both by describing the fundamental
relationships between the different derivatives, and by
discussing results derived for the 3D test model
previously described in Williams et al. (2002, 2005)
and Fairhead et al. (2004).

19.8.2 Theory of Normalized Derivatives

Calculation of the directional derivatives of a profile or
grid of magnetic intensity T is a straightforward process.
For the profile case, the complex analytic signal can be
expressed as (Thurston and Smith, 1997):

A(x,z) = |A| \exp(j\phi)

where

|A| = \sqrt{ \left( \frac{\partial T}{\partial x} \right)^2 + \left( \frac{\partial T}{\partial z} \right)^2 }
\quad \text{and} \quad
\phi = \tan^{-1} \left( \frac{\partial T}{\partial z} \Big/ \frac{\partial T}{\partial x} \right)

|A| is the 2D analytic signal amplitude and \phi the local
phase. A common theme of the normalized derivatives
is the concept of mapping angles (or functions of
angles) derived from the gradients of the magnetic
intensity, as shown in Figure 19/66.

Tilt Derivative: The Tilt derivative, first proposed by
Miller and Singh (1994), has been expressed by
Verduzco, Fairhead, Green and McKenzie (2004) in a
more general form, based on the local phase,
which can be applied to profile and grid based
data:

Tilt = \tan^{-1} \left( \frac{VDR}{THDR} \right)

where VDR is the vertical derivative and THDR is the
total horizontal derivative. THDR replaces \partial T / \partial x, so


the sign of the Tilt is controlled only by the sign of the VDR.
The Tilt derivative is restricted to values in the range
+\pi/2 > Tilt > -\pi/2, and can be considered as an
expression of the vertical derivative normalized by the
total horizontal derivative. The Tilt derivative is shown in
Figure 19/67 for a vertical contact with vertical induced
magnetization.

TDX Derivative: Cooper and Cowan (2006) have
modified the Tilt derivative so that the total horizontal
derivative is now normalized by the vertical derivative:

TDX = \tan^{-1} \left( \frac{THDR}{VDR} \right)

The angle defined by the TDX expression is effectively
sign(Tilt)(\pi/2 - |Tilt|), and like the Tilt is also constrained
between +\pi/2 > TDX > -\pi/2, but has a much sharper
gradient over the contact (see Figure 19/67).



Figure 19/66: Definition of the total horizontal derivative
(THDR), the 3D analytic signal amplitude (|A|) and the Tilt
angle (\theta) used for the Tilt and Theta map derivatives


Theta Derivative: Wijns et al. (2005) propose mapping
the value of cos(\theta), where \theta is defined as the angle to
the horizontal plane of the full vector direction of the
analytic signal:

\cos(\theta) = \frac{ \sqrt{ (\partial M / \partial x)^2 + (\partial M / \partial y)^2 } }{ |A| }

where |A| is the amplitude of the 3D analytic signal.
From Figure 19/66 it is apparent that \theta is actually the
absolute value of the Tilt angle defined above; the
value of the cosine of either \theta or the Tilt is identical. The
Theta function is limited to values between 0 and 1,
peaks over a simple vertical contact (see Figure 19/67)
and coincides with the zero crossings of the vertical, Tilt
and TDX derivatives.
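The three normalized derivatives can be computed together from a grid; the numpy sketch below follows the definitions above (an RTP input grid is assumed, and the FFT vertical derivative and all names are illustrative):

import numpy as np

def vertical_derivative(grid, dx):
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * k))

def normalized_derivatives(T, dx, eps=1e-12):
    Ty, Tx = np.gradient(T, dx)
    thdr = np.sqrt(Tx ** 2 + Ty ** 2)                 # total horizontal derivative
    vdr = vertical_derivative(T, dx)                  # vertical derivative
    tilt = np.arctan2(vdr, thdr)                      # Tilt, in [-pi/2, +pi/2]
    tdx = np.sign(vdr) * (np.pi / 2 - np.abs(tilt))   # sign(Tilt)(pi/2 - |Tilt|)
    theta_map = thdr / np.sqrt(thdr ** 2 + vdr ** 2 + eps)  # cos(theta) = THDR/|A|
    return tilt, tdx, theta_map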




Figure 19/67: Induced magnetic responses over a
vertical contact for derivatives of the vertical
inclination field. The Tilt and TDX derivatives are
confined to the range ±\pi/2 and pass through zero at
the same point as the VDR, which for RTP data is
directly over the vertical contact. The Theta
derivative ranges from 0 to 1, with a peak where the
VDR passes through zero. The angle \theta is the
absolute value of the Tilt.

19.8.3 Testing using a Complex 3D Test Model

Test Model: The geological 3D test model, shown in
Figure 19/68, consists of a basement surface (A)
varying in depth from ~1 km in the NW corner to ~9 km in
the SE corner, overlain by non-magnetic sediments.
The basement surface is divided into terranes and
intruded by higher susceptibility rocks, all with vertical
contacts and truncated at the basement surface (B).
This model (A + B) has been used to generate the total
magnetic intensity (TMI) field at various inclinations (for
C the inclination is 25°), and the reduced to pole
magnetic (RTP) map (D) has been derived from (C).
This RTP map has been used to test the response of all
the derivatives since they are all inclination dependent.

Images of the synthetic magnetic fields are dominated
by anomalies due to susceptibility contrasts. Anomalies
due to basement topography are barely visible, but
would be very important for interpretation of geological
structure. The calculation of field gradient maps
preferentially enhances high frequencies, so gradient
maps (Figure 19/69) define structures in the basement
model more sharply than the magnetic intensity;
however, the amplitudes remain much smaller over
subtle features. The aim of normalized derivatives is to


define these subtle features more clearly to the
interpreter.



Figure 19/68: (A) 3D basement model varying in
depth from 1 km (NW corner) to 9 km (SE corner); (B)
Susceptibility map of the basement surface in units
of micro cgs, with all interfaces vertical; (C) TMI
model field for Inclination 25°, Declination 0° and a
geomagnetic field of 50,000 nT; (D) Reduced To the
Pole (RTP) response of (C).




Figure 19/69: (A) Horizontal Derivative and (B)
Vertical Derivative, both displayed with histogram
equalized colour.


Normalized Derivatives: RTP Data
Figure 19/70 illustrates the output of these derivatives
using linear colour scales. All three derivatives have the
effect of normalizing signals with different amplitudes,
making subtle features much more visible; note that
these enhanced features do correspond to topographic
features in the model basement (Figure 19/68A). Figure
19/70 also shows the different properties of the normalized
derivatives.

The Tilt derivative (Figure 19/70A) is relatively smooth,
with positive values over sources. The Theta map
(Figure 19/70C) produces peaks at every zero crossing
of the Tilt; these peaks are also defined by sharp
polarity switches in the TDX derivative (Figure 19/70B).
The TDX generally has less texture than the Tilt or Theta
map between terrane edges; in these areas the Theta
map produces a limited number of additional, low
amplitude peaks, some of which correlate with relief in
the model basement.



Figure 19/70: (A) Tilt, (B) TDX and (C) Theta map
derivatives for RTP data, and (D) comparison with the
zero crossing of the vertical derivative, only positive
values plotted.

For RTP data, the location of vertical contacts is clearly
defined by the zero contour of the Tilt, the peaks of the
Theta map and/or a sudden polarity switch in the TDX,
all of which are controlled by the sign (or zero contour)
of the vertical derivative (Figure 19/70D). In
combination, the normalized derivatives help, in their
various ways, to define the locations of magnetic edges,
show which side of each edge is likely to have a higher
magnetization, and give a qualitative indication of the
dip of contacts.

Normalized Derivatives: Low Magnetic Inclination
The discussion has so far been confined to the relatively
simple case of RTP data. Each of the normalized
derivatives discussed here is derived from gradients
which depend on the magnetic field direction.

For data from low magnetic latitudes, a satisfactory RTP
may not be possible. Wijns et al. (2005) state that the
Theta map method is equally valid for data reduced to
the pole or reduced to the equator (RTE). Figure 19/71
demonstrates the behaviour of normalized derivatives
for RTE data from our test model (Figure 19/68).
Predictably, the east-west striking structures are well
Predictably, the east-west striking structures are well


defined. However, for the major north-south source
structures defined reasonably clearly in the derivatives
of RTP data, the equivalent anomalies for derivatives of
RTE data are either absent or have a smeared
appearance and no longer define the edge locations
accurately. Only when the theoretical assumption of an
idealized, isolated contact is satisfied will the peak
define such sources accurately (Li, 2006).



Figure 19/71: (A) Tilt, (B) TDX and (C) Theta map
derivatives for RTE data, and (D) comparison with the
zero crossing of the vertical derivative, only positive
values plotted. Outlines of susceptibility contrasts in
the test model basement are overlain in white.

19.8.4 Conclusions

Normalized derivatives provide geophysicists with a new
suite of images that if used correctly will help to identify
and map geological structure from magnetic data. The
Tilt, TDX and Theta derivatives work well and all show
different aspects of the anomaly field. The results shown
here demonstrate the potential for these normalized
derivatives to aid qualitative interpretation of subtle
anomalies.

Under certain assumptions normalized derivatives may
be used to track magnetic contacts (Wijns et al., 2005).
The zero contours of the vertical derivative, Tilt and TDX
derivatives (Figure 19/70A & B) follow the same path as
the maxima of the Theta map (Figure 19/70C). Results
shown here suggest that tracking edges from
normalized derivatives of RTE data should be carried
out with caution. For RTP data, the results here can be
compared to those derived from the local wavenumber
or total horizontal gradient of pseudo gravity presented
in Fairhead et al. (2004). The relative merits of these
and other contact mapping techniques have been
reviewed recently by Pilkington and Keating (2004).

The concept of normalized derivatives is not limited to
the examples discussed here; for example, Cooper
and Cowan (2006) derive normalized derivatives in the
x- and y-directions. An alternative approach would be to
calculate the Theta derivative using a directional
gradient as the numerator in the Theta expression above. Normalized
derivatives of higher order gradients can be derived. For
example, second order normalized derivatives can be
calculated using the gradients of the first vertical
derivative of the magnetic intensity rather than of the
intensity itself, analogous to the second order analytic
signal (Hsu et al., 1996). Results for our test model (not
shown) suggest that such images define subtle
basement relief even more accurately than the first
order derivatives, and may be useful where data quality
is sufficiently high.
References

Cooper, G. R. J., and D. R. Cowan, 2006. Enhancing
potential field data using filters based on the local
phase: Computers & Geosciences (in press).

Fairhead, J. D., C. M. Green, B. Verduzco, and C.
MacKenzie, 2004. A new set of magnetic field
derivatives for mapping mineral prospects: ASEG 17th
Geophys. Conf. and Exhibit., Sydney, 2004, Extended
Abstract.

Fairhead, J. D., S. E. Williams, and G. Flanagan, 2004.
Testing magnetic local wavenumber depth estimation
methods using a complex 3D test model: SEG
Expanded Abstracts, 742-745.

Hsu, S. K., J. C. Sibuet and C. T. Shyu, 1996. High
resolution detection of geological boundaries from
potential field anomalies: An enhanced analytic signal
technique: Geophysics, 61, 373-386.

Li, X., 2006. Understanding 3D analytic signal
amplitude: Geophysics, 71, L13-L16.

Miller, H. G. and V. Singh, 1994. Potential field tilt, a
new concept for location of potential field sources:
Journal of Applied Geophysics, 32, 213-217.


Pilkington, M. and P. Keating, 2004. Contact mapping
from gridded magnetic data, a comparison of
techniques: Exploration Geophysics, 35, 306-311.

Thurston, J. B. and R. S. Smith, 1997. Automatic
conversion of magnetic data to depth, dip, and
susceptibility contrast using the SPI method:
Geophysics, 62, 807-813.

Verduzco, B., J. D. Fairhead, C. M. Green, and C.
MacKenzie, 2004. New Insights into Magnetic
Derivatives for Structural Mapping: The Leading Edge,
23, 116-119.



Wijns, C., C. Perez, and P. Kowalczyk, 2005. Theta
Map: Edge detection in magnetic data: Geophysics, 70,
L39-L43.

Williams, S. E., J. D. Fairhead, and G. Flanagan, 2002.
Realistic models of basement topography for depth to
magnetic basement testing: SEG Expanded Abstracts,
814-817.

Williams, S. E., J. D. Fairhead, and G. Flanagan, 2005.
Comparison of grid Euler Deconvolution with and
without 2D constraints using a realistic 3D magnetic
basement model: Geophysics, 70, L13-L21.

















INTERPRETATION

Section 20 - Interpretation Methodology/Philosophy: General Approach
Section 21 - Structural (Edge) Mapping: Detection of Faults and Contacts
Section 22 - Estimating Magnetic Depth: Overview, Problems & Practice
Section 23 - Quantitative Interpretation: Manual Magnetic Methods
Section 24 - Quantitative: Forward Modelling
Section 25 - Quantitative: Semi-Automated Profile Methods
Section 26 - Quantitative: Semi-Automated Grid Methods: Euler
Section 27 - Quantitative: Semi-Automatic Grid Methods:
Local Phase (or Tilt), Local Wavenumber, Spectral
Analysis and Tilt-depth







SECTION 20: Interpretation Methodology and
Philosophy - General Approach




20.1 Introduction

Qualitative interpretation is a complex mixture and
interaction of: (a) data enhancement, (b) spatial
correlations between old and new datasets and
interpretations as well as (c) the experience of the
interpreter. Subsequent quantitative interpretation,
using 2D to 3D forward modelling and inversion
methods, can then be used (as needed) to generate a
final structural-depth model that satisfies all the
available information and is geologically sound and
reasonable. An important point, not to be forgotten, is
why you are doing the interpretation. The solution
should provide the explorationists with a range of
exploration targets worthy of further investigation.
The interpretation solution, although quantitatively
derived and based on available data, is not unique and
will contain errors due to the subjective processes
involved in its formation and the non-uniqueness of
potential field interpretation models. Clearly the
interpretation should aim at being the best possible
geological solution based on the available data
constraints. Once additional data are collected,
there will be a need to refine the interpretation to
account for the new data; thus the interpretation can
be an iterative process. As already indicated, the
interpretation process also contains what one may wish
to term interpreter's bias, based on the interpreter's
experience and the interpretation philosophy used. This
is often important since clients may favour one
interpreter over another and only want the
interpretation done by a named individual, or
interpretation group, due to their reputation.
Potential field data often become important within a
study area (concession block) when it contains shallow
high seismic impedance intervals (e.g. volcanics, salt
and sills), since seismic reflection data often become
incoherent at greater depth. In such areas the
involvement of potential field (gravity and magnetic)
methods is often sought to resolve basin/basement
structure. This involvement both helps to generate a
greater uniformity of the interpretation, and helps to
identify and locate features that are seismically less
easy to recognise such as: (i) strike-slip faults, (ii)
regional discontinuities, (iii) dykes, and (iv) the true
basement surface.
Section 20 draws on the views/philosophy that GETECH
employ in their generalised interpretation scheme.
Subsequent sections provide details on the techniques
and methods that can be used.

20.2 Qualitative Interpretation:

There are two aspects to gravity and magnetic
interpretation - qualitative and quantitative.

The qualitative element is now largely GIS-based and
dominates the early stage and late presentational stage
of a study. The preliminary structural element map that
results from this qualitative study is essentially the
cornerstone of the final interpretation. Qualitative
interpretation involves the use of all available
geological/geophysical, well/seismic data not only locally
within the limits of the concession block but also
regionally. Ideally the data coverage should be
sufficiently extensive to allow the qualitative
interpretation to extend well beyond the limits of the
concession block to provide a regional perspective. It is
much easier to appreciate the significance of structures
affecting the concession block if one has the opportunity
to first understand the regional tectonic/geological
setting of the concession block.

What is recommended is a two-stage approach: firstly
developing a sound understanding of the regional
tectonic/geological setting before, secondly,
concentrating on the interpretation of the concession
block.
block. The significance of the regional tectonic trends,
e.g. fault directions and their relative timing of
movement, can be used to influence the interpretation
within the concession block. Such trend directions may
be only subtly expressed in the data and without the
regional appreciation these potentially important tectonic
components may be wrongly interpreted or entirely
overlooked. To achieve the best results, both stages
require significant gravity and magnetic data processing
using transforms, filters, derivatives and lineament
analysis to identify and map appropriate attributes that
will form the basis of the geological/structural map. In so
doing the full spectral content of the data will have been
exploited.

To achieve a full understanding of the geology and
structures present requires a commitment to undertake
an extensive background literature search using
library/internet facilities as well as materials and reports
supplied by the client. This allows the interpreter to


build on past knowledge rather than trying to reinvent
the wheel! This whole process gives the client
confidence that the interpretation is sound, well
researched and provides an up-to-date view. It can take
up to 20% of project time to search, read and absorb the
information as well as to present it in a useable form.
With the availability of GIS applications, e.g. ArcView™,
it is now possible to scan and geo-reference existing
maps and interpretations so that they can be accurately
superimposed onto your data. In this way maximum
use can rapidly be made of existing data by correlating
old and new data and interpretation concepts/ideas,
making the end product a more comprehensive and
integrated study. Profile data in the form of seismic
sections and geological interpretations can also be used
within the GIS application by geo-referencing the end
points of the profile and scanning the profile, such that
clicking on the profile line brings up an image of the
profile. With Internet linkage of computers now
commonplace, it is possible to hot-link data, articles,
etc. from other web sites at a click of a mouse button
within your GIS application. The advantage of such a
final presentation format is that the client can add to and
expand the project to keep the study up to date, which
would not be possible with hard copy reports. This is
not to say that the client will not also require a hard
copy report or PowerPoint presentation to help guide
them through the study and stress what the important
conclusions are. Often a formal presentation is costed
into the contract so the client is fully aware of the results,
their uncertainties and suggestions for further work. This
also allows the exploration team working the prospect
to become more aware of the work done and provides
the opportunity for dialogue, which is generally good for
both parties.

The set of geo-referenced images/overlays thus
provides an environment within which it is now possible
to undertake Qualitative interpretation in an efficient
manner. The interpretation will involve the recognition
and mapping of: (i) the nature of discrete anomalous
bodies including intrusions, faults and lenticular
intrasedimentary bodies - often aided by the running of
simple test models, (ii) disruptive cross-cutting features
such as strike-slip faults, (iii) presence of mutual
interference of gravity and magnetic responses, (iv) age
relationships of intersecting faults, (v) structural styles,
and, (vi) unifying tectonic features/events that integrate
seemingly unrelated interpreted features. Such analysis
will result in a preliminary structural model/framework of
mapped features that fits all the available data within
both the concession area and also the larger regional
study area. This initial model will be refined later by the
results of quantitative modelling.


20.3 Quantitative Interpretation

The qualitative results can now be used as the basis for
detailed quantitative 2D (profile based) and 3D (grid
based) modelling. This includes the more precise
determination of the xyz location of known and inferred
geological structures. Depth estimation is a major
element of this quantitative stage, which can be
determined by various methods (see subsequent
sections). A qualitative foreknowledge allows the
interpreter to better discriminate between geological
solutions on the basis of likely body types. This same
qualitative foreknowledge also benefits the quantitative
modelling stage when seismic control is less than clear.

The interpretation of magnetic data is theoretically more
involved than the corresponding gravity data, due to: (i)
the dipolar nature of the magnetic field in contrast with
the simpler monopolar gravity field, and, (ii) the
latitude/longitude dependent nature of the magnetic
response for a given body due to the variable strength
and inclination of the Geomagnetic field. Despite this,
interpretation of magnetic data is in practice often
simpler than that of gravity due to the smaller number of
contributory sources. Often, though not always, there is
just one source - the magnetic crystalline basement.
The gravity response, by contrast, is generated by the
entire geological section.

The complexity of magnetic anomalies, conferred by the
dipolar nature of the inducing magnetic field, lends an
interpretative advantage in the case of intra-sedimentary
magnetic bodies, as the dipolar response is particularly
diagnostic of the disposition (e.g. dip) of the source. It is
for this reason that it is important for the interpreter to be
familiar with the wide range of induced magnetic
responses produced by simple yet geologically sensible
bodies specific to the magnetic inclination of the region.
It is important to remember that the shape of a magnetic
anomaly over a body (e.g. a contact, block, or sill)
depends on the inclination of the geomagnetic field and
the orientation of the body with respect to the field,
whereas increasing the depth of the causative body
reduces the amplitude and increases the wavelength of
the anomaly (assuming the geomagnetic inclination and
body orientation are kept fixed). Seeking mutual consistency
of both gravity and magnetic interpretations tends to
ensure that results are more sensible and ambiguities
minimized. So there are often distinct advantages of
being able to include both gravity and magnetic data in
the interpretation process.
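These amplitude and wavelength relationships are easy
to verify numerically. The short Python sketch below is a
minimal illustration only; the profile geometry, unit dipole
moment and 25° inclination are arbitrary assumptions,
not values taken from the course examples. It computes
the induced TMI profile over a point dipole at two depths
and prints how the anomaly amplitude falls and its width
grows as the source deepens, while the anomaly shape
is fixed by the inclination:

    import numpy as np

    def dipole_tmi(x, depth, inc_deg):
        # TMI anomaly (arbitrary units) along a profile in the magnetic
        # meridian over an induced point dipole; x = magnetic north, z = down.
        I = np.radians(inc_deg)
        f_hat = np.array([np.cos(I), np.sin(I)])   # geomagnetic field direction
        m = f_hat                                  # induced magnetisation, unit moment
        rx, rz = x, -depth * np.ones_like(x)       # source-to-observation vectors
        r = np.hypot(rx, rz)
        rhx, rhz = rx / r, rz / r
        mdotr = m[0] * rhx + m[1] * rhz
        bx = (3 * mdotr * rhx - m[0]) / r**3       # dipole field components
        bz = (3 * mdotr * rhz - m[1]) / r**3
        return bx * f_hat[0] + bz * f_hat[1]       # projection onto the field = TMI

    x = np.linspace(-20e3, 20e3, 4001)             # 40 km profile
    for h in (1e3, 3e3):                           # the same body at 1 km and 3 km depth
        t = dipole_tmi(x, h, inc_deg=25.0)
        sep = abs(x[np.argmax(t)] - x[np.argmin(t)]) / 1e3
        print(f"depth {h/1e3:.0f} km: amplitude {t.max() - t.min():.2e}, "
              f"extrema separation {sep:.1f} km")

Running this shows the deeper body giving a smaller-amplitude,
longer-wavelength version of the same asymmetric
(inclination-controlled) anomaly.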







20.3.1 Profile (2D) modelling

Profile 2D modelling of potential field data is an
important aspect of the quantitative interpretation
process. It is often performed using a 'bottom-up /
outside-in / magnetics first' approach. This tends
to ensure that deep (magnetic basement related)
and distal sources, which impact regionally on the
concession block, are sensibly configured before
attention is focused on the detail, i.e. shorter wavelength
signals, within the concession block. In practice the
signals, within the concession block. In practice the
interplay of deep, distal and centrally shallow crustal
features invariably requires a degree of iteration
between deep and shallow source assignment. Final
modelling conclusions rely on seeking consistency
between the gravity and the magnetic data while
adhering to other data constraints and using sensible
geological principles. The 'magnetics first' modelling
approach is favoured, recognizing that the sedimentary
section within petroleum provinces often possesses little
in the way of significant magnetic susceptibility, in which
case by far the largest proportion of magnetic signal is
generated at the crystalline (igneous or metamorphic)
basement level. This is useful, because unlike gravity
where the entire section contributes to the observed
field leading to a potentially confused overprinted
picture, all but the shortest wavelength magnetic
responses can be ascribed to the underlying basement.
Where intra-sedimentary magnetic sources exist, these
are often sufficiently discrete and of short wavelength
character to be recognised for what they are. The
modelling of the magnetic data is therefore particularly
important for extending interpretation below the effective
level of seismic penetration using a combination of
depth estimates and 2D modelling. Once the magnetic
data have been addressed in this way, consistency is
sought with the longer wavelength gravity features. By
this means any remaining long wavelength gravity
anomalies may be more properly ascribed to broad
shallow sources, rather than to deep sources.

The forward modelling interpretation process is
essentially recursive, where qualitative results prime the
quantitative stage, and results from the latter require
adjustment of the former and so on. A crucial aspect of
interpretation is recognising the arrival of the point of
diminishing returns, that is, beyond which
disproportionate effort generates few useful results. At
this stage only the inclusion of additional data, e.g. on a
more detailed scale, is likely to reinvigorate the
interpretation process.

2D interpretation within the concession block is usually
done using a series of individual profiles, which often
need to extend well outside the concession block to
provide adequate control on regional structures and
their responses. Having these individual profiles linked
together by an axial profile or a network of
interconnecting profiles is desirable particularly if 3D
interpretation is to be undertaken. '2D' means two
dimensional in the sense that the profile being
interpreted is perpendicular to the strike (axis) of the 2D
structure being studied, and the 2D structure is assumed
to have a constant cross section and infinite extent in
and out of the plane of the profile. This simplistic model
assumption is generally a reasonable approximation for
structures whose strike length to width ratio is 4:1 or
greater, so long as the profile is located centrally along
strike of the body. Modifications to this assumption are:

2.5D: here the structure has a constant cross
section but the body has limited extent in and out of
the plane of the profile. The extent of the body in
and out of the profile plane needs to be defined.
2.75D: here the structure is the same as the 2.5D
structure except that the body is orientated at an
angle to the profile, rather than being perpendicular
to it.
The positioning of profiles needs to be such that they will
generate maximum information on the structure being
modelled. This can only be done if the profile locations:

have good gravity and magnetic control
have directions as perpendicular as possible to
the strike of the anomalies being interpreted, and
tie into known subsurface information based on
seismic and well data.

To maximise these profile requirements it is generally
good practice to include a tie-line profile, since this
enables internal model consistency between profiles in
terms of geological structures and physical properties
(density and susceptibility).


20.3.2 Grid (3D) modelling

Grid (3D) modelling and inversion methods often require
the initial setting of the physical parameters; this is the
reason for carrying out the 2D interpretation as the
preliminary step. Normally 3D modelling is restricted to
gravity modelling, using seismic, well and magnetic depth
estimates as constraints on the gravity based model. In
determining magnetic depth estimates good practice
requires that two independent methods (e.g. Euler &
Werner; Euler & graphical depth estimates; spectral and
[Euler or Werner or Naudy], etc.) be used to provide
such estimates, which gives some degree of
confidence. Some of the depth estimation methods
cited are based on 2D assumptions, and studies using
Euler 2D and 3D methods indicate that the 3D solutions
are more coherent and less prone to spurious solutions.
The reason for this is that it is highly unlikely that all
anomalies identified in a profile will conform to the
simple 2D assumption. 3D forward modelling and
inversion will be discussed in later sections.



SECTION 21: STRUCTURAL (EDGE) MAPPING -
DETECTION OF FAULTS & CONTACTS



21.1 Introduction

In their basic form, gravity and magnetic data image the
response of structural edges, such that by careful
analysis we can calculate the location and depth of
these edges. These edges are geological features such
as:
Geological contacts separating rock types with
differing density and/or susceptibility, or
Faults with lateral changes in susceptibility and/or
density.
Since these lateral changes in physical properties
generate the magnetic and gravity anomalies, and these
changes can occur at any depth, it is not surprising that
the gravity and magnetic fields can reflect a complex
set of structural edges (faults and contacts). It is the role
of the interpreter to map these structures and their depth.
This section deals with methods of mapping the spatial
location of these edges.

The gravity response of a subsurface density (scalar)
body (assume a point source) is proportional to 1/r²
(the inverse square of distance), whereas the total
magnetic intensity (TMI) response of the same
subsurface magnetic (vector) body is proportional to
1/r³. Thus the equivalent of the TMI field in gravity terms
is dg/dz (i.e. the vertical gradient). Because of the
different fall-off rates of the TMI and gravity fields,
another important factor to consider is that the gravity
field represents the response of the integrated mass
(density) distribution of the sub-surface body, whereas
the TMI field responds more to changes in surface
magnetostatic charge, which occur principally at the
corners (edges) of bodies.
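The differing fall-off rates can be checked with a few
lines of Python (a minimal sketch; the unit-amplitude
sources and height range are arbitrary assumptions):

    import numpy as np

    # Peak gravity above a point mass falls off as 1/r^2; peak TMI of a
    # dipole falls off as 1/r^3, the same rate as dg/dz.
    r = np.logspace(0, 2, 50)            # observation distances above the source
    g = 1.0 / r**2                       # point-mass gravity (arbitrary units)
    dgdz = np.gradient(g, r)             # numerical vertical gradient of gravity
    tmi = 1.0 / r**3                     # dipole TMI (arbitrary units)

    # log-log slopes recover the fall-off exponents (~ -2, -3 and -3)
    for name, f in [("g", g), ("dg/dz", np.abs(dgdz)), ("TMI", tmi)]:
        slope = np.polyfit(np.log(r), np.log(f), 1)[0]
        print(f"{name}: fall-off exponent ~ {slope:.2f}")

The fitted exponents confirm that the vertical gravity
gradient decays at the same 1/r³ rate as the TMI of a
dipole.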

Many authors have investigated the response of gravity
and magnetic anomalies over structural edges, and
many of these authors have already been referred to in
these Course Notes.

Phillips (2000) investigated the location of magnetic
contacts (see Figure 21/1).

Thurston and Smith (1997) investigated the location of a
magnetic vertical contact (Figure 21/2, already shown as
Figure 19/32, page 19-10).

Fairhead and Williams (2006) investigated the various
magnetic responses from normalised phase derivatives
(see Figure 21/3 and pages 19-8 to 19-10 and 19-26 to
19-29): the Theta derivative after Wijns et al. (2005) and
the TDX derivative after Cooper and Cowan (2006).






See also Keating and Pilkington (2004) and Pilkington
and Keating (2010).



Figure 21/1: Set of induced magnetic derivatives
generated from simple geological models of a vertical
contact, sloping contact, vertical dyke and horizontal
sill. HGM-RTP is the horizontal gradient magnitude of
the reduced-to-pole magnetic field; HGM-PG is the
horizontal gradient magnitude of the pseudo-gravity;
AS-MAG the analytic signal of the observed magnetic
field; AS-FVI the analytic signal of the first vertical
integral of the magnetic field; and LW-MAG the local
wavenumber of the observed magnetic field.

Derivatives work well for RTP magnetic data since the
magnetic anomaly will be symmetrical about the
magnetic source and not distorted by the inclination of
the geomagnetic field. Thus the lateral locations of
edges will be vertically over the upper corner vertex.






Figure 21/2: Source Parameter Imaging (SPI method
after Thurston and Smith, 1997). Shows the local
phase and local wavenumber and how they relate to
the contact edge. Note: their vertical derivative is
plotted as -dT/dz and not dT/dz (the vertical derivative
should be in phase with the TMI anomaly).


Figure 21/3: Induced magnetic responses over a
vertical contact for derivatives of the vertical
inclination field. The Tilt and TDX derivatives are
confined to the range ±π/2 and pass through zero at
the same point as the dT/dz = VDR, which for RTP data
is directly over the vertical contact. The Theta
derivative ranges from 0 to 1, with a peak where the
VDR passes through zero; its angle is the absolute
value of the Tilt.

For gravity data the Bouguer anomaly is the response of
the mass body at depth. If the edge of the body is a
vertical contact then the horizontal derivative will peak
(maximum slope) over the edge of the vertical contact.
If, however, the contact dips, the peak moves laterally
as shown in Figure 21/4. Conversely, the magnetic peak
for the horizontal derivative of the RTP field will remain
over the vertex.

The important thing to note is that all derivatives are
sensitive to contacts (edges) in one way or another,
showing up as a maximum or as a zero crossing. This
variability of location is not a major problem but needs
to be recognised in gravity data.

How do we identify the contact and map it?


Figure 21/4: Movement of local gravity THDR
maxima due to dip of contact.



21.2 Mapping contacts (edges) using
RTP data

Mapping the structural edges is one of the first activities
of interpretation. The best way to appreciate the
mapping steps and their pitfalls is to demonstrate
them using the Bishop 3D basement model (see Figure
21/5 and Section 19.8.3 for further information).

The Bishop model assumes a constant susceptibility for
the basement and zero susceptibility for the overlying
sediments. Figure 21/5 illustrates the variation in
thickness of the sediments, or depth to basement. For
any mapping of structure the first action is to remove the
complexity of the anomalies (see Figure 19/67C for
inclination 25°) by applying the reduction to pole (RTP)
transformation. The RTP field is shown in Figure 21/6
and has a close correlation with the basement surface
that is generating the field.






Figure 21/5: The Bishop basement model

By deriving the Total horizontal derivative (THDR, see
Section 19.2.2) the structural edges appear as maxima
and are thus easier to visualise and map. In Figure 21/7
two methods are shown to track structural edges: the
THDR of the RTP data and the THDR of the Pseudo-
gravity (PsGr). Which is best?

First, what is Pseudo-gravity? If one assumes that there
is a fixed relationship between the magnetic susceptibility
of a rock and its density, then the magnetic field map
(Figure 21/6) can be transformed into its Pseudo-gravity
field equivalent (Figure 21/7, bottom left). This assumes
that the magnetisation is uniform and induced and
contains no remanence.
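For readers who wish to experiment, both transforms are
conveniently applied in the wavenumber domain. The
sketch below is a minimal implementation of the standard
operators (after Blakely, 1995), assuming purely induced
magnetisation on a regular grid and inclinations well away
from the equator (the operator is unstable at low
inclinations); the scaling constants linking pseudo-gravity
to true gravity units are omitted, so only the anomaly
shape is reproduced:

    import numpy as np

    def rtp_and_pseudogravity(tmi, dx, dy, inc_deg, dec_deg):
        # Wavenumber-domain reduction to pole and pseudo-gravity (shape only).
        ny, nx = tmi.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
        KX, KY = np.meshgrid(kx, ky)
        k = np.hypot(KX, KY)
        I, D = np.radians(inc_deg), np.radians(dec_deg)
        with np.errstate(divide="ignore", invalid="ignore"):
            # theta = f_hat . k_hat term (same for magnetisation and field when induced)
            theta = np.sin(I) + 1j * np.cos(I) * (np.cos(D) * KX + np.sin(D) * KY) / k
            rtp_f = np.fft.fft2(tmi) / theta**2     # RTP operator = 1 / theta^2
            psgr_f = rtp_f / k                      # vertical integration -> pseudo-gravity shape
        rtp_f[k == 0] = 0.0                         # zero the undefined DC term
        psgr_f[k == 0] = 0.0
        return np.real(np.fft.ifft2(rtp_f)), np.real(np.fft.ifft2(psgr_f))

In practice grids should be de-trended and padded before
the FFT to limit edge effects.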

For 2D structures, Figure 21/7 uses the Bishop model to
show how the Total horizontal derivative of the Pseudo-
gravity outperforms the Total horizontal derivative of the
RTP for tracking lineaments.

For 3D structures, Figure 21/8 shows a simple magnetic
dipole with I = 55° transformed to RTP. The total
horizontal derivative of the RTP has its maxima tracking
the shape of the dipole source using the Blakely and
Simpson (1986) method (for details of the tracking
method see Section 19.4.5).

The lower half of Figure 21/8 shows the route to the total
horizontal derivative via the Pseudo-gravity. The profiles
of the RTP and PsGr show that the RTP positive central
anomaly is flanked by a small negative anomaly
whereas the PsGr is not. When these fields are
converted into total horizontal derivatives, the result is a
double circular maximum for the RTP field and a single
maximum for the PsGr field. When the Blakely and
Simpson (1986) maxima tracker is applied to these
derivative fields the RTP data generate a weak
multiple or 'ringing', whereas the true size and shape of
the dipole is that shown by either the PsGr field or the
inner circle of the THDR of the RTP data.


Figure 21/6: Reduced to Pole (RTP) of the 3D Bishop
model

The tracking of the maxima by the Blakely and Simpson
(1986) method is based on a 9-point filter (see Figure
19/59) which generates a score of 0 to 4, where 0 to 2
are poor maxima while 2 to 4 are good maxima. Figure
21/9 shows how dramatically the maxima are cleaned up
by plotting only the 2-4 results. To illustrate how these
maxima tracks fit the geological basement model, the
tracks have been superimposed in Figure 21/10. No
account is taken of the size of the THDR anomaly, since
the amplitude is a function of susceptibility contrast and
weak susceptibility contrasts may have no relation to the
importance of the edge feature being mapped.
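The essence of the maxima scoring is easily reproduced.
The following sketch implements the counting step only
(the sub-grid parabolic positioning of Blakely and
Simpson (1986) is deliberately omitted), scoring each
interior node of a THDR grid by the number of the four
directions along which it exceeds both neighbours:

    import numpy as np

    def bs_score(grid):
        # Score interior nodes 0-4: the number of directions (E-W, N-S and
        # the two diagonals) in which the node exceeds both neighbours.
        c = grid[1:-1, 1:-1]
        score = np.zeros(c.shape, dtype=int)
        pairs = [((1, 0), (1, 2)),    # E-W neighbours
                 ((0, 1), (2, 1)),    # N-S neighbours
                 ((0, 0), (2, 2)),    # NW-SE diagonal
                 ((0, 2), (2, 0))]    # NE-SW diagonal
        for (ia, ja), (ib, jb) in pairs:
            a = grid[ia:ia + c.shape[0], ja:ja + c.shape[1]]
            b = grid[ib:ib + c.shape[0], jb:jb + c.shape[1]]
            score += ((c > a) & (c > b)).astype(int)
        return score    # plot only the nodes scoring 2-4 to clean up the maxima

Applying bs_score to a THDR grid and masking scores
below 2 reproduces the clean-up illustrated in Figure
21/9.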





Figure 21/7: Commonly used magnetic intensity derivatives used to define structural edges. These are: top right -
the Total horizontal derivative; bottom right - the Analytic signal; and bottom left - the Pseudo-gravity and its Total
horizontal derivative.



Figure 21/8: Differences in RTP and PsGr for structural mapping





Figure 21/9: Tracking the maxima of the total horizontal derivatives of the reduced to pole (RTP) and Pseudo-gravity
(PsGr) fields. The ringing phenomenon is clearly seen in the RTP derivative results.
Other derivatives that can map structural edges include
the Vertical derivative (VDR, see Section 19.2.3) and the
second vertical derivative (see Section 19.2.4). These
derivatives delineate the edges via their zero value
contour, which in visualisation terms is not as easy to
identify as the maxima of the total horizontal derivative.
The Analytic signal (AS, see Section 19.2.6) is also
referred to as having maxima over the edges of
structures. This is indeed the case in theory, but in
practice it comes a poor second to the total horizontal
derivative.


Figure 21/10: The excellent fit between the basement
topographic model and the 2-4 maxima tracker results
derived from the PsGr transformation.

21.3 Tracking structures (edges) close
to the magnetic Equator

When magnetic data are transformed to the pole (RTP),
structures (edges) of all azimuths can be seen so long
as the magnetisation of the body is large enough. All
derivatives so far shown in section 21 can be classified
as derived from the magnitude of the RTP. This means
large amplitude TMI or RTP anomalies will generate
large amplitude derivatives (i.e. THDR, AS and VDR). In
structural mapping we also wish to map structures that
are associated with weak magnetic anomalies. Thus
derivatives that relate to the Local Phase have been
introduced, which are independent of magnetisation.
Such phase derivatives will be shown to be important at
the magnetic equator as well as in any RTP field
analysis.
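The magnitude and phase derivatives referred to here
are all built from the same three gradients of the field. A
minimal grid sketch follows (assuming a regularly gridded
field; horizontal derivatives by finite differences, the
vertical derivative by the wavenumber-domain |k|
operator, which presumes padded, de-trended data):

    import numpy as np

    def magnitude_and_phase_derivatives(T, dx, dy):
        # Returns THDR, AS (magnitude based) and Tilt, TDX, Theta (phase based).
        Ty, Tx = np.gradient(T, dy, dx)            # horizontal derivatives
        ky = 2 * np.pi * np.fft.fftfreq(T.shape[0], dy)
        kx = 2 * np.pi * np.fft.fftfreq(T.shape[1], dx)
        K = np.hypot(*np.meshgrid(kx, ky))
        Tz = np.real(np.fft.ifft2(np.fft.fft2(T) * K))   # vertical derivative
        thdr = np.hypot(Tx, Ty)                    # scales with anomaly amplitude
        AS = np.sqrt(Tx**2 + Ty**2 + Tz**2)        # also amplitude dependent
        tilt = np.arctan2(Tz, thdr)                # phase based, limited to +/- pi/2
        tdx = np.arctan2(thdr, np.abs(Tz))         # phase based companion of the Tilt
        theta = thdr / np.where(AS == 0, np.nan, AS)     # 0 to 1, amplitude free
        return thdr, AS, tilt, tdx, theta

Because tilt, tdx and theta are ratios of gradients, the
amplitude (and hence the magnetisation strength)
cancels, which is exactly why these phase derivatives
respond equally to strong and weak sources.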

To illustrate what happens for 2D structures close to the
magnetic equator, Figure 21/11 shows how the field of
the Bishop model changes if located at the Pole and at
the Equator. The W-E striking anomaly is clearly seen in
the RTE but the anomaly sign has changed. The main
N-S striking anomaly, clearly seen in the RTP data, is
however lost to a string of positive and negative
anomalies.

To illustrate what happens for a 3D structure, Figure
21/12 shows the magnetic response of a dipole
(I = -37.3° and D = -2.1°) at the pole (RTP) and at the
magnetic equator (RTE). As described for Figure 21/8,
the dipole edges for the RTP field are well defined for all
azimuths whereas for the RTE data only the north and
south



edges are delineated. The west and east edges are not
imaged. This results in all derivatives being smeared out
in a west-east direction and north-south orientated
structures not being imaged. This phenomenon is often
described as magnetic anisotropy (see Sections 18.3
and 19.2.6).


Figure 21/11: The Bishop model showing the RTP field
(left) and the RTE field (right).



Figure 21/12: The TMI field of a magnetic dipole at
500 m depth with Inclination -37.3° and Declination
-2.1°. The RTP and RTE fields show the azimuthal
variation in the dipole anomaly. The RTP and its
associated Tilt derivative are positive and isotropic
with azimuth, while the RTE and its associated Tilt
derivative are negative and anisotropic with
azimuth. For simplicity the Tilt derivative anomalies
are restricted to their positive values (Tilt of RTP) and
to their negative component (Tilt of RTE) to better
define the zero contour. Colour bars are non-linear
due to colour equalization in this figure and
subsequent figures.

This anisotropy is clearly seen in the Bishop and dipole
models when the RTP and RTE are compared side by
side (see Figures 21/11 & 21/12).

A real example is shown in Figure 21/13 for the
Malay Peninsula, which is located over the magnetic
equator. The edges of the circular granite body are
clearly seen in the TMI map on the north and south
sides but not on the west and east sides. Enhancement
of the TMI data (map A) by applying the Tilt derivative
(map B) fails to enhance the weak to absent anomalies
along the N-S striking edges.



Figure 21/13: A granite structure in Peninsular
Malaysia showing the effects of the magnetic
anisotropy preventing the N-S striking edges from
being seen.

To help resolve N-S trending structures in magnetic
data within magnetic equatorial regions, the first-order
derivative Analytic signal (for poor quality data) and the
second-order derivative Local wavenumber (for high
quality data) become important. These derivatives are
for all practical purposes independent of inclination
and thus help to improve the resolution of structures
such as faults where they form a boundary between
shallow short-wavelength anomalies and deeper longer-
wavelength anomalies. Two examples are shown in
Figures 21/14 and 21/15. The minerals example (Figure
21/14) is located in Namibia over near-surface
basement, while the oil example (Figure 21/15) is over a
major Karroo rift basin in southern Tanzania.




Figure 21/14: The TMI Namibian data has been reduced to pole and to the equator and then converted to Analytic
signal. The arrows show near N-S trending features which are difficult to image in the RTE but whose existence is
clear from the Analytic signal derivatives.





Figure 21/15: Shows the significant difference between converting the TMI image (A) to RTP (B) and to RTE (C).
Assuming the RTE (C) approximates the TMI field close to the equator, then structural mapping is only possible if
(C) is converted to Analytic signal or Local wavenumber (see Figure 21/16) (Salem et al., 2011).

Figure 21/16: Tracking major structures (edges) using the Analytic signal of the RTE and the Local Wavenumber of
the RTE (Figure 21/15C). Both of these derivatives show up the change in wavelength and susceptibility content
well, and the clear faulted contacts between the basement and the deep sedimentary basin.

For Namibia, prior to the aeromagnetic survey, the
geological map sheet was as shown in Figure 21/17.

Figure 21/17: Original geological map of the study
area, prior to the aeromagnetic survey.

By using a range of derivative maps (Figure 21/18) it is
possible to generate both a new structural and a new
geological map for the region (Figure 21/19).

Figure 21/18: Example of just two of the derivative
maps used.
Not all faults have associated anomalies; shear
faults can often destroy the magnetic properties along
the fault, and the only way to detect such a fault is by the
truncation of anomalies meeting the shear zone.

Figure 21/19: New Geology and Structural map


21.4 Direction of susceptibility contrast
(direction of fault throw)

Besides delineating the fault or contact, it would be
useful to know the fault throw direction and/or the sense
in which susceptibility changes across the fault or
contact. By investigating the gradient of the Tilt anomaly
across a contact/fault for RTP data (thick black profiles
with arrows in Figure 21/20) it can be seen that the slope
is negative towards the low susceptibility side (i.e. from
basement to sediment). This gives the direction of dip
but not its magnitude (Fairhead et al., 2011).





Figure 21/20: Tilt derivatives for different field
inclinations and for two model types.

The problem with using the zero contour locations to map
faults/contacts (edges) is that faults are generally linear
and of finite length with variable throw along their length,
generating a variable magnetic response, whereas a
contact can be both linear and/or a closed feature
defining a discrete geological structure in plan view, with
constant magnetic response. However, a map contour
(in this case the zero Tilt derivative contour) has a
closed form and its shape can be controlled by a range
of spatial factors relating to the 3D distribution of
anomaly sources.

How can we identify the sections of the zero contour
that are located over edges? To answer this, we need
to use derivatives that are based on the magnitude of
the TMI or RTP fields, since where an edge exists
between two blocks of different susceptibility an anomaly
will be generated and distinctive THDR, VDR and AS
anomalies will result. In the case described
below we use the Analytic signal (AS) to determine the
susceptibility contrast (ΔK) across the zero contour of
the Tilt derivative. If ΔK is large then an edge is likely; if
it is small no edge is present.



Figure 21/21: For the southern Tanzania area the sign of the gradients crossing the edge (Figure 21/20) is plotted
as arrows crossing the zero contour of the Tilt derivative (remember the zero contour marks the edges of
structures).

Figure 21/22A: The Analytic signal for the same area of southern Tanzania, but with a 0.005 nT/m threshold
removed so that subtle anomalies in the deep basin can be visualised.





21.5 Susceptibility contrast

As previously described (Section 19.2.7) the Tilt
derivative is devoid of geomagnetic field intensity and
susceptibility information. To generate an estimate of
the susceptibility contrast we need to use a first-order
derivative of the field magnitude that has its maximum
value over the edge of the fault/contact. The Analytic
signal (AS) is used rather than the THDR or VDR since it
works well for RTE data and can be considered, for most
practical purposes, to be independent of the directions of
induced magnetisation and remanence. For the case of
a 2D vertical contact model in a vertical field (RTP), the
equations for |A| (after Nabighian; see Section 27.4,
page 27/11) can be used to generate the Analytic signal
|A| response over a buried vertical contact at depth zc
and, by rearranging, to obtain an expression for the
susceptibility contrast ΔK:

    ΔK = |A| zc / (2 F c)        (5)

where |A| is the peak amplitude of the Analytic signal
over the contact, F is the field strength and c is a
constant.
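Numerically, equation (5) is trivial to apply once the AS
peak, the depth and the field strength are known. A
minimal sketch follows; the input values below are purely
illustrative, and the constant c, which depends on the
field geometry, is given a placeholder value of 1 and
must be set appropriately in real use:

    import numpy as np

    def susceptibility_contrast(A_peak, z_c, F_nT, c=1.0):
        # Equation (5): delta-K = |A| * zc / (2 * F * c)
        # A_peak: peak Analytic signal amplitude over the contact (nT/m)
        # z_c:    contact depth (m);  F_nT: field strength (nT)
        # c:      geometry-dependent constant (c = 1 is a placeholder only)
        return A_peak * z_c / (2.0 * F_nT * c)

    # e.g. a 0.05 nT/m AS peak over a contact at 2000 m in a 33000 nT field
    print(susceptibility_contrast(0.05, 2000.0, 33000.0))   # ~1.5e-3 SI

Note that a depth estimate (zc) is required first, which is
why this step depends on the depth estimation methods
described in the following sections.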
This method (Fairhead et al., 2011) has been applied
along the zero contour of the Tilt derivative in Figure
21/22B, producing a colour plot of the susceptibility
contrast with the warmest colours (red) indicating the
largest susceptibility contrast. Furthermore, the
thickness of the colour line on the zero contour is
modified such that it increases with susceptibility
contrast. These two effects, line width plus warm colour,
help to reinforce where significant contacts with large
ΔK values are located along the zero contour of the Tilt
anomaly.

The above display is important for interpretation
purposes since it helps to define the location of faults.
Basement faults can reasonably be assumed to be
linear, while the zero contour of the Tilt angle map is a
closed line. Thus only discrete sections of the zero Tilt
derivative contours will be tracking faults (edges), and it
is therefore important to be able to recognize which
parts of the contour lines coincide with a fault.

The methods described here thus provide important
constraints that allow interpreters to efficiently and
accurately develop a valid structural interpretation
consistent with the magnetic data, often all within a GIS
software package.

When the Analytic signal is converted to susceptibility
contrast (Figure 21/22B), the ENE-WSW trending feature
zoomed into in Figure 21/21 and labelled W in Figure
21/22 is associated with a strong susceptibility contrast,
with its dip direction pointing into the basin. This
information all points to a deep-seated fault dipping to
the NW.







Figure 21/23: Structural interpretation of the Karoo basin in southern Tanzania based solely on magnetic data.

The N-S striking feature Z in Figure 21/22B suggests
that there exists a half graben with the master fault on
the east side dipping west. This is somewhat inconsistent
with the structural interpretation in Figure 21/23B and thus
needs further investigation. Feature X (Figure 21/22A)
suggests a NNE-SSW trending feature not identified in
the Tilt derivative map. Thus the zero Tilt contour has to
be used with care, and this reinforces the need for the
structural interpreter to use a range of derivative maps
to aid the interpretation.

The end result is our ability to construct a meaningful
structural model as shown in Figure 21/23. The only
thing missing is depth, which is discussed in the
following sections.


21.6 Practical Tips

For gravity data, if you have good gravity station
coverage then the Total horizontal derivative will do an
excellent job of identifying large and small edges
with large and small density contrasts. For edges with
small density contrasts the Tilt derivative prevents the
domination of the large density contrast edges.
Differentiating shallow from deep edges can be done
using spectral methods such as depth slicing (see
Section 27.2.1), which uses the spectral content of the
data to help define wavelength ranges associated with
structures at different depths.

For magnetic data there is a range of problems. For
RTP data we have shown that tracing 2D edges is best
done using the horizontal derivative of the Pseudo-gravity.
Again this method tends to favour large magnetic
contrasts, so to identify small susceptibility contrasts the
Tilt of the RTP anomaly can be used; the problem to
appreciate is that the structural edge is then represented
by the zero contour of the Tilt anomaly. This can be
rectified by applying the Theta derivative (see Section
19.8.3). Would the Theta derivative of the Pseudo-gravity
be better than the Theta derivative of the RTP?

When tracing edges it is well worthwhile to check all
the standard derivative methods, i.e. those applied in
this section plus dT/dz, d²T/dz², AS and local
wavenumber.

For magnetic data close to the Equator we have seen
the effects of magnetic anisotropy, where N-S striking
edges are difficult to identify or absent. As recommended
here, the use of the Analytic signal (AS) and/or Local
wavenumber becomes important since they are
inclination-independent derivatives. What these
derivatives show in Figure 21/16 is that, through a
combination of amplitude and wavelength content, they
are good at identifying major edges such as the
bounding faults of sedimentary basins separating
shallow basement from deep basement.


SECTION 22: ESTIMATING MAGNETIC DEPTH:
OVERVIEW, PROBLEMS & PRACTICAL HINTS
(Based on Li, 2003, The Leading Edge, Vol. 22, No. 11 (Nov): 1090-1099)



22.1 Introduction

Magnetic depth estimation plays an important role in
magnetic interpretation. A complete quantitative
interpretation of potential field data aims to estimate
three types of information about sources of geological
interest: the depth, the dimension, and the contrast in
the relevant physical property. Such an interpretation
suffers from inherent ambiguity. It is impossible to obtain
all three types of information simultaneously without a
priori information. In many oil applications, we are often
interested in magnetic basement depth more than either
dimension or physical property contrast. Thus different
quick methods have been developed, over half a
century, to estimate the magnetic depth. These methods
work for simplified source geometries (dimensions) and
are independent of the susceptibility contrast. The
depths estimated by these methods can be used as the
final, quantitative solution in some ideal situations: the
anomaly is well isolated and the noise is insignificant or
well removed. Estimated depths often provide a good
starting point for a genuine structural interpretation, e.g.,
an interactive modelling or a constrained inversion.
In petroleum exploration, for example, the structural
surface interpreted from magnetic depth estimates is
often the best available approximation to the true
crystalline (i.e., metamorphic/igneous) basement
configuration. Basement depth (or equivalently,
sedimentary thickness) is a primary exploration risk
parameter. Estimates of basement depth are directly
applicable to basin modelling (e.g., source rock volume
estimation) and thermal maturity applications (e.g.,
source-rock burial-depth). Basement structure inferred
from magnetic depth estimates provides insight into the
evolution of more recent sedimentary features (e.g.,
sub/mini-basin compartmentalization, salt structure
distribution/kinesis, localization of reservoir-bearing
structures) for instance, in areas where the inherited
basement fabric/architecture has affected (either
continuously or episodically) basin evolution and
development. Examples of the latter are the prolific,
passive rifted margins of the South Atlantic Ocean. A
basin's plumbing often exploits faults and fractures
within the sedimentary section that are of basement
origin. The movement and flow of fluids within a basin,
such as hydrocarbon migration along lateral and vertical
carrier beds, can be facilitated by basement-involved
sedimentary faults/fractures. Basin heat-flow patterns
can also be moderated by fluid circulation along
basement-involved fault systems. Thus the ability to
estimate/interpret basement structure via magnetic
depth estimates provides a more complete
understanding of critical first-order basin exploration
parameters.
Within the last decade, magnetic depth estimates have
also been widely applied for the mapping of sedimentary
faults, folds, channels, and salt structures. This more
recent application results from major advances in
magnetic surveying (primarily GPS navigation) and
interpretation technologies.
There are many depth estimation methods, and the
number keeps growing with the continual development
of new algorithms. These methods include: slope
(manual method), Naudy, Werner Deconvolution,
analytic signal, Euler Deconvolution, Euler
Deconvolution of the analytic signal, SPI™ (local
wavenumber) or TDR_THDR (Total Horizontal derivative
of the Tilt derivative), and spectral analysis. The Peters
half-slope is one of many manual methods developed
fifty years ago, and its variants include the straight-slope
and the Bean ratio. The other methods (Naudy, etc.) are
often categorized as automatic methods. The SPI™ has
been developed only in the last six years. Even so, no
single method is best overall. In fact, there may never
be an ultimate solution to the depth estimation problem.
At present, personal preferences fill the void that rational
selection criteria should occupy. Different persons or
groups prefer different methods. No accepted guidelines
have been established to help in the selection of a
proper or optimal depth estimation method (or methods)
from the many possible candidates.
The best guideline is that a proper or optimal method
should be selected according to the data quality and
the nature of one's particular geological problem.
The recommendation is that in practice, in order to
produce an accurate depth solution, it is better to
use more than one reasonable method, together
with experience and other geological and
geophysical knowledge.

22.2 Manual Methods

See Section 23 for a comprehensive set of manual
methods.

22.3 Automatic Methods



The five main depth estimation methods may be
described as follows. The continuous wavelet transform
(CWT) method is not discussed but may in the future
become increasingly important.

22.3.1 Naudy method
(see Section 25.1 for more information): This is a
curve-matching technique. The depth estimate is done
via a look-up table technique. The accuracy is closely
associated with these tables. (Properties: Profile
method, using a moving window operator, only
suitable for 2D sources such as contacts and dikes,
does not require derivatives)

22.3.2 Werner Deconvolution
(see Section 25.2 for more information): This involves
the transformation of a complex, non-linear magnetic
inversion (for depth, dip and susceptibility) into a simple
linear inversion. (Properties: Profile method, using a
moving window operator, only suitable for 2D
sources such as contacts and dikes, uses horizontal
derivative)

22.3.3 Euler Deconvolution
(see Section 25.3 for the profile based method and
Section 26.1 for the grid based method): This uses
Euler's homogeneity equation to construct a system of
linear equations which are then solved, in a least-
squares sense, for a single source of a given type.
Since Euler's homogeneity equation holds not only for
the magnetic field itself, but also for gravity fields, their
derivatives and combinations of derivatives,
people/groups have developed Euler Deconvolution of
the analytic signal using the first- or second-order
vertical derivatives of the magnetic field so as to
determine the structural index (SI) and/or to improve the
resolution. Two advantages that Euler Deconvolution
has over other methods are its easy generalization from
2-D (profile analysis) to 3-D (grid analysis), and its
capability of being applied directly to observations at
variable altitudes. (Properties: Profile and grid based
methods, using a moving window operator, can
cope with 2D and 3D source structures, requires
first order derivatives and in some advanced forms
second order derivatives)
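The heart of the method fits a few lines of code. The
sketch below solves the conventional Euler system in a
single window by least squares, assuming the derivative
grids are already available and the structural index N is
supplied; the coordinate names and the z-positive-down
convention are choices made here, not prescribed by
any particular software:

    import numpy as np

    def euler_window(x, y, T, Tx, Ty, Tz, N, z_obs=0.0):
        # Euler's homogeneity equation, rearranged for least squares:
        #   x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T
        # Inputs are flattened arrays for one window; B is the background.
        A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
        b = x * Tx + y * Ty + z_obs * Tz + N * T
        (x0, y0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
        return x0, y0, z0, B

In full implementations this solve is repeated as the
window moves across the grid, and the choice of window
size and the solution-rejection criteria then dominate the
quality of the result, as discussed in Section 22.5 below.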

22.3.4 Source Parameter Imaging (SPI™),
Local Wavenumber, or Total Horizontal
Derivative of the Tilt Derivative
(THDR_TDR)
(see Section 25.4 for the profile based method and
Section 27.1 for the grid based method): This is
based on the complex analytic signal technique,
using either the magnitude of the analytic signal
and/or its phase. (Properties: Profile and grid based
methods, no moving window operator needed,
best for 2D sources such as contacts and dikes, the
various forms require either 1st, or 2nd and 3rd order
derivatives)
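The local wavenumber idea can be demonstrated on a
synthetic profile in a few lines, using the known RTP
result that the Tilt over a vertical contact at depth z is
arctan(x/z) (Salem et al., 2007); this is a minimal sketch
with an assumed synthetic depth, not a general SPI
implementation:

    import numpy as np

    z_true = 1500.0                        # synthetic contact depth (m)
    x = np.linspace(-10e3, 10e3, 2001)
    tilt = np.arctan(x / z_true)           # Tilt over a vertical contact (RTP)
    k = np.gradient(tilt, x)               # local wavenumber = d(Tilt)/dx
    depth = 1.0 / k.max()                  # SPI-style depth = 1 / peak wavenumber
    print(f"estimated depth = {depth:.0f} m (true {z_true:.0f} m)")

Since the local wavenumber k(x) = z / (x² + z²) peaks at
1/z directly over the contact, the depth drops out of the
peak value without any knowledge of susceptibility,
which is the attraction of the method; its weakness, as
noted below, is the higher-order derivatives needed for
real, noisy data.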

22.3.5 Spectral Analysis
(see Section 27.2): This method is normally restricted to
grids. The slope of the power spectrum is a function of
source depth, based on the assumption that there is an
ensemble of sources at a given depth. The method
follows Spector and Grant (1970). (Properties: Grid
based method, using a moving window operator,
gives the average depth based on the amplitude
spectrum contained within the window)
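A minimal sketch of the spectral depth idea follows
(assuming a regularly gridded, de-trended field with equal
spacing in x and y; ln P(k) ≈ const - 2hk for radial
wavenumber k in rad/m, so the ensemble depth is
-slope/2, and the fitted wavenumber band must be
chosen to suit the data):

    import numpy as np

    def spectral_depth(grid, dx, k_band):
        # Ensemble source depth from the radially averaged power spectrum
        # (Spector & Grant, 1970). k_band = (k_min, k_max) in rad/m.
        P = np.abs(np.fft.fft2(grid))**2
        ky = 2 * np.pi * np.fft.fftfreq(grid.shape[0], dx)
        kx = 2 * np.pi * np.fft.fftfreq(grid.shape[1], dx)
        K = np.hypot(*np.meshgrid(kx, ky)).ravel()
        P = P.ravel()
        bins = np.linspace(K[K > 0].min(), K.max(), 200)   # radial averaging bins
        idx = np.digitize(K, bins)
        k_avg, p_avg = [], []
        for i in range(1, len(bins)):
            sel = idx == i
            if sel.any() and P[sel].mean() > 0:
                k_avg.append(K[sel].mean())
                p_avg.append(P[sel].mean())
        k_avg, p_avg = np.array(k_avg), np.array(p_avg)
        ok = (k_avg > k_band[0]) & (k_avg < k_band[1])
        slope = np.polyfit(k_avg[ok], np.log(p_avg[ok]), 1)[0]
        return -slope / 2.0                                # depth = -slope / 2

Different straight-line segments of the spectrum
correspond to source ensembles at different depths,
which is the basis of the depth-slicing application
mentioned in the Practical Tips of Section 21.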

22.4 Which Method is best?

Faced with 5 contrasting methods, which one should be
used? A way to resolve this is by answering 3 basic
questions:

1) Does the method use the moving-window
concept?
2) What types of source geometries can the
method treat in theory?
3) Does it require derivatives - the first-order
derivatives only, or the second- or even
third-order derivatives?

Answers to these questions will eliminate or narrow the
number of options.

22.4.1 The Moving-Window Operator Concept:
The moving window, for profile and grid based data,
linearly or spatially sub-samples the data contained within
the profile or grid so that its spectral content can be
analysed according to the method being applied. All the
earlier automatic techniques use a moving window. One
might even say that it is the use of a moving window that
makes magnetic depth estimation 'automatic'. Previously,
estimation had been purely manual (see Section 23). In
fact, 'computer-assisted', instead of 'automatic', was
once used to label the newer methods, and is still a
better description. Notwithstanding their early successes,
the moving window has some drawbacks, as discussed
below, and modifications to the early forms of the
method have solved many of them. Some of the latest
techniques, such as SPI™, do not need the moving
window.

22.4.2 Source Geometries:


Except for the Spectral Analysis method, no single
depth estimation method listed above works in theory
for an arbitrary source geometry. Different methods
work for different geometries. The structural index (SI),
a concept originating from Euler deconvolution, is widely
used to characterize the source geometry. It basically
represents the fall-off rate of the amplitude with distance
from the source, or the negative of the degree of Euler's
homogeneity. For the magnetic field, the following SI
values relate to simple geometries:

SI   Type (magnetic)
0    Contact
1    Thin dike
2    Horizontal (or vertical) cylinder
3    Sphere

Interestingly, the SI for a thin-bed fault is 2, the same as
a horizontal cylinder. This is because of the assumption
made: the thickness of the bed and the vertical throw of
the fault are both much smaller than its burial depth.
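The meaning of the SI as a fall-off rate can be verified
with an idealised field that is homogeneous of degree -3
(the sphere row of the table). The sketch below builds
such a field and its analytic derivatives on a small grid
and solves the Euler system directly; the source position
and scaling are arbitrary assumptions:

    import numpy as np

    x0, y0, z0 = 0.0, 0.0, 2000.0                 # true source (m, z positive down)
    g = np.linspace(-5e3, 5e3, 51)
    X, Y = np.meshgrid(g, g)
    R2 = (X - x0)**2 + (Y - y0)**2 + z0**2        # squared distance at z = 0
    T = 1e12 / R2**1.5                            # field shape with SI = 3
    Tx = -3e12 * (X - x0) / R2**2.5               # analytic derivatives at z = 0
    Ty = -3e12 * (Y - y0) / R2**2.5
    Tz = 3e12 * z0 / R2**2.5
    # Euler system (z_obs = 0): x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + N*T
    N = 3
    A = np.column_stack([Tx.ravel(), Ty.ravel(), Tz.ravel(), N * np.ones(T.size)])
    b = X.ravel() * Tx.ravel() + Y.ravel() * Ty.ravel() + N * T.ravel()
    print(np.linalg.lstsq(A, b, rcond=None)[0])   # ~ [0, 0, 2000, 0]

Re-running the solve with a wrong index (e.g. N = 1)
biases the recovered depth, which illustrates why an
incorrect SI assumption translates directly into depth
error.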

How does one make a proper selection? For example,
to solve the kimberlite pipe problem, one would need to
use Euler Deconvolution, not Naudy nor Werner. Many
methods solve for depth while assuming geometry.
However, some recent methods can determine depth
and geometry simultaneously. These methods include
Euler Deconvolution of the analytic signal and the
Improved Source Parameter Imaging (iSPI™).

22.4.3 Derivatives:
The original Naudy method doesn't use derivatives at
all. Werner deconvolution uses the horizontal derivative
when the source is a contact model. However, SPI™
and iSPI™ require second- and third-order derivatives,
respectively. This is not so for the Tilt and the Horizontal
derivative of the Tilt. An accurate calculation of second-
and third-order derivatives is often difficult when the data
contain noise. What is a proper selection? If noise in
the data is strong, simply select a method that doesn't
use derivatives beyond the first order.


22.5 Difficulties

The first two difficulties, the moving window and the
source geometry, largely arise from anomaly
interference; the third, the derivative calculation, from
noise in the data. In this section these difficulties are
analysed and ways to overcome or avoid each of them
are suggested. A comprehensive approach to achieving
an accurate depth solution is proposed.

22.5.1 Moving Window Methods
The first difficulty is with the moving window concept.
Figure 22/1 shows a synthetic model consisting of a
single vertical dike and its TMI response. No noise has
been added and the solutions (e.g. Euler, Naudy,
Werner) give a tight cluster about the true solution.
Figure 22/2a now shows a synthetic model consisting of
two vertical thin dikes 8 km apart and its TMI (total
magnetic intensity) response. No noise has been
added. Ideally, we expect only two individual point
solutions, one at the top of each dike. Figures 22/2b-e
display Euler Deconvolution solutions for moving
window widths of 1000, 2000, 6000, and 8000 m,
respectively. The SI has been given as 1, the correct
value for a thin dike. Evidently, the solutions are not just
two individual points: they are scattered. The scattering
of solutions reduces with increasing window size: from
1000 (Figure 22/2b) to 2000 (Figure 22/2c) to 6000 m
(Figure 22/2d). When the window width approximately
equals the distance between the two anomalies, i.e.,
8000 m (Figure 22/2e), no solutions are obtained at all
because the window is too wide. The reason for the
spurious solutions is that the mixing of the anomalies
from the two sources generates small but significant
systematic changes in the derivatives with distance from
the sources. In other words, the percentage anomaly
interference increases with distance from the source
body, i.e. the amplitude of the source body's anomaly
decreases with distance whereas the amplitude of the
interfering body's anomaly increases.
This spatial change of anomaly interaction (summation)
systematically moves the computed source location
progressively away from its true position. This is the
reason for the generation of diffraction-like source
'sprays'. So how does one choose an optimum window
size and minimise interference effects? Different
methods treat interference differently. For example,
Werner Deconvolution (see Section 25.2) takes it into
account by introducing a polynomial background (in
each window), whereas in the standard (or conventional)
Euler Deconvolution method the background is assumed
to be constant.


Figure 22/1: Isolated structure and tight cluster of
solutions about true solution


An optimum window size is one that is small enough to
see only a single anomaly yet large enough to cover
sufficient variations in slopes or curvatures of the
anomaly.

There are three possible ways to overcome the moving
window difficulty.

First, work on significant anomalies individually. This
means that the method is no longer automatic; however,
the solutions are often more reliable. This is a
particularly reasonable way to check the accuracy and
reliability of an automatic method. For the two-dike
model shown in Figure 22/2, the best and most accurate
depth solutions are obtained for windows located near
the two TMI maxima. This is because there the
derivatives have their maximum values and greatest
rates of change, and the interference effects and noise,
in percentage terms, are at their least.

Second, use a method that doesn't require the moving
window concept, such as the Local Wavenumber (SPI,
Tilt) (see Sections 25.4 and 27.1).



Figure 22/2a: Two Dike model and interfering
anomalies


Figure 22/2b-e: Conventional 2D Euler Solutions for
different window sizes

Third, use a moving window technique that is able to
identify less reliable solutions and remove them so that
only robust (reliable) solutions remain. Recent
developments of Euler Deconvolution in the form of
Laplacian Euler, profile based Extended Euler and grid
based 2D Constrained Euler have overcome this
problem (see Section 25.3 for profiles and Section 26.1
for grids). This has been achieved by understanding
what distinguishes a robust (reliable) solution from a
poorly constrained one, and by the ability to analyse the
grid within the operator window using eigenvectors and
eigenvalues, as well as to test the methods using
realistic 3D test models. Surprisingly, such use of
sophisticated 3D test models has only recently been
introduced (see Section 25.2).

22.5.2 Source Geometry
The second difficulty is with the source geometry.
Figure 22/3a shows a synthetic model consisting
of two grabens and one thin dike. Using a contact model
assumption and performing Werner Deconvolution, an
accurate depth solution is obtained for the grabens'
contacts or top corners (Figure 22/3b) but not for the
dike (depth too shallow). When a dike model is assumed
(Figure 22/3c), an accurate solution is obtained for the
dike but not for the grabens (depths too deep). Thus the
selection of geometry is critical in depth estimation.
Only single solutions are shown for clarity; normally the
moving window will produce multiple solutions, as with
Euler, but their spatial geometry is somewhat different
(see Section 25.3).

How can estimates of source geometry be determined
correctly? There are two ways.
First, the geometry can be inferred from the geology.
The Naudy method, Werner and Euler Deconvolution,
analytic signal, and SPI™ either require or can utilise
an assumption of source geometry. In practice, there
are many different means of deriving geometry from
geology. In particular, the experience of an interpreter
familiar with magnetic responses from various
geological situations becomes invaluable.

Some rules that may be helpful are as follows:
(a) The anomalies frequently exhibit typical spatial
patterns in plan view that dictate the choice of the
model. Linear features in the magnetic anomaly often
correspond to dikes or faults, not vertical cylinders or
spheres. If the basement is of interest, the contact, not
the dike or cylinder, should be used.
(b) Assume a thin dike source if the magnitude of the
Analytic Signal (AS) has a single maximum and the
magnitude of the Total Horizontal derivative (THDR) has
two maxima. Assume a contact if there is a single AS
maximum and a single THDR maximum. Since the
resolution of the AS is poor (see Section 19.2.7) a
better diagnostic may be the Total Horizontal derivative
of the Tilt derivative (TDR_THDR).


(c) Above all else, the criterion of consistency should be
widely applied. Geometric assumptions must be in
reasonable agreement with adjacent profile lines.

Second, use automatic estimation of source geometry.
Some methods, such as the iSPI™ and Euler
Deconvolution of the analytic signal, can determine
depth and source geometry (indicated by the SI)
simultaneously. The iSPI™ requires the third-order
derivatives of TMI, and Euler Deconvolution of the
analytic signal needs the second-order derivatives.

Since the determination of SI is very sensitive to noise,
it is worth emphasizing that although idealised simple
geometric bodies have integer SI, geological structures
are not necessarily simple and thus non-integer SI
values (e.g. 0.5) are possible. Application of 2D
Constrained Euler on known 3D test models proves this.
22.5.3 Derivatives
The iSPI™ and Euler Deconvolution of the analytic
signal require the calculation of higher-order derivatives.
This can generate problems. Derivatives are easy to
calculate in theory, but can be difficult in practice. It is
generally easy to calculate the derivatives used
for qualitative interpretation, such as lineament
analysis (see Section 19.4.5). In that case we care only
about the horizontal locations of derivative or gradient
maxima or minima. However, it is more difficult to
calculate the derivatives required by quantitative analyses
such as depth interpretation. In this latter case we need
not only the accurate locations but also the accurate
magnitudes of the maxima or minima. Because of noise
in the data, it is difficult to calculate the magnitudes
accurately.

For the single dipping dike model shown in Figure 22/1
and for the noise-free TMI response, Euler
Deconvolution produces a tight cluster of accurate
depth solutions. Since noise is generally present in the
system, in the form of geological noise, acquisition noise,
processing noise and post-processing (e.g. gridding)
noise, it is more realistic to deal with noise-contaminated
signal. If random Gaussian noise is added to the TMI,
and then filtered out by one of many techniques, the
resulting signal will differ from the original noise-free
signal. It can be argued that Gaussian noise is
unrepresentative of most of the above noise sources.
Application of Euler Deconvolution can then create false
solution clusters besides the true one; if no attempt is
made to eliminate such spurious solutions, the
interpretation will suffer.

The conventional Euler Deconvolution method requires
the first-order derivatives only. When 2nd and 3rd order
derivatives in Tx and Tz are generated, the effect of
noise can dominate and make methods that rely on
these derivatives unworkable. The trend in data
acquisition is to obtain cleaner (noise free) data with
high spatial resolution, which implies that these higher
order derivative methods can work with modern data
sets but not necessarily with old archive data.

22.6 Determination of Source body
dip and susceptibility contrast

Dip and susceptibility contrast can only be determined
when the source location and the source geometry (e.g.,
SI) are known or, in the latter case, correctly assumed. In
the case of a contact, one can then solve for the dip and
susceptibility contrast. For a thin dike, the dip and the
susceptibility-thickness product may be estimated. In
general, the estimate of dip and susceptibility is less
accurate than the depth estimate.

Depth estimation from gridded data can be less
accurate than using field profile data. One
reason lies with the calculation of derivatives. In an
airborne magnetic survey the along-line data are densely
sampled while the across-line information is poorly
recovered. It is thus difficult to calculate accurately all
the derivatives required. For example, in the standard 3D
Euler Deconvolution method we need the first-order
derivatives along all three orthogonal directions, yet only
one horizontal derivative may be accurately and reliably
calculated. On the other hand, 3D grid analysis has the
advantage that real geology is also 3D and no 2D
assumptions need to be made, as in the profile case.
Rarely, if ever, can one assume that all profile based
anomalies result from 2D sources. The skill in grid
based methods is to identify the 2D structures and invert
for dip and susceptibility contrast (e.g. 2D Constrained
Euler and iSPI™ methods).

22.7 A comprehensive approach to
profile Depth Interpretation

Magnetic depth estimation is neither magnetic inversion
nor a black box. Magnetic inversion for structure
(including depth) requires the knowledge of
susceptibility contrast and magnetization direction. Most
importantly, an accurate inversion requires many
constraints. The depth estimate methods work for
simplified source geometries and are independent of the
susceptibility contrast. The automatic methods are also
independent of magnetization direction. They can often
generate a quick (much quicker than magnetic
inversion) interpretation of magnetic data. In practice, a
comprehensive and integrated approach is
recommended for deriving an accurate depth solution.

This approach involves the following essential aspects:

1. Select proper or optimal methods.

2. Apply more than one reasonable method to the same
data set. For example, for a contact or thin-dike problem
with average noise in the data, one may run the Naudy,
Werner Deconvolution and Euler Deconvolution
methods, instead of just one. Finding a common solution
from different reasonable methods tends to increase the
reliability of the solutions.

3. Consider the three difficulties to help define the
uncertainties in the results.

4. Carefully pick the geologically most plausible
solutions. An automatic pick-up technique doesn't
always work well. The horizontal location of the depth
symbols is important since it corresponds to the
calculated position of the centre of the source body,
except for the contact model where the position is the
edge/corner. Make sure that it is in general agreement
with where you would place the centre of the body.
Again, the experience of an interpreter, familiar with
magnetic responses from various geological situations
(and source geometries), becomes invaluable. For
example, in Figure 22/4, a solution near X = 10000 can
be manually and easily picked as a true solution
according to the TMI anomaly (and the inclination of the
study area), and all false solutions may be manually
rejected. When the differences between nearby depth
solutions are great, the anomaly may be a complex one
for which the method cannot accurately identify the
centre or, more likely, the method treated the anomaly
as a sum of several separate sources. Choosing
between any one of these solutions, particularly in the
absence of other geological information, is a dangerous
practice. However, if the various solutions are very close
to each other, the average position can often be used.

Figure 22/4: Anomaly with noise generating
spurious solutions

5. Examine the position of the significant anomaly on the
image/map of gridded results. The profile should cross
the 2D body at right angles to strike and near its centre.
Otherwise, we need to apply the strike-direction
correction and the strike-length correction. The strike-
direction correction is to multiply the estimated depth by
the sine of the actual angle between the profile and the
strike. The strike-length correction is not so
straightforward; a simple formula doesn't exist. However,
the depth estimate can be accepted directly when the
length of the body is 4 times the width or more.
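As a hedged numerical illustration (values assumed for this example only): a profile crossing a 2D body at 60° to its strike sees a stretched anomaly, and the strike-direction correction gives z_corrected = z_estimated × sin 60° ≈ 0.87 z_estimated, so a raw 1000 m estimate reduces to about 870 m.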

6. Compare the solutions for adjacent lines. The
interpretation should be internally consistent, particularly
for closely spaced lines.

7. Use geological, geophysical and well controls that are
available. Magnetic inversion requires constraints, as
does depth estimation. Independent constraints are
often critical.

8. Finally use a good 3-D visualization tool to display
depth solutions. This also provides a dynamic
environment to compare various depth estimates and to
integrate other geological, geophysical, and well
information.

22.8 Concluding Remarks

A proper or optimal depth estimate method should be
selected according to the data quality and the nature of
one's particular geological problem. Three basic rules
are suggested:
First, avoid the use of (higher-order) derivatives when
noise is strong.
Second, understand and determine the geometry from
geology as much as possible.
Third, work on individual anomalies, or avoid use of the
moving window, when interference is severe; in such
cases automatically derived results are likely to be less
reliable.

Magnetic depth estimation is a cost-effective and useful
tool of quantitative interpretation, and thus helps reduce
exploration risk. In order to produce a reliable depth
solution, the experience of an interpreter is important
and other independent controls are necessary. The
comprehensive approach recommended in this work will
help derive a final and accurate solution.



SUGGESTED READING by Li
For the theory of the different methods of magnetic depth
estimation the following papers are recommended:
'The direct approach to magnetic interpretation and its
practical application' by L. J. Peters (Geophysics, 1949);
'A rapid graphical solution for the aeromagnetic anomaly
of the two-dimensional tabular body' by R. J. Bean
(Geophysics, 1966); 'Automatic determination of depth
on aeromagnetic profiles' by H. Naudy (Geophysics,
1971); 'Werner deconvolution for automated magnetic
interpretation and its refinement using Marquardt inverse
modelling' by C. C. Ku and J. A. Sharp (Geophysics,
1983); 'The analytic signal of two-dimensional magnetic
bodies with polygonal cross-section: its properties and
use for automated anomaly interpretation' by M. N.
Nabighian (Geophysics, 1972); 'EULDPH: A new
technique for making computer-assisted depth estimates
from magnetic data' by D. T. Thompson (Geophysics,
1982); 'Magnetic interpretation in three dimensions using
Euler deconvolution' by A. B. Reid et al. (Geophysics,
1990); 'Automatic conversion of magnetic data to depth,
dip, and susceptibility contrast using the SPI™ method'
by J. B. Thurston and R. S. Smith (Geophysics, 1997);
'iSPI™: the improved Source Parameter Imaging method'
by R. S. Smith, J. B. Thurston, T. Dai, and I. N. MacLeod
(Geophysical Prospecting, 1998); 'Identification of
sources of potential fields with the continuous wavelet
transform: complex wavelets and application to
aeromagnetic profiles in French Guiana' by P. Sailhac,
A. Galdeano, D. Gibert, F. Moreau, and C. Delor
(Journal of Geophysical Research, 2000); 'Euler
deconvolution of the analytic signal' by P. Keating and
M. Pilkington (Geophysical Prospecting, 2004);
'Unification of Euler and Werner deconvolution in three
dimensions via the generalized Hilbert transform' by
M. N. Nabighian and R. O. Hansen (Geophysics, 2001).
R. O. Hansen and his collaborators have developed the
multiple-source techniques: 'Multiple-source Werner
deconvolution' (Hansen and Simmonds, Geophysics,
1993), 'Multiple-source Euler deconvolution' (Hansen
and Suciu, Geophysics, 2002), and '3-D multiple-source
Werner deconvolution' (Geophysics, 2003, submitted).
The use of Euler deconvolution is well discussed in
'Analysis of the Euler method and its applicability in
environmental investigations' by D. Ravat (Journal of
Environmental and Engineering Geophysics, 1996).


SECTION 23: QUANTITATIVE INTERPRETATION
Manual Magnetic Methods


This investigation uses all aspects of the anomaly field
(amplitude, wavelength, frequency range and phase) to
deduce parameters (normally the depth, location and
shape) of the source body.
Often quantitative interpretation only takes place once
the anomaly has been isolated from the regional field.
This is often difficult to do without distorting the anomaly
one wishes to interpret. As described in Section 22, the
anomaly field often needs to be subdivided into long
wavelength (REGIONAL) and short wavelength
(RESIDUAL) components, of which the residual field is
generally the field to be quantitatively interpreted. To
help identify residual anomalies, derivatives of the
gravity/magnetic anomaly field are determined.

23.1 Depth Determination

To understand the principles behind depth
determination it is important to understand what
happens to an anomaly's shape when it is viewed from
increasing distance above the earth. This is the same
as keeping the measurement height fixed and increasing
the depth to the causative body. Figure 23/1 clearly
shows this.

Magnetic example:
i. Anomaly amplitude decreases with distance above source.
ii. Anomaly width increases with distance above source.

Gravity example (basement relief):
i. Anomaly amplitude remains the same.
ii. Depth to basement relief increases (density contrast kept constant).

Figure 23/1: Divergence of the magnetic field with distance


23.1.1 Monopole
(Gravity or simple magnetic case)

Figure 23/2: Monopole case

m = pole strength; the potential is U = -m/r, where r = (x² + z²)^(1/2)

ΔZ = dU/dz = m sin(θ)/r²

ΔZ = mz/(x² + z²)^(3/2) = (m/z²) (1 + x²/z²)^(-3/2)

Figure 23/3: Normalised response of a monopole source

ΔZ is a maximum (ΔZmax) when x = 0, so ΔZmax = m/z²

ΔZ/ΔZmax = 1/(1 + x²/z²)^(3/2)

Plot ΔZ/ΔZmax vs x/z (Fig 23/3).

So when ΔZ = ½ ΔZmax, x/z = 0.767 ≈ 3/4, i.e. x ≈ ¾ z.
When x = z, ΔZ ≈ ⅓ ΔZmax.

Thus one normalized curve satisfies all
monopole cases
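The half-width rule that follows is easily applied in code. A minimal Python sketch on a synthetic profile (all values assumed for illustration):

import numpy as np

# Depth of a monopole-type source from the half-width rule:
# Delta_Z falls to half its peak at x ~= 0.767 z, so z ~= x_half / 0.767.
x = np.linspace(-2000.0, 2000.0, 401)          # station offsets, m
dz = 50.0 / (1.0 + (x / 600.0) ** 2) ** 1.5    # synthetic anomaly, true z = 600 m

half = 0.5 * dz.max()
x_half = x[dz >= half].max()                   # positive half-width
print("half-width:", x_half, "m; depth estimate:", x_half / 0.767, "m")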


23.1.2 Dipole
(Magnetic case)

Vertical Dipole; Inclined Dipole
Figure 23/4: Dipoles

Can consider a vertical dipole as made up of 2 monopoles.

Upper pole (-m) at depth z₁:

ΔZ₁ = (m/z₁²) · 1/(1 + x²/z₁²)^(3/2)

Lower pole (+m) at depth z₂:

ΔZ₂ = -(m/z₂²) · 1/(1 + x²/z₂²)^(3/2)

and

ΔZ = ΔZ₁ + ΔZ₂

When ΔZ = 0 there are two solutions relating z₁ and z₂:

1st solution: z₁ = z₂ (trivial)

2nd solution: z₁^(2/3) (x₀² + z₂²) = z₂^(2/3) (x₀² + z₁²)

where x₀ = the zero-crossing point (see Fig 23/6). If we know x₀ we can make estimates of z₂ and find z₁, or vice versa.

Figure 23/5: Family of curves for the vertical dipole

Figure 23/6: Anomaly to be interpreted. It is important to remove or estimate the regional correctly, otherwise x₀ will be in error.

How to determine z₁ and z₂:
a) For the anomaly, consider ΔZ at distance x; thus ΔZ/ΔZmax is known.
b) Choose an α value, say α₁, and from the corresponding curve read off the x/z₁ value; thus z₁ is known.
c) Since α = z₂/z₁, z₂ can be determined.
d) Plot z₁ and z₂ on a graph (Fig 23/7).
e) Repeat b) to d) for different values of α.
f) Repeat b) to e) for a new value of ΔZ and x.

Figure 23/7: Solution space

The solution for z₁ and z₂ is where the lines cross.
These two very simple cases of monopole and vertical
dipole illustrate how certain unknown parameters, such
as depth, can be deduced from specific measurements of
the anomaly (i.e. amplitude and width) using families of
curves
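With a computer the same family-of-curves search can be carried out numerically. A minimal least-squares grid-search sketch over the two-monopole model above (the profile values and search ranges are assumed for illustration):

import numpy as np

# Fit the two-monopole (vertical dipole) model to a Delta_Z profile by grid search.
x = np.linspace(-3000.0, 3000.0, 121)        # station offsets, m

def model(m, z1, z2):
    # upper (-m) pole at depth z1 and lower (+m) pole at depth z2, as in the text
    return (m / z1**2) / (1 + (x / z1)**2)**1.5 \
         - (m / z2**2) / (1 + (x / z2)**2)**1.5

obs = model(1.0e8, 500.0, 1500.0)            # synthetic 'observed' profile (assumed)

best = (np.inf, None)
for z1 in np.arange(200.0, 1200.0, 25.0):
    for z2 in np.arange(z1 + 100.0, 3000.0, 50.0):
        shape = model(1.0, z1, z2)           # unit-strength anomaly shape
        m = (shape @ obs) / (shape @ shape)  # best m for this (z1, z2): linear step
        misfit = float(np.sum((obs - m * shape) ** 2))
        if misfit < best[0]:
            best = (misfit, (z1, z2, m))
print("best-fit (z1, z2, m):", best[1])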


23.2 Depth Determination using
Standard Curves

Prior to the availability of modern-day computers,
geophysicists had to rely on curve-fitting methods to
deduce the depths of structures. The methods shown in
this section are taken from the book by Grant and West
(see Section 1 for the full reference). These methods
require the accurate measurement of anomaly
parameters and their plotting on standard sets of curves
for the type of geological model the anomaly is thought
to result from.

The parameters that have been chosen basically
describe the spectral content (curvature) of the anomaly,
e.g. the amplitude and slope of an anomaly and their
change with the width of the anomaly. Such parameters
are very sensitive to the depth of the causative body,
which is the most common model parameter that needs
to be determined.


23.2.1 Simple and quick depth estimators

The following diagrams show some simple parameters
that are taken from an anomaly to allow the
determination of depth.

Henderson and Zietz, 1948: z = coef × W

Vacquier et al., 1951, Interpretation of aeromagnetic
maps, Geol. Soc. Am. Memoir 47: z = coef × HSD

Peters, 1949: z = coef × P

Sokolov, 1956: z = coef × S

Logochev, 1961: used to determine the centre of a dike





Werner, S., 1953, Interpretation of magnetic anomalies
at sheet-like bodies, Sveriges Geologiska Undersökning,
Ser. C, Årsbok 43 (1949), N:o 6: for determining the
centre of a dike.

More complex estimators using more parameters are
the Bean method (Fig. 23/14a) and the Koefoed method
(Fig. 23/14b). These methods use a series of
characteristic curves to deduce body parameters. They
use more of the spectral content of the anomaly than the
previous methods and are thus preferred by practising
geophysicists; although they take more time, they can
be computerized.

Bean, 1966

Koefoed

23.2.2 Simple Geological Structures

Three simple structures can account for most geological
structures:

1. Step Model

This is a semi-infinite polygon, which will be described
mathematically later (see Section 24).

Gravity Step Model




Characteristic estimators for the step model
From the shape of the anomaly two parameters are
formed: k₁, a dimensionless ratio built from the
characteristic abscissae x₁ and x₂ of the profile, and

k₂ = (Δg₂ - Δg₁) / [(x₂ - x₁) Smax]

where Smax is the maximum slope of the anomaly.

Figure 23/18: Characteristic curves for gravity step
model

These can be determined from the anomaly profile and
then plotted on the above plot (Fig 23/18). The dip, d,
and the quantity h/l can then be determined and used in
the plot below (Fig 23/19) to determine (x₂ - x₁)/l.
Since (x₂ - x₁) is known, l is known, and thus h can be
determined (Fig 23/20).

Figure 23/19: Further characteristic curves for the
gravity step model.

The density contrast can also be determined from the
following plot.

Figure 23/20: Further characteristic curves for the
gravity step model

Magnetic Step Model
Unlike the gravity case, where density is a scalar
quantity, magnetization is a vector quantity, and the
size and nature of the magnetic anomaly of a given step
structure will change with the inclination, i, of the field (if
the anomaly is all induced then the inclination can be
considered to reflect latitude) and the strike direction, α,
of the step.

Figure 23/21: Profiles of the total field anomaly
across a step at a magnetic latitude equivalent to
i = 60°

The parameters measured are the maximum slopes s₁
and s₂ on both sides of the anomaly, the maximum and
minimum anomaly values ΔTmax and ΔTmin, and the
width of the anomaly at the level

(1/3) ΔTmax + (2/3) ΔTmin

Figure 23/22: Dip and depth estimators for the step
model



Figure 23/23: Total field magnetic characteristic
curves for step model, where l = thickness of step
and h = depth to top of step.
The plot of s₂/s₁ against ΔTmax/ΔTmin provides the
means in the next diagram to determine the dip of the
step. Since the inclination i and strike α are variables, a
set of curves needs to be generated for all situations.
The plot below is one set of curves for the situation
where

φ = tan⁻¹(tan i / sin α) = 80°

The results of the above curve can then be used in the
next, complementary, curves to determine the depth h.
This is achieved by using d and h/l to plot a position,
which gives w(1/3)/l and h/l. Since w(1/3) is known, l can
be determined and thus h.


Figure 23/24: Complementary curves for estimating
depth to top of step

2. Ribbon Model



Figure 23/25: The Ribbon Model


Gravity Ribbon Model
The changing shape of the gravity case as a function of
dip and depth is illustrated in Fig. 23/26

Figure 23/26: Profiles of the gravity effect across a
ribbon model of infinite length

The characteristic estimators are shown in Fig 23/27


Figure 23/27: Gravity characteristic estimators for
the ribbon model


A similar concept to the curves for the gravity step model
is used to determine the depth and dip (see Grant and
West).

Magnetic Ribbon Model
The shape of the magnetic anomaly is dependent on the
depth of the ribbon model, its dip and the direction of
magnetization (Fig 23/28).


Figure 23/28: Magnetic vertical intensity across a
two dimensional ribbon model for various dips,
depths and directions of magnetization.

The estimators are similar to the step model

Figure 23/29: Magnetic estimators for the ribbon
model


3. The Prism Model
(Depth determination of sedimentary basins)

Figure 23/30: Prism model

This method uses the premise that sedimentary rocks
are essentially non-magnetic, so that any magnetic
anomaly must originate from the basement rocks
beneath. The depth estimate tends to yield an upper
limit to the total thickness after the height of the aircraft
above the ground has been removed.

Since the depth parameter is mainly of interest and the
shape of the causative body is of little interest, the
tendency has been to use an elementary model. This is
not unrealistic since the anomaly sources are so distant
from the magnetometer that the causative body can no
longer be considered 2D. Thus the previous models are
inapplicable and there is a need to use a 3D model. The
method used is based on Vacquier, Steenland,
Henderson and Zietz (Interpretation of Aeromagnetic
Maps, Geol. Soc. Am. Memoir 47, 1951). The model
used (see above) consists of a bottomless vertical-sided
prism of rectangular cross section.
Five parameters control the prism model. These are:

i. width a of the prism
ii. length b of the prism
iii. depth h to the top of the prism below the aircraft (not
the ground surface)
iv. magnetic azimuth α (relative to side b of the prism)
v. magnetic inclination i of the geomagnetic field

Figure 23/31: Prism model parameters

Using a as the unit of measure, the 5 parameters can be
reduced to 4: h/a, b/a, α and i, where α and i are either
known or assumed.
The characteristic features of the shape of the anomaly
are used to estimate a and b in order to find h. The 4
independent parameters give rise to a very large
number of cases (i.e. sets of curves). Since there is a
connection between b/a and α we only have to consider
0° ≤ α ≤ 90°.

Determination of b/a
This is the ratio of the length to the width of the body
causing the anomaly. The values of b and a are
estimated from the half-widths of the anomaly in the two
map directions:

b ≈ wy(1/2) and a ≈ wx(1/2)



Figure 23/32: Profile across anomaly showing
estimators

Determination of b/a and h/a (for i > 60°)
From the anomaly the following two estimators are used:

s w(1/2) / (ΔTmax - ΔTmin)  and  wy(1/2)/wx(1/2)

where s = s₁ + s₂.

Figure 23/33: Total field characteristic curves for
i = 75° and α = 45° to determine the parameters b/a
and h/a

Determination of h
The parameters b/a and h/a are now used in the
following curves to determine wx(1/2)/h. Since wx(1/2) is
known, h can then be determined.


Figure 23/34: Complementary curves to determine
depth of prism

Example



Figure 23/35: Theoretical total-field anomaly over a
vertical prism showing estimator b/a



Figure 23/36: Portion of an aeromagnetic survey
flown at an altitude of 1,800 ft over a part of
southwestern Ontario, Canada. (See previous
diagrams for location of data points on
characteristic plots)


23.2.3 First Order Depth Estimates
(Gravity)

Many geological structures can be considered as simple,
symmetrically shaped structures to enable initial (first
order) estimates to be made of depth. Below are some
examples.

MODELS
a) Sphere

"Half width" xw, i.e. where Δg = Δgmax/2:
xw ≈ 0.75 z
max. depth = 0.86 [Δgmax / (dΔg/dx)max]

b) Cylinder

Δg/Δgmax = 1/(1 + x²/z²)
xw = z
max. depth = 0.65 [Δgmax / (dΔg/dx)max]
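These rules of thumb translate directly into code. A minimal Python sketch on an assumed residual profile (all numbers hypothetical):

import numpy as np

# Quick maximum-depth limits from a residual gravity profile.
x = np.arange(-5000.0, 5000.0, 50.0)          # m
g = 2.0 / (1.0 + (x / 800.0) ** 2)            # synthetic cylinder-like anomaly, mGal

g_max = g.max()
slope_max = np.abs(np.gradient(g, x)).max()   # max horizontal gradient, mGal/m

print("sphere rule   : depth <= %.0f m" % (0.86 * g_max / slope_max))
print("cylinder rule : depth <= %.0f m" % (0.65 * g_max / slope_max))

For this synthetic cylinder anomaly (true depth 800 m) the cylinder rule returns the depth exactly, while the sphere rule gives a looser upper bound.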

c) Step or Normal Fault

See later (Section 24).

d) Vertical Cylinder of infinite depth

Anomaly on the axis, at the centre:

Δg = 2πGΔρ(b - h)

where h is the depth to the top and b the slant distance
from the station to the top edge of the cylinder.

e) Vertical Cylinder of finite depth

Δg = 2πGΔρ[L - (b₂ - b₁)] = 0.0419 Δρ[L - (b₂ - b₁)]

where L is the cylinder length, b₁ and b₂ the slant
distances to its top and bottom edges, Δρ is in g/cc and
lengths in metres (the constant 0.0419 gives Δg in mGal).
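A hedged worked example with assumed values: for Δρ = 0.3 g/cc, radius R = 200 m, top depth h₁ = 100 m and base depth h₂ = 500 m (so L = 400 m), the slant distances are b₁ = (200² + 100²)^(1/2) ≈ 224 m and b₂ = (200² + 500²)^(1/2) ≈ 539 m, giving Δg ≈ 0.0419 × 0.3 × [400 - (539 - 224)] ≈ 1.1 mGal on the axis.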


SECTION 24: QUANTITATIVE INTERPRETATION:
Gravity and Magnetic Forward Modelling
2D, 2.5D, 2.75D & 3D Interpretation


24.1 2D - Magnetic Case

Reference: Talwani and Heirtzler, 'Computation of
magnetic anomalies caused by 2D structures of any
shape', in: Computers in the Mineral Industries (ed.
G. A. Parks), Stanford Univ. Press. (Copy in the Dept of
Earth Sciences Library.) See also Talwani, 1965.
Assumptions of the method are:
i. The body has uniform magnetisation. This can on
occasion be a poor assumption (compared with density)
but is required to permit the computer method to work.

ii. The magnetic anomaly is approximately 2D, i.e. if the
anomaly's length-to-width ratio is > 4:1 then the 2D
assumption is good for profiles passing perpendicular to,
and mid-way through, the anomaly. Even a ratio of 3:1
still gives a good estimate.

Figure 24/1: Basic Model showing parameters

From the above model the following can be determined:

ΔZ = 2 sinφ { Jx [ (θ₁ - θ₂) cosφ + sinφ ln(r₂/r₁) ] - Jz [ (θ₁ - θ₂) sinφ - cosφ ln(r₂/r₁) ] }

ΔH = 2 sinφ { Jx [ (θ₁ - θ₂) sinφ - cosφ ln(r₂/r₁) ] + Jz [ (θ₁ - θ₂) cosφ + sinφ ln(r₂/r₁) ] }

where φ is the dip of the inclined face, θ₁, θ₂ the angles
and r₁, r₂ the distances from the observation point to its
upper and lower corners (see Fig 24/1).

These are the equations for a semi-infinite 2D body.
To determine the Total field anomaly ΔT:

ΔT = ΔZ sin I + ΔH cos I cos(C - D)

where I = inclination, D = declination, and C = profile
direction (ΔH is measured in the x, i.e. profile, direction).
Details of how the equations are derived
There are four steps:
i. Calculate the formula for a small cube at the origin of
the co-ordinate system.
ii. Integrate in the +y and -y directions to ±∞.
iii. Integrate in the x direction from x to ∞.
iv. Integrate in the z direction from z₁ to z₂, substituting
x as a function of z.

STEP ONE

Figure 24/2: Basic cube element for 3D modelling

Volume of element = ΔxΔyΔz
Magnetic moment = m
Thus m = JΔxΔyΔz, where J = intensity of magnetisation.
The magnetic potential U at the origin O is

U = m·R / R³ = (Jx x + Jy y + Jz z) ΔxΔyΔz / (x² + y² + z²)^(3/2)

STEP TWO

Figure 24/3: Potential field of a rod of infinite
length in the +y and -y directions with cross section
ABCD

U(x, z) = ΔxΔz ∫ (Jx x + Jy y + Jz z) / (x² + y² + z²)^(3/2) dy, from -∞ to +∞

       = 2ΔxΔz (Jx x + Jz z) / (x² + z²)

(the Jy term integrates to zero).

Differentiating in the vertical direction gives ΔZ:

ΔZ = -∂U/∂z = 2ΔxΔz [ 2xz Jx - Jz (x² - z²) ] / (x² + z²)²

Differentiating in the horizontal direction gives ΔH:

ΔH = -∂U/∂x = 2ΔxΔz [ Jx (x² - z²) + 2xz Jz ] / (x² + z²)²



STEP THREE

Figure 24/4: Determine the field of a lamina by
integrating from x to infinity

ΔZ = 2Δz (Jx z - Jz x) / (x² + z²)   and   ΔH = 2Δz (Jx x + Jz z) / (x² + z²)

STEP FOUR


Figure 24/5a: Integrate in the z direction with interface
NK

Figure 24/5b: Integrate in the z direction with
interface NK

The equation for the line NK passing through the points
(x, z) and (x₁, z₁) is

x = x₁ + (z - z₁) cotφ

Then

ΔZ = ∫ 2 (Jx z - Jz x) / (x² + z²) dz, from z₁ to z₂

Substituting for x and integrating gives

ΔZ = 2 sinφ { Jx [ (θ₁ - θ₂) cosφ + sinφ ln(r₂/r₁) ] - Jz [ (θ₁ - θ₂) sinφ - cosφ ln(r₂/r₁) ] }

and

ΔH = ∫ 2 (Jx x + Jz z) / (x² + z²) dz, from z₁ to z₂

ΔH = 2 sinφ { Jx [ (θ₁ - θ₂) sinφ - cosφ ln(r₂/r₁) ] + Jz [ (θ₁ - θ₂) cosφ + sinφ ln(r₂/r₁) ] }

The Total field ΔT is defined by

ΔT = ΔZ sin I + ΔH cos I cos(C - D)

Note that ΔT is not (ΔZ² + ΔH²)^(1/2), since the Total
field anomaly is measured at the same time as, and
together with, the Earth's magnetic field. Since the
Earth's field is approximately 47,800 nT for the UK, and
very much stronger than the anomaly field by 2 or 3
orders of magnitude, it is essentially the component of
the anomaly field in the Earth's field direction that is
being measured. Remember that the induced
magnetisation Ji will be in the Earth's field direction and
only the remanent magnetisation Jr will be in a different
direction from the Earth's field.
The working equations can be simplified as follows:

ΔZ = 2 (Jx Q - Jz P)

and

ΔH = 2 (Jx P + Jz Q)

where

P = [ z₂₁² (θ₁ - θ₂) + z₂₁ x₁₂ ln(r₂/r₁) ] / (x₁₂² + z₂₁²)

Q = [ z₂₁² ln(r₂/r₁) - z₂₁ x₁₂ (θ₁ - θ₂) ] / (x₁₂² + z₂₁²)

and where x₁₂ = x₁ - x₂, z₂₁ = z₂ - z₁,
r₁² = x₁² + z₁² and r₂² = x₂² + z₂²;
subscripts 1 and 2 represent successive corners of the
body in clockwise order.

Evaluation of Jx and Jz

Figure 24/6: Magnetisation components

Jx = J cos A cos(C - B) and Jz = J sin A

where J = kT (J = Ji, the induced magnetisation; k is the
susceptibility and T the Total field);

A = Inclination = I; B = Declination = D;
C = direction of profile.

The equations can be used for remanence if A and B
are determined by palaeomagnetic methods and Jr is
measured, where J = Ji + Jr.
Construction of a 2D magnetic profile
The calculation is to determine the induced anomaly of
body M1 (see Fig 24/7). The (x, z) co-ordinates of the
vertices of body M1 are known, so the distances and
angles from the origin O can be calculated. The cross
section of body M1 of the 2D model is calculated by
adding and subtracting semi-infinite polygons S1 to S4.

Figure 24/7: Constructing an arbitrarily shaped model
M1 from sets of semi-infinite polygons

Calculate ΔZ and ΔH, then ΔT, at the origin; then move
the origin O along x and repeat the calculations for ΔZ,
ΔH and ΔT until, by repeating the process, the
calculated model anomaly has been determined for the
whole profile.
The whole process can be repeated to calculate the
remanence anomaly by redefining Jx and Jz and
adding the induced ΔT and remanent ΔT effects together
at each point along the profile. If the model consists of
more than one body then reapplying the process and
adding the results will provide a complex anomaly profile.
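A minimal Python sketch of this construction for a single polygonal body, using the working equations above (an illustrative simplification with assumed body and field values; corners clockwise, z positive down, and the unit-system constant is omitted):

import numpy as np

def dz_dh_polygon(xc, zc, Jx, Jz, xs):
    # Delta-Z and Delta-H at station (xs, 0) for a uniformly magnetised 2D
    # polygon, summing P and Q over successive corner pairs as in the text.
    dZ = dH = 0.0
    n = len(xc)
    for i in range(n):
        x1, z1 = xc[i] - xs, zc[i]
        x2, z2 = xc[(i + 1) % n] - xs, zc[(i + 1) % n]
        x12, z21 = x1 - x2, z2 - z1
        if z21 == 0.0:
            continue                                  # horizontal sides: P = Q = 0
        r1, r2 = np.hypot(x1, z1), np.hypot(x2, z2)
        th = np.arctan2(z1, x1) - np.arctan2(z2, x2)  # theta_1 - theta_2
        L = np.log(r2 / r1)
        s2 = x12 ** 2 + z21 ** 2
        P = (z21 ** 2 * th + z21 * x12 * L) / s2
        Q = (z21 ** 2 * L - z21 * x12 * th) / s2
        dZ += 2.0 * (Jx * Q - Jz * P)
        dH += 2.0 * (Jx * P + Jz * Q)
    return dZ, dH

# Assumed example: dike-like body, induced magnetisation at inclination 60 deg,
# profile along magnetic north so that C = D and cos(C - D) = 1.
I = np.radians(60.0)
Jx, Jz = np.cos(I), np.sin(I)
xc, zc = [400.0, 600.0, 600.0, 400.0], [200.0, 200.0, 700.0, 700.0]
for xs in (0.0, 250.0, 500.0, 750.0):
    dZ, dH = dz_dh_polygon(xc, zc, Jx, Jz, xs)
    print('x = %6.1f m: dT = %8.4f' % (xs, dZ * np.sin(I) + dH * np.cos(I)))

Moving the station xs along the profile, as in the text, traces out the calculated anomaly; multiple bodies are handled by summing their responses.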
The working equation can be reduced to a simpler
working equation for the following geological structures.
For the examples shown only ΔZ is considered.


Fault Model
This is identical to the working equation given previously.

Figure 24/8: Cross section of fault model.

ΔZ = 2 { Jx sinφ [ (θ₁ - θ₂) cosφ + sinφ ln(r₂/r₁) ] - Jz sinφ [ (θ₁ - θ₂) sinφ - cosφ ln(r₂/r₁) ] }




Vertical Fault
Let θ₁ - θ₂ = θ, φ = 90°, so cosφ = 0 and sinφ = 1:

ΔZ = 2 [ Jx ln(r₂/r₁) - Jz θ ]

Figure 24/9: Vertical contact/fault model

Vertical Dike (finite depth)

This is basically the difference of two vertical fault
models.

Figure 24/10: Vertical finite depth dike

ΔZ = 2 { Jx ln(r₄/r₂) - Jz (θ₄ - θ₂) } - 2 { Jx ln(r₃/r₁) - Jz (θ₃ - θ₁) }

ΔZ = 2 { Jx ln[ (r₄ r₁) / (r₂ r₃) ] - Jz [ (θ₄ - θ₂) - (θ₃ - θ₁) ] }

Vertical Dike (infinite depth)
This is the same as above but with r₃/r₄ = 1 and
θ₃ = θ₄, and with θ = θ₂ - θ₁:

ΔZ = 2 [ Jx ln(r₁/r₂) + Jz θ ]

Figure 24/11: Vertical infinite depth Dike

Inclined Dike (infinite depth, dipping at 45°)

ΔZ = Jx [ ln(r₁/r₂) - θ ] + Jz [ ln(r₁/r₂) + θ ]

Figure 24/12: Inclined Dike Model


24.2 Gravity Case

24.2.1 2D Gravity model with constant density

The working equation is similar to the magnetic case
and is used in the same way (i.e. adding and subtracting
semi-infinite polygons) to form 2D structures of any
cross-sectional shape. As with the magnetic case, the
polygon is infinite in the +y, -y and +x directions.

Figure 24/13: 2D Gravity Model

The elemental gravity effect of a semi-infinite horizontal
lamina of thickness dz at depth z, subtending an angle α
at the station, is

Δg = 2GΔρ α dz

so for the slab between depths d and D

Δg = 2GΔρ ∫ α dz, from d to D

which, for a face dipping at φ between corners 1 and 2,
integrates to

Δg = 2GΔρ [ D α₂ - d α₁ + x sinφ ( sinφ ln(r₂/r₁) + cosφ (α₂ - α₁) ) ]
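For a general polygon the same line integral is summed side by side. A minimal Python sketch of the standard 2D polygon gravity calculation (in the manner of Talwani et al., 1959, as restated by Won and Bevis, 1987; corners clockwise, z positive down, SI units; an illustrative simplification only):

import math

G = 6.674e-11  # m^3 kg^-1 s^-2

def gz_polygon(xc, zc, drho, xs):
    # Vertical gravity (m/s^2) at station (xs, 0) of a 2D polygon of density
    # contrast drho (kg/m^3); corners in clockwise order, z positive down.
    total = 0.0
    n = len(xc)
    for i in range(n):
        x1, z1 = xc[i] - xs, zc[i]
        x2, z2 = xc[(i + 1) % n] - xs, zc[(i + 1) % n]
        r1sq, r2sq = x1 * x1 + z1 * z1, x2 * x2 + z2 * z2
        if r1sq == 0.0 or r2sq == 0.0:
            continue                      # station coincides with a corner
        th1, th2 = math.atan2(z1, x1), math.atan2(z2, x2)
        if x1 == x2:                      # vertical side
            total += x1 * 0.5 * math.log(r2sq / r1sq)
        else:
            B = (z2 - z1) / (x2 - x1)
            A = (x2 - x1) * (x1 * z2 - x2 * z1) / ((x2 - x1) ** 2 + (z2 - z1) ** 2)
            total += A * ((th1 - th2) + B * 0.5 * math.log(r2sq / r1sq))
    return 2.0 * G * drho * total

# Sanity check: a very wide thin slab should approach the Bouguer slab value
xc = [-1.0e6, 1.0e6, 1.0e6, -1.0e6]
zc = [1000.0, 1000.0, 1200.0, 1200.0]
print(gz_polygon(xc, zc, 300.0, 0.0) * 1e5, "mGal (slab formula: %.3f)"
      % (2 * math.pi * G * 300.0 * 200.0 * 1e5))

Note that the atan2 angle differences need care when a polygon side passes directly beneath the station; production codes handle that branch cut explicitly.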





24.2.2 Variable Density 2D - Modelling

The above equation is for a constant density contrast.
For sedimentary basins, density normally increases with
depth of burial, so the constant density term needs to be
replaced by a term that varies with depth.

Figure 24/14: 2D gravity model with linear density
increase with depth

A linear relationship of density with depth could be:

ρ = a₀ + a₁ z

The algorithm for this case has been derived by
C M Green (GETECH). The resulting equation can be
readily programmed, so the complexity of the formula is
no obstacle. The linear density-depth function can also
be used to vary the density with depth in a non-linear
way: this is done by applying the formula down to a
lower depth limit z₂ and then changing the density/depth
function below it.

24.2.3 2.5D Gravity Modelling

Figure 24/15: 2.5D Gravity Model. The polygon in
cross section has finite length in the y direction,
extending to Y₁ in the +y direction and Y₂ in the -y
direction.

(xᵢ, zᵢ) and (xᵢ₊₁, zᵢ₊₁) are the co-ordinates of a
polygon side. The angle θ and the other quantities
required in the integration are:

1. θᵢ = tan⁻¹ [ (zᵢ₊₁ - zᵢ) / (xᵢ₊₁ - xᵢ) ]

2. The polygon side forms the straight line
z = z₀ + x tanθ

3. Defining the following quantities:

Sᵢ = xᵢ + x₀,  Tᵢ = xᵢ₊₁ + x₀

R(N,i) = (Y_N² + xᵢ² + zᵢ²)^(1/2)

rᵢ² = xᵢ² + zᵢ²,  rᵢ₊₁² = xᵢ₊₁² + zᵢ₊₁²

Kᵢ = aᵢ cosθᵢ,  cᵢ = secθᵢ

The gravity effect of this 2.5D structure is then
expressed in closed form in terms of these quantities.





24.2.4 2.75D Gravity Modelling



Figure 24/16: 2.75D model, the same as the previous
2.5D model but with the profile now inclined to the
perpendicular

24.3 3D Gravity Modelling

The following method (after Cordell and Henderson,
1968) is based on the gravity response of a right
rectangular prism.

Figure 24/17: 3D Prism Model

Δg_z = GΔρ ∫∫∫ z dx dy dz / (x² + y² + z²)^(3/2)

or

Δg_z = G ∫∫∫ (a₀ + a₁z) z dx dy dz / (x² + y² + z²)^(3/2)

for density varying linearly with depth (after GETECH).

Integrating over x₁ to x₂, y₁ to y₂ and z₁ to z₂, the a₀
(constant-density) part of the solution takes the familiar
right-rectangular-prism form

Δg_z = G a₀ [ x ln(y + r) + y ln(x + r) - z tan⁻¹( xy / (zr) ) ]

evaluated between the limits at the eight corners, with
r = (x² + y² + z²)^(1/2), together with corresponding
(longer) logarithmic and arcsine terms in a₁ for the linear
density gradient. The expression for constant density is
just the first half of the full equation, i.e. only the terms
in a₀.
Application: To make the initial example simple, we will
assume:
a. the density contrast is constant;
b. the gravity anomaly is the residual Bouguer anomaly;
c. the residual field results from a single source;
d. the gravity values at each grid node are located above
the centre of a prism which has a cross section the same
size as the grid cell.

By assuming a density contrast, the residual gravity at
each grid node can now be used to estimate the finite
thickness (depth extent) of the prism located below that
grid node. This is done by using the infinite slab formula
(Bouguer correction = 2πGΔρh = 0.04191 Δρ h mGal,
where Δρ is in g/cc and h is in metres). This is a crude
estimate of prism thickness but has the advantage that
the depth to the top of the prism is not required. This
process is done at each grid node and a model is
constructed. The gravity response of this model can now
be calculated precisely using the prism formula (i.e. at
each grid node the summed effect of all prisms can be
calculated). The difference between the residual
anomaly and the calculated model anomaly can be
determined at each grid node and this difference used to
modify the prism size below the grid node. Thus by
iterative means the calculated and observed residual
anomalies will converge and will give a reasonable
solution after about 3 iterations.

In the above scheme one thing is lacking which prevents
the calculation of the model anomaly: at what depth is
the top of each prism? This is overcome by deciding
initially how your model should be constructed. The
three commonly used model builds are:
a) up from a flat reference surface at a given depth
b) down from a flat reference surface at a given depth

c) symmetrically about a flat reference surface at a given
depth
The number of calculations that the computer has to do
is very large. This can be reduced by exploiting the high
degree of symmetry resulting from the same prism
geometry at different grid nodes. Thus the program
calculates the gravity effect of one prism at all grid nodes
before moving on to the next prism (i.e. it does not
calculate the effects of all prisms at one node before
moving to the next node).
Building a model using a reference surface
The reference surface can be flat or user defined. In the
simple case below a flat reference surface is used.


Figure 24/18: Building models down, up and about a
reference surface in cross section.



Figure 24/19: Isometric View of grid and model built
up from a flat reference surface.

Problems with building up from a reference surface:
outward dips only are allowed. Also the model could
build up through your observation surface; this can be
prevented, so that when the model has grown to zero
depth it cannot grow upwards any further.
Problems with building down from a reference
surface: dips are always inward (0° < θ < 90°), therefore
outward-dipping contacts are not possible (e.g. granites
or thrusts).

Figure 24/20: How the program builds the prior model
for the 1st iteration

The program uses the residual gravity anomaly for the
body being interpreted and determines the prism length
based on the Bouguer (infinite slab) formula (see Fig
24/20). Once all prisms are computed, the precise
gravity response at each grid node can be calculated for
all prisms in the model. This generates the calculated
anomaly map.
The first iteration uses the difference in the gravity field
map ΔBA = (residual - computed) to adjust the prisms at
each grid node using the same strategy as in Fig 24/20,
and the model anomaly is then recalculated.
Iterations are repeated until a fixed number of iterations
is reached (e.g. 3 to 5) or until the RMS of the difference
(residual - computed) falls below an accepted value.
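A minimal Python sketch of this iterative, slab-initialised scheme (the grid values, density contrast, cell size and iteration count are all illustrative assumptions; the prism response uses the standard constant-density corner-sum formula, building down from a reference surface just below datum):

import math
import numpy as np

G, MGAL = 6.674e-11, 1e5

def prism_gz(x1, x2, y1, y2, z1, z2, drho):
    # Vertical gravity (mGal) at the origin of a right rectangular prism,
    # z positive down, constant density contrast drho (kg/m^3).
    total = 0.0
    for i, x in ((1, x1), (2, x2)):
        for j, y in ((1, y1), (2, y2)):
            for k, z in ((1, z1), (2, z2)):
                r = math.sqrt(x * x + y * y + z * z)
                total += (-1.0) ** (i + j + k) * (
                    z * math.atan2(x * y, z * r)
                    - x * math.log(y + r) - y * math.log(x + r))
    return G * drho * total * MGAL

dx, n, drho = 1000.0, 8, -300.0                       # assumed cell, grid, contrast
ii = (np.arange(n) + 0.5) * dx                        # node (= prism centre) positions
residual = -2.0 * np.exp(-(((ii[:, None] - 4000.0) ** 2
                            + (ii[None, :] - 4000.0) ** 2) / 2.0e6))  # mGal, assumed

slab = 2.0 * math.pi * G * drho * MGAL                # mGal per metre of thickness
thick = residual / slab                               # starting model from slab rule
for it in range(3):                                   # iterate: forward calc + adjust
    calc = np.zeros_like(residual)
    for p in range(n):
        for q in range(n):
            if thick[p, q] <= 0:
                continue                              # no positivity handling beyond this
            for u in range(n):
                for v in range(n):
                    calc[u, v] += prism_gz((p - u - 0.5) * dx, (p - u + 0.5) * dx,
                                           (q - v - 0.5) * dx, (q - v + 0.5) * dx,
                                           1.0, 1.0 + thick[p, q], drho)
    thick += (residual - calc) / slab                 # misfit back through slab rule
print('maximum model thickness: %.0f m' % thick.max())

After a few iterations the calculated and residual fields converge, as described above; a production code would add the one-prism-at-all-nodes symmetry speed-up and the up/down/about reference-surface options.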
The program is very flexible and can be used with
seismic data to calculate the 3D gravity effect of sub-
surface structures defined by the seismic data or used to
estimate the size and geometry of sub-surface
structures.
Use of the 3D program with seismic data

Stage One
First convert the seismic (time) model to a depth model.

Fig 24/21: Conversion from time to depth section, generating
depth horizons of layers 1-3

Stage Two
Use a density log to determine the density of each layer,
or use other methods to derive densities from interval
velocity information based on your knowledge of the type
of geology in the section.

Stage Three
Define the top and bottom of the seismic layers as grids.
Extend the model in as realistic a manner as possible to
prevent edge effects. Use the program to calculate the
gravity effect of each layer.

Figure 24/22: Gravity stripping layers

The grid size should be the same size as, or half, the
line separation. In this way 2D sets of seismic sections
can be used to generate depth maps as digital grids,
used to calculate the gravity effects of each layer in turn.
This process is repeated down through the seismic
section until the quality of the seismic data prevents
accurate definition of layers. The gravity effects of each
layer can now be removed from the Bouguer anomaly
(or free air anomaly if the sea water is treated as a
layer in the model).

The resulting gravity can now be used to control
uncertain seismic interpretation or be used to model
deeper structure not imaged by the seismic data. This
is often the case in the southern North Sea where the
salt layer prevents seismic energy penetration. In this
case the resulting residual gravity can be used to
model the deeper parts of the basin not seismically
imaged. In the southern North Sea the low density
Carboniferous, which underlies the salt and is the
source of the gas, can be observed and modelled.

To do such modelling of the unseen lowest
sedimentary layer of the basin it is necessary to use
the last reflecting horizon as the reference surface and
build the model down from that surface. If the
interpreter wishes to obtain a feel for what is going on
solely in terms of the gravity expression then the
bottom surface of the density model must be horizontal,
so that the gravity effect of the bottom interface
(reflecting horizon) does not influence the resulting
residual anomaly.

The whole process is sometimes called gravity
stripping, since it involves removing, layer by layer, the
gravity effects of the sedimentary layers identified by
the seismic data, so that the gravity expression of
deeper layers can be identified and interpreted.

Using the modified equation set out above, the density
contrast within a layer can vary with depth. It should be
possible to couple such an expression with a laterally
varying density function to take account of lateral
changes in interval velocity due to facies changes
rather than depth of burial.

SECTION 25: QUANTITATIVE INTERPRETATION
Semi-Automatic Profile Methods

There are two types of semi-automated interpretation in
general use. These are:
- line or profile methods (where the profile x is 1D and
the section x,z is referred to as 2D)
- grid methods (where the grid x,y is 2D and the
subsurface model x,y,z is 3D)

This section deals with profile-based methods.

Figure 25/1: Magnetic source locations are invariably
the top corners of structures, not their bottom corners.
The table indicates the parameters that can be
estimated from magnetic anomalies and the methods
commonly used. All parameters and methods apply to
profile methods, with the source assumed to be
vertically below the profile.

The profile method has the advantage of using the
highest resolution data, albeit 1D, i.e. just along the
profile line, whereas the grid method is 2D and the data
have generally undergone some kind of filtering, so the
shortest wavelength content of the grid is limited to twice
the grid cell size. Beware of over-gridding, where the
true data resolution may be >> 2 cells.

Figure 25/2: Examples of geological structures
generating magnetic responses where red zones are
rocks with higher magnetisation. The solutions are
denoted as open circles and dip as tail directions
and length of tail the magnitude of the susceptibility
contrast or product (susceptibility contrast x
thickness) for dike model.
Since most geophysical data are collected along line
where sampling is generally much finer than the
sampling across track then any method using the along
line data will be using the highest frequency (shortest
wavelength) data available. This is true for magnetic
surveys but not necessarily true for gravity surveys e.g.
current high resolution marine and airborne gravity
surveys can have minimum wavelength longer than the
line spacing. (marine gravity data have resolution of 0.5
km whereas line spacing can be 150m to 200m)
Most semi- automated methods rely on the anomalies
being 2D with the survey profile line perpendicular to
anomaly strike direction. This is often close to the
situation since flight lines are orientated perpendicular to
the geological strike to maximise useful data return. If
the anomaly orientation is not as above then the
wavelength and gradients seen along line will be less
than true values and thus the depth estimate will
increase. Some computer programs now compensate
for this by checking on 2D assumption by looking for the
correlation of the anomaly position on adjacent lines and
to inputting the orientation angle of the anomaly.
25.1 Naudy Method

Details of this method are given in Naudy, 1971,
'Automatic determination of depth on aeromagnetic
profiles', Geophysics 36, No 4, 717-722. The basis of
the method is fitting anomalies resulting from theoretical
structures, e.g. bottomless prisms (dikes) and thin plates,
to the profile data to obtain a measure of goodness of fit,
which is determined automatically and provides an
estimate of source location and depth to the top of the
structure, based on the structure preferred.

Theory
An anomaly can be split into symmetrical and
asymmetrical components (see Section 25.2, where the
2D equation for a dike has these components).

Figure 25/3: Dipping dike model

The observed anomaly (Fig 25/3) over a dike can be
symmetric or asymmetric, as shown in Fig 25/4. When
the anomaly is at the pole (inclination of the induced
field is 90°) it will have no anti-symmetric component.
What happens at the Equator?

Figure 25/4: Splitting an anomaly into two components

The two components, when added together, give the
total anomaly. The asymmetric component can be used
if it is reduced to the pole.

Figure 25/5: Transformation of anomalies

Thus for the profile being investigated, there is the
measured profile and the transformed RTP profile.
These profiles can then be compared (correlated) with
the theoretical anomalies. Assuming an operator window
(2m+1 points) is moved along the profiles, a correlation
coefficient can be determined which has a value
between +1 and -1
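A minimal sketch of the sliding-window correlation at the heart of such curve matching (the template and profile used here are illustrative assumptions, not Naudy's exact theoretical curves):

import numpy as np

def sliding_correlation(profile, template):
    # Pearson correlation of a theoretical template with every window of a profile.
    m = len(template)
    t = (template - template.mean()) / template.std()
    out = np.full(len(profile), np.nan)
    for i in range(len(profile) - m + 1):
        w = profile[i:i + m]
        if w.std() > 0:
            out[i + m // 2] = np.mean((w - w.mean()) / w.std() * t)
    return out            # values between -1 and +1; peaks mark candidate sources

# Illustration: a symmetric template buried in a noisy profile
x = np.arange(-20, 21, dtype=float)
template = 1.0 / (1.0 + (x / 6.0) ** 2)           # symmetric test curve (assumed)
profile = np.random.normal(0.0, 0.05, 400)
profile[180:221] += template                       # hidden 'anomaly'
r = sliding_correlation(profile, template)
print("best match at sample", np.nanargmax(r))     # ~ 200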

Several sampling intervals are used, and the best
correlation per sampling interval can be determined; for
an individual anomaly this provides both a location of the
source (top centre of the anomaly) and its depth, based
on the type of source assumed.

Figure 25/6: Example of results from the Intrepid
automatic Naudy method using dipping sheet or dike
bodies (start and finish of the run shown).

Several depth values are obtained at the location of the
centre of an anomaly due to the different models being
assumed (e.g. dike or thin plate). The interpreter then
has to decide which model is most appropriate. In the
automated method the change in source parameters
continues until a fit between the observed and calculated
anomalies is achieved, as in Fig 25/6.

25.2 Werner Deconvolution
(Notes taken from J. L. Friedberg, Aero Service.) The
primary objectives in many surveys are generally to
determine the depth and structural configuration of the
magnetic basement. A fundamental problem in any type
of potential field interpretation is the fact that at any
depth level, with certain restrictions, a distribution of
magnetic dipoles can be found which will produce a
given anomaly (the equivalent layer concept). This
ambiguity can only be removed by the assumption of a
single geometric configuration for the causative body (i.e.
dike, contact, fault, prism, lens, basement rise). Once
the assumption of body type has been made, one can
find a unique solution with geologically meaningful
parameters such as depth of source, dip and
susceptibility.
Pre-computer methods (see Section 23; also Vacquier
et al., 1951, Interpretation of aeromagnetic maps, Geol.
Soc. Am. Memoir 47; Reford, 1964, Magnetic anomalies
over thin sheets, Geophysics 29, No 4; and Grant and
West, 1965, Interpretation Theory in Applied
Geophysics, McGraw-Hill, New York) tended to use
anomaly parameters which, when used by experienced
interpreters on reasonably isolated anomalies together
with adequate geological knowledge, are able to produce
good quality interpretations. However, when the
magnetic/geological conditions are less than ideal, the
anomaly-parameter methods are subject to a significant
amount of error-induced ambiguity.
The Werner method was introduced by S. Werner in
1953 (Werner, S., 1953, Interpretation of magnetic
anomalies at sheet-like bodies, Sveriges Geologiska
Undersökning, Ser. C, Årsbok 43 (1949), N:o 6), who
realised the need for an interpretation method that was
able to separate an anomaly from the interference
caused by adjoining anomalies. This was achieved by
Werner by considering the entire anomaly in the
analysis, rather than measuring a few parameters as in
previous methods (Section 23).

Figure 25/7: Minimum sampled Dike (or thin sheet)
Model

Theory: The Total field equation for a two-dimensional
dike with infinite strike length and infinite depth to base
can be expressed as

T(x) = [ A(x - x₀) + Bz ] / [ (x - x₀)² + z² ]

which can be rearranged into a linear (matrix) form for
solution, where
x = distance along profile
T = Total magnetic field intensity at x
A, B = functions of field strength, susceptibility and the
geometry involved
x₀ = horizontal distance to the point directly above the
dike
z = depth to top of dike

There are four unknowns: A, B, x₀, z.

In the simple case Werner showed the equation for a
dike could be represented as

x²T = a₀ + a₁x + b₀T + b₁xT

In this equation, where x and T have the same
definitions, it can be shown by substitution that

a₀ = -A x₀ + B z
a₁ = A
b₀ = -(x₀² + z²)
b₁ = 2 x₀

These equations can be rearranged to give the four
unknowns:

x₀ = b₁/2
z = ( -b₀ - b₁²/4 )^(1/2)
A = a₁
B = ( a₀ + a₁ b₁/2 ) / z

Since there are 4 unknowns there is a need for four
simultaneous equations, which require 4 values of x.

To make the method more robust the equations are
linked to the regional interference effects of a
polynomial surface of some degree; thus the first
equation now becomes

T(x) = [ A(x - x₀) + Bz ] / [ (x - x₀)² + z² ] + C₀ + C₁x + ... + Cₙxⁿ

where n is the order of the polynomial. This results in
(n+5) unknowns, which require (n+5) equations and
(n+5) data points to solve. In practice a first-order
polynomial is found to be sufficient,
so the minimum number of points required is 6.
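A minimal Python sketch of one pass of this scheme: multiplying the first-order regional through by (x - x₀)² + z² absorbs it into a cubic in x, so each 6-point window solves a 6-unknown linear system (the synthetic profile below is an illustrative assumption):

import numpy as np

def werner_window(x, T):
    # Solve x^2 T = b0*T + b1*x*T + d0 + d1*x + d2*x^2 + d3*x^3;
    # d2, d3 absorb the first-order regional. Local x improves conditioning.
    xm = x.mean()
    xl = x - xm
    Gm = np.column_stack([T, xl * T, np.ones_like(xl), xl, xl ** 2, xl ** 3])
    sol, *_ = np.linalg.lstsq(Gm, xl ** 2 * T, rcond=None)
    b0, b1 = sol[0], sol[1]
    z2 = -b0 - b1 ** 2 / 4.0
    if z2 <= 0:
        return None                      # non-physical: reject this window
    return xm + b1 / 2.0, np.sqrt(z2)

# Synthetic thin-dike anomaly (x0 = 500 m, z = 120 m) plus a linear regional
x = np.arange(0.0, 1000.0, 20.0)
T = (30.0 * (x - 500.0) + 80.0 * 120.0) / ((x - 500.0) ** 2 + 120.0 ** 2) * 100.0
T += 5.0 + 0.002 * x
for i in range(0, len(x) - 6, 4):        # slide the 6-point operator along the line
    est = werner_window(x[i:i + 6], T[i:i + 6])
    if est:
        print("window at x=%4.0f: x0=%6.1f m, z=%6.1f m" % (x[i], est[0], est[1]))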

It is important to note that the theory given above is valid
for all semi-infinite, homogeneous, sheet-like bodies,
regardless of strike or dip and at any magnetic latitude.
Furthermore, the calculations of depth and position are
independent of the direction of magnetisation and thus
unaffected by remanence. Another important point is
that a thin sheet (or dike) is made up of two closely
spaced interfaces such that the anomaly effects of the
individual interfaces cannot be distinguished. The
magnetic anomaly of a thin sheet is precisely the same
as the derivative of the magnetic anomaly for a similarly
positioned interface. This is shown in the next set of
diagrams:


TF = Total Field; HG = Horiz. Grad.

Figure 25/8: Total field anomaly (TF or T) for a
vertically dipping interface assuming an inclination of
90°; the dT/dx curve is its horizontal derivative along
the line.

Because of the above relationship, the Total field data
collected along a profile can be converted to the thin-
sheet type of anomaly by simply taking the horizontal
derivative of the Total field. Once in derivative form it
can be subjected to Werner analysis, resulting in
estimates of depth, horizontal position, susceptibility and
dip for interfaces and edges.

If vertical gradients are measured then they can also be
subjected to Werner deconvolution in a similar way to
the horizontal gradient.

Figure 25/9: Total field over a horizontal thin
sheet (magnetic inclination = 90°)

Figure 25/10: Total field over a vertical thin sheet
(magnetic inclination = 90°)

Practice: The data profile is re-sampled into regularly
spaced values, since this is a requirement of the
algorithm. The analysis is then undertaken on 6 or 7
successive equally spaced points and the linear
equations are solved. The 6 or 7-point operator is then
moved along the profile by one data point and the
analysis repeated. The sensitivity of the 6 or 7-point
operator in detecting an anomaly and determining the
source location is strongly related to the horizontal
distance covered by the operator array (Fig 25/11). The
deeper a geological source lies, the broader will be its
associated magnetic anomaly. Therefore the operator
array dimension must be increased in order for the
anomaly to be recognised. This involves a number of
passes over the profile with the distance between the
operator points increasing. Before each pass the data
are smoothed by a low-pass filter and re-sampled to
remove/suppress the high frequency anomalies
recognised in the previous passes.

In practice smoothing does not completely remove an
anomaly, and a broader anomaly of lower amplitude will
be interpreted by the deconvolution operator as though
arising from a deeper geological source. This can be
easily identified on a depth profile, since the solutions
will lie on a vertical line, where the correct depth is at or
near the top of the vertical array of depth points.

When the operator is passing over an anomaly there
will be a closely grouped set of depth points, which will
indicate the most probable source depth. These
solutions can be used to determine dip and
susceptibility values. The same procedure is carried out
with the horizontal derivative calculated from the Total
field. A third pass would be made using the vertical
derivative. Each of these sets of solutions is plotted with
a different symbol. In general the gradient anomaly will
generate depths 20% shallower than the Total field
anomalies. The susceptibility and dip will aid the
interpreter's decision on what type of body may be
expected: e.g. if a vertical dip is associated with the
Total field depth group and a near-horizontal dip with
the gradient depth group, then the source body is quite
likely to be a vertically dipping dike; if a vertical dip is
associated with the gradient depth group and a near-
horizontal dip with the Total field depth group, then the
source is either a horizontal sheet at depth or a vertical
edge at the depth of the gradient calculation.


Figure 25/11 Operator window size change with
depth of source

A range of criteria to aid the interpreter is suggested by
Hartman, Teskey & Friedberg, 1971, 'A system for rapid
digital aeromagnetic interpretation', Geophysics 36,
No 5. These relate to the horizontal spread of the depth
groups and the relationship between the two sets of
groups.

The examples (Figs 25/13 and 25/14) are taken from
Hartman et al., where the triangles are the shallow
derivative solutions and the squares the Total field
solutions. Fig. 25/12 shows a computerised version of
the method.






Figure 25/12: Werner Deconvolution from a South American sub-Andean area by EarthField Inc. The upper
panel shows the TMI, its amplified version and the horizontal derivative. The lower panel shows the complexity
of the solutions (computed for all windows).

Figure 25/13: Resolution of Werner deconvolution
(hand computed for selected window locations), where
thick line = Total field, thin line = Vertical gradient and
dashed line = Horizontal derivative.

Figure 25/14: A graben with a basement
susceptibility of 0.0005. Hand computed solutions
for selected window locations.







25.3 Euler Deconvolution

25.3.1 Conventional 2D Euler Deconvolution

This method was originally developed by Thompson,
1982 ('EULDPH: A new technique for making computer-
assisted depth estimates from magnetic data',
Geophysics 47, 31-37). The Euler equation for profile-
based data, assuming 2D structures perpendicular to
the profile, is

(x - x₀) ∂T/∂x + (z - z₀) ∂T/∂z = -N T

(or N(B - T) if a background level B is included), where
N defines the structural index, equivalent to the
exponent of the fall-off of the amplitude of the field T
with distance (the rate of decay of the signal with
distance). For magnetic fields N = 0 for large faults, 0.5
for moderate faults, 1.0 for small faults (or a thin dike)
and 3 for point sources. For gravitational fields the fall-
off is normally between 0 and 2, where the latter is for a
point source. The source location is (x₀, z₀), and the
terms in the equation are such that (x - x₀) increases
while ∂T/∂x decreases, and the same is true for (z - z₀)
and ∂T/∂z.

The unknowns in the equation are (x₀, z₀); N is
normally assumed. To obtain a stable solution it is best
for the system to be over-determined, and thus the
operator length will be more than 4 points. It is
important that the operator is longer than the
wavelength of the anomalies of interest (see how the
Werner method gets over this problem). The operator is
moved along the profile at a certain move-along rate
and at each step a solution is determined.

7-point Operator Positions:
1st position  x x x x x x x
2nd position   x x x x x x x
3rd position    x x x x x x x
etc.
Profile data points: x x x x x x x x x x x ...

Solutions can be weakly or strongly/robustly constrained
by the data; this generally depends on the location of
the operator window with respect to the source structure
and on whether there are interfering anomalies within
the window.
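A minimal sliding-window 2D Euler sketch in Python (the structural index, window length and homogeneous test field are assumptions for illustration; production codes compute ∂T/∂z spectrally and apply rejection criteria):

import numpy as np

def euler_2d(x, T, dTdx, dTdz, N=1.0, win=7):
    # In each window solve (x - x0) dT/dx - z0 dT/dz = -N (T - B) for x0, z0, B.
    # Stations at z = 0, z positive down.
    sols = []
    for i in range(len(x) - win + 1):
        xs, tx, tz, t = (a[i:i + win] for a in (x, dTdx, dTdz, T))
        A = np.column_stack([tx, tz, -N * np.ones(win)])   # unknowns [x0, z0, B]
        rhs = xs * tx + N * t
        sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        sols.append((sol[0], sol[1]))
    return sols

# Homogeneous test field of degree -N (satisfies Euler's equation exactly)
x = np.arange(0.0, 2000.0, 10.0)
x0t, z0t, N = 1000.0, 150.0, 1.0
r2 = (x - x0t) ** 2 + z0t ** 2
T = 1.0e4 / np.sqrt(r2)
dTdx = -N * 1.0e4 * (x - x0t) / r2 ** 1.5
dTdz = N * 1.0e4 * z0t / r2 ** 1.5        # (z - z0) = -z0 at the stations
for i, (x0, z0) in enumerate(euler_2d(x, T, dTdx, dTdz, N)):
    if i % 40 == 0:
        print('x0 = %7.1f m, z0 = %6.1f m' % (x0, z0))

Every window recovers the true (x₀, z₀) here because the test field is exactly homogeneous; with noise or interfering anomalies the solutions spray, as discussed below.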

The Euler method for 2D solution determination has not
been very popular due to:
i) the sprays of solutions (see Figure 25/15) that are
generated, which do not distinguish between poor and
reliable solutions;
ii) the method, unlike Werner, only provides locations,
not dips or susceptibility contrasts.

Figure 25/15: Typical 2D Euler solutions without any
rejection criteria applied. Different symbols for N = 0
(shallowest), 0.5 and 1.0 (deepest).

Figure 25/16: Euler solutions (red dots) for isolated
2D structures (contact and dike) giving perfect
solutions and no spurious solutions.

Ideally, perfect solutions as shown in Fig 25/16 are
wanted, where the top corner of the structure is identified
by the solutions.

In reality, when two structures are located close to each
other their anomalies interfere. This systematically
shifts the solutions, since the gradients used in
determining the solution also change. This particularly
occurs when the window is located to one side of an
anomaly, where the percentage of interference is
greater because the anomaly there has low amplitude.
Over the central parts of an anomaly the percentage
interference is low and the solution is located close to
its correct location. This is illustrated in Fig 25/17.
Also, if the window is too small or too large with respect
to the anomaly wavelength then the solutions will be
degraded.



Figure 25/17: Poor solutions (green dots) originate
from window locations away from the anomaly, where
noise and interference effects are larger. Solutions
over the largest amplitude parts of the anomaly
cluster around the true solution (red dots).



25.3.2 Extended 2D Euler Deconvolution

Recently the 2D Euler method has been upgraded and
renamed Extended Euler (Mushayandebvu, van Driel,
Reid and Fairhead, 1999; Mushayandebvu, van Driel,
Reid and Fairhead, 2001), through the introduction of a
second controlling equation, which uses the principle
that homogeneous functions are invariant under
rotation, so that

(x - x₀) ∂T/∂z - (z - z₀) ∂T/∂x = A

where A is a constant for the source (the rotational
constraint).

Magnetic Contact Model: For a magnetic contact, the
equation for the magnetic field and its associated
derivatives for a contact extending to depth with its top
edge at (xo, zo) is given by Nabighian, 1972 The
derivative equations after Nabighian are:

$$\frac{\partial T}{\partial z} = \alpha\,\frac{(x-x_0)\cos\beta - (z-z_0)\sin\beta}{r^2}$$

and

$$\frac{\partial T}{\partial x} = -\alpha\,\frac{(x-x_0)\sin\beta + (z-z_0)\cos\beta}{r^2}$$


where

$$\alpha = 2Ktc\sin(d), \qquad \beta = 2I - d - 90^{\circ}$$

and

$$r^2 = (z-z_0)^2 + (x-x_0)^2$$

with K the susceptibility contrast at the contact, d the local dip, t the magnitude of the Earth's magnetic field, c = 1 - cos²(i) sin²(A) for i the ambient field inclination and A the angle between magnetic north and the x-axis, and tan(I) = tan(i)/cos(A) defining the effective inclination I. Substituting these into the Euler and rotational constraint equations above leads to

$$(x-x_0)\frac{\partial T}{\partial x} + (z-z_0)\frac{\partial T}{\partial z} = -\alpha\sin\beta$$

$$(x-x_0)\frac{\partial T}{\partial z} - (z-z_0)\frac{\partial T}{\partial x} = \alpha\cos\beta$$
Thus, by simultaneously inverting the conventional Euler and rotational constraint equations, the location, dip and susceptibility contrast can all be determined. This was tested by Mushayandebvu et al. (1999) for contact and dike models (figure 25/18), giving excellent results.

Thin dike Model: The equation for the magnetic field of a thin dike extending to depth with its top edge at (xo, zo) is the horizontal derivative of the field for a magnetic contact (Nabighian, 1972) and is given by



$$M = \alpha\,\frac{(x-x_0)\sin\beta - (z-z_0)\cos\beta}{r^2}$$
with α now equal to 2Kfct sin(d), where t is the thickness of the dike (and f the magnitude of the Earth's field). The other terms have the same definitions as for the magnetic contact. The derivatives are

$$\frac{\partial M}{\partial x} = \alpha\left[\frac{\sin\beta}{r^2} - \frac{2(x-x_0)\left[(x-x_0)\sin\beta - (z-z_0)\cos\beta\right]}{r^4}\right]$$

$$\frac{\partial M}{\partial z} = \alpha\left[-\frac{\cos\beta}{r^2} - \frac{2(z-z_0)\left[(x-x_0)\sin\beta - (z-z_0)\cos\beta\right]}{r^4}\right]$$

Substituting these into the profile Euler and rotational constraint equations (page 25/7) gives

$$(x-x_0)\frac{\partial M}{\partial x} + (z-z_0)\frac{\partial M}{\partial z} = -M$$

and

$$(z-z_0)\frac{\partial M}{\partial x} - (x-x_0)\frac{\partial M}{\partial z} = \alpha\,\frac{(x-x_0)\cos\beta + (z-z_0)\sin\beta}{r^2} = V$$

The last equation but one is the conventional Euler equation for a thin dike with a structural index of 1. It is interesting to note that in the last equation V is the vertical gradient of the magnetic field from the magnetic contact. Since V is not known, we can use the last equation but one for location and then the last equation for V, where

$$M^2 + V^2 = \frac{\alpha^2}{r^2}$$

and

$$(z-z_0)V + (x-x_0)M = \alpha\sin\beta$$

$$(x-x_0)V - (z-z_0)M = \alpha\cos\beta$$


Thus the dip and the product of the susceptibility and the thickness can be determined if the field strength and inclination are known.
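As a worked illustration of that last step, the sketch below (Python; the function name and the grouping two_fc of the field-magnitude and geometry terms are assumptions, and the signs follow the equation forms reconstructed above) recovers α and β from the two linear equations and hence the dip and the susceptibility-thickness product:

```python
import numpy as np

def dike_dip_and_kt(x, z, x0, z0, M, V, incl_eff, two_fc):
    """Recover dip and K*t for a thin dike (sketch, not published code).

    Uses (z - z0)*V + (x - x0)*M = alpha*sin(beta)
     and (x - x0)*V - (z - z0)*M = alpha*cos(beta),
    with beta = 2I - d - 90 (degrees) and alpha = two_fc * K * t * sin(d).
    incl_eff is the effective inclination I; two_fc groups the 2*f*c terms.
    """
    s = (z - z0) * V + (x - x0) * M          # alpha * sin(beta)
    c = (x - x0) * V - (z - z0) * M          # alpha * cos(beta)
    alpha = np.hypot(s, c)
    beta = np.degrees(np.arctan2(s, c))
    dip = 2.0 * incl_eff - 90.0 - beta       # from beta = 2I - d - 90
    kt = alpha / (two_fc * np.sin(np.radians(dip)))
    return dip, kt
```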
Cleaning up the spray solutions using Extended Euler: Using model profile data, both Conventional and Extended Euler methods generate sprays of locations when 2 or more bodies are located close to each other; the adjacent body causes slight regional effects on the anomaly under analysis (Fig 25/18). These depth-location sprays radiate away from the true source location in different ways depending on whether the Conventional Euler or Extended Euler method is used (see figure 25/18). Conventional and Extended depth solutions that disagree by more than 10% for any given window are rejected, i.e. only those parts of the sprays that give nearly identical depth solutions are retained. This is a feature that no other 2D profile method has, and it is thus likely to make Extended Euler very widely used as a profile interpretation method.
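A minimal sketch of this rejection rule, assuming the per-window depths from the conventional and extended inversions are stored in two aligned arrays (illustrative code, not the published implementation):

```python
import numpy as np

def accept_by_agreement(z_conventional, z_extended, tol=0.10):
    """Keep only windows whose conventional and extended Euler depths
    agree within `tol` (10%, as quoted in the text)."""
    zc = np.asarray(z_conventional, dtype=float)
    ze = np.asarray(z_extended, dtype=float)
    rel = np.abs(zc - ze) / np.maximum(np.abs(zc), 1e-12)
    return rel <= tol     # boolean mask of accepted solutions
```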

Once the tight cluster of depth solutions has been identified, these solutions are used to identify their corresponding dip and susceptibility solutions, which also form tight clusters (bottom three panels of Fig 25/18).

Figures 25/19-21 illustrate solutions for field data across the Trans European Suture Zone (or Tornquist zone of Poland). The Tornquist zone is a major suture separating the East European Craton from the younger, warmer and thinner crust of the Phanerozoic orogens of western and southern Europe. Overlying the suture is about 10 km of non-magnetic sediment. The magnetic anomaly coming from the basement of the East European craton is so large that it is clearly seen as a satellite anomaly by the CHAMP satellite at 400 km altitude. The extended Euler generates reasonable depths, dips and susceptibility contrasts (Fig. 25/20) that can be used in Fig 25/21 to regenerate the magnetic TMI profile.



Figure 25/18: Removing poorly constrained solutions by comparing locations from extended and conventional Euler. Only solutions within a 10% location error are accepted.



Figure 25/19: Location of the magnetic profile used in Figs 25/20 and 25/21.



Figure 25/20: Inversion of the magnetic profile across the Trans European Suture.


Figure 25/21: Taking the solutions, constructing a model and generating a close fit to the observed data. This had not previously been done by people working in inversion methods.

25.4 Source Parameter Imaging, SPI, using complex attributes

The complex attributes (see Section 19.2.8) can be used directly to determine the edge location, depth, dip and susceptibility contrast. This method was published by Thurston and Smith (1997).

Sloping contact (after Nabighian, 1972)

The vertical gradient is

$$\frac{\partial T}{\partial z} = 2KTc\sin(d)\,\frac{x\cos(2I-d-90) - h\sin(2I-d-90)}{h^2+x^2}$$

and the horizontal gradient is

$$\frac{\partial T}{\partial x} = 2KTc\sin(d)\,\frac{h\cos(2I-d-90) + x\sin(2I-d-90)}{h^2+x^2}$$

where
K = susceptibility contrast,
T = magnitude of the Earth's magnetic field,
c = 1 - cos²(i) sin²(α), where α is the angle between the positive x-axis and magnetic north,
i is the ambient field inclination,
tan I = tan i / cos α,
d is the dip (measured from the positive x-axis),
h is the depth to the top of the contact, and all trigonometric arguments are in degrees.

Depth h for Contact
Substituting the above two equations into the local wavenumber (section 19.2.8) yields

$$\kappa = \frac{h}{h^2+x^2}$$

where the co-ordinate system has been defined such that x = 0 directly over the edge. This equation makes it evident that the maxima of the local wavenumber are independent of the magnetization direction. Thus the peaks outline source edges and are at the locations x = 0. At x = 0, the local depth can be calculated by

$$h = \frac{1}{\kappa}$$
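A short sketch of this depth estimate for a profile, assuming the horizontal and vertical gradients are available (e.g. the vertical gradient from a Hilbert transform); the peak-picking is deliberately simple and illustrative:

```python
import numpy as np

def spi_contact_depths(x, Tx, Tz):
    """Local-wavenumber depths at its maxima, contact model (sketch).

    kappa = d(theta)/dx with theta = arctan(Tz/Tx); expanding the
    derivative avoids phase wrapping:
    kappa = (Tx*dTz/dx - Tz*dTx/dx) / (Tx**2 + Tz**2).
    At a kappa maximum over a contact, depth h = 1/kappa.
    """
    dTx = np.gradient(Tx, x)
    dTz = np.gradient(Tz, x)
    kappa = np.abs((Tx * dTz - Tz * dTx) / (Tx**2 + Tz**2))
    # simple interior local-maxima picker for the edge locations
    pk = np.where((kappa[1:-1] > kappa[:-2]) & (kappa[1:-1] > kappa[2:]))[0] + 1
    return x[pk], 1.0 / kappa[pk]
```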

Local Dip d
To compute the local dip d, the expressions for the gradients of the sloping contact are substituted into the expression for the local phase (see Section 19.2.8 page 9), giving

$$\theta = \tan^{-1}\left[\frac{x\cos(2I-d-90) - h\sin(2I-d-90)}{h\cos(2I-d-90) + x\sin(2I-d-90)}\right]$$

The local dip can be estimated by setting x = 0 and rearranging the above equation to

$$d = \theta + 2I - 90$$

Local Susceptibility, K
At x = 0, and using the local depth and local dip rather than depth and dip,

$$K = \frac{A}{2\kappa\,Tc\sin d}$$

where A is the analytic-signal amplitude.

The local wavenumber (see section 19.2.8) gives a better location of the contact edge position than other derivatives. Using the wavenumber, the phase and the analytic-signal amplitude at this position allows the local depth, local dip and local susceptibility contrast to be determined (in the example quoted: 100 m, 135° and 0.01 SI respectively). See also the Tilt and Total Horizontal Derivative of the Tilt methods.
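Pulling the three estimates together at a located edge (x = 0), a minimal sketch (assuming the effective inclination I, field magnitude T and geometric factor c are known; all names illustrative):

```python
import numpy as np

def spi_estimates(kappa_max, theta, asa, incl_eff, T, c):
    """Local depth, dip and susceptibility at a contact edge (sketch).

    kappa_max : local wavenumber at its maximum (1/depth for a contact)
    theta     : local phase at the same point (degrees)
    asa       : analytic-signal amplitude there
    incl_eff, T, c : effective inclination, field magnitude, and
                     c = 1 - cos^2(i) sin^2(alpha)
    """
    depth = 1.0 / kappa_max                    # h = 1/kappa
    dip = theta + 2.0 * incl_eff - 90.0        # d = theta + 2I - 90
    k_susc = asa / (2.0 * kappa_max * T * c * np.sin(np.radians(dip)))
    return depth, dip, k_susc
```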

25.5 Other Methods

The reader is directed to the following papers

25.5.1 Linearized least-squares method for sources of simple geometry
(Salem, A., Ravat, D., Mushayandebvu, M. F. and Ushijima, K., 2004. Linearized least-squares method for interpretation of potential-field data from sources of simple geometry: Geophysics, 69, no. 3, 783-788.)

Abstract
We present a new method for interpreting isolated potential-
field (gravity and magnetic) anomaly data. A linear equation,
involving a symmetric anomalous field and its horizontal
gradient, is derived to provide both the depth and nature of the
buried sources. In many currently available methods, either
higher order derivatives or postprocessing is necessary to
extract both pieces of information; therefore, data must be of
very high quality. In contrast, for gravity work with our
method, only a first-order horizontal derivative is needed and
the traditional data quality is sufficient. Our proposed method
is similar to the Euler technique; it uses a shape factor instead
of a structural index to characterize the buried sources. The
method is tested using theoretical anomaly data with and
without random noise. In all cases, the method adequately
estimates the location and the approximate shape of the
source. The practical utility of the method is demonstrated
using gravity and magnetic field examples from the United
States and Zimbabwe.


25.5.2 Interpretation of magnetic data using analytic signal derivatives
(Salem, A., 2005. Interpretation of magnetic data using analytic signal derivatives: Geophysical Prospecting, 53, 75-82.)

Abstract
This paper develops an automatic method for interpretation of
magnetic data using derivatives of the analytic signal. A linear
equation is derived to provide source location parameters of a
2D magnetic body without a priori information about the
nature of the source. Then using the source location
parameters, the nature of the source can be ascertained. The
method has been tested using theoretical simulations with
random noise for two 2D magnetic models placed at different
depths with respect to the observation height. In both cases,
the method gave a good estimate for the location and shape of
the sources. Good results were obtained on two field data sets.

25.5.3 Depth and structural index from normalised local wavenumber of 2D magnetic anomalies
(Salem, A. and Smith, R., 2005. Depth and structural index from normalised local wavenumber of 2D magnetic anomalies: Geophysical Prospecting, 53, 83-89.)

Abstract
Recent improvements in the local wavenumber approach have
made it possible to estimate both the depth and model type of
buried bodies from magnetic data. However, these
improvements require calculation of third-order derivatives of
the magnetic field, which greatly enhances noise. As a result,
the improvements are restricted to data of high quality. We
present an alternative method to estimate both the depth and
model type using the first-order local wavenumber approach
without the need for third-order derivatives of the field. Our
method is based on normalization of the first-order local
wavenumber anomalies and provides a generalized equation to
estimate the depth of some 2D magnetic sources regardless of
the source structure. Information about the nature of the
sources is obtained after the source location has been
estimated. The method was tested using synthetic magnetic
anomaly data with random noise and using three field
examples.



SECTION 26: QUANTITATIVE INTERPRETATION
Semi-Automatic Grid Methods:
EULER


Semi-automated inverse theory methods are becoming increasingly useful in providing rapid means of estimating the three-dimensional structure of sedimentary basins from potential field data. These methods promise rapid and reliable initial interpretation of gravity and magnetic data sets and include:

- 3D Euler deconvolution
- Source Parameter Imaging (SPI™)
- Spectral analysis

and utilise more fully the spectral content of the potential field data than the traditional forward modelling methods. Moreover, unlike forward modelling, the inverse methods do not require a priori knowledge of the geology, thus making them highly suited to evaluating frontier exploration areas such as the FSU (Former Soviet Union), Africa and China, where compilations of gravity and aeromagnetic data are now becoming available.


26.1 Conventional Grid (3D) Euler

Magnetic Case
(See also Section 25.3 for 2D Euler)
This section will focus on how the Euler method works with magnetic data rather than with gravity data. Solutions for magnetic data image the top corners of structures, whereas the gravity method will image contacts at a deeper level corresponding more to the centre of mass. This is illustrated in Fig 26/1 for a 2D contact model.

Figure 26/1: Relative locations for idealised
magnetic and gravity solutions for a contact

Euler's homogeneity equation (for fuller details see Reid et al., 1990, Magnetic interpretation in three dimensions using Euler deconvolution: Geophysics, 55, 80-91) is:

$$(x-x_0)\frac{\partial T}{\partial x} + (y-y_0)\frac{\partial T}{\partial y} + (z-z_0)\frac{\partial T}{\partial z} = N(B-T)$$
When N = 0:

$$(x-x_0)\frac{\partial T}{\partial x} + (y-y_0)\frac{\partial T}{\partial y} + (z-z_0)\frac{\partial T}{\partial z} = \text{Constant}$$
where T(x, y, z) is the total magnetic field in Cartesian co-ordinates, (x0, y0, z0) are the co-ordinates of the magnetic source, N is the Structural Index and B is the regional field.

The equation has four unknowns, x0, y0, z0 and B (with N assumed), thus the minimum number of equations needed to solve for the 4 unknowns is 4 (4 grid points). However, if a larger number of grid points is used, e.g. 7 x 7, then there are 49 equations, which are likely to give a more stable solution using matrix inversion.

The first term, (x-x0) ∂T/∂x, is the distance from the source in the x co-ordinate multiplied by the rate of change of the total field T with respect to the x direction. Normally, as (x-x0) increases ∂T/∂x decreases. The sum of these products for all three directions is a constant for N = 0, or a function that relates to N(B-T).

The Structural Index N is basically the fall-off rate of the anomaly with distance.



Inputs:
T, ∂T/∂x, ∂T/∂y, ∂T/∂z
Set Parameters:
Window size
Acceptance level (can be applied afterwards)
Structural Index N
Outputs:
Source 3D locations x0, y0, z0 and statistics
Regional anomaly B

The application of Euler deconvolution is illustrated in Fig 26/2, where the grid to be interpreted is referred to as T, which for magnetics is the TMI field. From this T grid the three derivatives ∂T/∂x, ∂T/∂y and ∂T/∂z can be generated. These are shown as different coloured grids. The size of the Euler operator window is decided; in this case a 7 x 7 point operator is used. This generates 49 equations to solve for the 4 unknowns (x0, y0, z0 and B), an overdetermined set of equations. The operator window is moved over the T grid in a systematic manner with overlap of window locations so that all the grid is covered. At each location a solution is found (one per window). This means many thousands of solutions are generated, both good and bad.
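The calculation inside each window is a small least-squares problem. A minimal sketch, under the same assumptions as the profile version in Section 25 (derivative grids precomputed, observations at z = 0, illustrative names):

```python
import numpy as np

def euler_window_3d(xg, yg, T, Tx, Ty, Tz, N=0.5):
    """Solve one Euler window for (x0, y0, z0, B) by least squares.

    xg, yg are the grid coordinates of the window points; T, Tx, Ty, Tz
    the field and its derivatives over the same points. A 7 x 7 window
    gives 49 equations for the 4 unknowns.
    """
    ones = np.ones(T.size)
    A = np.column_stack([Tx.ravel(), Ty.ravel(), Tz.ravel(), N * ones])
    rhs = (xg.ravel() * Tx.ravel() + yg.ravel() * Ty.ravel()
           + N * T.ravel())
    (x0, y0, z0, B), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x0, y0, z0, B
```

Moving this window one grid interval at a time over the whole grid, as in Fig 26/2, produces the full (good and bad) solution set discussed below.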



Figure 26/2: Illustrates how the operator (window of
size 7 x 7 grid cells in this example) moves over grid
at one grid interval at a time and at each grid
interval a solution is determined based on 49
equations

A typical set of results is shown in Fig. 26/3 for a grid as well as for a profile. The figure shows all possible solutions; a circular symbol is used to represent the source locations.

Figure 26/3: All 3D Euler solutions for a grid, based on a 10 km window and N = 0.5. For comparison, the profile situation showing all solutions is also shown.
Since all solutions are shown it is difficult to image the
main structures. What is needed is the ability to
discriminate between reliable/robust solutions and
poorly constrained solutions.

Symbolism of Solution Spaces: Representation of Euler solutions (x0, y0, z0) by circles provides limited information. The radii can be increased with source depth z0 (Fig. 26/4). Another display method keeps the radii constant and uses colour to differentiate source depth.

Figure 26/4: Old symbols are symmetric and only vary in size and colour for depth representation. New symbols use the strike of the RTP field at the location for the orientation of the source structure, and the dip tick points to the lower value of the RTP field.

A better and more geological symbol is the linear fault symbol shown in Fig 26/4. This symbol can be oriented along the RTP magnetic (and gravity) zero horizontal gradient (i.e. along the direction of the contour at the point), and its length and/or colour can define depth. In addition, the gradient of the RTP at the location can be used to identify the lower-susceptibility side of the fault symbol. This is marked by the tick mark in Fig 26/4 (see also Fig. 26/18).

The results of using the linear fault symbol become immediately apparent in Fig 26/5. The fault symbol now provides a better mapping tool for structural boundaries, e.g. faults and contacts, as well as determining depth to source (fig 26/6).


Figure 26/5: All solutions shown but using different
symbols. Structures are more easily seen in lower
map.





Figure 26/6: Example of magnetic solutions for a volcanic source at depth within the Timan Pechora basin, Russia. Solutions use the fault symbol with colour indicating depth. The problem is too large a scatter of solutions: which are the good ones?

Although the linear symbol helps, there are too many poorly constrained solutions: as shown in the zoomed-in buried volcanic centre of Fig. 26/6, there is a multiplicity of depths, and poor solutions tend to obscure structure.

How to clean up the solution space?
Assuming the correct structural index has been used, the solutions need to be subdivided into Robust (good quality), Poor and Artefact solutions. The last two types need to be identified and eliminated to permit the explorationist a clearer vision of the subsurface structures. Attempts to remove such solutions using an acceptance criterion, where the criterion measures the percentage error in depth based on the inversion matrix, have met with limited success.

Section 25.3 has already illustrated why poor solutions
occur due to operator window being too small or too
large and the operator window being too distant from
the anomaly being analysed. This last problem tends to
generate large numbers of poorly constrained solutions
since the flanks of anomalies are small amplitude areas
where effects of other anomalies and noise will be a
maximum.

Fairhead et al (1994, Euler: beyond the black box, SEG Expanded Abstract) have developed a method to minimise poor and artefact solutions in the solution space. Effective elimination of poor and artefact solutions necessitates an understanding of how they originate.

Fairhead et al (1994) show that window size and
window location are critical to the removal of these
poorly constrained solutions.
Window Size: Source solutions (x0, y0, z0) are highly dependent on the derivatives ∂T/∂x, ∂T/∂y, ∂T/∂z of the Total field, which have wavelengths approximately half that of T over the anomaly. The frequency content of the derivative field can be examined after application of the Laplacian filter, which defines the curvature of the field (see section 19.4.1 page 13). Determining the principal frequencies in this field derivative provides a means of determining the optimum window size. If the window size is too large then derivatives within the window originating from different edges/contacts will tend to corrupt the final solution. For a 1 km grid we find that a 4 x 4 up to 7 x 7 window is adequate for magnetic data, and a 7 x 7 up to 10 x 10 window for gravity data over the same area.

Location of the Window: Robust/reliable solutions are obtained for windows located directly over the source structure, i.e. over the maximum of the Reduced to the Pole (RTP) full horizontal gradient anomaly. Solutions rapidly deteriorate as the location of the window moves away from the horizontal derivative maxima. Thus a method named 'Laplacian (xy) Euler' (see below) has been developed which controls where Euler solutions are determined.


Figure 26/7: Laplacian (x, y) Euler: controlling where solutions are determined.

This method is shown in Fig. 26/7, based on a simple gravity model study. The starting panel (top left) shows all possible solutions for the outline body shown in red. From the T grid (where T is in this case the Bouguer anomaly) the horizontal gradient (or derivative, HDR) is generated. This has a positive maximum close to the edges we wish to map. The Laplacian filter (see section 19.4.1 page 19/13) is applied to the HDR to define areas of positive and negative curvature. If the centre of the operator window is located in an area of positive curvature then an Euler solution is determined (example shown in green window - yes). Areas containing negative curvature are not used in solution generation (example shown in green window - no). This restricts solution generation to areas close to the edges of the structures that you wish to map, since we know this is where the best constrained solutions are located.
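A sketch of the masking step, assuming a plain five-point Laplacian as the curvature filter (with that convention an HDR maximum gives negative values, so the sign may need flipping to match the "positive curvature" areas described in the text):

```python
import numpy as np

def laplacian(g):
    """Five-point Laplacian of a 2D grid (zero on the border)."""
    lap = np.zeros_like(g)
    lap[1:-1, 1:-1] = (g[:-2, 1:-1] + g[2:, 1:-1] +
                       g[1:-1, :-2] + g[1:-1, 2:] - 4.0 * g[1:-1, 1:-1])
    return lap

def euler_solution_mask(hdr, threshold=0.0):
    """True where a window centre is allowed to generate a solution.

    Flags the curvature areas enclosing the HDR maxima (the edges we
    want to map); raising `threshold` shrinks the accepted area, as
    described in the text.
    """
    return -laplacian(hdr) > threshold
```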



Figure 26/8: Continuation of the example shown in Figs 26/3 & 26/5, showing the extent of the clean-up of the solution space. The gravity solutions are reduced by 65%, from 9458 to 3280. The magnetic solutions are also reduced by 65%, from 9522 to 3185.

This procedure reduces the number of solutions by up to 65%, and thus greatly speeds up the computation and eliminates the need to apply an acceptance criterion. The extent of the Laplacian positive curvature area can be reduced by carefully adjusting the threshold value.


The following examples show how much cleaner the solutions become.

Example 1 from Western Siberia, Russia (fig 26/8)

Here the gravity and aeromagnetic solutions have been superimposed on each other to help correlation and interpretation.

Example 2 from West Africa (figs 26/9 and 26/10)

This example is located in West Africa over the Yola Cretaceous Rift, striking W-E from the Benue Trough into Cameroon. The grid was a 1 km aeromagnetic grid, N = 0.5, and the operator window size was 10 km x 10 km.


Figure 26/9: Yola rift showing Conventional Euler
with colour and radii of circles showing depth. Yola
basin outlined in red.


Figure 26/10: Yola rift, using Laplacian Euler with fault symbols. The solution space reduces by ~50%, and the solutions better define the faults and depths within the rift basin area shown in yellow.


26.2 2D Constrained Grid Euler

(after Mushayandebvu, Lesur, Reid and Fairhead, 2004, Grid Euler Deconvolution with constraints for 2D structures: Extended Abstracts, SEG Annual Meeting, Calgary; Williams, S., Fairhead, J. D. and Flanagan, G., 2002, Realistic models of basement topography for depth to magnetic basement testing: Soc. Expl. Geophys., Expanded Abstracts, GMP1.4; Williams, Fairhead and Flanagan, 2003, Grid based Euler Deconvolution: Completing the Circle with 2D Constrained Euler.)
In section 25 the profile-based Euler method was described. What is the expression for 2D solutions in the 3D environment? We need to look at some basic concepts. The equation for Euler Deconvolution in 3D is given by Reid et al. (1990) as:

Equation 1
$$(x-x_0)\frac{\partial T}{\partial x} + (y-y_0)\frac{\partial T}{\partial y} + (z-z_0)\frac{\partial T}{\partial z} = N(B-T)$$

where (x0, y0, z0) is the position of a source whose total field is detected at (x, y, z), and B and N are the regional value of the field and the structural index respectively. For a contact model the right-hand side of the equation reduces to an offset A which incorporates amplitude, strike and dip factors.

To solve for the source location, the process normally involves writing equation (1) in matrix form, for each window with n data points,
Equation 2

$$\begin{bmatrix}
\partial T_1/\partial x & \partial T_1/\partial y & \partial T_1/\partial z & N\\
\vdots & \vdots & \vdots & \vdots\\
\partial T_n/\partial x & \partial T_n/\partial y & \partial T_n/\partial z & N
\end{bmatrix}
\begin{bmatrix} x_0\\ y_0\\ z_0\\ B \end{bmatrix}
=
\begin{bmatrix}
x_1\,\partial T_1/\partial x + y_1\,\partial T_1/\partial y + z_1\,\partial T_1/\partial z + NT_1\\
\vdots\\
x_n\,\partial T_n/\partial x + y_n\,\partial T_n/\partial y + z_n\,\partial T_n/\partial z + NT_n
\end{bmatrix}$$
which is of the form A b = X. Obtaining a least-squares solution entails working out the inverse or pseudo-inverse of A. For 2D features, the ratio of the horizontal derivatives

$$\left(\frac{\partial T}{\partial x}\right) \Big/ \left(\frac{\partial T}{\partial y}\right)$$

is a constant equal to the tangent of the strike of the feature. So the first two columns in matrix A are linearly dependent, hence one cannot obtain a least-squares estimate for (x0, y0, z0). So why do we obtain solutions for 2D structures, such as a dike or contact, using conventional grid Euler? The reason is one or a combination of: i) the presence of noise, ii) interfering sources, iii) grid interpolation, and/or iv) field resolution, i.e. truncation of amplitude values; all of these ensure that the ratio (∂T/∂x)/(∂T/∂y) is not exactly constant. This explanation can thus in part account for the linear breaks, as well as the scatter, in solutions along a 2D structure. An example is shown in Fig 26/11, comparing conventional Euler and 2D Constrained Euler.

Thus Euler Deconvolution using gridded data needs to be re-thought if we are to have theoretically meaningful solutions and improve the solution space for grid data. To determine 2D solutions, use is made of the eigenvectors and eigenvalues originating from each operator window.

First we need to estimate the 2D strike of the magnetic field in a given operator window. The left-hand side of the conventional Euler equation (top of this page) can be altered to solve separately for x0 and y0 from the gridded data by projecting the data onto a profile


Figure 26/11: Conventional (left) and 2D Constrained
(right) Euler showing the improved continuity of
solutions.

perpendicular to the 2D strike of the feature and passing through the centre of the window. This leads to a reformulation of the conventional Euler equation to give

$$(x-x_0)\left[\frac{\partial T}{\partial x} + m\frac{\partial T}{\partial y}\right] + (z-z_0)\frac{\partial T}{\partial z} = N(B-T)$$

and

$$(y-y_0)\left[\frac{1}{m}\frac{\partial T}{\partial x} + \frac{\partial T}{\partial y}\right] + (z-z_0)\frac{\partial T}{\partial z} = N(B-T)$$

where m is the tangent of the strike of the feature. Jointly solving the above equations gives the full co-ordinates of the source point (x0, y0, z0). For the above equations, the locations of the points in the window are projected onto the profile.




The stability of the solution space depends on the 2D nature of the field within the Euler window. An effective way of determining the 2D nature of the field is to examine the eigenvectors and eigenvalues of AᵀA, which leads to a more accurate way to discriminate windows containing two-dimensional solutions from those that do not.
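A sketch of this eigen-analysis for one window (thresholds are illustrative assumptions, not values from the text): the Euler design matrix A is formed as in Equation 2, and a near-zero eigenvalue of AᵀA whose eigenvector lies in the horizontal plane flags a 2D window, the eigenvector direction giving the strike.

```python
import numpy as np

def window_2d_test(Tx, Ty, Tz, N=0.5, eig_tol=1e-3, horiz_tol=0.7):
    """Classify one Euler window as 2D using the eigenvalues of A^T A.

    For a purely 2D feature the along-strike derivative vanishes, so
    u_x*Tx + u_y*Ty = 0 for the strike unit vector (u_x, u_y): A^T A
    then has a (near-)zero eigenvalue with a horizontal eigenvector.
    Returns (is_2d, strike_degrees_or_None).
    """
    ones = np.ones(Tx.size)
    A = np.column_stack([Tx.ravel(), Ty.ravel(), Tz.ravel(), N * ones])
    w, v = np.linalg.eigh(A.T @ A)       # ascending eigenvalues
    vec = v[:, 0]                        # eigenvector of smallest eigenvalue
    horiz = np.hypot(vec[0], vec[1])     # fraction lying in the (x, y) plane
    if w[0] / w[-1] < eig_tol and horiz > horiz_tol:
        return True, np.degrees(np.arctan2(vec[1], vec[0]))
    return False, None
```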

26.3 Grid Application of 2D Constrained
Euler

Evolution of Euler Deconvolution: Grid-based Euler Deconvolution has evolved in significant theoretical ways within the last few years, following the original 2D (profile) work of Thompson (1982) and the 3D (grid) implementation by Reid et al. (1990). For consistency, the original formulation introduced by Reid et al (1990) is called conventional Euler and uses a single equation:
$$(x-x_0)\frac{\partial T}{\partial x} + (y-y_0)\frac{\partial T}{\partial y} + (z-z_0)\frac{\partial T}{\partial z} = N(B-T)$$

where the terms have their normal meaning.

Mushayandebvu et al. (2001) introduced a second equation, described as a rotational constraint. The profile-based method is known as profile extended Euler (see section 25.3), where ∂T/∂y is assumed zero in the above equation. For each operator window along a profile, the solution location derived from the second equation is compared to that derived from conventional Euler, and only solutions having similar locations are accepted. The accepted solutions are well-resolved 2D source structures and give spatially more consistent results than conventional 2D Euler. A further advantage of this method is that dip and susceptibility contrast can be determined for contact and dike models. Extended Euler was then applied to grid data by Mushayandebvu, M.F., Lesur, V., Reid, A.B. and Fairhead, J.D., 2004, Grid Euler Deconvolution with constraints for 2D structures: Geophysics, 69, 489-496.

Nabighian, M.N. and Hansen, R.O., 2001, Unification of Euler and Werner Deconvolution in three dimensions via the generalized Hilbert transform: Geophysics, 66, 1805-1810, further developed a grid implementation of extended Euler by utilizing the Hilbert transform components of the TMI grid. This method, called here Hilbert Euler, involves three equations, allowing up to seven possible combinations and thus seven Euler solutions per operator window.
Although these Euler developments are mathematically rigorous, they are potentially very difficult for non-specialists to implement or use. In any comparative test, conventional Euler still performs well, but it has limitations in not being able to:

- distinguish between anomalies arising from 2D or 3D structures
- determine whether the Euler solution so derived is reliable/robust
- determine dip and susceptibility contrast

The grid-based 2D Constrained Euler method (named here to distinguish it from extended Euler) was originally proposed by Mushayandebvu et al (2000) and only uses the single equation of conventional Euler. Unlike conventional Euler, it employs, within each operator window, the eigenvectors and eigenvalues of the Euler equation to discriminate between anomaly fields arising from 2D, 3D and no source structures. Development of the method by:

Williams, S., Fairhead, J. D. and Flanagan, G., 2002, Realistic models of basement topography for depth to magnetic basement testing: Soc. Expl. Geophys., Expanded Abstracts;

Williams, S., Fairhead, J. D. and Flanagan, G., 2003, Grid based Euler Deconvolution: Completing the Circle with 2D Constrained Euler: Soc. Expl. Geophys., Expanded Abstracts;

Fairhead, Williams and Flanagan, 2004, Testing Magnetic Local Wavenumber Depth Estimation Methods using a Complex 3D Test Model: Soc. Expl. Geophys., Expanded Abstracts;

have found it to be a powerful method of cleaning up the solution space and providing reliable solutions for further analysis and interpretation.

The 2D Constrained Euler method thus returns to the single equation used in conventional Euler and provides an easier and more reliable means of implementing it. To demonstrate the method, a realistic 3D basement model (test model) is used (Figure 26/12).



Figure 26/12: Basement test model in plan and isometric views, with the depth of non-magnetic sediment ranging from <1 km in the NW corner to >9 km in the SE corner.

For a given operator window observing anomaly data from a pure 2D case, the eigenvector will point along strike in the x, y plane and have a value close to one (1), and its eigenvalue will be zero (0). Euler has 4 unknowns, and thus each window will have four eigenvalues/eigenvectors.


Figure 26/13: Illustration of the case where there are 3 unknowns and thus three eigenvalues/eigenvectors (it is difficult to visualise 4-dimensional space!)

Once the operator window has identified that the anomaly is caused by a 2D body, it can determine the strike of the 2D structure. Having identified the azimuth of strike of the 2D solution, the original 4 unknowns reduce to 3. The matrix then solves for the 3 unknowns by eigenvector expansion.

So, before a set of Euler solutions can be determined, the operator window is used to define the eigenvalues and eigenvectors over the study area so they can be used to define optimum parameters. In practice the eigenvalue will not be exactly zero, but close to zero, and the eigenvector component along strike will not be exactly 1 but something like 0.7.



Figure 26/14: Upper panel: Representation of software controls to determine optimum values of eigenvalues and eigenvectors. The lower map panel shows how these parameters are used to define areas of 2D solutions (red), 3D solutions (dark blue) and no solutions (light blue). A lineament map could be used as an underlay to help define the optimum solution space.



Parameters needed to run 2D Constrained grid Euler
Research shows that:

Reduced to Pole (RTP) magnetic data have inherent advantages over TMI data at low latitudes for generating reliable solutions. Figure 26/15 illustrates the derived magnetic fields from the 3D test model at I = 25° and I = 90° (RTP). Theoretically, Euler Deconvolution solutions are independent of the field direction. This is true for isolated structures, but problems begin to arise when there is interference between magnetic source structures that strike in different directions, particularly at low magnetic latitudes.



Figure 26/15: Differences between TMI data at I = 25° and I = 90° (Reduced to Pole, RTP).





Figure 26/16a: The Bishop 3D basement surface model (A) and the depth solutions (B to D) based on varying the model Inclination from 0° (RTE) to 90° (RTP). The ability to map the structural features of the basement surface more reliably is shown with the reduced to pole (RTP) data.

Although Euler deconvolution should theoretically work with TMI data, for all practical purposes the best results are obtained with RTP data. This is clearly illustrated in Figure 26/16a for 2D Constrained Euler solutions. Figure 26/16b further shows 2D Constrained Euler outperforming conventional Euler for a given Inclination.

At such latitudes, the anomaly over a N-S striking structure reduces in amplitude whereas a magnetic anomaly over a W-E striking structure increases. This is clearly seen in the magnetic response of the test model (Figure 26/15). Euler solutions may not clearly define N-S structures due to the dominance of minor, cross-cutting W-E features.

Optimum 2D Structural Index: The optimum 2D structural index (N) for our 3D test model of a faulted magnetic basement is the non-integer value of N = 0.5. (Theoretically the value should be N = 1 for sheet edges.) Other models may give different results.
Figure 26/16b: Conventional and 2D Constrained Euler solutions for low Inclinations and RTP.

The 3D test model consists of major N-S and W-E faults and a series of lesser N-S and NNW-SSE trending en echelon faults at various depths (Fig. 26/12). Theory and simple models would suggest we use an integer value for the structural index (N), e.g. 0 for contacts or 1 for dikes or edges of thin sheets. Comparisons using a range of N values for this test model show that the best-fitting set of solutions occurs when N = 0.5. This has long been considered to be the case, based on optimum focusing of solutions, and our test model corroborates this (see Fig 26/17).



Figure 26/17: Results of tests using N = 0, 0.5 and 1 on the test model. The best histogram for depths closest to the known model is for N = 0.5.

Determination of Throw Direction: The throw direction on the basement faults can be reliably determined using RTP data. The value of the dip angle can only be determined if N is an integer, as for a contact (N=0) or dike (N=1); when using a non-integer value N = 0.5 (e.g. finite-offset faults), this is not possible. To determine the fault throw direction for each solution, the dip-azimuth derivative of the RTP field within each Euler solution window is used. This allows a reliable estimate of the fault throw direction to be made, since the RTP field images all contact/fault type anomalies in a similar way regardless of strike (see Figs 26/18 and 26/19).



The vast majority of solutions in the test model (>80%) give correct throw directions, where the full dip direction is found to be within the quadrant shown in Figure 26/19.

Non-2D Euler solutions: For this model, non-2D solutions are not truly point sources with N = 3 type solutions; the best structural index was again found to be 0.5.

Performance Analysis tests: An important aspect of using a 3D test model is the ability to undertake performance analysis. Such analysis includes the determination of the optimum



Figure 26/18: RTP field over a contact for any orientation of strike. If the contact block has high susceptibility then the strike and dip direction can be determined easily from the slope of the RTP field.


Figure 26/19: From left to right: A: Test depth model showing fault; B: the RTP; C: the Dip-azimuth map; and D: Euler solutions. The Dip-azimuth grid allows the determination of the dip direction at the solution location. So long as it lies within the green quadrant it is accepted; where it does not, this usually occurs close to the ends of 2D features.



Figure 26/20: Performance test between
conventional and 2D constrained Euler

structural index (Fig. 26/17), the simple overlaying of
Euler solutions onto basement structure maps and the
determination of the relative solution accuracy of
different methods (Figure 26/20).


26.4 Tensor Euler

To understand what is happening here you may need to refresh yourself with the Tensor gravity components in Section 10.1.2, and to read Mushayandebvu, M., Zhang, C., Reid, A.B. and Fairhead, J. D., 2000, Euler deconvolution of gravity tensor gradient data: Geophysics, 65, 512-520, for the full mathematical treatment.


Since gravity Tensor measurements are now routinely made in marine and airborne surveys, all the derivatives that conventional Euler has to calculate from the free-air gravity grid are now measured. This removes significant errors generated by grid interpolation, provided the noise levels in the Tensor data are small. Except for tensor Euler, few if any current interpretation methods can invert for structure using all 5 independent tensor components simultaneously. Since Tensor gravity is very expensive to acquire using standalone surveys (i.e. not collected on the seismic boat), the cost can be 10 times that of a conventional gravity survey.


Because of the cost, such Tensor surveys are normally conducted over areas already surveyed by detailed 3D seismic surveys and are primarily used to improve the resolution of near-surface (upper 1 km) velocity and structural models, using more interactive forward 3D modelling methods with the seismic data defining the initial 3D model. In so doing, the results allow seismic imaging of deeper structures to be more precise. Thus Tensor inversion is not often called for.



Figure 26/21: Conventional gravity Tz for the Eugene Island marine gravity Tensor survey.


Figure 26/22: Comparison between Conventional Euler and Tensor Euler. The Tensor data give the best/cleanest-looking results if data from individual points are used rather than gridding up the individual Tensor components.
Tensor gravity surveys have proved popular in salt areas such as the Gulf of Mexico. Beyond about 1 km depth the resolution of Tensor gravity is found to be no better than high-resolution conventional marine gravity; thus in ultra-deep water, the focus of much exploration, Tensor gravity has less advantage. Tensor systems do record very accurate conventional gravity or its vertical component (the BHP Falcon system uses 2 Tensor X and Y components to derive the vertical derivative) and are being used widely in the mineral industry as airborne systems. Bell Geospace has recently undertaken successful airborne gravity surveys using their full Tensor system measuring the X, Y, Z gravity components.

The Tensor Euler example is shown in Figs. 26/21 and 26/22 for the Eugene Island area in the Gulf of Mexico, using Bell Geospace full Tensor data. The conventional gravity (free-air anomaly, Tz, shown in Fig. 26/21) was used to derive the gradients input into conventional 3D Euler; the results are shown in the upper panel of Fig. 26/22. Various tests using a variety of methods of initially gridding and line-levelling each Tensor component were compared to using a moving operator window which only worked on actual measured Tensor values. The results of the latter are shown in the bottom panel of Fig 26/22 and give a well-focused set of solutions.

26.5 Tilt Euler Method
See the paper by Ahmed Salem, Simon Williams, Derek Fairhead, Richard Smith and Dhananjay Ravat, 2008, Interpretation of magnetic data using tilt-angle derivatives.
ABSTRACT
We have developed a new method for interpretation of
gridded magnetic data which, based on derivatives of
the tilt angle, provides a simple linear equation, similar
to the 3D Euler equation. Our method estimates both
the horizontal location and the depth of magnetic
bodies, but without specifying prior information about
the nature of the sources (structural index). Using
source-position estimates, the nature of the source can
then be inferred. Theoretical simulations over simple
and complex magnetic sources that give rise to noise
corrupted and noise-free data, illustrate the ability of the
method to provide source locations and index values
characterizing the nature of the source bodies. Our
method uses second derivatives of the magnetic
anomaly, which are sensitive to noise (high-
wavenumber spectral content) in the data. Thus, an
upward continuation of the anomaly may help to reduce the noise effect. We demonstrate the practical utility of
the method using a field example from Namibia, where
the results of the proposed method show broad
correlation with previous results using interactive forward
modeling.


SECTION 27: QUANTITATIVE INTERPRETATION
Semi-Automatic Grid Methods:
Local Phase (or Tilt), Local Wavenumber,
Spectral Analysis and Tilt-depth


27.1 Local Phase (or Tilt) Method

This section builds on the notes given in section 19.2.5
(Tilt derivative) and the studies of Miller, H. G. and
Singh, V. 1994 Potential field tilt-a new concept for
location of potential field sources. J of Applied
Geophys. 32:213-217; and Thurston, J.B., and Smith,
R.S., 1997, Automatic conversion of magnetic data to
depth, dip, and susceptibility contrast using the SPI
method. Geophysics, 62, 807-813. and Verduzco,
B, Fairhead, J D., Green, C. M. and MacKenzie, C.
2004 New Insights into Magnetic Derivatives for
Structural Mapping SEG The Leading Edge February
2004. (See also Tilt-depth section 27.5 & 27.6)


The following theory and 2D model examples show how such theory can be applied to grid data, generating maps that can provide a reliable means of mapping source body parameters.

It has already been shown that the complex Analytic Signal for 2D structures is given by

$$A(x,z) = |A|\exp(j\theta)$$

where

$$|A| = \sqrt{\left(\frac{\partial T}{\partial x}\right)^2 + \left(\frac{\partial T}{\partial z}\right)^2}$$

is known as the Analytic Signal (AS) amplitude, T is the magnitude of the Total Magnetic Intensity (TMI), and

$$\theta = \tan^{-1}\left(\frac{\partial T/\partial z}{\partial T/\partial x}\right)$$

is the local phase.

27.1.1 The Phase or Tilt derivative (TDR)
The Tilt derivative (TDR) is similar to the local phase, but uses the absolute value of the horizontal derivative in the denominator:

$$TDR = \tan^{-1}\left(\frac{VDR}{THDR}\right)$$

where VDR and THDR are the First Vertical and Total Horizontal derivatives respectively of the TMI. Whilst VDR can be positive or negative, the THDR is always positive.

For profiles in the x direction,

$$THDR = \sqrt{\left(\frac{\partial T}{\partial x}\right)^2} = \left|\frac{\partial T}{\partial x}\right|$$

whilst for grids

$$THDR = \sqrt{\left(\frac{\partial T}{\partial x}\right)^2 + \left(\frac{\partial T}{\partial y}\right)^2}$$
Due to the nature of the arctan trigonometric function, all amplitudes are restricted to values between +π/2 and -π/2 (+90° and -90°) regardless of the amplitudes of VDR or THDR. This makes the relationship function like an Automatic Gain Control (AGC) filter, and it tends to equalise the amplitude output of TMI anomalies across a grid or along a profile.
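A minimal sketch of the TDR computation for a grid, assuming the vertical derivative has already been computed (typically spectrally) and the horizontal derivatives by finite differences:

```python
import numpy as np

def tilt_derivative(vdr, dTdx, dTdy):
    """Tilt derivative TDR = arctan(VDR / THDR) for a grid.

    THDR = sqrt((dT/dx)^2 + (dT/dy)^2) is always positive, so the
    result is confined to +/- pi/2 whatever the anomaly amplitude -
    the AGC-like behaviour described above. arctan2 is safe where
    THDR approaches zero.
    """
    thdr = np.hypot(dTdx, dTdy)
    return np.arctan2(vdr, thdr)
```

The zero contour of the TDR of an RTP grid then tracks the body edges, a property used in the depth-mapping discussion below.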

27.1.2 The Local Wavenumber

For a profile,

$$TDR\_THDR = \left|\frac{\partial\, TDR}{\partial x}\right|$$

and for a grid

$$TDR\_THDR = \sqrt{\left(\frac{\partial\, TDR}{\partial x}\right)^2 + \left(\frac{\partial\, TDR}{\partial y}\right)^2}$$

The local wavenumber is equivalent to the absolute value of the slope of the Tilt for 2D structures. These derivatives are applied in Figure 27/1 to a range of simple 2D models (step, block and dike) for a range of geomagnetic field inclinations (0°, 30°, 60° and 90°).

The important features to note from these models (Fig 27/1) are:
a) The Analytic Signal (AS) is invariant for all inclinations (second panel from top) whereas conventional derivatives (VDR and THDR) are not. The VDR and THDR are drawn for inclination = 30° only.
b) The Tilt derivatives vary markedly with inclination within their ±π/2 amplitude range. For inclinations of 0° and 90°, the zero crossing is located close to the edges of the model structures.
c) The Total Horizontal derivative of the TDR is independent of inclination, similar to the Analytic Signal. The difference between these derivatives is that the former is sharper, generating better defined




Figure 27/1: Magnetic responses along S-N profiles across W-E striking 2D step, block and dike models


Figure 27/2: Profile P1 (for location see Fig 27/3D) comparing the response of common derivatives to both the Tilt and the Total Horizontal derivative of the TDR


maxima centered over the body edges, which persist for narrower features before coalescing into a single peak, as shown in the dike model.

Not shown in Fig. 27/1 is the fact that the amplitude response of all the conventional derivatives (VDR, THDR and AS) is closely linked to the amplitude of the TMI anomaly, whilst the Tilt derivative, and by association its Total Horizontal derivative, are independent of the amplitude of the TMI anomaly and are controlled more by the reciprocal of the depths to the sources, which in the study area are small. These features are best illustrated in profile form in Fig. 27/2 (see also section 19.2.5), using observed data taken from Figure 27/3.



Figure 27/3: Colour-equalised images (red high, blue
low) for A) Total Magnetic Intensity, B) Reduced to
Pole, C) Vertical derivative of the RTP and D) Tilt
derivative of the RTP

The Tilt derivative of the Reduced to Pole (RTP) field shows 7 symmetric anomalies with highly variable TMI amplitudes. This symmetry is also seen in the TDR block and dike models (Fig. 27/1) for I = 0° and I = 90°. Although both Reduced to Pole and Reduced to Equator transformations work well, the RTP field is preferred since it preserves the imaging of N-S structures. In simple terms, the Tilt derivative is acting like a very effective AGC (automatic gain control) filter. It also appears to act as an effective signal discriminator in the presence of noise, apart from the unlikely case when the noise has a similar spectral content to the signal.

The Total Horizontal derivative of the TDR preserves this amplitude enhancement in its ability to define edges by well-defined maxima (bottom panel of Fig. 27/2). Since we have removed the directional component of the horizontal derivative, both the Tilt and its Total Horizontal derivative are easy to compute and plot. The advantages are threefold:

1) the Tilt derivative has its zero values close to the edges of the body for RTP and RTE fields,
2) its phase is controlled by the Vertical derivative, and
3) the AGC behaviour allows it to outperform the Vertical derivative of the RTP (see Fig. 27/3).

The Total Horizontal derivative of the TDR is theoretically independent of geomagnetic Inclination, so it will generate useful magnetic responses for bodies having induced or remanent magnetization, or a mixture of both.

27.1.3 FIELD EXAMPLES
The field examples shown in Figures 27/3 through 27/7
are taken from an area 12 km x 14 km in north-central
Namibia, 150 km NNW of Windhoek, containing the
Erindi gold prospect that covers an area of 36 km². The
prospect lies in the southern Central Zone of the Damara
Belt containing Neoproterozoic to Paleozoic
metasediments and granites which are generally
covered by up to 10 metres of soil and calcrete. The gold
occurrences are mainly associated with metamorphosed
magmatic intrusions within the Swakop Group marbles.
Previous exploration within the prospect using
geochemical soil sampling and ground magnetics had
located a highly anomalous gold zone, which was
subsequently drilled. This drilling intersected a number
of high-grade gold zones (with a best intersection of 11m
grading 9.5 g/t Au), but due to the thick cover and
indistinct geochemical response the geological continuity
of these zones was poorly understood. The drill holes
are shown as an inset to Fig. 27/3A, superimposed on
the TMI aeromagnetic data. Drill results indicate the
mineralised structures dip at approximately 60° to the SE
and show veins of magnetite skarn and massive
sulphides within the marble unit. Pyrrhotite, pyrite and
magnetite are the dominant ore minerals and as such
are highly magnetic compared to the hosting Swakop
marble.

The study was undertaken by one of the authors (BV) as
part of his 2003 Masters thesis and post Masters
research with GETECH. The aim of the study was to
investigate by geophysical means whether or not the
original drilling program was optimum for assessing the
mineralization of the prospect and whether a further
drilling program should be recommended. The study
involved travelling to Namibia to collect all necessary
data from the Geological Survey and BAFEX and
undertaking a GPS survey of the existing boreholes. The
digital aeromagnetic data used in the study were from
the new high resolution national datasets flown in 2001
and 2003 for the Namibian Geological Survey.

The aeromagnetic survey specifications are:
Flight spacing: 200m
Flying height: 80m
Flight Direction: N-S


Survey names: 2116AD and 2116BC
The digital grid used had a cell size of 50 m, and all grid-based processing was performed using GETECH's software GETgrid, with the 2D modelling using NGA's GM-SYS. The inset to Fig. 27/3A shows the location of the existing exploration drill holes, which lie close to the maximum of the large magnetic anomaly (~1600 nT) that dominates the area.

Fig. 27/3 shows the TMI (A) and RTP (B), using I = -62° and D = -12°, for the area, as well as the Vertical derivative of the RTP (C) and the Tilt derivative of the RTP (D). An alternative to the VDR (C) could be a high-frequency band-pass filtered version of the RTP, but this has the disadvantage of requiring careful setting of filter parameters and, like the VDR, does not preferentially amplify the small amplitude signals that are automatically amplified by the TDR (D). Thus the Tilt derivative provides an effective substitute for both the Vertical derivative and the high-frequency band-pass residual anomaly. It more clearly images and enhances, by its AGC-like ability, the smaller amplitude features, thus allowing a better opportunity to map subtle basement fabric. Of particular note is the removal of the blue halo generated by the VDR (C) in the vicinity of the large anomaly and its replacement by a more evenly modulated field in D. Superimposed on Fig. 27/3D is the N-S profile P1 and the identification of the anomalies A-G shown in Figure 27/2.

Mapping the edges of structures can be achieved by a variety of methods, as shown in Figure 27/4. None of the methods (A-C) succeeds at defining edges as well as the Total Horizontal derivative of the TDR (D). In Fig. 27/4 the Analytic Signal (C) and the Total Horizontal derivatives have been generated from the RTP field, since the maxima over the edges of the same structure then tend to generate more similar anomaly amplitudes than when using the TMI field. The role of other derivatives in any interpretation should not be overlooked: the Analytic Signal (C) is a good spatial indicator of susceptibility contrast, and the Pseudo Gravity (B) helps to define the possible extent of the mineralization.
To investigate the relation between structure, the drilling results and the mineralization, Figure 27/5 provides a zoomed-in view of part of the prospect (dashed white box in Fig. 27/4D). This reveals that the drill holes tend to be located over and to the north of the main contact and have probably not sufficiently intersected the main magnetic body delineated by the maxima. Since the anomalies could represent either a series of dipping thick sheets (blocks) or dike features, both interpretations are shown in the interpreted profile P2 for the main anomalies passing through the drill hole locations (Fig. 27/5). For both model types a closely-spaced set of thin sheets (not shown) could equally result in a similar set of anomalies.


Figure 27/4: Colour-equalised images of the Total Horizontal derivatives of the RTP (A) and the Pseudo Gravity (B), the Analytic Signal of the RTP (C) and the Total Horizontal derivative of the Tilt derivative after RTP (D), or local wavenumber. Locations of profiles P1 and P2 are shown in (D).



Figure 27/5: Zoomed-in area of white dashed box in
Figure 27/4D showing the relationship between the
location of the drill holes and the maxima of the
RTP_TDR_THDR which is interpreted as defining the
extent and edges of the causative magnetic body.

27.1.4 Depth Mapping
The advantages of the Tilt derivative are its abilities to normalise a magnetic field image and to discriminate between signal and noise. Since the zero crossing of the Tilt derivative is located close to the edge of the structure for RTP and RTE data, applying a threshold cut-off of 0.0 in Figure 27/7A allows all bodies with positive susceptibility contrast to be isolated.




Figure 27/6: Interpretation of profile P2 (location shown in Figure 27/4D) passing through the drill holes and intersecting the ENE-WSW trending structure (see Figure 27/5) to the ENE of the TMI magnetic maximum. Models assuming either blocks or dikes as the cause of the main anomalies (identified by arrows and dashed lines) are shown. The susceptibility contrast between the blocks (and dikes) and the background marble units reaches 0.02 cgs.

The enhanced amplitudes of the Tilt derivative are carried through to its Total Horizontal derivative (or local wavenumber), thereby making the edge anomalies prominent and invariant to geomagnetic inclination, and thus making this derivative an effective tool for mapping geological edges. This study has not encountered the problem of multiple maxima (ringing) giving rise to false edges.

Further, since the depth to top is inversely related to the amplitude of the Total Horizontal derivative for contacts, a threshold cut-off can be set to act as an effective depth discriminator, allowing isolation of shallow sources. This has been done in Figure 27/7B, making structural mapping more intuitive. Alternatively, Fig. 27/7B can be converted to a depth map, with the maxima tracked and recorded as depths as done in Fig. 27/8, assuming all anomalies are related to contacts where Depth = 1/(local wavenumber), after Smith, Thurston, Dai and MacLeod, 1998, iSPI™ - The Improved Source Parameter Imaging Method: Geophysical Prospecting, 46, 141-151. They showed that there is a relationship between the local wavenumber maxima and depth for a range of simple geological structures. The depths are based on the maximum of the local wavenumber (or Total Horizontal Derivative of the Tilt Derivative):


Geological Model       Local wavenumber maximum
Contact                1/h
Thin sheet             2/h
Horizontal cylinder    3/h

where h is the depth to the source.
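A sketch of the conversion, using the wavenumber-maximum relationships in the table (only meaningful at or near the tracked maxima; elsewhere the output is not a depth):

```python
import numpy as np

def depth_from_local_wavenumber(kappa, model="contact"):
    """Apparent depth h = n/kappa from a local wavenumber grid.

    n = 1 (contact), 2 (thin sheet), 3 (horizontal cylinder), following
    the relationships of Smith et al. (1998) quoted above.
    """
    n = {"contact": 1.0, "thin sheet": 2.0, "horizontal cylinder": 3.0}[model]
    with np.errstate(divide="ignore"):
        return n / np.abs(kappa)
```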





Figure 27/7: A) The Tilt derivative with all values less than 0.0 replaced by a null value. This shows the approximate width and distribution of features with positive susceptibility. B) The RTP_TDR_THDR with a depth threshold cut-off set to visualise only shallow structures, with positive or negative susceptibilities.




Figure 27/8: Taking map 27/7B and finding the negative reciprocal. The background depth is -200 m (or 120 m below surface) and the red areas range up to -90 m (or 10 m below surface).

27.2 Spectral Analysis Method

The basic principles of spectral analysis of magnetic anomalies were first discussed by Bhattacharyya (1966, Continuous spectrum of the total magnetic field anomaly due to a rectangular prismatic body: Geophysics, 31, 97-121) and further developed and tested by Spector and Grant (1970, Statistical models for interpreting aeromagnetic data: Geophysics, 35, 293-302). The method was extended to use gravity data by Fairhead and Okereke (1988, Depths to major contrasts beneath the West African rift system in Nigeria and Cameroon based on the spectral analysis of gravity data: J. of African Earth Sc., 7, 769-777), and semi-automated methods have been developed by Bennett (1993, Leeds MSc thesis) and Kivior et al (1993, Crustal studies of South Australia based on energy spectral analysis of regional magnetic data: Exploration Geophys., 24, 603-608).

Theory: Based on the assumption that the two-dimensional observed magnetic and gravity fields are due to the integrated effects of several independent ensembles of subsurface rectangular blocks, Spector and Grant (1970) showed that the variation of the azimuthally averaged power (or amplitude) spectrum with wavenumber, S²(k), for an ensemble can be expressed as:

$$S^2(k) = \left[\exp(-4\pi hk)\right]\left[1-\exp(-4\pi tk)\right]\left[A(k,\beta)\right]$$

where the square brackets denote azimuthal and ensemble averaging, k = 1/λ is the wavenumber, h and t represent the depth to the top and the thickness of a prism, and A(k, β) is a function of the horizontal dimensions of the prisms. In a more generalised form, the expression can be written as:

$$S^2(k) = f(h)\,f(t)\,f(\beta)$$

where f(h), f(t) and f(|) are now the depth, depth extent
and size factors. Of these factors, the depth factor is the
most dominant contributor to the power spectrum, such
that, if the prisms are uniformly distributed in the depth
range of h h h = A / 2 , where h is the mean depth,
then f(h) can be expressed as

{ } ( )( ) exp( ) exp( ) sinh / = 4 4 4 8 t t t t hk hk h k h A A

If Ah is less than 0.5h, the spectrum is highly influenced
by the term exp(-4thk) so that the semi-logarithmic plot
of log amplitude versus wavenumber of the averaged
power spectrum will give a straight line whose slope
equals -4th. The attraction of this method is that the
depth estimate is based entirely on the observed data
and does not require the initial separation of the
observed field into regional and residual components as
is usually the case with most forward modelling
techniques.

These notes do not go into how the power spectrum is generated.

Another way of understanding what is occurring is that, for any geological structure, the main part of the anomaly field is generated at the top of the body, normally at its edges. If the power spectrum (log amplitude squared versus wavenumber or frequency) of the anomaly observed at the surface is analysed, it will generate a straight-line relationship, the slope of which gives the depth to the top of the source body.
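
A minimal sketch of this spectral depth estimate, under the assumptions that the grid is regularly sampled and that a single straight-line segment is fitted over a hand-picked wavenumber band (k_lo and k_hi are illustrative parameters, not values from these notes):

import numpy as np

def spectral_depth(data, dx, k_lo, k_hi):
    # Mean depth to source from the radially averaged power spectrum:
    # the slope of ln(power) versus wavenumber k (cycles/unit) is -4*pi*h_bar.
    ny, nx = data.shape
    power = np.abs(np.fft.fft2(data - data.mean()))**2
    kx = np.fft.fftfreq(nx, d=dx)                 # cycles per unit distance
    ky = np.fft.fftfreq(ny, d=dx)
    kmag = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
    nbins = min(nx, ny) // 2
    edges = np.linspace(0.0, kmag.max(), nbins + 1)
    idx = np.digitize(kmag.ravel(), edges)
    spec = np.array([power.ravel()[idx == i].mean() for i in range(1, nbins + 1)])
    kc = 0.5 * (edges[:-1] + edges[1:])           # bin-centre wavenumbers
    sel = (kc >= k_lo) & (kc <= k_hi) & np.isfinite(spec) & (spec > 0)
    slope, _ = np.polyfit(kc[sel], np.log(spec[sel]), 1)
    return -slope / (4.0 * np.pi)                 # mean depth below observation level

Picking k_lo and k_hi to span only the central, straight part of the spectrum mirrors the rule of picking the middle part of the slope discussed below.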

Fig. 27/9 shows what happens when magnetic basement, which is at the surface (200 m from the aircraft) to the north and south of the Yola Rift, is placed at 3 km to 4 km depth beneath the rift. Here the sediments of the rift are non-magnetic. The change in the spectral content is clear to see in both map form and profile form.



Figure 27/9: The aeromagnetic field over the Yola
rift, Nigeria in map and profile form.

27.2.1 Single Source anomalies

First let us consider single sources before we look at multiple sources and the semi-automated method of depth mapping.

Good depth estimates of single anomalies can be
obtained if the window covers the whole of the anomaly.
A simple dipole anomaly (no noise) is shown in Fig
27/10 with depth at 500m.




Figure 27/10: Dipole anomaly and its power
spectrum for window covering the complete
anomaly

The power spectrum of the complete anomaly gives a good straight line whose slope corresponds to a depth of 499 m.

When the anomaly is only partly sampled, as is the case in Figure 27/11 (two possible windows shown and outlined), and both windows sample the short and long wavelength parts of the spectrum, then only the central part of the spectral plot will give sensible depth values.



Figure 27/11: When only part of the anomaly is seen by a smaller window, the spectrum starts to degrade at its long and short wavelengths.

When the anomaly is badly under-sampled, as in the case of Fig 27/12, the spectral plot is totally degraded and no reliable depth can be obtained. The basic rule is thus to fully sample the anomaly and pick the middle part of the slope.



Figure 27/12: When an even smaller part of the
anomaly is sampled then the spectrum becomes
difficult to interpret.

Example
This example is taken from central Brazil, within the Sao Francisco craton, which is covered by up to 3 km of sediment. The dipole-like anomaly is deep within the crust, on the basis of its size (a one-degree grid is superimposed). A simple dipole model with the same inclination and declination as found in Brazil indicates that the anomaly is induced.



Figure 27/13: Example of a dipole-looking anomaly in central Brazil, with a simple dipole anomaly shown as an inset with the same inclination and declination as Brazil.

The power spectrum (Fig 27/15) shows two straight segments, with the deeper one giving a depth of 20 km and the shallower one 3 km. Thus we have multiple sources (see next section). The power spectrum can now be used to shape filters to separate the anomaly effects using swing-head and swing-tail filters (see section 19.4.3, page 19/16). This has been done in Figure 27/16 for the deep and shallow depths indicated on the power spectrum.




Figure 27/15: Power spectrum of the Brazilian area shown in Fig 27/13.






Figure 27/16: The depth slices at 20 km and 3 km as seen from the surface. The anomalies have been very effectively separated, with the basement structure at 3 km better seen now that the effects of the deeper anomaly have been removed.

27.2.2 Multiple sources at different depths

Figures 27/13 to 27/16 also provide a good example of multiple deep sources clearly separated from each other in terms of depth and the spectral content of their anomalies. In this example we can measure two depths.

A power spectrum (Fig 27/17) over the basement area to the north or south of the Yola rift (Fig. 27/9) is very different from the power spectrum for areas within the rift (Fig 27/18), where the basement source is deeper.

The reason why these curves are different is that over the basement the anomalies are being generated at the basement (topography) surface, which is a constant distance below the aircraft. The basement is made up of topography and varying basement bodies, all giving similar spectra. Thus the spectrum in Fig 27/17 is not from a single body but from an ensemble of bodies all at the same depth. The rift spectrum is also due to the basement, but now the magnetic sources come from a range of depths within the window of investigation.



Figure 27/17: The basement power spectrum is
linear for most of its length.

Figure 27/18: The rift basin power spectrum is curved, and its slope differs depending on which set of points is taken.

27.2.3 Automated depth determination

Constructing a semi-automated method requires the sensitive parameters to be carefully chosen. These parameters are window size, band width and move-along rate.

i. Window Size: The larger the window (operator), the longer the wavelengths that can be analysed. This will allow deeper sources to be investigated but will limit the area of the map that can be analysed. Since the depth estimate is based on analysing an area, the depth is thus an average for the area. Thus the larger the window (Fig 27/19), the fewer the depth estimates and the smoother the resulting depth map will be. If a window location only partly covers an anomaly then incorrect depths can be generated (see Fig 27/12). To carry out a spectral analysis over a map calls for a moving window (or operator), similar in concept to Euler, so that the
depth estimates can be made at regular intervals over
the map and the values contoured.



Figure 27/19: The setup for automatic depth analysis.

The window size should be small enough to see
individual anomalies and determine accurate depths
from the short wavelength part of the spectrum. This is
normally done by testing out the window size over
basement features of known depth (from seismic/well
data). This calibrates the window size needed over the
area with known depths to generate reliable depths.

ii. Band width: This is the width of the wavenumber
plot over which the regression analysis is applied to
determine slope.


Figure 27/20: The importance of choosing the band width over which the software measures the slope.

The window is made as small as possible to obtain the best depth estimate from the highest-gradient part of the spectrum. The reliability of the first 2-3 points in a power spectrum is questionable, so these are normally not included; nor are the highest wavenumbers used. The method is tested over structures of known depth to indicate the best band width. Thus window size and band width are interlinked and need setting for any depth study.

iii. Move Along Rate: The number of grid cells the operator is moved between solutions. Moving by just one cell will generate a lot of solutions and will take a lot of computer time. The optimum move-along rate is possibly half the window width.

The move-along parameter is really a function of how much overlap is required by the solutions, e.g. if the window size is 16 km x 16 km on a 1 km x 1 km grid, then one estimate will result from the spectral analysis over 256 km^2. Since the resulting map of depth estimates will be a smoothed version of reality, the move-along rate can be 4 km without degrading the final depth map.
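
A schematic moving-window loop, illustrating how window size and move-along rate interact; it reuses the spectral_depth function sketched in section 27.2 (an assumption), and all parameter values are illustrative:

import numpy as np
# spectral_depth(window, dx, k_lo, k_hi) as sketched in section 27.2

def depth_map(grid, dx, win=16, step=8, k_lo=0.05, k_hi=0.25):
    # win: window size in cells; step: move-along rate in cells (half a window).
    ny, nx = grid.shape
    rows, cols, depths = [], [], []
    for i0 in range(0, ny - win + 1, step):
        for j0 in range(0, nx - win + 1, step):
            w = grid[i0:i0 + win, j0:j0 + win]
            depths.append(spectral_depth(w, dx, k_lo, k_hi))
            rows.append(i0 + win // 2)            # assign depth to window centre
            cols.append(j0 + win // 2)
    return np.array(rows), np.array(cols), np.array(depths)

The scattered window-centre estimates would then be gridded and contoured to form the depth map.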

It is important to stress that parameters i. and ii., if set incorrectly, will affect the depth estimates. It is imperative that initial tests are carried out over known geology, ideally with borehole control on depth to basement, to check and set the above parameters. Swath profiles of depths can be generated for different window sizes along a profile, ideally controlled by seismic and wells. This was done over the Yola rift to demonstrate the variability of results with parameter settings.



Figure 27/21: Changing the window size has a major effect on depth within the Yola rift.


27.3 Susceptibility Mapping

A pole reduced magnetic map may be subjected to a
further transformation, which calculates the apparent
susceptibility, at a defined depth, of a layer of rock (of
infinite thickness) which could have given rise to the
magnetic observation. The transformation exploits the
signature of a magnetised block of size equal to one grid
cell.

This method has been shown (Urquhart and Strangway 1985; Yunsheng, Urquhart and Strangway 1985) to lend itself to geological mapping. The apparent susceptibility map should give a better view of geological boundaries, as the data have had a regional field removed and have been downward continued. The map is also in apparent susceptibility units, which can be related to rock properties. Some basic assumptions are made in the process:



- The measured magnetic field is caused by an assemblage of bodies of rectangular cross-section, one grid cell in dimension, with a body centred on each grid point.
- Magnetisation is by induction only.
- The bodies are vertically sided.
- The bodies extend to infinite depth.

The apparent susceptibility map gives a good result for the larger lithological units; smaller details have been smoothed, as the high-frequency signal had to be low-pass filtered for the downward continuation to be successful.
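
The following is a minimal Fourier-domain sketch of the idea (not the Urquhart and Strangway implementation), assuming an RTP grid, vertical magnetization, vertical-sided bodies of infinite depth extent with tops at a common depth, CGS units, and a simple cosine low-pass taper to stabilize the downward continuation; the function and parameter names are illustrative:

import numpy as np

def apparent_susceptibility(rtp, dx, depth, field=48000.0, k_cut=None):
    # For a vertically magnetized layer of infinite depth extent with top at
    # 'depth', the RTP anomaly spectrum is approximately
    #   dT(k) = 2*pi*F*kappa(k)*exp(-|k|*depth)   (|k| in rad/unit, CGS),
    # so kappa is recovered by inverse filtering, with a low-pass for stability.
    ny, nx = rtp.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kmag = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
    if k_cut is None:
        k_cut = 0.5 * kmag.max()
    lowpass = np.where(kmag < k_cut, 0.5 * (1.0 + np.cos(np.pi * kmag / k_cut)), 0.0)
    spectrum = np.fft.fft2(rtp) * np.exp(kmag * depth) * lowpass
    return np.real(np.fft.ifft2(spectrum / (2.0 * np.pi * field)))

The low-pass filtering in this sketch is exactly why, as noted above, the smaller details end up smoothed in the resulting map.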

27.4 Tilt-Depth -1
This section is based on The Leading Edge article: Tilt-depth method: A simple depth estimation method using first-order magnetic derivatives, The Leading Edge, 26, 1502-1505, 2007, by Ahmed Salem, Simon Williams, Derek Fairhead, Dhananjay Ravat and Richard Smith.
27.4.1 Summary

Mapping the magnetic tilt angle, derived from first-order derivatives, has the advantage of enhancing weak magnetic anomalies relative to stronger magnetic anomalies, due to the effective automatic gain control (AGC) imposed by the arctan operator, which restricts the tilt angle to within the range -90° to +90° irrespective of the amplitude or wavelength of the magnetic field. We have found that it is possible to simply use the contours of the tilt angle to estimate the location and depth of the magnetic sources. The zero contours (shown as dashed lines in figures) indicate the location of source edges, and half the distance between the -45° and +45° contours provides an estimate of their depth. With a synthetic example and a field example, we demonstrate that when the region between the -45° and +45° contours is highlighted (in grey), the resulting map provides an intuitive means of identifying the location and depth of the magnetic sources. For this contribution we assume that the sources are simple vertical contacts; that there is no remanent magnetization; and that the inducing field has either vertical inclination or has been reduced to the pole (RTP). Advantages of the method, called here the Tilt-Depth method, are discussed with respect to existing methods using second- and third-order derivatives.

27.4.2 Introduction

Aeromagnetic data are routinely presented as contour or
colour shaded maps of the total magnetic intensity (TMI).
An interpreter's task is to identify features (anomalies)
contained within the map and qualitatively and/or
quantitatively interpret them into geological structures at
depth. If the map contains anomalies that have large
magnetic intensities, the bodies might be considered to
have large magnetizations, or to be at shallow depths.
Small amplitude anomalies superimposed on these
anomalies could be masked or even missed by an
interpreter. Thus the task of the interpreter is to use the
spectral content of the anomalies to try and resolve
these ambiguities. Part of this process is also to obtain
estimates of the depth and shape of the body causing
the anomalies.

An interpretation difficulty with TMI anomalies is that
they are dipolar (anomalies having positive and negative
components) such that the shape and phase of the
anomaly depends in part on the magnetic inclination and
the presence of any remanent magnetization. This
anomaly complexity makes interpretation more difficult
because the body and its edges do not necessarily
coincide with the most obvious mapped feature (e.g.
anomaly maxima). The reduction-to-the-pole (RTP)
technique transforms TMI anomalies to anomalies that
would be measured if the field were vertical (assuming
there is only an inducing field). This RTP transformation
makes the shape of magnetic anomalies more closely
related to the spatial location of the source structure and
makes the magnetic anomaly easier to interpret, as
anomaly maxima will be located centrally over the body
(provided there is no remanent magnetization present).

To map the edges of bodies the horizontal derivative of
the RTP field or of the Pseudo gravity field are often
used. In both cases the horizontal derivative will peak
above a vertical contact. However, a dipping contact, an
incorrect inclination used in the RTP transformation or
the presence of remanent magnetization will tend to shift
the anomaly maxima away from the true location of the
contact. In general, the interpreter's ability to avoid these
complexities in a simple manner can have immense
advantages. In this paper, we present a simple method
of estimating the depth of magnetic source bodies
(assuming a vertical-contact model) from just the
contours of the magnetic tilt angle map. The magnetic tilt
angle is a normalized derivative based on the ratio of the
vertical and horizontal derivatives of the RTP field. We
call this new method the Tilt-Depth method which
provides an intuitive means of understanding the
variation in depth of magnetic source bodies (or
magnetic basement as shown with the field example). Its
main advantage is it can be used by non-specialists and
is independent of any need for more advanced
numerical analysis of the data. The method in its
simplest form assumes that the source structures have
vertical contacts, there is no remanent magnetization
and that the magnetization is vertical.

27.4.3 Method

The tilt angle was first described by Miller and Singh (1994), before being further refined by Verduzco et al. (2004) at GETECH, and is defined as

$$\theta = \tan^{-1}\!\left(\frac{\partial M/\partial z}{\partial M/\partial h}\right), \qquad (1)$$

where

$$\frac{\partial M}{\partial h} = \sqrt{\left(\frac{\partial M}{\partial x}\right)^{2} + \left(\frac{\partial M}{\partial y}\right)^{2}}$$

and $\partial M/\partial x$, $\partial M/\partial y$ and $\partial M/\partial z$ are the first-order derivatives of the magnetic field M in the x, y and z directions. The tilt angle has many interesting properties; for example, due to the nature of the arctan trigonometric function, all tilt amplitudes are restricted to values between -90° and +90°, regardless of the amplitude of the vertical derivative or the absolute value of the total horizontal gradient. This makes calculating the tilt angle similar but superior to an AGC filter in that, besides equalizing the amplitude output of the magnetic anomalies across a grid or along a profile, it retains the spectral integrity of the signal, allowing further quantitative analysis (e.g. determining the local wavenumber).
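
As a brief hedged illustration of this amplitude-normalizing (AGC-like) property, the sketch below evaluates the analytic RTP contact derivatives (equations 4 and 5 below) for two contacts whose amplitude factors differ by a factor of 100; the resulting tilt angles are identical. The amplitude values are illustrative assumptions only.

import numpy as np

# Tilt angle theta = arctan((dM/dz) / (dM/dh)) along a profile over an RTP
# vertical contact at depth zc; analytic derivatives from equations (4)-(5).
h = np.linspace(-2000.0, 2000.0, 401)    # horizontal distance (m)
zc = 500.0                               # depth to top of contact (m)

for amp in (1.0, 100.0):                 # amplitude factor 2KFc (arbitrary units)
    dM_dh = amp * zc / (h**2 + zc**2)
    dM_dz = amp * h / (h**2 + zc**2)
    tilt = np.degrees(np.arctan2(dM_dz, dM_dh))
    print(amp, tilt.min(), tilt.max())   # same tilt range (about +/-76 deg) for both

The tilt profile is unchanged when the anomaly amplitude is scaled, which is the AGC behaviour described above.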

The general expressions published by Nabighian (1972) for the horizontal and vertical derivatives of the magnetic field over a contact located at a horizontal location of h = 0 and at a depth of $z_c$ are

$$\frac{\partial M}{\partial h} = 2KFc\sin d\;\frac{z_c\cos(2I - d - 90°) + h\sin(2I - d - 90°)}{h^{2} + z_c^{2}}, \qquad (2)$$

$$\frac{\partial M}{\partial z} = 2KFc\sin d\;\frac{h\cos(2I - d - 90°) - z_c\sin(2I - d - 90°)}{h^{2} + z_c^{2}}, \qquad (3)$$

where K is the susceptibility contrast at the contact, F the magnitude of the magnetic field, $c = 1 - \cos^{2} i\,\sin^{2} A$, A the angle between the positive h-axis and magnetic north, i the ambient field inclination, $\tan I = \tan i / \cos A$, d the dip (measured from the positive h-axis), and all trigonometric quantities are in degrees. Under certain assumptions, such as when the contacts are nearly vertical and the magnetic field is vertical or RTP, equations (2) and (3) can be written as

$$\frac{\partial M}{\partial h} = 2KFc\,\frac{z_c}{h^{2} + z_c^{2}}, \qquad (4)$$

$$\frac{\partial M}{\partial z} = 2KFc\,\frac{h}{h^{2} + z_c^{2}}. \qquad (5)$$

Substituting (4) and (5) in (1), we get

$$\theta = \tan^{-1}\!\left(\frac{h}{z_c}\right). \qquad (6)$$

Equation (6) indicates that the value of the tilt angle above the edge of the contact is 0° (h = 0) and equal to +45° when h = z_c and -45° when h = -z_c. This suggests that contours of the magnetic tilt angle can identify both the location (θ = 0°) and depth (half the physical distance between the ±45° contours) of contact-like structures.
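
A minimal numerical sketch of this contour-based depth recovery, assuming the ideal contact response of equation (6) with an assumed depth of 500 m:

import numpy as np

# Tilt over an RTP vertical contact (equation 6): theta = arctan(h / zc).
h = np.linspace(-2000.0, 2000.0, 4001)    # horizontal distance (m)
zc_true = 500.0                           # assumed depth to top of contact (m)
tilt = np.degrees(np.arctan(h / zc_true))

# The tilt profile is monotonic, so the +/-45 degree crossings can be found
# by interpolation; half the distance between them equals the depth.
h_p45 = np.interp(45.0, tilt, h)
h_m45 = np.interp(-45.0, tilt, h)
print(0.5 * (h_p45 - h_m45))              # recovers ~500 m, equal to zc_true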

27.4.4 Theoretical Examples

The profiles in Figure 27/22 demonstrate the relationship between the tilt angle and source depth for a vertical contact model. The anomaly is calculated as a north-south profile across an east-west striking source, for a magnetic inclination of 90°. The profile of the tilt angle passes through zero directly over the contact edge (h = 0), and passes through the dashed lines marking ±45° at a distance from the edge equal to the source depth (h = Zc). Note that our method is valid only for data that have been reduced to the pole. Figure 1 of Verduzco et al. (2004) clearly demonstrates the asymmetry of profiles of the tilt angle for other magnetic inclinations.

Figure 27/22: Profile model of (top) the magnetic anomaly, and (middle) the tilt derivative over a vertical contact (bottom) for RTP (or vertical inducing field). Tilt values are restricted to within +/- 90 degrees. The contact coincides with the zero crossing and the part of the Tilt derivative between +/- 45 degrees is highlighted.
Figure 27/23a shows the synthetic magnetic anomaly contour map for a model containing two vertical-sided prisms with the edge locations indicated by the dashed lines. The top of Prism A is located at a depth of 4 km and Prism B at a depth of 16 km. Both prisms are defined with effectively infinite depth extent, and with a positive magnetization contrast of 10^-4 A/m. The inducing field has an inclination of 90°. The anomaly field is calculated on a regular grid with a spacing of 0.5 km. Figure 27/23b shows the magnetic tilt angle map generated from the data shown in Figure 27/23a. The region enclosed by the +45° and -45° contours is shown in grey and the zero contour is shown by the dashed line (indicating the approximate location of the source edges). Whilst the distance between the two 45° contours and the 0° contour is not everywhere identical around the perimeter of each body, due to anomaly interference and the breakdown of the two-dimensionality assumption, we observe that the source depth is roughly equivalent to half the width of the shaded strip delineating its edge (i.e. the depth to the top of these source models). The 2D imaging of the ±45° strips gives a spatial indication of where anomalies are suffering from interference, as well as a rapid means of estimating the depth to the top edges of the sources in locations least affected by anomaly interference.

27.4.5 Field Example

In this section we demonstrate the Tilt-Depth method on
aeromagnetic data over the Karoo sedimentary rift
structures of south-east Tanzania. The regional
geological setting is a consequence of the breakup of
Gondwana and rifting along the eastern margin of Africa.
A schematic map of the geological structure over the rift
is shown in Figure 27/24a. The Selous Basin is a NNE-
trending rift basin infilled with up to ~10 km of mainly
non-marine and non-magnetic sediments ranging in age
from Permian-Triassic to Tertiary. The basin is bounded
to the west and east by shallow basement with the
Masasi Spur separating the Selous Basin from the
coastal basins of eastern Tanzania. The Rufiji Trough is
located to the NE of the Selous Basin and exhibits east-
west extensional structures of Jurassic age
superimposed on earlier NNE trending structures.

A countrywide aeromagnetic grid for Tanzania has been
compiled by GETECH from 1 km spaced flight-line data
oriented predominantly east-west, with mean terrain
clearance of 120 m. The resulting grid has a nodal
separation of 0.25 km. Figure 27/24b shows the TMI
anomaly map over the study area and clearly delineates
the rift basin outline by the change in the frequency content of the magnetic anomalies. Before applying the Tilt-Depth method, the data were converted to RTP using a magnetic inclination of -40° and a declination of -3.5°, and upward continued to a distance of 1 km. The upward continuation was found to be necessary to obtain the cleanest image of the structures based on the contours of the magnetic tilt angle. Figure 27/24c shows the simple form of the magnetic tilt angle map, displaying only the contours of -45°, 0°, and +45°, with the areas bounded by these contours shaded in grey. The tilt angle map over the shallow basement areas is characterized by high-frequency magnetic anomalies and closely spaced contours. In contrast, over the deep parts of the basin widely spaced contours are observed. Using the Tilt-Depth contour methodology, we can measure the width of these grey zones to provide an immediate estimate of the depth to basement and how it varies across the area.

Figure 27/23: Synthetic magnetic test model.
A: The magnetic response of the synthetic test model containing two vertical-sided prisms with magnetizations of 10^-4 A/m, and with the edge locations indicated by the dashed lines. The top of the upper left prism (A) is located at a depth of 4 km, the top of the lower right prism (B) at a depth of 16 km. The inducing field has an inclination of 90° and a declination of 0°.
B: Magnetic tilt angle map generated from the data of A. Dashed lines show the 0° contour of the tilt angle. Solid lines are contours of the tilt angle for -45° and +45°.

Figure 27/24: Field test dataset.
A: Regional geological structure of the Karoo basin of south-east Tanzania. Adapted from the Tanzania Petroleum Development Council promotion brochure (http://www.tpdc-tz.com).
B: TMI anomaly map for south-east Tanzania, upward continued to 1 km.
C: Tilt-Depth map showing contours of the tilt derivative for the Tanzania magnetic anomaly data (B) after RTP and upward continuation to a height of 1 km before the tilt was calculated. Dashed lines show the 0° contour of the tilt angle. Solid lines are contours of the tilt angle for -45° and +45° - the distance between these contours is approximately equal to twice the depth to the magnetic source, assuming a contact-type source geometry. Some examples of depths (below terrain) indicated by the tilt contours within the Selous Basin and Rufiji Trough are labeled: A = 5 km, B = 5 km, C = 7-8 km, D = 4 km, E = 5-6 km, F = 3-4 km, G = 3-3.5 km, H = 1-2 km, I = 5-6 km.
D: Lineament analysis of the aeromagnetic field data using automated tracking of the maxima of the horizontal derivative of the pseudo-gravity field.


The regions of shallow basement (the Masasi Spur, and the region west of the Selous Basin) are characterized by numerous lineaments in the tilt derivative, with the distance between the -45° and +45° contours typically less than 4 km. This distance is twice the source depth (assuming a contact source geometry), so after correcting for the continuation distance the approximate depths to magnetic sources indicated by the tilt contours are predominantly very shallow - no greater than 1 km beneath the surface. Since the flight-line spacing is 1 km, aliasing of the anomalies could be a reason for the depths not being closer to the surface. Within the Selous Basin and Rufiji Trough, the tilt contours are much more widely spaced. These contours define magnetic lineaments within the basin, as well as areas of more chaotic contours which in part could be due to anomaly interference. Since the methodology assumes 2D vertical contacts, the basement depths in the Selous Basin (locations D to I) range from 3 to 6 km, while in the Rufiji Basin (locations A to C) they increase up to 8 km. These depths show a good correspondence with the regional variation in sediment thickness based on seismic and well control data, as indicated by Figure 27/24a.

The advantage of the Tilt-Depth method is, however, its ability to identify those parts of anomaly structures that are least affected by interference, where repeated depth estimates are most likely to be reliable. For completeness, Figure 27/24d provides a plot of the automatically tracked maxima of the horizontal derivative of the pseudo-gravity field, which define the locations of the contacts. These contacts are also closely mapped by the dashed (0° tilt angle) contours in Figure 27/24c, less the 'spider legs' seen in Figure 27/24d that define small-scale local 2D ridges within the grid. The results of Figures 27/24c and 27/24d should be viewed as stage products from which structural and depth maps can be constructed.

27.4.6 Discussion and Conclusions

We present a simple and fast method to locate vertical contacts from RTP magnetic data. The Tilt-Depth method only depends on mapping specific contours of the magnetic tilt angle. The zero contours delineate the spatial location of the magnetic source edges, whilst the depth to the source is the distance between the zero and either the -45° or the +45° contour, or their average.
The Tilt-Depth method adds to the arsenal of geophysical methods currently in use to estimate magnetic source depths, many of which use second- and/or third-order derivatives. These include methods based on Euler's equation and the local wavenumber, both of which calculate the source depths for a range of source body geometries and, more recently, allow the simultaneous estimation of both source depth and source type. The Tilt-Depth method by comparison can be considered to be both simple and elegant to derive. The two principal advantages of the method are: its simplicity, both in its theoretical derivation and in its practical application; and that it provides both a qualitative and quantitative approach to interpretation, by allowing the interpreter to visually inspect (spatially analyse) the Tilt-Depth map to identify locations where depth estimates may be compromised by interfering magnetic anomalies and locations where more reliable depth estimates can be made. These reliable locations can then be re-evaluated using different magnetic depth estimation methods.

Other advantages of the method are that, by virtue of using first-order derivatives, it is potentially less sensitive to noise in the data compared to methods relying on higher-order derivatives, and, unlike the Euler method, there is no need to choose a window size, nor is there a problem of solution clusters to contend with. The visual inspection advantage is clearly demonstrated in the field example using vintage digital aeromagnetic data, which contains noise sources that can be considerably reduced by modern high-resolution surveys using superior magnetometers, acquisition methods and GPS-controlled navigation.
We believe that there is ample scope to improve the method further by making it less dependent on the need to process the TMI data to RTP, a particular problem close to the magnetic equator, and on having to assume a single source-type structure. What we have presented here is, hopefully, a new and simpler way of qualitatively and quantitatively evaluating magnetic survey data that can be more readily appreciated by non-specialists.

SUGGESTED READING. "Potential theory in gravity and magnetic applications" by Blakely (Cambridge University Press, 1995). "Numerical calculation of the formula of reduction to the magnetic pole" by Baranov et al. (SEG, 1964). "Approximating edges of source bodies from magnetic or gravity anomalies" by Blakely and Simpson (SEG, 1986). "Enhancing potential field data using filters based on the local phase" by Cooper et al. (Computers & Geosciences, 2006). "Mapping basement magnetization zones from aeromagnetic data in the San Juan Basin, New Mexico", in W. J. Hinze, ed., Utility of regional gravity and magnetic maps, by Cordell and Grauch (SEG, 1985 Expanded Abstract). "The sedimentary basins of Tanzania reviewed" by Mbede (Journal of African Earth Sciences, 1991). "Potential field tilt - a new concept for location of potential field sources" by Miller and Singh (Journal of Applied Geophysics, 1994). "The analytic signal of two-dimensional magnetic bodies with polygonal cross-section; its properties and use for automated anomaly interpretation" by Nabighian (SEG, 1972). "The historical development of the magnetic method in exploration" by Nabighian et al. (SEG, 2005). "New insights into magnetic derivatives for structural mapping" by Verduzco et al. (The Leading Edge, 2004).



27.6 Tilt-Depth -2

This section is based on the Geophysical Prospecting paper entitled New developments of the magnetic Tilt-Depth method to improve the structural mapping of sedimentary basins, by J. Derek Fairhead, Ahmed Salem, Lorenzo Cascone, M. Hammill, Sheona Masterton and Esuene Samson, Geophysical Prospecting, 2011 (in press).

27.6.1 Abstract

This paper interprets the aeromagnetic data for a deep basin
section of the Karoo rift in south east Tanzania. We use a
novel integrated approach involving the application of
advanced derivatives to define structure and the Tilt-Depth
method to determine and map the depth to basement. In the
latter case we use the result of both reduced to pole and
reduced to equator data to help constrain the shape and
depth of the basin. We show that for a reduced to pole
aeromagnetic dataset, the generalized form of the local
phase, called the Tilt derivative, is an effective means of
providing an initial (first pass) mapping of a sedimentary
basin in terms of its fault structure, dip direction of faults and
depth to basement. Since the amplitude of the Tilt derivative
does not contain information on the strength of the
geomagnetic field nor magnetization (other than inclination)
of the causative body, the susceptibility contrast across
faults/contacts is derived from the Analytic signal derivative.
We also investigate how effective the Tilt derivative and Tilt-
Depth method are for structural and depth to basement
mapping in regions close to the magnetic equator, where the
reduction to pole transform is often unstable; this is done
using the same Tanzania dataset transformed to the pole
and the equator.. We find the Tilt derivative applied to
reduced to equator data cannot be used to map structure
because of the effects of magnetic anisotropy which results
in the magnetic response of structures varying with strike
azimuth. To overcome this anisotropy problem the Analytic
signal and/or local wavenumber derivatives, which are for all
practical purposes independent of Inclination, provide the
best means of defining the major structural trends. We also
find that the Tilt-Depth method provides coherent depth to
basement estimates for both reduced to pole and reduced to
equator data. For the deep basin sections of the Karoo rift,
there is a sparsity of Tilt-Depth results from both the reduced
to pole and reduced to equator datasets. However, each set
of results have a different spatial coverage, so when
combined they provide a better spatial sampling of the long
wavelength magnetic character of the basin, and thus
improve the constraints on the minimum curvature gridding
method to map the shape and depth of the basin.
Keywords: Magnetic, basement depth, Tilt derivative,
Tilt-Depth, structure, sedimentary basin, susceptibility
contrast.

27.6.2 Introduction

The Tilt derivative (Miller and Singh, 1994; Verduzco et al., 2004) is a normalised phase derivative that uses first-order derivatives and has been shown to be an effective
method of mapping subsurface structural edges associated
with both strongly and weakly magnetised bodies. Although
most derivatives provide information on the location of
structural edges there have been a number of variations to
the Tilt derivative proposed, such as the Theta derivative
(Wijns et al., 2005) and the normalized derivative (Cooper
and Cowan, 2006), that transform the zero contour of the Tilt
derivative either to a maximum or high gradient values,
respectively.

In addition to providing information on structural edges, the
tilt derivative also provides information on the depth to these
structural edges from grid based data. This has resulted in
the development of the Tilt-Depth method by Salem et al.
(2007 and 2010). The Tilt-Depth method is grid based, and depths are determined directly from the contour separations of the Tilt derivative of individual reduced-to-pole (RTP) anomalies. This has significant advantages over methods such as Euler deconvolution (Reid et al., 1990), which analyses grid windows of data and results in clouds of multiple depth solutions for an assumed structural index (SI), thus making it difficult to define the depths of structural edges without additional analysis. Since the Tilt-Depth solutions are
anomaly specific, they do not generate clouds of multiple
solutions; however, the solutions are restricted to a single
structural index (that of SI=0 for an infinite-depth contact). A
major advantage of this approach is that all depth estimates
based on the Tilt-Depth method will be conservative. In
recent years a range of powerful new methods based on
wavelet transforms and multiscale analysis have been
developed (Fedi and Florio, 2006; Fedi, 2007; Cella et al.
2009). Although these methods provide accurate depth
estimates, they suffer from the need to apply a series of
analytical procedures to obtain them compared with the Tilt-
Depth method where the depth can be directly derived from
the Tilt derivative contour of RTP data.

In this contribution we expand the original method to define
other important structural parameters (Figure 27/25) such as
susceptibility contrast and its direction, the latter of which can
be indicative of the direction of basement fault throw. We
also examine the application of the Tilt-Depth method in
areas close to the magnetic equator (i.e. with low magnetic
inclination). In this situation, structural anisotropy occurs
such that the magnetic response of linear structures varies
with azimuth, e.g. edges of west-east striking fault/contact
structures are well imaged whilst edges of north-south
striking structures are either absent or poorly imaged. We
want to investigate whether this anisotropy also affects depth
estimates. In order to evaluate this and assess how best we
can structurally map sedimentary basins close to the
magnetic equator, we compare the results generated from
the RTP and the reduced to equator (RTE) data derived from
a common mid-latitude magnetic survey and integrate them
for a more detailed interpretation.




Figure 27/25: A schematic diagram to illustrate that the
aim of magnetic interpretation is to transform the
magnetic anomaly map into a structure and depth to
basement map. The parameters that can be derived from
the magnetic data are listed where strike, direction of
susceptibility contrast, and depth can be derived from
the Tilt derivative and sediment susceptibility contrast
can be estimated from the Analytic signal.

27.6.3 Tilt Derivative and Tilt-Depth Method

Tilt Derivative

The tilt angle, or Tilt derivative, is the generalized definition for the local phase (Miller and Singh, 1994; Verduzco et al., 2004) and is defined as:

$$\theta = \tan^{-1}\!\left(\frac{\partial M/\partial z}{\partial M/\partial h}\right), \qquad (1)$$

where

$$\frac{\partial M}{\partial h} = \sqrt{\left(\frac{\partial M}{\partial x}\right)^{2} + \left(\frac{\partial M}{\partial y}\right)^{2}}$$

and $\partial M/\partial x$, $\partial M/\partial y$ and $\partial M/\partial z$ are first-order derivatives of the magnetic field M in the x, y and z directions. Since the Tilt derivative consists of the
ratio of the vertical and horizontal derivatives, the resulting
Tilt amplitude function, measured in degrees or radians, does
not contain information on the strength of the geomagnetic
field nor the susceptibility of the causative bodies. It does
however preserve the spectral wavelength content of
anomalies and the geomagnetic field inclination. The arctan part of the function limits the magnitude of the Tilt derivative to ±90° and, together with the ratio of the vertical and total horizontal gradients, operates as an effective automatic gain control filter, such that small and large amplitude TMI anomalies now have normalized Tilt amplitudes. If the total magnetic intensity (TMI) field is converted to RTP, then the inclination dependency of the Tilt anomaly is removed, such that the zero contour of the Tilt derivative is now located close to the boundary of the causative body (Figure 27/26). Thus the zero contour of the RTP Tilt derivative tracks faults/contacts in a similar way to the RTP vertical derivative, i.e. the location of the zero contour of the RTP Tilt derivative is identical to the zero contour of the RTP vertical derivative.

The application of the Tilt derivative to structural mapping is
normally carried out on RTP data so as to remove the
Inclination dependency (Figure 27/26). This represents a
limitation in magnetic equatorial regions, because the
induced field is almost horizontal and prevents a stable RTP
transformation of the TMI data (Macleod et al., 1993).
Further, the inducing horizontal field generates a magnetic
anisotropic effect such that faults/contacts striking close to
magnetic north will be poorly or not imaged. This effect
results from there being hardly any magnetic flux lines cutting
both the N-S striking fault/contact surface and the top surface
of the magnetic body. The lack of flux cutting the upper
surface is critical in the anisotropic effect since the flux of a
RTP field does not cut any vertical fault/contact surface of
any orientation, and it is only the magnetic response from the
flux cutting the top surface of the magnetic structure that
generates an observable signal.



Figure 27/26: The shape of the Tilt derivative along a N-S profile perpendicular to a 2D magnetized inclined contact/step and block model, for four inclinations from 0° to 90° in steps of 30°. Only the RTP and RTE anomalies have their zero values coinciding closely with the top edge of the body, and the negative gradient of the RTP (shown by arrows) at the zero contour is consistent with the change of susceptibility from high to low. The main difference between the RTP and the RTE anomalies, apart from their reversed sign, is that the RTP is azimuthally invariant whereas the RTE is azimuthally dependent, due to the anisotropic response of the horizontal field (see Figures 27/27 and 27/28).
The anisotropic effect can be easily seen in model and real
data examples. Figure 27/27 shows the RTP and RTE fields
and associated Tilt components for a dipole model (having
similar induced TMI components as found in SE Tanzania).
The RTP and the positive values of the Tilt derivative (Figure
27/27) clearly identify the location and shape of the dipole,
while the RTE and the negative values of the Tilt derivative
(Figure 27/27) show that only small sections of the north and
south edges of the dipole source are delineated correctly.

Figure 27/27: The TMI field of a magnetic dipole at 500 m depth with inclination -37.3° and declination -2.1°. The RTP and RTE fields show the azimuthal variation in the dipole anomaly. The RTP and its associated Tilt derivative are positive and isotropic with azimuth, while the RTE and its associated Tilt derivative are negative and anisotropic with azimuth. For simplicity the Tilt derivative anomalies are restricted to their positive values (Tilt of RTP) and to their negative component (Tilt of RTE) to better define the zero contour. Colour bars are non-linear due to colour equalization in this figure and subsequent figures.

The anisotropy associated with the RTE field is also clearly seen in real data (Figure 27/28). The TMI data (Figure 27/28A) images the aureole of the granite batholiths located in northern Peninsular Malaysia, which is close to the geomagnetic equator (inclination -6.8°). The north and south parts of the aureole, where the contacts strike W-E, are well defined, whereas the N-S striking sections are not imaged. The Tilt derivative of the RTE data (Figure 27/28B) better images the aureole location but is still unable to image the N-S striking parts of the contacts.

Figure 27/28: Magnetic map of the granite aureole centred at 101.86°E and 4.99°N in northern Peninsular Malaysia (inclination -6.8°). A: TMI anomaly map; B: Tilt derivative of the RTE. Note: the Tilt derivative preserves the definition of the northern and southern edges of the anomaly better than the TMI anomaly but, like the TMI, is unable to delineate the western and eastern edges (data provided with permission from the Geological Survey of Malaysia).

The Tilt-Depth method

The Tilt-Depth method developed by Salem et al., (2007)
uses the RTP field and assumes a simple buried vertical 2D
contact model (Figure 27/29).

Figure 27/29: Vertical 2D contact model with infinite
depth extent and its RTP Tilt derivative. The Tilt
derivative has zero value over the contact edge and the
lateral distances between the zero value and the +/- 45
values are equal to the depth to the top of the contact.
Figure after Salem et al. (2007).
Following the derivation of Nabighian (1972), the total horizontal and vertical derivatives of a 2D contact model located at a horizontal location of h = 0 and at a depth of $z_c$ are:

$$\frac{\partial M}{\partial h} = 2KFc\sin d\;\frac{z_c\cos(2I - d - 90°) + h\sin(2I - d - 90°)}{h^{2} + z_c^{2}}, \qquad (2)$$

$$\frac{\partial M}{\partial z} = 2KFc\sin d\;\frac{h\cos(2I - d - 90°) - z_c\sin(2I - d - 90°)}{h^{2} + z_c^{2}}, \qquad (3)$$

where K is the susceptibility contrast at the contact, F the magnitude of the magnetic field, $c = 1 - \cos^{2} i\,\sin^{2} A$, A the angle between the positive h-axis and magnetic north, i the ambient field inclination, $\tan I = \tan i / \cos A$, d the dip (measured from the positive h-axis), and all trigonometric quantities are in degrees. Substituting the above derivative terms into the Tilt derivative equation (1), it can be shown that:

$$\text{Tilt} = \tan^{-1}\!\left(\frac{h}{z_c}\right) \quad \text{for the RTP field, and} \qquad (4a)$$

$$\text{Tilt} = -\tan^{-1}\!\left(\frac{h}{z_c}\right) \quad \text{for the RTE field,} \qquad (4b)$$

where h is the horizontal distance (with the h origin vertically over the contact) and $z_c$ is the depth to the top of the contact model. Equations 4a and 4b indicate that, for both RTP and RTE data, the Tilt derivative is 0° (h = 0) at the location of the contact. For RTP data, the Tilt has a value of +45° or -45° when h = $z_c$ or h = -$z_c$, respectively. For RTE data, the Tilt has a value of +45° or -45° when h = -$z_c$ or h = $z_c$, respectively. For both RTE and RTP data, the depth estimates can therefore be derived directly from the Tilt map by simply measuring the distance between the appropriate contours.

27.6.4 Tilt-Depth and throw of fault

Salem et al. (2007, 2010) have reported depth to basement estimates accurate to within ~15% for the Tilt-Depth method using different continental magnetic datasets. Lee et al. (2010) have reported that depth is
underestimated if the magnetic body being investigated is of
finite thickness such that the magnetic response of the
bottom surface modifies the magnetic response of the depth
to top (Figure 27/30A). Here we avoid using depth to bottom
terminology and prefer the term fault throw; this is because
we consider the depth extent of the magnetic body to be
considerable and limited only by Curie temperature when
mapping the depth to top of crystalline magnetic basement.
We have tested out the effect of throw, using a series of 2D
models built with a varying depth to top and a range of finite
fault throws for each depth to top (Figure 27/30A). The Tilt-
Depth method is applied to each model, and depth estimates
are used to calculate the error between the estimated depth
value and the model depth to top. The results of this analysis
(Figure 27/30B) show that the depth error depends only on
the ratio of Zb / Zt. We found that the extent of
underestimation of depth to top decreases as the ratio Zb / Zt
increases. Generally, when the ratio Zb/Zt is greater than 10,
the depth to top (Zt) is underestimated by less than 15%.
This is not unexpected since all depth estimation methods
(Euler, SPI, etc.) suffer from this problem and an extra
parameter called structural index is used to handle faults with
small and large displacement (Reid et al., 1990).

Figure 27/30: A - 2D vertical contact model with vertical
fault contact of limited throw. B - The percentage depth
error of the top surface for varying ratios of Zb / Zt.
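
A hedged numerical check of this behaviour: the finite-throw fault is represented as the superposition of an RTP contact at Zt and an opposite-signed contact at Zb (consistent with the 2D models described above), and the Tilt-Depth estimate from the +45° crossing is compared with Zt; all values are illustrative:

import numpy as np

def tilt_depth_estimate(zt, zb):
    # RTP tilt profile of a fault of finite throw: contact at zt minus a
    # contact at zb (susceptibility restored below zb), from equations (2)-(3)
    # with vertical dip and vertical field.
    h = np.linspace(-20.0 * zb, 20.0 * zb, 200001)
    dM_dh = zt / (h**2 + zt**2) - zb / (h**2 + zb**2)
    dM_dz = h / (h**2 + zt**2) - h / (h**2 + zb**2)
    tilt = np.degrees(np.arctan2(dM_dz, dM_dh))
    i = np.argmax((h > 0) & (tilt >= 45.0))       # first +45 degree crossing
    return np.interp(45.0, tilt[i - 1:i + 1], h[i - 1:i + 1])

zt = 1000.0
for ratio in (2, 5, 10, 20):
    est = tilt_depth_estimate(zt, ratio * zt)
    print(ratio, round(est), round(100.0 * (est - zt) / zt, 1))

The printed errors are negative (underestimates) and shrink as Zb/Zt grows, approaching the ~15% level near Zb/Zt = 10, in line with Figure 27/30B.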

27.6.5 New Developments

1 Direction of Susceptibility Contrast

The Tilt-Depth method works well for initial structural
appraisal of sedimentary basins, where a high magnetization
contrast can be assumed at the basement-sediment interface
and where the geological structure of the basement is mainly
formed by near vertical faults systems and contacts. In this
situation the direction of dip, or the direction of change of
susceptibility from high to low across a fault or contact can
be determined. Figure 27/26 illustrates the RTP Tilt response
from two 2D models (a dipping interface and a thick block
with vertical interfaces) which show that the shape of the Tilt
derivative is sympathetic to the change in susceptibility, i.e.
the dip direction at the contact is in the direction from positive
to negative Tilt values. Black arrows highlight the slope
direction of the RTP Tilt derivative in Figure 27/26
representing the basement-sediment contact dip direction. A
simple way to integrate this information into the grid based
Tilt-Depth method is to use arrows pointing in the full dip
direction, perpendicular to the zero contours, to indicate the
dip direction of faults (see application in section 27.6.6).

2 Susceptibility Contrast (K)

As previously described the Tilt derivative is devoid of
geomagnetic field intensity and susceptibility information. To
generate an estimate of the susceptibility contrast we need to
use a first order derivative that has its maximum value over
the edge of the fault/contact. In this contribution we use the
Analytic signal since it works well for RTE data and can be
considered for most practical purposes, to be independent of
induced magnetisation and remanence, despite the subtle
inclination effects identified by Li (2006). For the case of a 2D
model of a vertical field, equations 2 and 3 can be used to
generate the Analytic signal |A| response over a buried
vertical contact and by rearranging the equation generate an
expression for the susceptibility contrast, K :

$$K = \frac{|A|\,z_c}{2Fc} \qquad (5)$$

where |A| is the peak (h = 0) amplitude of the Analytic signal over the contact.
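
A small hedged sketch of this estimate: given the peak Analytic signal amplitude over a contact and the Tilt-Depth z_c, equation (5) yields the susceptibility contrast; the field strength, inclination and azimuth values below are illustrative assumptions:

import numpy as np

def susceptibility_contrast(A_peak, zc, F=33000.0, inc_deg=-37.3, az_deg=90.0):
    # Equation (5): K = |A| * zc / (2 * F * c), with c = 1 - cos^2(i) sin^2(A),
    # where A is the angle between the positive h-axis and magnetic north.
    i = np.radians(inc_deg)
    az = np.radians(az_deg)
    c = 1.0 - np.cos(i)**2 * np.sin(az)**2
    return A_peak * zc / (2.0 * F * c)

# e.g. a 0.05 nT/m Analytic signal peak over a contact with Tilt-Depth 2000 m:
print(susceptibility_contrast(0.05, 2000.0))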

Following the application of this method, we produce colour plots of the susceptibility contrast with a similar colour scale to the Tilt-Depth estimate: the largest contrasts have the warmest colours (red). Furthermore, the zero contour is drawn with a line weight that increases with the susceptibility contrast. These two effects, contour line weight plus the warm colour, help to reinforce where significant contacts with large K values are located along the zero contour of the Tilt anomaly.

Using the above display is important for interpretation purposes, since it helps to define the location of faults. Basement faults can be reasonably assumed to be linear, while the zero contour of the Tilt angle map is a closed line. Thus only discrete sections of the zero Tilt derivative contours will be tracking faults, and it is therefore important to be able to recognize which parts of the contour lines coincide with a fault (see application in section 27.6.6). The methods described here thus provide important constraints that allow interpreters to efficiently and accurately develop a valid structural interpretation consistent with the magnetic data, all within a GIS software package.

27.6.6 Application to Southern Tanzania

The Karoo rift basin of south east Tanzania used by Salem et al. (2007) to illustrate the Tilt-Depth method is used here

since it provides a good example of a deep fault bounded
sedimentary basin with few volcanic intrusions and a strong
magnetization contrast between the basement and
sedimentary infill. Since the inclination of the Earth's magnetic field for south east Tanzania is I = -37.3°, the data can be converted to both RTP and RTE to enable the problems associated with magnetic anisotropy to be examined (Figure 27/31).

Figure 27/31: The study area in SE Tanzania showing A:
TMI at 1000m above terrain; B: RTP anomaly; C: RTE
anomaly. All maps are in Lambert Conic projection with
distances shown in km.
27.6.6.1 RTP Interpretation

Structural Mapping

As previously described in section 27.6.3 (Figures 27/26 and
27/29), the zero contour value of the Tilt derivative at RTP
closely marks the spatial location of the top edge of the
faults/contacts. The problem of using the zero contour
locations to map faults/contacts is that faults generally are
linear and of finite length with variable throw along their
length generating a variable magnetic response, whereas a
contact can be both linear and/or a closed feature defining a
discrete geological structure in plan view, with constant
magnetic response. A contour, on the other hand, has a
closed form and its shape can be controlled by a range of
spatial factors relating to the 3D distribution of anomaly
sources. In Figure 27/32A, W identifies a small-amplitude 2D Analytic signal anomaly symmetrically located on a NE-SW trending zero Tilt contour. When the Analytic signal is converted to susceptibility contrast (Figure 27/32B), the NE-SW trending feature is associated with a strong susceptibility contrast, with its dip direction pointing into the basin (Figure 27/33). This information all points to a deep-seated fault dipping to the NW. In contrast, X identifies a N-S striking Analytic signal anomaly which does not have an associated zero Tilt contour; instead the zero Tilt contour meanders across the feature, and if the interpretation were restricted to the zero Tilt contour map its significance would have been missed. Thus the zero Tilt contour has to be used with care, which reinforces the need for the structural interpreter to use a range of derivative maps to aid the interpretation.

Figure 27/32: Analytic signal and susceptibility contrast maps of the study area in Tanzania. A: The Analytic signal (AS) map, with the zero Tilt derivative contour superimposed. The AS map has had a threshold of 0.005 nT/m removed to better image rift-related features; location W identifies a subtle deep-seated fault, location X identifies a possible N-S trending fault not defined by the zero Tilt contour, and location Y identifies short-wavelength anomalies possibly representing near-surface volcanics. B: The colour susceptibility contrast map has been enhanced by increasing the weight of the zero contour in proportion to the size of the susceptibility contrast. This further helps to identify linear structures, such as that at Z, possibly representing a half graben with its master fault on the east side.
The slope of the Tilt derivative across the zero Tilt contour provides a means of determining the throw direction of a fault, assuming the normal relation that basement has strong magnetization and sediment has weak magnetization (Figure 27/26). The principle has been described in section 27.6.5, is illustrated in Figure 27/33, and is used in Figure 27/34A. Based on these methods and using a range of derivative maps, an initial structural interpretation has been generated and is shown in Figure 27/34.


Figure 27/33: Slope direction of the Tilt derivative across
the zero Tilt contour. The arrow head points to lower Tilt
value or lower susceptibility for RTP data.

Depth determination

For this work we applied the Tilt-Depth method (Salem et al., 2007 and 2010) to the RTP data (Figure 27/31B). From the Tilt derivative map (Figure 27/34A) we measured half the perpendicular distance between the ±45° contours. This is shown in Figure 27/35A, with the space between the ±45° contours coloured according to half their width, which gives the Tilt-Depth. Using colour fill allows both major and more subtle changes in depth to be easily visualized.



Figure 27/34: A: the shaded-relief Tilt derivative map (100% colour equalization) of the RTP data with structures superimposed, where the dip direction ticks are as defined in Figure 27/33; B: structural interpretation based on the RTP data.

The lack of magnetic solutions (or sparse coverage of zero contours) in Figure 27/35A for the deep central parts of the basin is a limitation of this first-order derivative method. Thus gridding parameters need to be carefully selected to ensure no artificial basement features are generated. This can be qualitatively checked, since we do have magnetic data covering these solution gaps in the form of the Analytic signal and Tilt derivative (Figures 27/32A and 27/34A respectively). Using a 0.005 nT/m cut-off in Figure 27/32A helps to identify that there are few low-amplitude, long-wavelength Analytic signal anomalies resulting from deep magnetic source structures within the basin. Shallow depths to basement can arise when there is a presence of near-surface magnetic sources, such as intra-sedimentary or surface volcanics. For example, a potential false basement high is seen at the position marked by symbol Y in Figure 27/32A and symbols ? in Figure 27/35. These false basement highs are not supported by visual inspection of Landsat imagery at position Y in Figure 27/32A. Thus the interpreter should recognize such problems for follow-up ground work and/or inspection of gravity data, and remove such shallow estimates as necessary before the final basinal depth mapping is undertaken.

Figure 27/35: Initial depth conversion of the magnetic anomaly maps for SE Tanzania. A: Tilt-Depth of the RTP data with colour fill of the -45° to +45° contours (after Salem et al., 2007) and B: the interpolated grid based on the Tilt-Depth estimates along the zero Tilt contour using 250 m minimum curvature gridding. The structures shown in Figure 27/34B are overlain. The symbol ? (see Y in Figure 27/32A) indicates an area where data may be affected by intra-sedimentary volcanics, rather than a true basement high.


27.6.6.2 RTE Interpretation

Structural mapping

We have already indicated that for RTE data there is a
serious problem of magnetic anisotropy to contend with. This
is clearly seen in Figure 27/31 by comparing the RTP and
RTE images for SE Tanzania. The RTP image outlines
structures well whereas the RTE field and its Tilt derivative
smear the anomaly features in a west-east direction (Figures
27/31C and 27/36A). This effect is also clearly seen in the
magnetic response of a dipole source (Figure 27/27). This
has the effect of altering the orientation of the magnetic
fabric, as defined by the zero contour, in both the shallow
basement and deep basin areas. In the RTE map, N-S trending anomalies (and zero contours) with strikes within +/-30 degrees of north are not generally observed. Consequently, any trends defined by the zero Tilt derivative contour are unreliable, although in some places they will trace small sections of the W-E contacts/faults. Use of the Tilt zero contours for either tracking structure or susceptibility contrast is therefore not recommended.


Figure 27/36: A: Tilt derivative of the RTE and B: Tilt-Depth of the RTE, showing the smearing of contours in a west-east direction compared with the RTP version of the Tilt derivative in Figure 10A.
To help improve the situation, the Analytic signal (AS) and the Local Wavenumber (LW) have been applied to the RTE data (Figure 27/37). These derivatives show up the major bounding faults and contacts similarly, though not quite as well as the RTP data. Figure 27/37 shows how the AS and LW allow us to identify the major N-S structural trends by their wavelength changes across the bounding faults. Arrows are used to delineate the main basin edge structures. This then provides a means of delineating the main structures from the RTE data.
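For reference, the Analytic signal amplitude is built from the three orthogonal gradients of the total field T, and the Local Wavenumber can be taken (in one common grid formulation, assumed here) as the magnitude of the horizontal gradient of the Tilt angle theta:

    |AS(x,y)| = \sqrt{ (\partial T/\partial x)^2 + (\partial T/\partial y)^2 + (\partial T/\partial z)^2 }

    k(x,y) = \sqrt{ (\partial\theta/\partial x)^2 + (\partial\theta/\partial y)^2 }

Because both quantities are formed from gradient magnitudes, they are for practical purposes insensitive to the field inclination, which is why they remain usable on RTE data.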



Figure 27/37: Tracking major structures A: using the Analytic signal of the RTE and B: using the Local Wavenumber of the RTE. Both derivatives clearly show the change in wavelength content and the faulted contacts between the basement and the deep sedimentary basin.

Depth determination

The same processing stages were performed on the RTE data as on the RTP data to generate the Tilt-Depth map and gridded depth to basement map shown in Figure 27/38. Comparing these maps with the RTP depth maps (Figure 27/35) shows that the architecture and geometry of the rift and sub-basins are similar. The comparison of zero Tilt contours within the deep basin is limited by the sparseness of zero contour Tilt-Depth estimates. However, there is a clear indication that the spectral content of both the RTP and RTE datasets is retained in their respective Tilt derivatives, so the Tilt-Depth method can provide valuable basin morphology and depth information.


Figure 27/38: A: Tilt-Depth map of the RTE Tilt derivative
data; B: the gridded depth to basement map derived
from A with structural overlay derived from the
derivatives shown in Figure 27/37.

To determine how similar the RTP and RTE depths are along the zero contours, we have compared the two datasets and generated a Q-Q plot for all zero contour crossovers. A search radius of 1 km, centred on each zero contour crossover position, was used so that the average depth within that radius could be determined for each crossover; the results are plotted in Figure 27/39 with their associated statistics. This plot shows a clear linear trend for most of the solutions. As expected, the shallow basement areas show an abundance of zero Tilt contour crossovers.
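A minimal sketch of this crossover averaging (our illustration; the point sets, depths and crossover positions in metres are assumed inputs) might use a k-d tree to gather all solutions within the 1 km radius:

    import numpy as np
    from scipy.spatial import cKDTree

    def crossover_pairs(pts_rtp, z_rtp, pts_rte, z_rte, xo_pts, radius=1000.0):
        """Mean RTP and RTE depths within `radius` of each crossover point."""
        t_rtp, t_rte = cKDTree(pts_rtp), cKDTree(pts_rte)
        pairs = []
        for p in xo_pts:
            i = t_rtp.query_ball_point(p, radius)
            j = t_rte.query_ball_point(p, radius)
            if i and j:
                pairs.append((np.mean(z_rtp[i]), np.mean(z_rte[j])))
        return np.array(pairs)   # columns: mean RTP depth, mean RTE depth

The paired columns are then plotted against each other to give the Q-Q style comparison of Figure 27/39.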

This finding is supported by the near-identical average power spectra for the deep basinal areas (Figure 27/40) for both the RTP and RTE data, suggesting that the spectral content of the respective grids is not significantly affected by the RTP and RTE transformations. This could be due to the imperfect nature of geological faults/contacts and surfaces, which allows a large ensemble of small flux leakages.

Figure 27/39: A: The Q-Q plot, with a search radius of 1 km, showing the linear 1:1 trend of depth estimates derived independently from the RTP and RTE Tilt-Depth method. The line is the 1:1 trend. B: Statistics of the Q-Q plot.

The implication of this result for magnetic datasets located in mid-latitudes, which can be transformed into either RTP or RTE equivalents, is that depth estimates based on the Tilt-Depth method can be combined to give a more robust solution; this is particularly relevant in the deep parts of sedimentary basins where the individual depth solutions are sparse. The combined depth solution is presented as Figure 27/41.


Figure 27/40: Power spectrum for study area for the RTP
and RTE data.
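A radially averaged spectrum of the kind plotted in Figure 27/40 can be sketched as follows (our illustration; a square-cell grid of spacing d is assumed, and production work would first detrend and taper the grid):

    import numpy as np

    def radial_power_spectrum(grid, d, nbins=50):
        """Radially averaged log power spectrum of a regular grid."""
        P = np.abs(np.fft.fft2(grid)) ** 2
        ny, nx = grid.shape
        K = np.hypot(*np.meshgrid(np.fft.fftfreq(nx, d=d),
                                  np.fft.fftfreq(ny, d=d)))
        edges = np.linspace(0.0, K.max(), nbins + 1)
        idx = np.clip(np.digitize(K.ravel(), edges) - 1, 0, nbins - 1)
        sums = np.bincount(idx, weights=P.ravel(), minlength=nbins)
        counts = np.bincount(idx, minlength=nbins)
        spec = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
        return 0.5 * (edges[1:] + edges[:-1]), np.log(spec)

Running this on the RTP and RTE grids and overlaying the two curves provides the comparison discussed above.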




Figure 27/41: A: Combined Tilt-Depth results from the RTP and RTE data; B: the resulting depth grid, which provides a more robust depth to basement map.



27.6.7: Discussion and Conclusion

This study has focused on the Tilt derivative and the Tilt-
Depth method and how they can be used successfully to
spatially map and delineate structures, depth and
morphology of sedimentary basins. For mid-latitudes where
the TMI data can be transformed into both RTP and RTE
datasets we have shown that it is possible to integrate the
Tilt-Depth results of each dataset to generate a more robust
grid solution. These methods have been developed to provide a robust reconnaissance tool that gives a rapid appreciation of what magnetic data can reveal about the subsurface. This initial understanding of the subsurface
structure then allows, as needed, more appropriate and
multiple analytical depth methods to be applied to the
magnetic data together with physical depth constraints from
well and seismic data and where possible the inclusion of
additional datasets, such as gravity, to generate a more
constrained and integrated solution/model.

The results of this study are:

Structure: The zero contour of the RTP Tilt derivative field
has been shown to coincide with the location of faults.
Deriving the susceptibility contrast along the zero Tilt contours can indicate more precisely where faults are actually located, since faults generate strong susceptibility contrasts. Once the fault structures have been identified it is
possible to use the RTP Tilt derivative to determine the sign
of the slope perpendicular to the zero contour to map the
direction of the susceptibility contrast. For basement faults, in
most cases, the slope direction indicates the throw direction,
helping the final basinal mapping process.

We have also shown that RTE zero Tilt contours are
controlled by the magnetic anisotropy effects at low magnetic
inclinations and cannot be relied on for mapping structure.
Beard (2000) has shown that, since geological structures are not simple linear geometric bodies, significant flux leakage can take place. Faults/contacts/dykes will consequently generate strings of dipole anomalies that can be significantly enhanced by using the Analytic signal, which makes them all positive and thus easier to detect and map. It was
for this reason that we have mapped the structure close to
the equator using the Analytic signal and Local wavenumber
anomaly fields since both derivatives can be considered for
practical purposes to be independent of inclination.

Depth to basement: We have shown that the Tilt-Depth
method works well for RTP datasets for mapping depth to
basement beneath sedimentary basins at both continental
and local scales. The method is based on the assumption of
a vertical contact model but small variations in the dip of the
contact do not appear to affect the depth results. The most
significant factor controlling the depth accuracy of the
method is the relation between fault throw and depth (section
27.6.4). Depth estimates to the top surface of a vertical
contact model have in general been found to be good for
basin boundary faults with large throws as well as basement
contacts (i.e. lithology changes within the basement). The systematic underestimation of the depths, identified in the model studies (Figure 27/30), needs to be taken into consideration and cross-checked with methods where a structural index can be applied.

An interesting finding of this study is that both the RTP and RTE datasets have near-identical average power spectra, suggesting that the spectral content of the respective grids is not significantly affected by magnetic anisotropy. The implication for the Tilt-Depth method is that it can be successfully applied to both RTP and RTE data, and the compatibility of the depth estimates generated from the two datasets has been demonstrated. When the RTP and RTE Tilt-Depth methods are applied to the same mid-latitude survey area, greater spatial coverage of depth solutions is generated, and the results of the RTP and RTE depth analyses can be combined into a single depth map, which is likely to reduce grid interpolation errors. The gridding method used here is minimum curvature with tension = 0, and this has been applied successfully.
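As an aside, true minimum-curvature interpolation (tension = 0) is mathematically equivalent to a biharmonic (thin-plate) spline, so a hedged sketch of the gridding step could use SciPy's RBFInterpolator (SciPy 1.7+; suitable only for modest numbers of solutions, and a dedicated gridder such as GMT surface would be used in production):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def grid_depths(xy, z, cell=250.0):
        """Thin-plate spline gridding of scattered depth estimates."""
        rbf = RBFInterpolator(xy, z, kernel='thin_plate_spline')
        x = np.arange(xy[:, 0].min(), xy[:, 0].max() + cell, cell)
        y = np.arange(xy[:, 1].min(), xy[:, 1].max() + cell, cell)
        X, Y = np.meshgrid(x, y)
        Z = rbf(np.column_stack([X.ravel(), Y.ravel()])).reshape(X.shape)
        return x, y, Z   # 250 m cell matches the gridding used in Figure 27/35B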

A further problem with gridding is that the depth estimates to
be gridded come exclusively from the top of the magnetic
basement sources. Simple gridding methods will seriously
underestimate sedimentary depths particularly close to and
on the down-thrown sides of faults, where depth solutions
define the upper corner of a fault but not the bottom of the
down-thrown side of the fault. Such mapping problems can
be reduced or overcome by undertaking a combination of 2D
forward profile modelling and 3D inversion of gravity grid
data constrained by well and seismic data.

In this study no attempt has been made to pre-condition the data prior to the application of the Tilt-Depth method, i.e. to identify and remove short-wavelength anomalies originating from shallow magnetic sources. Such a problem is shown in Figure 27/35 by the symbol ?. Here, a distinct set of short-wavelength anomalies can be observed superimposed on much longer wavelength anomalies (Figure 27/31), the latter having the same character as the anomalies coming from the deep parts of the rift basin elsewhere. The effect of applying the Tilt derivative to these shallow source anomalies is that they dominate the resulting Tilt derivative map. Without any available ground geological evidence or gravity coverage, we consider, based on Landsat imaging, that these short-wavelength anomalies delineate the edge of a thin, shallow volcanic layer with a considerable thickness of sediment beneath. The depth maps in this area therefore need further analysis.

27.6.8: References

Beard L. P. 2000. Detection and identification of north-south
trending magnetic structures near the magnetic equator.
Geophysical Prospecting 48, 745-761.

Cella F., Fedi M. and Florio G. 2009. Toward a full multiscale approach to interpret potential fields. Geophysical Prospecting 57, 543-557.
Cooper G. R. J. and Cowan D. R. 2006. Enhancing potential field data using filters based on the local phase. Computers & Geosciences 32, 1585-1591.

Fairhead J. D. and Williams S. E. 2006. Evaluating normalized magnetic derivatives for structural mapping. Extended Abstract, SEG workshop, New Orleans, Louisiana, USA.

Fairhead J. D., Salem A., Williams S. E., Bourne A. J., Green C. M. and Samson E. M. 2008a. Mapping the structure and depth of sedimentary basins using the magnetic Tilt-Depth method. 70th EAGE meeting, Rome, Italy, Expanded Abstracts.

Fairhead J. D., Salem A., Williams S. E. and Samson E. M. 2008b. Magnetic interpretation made easy: The Tilt-Depth-Dip-ΔK method. 78th SEG meeting, Las Vegas, Nevada, USA, Expanded Abstracts.

Fedi M., Primiceri R., Quarta T. and Villani A. 2004. Joint
application of continuous and discrete wavelet transform on
gravity data to identify shallow and deep sources.
Geophysical Journal International 156, 7-21.

Fedi M. and Florio G. 2006. SCALFUN: 3D analysis of potential field scale function to determine independently or simultaneously structural index and depth to source. 76th SEG meeting, New Orleans, Louisiana, USA, Expanded Abstracts, 963-967.

Fedi M. 2007. DEXP: A fast method to determine the depth and the structural index of potential fields sources. Geophysics 72, I1-I11.

Li X. 2006. Understanding 3D analytic signal amplitude. Geophysics 71, L13-L16.

Lee M., Morris B. and Ugalde H. 2010. Effect of signal
amplitude on magnetic depth estimations. The Leading Edge
29, 672-677.

MacLeod I. N., Jones K. and Dai T. F. 1993. 3D Analytic Signal in the interpretation of Total Field Data at Low Magnetic Latitudes. Exploration Geophysics 24, 679-687.

Miller H. G. and Singh V. 1994. Potential field tilt: a new
concept for location of potential field sources. Journal of
Applied Geophysics 32, 213-217.

Nabighian M. N. 1972. The analytic signal of two-dimensional
magnetic bodies with polygonal cross-section; its properties
and use for automated anomaly interpretation. Geophysics
37, 507-517.

Reid A. B., Allsop J. M., Granser H., Millett A. J. and Somerton I. W. 1990. Magnetic interpretation in three dimensions using Euler deconvolution. Geophysics 55, 80-91.

Salem A., Williams S. E., Fairhead J. D., Ravat D. and Smith R. 2007. Tilt-depth method: A simple depth estimation method using first-order magnetic derivatives. The Leading Edge 26, 1502-1505.

Salem A., Williams S. E., Samson E., Fairhead J. D., Ravat D. and Blakely R. J. 2010. Sedimentary basins reconnaissance using the magnetic Tilt-Depth method. Exploration Geophysics 41, 198-209.

Verduzco B., Fairhead J. D., Green C. M. and MacKenzie C. 2004. New insights into magnetic derivatives for structural mapping. The Leading Edge 23, 116-119.

Wijns C., Perez C. and Kowalczyk P. 2005. Theta map: Edge detection in magnetic data. Geophysics 70, L39-L43.






APPENDIX

USGS Map Projections


Map Projections Poster
Projections covered: The Globe; Mercator; Transverse Mercator; Oblique Mercator; Space Oblique Mercator; Miller Cylindrical; Robinson; Sinusoidal Equal Area; Orthographic; Stereographic; Gnomonic; Azimuthal Equidistant; Lambert Azimuthal Equal Area; Albers Equal Area Conic; Lambert Conformal Conic; Equidistant Conic (Simple Conic); Polyconic; Bipolar Oblique Conic Conformal; Summary Tables; General Notes.
Map Projections
A map projection is used to portray all or part of the round
Earth on a flat surface. This cannot be done without some
distortion.
Every projection has its own set of advantages and
disadvantages. There is no "best" projection.
The mapmaker must select the one best suited to the needs,
reducing distortion of the most important features.
Mapmakers and mathematicians have devised almost limitless
ways to project the image of the globe onto paper. Scientists at the U.S. Geological Survey have designed projections for their specific needs, such as the Space Oblique Mercator, which allows mapping from satellites with little or no distortion.
This document gives the key properties, characteristics, and
preferred uses of many historically important projections and
of those frequently used by mapmakers today.


[Portrait] Gerardus Mercator (1512-1594). Frontispiece to Mercator's Atlas sive Cosmographicae, 1585-1595. Courtesy of the Library of Congress, Rare Book Division, Lessing J. Rosenwald Collection.
Which ones best suit your needs?
Every flat map misrepresents the surface of the Earth in some way. No map can rival a globe in truly representing the surface of the entire Earth. However, a map or parts of a map can show one or more, but never all, of the following: True directions. True distances. True areas. True shapes.
For example, the basic Mercator projection is unique; it yields the only map on which a straight line drawn
anywhere within its bounds shows a particular type of direction, but distances and areas are grossly
distorted near the map's polar regions.
On an equidistant map, distances are true only along particular lines such as those radiating from a single
point selected as the center of the projection. Shapes are more or less distorted on every equal-area map.
Sizes of areas are distorted on conformal maps even though shapes of small areas are shown correctly.
The degree and kinds of distortion vary with the projection used in making a map of a particular area.
Some projections are suited for mapping large areas that are mainly north-south in extent, others for large
areas that are mainly east-west in extent, and still others for large areas that are oblique to the Equator.
The scale of a map on any projection is always important and often crucial to the map's usefulness for a
given purpose. For example, the almost grotesque distortion that is obvious at high latitudes on a small-
scale Mercator map of the world disappears almost completely on a properly oriented large-scale
Transverse Mercator map of a small area in the same high latitudes. A large-scale (1:24,000) 7.5-minute
USGS Topographic Map based on the Transverse Mercator projection is nearly correct in every respect.
A basic knowledge of the properties of commonly used projections helps in selecting a map that comes
closest to fulfilling a specific need.
The Globe
Directions: True
Distances: True
Shapes: True
Areas: True
Great circles: The shortest distance between any two points on the surface of the Earth can be found quickly and easily along a great circle.
Disadvantages:
- Even the largest globe has a very small scale and shows relatively little detail.
- Costly to reproduce and update.
- Difficult to carry around.
- Bulky to store.
On the globe:
Parallels are parallel and spaced equally on meridians. Meridians and other arcs of great circles are
straight lines (if looked at perpendicularly to the Earth's surface). Meridians converge toward the poles
and diverge toward the Equator.
Meridians are equally spaced on the parallels, but their distances apart decrease from the Equator to the poles. At the Equator, meridians are spaced the same as parallels. Meridians at 60° are half as far apart as parallels. Parallels and meridians cross at right angles. The area of the surface bounded by any two parallels and any two meridians (a given distance apart) is the same anywhere between the same two parallels.
The scale factor at each point is the same in any direction.
After Robinson and Sale, Elements of Cartography (3rd edition, John Wiley & Sons, Inc. 1969, p. 212).
Mercator
Used for navigation or maps
of equatorial regions. Any
straight line on the map is a
rhumb line (line of constant
direction). Directions along a
rhumb line are true between
any two points on map, but a
rhumb line is usually not the
shortest distance between
points. (Sometimes used with
Gnomonic map on which any
straight line is on a great circle and shows shortest path between two points).
Distances are true only along the Equator, but are reasonably correct within 15° of the Equator; special scales can be used to measure distances along other parallels. Two particular parallels can be made correct in scale instead of the Equator.
Areas and shapes of large areas are distorted. Distortion increases away from the Equator and is extreme in polar regions. The map, however, is conformal in that angles and shapes within any small area (such as that shown by a USGS topographic map) are essentially true.
The map is not perspective, equal area, or equidistant.
The Equator and other parallels are straight lines (spacing increases toward the poles) and meet meridians (equally spaced straight lines) at right angles. Poles are not shown.
Presented by Mercator in 1569.
Cylindrical: Mathematically projected on a cylinder tangent to the Equator. (Cylinder may also be secant.)
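The spherical Mercator forward equations behind these properties are simple; the sketch below (our illustration, with an assumed spherical Earth radius R and central meridian lon0) shows why the spacing of parallels increases toward the poles:

    import math

    def mercator(lat_deg, lon_deg, lon0_deg=0.0, R=6371000.0):
        """Spherical Mercator forward projection (metres)."""
        lam = math.radians(lon_deg - lon0_deg)
        phi = math.radians(lat_deg)
        x = R * lam
        y = R * math.log(math.tan(math.pi / 4 + phi / 2))  # diverges at the poles
        return x, y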
Transverse Mercator
Used by USGS for many quadrangle
maps at scales from 1:24,000 to
1:250,000; such maps can be joined at
their edges only if they are in the same
zone with one central meridian. Also
used for mapping large areas that are
mainly north-south in extent.
Distances are true only along the central meridian selected by the mapmaker, or else along two lines parallel to it, but all distances, directions, shapes, and areas are reasonably accurate within 15° of the central meridian. Distortion of distances, directions, and size of areas increases rapidly outside the 15° band. Because the map is conformal, however, shapes and angles within any small area (such as that shown by a USGS topographic map) are essentially true.
Graticule spacing increases away from the central meridian. The Equator is straight. Other parallels are complex curves concave toward the nearest pole.
The central meridian and each meridian 90° from it are straight. Other meridians are complex curves concave toward the central meridian.
Presented by Lambert in 1772.
Cylindrical: Mathematically projected on a cylinder tangent to a meridian. (Cylinder may also be secant.)
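In practice, survey data are moved between geographic and Transverse Mercator (e.g. UTM) coordinates with a projection library; the snippet below is a hedged example using pyproj (assumed available), with EPSG:32736 (UTM zone 36S) chosen purely as an illustration for the SE Tanzania area discussed earlier:

    from pyproj import Transformer

    # WGS84 geographic to UTM zone 36S; always_xy gives (lon, lat) ordering
    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32736", always_xy=True)
    x, y = to_utm.transform(39.5, -9.0)   # degrees in, metres out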
Oblique Mercator
Used to show regions along a great
circle other than the Equator or a
meridian, that is, having their general
extent oblique to the Equator. This kind
of map can be made to show as a
straight line the shortest distance
between any two preselected points
along the selected great circle.
Distances are true only along the great circle (the line of tangency for this projection), or along two lines parallel to it. Distances, directions, areas, and shapes are fairly accurate within 15° of the great circle. Distortion of areas, distances, and shapes increases away from the great circle. It is excessive toward the edges of a world map except near the path of the great circle.
The map is conformal, but not perspective, equal area, or equidistant. Rhumb lines are curved.
Graticule spacing increases away from the great circle but conformality is retained. Both poles can be shown. The Equator and other parallels are complex curves concave toward the nearest pole. Two meridians 180° apart are straight lines; all others are complex curves concave toward the great circle.
Developed 1900-50 by Rosenmund, Laborde, Hotine et al.
Cylindrical: Mathematically projected on a cylinder tangent (or secant) along any great circle but the Equator or a meridian.
Directions, distances, and areas are reasonably accurate only within 15° of the line of tangency.
Space Oblique Mercator
This new space-age conformal
projection was developed by the
USGS for use in Landsat images
because there is no distortion along
the curved groundtrack under the
satellite. Such a projection is needed
for the continuous mapping of
satellite images, but it is useful only
for a relatively narrow band along
the groundtrack.
Space Oblique Mercator maps show a satellite's groundtrack as a curved line that is continuously true to
scale as orbiting continues.
Extent of the map is defined by orbit of the satellite.
Map is basically conformal, especially in region of satellite scanning.
Developed in 1973-79 by A. P. Colvocoresses, J. P. Snyder, and J. L. Junkins.
Miller Cylindrical
Used to represent the entire Earth
in a rectangular frame. Popular for
world maps. Looks like Mercator
but is not useful for navigation.
Shows poles as straight lines.
Avoids some of the scale
exaggerations of the Mercator but
shows neither shapes nor areas
without distortion.
Directions are true only along the Equator. Distances are true only along the Equator. Distortion of
distances, areas, and shapes is extreme in high latitudes.
Map is not equal area, equidistant, conformal or perspective.
Presented by O. M. Miller in 1942.
Cylindrical: Mathematically projected onto a cylinder tangent at the Equator.
Robinson
Uses tabular
coordinates rather
than mathematical
formulas to make
the world "look
right." Better
balance of size
and shape of high-
latitude lands than
in Mercator, Van
der Grinten, or
Mollweide. Soviet Union, Canada, and Greenland truer to size, but Greenland compressed.
Directions are true along all parallels and along the central meridian. Distances are constant along the Equator and other parallels, but scales vary. Scale is true along 38° N & S, constant along any given parallel, and the same along N & S parallels the same distance from the Equator. Distortion: all points have some. Very low along the Equator and within 45° of center. Greatest near the poles.
Not conformal, equal area, equidistant, or perspective.
Used in Goode's Atlas, adopted for National Geographic's world maps in 1988, appears in growing
number of other publications, may replace Mercator in many classrooms.
Presented by Arthur H. Robinson in 1963.
Pseudocylindrical or orthophanic ("right appearing") projection.
Sinusoidal Equal Area
Used frequently in atlases
to show distribution
patterns. Used by the
USGS to show prospective
hydrocarbon provinces and
sedimentary basins of the
world. Has been used for
maps of Africa, South
America, and other large
areas that are mainly north-
south in extent.
An easily plotted equal-area projection for world maps. May have a single central meridian or, in
interrupted form, several central meridians.
Graticule spacing retains property of equivalence of area. Areas on map are proportional to same areas
on the Earth. Distances are correct along all parallels and the central meridian(s). Shapes are increasingly
distorted away from the central meridian(s) and near the poles.
Map is not conformal, perspective, or equidistant.
Used by Cossin and Hondius, beginning in 1570. Also called the Sanson-Flamsteed.
Pseudocylindrical: Mathematically based on a cylinder tangent to the Equator.
Orthographic
Used for perspective views of the Earth, Moon, and other planets. The Earth appears as it would on a photograph from deep space. Used by USGS in the National Atlas of the United States of America™.
Directions are true only from center point of projection. Scale decreases along all lines radiating from
center point of projection. Any straight line through center point is a great circle. Areas and shapes are
distorted by perspective; distortion increases away from center point.
Map is perspective but not conformal or equal area. In the polar aspect, distances are true along the
Equator and all other parallels.
The Orthographic projection was known to Egyptians and Greeks 2,000 years ago.
Azimuthal: Geometrically projected onto a plane. The point of projection is at infinity.
Stereographic
Used by the USGS for maps of Antarctica and American Geographical Society for Arctic and Antarctic
maps. May be used to map large continent-sized areas of similar extent in all directions. Used in
geophysics to solve spherical geometry problems. Polar aspects are used for topographic maps and charts for navigating in latitudes above 80°.
Directions true only from center point of projection. Scale increases away from center point. Any straight
line through center point is a great circle. Distortion of areas and large shapes increases away from
center point.
Map is conformal and perspective but not equal area or equidistant.
Dates from 2nd century B.C. Ascribed to Hipparchus.
Azimuthal: Geometrically projected on a plane. The point of projection is at the surface of the globe opposite the point of tangency.
Gnomonic
Used along with the Mercator by some navigators to find the shortest path between two points. Used in
seismic work because seismic waves tend to travel along great circles.
Any straight line drawn on the map is on a great circle, but directions are true only from center point of
projection. Scale increases very rapidly away from center point. Distortion of shapes and areas increases
away from center point.
Map is perspective (from the center of the Earth onto a tangent plane) but not conformal, equal area, or
equidistant.
Considered to be the oldest projection. Ascribed to Thales, the father of abstract geometry, who lived in
the 6th century B.C.
Azimuthal: Geometrically projected on a plane. The point of projection is the center of the globe.
Azimuthal Equidistant
Used by USGS in the National Atlas of the United States of America™ and for large-scale mapping of Micronesia. Useful for showing airline distances from the center point of the projection and for seismic and radio work. The oblique aspect is used for atlas maps of continents and world maps for radio and aviation use. The polar aspect is used for world maps, maps of polar hemispheres, and the United Nations emblem.
Distances and directions to all places are true only from the center point of the projection. Distances are correct between points along straight lines through the center. All other distances are incorrect. Any straight line drawn through the center point is on a great circle. Distortion of areas and shapes increases away from the center point.
Azimuthal: Mathematically projected on a plane tangent to any point on the globe. The polar aspect is tangent only at the pole.
Lambert Azimuthal Equal Area
Used by the USGS in its National Atlas and Circum-Pacific Map Series. Suited for regions extending
equally in all directions from center points, such as Asia and Pacific Ocean.
Areas on the map are shown in true proportion to the same areas on the Earth. Quadrangles (bounded by
two meridians and two parallels) at the same latitude are uniform in area.
Directions are true only from center point. Scale decreases gradually away from center point. Distortion
of shapes increases away from center point. Any straight line drawn through center point is on a great
circle.
Map is equal area but not conformal, perspective, or equidistant.
Presented by Lambert in 1772.
Azimuthal: Mathematically projected on a plane tangent to any point on the globe. The polar aspect is tangent only at the pole.
Albers Equal Area Conic
Used by USGS for maps showing
the conterminous United States
(48 states) or large areas of the
United States. Well suited for
large countries or other areas that
are mainly east-west in extent
and that require equal-area
representation. Used for many
thematic maps.
Maps showing adjacent areas can be joined at their edges only if they have the same standard parallels
(parallels of no distortion) and the same scale.
All areas on the map are proportional to the same areas on the Earth. Directions are reasonably accurate in limited regions. Distances are true on both standard parallels. Maximum scale error is 1 1/4% on a map of the conterminous States with standard parallels of 29 1/2°N and 45 1/2°N. Scale is true only along the standard parallels.
USGS maps of the conterminous 48 States, if based on this projection, have standard parallels 29 1/2°N and 45 1/2°N. Such maps of Alaska use standard parallels 55°N and 65°N, and maps of Hawaii use standard parallels 8°N and 18°N.
Map is not conformal, perspective, or equidistant.
Presented by H. C. Albers in 1805.
Conic: Mathematically projected on a cone conceptually secant at two standard parallels.
Lambert Conformal Conic
Used by USGS for many 7.5-
and 15-minute topographic
maps and for the State Base
Map series. Also used to
show a country or region that
is mainly east-west in extent.
One of the most widely used
map projections in the United
States today. Looks like the
Albers Equal Area Conic, but
graticule spacings differ.
Retains conformality. Distances are true only along the standard parallels; reasonably accurate elsewhere in limited regions. Directions are reasonably accurate. Distortion of shapes and areas is minimal at, but increases away from, the standard parallels. Shapes on large-scale maps of small areas are essentially true.
Map is conformal but not perspective, equal area, or equidistant.
For the USGS Base Map series for the 48 conterminous States, standard parallels are 33°N and 45°N (maximum scale error for a map of the 48 States is 2 1/2%). For the USGS Topographic Map series (7.5- and 15-minute), standard parallels vary. For aeronautical charts of Alaska, they are 55°N and 65°N; for the National Atlas of Canada, they are 49°N and 77°N.
Presented by Lambert in 1772.
Conic: Mathematically projected on a cone conceptually secant at two standard parallels.
Equidistant Conic (Simple Conic)
Used in atlases to show areas in
the middle latitudes. Good for
showing regions within a few
degrees of latitude and lying on
one side of the Equator. (One
example, the Kavraisky No. 4,
is an Equidistant Conic
projection in which standard
parallels are chosen to
minimize overall error.)
Distances are true only along all meridians and along one or two standard parallels. Directions, shapes
and areas are reasonably accurate, but distortion increases away from standard parallels.
Map is not conformal, perspective, or equal area, but a compromise between Lambert Conformal Conic
and Albers Equal Area Conic.
Prototype by Ptolemy, 150 A.D. Improved by De l'Isle about 1745.
Conic: Mathematically projected on a cone tangent at one parallel or conceptually secant at two parallels.
Polyconic
Used almost exclusively for large-
scale mapping in the United States
until the 1950's. Now nearly obsolete,
and no longer used by USGS for new
plotting in its Topographic Map
series. Best suited for areas with a
north-south orientation.
Directions are true only along central
meridian. Distances are true only
along each parallel and along central
meridian. Shapes and areas true only along central meridian. Distortion increases away from central
meridian.
Map is a compromise of many properties. It is not conformal, perspective, or equal area.
Apparently originated about 1820 by Hassler.
Conic: Mathematically based on an infinite number of cones tangent to an infinite number of parallels.
Bipolar Oblique Conic Conformal
This "tailor-made" projection is used to show one
or both of the American continents. Outlines in the
projection diagram represent areas shown on USGS
Basement and Tectonic Maps of North America.
Scale is true along two lines ("transformed standard
parallels") that do not lie along any meridian or
parallel. Scale is compressed between these lines
and expanded beyond them. Scale is generally good
but error is as much as 10% at the edge of the
projection as used.
Graticule spacing increases away from the lines of true scale but retains the property of conformality
except for a small deviation from conformality where the two conic projections join.
Map is conformal but not equal area, equidistant, or perspective.
Presented by O. M. Miller and W. A. Briesemeister in 1941.
Conic: Mathematically based on two cones whose apexes are 104° apart and which conceptually are obliquely secant to the globe along lines following the trend of North and South America.
Summary Tables
Summary of Projection Properties
Key: * = Yes, x = Partly

| Projection | Type | Conformal | Equal area | Equidistant | True direction | Perspective | Compromise | Straight rhumbs |
| Globe | Sphere | * | * | * | * | | | |
| Mercator | Cylindrical | * | | | x | | | * |
| Transverse Mercator | Cylindrical | * | | | | | | |
| Oblique Mercator | Cylindrical | * | | | | | | |
| Space Oblique Mercator | Cylindrical | * | | | | | | |
| Miller Cylindrical | Cylindrical | | | | | | * | |
| Robinson | Pseudocylindrical | | | | | | * | |
| Sinusoidal Equal Area | Pseudocylindrical | | * | x | | | | |
| Orthographic | Azimuthal | | | | x | * | | |
| Stereographic | Azimuthal | * | | | x | * | | |
| Gnomonic | Azimuthal | | | | x | * | | |
| Azimuthal Equidistant | Azimuthal | | | x | x | | | |
| Lambert Azimuthal Equal Area | Azimuthal | | * | | x | | | |
| Albers Equal Area Conic | Conic | | * | | | | | |
| Lambert Conformal Conic | Conic | * | | | x | | | |
| Equidistant Conic | Conic | | | x | | | | |
| Polyconic | Conic | | | x | | | * | |
| Bipolar Oblique Conic Conformal | Conic | * | | | | | | |
Summary of Areas Suitable for Mapping with Projections
Key: * = Yes, x = Partly

| Projection | Type | World | Hemisphere | Continent/Ocean | Region/sea | Medium scale | Large scale |
| Globe | Sphere | * | | | | | |
| Mercator | Cylindrical | x | | | * | | |
| Transverse Mercator | Cylindrical | | | * | * | * | * |
| Oblique Mercator | Cylindrical | | | * | * | * | * |
| Space Oblique Mercator | Cylindrical | | | | * | | |
| Miller Cylindrical | Cylindrical | * | | | | | |
| Robinson | Pseudocylindrical | * | | | | | |
| Sinusoidal Equal Area | Pseudocylindrical | * | | * | | | |
| Orthographic | Azimuthal | | x | | | | |
| Stereographic | Azimuthal | | * | * | * | * | * |
| Gnomonic | Azimuthal | | | | x | | |
| Azimuthal Equidistant | Azimuthal | x | * | * | * | x | |
| Lambert Azimuthal Equal Area | Azimuthal | | * | * | * | | |
| Albers Equal Area Conic | Conic | | | * | * | * | |
| Lambert Conformal Conic | Conic | | | * | * | * | * |
| Equidistant Conic | Conic | | | | * | * | |
| Polyconic | Conic | | | | | x | x |
| Bipolar Oblique Conic Conformal | Conic | | | * | | | |
Summary of Projection General Use
Key: * = Yes

| Projection | Type | Topographic Maps | Geological Maps | Thematic Maps | Presentations | Navigation | USGS Maps |
| Globe | Sphere | | | * | * | | |
| Mercator | Cylindrical | | | * | * | * | * |
| Transverse Mercator | Cylindrical | * | * | | | | * |
| Oblique Mercator | Cylindrical | * | | | | | * |
| Space Oblique Mercator | Cylindrical | | | * | | | * |
| Miller Cylindrical | Cylindrical | | | * | * | | |
| Robinson | Pseudocylindrical | | | * | * | | |
| Sinusoidal Equal Area | Pseudocylindrical | | | * | | | * |
| Orthographic | Azimuthal | | | | * | | |
| Stereographic | Azimuthal | * | * | | * | | * |
| Gnomonic | Azimuthal | | | | * | * | |
| Azimuthal Equidistant | Azimuthal | | | | | * | * |
| Lambert Azimuthal Equal Area | Azimuthal | | | * | * | | * |
| Albers Equal Area Conic | Conic | | | * | * | | * |
| Lambert Conformal Conic | Conic | * | * | * | | * | * |
| Equidistant Conic | Conic | | | | | | |
| Polyconic | Conic | * | | | | | * |
| Bipolar Oblique Conic Conformal | Conic | | * | | | | * |
General Notes
Azimuth: The angle measured in degrees between a base line radiating from a center point and another line radiating from the same point. Normally, the base line points North, and degrees are measured clockwise from the base line.
Aspect: Individual azimuthal map projections are divided into three aspects: the polar aspect, which is tangent at the pole; the equatorial aspect, which is tangent at the Equator; and the oblique aspect, which is tangent anywhere else. (The word "aspect" has replaced the word "case" in the modern cartographic literature.)
Conformality: A map projection is conformal when at any point the scale is the same in every direction. Therefore, meridians and parallels intersect at right angles and the shapes of very small areas and angles with very short sides are preserved. The size of most areas, however, is distorted.
Developable surface: A developable surface is a simple geometric form capable of being flattened without stretching. Many map projections can then be grouped by a particular developable surface: cylinder, cone, or plane.
Equal areas: A map projection is equal area if every part, as well as the whole, has the same area as the corresponding part on the Earth, at the same reduced scale. No flat map can be both equal area and conformal.
Equidistant: Equidistant maps show true distances only from the center of the projection or along a special set of lines. For example, an Azimuthal Equidistant map centered at Washington shows the correct distance between Washington and any other point on the projection. It shows the correct distance between Washington and San Diego and between Washington and Seattle. But it does not show the correct distance between San Diego and Seattle. No flat map can be both equidistant and equal area.
Graticule: The graticule is the spherical coordinate system based on lines of latitude and longitude.
Great circle: A circle formed on the surface of a sphere by a plane that passes through the center of the sphere. The Equator, each meridian, and each other full circumference of the Earth forms a great circle. The arc of a great circle shows the shortest distance between points on the surface of the Earth.
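The great-circle distance itself is easily computed; the haversine sketch below (our illustration, assuming a spherical Earth of radius R) returns the shortest surface distance between two points:

    import math

    def great_circle_km(lat1, lon1, lat2, lon2, R=6371.0):
        """Haversine great-circle distance in kilometres."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = p2 - p1
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))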
Linear scale: Linear scale is the relation between a distance on a map and the corresponding distance on the Earth. Scale varies from place to place on every map. The degree of variation depends on the projection used in making the map.
Map projection: A map projection is a systematic representation of a round body such as the Earth on a flat (plane) surface. Each map projection has specific properties that make it useful for specific purposes.
Rhumb line: A rhumb line is a line on the surface of the Earth cutting all meridians at the same angle. A rhumb line shows true direction. Parallels and meridians, which also maintain constant true directions, may be considered special cases of the rhumb line. A rhumb line is a straight line on a Mercator projection. A straight rhumb line does not show the shortest distance between points unless the points are on the Equator or on the same meridian.

