
Constraints on Universal Extra-Dimensional Dark Matter

from Direct Detection Results


by
Trevor Torpin
A THESIS
Submitted to the faculty of the Graduate School of Creighton University in partial
fulfillment of the requirements for the degree of Master of Science in the
Department of Physics
Omaha, NE (August 15, 2011)
Abstract
Detection of dark matter is one of the most challenging and important problems in
astro-particle physics. One theory that produces a viable particle dark matter candidate
is Universal Extra Dimensions (UED), in which the existence of a 4th spatial
dimension is theorized. The extra dimension is not seen because it is compactified on
a circular orbifold whose radius is too small to be observed with current technology.
What separates this theory from other Kaluza-Klein-type theories is that UED allows
all Standard Model particles and fields to propagate in the extra dimension. The dark
matter candidate in UED theories is a stable particle known as the Lightest Kaluza-Klein
Particle, or LKP, and the LKP can exist with sufficient relic density to serve
as the dark matter. This work will present bounds on UED model parameters from
direct dark matter searches such as XENON100.
Acknowledgements
I would like to thank the faculty and staff of the Physics Department of Creighton
University. I'm proud of the education I have received here at Creighton and I believe
it has helped prepare me for my future. I would especially like to thank my advisor
Dr. Gintaras Duda for his support and help.
I would also like to acknowledge the friendship and camaraderie of my fellow
graduate students who supported me throughout my time here. Finally, I also would
like to thank my family and friends for their unwavering faith and support over my
time here.
Dedication
This work is dedicated to my family and friends who have supported me through thick
and thin throughout my life. Their encouragement has helped more than they know.
Contents
Abstract iii
Acknowledgements iv
Dedication v
Table of Contents vi
List of Figures viii
List of Tables ix
1 Dark Matter 1
1.1 Evidence for Dark Matter . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Cosmological Evidence . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Potential Candidates for Dark Matter . . . . . . . . . . . . . . . . . . 7
1.3 Detection Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.1 Direct Detection . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.2 Indirect Detection . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2 Kaluza-Klein Theory 20
2.1 Kaluza's Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2 Kaluza-Klein Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3 Universal Extra Dimensions Theory 31
4 The XENON Experiments 37
4.1 The Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5 Results and Conclusions 44
5.1 The Direct Detection Program . . . . . . . . . . . . . . . . . . . . . . 44
5.2 Finding R and Λ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6 Conclusions 55
Bibliography 57
Appendix A: Program Code 59
Appendix B: Maple Worksheet 69
List of Figures
1.1 Galactic Rotation Curve . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 A Composite Image of the Bullet Cluster. . . . . . . . . . . . . . . . 4
1.3 CMB Anisotropy Power Spectrum for various values of Ω_b and Ω_dm . . 6
3.1 Mass Spectrum of the UED particles with one Loop Corrections for Mass . . 35
3.2 Feynman Diagrams for B^(1)-Quark Elastic Scattering . . . . . . . . . 36
4.1 Rate of Detection by Elements vs Recoil Energy . . . . . . . . . . . . 38
4.2 XENON100 Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3 Event Distribution using a Discrimination parameter . . . . . . . . . 41
4.4 Event Distribution within the Target Volume . . . . . . . . . . . . . . 41
4.5 XENON100 Limits on WIMP-Nucleon Cross Section vs WIMP Mass 42
5.1 XENON100 Limits: WIMP-Nucleon Cross Section vs WIMP Mass . . 46
5.2 WIMP-Nucleon Cross Section vs WIMP Mass . . . . . . . . . . . . . 49
5.3 Value of n vs WIMP Mass . . . . . . . . . . . . . . . . . . . . . . . . 51
5.4 Value of R vs WIMP Mass . . . . . . . . . . . . . . . . . . . . . . . . 53
List of Tables
5.1 Limitations on Mass Data Table . . . . . . . . . . . . . . . . . . . . . 50
5.2 Summary of Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Chapter 1
Dark Matter
Dark matter is one of the most challenging and important problems in both astronomy
and particle physics today. Dark matter is a form of matter that does not emit or
absorb electromagnetic radiation; therefore, it is non-luminous. There is substantial
evidence that dark matter makes up the majority of the mass in the universe. In fact,
it is believed that dark matter makes up 22% of the universe, with 4% of the universe
being composed of normal matter, and the rest of the universe being composed of
something called dark energy. The composition of dark matter is unknown, but we do
know that either it exists, revealed by its gravitational effects on matter that we can
see such as the galaxies in the universe, or else our fundamental understanding of gravity
is wrong. There is substantial observational evidence for the existence of dark matter
dating as far back as the 1930s.
1.1 Evidence for Dark Matter
There are two main categories of evidence that imply the existence of dark
matter. The first category is astrophysical evidence and includes the motion of stars
in galaxies, the motion of galaxies in clusters, and the phenomenon of gravitational
lensing. J. H. Oort began studying the motion of stars in our Milky Way galaxy in
the early 1930s. By measuring the Doppler shift of these stars, astronomers can
estimate the velocity of the stars, and from these velocities we can estimate the total
mass needed to account for these orbits. Oort was surprised to find that in order to account
for the observed orbits, there must be more mass than calculated from standard mass-to-luminosity
ratios. Standard mass-to-luminosity ratios work by setting the ratio
of our sun's mass to its luminosity equal to one. Then, by measuring the light
output of an object, we can get an estimate of its mass.
The same essential method was used by F. Zwicky, except that rather than looking
at stars, he examined the motion of galaxies in clusters. Zwicky applied the Virial
Theorem to find the gravitational potential energy and, through that, the mass of the
cluster. The Virial Theorem is given by:

K = -\frac{1}{2} U, \qquad (1.1)

where K is the average kinetic energy of the galaxies and U is the average potential
energy.

By using the Doppler shift of these galaxies, Zwicky was able to find their velocities
and, through the Virial Theorem, an estimate of the mass of the cluster. He also found
that there must be more mass than previously thought from mass-to-luminosity ratios,
which are well understood and roughly constant for a galaxy. Zwicky found that at
the velocities the individual galaxies were moving, the cluster should have drifted
apart. Because of this, Zwicky came to the conclusion that there must be more
mass than what was visible through mass-to-luminosity ratios. This observation has
been repeated many times for different galaxy clusters, and the results have all been
consistent: there must be more mass than what is observed. [1]
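To see the scale of Zwicky's argument in numbers, here is a minimal sketch in Python (assuming a uniform spherical cluster, for which U = -3GM²/5R, and an isotropic velocity distribution so that ⟨v²⟩ = 3σ², with σ the line-of-sight velocity dispersion; the input numbers are illustrative, not Zwicky's actual data):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
MPC = 3.086e22         # megaparsec, m

def virial_mass(sigma_los, radius):
    """Cluster mass from K = -U/2, with K = (3/2) M sigma_los^2 and
    U = -(3/5) G M^2 / R (uniform sphere), giving M = 5 sigma^2 R / G."""
    return 5.0 * sigma_los**2 * radius / G

# Illustrative Coma-like numbers: sigma ~ 1000 km/s, R ~ 1 Mpc
M = virial_mass(1.0e6, 1.0 * MPC)
print(f"virial mass ~ {M / M_SUN:.1e} solar masses")  # of order 10^15 M_sun
```

A mass of order 10^15 solar masses is far more than the luminosity of such a cluster suggests, which is the heart of Zwicky's conclusion.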
The next evidence for dark matter was found in studies of galactic rotation
curves. In these studies, Vera Rubin and collaborating astronomers measured
the velocity of stars as a function of their distance from the galactic center[2]. The
collaboration expected that the galaxy would exhibit the same behavior as our solar
system and that the rotational velocity would be given by:

v(r) = \sqrt{\frac{G\, m(r)}{r}}, \qquad (1.2)

where v(r) is the rotation speed of an object at radius r, G is the
gravitational constant, and m(r) is the total mass contained within the radius r.
What they expected was that as the distance of a star's orbit from the galactic center
increases, the star's velocity would decrease approximately as v(r) \propto 1/\sqrt{r}. Instead,
they found that the velocity of the stars was relatively constant no matter the distance
from the galactic center, as shown in Figure 1.1. Here A is the expected velocity as a
function of distance and B is what was found. In order for B to occur, there must be
substantially more mass than what is expected from the mass-to-luminosity ratios, and
in a different distribution.
Figure 1.1: Galactic Rotation Curve
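To make the expected curve A concrete, here is a minimal sketch (assuming, purely for illustration, that essentially all the luminous mass M sits inside the orbits considered, so m(r) ≈ M in Eq. 1.2):

```python
import numpy as np

G = 6.674e-11    # m^3 kg^-1 s^-2
M = 1.0e41       # enclosed luminous mass in kg (illustrative only)
KPC = 3.086e19   # kiloparsec, m

r = np.linspace(1, 30, 30) * KPC
v_kepler = np.sqrt(G * M / r)        # curve A: falls off as 1/sqrt(r)
v_flat = np.full_like(r, 2.2e5)      # curve B: roughly flat, ~220 km/s

# Mass actually required at each radius to sustain the flat curve:
m_required = v_flat**2 * r / G       # grows linearly with r
print(m_required[-1] / M)            # several times the assumed luminous mass
```

The linearly growing m_required is exactly the "different distribution" of mass the observations demand.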
The last major astrophysical evidence is a phenomenon called gravitational lensing.
Gravitational lensing was first predicted by Einstein's theory of general
relativity. This phenomenon occurs when light from a distant object such as a galaxy
or a quasar is bent around a closer object such as a cluster of galaxies. From the
amount of angular deflection we can estimate the mass of the lensing object. Perhaps
the most direct evidence for dark matter comes from here, in the form of the Bullet
Cluster. In Figure 1.2, the visible image is from the Hubble telescope, red is
the X-ray image from the Chandra telescope, and blue is the gravitational map [3].
Figure 1.2: A Composite Image of the Bullet Cluster.
Scientists have been studying this particular cluster, which is actually two colliding
galaxy clusters, for quite some time. The astronomers found that as the clusters
collided, the interstellar medium of dust was stripped from the cluster centers and
heated up in the central region between the two clusters, emitting X-rays that were
detected by the Chandra telescope. Since the majority of the luminous mass in a
galaxy cluster is made up of the interstellar medium, it was assumed that this central
region would also be where the center of gravity would be. Scientists then mapped
the gravitational potential within the cluster through the use of gravitational lensing
to find the location of the majority of the cluster's mass. They found that instead of
one central region of gravity, there were actually two, one on either side of the central
region. This led astronomers to conclude that as the clusters collided, the centers
passed through each other: the baryonic matter interacted and heated up in the
central region, while the dark matter passed right through without interacting[3].
This provided evidence that the majority of the mass of a galaxy cluster is actually
not luminous and is in fact dark matter.
1.1.1 Cosmological Evidence
The next category of evidence implying the existence of dark matter comes from
cosmology. The first example comes from Big Bang Nucleosynthesis. This is a time
period that began a few seconds after the Big Bang and lasted only a few minutes.
During this brief period, deuterium, helium, and other light elements were formed
from the combination of protons and neutrons. Today, deuterium formation essentially
occurs only in stellar interiors, where it is fused almost immediately into helium,
so we cannot observe it being made. Because of this, any deuterium that we see can
be taken to be left over from the Big Bang, and its observed abundance serves as a
lower limit on the amount of deuterium created in the Big Bang. Using this lower
limit we can calculate the theoretical elemental abundances, and measuring the
deuterium/hydrogen ratio gives a good estimate of the overall baryon abundance.
Using these abundances we can estimate the total baryonic mass density and compare
it to the total mass density. The dimensionless parameter Ω represents the ratio of
the energy density to the critical energy density related to the curvature of the
universe. The calculated baryonic density is Ω_b h² = 0.0229 ± 0.0013 or
Ω_b h² = 0.0216^{+0.0020}_{-0.0021} [4], depending on the measurement of deuterium
used. The total matter density is Ω_m h² = 0.1334^{+0.0056}_{-0.0055} [5].
Clearly baryonic matter is not the only type of matter in the universe.
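The mismatch is easy to quantify; a one-line check using the central values quoted above:

```python
omega_b_h2 = 0.0229    # baryonic density (BBN, central value)
omega_m_h2 = 0.1334    # total matter density (central value)

# Whatever matter is not baryonic must be dark:
omega_dm_h2 = omega_m_h2 - omega_b_h2
print(f"Omega_dm h^2 ~ {omega_dm_h2:.4f}")  # ~0.11: baryons are only ~17% of the matter
```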
The next cosmological evidence that we will consider is the cosmic microwave
background (CMB). The cosmic microwave background is radiation left over from
approximately 380,000 years after the Big Bang. From new experiments, such as
the WMAP survey, we can see small fluctuations in the temperature of this background.
These fluctuations depend on the amount of baryons in the universe at the time.
Scientists have realized that these fluctuations are too small to account for the
complete structure formation as we have observed it. This is because baryonic matter
does not become neutral until the epoch of recombination. In order to fit this
microwave background with the appropriate time scale, scientists have found that
there must be matter in the universe that is weakly interacting, and thus dark matter
comes into play to match these observations. The Cosmic Microwave Background
also limits the amount of baryonic dark matter, because baryonic matter and dark
matter interact differently with the CMB. When scientists plot the CMB anisotropy
power spectrum, it shows that there must be far less baryonic matter than total
matter, i.e. most dark matter is nonbaryonic, as shown in Figure 1.3.
Figure 1.3: CMB Anisotropy Power Spectrum for various values of Ω_b and Ω_dm.
The last cosmological evidence that we will examine here is Large Scale Structure
formation simulations. Scientists have developed models that mimic how we
expect our galaxy to form into its observed structure. One such simulation was the
Millennium-II simulation, which followed more than 10 billion particles and their
interactions in an effort to study dark matter halo structure and formation[6]. Each
particle represented 6.89 × 10⁶ h⁻¹ M_⊙ in a volume of (100 h⁻¹ Mpc)³. In these
simulations, scientists have found that if we assume that there is no dark matter, the
structures take too long to form or do not form the filament and void-type structures,
and hence they do not match well with our observations of the universe.
In order to run a simulation that forms structure on the appropriate time scale,
scientists have to include more matter. Another very important conclusion from these
simulations is that this dark matter must be non-relativistic, meaning that it does not
move at a significant fraction of the speed of light. If scientists run a simulation where
dark matter is relativistic, the universe does not form structure as we would expect
from our observations.
1.2 Potential Candidates for Dark Matter
Dark matter candidates come in two major categories: baryonic dark matter
and nonbaryonic dark matter. Baryonic matter is matter that is made up of
quarks, and this is the sort of matter that we observe all around us. Some possible
candidates for baryonic dark matter include brown dwarfs, neutron stars, black
holes, and unassociated planets. These candidates are all classified as Massive Compact
Halo Objects, or MACHOs for short. There have been several collaborations
that have searched for these objects through the use of gravitational microlensing.
Gravitational microlensing is the temporary brightening of a distant object due to
the lensing of its light by another object that passes between the source and the observer.
The two major collaborations are the MACHO collaboration and the EROS-2 Survey.
The MACHO collaboration claims to have found between 13 and 17 microlensing
events, which was substantially more than expected[7]. This would imply that there
could be enough 0.5 solar mass MACHOs to account for about 20% of the dark matter
in the galaxy. However, the EROS-2 Survey did not substantiate the signal claimed
by the MACHO group. They did not observe enough microlensing events, while having
a higher sensitivity than the MACHO group[8]. Both of these collaborations found
very few candidates, implying that most dark matter cannot exist in
the form of baryonic astrophysical objects.
From these various studies, we conclude that baryonic dark matter must not
make up the majority of dark matter in our universe; it must instead be in the form of
nonbaryonic dark matter. Nonbaryonic dark matter candidates are generally known
as WIMPs, or Weakly Interacting Massive Particles. WIMPs are generally
very massive particles that are electrically neutral and do not interact strongly with
matter. They generally interact only via the gravitational force and the weak force.

The Standard Model of particle physics is a theory that describes three of the four
fundamental forces in nature: electromagnetism, the weak nuclear force, and the
strong nuclear force. The force that is not included is gravity. The Standard Model
predicts 16 types of particles, and of these, there is only one potential candidate for
dark matter. The neutrino is the only stable, electrically neutral, and weakly
interacting particle in the Standard Model that fits the parameters for dark matter.
This candidate has the advantage over other potential particle candidates because it
is actually known to exist. Unfortunately, the neutrino cannot account for the
majority of dark matter. This is because neutrinos are relativistic and would inhibit
structure formation; they imply a top-down formation of structure in the universe,
which is inconsistent with observations. Another argument against the neutrino is
that neutrinos are very light. Current estimates place an upper limit on the sum of
the neutrino masses of less than 0.58 electron-volts[5]. This implies that to account
for the estimated amount of dark matter in the universe, neutrinos would have to have
a larger energy density than what is calculated from the Cosmic Microwave
Background. Therefore we can conclude that neutrinos could account for a portion of
dark matter, but not all of it.
To find more possible dark matter candidates, scientists have theorized various
extensions to the Standard Model. The most prominent extension to the Standard
Model is known as supersymmetry. The exact specifics vary with the model proposed,
but essentially all supersymmetric models create a symmetry between bosons and
fermions: each elementary particle of one spin is related to a partner particle whose
spin differs by half a unit. These other particles are called superpartners, and they
must be very massive, as they would otherwise have been spotted at particle
accelerators before now. This implies that supersymmetry must be a broken symmetry
in nature. Supersymmetry effectively doubles the number of particles in the Standard
Model, so with supersymmetry there are now several possible candidates for a dark
matter particle. The most promising predicted particles that are neutral and weakly
interacting, and thus ideal WIMP candidates, are the neutralino, the sneutrino, and
the gravitino. However, there are several arguments against some of these candidates.

The sneutrino is the superpartner of the neutrino and is relatively light.
Sneutrinos annihilate very rapidly in the early universe, and their relic density today
is too low to make the sneutrino a viable candidate, or at least not a significant
portion of dark matter[9]. We can also rule out the gravitino as a possible candidate.
The gravitino is the superpartner of the graviton, the theorized force carrier
of gravity. The problem with the gravitino as a dark matter candidate is
that the particle would be relativistic. As we saw earlier, relativistic dark matter
inhibits structure formation; hence, we can conclude that the gravitino is not a viable
dark matter candidate.
The most promising candidate by far is the neutralino. The neutralino is a
superposition of the neutral superpartners of the Higgs and gauge bosons. The
neutralino composition is given by:

\chi = a_1 \tilde{B} + a_2 \tilde{W}^{(3)} + a_3 \tilde{H}_1 + a_4 \tilde{H}_2, \qquad (1.3)

where \chi is the neutralino, \tilde{B} is the bino component, \tilde{W}^{(3)} is the
wino component, and \tilde{H}_1 and \tilde{H}_2 are the two higgsino components. The
neutralino is a viable candidate due to what is known as R-parity. R-parity is a
multiplicative quantum number, built from the baryon number, lepton number, and spin
appearing in the couplings of the theory, under which Standard Model particles are
even and superpartners are odd; if it is conserved, the lightest supersymmetric particle
cannot decay. In most theories of supersymmetry, the lightest supersymmetric particle
is the neutralino. We know that the relic abundance of neutralinos should be sizeable
and thus cosmologically significant. Most importantly, the predicted detection rates are
high enough to be observed in a laboratory experiment, but not so high that the
candidate has already been ruled out by experiment.

There are other, more exotic theorized particles that could be dark matter
candidates, such as the axino[10], Q-balls[11], mirror particles[12], WIMPzillas[13],
and branons[14], among others. However, the neutralino remains by far the most
promising candidate motivated by theory. But how could it be detected?
1.3 Detection Methods
If we assume that the dark matter in our galaxy consists of WIMPs, then there must be a
large number of dark matter particles passing through the Earth each second. Many
experiments are attempting to discover these WIMPs. These experiments can be
divided into two classes: direct detection experiments, which search for the scattering
of dark matter particles off atomic nuclei within a detector, and indirect detection
experiments, which look for the products of WIMP annihilations. There is another
possibility of detection, in which supersymmetric particles are produced in accelerators.
These have not been found yet, but hopes are high that they will be observed when
the Large Hadron Collider becomes fully operational. The Large Hadron Collider is
the world's largest and highest-energy particle accelerator, located near Geneva,
Switzerland and operated by CERN (the European Organization for Nuclear Research).
1.3.1 Direct Detection
Direct detection experiments attempt to observe dark matter in a detector that is
extremely sensitive to nuclear recoils. The detector attempts to measure the nuclear
recoil when a dark matter particle collides with a nucleus within the detector. These
events can manifest themselves in three ways. There is a phonon/thermal interaction,
in which the nuclear recoil causes a vibration in the detector that produces a slight
rise in temperature. The next type of interaction is an ionization interaction,
in which the event causes a charge to move across an applied electric field in the
detector. The final type of event interaction is scintillation, in which the event excites
an electron to a higher energy state that emits a photon when it decays. A
WIMP signal should have several distinct characteristics. WIMP events should be
uniformly distributed throughout the detector: because WIMPs are so weakly
interacting, they pass through the detector surface easily. They should also be
single-site events, because these events are rare enough that it is highly unlikely for
two to occur consecutively at the same place. The signal should also vary throughout
the year because of the Earth's changing velocity relative to the dark matter in our
galaxy. In general, a detector measures two of these interaction channels; the events
are then analyzed, and based on the timing and other characteristics of the
interactions, potential background events are discarded.

There are many collaborations performing direct detection experiments.
The majority of direct detection experiments use one of two detector technologies:
cryogenic detectors, operating at temperatures below 100 mK, which detect the heat
produced when a particle hits an atom in a crystal absorber, or noble liquid detectors,
which detect the flash of scintillation light from a particle collision. Direct detection
experiments operate in deep underground laboratories to reduce the background from
cosmic rays. Some examples are the CDMS, CRESST, CoGeNT, and XENON
experiments. One direct detection experiment, the DAMA experiment, claimed to have
observed an annual modulation in the event rate, which they attribute to the dark
matter particles' velocity relative to the Earth as it orbits the sun. So far this
claim has not been confirmed by other collaborations.
Most direct detection experiments set limits on WIMP-proton or neutralino-proton
cross sections. Let us work through an example of how these limits are
calculated. The spin-independent neutralino-nucleus elastic cross section with a pointlike
nucleus of Z protons and A - Z neutrons is given by:

\sigma_i^{SI} = \frac{4\mu_i^2}{\pi} \left| Z G_s^p + (A - Z) G_s^n \right|^2, \qquad (1.4)

where G_s^p and G_s^n are the scalar four-fermion couplings of a WIMP with point-like
protons and neutrons, and \mu_i = mM/(m + M) is the WIMP-nucleus reduced mass,
where m is the neutralino mass and M is the nucleus mass. Typically, it is assumed
that G_s^p = G_s^n. This allows us to rewrite the spin-independent WIMP-nucleus cross
section in terms of the WIMP-proton cross section \sigma_p as

\sigma_i^{SI} = \sigma_p A^2 \left( \frac{\mu_i}{\mu_p} \right)^2. \qquad (1.5)
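The A² factor in Eq. 1.5 is why heavy nuclei such as xenon make good targets. A short sketch of the scaling (the 100 GeV WIMP mass and 10⁻⁹ pb proton cross section are illustrative values only):

```python
def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def sigma_nucleus(sigma_p, m_wimp, m_nucleus, m_proton=0.938):
    """Spin-independent WIMP-nucleus cross section from Eq. 1.5,
    with A approximated by m_nucleus / m_proton. Masses in GeV."""
    A = m_nucleus / m_proton
    mu_i = reduced_mass(m_wimp, m_nucleus)
    mu_p = reduced_mass(m_wimp, m_proton)
    return sigma_p * A**2 * (mu_i / mu_p) ** 2

# Illustrative: 100 GeV WIMP, sigma_p = 1e-9 pb, Xe-131 target (~122 GeV)
print(sigma_nucleus(1e-9, 100.0, 122.0))  # ~0.06 pb: a ~10^7-fold coherent enhancement
```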
In detectors, the expected number of events with recoil energy in the range (E_1, E_2)
is the sum over the nuclear species in the detector, given by

N_{E_1, E_2} = \sum_i \int_{E_1}^{E_2} \frac{dR_i}{dE}\, \varepsilon_i(E)\, dE, \qquad (1.6)

where \varepsilon_i(E) is the effective exposure of each nuclear species in the detector and
dR_i/dE is the expected recoil rate per unit mass of species i, per unit nucleus recoil
energy, and per unit time. The effective exposure is given by

\varepsilon_i(E) = M_i\, T_i\, \epsilon_i(E), \qquad (1.7)

where T_i is the active time of the detector, M_i is the mass of the nuclear species i
exposed to the signal, and \epsilon_i(E) is the counting efficiency of the detector for nuclear
recoils of energy E. The differential rate dR_i/dE is given by

\frac{dR_i}{dE} = \frac{\rho\, \sigma_i^{SI}\, |F(q)|^2}{2 m \mu_i^2} \int_{v > q/2\mu_i} \frac{f(v, t)}{v}\, d^3v, \qquad (1.8)

where E is the energy of the recoiling nucleus, \rho is the local halo WIMP density, f(v, t)
is the WIMP velocity distribution function in the frame of the detector (generally
assumed to be a Maxwell-Boltzmann distribution[?]), \sigma_i^{SI} is the spin-independent
WIMP-nucleus elastic cross section, and |F(q)|^2 is a nuclear form factor, where
q = \sqrt{2ME} is the recoil momentum. The cross section limit is found by comparing
the expected number of events to the observed number of events.
The nuclear form factor, F(q), is the Fourier transform of a spherically symmetric
ground state mass distribution, normalized such that F(0) = 1:

F(q) = \frac{1}{M} \int \rho_{mass}(r)\, e^{i \mathbf{q} \cdot \mathbf{r}}\, d^3r. \qquad (1.9)

We can perform the angular part of the integration, leaving

F(q) = \frac{1}{M} \int_0^\infty \rho_{mass}(r)\, \frac{\sin qr}{qr}\, 4\pi r^2\, dr. \qquad (1.10)

Because it is difficult to determine how the mass is arranged in the nucleus, the mass
and charge densities are generally assumed to be proportional:

\rho_{mass}(r) = \frac{M}{Ze}\, \rho_{charge}(r). \qquad (1.11)

This is done because it is fairly simple to measure charge densities through the
use of elastic electron scattering. It is worth noting that because the nuclear form
factor is normalized, this implies:

F_{mass}(q) = F_{charge}(q). \qquad (1.12)
The most basic charge density for a nucleus is known as the uniform model[16]. In
this model, the charge density of the nucleus is a constant value out to a cutoff radius
R. The charge density is therefore given by:

\rho_U(r) = \begin{cases} \dfrac{3Ze}{4\pi R^3}, & r < R \\ 0, & r > R \end{cases} \qquad (1.13)

The total charge of the nucleus is Ze, due to the fact that the charge density has
been normalized. Now let us derive the form factor. Using Eq. 1.11, we can write
the mass density as

\rho_{mass}(r) = \begin{cases} \dfrac{3M}{4\pi R^3}, & r < R \\ 0, & r > R \end{cases} \qquad (1.14)

Plugging this mass density into the form factor integral of Eq. 1.10, we find

F(q) = \frac{1}{M} \int_0^R \frac{3M}{4\pi R^3}\, \frac{\sin qr}{qr}\, 4\pi r^2\, dr. \qquad (1.15)

Pulling out the constants, we are left with

F(q) = \frac{3}{R^3} \int_0^R r^2\, \frac{\sin qr}{qr}\, dr. \qquad (1.16)

Performing the integration, we obtain

F(q) = \frac{3}{R^3 q^3} \left( \sin(qR) - qR \cos(qR) \right). \qquad (1.17)

We can rewrite this as

F(q) = \frac{3}{qR} \left( \frac{\sin(qR)}{q^2 R^2} - \frac{\cos(qR)}{qR} \right). \qquad (1.18)

Now we can use the spherical Bessel function of the first order, namely

j_1(x) = \frac{\sin(x)}{x^2} - \frac{\cos(x)}{x}. \qquad (1.19)

Using this, we can write the nuclear form factor for the uniform model as

F(q) = \frac{3}{qR}\, j_1(qR). \qquad (1.20)
The uniform model is just an idealization of a nucleus; a real nucleus cannot have
such an abrupt cutoff in its charge distribution. One solution to this problem is to
use the Helm charge density[16]. The Helm charge distribution is the convolution of
the uniform distribution with a Gaussian:

\rho_H(r) = \int \rho_U(r')\, \rho_G(r - r')\, d^3r', \qquad (1.21)

where \rho_G(r) is given by

\rho_G(r) = \frac{1}{(2\pi g^2)^{3/2}}\, e^{-r^2 / 2g^2}, \qquad (1.22)

and g is a parameter giving the width of the Gaussian smearing of the surface. By
the convolution theorem, the Helm density has a simple form factor, just the product
of the form factors of \rho_U and \rho_G:

F(q) = F_U(q)\, F_G(q) = \frac{3}{qR}\, j_1(qR)\, e^{-g^2 q^2 / 2}. \qquad (1.23)
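Eq. 1.23 is simple to code; a minimal sketch (the values of R and g below are placeholders; in practice they are fit to electron-scattering data for each nucleus):

```python
import numpy as np

def helm_form_factor(q, R, g):
    """Helm form factor, Eq. 1.23: F(q) = (3/qR) j1(qR) exp(-g^2 q^2 / 2).

    q : momentum transfer (fm^-1), R : cutoff radius (fm),
    g : Gaussian surface-smearing width (fm).
    """
    x = q * R
    j1 = np.sin(x) / x**2 - np.cos(x) / x   # spherical Bessel j1, Eq. 1.19
    return (3.0 / x) * j1 * np.exp(-(g * q) ** 2 / 2.0)

# Illustrative values loosely appropriate for a xenon-mass nucleus:
q = np.linspace(0.01, 2.0, 5)   # fm^-1
print(helm_form_factor(q, R=5.6, g=0.9))  # falls from ~1 toward zero
```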
There are three other charge-density models commonly used in dark matter detection
calculations[16]. The first is known as the Woods-Saxon distribution, given by the
density:

\rho(r) = \frac{\rho_c}{e^{(r - c)/a} + 1}, \qquad (1.24)

where c is the half-density radius, \rho_c is the density at r = c, and the parameter a
is related to the surface thickness t by t = (4 \ln 3)\, a. The drawback of the Woods-Saxon
distribution is that its Fourier transform cannot be computed analytically.
The second main distribution is known as the sum of Gaussians expansion. In this
distribution, the charge density is modeled as a series of Gaussians. It can be written
as

\rho(r) = \sum_{i=1}^{N} A_i \left( e^{-[(r - R_i)/\gamma]^2} + e^{-[(r + R_i)/\gamma]^2} \right), \qquad (1.25)

where \gamma is the width of the Gaussians and A_i is given by

A_i = \frac{Ze\, Q_i}{2\pi^{3/2}\, \gamma^3 \left( 1 + 2R_i^2/\gamma^2 \right)}. \qquad (1.26)

Here Q_i stands for the fractional charge in the ith Gaussian. If we assume spherical
symmetry, the form factor for the sum of Gaussians expansion can be written as:

F(q) = e^{-q^2 \gamma^2 / 4} \sum_{i=1}^{N} \frac{Q_i}{1 + 2R_i^2/\gamma^2} \left( \cos(qR_i) + \frac{2R_i^2}{\gamma^2}\, \frac{\sin(qR_i)}{qR_i} \right). \qquad (1.27)
The third and final commonly used distribution is known as the Fourier-Bessel
expansion. Here the charge density is modeled as a sum of spherical Bessel functions
out to a cutoff radius R. The density can be written as:

\rho(r) = \begin{cases} \sum_{\nu=1}^{N} a_\nu\, j_0(\nu \pi r / R), & r \le R \\ 0, & r \ge R, \end{cases} \qquad (1.28)

where j_0(x) = \sin x / x is the zeroth-order spherical Bessel function. If we assume
spherical symmetry, we can write the nuclear form factor as

F(q) = \frac{\sin(qR)}{qR}\, \frac{\sum_{\nu=1}^{N} (-1)^\nu\, a_\nu / (\nu^2 \pi^2 - q^2 R^2)}{\sum_{\nu=1}^{N} (-1)^\nu\, a_\nu / \nu^2 \pi^2}. \qquad (1.29)
These form factors enter the calculation of the cross section and can affect its
accuracy. However, the effect is typically small and is not considered here as a major
factor in my calculation.
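To tie Eqs. 1.5-1.8 and the Helm form factor together, here is a minimal sketch of the differential rate calculation. The halo parameters, the assumption that the detector is at rest in the halo frame (which makes the velocity integral analytic), and the choice of a 100 GeV WIMP with a 10⁻⁴⁵ cm² nucleon cross section are standard illustrative inputs, not values taken from this thesis:

```python
import numpy as np

# Constants (SI)
GEV = 1.783e-27            # kg per GeV/c^2
KEV = 1.602e-16            # J per keV
HBAR = 1.055e-34           # J s

# Illustrative inputs
RHO = 0.3 * GEV * 1e6      # local WIMP density, 0.3 GeV/cm^3 -> kg/m^3
V0 = 220e3                 # halo velocity parameter, m/s
M_CHI = 100 * GEV          # WIMP mass (illustrative)
M_XE = 131 * 0.9315 * GEV  # Xe-131 nucleus mass (~A x amu)
SIGMA_P = 1e-45 * 1e-4     # WIMP-nucleon cross section, 1e-45 cm^2 -> m^2

def mu(a, b):
    return a * b / (a + b)

def helm(q_si, R=5.6e-15, g=0.9e-15):
    """Helm form factor, Eq. 1.23, with q in SI momentum units and R, g in m."""
    x = q_si / HBAR * R
    j1 = np.sin(x) / x**2 - np.cos(x) / x
    return (3.0 / x) * j1 * np.exp(-((q_si / HBAR) * g) ** 2 / 2)

def dRdE(E_recoil_J):
    """Differential rate of Eq. 1.8, in events per kg per s per J, for a
    Maxwell-Boltzmann halo with the detector taken at rest (analytic eta)."""
    mu_n = mu(M_CHI, M_XE)
    mu_p = mu(M_CHI, 0.938 * GEV)
    sigma_nuc = SIGMA_P * 131**2 * (mu_n / mu_p) ** 2   # Eq. 1.5
    q = np.sqrt(2 * M_XE * E_recoil_J)                  # recoil momentum
    vmin = q / (2 * mu_n)
    eta = 2 / (np.sqrt(np.pi) * V0) * np.exp(-(vmin / V0) ** 2)
    return RHO * sigma_nuc * helm(q) ** 2 / (2 * M_CHI * mu_n**2) * eta

rate = dRdE(10 * KEV) * 86400 * KEV   # convert to events / kg / day / keV
print(f"{rate:.1e} events/kg/day/keV at 10 keV")  # of order 1e-5
```

Integrating such a rate over the energy window and exposure, as in Eq. 1.6, gives the handful of expected events against which experiments like XENON100 compare their data.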
1.3.2 Indirect Detection
Indirect detection experiments search for the products of WIMP annihilation. Some
current experiments include EGRET, the Fermi Gamma-ray Space Telescope, and
PAMELA, a satellite studying cosmic radiation. Neutralinos are Majorana particles,
meaning that they are their own antiparticles, and hence they annihilate with each
other. These annihilations generate products which we can detect, such as gamma
rays, neutrinos, and antimatter. These products are typically searched for in the
sun, the earth, and the galactic center, because neutralinos are expected to accumulate
gravitationally in large objects, where they annihilate.

Gamma rays from neutralino annihilation are believed to be produced mostly in the
galactic center. There are two main processes by which gamma rays are produced.
They are produced when the annihilation creates a quark and antiquark, which create
particle jets in which a spectrum of gamma rays is released from the decay of π⁰
particles, with an endpoint set by the mass of the WIMP. Neutralino annihilation also
yields neutrinos through many different annihilation processes. Detection of neutrinos
depends on the WIMP mass, annihilation rate, and density, as well as several other
factors. It is difficult to detect neutrinos because they are very weakly interacting, so
detectors must be very large in order to collect a significant signal. Neutralino
annihilations can also yield antimatter: antiprotons from quark and antiquark pairs,
or positrons created from secondary products of the annihilation. Antimatter products
are charged particles. This means they can be deflected by magnetic fields in space,
and they can lose energy through inverse Compton scattering (in which a photon
gains energy in an interaction with a charged particle) and synchrotron processes[17].
Because of these effects, we cannot determine where the annihilations occurred.
Furthermore, the observation of any of these products would not by itself be proof of
dark matter, as the backgrounds from other sources are not yet fully understood.
1.4 Conclusion
Detection of dark matter is one of the most challenging and important problems in
both astronomy and particle physics today. This chapter aimed to provide a
basic introduction to the subject of dark matter, particularly its potential candidates
and experimental detection methods. There are many experiments underway that are
constraining potential dark matter candidates. Because of this, and with new results
being released regularly, it is a very exciting time to be in this field!
Chapter 2
Kaluza-Klein Theory
Kaluza-Klein Theory is an interesting example of a unification theory in theoretical
physics that came about in 1921. It resulted from a large push toward unification
theories in physics throughout the early 1900s. Unification theories, of course, are
theories that attempted to combine the forces of gravity and electromagnetism, the
only forces that physicists were aware of at the time. One of the largest proponents
of unification theories was the esteemed Albert Einstein. Kaluza-Klein Theory was
one such attempt to unify gravity with electromagnetism, but what made this theory
so interesting and intriguing was that the unification was accomplished through the
use of an extra dimension.
2.1 Kaluza's Theory
Theodor Kaluza was a German mathematician and physicist who was the main architect
behind this theory, though there were other attempts at unification, most notably by
Hermann Weyl, who published his theory three years before Kaluza. For various
reasons, all the earlier attempts were discredited. Kaluza first came up with his theory
around 1919, though, due to some reservations by Einstein, it was not published until
1921 in a paper entitled On the Unity Problem in Physics[18]. Kaluza's approach was
based on the idea of a single universal tensor which is the source of both the
gravitational and electromagnetic fields.
Kaluza began by considering gravity in a five dimensional space. The 5D
Riemannian line element is given by

d\tau = \sqrt{\gamma_{ik}\, dx^i\, dx^k}, \qquad (2.1)

where \gamma_{ik} are the covariant components of a 5D symmetric tensor and the x's
are the 5 coordinates of space, with x^5 representing the 5th dimension. This can be
written in a more modern form in this way:

ds^2 = g_{\mu\nu}\, dx^\mu\, dx^\nu. \qquad (2.2)

Throughout this thesis we will use Einstein summation notation, meaning that
we will drop explicit summation signs and assume that repeated indices imply
summation. We must be careful and note that indices that use the Latin alphabet sum
from one to four and the indices that use the Greek alphabet sum from one to five.
Kaluza started here because Riemann proved that one can consider force a
consequence of geometry, which in this case is determined by the metric. Kaluza had
to make several assumptions. He assumed that x^1, x^2, x^3, x^4 characterize the usual
space-time. Kaluza then introduced what he called the cylinder condition, which
forces the metric tensor to depend only on the observable space-time coordinates.
We can impose this by setting the derivative of the metric with respect to the 5th
dimension equal to zero:

\frac{\partial \gamma_{ik}}{\partial x^5} = 0. \qquad (2.3)

This sets the structure of the 5D space to be a cylinder world whose axis is the 5th
dimension and is preferred. This is the part that Einstein wasn't convinced of, and it
took two years before he agreed with what Kaluza did.
The cylinder condition implies that coordinate transformations must be of the
form:

x^5 = x^{5'} + \psi^0(x^{1'}, x^{2'}, x^{3'}, x^{4'}), \qquad x^i = \psi^i(x^{1'}, x^{2'}, x^{3'}, x^{4'}). \qquad (2.4)

That is, transforming the 5th dimension allows the terms to mix with regular space-time
coordinates, but when transforming the regular coordinates, these are not allowed
to mix with the 5th-dimension coordinate, because if that were to occur, the fifth
dimension would be observable to us. Under this set of transformations, \gamma_{55} is
invariant. This allows us to set this element of the metric to a constant:

\gamma_{55} = \text{constant}. \qquad (2.5)
Also invariant under these transformations are the differential quantities

d\vartheta = dx^5 + \frac{\gamma_{5i}}{\gamma_{55}}\, dx^i, \qquad (2.6)

ds^2 = \left( \gamma_{ik} - \frac{\gamma_{5i}\, \gamma_{5k}}{\gamma_{55}} \right) dx^i\, dx^k, \qquad (2.7)

where d\vartheta represents a displacement in the 5th dimension and ds^2 represents a
line element in the regular four dimensions. We can connect these two quantities
through another line element:

d\tau^2 = d\vartheta^2 + ds^2, \qquad (2.8)

which is invariant as well, due to the fact that d\vartheta and ds^2 are invariant
themselves. From the properties of the transformations and the fact that d\vartheta and
\gamma_{55} are invariant, we eventually find that \gamma_{5i} transforms the same as a 4D
vector, which we may call \phi_i.
Next, Kaluza began to define the metric for his 5D space-time. We may let
\gamma_{5i} = \alpha \phi_i, where \alpha is a constant and \phi_i is an arbitrary vector. We can
substitute this back into our line element so that we have

d\vartheta = dx^5 + \alpha\, \phi_i\, dx^i. \qquad (2.9)

Kaluza's next step was to write the line element ds^2 as the line element used in
Einstein's highly successful theory of general relativity. To do this, Kaluza set his
metric equal to

\gamma_{ik} = g_{ik} + \alpha^2\, \phi_i\, \phi_k, \qquad (2.10)

where g_{ik} is the usual four dimensional metric that Einstein used. This metric, g_{ik},
is usually chosen such that in Cartesian coordinates

ds^2 = dx^2 + dy^2 + dz^2 - c^2\, dt^2, \qquad (2.11)

which is the typical space-time interval. Therefore Kaluza's metric is given by
\gamma_{ik} = \begin{pmatrix} g_{11} & g_{12} & g_{13} & g_{14} & \alpha\phi_1 \\ g_{21} & g_{22} & g_{23} & g_{24} & \alpha\phi_2 \\ g_{31} & g_{32} & g_{33} & g_{34} & \alpha\phi_3 \\ g_{41} & g_{42} & g_{43} & g_{44} & \alpha\phi_4 \\ \alpha\phi_1 & \alpha\phi_2 & \alpha\phi_3 & \alpha\phi_4 & \gamma_{55} \end{pmatrix}. \qquad (2.12)
By examining the metric, we can see that it is composed of the usual 4D metric along
with a vector.
The next thing that Kaluza derived was the equations of motion in this five
dimensional geometry. He began by defining an invariant scalar

P = \gamma^{ik} \left( \frac{\partial \{^{\sigma}_{i\sigma}\}}{\partial x^k} - \frac{\partial \{^{\sigma}_{ik}\}}{\partial x^\sigma} + \{^{\rho}_{i\sigma}\}\{^{\sigma}_{k\rho}\} - \{^{\sigma}_{ik}\}\{^{\rho}_{\sigma\rho}\} \right), \qquad (2.13)

where \gamma^{ik} are the contravariant components of the metric and \{^{i}_{rs}\} represents
the Christoffel symbols. Christoffel symbols relate vectors in the tangent spaces of nearby
points and are defined as

\{^{i}_{rs}\} = \frac{1}{2}\, \gamma^{i\sigma} \left( \frac{\partial \gamma_{\sigma r}}{\partial x^s} + \frac{\partial \gamma_{\sigma s}}{\partial x^r} - \frac{\partial \gamma_{rs}}{\partial x^\sigma} \right). \qquad (2.14)
In this interpretation, P is the five dimensional Ricci scalar. The Ricci scalar is
written in modern terms as

R = R^{\mu}{}_{\mu} = g^{\mu\nu} R_{\mu\nu}, \qquad (2.15)

where the Ricci tensor is built from the Christoffel symbols \Gamma, which are defined as

\Gamma^{\lambda}_{\mu\nu} = \frac{1}{2}\, g^{\lambda\sigma} \left( \partial_\mu g_{\sigma\nu} + \partial_\nu g_{\sigma\mu} - \partial_\sigma g_{\mu\nu} \right). \qquad (2.16)

Now, in forming R it is assumed that the metric is independent of x^5 and that
\gamma_{55} is the constant fixed above. This must be done, because otherwise the fifth
dimension would be observable.
Next, Kaluza considered the action of the system; this action is known as the
Einstein-Hilbert action, and it works exactly as in general relativity, except in five
dimensions. It is given by

J = \int R\, \sqrt{-\gamma}\;\, dx^1\, dx^2\, dx^3\, dx^4\, dx^5, \qquad (2.17)

where \gamma represents the determinant of the metric \gamma_{ik}. Next, Kaluza used the
calculus of variations: he formed \delta J by varying the \gamma_{ik} and their derivatives
\partial \gamma_{ik}/\partial x^l while keeping the boundary values fixed and keeping \gamma_{55}
constant. Then, using the principle of least action, he set \delta J = 0, which gives the
extrema of the motion.
By setting \delta J = 0, Kaluza found[19]:

R^{ik} - \frac{1}{2}\, g^{ik} R + \frac{\alpha^2}{2}\, S^{ik} = 0, \qquad (2.18)

\frac{\partial \left( \sqrt{-g}\, F^{i\nu} \right)}{\partial x^\nu} = 0. \qquad (2.19)

Next, Kaluza set \frac{\alpha^2}{2} = \kappa, where \kappa is the gravitational constant used
by Einstein in general relativity. This gives

R^{ik} - \frac{1}{2}\, g^{ik} R + \kappa\, S^{ik} = 0, \qquad (2.20)

\frac{\partial \left( \sqrt{-g}\, F^{i\nu} \right)}{\partial x^\nu} = 0. \qquad (2.21)

Eq. (2.20) is the Einstein equation for the metric, which gives four dimensional
general relativity. But what is Eq. (2.21)? It looks like an equation of motion for a
vector field. In modern terms, we can write Eq. (2.21) as

\partial_\mu F^{\mu\nu} = 0. \qquad (2.22)
Switching gears a little bit, Maxwell's equations in empty space are given by

\nabla \cdot \mathbf{E} = 0, \qquad (2.23)

\nabla \cdot \mathbf{B} = 0, \qquad (2.24)

\nabla \times \mathbf{B} = \mu_0 \epsilon_0\, \frac{\partial \mathbf{E}}{\partial t}, \qquad (2.25)

\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad (2.26)

where E represents the electric field, B is the magnetic field, \mu_0 is the permeability
of free space, and \epsilon_0 is the permittivity of free space. This allows a tensor to be
defined as

F^{\mu\nu} = \begin{pmatrix} 0 & E_1 & E_2 & E_3 \\ -E_1 & 0 & B_3 & -B_2 \\ -E_2 & -B_3 & 0 & B_1 \\ -E_3 & B_2 & -B_1 & 0 \end{pmatrix}. \qquad (2.27)

Then, if we take \partial_\mu F^{\mu\nu} and set it equal to zero, we find that it reproduces
Maxwell's equations, through the use of the calculus of variations. Classical
electrodynamics comes from the action

S = \int \mathcal{L}\, d^4x \qquad (2.28)

in the case of no sources. The Lagrangian for this theory is defined as

\mathcal{L} = -\frac{1}{4}\, F_{\mu\nu} F^{\mu\nu}, \qquad (2.29)

where F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu. Working through the Euler-Lagrange
equations will yield Maxwell's equations.
Going back to the arbitrary vector \phi_i introduced when we defined the metric, we
can build F_{\mu\nu} out of \phi_i such that, in Cartesian coordinates,

\phi_i = (\phi_x, \phi_y, \phi_z) = \mathbf{A}, \qquad \phi_t = cV, \qquad (2.30)

where A is the usual electromagnetic vector potential, V is the usual scalar potential,
and c is the speed of light. All of this leads to the conclusion that
\frac{\partial (\sqrt{-g}\, F^{i\nu})}{\partial x^\nu} = 0 is equivalent to the equation of
motion for an electromagnetic field.
Our equations of motion are therefore given by

R^{ik} - \frac{1}{2}\, g^{ik} R + \kappa\, S^{ik} = 0,

\frac{\partial \left( \sqrt{-g}\, F^{i\nu} \right)}{\partial x^\nu} = 0,

where the elements of the above equations are as follows:

R is the Ricci scalar
R^{ik} are the contravariant components of Einstein's Ricci tensor
g^{ik} are the contravariant components of Einstein's metric tensor
S^{ik} are the contravariant components of the electromagnetic energy-momentum tensor
g is the determinant of g_{ik}
F^{i\nu} are the contravariant components of the electromagnetic field strength tensor

We essentially recover the gravitational field equations of general relativity along
with the generalized Maxwell equations! Thus Kaluza proved that gravity in five
dimensions is mathematically equivalent to four dimensional gravity along with
electromagnetism. This was a very interesting and amazing result. However, there
are some problems with this theory that ultimately led to it being discredited and
considered a mathematical oddity.
One such problem is that we still have the \gamma_{55} scalar field, which Kaluza left
alone. This is ultimately what led to the downfall of the theory. A scalar field will act
as another force, a fifth force that we have not observed. There are some theories that
postulate a fifth force, but if there is such a fifth force, it would have to be tightly
constrained. Interestingly enough, there are some theories that suggest that this
\gamma_{55} scalar field could be what accounts for dark matter.
2.2 Kaluza-Klein Theory
One person who studied Kaluza's theory and made improvements to it was the
Swedish physicist Oskar Klein, a theoretical physicist much better known for his work
on quantum theory. Early in his career he worked on Kaluza's theory, and in 1926 he
published some additions in the paper The Atomicity of Electricity as a Quantum
Theory Law[19]. His major addition to the theory was linking it to the new theory of
quantum mechanics.

Essentially, what Klein did was curl up, or compactify, the fifth dimension. This
compactification is why the fifth dimension is not observed. Klein's condition was
based on the Bohr-Sommerfeld quantization rule, given by

\oint p\, dr = Nh. \qquad (2.31)

The specific quantization rule that Klein used was

p_5 = \frac{Nh}{l}, \qquad (2.32)

where p_5 is the particle's momentum in the fifth dimension, N is the quantum number,
l is the period, or circumference, of the fifth dimension, and h is Planck's constant.

Because this motion is quantized, motion in the fifth dimension manifests as standing
waves at different excitation levels. These excitations form a tower of particles with
ever increasing masses. This could be what gives dark matter its mass: a particle that
appears to be sitting still in our normal space-time but is moving in the fifth dimension
will have kinetic energy, which to us simply appears as additional mass of the particle.
Klein also calculated the size of the fifth dimension. He did this through

l = \frac{hc\sqrt{2\kappa}}{e}, \qquad (2.33)

where e is the electron charge and \kappa is Einstein's gravitational constant. The size
of the dimension came out to be about 0.8 \times 10^{-30} cm, which is about the size
of the Planck length, and about 10^{20} times smaller than the diameter of the nucleus
of an atom. Now, there are still some problems with what is now known as Kaluza-Klein
theory. The largest, of course, is that Klein's modification still has the flaw of the
scalar field that Kaluza's theory had, so ultimately it fell out of favor as well.
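As a quick numerical check of Klein's estimate (a sketch in Gaussian units, assuming the reconstructed Eq. 2.33 with \kappa = 8\pi G/c^4, one common convention for Einstein's gravitational constant):

```python
import math

# Gaussian (CGS) units
h = 6.626e-27   # Planck's constant, erg s
c = 2.998e10    # speed of light, cm/s
G = 6.674e-8    # Newton's constant, cm^3 g^-1 s^-2
e = 4.803e-10   # electron charge, esu

kappa = 8 * math.pi * G / c**4          # assumed convention for kappa
l = (h * c / e) * math.sqrt(2 * kappa)  # Eq. 2.33
print(f"l ~ {l:.1e} cm")                # ~0.8e-30 cm, as Klein found
```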
Chapter 3
Universal Extra Dimensions
Theory
Universal Extra Dimensions Theory (UED Theory) is a modern Kaluza-Klein theory.
UED theory was first proposed in 2001 by Appelquist, Cheng, and Dobrescu, and
it differs from other modern theories because it allows all Standard Model fields to
propagate in the extra dimensions[20]. These particles gain mass just as in Kaluza's
theory, through the excitation of standing waves in the extra dimension. In UED
theory the extra dimension is compactified on an S^1/Z_2 orbifold. This creates a
tower of higher dimensional excited particles, where the masses of these particles scale
according to 1/R, where R is the size of the extra dimension. The Kaluza-Klein number
(KK-number) is essentially a measure of the momentum of a particle in the extra
dimension. KK-parity preserves the evenness or oddness of this number, which keeps
the lightest Kaluza-Klein particle (LKP) stable; the LKP can be a natural dark matter
candidate with the appropriate relic density[20][21]. A more detailed review can be
found in [22].

UED theory can be adapted to any number of extra dimensions, but the most
common number of dimensions used is five: four normal dimensions and one extra
dimension of radius R. The Lagrangian for the 5-dimensional UED model is given
by[23]:
\mathcal{L}_{5D} = -\frac{1}{4} G^{A}_{MN} G^{AMN} - \frac{1}{4} W^{I}_{MN} W^{IMN} - \frac{1}{4} B_{MN} B^{MN} + (D_M H)^\dagger \left( D^M H \right) + \mu^2 H^\dagger H - \frac{\lambda}{2} \left( H^\dagger H \right)^2 + i\, \bar{\Psi}\, \Gamma^M D_M \Psi + \left( \lambda_E\, \bar{L} E H + \lambda_U\, \bar{Q} U \tilde{H} + \lambda_D\, \bar{Q} D H + \text{h.c.} \right) + \ldots, \qquad (3.1)
where G_{MN}, W_{MN}, and B_{MN} are the 5D SU(3)_C \times SU(2)_W \times U(1)_Y
gauge field strengths. The covariant derivatives are defined as

D_M = \partial_M + i \hat{g}_3 G^A_M T^A + i \hat{g}_2 W^I_M T^I + i \hat{g}_1 Y B_M,

where the \hat{g}_i are the 5D gauge couplings, with engineering dimension m^{-1/2}. The
ellipsis represents higher order terms which are not relevant to us, and h.c. represents
the Hermitian conjugate of the preceding terms.
The next step in the theory is to specify the compactification of the extra dimension.
The simplest choice would be to compactify the extra dimension on the circle S^1,
but unfortunately this causes an issue with the chirality of fermions. What
works for this 5D theory is the S^1/Z_2 orbifold. This orbifold can be pictured by
thinking of the normal four dimensions as a straight line, with the fifth dimension
a circle attached to the line at two points. Next we need to specify how the fields
transform on the orbifold. When a Fourier expansion of the gauge fields is made in
terms of the A_\mu and A_5 components, the compactification requires that A_\mu (where
\mu represents the normal four dimensions) is even under a y \to -y transformation and
that A_5 is odd under a y \to -y transformation, where the y coordinate represents
the 5th dimension. We can visualize the periodicity underlying this expansion as the
statement that a particle moving around the circle is in the same state before and after
one revolution, i.e. a cyclic boundary condition.
The Fourier expansion gives us the components of the gauge and scalar fields
in KK modes[22]:

(H, A_\mu)(x^\mu, y) = \frac{1}{\sqrt{\pi R}} \left[ (H_0, A_{\mu,0})(x^\mu) + \sqrt{2} \sum_{n=1}^{\infty} (H_n, A_{\mu,n})(x^\mu) \cos\left( \frac{ny}{R} \right) \right], \qquad (3.2)

A_5 = \sqrt{\frac{2}{\pi R}} \sum_{n=1}^{\infty} A_{5,n}(x^\mu) \sin\left( \frac{ny}{R} \right), \qquad (3.3)
where H represents the Standard Model Higgs boson and is even under the orbifold
transformation; A_5 is odd, as needed for chirality. When n = 0, the terms are
the zero-mode particles and are in fact just the normal Standard Model
particles. However, there exists a tower of higher dimensional excitations of these
Standard Model particles in the 5th dimension. Another important thing to note
about this orbifold is that the boundary conditions at y = 0, \pi R are:

A = \frac{\partial^2 A}{\partial y^2} = 0 \;\text{ for odd fields}, \qquad \frac{\partial A}{\partial y} = 0 \;\text{ for even fields}. \qquad (3.4)

The next step is to transform this model into a 4-dimensional effective theory.
This is done by substituting the expanded fields back into the Lagrangian and then
integrating over the 5th dimension, using the limits y = 0 \to \pi R for even fields and
y = -\pi R \to \pi R for the odd fields. Doing the integration yields a solution of the
form

K_0\, A_{\mu,0} + K_n \sum_{n=1}^{\infty} F\left( A_{\mu,n},\, 1/R \right). \qquad (3.5)
The K's are constants that include R, and F is a function of A_{\mu,n} and 1/R.
This expansion should have observable consequences that will help determine what
exactly dark matter is, if this theory pans out. We should see a set of zero
mode particles (the Standard Model particles) and then a tower of other particles whose
masses scale with 1/R.

R must be very small, because otherwise extra dimensional excitations would
already have been observed. This means that these other particles will be massive. This
could potentially pose a problem for the theory: in particle physics, the heavier a
particle is, the more unstable it is and the more likely it is to decay into lighter, more
stable particles. This implies that these massive particles, created in the Big Bang,
would have long since decayed, and thus they would not be a viable dark matter
candidate.
However, this turns out not to be a problem, due to an inherent property of the
compactification known as KK-parity. Here n is the Kaluza-Klein number (KK-number).
The KK-number is essentially a measure of the particle's momentum in the extra
dimension, and if the theory were compactified on the circle S^1, the KK-number
would be conserved. Compactifying on the S^1/Z_2 orbifold breaks this symmetry
and instead requires KK-parity to be conserved. KK-parity means that the evenness
or oddness of the KK-number is conserved in an interaction. This means that the
lightest KK-particle cannot decay (n = 1 cannot go to n = 0). Thus all the heavier KK
particles will by now have decayed down to the LKP, which would still exist and could
be dark matter.
Cheng, Matchev, and Schmaltz investigated which particle could be the LKP, and
in so doing developed the particle spectrum shown in Figure 3.1[24]. The mass of the
particles is proportional to 1/R at tree level, but quantum corrections induce a
splitting between the particles.

Figure 3.1: Mass Spectrum of the UED particles with one Loop Corrections for Mass

They found that the first excitation of the photon is the LKP, which is called the B^{(1)}.
The mass spectrum depends on two parameters: R, the size of the extra
dimension, and Λ, the cut-off scale of the theory. Roughly, ΛR measures
how many excitation modes can be counted before the theory breaks down.
The Standard Model fields appear as towers of Kaluza-Klein states whose tree
level masses are given by

m^2_{X^{(n)}} = \frac{n^2}{R^2} + m^2_{X^{(0)}}, \qquad (3.6)

where X^{(n)} is the nth Kaluza-Klein excitation of the Standard Model field, R is the
size of the extra dimension (R \sim \text{TeV}^{-1}, using units with c = 1), and X^{(0)}
stands for the ordinary Standard Model particle. Corrections to the KK masses are
given by loop diagrams traversing the extra dimension and by brane-localized kinetic
terms at the orbifold boundaries. The corrections for the B^{(n)} are given by:
\delta\left( m^2_{B^{(n)}} \right) = \frac{g'^2}{16\pi^2 R^2} \left( -\frac{39}{2}\, \frac{\zeta(3)}{\pi^2} - \frac{n^2}{3}\, \ln \Lambda R \right). \qquad (3.7)
The corrections for the KK quarks Q^{(n)} are given by

\delta\left( m_{Q^{(n)}} \right) = \frac{n}{16\pi^2 R} \left( 6 g_3^2 + \frac{27}{8} g^2 + \frac{1}{8} g'^2 \right) \ln \Lambda R. \qquad (3.8)
The goal of my project is to constrain R and Λ in UED theory using the latest
data from the XENON100 direct detection experiment. In order to constrain R and
Λ we will have to analyze cross sections. A cross section measures the likelihood of an
interaction between particles; in this case, the elastic scattering of the LKP from a
nucleus in a detector. The XENON100 detector target is Xe-131, and
hence the dominant contribution is the elastic scattering of the LKP off of quarks. The
leading Feynman diagrams for B^{(1)}-quark elastic scattering are given by[25]:

Figure 3.2: Feynman Diagrams for B^{(1)}-Quark Elastic Scattering
Going through the cross section calculation and performing some numerical
calculations, one finds that the cross section is given by[22]

\sigma^{B^{(1)}}_{n,SI} \simeq 1.2 \times 10^{-10}\ \text{pb} \left( \frac{1\,\text{TeV}}{m_{B^{(1)}}} \right)^2 \left[ \left( \frac{100\,\text{GeV}}{m_h} \right)^2 + 0.09 \left( \frac{1\,\text{TeV}}{m_{B^{(1)}}} \right)^2 \left( \frac{0.1}{\Delta} \right)^2 \right]^2, \qquad (3.9)

where m_h is the Higgs mass and Δ is the fractional mass splitting between the first
KK quarks and the B^{(1)}.
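Eq. 3.9 is simple to evaluate numerically; a sketch (the inputs m_h = 120 GeV and Δ = 0.1 are illustrative placeholders, not the values adopted later in this work):

```python
def sigma_SI_pb(m_B1_TeV, m_h_GeV, delta):
    """Spin-independent B(1)-nucleon cross section of Eq. 3.9, in pb."""
    term_h = (100.0 / m_h_GeV) ** 2
    term_q = 0.09 * (1.0 / m_B1_TeV) ** 2 * (0.1 / delta) ** 2
    return 1.2e-10 * (1.0 / m_B1_TeV) ** 2 * (term_h + term_q) ** 2

# Illustrative point: m_B1 = 0.5 TeV, m_h = 120 GeV, Delta = 0.1
print(f"{sigma_SI_pb(0.5, 120.0, 0.1):.2e} pb")   # ~5e-10 pb ~ 5e-46 cm^2
```

Cross sections of this size sit just below current direct detection limits, which is exactly why XENON100 data can constrain the (R, Λ) parameter space.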
Chapter 4
The XENON Experiments
The XENON experiments are a series of direct detection experiments currently under
way. The experiments are located at the Gran Sasso National Laboratory, under the
Gran Sasso mountain in Italy, the largest underground particle physics laboratory in
the world. The first phase of the project, known as XENON10, took data from March
2006 through October 2007. The second phase, XENON100, is currently running, and
the third and final phase of the project, XENON1T, is in the design phase.
4.1 The Detector
The XENON experiments use very pure liquid xenon as the detection medium.
Xenon has an atomic mass of 131, which implies that there should be a high rate
of spin-independent interactions between dark matter and xenon, because the cross
section is proportional to the square of the atomic mass. Xenon is an effective target
material because it is self-shielding and has a high stopping power, owing to its large
atomic number and high density (~3 g/cm³). Background gamma rays are stopped
around the edges of the detector, so the central region has a low background[26].
Another reason that xenon is used is that it works well as both a scintillator and an
ionizer; it has the highest scintillation yield among the noble liquids[27], which allows
for easier detection. Xenon is also radiologically pure, having no long-lived radioactive
contaminants except for krypton, and krypton can be separated from the xenon
through well established methods. Xenon is also relatively easy to cool to cryogenic
temperatures, which helps reduce the background signal. Xenon should also be more
sensitive to lower-energy recoils than the materials used in various other dark matter
searches, as shown in Figure 4.1[27].
Figure 4.1: Rate of Detection by Elements vs Recoil Energy
The XENON100 detector itself consists of a position-sensitive xenon time projection
chamber (XeTPC), shown in Figure 4.2[27]. The position sensitivity of
the detector plays a key role in reducing the background: the detector can localize
events with millimeter precision in all spatial dimensions, so researchers can select the
volume in which the background is at a minimum[28]. Inside the chamber there are
161 kg of liquid xenon, of which 99 kg are used as a scintillator veto; the remaining
62 kg serve as the active target, optically separated from the rest in a cylinder of
height 30 cm and radius 15 cm.

Figure 4.2: XENON100 Detector

The XENON detector discriminates background by simultaneously measuring the
charge and light within the detector through the use of 242 photomultiplier tubes[28].
Through the use of these simultaneous measurements, more than 99.5% of the
background is rejected[29].
The XENON10 detector was basically a smaller version of the XENON100 detector
and was able to limit the WIMP cross section to σ_SI = 8.8 × 10⁻⁴⁴ cm²[30],
while XENON100 is projected to limit the cross section to σ_SI = 2 × 10⁻⁴⁵ cm²,
and the planned XENON1T experiment hopes to reach σ_SI < 10⁻⁴⁶ cm²[27]. The
XENON100 detector has set the most stringent limits on dark matter interactions to
date[28].
4.2 Results
The most recent results from the XENON100 experiment were published
in April 2011. The XENON collaboration analyzed data observed between
January and June of 2010, collecting 100.9 live days of data in total. From calculation
of their efficiencies and analysis of the background, the XENON collaboration
predicted that there should be 0.31^{+0.22}_{-0.11} single-scatter nuclear recoils in their
data set, of which 0.11^{+0.08}_{-0.04} events are expected to look like a WIMP
interaction[28]. There are also electromagnetic recoil events that must be subtracted
out; after those cuts, 1.14 ± 0.48 such events are expected in the WIMP search
region[28]. Another potential source of misidentified events comes from anomalous
leakage due to double-scatter gamma events, estimated at about 0.56^{+0.21}_{-0.27}
events. Combining all of these sources leads to a total background estimate in the
WIMP search region (with 99.75% electromagnetic recoil rejection), for the 100.9 days
of exposure of the 48 kg target material, of 1.8 ± 0.6 events[28].

After performing the data cuts, the remaining data was unblinded, and the XENON
collaboration found that there were 3 events passing all requirements for single-scatter
nuclear recoil events that also occurred in the expected WIMP region. A plot
of their data is shown in Figure 4.3[28].
This plot shows the recoil energy versus a discrimination parameter built from the
scintillation light and the ionization electrons. The WIMP search region is boxed in
by the dashed lines between 8.4 and 44.6 keV_nr (nuclear-recoil equivalent energy).
The gray points show the nuclear recoil distribution measured with a neutron source.
The three events that survived all the data cuts are highlighted in red.
Figure 4.4[28] shows the location of each event inside the target material.
Figure 4.3: Event Distribution using a Discrimination parameter

Figure 4.4: Event Distribution within the Target Volume

This plot shows the distribution of events in the target, where the gray points are
events measured with a neutron source during the data-taking process and the red
points are the events that passed all the cuts. Because 1.8 ± 0.6 events is the
expected background and only 3 events were observed in total, this does not
constitute evidence for dark matter: there is a 28% chance that all of these events
are due to the background signal[28], which is not nearly strong enough evidence to
claim that the XENON collaboration has detected dark matter.
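As a rough cross-check (my own arithmetic, not part of the published analysis), a simple Poisson estimate with a mean background of b = 1.8 events gives

P(N \geq 3) = 1 - e^{-1.8}\left(1 + 1.8 + \frac{1.8^{2}}{2!}\right) \approx 0.27,

close to the quoted 28%, which additionally folds in the uncertainty on the background estimate.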
This data is still very useful, as it can be used to develop stricter limits on the
spin-independent WIMP-nucleon elastic scattering cross section based on standard
assumptions. A plot of this limit is shown here in Figure 4.5[28].
Figure 4.5: XENON100 Limits on WIMP-Nucleon Cross Section vs WIMP Mass
This plot shows the limit on the WIMP-nucleon cross section as a function of WIMP
mass at a 90% confidence level (thick blue line), together with the expected
sensitivity within one and two standard deviations (shaded blue bands). The plot
also includes the most recent results of several other dark matter direct detection
experiments for comparison. The maximum sensitivity reached by this experiment is
σ = 7.0 × 10⁻⁴⁵ cm², which occurs at a WIMP mass of m_χ = 50 GeV/c²[28]. This result
rules out a significant fraction of previously unreachable parameter space and also
excludes a region that should be attainable by the Large Hadron Collider, where some
predict supersymmetric WIMP dark matter could exist. An interesting note is that
these results conflict with the reports of the DAMA collaboration and, more
recently, the CoGeNT detector, which indicate that they may have seen light-mass
WIMPs.
Chapter 5
Results and Conclusions
The general goal of this project was to study Universal Extra Dimensions Theory
and place limits on it based on results from direct detection experiments. This can
be done by limiting the theory's two parameters, R and Λ. Constraining these
fundamental parameters of UED Theory involved a two-step process. The first step
was writing a program that used all the physics of direct detection to calculate
the cross section that would be required for a specific number of detections. The
second step took a given mass and mass splitting and calculated the required values
of R and Λ.
5.1 The Direct Detection Program
The direct detection portion of my project was a program entitled
rate_make_trevor.f. It built upon and modified code from George Reifenberger's
Master's thesis[31]. His program, entitled rate_make.f, was originally designed for
use with a generic direct detection experiment, which could later be modified to
fit whatever parameters a specific experiment needed. Reifenberger's aim was to
show how direct detection exclusion curves depend on the choice of nuclear form
factor. The program must be given the A and Z of the target material and the
detection range of nuclear recoil energies. The next input the program requires is
the nuclear form factor; the available form factors are Sum of Gaussians,
Fourier-Bessel, Lewin-Smith Helm, DarkSUSY Helm, and the two-parameter Fermi. The
program also assumes a number of detected events as a limit, which must be
supplied. The program outputs
the WIMP-nucleon cross section for a given WIMP mass, and to do so it essentially
solves Eq. (1.6) for σ_p, which is hidden inside σ_i. This gives:

\sigma_p = \frac{2\, m\, \mu_p^2\; N_{E_1}^{E_2}}{\rho \sum_i \int_{E_1}^{E_2} A_i^2\, |F_i(E)|^2\, \eta(E,t)\, \epsilon_i(E)\, dE} \qquad (5.1)
where η(E, t) is the velocity distribution integral, N_{E_1}^{E_2} is the expected
number of events with recoil energy in the given range, m is the neutralino mass,
μ_p is the reduced neutralino-proton mass, ρ is the local halo WIMP density,
F_i(E) is a nuclear form factor, ε_i(E) is the effective exposure, and the
summation runs over whatever nuclear species make up the detector. The program
first calculates a solution to η(E, t) and then uses Gaussian quadrature to
evaluate the integral over the given energy range. Since the WIMP mass is unknown,
this is repeated for masses over the range 1-1000 GeV/c², and the program writes
the mass and the WIMP-nucleon cross section to an output file.
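Concretely, an N-point Gauss-Legendre rule (the program uses N = 48; the abscissas x_j and weights w_j appear explicitly in the Appendix A listing) approximates the energy integral as

\int_{E_1}^{E_2} f(E)\, dE \approx \frac{E_2 - E_1}{2} \sum_{j=1}^{N} w_j\, f\!\left(\frac{E_2 - E_1}{2}\, x_j + \frac{E_1 + E_2}{2}\right).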
In order to reproduce the XENON100 results, I had to make a few modifications
to the older code. The parameters I used were A = 131 and Z = 54 for xenon, and I
used the Lewin-Smith Helm form factor. The total exposure (mass times time) after
all the cuts was 6000 kg-days. One major modification was adding the efficiency of
the XENON detector. The actual parameterizations were not published by the XENON
collaboration, but an approximate efficiency was published in a paper by
Savage et al. The efficiency function given in Savage's paper is[32]:
\epsilon(E) = 0.46\left(1 - \frac{E}{135\ \mathrm{keV}}\right), \qquad (5.2)
and the energy resolution was given by[32]:
\sigma(E) = (0.579\ \mathrm{keV})\sqrt{\frac{E}{\mathrm{keV}}} + 0.021\,E. \qquad (5.3)
Therefore, using the energy efficiency, the total exposure in kg-days, and other
parameters such as the total number of detected events, we can reproduce the
XENON100 direct detection exclusion curve. This is shown here in Figure 5.1.
Figure 5.1: XENON100 Limits: WIMP-Nucleon Cross Section vs WIMP Mass
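For reference, a run of the modified program for the XENON100 configuration might look like the following (the prompts are those printed by the code in Appendix A; the energy window shown is the 8.4-44.6 keVnr WIMP search region):

Please enter A of the target material
131
Please enter the Z of the target material
54
Enter your lower limit of integration
8.4
Enter your upper limit of integration
44.6
Which Form Factor do you wish to use?
SOG(1), FB(2), Lewin-Smith Helm(3), DarkSUSY Helm(4), Fermi(5)
3

The mass and cross-section pairs are then written to the output file.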
5.2 Finding R and Λ
The second step in the process was to determine the values of R and Λ that are
allowed for each WIMP mass and cross section. Recall that R represents the size of
the extra dimension and Λ is the cut-off scale of the theory. Calculations
involving UED Theory and direct detection were first worked out by Servant and
Tait[25] and are also presented by Hooper and Profumo[22]. Servant and Tait derived
the equations that give the expected spin-independent LKP-nucleon cross section.
They found that:
\sigma_{B^{(1)}n,\,SI} \simeq 1.2\times10^{-10}\ \mathrm{pb}\left(\frac{1\ \mathrm{TeV}}{m_{B^{(1)}}}\right)^{2}\left[\left(\frac{100\ \mathrm{GeV}}{m_h}\right)^{2} + 0.09\left(\frac{1\ \mathrm{TeV}}{m_{B^{(1)}}}\right)^{2}\left(\frac{0.1}{\Delta}\right)^{2}\right]^{2}, \qquad (5.4)
where m_h is the mass of the Higgs boson, which we have chosen to be 134 GeV, and
Δ ≡ (m_{q^{(1)}} − m_{B^{(1)}})/m_{B^{(1)}} is the mass splitting between the LKP
and the lightest Kaluza-Klein quark (LKQ), which turns out to be the first excited
state of the down quark.
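As an illustrative evaluation (my own numbers, chosen only to show the scale of Eq. (5.4)): for m_{B^{(1)}} = 500 GeV, Δ = 0.1, and m_h = 134 GeV,

\sigma_{B^{(1)}n,\,SI} \approx 1.2\times10^{-10}\ \mathrm{pb}\times(2)^{2}\times\left[(100/134)^{2} + 0.09\,(2)^{2}\,(1)^{2}\right]^{2} \approx 4.0\times10^{-10}\ \mathrm{pb} \approx 4\times10^{-46}\ \mathrm{cm}^{2},

which lies below the XENON100 sensitivity quoted in Chapter 4 and is consistent with masses this heavy surviving the Δ = 10% limit found below.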
The mass of the LKP can be found from

m_{B^{(1)}}^{2} = \frac{1}{R^{2}} + \frac{g'^{2}}{16\pi^{2}R^{2}}\left(-\frac{39}{2}\frac{\zeta(3)}{\pi^{2}} - \frac{1}{3}\ln \Lambda R\right), \qquad (5.5)
and the mass of the LKQ is found from

m_{d^{(1)}} = \frac{1}{R} + \frac{1}{16\pi^{2}R}\left(6g_3^{2} + \frac{1}{2}g'^{2}\right)\ln \Lambda R, \qquad (5.6)
where g' = 0.344144 is the electroweak gauge coupling, g_3 = 1.21565011 is the
strong gauge coupling, and ζ(3) ≈ 1.2020 is the Riemann zeta function evaluated at
3. We cannot solve these equations analytically unless we make a few
simplifications. We may write Λ as n/R: since R carries units of TeV⁻¹, the product
ΛR is unitless, so ΛR = n and we can replace ΛR with just n in the above equations.
We can also group all of our constants together, defining
k_1 = \frac{g'^{2}}{16\pi^{2}}, \qquad k_2 = -\frac{39}{2}\frac{\zeta(3)}{\pi^{2}}, \qquad k_3 = \frac{1}{3}, \qquad k_4 = \frac{1}{16\pi^{2}}\left(6g_3^{2} + \frac{1}{2}g'^{2}\right). \qquad (5.7)
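Numerically, with the couplings quoted above, these constants evaluate to approximately k₁ ≈ 7.50 × 10⁻⁴, k₂ ≈ −2.375, k₃ = 1/3, and k₄ ≈ 5.65 × 10⁻² (my own arithmetic). The smallness of k₁ and k₄ is why m_{B^{(1)}} stays within a fraction of a percent of 1/R in Eq. (5.10) below.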
Our equations for the masses become

m_{B^{(1)}}^{2} = \frac{1}{R^{2}} + \frac{k_1}{R^{2}}\left(k_2 - k_3\ln n\right), \qquad m_{d^{(1)}} = \frac{1}{R} + \frac{k_4}{R}\ln n. \qquad (5.8)
Now that we have equations for the masses of the particles as well as the mass
splitting, we essentially have two equations in the two unknowns R and n. Using
Maple, we can enter this system and solve for R and n; I will go into more detail
on this in the next section.
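To see why this system effectively reduces to a single equation for n, note that both masses in Eq. (5.8) scale as 1/R, so the mass splitting depends on n alone:

1 + \Delta = \frac{m_{d^{(1)}}}{m_{B^{(1)}}} = \frac{1 + k_4 \ln n}{\sqrt{1 + k_1\left(k_2 - k_3 \ln n\right)}},

which is solved numerically for n at each assumed Δ; R then follows from the m_{B^{(1)}} equation. This is, in effect, what the Maple worksheet of Appendix B does, and it explains why n comes out independent of the LKP mass in the results below.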
5.3 Results
The last step of my project used the programming capabilities of Maple. The first
step was to pick a mass for m_{B^{(1)}}, which I looped over from 1 GeV to
1000 GeV. For each mass value, I calculated the necessary parameters for 3
different mass splittings (I used values of 5%, 10%, and 15% for Δ). Knowing the
mass and the mass splitting gives two equations in two unknowns, so we are able to
solve for n and R in each case. I next used the mass m_{B^{(1)}} and the assumed
value of Δ to calculate the cross section for each LKP mass. Finally, I graphed the
cross section σ_SI versus m_{B^{(1)}} for each of the 3 values of Δ, along with the
cross-section data from my simulation of the XENON100 direct detection experiment.
My plot is shown here in Figure 5.2.
Figure 5.2: WIMP-Nucleon Cross Section vs WIMP Mass
From this plot, several things become immediately apparent. The first is that the
XENON100 data clearly rules out the possibility of low-mass UED models using
B^{(1)} as the LKP. This is because the XENON100 cross-section limits lie below the
cross sections found from UED theory, which means the XENON collaboration should
have detected these LKPs. With a mass splitting of 15%, the XENON100 data rules out
LKPs with a mass lighter than about 220 GeV. For a splitting of 10%, the XENON
collaboration rules out LKPs lighter than 275 GeV, and for a splitting of 5%, the
data rules out LKPs lighter than 388 GeV.
Table 5.1: Lower limits on the LKP mass for various assumed numbers of events

Δ      3 Events    2.4 Events    1.8 Events    1.2 Events
15%    220 GeV     230 GeV       238 GeV       270 GeV
10%    275 GeV     284 GeV       307 GeV       336 GeV
5%     388 GeV     407 GeV       428 GeV       483 GeV
For masses higher than the respective limit for each mass splitting, the XENON100
data cannot rule out the LKPs, because XENON100 was not sensitive enough to reach
that region of the parameter space.
I also looked at what limits are placed on the mass of the LKP if only the expected
background is assumed as the event limit. Essentially, lowering the assumed number
of events decreases the allowed cross section and eliminates more of the
lighter-mass LKPs. This data is shown here in Table 5.1.
The second thing that can be understood from Figure 5.2 is that the mass splitting
has a definitive effect on the WIMP-nucleon cross section. For both low- and
high-mass LKPs the mass splitting is less of a factor, but for the intermediate
mass range it has a substantial influence. The XENON100 data rules out more and
more of the low-mass LKPs as the mass splitting gets smaller.
This data gives some basic constraints on UED Theory. But what does this mean for
R and Λ? To determine constraints on R and Λ, I looked at n as a function of mass.
Recall that n comes from the fact that ΛR equals n, a real number. The value of n
should be severely constrained because we arbitrarily picked a mass splitting. The
plot of n versus the mass of the LKP is shown here in Figure 5.3.

Figure 5.3: Value of n vs WIMP Mass

As can be seen, the value of n is essentially constant for each mass splitting, and
it increases as the mass splitting percentage increases. Thus by picking a mass
splitting, as is usually done in the literature, we can directly constrain ΛR in
UED Theory.
I next looked at this value of R as a function of the mass of the LKP. In order to
do this I went back to the theory and used the square root of Eq. 5.5,
m_{B^{(1)}} = \sqrt{\frac{1}{R^{2}} + \frac{g'^{2}}{16\pi^{2}R^{2}}\left(-\frac{39}{2}\frac{\zeta(3)}{\pi^{2}} - \frac{1}{3}\ln n\right)}. \qquad (5.9)
Using Maple, I plugged in all of the constants and solved for R for each value of
Δ and m_{B^{(1)}}. This gave me R as a function of m_{B^{(1)}}, as shown here:
R = \frac{0.9990006343}{m_{B^{(1)}}} \quad \text{for } \Delta = 0.05,

R = \frac{0.9988903262}{m_{B^{(1)}}} \quad \text{for } \Delta = 0.10,

R = \frac{0.9987800302}{m_{B^{(1)}}} \quad \text{for } \Delta = 0.15. \qquad (5.10)
These functions are essentially identical and hence appear essentially the same on
the graph in Figure 5.4; this shows that R is insensitive to the mass splitting.
Recall from Figure 5.2 that for Δ = 0.15 the LKP cannot be lighter than 220 GeV;
this constrains R to be less than 0.0045 GeV⁻¹ for the 15% mass splitting. For
Δ = 0.10, LKPs lighter than 275 GeV were ruled out, which constrains R to be less
than 0.00365 GeV⁻¹. For Δ = 0.05, the LKP cannot be lighter than 388 GeV; this sets
the strictest limit, constraining R to be less than 0.00258 GeV⁻¹. Because R is the
size of the extra dimension in UED theory, we have constrained the most fundamental
parameter of UED theory.
In this project, my goal was to limit UED Theory by using the latest results from
direct detection experiments. To do this, I simulated the XENON100 data, calculated
values of R and Λ from UED Theory, and placed constraints by comparing them.
Figure 5.4: Value of R vs WIMP Mass
Table 5.2: Summary of Results (lower limits on the LKP mass, the corresponding n,
and the resulting upper limits on R)

LKP Mass   Δ      R (upper limit)   n
220 GeV    15%    0.0045 GeV⁻¹      13.858
275 GeV    10%    0.00365 GeV⁻¹     5.74
388 GeV    5%     0.00258 GeV⁻¹     2.3774
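Since Λ = n/R, the upper limits on R in Table 5.2 translate directly into lower limits on the cut-off scale (my own arithmetic from the table entries): roughly Λ ≳ 13.858/(0.0045 GeV⁻¹) ≈ 3.1 TeV for Δ = 15%, Λ ≳ 1.6 TeV for Δ = 10%, and Λ ≳ 0.9 TeV for Δ = 5%.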
I found that the mass of B^{(1)} must be greater than 220 GeV when using a mass
splitting of 15%, which in turn constrains n to be 13.858 and R to be less than
0.0045 GeV⁻¹. For a mass splitting of 10%, the mass must be larger than 275 GeV,
n is 5.74, and R must be less than 0.00365 GeV⁻¹. For the final splitting of 5%,
the mass must be greater than 388 GeV, n is 2.3774, and R must be less than
0.00258 GeV⁻¹. My results are summarized here in Table 5.2.
Chapter 6
Conclusions
This is an exciting time to be an astro-particle physicist. Many experiments
expected to shed light on the detection of dark matter are in progress, and others
have released their results recently. Many theories explaining dark matter are
being put to the test, one of them being Universal Extra Dimensions (UED) theory.
UED has a viable, natural dark matter candidate (the LKP), which is what makes it
an attractive explanation of dark matter. The theory can be characterized by two
parameters, R and Λ, which we have constrained using the latest direct detection
results of the XENON100 experiment.

To constrain UED theory with the current results from direct searches, I simulated
the XENON100 data and compared it to cross sections calculated directly from UED
theory for various values of the mass splitting, extracting the corresponding
values of R and Λ. We found that the smaller the mass splitting, the more
constrained the LKP mass and R become. Future work on this project could involve
simulating new results from other direct detection experiments and further limiting
UED theory. We could also work on finding constraints on the theory as a function
of the mass splitting.
Bibliography
[1] F. Zwicky, "On the masses of nebulae and clusters of nebulae," The Astrophysical
Journal, vol. 86, pp. 217-246, 1937.
[2] V. Rubin, "Dark matter in spiral galaxies," Scientific American, vol. 248,
pp. 96-108, 1983.
[3] D. Clowe et al., The Astrophysical Journal, vol. 648, pp. L109-L113, 2006.
[4] R. H. Cyburt, "Primordial Nucleosynthesis for the New Cosmology: Determining
Uncertainties and Examining Concordance," arXiv:astro-ph/0401091v2, 2004.
[5] N. Jarosik et al., "Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP)
Observations: Sky Maps, Systematic Errors, and Basic Results," arXiv:1001.4744v1,
2010.
[6] M. Boylan-Kolchin et al., "Resolving Cosmic Structure Formation with the
Millennium-II Simulation," arXiv:0903.3041, 2009.
[7] C. Alcock et al., "The MACHO Project: Microlensing Results from 5.7 Years of
LMC Observations," Astrophys. J., vol. 542, pp. 281-307, 2000.
[8] P. Tisserand et al., "Limits on the Macho Content of the Galactic Halo from the
EROS-2 Survey of the Magellanic Clouds," Astron. Astrophys., vol. 469, pp. 387-404,
2007.
[9] T. Falk, K. A. Olive, and M. Srednicki, "Heavy sneutrinos as dark matter,"
Physics Letters B, vol. 339, no. 3, pp. 248-251, 1994.
[10] S. A. Bonometto, F. Gabbiani, and A. Masiero, "Mixed dark matter from axino
distribution," Physical Review D, vol. 49, no. 8, pp. 3918-3922, 1994.
[11] S. Coleman, "Q-balls," Nuclear Physics B, vol. 262, no. 2, pp. 263-283, 1985.
[12] S. I. Blinnikov and M. Yu. Khlopov, "On possible effects of mirror particles,"
Sov. J. Nucl. Phys., vol. 36, p. 472, 1982.
[13] E. W. Kolb et al., "WIMPZILLAS!," arXiv:hep-ph/9810361v1, 1998.
[14] J. A. R. Cembranos, A. Dobado, and A. L. Maroto, "Brane-world dark matter,"
Physical Review Letters, vol. 90, no. 24, 241301, 2003.
[15] K. Garrett and G. Duda, "Dark Matter: A Primer," arXiv:1006.2483v2, 2011.
[16] G. Duda, A. Kemper, and P. Gondolo, JCAP, 0704:012, 2007.
[17] O. Adriani et al., "An anomalous positron abundance in cosmic rays with
energies 1.5-100 GeV," Nature, vol. 458, no. 7238, pp. 607-609, 2009.
[18] T. Kaluza, Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys.) 1921, 966,
1921.
[19] O. Klein, Z. Phys., vol. 37, p. 895, 1926.
[20] T. Appelquist, H.-C. Cheng, and B. Dobrescu, Phys. Rev. D, vol. 64, 035002,
2001.
[21] G. Servant and T. Tait, Nucl. Phys. B, vol. 650, p. 391, 2003.
[22] D. Hooper and S. Profumo, Phys. Rept., vol. 453, pp. 29-115, 2007.
[23] T. Flacke, D. Hooper, and J. March-Russell, Phys. Rev. D, vol. 73, 095002,
2006.
[24] H.-C. Cheng, K. Matchev, and M. Schmaltz, Phys. Rev. D, vol. 66, 036005, 2002.
[25] G. Servant and T. Tait, New J. Phys., vol. 4, 99, 2002.
[26] Minamino et al., "Self-shielding effect of a single phase liquid xenon
detector for direct dark matter search," arXiv:0912.2405v1, 2009.
[27] E. Aprile, "The XENON Dark Matter Search," talk at the WONDER Workshop, LNGS,
Gran Sasso, Italy, March 22, 2010.
[28] E. Aprile et al., "Dark Matter Results from 100 Live Days of XENON100 Data,"
arXiv:1104.2549v1, 2011.
[29] L. Baudis, "Results from the XENON10 Experiment," talk at the CHIPP Meeting,
PSI, August 15, 2007.
[30] J. Angle et al., "First Results from the XENON10 Dark Matter Experiment at the
Gran Sasso National Laboratory," arXiv:0706.0039v2, 2007.
[31] G. Reifenberger, M.S. Thesis, Creighton University, 2007.
[32] C. Savage et al., "Compatibility of DAMA/LIBRA dark matter detection with
other searches," arXiv:0808.3607v3, 2009.
APPENDIX A

Program Code

This is the computer code used to modify George Reifenberger's Exclusion Curve
Program. It is coded in Fortran 77.

      program rate
ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
c
c This program will calculate the WIMP-proton cross section
c through Eq.2 of hep-ph/0504010v2
c
ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
implicit none


double precision sigmap,mnuc,mwim,ccm,vearth,pi,mp,mn,rho,eps
double precision total,ratenum,ckm,etay,gerfc,trial,trial1,esure
double precision xysum,xydiff,xy1sum,xy1diff,xysum1,xydiff1
double precision xysum2,xydiff2
double precision sigmap2,mwim2

integer choice

parameter (eps = 2.908011000d41) !(KeV/c^2)*s for 6000 kg-day exposure
parameter (rho = 3.0d5) !KeV/(c^2*cm^3)
parameter (ccm = 2.99792458d10) !cm/s
parameter (ckm = 2.99792458d5) !km/s
parameter (vearth = 2.98d6) !cm/s
parameter (mp = 938272.31d0) !KeV/c^2
parameter (mn = 931494.013d0) !KeV/c^2
parameter (pi = 3.14159265)

c Efficiency Terms

double precision eff,eff1,effconst1,effconst2
parameter (effconst1 = 0.46d0)
parameter (effconst2 = 1.35d2)

c UED Theory Terms

double precision RR, Lambda, LR

c SOG terms

double precision qr(12),capq(12),capr(12),b(12)
double precision xx, gamma,f,f1,ff,ff1,rp,fermiKeV,q,q1
integer i,p,a,z

parameter (fermiKeV = 1.0d0/197326.9602d0) ! (KeV-fm)

c Fermi terms

double precision aa, bb, a1, b1, y1, result, y, n, sum
double precision n2,term1,term2,tot22,tot44
double precision x(48), w(48)
integer j

c FB terms

double precision r,norm,fbqr
double precision bbb(17),c(17)
integer v

c Factorial terms

integer k,fact,l,res
parameter (etay = 0.13517554)

c Extra terms

! real*8 trial,trial1
double precision consteta,nesc,xnum,xnum1,znum
double precision erf,erf1,erf2,erf3,erf4
parameter (znum = 2.94845987d0)

c LSHelm terms

double precision hc,ha,r1,j1,cpa

ccccccccccccccccccccccccccccccccc cccccccccccccccccccccccc
c Main code

write(*,*) "Please enter A of the target material"
read(*,*) a
!write(*,*)a

write(*,*) "Please enter the Z of the target material"
read(*,*) z
!write(*,*)z
mnuc = z*mp+(a-z)*mn

!write(*,*)"mnuc=",mnuc

write(*,*) "Enter your lower limit of integration"
read(*,*) aa
!write(*,*)aa

write(*,*) "Enter your upper limit of integration"
read(*,*) bb
!write(*,*)bb

write(*,*)""
write(*,*)"Which Form Factor do you wish to use?"
write(*,*) "SOG(1), FB(2), Lewin-Smith Helm(3), DarkSUSY
Helm(4), Fermi(5)"

read(*,*) choice


! write(*,*)"Enter a value"
! read(*,*)trial
! trial1=gerfc(trial,erf)
! write(*,*)trial
! write(*,*)"erf of",trial,"=",erf
! write(*,*)"erf of value is=",trial1


c write(*,*) "Enter WIMP Mass"
c read(*,*)mwim
c !write(*,*)mwim

open (unit=10,file='termsFB2.dat')

!write(10,*)"xnum"," term1"," xnum1"," term2"
!write(10,*)"y"," xnum"," y1"," xnum1"
!write(10,*)"SOG"," erfx+y"," erf(x-y)"
!write(10,*)"mwim"," sigmap"

nesc = 1-((2*5.11742284)*exp(-((5.11742284**2)))/dsqrt(pi))

!write(*,*)nesc

consteta = ((4*etay)/(dsqrt(pi)))*(exp(-(5.11742284**2)))

!write(*,*)consteta

ratenum = 3.0

!write(*,*)"ratenum=",ratenum

esure = 6000

!write(*,*)"exposure=",esure

!write(*,*)znum,etay,znum-etay
ccccccccccccccccccccccccccccccccccc cccccccccccccccccccccccc


ccccccccccccccccccccccccccccccccccc cccccccccccccccccccccccc
c Fermi Routine

c Define constants.

n = 48
n2 = 2 * n


c Define w and x

w(1)= 0.0007967920655
x(1)= 0.9996895038
w(2)= 0.001853960788
x(2)= 0.9983643758

w(3)= 0.002910731817
x(3)= 0.9959818429
w(4)= 0.003964554338
x(4)= 0.9925439003
w(5)= 0.005014202742
x(5)= 0.9880541263
w(6)= 0.006058545504
x(6)= 0.9825172635
w(7)= 0.007096470791
x(7)= 0.9759391745
w(8)= 0.008126876925
x(8)= 0.9683268284
w(9)= 0.009148671230
x(9)= 0.9596882914
w(10)= 0.01016077053
x(10)= 0.9500327177
w(11)= 0.01116210209
x(11)= 0.9393703397
w(12)= 0.01215160467
x(12)= 0.9277124567
w(13)= 0.01312822956
x(13)= 0.9150714231
w(14)= 0.01409094177
x(14)= 0.9014606353
w(15)= 0.01503872102
x(15)= 0.8868945174
w(16)= 0.01597056290
x(16)= 0.8713885059
w(17)= 0.01688547986
x(17)= 0.8549590334
w(18)= 0.01778250231
x(18)= 0.8376235112
w(19)= 0.01866067962
x(19)= 0.8194003107
w(20)= 0.01951908114
x(20)= 0.8003087441
w(21)= 0.02035679715
x(21)= 0.7803690438
w(22)= 0.02117293989
x(22)= 0.7596023411
w(23)= 0.02196664443
x(23)= 0.7380306437
w(24)= 0.02273706965
x(24)= 0.7156768123
w(25)= 0.02348339908
x(25)= 0.6925645366
w(26)= 0.02420484179
x(26)= 0.6687183100
w(27)= 0.02490063322
x(27)= 0.6441634037
w(28)= 0.02557003600
x(28)= 0.6189258401
w(29)= 0.02621234073
x(29)= 0.5930323647
w(30)= 0.02682686672
x(30)= 0.5665104185
w(31)= 0.02741296272

x(31)= 0.5393881083
w(32)= 0.02797000761
x(32)= 0.5116941771
w(33)= 0.02849741106
x(33)= 0.4834579739
w(34)= 0.02899461415
x(34)= 0.4547094221
w(35)= 0.02946108995
x(35)= 0.4254789884
w(36)= 0.02989634413
x(36)= 0.3957976498
w(37)= 0.03029991542
x(37)= 0.3656968614
w(38)= 0.03067137612
x(38)= 0.3352085228
w(39)= 0.03101033258
x(39)= 0.3043649443
w(40)= 0.03131642559
x(40)= 0.2731988125
w(41)= 0.03158933077
x(41)= 0.2417431561
w(42)= 0.03182875889
x(42)= 0.2100313104
w(43)= 0.03203445623
x(43)= 0.1780968823
w(44)= 0.03220620479
x(44)= 0.1459737146
w(45)= 0.03234382256
x(45)= 0.1136958501
w(46)= 0.03244716371
x(46)= 0.08129749546
w(47)= 0.03251611871
x(47)= 0.04881298513
w(48)= 0.03255061449
x(48)= 0.01627674484


c Now define the constants for the sum.

a1 = (aa+bb) / 2
b1 = (bb-aa) / 2
sum = 0

!write(*,*)'a1=', al, ' b1=', b1

c calculate the sum with a do loop
c the variable y in the function f(y). in this particular case
c f(y)=Big,nasty function

ccccc Enter eta(E,t) here then transform into outer do-loop

mwim=0
do mwim=20000000,200000000,1000

total = (.5*eps*(ccm**2)*rho*(a**2))*(((mwim+mp)**2)
&/(mwim**3*mp**2))*(0.5/(vearth*nesc))


!write(*,*)'total=',total

sum=0

j=0
do j=1,n

!write(*,*)'w(j)=', w(j)

y =( b1 * x(j) + a1)
y1 =( a1 - x(j) * b1)
!write(10,*)y

!write(10,*)y1

q = dsqrt(2*mnuc*y)
q1 = dsqrt(2*mnuc*y1)

ccccccc Added to account for XENON efficiency, based off of
ccccccc arXiv:0808.3607v3 [astro-ph] Eq. 34

eff = effconst1 * (1 - y/effconst2)

eff1 = effconst1 * (1 - y1/effconst2)

!write(*,*)'eff=', eff, 'eff1=', eff1

ccccccc
xnum=ckm*(sqrt(3*mnuc*y)*(mwim+mnuc))/(2*270*mwim*mnuc)

!write(*,*)'xnum=', xnum

xnum1=ckm*(sqrt(3*mnuc*y1)*(mwim+mnuc))/(2*270*mwim*mnuc)

!write(*,*)'xnum1=', xnum1

!write(10,*)y,xnum,y1,xnum1
ccccccc
xysum = xnum + etay
xydiff = xnum - etay

!write(*,*)xysum,xydiff
xy1sum = xnum1 + etay
xy1diff = xnum1 - etay

!write(*,*)xy1sum,xy1diff

call gerfc1(xysum,erf1)

!write(*,*)'erf1=', erf1

call gerfc2(xydiff,erf2)

!write(*,*)'erf2=', erf2

call gerfc3(xy1sum,erf3)


!write(*,*)'erf3=', erf3

call gerfc4(xy1diff,erf4)


!write(*,*)'erf4=', erf4



if(choice .eq. 1)then


call soga(a,z,y,mnuc,f,ff)
!write(*,*)ff

call sogb(a,z,y1,mnuc,f1,ff1)
!write(*,*)ff1


!write(10,*)ff,erf1,erf2

elseif(choice .eq. 2)then

call fbffa(a,z,y,mnuc,f,ff)
!write(*,*)ff

call fbffb(a,z,y1,mnuc,f1,ff1)
!write(*,*)ff1


!write(10,*)ff,erf1,erf2

elseif(choice .eq. 3)then

call lshelma(a,mnuc,y,f,ff)
!write(*,*)ff

call lshelmb(a,mnuc,y1,f1,ff1)
!write(*,*)ff1

elseif(choice .eq. 4)then

call dshelma(a,mnuc,y,f,ff)
!write(*,*)ff

call dshelmb(a,mnuc,y1,f1,ff1)
!write(*,*)ff1

elseif(choice .eq. 5)then

call fermia(q,a,z,ff)

call fermib(q1,a,z,ff1)

else


ff=0
ff1=0


endif


!write(10,*)ff,ff1

if(xnum .lt. (znum-etay))then


term1 = ff*(erf1-erf2-consteta)

elseif(xnum .gt. (znum+etay))then

term1 = 0

else

term1 = ff*(1-erf2-((4/dsqrt(pi))*(znum+etay-xnum)
&*dexp(-(znum**2))))
!write(*,*)term1
endif


if(xnum1 .lt. (znum-etay))then

term2 = ff1*(erf3-erf4-consteta)


elseif(xnum1 .gt. (znum+etay))then

term2 = 0

else

term2 = ff1*(1-erf4-((4/dsqrt(pi))*(znum+etay-xnum1)
&*dexp(-(znum**2))))
!write(*,*)term2
endif


!write(10,*)xnum,term1,xnum1,term2

ccccccc


sum = sum + (w(j)*(term1*eff+term2*eff1))
!write(10,*)sum

enddo

!write(10,*)mwim,sum
result = b1*sum
!write(10,*)mwim,result


sigmap = ratenum/(total*result)
sigmap2 = sigmap * 1E36 !Conversion to pb
mwim2 = mwim * 1E-9 !Conversion to TeV/c^2
write(10,*)mwim,sigmap

!call UED(mwim2,sigmap2,RR,Lambda,LR)

enddo
close(10)
end


!101 format(e14.8,1x,e14.8)


ccccccccccccccccccccccccccccccccccc ccccccccccccccccccccccccc


ccccccccccccccccccccccccccccccccccccc cccccccccccccccccccccccc
ccccccccccccccccccccccccccccccccccccc cccccccccccccccccccccccc
c erf(x-y) Routine


ccccccccccccccccccccccccccccccccccccc cccccccccccccccccccccccc
c erf(x+y) Routine

ccccccccccccccccccccccccccccccccccccc cccccccccccccccccccccccc
c factorial Routine

ccccccccccccccccccccccccccccccccccccc cccccccccccccccccccccccc
c SOG Routine




APPENDIX B

Maple Worksheet

This Maple worksheet does all the required calculations of the cross section, R,
and Λ.