© 2012 Hodge
All rights reserved
ISBN-13: 978-1469987361
ISBN-10: 1469987368
Contents

Acknowledgments                                                      xix
Preface                                                              xxi
1 Progenitors                                                          1
  1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . .   1
  1.2 The start . . . . . . . . . . . . . . . . . . . . . . . . . .    2
  1.3 Thales . . . . . . . . . . . . . . . . . . . . . . . . . . . .   4
  1.4 Democritus and Aristotle . . . . . . . . . . . . . . . . . . .   7
  1.5 Descartes and Newton . . . . . . . . . . . . . . . . . . . . .  19
5 Photon diffraction                                                 209
  5.1 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . .  212
    5.1.1 Hod action on field . . . . . . . . . . . . . . . . . . .  215
    5.1.2 Field action on a hod . . . . . . . . . . . . . . . . . .  215
7 Pioneer anomaly                                                    273
  7.1 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . .  275
  7.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . .  276
    7.2.1 Sample . . . . . . . . . . . . . . . . . . . . . . . . .   276
    7.2.2 Annual periodicity . . . . . . . . . . . . . . . . . . .   278
    7.2.3 Difference of a_P between the spacecraft . . . . . . . .   279
    7.2.4 Slow decline in a_P . . . . . . . . . . . . . . . . . . .  281
    7.2.5 Saturn encounter . . . . . . . . . . . . . . . . . . . .   281
    7.2.6 Large uncertainty of P11 80/66 . . . . . . . . . . . . .   281
    7.2.7 Cosmological connection . . . . . . . . . . . . . . . . .  282
  7.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . .   283
9 Distance calculation                                               309
  9.1 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . .  310
  9.2 Data and Analysis . . . . . . . . . . . . . . . . . . . . . .  312
  9.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . 320
and survival                                                         367
  The purpose of life is life . . . . . . . . . . . . . . . . . . .  369
  The nature of nature . . . . . . . . . . . . . . . . . . . . . . . 378
  Biological to Social Mechanisms . . . . . . . . . . . . . . . . .  389
  The Vital Way to life . . . . . . . . . . . . . . . . . . . . . .  395
Appendix                                                             413
  .1 Initial distance estimation . . . . . . . . . . . . . . . . . . 413
  .2 The many lines in the parameter relations are not random. . . . 415
List of Figures

1.1 The left figure shows the initial position of the tiles with an
    area of c². The right figure shows the rearranged position of
    the same tiles with an area of the sum of two squares, a² + b².
5.1 The left figure is a plot of the N_effT versus angle θ from the
    direction of the photon for N_hT = 10. The first six minima are
    at θ = 0 rad, 0.144 rad (a), 0.249 rad (b), 0.355 rad (c), and
    0.466 rad (d). The right figure is a plot of the longitudinal
    vs. latitudinal position of photons after 1000 intervals. The
    line labeled e is θ = π/2. The lines are labeled as the
    angles in the left plot. The position of photons along lines
    corresponding to minima of a photon's transmission pattern
    is what determines coherence. . . . . . . . . . . . . . . . . . 221
5.2
5.3
5.4
5.5
5.6
5.7
The left figure is a plot of the screen pattern of Young's experiment at L = 40 steps after the first mask and W_in = 6
steps. The filled squares are the data points. The thin line
connects the data points. The Fresnel equation fit is poor.
Therefore, the pattern is not a diffraction pattern. The right
figure shows the distribution of photons from the first mask
to the screen. The lines and the lower case letters are as in
Fig. 5.1. Random photons through a first slit fail to produce
a diffraction pattern, which indicates incoherence. However,
the position distribution shows coherence (see Fig. 5.1B). . . . 227
5.8
5.9
Plot of the calculated redshift z_H using Eq. (6.1) and D calculated using Cepheid variable stars for 32 galaxies (Freedman
et al. 2001; Macri et al. 2001) versus the measured redshift
z_m. The straight line is a plot of z_H = z_m. The circles indicate
the data points for galaxies with (l, b) = (290° ± 20°, 75° ± 15°).
(Reprinted with permission of Elsevier (Hodge 2006a).) . . . . . 243
6.2
6.3
6.4
6.5
6.6
6.7
6.8
6.9
The left plot is of the measured redshift z_m versus the angle A (arcdegrees) subtended from NGC 5353 (S0), (l, b, z) =
(82.61°, 71.63°, 8.0203×10⁻³). The open diamonds indicate
the data points for Source galaxies. The filled diamonds indicate the data points for Sink galaxies. The right plot is the
distance D (Mpc) from Earth versus A. The open squares
indicate the data points for galaxies with the D calculated
herein. The filled squares indicate the data points for galaxies with the D calculated using the Tully-Fisher relationship.
(Reprinted with permission of Elsevier (Hodge 2006a).) . . . . . 263
6.10 The plot of galactic latitude Glat (arcdegrees) versus galactic longitude Glon (degrees) approximately six arcdegrees around
NGC 5353. The open circles indicate the data points for
galaxies more than one arcdegree from NGC 5353. The filled
circles indicate the data points for galaxies within one arcdegree of NGC 5353. The + (crosshairs) indicates the
position of NGC 5353. (Reprinted with permission of Elsevier (Hodge 2006a).) . . . . . . . . . . . . . . . . . . . . . . . 264
6.11 The left plot is of the measured redshift z_m versus the angle
A (arcdegrees) subtended from NGC 2636 (E0), M = −17.9
mag, (l, b, z) = (140.15°, 34.04°, 7.3163×10⁻³). The open diamonds indicate the data points for Source galaxies. The
filled diamonds indicate the data points for Sink galaxies.
The right plot is the distance D (Mpc) from Earth versus
A. The open squares indicate the data points for galaxies
with the D calculated herein. The filled squares indicate
the data points for galaxies with the D calculated using the
Tully-Fisher relationship. . . . . . . . . . . . . . . . . . . . 265
6.12 The plot of galactic latitude Glat (arcdegrees) versus galactic longitude Glon (degrees) approximately six arcdegrees around
NGC 2636. The open circles indicate the data points for
galaxies more than one arcdegree from NGC 2636. The filled
circles indicate the data points for galaxies within one arcdegree of NGC 2636. The + (crosshairs) indicates the
position of NGC 2636. (Reprinted with permission of Elsevier (Hodge 2006a).) . . . . . . . . . . . . . . . . . . . . . . . 266
6.13 Plots of the angle A (arcdegrees) subtended from a target
galaxy versus the measured redshift z_m. The target galaxy
is shown on the A = 0 axis as a large, filled diamond. The
open diamonds indicate the data points for Source galaxies. The filled diamonds indicate the data points for Sink
galaxies. The data for the target galaxies are listed in Table 6.4. (Reprinted with permission of Elsevier (Hodge 2006a).) 267
6.14 The left plot is of the measured redshift z_m versus the angle
A (arcdegrees) subtended from NGC 1282 (E), M = −19.8
mag, (l, b, z) = (150°, −13.34°, 7.4727×10⁻³). The open diamonds indicate the data points for Source galaxies. The
filled diamonds indicate the data points for Sink galaxies.
The right plot is the distance D (Mpc) from Earth versus
A. The open squares indicate the data points for galaxies
with the D calculated herein. The filled squares indicate
the data points for galaxies with the D calculated using the
Tully-Fisher relationship. (Reprinted with permission of Elsevier (Hodge 2006a).) . . . . . . . . . . . . . . . . . . . . . . 268
9.1 Plot of log(v_rmax² K_ras) from HCa versus log(W). The line
    is the best fit and is a plot of log(v_rmax² K_ras) =
    2.138 log(W) − 1.099. . . . . . . . . . . . . . . . . . . . . . 314
9.2 Plot of the residual R of the log(v_rmax² K_ras) measurement
    and the line in Figure 9.1 versus log W. The lines are the best
    fit and are described in Table 9.2. The + sign, diamond, and
    square are for galaxies in RZ 1, RZ 2, and RZ 3, respectively.
    The R, F, and D denote the galaxies with rising, flat, and
    declining rotation curves, respectively. . . . . . . . . . . . . 315
9.3 Plot of surface brightness I (scaled distance of Digital Sky
    Survey) versus the distance from the center of the galaxy
    r (kpc) for NGC 2403 using Dc to calculate r. The profile
    is characteristic of a peak-shaped curve with a peak and
    high-slope sides. These profiles are found in RZ 1 and RZ 2
    galaxies. . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
9.4 Plot of surface brightness I (scaled distance of Digital Sky
    Survey) versus the distance from the center of the galaxy r
    (kpc) for NGC 7331 using Dc to calculate r. The profile is
    characteristic of a flat-shaped curve with a flat top and
    sides with a lower slope than RZ 2 galaxies. These profiles
    are found in RZ 3 galaxies. . . . . . . . . . . . . . . . . . . 317
9.5 Plot of the calculated distance Dcalc to the test galaxies versus the reported Cepheid distance Dc. The line is Dcalc = Dc.
    The correlation coefficient is 0.99 with an F test of 0.78. . . 320
10.2 Plot of the square of the rotation velocity v_rmax² (10³ km² s⁻²) at
the maximum extent of the RR versus B band luminosity
L (10⁸ erg s⁻¹) for the 95 sample galaxies. The 15 select
galaxies have error bars that show the uncertainty range in
each section of the plot. The error bars for the remaining
galaxies are omitted for clarity. The straight lines mark the
lines whose characteristics are listed in Table 10.3. The large,
filled circle denotes the data point for NGC 5448. The large,
filled square denotes the data point for NGC 3031. (Reprinted
with permission of Nova Science Publishers, Inc. (Hodge
2010).) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
10.3 Plot of the square of rotation velocity v_eormax² (10³ km² s⁻²) at
the extreme outer region versus B band luminosity L (10⁸ erg s⁻¹)
for the 50 sample galaxies. The 16 select galaxies have error
bars that show the uncertainty level in each section of the
plot. The error bars for the remaining galaxies are omitted
for clarity. The large, filled circle denotes the data point for
NGC 5448. The large, filled square denotes the data point
for NGC 3031. (Reprinted with permission of Nova Science
Publishers, Inc. (Hodge 2010).) . . . . . . . . . . . . . . . . . 338
10.4 Plot of maximum asymmetry Asy_max (10³ km² s⁻²) versus
|K⃗ a⃗_o| (10³ kpc km² s⁻²) for the 50 sample galaxies. The 13
select galaxies have error bars that show the uncertainty level
in each section of the plot. The error bars for the remaining galaxies are omitted for clarity. The large, filled
circle denotes the data point for NGC 5448. The large, filled
square denotes the data point for NGC 3031. (Reprinted with
permission of Nova Science Publishers, Inc. (Hodge 2010).) . . . 340
12.1 Behavior of v_l with feedback control for intermediate values
of k_l. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
List of Tables
10.3 Data for the integer values of a1 of Eq. (10.13) shown as the
plotted lines in Fig. 10.2. . . . . . . . . . . . . . . . . . . . 334
10.4 First approximation integer values for the select galaxies. . . 336
10.5 Values of the minimum correlation coefficients Ccmin, constants Kx, and exponent bases Bx for the first approximation
equations. . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
10.6 Asymmetry and |K⃗ a⃗_o| data for the select galaxies. . . . . . 341
10.7 Second approximation integer values for the select galaxies. . 344
11.1 Data for the host sample galaxies used in the c calculation. . 354
11.2 Data for the host sample galaxies used in the c and Mc
calculations. . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Acknowledgments
I appreciate the financial support of Cameron Hodge (my father) and Maynard Clark, Apollo Beach, Florida, while I was working on this project.
Dr. Michael Castelaz, Pisgah Astronomical Research Institute, One
PARI Dr., Rosman, NC, 28772 helped train me to write scientific works.
The anonymous reviewer of Hodge (2006a) inspired this investigation of
the Pioneer Anomaly within the context of the STOE.
This research has made use of the NASA/IPAC Extragalactic Database
(NED) that is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and
Space Administration.
This research has made use of the LEDA database (http://leda.univ-lyon1.fr).
This research has made use of the Wikipedia database
(http://en.wikipedia.org/wiki/Main_Page).
Preface
We are at a critical time in the evolution of our understanding of the universe. Cosmology models and elementary particle models are fundamentally
inconsistent. Science and the philosophy of life have also been considered
divergent. Technology advances during the last 30 years have allowed surprising discoveries. These observations indicate that the standard models
of cosmology and particle physics are likely incomplete. We are ready for
the next evolutionary step in understanding the universe. This future model
has already been named the Theory of Everything.
Each revolution in physics, such as Aristotle's physics, Newtonian mechanics, electromagnetism, and nuclear forces, has produced unanticipated
and far-reaching social consequences. Societies grow larger and nations
grow more powerful. Little wonder that people outside the physics community are becoming increasingly curious about the universe and increasingly
sophisticated.
However, the term Theory of Everything is meant to include only the
physics of cosmology and particle physics. Life and our social organization
are also part of our universe. The principles that apply to the physics of
the big and the small should also apply to life, our social organization, and
philosophy.
Before each revolution in thought, observational anomalies accumulate,
the accepted models become a patchwork of ad hoc modifications, societies
experience disintegration, and unifying several academic disciplines
seems necessary to develop a new view of the universe. I think I have discovered a single model that stitches the observational anomalies and social
needs into a seamless whole. The model, the Scalar Theory of Everything
(STOE), is the subject of this book.
A conceptual approach to theories and data is emphasized. This book
is ideal for the amateur scientist, professionals in other fields, and physics
outreach programs. Hopefully, the STOE will encourage the scientists of
Chapter 1
Progenitors
1.1 Introduction
1.2 The start
Stone Age (2.5 million years ago to 5000 BC) man followed the phases of
the Moon for better hunting at night and, later, the Sun (seasons) for
farming.
Babylonia (10,000 BC [?] to 1000 BC) studied the movement patterns of
heavenly bodies against the background of the celestial sky. Their interest was
to relate celestial movement to human events - astrology. Other old societies
wanted only a calendar, a timekeeper. The goal was to predict events
by recording them against a time measure and then looking for recurrent
patterns. A time scale linked to a king's reign had problems. If mention of a
king became politically incorrect, heads could roll. Occasionally, the new
king would erase the memory of the old king.
The division of time periods into multiples of 6 derives from Babylonian astrology: 60
seconds per minute, 60 minutes per hour, 24 hours per day, 12 zodiac signs.
Today the Sun passes through 13 constellations (Ophiuchus is the 13th), and
the duration in each constellation differs from ancient times, which reflects
the Earth's precession. Of the cultures before 1500 AD, only the Mayans
detected and measured the Earth's precession. The Egyptians 4,800 years
ago referred to Thuban, not Polaris, as the pole star. The ancients did observe that celestial events occurred in cycles. This began the philosophical
concept that nature operates in eternal cycles.
Physics was the province of priests. Natural phenomena were explained
idiosyncratically by reference to the will of capricious, anthropomorphic gods and heroes. There was little for humans to
understand or try to predict. The action of the gods followed a few general principles. The gods came in male-female contending opposites. The
gaining of strength by one set the stage for the rise of the other, the yin-yang
of oriental fame. Today, this may be viewed as a negative feedback process.
The gods and the universe behaved much as humans behaved. Today, this
may be seen as similar to the fractal or self-similar philosophy.
The ancients were well aware of the cycles of the moon in hunter societies and the cycle of the seasons in farming societies. Occasionally, a
change was made in method or machine that became permanent. One set
of philosophies that went east from Babylon suggested eternal cycles of
the universe. The other set of philosophies that went west from Babylon
suggested a beginning of the universe. This debate rages today.
Mathematics was little more than tallying counts with addition and
subtraction. Today, we still use the Roman numerals I, II, III, etc.
Multiplication was difficult. Measuring was tallying the number of standard
units, which is called a metric today. However, the standards were ill-defined
by today's usage.
Many cultures had requirements to tally. As their needs became more
complex, so too did their mathematics and physics. The Chinese developed a level of mathematics well in advance of western civilization. Chinese mathematics, like their language, was very concise and problem-based.
The problems addressed were the calendar, trade, land measurement, architecture, government (taxes), and astronomy. Their method of proof and of
gaining knowledge was Proof by Analogy. That is, after understanding
a particular arrangement of events or circumstances, inferences are made
about similar arrangements. Thus, the Chinese were able to formally deduce and predict by reasoning well in advance of western culture. Western
culture became oriented to Proof by Axiom from the Greeks.
This book is about the Theory of Everything, which is the next theoretical physics development. Most of the experiments considered were done in
the last 200 years in the west. Accordingly, the development of knowledge
follows the western tradition.
1.3 Thales
Thales of Miletus (c. 624 BC - c. 546 BC) was a pre-Socratic Greek philosopher and one of the Seven Sages of Greece. Many, most notably Aristotle,
regard him as the first philosopher in the Greek tradition. Thales
aimed to explain natural phenomena via rational explanations that referenced natural processes themselves, without reference to mythology. Thales
made a distinction between priest and philosopher. Almost all of the other
pre-Socratic philosophers follow him in attempting to provide an explanation of ultimate substance, change, and the existence of the world with
reference to concepts men may understand and use. Those philosophers
were also influential. This definition of science provides the limit of science
as the knowledge that human understanding may use to predict and to
cause events.

Eventually, Thales's rejection of mythological explanations became an
essential idea for the scientific revolution. He was also the first to define
general principles and set forth hypotheses. As a result, Thales has been
dubbed the Father of Science. Democritus may be more deserving of this
title.
Thales used geometry to solve problems such as calculating the height
of pyramids and the distance of ships from the shore. He is credited with
the first use of deductive reasoning applied to geometry by deriving four
corollaries to Thales's Theorem. As a result, he has been hailed as the first
true mathematician and is the first known western individual to whom a
mathematical discovery has been attributed. The Chinese, with their Proof
by Analogy, had developed these relationships centuries before Thales.
Pythagoras of Samos (c. 580 BC - c. 500 BC) was an Ionian (Greek)
philosopher and founder of Pythagoreanism, which we view, perhaps mistakenly, as a religious movement. He is often revered as a great mathematician and scientist because of the writings of his students. He is noted for
the Pythagorean theorem a² + b² = c². The usual proof given in textbooks
is based on Euclidean geometry and Greek logic. But Euclid had not yet been
born. The proof Pythagoras probably used was the rearrangement of physical
tiles as depicted in Fig. 1.1.
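The tile argument can be written out algebraically (a reconstruction for illustration, not the book's notation). In both arrangements of Fig. 1.1, four right triangles with legs a and b are removed from the same enclosing square of side a + b; the left arrangement's uncovered area is c², while the right arrangement's uncovered area is a² + b², hence

```latex
c^2 \;=\; (a+b)^2 - 4\cdot\tfrac{1}{2}\,ab
    \;=\; a^2 + 2ab + b^2 - 2ab
    \;=\; a^2 + b^2 .
```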
Figure 1.1: The left figure shows the initial position of the tiles with an area of c². The
right figure shows the rearranged position of the same tiles with an area of the sum of
two squares, a² + b².

Pythagoras discovered that musical notes could be described by mathematical ratios (Table 1.1) of the length of a stretched string. For example, a
taut string vibrates with a given fundamental tone. Dividing the string
into two (a 2:1 ratio) by holding the center still produces a harmonious
octave. Only certain other particular divisions of the string produce harmonious tones.
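A minimal sketch (mine, not from the book; the 440 Hz fundamental is an arbitrary modern choice) of how the string ratios of Table 1.1 map to tones: frequency is inversely proportional to the vibrating length, so a length ratio acts directly as a frequency multiplier.

```python
# Illustrative sketch: frequency is inversely proportional to the
# vibrating length of the string, so a length ratio from Table 1.1
# acts directly as a frequency multiplier.

def note_frequency(fundamental_hz, length_ratio):
    """Tone produced when the vibrating length is divided by length_ratio
    (2 for the 2:1 octave, 3/2 for the 3:2 perfect fifth)."""
    return fundamental_hz * length_ratio

f0 = 440.0                                  # arbitrary modern fundamental
octave = note_frequency(f0, 2)              # 2:1 ratio -> 880.0 Hz
perfect_fifth = note_frequency(f0, 3 / 2)   # 3:2 ratio -> 660.0 Hz
```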
Pythagoras believed in the harmony of the spheres. He believed that,
because the planets and the stars all moved in the universe according to cycles
(Table 1.2), these cycles could be translated into musical notes and, thus,
produce a symphony.
Why should these musical notes be pleasing to humans and describe the
planetary orbits? These patterns reflect a certain efficiency of operation on
various scales. The measurements need only change the scale to describe
various observations. Thus, human evolution selects the most efficient hearing mechanism.
These relationships suggest a hidden structure that, Pythagoras noted,
was of waves and wave behavior.
Today, the Fourier series algebraically transforms any periodic series
of numbers into a sum of sine and cosine functions. The sine function describes
a wave, such as the position of each element of a string in oscillation.
The Fourier series may be considered geometrically as circles and ellipses.
Thus, any periodic occurrence, such as planetary orbits, can be described as a
combination of circles.
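The transformation can be sketched in a few lines of Python (my illustration, not from the book): a periodic series of samples is projected onto sines and cosines and then rebuilt from them, showing that the series really is a combination of circular motions.

```python
# Illustrative sketch: a direct discrete Fourier series in plain Python.
import math

def fourier_coefficients(samples):
    """Cosine (a) and sine (b) coefficients of one sampled period."""
    n = len(samples)
    a, b = [], []
    for k in range(n // 2 + 1):
        a.append(sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples)) * 2 / n)
        b.append(sum(s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples)) * 2 / n)
    return a, b

def rebuild(a, b, n):
    """Sum the sine/cosine terms back into the sampled signal."""
    out = []
    for i in range(n):
        total = a[0] / 2
        for k in range(1, len(a)):
            total += a[k] * math.cos(2 * math.pi * k * i / n)
            total += b[k] * math.sin(2 * math.pi * k * i / n)
        out.append(total)
    return out

# A periodic "orbit-like" signal: a large cycle plus a smaller epicycle.
n = 64
samples = [3.0 * math.sin(2 * math.pi * i / n)
           + 0.5 * math.cos(2 * math.pi * 7 * i / n) for i in range(n)]
a, b = fourier_coefficients(samples)
rebuilt = rebuild(a, b, n)
max_error = max(abs(s - r) for s, r in zip(samples, rebuilt))
```

The reconstruction matches the original samples to round-off error, which is the algebraic content of the claim in the text.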
Table 1.1: Musical ratios of string lengths and the notes they produce.

No.  ratio    note
0    1:1      unison
1    135:128  major chroma
2    9:8      major second
3    6:5      minor third
4    5:4      major third
5    4:3      perfect fourth
6    45:32    diatonic tritone
7    3:2      perfect fifth
8    8:5      minor sixth
9    27:16    Pythagorean major sixth
10   9:5      minor seventh
11   15:8     major seventh
12   2:1      octave

Table 1.2: Planets with cyclic motion.

Planet
Mercury
Venus
Mars
Jupiter
Saturn

Careful scholarship in the past three decades has found no evidence of
his contributions to mathematics or natural philosophy. However, no evidence doesn't mean he did not contribute. Because legend and obfuscation
cloud his work even more than with the other pre-Socratics, little can be said
with confidence about his life and teachings. We do know that Pythagoras
and his students believed that everything was related to mathematics, that
numbers (counting) were the ultimate reality, and that everything could
be predicted and measured through mathematics in rhythmic patterns or
cycles.
1.4 Democritus and Aristotle
ies to the center of the Earth as if the objects had some knowledge of where
the center of the Earth was.
It is unclear whether Aristotle meant that the object exerts its own force to
achieve its natural position, similar to inertia and to what Newton called
a fictitious force. If he did mean this, Aristotle may have anticipated the
Equivalence Principle.
This model suggested one of two possibilities. One is the existence of
crystal spheres (the sphere being the ideal shape, and the heavens must be ideal) that exert
the necessary force to hold the planets in position and prevent them from
seeking their natural position at the center of the universe (Earth).
The Earth is the accumulation of all the things not held by ideal crystal spheres.
The crystal spheres were meant to be real, much like the dark matter of today.
The other model postulates different physics for each planet. The Greeks
could look with their own senses. They saw moving stars (planets), the Sun,
and the Moon moving against an unchanging background of stars. Further, the
outer planets had a retrograde motion that differed from the inner planets'
retrograde motion. The Sun and the Moon had nearly constant motion in
opposite directions. If a piece of matter didn't follow the physics of a planet's
level, it was rejected to fall to the next inner level, its more natural place.
The falling object would have to go through the crystal spheres of the first
possibility.
Aristotle viewed the Earth as being at the center of the universe in the
sense of the lowest place, not a place of honor. The center is where the heavenly rejects
fall. Therefore, Earth was the garbage pile of the heavens. Therefore, the
practices that result in prosperity on garbage-pile Earth are rejected by
heavenly rules, as has been frequently suggested. Less ideal spirits are even
closer to the center of the universe, below the Earth's surface. From this I see
the religious (Catholic and Buddhist) ideals of poverty (Mt 19:21), simplicity
(Lk 9:3), and humility (Mt 16:24) may be heavenly ideals. Contrast this with
the alternate view, preferred by the rich and powerful, that Earth (man) is
the center of the universe.
The current arguments about the nature of matter, light, space, and
time seem to be a continuation of Democritus's and Aristotle's differing
views.
Euclid of Alexandria (Egypt, 325 BC - 265 BC) is best known for his treatise on mathematics (The Elements) and for the logical method of the western
proof process. Euclid also made a mathematical study of light. He wrote
Optica circa 300 BC, in which he studied the properties of light, which he postulated traveled in straight lines. He described the laws of reflection and
intended to be truth; rather, the model was merely a simpler way to
calculate the positions of the planets. That is, the calculation was merely a
conformal mapping. This probably saved the book from being condemned
and unread. A Church problem of his era was to determine which day was
a special day, and the Ptolemaic calculation was difficult. The Copernican
model assumed that all planetary motion is circular (rather than spherical or
elliptical) and that the Sun is the center of the universe.
The rotation of the Earth in Copernicus's system caused the stars' observed motion, rather than the rotation of the sphere of the stars. The
starry sphere was obsolete.
The Copernican system suggested the stars were at various distances
from Earth rather than being on a celestial sphere. Therefore, the stars
should exhibit parallax as the Earth moved around the Sun. A parallax shift
is the apparent change of position against the celestial sphere background
resulting from a change of position of the observer. The prevailing opinion
was that the stars were relatively close. Because no parallax was observed
with the equipment of the day, the lack of parallax argued against
the Copernican system. This issue was not resolved until the parallax of
the stars was measured in the 19th century.
Another problem was that the Ptolemaic system was more accurate
than the Copernican system. Kepler resolved the accuracy issue by finding
that planets move in ellipses rather than circles.
Because the heliocentric model was inconsistent with the data at the
time, it required belief to be considered truth. A
hidden assumption was that the stars were close. Belief without observational support was the province of the Church.
The situation with alternate models of today is similar. The Big Bang
model seems better, but proponents of the eternally existing universe models (Steady State, cyclic, etc.) persist with faith in their model.
Tycho Brahe (Denmark and Czech Republic, 1546 - 1601) was the son
of a nobleman. He is also famous for losing his nose in a sword fight.
Tycho in his younger days had been a fair man in his dealings with others.
For a lord of this time, treating his subjects harshly was usual. However,
in the 1590s Tycho's nature seemed to change, and his treatment both of
the inhabitants of Hven and of his student helpers at Uraniborg became
unreasonable.
He always thought a lot of himself and of his own importance. He saw
himself as the natural successor to Hipparchus and Ptolemy, a far more
important person than a king. He equipped his observatory with large and
dulum's path was also the point with the highest velocity. The highest point
in the pendulum's path was the point of lowest velocity. The virial theorem [for a simple harmonic oscillator, the time average value of the
potential energy (height of the bob of the pendulum) equals the time average value
of the kinetic energy (velocity of the bob)] was developed later, and Galileo's
observations are derived if the inverse square law applies as developed by
Newton. He also noticed the regularity of the frequency of the pendulum's
motion. This is linked with the inverse square law of gravity and the harmony
of the sine function.
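The equality of the two time averages can be checked numerically (my sketch, not the book's; unit mass, and the amplitude and frequency are arbitrary illustrative values):

```python
# Numerical sketch: for a simple harmonic oscillator x = A cos(w t)
# with unit mass, the time average of the kinetic energy over one
# period equals the time average of the potential energy.
import math

w = 2.0                      # angular frequency (rad/s), illustrative
A = 0.1                      # amplitude, illustrative
period = 2 * math.pi / w
steps = 10000

ke_sum = pe_sum = 0.0
for i in range(steps):
    t = period * i / steps
    x = A * math.cos(w * t)           # displacement (height of the bob)
    v = -A * w * math.sin(w * t)      # velocity
    ke_sum += 0.5 * v * v             # kinetic energy, m = 1
    pe_sum += 0.5 * (w * x) ** 2      # potential 0.5*k*x^2 with k = m*w^2

ke_avg = ke_sum / steps
pe_avg = pe_sum / steps               # ke_avg and pe_avg agree closely
```

Both averages come out to A²w²/4, which is the virial-theorem statement for this potential.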
The simple harmonic oscillator motion results from a balancing of forces
and an exchange of energy between kinetic and potential. A slight energy input
to the balanced system starts the energy exchange. Kinetic or potential energy added at integer (quanta) or half-integer multiples of the frequency
of oscillation may pump energy into the system.

Example: to increase a swing's height, input potential energy at the ends of
the swing's arc by shifting the center of gravity. Other pendulums use torsion and
springs to provide the necessary force balance.
He made the first pendulum clock near the end of his life.
The pendulum clock is the fundamental defining notion of time: the idea
that time can not only be ordered but also be divided into equal
intervals with a definite numerical magnitude. The pendulum concept of a
clock in general relativity follows Leibniz's philosophy of time.
He had formulated a law of falling bodies: the distance that a body
moves from rest under uniform acceleration is proportional to the square of
the duration taken. Newton would use this law.
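The law can be illustrated in a few lines (a sketch; g = 9.8 m/s² is the modern value, not a figure Galileo had):

```python
# Sketch of Galileo's law of falling bodies: distance fallen from rest
# under uniform acceleration grows as the square of the elapsed time.
g = 9.8  # m/s^2, modern value for illustration

def distance_fallen(t):
    """Distance d = 0.5 * g * t^2 fallen after t seconds from rest."""
    return 0.5 * g * t * t

# Doubling the duration quadruples the distance:
ratio = distance_fallen(2.0) / distance_fallen(1.0)   # -> 4.0
```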
He worked out that projectiles follow a parabolic path. That is, the rate
of an object's fall (in air, and for a short distance before terminal velocity
becomes important) is nearly independent of the weight or specific gravity
(Aristotle) of the object. Galileo observed that bodies fall at a rate independent of mass (today we would say, in a vacuum). Objects of different weight fall with
the same acceleration, hitting the ground at virtually the same time.
The drag of air is small for short drops. Further, the objects must have the
same drag coefficient, which, at least, requires them to be of the same size.
This is often portrayed differently.
Galileo improved on the telescope. The Galilean telescope combined a convex and a concave lens. It gave an upright image but was very difficult
to use. He discovered the moons in orbit around Jupiter which suggested
to him a method to measure longitude because the moons were a measure
of absolute time, mountains on the moon, sunspots (sun imperfections),
and the Milky Way is composed of stars (1609-10). The use of a telescope,
an instrument, to produce data that was not seen by the naked, human eye
represented a branching point in philosophy and science. What makes us
think the instrument readings represent data about the universe and not
merely characteristics of the instrument? Galileo could use the telescope to
see ships far out to sea. Later, when the ships came to port, they were as they had appeared through the telescope. So, Jupiter had moons in orbit and the moon had mountains, because such things were also seen on Earth. The philosophical change was the modern hidden assumption that the universe may
be interpreted by Earthborn experiments. Thus, the galaxies are moving
away from us because the Doppler shift on Earth causes a redshift of light.
This view has become a postulate for the interpretation of astronomical
observations.
Galileo's interpretations conflicted with church doctrine that everything revolved around Earth. Because others could duplicate the observations, there was controversy but no problem. That is, no faith or belief problem until he interpreted the observations according to changes in belief about the universe. Galileo inserted the belief that the Sun is the center of the universe. He looked for parallax of the stars but found none. Ptolemy's scheme was more accurate and predicted no parallax. Therefore, at the time, the Earth-centered postulate was supported by data and the Sun-centered postulate required a philosophical acceptance.
He postulated that velocity was not absolute, that all mechanical experiments performed in an inertial frame will give the same result, and Galileo's Principle of Equivalence. Galileo's Principle of Equivalence is:
The time in which a certain distance is traversed by an object moving
under uniform acceleration from rest is equal to the time in which the same
distance would be traversed by the same movable object moving at a uniform
speed of one half the maximum and final speed of the previous uniformly
accelerated motion.
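Galileo's principle above is easy to verify arithmetically. A small sketch with illustrative numbers (the acceleration and elapsed time are assumptions, not from the text): the distance covered from rest under uniform acceleration equals the distance covered in the same time at a steady half the final speed.

```python
# Galileo's mean-speed result, checked numerically: a body uniformly
# accelerated from rest covers in time t the same distance as a body
# moving the whole time at half the final speed.
a, t = 9.8, 3.0                  # acceleration (m/s^2) and elapsed time (s)
d_accelerated = 0.5 * a * t**2   # distance under uniform acceleration
v_final = a * t                  # final speed
d_uniform = (v_final / 2) * t    # distance at a steady half the final speed
print(d_accelerated, d_uniform)  # both 44.1 m: equal distance, equal time
```

Algebraically this is just (1/2) a t² = (a t / 2) t, so the agreement is exact for any a and t.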
This is very close to the Equivalence Principle. He noticed that if experiments are conducted below decks of a ship moving at uniform velocity (speed and direction), then the results are the same. Contrary to Brahe, a ball thrown in the air returns to the same place in such a uniformly moving frame, but not to the same place on Earth, because the rotating Earth imparts an acceleration.
Johannes Kepler (Germany 1571 - 1630) was Brahe's assistant for some of his career. He was profoundly religious. He believed God had made the
way along its length, an octave is produced (same quality of sound but at twice the frequency of the fundamental (unstopped) note, a 1:2 frequency ratio). Notes separated by intervals of a perfect fifth (ratio 2:3) and a perfect fourth (ratio 3:4) have been important consonances (harmonies) in western music. Kepler measured how many arcseconds planets moved at perihelion and aphelion. Kepler found the angular velocities of all planets closely correspond to musical intervals. For example, Saturn moves at 106 arcseconds per day at aphelion and 135 arcseconds at perihelion. The ratio 106:135 differs by only 2 arcseconds from 4:5 (a major third). The ratio between Jupiter's maximum and Mars's minimum corresponds to a minor third, and between Earth and Venus to a minor sixth. A later scholar (Francis Warrain) extended the correspondence between the angular velocity ratios and musical harmony to Uranus, Neptune, and Pluto, which were unknown in Kepler's time. Why this is true for orbits due to a central force, which produces conic-section orbits, is the relation of Fourier series to musical notes (vibrating strings). The Greek harmony of the spheres became the music of the spheres.
Kepler was an astrologer. He differentiated between the science of astrology and the popular version of astrology.
He invented a two-convex-lens telescope in which the final image is inverted. This Keplerian telescope is now known simply as an astronomical telescope.
Kepler worked on optics and came up with the first accurate mathematical theory of the camera obscura. He also gave the first modern description of how the human eye works, with an inverted image formed on the retina. He explained nearsightedness and farsightedness. He gave the important result that the intensity of light observed from a source varies inversely with the square of the distance of the observer from the source, which has become the inverse square law. He argued that the velocity of light is infinite.
1.5 Descartes and Newton
The science and philosophy of Aristotle was being overthrown at the end of the 1500s. There were many scientific and philosophical attempts to untangle the web of new observations and modifications to Aristotle's thought.
The certainty of the last 1700 years was gone. It fell to Newton to create a
testable model describing cosmological happenings.
Unanswered questions that may be subject to scientific inquiry:
Why do objects fall at a rate proportional to the square of the time and independent of an object's weight? (counterintuitive)
What is the relation between the pendulum's motions, and why are they so regular and harmonious?
Why do planets follow elliptical (conic section) orbits?
What is the relation of the planets' orbits to musical harmonies?
What is the common property of raindrops, glass, and thin films that causes
them to radiate in different colors?
Is light a particle or a wave? What is the nature of light? How is diffraction
explained? If light is a wave, in what medium does the wave travel?
An object swung on a string exerts an outward (centrifugal) force. The centrifugal force is related to the weight of the object, the length of the string, and the rate of rotation. Why is the force proportional to the square of the length of string?
The intensity of light also declines as the square of the distance. What is the relation between the centrifugal force, the planets' orbits, and light intensity?
Why do some materials seem immutable while others are not? What governs their combination proportions?
Which model of the solar system is better - Ptolemy, Copernicus, or Brahe?
If Copernicus is correct, why do the stars lack parallax? What holds the
planets up (in place)?
Fundamental conditions prior to a major paradigm shift:
(1) The current paradigm is stressed with inconsistencies, and rapid modifications are made to the model as new observations are discovered.
(2) Data is interpreted according to the paradigm with marginal results.
(3) Some paradigms are so entrenched that they are barely recognized as a
postulate.
(4) Many competing models exist, none of which seem better than the popularly accepted model. They all have some falsifier.
(5) Great social pressure exists to reason from accepted postulates. This
creates a selection bias that is often not recognized. The exceptions are simply deleted from consideration or marginalized for one reason or another.
Perhaps this is the reason social outsiders often find the new solution.
(6) Observations inconsistent with any suggested model are often marginalized or ignored. A very open and tolerant society is required to overcome this bias.
(7) Several numerical coincidences have been noticed, but there is no understanding of the fundamental similarity of the calculations.
Newton's next step was to:
(1) Define new parameters as being fundamental, such as mass rather than weight.
(2) Define force without contact and responsible for all change of movement.
(3) Use some aspects of many models with supporting observations that are inconsistent with other models.
(4) Define new rules for natural philosophy.
(5) Concentrate on phenomena that others tend to minimize.
(6) Create a new mathematical method, or use what others considered too simple.
(7) Develop new analysis procedures of differential and integral calculus.
René Descartes (1596 - 1650), also known as Renatus Cartesius (the Latinized form), was a highly influential French philosopher, mathematician, scientist, and writer, dubbed the Founder of Modern Philosophy and the Father of Modern Mathematics. Much of subsequent western philosophy is a reaction to his writings. His Cartesian coordinate system is used to bridge algebra and geometry and is crucial to the invention of calculus and analysis. He was one of the key figures in the Scientific Revolution. He also created exponential notation. He is credited with the discovery of the law of conservation of momentum (mass × speed). However, Descartes did not join direction with speed, which made his view of the conservation of momentum complex. There was some confusion between what we call kinetic energy (mass × speed²), potential energy (mass × height), and momentum. Is the key characteristic of matter proportional to the velocity or to velocity²? This debate lasted into the end of the 18th century. Because of this confusion, his postulate of the laws of motion (what would become Newton's first law of motion) was complex. However, applying this to vertical (gravitational potential energy) movement was a problem. That is, move an object up to a given height fast and drop it. Compare the momentum when it hits the floor with the momentum if the body is moved up slowly to the same height and dropped. The two experiments produce the same momentum on the floor. This is the velocity² notion of energy. Momentum measures inertial mass. Potential energy measures gravitational mass.
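The v-versus-v² confusion can be made concrete with a drop experiment in modern notation. A sketch with illustrative values (the mass, heights, and g are assumptions, not from the text): quadrupling the drop height only doubles the landing momentum, but it quadruples the kinetic energy, which exactly equals the starting potential energy.

```python
import math

# A body of mass m falling from height h lands with v = sqrt(2 g h).
# Its kinetic energy then equals the potential energy m g h it started
# with, while its momentum m v grows only as the square root of h.
g, m = 9.8, 2.0
for h in [1.0, 4.0]:
    v = math.sqrt(2 * g * h)
    print(f"h={h}: momentum={m*v:.3f}, kinetic={0.5*m*v**2:.3f}, m*g*h={m*g*h:.3f}")
```

Momentum and kinetic energy scale differently with height, which is why the two candidate "quantities of motion" could not both be the conserved one in this situation.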
Descartes adopted the mechanical philosophy's explanation of natural phenomena. The mechanical philosophy favored a contact model of the interaction of small, unobservable corpuscles of matter. Descartes suggested that space was filled with a plenum that transmitted pressure from a light source onto the eye. The corpuscles possess a limited number of mainly geometric properties such as size, shape, motion, and arrangement
suggesting that light was composed of a stream of tiny particles. The wave theory of Huygens and Hooke was a development of Descartes's ideas; they proposed that light is a wave through the plenum. Newton supported the theory that light rays were composed of tiny particles traveling in a medium, which was later called an aether.
Sir Isaac Newton (England 1643 - 1727) brought astrophysics into science rather than philosophy. Reports of his early life show he seems to have had little promise in academic work. He showed no talent or interest in managing his mother's considerable estate. Later he developed a passion for learning and problem solving.
Newton developed hypotheses (models) based on what he saw by experiment in our size domain. For example, he hypothesized matter and light (energy) could be interchanged with one another.
Newtonian physics is alive and well today. His ideas form the base of
the ideas today. Indeed, Newtonian physics is still an alternative cosmology.
In 1705, Newton became the first scientist to be knighted solely for his work. When Sir Francis Bacon was knighted, he was a well-connected politician and jurist.
Newton experimented with passing light through a triangular glass prism circa 1666. That a spectrum of colors was produced was well known. The standard explanation was that the pure white light was somehow corrupted in passing through the glass: the farther it had to travel in the glass, the more it was corrupted, hence different colors emerged.
Newton passed light through a second prism. The light was not further decomposed. If the second prism was inverted relative to the first prism, the colors recombined to form white light. This proved white light is multicolored (not pure).
The chromatic aberration seen at the edges of the Keplerian telescope's image convinced Newton in 1670 that white light was not a single basic, pure entity but a mixture of rays of a corpuscular nature (following al-Haytham and Democritus) that are refracted at slightly different angles.
Corpuscular, because sunlight casts shadows like particles rather than going around corners like sound or water waves. If an open doorway connects two
rooms and a sound is produced in a remote corner of one of them, a person
in the other room will hear the sound as if it originated at the doorway. As
far as the second room is concerned, the vibrating air in the doorway is the
source of the sound. Light appears to travel straight past edges. This was
one of the difficulties in the wave model of light.
Newton speculated that gravity acts on aether and aether acts on particles, including corpuscular light particles. General Relativity suggests gravitation bends light to travel along a geodesic. Newton thought refracting (Keplerian) telescopes would always suffer chromatic aberration. He invented the reflecting telescope, which bears his name.
He later used a wave model in conjunction with the corpuscular model to describe colors of light from thin sheets, Newton's rings, and diffraction.
The first major account of interference and diffraction of light, which Newton called inflection, was that published by F. M. Grimaldi in his Physico-Mathesis de Lumine, Coloribus et Iride (Bologna 1665). Boyle and Hooke made further studies of diffraction. The most significant and quantitative results were those obtained by Newton while studying the interference rings produced when the curved surface of a plano-convex lens was pressed against a flat optical surface.
Newton wrote a book, Philosophiae Naturalis Principia Mathematica (Newton 1687), or the Principia, that is recognized as the greatest scientific book ever written.
New Fundamental Principles:
(1) No more causes of natural things should be admitted than are both true and sufficient to explain their phenomena.
(2) Nature does nothing in vain.
(3) Nature does not indulge in the luxury of superfluous causes or nature
maximizes profit from energy.
(4) The causes assigned to natural effects of the same kind must be the
same such as the cause of respiration in man and beast; falling stones in
America, in Europe, and on the moon; and the light from a fire or from the
Sun.
(5) Those qualities of bodies that cannot be intended and remitted (i.e.
qualities that cannot be increased and diminished) and that belong to all
bodies on which experiments can be made should be taken as qualities of all
bodies universally. (Early form of conservation and symmetry principles.)
(6) Propositions in experimental philosophy gathered from phenomena by induction should be considered either exactly or very nearly true, notwithstanding any contrary hypothesis, until yet other phenomena make such propositions either more exact or liable to exceptions. (Mach in the 19th century will echo this because so many seem to ignore it.)
Phenomena:
1. Kepler's laws.
2. Orbits of planets around the Sun.
3. Only the moon obeys Kepler's laws relative to the Earth.
Newton formed the Newtonian Equivalence Principle: the rest mass
mass of one body, m is the inertial mass of another body, and r is the vector distance between the centers of mass. The (G M2/r^3) r factor is the acceleration field caused by body 2. This is also a conformal mapping of the gravitational influence of body 2. The G M2 factor is the proportionality constant between the force and the experimentally determined 1/r^2 dependence. Because M2 is defined as a characteristic of a body, G carries the conversion between the m and M characteristics. The center of inertial mass is assumed to be at the same position as the center of gravitational mass. G is assumed to be a constant in our everyday world. Experiments performed on Earth seem to support this assumption within the limits of the experiment. The limits include masses that have atomic weights within an order of magnitude of each other and cosmologically small r.
After the mapping of other forces such as friction, buoyancy, and electromagnetism, the vector addition of the forces reduces to just one force that then determines the motion of a body via the inverse transformation.
The measurement of M is by a scale, such as Hooke's spring, which is a static measurement. The term rest mass is used here rather than gravitational mass to anticipate a fundamental issue in General Relativity. The M may be a mass other than Earth, such as a mass held in a plane perpendicular to Earth's gravitational field, which is used in experiments to measure G. The measurement of m is by measuring its motion, such as by dropping it and measuring its acceleration. Newton's third law implies Fg1 = Fg2 in magnitude and, therefore, Newton's equivalence principle (m1 = M1 and m2 = M2). An assumption used in astronomical calculations is that m and M are universal characteristics of objects, and G is a universal constant.
The physics of Newton's time had a paradox of whether the conserved characteristic of a body's motion was proportional to its velocity (conservation of momentum) or its velocity squared (kinetic energy). This debate raged until the 1800s. However, kinetic energy alone is not conserved. Remember, Aristotle's contact-only model of force regards height as the body desiring to return to its natural position. The application of Newton's model suggested the height is a form of energy (force times distance). Consequently, the ideas of momentum and conservation of total energy are different. Total energy is kinetic plus potential. Newton's cradle is a simple demonstration of (in his day) a very complex issue. The raised ball introduces a potential energy that is converted to kinetic energy by the string constraint. The potential energy is recovered after a conservation-of-momentum (through the balls) process.
From centrifugal force and Kepler's third law, Newton deduced the inverse square law: the gravitational attraction between two bodies is proportional to the product of their masses and inversely proportional to the square of the distance between their centers-of-mass (a point) and directed along a line connecting their centers-of-mass. This is the importance of Kepler's second law. Newton additionally derived that if one of the bodies is held stationary (or is very much larger than the other), the motion of the smaller body follows a conic section (circle, ellipse [Kepler's first law], parabola, or hyperbola) because a central force governs the motion. Also, objects approaching the Sun from outside our solar system, which are not in orbit, follow hyperbolic paths. This was known at the time. Newton derived it from his fundamental principles.
One of the issues that exercised Newton and delayed the publication of the Principia was to justify the assumption that the gravitational pull exerted by a mass is identical to that exerted by an equal point mass (the mass of the object located at its center of mass, a point in space) of zero extent. This is known as the spherical property. Eventually, Newton established that the spherical property holds for an inverse square law of force but not for other powers of distance, such as an inverse cube.
The inverse square law is very important for understanding many concepts in astronomy. The assumption in using the inverse square law requires a three-dimensional, Cartesian, isotropic (flat in General Relativity terms) mathematical space. Consider a substance being emitted from a very small volume into the surrounding volume. After some time, the substance would expand by diffusion into a larger volume. The 1/r^2 dependence of the force implies the effect depends on the surface area presented to the center of the source. The inverse square law may be seen as a relation of a force with a geometrical surface. The total force is assumed to be evenly distributed over a spherical surface.
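The surface-area reading of the inverse square law can be sketched numerically. The total power value below is arbitrary; the geometry does the work: spreading a fixed total evenly over a sphere of area 4 pi r^2 forces a 1/r^2 intensity.

```python
import math

# A source of total power P spread evenly over a sphere of radius r
# gives intensity P / (4 pi r^2): doubling the distance quarters it.
P = 100.0  # arbitrary total power

def intensity(r):
    return P / (4 * math.pi * r**2)  # power per unit spherical area

print(intensity(2.0) / intensity(1.0))  # 1/4: double the distance, quarter the intensity
```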
Another vexing issue is the infinite universe conclusion from Newtons
gravitation law. The other forces in modern times (electromagnetic, strong
nuclear, and weak nuclear) cancel at moderate distances. The gravitational
force is assumed to be the only force with extragalactic effect. If the universe
is finite, there is an outer galaxy. The outer galaxy would be attracted to
the center (yes, a finite universe in Newtonian dynamics implies a center)
and the universe would collapse. Therefore, the universe must be infinite in extent. Infinite here means unbounded, without an edge, rather than unlimited. Otherwise, as Newton noted, the balance of the positions of
the galaxies must be so fine that it would be easier to balance thousands of
pins on their points.
Alternatively, there may be some other force directed outward from the
center of mass of the universe that acts against gravity. If such force is
centrifugal, then the universe must be rotating relative to a fixed absolute
space. The modern dark energy or cosmological constant serves the
same function.
The mathematics of r → 0 presents another difficulty. The Newtonian universe has limits of applicability.
Newton applied these concepts to relate a body falling to Earth to the orbital period of the moon, pendulums, etc. Thus, he connected the heavens to a universal law and tested that universal law on Earth with masses and sea tides.
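Newton's moon comparison can be redone with modern values (the constants below are present-day measurements, not figures from the text): surface gravity scaled down by the inverse square law to the moon's distance should match the moon's centripetal acceleration from its orbital period.

```python
import math

# Newton's "moon test" with modern values: g scaled by the inverse
# square law out to the moon's orbit vs. the centripetal acceleration
# implied by the moon's orbital period.
g = 9.81                  # m/s^2, gravity at Earth's surface
R_earth = 6.371e6         # m, Earth's mean radius
r_moon = 3.844e8          # m, mean Earth-moon distance
T = 27.32 * 86400         # s, sidereal month

a_scaled = g * (R_earth / r_moon) ** 2       # inverse square prediction
a_orbit = (2 * math.pi / T) ** 2 * r_moon    # centripetal acceleration
print(a_scaled, a_orbit)   # both about 2.7e-3 m/s^2, agreeing within ~1%
```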
Five concepts that were and are controversial and that are attributed to Newton are:
(1) Action at a distance vs. action requiring contact with objects. Newton did not support action at a distance philosophically.
(2) Absolute space coordinate system vs. relational space. Newton did not support this (he made no hypothesis).
(3) Absolute time vs. relational time.
(4) Light is corpuscular (particles).
(5) Particle-wave duality of light.
Time works in Newton's equations equally well whether going forward or backward: symmetry in time. The observation in everyday life is that time progresses in only one direction. This is the arrow of time. Leibniz attempted to define time as the earlier/later relation by means of causal connections. Such a model is known as a causal theory of time.
Many still reject the idea of action at a distance. Newton saw the gravitational relationship but couldn't explain the force. The prevailing idea, championed by René Descartes, was that forces work through contact. Fields were invented in the 19th century to provide a contact medium. However, a field is little more than potential at a distance.
The absolute time of Newton's model was an absolute clock external to the universe that measured time independent of the universe.
The concept of space as the container of material objects is generally
considered to have originated with Democritus and, for him, it provided the
stage upon which material things play out their existence. Emptiness exists
and is that which is devoid of the attribute of extendedness. For Newton,
an extension of the Democritian concept was basic to his mechanics. Absolute space, by its own nature and irrespective of anything external, always
remains immovable and similar to itself.
Thus, the absolute space of Newton was, like that of Democritus, the
stage upon which material things play out their existence. Space had an
objective existence for Newton and was primary to the order of things.
Time was also considered to possess an objective existence, independently
of space and independently of all the things contained within space. Universal time is an absolute time that is the same everywhere. The fusion of
these two concepts provided Newton with the reference system (spatial coordinates defined at a particular time) by means of which all motions could
be quantified in a way that was completely independent of the objects concerned. If emptiness exists and is devoid of the attribute of extendedness,
then the emptiness of Democritus can have no metric (a length measure)
associated with it. But it is precisely Newton's belief in absolute space and time with the implied virtual clocks and rods that makes the Newtonian
concept a direct antecedent of Minkowski spacetime. That is, an empty
space and time within which it is possible to have an internally consistent
discussion of the notion of metric.
The contrary view is generally considered to have originated with Aristotle, for whom there was no such thing as a void; there was only the plenum, within which the concept of the empty place was meaningless, and in this Aristotle and Leibniz were at one. It fell to Leibniz, however, to take a crucial step beyond the Aristotelian concept in the debate of (Samuel) Clarke and Leibniz (1715 - 1716), in which Clarke argued for Newton's concept.
The forces in Newtonian mechanics acting on a mass that depend on positions and motions of other bodies are called real forces, whereas the forces acting on a mass that depend on the changing direction of the mass (inertial mass), such as centrifugal forces with respect to the fixed coordinate system, are fictitious forces. An absolute space coordinate system is required to distinguish a fixed coordinate system. The fictitious forces
in the General Theory of Relativity are regarded as legitimate forces on the
same footing as the real forces. A method to deduce the fictitious forces
from the position and motion of other bodies must be shown to accomplish
this.
Newton thought that the results of experiment and observation established that there was absolute motion, i.e. motion defined relative to absolute space. His main argument for absolute motion is the argument called Newton's Bucket, conceived in 1689. Newton was misunderstood in his time about fictitious forces. This is why many Newtonian dynamics issues seem to be developed long after Newton, such as the v vs. v² arguments in the 1700s. Newton's concern was the causes of centrifugal
motion and defining what was meant by rotation without a fixed frame of reference. It challenges the relationalist position.
For a relationalist, a notion like unchanging speed can be understood only relative to a frame fixed by some material objects. For example, something at rest relative to the Earth's surface is in motion relative to a reference frame fixed to the Sun. However, Newton argues that the notion of inertial (or accelerated) motion is not merely relative; it is absolute.
Examine the observed effects that form the basis of Newton's Bucket argument. Then turn to the argument itself.
Consider the case of a body that is accelerating in a straight line, i.e.
a body that is traveling in some fixed direction with a steadily increasing
speed. For example, consider the case where you and I are in cars on a
straight stretch of road. However, I am in a car that is at rest on the
shoulder and you are accelerating away from me down the road. You will
feel inertial effects due to your acceleration, i.e. you will feel yourself being
pulled back into your seat as you accelerate, and I won't. The Newtonian would claim that you are really accelerating and that is why you experience the inertial effects you feel and I don't. However, the Leibnizian can't say this, because you can think of me as accelerating away from you just as much as I can think of you as accelerating away from me (relation to other bodies). That is, for the relationalist, there is only a notion of relative acceleration. Neither of us can be taken as the one that is really accelerating. Consequently, there seems to be no reason why you experience the inertial effects and I don't, unless there is a fixed reference frame such as the Earth.
Consider a point on a turntable that is rotating at a constant rate. The
speed at which this point is moving is fixed (i.e. the distance traveled per
unit time is fixed), but the direction in which it is traveling is constantly
changing so that it can follow the circular path associated with the rotation
of the turntable. Therefore, this point is accelerating. Consequently, because rotations are examples of accelerated motions (changes in the velocity
vector), they will have inertial effects associated with them. Such an effect
is the basis of the bucket argument.
Consider a bucket of water that is suspended by a rope so that it is
free to rotate. (A) When both the bucket and the water are at rest, the
surface of the water is flat and there is no relative motion between the
bucket and the water. (B) Now, make the bucket rotate (say by winding
up the rope). The bucket is rotating and the water is still at rest with a
flat surface. As the bucket is moving and the water is not, there will be
frictional forces acting between the bucket and the water. (C) The action
of these frictional forces will cause the water to rotate, i.e. the water will
start to move, too. These frictional forces will continue to operate until
the bucket and the water are rotating at the same rate, i.e. until there is
no relative motion (no frictional forces) between the bucket and the water.
The surface of the water will become concave.
The bucket argument, in brief, then is as follows:
Premise 1: The shape of the water's surface is dependent on some motion that is either absolute or relative.
Observation: (A) and (C) represent situations where the bucket and the water are rotating at the same rate, yet there is a difference in the shape of the water's surface. So, in both of these cases, the bucket and the water have the same (i.e. no) relative motion, but the shape of the water's surface is different.
Therefore, the shape of the water's surface is not dependent upon the motion of the water relative to the bucket. However, it could be due to the motion of the water relative to something else.
Premise 2: If the shape of the water's surface were dependent upon some relative motion between the water and a set of objects not in contact with it, such as the Earth or stars, then there must be action at a distance. For Leibniz, relative motion must be motion relative to some other objects.
Premise 3: There is no action at a distance. This is the premise that Mach will deny. However, Mach will be required to introduce something else to explain the concave surface.
Therefore, the shape of the water's surface is independent of relative motion and, as such, it must be dependent on its absolute motion. Hence, inertial effects, like those due to rotation, cannot be accounted for within a relationalist framework.
Premise 4: Absolute motion is motion relative to absolute space. This is the definition of absolute motion.
Therefore, absolute space exists. How else can we have absolute motion?
This argument is an inference to the best explanation of some observation. That is, given the only choice between the relationalist and absolutist views of space, we should choose the latter because the former can't supply an adequate explanation. Einstein said there is a third choice. Furthermore, notice that this is an argument for an absolute space and, as such, it doesn't tell us anything about what space is.
What is the problem? Is this not precisely what we would expect to happen? Newton asked the simple question: why does the surface of the water become concave? The easy answer: the surface becomes concave because the water is spinning. But what does spinning mean? It certainly doesn't mean spinning relative to the bucket. After the bucket is released and starts spinning, the water is spinning relative to the bucket, yet its surface is flat. When friction between the water and the sides of the bucket has the two spinning together with no relative motion between them, the water is concave. After the bucket stops and the water goes on spinning relative to the bucket, the surface of the water is concave. Certainly the shape of the surface of the water is not determined by the spin of the water relative to the bucket. Indeed, the water is not spinning relative to the bucket in both a flat-surface and a concave-surface case.
Newton then went a step further with a thought experiment. Try the
bucket experiment in empty space. He suggested a slightly different version for this thought experiment. Tie two rocks together with a rope, he
suggested, and go into deep space far from the gravitation of the Earth or
the Sun. This cannot be physically done any more than it could be done
in 1689. Rotate the rope about its center and it will become taut as the
rocks pull outwards. The rocks will create an outward force pulling the
rope tight. If this is done in an empty universe, then what does a rotating
system mean? There is no coordinate system to measure rotation. Newton
deduced from this thought experiment that there had to be something to
measure relative rotation. That something had to be space itself. It was his
strongest argument for the idea of absolute space. I deny this experiment
can be done. Therefore, this thought experiment is meaningless.
Newton returned to his bucket experiment. Spin, he claimed, was spin
with respect to absolute space. When the water is not rotating with respect
to absolute space, then its surface is flat. When it spins with respect to
absolute space, its surface is concave. If the bucket is in a tree attached to
the Earth, the bucket is spinning with the Earth's rotation and revolution,
at least. However, he wrote in the Principia:
I do not define time, space, place, and motion, as they are well known
to all. Absolute space by its own nature, without reference to anything
external, always remains similar and unmovable.
He was not happy with this, as perhaps seen from other things he wrote:
It is indeed a matter of great difficulty to discover and effectually to
distinguish the true motions of particular bodies from the apparent, because
the parts of that immovable space in which these motions are performed do
by no means come under the observations of our senses.
CHAPTER 1. PROGENITORS
Leibniz's position seems to be that absolute motions can be distinguished from purely relative motions by considering the causes (forces) that
act to set the bodies in motion. If A and B are moving in relation to one
another and if an impressed force on A set A in motion, then it is A (rather
than B) which is undergoing absolute motion. As far as the relative motions of A and B are concerned, there is no distinguishing them. When
we consider the forces acting on A and B, then, Leibniz suggests, absolute
motions can be distinguished from relative motions.
What Leibniz is saying is that, kinematically speaking (motion in the
abstract, without reference to the forces or masses that produce the motion),
all motions are purely relative (this is why current physics talks of energy
rather than forces, as with dark energy). However, dynamically (treating
the action of forces to produce motion) speaking, some motions can be
distinguished from others.
Newton took this difference between kinematic and dynamic effects to be
evidence for the existence of Absolute Space. Leibniz, rather inconsistently,
agreed that the effects were different but arbitrarily denied that they had
the implications that Newton tried to draw from them. If Leibniz had been
a consistent relationalist he could have denied that consideration of forces
does make a difference, as Einstein would later do. He would have argued
that, dynamically speaking, the motions of A and B were indistinguishable
as well. He did not, however, make this move. It was left to Ernst Mach
in the late 1800s to argue for the dynamical equivalence as well as the
kinematic equivalence of two observers in relative motion to one another.
(3) Clarke's third basic criticism of Leibniz was also inadequately answered by Leibniz in the Leibniz-Clarke correspondence. Leibniz had argued that space was a system of relations. Clarke urged that space had a
quantity (but not substance) as well (Democritus), that relations could
have no quantity, and that, therefore, a theory that held space (or time)
to be merely relational could not be adequate. Leibniz responds by arguing that
a relation also has its quantity. Leibniz says:
As for the objection that space and time are quantities, or rather things
endowed with quantity; and that situation and order are not so: I answer,
that order also has its quantity; there is in it, that which goes before, and
that which follows; there is distance or interval. Relative things have their
quantity as well as absolute ones. For instance, ratios or proportions in
mathematics, have their quantity, and are measured by logarithms; and yet
they are relations. And therefore though time and space consist in relations,
yet they have their quantity.
This is pretty obscure stuff and Clarke, perhaps, can be forgiven for
thinking that Leibniz was just evading the issue.
The basic problem is that specifying a given order of points (objects,
events) is not sufficient to determine a unique quantitative measure (length,
duration) for them. Leibniz tried to give some account of the metrical
properties of order in his 1715 paper:
In each of both orders (time and space) we can speak of a propinquity or
remoteness of the elements according to whether fewer or more connecting
links are required to discern their mutual order. Two points, then, are nearer
to one another when the points between them and the structure arising out
of them with the utmost definiteness, present something relatively simpler.
Such a structure which unites the points between the two points is the
simplest, i.e., the shortest and also the most uniform, path from one to the
other; in this case, therefore, the straight line is the shortest one between
two neighboring points.
There are two related difficulties with this approach. Both arise from
the idea that a metrical structure cannot be generated from a topological
structure alone. General relativity ascribes gravity to the topology or geometry of space but then spends considerable effort dealing with these two
difficulties. The first difficulty concerns Leibniz's characterization of the
nearness relation. The second difficulty concerns his characterization of
straightness. The problem with nearness is this. Consider three points
A, B, and C. According to Leibniz, B is nearer to A than C is in the case
where there are fewer points between A and B than there are between A
and C. (Note: points, not extent, which begs the issue of irrational numbers.)
Intuitively, this seems right. However, our intuitions, as often as not, are
liable to lead us astray in an arena requiring prediction and measurement.
They do so here. We can talk about the number of intervening points between A and B or between A and C as being different only if we assume
that space itself is discrete. Our definition of a standard measure may not
measure the discrete distance. That is, we must assume that there are a
definite, finite number of spatial positions between A and B and a greater
number between A and C. To find the distance between any two points A and B,
we find the smallest number of points linking A to B and use that count
as a natural measure. This is inconsistent with Democritus and the idea of
extent rather than number of points.
The straight line from A to B in Leibniz's view is the shortest path, in
terms of intervening connecting points, that connects them. Thus, if space
the divergent views. This did not convince many people. Carl Neumann in
1870 suggested a similar situation to the bucket when he imagined that the
whole universe consisted only of a single planet. He suggested: Wouldn't it
be shaped like an ellipsoid if it rotated and a sphere if at rest? The first serious challenge to Newton came from Ernst Mach, who rejected Neumann's
test as inconclusive.
Newton had shown that celestial and terrestrial motions were in accordance with a law of universal gravitation in which the attraction between
any two bodies in the universe depends only on their masses and (inversely)
on the square of the distance between them. This led to an attribution to
Newton of ideas that he abhorred. One was that because the gravitational
attraction is a function of the masses of bodies irrespective of any other
properties except their separation in space, this attraction arises simply
from the existence of matter. This materialist position was castigated by
Newton in a letter to Bentley in which he said:
You sometimes speak of gravity as essential and inherent to matter.
Pray, do not ascribe that notion to me; for the cause of gravity is what I do
not pretend to know.
In another letter to Bentley, he amplified his position:
It is inconceivable, that inanimate brute matter should, without the
mediation of something else, which is not material, operate upon and affect
other matter without mutual contact.
Newton disliked action at a distance and non-contact force.
Cotes replied to Leibniz (although without mentioning his name) in the
preface he wrote to the second edition of the Principia: . . . 'twere better
to neglect him. Cotes also discussed the general nature of gravitation
and forces acting at a distance. For this second edition, Newton wrote the
famous General Scholium to Book Three, in which he attacked the vortex
theory of Descartes, declared that the most beautiful system of the Sun,
planets, and comets, could only proceed from the counsel and dominion of an
intelligent and powerful Being, and discussed the nature of God. Newton
concluded: And thus much concerning God; to discourse of whom from the
appearance of things, does certainly belong to Natural Philosophy. Newton
then addressed himself to the problem of what gravitation is and how it
might work, admitting that no assignment had been made of the cause of
this power whose action explains the phenomena of the heavens and the
tides of the seas. This is followed by the famous paragraph that reads:
But hitherto I have not been able to discover the cause of those properties of gravity from phenomena, and I frame no hypotheses; for whatever
is not deduced from the phenomena is to be called an hypothesis; and hypotheses, whether metaphysical or physical, whether of occult qualities or
mechanical, have no place in experimental philosophy ... . And to us it is
enough that gravity does really exist, and act according to the laws which
we have explained, and abundantly serves to account for all the motions of
the celestial bodies, and of our sea.
The purpose of the General Scholium was apparently to prevent any misunderstanding of Newton's position such as had been made by Bentley and
Leibniz after reading the first edition of the Principia, in which this General
Scholium did not appear. Yet the cautious wording prevented the reader
from gaining any insight into Newton's changing views on this subject. Long
before the Principia, in a letter to Boyle written on 28 February 1678/9,
published in the mid-18th century, Newton speculated about the cause of
gravity and attempted to explain gravitational attraction by the operation
of an all-pervading aether consisting of parts differing from one another
in subtlety by indefinite degrees. Some hint, but not more, of Newton's
later opinion concerning an aethereal electric spirit was contained in the
final paragraph of the above General Scholium, in which Newton wrote:
And now we might add something concerning a certain most subtle
spirit which pervades and lies hid in all gross bodies; by the force and action
of which spirit the particles of bodies attract one another at near distances,
and cohere, if contiguous; and electric bodies operate to greater distances as
well repelling as attracting the neighboring corpuscles; and light is emitted,
reflected, refracted, inflected, and heats bodies; and all sensation is excited,
and the members of animal bodies move at the command of the will, namely,
by the vibrations of this spirit, mutually propagated along the solid filaments
of the nerves, from the outward organs of sense to the brain, and from the
brain into the muscles. But these are things that cannot be explained in
few words, nor are we furnished with that sufficiency of experiments which
is required to an accurate determination and demonstration of the laws by
which this electric and elastic spirit operates.
Thus, the 18th-century reader who had become convinced that the system of Newton's Principia accounted for the workings of the universe
would feel frustrated that Newton had not elucidated the topic of the
cause of gravity.
He refused to discuss the cause of gravitation, begging off with the
suggested that the alternate fits of easy reflection and fits of easy refraction arise from the action of the aether waves that overtake the particles
of light and put them into one or the other state. Overtake means travel
faster than light. Newton's measurements of the separation of these rings
and his computations of the thickness of the thin film of air between the
two glass surfaces were highly accurate.
The popular view of Newton's speculations about light is that light has
a dual nature of particle and wave. Another interpretation is that Newton
suggested light is a particle that acts like a wave when in a stream. Newton's
corpuscle could mean a very small particle or the smallest particle,
such as Democritus's atom. Color then becomes a result of the stream
rather than a characteristic of the corpuscle.
The second book thus admits hypotheses, although without any consideration of their truth or falsity. The third (and last) book's opening
section deals with Newton's experiments on diffraction, followed by the
famous Queries in which Newton introduced a variety of hypotheses
(speculations), not only on light, but on a great many subjects of physics
and philosophy. He seems to have emptied his mind of the conjectures he
had accumulated in a lifetime of scientific activity. Hypotheses non fingo
could not be applied to the Opticks. The progressively conjectural character
of this book makes it so interesting to read.
Because Newton devoted a considerable portion of the Opticks to the
cause of gravity, which was avoided in the Principia, we can understand
why the Opticks must have exerted so strong a fascination on men who
wanted to know the cause of gravity and the fundamental principle of the
universe. Indeed, in the 1717 edition of the Opticks Newton inserted an
Advertisement (printed below) explicitly declaring that he did not take
Gravity for an Essential Property of Bodies, and noting that among the
new Queries or Questions added to the new edition was one Question
concerning its Cause. Newton chose to propose it by way of a Question
because he was not yet satisfied about it for want of experiments.
Why did Newton reject the wave theory that others in the 19th century tried to attribute to him in vain? The information is provided in the
Opticks itself. Foremost among the reasons why Newton insisted upon the
corpuscularity of light was the general atomism of the age; indeed, the very
hallmark of the New Science in the 17th century, among such men as
Bacon, Galileo, Boyle, and others, was a belief in atomism, in what Boyle
called the corpuscular philosophy. The traditional scholastic doctrine of
Newton's time had placed light and the phenomena of colors in the category
the angle of incidence equals the angle of reflection, had been well known
since classical antiquity. Refraction might easily be explained on the basis
of the corpuscular theory. The attraction exerted by the particles of glass
on the corpuscles of light incident upon the glass from air would produce
an increase in the component of the particles' velocity normal to the glass
surface. Therefore, the result is a bending toward the normal, which is
observed to be the case (gravitational bending of light).
Finally, the most brilliant of all the portions of Huygens' Treatise on
Light provided Newton with an argument against Huygens' theory. Huygens had been able to account for the phenomenon of double refraction
in calcite, or Iceland spar, by two different wave forms by extending the
geometric construction of wave fronts from isotropic to anisotropic media.
Newton (Qu. 28) considered this to be an important weapon against Huygens' hypothesis. Newton grasped the salient aspect of Huygens' investigation: that the Rays of Light have different properties in their different
sides. Newton quoted from the original French of Huygens to prove how
baffling this phenomenon was to the author of the wave theory himself;
plainly, Pressions . . . propagated . . . through an uniform Medium, must be
on all sides alike. That the undulations might be perpendicular to the direction of propagation apparently never occurred to Huygens, who thought
in terms of a geometric scheme, nor to Newton. Eventually, Young and
Fresnel suggested that light waves must be transverse rather than longitudinal. Then for the first time was it possible to explain the polarization of
light and the way in which light, to use Newton's phrase, has sides. The
study of the interference of polarized beams of light in the 19th century
provided one of the chief arguments for the advocates of the wave theory.
But in Newtons day and for a hundred years thereafter, the only way to
account for the sides of light was to suppose that the corpuscles were not
perfectly spherical and would present, therefore, different sides depending
on their orientation to the axis of motion.
Several of the Queries in Newton's Opticks remain questions. Some may
still prove insightful and prophetic.
Pierre-Simon Laplace (France) (1749-1827) was from a family that was
comfortably well off in the cider trade (some texts claim he was from a poor
farming family). He was a brilliant mathematician. He was not modest about
his abilities. This attitude didn't improve his relations with his colleagues.
He established his reputation in difference equations, differential equations,
probability, and celestial mechanics. He wrote a masterpiece on the stability
of the solar system. He presented the nebular hypothesis in 1796 that
viewed the solar system as originating from the contracting and cooling of a
large, flattened, and slowly rotating cloud of incandescent (hot) gas. He set
up the differential equations and solved them for the motion of solids and
fluids that also may obey the inverse square law and applied them to the
centers of gravity of bodies in the solar system and to tidal forces, which are
important in general relativity. The famous Laplace equation is ∇^2 u = a
constant, where u is the potential (GM/r) and ∇u = Force. The derivation
of this is to consider the gravitational force as conservative (the work done
in moving without friction from one point to another is independent of the
path taken - Principle of Minimum Action).
Laplace noted that Force ∝ (1/r^2) + Λr is a solution of his generalized
inverse square law equation. The Λ (Greek upper case lambda) is now
known as the cosmological constant. The addition of this to Newton's
equation provides a repulsive component to gravitation that could hold
the finite universe from collapse and cause accelerated expansion of the
universe. He proposed a philosophy of physics that the phenomena of nature
can be reduced to actions at a distance between molecules and that the
consideration of these actions must serve as the basis of the mathematical
theory of these phenomena. Laplace supported the corpuscular theory of
light. The application of the Laplace equation to fluid motion is also part
of the consideration of the universe in some current cosmological models as
a fluid or gas.
Einstein also added the Λ in general relativity to create a static universe rather than a changing universe. The discovery in 1998 of the accelerating universe suggested the need for a Λ in the equations.
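A rough numerical sketch of this modified force law (using the modern weak-field form and an assumed present-day value of Λ; the numbers are illustrative, not Laplace's):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
LAMBDA = 1.1e-52     # cosmological constant, m^-2 (assumed modern value)

def radial_acceleration(M, r):
    """Newtonian attraction plus the repulsive Lambda term.

    In the weak-field limit the cosmological constant contributes a
    repulsive acceleration (Lambda * c^2 / 3) * r, i.e. a force of
    the form (1/r^2 attraction) + (a term growing with r).
    """
    return -G * M / r**2 + (LAMBDA * C**2 / 3.0) * r

M_SUN = 1.989e30     # kg
AU = 1.496e11        # m

# At 1 AU the attraction utterly dominates (~ -5.9e-3 m/s^2):
print(radial_acceleration(M_SUN, AU))

# The two terms balance only at an enormous radius (~3e18 m):
r_balance = (3.0 * G * M_SUN / (LAMBDA * C**2)) ** (1.0 / 3.0)
print(r_balance)
```

This shows why the Λ term is irrelevant for solar-system dynamics yet can dominate on cosmological scales.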
Huygens principle is a method of analysis applied to problems of wave
propagation. It suggests that each point of an advancing wave front is the
center of a fresh disturbance and the source of a new train of waves; and
that the advancing wave as a whole may be regarded as the sum of all the
secondary waves arising from points in the medium already traversed.
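The construction can be sketched numerically; a toy sum of secondary wavelets from points on a flat wavefront (illustrative geometry and wavelength, and omitting the obliquity factor of the full Huygens-Fresnel treatment):

```python
import cmath
import math

wavelength = 1.0
k = 2.0 * math.pi / wavelength                       # wavenumber
# Points on a flat wavefront along the x-axis:
sources = [(x * 0.1, 0.0) for x in range(-50, 51)]

def field_at(px, py):
    """Sum the secondary spherical wavelets at an observation point."""
    total = 0j
    for sx, sy in sources:
        r = math.hypot(px - sx, py - sy)
        total += cmath.exp(1j * k * r) / r           # spherical wavelet
    return abs(total)

on_axis = field_at(0.0, 100.0)    # straight ahead: wavelets add in phase
off_axis = field_at(60.0, 100.0)  # far off axis: wavelets largely cancel
print(on_axis > off_axis)
```

The straight-ahead point receives nearly in-phase contributions, so the summed wavelets reproduce an advancing wave there and cancel elsewhere.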
Chapter 2
the first solution to gain the most scientific consensus in an essay that also
suggests the expansion and collapse of the universe. Given the finite speed
of light, the light from the most distant star cannot have traveled a further
distance. Alternatively, if the universe is expanding and distant stars are
receding from us as claimed by the Big Bang theory, then their light is red
shifted which diminishes their brightness in the visible band. According to
the Big Bang theory, both are working together; the finiteness of time is
the more important effect. Some see the darkness of the night sky to be
evidence in support of the Big Bang theory.
The finite age of the universe (in its present form) may be established by
a mathematical evaluation of hydrogen. Assume that the amount of mass
in stars divided by the total amount of mass in the universe is nonzero.
After some length of time, any given star will convert too much hydrogen
into helium (or heavier elements) to continue nuclear fusion. From this we
conclude that in unit time, the amount of hydrogen converted into helium
by a given star divided by the stars mass is nonzero. Combining this with
the earlier statement, we conclude that the amount of hydrogen converted
into helium by stars as a whole divided by the mass of the universe is
nonzero. There is no known process that can return heavier elements to
hydrogen in the necessary quantities, and any would probably violate the
second law of thermodynamics. Postulating such a process is a necessary
task for a TOE without a Big Bang.
Therefore, the amount of time needed for stars to convert all of the hydrogen in the universe into helium is finite, and it will never change back.
After this, only heavier-element-burning stars will exist. These will die
when they hit iron, an event known as the heat death of the universe. This
hasn't happened yet. Therefore, the universe is of finite age, it has undergone major changes in its history, or there exists some highly exotic process
that produces hydrogen to keep it going. Current popular cosmology suggests no direct evidence exists for this process. However, the evidence may
exist but is being misinterpreted.
Recent satellite studies have found the cosmic microwave background
radiation is isotropic to 1 part in 10000.
All reference frames that move with constant velocity and in a constant
direction with respect to any inertial frame of reference are members of
the group of inertial reference frames. Special Relativity has several consequences that struck many people as bizarre, among which are:
(1) The time lapse between two events is not invariant from one observer to
another, but is dependent on the relative speeds of the observers' reference
frames.
(2) Two events that occur simultaneously in different places in one frame
of reference may occur at different times in another frame of reference (lack
of absolute simultaneity).
(3) The dimensions (e.g. length) of an object as measured by one observer
may differ from the results of measurements of the same object made by
another observer.
(4) The twin paradox (similar to Clock paradox) concerns a twin who flies
off in a spaceship traveling near the speed of light. When he returns, he
discovers that his twin has aged much more rapidly than he has (or he aged
more slowly). The ladder paradox involves a long ladder traveling near the
speed of light and being contained within a smaller garage.
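Consequence (4) can be made quantitative with the time dilation formula; an idealized sketch that ignores the acceleration and turnaround phases:

```python
import math

C = 2.998e8  # speed of light, m/s

def traveler_age(earth_years, v):
    """Proper time elapsed for the traveling twin:
    tau = t * sqrt(1 - v^2/c^2)."""
    return earth_years * math.sqrt(1.0 - (v / C) ** 2)

# At 0.8c, 10 Earth-years pass while the traveler ages only ~6 years:
print(traveler_age(10.0, 0.8 * C))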
Special Relativity rejects the idea of any absolute (unique or Special) frame of reference. Rather, physics must look the same to all observers
traveling at a constant relative velocity (inertial frame). This principle of
Relativity dates back to Galileo and is incorporated into Newtonian physics
with the idea of invariance.
There are a couple of equivalent ways to define momentum and energy
in Special Relativity. One method uses conservation laws. If these laws are
to remain valid in Special Relativity they must be true in every possible
reference frame. However, simple thought experiments using the Newtonian
definitions of momentum and energy show that these quantities are not
conserved in Special Relativity. Some small modifications to the definitions
to account for relativistic velocities can rescue the idea of conservation. It
is these new definitions that are taken as the correct ones for momentum
and energy in Special Relativity. Given an object of invariant mass m
traveling at velocity v the energy and momentum are given by E = mc2
and p = mv where = [(1 v 2 /c2 )0.5 ] is the Lorentz factor.
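A short numerical sketch (illustrative values) showing that these definitions reduce to the Newtonian kinetic energy (plus the rest energy mc^2) and momentum at everyday speeds:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def energy_and_momentum(m, v):
    """Relativistic E = gamma*m*c^2 and p = gamma*m*v."""
    g = lorentz_factor(v)
    return g * m * C**2, g * m * v

m, v = 1.0, 30.0                   # 1 kg moving at 30 m/s
E, p = energy_and_momentum(m, v)
print(E - m * C**2)                # ~450 J, the Newtonian 0.5*m*v^2
print(p)                           # ~30 kg m/s, the Newtonian m*v

print(lorentz_factor(0.8 * C))     # gamma = 5/3 at 80% of light speed
```

At low speed the relativistic corrections are buried many decimal places down; they only matter as v approaches c.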
Rest mass must be an invariant. This presents profound problems
where every frame is moving or accelerating relative to every other frame.
The Newtonian inertial frame is one in which the bulk of the matter is at
rest relative to absolute space. However, Einstein's ideas came in conflict
with Mach's idea of the inertia of a particle that depends on the existence
of other matter. Einstein initially accepted Mach's view, but dropped it
because of the rest mass issue.
The four-momentum of a particle is defined as the particle's mass times
the particle's four-velocity. Four-momentum in Special Relativity is a four-vector that replaces classical momentum. The conservation of the four-momentum yields three laws of classical conservation:
lost as recoil to the emitting atom. If the lost recoil energy is small compared with the energy linewidth of the nuclear transition, then the gamma
ray energy still corresponds to the energy of the nuclear transition and a
second atom of the same type as the first can absorb the gamma ray. This
emission and subsequent absorption is called resonance. Additional recoil
energy is also lost during absorption. For resonance to occur, the recoil
energy must actually be less than half the linewidth for the corresponding
nuclear transition.
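A rough numerical check of this condition for the 14.4 keV gamma line of Fe-57 (the linewidth figure is the commonly quoted natural width, included here as an assumption):

```python
E_GAMMA = 14.4e3              # gamma-ray energy, eV
M_FE57_C2 = 57 * 931.494e6    # rest energy of an Fe-57 nucleus, eV
LINEWIDTH = 4.7e-9            # natural linewidth of the transition, eV (assumed)

# Recoil energy of a FREE emitting nucleus: E_R = E_gamma^2 / (2 M c^2)
e_recoil = E_GAMMA**2 / (2.0 * M_FE57_C2)
print(e_recoil)               # ~2e-3 eV

# Resonance requires the recoil loss to be below about half the linewidth;
# a free atom misses by more than five orders of magnitude, which is why
# recoil-free (Mossbauer) emission in a crystal lattice is essential.
print(e_recoil < LINEWIDTH / 2.0)
```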
Experimental verification of the gravitational redshift requires good
clocks because at the Earth's surface the effect is small. The first experimental confirmation came as late as 1960, in the Pound-Rebka experiment
(Pound & Rebka 1960), later improved by Pound and Snider. The famous
experiment is generally called the Pound-Rebka-Snider experiment. Their
clock was an atomic transition that results in a very narrow line of electromagnetic radiation. A narrow line implies a very well defined frequency.
The line is in the gamma ray range and emitted from the isotope Fe57 at
14.4 keV. The Mössbauer effect causes the narrowness of the line. The emitter
the bottom and top. The observed redshift was obtained within 1% of the
prediction. Nowadays the accuracy is measured up to 0.02%.
This redshift may be deduced from the Weak Equivalence Principle by
noting the frequency of the wave model of light is a number of wavelengths
per unit of duration. Because the transmitting frequency differs from the
received frequency, the units of duration (time) must differ. The differing gravitational potential is equivalent to a Doppler shift of acceleration
(velocity at each point).
The redshift can also be deduced from the photon model of light. The
redshift is proportional to the difference in gravitational potential energy
(gd) where g is the gravitational acceleration of the Earth, and d = 22
m. The gain or loss of energy of a photon is equal to the difference in
potential energy (gd). The Pound-Rebka experiment reversed the emitter
and detector and measured a blueshift. This redshift is not the redshift
derived from General Relativity and the strong Equivalence Principle. A
perpetual motion machine cant be made by having photons going up and
down in a gravitational field, something that was possible within Newtons
theory of gravity.
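A one-line estimate shows the size of the effect in the experiment, and why so narrow a line was needed; the fractional shift over the tower is gd/c^2:

```python
G_EARTH = 9.81    # gravitational acceleration at the surface, m/s^2
C = 2.998e8       # speed of light, m/s
HEIGHT = 22.0     # m, the Pound-Rebka tower

# Fractional energy (or frequency) shift for a photon climbing height d:
#   delta_E / E = g * d / c^2   (weak-field limit)
shift = G_EARTH * HEIGHT / C**2
print(shift)      # ~2.4e-15
```

A shift of a few parts in 10^15 is resolvable only against a line as narrow as the Mössbauer line.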
There is a hidden assumption in models of the Pound-Rebka experiment.
Textbook thought experiments of gravitational redshift and gravitational
time dilation describe a room with a clock on the ceiling. Photons are
emitted from the ceiling to the floor. The photons are viewed as small,
solid balls. The measurement is the distance between similar balls. The
energy is measured by the number of balls passing a point or impinging on
the floor per unit duration. Because the speed of the balls is constant
in Special Relativity, the energy transfer is by ball spacing. This is like
measuring the distance between wave peaks and, therefore, corresponds to
the wavelength model. This model is falsified by the intensity relationship
in diffraction experiments and in the photoelectric effect.
The other possible model is that photons are variable. The energy is
measured by the energy contained in one photon times the number that
pass a point per unit duration. Therefore, each photon absorbs gd units of
energy.
Photons emitted from the surface of a star and observed on Earth
are expected to have a gravitational redshift equal to the difference in gravitational potential in addition to the Doppler redshift. Each spectral line
should be gravitationally shifted towards the red end of the spectrum by a
little over one millionth of its original wavelength for a star with the mass
of the Sun. This effect was measured for the first time on the Sun in 1962.
Observations of much more massive and compact stars such as white dwarfs
have shown that the gravitational shift does occur and is of the correct
order of magnitude. Recently, the gravitational redshift of a neutron
star has also been measured from spectral lines in the x-ray range. The result
relates the mass and radius of the neutron star. If the mass is obtained by
other means (for example from the motion of the neutron star around a
companion star), then the radius of a neutron star can be measured.
The gravitational redshift increases without limit around a black hole
when an object approaches the event horizon of the black hole. A black
hole can be defined as a massive compact object surrounded by an area at
which the redshift as observed from a large distance is infinitely large.
When a star is imploding to form a black hole, the star is never observed
passing the Schwarzschild radius. As the star approaches this radius it will
appear increasingly redder and dimmer in a very short time. Such a star
in the past was called a frozen star instead of a black hole. However, in
a very short time the collapsing star emits its last photon and the object
thereafter is black. The terminology black hole is preferred over frozen
star.
The gravitational redshift z for a spherical mass M with radius R is
given by:

1 + z = (1 - 2GM/(c^2 R))^(-0.5),          (2.1)

where G is the gravitational constant. This formula reduces to the one
used at Earth for a gravitational acceleration g = GM/R^2 and a difference
in gravitational potential between R and R + d for small d. For radius
approaching 2GM/c^2, the redshift z approaches infinity. The quantity 2GM/c^2 is called
the Schwarzschild radius.
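Equation (2.1) is straightforward to evaluate; a sketch with rough solar values and an assumed typical neutron star:

```python
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8             # speed of light, m/s

def gravitational_redshift(M, R):
    """Redshift from the surface of mass M, radius R, per Eq. (2.1):
    1 + z = (1 - 2GM/(c^2 R))^(-0.5)."""
    return (1.0 - 2.0 * G * M / (C**2 * R)) ** -0.5 - 1.0

def schwarzschild_radius(M):
    return 2.0 * G * M / C**2

M_SUN, R_SUN = 1.989e30, 6.957e8
print(gravitational_redshift(M_SUN, R_SUN))   # ~2.1e-6, "a little over a millionth"

# Assumed typical neutron star: 1.4 solar masses, 12 km radius
M_NS, R_NS = 1.4 * M_SUN, 1.2e4
print(gravitational_redshift(M_NS, R_NS))     # ~0.24: easily measurable

print(schwarzschild_radius(M_SUN))            # ~2.95e3 m; z diverges as R -> this
```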
Corrections for gravitational redshift are common practice in many situations. With present-day accuracies, clocks in orbit around the Earth must
be corrected for this effect. This is the case with satellite-based navigational
systems such as the Global Positioning System (GPS). To get accuracies
of order 10 m, light travel times with an accuracy of order 30 ns (nanoseconds) have to be measured. Special relativistic time dilation (caused by
the velocity) and gravitational redshift corrections in these satellites are of
order 30 000 ns per day.
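The order of magnitude quoted here can be reproduced with a back-of-the-envelope sketch (the orbital radius and simple circular-orbit formulas are assumptions of the sketch):

```python
import math

G = 6.674e-11             # gravitational constant
C = 2.998e8               # speed of light, m/s
M_EARTH = 5.972e24        # kg
R_EARTH = 6.371e6         # m
R_GPS = 2.6571e7          # m, GPS orbital radius (assumed, ~20,200 km altitude)
DAY = 86400.0             # s

# Special-relativistic dilation: the satellite clock runs SLOW by v^2/(2c^2)
v = math.sqrt(G * M_EARTH / R_GPS)   # circular-orbit speed
sr_ns = (v**2 / (2.0 * C**2)) * DAY * 1e9

# Gravitational redshift: the satellite clock runs FAST by the
# difference in gravitational potential between orbit and ground
gr_ns = (G * M_EARTH / C**2) * (1.0 / R_EARTH - 1.0 / R_GPS) * DAY * 1e9

print(sr_ns)           # ~7,000 ns/day slow
print(gr_ns)           # ~46,000 ns/day fast
print(gr_ns - sr_ns)   # net ~38,000 ns/day, of order 30,000 ns per day
```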
Therefore, we can assert with confidence that the predictions of Relativity are confirmed to high accuracy over time periods of many days. New
corrections for epoch offset and rate for each clock are determined anew
typically once each day. These corrections differ by a few ns and a few
ns/day, respectively, from similar corrections for other days in the same
week. At much later times, unpredictable errors in the clocks build up with
time squared, so comparisons with predictions become increasingly uncertain unless these empirical corrections are used. However, within each day,
the clock corrections remain stable to within about 1 ns in epoch and 1
ns/day in rate.
Time dilation is also said to be found in Type SN1a supernovae with
large z (Blondin et al. 2008). The spectral aging rates were found to be a
function of 1 + z, which falsifies models that fail to predict time dilation.
The Strong Equivalence Principle takes this a stage further and asserts
that not only is the spacetime as in Special Relativity, but all the laws of
physics take the same form in the freely falling frame as they would in the
absence of gravity. A physical interaction behaves in a local, inertial frame
as if gravity were absent. Gravity is in the geometry of the gravitational
ether. This form of the Strong Equivalence Principle is crucial in that it
will allow us to deduce the generally valid laws governing physics once the
Special-relativistic forms are known. Note, however, that it is less easy to
design experiments that can test the Strong Equivalence Principle.
In noninertial frames there is a perceived force that is accounted for by
the acceleration of the frame, not by the direct influence of other matter.
Thus, we feel acceleration when cornering on the roads when we use a car
as the physical base of our reference frame. Similarly, there are Coriolis and
centrifugal forces when we define reference frames based on rotating matter
such as the Earth or Newton's bucket. The Coriolis and centrifugal forces in
Newtonian mechanics are regarded as nonphysical forces arising from the
use of a rotating reference frame. General Relativity has no way, locally,
to define these forces as distinct from those arising through the use of
any noninertial reference frame. The Strong Principle of Equivalence in
General Relativity states that there is no local experiment to distinguish
nonrotating free fall in a gravitational field from uniform motion in the
absence of a gravitational field. There is no gravity in a reference frame
in free fall. The observed gravity at the surface of the Earth from this
perspective is the force observed in a reference frame defined from matter
at the surface, which is not free. The matter within the Earth acts on the
surface matter from below. The action is analogous to the acceleration felt
in a car.
Einstein used an observation, known since the time of Galileo, that
the inertial and gravitational masses of an object happen to be the
same. He used this as the basis for the Strong Equivalence Principle, which
describes the effects of gravitation and acceleration as different perspectives
of the same thing (at least locally), and which he stated in 1907 as:
We shall therefore assume the complete physical equivalence of a gravitational field and the corresponding acceleration of the reference frame.
This assumption extends the principle of Relativity to the case of uniformly
accelerated motion of the reference frame.
That is, he postulated that no experiment could locally distinguish between a uniform gravitational field and uniform acceleration. The meaning
of the Strong Equivalence Principle has gradually broadened to include the
concept that no physical measurement within a given unaccelerated reference system can determine its state of motion. This implies that it is impossible to measure and, therefore, virtually meaningless to discuss changes in
fundamental physical constants such as the rest masses or electrical charges
of elementary particles in different states of relative motion. Any measured
change would represent either experimental error or a demonstration that
the theory of Relativity was wrong or incomplete.
The Strong Equivalence Principle implies that some frames of reference
must obey a non-Euclidean geometry, that matter and energy curve spacetime, and that gravity can be seen purely as a result of this geometry. This
The first clear example of time dilation was provided over fifty years
ago by an experiment detecting muons. These particles are produced at the
outer edge of our atmosphere by incoming cosmic rays hitting the first traces
of air. They are unstable particles, with a half-life of 1.5 microseconds
and are constantly being produced many miles up. There is a constant rain
of them towards the surface of the Earth, moving at very close to the speed
of light. A detector placed near the top of Mount Washington (at 6000
feet above sea level) in 1941 measured about 570 muons per hour coming
in. These muons are dying as they fall. If the detector is moved to a
lower altitude, fewer muons should be detected because a fraction of those
that came down past the 6000 foot level will die before they get to a lower
altitude detector. They should reach the 4500 foot level 1.5 microseconds
after passing the 6000 foot level. If half of them die off in 1.5 microseconds,
as claimed above, we should only expect to register about 570/2 = 285
per hour with the same detector at this level. Only about 35 per hour are
expected to survive down to sea level. When the detector was brought down
to sea level, it detected about 400 per hour! The reason they didn't decay
is that in their frame of reference, much less time had passed. Their actual
speed is about 0.994c, corresponding to a time dilation factor of about 9.
From the top of Mount Washington to sea level in the 6 microsecond trip,
their clocks register only 6/9 = 0.67 microseconds. Only about 428 per
hour of them are expected at sea level.
What does this look like from the muons' point of view? How do they
manage to get so far in so little time? Mount Washington and the Earth's
surface are approaching at 0.994c, or about 1,000 feet per microsecond.
But in the 0.67 microseconds it takes them to get to sea level, it would seem
that to them sea level could only get 670 feet closer. How could they travel
the whole 6000 feet from the top of Mount Washington? The answer is the
Fitzgerald contraction: to them, Mount Washington is squashed in a vertical
direction (the direction of motion) by a factor of (1 - V^2/c^2)^0.5, the same
as the time dilation factor, which for the muons is 9. So, to the muons,
Mount Washington is only 670 feet high. This is why they can get down it
so fast.
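The muon counts above can be reproduced with a short decay calculation. This is a sketch of the numbers in the text (half-life 1.5 microseconds, speed 0.994c, about 570 muons per hour at the 6000-foot detector).

```python
# A sketch of the muon numbers in the text: half-life 1.5 us, speed 0.994c,
# about 570 muons per hour at the 6000-foot detector.
half_life = 1.5e-6          # s
beta = 0.994
gamma = (1.0 - beta**2) ** -0.5   # time-dilation factor, about 9

c_ft = 9.84e8               # speed of light, feet per second
trip_time = 6000.0 / (beta * c_ft)   # lab-frame travel time, about 6 us

def survivors(rate, elapsed):
    """Exponential decay: rate * 2^(-elapsed / half_life)."""
    return rate * 2.0 ** (-elapsed / half_life)

naive = survivors(570.0, trip_time)            # ignoring time dilation
dilated = survivors(570.0, trip_time / gamma)  # using the muons' proper time
print(gamma, naive, dilated)   # gamma ~9; ~33 vs ~418 per hour
```

Without dilation about 33 muons per hour should survive to sea level; with the proper-time correction the prediction rises to roughly 418 per hour, consistent with the ~400 per hour actually detected.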
The twin paradox is a thought experiment in Special Relativity, in which
a twin makes a journey into space in a high-speed rocket and returns home
to find he has aged less than his identical twin who stayed on Earth. This
result appears puzzling because each twin sees the other twin as traveling, and so, according to a naive application of time dilation, each should
paradoxically find the other to have aged more slowly. The result is not a
paradox in the true sense, because it can be resolved within the standard
framework of Special and General Relativity. The effect has been verified
experimentally using measurements of precise clocks flown in airplanes and
satellites.
Starting with Paul Langevin in 1911, there have been numerous explanations of this paradox, many based upon there being no contradiction
because there is no symmetry. Because only one twin has undergone acceleration and deceleration, the two cases differ. One version of the asymmetry
argument made by Max von Laue in 1913 is that the traveling twin uses
two inertial frames, one on the way up and the other on the way down. So
switching frames is the cause of the difference, not acceleration.
Other explanations account for the effects of acceleration. Einstein,
Born, and Moller invoked gravitational time dilation to explain the aging
based upon the effects of acceleration. Both gravitational time dilation and
Special Relativity can be used to explain the Hafele-Keating experiment on
time dilation using precise measurements of clocks flown in airplanes.
A hidden assumption is that physical clocks measure time. A clock is a
physical device that is supposed to reflect a constant duration between ticks.
That is, the model of how a clock works is vital to the data interpretation.
All processes (chemical, biological, measuring apparatus functioning,
human perception involving the eye and brain, the communication of force,
etc.) are constrained by the speed of light. There is clock functioning at
every level, dependent on light speed and the inherent delay at even the
atomic level. Thus, we speak of the twin paradox, involving biological
aging. It is in no way different from clock time keeping.
For example, consider a pendulum clock. Upon the airplane's takeoff,
the acceleration may hold the pendulum to one side and thus slow the ticks.
The same happens upon landing. At the higher altitude, the pendulum
swings more slowly. The workings of a cesium or atomic clock are unknown.
That is, the cause of the level decay is unknown. Perhaps acceleration of
the atom causes decay rates to change.
Albert Einstein publishes the General Theory of Relativity in 1915,
showing that an energy density warps spacetime. Mass is a form of energy
density. Einstein noted the twin in the twin paradox must undergo an acceleration to return. General Relativity or General Relativity Theory is a
fundamental physical theory of gravitation that corrects and extends Newtonian gravitation, especially at the macroscopic level of stars or planets.
General Relativity may be regarded as an extension of Special Relativity.
Special Relativity may be regarded as an extension of Newtonian mechanics
universe.
The exponential expansion of the scale factor means that the physical distance between any two non-accelerating observers will eventually be
growing faster than the speed of light. At this point those two observers
will no longer be able to make contact. Therefore any observer in a de
Sitter universe would see horizons beyond which that observer cannot see.
If our universe is approaching a de Sitter universe then eventually we will
not be able to observe any galaxies other than our own Milky Way.
Another application of de Sitter gravitational aether is in the early universe during cosmic inflation. Many inflationary models are approximately
a de Sitter gravitational aether and can be modeled by giving the Hubble parameter a mild time dependence. For simplicity, some calculations
involving inflation in the early universe can be performed in a de Sitter gravitational aether rather than a more realistic inflationary universe. By using
the de Sitter universe instead, where the expansion is truly exponential,
there are many simplifications.
Harlow Shapley demonstrates in 1918 that globular clusters are arranged
in a spheroid or halo whose center is not the Earth. He decides that its
center is the center of the galaxy.
Harlow Shapley and Heber Curtis in 1920 debate whether or not the spiral nebulae lie within the Milky Way. Edwin Hubble resolves the Shapley-Curtis debate in 1923 by finding Cepheids in Andromeda.
Vesto Slipher in 1922 summarizes his findings on the spiral nebulae's
systematic redshift. Howard Percy Robertson in 1928 briefly mentions that
Vesto Slipher's redshift measurements combined with brightness measurements of the same galaxies indicate a redshift-distance relation. Edwin
Hubble in 1929 demonstrates the linear redshift-distance relation and thus
shows the expansion of the universe, V = cz = H0 D, where V is the receding
velocity and H0 is Hubble's constant. This is Hubble's Law. However, the assumption of Hubble's Law is that the redshift is caused by the Doppler shift.
The redshift could be caused by another mechanism. The nebulae thought
to be in the Milky Way's halo are much further away and the universe is
much larger.
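The low-redshift reading of Hubble's Law can be sketched numerically. The value H0 = 70 km/s/Mpc below is an assumed modern figure; the text gives no number, and the distance is meaningful only under the Doppler interpretation the text flags as an assumption.

```python
# A small sketch of Hubble's Law, V = c z = H0 D, at low redshift.
# H0 = 70 km/s/Mpc is an assumed modern value; the text gives no number.
c_km_s = 2.998e5    # speed of light, km/s
H0 = 70.0           # Hubble constant, km/s per Mpc (assumed)

def recession_velocity(z):
    """Low-z approximation: V = c z."""
    return c_km_s * z

def hubble_distance(z):
    """Distance D (Mpc) implied by V = H0 D."""
    return recession_velocity(z) / H0

z = 0.01
print(recession_velocity(z), hubble_distance(z))   # ~2998 km/s, ~43 Mpc
```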
Spiral and elliptical galaxies differ significantly. The great majority of
elliptical galaxies are observed to be much poorer in cool gas and hydrogen
than spiral galaxies of comparable luminosity. The bulk of the interstellar
matter (ISM) in spiral galaxies is hydrogen. The bulk of the ISM in elliptical galaxies consists of hot plasma distributed approximately spherically
rather than in a thin disk. A characteristic of elliptical galaxies not found
the gravitation constant and m_p and m_e are the masses of the proton and
electron, respectively.
If any one of the constants changes over time or over distance, then at
least one other should change, also. G is the constant of interest to
cosmologists. Measurements by Bahcall and Schmidt in 1967, by Roberts
in 1977, and others show G appears to not change over time. These measurements were on an extragalactic scale (z ≈ 0.2). Modified Newtonian
Dynamics (MOND) suggests variation in the acceleration of Newton's equations over galactic scale distances. If these ratios were constant over time,
distance, or other parameter, the connection between gravity and quantum
mechanics would be unmistakable and must be a part of a TOE.
Some scientists believe that the hypothesis is the result of a numerological coincidence. Robert Dicke in 1961 argues that carbon-based life
can only arise when the Dirac large numbers hypothesis is true, because
otherwise the fusion of hydrogen in stars and the creation of carbon would
not occur. This is
the first use of the Weak Anthropic Principle. A few proponents of nonstandard cosmologies refer to Diracs cosmology as a foundational basis for
their ideas.
The Brans-Dicke theory and the Rosen bi-metric theory are modifications of General Relativity and cannot be ruled out by current experiments.
The Brans-Dicke theory introduces a long-range scalar field φ that acts as
a modifier to the gravitational constant. Specifically, the effective gravitational constant is proportional to 1/φ. Thus, the gravitational constant
is a function of spacetime, specifically the totality of mass-energy in the
universe. The theory is characterized by a parameter, ω, called the Dicke
coupling constant, which describes the coupling between mass-energy and
the scalar field. ω is assumed to be a fundamental constant throughout
spacetime in the basic version of the theory. General Relativity is recovered in the limit that ω tends to infinity.
The constancy of G is one of the bases of General Relativity, and
measurements indicate that |Ġ/G| < 10^-12 per year.
Nucleosynthesis is the process of creating new atomic nuclei either by
nuclear fusion or nuclear fission. One of the fundamental problems of astrophysics is the problem of the origin of chemical elements. A cosmological
model must show the formation of hydrogen; the rest is astrophysics and
nuclear physics. However, this leaves the problem of the formation of the
other elements, rather important ones like carbon, oxygen, iron, and gold.
As it turns out, elements can be fabricated in a variety of astrophysical sites.
Most of these sites have been identified, isotope by isotope. The principal
modes of nucleosynthesis, along with isotopic abundances, for naturally
occurring isotopes have been tabulated. There are a number of astrophysical processes that are believed to be responsible for nucleosynthesis in the
universe. Nucleosynthesis is one of the more notable triumphs of theoretical astrophysics. The model is convincing because the steps of elemental
development are duplicated in Earthborn experiments.
The Big Bang model is praised for being consistent with nucleosynthesis
of elements. However, this does not prove the Big Bang model. Because
the Big Bang model posits a very high initial temperature, elements such as
hydrogen, helium, and some lithium would have been formed just moments
after the Big Bang (Alpher et al. 1948). Big Bang nucleosynthesis occurred
within the first three minutes of the universe and is responsible for most
of the helium-4 and deuterium in the universe. Because of the very short
period in which Big Bang nucleosynthesis occurred, no elements heavier
than lithium could be formed. The predicted abundance of deuterium,
helium and lithium depends on the density of ordinary matter in the early
universe. The yield of helium is relatively insensitive to the abundance of
ordinary matter above a certain threshold. We generically expect about
24% of the ordinary matter in the universe to be helium produced in the
Big Bang. This is in very good agreement with observations and is another
major triumph for the Big Bang theory.
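The ~24% figure follows from a short counting argument: nearly all available neutrons end up bound in helium-4, so the helium mass fraction is set by the neutron-to-proton ratio. The ratio n/p ≈ 1/7 at freeze-out used below is a standard textbook value, not a number from the text.

```python
# Where the ~24% helium figure comes from: nearly all neutrons end up bound
# in helium-4, so the mass fraction follows from the neutron-to-proton ratio.
# n/p ~ 1/7 at freeze-out is a standard textbook value, not from the text.
def helium_mass_fraction(n_over_p):
    """Y = 2(n/p) / (1 + n/p): He-4 carries two neutrons and two protons."""
    return 2.0 * n_over_p / (1.0 + n_over_p)

print(helium_mass_fraction(1.0 / 7.0))   # 0.25, close to the observed ~24%
```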
Stellar nucleosynthesis is believed to create many of the heavier elements
between lithium and iron. Particularly important is carbon. One important
process is the s-process that involves the slow absorption of neutrons.
Supernova nucleosynthesis produces most of the elements heavier than
iron (iron peak and r-process elements). Supernovae are also the most favored candidate site of the r-process, in which elements are produced by
rapid absorption of neutrons, although there are still some major unanswered questions
about this.
Cosmic ray spallation (partial fragmentation) produces some lighter elements such as lithium and boron. This process was discovered somewhat
by accident. There was great interest in the 1970s in processes that could
generate deuterium. Cosmic ray spallation was suggested as a possible process. However, it was later found that spallation could not generate much
deuterium, although it could generate much lithium and boron.
Fred Hoyle and Jayant Narlikar in 1963 show that the cosmological
Steady State Theory can explain the isotropy of the universe because deviations from isotropy and homogeneity exponentially decay in time. The
Steady State Theory provides for hydrogen creation between galaxies followed by infall into galaxies.
George Gamow in 1948 predicts the existence of the Cosmic Microwave
Background radiation by considering the behavior of primordial radiation
in an expanding universe. Arno Penzias and Robert Wilson at Bell Labs,
together with Bernie Burke, Robert Dicke, and James Peebles, discover microwave radiation from all directions in the sky. This has been named the
Cosmic Microwave Background radiation in 1965.
Brandon Carter in 1968 speculates that perhaps the fundamental constants of nature must lie within a restricted range to allow the emergence
of life. This is the first use of the Strong Anthropic Principle.
Robert Dicke in 1969 formally presents the Big Bang flatness problem.
Data in 2000 from several cosmic microwave background experiments give
strong evidence that the Universe is flat (spacetime is not curved), with
important implications for the formation of large-scale structure. The flatness problem is a cosmological problem with the Big Bang theory, which is
solved by hypothesizing an inflationary universe.
Charles Misner formally presents the Big Bang horizon problem in 1969.
The horizon problem is a problem with the standard cosmological model
of the Big Bang, which was identified in the 1970s. Because information
can travel no faster than the speed of light, there is a limit to the region of
gravitational ether that is in causal contact with any particular point in the
universe. The extent of this region is a function of how long the universe
has existed. Two entities are said to be in causal contact if there may be
an event that has affected both in a causal way. The particle horizon in
cosmology is the distance from which particles of positive mass or of zero
mass can have traveled to the observer in the age of the Universe. The Big
Bang model suggests the Cosmic Microwave Background radiation comes
from 15 billion light years away. When the light was emitted, the universe
was 300,000 years old. Light would not have been able to contact points
across the universe because their spheres of causality do not overlap. So
of an inflationary universe, in which very shortly after the Big Bang, the
universe increased in size by an enormous factor. Such inflation would have
smoothed out any nonflatness originally present and resulted in a universe
with a density extremely close to the critical density.
Fritz Zwicky in 1933 applies the virial theorem to the Coma cluster and
obtains evidence for unseen mass. He dubbed this Dark Matter, meaning
matter that did not reflect, refract, emit, or absorb light. The missing mass
was identified in 1997 as baryonic dust by X-ray observation.
Jeremiah Ostriker and James Peebles in 1973 discover that the amount
of visible matter in the disks of typical spiral galaxies is not enough for
Newtonian gravitation to keep the disks from flying apart or drastically
changing shape. The Dark Matter hypothesis and MOND model attempt
to reconcile this discrepancy.
Edward Tryon proposes in 1973 that the universe may be a large-scale
quantum mechanical vacuum fluctuation where positive mass-energy is balanced by negative gravitational potential energy.
Sandra Faber and Robert Jackson in 1976 discover the Faber-Jackson
relation between the luminosity of an elliptical galaxy and the velocity dispersion in its center.
R. Brent Tully and Richard Fisher in 1977 publish the Tully-Fisher
relation between the luminosity of an isolated normal spiral galaxy and the
velocity of the flat part of its rotation curve. The Tully-Fisher relation
implies a tight relation between visible matter and absolute magnitude.
That there may be other matter (dark matter) in a galaxy is inconsistent
with this relation.
The discrepancy between the Newtonian estimated mass in galaxies and
galaxy clusters and the observation of star and gas kinematics is well established. Vera Rubin, Kent Ford, N. Thonnard, and Albert Bosma in 1978
measure the rotation curves of several spiral galaxies. They find significant
deviations from what is predicted by the Newtonian gravitation of visible
stars. This discovery of what is known as flat rotation curves is the most
direct and robust evidence of dark matter.
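The discrepancy can be illustrated with the Newtonian prediction itself. For a central point mass, the circular speed falls as r^(-1/2), while the measured outer-disk curves stay roughly flat. The mass value below is assumed for illustration.

```python
# Keplerian expectation vs. a flat rotation curve: a sketch with assumed
# values. For a central point mass, Newtonian gravity gives v = sqrt(GM/r),
# falling as r^(-1/2); observed outer-disk curves stay roughly flat instead.
G = 6.674e-11
M_sun = 1.989e30
kpc = 3.086e19      # meters per kiloparsec

def keplerian_v(M, r_kpc):
    """Circular speed (km/s) around an enclosed mass M at radius r (kpc)."""
    return (G * M / (r_kpc * kpc)) ** 0.5 / 1000.0

M_visible = 1e11 * M_sun    # assumed luminous mass of a large spiral
for r in (5.0, 10.0, 20.0, 40.0):
    print(r, keplerian_v(M_visible, r))   # halves for each 4x in radius
```

With 10^11 solar masses enclosed, the Keplerian speed at 10 kpc is about 207 km/s and should halve by 40 kpc; flat observed curves do not show that decline, which is the discrepancy described above.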
The rotation velocity v (km s^-1) of a particle in the plane of a spiral
galaxy (hereinafter galaxy) reflects the forces acting on the particle. An
explanation of v^2 as a function of radius r (kpc) from the center of a galaxy
(the rotation curve, RC) requires a knowledge of galaxy dynamics and an
explanation of radically differing slopes. The v and, thus, the RC is measured along the major
axis. Although v is non-relativistic, the results of calculations using RC
models are used in cosmological calculations. Thus, the RC is an excellent
weaker. Because the central singularity is so far away from the horizon, a
hypothetical astronaut traveling towards the black hole center would not
experience significant tidal force until very deep into the black hole.
3. The SBH at the center of the Milky Way is 10,000 times weaker than
theory predicts.
Current physics holds that black holes of this size can form in only three
ways:
1. by slow accretion of matter starting from a stellar size,
2. by galaxy merger that drives gas infall, or
3. directly from external pressure in the first instants after the Big Bang.
The first method requires a long time and large amounts of matter
available for the black hole growth.
Paolo Padovani and other leading astronomers announced in 2004 their
discovery of 30 previously hidden supermassive black holes outside the
Milky Way. Their discovery also suggests there are at least twice as many
of these black holes as previously thought.
Two competing groups of astronomers (one led by Andrea Ghez from
UCLA, the other by Reinhard Genzel from Germany) are tracking stars'
rotation around the center of the Milky Way.
Andrea Mia Ghez's current research involves using high spatial resolution imaging techniques to study star-forming regions and the suspected
supermassive black hole at the center of the Milky Way galaxy. She uses
the kinematics of stars near the center of the galaxy as a probe to investigate this region. Her research finds the mass of approximately three million
Suns is within 60 AU of the center.
Ghez et al. (2000) and Ferrarese and Merritt (2002) have observed Keplerian motion to within one part in 100 in elliptical orbits of stars that are
from less than a pc to a few 1000 pc from the center of galaxies.
The orbits of stars within nine light hours of the Galaxy center indicate
the presence of a large amount of mass within the orbits (Ghez et al. 2000,
2003a,b; Schodel 2002; Dunning-Davies 2004). To achieve the velocities of
1300 km s^-1 to 9000 km s^-1 (Schodel 2002) and high accelerations (Ghez
et al. 2000), there must be a huge amount of very dense particles such as
millions of black holes, dense quark stars (Prasad and Bhalerao 2003, and
references therein), and ionized iron (Wang et al. 2002) distributed inside
the innermost orbit of luminous matter. The mass M (in M_⊙) within a few
light hours of the center of galaxies varies from 10^6 M_⊙ to 10^10 M_⊙ (Ferrarese
& Merritt 2000; Gebhardt et al. 2000a). The M can be distributed over
the volume with a density of at least 10^12 M_⊙ pc^-3 (Ghez et al. 2000, 1998,
2003b; Dunning-Davies 2004). The orbits of stars closest to the center
are approximately 1,169 times (Schodel 2002) the Schwarzschild radius of
the theorized supermassive black hole (SBH) (Ghez et al. 2000). However,
such a large mass crowded into a ball with a radius of less than 60 AU must
either quickly dissipate or quickly collapse into a SBH (Kormendy &
Richstone 1995; Magorrian et al. 1998), or there must exist a force in
addition to the centrifugal force holding the mass from collapse. Mouawad
et al. (2004) suggested there is some extended mass around Sgr A*. Also,
models of supermassive fermion balls wherein the gravitational pressure
is balanced by the degeneracy pressure of the fermions due to the Pauli
exclusion principle are not excluded (Bilic et al. 2003). A strong repulsive
force at the center of galaxies would explain the shells of shocked gas around
galactic nuclei (Binney and Merrifield 1998, page 595)(Konigl 2003), the
apparent inactivity of the central object (Baganoff et al. 2001; Baganoff
2003a; Nayakshin and Sunyaev 2003; Zhao et al. 2003), and the surprising
accuracy of reverberation mapping as a technique to determine M (Merritt
& Ferrarese 2001b). However, discrepancies have been noted among the
methods used to measure M (Gebhardt et al. 2000b; Merritt & Ferrarese
2001b, and references therein).
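The scales quoted above can be checked with a short calculation. This sketch uses the ~3e6 solar masses within 60 AU and the ~10^12 M_⊙ pc^-3 density figure from the text; the physical constants are standard values.

```python
# A sketch of the scales quoted above: ~3e6 solar masses within 60 AU, and
# innermost orbits at ~1,169 Schwarzschild radii. Constants are standard.
import math

G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30
AU = 1.496e11       # meters
pc = 3.086e16       # meters

M = 3e6 * M_sun
rs = 2.0 * G * M / c**2      # Schwarzschild radius of the central mass
print(rs / AU)               # a small fraction of 1 AU

r_pc = 60.0 * AU / pc        # 60 AU expressed in parsecs
density = 3e6 / ((4.0 / 3.0) * math.pi * r_pc**3)   # M_sun per pc^3
print(density)   # far above the ~1e12 M_sun/pc^3 lower bound in the text
```

The Schwarzschild radius of a 3-million-solar-mass object is only a few hundredths of an AU, so a 60 AU enclosing radius is indeed orders of magnitude larger, and the implied mean density comfortably exceeds the 10^12 M_⊙ pc^-3 lower bound cited.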
The first published calculations of the slope of the M to velocity dispersion σ_v (in km s^-1) curve (the M-σ relation) varied between 5.27 ± 0.40 (Ferrarese and Merritt 2000a) and 3.75 ± 0.3 (Gebhardt et al. 2000a). Tremaine et al. (2002, and references therein) suggested the range
of slopes is caused by systematic differences in the velocity dispersions
used by different groups. However, the origin of these differences remains
unclear.
Ferrarese (2002) found the ratio of the M to the M_DM of the theorized dark
matter halo around galaxies was a positive value that decreased with halo
mass. However, if the intrinsic rotation curve is rising (Hodge & Castelaz
2003b), the effect of the force of M_DM in the equations implies the effect
of the center object must be repulsive. Such a repulsive force was called
a wind by Shu et al. (2003); Silk and Rees (1998). The wind (a gas)
exerted a repulsive force acting on the cross sectional area of particles.
Therefore, denser particles such as black holes move inward relative to less
dense particles.
A multitude of X-ray point sources, highly ionized iron, and radio flares
without accompanying large variation at longer wavelengths have been re-
ported near the center of the Milky Way (Baganoff et al. 2001; Baganoff
2003a,b; Binney and Merrifield 1998; Genzel et al. 2003; Zhao et al. 2003;
Wang et al. 2002).
McLure & Dunlop (2002) found that the mass in the central region
is 0.0012 of the mass of the bulge. Ferrarese and Merritt (2002) reported
that approximately 0.1% of a galaxy's luminous mass is at the center of
galaxies and that the density of SBHs in the universe agrees with the
density inferred from observation of quasars. Merritt & Ferrarese (2001a)
found similar results in their study of the M-σ relation. Ferrarese (2002)
found a tight relation between rotation velocity v_c in the outer disk region
and bulge velocity dispersion σ_c (v_c ∝ σ_c), which strongly supports a
relationship of a center force with total gravitational mass of a galaxy.
Wandel (2003) showed the M of AGN galaxies and their bulge luminosity
follow the same relationships as their ordinary (inactive) galaxies, with
the exception of narrow line AGN. Graham et al. (2003, 2002); Graham
et al. (2003b) found correlations between M and structural parameters of
elliptical galaxies and bulges. Either the dynamics of many galaxies are
producing the same increase of mass in the center at the same rate or a
feedback controlled mechanism exists to evaporate the mass increase that
changes as the rate of inflow changes as suggested by Merritt & Ferrarese
(2001b). The RCs imply the dynamics of galaxies differ so the former
explanation is unlikely.
The first detection of the small-scale structure in the Cosmic Microwave
Background and the confirmation that the Cosmic Microwave Background
radiation is black body radiation came in 1995.
Adam Riess in 1995 discovers a deviation from the Hubble Law in observations of Type Ia supernovae (SN1a), providing the first evidence for a
nonzero cosmological constant. The Universe expansion was found to be
accelerating by measuring Type Ia supernovae in 1997.
The 2dF Galaxy Redshift Survey in 1998 maps the large-scale structure
in a section of the Universe close to the Milky Way. The plot of angle versus
redshift showed filaments and voids in the position of galaxies.
Evidence for the fine structure constant α varying over the lifetime of the
universe is published in 2001. Recent improvements in astronomical techniques brought first hints in 2001 that α might change its value over time.
However, in April 2004, new and more-detailed observations on quasars
made using the UVES spectrograph on Kueyen, one of the 8.2-m telescopes
of ESO's Very Large Telescope array at Paranal (Chile), put limits to any
change in α at 0.6 parts per million over the past ten thousand million years.
Because this limit contradicts the 2001 results, the question of whether α is
constant or not is open again, and the scientists involved hotly debate the
correctness of the contradicting experiments.
NASA's WMAP takes the first detailed picture of the Cosmic Microwave
Background radiation in 2003. According to the Lambda-CDM Big Bang
model, the image supports a universe that is 13.7 billion years old within
one percent error and that is consistent with the inflationary theory. Take
this with extreme skepticism.
Although the concept of an isotropically expanding universe is straightforward enough to understand locally, there are a number of conceptual
traps lurking in wait for those who try to make global sense of the expansion. The most common sources of error in thinking about the basics of
cosmology are discussed below. Some of these may seem rather elementary when written down, but they are all fallacies to be encountered quite
commonly in the minds of even the most able students.
(1) A common question asked by laymen and some physicists is: what
does the universe expand into? The very terminology of the big bang
suggests an explosion, which flings debris out into some void. Such a picture
is strongly suggested by many semi-popular descriptions, which commonly
include a description of the initial instant as one where all the matter in
the universe is gathered at a single point, or something to that effect. This
phrase can be traced back to Lemaître's unfortunate term the primeval
atom. Describing the origin of the expansion as an explosion is probably
not a good idea in any case. It suggests some input of energy that moves
matter from an initial state of rest. Classically, this is false. The expansion
merely appears as an initial condition. This might reasonably seem to be
evading the point. One of the advantages of inflationary cosmology is that
it supplies an explicit mechanism for starting the expansion, the repulsive
effect of vacuum energy. However, if the big bang is to be thought of
explosively, then it is really many explosions that happen everywhere at
once, Because the expansion fills all of space, it is not possible to be outside
the explosion. Because the density rises without limit as t 0, the mass
within any sphere today including the size of our present horizon was once
packed into an arbitrarily small volume. Nevertheless, this does not justify
the primeval atom terminology unless the universe is closed. The mass
of an open universe is infinite however far back we run the clock. There is
infinitely more mass outside a given volume than inside it.
(2) For small redshifts, the interpretation of the redshift as a Doppler
shift is quite clear. What is not so clear is what to do when the redshift
Special Relativity.
(5) If the Hubble Law implies expansion, projection backward in time
implies a time existed when all matter was at a point. This means that
current physical laws, as far as we know, fail there; it does not necessarily
mean the creation of the universe. The Steady State model has evolved into
the cyclic model of the universe, which also allows expansion and the
Hubble Law.
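Projecting the expansion backward at a constant rate gives the characteristic time scale 1/H0, a rough age for the universe. The sketch below assumes a round value H0 = 70 km/s/Mpc, which is not taken from the text.

```python
# Hubble-time sketch: naively running the expansion backward at a constant
# rate gives an age of order 1/H0. H0 = 70 km/s/Mpc is an assumed round
# value, not a figure from the text.
MPC_IN_KM = 3.0857e19          # kilometers per megaparsec
SECONDS_PER_GYR = 3.156e16     # seconds per billion years

H0 = 70.0                      # km/s per Mpc (assumed)
H0_per_second = H0 / MPC_IN_KM # convert to 1/s
hubble_time_gyr = 1.0 / H0_per_second / SECONDS_PER_GYR

print(round(hubble_time_gyr, 1))  # about 14 billion years
```

The result is close to the 13.7 billion years quoted above, though the exact age depends on the expansion history, not just the present rate.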
As it stands today, the Standard Cosmological Model is dependent on
three assumptions:
1. The universality of physical laws.
2. The Cosmological Principle.
3. The Copernican Principle.
There are other hidden postulates:
1. Adiabatic universe - matter and energy are neither introduced into nor
removed from the universe. Therefore, the universe must have started at a
high temperature that is declining with the volume expansion.
2. The Hubble redshift is caused only by Doppler shift (velocity).
3. The strong equivalence principle holds, which is still unproven.
4. Matter is continually condensing into galaxies.
5. Attractive gravity is the prime force influencing the large-scale structure
of the universe (no other fifth force).
6. General Relativity is the ultimate model of gravity.
7. Uniformity of the Cosmic Microwave Background radiation is caused by
uniform expansion of space.
8. The Cosmic Microwave Background radiation is indeed cosmic and in the
background. There are no data concerning the distance between the Earth
and the origin of the Cosmic Microwave Background radiation.
That there are three observational pillars supporting the Big Bang
theory of cosmology is generally accepted. These are the Hubble-type expansion seen in the redshift of galaxies, the detailed measurements of the
Cosmic Microwave Background, and the abundance of light elements. Additionally, the observed correlation function of large-scale structure in the
universe fits well with standard Big Bang theory. If any of these is disproved,
the Standard Model of Cosmology collapses. Accordingly, any TOE must
replace each of these observational pillars with a model consistent with the
small.
Problems with the Standard Model as stated by proponents:
1. Historically, a number of problems have arisen within the Big Bang
theory. Some of them are today mainly of historical interest and have been
avoided either through modifications to the theory or as the result of better
observations.
2. The cuspy halo problem arises from cosmological simulations that seem
to indicate cold dark matter would form cuspy distributions, that is, distributions increasing sharply to a high value at a central point in the most
dense areas of the universe. This would imply that the center of our galaxy,
for example, should exhibit a higher dark matter density than other areas.
However, it seems rather that the centers of these galaxies likely have
no cusp in the dark matter distribution at all. This is not an intractable
problem, however, because the relationships between the baryonic matter's
distribution and the dark matter's distribution in areas of high baryonic
matter concentration have not been adequately explored. A high density
of baryonic matter would have a different distribution due to the effects
of forces other than gravity. The distribution of baryonic matter, in turn,
might affect the cuspy nature of the density profile for dark matter and the
dwarf galaxy problem of cold dark matter.
3. The dwarf galaxy problem arises from numerical cosmological
simulations that predict the evolution of the distribution of matter in the
universe. Dark matter seems to cluster hierarchically and in ever-increasing
number counts for smaller and smaller halos. However, while there
seem to be enough observed normal-sized galaxies to account for this
distribution, the number of dwarf galaxies is orders of magnitude lower than
expected from simulation.
There are a small number of proponents of nonstandard cosmologies
who believe that there was no Big Bang at all. They claim that solutions to standard problems in the Big Bang involve ad hoc modifications
and addenda to the theory. Most often attacked are the parts of Standard
Cosmology that include dark matter, dark energy, and cosmic inflation.
These features of the universe are each strongly suggested by observations
of the Cosmic Microwave Background, large-scale structure, and Type Ia
supernovae. While the gravitational effects of these features are understood
observationally and theoretically, they have not yet been incorporated into
the Standard Model of particle physics in an accepted way. Though such
aspects of standard cosmology remain inadequately explained, the vast majority of astronomers and physicists accept that the close agreement between
Big Bang theory and observation has firmly established all the basic parts
of the theory.
The following is a short list of standard Big Bang problems and puzzles:
1. The horizon problem results from the premise that information cannot
travel faster than light; hence, two regions of gravitational ether separated
by a distance greater than the speed of light multiplied by the age of the
universe cannot be in causal contact. This is inconsistent with the
quantum entanglement observations. The observed isotropy of the Cosmic
Microwave Background is problematic because the horizon size at the time
of its formation corresponds to a size that is about 2 degrees on the sky.
If the universe has had the same expansion history since the Planck epoch,
there is no mechanism to cause these regions to have the same temperature.
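The "about 2 degrees" figure can be estimated with a back-of-the-envelope calculation. In a flat, matter-dominated universe the comoving particle horizon grows as the square root of the scale factor, so the horizon at decoupling subtends roughly (1 + z_dec)^(-1/2) radians today; the decoupling redshift z_dec = 1100 below is an assumed textbook value.

```python
import math

# Horizon-angle sketch: in a flat, matter-dominated universe the comoving
# particle horizon grows as a^(1/2), so the horizon at decoupling subtends
# roughly (1 + z_dec)^(-1/2) radians on today's sky. z_dec = 1100 is the
# standard decoupling redshift (an assumed value, not from the text).
z_dec = 1100
theta_rad = (1 + z_dec) ** -0.5
theta_deg = math.degrees(theta_rad)
print(round(theta_deg, 1))  # about 1.7 degrees
```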
2. This apparent inconsistency is resolved by inflationary theory, in which a
homogeneous and isotropic scalar energy field dominates the universe at a
time 10^-35 seconds after the Planck epoch. Heisenberg's uncertainty
principle predicts that during the inflationary phase there would be quantum
thermal fluctuations, which would be magnified to cosmic scale. These
fluctuations serve as the seeds of all current structure in the universe. After
inflation, the universe expands according to a Hubble Law, and regions that
were out of causal contact come back into the horizon. This explains the
observed isotropy of the Cosmic Microwave Background radiation. Inflation
predicts that the primordial fluctuations are nearly scale invariant and
Gaussian, which has been accurately confirmed by measurements of the
Cosmic Microwave Background radiation.
3. The flatness problem is an observational problem that results from
considerations of the geometry associated with the FLRW metric. The
universe can have three different kinds of geometry: hyperbolic geometry,
Euclidean geometry, or elliptic geometry. The density of the universe
determines the geometry: hyperbolic geometry results from a density less
than the critical density, elliptic geometry results from a density greater
than the critical density, and Euclidean geometry results from exactly the
critical density. The universe is required to be within one part in 10^15 of
the critical density in its earliest stages. Any greater deviation would have
caused either a Heat Death or a Big Crunch, and the universe would not
exist as it does today. The resolution to this problem is again offered by
inflationary theory. During the inflationary period, spacetime expanded to
such an extent that any residual curvature associated with it would have
been smoothed out to a high degree of precision. Thus, inflation drove the
universe to be flat.
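The "one part in 10^15" tuning follows from how the density deviation grows: |Ω − 1| scales roughly as the scale factor a in the matter era and as a² in the radiation era. The sketch below back-extrapolates a generous present-day bound using assumed round values for the equality and nucleosynthesis epochs.

```python
# Flatness-problem sketch: |Omega - 1| grows like a in the matter era and
# like a^2 in the radiation era, so any curvature allowed today
# back-extrapolates to an extremely tiny value at early times. The scale
# factors below (matter-radiation equality, nucleosynthesis) are assumed
# round values, not figures from the text.
omega_dev_today = 0.01      # a generous upper bound on |Omega - 1| now
a_eq = 1.0 / 3400.0         # matter-radiation equality (assumed)
a_bbn = 3e-9                # Big Bang nucleosynthesis (assumed)

dev_at_eq = omega_dev_today * a_eq               # matter era: ~ a
dev_at_bbn = dev_at_eq * (a_bbn / a_eq) ** 2     # radiation era: ~ a^2
print(f"{dev_at_bbn:.1e}")  # of order one part in 10^15 or smaller
```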
4. The magnetic monopole objection was raised in the late 1970s. Grand
unification theories (see page 170) predicted point defects in the gravita-
Major revisions, in the past 20 years, of the model considered the standard
have been necessary and accepted. The data requiring the major revisions
were the black body nature of the Cosmic Microwave Background radiation,
temperature anisotropies of the Cosmic Microwave Background radiation,
accelerated expansion of the universe, the flat universe, and the period of
deceleration of the cosmic expansion. The current standard model fails
to describe many galaxy and galaxy cluster observations. Note the dark
matter postulate, created to explain the rotation curves of spiral galaxies,
remains a hypothesis because it is inconsistent with rising rotation curves
and with the Tully-Fisher relation, and dark matter itself remains a
mystery. The Pioneer Anomaly and the Flyby Anomaly on the solar system
scale remain unexplained. Further, some observations appear to contradict
some of the fundamental postulates of the standard model.

The challenge is to develop a model that is consistent with all the
observations that the standard model explains, that is consistent with much
of the additional data, and that is simpler than the standard model. Newton
did this by redefining the important characteristic of matter to be mass,
redefining forces, redefining space, and reformulating the fundamental
principles. The state of our understanding is reminiscent of the century before
Newton - a great enlightenment seems imminent.
The two main divisions of universe models are (1) the universe has a
beginning, from the Jewish heritage, and (2) the universe always existed,
from the Hindu and Buddhist heritage. This discussion has been raging
since before the beginning of recorded history.
The following is a case against standard cosmology from an opponent's
viewpoint. Some of the following is taken from Narlikar (2002). J. V.
Narlikar is a proponent of the Quasi-Steady State Cosmology (QSSC),
which is an always-existed model. Therefore, his view of the challenges
for the Big Bang model is more critical than a proponent's view.

The Big Bang models fail on galactic scale observations (Bell et al. 2003;
van den Bosch et al. 2001).
General Relativity has been tested only in the weak-field approximation.
We have no empirical evidence regarding how the theory fares in the
strong-field scenarios of cosmology, such as near the Supermassive Black
Hole in the center of spiral galaxies. Therefore, the standard models are to
be looked upon as extrapolations into speculative regions.

The Standard Model Dark Matter predictions are inconsistent with
observations on a galactic scale (Bahcall et al. 1999; Bell et al. 2003; van den
Bosch et al. 2001; Sellwood and Kosowsky 2001b).
If the clusters are not virialized, but rather may have been created in
explosive creation processes and are expanding as QSSC suggests, the virial
theorem is not valid for clusters. This takes away the strength of the
present argument for dark matter. Independent checks on whether the
velocity distributions of galaxies in clusters are in statistical equilibrium
are necessary.
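The argument at stake can be made concrete. If a cluster is virialized, its dynamical mass is of order sigma²R/G; the velocity dispersion and radius below are typical assumed values, not data from the text, and the resulting mass far exceeds the luminous mass, which is the classic case for dark matter that the QSSC objection targets.

```python
# Virial-mass sketch: if a cluster IS virialized, M ~ sigma^2 * R / G gives
# the dynamical mass implied by its velocity dispersion. The dispersion and
# radius are typical assumed values, not data from the text.
G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
MPC = 3.086e22           # m

sigma = 1.0e6            # line-of-sight velocity dispersion, 1000 km/s (assumed)
radius = 1.0 * MPC       # cluster radius, 1 Mpc (assumed)

m_virial = sigma ** 2 * radius / G
print(f"{m_virial / M_SUN:.1e} solar masses")  # of order 10^14
```

If the virial assumption fails, as Narlikar suggests, this mass estimate, and the dark matter it implies, loses its footing.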
Does Hubble's law hold for all extragalactic objects? If the answer is
no, then the Big Bang scenario fails with little hope of a modification such
as was done with the finding of universe acceleration and inflation. Current
cosmology takes it for granted that the redshift of an extragalactic object
is cosmological in origin, caused by the expansion of the universe as
described by the Cosmological Hypothesis, and caused by the Doppler shift.
Whereas the Hubble diagram on which the Cosmological Hypothesis is based
gives a fairly tight magnitude - z relationship for first-ranked galaxies in
clusters, a corresponding plot for quasars has enormous scatter. The Big Bang
model with homogeneity must have either a static galaxy distribution or a
very special velocity field obeying the Hubble Law. The Hubble constant
occupies a pivotal role in current cosmologies. The methods of calculating
supernova distances, the Cosmic Microwave Background power
spectrum, weak gravitational lensing, cluster counts, baryon oscillation,
expansion of the universe, and the fundamental aspects of the Big Bang model
depend on the Hubble constant and the Hubble law.
Comparison of z with Cepheid-based distance has enormous scatter.
The correlation coefficient is 0.75 (Saha et al. 2006b) to 0.79, which is poor.
The data for galaxies with Cepheid-based distances up to 10 Mpc agree
well with distances based on TRGB data. However, beyond 10 Mpc, the
correlation coefficients for both data sets are approximately 0.30.
Special efforts are needed in some cases to make the Cosmological
Hypothesis consistent with data on quasars. These included the superluminal
motion of quasars, rapid variability, the absence of a Ly-α absorption
trough, etc. Some quasars and galaxies are found in pairs or groups of close
neighbors in the sky. If a quasar and a galaxy are found to be within a
small angular separation of one another, then it is very likely that they are
physical neighbors and, according to the Cosmological Hypothesis, their
redshifts must be nearly equal. The quasar population is not a dense one.
The probability of finding a quasar projected by chance within a small
angular separation from a galaxy is very small. If the probability is < 0.01,
say, then the null hypothesis of projection by chance has to be rejected.
There must be a reason in that case to cause the quasar to be physically
close to the galaxy. While evidence was found that in such cases the redshifts
of the galaxy and the quasar were nearly the same, there have been data of
the other kind, also. H. C. Arp (Arp 1987, 1998) has described numerous
examples in which the chance-projection hypothesis is rejected.
There is growing evidence that large-redshift quasars are preferentially
distributed closer to low-redshift bright galaxies. There are alignments and
similarities of redshift among quasars distributed across bright galaxies.
There are filaments connecting pairs of galaxies with discrepant redshifts.
There are continuing additions to the list of anomalous cases. They are not
limited to only optical and radio sources. They are also found among X-ray
sources. The supporters of the Cosmological Hypothesis like to dismiss all
such cases as either observational artifacts or selection effects, or argue that
the excess number density of quasars near bright galaxies could be due to
gravitational lensing. While this criticism or resolution of discrepant data
may be valid in some cases, why this should hold in all cases is hard to
understand.
H. Arp further suggests the quasars are formed in and are ejected from
galaxies. He notes the several instances where multiple quasars form a line
through a central galaxy and the redshifts decline with distance from the
host galaxy. Thus, he suggests, galaxy clusters are formed.
Another curious effect, first noticed by Burbidge (1968) for about 70
quasars, concerns the apparent periodicity in the distribution of redshifts
of quasars. The periodicity of z ≈ 0.06 is still present with the population
multiplied 30-fold (Duari et al. 1992). What is the cause of this structure
in the z distribution? Various statistical analyses have confirmed that the
effect is significant. This also is very puzzling and does not fit into the
simple picture of z caused by the expanding universe.
For this to be due to peculiar velocity: (1) the peculiar velocity would
have to be incredibly high; (2) it would have to be directed only outward
from us, which is unlikely because such peculiar velocities are not found
directed toward us; (3) because the periodicity is undetected for angular
variation, the effect is only along our line-of-sight, which violates the
Copernican Principle.
On a much finer scale, Tifft (1996, 1997) has been discovering a redshift
periodicity cz = 72 km s^-1 for double galaxies and for galaxies in groups.
The data have been refined over the years with accurate 21-cm redshift
measurements. If the effect were spurious, it would have disappeared. Instead
it has grown stronger and has withstood fairly rigorous statistical analyses.
Hodge (2006a) discovered discrete redshift behind elliptical galaxies
CHASMP programs at P10 (I) and their closer agreement at P10 (III);
the slowly declining a_P; the low value of a_P immediately before P11's
Saturn encounter; and the high uncertainty in the value of a_P obtained
during and after P11's Saturn encounter.
A systematic error would clearly solve this problem. So far none has
been found. The general scientific community in 2005 agreed that a
systematic or spacecraft error was not responsible for the Pioneer Anomaly.
Further, the directions the craft are moving in the solar system are different.
Most attempts to explain the Pioneer Anomaly are directed to only the
general value of a_P and assume this value is constant. These attempts fail.
Turyshev et al. (2011) continue to analyze Pioneer data. Their recent findings
are (1) a_P is temporally declining rather than being constant, which rejects
most of the attempts to model the Pioneer Anomaly; (2) the data do not
favor a Sun-pointing direction over an Earth-pointing or spin-axis-pointing
direction; and (3) support for an early onset of the acceleration remains
weak. The only model consistent with all the observed characteristics of
the Pioneer Anomaly is the Scalar Potential Model of Hodge (2006e).
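The scale of the anomaly is worth fixing in mind. The commonly quoted anomalous acceleration from the literature, a_P ≈ 8.74 × 10^-10 m/s² (a value assumed here, not stated in the text), accumulates only a fraction of a meter per second of velocity over a decade, which is why the effect shows up only in precision Doppler tracking.

```python
# Pioneer-anomaly scale sketch: a_P ~ 8.74e-10 m/s^2 is the commonly
# quoted literature value (an assumption, not a figure from the text).
# Applied steadily over a decade it accumulates a tiny velocity change.
a_p = 8.74e-10                    # m/s^2, assumed literature value
decade = 10 * 365.25 * 86400      # seconds in ten years
delta_v = a_p * decade
print(round(delta_v, 2))  # about 0.28 m/s accumulated over ten years
```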
The Flyby Anomalies are unexplained velocity increases observed near
closest approach during the Earth gravity assist flybys of the Galileo, Cassini,
NEAR, and Rosetta spacecraft (Anderson et al. 2007).
Do gravity waves exist? Gravity fails to behave like the other forces
in the Grand Unified Theory (see page 170). Accordingly, the General
Theory of Relativity ascribes gravity to the geometry of gravitational ether
rather than to the interchange of particles as in the Grand Unified Theory.
Newtonian mechanics suggested the effect of gravity was instantaneous.
Modern thought holds the speed of a gravity wave is either near the speed
of light or several billion times the speed of light.
Standard experimental techniques exist to determine the propagation
speed of forces. When we apply these techniques to gravity, they all yield
propagation speeds too great to measure, substantially faster than light
speed (van Flandern 1998; van Flandern & Vigier 2002). This is because
gravity, in contrast to light, has no detectable aberration or propagation
delay for its action, even for cases such as binary pulsars where sources of
gravity accelerate significantly during the light time from source to target.
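The size of the aberration being discussed is easy to estimate: a force propagating at speed c would appear displaced by roughly v/c radians to an orbiting observer, about 20 arcseconds for the Earth's orbital speed. That angle is readily seen in starlight (stellar aberration) but, the authors cited above argue, is absent from the Sun's gravitational pull.

```python
# Aberration sketch: for an influence propagating at speed c, an orbiting
# body sees the source displaced by roughly v/c radians. The Earth's
# orbital speed and c are standard values used for illustration.
V_EARTH = 29.8e3       # Earth's orbital speed, m/s
C = 2.998e8            # speed of light, m/s

aberration_rad = V_EARTH / C
aberration_arcsec = aberration_rad * 206265  # radians to arcseconds
print(round(aberration_arcsec, 1))  # about 20.5 arcseconds
```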
By contrast, the finite propagation speed of light causes radiation
pressure forces to have a nonradial component, causing orbits to decay (the
Poynting-Robertson effect). Gravity has no counterpart force proportional
to v/c to first order. General Relativity explains these features by
suggesting that gravitation (unlike electromagnetic forces) is a pure geo-
Chapter 3
(1766 - 1844) around 1800. Dalton proceeded to print his first published
table of relative atomic weights. Six elements appear in this table, namely
hydrogen, oxygen, nitrogen, carbon, sulfur, and phosphorus, with the atom
of hydrogen conventionally assumed to weigh 1. Assisted by the assumption that combination always takes place in the simplest possible way, he
arrived at the idea that chemical combination takes place between particles
of different weights, and it was this that differentiated his theory from the
historic speculations of the Greeks, such as Democritus and Lucretius.
The extension of this idea to substances in general necessarily led him to
the law of multiple proportions. The comparison with experiment confirmed
his deduction.
He listed compounds as binary, ternary, quaternary, etc. (molecules
composed of two, three, four, etc. atoms), depending on the number of
atoms a compound has in its simplest, empirical form. He hypothesized
that the structure of compounds can be represented in whole-number ratios.
For example, one atom of element X combining with one atom of element Y
is a binary compound. Furthermore, one atom of element X combining with
two atoms of element Y, or vice versa, is a ternary compound.
Dalton used his own symbols to visually represent the atomic structure
of compounds. The five main points of Dalton's atomic theory are:
1. Elements are made of tiny particles called atoms.
2. The atoms of a given element are different from those of any other
element; the atoms of different elements can be distinguished from one
another by their respective relative atomic weights.
3. All atoms of a given element are identical.
4. Atoms of one element can combine with atoms of other elements to form
chemical compounds; a given compound always has the same relative
numbers of types of atoms.
5. Atoms cannot be created, divided into smaller particles, nor destroyed in
the chemical process; a chemical reaction simply changes the way atoms are
grouped together.
Dalton proposed an additional rule of greatest simplicity that created
controversy, because it could not be independently confirmed: "When atoms
combine in only one ratio, . . . it must be presumed to be a binary one, unless
some cause appear to the contrary." This was merely an assumption,
derived from faith in the simplicity of nature. No evidence was then available
to scientists to deduce how many atoms of each element combine to form
compound molecules. But this or some other such rule was necessary to any
incipient theory, because an assumed molecular formula and structure were
and square functions into a series of simple harmonic waves. Because simple
harmonic waves are everywhere continuous and differentiable, the
mathematics of the transformed function is very easy. The Fourier transform is
unbounded, with an unbounded number of coefficients and frequencies.
Because the observations are of finite time and distance, the transformation
is limited. This results in a practical uncertainty of the waves. Therefore,
phenomena in the subatomic world of quantum mechanics can be described
only within a range of precision that allows for the uncertainty of waves in
the Schrödinger equation. This is known as the Uncertainty Principle, first
formulated by Werner Heisenberg.
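The finite-observation limitation described above can be demonstrated numerically: a pure wave sampled over a finite window T cannot be assigned a frequency more precisely than about 1/T. The sketch below uses assumed illustrative numbers, not figures from the text.

```python
import cmath
import math

# Uncertainty sketch (assumed illustrative numbers): a pure wave observed
# over a finite window T cannot be assigned a frequency more precisely
# than about 1/T. A discrete Fourier sum over a short window spreads the
# response across a band of width ~ 1/T - the practical uncertainty
# described above.
def band_width(window_seconds, f0=50.0, rate=1000):
    """Width (Hz) of the frequency band holding at least half the peak response."""
    n = int(window_seconds * rate)
    wave = [cmath.exp(2j * math.pi * f0 * t / rate) for t in range(n)]
    freqs = list(range(30, 71))          # scan 30..70 Hz around f0
    mags = []
    for f in freqs:
        s = sum(x * cmath.exp(-2j * math.pi * f * t / rate)
                for t, x in enumerate(wave))
        mags.append(abs(s))
    half = max(mags) / 2
    above = [f for f, m in zip(freqs, mags) if m >= half]
    return max(above) - min(above)

# The ten-times shorter window gives a far wider frequency band.
print(band_width(0.05), band_width(0.5))
```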
Thomas Young's (English, 1773 - 1829) initial contributions were made
as a result of studying a class of optical phenomena that we today call
interference and diffraction, but which Newton called the inflection of
light.
After Young had explained the production of Newton's rings by the
application of his new principle of interference to the wave theory of light,
he used Newton's data to compute the wavelengths of different colors in the
visible spectrum and the wavenumbers (i.e., number of undulations in an
inch). Young's computations, based on Newton's measurements, yielded
a wavelength for the extreme red of 0.000,026,6 inches, or a wavenumber
of 37640 undulations per inch and a frequency of 4.36 × 10^14 undulations
per second. The same quantities for the extreme violet had values of
0.000,016,7 inches, 59750 undulations per inch, and 7.35 × 10^14 undulations
per second, respectively. These numbers are in close agreement with
present-day accepted values.
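That close agreement can be checked directly: the wavelengths quoted above, combined with the modern speed of light, give wavenumbers and frequencies within a few percent of Young's figures.

```python
# Checking Young's numbers: wavelength -> wavenumber and frequency.
# The wavelengths (in inches) are taken from the text; the speed of
# light is the modern value, so the results land within a few percent
# of Young's figures.
C = 2.998e8                 # speed of light, m/s
INCH = 0.0254               # meters per inch

red_in = 0.0000266          # extreme red wavelength, inches (from text)
violet_in = 0.0000167       # extreme violet wavelength, inches (from text)

red_per_inch = 1 / red_in               # undulations per inch
red_freq = C / (red_in * INCH)          # undulations per second
violet_freq = C / (violet_in * INCH)

print(round(red_per_inch))      # ~37594, close to Young's 37640
print(f"{red_freq:.2e}")        # ~4.4e14, close to Young's 4.36e14
print(f"{violet_freq:.2e}")     # ~7.1e14, close to Young's 7.35e14
```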
Young was indebted to Newton for more than the data for computing
wavelengths, wavenumbers, and frequencies. Young developed the whole
wave theory of light from the suggestions in Newton's Opticks. Young
made several important additions by considering the waves in the ether
to be transverse and by supplementing the wave theory with the principle of
interference. The 19th-century developments leading to the electromagnetic
theory might be said to have begun from Young's work. Because Young's
work was inspired by Newton's, we have an historical chain leading from
Newton to Young, to Fresnel and Arago, to Maxwell, to Planck, and to
Einstein.

The prevailing view in Young's time was that Newton's corpuscular
model was correct and the wave model was wrong. Therefore, a proponent
of the wave model was a social outcast. Young was extremely explicit about
his debt to Newton. Thus, in the first of the three
ters emits spatially coherent light, while the light from a collection of point
sources or from a source of finite diameter would have lower coherence.
Spatial coherence can be increased with a spatial filter, such as a very small
pinhole preceded by a condenser lens. The spatial coherence of light will
increase as it travels away from the source and becomes more like a spherical
or plane wave. Thus, light from stars is coherent and produces a diffraction
pattern (Airy disk) when it passes through a telescope.

Young's double-slit experiment has become a classic experiment because
it demonstrates the central mysteries of the physics and of the philosophy
of the very small.
Augustin Jean Fresnel (France, 1788 - 1827) struggled with ill health
but maintained an exceptionally high workload. He died of tuberculosis.
Fresnel removed many of the objections to the wave theory of light. He put
forth the idea that the waves arise at every point along the wave front (a
modification of Huygens' principle) and mutually interfere. He was able to
describe diffraction and interference with his mathematics.
The Huygens-Fresnel equation describes the intensity pattern on the
screen of a slit experiment. The assumptions of the Fresnel model of
diffraction include: (1) The part of Huygens' Principle that each point in a wave
front emits a secondary wavelet. (2) The wavelets' destructive and
constructive interference produces the diffraction pattern. (3) The secondary
waves are emitted in only the forward direction, which is the so-called
obliquity factor (a cosine function). (4) The wavelets' phase advances by
one-quarter period ahead of the wave that produced them. (5) The wave has
a uniform amplitude and phase over the wave front in the slit and zero
amplitude and no effect behind the mask. (6) The slit width is much greater
than the wavelength. (7) The Fresnel model has a slight arc of the wave
front across the slit. That is, the distribution of energy in the plane of the
slit varies. (8) There is a minimum distance between the mask and the
screen within which the Fresnel model fails. The Fresnel model with a
larger distance between the mask and the screen, or with condensing lenses
before and after the mask, degenerates into the Fraunhofer diffraction model.
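The wavelet-summation picture can be sketched numerically: superposing one secondary wavelet per point across a single slit, with phases set by the path lengths, reproduces the analytic Fraunhofer sinc² pattern in the far field. The slit width, wavelength, and screen distance below are assumed illustrative values, not figures from the text, and the obliquity and arc refinements listed above are omitted.

```python
import cmath
import math

# Huygens-Fresnel sketch: sum secondary wavelets emitted from points
# across a single slit and compare the far-field intensity with the
# analytic Fraunhofer sinc^2 pattern. All values are assumed for
# illustration; obliquity and 1/r falloff are neglected.
wavelength = 500e-9   # m (assumed)
slit_width = 50e-6    # m, much greater than the wavelength (assumed)
screen_dist = 2.0     # m, far enough for the Fraunhofer limit (assumed)
k = 2 * math.pi / wavelength

n_src = 800
sources = [-slit_width / 2 + slit_width * i / (n_src - 1) for i in range(n_src)]

def intensity_at(x):
    """Superpose one wavelet per source point; phase set by path length."""
    field = sum(cmath.exp(1j * k * math.hypot(screen_dist, x - s))
                for s in sources)
    return abs(field) ** 2

def fraunhofer(x):
    """Analytic single-slit pattern, I ~ sinc^2(a sin(theta) / lambda)."""
    beta = math.pi * slit_width * math.sin(math.atan2(x, screen_dist)) / wavelength
    return 1.0 if beta == 0 else (math.sin(beta) / beta) ** 2

xs = [i * 0.001 for i in range(-100, 101)]   # screen positions, +-10 cm
peak = intensity_at(0.0)
numeric = [intensity_at(x) / peak for x in xs]
analytic = [fraunhofer(x) for x in xs]
max_err = max(abs(a - b) for a, b in zip(numeric, analytic))
print(max_err < 0.01)  # the wavelet sum reproduces the Fraunhofer pattern
```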
The intensity patterns produced by multiple slits can be compared to
the intensity of the single-slit pattern of equal total width. Thus, the
resulting pattern may be regarded as due to the joint action of interference
between the waves coming from corresponding points in the multiple slits
and of diffraction from each slit. Diffraction in the Fresnel model is the
result of interference of all the secondary wavelets radiating from the different
elements of the wave front. The term diffraction is reserved for the consid-
General Relativity. Also, he started asking about the geometry of the space
in which we live. By a conformal transformation of complex equations to a
Riemann space, one variable is eliminated, which makes the resulting
equations easier to solve.
If a periodic table is regarded as an ordering of the chemical elements
demonstrating the periodicity of chemical and physical properties, credit
for the first periodic table (published in 1862) probably should be given to
a French geologist, A. E. Béguyer de Chancourtois. De Chancourtois
transcribed a list of the elements positioned on a cylinder in terms of increasing
atomic weight. When the cylinder was constructed so that 16 mass units
could be written on the cylinder per turn, closely related elements were lined
up vertically. This led de Chancourtois to propose that the properties of
the elements are the properties of numbers. De Chancourtois was first to
recognize that elemental properties reoccur every seven elements, and using
this chart, he was able to predict the stoichiometry (the determination
of the atomic weights of elements, the proportions in which they combine,
and the weight relations in any chemical reaction) of several metallic oxides.
Unfortunately, his chart included some ions and compounds in addition to
elements.
John Newlands, an English chemist, wrote a paper in 1863 which
classified the 56 established elements into 11 groups based on similar physical
properties, noting that many pairs of similar elements existed which differed
by some multiple of eight in atomic weight. Newlands published his version
of the periodic table and proposed the Law of Octaves, by analogy with the
seven intervals of the musical scale, in 1864. This law stated that any given
element exhibits behavior analogous to the eighth element following it in
the table. Is the hidden structure like the Music of the Spheres?
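The Law of Octaves can be illustrated directly: ordering the early elements by atomic weight and stepping seven places (landing on the "eighth element" in Newlands's inclusive counting) moves between chemically similar elements, e.g. the alkali metals Li, Na, K. Rounded modern atomic weights are used below for illustration.

```python
# Law-of-octaves sketch: order early elements by atomic weight; stepping
# seven places (the "eighth element" counted inclusively) lands on a
# chemically similar element, e.g. the alkali metals Li -> Na -> K.
# Weights are rounded modern values used for illustration; the noble
# gases, unknown in 1864, are absent, which is why the period is seven.
elements = [
    ("H", 1), ("Li", 7), ("Be", 9), ("B", 11), ("C", 12), ("N", 14),
    ("O", 16), ("F", 19), ("Na", 23), ("Mg", 24), ("Al", 27), ("Si", 28),
    ("P", 31), ("S", 32), ("Cl", 35), ("K", 39),
]
ordered = [name for name, _ in sorted(elements, key=lambda e: e[1])]

li = ordered.index("Li")
print(ordered[li], ordered[li + 7], ordered[li + 14])  # Li Na K
```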
Dmitriy Ivanovich Mendeleev (Russia, 1834 - 1907) wrote a textbook
on systematic inorganic chemistry, Principles of Chemistry, which appeared
in thirteen editions, the last being in 1947. Mendeleev organized his material
in terms of the families of the known elements that displayed similar
properties. The first part of the text was devoted to the well-known chemistry
of the halogens. Next, he chose to cover the chemistry of the metallic
elements in order of combining power: alkali metals first (combining power
of one), alkaline earths (two), etc. However, it was difficult to classify
metals such as copper and mercury that had multiple combining powers,
sometimes one and other times two. While trying to sort out this dilemma,
Mendeleev noticed patterns in the properties and atomic weights of halogens,
alkali metals, and alkaline metals. He observed similarities between
the series Cl-K-Ca, Br-Rb-Sr, and I-Cs-Ba. He created a card for each of
the 63 known elements in an effort to extend this pattern to other elements.
Each card contained the element's symbol, atomic weight, and its
characteristic chemical and physical properties. Atomic number was a concept yet
to be developed. When Mendeleev arranged the cards on a table in order
of ascending atomic weight, grouping elements of similar properties together
in a manner not unlike the card arrangement in his favorite solitaire card
game, patience, the periodic table was formed. From this table, Mendeleev
developed his statement of the periodic law and published his work. The
advantage of Mendeleev's table over previous attempts was that it exhibited
similarities not only in small units such as the triads, but showed
similarities in an entire network of vertical, horizontal, and diagonal groupings.
Mendeleev came within one vote of being awarded the Nobel Prize in 1906
for his work. Mendeleev predicted the existence and properties of unknown
elements from the gaps present in his table, which he called eka-aluminum,
eka-boron, and eka-silicon. The elements gallium, scandium, and germanium
were found later to fit his predictions quite well. Mendeleev's table was
published before Meyer's. His work was more extensive in predicting new or
missing elements. Mendeleev predicted the existence of 10 new elements,
of which seven were eventually discovered; the other three, with atomic
weights 45, 146, and 175, do not exist. He was incorrect in suggesting that
the element pairs of argon-potassium, cobalt-nickel, and tellurium-iodine
should be interchanged in position due to inaccurate atomic weights. Although
these elements did need to be interchanged, it was because of a flaw in the
reasoning that periodicity is a function of atomic weight rather than the
then-unknown atomic number. Although Mendeleev's table demonstrated
the periodic nature of the elements, it remained for the discoveries of
scientists of the 20th Century to model the hidden structure of the elements that
caused their properties to recur periodically. Mathematically, the periodic
table is a simple form of Group Theory.
Thermodynamics (Greek: thermos = heat and dynamic = change) is
the physics of energy, heat, work, entropy and the spontaneity of processes.
Thermodynamics is the study of the inter-relation between heat, work and
internal energy of a system.
Thermodynamics is closely related to statistical mechanics from which
many thermodynamic relationships can be derived.
While dealing with processes in which systems exchange matter or energy, classical thermodynamics is not concerned with the rate at which
such processes take place, termed kinetics. The use of the term thermody-
ber of particles N of each component of the system and the chemical potential of each component of the system.
The mechanical parameters listed above can be described in terms of
fundamental classical or quantum physics, while the statistical parameters
can only be understood in terms of statistical mechanics.
In most applications of thermodynamics, one or more of these parameters will be held constant, while one or more of the remaining parameters are
allowed to vary. Mathematically, this means the system can be described
as a point in n-dimensional mathematical space, where n is the number
of parameters not held fixed. Using statistical mechanics, combined with
the laws of classical or quantum physics, equations of state can be derived
which express each of these parameters in terms of the others. The simplest
and most important of these equations of state is the ideal gas law,
pV = NkT (equivalently pV = nRT for n moles of gas), where k is Boltzmann's constant.
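A minimal numeric sketch of the ideal gas law in the molar form pV = nRT; the gas sample below (one mole at room temperature in a one-liter vessel) is an illustrative assumption, not a value from the text:

```python
# Ideal gas law pV = nRT in SI units; the sample state is illustrative.
R = 8.314  # gas constant, J/(mol*K)

def ideal_gas_pressure(n_moles, volume_m3, temperature_K):
    """Return pressure in pascals from pV = nRT."""
    return n_moles * R * temperature_K / volume_m3

# One mole in one liter at 298 K is roughly 2.5 MPa, about 24 atmospheres.
p = ideal_gas_pressure(1.0, 1.0e-3, 298.0)
```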
Thermodynamic Potentials. Four quantities, called thermodynamic potentials, can be defined in terms of the thermodynamic parameters of a
physical system:
Internal energy EI: dEI = T dS - p dV
Helmholtz free energy AH: dAH = -S dT - p dV
Gibbs free energy Gfe: dGfe = -S dT + V dp
Enthalpy HE: dHE = T dS + V dp
Using the above differential forms of the four thermodynamic potentials,
combined with the product rule of differentiation, the four potentials
can be expressed in terms of each other and the thermodynamic parameters, as below:
E = H - PV = A + TS
A = E - TS = G - PV
G = A + PV = H - TS
H = G + TS = E + PV
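These Legendre-transform relations (with the conventional minus signs, E = H - PV = A + TS, and so on) can be checked numerically; the state values below are arbitrary illustrative numbers, not data from any physical system:

```python
# Numeric consistency check of the relations among the four thermodynamic
# potentials. All state values are arbitrary illustrative numbers.
T, S, P, V = 300.0, 2.0, 1.0e5, 1.0e-3
E = 1500.0  # arbitrary internal energy

A = E - T * S   # Helmholtz free energy
H = E + P * V   # enthalpy
G = H - T * S   # Gibbs free energy

# The cross-relations then hold identically:
assert abs(E - (H - P * V)) < 1e-9
assert abs(E - (A + T * S)) < 1e-9
assert abs(G - (A + P * V)) < 1e-9
assert abs(H - (G + T * S)) < 1e-9
```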
The above relationships between the thermodynamic potentials and the
thermodynamic parameters do not depend upon the particular system being studied. They are universal relationships that can be derived using
statistical mechanics, with no regard for the forces or interaction potentials
between the components of the system. However, the dependence of any
one of these four thermodynamic potentials cannot be expressed in terms
of the thermodynamic parameters of the system without knowledge of the
interaction potentials between system components, the quantum energy
levels, and their corresponding degeneracy, or the partition function of the
system under study. However, once the dependence of one of the thermodynamic functions upon the thermodynamic variables (temperature and one
other variable such as volume or pressure) is determined, the three other
thermodynamic potentials can be easily derived using the above equations.
A thermodynamic system is that part of the universe that is under
consideration. A real or imaginary boundary separates the system from the
rest of the universe, which is referred to as the environment or as a reservoir.
A useful classification of thermodynamic systems is based on the nature of
the boundary and the flows of matter, energy and entropy through it.
There are three kinds of systems depending on the kinds of exchanges
taking place between a system and its environment:
1. Isolated systems do not exchange heat, matter, or work with their environment. Mathematically, this implies that T dS = 0, dN = 0, pdV = 0,
and, therefore, dE = 0. An example of an isolated system would be an
insulated container, such as an insulated gas cylinder.
2. Closed systems exchange energy (heat and work) but not matter with
their environment. Only dN = 0 in closed systems. A greenhouse is an
example of a closed system exchanging heat but not work with its environment.
3. Open systems exchange energy (heat and work) and matter with their
environment. A boundary allowing matter exchange is called permeable.
The ocean is an example of an open system.
Whether a system exchanges heat, work, or both is usually thought of
as a property of its boundary, which can be:
An adiabatic boundary: not allowing heat exchange, T dS = 0. The universe
is assumed to be adiabatic.
A rigid boundary: not allowing exchange of work, pdV = 0.
In reality, a system can never be absolutely isolated from its environment, because there is always at least some slight coupling, even if only via
minimal gravitational attraction. The only exception in current models is
the universe. In a steady-state system, the energy flowing into the system
equals the energy leaving it.
When a system is at equilibrium under a given set of conditions, it is
said to be in a definite state. The state of the system can be described
by a number of intensive variables and extensive variables. The properties
of the system can be described by an equation of state that specifies the
relationship between these variables such as between pressure and density.
The Laws of Thermodynamics
Zeroth law: Thermodynamic equilibrium. When two systems are put
in contact with each other, there will be a net exchange of energy and/or
matter between them unless they are in thermodynamic equilibrium. Two
systems are in thermodynamic equilibrium with each other if they stay the
same after being put in contact. The zeroth law is stated as:
If systems A and B are in thermodynamic equilibrium, and systems B and
C are in thermodynamic equilibrium, then systems A and C are also in
thermodynamic equilibrium.
While this is a fundamental concept of thermodynamics, the need to
state it explicitly as a law was not perceived until the first third of the 20th
century, long after the first three laws were already widely in use, hence the
zero numbering. There is still some discussion about its status.
Thermodynamic equilibrium includes thermal equilibrium (associated with
heat exchange and parameterized by temperature), mechanical equilibrium
(associated with work exchange and parameterized by generalized forces such as
pressure), and chemical equilibrium (associated with matter exchange and
parameterized by chemical potential).
First Law: Conservation of energy. This is a fundamental principle of
mechanics, and more generally of physics. Note, however, that quantum
mechanics and currently popular cosmology have trouble with this principle. It is
used in thermodynamics to give a precise definition of heat. It is stated as
follows:
The work exchanged in an adiabatic process depends only on the initial and
the final state and not on the details of the process.
or
The net sum of exchange of heat and work of a system with the environment
is a change of property. The amount of property change is determined only
by the initial and final states and is independent of the path through which
the process takes place.
or
The heat flowing into a system equals the increase in internal energy of the
system plus the work done by the system.
or
Energy cannot be created or destroyed, only modified in form.
The First Law of thermodynamics is an exact consequence of the laws
of mechanics - classical or quantum.
Second Law: The second law of thermodynamics is an expression of the
tendency that over time, differences in temperature, pressure, and chemical
potential equilibrate in an isolated physical system. From the state of thermodynamic equilibrium, the law deduces the principle of the increase of
entropy and explains the phenomenon of irreversibility in nature. The second law declares the impossibility of machines that generate usable energy
from the abundant internal energy of nature by processes called perpetual
motion of the second kind.
The law is usually stated in physical terms of impossible processes. Classical thermodynamics holds the second law as a basic postulate applicable
to any system involving measurable heat transfer. The second law defines
the concept of thermodynamic entropy. Entropy in statistical mechanics
can be defined from information theory, known as the Shannon entropy.
The second law of thermodynamics is a far-reaching and powerful law.
It is typically stated in one of two ways:
It is impossible to obtain a process that, operating in cycle, produces no
other effect than the subtraction of a positive amount of heat from a reservoir and the production of an equal amount of work. (Kelvin-Planck Statement).
or
No process is possible whose sole result is the transfer of heat from a body of
lower temperature to a body of higher temperature. (Clausius Statement)
Constantin Caratheodory formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known
as the Principle of Caratheodory, which may be formulated as follows:
In every neighborhood of any state S of an adiabatically isolated system
there are states inaccessible from S.
With this formulation he described the concept of adiabatic accessibility
for the first time and provided the foundation for a new subfield of classical
thermodynamics, often called geometrical thermodynamics.
The entropy of a thermally isolated macroscopic system never decreases
(see Maxwell's demon). However, a microscopic system may exhibit fluctuations of entropy opposite to that dictated by the second law (see Fluctuation
Theorem).
The mathematical proof of the Fluctuation Theorem from time-reversible
dynamics and the Axiom of Causality constitutes a proof of the Second Law.
The Second Law in a logical sense thus ceases to be a Law of Physics and
becomes a theorem which is valid for large systems or long times. A theorem
rests on more fundamental postulates and hidden assumptions. For example,
the Big Bang model holds the universe is closed and adiabatic. The steady
state model holds that hydrogen is being injected into the universe, which
would require a reevaluation of the second law.
According to the second law the entropy of any isolated system, such as
the entire universe of the Big Bang model, never decreases. If the entropy of
the universe has a maximum upper bound then when this bound is reached
the universe has no thermodynamic free energy to sustain motion or life,
that is, the heat death is reached.
Third Law: This law explains why it is so hard to cool something to
absolute zero: All processes cease as temperature approaches zero. The
entropy of a system approaches a constant as T → 0.
These laws have been humorously summarized as Ginsberg's theorem:
(1) you can't win, (2) you can't break even, and (3) you can't get out of
the game.
Alternatively: (1) you can't get anything without working for it, (2) the
most you can accomplish by working is to break even, and (3) you can only
break even at absolute zero.
Or: (1) you cannot win or quit the game, (2) you cannot tie the game
unless it is very cold, and (3) the weather doesn't get that cold.
More about the 2nd Law. The Second Law is exhibited (coarsely) by
a box of electrical cables. Cables added to and removed from the box from
time to time tangle inside the closed system (the cables in the box).
The best way to untangle them is to start by taking the cables out of the box
and placing them stretched out. The cables in a closed system (the box)
will never untangle; giving them some extra space starts the process of
untangling by going outside the closed system.
C.P. Snow said the following in a lecture in 1959 entitled The Two
Cultures and the Scientific Revolution.
A good many times I have been present at gatherings of people who,
by the standards of the traditional culture, are thought highly educated and
who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked
the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative.
James Bradley, a British astronomer, noted in 1728 that the direction
of observation of a star varies periodically in the course of a year by an
amount of the same order as the ratio of the orbital velocity of the Earth to
c (10^-4). The corpuscular model of light explained this observation easily.
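Bradley's ratio can be checked with standard values for the Earth's orbital speed and c:

```python
# Back-of-envelope check that the aberration angle is of order
# v_orbit/c ~ 1e-4, as stated in the text. Values are standard constants.
v_orbit = 29.78e3   # Earth's mean orbital speed, m/s
c = 2.998e8         # speed of light, m/s

ratio = v_orbit / c   # ~1e-4, roughly 20 arcseconds of aberration
```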
Thomas Young and Augustin Fresnel saved the wave model by assuming
the ether was completely undisturbed by the motion of the Earth. Fresnel's
assumption implied the existence of an ether wind on the Earth's surface
and that the mass of the Earth was completely transparent to the ether wind.
George Stokes suggested in 1846 the ether was a jelly-like substance that
To make that easily detectable the apparatus was located in a closed room
in the basement of a stone building, eliminating most thermal and vibration
effects. Vibrations were further reduced by building the apparatus on top
of a huge block of marble that was then floated in a pool of mercury. They
calculated that effects of about 1/100th of a fringe would be detectable.
The mercury pool allowed the device to be turned, so that it could be
rotated through the entire range of possible angles to the ether wind. Even
over a short period of time some sort of effect would be noticed simply by
rotating the device, such that one arm rotated into the direction of the wind
and the other away. Over longer periods, day/night cycles or yearly cycles
would also be easily measurable.
This result was rather astounding and not explainable by the then-current
theory of wave propagation in a static ether. Several explanations
were attempted. Among them: that the experiment had a hidden flaw (apparently Michelson's initial belief), or that the Earth's gravitational field
somehow dragged the ether around with it in such a way as to locally
eliminate its effect. Dayton Miller (who purported to have observed a variation with season) argued that, in most if not all experiments other than his
own, there was little possibility of detecting an ether wind because it was
almost completely blocked out by the laboratory walls or by the apparatus
itself. Miller's result of a slight shift has not been duplicated. Be this as it
may, the idea of a simple ether, what became known as the First Postulate,
had been dealt a serious blow.
A number of experiments were carried out to investigate the concept of
ether dragging or entrainment. Hamar performed the most convincing by
placing one arm of the interferometer between two huge lead blocks. If the
ether were dragged by mass, the blocks would, it was theorized, have been
enough to cause a visible effect. Once again, no effect was seen.
Walter Ritz's emitter theory (or ballistic theory) was a theory competing
with Special Relativity to explain the results of the Michelson-Morley
experiment. Emitter theory keeps part of the principle of relativity but
postulates that the speed of light is c only relative to its source, instead of the
invariance postulate.
Thus, emitter theory combines electrodynamics and mechanics with a
simple Newtonian theory that has no paradoxes in it. Testing this theory
became practical in the 1960s. Particles called neutral pions were accelerated to near the speed of light in a particle accelerator, and the speed of
the photons emitted by decay of those particles was measured. The speed
was found to be exactly the same as that of light emitted by the decay of
stationary particles.
Emitter theory was also consistent with the results of the experiment,
did not require an ether, was more intuitive, and was paradox free. This
became known as the Second Postulate. However it also led to several
obvious optical effects that were not seen in astronomical photographs,
notably in observations of binary stars in which the light from the two stars
could be measured in an interferometer.
The Sagnac experiment placed the apparatus on a constantly rotating
turntable, so any ballistic theory such as Ritz's could be tested directly.
The light going one way around the device would have different length to
travel than light going the other way (the eyepiece and mirrors would be
moving toward/away from the light). Ritz's theory suggested there would
be no shift because the net velocity between the light source and detector
was zero (they were both mounted on the turntable). However, an effect
was seen in this case, thereby eliminating any simple ballistic theory. This
fringe-shift effect is used today in laser gyroscopes.
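The fringe shift rests on the standard Sagnac time difference Δt = 4AΩ/c² between the counter-propagating beams; this formula is not given in the text, and the loop area and rotation rate below are illustrative assumptions:

```python
# Sketch of the Sagnac time difference dt = 4*A*Omega/c**2 for light
# counter-propagating around a rotating loop. The 1 m^2 loop turning
# once per second is an illustrative assumption.
import math

def sagnac_delta_t(area_m2, omega_rad_s):
    """Arrival-time difference of the two counter-propagating beams, s."""
    c = 2.998e8
    return 4.0 * area_m2 * omega_rad_s / c**2

dt = sagnac_delta_t(1.0, 2.0 * math.pi)  # tiny, but visible as a fringe shift
```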
The Fitzgerald contraction was another possible solution. Fitzgerald
contraction suggests all objects physically contract along the line of motion
relative to the ether. While the light may indeed transit slower on that
arm, it also ends up traveling a shorter distance that exactly cancels out
the drift.
The Kennedy-Thorndike experiment in 1932 modified the Michelson-Morley experiment by making the path lengths of the split beam unequal,
with one arm being very long. The two ends of the experiment were at different velocities due to the rotation of the Earth; therefore, the contraction
should not exactly cancel the result. Once again, no effect was seen.
Ernst Mach was among the first physicists to suggest that the experiment actually amounted to a disproof of the ether theory. The development of what became Einstein's Special Relativity had the Fitzgerald-Lorentz contraction derived from the invariance postulate and was also
consistent with the apparently null results of most experiments (though
not with Miller's observed seasonal effects). Today, relativity is generally
considered the solution to the Michelson-Morley experiment's null result.
The Trouton-Noble experiment is regarded as the electrostatic equivalent of the Michelson-Morley optical experiment, though whether or not it
can ever be done with the necessary sensitivity is debatable.
A suggested Theory of Everything must account for the Michelson-Morley experimental result. However, the null result is an ambiguous
zero, which has caused the experiment to be repeated over the decades with
greater accuracy.
Another way to attack the experimental result is to suggest one of the
hidden assumptions is false.
Hendrik Antoon Lorentz (Netherlands)(1853 - 1928) proposed that light
waves were due to oscillations of an electric charge in the atom before the
existence of electrons was proved. He developed his mathematical theory
of the electron for which he received the Nobel Prize.
He developed a molecular theory of optical dispersion in 1878 in which
elastically bound, charged particles (ions) vibrated under the action (the
Lorentz force) of an electromagnetic wave that generated a secondary
wave. He assumed the ether around the ions had the same properties as
the ether in a vacuum. He considered in 1892 that the ions and molecules
moved through the ether at the velocity of Earth. The resulting wave
traveled at the velocity predicted by Fresnels model. This reduced the
Fresnel ether drag to molecular interference in a stationary ether and gave
a proof that optical phenomena were unaffected by the Earth's motion
through the ether.
George FitzGerald suggested the longitudinal arm of the interferometer underwent a physical contraction when moving through the ether, as
Alfred Potier had noted for the perpendicular arm of the interferometer, by
a factor of (1 - u^2/c^2)^(1/2). Lorentz was also famed for his work on the Fitzgerald-Lorentz contraction, which is a contraction in the length of an object at
relativistic speeds. Lorentz transformations, which he introduced in 1904,
form the basis of the theory of relativity. They describe the increase of mass,
shortening of length, and time dilation of an object moving at speeds close
to the speed of light. The Lorentz contraction appears to result from an
assumed similarity between molecular forces of cohesion and electrostatic
forces. His and similar theories benefited from experimental microphysics,
including the discovery of x-rays (1895), radioactivity (1896), the electron
(1897), and the Zeeman splitting of spectral lines (1896). He never fully
accepted quantum theory and hoped it could be incorporated into classical
physics.
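The contraction factor can be sketched as a short function; the sample speed of 0.6c is an illustrative assumption:

```python
# Fitzgerald-Lorentz contraction factor sqrt(1 - u**2/c**2): the length of
# an object along its line of motion is multiplied by this factor.
import math

def contraction_factor(u, c=2.998e8):
    """Fractional length along the motion at speed u."""
    return math.sqrt(1.0 - (u / c) ** 2)

f = contraction_factor(0.6 * 2.998e8)  # at u = 0.6c the factor is 0.8
```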
Jules Henri Poincare (France) (1854 - 1912) - He was ambidextrous, had
poor muscular coordination, and was nearsighted. His memory was linked
with an outstanding ability to visualize the ideas he heard. He tended
to develop his results from first principles and not from previous results.
Among his first principles were the Principle of relativity, the Principle
of Action/Reaction, and the Principle of Least Action. Therefore, he attacked problems from many different angles. He made contributions to
of charged bodies and, therefore, their masses depend on the speed of the
bodies as well. He wrote:
When in the limit v = c, the increase in mass is infinite, thus a charged
sphere moving with the velocity of light behaves as if its mass were infinite,
its velocity therefore will remain constant, in other words it is impossible to
increase the velocity of a charged body moving through the dielectric beyond
that of light.
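The divergence described in the quotation can be illustrated with the velocity-dependent mass m = m0/(1 - v^2/c^2)^(1/2); the rest mass and sample speeds below are illustrative assumptions:

```python
# The velocity-dependent mass m = m0 / sqrt(1 - v**2/c**2) grows without
# bound as v approaches c, as the quoted passage describes.
import math

c = 2.998e8  # speed of light, m/s

def moving_mass(m0, v):
    """Mass of a body of rest mass m0 moving at speed v < c."""
    return m0 / math.sqrt(1.0 - (v / c) ** 2)

# Mass of a 1 kg body at increasing fractions of c:
masses = [moving_mass(1.0, f * c) for f in (0.5, 0.9, 0.99, 0.999)]
```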
Lorentz in 1899 assumed that the electrons undergo length contraction
in the line of motion.
The predictions of the theories of Abraham and Lorentz were supported
by the experiments of Walter Kaufmann (1901), but the experiments were
not precise enough to distinguish between them. Experiments lacked the
necessary precision until 1940, when Lorentz's formula was eventually supported.
The idea of an electromagnetic nature of matter, however, had to be
given up. Abraham (1904, 1905) argued that non-electromagnetic forces
were necessary to prevent Lorentz's contractile electrons from exploding.
He also showed that different results for the longitudinal electromagnetic
mass can be obtained in Lorentzs theory, depending on whether the mass
is calculated from its energy or its momentum, so a non-electromagnetic
potential (corresponding to 1/3 of the electron's electromagnetic energy)
was necessary to render these masses equal. Abraham doubted whether it
was possible to develop a model satisfying all of these properties.
To solve those problems, Henri Poincare in 1905 and 1906 introduced
some sort of pressure (Poincare stresses) of nonelectromagnetic nature.
Abraham required these stresses contribute nonelectromagnetic energy to
the electrons, amounting to 1/4 of their total energy or to 1/3 of their
electromagnetic energy. The Poincare stresses remove the contradiction in
the derivation of the longitudinal electromagnetic mass, they prevent the
electron from exploding, they remain unaltered by a Lorentz transformation
(i.e., they are Lorentz invariant), and were also thought of as a dynamical
explanation of length contraction. However, Poincare still assumed that
only the electromagnetic energy contributes to the mass of the bodies.
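The energy bookkeeping above can be checked numerically: a nonelectromagnetic energy of 1/3 of the electromagnetic energy is at the same time 1/4 of the total energy. The normalization E_em = 1 (with c = 1) is an arbitrary illustrative choice:

```python
# Numeric check of the Poincare-stress energy bookkeeping. E_em = 1 is an
# arbitrary normalization; units are chosen so that c = 1.
E_em = 1.0           # electromagnetic energy of the electron (normalized)
E_p = E_em / 3.0     # nonelectromagnetic Poincare-stress energy
E_tot = E_em + E_p   # total energy

fraction_of_total = E_p / E_tot  # the stresses are 1/4 of the total energy
mass_factor = E_tot / E_em       # the total mass carries the 4/3 factor
```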
As was later noted, the problem lies in the 4/3 factor of the electromagnetic rest mass when derived from the Abraham-Lorentz equations. However, when it is derived from the electron's electrostatic energy alone, we
have m_es = E_em/c^2, where the 4/3 factor is missing. This can be solved
by adding the nonelectromagnetic energy E_p of the Poincare stresses to
E_em, so that the electron's total energy is E_tot = m_em c^2. Thus the missing 4/3 factor
Its new value depends on ε. Planck soon found that the desired spectral
distribution is obtained if ε is proportional to the oscillation frequency
ν. Then the possible energies E = nε become E = nhν, as stated in his
postulate.
The success of Plancks postulate in leading to a theoretical blackbody
spectrum that agrees with experiment requires that its validity is tentatively
accepted until such time as it may be proved to lead to conclusions that
disagree with experiment. The behavior of some physical systems appears to
be in disagreement with the postulate. For instance, an ordinary pendulum
executes simple harmonic oscillations. This system appears to be capable of
possessing a continuous range of energies. Consider a pendulum consisting
of a 1-gm weight suspended from a 10-cm string. The oscillation frequency
of this pendulum is about 1.6/sec. The energy of the pendulum depends
on the amplitude of the oscillations. Assume the amplitude to be such that
the string in its extreme position makes an angle of 0.1 radians with the
vertical. Then the energy E is approximately 50 ergs. If the energy of the
pendulum is quantized, any decrease in the energy of the pendulum, for
instance caused by frictional effects, will take place in discontinuous jumps
of magnitude hν. If we consider that hν = 6.63 × 10^-27 erg·sec × 1.6 sec^-1 ≈ 10^-26
ergs, then even the most sensitive experimental equipment is incapable of
resolving the discontinuous nature of the energy decrease. No evidence,
either positive or negative, concerning the validity of Planck's postulate is
to be found from experiments involving a pendulum. The same is true of all
other macroscopic mechanical systems. Only when we consider systems in
which ν is so large and/or E is so small that hν is of the order of E are we
in a position to test Planck's postulate. One example is the high-frequency
standing waves in black body radiation.
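The pendulum estimate above can be reproduced in CGS units (gram, centimeter, erg):

```python
# The 1 g, 10 cm pendulum of the text: frequency ~1.6 /sec, energy ~50 ergs,
# and an energy quantum h*nu of order 1e-26 erg, far below any resolution.
import math

g = 980.0                # gravitational acceleration, cm/s^2
m, L, theta = 1.0, 10.0, 0.1  # mass (g), length (cm), amplitude (rad)

nu = math.sqrt(g / L) / (2.0 * math.pi)  # small-amplitude frequency, /sec
E = m * g * L * (1.0 - math.cos(theta))  # energy at the given amplitude, erg

h = 6.63e-27             # Planck constant, erg*sec
quantum = h * nu         # size of one energy jump, erg
ratio = quantum / E      # utterly unobservable
```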
Black body radiation causes the continuous spectrum against which the
bands of absorption or emission are seen. The microwave background radiation is closer to black body radiation than any other measurement (stars,
hot material on Earth, etc.). Therefore, a candidate model for the Theory
of Everything must model the black body nature of the universe.
Philosophically, Planck's derivation is statistical thermodynamics. As
the idea of quanta or discrete energy levels evolves into Quantum Mechanics, so too does the statistical nature of the calculations. Note that
the radiation is electromagnetic (light). This tends to support the photon
model of light.
The Sun's visible photosphere (after the atmosphere) is at T = 5770
K (about 5497 C), which implies a peak wavelength of about 5500 Å.
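The wavelength follows, at least approximately, from Wien's displacement law λ_max = b/T; the text does not state how the figure was obtained, and Wien's law actually puts the peak slightly above 5000 Å:

```python
# Wien's displacement law lam_max = b/T for the photosphere temperature
# quoted in the text. The law itself is standard; applying it here is an
# assumption about how the quoted wavelength was obtained.
b = 2.898e-3   # Wien displacement constant, m*K
T = 5770.0     # photosphere temperature, K

lam_angstrom = b / T * 1e10  # peak wavelength, ~5000 Angstroms (green light)
```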
The photoelectric effect is the emission of electrons from a surface (usually metallic) upon exposure to, and absorption of, electromagnetic radiation (such as visible light and ultraviolet radiation) that is above the threshold frequency particular to each type of surface. No electrons are emitted
for radiation below the threshold frequency, as the electrons cannot gain sufficient
energy to overcome their atomic bonding. The electrons that are emitted
are often termed photoelectrons in many textbooks.
The photoelectric effect helped further wave-particle duality, whereby
physical systems (such as photons in this case) can display both wave-like and particle-like properties and behaviors. The creators of quantum
mechanics used this concept.
The first recorded observation of the photoelectric effect was by Heinrich
Hertz in 1887 in the journal Annalen der Physik when he was investigating
the production and reception of electromagnetic (EM) waves. His receiver
consisted of a coil with a spark gap, whereupon a spark would be seen
upon detection of EM waves. He placed the apparatus in a darkened box
in order to see the spark better. He observed that the maximum spark
length was reduced when in the box. A glass panel placed between the
source of EM waves and the receiver absorbed the ultraviolet radiation that
assisted the electrons in jumping across the gap. When the panel was removed, the spark
length would increase. He observed no decrease in spark length when he
substituted quartz for glass, as quartz does not absorb UV radiation.
Hertz concluded his months of investigation and reported the results
obtained. He did not further pursue investigation of this effect, nor did he
make any attempt at explaining how the observed phenomenon was brought
about.
In 1902 Philipp von Lenard observed the variation in electron energy
with light frequency. He used a powerful electric arc lamp that enabled
him to investigate large changes in intensity and that had sufficient power
to enable him to investigate the variation of potential with light frequency.
His experiment directly measured potentials, not electron kinetic energy.
He found the electron energy by relating it to the maximum stopping potential (voltage) in a phototube. He found that the calculated maximum
electron kinetic energy is determined by the frequency of the light. However, Lenard's results were qualitative rather than quantitative because of
the difficulty in performing the experiments. The experiments needed to be
done on freshly cut metal so that the pure metal was observed, but the metal oxidized in tens of minutes even in the partial vacuums he used. The current
emitted by the surface was determined by the light's intensity, or brightness. Doubling the intensity of the light doubled the number of electrons
emitted from the surface. Lenard did not know of photons.
Albert Einstein (1879 Germany - 1955 USA) acknowledged: Above all,
it is my disposition for abstract and mathematical thought, and my lack of
imagination and practical ability. Because of a mediocre scholastic record,
he had difficulty finding a job at universities. Einstein worked in the Bern
patent office from 1902 to 1909, holding a temporary post. By 1904 the
position was made permanent, and he was promoted to technical expert second
class in 1906. While in the Bern patent office he completed an astonishing
range of theoretical physics publications, written in his spare time without
the benefit or hindrance of close contact with scientific colleagues.
In the first of three papers, all published in 1905, Einstein examined the
phenomenon discovered by Max Planck, according to which electromagnetic
energy seemed to be emitted from radiating objects in discrete quantities.
The energy of these quanta was directly proportional to the frequency of the
radiation. This seemed to contradict classical electromagnetic theory based
on Maxwells equations and the laws of thermodynamics that assumed that
electromagnetic energy consisted of waves that could contain any small
amount of energy. Einstein used Plancks quantum hypothesis to describe
the electromagnetic radiation of light.
In 1905 Einstein gave a mathematical description of how the photoelectric
effect was caused by the absorption of what were later called photons, or quanta
of light, in the interaction of light with the electrons in the substance. The
simple explanation by Einstein in terms of absorption of single quanta of
light explained the features of the phenomenon and helped explain the
characteristic frequency.
The idea of light quanta was motivated by Max Planck's published law
of black-body radiation [On the Law of Distribution of Energy in the Normal Spectrum, Annalen der Physik 4 (1901)], which assumed that Hertzian
oscillators could only exist at energies E = hf. Einstein, by assuming that
light actually consisted of discrete energy packets, wrote an equation for
the photoelectric effect that fit experiments. This was an enormous theoretical leap. Even after experiments showed that Einstein's equations for
the photoelectric effect were accurate, the reality of the light quanta was
strongly resisted. The idea of light quanta contradicted the wave theory
of light that followed naturally from James Clerk Maxwell's equations for
electromagnetic behavior and, more generally, the assumption of infinite
divisibility of energy in physical systems, which were believed to be well
understood and well verified.
Einstein's work predicted that the energy of the ejected electrons would
increase linearly with the frequency of the light. Surprisingly, that had
not yet been tested. That the energy of the photoelectrons increased with
increasing frequency of incident light was known in 1905, but the manner
of the increase was not experimentally determined to be linear until 1915
when Robert Andrews Millikan showed that Einstein was correct.
The photoelectric effect helped propel the then-emerging concept of the
dual nature of light (light exhibits characteristics of waves and particles at
different times). That the energy of the emitted electrons did not depend on
the intensity of the incident radiation was difficult to understand in terms
of the classical wave description of light. Classical theory predicted that
the electrons could gather up energy over a period of time and then be
emitted. A preloaded state would need to persist in matter for such a
classical theory to work. The idea of the preloaded state was discussed
in Millikan's book Electrons (+ & -) and in Compton and Allison's book
X-Rays in Theory and Experiment. The classical theory was abandoned.
The photons of the light beam have a characteristic energy given by the
wavelength of the light. If an electron absorbs the energy of one photon
in the photoemission process and has more energy than the work function
(the minimum energy needed to remove an electron from a solid to a point
immediately outside the solid surface), it is ejected from the material. If
the photon energy is too low, however, the electron is unable to escape the
surface of the material. Increasing the intensity of the light beam does not
change the energy of the constituent photons, only their number. Thus
the energy of the emitted electrons does not depend on the intensity of the
incoming light. Electrons can absorb energy from photons when irradiated,
but they follow an all-or-nothing principle. All of the energy from one
photon must be absorbed and used to liberate one electron from atomic
binding, or the energy is re-emitted. If the photon is absorbed, some of the
energy is used to liberate the electron from the atom, and the rest contributes
to the electron's kinetic energy as a free particle.
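The all-or-nothing accounting above can be sketched numerically with Einstein's photoelectric relation, K = hf - phi, where phi is the work function. The sodium work function and the light frequencies below are assumed illustrative values, not data from this text.

```python
# Einstein's photoelectric relation: K_max = h*f - phi.
H = 6.62607015e-34      # Planck's constant, J*s
EV = 1.602176634e-19    # joules per electronvolt

def max_kinetic_energy_ev(freq_hz, work_function_ev):
    """Maximum kinetic energy (eV) of a photoelectron, or None if
    the photon energy is below the work function (no emission)."""
    k = H * freq_hz / EV - work_function_ev
    return k if k > 0 else None

phi_na = 2.28          # assumed work function of sodium, eV
f_violet = 7.5e14      # violet light, Hz
f_red = 4.3e14         # red light, Hz
print(max_kinetic_energy_ev(f_violet, phi_na))  # ejected: positive energy
print(max_kinetic_energy_ev(f_red, phi_na))     # below threshold: None
```

Raising the intensity of the red light in this sketch changes nothing: only the frequency enters the relation, which is the point of the paragraph above.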
Einstein's second 1905 paper proposed what is today called Special Relativity.
He based his new theory on a reinterpretation of the classical principle of
relativity, namely that the laws of physics must have the same
form in any frame of reference. As a second fundamental hypothesis, Einstein
assumed that the speed of light in empty space remains constant in
all frames of reference, as required by Maxwell's theory.
Eventually, Albert Einstein (1905) drew the conclusion that established
theories such as:
Relativity. Lorentz and Poincare had also adopted these same principles,
as necessary to achieve their final results, but didn't recognize that they
were also sufficient. Therefore, they obviated all the other assumptions
underlying Lorentz's initial derivations (many of which later turned out to
be incorrect). The stationary, immaterial, and immobile ether of Poincare
was little different from space. Therefore, Special Relativity very quickly
gained wide acceptance among physicists, and the 19th-century concept of
a luminiferous ether was no longer considered useful.
Lorentz's ether theory (LET) and Special Relativity are similar. The difference
is that LET assumes the contraction and derives the constancy of c,
whereas Special Relativity assumes the constancy of c and derives the contraction.
Further, LET assumes an undetectable ether, and in LET the validity of
Poincare's relativity principle seems coincidental. Experiments that support
Special Relativity also support LET. However, Special Relativity is preferred
by modern physics because the understanding of spacetime was fundamental
to the development of General Relativity.
Einstein's 1905 presentation of Special Relativity was soon supplemented,
in 1907, by Hermann Minkowski, who showed that the relations had a very
natural interpretation in terms of a unified four-dimensional spacetime
in which absolute intervals are given by an extension of the
Pythagorean theorem. Poincare in 1906 anticipated some of Minkowski's
ideas. The utility and naturalness of the representations by Einstein and
Minkowski contributed to the rapid acceptance of Special Relativity and
to the corresponding loss of interest in Lorentz's ether theory.
Einstein in 1907 criticized the ad hoc character of Lorentz's contraction
hypothesis in his theory of electrons, because according to him it
was only invented to rescue the hypothesis of an immobile ether. Einstein
thought it necessary to replace Lorentz's theory of electrons by assuming
that Lorentz's "local time" can simply be called "time." Einstein explained
in 1910 and 1912 that he borrowed the principle of the constancy of light
from Lorentz's immobile ether, but he recognized that this principle together
with the principle of relativity makes the ether useless and leads to
Special Relativity. Although Lorentz's hypothesis is completely equivalent
to the new concept of space and time, Minkowski held that it becomes
much more comprehensible in the framework of the new spacetime physics.
However, Lorentz disagreed that it was ad hoc, and he argued in 1913 that
there is little difference between his theory and the negation of a preferred
reference frame, as in the theory of Einstein and Minkowski, so that it is a
matter of taste which theory is preferred.
Einstein in 1905 derived, as a consequence of the relativity principle, that
the inertia of energy is actually represented by E/c², but in contrast to
Poincare's 1900 paper, Einstein recognized that matter itself loses or gains
mass during the emission or absorption. So the mass of any form of matter
is equal to a certain amount of energy, which can be converted into and
re-converted from other forms of energy. This is the mass-energy equivalence,
represented by E = mc². Einstein didn't have to introduce fictitious
masses and also avoided the perpetual motion problem, because according
to Darrigol, Poincare's radiation paradox can simply be solved by applying
Einstein's equivalence. If the light source loses mass E/c² during the emission,
the contradiction in the momentum law vanishes without the need of
any compensating effect in the ether.
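The resolution above rests on a one-line computation: the emitting body loses mass E/c². A minimal sketch; the solar luminosity figure used below is an assumed round number for illustration, not data from this text.

```python
# Mass-energy equivalence: a source emitting energy E loses mass
# Delta m = E / c^2, which is how the radiation paradox is resolved.
C = 299_792_458.0  # speed of light, m/s

def mass_loss_kg(energy_joules):
    """Mass equivalent of radiated energy, Delta m = E / c^2."""
    return energy_joules / C**2

# The Sun radiates roughly 3.8e26 J each second (assumed round figure):
print(mass_loss_kg(3.8e26))  # roughly 4e9 kg lost per second
```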
Similar to Poincare, Einstein concluded in 1906 that the inertia of (electromagnetic)
energy is a necessary condition for the center of mass theorem
to hold in systems in which electromagnetic fields and matter act on
each other. Based on the mass-energy equivalence, he showed that emission
and absorption of electromagnetic radiation, and therefore the transport of
inertia, solves all problems. On that occasion, Einstein referred to Poincare's
1900 paper and wrote: "Although the simple formal views, which must be
accomplished for the proof of this statement, are already mainly contained
in a work by H. Poincare [Lorentz-Festschrift, p. 252, 1900], for the sake of
clarity I won't rely on that work."
Also, Poincare's rejection of the reaction principle due to the violation of
the mass conservation law can be avoided through Einstein's E = mc²,
because mass conservation appears as a special case of the energy conservation
law.
The third of Einstein's papers of 1905 concerned statistical mechanics,
a field that had been studied by Ludwig Boltzmann and Josiah Gibbs.
Special Relativity is a fundamental theory in the description of physical
phenomena in its domain of applicability. Many experiments played important
roles in its development and justification. The strength of the theory
lies in the fact that it is the only one that can correctly predict to high precision
the outcome of all known (and very different) experiments in its domain of
applicability. Its domain of applicability is where no considerable influence
of gravitation or other forces occurs; it is restricted to flat spacetime. Many
of those experiments are still conducted with increased precision, and the
only area where deviations from the predictions of Special Relativity are not
completely ruled out by experiment is at the Planck scale and below.
Tests have been conducted concerning refutations of the ether and ether
drag, isotropy of the speed of light, time dilation and length contraction,
relativistic mass and energy, Sagnac and Fizeau experiments, and the one-way
speed of light.
The effects of Special Relativity can phenomenologically be derived from
the Michelson-Morley experiment, which tests the dependence of the speed of
light on the direction of the measuring device; the Kennedy-Thorndike experiment,
which tests the dependence of the speed of light on the velocity of the
measuring device; and the Ives-Stilwell experiment, which tests time dilation.
The combination of these effects is important, because most of them can
be interpreted in different ways when viewed individually. For example,
isotropy experiments such as Michelson-Morley can be seen as a simple consequence
of the relativity principle, according to which any inertially moving
observer can consider himself as at rest. Therefore, such an experiment alone
is also compatible with Galilean-invariant theories like emission theories or
complete ether drag, which also contain some sort of relativity principle.
Other experiments, such as the Ives-Stilwell experiment (which assumes a
fixed c) and the refutations of emission theories and of complete ether
drag, exclude the Galilean-invariant theories. Lorentz invariance, and thus
Special Relativity, remains as the only theory that explains all those experiments.
To measure the isotropy of the speed of light, variations of the Michelson-Morley
and Kennedy-Thorndike experiments are still under way. The
Kennedy-Thorndike experiments employ different arm lengths, and the evaluations
last several months. Therefore, the influence of different velocities
during Earth's orbit around the Sun can be observed. Lasers, masers,
and optical resonators are in use in modern variants of Michelson-Morley
and Kennedy-Thorndike experiments, which diminishes the possibility
of any anisotropy of the speed of light. Lunar Laser Ranging Experiments
are being conducted as a variation of the Kennedy-Thorndike experiment.
Because periodic processes and frequencies can be considered as clocks,
extremely precise clock-comparison experiments such as the Hughes-Drever
experiments are also still conducted. An underlying assumption of these
experiments is that the clock process is unchanged by the surrounding energy
field.
Emission theories, according to which the speed of light depends on the
velocity of the source, can explain the negative outcome of the ether drift
experiments. A series of experiments definitely ruled out these models.
Examples are the Alvager experiment, where the photons don't acquire the
speed of the decaying mesons; the Sagnac experiment, in which the light rays
move independently of the velocity of the rotating apparatus; and the
de Sitter double star experiment, showing that the orbits of the stars don't
appear scrambled because of different propagation times of light. Other
observations also demonstrated that the speed of light is independent of
the frequency and energy of the light.
A series of one-way speed of light measurements were undertaken that
confirm the isotropy of the speed of light. Because the one-way speed depends
on the definition of simultaneity and, therefore, on the method of
synchronization, only the two-way speed of light (from A to B back to A)
can directly be measured. Assuming the Poincare-Einstein synchronization
makes the one-way speed equal to the two-way speed. Yet different synchronizations
can be conceived that also give an isotropic two-way speed but an anisotropic
one-way speed. It was also shown that synchronization by slow clock transport
is equivalent to Einstein synchronization, and also to non-standard synchronizations,
as long as the moving clocks are subjected to time dilation.
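The point that only the round-trip speed is directly measurable can be made concrete. A sketch using the standard Reichenbach synchronization parameter eps (eps = 1/2 being Einstein synchronization); the parametrization is the usual conventionality-of-simultaneity form, an assumption of this sketch rather than anything stated in the text.

```python
# One-way vs. two-way speed of light. Under a Reichenbach parameter
# eps, the assumed one-way speeds are c/(2*eps) outbound and
# c/(2*(1-eps)) inbound, yet the round-trip speed is always c.
C = 299_792_458.0  # two-way speed of light, m/s

def two_way_speed(eps, length=1.0):
    t_out = length / (C / (2 * eps))          # A -> B
    t_back = length / (C / (2 * (1 - eps)))   # B -> A
    return 2 * length / (t_out + t_back)

for eps in (0.3, 0.5, 0.7):   # anisotropic and isotropic choices
    print(two_way_speed(eps)) # each prints the same round-trip speed c
```

Any experiment that times a reflected signal measures only this round-trip average, which is why the anisotropic models mentioned below are experimentally indistinguishable from Special Relativity.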
There are many models with an anisotropic one-way speed of light that are
experimentally equivalent to Special Relativity. However, only Special Relativity
is acceptable to the overwhelming majority of physicists. All other
synchronizations are much more complicated than Einstein's, and the other
models such as Lorentz Ether Theory are based on extreme and implausible
assumptions concerning some dynamical effects, which are aimed at hiding
the preferred frame from observation.
Time dilation is interpreted to have been directly observed for the first time
in the Ives-Stilwell experiment (1938) by observing the transverse Doppler
effect, where the displacement of the center of gravity of the overlapping
light waves was evaluated. Another variant is the Moessbauer rotor experiment,
in which gamma rays were sent from the middle of a rotating disc to
a receiver at the edge of the disc, so that the transverse Doppler effect can
be evaluated by means of the Moessbauer effect. Time dilation of moving
particles was also verified by measuring the lifetime of muons in the atmosphere
and in particle accelerators. The Hafele-Keating experiment, on
the other hand, confirmed the twin paradox, i.e., that a clock moving from
A to B back to A is retarded with respect to the initial clock. Because
the moving clock undergoes acceleration, the effects of General Relativity
play an essential role. Because the particle decay mechanism is unmodeled,
acceleration and changing gravitational potential may play a role in
these experiments. These experiments have hidden assumptions such as the
about how a ray of light from a distant star, passing near the Sun would
appear to be bent slightly in the direction of the Sun. This would be highly
significant, as it would lead to the first experimental evidence in favor of
Einstein's theory. Although Newton hypothesized gravitational light bending,
Einstein calculated a nearly correct value.
Einstein received the Nobel Prize in 1921, not for relativity but
for his 1905 work on the photoelectric effect.
Charles Barkla discovers in 1906 that each chemical element has a characteristic
X-ray emission and that the degree of penetration of these X-rays is related
to the atomic weight of the element.
Hans Geiger and Ernest Marsden in 1909 discover large angle deflections
of alpha particles by thin metal foils.
Ernest Rutherford and Thomas Royds in 1909 demonstrate that alpha
particles are doubly ionized helium atoms.
Ernest Rutherford in 1911 explains the Geiger-Marsden experiment by
invoking a nuclear atom model and derives the Rutherford cross section.
Max von Laue in 1912 suggests using lattice solids to diffract X-rays.
The Laue X-ray diffraction pattern is obtained by directing an X-ray beam
through a crystal.
Walter Friedrich and Paul Knipping in 1912 diffract X-rays in sphalerite
(zinc blende).
Sir W.H. Bragg and his son Sir W.L. Bragg in 1913 derive Bragg's law,
the condition for strong X-ray reflection off crystal surfaces at certain angles.
Although simple, Bragg's law confirmed the existence of real particles at the
atomic scale, as well as providing a powerful new tool for studying crystals
in the form of X-ray diffraction. The Braggs were awarded the Nobel Prize
in Physics in 1915 for their work in determining crystal structures, beginning
with NaCl, ZnS, and diamond.
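Bragg's law, n λ = 2 d sin θ, can be sketched as a small calculation. The copper K-alpha wavelength and NaCl plane spacing below are typical textbook figures, assumed for illustration rather than taken from this text.

```python
# Bragg's law: n * lambda = 2 * d * sin(theta). Given a lattice
# spacing and a wavelength, find the angles of strong reflection.
import math

def bragg_angles_deg(wavelength_nm, d_nm, max_order=5):
    """Angles (degrees) at which order-n reflections occur."""
    angles = []
    for n in range(1, max_order + 1):
        s = n * wavelength_nm / (2 * d_nm)
        if s <= 1.0:               # no reflection once sin exceeds 1
            angles.append(math.degrees(math.asin(s)))
    return angles

# Assumed Cu K-alpha X-rays (0.154 nm) on NaCl (d = 0.282 nm):
print(bragg_angles_deg(0.154, 0.282))
```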
When X-rays hit an atom, they make the electron cloud move, as does
any electromagnetic wave. The movement of these charges re-radiates waves
with the same frequency (blurred slightly due to a variety of effects); this
phenomenon is known as Rayleigh scattering (or elastic scattering).
These re-emitted X-rays interfere, giving constructive or destructive
interference. This is a wave description rather than a photon description. How
photons form constructive or destructive interference remains a mystery.
Henry Moseley in 1913 shows that nuclear charge is the basis for numbering the elements.
Niels Bohr in 1913 presents his quantum model of the atom. Bohr's
model improves Rutherford's original nuclear model by placing restrictions
marily because the three types of particle from which ordinary matter is
made - electrons, protons, and neutrons - are all subject to it. The Pauli
exclusion principle underlies many of the characteristic properties of matter,
from the large-scale stability of matter to the existence of the periodic
table of the elements.
Particles obeying the Pauli exclusion principle are called fermions and
are described by Fermi-Dirac statistics. Apart from the familiar electron,
proton, and neutron, fermions include the neutrinos, the quarks (from which
protons and neutrons are made), as well as atoms like helium-3. All fermions
possess half-integer spin, meaning that they possess an intrinsic angular
momentum whose value is Planck's constant divided by 2π times a half-integer
(1/2, 3/2, 5/2, etc.). Fermions in quantum mechanics theory are
described by antisymmetric states. Particles that are not fermions are
almost invariably bosons. The expected number of particles in an energy
state for Bose-Einstein (B-E) statistics reduces to that of Maxwell-Boltzmann
(M-B) statistics for energies much greater than kT.
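The reduction of Bose-Einstein to Maxwell-Boltzmann occupancy at energies well above kT can be checked numerically. The chemical potential is set to zero for simplicity, an assumption of this sketch.

```python
# Occupation numbers: Bose-Einstein 1/(exp(E/kT) - 1) versus
# Maxwell-Boltzmann exp(-E/kT), with chemical potential taken as 0.
# At energies well above kT the two agree.
import math

def bose_einstein(e_over_kt):
    return 1.0 / (math.exp(e_over_kt) - 1.0)

def maxwell_boltzmann(e_over_kt):
    return math.exp(-e_over_kt)

for x in (0.5, 1.0, 5.0, 10.0):       # E / kT
    be, mb = bose_einstein(x), maxwell_boltzmann(x)
    print(x, be, mb)                   # B-E approaches M-B as E/kT grows
```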
Molecules and crystals are atoms maintaining a nearly constant spatial
relationship at a given temperature. The atoms are said to be bonded with one
another. A chemical bond may be visualized as the multipole balance between
the positive charges in the nuclei and the negative charges oscillating
about them. More than simple attraction and repulsion, the energies and
distributions characterize the availability of an electron to bond to another
atom.
A chemical bond can be a covalent bond, an ionic bond, a hydrogen
bond, or merely the result of the Van der Waals force. Each kind of bond is
ascribed to some potential. John Lennard-Jones in 1924 proposed a semi-empirical,
interatomic force law. These are fundamentally electrostatic interactions
(ionic interactions, hydrogen bonds, dipole-dipole interactions) or
electrodynamic interactions (van der Waals/London forces). Coulomb's law
classically describes electrostatic interactions. The basic difference between
them is the strength of their bonding force or, rather, the energy required
to significantly change the spatial relationship, which is called breaking the
bond. Ionic interactions are the strongest, with integer-level charges; hydrogen
bonds have partial charges and are about an order of magnitude weaker;
and dipole-dipole interactions also come from partial charges, another order
of magnitude weaker.
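The Lennard-Jones force law mentioned above is commonly written as the 12-6 potential V(r) = 4ε[(σ/r)¹² − (σ/r)⁶]. A minimal sketch; the ε and σ values below are commonly quoted figures for argon, an assumption for illustration.

```python
# Lennard-Jones 12-6 potential: short-range repulsion plus
# longer-range van der Waals attraction.
def lennard_jones(r_nm, eps_kj_mol=0.997, sigma_nm=0.340):
    """Pair potential in kJ/mol at separation r_nm (nanometers)."""
    sr6 = (sigma_nm / r_nm) ** 6
    return 4.0 * eps_kj_mol * (sr6 * sr6 - sr6)

r_min = 2 ** (1 / 6) * 0.340           # separation of minimum energy
print(lennard_jones(r_min))            # equals -eps at the minimum
print(lennard_jones(0.30))             # strongly repulsive at short range
```

The energy needed to pull the pair from r_min out to infinity, ε here, is the "energy required to break the bond" in the paragraph above.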
These potentials create the interactions that hold atoms together in
molecules or crystals. Valence Bond Theory, the Valence Shell Electron
Pair Repulsion model (VSEPR), and the concept of oxidation number can
be used to explain molecular structure and composition in many simple
compounds. Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as
metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used.
A figure in geometry is chiral (and said to have chirality) if it is not
identical to its mirror image, or, more precisely, if it cannot be mapped to its
mirror image by rotations and translations alone. For example, a right shoe
is different from a left shoe, and clockwise is different from counterclockwise.
Handedness (also referred to as chirality or laterality) is an attribute of
matter where an object is not identical to its mirror image. A chiral object
and its mirror image are called enantiomorphs (Greek for "opposite forms") or,
when referring to molecules, enantiomers. A non-chiral object is called
achiral (sometimes also amphichiral) and can be superposed on its mirror
image. Human hands are perhaps the most universally recognized example
of chirality. The left hand is a non-superimposable mirror image of the
right hand.
A chiral molecule in chemistry is a type of molecule that lacks an internal plane of symmetry and thus has a non-superimposable mirror image.
A chiral molecule is not necessarily asymmetric (devoid of any symmetry
element), as it can have, for example, rotational symmetry. The feature
that is most often the cause of chirality in molecules is the presence of an
asymmetric carbon atom. Most life that we know is made of carbon atoms.
Is there a cause and effect?
The right-hand rule imposes the following procedure for choosing one
of the two directions. One form of the right-hand rule is used in situations
in which an ordered operation must be performed on two vectors a and b
whose result is a vector c perpendicular to both a and b. The
most common example is the vector cross product.
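The cross product can be sketched directly; the component formula below is the standard definition, and the printed directions follow the right-hand rule.

```python
# Right-hand rule via the vector cross product: c = a x b is
# perpendicular to both a and b, with direction given by the rule.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

x_hat, y_hat = (1, 0, 0), (0, 1, 0)
c = cross(x_hat, y_hat)
print(c)                              # (0, 0, 1): x cross y = z
print(dot(c, x_hat), dot(c, y_hat))   # both 0: c is perpendicular to a and b
```

Reversing the order of the operands flips the result, which is why the operation must be "ordered" as stated above.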
A different form of the right-hand rule, sometimes called the right-hand
grip rule, is used in situations where a vector must be assigned to the
(change of viewpoint) in the direction of motion of the particle, and the sign
of the projection (helicity) is fixed for all reference frames. The helicity is
a relativistic invariant.
Neutrino oscillation has been discovered, which implies that neutrinos
have mass. The photon and gluon are still expected to be massless,
although the assumption that they are massless has not been well tested.
Hence, these are the only two particles now known for which helicity could
be identical to chirality, and only one of them has been confirmed by
measurement. An underlying assumption is the validity of the Lorentz equations
for mass and of the measurement mechanism for mass. That is, a measuring
frame must travel slower than c in Special Relativity. All other observed
particles have mass and thus may have different helicities in different reference
frames. As-yet unobserved particles, like the graviton, might be massless, and
hence have invariant helicity like the photon.
Only left-handed fermions have been observed to interact via the weak
interaction. Two left-handed fermions interact more strongly than right-handed
or opposite-handed fermions in most circumstances. Experiments
sensitive to this effect imply that the universe has a preference for left-handed
chirality, in contrast to the symmetry of the other forces of nature.
A theory that is asymmetric between chiralities is called a chiral theory,
while a parity-symmetric theory is sometimes called a vector theory. Most
pieces of the Standard Model of physics are non-chiral, which may be due
to problems of anomaly cancellation in chiral theories. Quantum chromodynamics
is an example of a vector theory, because both chiralities of all
quarks appear in the theory and couple the same way.
The electroweak theory developed in the mid-20th century is an example
of a chiral theory. Originally, it assumed that neutrinos were massless and
only assumed the existence of left-handed neutrinos (along with their complementary
right-handed antineutrinos). After the observation of neutrino
oscillations, which imply that neutrinos are massive like all other fermions,
the revised theories of the electroweak interaction include both right- and
left-handed neutrinos. However, it is still a chiral theory, as it does not
respect parity symmetry.
The exact nature of the neutrino is still unsettled and so the electroweak
theories that have been proposed are different. Most accommodate the
chirality of neutrinos in the same way as was already done for all other
fermions.
Science in general deals with many complex and, hopefully, precise concepts. Therefore, symbols are defined in scientific papers to be precise
Table 3.2: Light wave or particle evidence. Some tests such as atomic line spectra do
not reject either model.
Reject PARTICLE          Reject WAVE
REFRACTION               BLACK BODY
DIFFRACTION              PHOTOELECTRIC EFFECT
INTERFERENCE             X-RAY PRODUCTION
POLARIZATION             COMPTON EFFECT
and complete without very long repetition each time a concept is invoked.
Popular texts then use the symbols such as "light wave" without the long
repetition of the data and equation set.
We see ripples on the surface of fluids such as water. Careful plotting of
the amplitude and of the change in amplitude over duration (time indicated
by clocks) and position results in a mathematical relation among the amplitude,
position, and duration. This relation is most simply stated as a
Fourier series. The first-order term of a Fourier series is a sine curve. This
equation set is called a transverse wave or simply a wave.
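The first-order sine term can be written down directly as a traveling transverse wave y(x, t) = A sin(kx − ωt). The amplitude, wavelength, and period below are arbitrary illustrative choices.

```python
# First-order Fourier term of a surface ripple: a transverse wave
# y(x, t) = A * sin(k*x - omega*t).
import math

def wave(x, t, amplitude=1.0, wavelength=2.0, period=1.0):
    k = 2 * math.pi / wavelength       # spatial frequency
    omega = 2 * math.pi / period       # temporal frequency
    return amplitude * math.sin(k * x - omega * t)

# The pattern moves at the phase speed v = wavelength / period:
print(wave(0.5, 0.0))   # a crest at x = 0.5 when t = 0
print(wave(2.5, 1.0))   # one period later the crest has moved one wavelength
```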
Saying "light wave" or "light is a wave" means (1) observations of some
of the measured behavior of light are consistent with the wave equation
set, (2) light may be hypothesized to behave according to other relations
suggested by the wave equation set, and (3) there are tests that do not
reject this hypothesis. Note the popular interpretation of "light is a wave"
is the philosophical use of the word "is."
There are other observations of light that obey the equation set of particles.
The popular terminology is "light is a photon" or "photon." Further,
there are tests that do not reject the photon hypothesis.
The problem is that the wave equation set and the photon equation
set are incompatible. That is, the tests that do not reject the wave
equation set do reject the particle equation set, and vice versa.
Dr. de Broglie presented his pilot wave theory at the 1927 Solvay Conference,
after close collaboration with Schrodinger. Schrodinger developed
his wave equation for de Broglie's theory. The pilot wave model is that
particles have a unique position and momentum and are guided by a pilot
wave in a ψ-field. The ψ-field satisfies the Schrodinger equation, acts on
the particles to guide their path, is ubiquitous, and is non-local. The origin
of the ψ-field and the dynamics of a single photon are unmodeled. The pilot
wave theory is a causal, hidden variable, and, perhaps, a deterministic
model.
Erwin Schrodinger (1887-1961) stated his nonrelativistic quantum
wave equation and formulated quantum wave mechanics in 1926. The
Schrodinger equation describes the time-dependence of quantum mechanical
systems. It is of central importance to the theory of quantum mechanics,
playing a role analogous to Newton's second law in classical mechanics.
De Broglie's pilot wave model suggests matter is particles, and measurements
are limited in their ability to determine the position and momentum
of small particles. The de Broglie theory suggests each particle does have a
unique position and momentum, but our measurements have considerable
uncertainty. Therefore, the state vector encodes the PROBABILITIES for
the outcomes of all possible measurements applied to the system. As the
state of a system generally changes over time, the state vector is a function
of time. The Schrodinger equation provides a quantitative description of
the rate of change of the state vector of de Broglie's ψ.
Using Dirac's bra-ket notation (bra ⟨·| and ket |·⟩), the instantaneous
state vector at time t is denoted by |ψ(t)⟩. The Schrodinger equation is:
iℏ (d/dt)|ψ(t)⟩ = H|ψ(t)⟩, where H is the Hamiltonian operator of the system.
Although Heisenberg's matrix mechanics does not include concepts such as the wave function of Erwin
Schrodinger's wave equation, the two approaches were proven to be mathematically
equivalent by the mathematician David Hilbert.
The Heisenberg uncertainty principle is one of the cornerstones of quantum
mechanics and was discovered by Werner Heisenberg in 1927. The
Heisenberg uncertainty principle in quantum physics, sometimes called the
Heisenberg indeterminacy principle, expresses a limitation on the accuracy
of (nearly) simultaneous measurement of observables such as the position
and the momentum of a particle. The objective of measurement in quantum
mechanics is to determine the relation between the measured parameter and
the interacting dynamical system. The joke is "Heisenberg may have done
something." The uncertainty principle quantifies the imprecision by providing
a lower bound for the product of the dispersions of the measurements.
For instance, consider repeated trials of the following experiment: By
an operational process, a particle is prepared in a definite state and two
successive measurements are performed on the particle. The first one measures
its position and the second, immediately after, measures its momentum.
Suppose that the operational process of preparing the state is such that on
every trial the first measurement yields the same value, or at least a distribution
of values with a very small dispersion dx around a value x. Then the
second measurement will have a distribution of values whose dispersion dp
is at least inversely proportional to dx. In quantum mechanical terminology,
the operational process has produced a particle in a possibly mixed state
with definite position. Any momentum measurement on the particle will
necessarily yield a dispersion of values on repeated trials. Moreover, if we
follow the momentum measurement by a measurement of position, we will
get a dispersion of position values. More generally, an uncertainty relation
arises between any two observable quantities defined by non-commuting
operators.
The uncertainty principle in quantum mechanics is sometimes explained
by claiming that the measurement of position necessarily disturbs a particle's
momentum. Heisenberg himself may have offered explanations that
suggest this view, at least initially. That the role of disturbance is not essential
can be seen as follows: Consider an ensemble of (non-interacting)
particles all prepared in the same state. We measure the momentum or the
position (but not both) of each particle in the ensemble. Probability distributions
for both these quantities are obtained. The uncertainty relations
still hold for the dispersions of momentum values dp and position values
dx.
The uncertainty principle does not just apply to position and momentum.
It applies to every pair of conjugate variables. Conjugate variables are
a pair of variables mathematically defined in such a way that they become
Fourier transform duals of one another. A duality translates concepts,
theorems, or mathematical structures into other concepts, theorems, or
structures, in a one-to-one fashion. An example is that exchanging the terms
"point" and "line" everywhere in certain geometric theorems results in new,
equally valid theorems. An example of a pair of conjugate variables is the
x-component of angular momentum (spin) vs. the y-component of angular
momentum. Unlike the case of position versus momentum discussed above,
in general the lower bound for the product of the uncertainties of two conjugate
variables depends on the system state. The uncertainty principle becomes
a theorem in the theory of operators.
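For the position-wavenumber pair, the Fourier duality behind the uncertainty relation can be checked numerically: the spread of a Gaussian packet in x times the spread of its Fourier transform in k comes out near 1/2, the Heisenberg minimum in units with ℏ = 1. The grid sizes and the σ value are arbitrary choices of this sketch.

```python
import cmath, math

# Gaussian wave packet psi(x) = exp(-x^2 / (4 sigma^2)): its density
# has spread sigma, the density of its Fourier transform has spread
# 1/(2 sigma), so the product of the two spreads is 1/2.
N, L, sigma = 256, 30.0, 1.3
xs = [-L / 2 + L * i / N for i in range(N)]
psi = [math.exp(-x * x / (4 * sigma * sigma)) for x in xs]

def spread(values, weights):
    """Standard deviation of values weighted by weights."""
    w = sum(weights)
    mean = sum(v * p for v, p in zip(values, weights)) / w
    var = sum((v - mean) ** 2 * p for v, p in zip(values, weights)) / w
    return math.sqrt(var)

# Density in x, and in k via a direct (slow) Fourier sum.
prob_x = [p * p for p in psi]
ks = [2 * math.pi * (i - N // 2) / L for i in range(N)]
phi = [sum(p * cmath.exp(-1j * k * x) for p, x in zip(psi, xs)) for k in ks]
prob_k = [abs(f) ** 2 for f in phi]

print(spread(xs, prob_x) * spread(ks, prob_k))  # close to 0.5
```

Squeezing sigma narrows the packet in x and widens its transform in k by the same factor, so the product of the spreads is unchanged.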
Paul Dirac introduces Fermi-Dirac statistics in 1926. Max Born interprets the probabilistic nature of wavefunctions in 1927.
Albert Einstein was not happy with the uncertainty principle, and he
challenged Niels Bohr with a famous thought experiment: we fill a box
with a radioactive material that randomly emits radiation. The box has a
shutter, which is opened and immediately thereafter shut by a clock at a
precise moment, thereby allowing some radiation to escape. So the duration
is already known with precision. We still want to measure the conjugate
variable, energy, precisely. Einstein proposed doing this by weighing the box
before and after. The equivalence between mass and energy from Special
Relativity allows one to determine precisely how much energy was left
in the box. Bohr countered as follows: if energy leaves, then the now
lighter box will rise slightly on the scale. That changes the position of the
clock. Thus, the clock deviates from our stationary reference frame, and,
by the gravitational time dilation of General Relativity, its measurement of
duration will be different from ours, leading to some unavoidable margin of
error. A detailed analysis shows that Heisenberg's relation correctly gives
the imprecision.
The Copenhagen Interpretation assumes that there are two processes
influencing the wavefunction: (1) the unitary evolution according to the
Schrodinger equation and (2) the process of the measurement. While there
is no ambiguity about the former, the latter admits several interpretations,
even within the Copenhagen Interpretation itself. Either the wavefunction is
a real object that undergoes wavefunction collapse in the second stage, or the
wavefunction is an auxiliary mathematical tool (not a real physical entity)
whose only physical meaning is our ability to calculate the probabilities.
Niels Bohr emphasized that only the results of the experiments should be predicted and that, therefore, the additional questions are not scientific but philosophical. Bohr followed the principles of positivism, which imply that scientists should discuss only measurable questions.
Within the widely, but not universally, accepted Copenhagen Interpretation of quantum mechanics, the uncertainty principle is taken to mean
that on an elementary level, the physical universe does not exist in a deterministic form, but rather as a collection of probabilities or potentials.
For example, the pattern (probability distribution) produced by billions of
photons passing through a diffraction slit can be calculated using quantum
mechanics, but the exact path of each photon cannot be predicted by any
known method. The Copenhagen Interpretation holds that it cannot be predicted by any method. It is this interpretation that Einstein was questioning when he said, "I cannot believe that God would choose to play dice with the universe." Bohr, who was one of the authors of the Copenhagen Interpretation, responded, "Einstein, don't tell God what to do."
Einstein was convinced that this interpretation was in error. His reasoning was that all previously known probability distributions arose from
deterministic events. The distribution of a flipped coin or a rolled die can be described with a probability distribution (50% heads, 50% tails). But
this does not mean that their physical motions are unpredictable. Ordinary
mechanics can be used to calculate exactly how each coin will land, if the
forces acting on it are known. And the heads/tails distribution will still line
up with the probability distribution given random initial forces. Einstein
assumed that there are similar hidden variables in quantum mechanics that
underlie the observed probabilities.
Neither Einstein nor anyone since has been able to construct a satisfying hidden variable theory. Although the behavior of an individual particle
is random in the Copenhagen Interpretation, it is also correlated with the
behavior of other particles. Therefore, if the uncertainty principle is the
result of some deterministic process, as suggested by de Broglie's pilot wave
model, it must be the case that particles at great distances instantly transmit information to each other to ensure that the correlations in behavior
between particles occur.
Bohr and others created the Copenhagen Interpretation to counter Einstein's hidden variable suggestion. The Copenhagen Interpretation principles are:
1. A system is completely described by a wave function ψ, which represents an observer's knowledge of the system. (Heisenberg)
tion. However, there is a price to pay. That price involves the ambiguous
zero such as in the Michelson-Morley experiment. That is, the ensemble
measurement is done where the model expectation is a zero result. For example, the interference experiment expects to yield bands of light (the sum
of an ensemble of light particles/waves) separated by bands of zero light.
The width and spacing of the bands, which the wave model suggests carries
information about the energy of the light, may be measured by measuring
the spacing of the dark bands rather than the light bands.
The Copenhagen Interpretation ignores the zero measurement and tends
to treat ensemble measurements as single particle measurements. Such experiments raise credibility questions. The need for a correct model and
for a zero measurement (very sensitive equipment with some assurance the
equipment is operational) places very difficult restrictions on the interpretation of such data. It is easy to understand why physics tends toward the traditional method of measurement.
On a philosophical note, an increasing area of research is based on the
concept that information is the true quantum mechanical content of our
universe. This concept of obtaining data (information) without the collapse
of the wave function presents some difficulty for information models. The
popular conclusion of the Michelson-Morley experiment is that there was no wave function to collapse (no ether).
When coherent light passes through double slits in the classic double
slit experiment onto a screen, alternate bands of bright and dark regions
are produced. These can be explained as areas in which the superposition
(addition) of light waves reinforce or cancel. However, it became experimentally apparent that light has some particle-like properties and that entities such as electrons have wave-like properties and can also produce interference patterns.
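The bright and dark bands, and the fact that the two-slit pattern is not the sum of two one-slit patterns, can be sketched with the standard Fraunhofer two-slit intensity formula. The wavelength, slit width, and slit spacing below are illustrative assumptions, not taken from a specific experiment.

```python
import numpy as np

# Fraunhofer model: two slits of width a, spacing d, wavelength lam
# (all in meters; values are illustrative).
lam, a, d = 500e-9, 2e-6, 10e-6
theta = np.linspace(-0.15, 0.15, 200001)   # observation angle, radians

beta = np.pi * a * np.sin(theta) / lam     # single-slit diffraction phase
delta = np.pi * d * np.sin(theta) / lam    # two-slit interference phase

# np.sinc(x) = sin(pi*x)/(pi*x), so this is (sin(beta)/beta)**2
envelope = np.sinc(beta / np.pi) ** 2
coherent = envelope * np.cos(delta) ** 2   # both slits open: fringes
incoherent = 2 * envelope                  # one slit at a time, summed
# (overall normalization constants dropped; only the shapes matter here)

# The coherent pattern has near-zero minima (the dark bands);
# the incoherent sum has no dark bands near the center.
assert coherent.min() < 1e-6
assert incoherent[np.abs(theta) < 0.02].min() > 0.5
```

The cos² cross term is what the one-slit-at-a-time sum lacks, which is why the dark bands carry the interference information discussed above.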
Consider the double slit experiment with the light reduced so that the
model considers that only one photon (or electron) passes through the slits
at a time. The electrons or photons hit the screen one at a time. However, when the totals of where the photons have hit are plotted, an interference pattern emerges that appears to be the result of interfering waves, even though the experiment dealt with only one particle at a time. The Pilot Wave model suggests the probability statements made by quantum mechanics are not irreducible: they reflect our limited knowledge of some hidden variables. Classical physics likewise uses probabilities to describe the outcome of rolling a die, even though the process was thought to be deterministic; probabilities substitute for complete knowledge.
By contrast, the Copenhagen Interpretation holds that in quantum mechanics, measurement outcomes are fundamentally indeterministic.
1. Physics is the science of outcomes of measurement processes. Speculation beyond that cannot be justified. The Copenhagen Interpretation rejects questions like "where was the particle before I measured its position?" as meaningless.
2. The act of measurement causes an instantaneous collapse of the wave function. This means that the measurement process randomly picks out exactly one of the many possibilities allowed for by the state's wave function, and the wave function instantaneously changes to reflect that pick.
3. The original formulation of the Copenhagen Interpretation has led to several variants; the most respected one is based on Consistent Histories (a system's history is considered in the wave equations; "Copenhagen done right"?) and on the concept of quantum decoherence (a system interacts with its environment without wavefunction collapse), which allows us to calculate the fuzzy boundary between the microscopic and the macroscopic worlds. Other variants differ according to the degree of reality assigned to the waveform.
Many physicists and philosophers have objected to the Copenhagen Interpretation on the grounds that it is nondeterministic and that it includes an undefined measurement process that converts probability functions into nonprobabilistic measurements. Erwin Schrodinger devised the Schrodinger's cat thought experiment, which attempts to illustrate the incompleteness of the theory of quantum mechanics when going from subatomic to macroscopic systems.
The claimed completeness of the Copenhagen Interpretation of quantum mechanics is that the mapping of the observable parameters to the wave parameters is complete.
The EPR paradox is a thought experiment that demonstrates that the
result of a measurement performed on one part of a quantum system can
have an instantaneous effect on the result of a measurement performed on
another part, regardless of the distance separating the two parts. This runs
counter to the intuition of Special Relativity, which states that information
cannot be transmitted faster than the speed of light. EPR stands for
Albert Einstein, Boris Podolsky, and Nathan Rosen, who introduced the
thought experiment in a 1935 paper to argue that quantum mechanics is not
a complete physical theory. The EPR paradox was intended to show that
there have to be hidden variables in order to avoid nonlocal, instantaneous
effects at a distance. It is sometimes referred to as the EPRB paradox for
David Bohm, who converted the original thought experiment into something
closer to being experimentally testable.
The EPR paradox is a paradox in the following sense: if some seemingly reasonable conditions (referred to as locality, realism, and completeness) are added to quantum mechanics, then a contradiction results.
However, quantum mechanics by itself does not appear to be internally inconsistent, nor does it contradict relativity. Experimental tests of the EPR paradox using Bell's inequality (outcomes obtained by two observers in two experiments may not be equivalent) have supported the predictions of quantum mechanics, while showing that local hidden variable theories fail to match the experimental evidence. As a result of further theoretical and experimental developments since the original EPR paper, most physicists today regard the EPR paradox as an illustration of how quantum mechanics violates classical intuitions, not as an indication that quantum mechanics is fundamentally flawed. However, it means quantum mechanics and relativity will never get together.
The EPR paradox draws on a phenomenon predicted by quantum mechanics, known as quantum entanglement, to show that measurements performed on spatially separated parts of a quantum system can apparently
have an instantaneous influence on one another. This effect is now known as
nonlocal behavior or, colloquially, as "quantum weirdness." For example,
consider a simplified version of the EPR thought experiment put forth by
David Bohm.
Many types of physical quantities can be used to produce quantum
entanglement. The original EPR paper used momentum for the observable.
Experimental realizations of the EPR scenario often use the polarization
of photons, because polarized photons are easy to prepare and measure.
Many experiments in recent years have repeated the quantum entanglement
observation.
Bell's theorem is the most famous legacy of the late Northern Irish physicist John Bell. It is notable for showing that the predictions of quantum mechanics are not intuitive. It is simple, elegant, and touches upon fundamental philosophical issues that relate to modern physics. Bell's theorem states: "No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."
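Bell's theorem can be made quantitative with the CHSH form of Bell's inequality: any local hidden variable theory obeys |S| ≤ 2, while quantum mechanics for the spin-singlet state, with correlation E(a,b) = −cos(a−b), gives |S| = 2√2. The measurement angles below are the conventional optimal choice, an illustrative assumption not taken from the text.

```python
import numpy as np

# Quantum correlation for a spin singlet measured along directions a, b
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH measurement angles (radians)
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Quantum mechanics reaches |S| = 2*sqrt(2), violating the local
# hidden-variable bound |S| <= 2 asserted by Bell's theorem.
assert abs(abs(S) - 2 * np.sqrt(2)) < 1e-12
assert abs(S) > 2
```

The gap between 2 and 2√2 is exactly what the experiments described below test.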
The principle of locality states that physical processes occurring at one
place should have no immediate effect on the elements of reality at another
location. At first sight, this appears to be a reasonable assumption to
make, as it seems to be a consequence of Special Relativity, which states
that information can never be transmitted faster than the speed of light
without violating causality. It is generally believed that any theory that
violates causality would also be internally inconsistent, and thus deeply
unsatisfactory.
However, the usual rules for combining quantum mechanical and classical descriptions violate the principle of locality without violating causality.
Causality is preserved because there is no way (or so current thought holds, though the speed of gravity waves has not been measured) to transmit information by manipulating the measurement axis. That is, the principle of locality depends heavily on the speed of information transmittal.
The principle of locality appeals powerfully to physical intuition, and
Einstein, Podolsky, and Rosen were unwilling to abandon it. Einstein derided the quantum mechanical predictions as "spooky action at a distance."
However, doubt has been cast on EPR's conclusion in recent years due to developments in understanding locality and especially quantum decoherence. The word "locality" has several different meanings in physics. For
example, in quantum field theory locality means that quantum fields at different points of space do not interact with one another. Supporters of the
Copenhagen Interpretation suggest quantum field theories that are local in
this sense appear to violate the principle of locality as defined by EPR, but
they nevertheless do not violate locality in a more general sense. Wavefunction collapse can be viewed as an epiphenomenon of quantum decoherence,
which in turn is nothing more than an effect of the underlying local time
evolution of the wavefunction of a system and all of its environment. Because the underlying behavior doesn't violate local causality, it follows that neither does the additional effect of wavefunction collapse, whether real or apparent. Therefore, neither the EPR experiment nor any other quantum experiment demonstrates that faster-than-light signaling is possible. I think there are two possibilities. One is that the statistical assembly of enough events to determine that a wavefunction collapse has occurred is the meaning of information or signal transfer. The other is that there is a mechanism for faster-than-light transfer, such as gravity waves.
There are several ways to resolve the EPR paradox. The one suggested
by EPR is that quantum mechanics, despite its success in a wide variety of
experimental scenarios, is actually an incomplete theory. That is, there is
some yet undiscovered theory of nature to which quantum mechanics acts
as a kind of statistical approximation, albeit an exceedingly successful one.
Unlike quantum mechanics, the more complete theory contains variables
corresponding to all the elements of reality. There must be some unknown
The EPR paradox has deepened our understanding of quantum mechanics by exposing the fundamentally nonclassical characteristics of the
measurement process. Prior to the publication of the EPR paper, a measurement was often visualized as a physical disturbance inflicted directly
upon the measured system. For instance, when measuring the position of
an electron, imagine shining a light on it. Thus the electron is disturbed and
the quantum mechanical uncertainties in its position are produced. Such
explanations, which are still encountered in popular expositions of quantum
mechanics, are debunked by the EPR paradox, which shows that a measurement can be performed on a particle without disturbing it directly by
performing a measurement on a distant entangled particle.
Technologies relying on quantum entanglement are now being developed.
Entangled particles in quantum cryptography are used to transmit signals
that cannot be eavesdropped upon without leaving a trace. Entangled
quantum states in quantum computation are used to perform computations
in parallel, which may allow certain calculations to be performed much more
quickly than they ever could be with classical computers.
The direct or Many-worlds interpretation view, that classical physics and ordinary language are only approximations to quantum mechanics and that insisting on applying the approximation in the same way all the time leads to strange results, is understandable. For example, use of geometric optics
to describe the optical properties of a telescope might be expected, because
it is large with respect to the wavelength of light. However, telescopes
(especially if space based) are designed to measure such small angles that
wave effects are significant. Similarly, when EPR designed their experiment
to be sensitive to subtleties of quantum mechanics, they made it sensitive
to how the classical approximation is applied.
An interpretation of quantum mechanics is an attempt to answer the
question: what exactly is quantum mechanics talking about? Quantum
mechanics, as a scientific theory, has been very successful in predicting experimental results. The close correspondence between the abstract mathematical formalism and the observed facts is not generally in question. That such a basic question is still posed itself requires some explanation.
The understanding of the theory's mathematical structures went through various preliminary stages of development. Schrodinger at first did not understand the probabilistic nature of the wavefunction associated with the electron; it was Max Born who proposed its interpretation as the probability distribution in space of the electron's position. Other leading scientists,
such as Albert Einstein, had great difficulty in coming to terms with the
tum system, but not at the same time. Examples are propositions involving
a wave description of a quantum system and a corpuscular description of
a quantum system. The latter statement is one part of Niels Bohr's original formulation, which is often equated to the principle of complementarity
itself.
Some physicists, for example Asher Peres and Chris Fuchs, seem to
argue that an interpretation is nothing more than a formal equivalence between sets of rules for operating on experimental data. This would suggest
that the whole exercise of interpretation is unnecessary (i.e., "shut up and calculate").
Any modern scientific theory requires at the very least an instrumentalist description that relates the mathematical formalism to experimental practice. The most common instrumentalist description in the case of
quantum mechanics is an assertion of statistical regularity between state
preparation processes and measurement processes. This is usually glossed
over into an assertion regarding the statistical regularity of a measurement
performed on a system with a given state. The methods of quantum mechanics are like the methods of thermostatistics: statistical mathematics is used where the underlying structure is unknown. The problem is that perhaps we lack the necessary probing particle to really see Einstein's hidden structure.
The Pilot Wave theory was the first known example of a hidden variable theory. Its more modern version, the Bohm Interpretation, remains
a controversial attempt to interpret quantum mechanics as a deterministic theory, avoiding troublesome notions such as instantaneous wavefunction collapse and the paradox of Schrodinger's cat.
The Pilot Wave theory uses the same mathematics as other interpretations of quantum mechanics. Consequently, the current experimental
evidence supports the Pilot Wave theory to the same extent as the other
interpretations. The Pilot Wave theory is a hidden variable theory. Consequently, (1) the theory has realism, meaning that its concepts exist independently of the observer, and (2) the theory has determinism.
The position and momentum of every particle are considered hidden
variables; they are defined at all times, but not known by the observer; the
initial conditions of the particles are not known accurately, so that from the
point of view of the observer, there are uncertainties in the particles states
which conform to Heisenberg's Uncertainty Principle.
A collection of particles has an associated wave, which evolves according
to the Schrodinger equation. Each of the particles follows a deterministic
(but probably chaotic) trajectory that is guided by the wave function. The
density of the particles conforms to the magnitude of the wave function.
The particles are guided by a pilot wave in a ψ field that satisfies the Schrodinger equation, that acts on the particles to guide their path, that is ubiquitous, and that is non-local. The origin of the ψ field and the dynamics of a single photon are unmodeled.
In common with most interpretations of quantum mechanics other than
the Many-worlds interpretation, this theory has nonlocality. The Pilot Wave
Theory shows that it is possible to have a theory that is realistic and deterministic, but still predicts the experimental results of Quantum Mechanics.
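The guidance idea can be sketched numerically for a free particle: each trajectory follows the velocity field Im(ψ′/ψ), with ħ = m = 1. The Gaussian packet and the starting positions are illustrative assumptions, not from any specific source.

```python
import numpy as np

# Minimal 1-D Bohmian-trajectory sketch for a free Gaussian packet
# (hbar = m = 1; parameters are illustrative).
s0 = 1.0  # initial packet width

def psi(x, t):
    # Analytic free Gaussian wave packet centered at x = 0
    st = s0 * (1 + 1j * t / (2 * s0 ** 2))
    return (2 * np.pi * st ** 2) ** -0.25 * np.exp(-x ** 2 / (4 * s0 * st))

def velocity(x, t, h=1e-5):
    # Guidance equation: dx/dt = Im( psi'(x) / psi(x) )
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2 * h)
    return np.imag(dpsi / psi(x, t))

# Integrate a few trajectories with simple Euler steps up to t = 2
x0 = np.array([-1.0, -0.5, 0.5, 1.0])
xs, dt = x0.copy(), 0.001
for step in range(2000):
    xs = xs + velocity(xs, step * dt) * dt

# The trajectory pattern spreads along with |psi|^2,
# and trajectories never cross the packet center
assert np.all(np.abs(xs) > np.abs(x0))
assert np.all(np.sign(xs) == np.sign(x0))
```

The deterministic-but-uncertain picture in the text corresponds to the unknown initial positions x0; the dynamics themselves contain no randomness.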
Claus Jonsson of Tübingen in 1961 finally performed an actual double-slit experiment with electrons (Zeitschrift für Physik 161 454). He demonstrated interference with up to five slits. The next milestone (an experiment in which there was just one electron in the apparatus at any one time) was reached by Akira Tonomura and co-workers at Hitachi in 1989, when they observed the build-up (large ensemble) of the fringe pattern with a very weak electron source and an electron biprism (American Journal of Physics 57 117-120). Whereas Jonsson's experiment was analogous to Young's original experiment, Tonomura's was similar to G. I. Taylor's. (Note added on May 7: Pier Giorgio Merli, Giulio Pozzi, and GianFranco Missiroli carried out double-slit interference experiments with single electrons in Bologna in the 1970s; see Merli et al. in Further reading and the letters from Steeds, Merli et al., and Tonomura.)
Since then, particle interference has been demonstrated with neutrons,
atoms, and molecules as large as carbon-60 and carbon-70. However, the
results are profoundly different this time because electrons are fermions and,
therefore, obey the Pauli exclusion principle whereas photons are bosons
and do not.
Aim an electron gun at a screen with two slits and record the electrons' positions of detection at a detector behind the screen. An interference pattern will be observed just like the one produced by diffraction of a light or water wave at two slits. This pattern will even appear if you slow down the electron source so that only one electron's worth of charge per second comes through. Classically speaking, every electron is a point particle and must travel either through the first or through the second slit. So we should be able to produce the same interference pattern if we ran the experiment twice as long, closing slit number one for the first half, then closing slit number two for the second half. But the same pattern does not emerge. Furthermore, if we build detectors around the slits in order to determine which path a particular
when observing the path of a photon stream is possible. If his results are
verified, it has far-reaching implications for the understanding of the quantum world and potentially challenges the Copenhagen Interpretation and the Many-worlds interpretation. However, it lends support to the Transactional Interpretation and the Bohm Interpretation, which are consistent with the results. Newton's suggestion that corpuscles travel in streams and the idea that the streams may interact like waves may also be consistent with the Afshar experiment.
The following is another experiment done by Afshar in 2005 to answer
some criticisms (Afshar 2005). He and others have suggested other experiments to demonstrate single-photon diffraction behavior. The experimenter argues the intensity of light is so low and the detector so sensitive that only one photon passes through the slit(s) at a time. The decoherent distribution of photons lacks the dark fringes. The hidden assumption is that only the photons passing through the slit(s) affect the pattern on the screen.
The distinction between incoherent and coherent light is whether the
light forms a diffraction/interference pattern when passed through one or
more slits in a mask. Because interference has been thought to be a wave
phenomenon, coherence is considered a property of waves. However, the
definition of coherence allows a distribution of particles to be coherent. A continuous screen pattern is formed when light passes through one or more slits in a mask, even for a very weak incoherent source.
Afshar compared a weak coherent source with a weak decoherent source. The weak source is sometimes labeled a "single-photon" source. After 30
photons were registered from each source, there was no distinction in the
screen patterns. After 3000 photons were registered from each source, the
distinction between coherent source and decoherent source became apparent. Therefore, the interference phenomenon requires coherence. Coherence
is a relationship of light particles or parts of the wave front. That only one photon passes through the slits at a time does not imply that there are no other photons blocked by the mask. Therefore, coherence is not a single-particle property. It is a multi-particle property.
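The 30-versus-3000 comparison can be mimicked with a toy Monte Carlo that records photon hits one at a time. The fringe density cos²(πx) and the sample sizes are illustrative assumptions, not Afshar's actual setup; with only tens of hits the histogram is statistically featureless, while with thousands the dark fringes stand out.

```python
import numpy as np

# Toy build-up of a fringe pattern one photon at a time. Hit positions
# are drawn from a density proportional to cos^2(pi*x) on [0, 4)
# (an illustrative two-slit fringe density).
rng = np.random.default_rng(42)

def sample_hits(n):
    # rejection sampling against the cos^2 fringe density
    hits = []
    while len(hits) < n:
        x = rng.uniform(0.0, 4.0)
        if rng.uniform(0.0, 1.0) < np.cos(np.pi * x) ** 2:
            hits.append(x)
    return np.array(hits)

def dark_fraction(hits):
    # fraction of hits landing where the fringe density is low
    return float(np.mean(np.cos(np.pi * hits) ** 2 < 0.1))

hits = sample_hits(3000)

# A fringeless (flat) pattern would put about 20% of its hits in the
# low-density regions; the coherent pattern puts almost none there.
assert dark_fraction(hits) < 0.05
```

The distinction is an ensemble (statistical) property of many registered hits, echoing the point that coherence is a multi-particle property.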
However, the coherence requirement is inconsistent with the Copenhagen Interpretation.
Afshar seems to be saying that Einstein was right: there are hidden variables
(hidden structure), and quantum mechanics is a statistical treatment of this
structure.
The introduction of particle accelerators in the 1930s and their development produced many new particles. Various schemes of classifying particles
those of relativity. For example, a test theory may have a different postulate about light concerning the one-way speed of light vs. the two-way speed of light, may have a preferred frame of reference, and may violate Lorentz invariance in many different ways. Test theories predicting different experimental results from Einstein's Special Relativity are Robertson's test theory (1949) and the Mansouri-Sexl theory (1977), which is equivalent to Robertson's theory. Another, more extensive model is the Standard-Model Extension, which also includes the Standard Model and General Relativity. Some fundamental experiments to test those parameters are still repeated with increased accuracy, such as the Michelson-Morley experiment, the Kennedy-Thorndike experiment, and the Ives-Stilwell experiment.
Another, more extensive, model is the Standard Model Extension (SME)
by Alan Kostelecky and others. In contrast to the Robertson-Mansouri-Sexl (RMS) framework, which is kinematic in nature and restricted to Special
Relativity, SME not only accounts for Special Relativity, but for dynamical
effects of the standard model and General Relativity as well. It investigates
possible spontaneous breaking of both Lorentz invariance and CPT symmetry. RMS is fully included in SME, though the latter has a much larger
group of parameters that can indicate any Lorentz or CPT violation.
A couple of SME parameters were tested in a 2007 study sensitive to 10^-16. It employed two simultaneous interferometers over a year's observation: optical in Berlin at 52°31′N 13°20′E and microwave in Perth at 31°53′S 115°53′E. A preferred background leading to Lorentz violation could never be at rest relative to both of them.
Viewed as a theory of elementary particles, Lorentz's electron/ether theory was superseded during the first few decades of the 20th century, first
by quantum mechanics and then by quantum field theory. As a general
theory of dynamics, Lorentz and Poincare had already (by about 1905)
found it necessary to invoke the principle of relativity itself in order to
make the theory match all the available empirical data. By this point,
the last vestiges of substantial ether had been eliminated from Lorentz's
ether theory, and it became both empirically and deductively equivalent
to Special Relativity. The only difference was the metaphysical postulate
of a unique absolute rest frame, which was empirically undetectable and
played no role in the physical predictions of the theory. As a result, the
term "Lorentz Ether Theory" is sometimes used today to refer to a neo-Lorentzian interpretation of Special Relativity. The prefix "neo" is used in recognition of the fact that the interpretation must now be applied to
physical entities and processes (such as the standard model of quantum field
Chapter 4
Scalar Theory of
Everything (STOE)
Explaining and predicting observations that are unsatisfactorily accounted for by the cosmological standard model and by General Relativity motivates the investigation of alternate models. Examples of these model challenges are
(1) the need for and the ad hoc introduction of dark matter and dark energy,
(2) the Pioneer Anomaly (PA),
(3) the incompatibility of General Relativity with observations of subatomic
particles,
(4) the need for fine-tuning of significant parameters,
(5) the incompatibility of General Relativity with galactic scale and galaxy
cluster scale observations (Sellwood and Kosowsky 2001b),
(6) the poor correlation of galactocentric redshift z of galaxies to distances D > 10 Mpc measured using Cepheid stars (Freedman et al. 2001;
Macri et al. 2001; Saha et al. 2006a),
(7) the lack of a galaxy and galaxy cluster evolution model consistent with
observations, and
(8) the lack of application of Mach's Principle.
Other examples, where proponents of the standard model dispute the interpretation of the observational data (Arp 1998; Pecker and Narlikar 2006), include evidence
(1) for discrete redshift,
(2) for QSO association with nearby galaxies,
(3) that galaxies are forming by ejection of matter from other galaxies rather
a single board 5 feet long (left hand side). If the need is to span a distance
of 4 feet, the right hand side fails. If the need is to be a spacer, then either
side may suffice.
Equality in math means the results of calculation on the right-hand and left-hand sides of an equation are the same number, set, group, property, etc. Equality does not mean the physical objects represented on the right-hand and left-hand sides of an equation can be interchanged or can be the same object. However, equality in physics requires some understanding of the use of math. For example, the interpretation of E = mc^2 may mean the energy content of a mass is mc^2. If a body is composed of indestructible smallest objects, as Democritus suggests, then E is the energy content of the body.
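As a numeric reading of that equality, the energy-content interpretation assigns a mass m the energy mc^2; the 1 kg mass below is an arbitrary illustration.

```python
c = 2.998e8      # speed of light in m/s
m = 1.0          # kg, an arbitrary illustrative mass
E = m * c ** 2   # joules: the energy content the interpretation assigns
assert abs(E - 8.988e16) < 1e13   # about 9 x 10^16 J
```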
Causality means an effect is obtained by direct cause. An effect may also
be a cause for another effect. Two similar effects may be in a chain of causes
or be the result of similar causes. The problem with action-at-a-distance is that the chain of causes is lacking.
Transformations are used in mathematical physics to render equations
into a more easily solvable form. Great care must be exercised to assure that
there is not a hidden infinity, a divide by zero operation (singularity), or a
negative physical reality; that the mapping is conformal and invertible; and
that the mapping is complete for the calculation. Once a transformation is performed in a calculation sequence, the reality of the result must again be established by observation. Further, the intervening operations are considered unreal until reestablished by observation.
Poincare pointed out that any physical model admits an unlimited number of valid mathematical transformations, all equivalent and deducible
from each other. However, they are not all equally simple. Therefore, scientists choose the transformation for the physical situation. However, the
performance of the transformation does not guarantee that all the physical observations have been included. Hence, the reality of the transformed
numbers is questionable. This seems to be the case for the lack of an equivalent transformation between the General Relativity transformation and the
Quantum Mechanical transformation.
Newtonian physics transforms movement measurements, such as the acceleration a of distance and duration, into a "force" number F_a with an appropriate scaling by multiplication by a proportionality constant m_i: F_a = m_i a. Physics also notes that two bodies of mass m_g and M_g at a distance r between their centers of mass tend toward each other. Another transformation into a force F_g is obtained by
noting the interrelation of the physical parameters and by multiplication by a proportionality constant G: F_g = G m_g M_g / r^2. If no other causes are present, the transformed forces are equal by Newton's law, F_a = F_g. Because F_a and F_g are transformed numbers, the causation link may not be real. That is, the resulting equation m_i a = G m_g M_g / r^2 may not be real in the sense of causation. Physics adds the Equivalence Principle transformation (not causation) m_i = m_g, the appropriate definition of G to say the equation is valid in the sense of being observable, and the interpretation that gravitation causes the motion. Two different observations are transformed, the equalities in transformed numbers are linked, a physics Principle is included, and the resulting numbers are judged as cause and effect despite the intervening unreal transformation. Hence, the final cause and effect is also an assumption that must be observed to be considered valid and a description of nature.
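The chain of transformations just described can be traced numerically: form F_g from the gravitational transformation, set F_a = F_g, apply m_i = m_g, and the resulting acceleration is independent of the small mass. The Earth values are standard constants; the sketch is illustrative only.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
r = 6.371e6          # m, Earth's mean radius

for m in (0.001, 1.0, 1000.0):      # inertial mass m_i = gravitational m_g
    F_g = G * m * M_earth / r ** 2  # gravitational "force" transformation
    a = F_g / m                     # from F_a = m_i * a with F_a = F_g
    assert abs(a - 9.82) < 0.02     # ~9.82 m/s^2 regardless of m
```

The mass-independence of a is the observable consequence that the Equivalence Principle transformation m_i = m_g builds in.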
The physics Principles are the concepts of the nature of our universe
that are like the axioms of math. Because of the importance of math procedures, phrasing the Principles in such a way as to be amenable to math
is important. Standards, permissible operations, and causality must be
established.
The goal of the science of physics is to predict the outcome of observations and experiments. The human use of physics models is to cause the outcome of experiments that aids our survival. Understanding is the ability to predict observations. Understanding is a conceptual exercise; measurement and calculation are mathematical exercises. Increased dependence on math increases specialization and the need for expert judgment and, therefore, decreases understanding of the universe. Understanding aids our survival.
Today's mathematical models require experts to compute predictions. The mathematical models have become more abstract and divorced from the way nature is actually thought to operate. Hence, they become devoid of intuitive understanding and seem very weird to our sensibilities. Further, the statement of an experiment in mathematical terms often involves expert human judgment to configure the problem. Because we are a part of the universe, physics should be consistent with our intuitive understanding and our sensibilities.
Physics, insofar as we can predict outcomes, is limited. The limitation produces anomalies in our measurement of the universe. If we concern ourselves with complexities of the transformation that may exist, we are distracted from determining a new, real model of unknown anomalies. Sim-
4.1 STOE Principles
The Reality Principle states that results of any action must be real. Calculations that yield results of infinity, of singularities, or of negative numbers
for physical conditions are not real. The Strong Reality Principle states that
dent idea of the Fundamental Principles must also apply to life and to social systems. The more fundamental scientific models have larger domains. A proposal becomes a scientific model when a deduction from it is observed. This concept does not require a candidate model to include falsifiable predictions, and it does not invalidate the usefulness of a scientific model for a domain because of the existence of falsifying observations in another domain. For instance, Newtonian dynamics is a valid scientific model. Observations in the domain including relative velocities approaching the speed of light falsify Newtonian dynamics. However, this only limits the Newtonian dynamics domain. Religious ideology models based on belief and philosophy models may be scientific models provided they are useful and meaningful with restricted what, where, and when bounds. To survive, a scientific model must compete for attention. The concept of a scientific model survives because the human mind is limited in its ability to maintain a catalog of the nearly infinite number of possible observations. Scientific models with empty or more limited domains have little usefulness and meaningfulness.
The Universality Principle states that the physics must be the same at all positions in the universe. A Theory of Everything must exist. The only difference from one place and time to another is a question of scale and history. For example, the physics that states all objects fall to earth was found to be limited to the Earth domain when Galileo noted moons rotating around Jupiter. Newton's restatement of physics was more cosmological, was simpler, and corresponded to Aristotle's model. However, the Universality Principle is not extended to imply the universe is isotropic and homogeneous. Near a star is a unique position. The physics must explain other positions. The physical theories must explain any isotropies and anisotropies. Our presence may change the outcome of experiments according to quantum mechanics. However, this is true for any observer. If, in some set of observations, we appear privileged, the privilege must be incorporated in the model. For example, we are in a galaxy disk and are close to a sun. We are in a highly unique and privileged area. Just because we are carbon based does not imply all intelligent life is carbon based.
The Universality Principle appears to be a combination of the Cosmological Principle, in the form that states that observers see physical phenomena produced by uniform and universal laws of physics, and the Copernican Principle, in the form that states that observers on Earth are not privileged observers of the universe. However, the STOE rejects both the Cosmological Principle and the Copernican Principle because they are limited to cosmology and inapplicable to the small. Our solar system is not isotropic and homogeneous. Variation in physical structures cannot be overlooked, because the greater physical models must describe these variations to be a Theory of Everything. The physics is in the details.
Physicists have used the concept that observations of the cosmos have
their counterpart in earthborn experiments. For example, the observed
spectra from galaxies are assumed to be the same spectra produced by
elements on Earth with a frequency shift. The model of the frequency
shift is the same as can be produced in Earth laboratories by the Doppler
shift model. The STOE suggests this is not always true. For example,
much higher temperatures have been modeled in the universe than can be
produced on Earth. However, the STOE should have the capability to
describe both conditions.
The Anthropic Principle is accepted to the extent that what is observed
must have been created and have evolved to the present. What is observed
must be able to be observed. Note this statement of the Anthropic Principle omits the requirement that it depend on an assumption of life and
intelligence because life and intelligence are inadequately defined. The
existence of life, social systems, and intelligence are observations of our
universe and, therefore, must be able to exist. An unobserved parameter
may or may not be able to be observed. Therefore, the negative model
candidates are not useful.
The Anthropic Principle is expanded to include not only our physical
existence but also our successful social and economic creations. By successful I mean the set of rules that allow survival in competition with other
sets of rules. That is, the rules for the successful functioning of social and
economic structures may be the same as the functioning of physical cosmology. Conversely, the determination of the functioning of physical cosmology
may point the way to a more successful set of social and economic rules.
Some argue (Smolin 2004, and references therein) the Anthropic Principle cannot be part of science because it cannot yield falsifiable predictions.
The Change Principle states that all structures change by a minimum
step change. What exists will change. A structure is a relationship of the
components of the universe. Change involves modifying the influence of one
structure on another structure. A rigid structure maintains the relation of
the components while the relation with the rest of the universe changes. If
the influence between components is large, the structure behaves as a rigid
structure. Particles became a hydrogen atom followed by evolution of other
atoms. Atoms became molecules. A model that requires a large step where
must have a negative feedback loop to maintain the narrow parameters and
achieve balance between the Change and Competition processes. Otherwise,
the system is unstable and transitory. The effects of the unstable system
will cease to exist without consequential fallout or permanent change. Transitory means the structure can exist but is ending. Therefore, there will be
very few observations of the transitory type of rigid structure. We observe
objects that have limited size. So, there is a limiting force or negative feedback condition controlling the size of each object. So too must black holes
have a size limitation and a negative feedback condition. When the size
of a structure of an object becomes limited, a new structure comprising a
combination of existing structures can occur. Alternatively, the structure
may be dissolved into smaller structures.
Conversely, if a functional relationship is measured between two parameters, then there exists a negative feedback physical mechanism such that a
change in one parameter produces only the amount of change in the other
parameter allowed by the relationship. For example, the ratio of the central
mass to the mass of the bulge is constant. Therefore, there exists a physical
mechanism to cause this to happen (Merritt & Ferrarese 2001b).
Because all structures have parametric relations with other structures,
all processes of change are part of a Negative Feedback loop. The problem of
physics is to identify the Negative Feedback loops. Each complete Negative
Feedback loop is a fractal.
The Local Action Principle states that influence is exerted only upon the immediately adjacent volume by contact. This action is then iteratively transmitted to other volumes. The summation or integration of this local action is calculated with nonlocal models. The calculation must take care that the Reality Principle is obeyed. The integration of actions results in abstract models such as action-at-a-distance.
The Minimum Action Principle can be stated as a Principle of Minimum
Potential Energy, which states the path of least energy expenditure will be
followed during the change from one state to another.
The Fractal (or Self-similarity) Principle states that the universe has
a fractal structure. There is no natural system in our universe including
our universe as a whole that is totally adiabatic. Even laboratory-isolated
systems have some energy leakage. The Universality Principle combined
with the Fractal Principle implies physics models are analogies of the world
we can experience. We directly experience approximately 10 powers of two
larger to 10 powers of two smaller than our size (2 meters). Instrumentation
technology allows the expansion of our knowledge approximately as many
4.2 STOE
The scalar potential model (STOE) was conceived by considering the observation anomalies and the observation data that current models describe. The current models were considered only to the extent they are consistent with data, and the STOE must correspond to them with simplifying parameters. The current explanations were ignored. Further, some observations are so entwined with a model that the data were restated. For example, galaxy redshift observations are often referred to as a Doppler shift, which implies a relative velocity.
Secondly, the simplest structure that can conceptually produce a wide range of differing observations is an interaction and two structures. There are two types of energy, potential and kinetic. The two types of light behavior suggest one structure must be discrete and particle-like. The other must be continuous and able to support wave-like behavior. The interaction of the particle-like and wave-like behaviors produces differing and ever larger structures. The differing structures produce variety. For example, both particle-like and wave-like things are needed to produce diffraction patterns. This is unlike the Copenhagen Interpretation, which suggests an "or" relation.
A search was made in accordance with the Principle of Fundamental Principles for a physical process in the Newtonian physical domain that may model and explain the observed data of spiral galaxies. Such a system was found in the fractional distillation process (FDP). A FDP has a Source of heat plenum (energy) and material input (matter) at the bottom (center of galaxies) of a container. The heat of the FDP is identified with a constituent of our universe that is not matter and is identified as plenum (in units of "que", abbreviated as "q", a new unit of measure) whose density is a potential energy. The term plenum will mean the amount of plenum in ques. Temperature in the FDP decreases with height from the Source. The opposing forces of gravity and the upward flow of gas acting on the cross section of molecules in an FDP cause the compounds to be lighter with height and lower temperature. Heavy molecules that coalesce and fall to the bottom are re-evaporated and cracked. The heat equation with slight modification, rather than gas dynamics, modeled the flow of heat (plenum energy). Temperature decline of the FDP was modeled as dissipation of matter and plenum energy into three dimensions.
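The heat-equation analogy above can be sketched numerically. The following is a minimal illustration, not code from the book: the diffusion constant `D`, the loss rate `k`, the grid, and the boundary values are hypothetical choices standing in for the chapter's plenum-flow model (a heat equation with a dissipation term).

```python
# Illustrative sketch (not from the text): plenum/heat flow from a Source,
#   d(rho)/dt = D * d^2(rho)/dx^2 - k * rho,
# with the Source end held at a fixed density and the far end open to the void.
# D, k, grid size, and boundary values are hypothetical.

def diffuse(n=50, steps=2000, D=1.0, k=0.05, dx=1.0, dt=0.1):
    rho = [0.0] * n
    rho[0] = 1.0                      # Source end held at fixed density
    for _ in range(steps):
        new = rho[:]
        for i in range(1, n - 1):
            lap = rho[i - 1] - 2.0 * rho[i] + rho[i + 1]
            new[i] = rho[i] + dt * (D * lap / dx ** 2 - k * rho[i])
        new[0] = 1.0                  # Source boundary
        new[-1] = 0.0                 # far end expands into the void
        rho = new
    return rho

profile = diffuse()
```

As expected for dissipative diffusion from a single Source, the computed density falls off monotonically with distance from the Source end.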
The Principles of Change and Repetition require that the three dimensions (3D) be created from two dimensions (2D). The creation of 3D from zero dimensions (0D) is not only unlikely but forbidden. The universe begins with a Change that may be a rupture of 0D. The Change itself is a beginning of time. The 0D ruptures to create one-dimensional (1D) points and plenum between the points on a line. The 0D is a Source of 1D and continually ejects plenum into one end of the 1D line. The other end must expand into the void. Thus, the 1D line is continually growing. The 1D line has only one direction. Therefore, the 1D line is straight. The 1D energy is $\sigma_1 t = \rho_1 l$, where $\sigma_1$ is the point tension, $t$ is the time elapsed between point eruptions, $\rho_1$ is the line plenum density, and $l$ is the length between points. The first transition creates length and time
$$ l = ct, \qquad (4.1) $$
(4.2)
where $\rho_I$ and $V_I$ are the initial $\rho$ and $V$ at the transition, respectively, and $\sigma$ is the proportionality constant called surface tension.
Similarly, corollary II of the Fundamental Principle implies a 3D Sink of strength $\eta$ is formed.
Succeeding ruptures at the same point add plenum and hods to the universe. Because we observe plenum and hods in our neighborhood, the plenum and hods then flow from the Source and create the galaxy around the Source.
force on the plenum, the plenum exerts a pressure force on the hod, and the plenum exerts pressure and tension forces on adjacent plenum. A hod is not attracted or repelled by other hods through a field or other action-at-a-distance device. Because $\sigma$ can act only perpendicular to the hod's surface, the force the plenum exerts on the hod is perpendicular to the surface of the hod.
There are three dimensions in our universe. Counting is a characteristic of hods. Therefore, that our universe is three-dimensional is a characteristic of the hods and not of the plenum.
There are two concepts of distance. Distance can be by reference to a fixed grid. Traditional methods have used the plenum as the fixed grid. Distance in this model is a multiple of the average diameter of a hod. Because $\sigma$ is a constant and the hod may be deformed, the hod average diameter will be considered a constant throughout the universe beyond the Source and in moving and accelerating reference frames. The measure of distance will be a "ho", which is defined as the average diameter of the hod. The STOE uses the ho distance to form a grid in a flat universe with
$$ 1\,\mathrm{ho} = 2\left(\frac{A_{\rm h}}{\pi}\right)^{1/2}. \qquad (4.4) $$
Another concept of distance is the amount of plenum between two surfaces. By analogy, this type of distance in gas is the amount of gas between two surfaces. Increasing the amount of gas between the two surfaces increases the distance. One way to do this is by movement of the surfaces. Another way is to add more gas from a source. If the medium is compressible, the distance measured by the amount of gas varies with the change in density of the medium. The STOE model allows plenum to be compressible. Therefore, the amount of plenum between two surfaces can change if the density (que ho$^{-3}$) of the plenum between two surfaces changes.
All hods are the same, and no plenum action changes their dimension. Therefore, the hod's dimension is the only fixed standard of length. All other standards of length are subject to environmental conditions. The terms distance, length, area, and volume will refer to ho measurements. The unit of measure of distance is usually expressed in km, pc, kpc, or Mpc. If the measurement is taken by means of angular observation or time-between-events observations, such as using Cepheid variables, that do not depend on the amount of plenum, the units of pc are proportional to the ho unit. Because the proportionality constant is unknown, the units of pc will be used. However, if the measurement, such as redshift, depends on the amount of plenum, the units of pc should be used with caution.
Because the hods are discrete, $\epsilon(t)$ is a digital function in time. The hod exerts a tremendous force to hold the plenum in its 2D surroundings. Upon transition, the plenum immediately around the hod experiences an instantaneous addition to the amount of plenum in the 3D universe. This step function creates a shock wave in the surrounding plenum. The plenum then flows out into the void, causing the volume $V$ of plenum to increase and a variation in the $\rho$ within $V$.
If plenum had no relaxation time and no maximum gradient, the eruption of plenum from a Source would spread out instantly so that $\rho$ would be virtually zero everywhere. Because we observe plenum in our region of the universe, the Anthropic Principle implies plenum flow must have a relaxation time and a maximum gradient ($\vec{\nabla}\rho_{\rm m}$) that is a constant of the universe.
Because plenum has a relaxation time, the plenum carries the inertia in matter. A hod is an object with an area $A_{\rm h}$ and a surface tension per unit area $\vec{\sigma}$. The $A_{\rm h}$ and the magnitude of $\vec{\sigma}$ are the same for all hods and are universal constants.
The $\vec{\nabla}\rho = \vec{\nabla}\rho_{\rm m}$ in the region around an active Source. The plenum expands into the void where $\rho = 0$ and $\vec{\nabla}\rho = 0$. Therefore, at some radius $r_{\rm sc}$ from the Source, $\rho$ becomes another function of time and distance from the Source. The $\vec{\nabla}\rho$ must be continuous at $r_{\rm sc}$, $\vec{\nabla}\rho = \vec{\nabla}\rho_{\rm m}$, and $\nabla^2 \rho > 0$. If $\epsilon(t)$ changes, then $r_{\rm sc}$ changes.
The plenum increases between the hods as the plenum increases $V$. The hods move and attain a velocity $d\vec{d}_{ij}/dt$, where $\vec{d}_{ij}$ is the vector distance from the $i$th hod to the $j$th hod. The plenum's relaxation time causes the hods' movement to be inhibited, an inertia. Thus, a feedback condition is established wherein the $d\vec{d}_{ij}/dt$ is balanced by the inertia. Therefore, the hod's movement is proportional to the velocity of the hod. The momentum $\vec{p}_{ij}$ between the $i$th and $j$th hods is defined as
$$ \vec{p}_{ij} \equiv K_{\rm p}\, \frac{d\vec{d}_{ij}}{dt}, \qquad (4.5) $$
Thus,
$$ T_{ij} = K_{\rm t} K_{\rm p} \left( \frac{d d_{ij}}{dt} \right)^2, \qquad (4.7) $$
$$ T_j = \sum_{i \neq j,\; i=1}^{N_{\rm t}} T_{ij}. \qquad (4.8) $$
(4.9)
$$ U_j = \sum_{i \neq j,\; i=1}^{N_{\rm t}} U_{ij}. \qquad (4.10) $$
The $j$th hod's inertia inhibits the changing $\vec{p}_{ij}$. The inertia of the hods results in work $W$ done on the hod in time $dt$
$$ W_j = \sum_{i \neq j,\; i=1}^{N_{\rm t}} \int \vec{p}_{ij} \cdot d\vec{d}, \qquad (4.11) $$
$$ W_j = K_{\rm p} \sum_{i \neq j,\; i=1}^{N_{\rm t}} \left( \frac{d^2 d^{\hat{e}}_{ij}}{dt^2}\, \hat{e} + \frac{d^2 d^{\hat{b}}_{ij}}{dt^2}\, \hat{b} \right) \qquad (4.12) $$
$$ \iiint \sum_{i \neq j,\; j=1}^{N_{\rm t}} (T_j + U_j)\, dV = 0. \qquad (4.13) $$
$$ K_{\rm u}\, f(d_{ij}). \qquad (4.14) $$
plenum
The classical development of an energy continuity equation includes the assumption that energy is neither created nor destroyed. An arbitrary volume according to the STOE may include insertion into our universe of energy (Sources) or the removal from our universe of energy (Sinks).
Consider an arbitrary volume $V$ bounded by a surface $s$. The plenum will be in $V$. Hods, Sources, and Sinks may be in $V$. Equation (4.7) implies $T_{ij} = T_{ji}$. Equation (4.9) implies $U_{ij} = U_{ji}$. Therefore, the total energy $E_{\rm v}$ in $V$ is
$$ E_{\rm v} = N\sigma + \iiint \rho\, dV + \sum_{j=1}^{N} (T_j + U_j + W_j), $$
$$ \frac{d}{dt} \iiint E_{\rm v}\, dV = S_{\rho} \iint \vec{\nabla}\rho \cdot \hat{n}\, ds + S_{\rm u} \sum_{i \neq j,\; j=1}^{N} \iint \vec{\nabla} U_j \cdot \hat{n}\, ds + S_{\rm t} \sum_{i \neq j,\; j=1}^{N} \iint \vec{\nabla} T_j \cdot \hat{n}\, ds + S_{\rm w} \sum_{i \neq j,\; j=1}^{N} \iint \vec{\nabla} W_j \cdot \hat{n}\, ds + \iint [\rho N_{\rm tuw}(t) \vec{v}_{\rm n}] \cdot \hat{n}\, ds + \iiint \Big( \sum_{k=1}^{N} \epsilon_k + \sum_{l=1}^{N} \eta_l \Big)\, dV, \qquad (4.15) $$
$$ \frac{dE_{\rm v}}{dt} = D^2 \vec{\nabla}^2 \rho + \frac{S_{\rm u}}{C} \sum_{i \neq j,\; j=1}^{N} \vec{\nabla}^2 U_j + \frac{S_{\rm t}}{C} \sum_{i \neq j,\; j=1}^{N} \vec{\nabla}^2 T_j + \frac{S_{\rm w}}{C} \sum_{i \neq j,\; j=1}^{N} \vec{\nabla}^2 W_j + \vec{\nabla} \cdot [\rho N_{\rm tuw}(t) \vec{v}_{\rm n}] + C^{-1} \Big( \sum_{k=1}^{N} \epsilon_k + \sum_{l=1}^{N} \eta_l \Big), \qquad (4.16) $$
Forces
The $\rho$ terms and the $U_j$ term in Eq. (4.15) change by spatial movement and are called impelling forces because they cause an energy change. The $N$ term, the $W_j$ term, and the $T_j$ term in Eq. (4.15) change by temporal differences and are called nurturing forces because their movement carries energy.
plenum
$$ \vec{F}_{\rm sp} = -D^2 \vec{\nabla}\rho, \qquad (4.17) $$
where the negative sign means the force is directed opposite to increasing $\rho$.
Because there is no shear stress in plenum, the force exerted by plenum is exerted only normal to surfaces. Consider a cylinder of cross section $ds$ around a regular, simply connected volume with the ends of the cylinder normal to $\vec{F}_{\rm sp}$. The $ds$ has a difference of force $\vec{F}_{\rm s}$ on each end of the cylinder. Allow the height of the cylinder to become arbitrarily small. The $\vec{F}_{\rm s} \rightarrow d\vec{F}_{\rm s}$ and
$$ d\vec{F}_{\rm s} = D^2 (\hat{n} \cdot \vec{\nabla}\rho)\, d\vec{s}, \qquad (4.18) $$
where $\hat{n}$ is the outward unit normal of $d\vec{s}$.
Integrating Eq. (4.18) over a surface, the plenum force on the surface $\vec{F}_{\rm s}$ becomes
$$ \vec{F}_{\rm s} = D^2 \iint (\vec{\nabla}\rho \cdot \hat{n})\, d\vec{s}. \qquad (4.19) $$
The force of plenum on a surface is proportional to the cross section of the surface perpendicular to $\vec{\nabla}\rho$.
The $\rho$ due to Sources and Sinks can be calculated from Eq. (4.16). If all terms of Eq. (4.16) are constant except the $\rho$, $\epsilon$, and $\eta$ terms, the average $\eta$ of each Sink remains constant over a long period of time, and the average $\epsilon$ of each Source, each galaxy, remains constant over a long period of time, then Eq. (4.16) can be solved for a steady-state condition. The range of distance from Sources and Sinks will be assumed to be sufficient such that the transition shock is dissipated and $r > r_{\rm sc}$. We can consider $\rho$ at a point to be the superposition of the plenum effects of each Source and Sink. Therefore, for $\rho$ from the $j$th galaxy and $k$th Sink at the $i$th point ($\rho_i$), Eq. (4.16) becomes
$$ \frac{d}{dt}(r_{ij} \rho_i) = D^2 \nabla^2 (r_{ij} \rho_i), \qquad (4.20) $$
$$ \frac{d}{dt}(r_{ik} \rho_i) = D^2 \nabla^2 (r_{ik} \rho_i). \qquad (4.21) $$
(4.22)
(4.23)
(4.24)
(4.25)
$$ \rho_i(r) = \frac{1}{4\pi D^2} \left( \sum_{j=1}^{\rm all\ Sources} \frac{\epsilon_j}{r_{ij}} - \sum_{k=1}^{\rm all\ Sinks} \frac{|\eta_k|}{r_{ik}} \right). \qquad (4.26) $$
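The superposition sum of Eq. (4.26) is simple to evaluate numerically. The following is an illustrative sketch, not code from the book; the Source strengths, Sink strengths, and positions are made-up values.

```python
# Sketch of the superposition in Eq. (4.26): the plenum density at a point is
# a sum of epsilon_j / r_ij over Sources minus |eta_k| / r_ik over Sinks,
# scaled by 1/(4*pi*D^2). All positions and strengths below are illustrative.
import math

def rho_at(point, sources, sinks, D=1.0):
    """point: (x, y, z); sources and sinks: lists of (position, strength)."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    total = sum(eps / dist(point, pos) for pos, eps in sources)
    total -= sum(abs(eta) / dist(point, pos) for pos, eta in sinks)
    return total / (4.0 * math.pi * D ** 2)

sources = [((0.0, 0.0, 0.0), 2.0)]   # one Source (galaxy) at the origin
sinks = [((10.0, 0.0, 0.0), 1.0)]    # one Sink farther out
near = rho_at((1.0, 0.0, 0.0), sources, sinks)
far = rho_at((5.0, 0.0, 0.0), sources, sinks)
```

Here `near > far`: the density declines away from the Source, and the Sink lowers it further.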
At $r_{ij} = r_{\rm sc}$ of the $j$th galaxy, the effect of the other Sources and Sinks is very small. Therefore,
$$ \vec{\nabla}\rho_i = \frac{\epsilon_j}{4\pi D^2}\, \vec{\nabla} r_{{\rm sc}j}^{-1} = \vec{\nabla}\rho_{\rm m}, \qquad (4.27) $$
where $r_{{\rm sc}j}$ is the $r_{\rm sc}$ for the $j$th galaxy.
At the $j$th Source, the nearly instantaneous transition of the hods and plenum into 3D creates a shock wave zone of radius $r_{{\rm sw}j}$, where the transition-forced $\vec{\nabla}\rho$ is greater than $\vec{\nabla}\rho_{\rm m}$. The assumption of constant $D$ is no longer valid. The $\epsilon_j$ can be considered to be a time average of the nearly digital transitions, and $\vec{\nabla}\rho_i = \vec{\nabla}\rho_{\rm m}$ in the region $r_{{\rm sw}j} < r_{ij} < r_{{\rm sc}j}$.
At the Source of the $j$th galaxy, $r_{ij} = 0$, $\rho_i(r_{ij} = 0)$ is a maximum value $\rho_{{\rm max}j}$, and $\vec{\nabla}\rho_i$ and $\nabla^2 \rho_i$ are discontinuous. For periods of the time required for plenum to flow to $r_{{\rm sc}j}$, $\epsilon_j$ can be considered a constant. All the plenum from the Source is flowing through the surface of the sphere of radius $r_{{\rm sc}j}$. Because the volume of a sphere $\propto r^3$ and $\epsilon$ is a volume per unit time, $\epsilon \propto r$. Further, this proportionality holds for any radius determined by $\vec{\nabla}\rho$. If $\epsilon_j$ increases, $r_{{\rm sc}j}$ increases proportionally as suggested by Eq. (4.27). Therefore,
$$ r_{{\rm sc}j} = \frac{K_{\rm sc}\, \epsilon_j}{|\vec{\nabla}\rho_{\rm m}|}, \qquad (4.28) $$
where $K_{\rm sc}$ is the proportionality constant, which depends on how $D$ varies in the $\vec{\nabla}\rho_{\rm m}$ zone around the Source, and $\rho_{{\rm max}j} = K_{\rm sc}\, \epsilon_j$.
The force $\vec{F}_{\rm ts}$ exerted on an assembly of hods by the plenum force due to Sources and Sinks is calculated by combining Eqs. (4.19) and (4.26) and integrating over the cross section of the assembly, which yields
$$ \vec{F}_{{\rm ts}i} = G_{\rm s} m_{\rm s} \left( \sum_{j=1}^{\rm all\ galaxies} \frac{\epsilon_j}{r_{ij}^3}\, \vec{r}_{ij} - \sum_{k=1}^{\rm all\ Sinks} \frac{|\eta_k|}{r_{ik}^3}\, \vec{r}_{ik} \right), \qquad (4.29) $$
where $m_{\rm s}$ is the plenum effective cross section of the assembly of hods and $G_{\rm s} = (4\pi)^{-1}$.
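Equation (4.29) can likewise be sketched as a direct superposition of inverse-square terms. This is a hypothetical illustration with made-up strengths and positions; `ms` stands in for the plenum effective cross section.

```python
# Sketch of Eq. (4.29): plenum force on an assembly of hods,
#   F = Gs*ms*( sum_j eps_j r_ij / r_ij^3 - sum_k |eta_k| r_ik / r_ik^3 ),
# with Gs = (4*pi)^-1. Strengths and positions are illustrative.
import math

def force(point, sources, sinks, ms=1.0):
    Gs = 1.0 / (4.0 * math.pi)
    bodies = [(p, e, 1.0) for p, e in sources] + \
             [(p, abs(e), -1.0) for p, e in sinks]
    F = [0.0, 0.0, 0.0]
    for pos, strength, sign in bodies:
        r = [pc - qc for pc, qc in zip(point, pos)]  # vector body -> point
        r3 = sum(c * c for c in r) ** 1.5
        for a in range(3):
            F[a] += sign * strength * r[a] / r3
    return [Gs * ms * f for f in F]

F_src = force((1.0, 0.0, 0.0), [((0.0, 0.0, 0.0), 2.0)], [])
F_snk = force((1.0, 0.0, 0.0), [], [((0.0, 0.0, 0.0), 1.0)])
```

A Source pushes the assembly outward (a positive x-component here), while a Sink pulls it inward, and the magnitude falls off as the inverse square of distance.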
Interaction of plenum and hods
$$ d_{\rm h} = \frac{\rho_{\rm h} - \rho_{\rm d}}{\vec{\nabla}\rho_{\rm m}}, \qquad \rho_{\rm d} = \rho_i(d_{\rm h}) < \rho_{\rm h}, \quad r_{\rm sw} < r_{\rm h}, \qquad (4.30) $$
For $r_{\rm h} > r_{\rm sc}$, volumes more than $d_{\rm h}$ from a hod surface have $\vec{\nabla}\rho < \vec{\nabla}\rho_{\rm m}$, $\rho = \rho_i$, Eq. (4.30) applies, and $\sigma_{\rm b} < \sigma$. Therefore, there is an amount of free surface tension energy $\sigma_{\rm f}$ such that
$$ \sigma = \sigma_{\rm b} + \sigma_{\rm f}, \qquad (4.31) $$
$$ \sigma_{\rm f} = d_{\rm h} (\rho_{\rm h} - \rho_{\rm d}), \quad r_{\rm h} > r_{\rm sc}, \qquad (4.32) $$
$$ \sigma_{\rm b} = 2 d_{\rm I} \sigma_{\rm f}, \qquad (4.33) $$
$$ \sigma_{\rm f} = 0, \quad r_{\rm h} \leq r_{\rm sc}. \qquad (4.34) $$
$$ \frac{x^2}{\cos^2 v} - \frac{z^2}{\sin^2 v} = \frac{a^2}{4}. \qquad (4.37) $$
Geometrically, the strip in the $w$-plane between the lines $v = 0$ and $v = \pi$ transforms into the entire $t$ plane. The line at $u = 0$ between $v = 0$ and $v = \pi$ transforms to the surface of the hod. The lines $v =$ constant are transformed into confocal hyperbolae, which are streamlines of the $\rho_{ij}$ field, where $\rho_{ij}$ is the $\rho$ at the $j$th point due to the $i$th hod. The lines $u =$ constant are transformed into confocal ellipses with the foci at the ends of the hod. Because $\sigma$ acts normally to the surface of the hod, the $u =$ constant lines are equipotential lines of $\rho_{ij}$ caused by $\sigma$ [see Eq. (4.9)].
If the hod is circular around the $z$-axis, the $\rho_{ij}$ streamlines and equipotential lines are also circular around the $z$-axis. The $\rho_{ij}$ equipotential lines form oblate spheroids with the minor axis along the $z$-axis.
Consider a point at a large distance from the hod, $r \gg a$; the $u$ of Eq. (4.36) is very large. Equation (4.36) becomes $x^2 + z^2 = d^2$, where $d$ is the distance from the center of the hod. At large $d$ the $\rho_{ij}$ equipotential lines are concentric spheres. The $\rho_{ij}$ streamlines are the radii of the spheres.
(4.38)
Eq. (4.18) is transferred to the hod surface. Therefore, the net force $d\vec{F}_{\rm s}$ of plenum on an element of surface $ds$ of the hod is
$$ d\vec{F}_{\rm s} = D^2 (\hat{n} \cdot \vec{\nabla}\rho_{\rm s2})\, d\vec{s} - D^2 (\hat{n} \cdot \vec{\nabla}\rho_{\rm s1})\, d\vec{s}, \qquad (4.39) $$
$$ d\vec{F}_{\rm s} = D^2 (\hat{n} \cdot \Delta\vec{\nabla}\rho_{\rm s})\, d\vec{s}, \qquad (4.40) $$
$$ \vec{F}_{\rm U} = -\vec{\nabla} \sum_{k \neq j,\; j=1}^{N_{\rm t}} U_{kj}, \qquad (4.41) $$
where the negative sign means the force is directed opposite to increasing $U$ and the sum is over the other ($j$th) hods.
Define a volume $V$ containing plenum for the distances being considered and throughout which the terms of Eq. (4.16) except the $U$ terms are constant. By the Principle of Superposition, this allows the examination of just one force. Equation (4.16) can be solved for a steady-state condition. We can consider $U$ at a point to be the superposition of the effects of each hod. Therefore, Eq. (4.16) at the $i$th point due to the $j$th hod becomes
$$ \frac{d}{dt}(d_{ij} U_j) = \frac{S_{\rm u}}{C} \nabla^2 (d_{ij} U_j) \qquad (4.42) $$
in spherical coordinates.
The boundary conditions are:
$$ U_i(d_{ij}, 0) = \text{a function of distance, only}, \qquad (4.43) $$
$$ U_i(d_{ij}, t) \rightarrow 0 \ \text{as} \ r \rightarrow \infty. \qquad (4.44) $$
$$ U_{ij} = \frac{K_{\rm u} C \sigma_{\rm f}}{4\pi S_{\rm u} d_{ij}}. \qquad (4.45) $$
The $|U_{ij}|$ is highest in a volume with a given $\sigma_{\rm f} > 0$ value when the $k$th and $j$th hods have a shared $\rho = \rho_{\rm d}$ oblate spheroid surface and are orientated flat-to-flat. Because $d_{ij}$ is the distance to the center of the hod, $d_{ij} \neq 0$ and a singularity problem is non-existent.
Choose two assemblies of hods, where $N_{\rm t}$ is the number of hods in one assembly and $N_{\rm s}$ is the number of hods in the other assembly. Let each assembly have a very small diameter compared to the distance $\vec{r}_{\rm ts} = \langle \sum_{j=1}^{N_{\rm t}} \sum_{k=1}^{N_{\rm s}} \vec{d}_{kj} \rangle$, where $\langle\, \rangle$ means the average of. The force of gravity of the hods in an assembly is greater on each other than the force of gravity of hods in the other assembly. Therefore, the hods of an assembly will act as a solid body relative to the other assembly of hods. Because hods act on plenum, plenum acts on plenum, and plenum acts on hods, only the local $\sigma_{\rm f}$ for each hod determines the force on the hod. Therefore, the force of gravity $\vec{F}_{\rm g}$ on each assembly due to the other assembly is
$$ \vec{F}_{\rm g} = \frac{(K_{\rm u})^2}{4\pi r_{\rm ts}^3} (N_{\rm s} \sigma_{\rm fs}) (N_{\rm t} \sigma_{\rm ft})\, \vec{r}_{\rm ts}, \qquad (4.46) $$
where $\sigma_{\rm fs}$ is the $\sigma_{\rm f}$ for the assembly with $N_{\rm s}$ hods and $\sigma_{\rm ft}$ is the $\sigma_{\rm f}$ for the assembly with $N_{\rm t}$ hods.
For the simplified case of two symmetric distributions of hods with a distance $\vec{r}_{12}$ between them, with $\sigma_{\rm fs} = \sigma_{\rm ft}$, and with similar structures (the orientation of the hods is approximately uniform),
$$ \vec{F}_{\rm g} = G\, \frac{m_{\rm g1} m_{\rm g2}}{r_{12}^3}\, \vec{r}_{12}, \qquad (4.47) $$
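Equations (4.46) and (4.47) share the inverse-square form, which can be checked numerically. The constants below (`Ku`, the σf values, and the hod counts) are placeholders, not values from the text.

```python
# The magnitude of Eq. (4.46) scales as 1/r^2, the same form as the Newtonian
# expression in Eq. (4.47). All constants below are placeholders.
import math

def Fg(Ns, Nt, sig_fs, sig_ft, r, Ku=1.0):
    return (Ku ** 2) / (4.0 * math.pi * r ** 2) * (Ns * sig_fs) * (Nt * sig_ft)

F1 = Fg(10, 20, 0.5, 0.5, 1.0)
F2 = Fg(10, 20, 0.5, 0.5, 2.0)   # doubling the separation
```

Doubling the separation quarters the force, and the product (Ns σfs)(Nt σft) plays the role that G m_g1 m_g2 plays in Eq. (4.47).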
Particles
Define a particle to be an assembly of hods surrounded by a $\rho = \rho_{\rm d}$ equipotential surface (ES) that forms a connected volume around the hods in the particle, within which plenum is totally bound to the hods by $\sigma_{\rm b}$. Therefore, $\rho$ outside the ES equals $\rho_{\rm d}$ within the ES of the particle. If the ES is simply connected, the particle is called a simple particle, and the distance for gravity potential calculations is to the center of the particle. The potential on
(4.49)
L = 2I( )1/2 .
Similarly, a third photon can join the T structure so that the three are mutually orthogonal, as the Principle of Repetition allows. However, $\vec{\sigma} \cdot \hat{n} \neq 0$ in all directions. Call this structure a F1, Type 3 (T3) structure. A cross cylinder on a cross cylinder of a F1T2 structure makes the F1T3 box structure very dense and very stable because there is little free surface (unattached hod faces) compared to the $\sigma_{\rm f}$ energy contained, and P/S is minimal. Perhaps this is the structure of quarks, quark stars, and black holes.
The F1T3 cube in small structures has the least P/S. However, this allows corners. A rounding effect in larger structures, such as found in crystal structures, produces a more spherical ES, which will have a lower P/S. This could be a mechanism to fracture some combinations (particles) and cause faceting in all combinations, like in crystals.
As crystals and compounds have a structure and a handedness or chirality, so too do particles have a structure and therefore a handedness. The handedness of particles results in the particle-antiparticle observations. As in chemical compounds, where one handedness comes to dominate, so too in particles does one handedness dominate. Thus, baryonic particles dominate and antibaryonic particles are much less numerous.
Another possible structure of hods with a position of least P/S and a stable $\vec{\sigma}$ field is with each inserted through and normal to the other. Call the particles formed with this structure Family 2 (F2) particles. The particle so formed has a direction along the line where they join where $\vec{\sigma} \cdot \hat{n} = 0$ can occur. These may be the Muon family of particles.
Chirality of matter
CP violation is a puzzle in modern physics, where C is charge conjugation and P is parity inversion such as observed in a mirror reflection. There is no parity between matter and antimatter. Our universe contains a vast excess of matter over antimatter. A chemical compound that has a chiral structure will adopt one or the other handedness when right- and left-handed versions are mixed. This suggests that the structure of particles has chirality.
4.2.5 Source characteristics
The gradient of the $\rho$ field in Source galaxies repels matter from the Source. Some matter coalesces to form stars. The nucleosynthesis of stars causes a decrease in the surface area to gravitational effect ratio. Therefore, the stars fall to the center of the spiral galaxy, where the high $\rho$ causes black holes to disintegrate, as detected by periodic radiation pulses from the center. A mature spiral galaxy is formed when the matter distribution remains stable. Before the matter distribution becomes stable, the spiral galaxy is growing.
4.2.6 Sink characteristics
If a Source ceases to erupt, or if some matter is repelled to the low $\vec{\nabla}\rho$ distances between Sources, gravity will cause the hods to coalesce. In a volume where $\vec{\nabla}\rho$ becomes near zero, the attraction of hods will cause a high hod density. If the number of hods is greater than in a black hole in this intergalactic volume, the intense force on the supple hods may cause the hods to form spheres like bubbles. With increased pressure from a huge number of hods, the hod can go only out of our universe and turn into a four-dimensional (4D) object, like 2D turned to 3D.
Because the Sink requires mass to form, the Sink's age is considerably less than the spiral galaxies'. Therefore, the delay before matter and plenum is
Equivalence Principle
The concept of two forms of energy was born in Galileo's free fall experiments. The interaction of the plenum and matter produces two forms of energy. One depends on the action of the field on matter, which is kinetic energy. The other is the effect of matter on the field, which produces the potential energy field. Because $T \approx 2.718$ K, the relative Source and Sink strengths are balanced by a feedback mechanism to maintain the potential and kinetic energy balance. Thus, the resolution of the flatness problem is a natural consequence of the STOE.
The General Relativity WEP implies that a kinetic property measured by acceleration and a charge-type property measured by gravitational potential are equal (Unzicker 2007). A type of redshift in General Relativity dependent on the gravity potential field (GM/R) derives from the WEP. If the emitter and observer are at differing positions in a uniform gravitational field, the electromagnetic (EM) signal acquires or loses energy because of the change in gravitational field potential. The Pound-Rebka experiment of 1959-1960 measured the WEP redshift (Pound & Rebka 1960).¹ The WEP suggests the integral of the EM frequency change experienced at each small increment along the path could be used to calculate aP. The STOE uses the gravitational potential from all matter at each point along the EM signal's path to calculate aP. The difference between General Relativity and the STOE is that the STOE also considers the inhomogeneity of the gravitational field in the Ki I term and the amount of plenum the EM signal passes through in the Kdp Dl P term. Without the Ki I term or the Kdp Dl P term in the calculation, the correlation of measured and calculated aP would be much poorer to the degree the WEP method fails.
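The path-integration idea can be illustrated with a toy numeric sketch. This is NOT the STOE calculation (it has no Ki I or Kdp Dl P terms); it only accumulates the WEP-style fractional frequency change over small increments of gravitational potential along an assumed path past a single hypothetical point mass.

```python
import numpy as np

# Toy sketch of the WEP path-integral idea: sum d(nu)/nu = d(Phi)/c^2
# over small path increments.  Masses, positions, and path are assumed
# illustrative values, not data from the text.
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m s^-1

def potential(x, masses, positions):
    """Sum of G*M/R over all bodies at the path point x."""
    return sum(G * m / np.linalg.norm(x - p) for m, p in zip(masses, positions))

def fractional_shift(path, masses, positions):
    """Accumulate the per-increment potential change; the sum telescopes
    to (Phi_end - Phi_start)/c^2."""
    phis = np.array([potential(x, masses, positions) for x in path])
    return np.sum(np.diff(phis)) / c ** 2

# Example: a signal receding from a solar-mass body from 1 AU to 2 AU.
au = 1.496e11
masses = [1.989e30]
positions = [np.array([0.0, 0.0])]
path = [np.array([r, 0.0]) for r in np.linspace(au, 2 * au, 200)]
z = fractional_shift(path, masses, positions)   # negative: climbing out redshifts
```

The sum of increments telescopes to the endpoint potential difference, which is the usual statement of the gravitational redshift; the STOE version described above adds further terms along the path.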
The STOE postulates matter experiences a force F⃗ = Gs ms ∇⃗ρ. A ρ field variation produced solely by matter,

    F = (Gs ms) ∇(Gg Mg / R),    (4.50)

¹This should not be confused with the theorized General Relativity time dilation effect (1 + z) caused by the differing gravitation fields of emitter and observer.
such factors. Equation (4.50) implies the WEP test should be done with the same attractor and with the same number of atoms in the pendulum or free-fall body for the differing isotopes. Maintaining equal mg considers only the bulk property of matter, which reduces the STOE effect.
Because F is symmetric, (Gs ms)(Gg Mg) = (Gg mg)(Gs Ms) = G mg Ms. If the LP is quarks, the (Gg/Gs) ratio indicates the relative volume to cross-section that may differ for quarks and larger structures. For example, if the LP are quarks, differing A/Z ratios with equal Z will determine differing G values. For elements, the number of protons and the number of neutrons are approximately equal. Therefore, G varies little among atoms.
However, the WEP deals with inertial mass mi rather than ms. Therefore, the STOE suggests a test of the WEP different from those done previously is required to differentiate the three mass interaction parameters mi, mg, and ms.
The plenum characteristics describe:
(1) the field of de Broglie-Bohm quantum mechanics,
(2) the gravitational aether of General Relativity,
(3) the dark matter of galaxy rotation curves,
(4) the dark energy of SN1a data,
(5) the major cause of galactic redshift, and
(6) the major cause of the Pioneer anomaly blueshift.
The Brans-Dicke model suggests G ∝ φ⁻¹, where φ is a scalar field. This implies G is a function of the mass-energy of the universe, which is measured by the temperature of the universe. The STOE suggests the temperature of galaxy cluster cells is finely controlled by a feedback mechanism. Therefore, G and the fine structure constant are finely controlled by this feedback mechanism.
Chapter 5
Photon diffraction
differing media densities. This model can account for the observed value of gravitational lensing (Eddington 1920, p. 209).
Models of photons have assumed the photon structure to be similar to other matter particles. That is, photons are assumed to be three-dimensional. The conclusion drawn from the well-known Michelson-Morley experiment's null result was that light is a wave, that there is no aether for light to wave in, and that a Fitzgerald or Lorentz type of contraction existed.
Fractal cosmology has been shown to fit astronomical data on the scale of
galaxy clusters and larger (Baryshev and Teerikorpi 2002). At the opposite
extreme, models such as Quantum Einstein Gravity have been presented
that suggest a fractal structure on the near Planck scale (Lauscher and
Reuter 2005). However, the Democritus concept of the smallest particle
suggests a lower limit of the fractal self-similarity.
The distinction between incoherent and coherent light is whether the
light forms a diffraction/interference pattern when passed through one or
more slits in a mask. Because interference has been thought to be a wave
phenomenon, coherence is considered a property of waves. However, the
definition of coherence allows a distribution of particles to be coherent.
Coherence is obtained (1) by light traveling a long distance, such as from a star, as seen in the Airy disk pattern in telescope images; (2) by the pinhole in the first mask in Young's experiment; and (3) from a laser.
Young's double-slit experiment has become a classic experiment because it demonstrates the central mysteries of the physics and the philosophy of the very small. Incoherent light from a source such as a flame or incandescent lamp impinges on a mask and passes through a small pinhole or slit. The light through the first slit shows no diffraction effects. The light that passes through the slit is allowed to impinge on a second mask with two narrow, close slits. The light that passes through the two slits produces an interference pattern on a distant screen. The first slit makes the light coherent. The intensity pattern on the screen is described by the Huygens-Fresnel equation.
The assumptions of the Fresnel model of diffraction include: (1) the Huygens Principle that each point in a wave front emits a secondary wavelet; (2) the wavelets' destructive and constructive interference produces the diffraction pattern; (3) the secondary waves are emitted in only the forward direction, which is the so-called obliquity factor (a cosine function); (4) the wavelet phase advances by one-quarter period ahead of the wave that produced it; (5) the wave has a uniform amplitude and phase over the wave front in the slit, and zero amplitude and no effect behind the mask; and (6) the Fresnel model has a slight arc of the wave front across the slit, that is, the distribution of energy in the plane of the slit varies.
The Fresnel model with larger distance between the mask and the screen, or with condensing lenses before and after the mask, degenerates into the Fraunhofer diffraction model.
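The Huygens-Fresnel construction just described can be sketched numerically: each point across the slit is treated as a secondary spherical wavelet source, and the screen intensity is the squared magnitude of the summed wavelet phasors. The slit width, wavelength, and distances below are illustrative assumptions, not values from the simulations of this chapter.

```python
import numpy as np

# Numeric Huygens-Fresnel sketch: sum secondary-wavelet phasors across
# the slit and square the magnitude at each screen position.
wavelength = 0.5                 # arbitrary units
k = 2.0 * np.pi / wavelength
slit_w = 5.0                     # slit many wavelengths wide
L = 2000.0                       # mask-to-screen distance (far field)

ys = np.linspace(-slit_w / 2, slit_w / 2, 400)   # wavelet sources in slit
xs = np.linspace(-400.0, 400.0, 801)             # screen positions

def intensity(x):
    r = np.hypot(L, x - ys)                      # source-to-screen distances
    return abs((np.exp(1j * k * r) / r).sum()) ** 2

I = np.array([intensity(x) for x in xs])
I /= I.max()                                     # central maximum normalized to 1
```

With a large L this reproduces the Fraunhofer limit noted above: a bright central maximum with the first zero near x = L·wavelength/slit_w.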
The intensity patterns produced by multiple slits can be compared to the intensities of the single-slit pattern of equal total width. Thus, the resulting pattern may be regarded as due to the joint action of interference between the waves coming from corresponding points in the multiple slits and of diffraction from each slit. Diffraction in the Fresnel model is the result of interference of all the secondary wavelets radiating from the different elements of the wave front. The term diffraction is reserved for the consideration of the radiating elements, which is usually stated as integration over the infinitesimal elements of the wave front. Interference is reserved for the superposition of existing waves, which is usually stated as the superposition of a finite number of beams.
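In the far field this joint action has a compact form: the N-slit intensity is the single-slit diffraction envelope multiplied by an interference factor from the N slits. The slit width b, spacing d, and wavelength below are assumed illustrative values (d/b = 3, as in the double-slit geometry used later in the chapter).

```python
import numpy as np

# Far-field sketch: multi-slit intensity = diffraction envelope of one
# slit times the N-slit interference factor.
def multi_slit_intensity(sin_theta, n_slits, b, d, wavelength):
    beta = np.pi * b * sin_theta / wavelength     # diffraction argument
    gamma = np.pi * d * sin_theta / wavelength    # interference argument
    envelope = np.sinc(beta / np.pi) ** 2         # np.sinc(x) = sin(pi x)/(pi x)
    with np.errstate(divide='ignore', invalid='ignore'):
        raw = (np.sin(n_slits * gamma) / np.sin(gamma)) ** 2
    inter = np.where(np.isclose(np.sin(gamma), 0.0), float(n_slits) ** 2, raw)
    return envelope * inter

s = np.linspace(-0.3, 0.3, 601)                   # sin(theta) samples
I2 = multi_slit_intensity(s, 2, b=1.0, d=3.0, wavelength=0.5)
```

At the center the interference factor reaches N², so a two-slit pattern peaks at four times the single-slit envelope there.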
Sommerfeld (1954) gave a more refined, vector treatment for phase application. However, these more complex models make many simplifying assumptions. When the apertures are many wavelengths wide and the observations are made at appreciable distance from the screen, the intensity results are identical to the Fresnel model.
This chapter proposes a model of light that postulates the necessary characteristics of photons to satisfy observations and yield diffraction phenomena. The model combines Newton's speculations, Democritus's speculations, the Bohm interpretation of quantum mechanics, and the fractal philosophy. The wavelike behavior of light results from the photons' changing the field that guides the path of the photons. The resulting model is tested by numerical simulation of diffraction and interference, with application to the Afshar experiment. Therefore, the wave characteristics of light may be obtained from the interaction of photons and field.
5.1
Model
Newton's third law suggests that if the field acts on photons, the photons and other matter should act on the field.
Compare coherent light passing through multiple slits in a mask to light passing through a single slit. The slits may be viewed as transmitters of light that produce a diffraction pattern in the ρ field if the incoming light is

(5.1)

where n̂ is the surface normal unit vector. A vector without the arrow indicates the magnitude of the vector.
The change Δρ that results from the effect of other matter M on the ρ field is

    Δρ = − Σ_{i=1}^{N} G Mi / Ri ,    (5.2)

where N is the number of bodies used in the calculation and Ri is the distance
the path of light wherein c is measured. Because the hod has no surface
area in the direction of motion, c is the maximum speed a particle may
have.
5.1.1
Hod action on ρ field
The action of all matter (hods) causes the ρ to change. Therefore, the motion of a hod is relative to the local ρ field variation.
Matter causes the ρ to decrease. Therefore, the hod motion which pulls the ρ to zero displaces the ρ field. The ρ field then flows back when the hod passes a point. That is, the hod's motion within a volume neither increases nor decreases the amount of ρ field in that volume. The cavitation in the ρ field produces a wave in the ρ field. The cavitation limits the c. The cavitation depends on the density of the ρ field; higher density allows a faster refresh of the ρ around the hod and, therefore, a faster c than in a low-ρ field. The wave has a cos(kr) effect from the point of the hod center. Because the ρ field wave travels faster than the hod, the ρ field flows around the hod to fill in the depression behind the hod. If the speed of the wave in the ρ field is much larger than c, the harmonic wave is transmitted forward, and the ρ level behind the wave reaches a stable, non-oscillating level very rapidly and is the effect of gravity. This can be modeled as a cos(Kθ θ) decrease of ρ's value, where θ is the angle between the direction c⃗ of a hod and the direction of the point from the hod center where ρ is calculated. The angle at which the ρ field no longer oscillates has cos(Kθ θ) = 0. This is analogous to the Fresnel model in which secondary wavelets are emitted in only the forward direction. Therefore, the effect on the ρ field of a single hod is
    ρsingle = (Kr/r) cos(2πr/λT) cos(Kθ θ) exp[j(ωt)],   Kθ θ < π/2,
    ρsingle = Kr/r,   Kθ θ ≥ π/2.    (5.5)
If energy may be transferred to the hods from the ρ field, then the motion of the hods may be dampened by the ρ field. Posit the dampening force is proportional to the ρ, ms, and v⃗ in analogy to nonturbulent movement
    mI v̇ = Fst − Kd ρ ms v,    (5.6)

where the over dot means a time derivative, Kv and Kd are proportionality constants, Fst ∝ ms |∇⃗ρ| sin(θ), and θ is the angle between ∇⃗ρ and c⃗.
Solving for v yields:

    vf = Ah/Bh + (vi − Ah/Bh) exp(−Bh Δt),    (5.7)

    d(ṙ mI)/dt = ṙ Fst − Kd ρ ṙ ms ,    (5.8)

where i = (time = t), f = (time = t + Δt),

    Ah = Ks ms |∇⃗ρ| sin(θ)/(r mI),
    Bh = Kd ρ ms/mI + ṁI/mI ,    (5.9)

and ṁI/mI is small and nearly constant.
5.1.3
The suggested structure of the photon in Hodge (2004) implies each hod is positioned at a ρ minimum because the hod surface holds the ρ = 0. Therefore, the distance between hods in a photon is one wavelength λT of the emitted ρ field wave. Also, the hods of a photon emit in phase. Further, the number of hods in a photon and the ρ field potential ρmax due to all other causes around the hod affect this distance. Therefore, the transmitted potential ρT from a photon is

    ρT = (Kr/r) NeffT ,    (5.10)

where

    NeffT = cos(2πr/λT) cos(Kθ θ) sin[NhT π sin(θ)]/sin[π sin(θ)],   Kθ θ < π/2,
    NeffT = NhT ,   Kθ θ ≥ π/2,    (5.11)

and λT = Kλ/(ρmax NhT), where NhT is the number of hods in the transmitting photon.
These equations apply to the plane that includes the center of the photon, the direction vector, and the axis of the photon.
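Because the hods of a photon emit in phase at one-wavelength spacing, NeffT behaves like the array factor of a uniform linear antenna array with wavelength spacing. The sketch below evaluates only that array factor, sin(N·π·sin θ)/sin(π·sin θ), for an assumed N = 10 hods; the cosine factors of the full NeffT expression are omitted for brevity.

```python
import numpy as np

# Antenna-style array factor analogous to NeffT for a photon of N hods
# spaced one wavelength apart (cosine factors omitted).
def array_factor(theta, n_hods):
    psi = np.pi * np.sin(theta)
    with np.errstate(divide='ignore', invalid='ignore'):
        raw = np.sin(n_hods * psi) / np.sin(psi)
    # At psi = m*pi the ratio tends to the N-element limit.
    return np.where(np.isclose(np.sin(psi), 0.0), float(n_hods), raw)

theta = np.linspace(0.0, np.pi / 2, 10001)
af = array_factor(theta, 10)    # af[0] is the broadside limit N = 10
```

The factor is bounded by N and oscillates with angle, which is why the photon's emission pattern resembles an antenna emission pattern with distinct minima.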
5.1.4
The ρ field wave from each cause impinges on a photon through the hods. Because the photon is linearly extended, the photon is analogous to a receiving antenna. Therefore,
    ∇⃗ρeffi = ∇⃗ρi sin[NhR π (λR/λTi) sin(θi + θRTi)] / sin[π (λR/λTi) sin(θi + θRTi)],    (5.12)

where NhR is the number of hods in the receiving photon; λR is the wavelength, which is the distance between hods, of the receiving photon; and ∇⃗ρi, λTi, and θi are the effective ∇⃗ρ, λT, and θ, respectively, for the ith cause.
Using Eq. (5.12) in Eqs. (5.7) and (5.9) and summing the effect of all causes yields the total ρ change in a Δt.
5.2
Simulation
    B̄(xs) = Σ_{i = xs−0.1}^{xs+0.1} B(i)/11,    (5.13)
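A minimal sketch of this binning, assuming 11 bins of 0.02 step each span the ±0.1 step window around xs (the bin width is an assumption chosen to make 11 bins fit that window):

```python
import numpy as np

# Sliding-window average over 11 screen bins, as in Eq. (5.13).
def smooth_counts(raw, idx, half_width=5):
    """Average of 2*half_width + 1 = 11 raw bins centered on bin idx."""
    return raw[idx - half_width: idx + half_width + 1].sum() / 11.0

raw = np.zeros(101)
raw[50] = 11.0                   # a spike of 11 counts in the center bin
b_bar = smooth_counts(raw, 50)   # spread over the 11-bin window
```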
the constants was so that the angle arctan(xs/L), where L (step) is the distance from the last mask to the screen, corresponded to the Fresnel equations. That is, so that x = 1 step and y = 1 step were the same distance of photon movement for the calculations, which is a Cartesian space.
Table 5.2 lists the curve fit measurements for the plots shown in the referenced figures. The first column is the figure number showing the curve. The second column is the number Nc of photons counted at the screen. The third column is the center of the theoretical curve (Kcent). The fourth column is the asymmetry Asym of the data points (number of photons counted greater than Kcent minus number of photons counted less than Kcent)/Nc. The fifth column is the sum of the least squares Lsq between the B̄(xs) and the theoretical plot for −5 steps ≤ xs ≤ 5 steps, divided by Nc. The sixth column is the correlation coefficient between B̄(xs) and the theoretical plot for −5 steps ≤ xs ≤ 5 steps.
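The quality measures just listed can be sketched as functions of the binned counts and the theoretical curve. The Gaussian toy data below are assumptions for checking the definitions, not output of the chapter's simulation.

```python
import numpy as np

# Table 5.2 quality measures: asymmetry about the curve center Kcent,
# per-photon least-squares mismatch, and correlation with theory.
def fit_metrics(counts, theory, x, k_cent, n_c):
    asym = (counts[x > k_cent].sum() - counts[x < k_cent].sum()) / n_c
    lsq = ((counts - theory) ** 2).sum() / n_c
    cc = np.corrcoef(counts, theory)[0, 1]
    return asym, lsq, cc

x = np.arange(-50, 51) / 10.0          # -5 to 5 steps in 0.1-step bins
theory = np.exp(-x ** 2)               # stand-in theoretical curve
counts = theory.copy()                 # perfect agreement for the check
asym, lsq, cc = fit_metrics(counts, theory, x, k_cent=0.0, n_c=1000)
```

For symmetric data centered on Kcent the asymmetry vanishes, the mismatch is zero, and the correlation coefficient is 1, matching the best rows of Table 5.2.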
Table 5.1: The values of the constants (columns: Parameter, value, units; the parameter list includes Kc, Kr, Kv, Kd, Ks, and ρmax).
5.3
The initial distribution of photons was within a 60 steps by 60 steps section. One photon was randomly placed in each 1 step by 1 step part. The equations were applied to each of the photons in the section. Because the section has an outer edge, additional virtual photons were necessary to calculate the ρ. Therefore, [Px(pn), Py(pn)], [Px(pn), Py(pn) + 60], [Px(pn), Py(pn) − 60], [Px(pn) + 60, Py(pn)], [Px(pn) − 60, Py(pn)], [Px(pn) + 60, Py(pn) + 60], [Px(pn) − 60, Py(pn) + 60], [Px(pn) − 60, Py(pn) − 60], and [Px(pn) + 60, Py(pn) − 60] were included in the calculation of ρ. Only photons and virtual photons within a radius of 30 steps were used in the calculation of ρ at a point.
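A sketch of this edge handling, assuming the nine 60-step translated copies and the 30-step cutoff radius stated above:

```python
import numpy as np

# Periodic virtual copies of each photon and a radius cutoff, as in the
# text's edge handling for the 60x60 section.
def virtual_images(px, py, span=60.0):
    """The photon plus its eight virtual copies offset by +/- span."""
    return [(px + dx, py + dy)
            for dx in (-span, 0.0, span)
            for dy in (-span, 0.0, span)]

def neighbors(point, photons, span=60.0, radius=30.0):
    """Real or virtual photon positions within `radius` of `point`."""
    found = []
    for px, py in photons:
        for vx, vy in virtual_images(px, py, span):
            if np.hypot(vx - point[0], vy - point[1]) <= radius:
                found.append((vx, vy))
    return found

# A photon near the far edge contributes through its wrapped virtual copy.
near = neighbors((0.0, 0.0), [(59.0, 0.0), (10.0, 10.0)])
```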
The equations were developed without consideration of photons colliding. That is, some photons were too close, which generated very large, unrealistic ∇⃗ρ values. Another characteristic of the toy model is that each interval moves a photon a discrete distance that occasionally places photons unrealistically close. When this happened, one of the close photons was eliminated from consideration. The initial distribution started with 3600 photons, from which 111 were eliminated because of collision.
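The collision rule can be sketched as a minimum-separation cull. The 0.25-step threshold below is an assumed illustrative value; the text does not state the actual closeness criterion.

```python
import numpy as np

# Minimum-separation cull: when two photons land unrealistically close,
# drop one of the pair (threshold is an assumption).
def cull_collisions(positions, min_sep=0.25):
    kept = []
    for p in positions:
        if all(np.hypot(p[0] - q[0], p[1] - q[1]) >= min_sep for q in kept):
            kept.append(p)
    return kept

photons = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (5.0, 5.0)]
survivors = cull_collisions(photons)   # the photon at (0.1, 0) is culled
```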
Figure 5.1(left) is a plot of the NeffT versus θ with NhT = 10. The first six minima are at θ = 0 rad, 0.144 rad (a), 0.249 rad (b), 0.355 rad (c), 0.466 rad (d), and π/2 rad (e). This pattern is similar to the antenna emission pattern.
After 1000 intervals, a pattern of photon position developed, as seen in Fig. 5.1(right). The photons' positions were recorded. The photons were organizing themselves into recognizable patterns of lines with angles to the direction of travel (Y axis) corresponding to the minima of Fig. 5.1(left).
A mask with a single slit with a width Ws = 1 step was placed at y = 100 steps, and a screen was placed at y = 140 steps (L = 40 steps). The positions of the photons were read from the recording. The group of photons was placed in 60 steps increments rearward from the mask. The photons were selected from the recording to form a beam width Win (step) centered on the x = 0 step axis. Because the incoming beam had edges, the calculation for Pyold(pn) < 100 was Pynew(pn) = Pyold(pn) + Kc × 1 interval, where Pyold(pn) is the position of the nth photon from the last calculation and Pynew(pn) is the newly calculated position. The Px(pn) remained the same. If Pyold(pn) ≥ 100, the Pynew(pn) and Px(pn) were calculated according to the model.
Figure 5.1: The left figure is a plot of the NeffT versus angle θ from the direction of the photon with NhT = 10. The first six minima are at θ = 0 rad, 0.144 rad (a), 0.249 rad (b), 0.355 rad (c), and 0.466 rad (d). The right figure is a plot of the longitudinal vs. latitudinal position of photons after 1000 intervals. The line labeled e is θ = π/2. The lines are labeled as the angles in the left plot. The position of photons along lines corresponding to minima of a photon's transmission pattern is what determines coherence.

Figure 5.2 shows the resulting patterns for varying Win. The thicker, solid line in each figure is the result of a Fresnel equation fit to the data points. Although each plot shows a good fit to the Fresnel equation, the fit differs among the plots and depends on Win. Because the calculation includes all photons, the photons that were destined to be removed by the mask have an effect on the diffraction pattern beyond the mask.
Figure 5.3 shows the resulting patterns for varying L. The mask, screen, and photon input were the same as the previous experiment, with Win = 6 steps. Comparing Fig. 5.3(A), Fig. 5.2(D), and Fig. 5.3(B) shows the evolution of the diffraction pattern with L = 30 steps, L = 40 steps, and L = 50 steps, respectively. Fig. 5.3(C) and Fig. 5.3(D) show the Fraunhofer equation fits. The greater L produces a closer match between the Fresnel and Fraunhofer equation fits.
Figure 5.4 shows an expanded view of Fig. 5.3(B). The center area, first ring, and second ring of Fig. 5.3(B) and Fig. 5.4 have 954 photons, 20 photons, and 4 photons, respectively, of the 1000 photons counted.
Figure 5.5 shows the screen pattern with the mask from the previous experiment replaced by a double slit mask. Figure 5.5(A) was with the slits placed from 0.50 step to 1.00 step and from −1.00 step to −0.50 step. The best two-slit Fresnel fit (a cosine term multiplied by the one-slit Fresnel equation) is expected for slits with a given ratio of the width b of one slit to the width d between the centers of the slits (the d/b ratio). Figure 5.5(B)
Table 5.2: The curve fit measurements for the plots shown in the referenced figures.

Nc^a   Kcent^b   Asym^c   Lsq^d   Cc^e
1000   0.585      0.014   0.219   0.96
1000   0.605     -0.028   0.199   0.97
1000   0.408      0.000   0.247   0.96
1000   0.383      0.006   0.259   0.96
1000   0.259      0.084   0.360   0.97
1000   0.426      0.050   0.021   0.92
1000   0.271      0.060   0.185   0.98
1000   0.535      0.006   0.246   0.83
 438   0.187     -0.046   0.178   0.92
1000   0.314     -0.070   0.717   0.82
 400   0.850     -0.320   0.221   0.68
1000   0.145     -0.122   0.909   0.79
1000   0.273     -0.074   0.895   0.81
Figure 5.2: The single slit width Ws = 1.0 step screen patterns for L = 40 steps: (A)
input beam width Win = 1.0 step which is the same as the slit width, (B) Win = 2.0
steps, (C) Win = 4.0 steps, and (D) Win = 6.0 steps. The filled squares are the data
points, the thin line connects the data points, and the thick line marks the theoretical
calculation. Although each plot shows a good fit to the Fresnel equation, the fit differs
among the plots and depends on Win . Because the calculation includes all photons, the
photons that were destined to be removed by the mask have an effect on the diffraction
pattern beyond the mask.
shows the screen pattern with the slits placed from 0.75 step to 1.25 steps and from −1.25 steps to −0.75 step.
Figure 5.6 shows the paths traced by 10 consecutive photons through the slits at two different intervals that form part of the distribution of Fig. 5.5A. The traces are from the mask to the screen. The θ for each photon is established after y = 130 steps. Before y = 120, there is considerable change in θ, which is consistent with Fig. 5.3. That is, the photon paths do not start at the slit and follow straight lines to the screen. The Fresnel
Figure 5.3: Resulting patterns for varying L. The mask, screen, and photon input was
the same as the previous experiment with Win = 6 steps. The single slit Ws = 1.0 step
screen patterns for L = 30 steps (left figures A and C) and for L = 50 steps (right
figures B and D). The top row is the Fresnel calculation plots and the bottom row is
the Fraunhofer calculation plots. The filled squares are the data points, the thin line
connects the data points, and the thick line marks the theoretical calculation. Comparing
Fig. 5.3(A), Fig. 5.2(D), and Fig. 5.3(B) shows the evolution of the diffraction pattern
with L = 30 steps, L = 40 steps, and L = 50 steps, respectively. Fig. 5.3(C) and
Fig. 5.3(D) show the Fraunhofer equation fits. The greater L produces a closer match
between the Fresnel and Fraunhofer equation fits.
Figure 5.4: Plot of Fig. 5.3B with an expanded scale to show the second and third diffraction rings. The filled squares are the data points, the thin line connects the data points, and the thick line marks the theoretical calculation. The center area, first ring, and second ring of Fig. 5.3(B) and Fig. 5.4 have 954 photons, 20 photons, and 4 photons, respectively, of the 1000 photons counted. The number of photons in each ring agrees with the theoretical calculation of the relative intensity of the diffraction rings.
Figure 5.5: Plot of the double slit screen pattern at L = 40 steps and Win = 8 steps.
The A figure is with the slits placed from 0.50 step to 1.00 step and from -1.00 step to
-0.50 step. The B figure is with the slits placed from 0.75 step to 1.25 steps and from
-1.25 steps to -0.75 step. The filled squares are the data points, the thin line connects the
data points, and the thick line marks the theoretical calculation. The model produces
the double slit interference pattern.
of the discrete nature of the simulation like the collision condition noted
previously. (4) A photon from one slit follows another from the other slit.
The leading photon determines the xs and θs at the screen.
Figure 5.6: Traces of 10 consecutive photon paths between the mask and screen at two different intervals. The numbers mark the following occurrences: (1) One photon follows another and traces the same path. The following photon travels a longer path before path merging. (2) One photon follows another and traces a parallel and close path. (3) A photon experiences an abrupt change in θ as it passes close to another photon. These events were probably a result of the discrete nature of the simulation, like a collision condition. (4) A photon from one slit follows another from the other slit. The leading photon determines the xs and θs at the screen. The photon's path continues to change direction for a short distance after the mask.
5.4
Young's experiment
Figure 5.7: The left figure is a plot of the screen pattern of Young's experiment at L = 40 steps after the first mask and Win = 6 steps. The filled squares are the data points. The thin line connects the data points. The Fresnel equation fit is poor. Therefore, the pattern is not a diffraction pattern. The right figure shows the distribution of photons from the first mask to the screen. The lines and the lower case letters are as in Fig. 5.1. Random photons through a first slit fail to produce a diffraction pattern, which indicates incoherence. However, the position distribution shows coherence (see Fig. 5.1B).
The screen was removed and the second mask was placed at y = 140 steps. The second mask had two slits placed at 0.5 step ≤ x ≤ 1.0 step and at −1.0 step ≤ x ≤ −0.5 step. The screen was placed at y = 180 steps (L = 40 steps). Figure 5.8 shows the resulting distribution pattern.
Although the statistics for Fig. 5.8 are poorer than previous screen interference patterns, inspection of Fig. 5.8 indicates an interference pattern
Figure 5.8: Plot of the double slit screen pattern at L = 40 steps from the second
mask and 80 steps beyond the first mask. The slits were placed from 0.50 step to 1.00
step and from -1.00 step to -0.50 step. The filled squares are the data points, the thin
line connects the data points, and the thick line marks the theoretical calculation. The
position distribution after the first mask is coherent.
5.5
Laser
The initial, overall photon density in the previous sections was approximately uniform and incoherent. The photons in the distance simulation or the slit in the Young's experiment simulation formed the coherent distribution. These coherent distributions resulted from an application of the model to an initial random distribution.
The popular model of a laser is that a seed photon in a medium stimulates the emission of more photons. Because the photons from a laser impinging on a slit(s) produce a diffraction pattern, the laser simulation must produce the coherent distribution of photons. Because of the diversity of materials that produce laser emissions, the laser must form an ordered distribution within the light.
The Fourier transform of a triangular function is a sinc-type function. Another function with a Fourier transform of the sinc type is the rectangular function. If the slit acts as a Fourier transform on the stream of photons, a pulse pattern with high density and a low duty cycle may also produce the diffraction pattern.
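The Fourier claim can be checked numerically: the discrete transform of a rectangular pulse is a sinc-type spectrum (the discrete version is the Dirichlet kernel), so a high-density, low-duty-cycle pulse pattern can mimic what a slit produces. The record and pulse lengths below are arbitrary assumptions.

```python
import numpy as np

# FFT of a rectangular pulse compared against the analytic sinc-type
# (Dirichlet kernel) magnitude.
n = 1024
width = 64
signal = np.zeros(n)
signal[:width] = 1.0                       # one rectangular pulse
spectrum = np.abs(np.fft.fft(signal))

# Analytic magnitude of the same transform:
k = np.arange(1, n)
analytic = np.abs(np.sin(np.pi * k * width / n) / np.sin(np.pi * k / n))
```

The spectral zeros fall at multiples of n/width, the discrete analog of the sinc zeros of a slit of the same width.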
A simple model of the simulation of the laser light is several photons
followed by a delay between pulses. Figure 5.9 shows the result of passing
a pulse through a double slit. The pulse is formed by positioning a photon
randomly in half step x intervals and randomly within a 1.1 steps y interval.
These pulses were three steps apart and Win = 6 steps. Several other
pulse configurations were tried and yielded a poorer fit. The parameters
are unique. Also, the fit is inconsistent with observation of interference
patterns for a d/b ratio of 3/1. That this is what lasers in general produce
seems unlikely.
That the photons form lines with a set angle to c⃗ was noted in Fig. 5.1 and Fig. 5.7. Posit that a seed photon follows a free photon in the laser material. These photons form themselves at an angle noted in Fig. 5.1. The angles are related to Nh. These photons then exert a force along the angle to free weakly bound photons. Thus, a line of photons is formed. The lines are then emitted.
The experiment was to create two seed photons at y = 0 and randomly between −3 steps ≤ x ≤ 3 steps. A line of 13 additional photons was introduced from each of the seed photons at one of the four angles, which was randomly chosen, progressing positively or negatively. Figure 5.10 depicts a plot of such a distribution at one interval. This model has the advantage of being dependent on Nh and the form of the distribution produced by
Figure 5.9: Plot of the pattern on a screen at L = 40 steps of a laser pulse input Win = 6
steps through a double slit. The slits were placed from 0.50 step to 1.00 step and from
-1.00 step to -0.50 step. The filled squares are the data points, the thin line connects the
data points, and the thick line marks the theoretical calculation.
Figure 5.10: Plot of the position of photons between 0 step ≤ y ≤ 100 steps with Win = 6 steps.
The photons were then directed to a mask at y = 100 with a double slit. The slits were placed from 0.50 step to 1.00 step and from −1.00 step to −0.50 step.
The fit is consistent with observation of interference patterns for a d/b
ratio of 3/1.
Figure 5.11: Plot of the pattern on a screen at L = 40 steps of a line laser input (see Fig. 5.10) with Win = 6 steps through a double slit. The slits were placed from 0.50 step to 1.00 step and from −1.00 step to −0.50 step. The filled squares are the data points, the thin line connects the data points, and the thick line marks the theoretical calculation.
5.6
Afshar experiment
Figure 5.12: Plot of θs vs. xs for the photons that passed through the Positive Slit (left) and the photons that passed through the Negative Slit (right). The groups of photons with −3 steps ≤ xs ≤ 3 steps have a nearly linear distribution. A linear regression was done on the data of each of the groups. Photons existed outside this range. However, occurrences (1) and (2) of Fig. 5.6, which were considered artifacts of the simulation, caused errors. The distribution outside this range became non-linear. Over 86% of the recorded photons were in this range.
    θs = m xs + b.    (5.14)
Table 5.3 lists the resulting values of the linear regression equation for each of the data sets and the calculated θs at xs = −2 steps and xs = 2 steps. The majority of the photons impinge on the screen at angles that would cause a condensing lens to focus them at different points associated with the slits. Figure 5.6, occurrence (4), showed some photons from one slit follow another photon from the other slit and, therefore, were recorded with the θs as if from the wrong slit. This occurred for both slits. Therefore, the statistical effect would balance.
Table 5.3: The values of the linear regression.

slit            m (rad. step⁻¹)
Positive Slit   0.0428
Negative Slit   0.0348
The model produces the Afshar experiment.
5.7
Discussion
The constants were determined iteratively and with few significant figures. The solution presented may not be unique or optimal.
The E = mc² relation was used to derive Eqs. (5.3) and (5.4). This suggests a way to relate measurable quantities to the constants, E = mc² = hν. Further, ρ is responsible for inertial mass. Thus, ψ is a wave in a real physical entity.
The ψ wave in quantum mechanics is a wave in the ρ field. The hod causes the wave and the wave directs the hod. The speed of the wave in the ρ field is much greater than c. Because the number of hods in a moving photon determines the wavelength of the ρ field wave, the photon causally interacts with other similar hods. Because the wave is a sine or cosine function, matter producing equal wavelengths in the ρ field can tune into each other. This produces the interference pattern. Therefore, quantum entanglement may be a causal and connected observation.
This chapter suggests the transverse and longitudinal positions of photons undergo forces that may be treated as Fourier transforms with each encounter with more massive particles. The varying ρ field experienced by photons causes a Fourier transform on the distribution of photons. Therefore, the probability distribution of the position and movement of the large number of photons may be treated as in quantum mechanics.
The flow of photons through a volume with matter would produce a pattern of waves from each matter particle. Therefore, the Huygens model of each point being a re-emitter of waves is justified if each point means each matter particle, such as atoms.
Fourier mathematics assumes an infinite stream of particles obeying a given function for all time. Each encounter with other matter produces a different function. The mathematics of the Fourier transform includes the integration in both time and distance from −∞ to +∞. Therefore, observations made over a region or over a shorter interval allow for the uncertainty of waves. This non-uniformity of the time and distance of the particle stream distribution is Fourier transformed into the Heisenberg Uncertainty Principle (see, for example, Tang 2007, Section 2.9).
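The Fourier origin of the uncertainty relation can be illustrated numerically: narrowing a Gaussian packet in time broadens its spectrum, and the rms width product stays at the Gaussian limit of 1/2. The packet widths and grid below are arbitrary assumptions.

```python
import numpy as np

# RMS time width times RMS spectral width for a Gaussian packet; the
# product is the Gaussian minimum 1/2 regardless of the packet width.
def width_product(sigma_t, n=4096, dt=0.01):
    t = (np.arange(n) - n / 2) * dt
    f = np.exp(-t ** 2 / (2.0 * sigma_t ** 2))      # Gaussian packet
    spec = np.abs(np.fft.fft(f)) ** 2               # power spectrum
    omega = 2.0 * np.pi * np.fft.fftfreq(n, dt)
    t_rms = np.sqrt(np.sum(t ** 2 * f ** 2) / np.sum(f ** 2))
    w_rms = np.sqrt(np.sum(omega ** 2 * spec) / np.sum(spec))
    return t_rms * w_rms

p_narrow = width_product(0.2)   # shorter packet <-> wider spectrum
p_wide = width_product(0.4)     # wider packet <-> narrower spectrum
```

A packet observed over a shorter interval necessarily has a broader transform, which is the Fourier statement of the uncertainty argument above.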
The change in the diffraction pattern upon the change in the width
of the photon stream that the mask blocks (see Fig. 5.2) suggests these
photons have an influence on the photons that go through the slit. This
differs from the traditional wave model of diffraction. It also suggests the
photon diffraction experiment is an experiment of quantum entanglement.
Indeed, the photons blocked by the mask are non-local to the transmitted
photons beyond the mask.
Bell's inequality includes the assumption of locality (Dürr et al. 2009; Goldstein 2009). Because the present model is intrinsically nonlocal, it avoids Bell's inequality.
The calculation equations allow several negative feedback loops. For example, c is dependent on ρ. If a photon is at a high-ρ region, c is high. This causes the photon to be faster than the photon producing the wave and to move to a lower ρ. The lower ρ slows the photon to match the speed of the photon producing the wave. This mechanism exists in θ and vt.
The present concept of coherence differs from the traditional model. The photons interact through the ρ field and tend toward lower ρ. Coherence in the sense of interaction of photons is when the photons are maintained at a position and momentum relative to other photons through the feedback mechanisms. For a photon, this occurs when a photon distribution causes a constant relative, moving ρ minimum. That is when cos(Kθ θ)/r < 1/r. This also implies there are constant relative forbidden zones where cos(Kθ θ)/r > 1/r and ρ → ρmax. Thus, position and momentum are quantized.
As noted in Young's experiment, knowledge of the density of particles is
insufficient to determine the Bohmian quantum potential. The structure of
hods must also be known to provide the ρ field. For example, the ρ field
wave caused by a photon structure of Nh1 hods differs from the ρ field
wave caused by a photon structure of Nh2 hods, where Nh1 ≠ Nh2.
Gravity is another manifestation of the ρ field-hod interaction. Moving
particles produce pilot waves (gravity waves) in the ρ field. The wave-particle
duality of the Copenhagen interpretation may be viewed as a question of which
of the two entities (field or particle) predominates in an experiment.
The cosmological, scalar potential model (SPM) was derived from considerations
of galaxies and galaxy clusters (Hodge 2004, 2006a,c,e; Hodge and
Castelaz 2003b). The SPM posits a plenum exists whose density distribution
creates a scalar potential ρ (erg) field. The term plenum was chosen to
distinguish the concept from space in the relativity sense and from aether.
The term space is reserved for a passive backdrop to measure distance, which
is a mathematical construct. The plenum follows Descartes' (1644) description
of the plenum. The plenum is infinitely divisible, fills all volume between matter
particles, is ubiquitous, flows to volumes according to the heat equation,
is influenced by matter, is compressible in the sense that the amount of
Chapter 6
The evidence suggests the problem of a single model explaining both galactic-scale
and cosmological-scale observations is fundamental (Sellwood and
Kosowsky 2001b). Among the variety of models that have been suggested to
link cosmological-scale and galactic-scale observations are models using a
scalar field. A scalar field has been linked to dark matter (Fay 2004; Pirogov
2005), a cosmological model (Aguilar et al. 2004), the rotation curves of
spiral galaxies (Mbelek 2004), and axisymmetric galaxies (Rodriguez-Meza
et al. 2005).
The great majority of elliptical galaxies are observed to be much poorer
in cool gas and hydrogen than spiral galaxies of comparable luminosity
(Binney and Merrifield 1998, pages 527-8). The bulk of the interstellar
matter (ISM) in spiral galaxies is H I and H2. In elliptical galaxies, the bulk
of the ISM consists of hot plasma distributed approximately
spherically rather than in a thin disk (Binney and Merrifield 1998, pages
525-6). A characteristic of elliptical galaxies not found in spiral galaxies
is that the X-ray surface brightness is nearly proportional to the optical
surface brightness (Binney and Merrifield 1998, page 526). The study of
dust lanes suggests that gas and dust are falling into elliptical and lenticular
galaxies (Binney and Merrifield 1998, pages 513-6) and are formed internally
in spiral galaxies (Binney and Merrifield 1998, pages 528-9). Some evidence
has been presented that suggests irregular galaxies will settle down to
become normal elliptical galaxies (Binney and Merrifield 1998, page 243). In
¹Reprinted from New Astronomy, vol. 11, Author: John C. Hodge, Scalar potential model of redshift
and discrete redshift, Pages 344-358, with permission of Elsevier (Hodge 2006a).
low surface brightness (LSB) spiral galaxies, the outer rotation curve (RC)
generally rises (de Block et al. 2001, and references therein). In contrast,
ordinary elliptical galaxies, with luminosities close to the characteristic
L* (= 2.2 × 10¹⁰ L_B, in B-band solar units for a Hubble constant Ho = 70
km s⁻¹ Mpc⁻¹), show a nearly Keplerian decline with radius outside 2Reff,
where Reff is the galaxy's effective radius enclosing half its projected light
(Romanowsky et al. 2003).
Battaner and Florido (2000) and Sofue et al. (1999) provide an
overview of the current state of knowledge of RCs of spiral galaxies. The
RCs of spiral galaxies have a high rotation velocity of over 1000 km s⁻¹ near
the nucleus (Ghez et al. 2000; Takamiya and Sofue 2000). Ghez et al. (2000)
and Ferrarese and Merritt (2002) have observed Keplerian motion to within
one part in 100 in elliptical orbits of stars that are from a few 100 pc to a
few 1000 pc from the center of spiral galaxies. The Keplerian characteristic
decline in rotation velocity is sometimes seen in Hα RCs. This is followed
by a gradual rise to a knee or sharp change of slope at a rotation velocity
of less than 300 km s⁻¹. The outer RC is beyond the knee. Interacting
galaxies often show perturbed outer RCs. The outer part of an RC is often
relatively flat, with rising and falling RCs occasionally being found.
The particles most often measured in the disk region of a galaxy are
hydrogen gas, by H I observation, and stars, by observing the Hα line. The
particles measured in the inner bulge region are stars, by observation of
Hα, CO, and other spectral lines. Also, the RC differs for different particles.
For example, the H I and Hα RCs for NGC 4321 (Sempere et al. 1995) differ
in the outer bulge and approach each other in the outer disk region.
McLure & Dunlop (2002) found that the mass in the central region of
spiral galaxies is 0.0012 of the mass of the bulge. Ferrarese and Merritt
(2002) reported that about 0.1% of a spiral galaxy's luminous mass is at
the center of galaxies and that the density of supermassive black holes
(SBHs) in the universe agrees with the density inferred from observation of
quasars. Merritt & Ferrarese (2001a) found similar results in their study
of the relationship of the mass M• of the central supermassive black hole
and the velocity dispersion σv (the M•-σv relation). Ferrarese (2002) found a
tight relation between the rotation velocity vc in the outer disk region and the
bulge velocity dispersion σc (vc-σc), which strongly supports a relationship
of a central force with the total gravitational mass of a galaxy. Wandel (2003)
showed that the M• of AGN galaxies and their bulge luminosity follow the same
relationships as ordinary (inactive) galaxies, with the exception of
narrow-line AGN. Graham et al. (2002, 2003, 2003b) found
D = (c/Ho) z,    (6.1)
where Ho (km s⁻¹ Mpc⁻¹) is the Hubble constant and c (km s⁻¹) is the speed
of light. The Ho occupies a pivotal role in current cosmologies. The methods
of calculating supernova distances, the cosmological microwave background
(CMB) power spectrum, weak gravitational lensing, cluster counts, baryon
oscillation, the expansion of the universe, and the fundamental aspects of the
Big Bang model depend on Ho and the Hubble law.
However, the determination of Ho has a large uncertainty, and different
researchers calculate different values. Figure 6.1 shows the calculated
redshift zH using Eq. (6.1), Ho = 70 km s⁻¹ Mpc⁻¹, and the distance Da
calculated using Cepheid variable stars for 32 galaxies (Freedman et al.
2001; Macri et al. 2001) versus the measured galactocentric redshift zm.
The correlation coefficient of zH versus zm is 0.80.
The deviation of the recession velocity of a galaxy from the straight
line of the Hubble Law is ascribed to a peculiar velocity of the photon-emitting
galaxy relative to earth (Binney and Merrifield 1998, page 439).
The deviation from the Hubble Law is the only means of determining the
peculiar velocity of a galaxy. The average peculiar velocity for all galaxies
is assumed to be zero on the scale at which the universe appears homogeneous.
The 2dFGRS (Peacock et al. 2001) suggests this scale is z > 0.2.
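The peculiar velocity inferred from the Hubble-law deviation can be sketched as follows. The linear cz conversion is an assumption appropriate only at the small redshifts of this sample, and the function name is not from the text:

```python
C_KMS = 299792.458  # speed of light (km/s)

def peculiar_velocity(z_measured: float, z_hubble: float) -> float:
    """First-order peculiar velocity (km/s) from the deviation of the
    measured redshift from the Hubble-law prediction: v_pec ~ c*(zm - zH)."""
    return C_KMS * (z_measured - z_hubble)
```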
The circles in Fig. 6.1 denote data for galaxies in the general direction
of (l, b) = (290° ± 20°, 75° ± 15°). Aaronson et al. (1982) found the
peculiar velocity field in the local supercluster is directed toward NGC 4486
(Messier 087) with (l, b) ≈ (284°, 74°) at a speed of 331 ± 41 km s⁻¹. This
has been called the Virgocentric infall. NGC 4486 is a peculiar, large,
elliptical galaxy with strong X-ray emissions. In addition, Lilje et al. (1986)
detected a quadrupolar tidal velocity field from spiral galaxy data in addition
to the Virgocentric infall, pointing toward (l, b) = (308° ± 13°, 13° ± 9°)
at a speed of 200 km s⁻¹. NGC 5128 (Centaurus A) at (l, b) = (310°, 19°)
and at a distance of 3.84 ± 0.35 Mpc (Rejkuba 2004) is a galaxy with
properties (Israel 1998) similar to NGC 4486. Lynden-Bell et al. (1988) found
elliptical galaxies at distances in the 2000-7000 km s⁻¹ range are streaming
toward a Great Attractor at (l, b) = (307° ± 13°, 9° ± 8°). Centaurus B
at (l, b) = (310°, 2°) is a galaxy with properties similar to NGC 4486. In
Figure 6.1: Plot of the calculated redshift zH using Eq. (6.1) and D calculated using
Cepheid variable stars for 32 galaxies (Freedman et al. 2001; Macri et al. 2001) versus
the measured redshift zm. The straight line is a plot of zH = zm. The circles indicate the
data points for galaxies with (l, b) = (290° ± 20°, 75° ± 15°). (Reprinted with permission
of Elsevier (Hodge 2006a).)
a more recent analysis, Hudson et al. (2004) suggested a bulk flow of 225
km s⁻¹ toward (l, b) ≈ (300°, 10°). However, the total mass in these
directions appears to be insufficient to account for the peculiar velocity fields
using Newtonian dynamics.
Burbidge (1968) found a zm periodicity of ≈0.06 for approximately
70 QSOs. Duari et al. (1992) confirmed these claims of periodicity using
various statistical tests on the zm of 2164 QSOs. However, another claim
(Karlsson 1977) that ln(1 + zm) is periodic with a period of 0.206 was found
to be unsupported. Bell (2004, and references therein) offered further evidence
of periodic zm in QSOs.
Tifft (1996, 1997) found discrete velocity periods of zm of spiral galaxies
in clusters. The discrete velocity periods of zm showed an octave, or
doubling, nature. Bell et al. (2004, and references therein) confirmed the
existence of the discrete velocity periods of zm for 83 ScI galaxies. Russell
(2005, and references therein) presented evidence that redshifts from normal
spiral galaxies have an intrinsic component that causes the appearance
of peculiar velocities in excess of 5000 km s⁻¹.
This chapter uses galaxy and cluster observations to derive the characteristics
of a scalar potential model (SPM). The SPM suggests spiral
galaxies are Sources of the scalar potential field and early-type galaxies are
Sinks of the scalar potential field. The cluster observations support the
movement of matter from Source galaxies to Sink galaxies. An equation is
derived that recovers Eq. (6.1) for cosmological distances from data that are
anisotropic and inhomogeneous. The resulting model is used to calculate the
redshift of photons for a sample of 32 galaxies with known Da. The calculated
redshift zc has a correlation coefficient of 0.88 with zm. Further, the
SPM suggests the discrete variations in zm (Bell 2004; Bell et al. 2003,
2004; Tifft 1996, 1997) are consistent with the SPM.
6.1 Model
The SPM postulates the existence of a scalar potential ρ (erg) field with
the characteristics to cause the observed differences in spiral and elliptical
galaxies. The gradient of ρ is proportional to a force F⃗s (dyne) that acts on
matter,

F⃗s = Gs ms ∇⃗ρ,    (6.2)

where the arrow over a parameter denotes a vector, Gs is a proportionality
constant analogous to the gravitational constant G, and ms is a
property of the test particle on which the F⃗s acts. Because the ms property
of matter is currently unidentified, call the units of ms cs.
The SPM suggests F⃗s exerts a force to repel matter from spiral galaxies
and to attract matter to early-type galaxies. Therefore, the ρ is highest at
the center of spiral galaxies as a point Source and lowest at the center of
early-type galaxies as a point Sink. Call spiral galaxies Sources and early-type
galaxies Sinks.
Because the scalar field due to Sources is highest at point Sources, it
obeys the inverse square law as does gravity. Therefore,

ρ = Kρ/r,    (6.3)

(6.4)

ρp = Σ_{i=1}^{Nsource} Kρi/rpi + Σ_{l=1}^{Nsink} Kρl/rpl,    (6.5)

where the subscript, lower-case, italic, roman letters are indices; p is the
point where ρp is evaluated; rpi (Mpc) and rpl (Mpc) are the distances to the
point where ρp is evaluated from the ith Source and lth Sink, respectively;
and Nsource and Nsink are the number of Sources and Sinks, respectively.
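Eq. (6.5) is a direct superposition of 1/r terms, so it can be evaluated numerically in a few lines. A minimal sketch; the galaxy positions, the single shared strength k_rho, and the sign convention (positive Sources, negative Sinks) are illustrative assumptions, not values from the text:

```python
import math

def rho_at_point(p, sources, sinks, k_rho=1.0):
    """Superpose the 1/r contributions of Eq. (6.5) at point p.
    sources and sinks are lists of (x, y, z) positions in Mpc; k_rho is a
    single per-galaxy strength, taken positive for Sources, negative for Sinks."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    total = sum(k_rho / dist(p, s) for s in sources)   # Sources raise rho
    total += sum(-k_rho / dist(p, s) for s in sinks)   # Sinks lower rho
    return total
```

A point midway between an equal-strength Source and Sink sees ρ = 0, which is the cancellation that motivates the cell construction below.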
In the universe, Nsource and Nsink are very large, perhaps effectively
infinite. The boundary condition of a very large universe produces
considerable ambiguity in Eq. (6.5). One way to resolve this Olbers'-paradox
type condition is to postulate that ρ < 0 near Sinks and ρ > 0 near Sources.
Call the volume with a simply connected surface containing equal Source and
Sink strengths a cell. This is analogous to the electric charges in an atom.
As the universe
GMeff/r,    (6.6)

(Gs ms)/(G mg),    (6.7)

(Gs ms/G mg) |ρ|,    (6.8)

where, for simplicity, the mass of the test particle is assumed to be constant
over time, r (kpc) is the radial distance of the test particle, M (M☉) is the
mass inside the sphere of radius r, | | indicates absolute value, and the
inertial mass m equals the gravitational mass mg of the test particle (Will
2001).
Because the outer RCs of elliptical galaxies are Keplerian (Romanowsky
et al. 2003), the total force on mass must be centrally directed, acting along
the radius of the elliptical galaxy. The F⃗s and the gravitational force F⃗g
of the surrounding mass of the galaxy are directed radially inward. Thus,
the Meff of a Sink galaxy appears greater than the Newtonian dynamical
expectation, as the Virgocentric infall and Great Attractor phenomena
require.
In spiral galaxies the various rotation velocity curves can result if F⃗s is
directed radially outward and F⃗g is directed radially inward, with the ms/mg
ratio varying with r.
The terms of Eqs. (6.7) and (6.8) mimic a hidden mass. However,
because M is the total mass, these terms are massless.
Because the H I and Hα RCs differ, the ms/mg factor must be different
for different matter (elemental) types. The ms cannot be linearly related
to the mass. Another characteristic of matter must be the characteristic of
ms. Thus, the M of a Source galaxy is greater than the Newtonian dynamical
expectation.
The nature of the ms may (1) be derived from a property of matter, (2) be
independent of matter, or (3) be the cause both of ρ and of matter. Because

Meff = M + K
6.2 Redshift model

Ne/No = z + 1,    (6.9)

(6.10)

(6.11)

Ne / [Nmin + Ne exp(Kv ∫₀^D dV)].    (6.12)

For the D and change of N considered herein, Cs is considered a constant.
For greater distances, where the total change of N is relatively larger
than considered herein, the Cs is a function of N and ρ at the position of
the photon.
The path-averaged ρ is

⟨ρ⟩ = (1/D) ∫₀^D ρx dx,    (6.13)
where dx is the incremental distance x (Mpc) traveled by the photon.
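The path average of Eq. (6.13) can be evaluated with any quadrature rule. A trapezoidal sketch, where the profile rho_of_x is a hypothetical stand-in for the summed galaxy contributions along the line of sight (function name assumed):

```python
import numpy as np

def mean_rho(rho_of_x, d_mpc, n=10_001):
    """Path average <rho> = (1/D) * integral_0^D rho(x) dx (Eq. 6.13),
    evaluated with the trapezoidal rule on n sample points."""
    x = np.linspace(0.0, d_mpc, n)
    y = rho_of_x(x)
    integral = float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)
    return integral / d_mpc
```

For a constant profile the average equals the constant, and for a linearly varying profile it equals the midpoint value, as expected of a mean.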
The emitted luminosity of a galaxy is proportional to the flux of photons
from the galaxy and is assumed to be isotropic. Other factors, such as the
K-correction, were considered too small for the D of the sample. For a
spiral galaxy to exist, Fg must balance the Fs. Matter is ejected until this
(6.14)

m/mag. − Ext/mag. − M/mag. = 25 + 5 log₁₀(D/Mpc);    (6.15)

(6.16)

(6.17)

(6.18)
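Eq. (6.15) is the standard distance modulus with an extinction term; solving it for D gives a short helper. A sketch; the function name and argument order are not from the text:

```python
def distance_mpc(m_app: float, ext: float, m_abs: float) -> float:
    """Distance (Mpc) from the distance modulus of Eq. (6.15):
    m - Ext - M = 25 + 5*log10(D/Mpc)."""
    return 10.0 ** ((m_app - ext - m_abs - 25.0) / 5.0)
```

For example, an apparent magnitude 10 galaxy with absolute magnitude -20 and zero extinction has a distance modulus of 30 and therefore sits at 10 Mpc.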
Sum Eqs. (6.17) and (6.18) over all Source and Sink galaxies, respectively.
Because the mass ejected from Source galaxies goes to Sink galaxies,
the combined Mg = 0. Because the measured CMB radiation is nearly an
ideal black-body radiation (Mather et al. 1990, 1999), the sum of the LI
terms for all Sources equals the sum of the LI terms for all Sinks. Equating
the sum of all matter emitted from Source galaxies to the sum of all matter
entering Sink galaxies yields,

K = Σ_{i=1}^{Nsources} Ki / Σ_{k=1}^{Nsink} Kk,    (6.19)

where Kl/Kl = K/K.
Because V is a function of x from the observer to a point along the line
of sight and time t,

dV(x, t) = (∂V/∂x) dx + (∂V/∂t) dt.    (6.20)

Combining Eqs. (6.11), (6.12), (6.13), and (6.20) yields,

1/(zc + 1) = Kmin + e^X,    (6.21)

where

X = Kdp DP + Kd D + Kp P + Kf F + Kvp P ve,    (6.22)
where: (1) Relatively small terms, such as terms involving the relative ρ
of the emitter and observer, were ignored. (2) The Kmin = Nmin/Ne is a
constant for a given Ne. (3) The P = ∫₀^D ρp dx. (4) The F = ∫₀^D [(∂ρp/∂x)
− Kco] dx to a first-order approximation. (5) The Kdp (erg⁻¹ Mpc⁻²), Kd
(Mpc⁻¹), Kp (erg⁻¹ Mpc⁻¹), Kf (erg⁻¹), Kco (erg Mpc⁻¹), and
Kvp (erg⁻¹ Mpc⁻¹ deg.⁻¹) are constants. (6) The ρ is posited to be constant
over time.
over time. (7) The relative velocity of the emitting and observing galaxies
causes a change in V , hence N, and has three possible causes. One is the
expansion of our universe. This component is linearly related to (Kdp P +
Kd )D. The second cause is due to the possible peculiar velocity of the
Milky Way relative to the reference frame derived by summing over all
Sources and Sinks. Another cause derives from the inaccuracy of defining
the reference frame because the galaxies directly on the other side of the
Milky Way center from earth are unobservable from earth. The component
ve deriving from the second and third causes is proportional to the cosine
of the angular difference between the direction of the target galaxy and the
direction of ve . Thus,
ve = cos(90° − Glat) cos(90° − Klat)
   + sin(90° − Glat) sin(90° − Klat) cos(Glon − Klon),    (6.23)
where Glat (degrees) and Glon (degrees) are the galactic latitude and longitude, respectively, of the emitting galaxy; and Klat (degrees) and Klon
(degrees) are the galactic latitude and galactic longitude, respectively, of
the direction of ve .
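Eq. (6.23) is the spherical-trigonometry cosine of the angle between two directions given in galactic coordinates. A direct transcription (the function name is assumed):

```python
import math

def cos_angle(glat, glon, klat, klon):
    """Eq. (6.23): cosine of the angle between the target-galaxy direction
    (Glat, Glon) and the direction (Klat, Klon) of ve; all angles in degrees."""
    r = math.radians
    return (math.cos(r(90.0 - glat)) * math.cos(r(90.0 - klat))
            + math.sin(r(90.0 - glat)) * math.sin(r(90.0 - klat))
            * math.cos(r(glon - klon)))
```

Coincident directions give +1 and antipodal directions give -1, matching the cosine weighting of the ve component described above.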
Kρ = (Σ_{i=1} …) / (Σ_{k=1}^{Nsink} …) erg Mpc q⁻¹.    (6.24)

6.3 Results
The sample galaxies were selected from the NED database². The selection
criteria were that the heliocentric redshift zmh be less than 0.03 and that
the object be a galaxy. The parameters obtained from the NED database
included the galaxy name, Glon, Glat, zmh, the morphology, the B-band
apparent magnitude mb (mag.) as defined by NED, and the galactic
extinction Ext (mag.). The zm was calculated from zmh.
The 21-cm line width W20 (km s⁻¹) at 20 percent of the peak and the
inclination in (arcdegrees) between the line of sight and the polar axis were
obtained from the LEDA database³ when such data existed.
The constants to be discovered are the constants of Eq. (6.21), the cell
limitation of rpi and rpl, and Kρ. Calculating the constants was done by
making the following simplifications: (1) Estimate the D to the sample
galaxies (see Appendix .1). (2) Galaxies with an unlisted morphology in
the NED database were considered to have negligible effect. An alternate
method may be to assign a high value of mb to such galaxies. This option
was rejected because a large number of such galaxies were from the 2dFGRS
and 2MASS. These surveys include only limited areas of the sky. Therefore,
including the 2dFGRS and 2MASS galaxies with unlisted morphology in the
calculation would introduce a selection bias into the sample. (3) Galaxies
with an unlisted mb were assigned Mb = −11 mag. (4) Objects with
Ext = −99 in NED were assigned an Ext = 0 mag. (5) All the sample
galaxies were considered mature and stable. (6) Galaxies with a spiral
(non-lenticular), barred, or ringed morphology in the NED database were
considered Sources. All other galaxies were considered Sinks. The result
was a sample of 22,631 Source galaxies and 7,268 Sink galaxies.
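Simplification (6) above amounts to a string test on the NED morphology code. A rough sketch; the specific prefix tests are an assumed reading of the codes, not the author's actual procedure:

```python
def classify(morphology: str) -> str:
    """Assign Source/Sink per simplification (6): spiral (non-lenticular),
    barred, or ringed morphologies are Sources; all others are Sinks.
    The prefix tests below are an assumed reading of NED morphology codes."""
    m = morphology.strip().upper()
    if m.startswith("S0"):                          # lenticular -> Sink
        return "Sink"
    if m.startswith("S") or m.startswith("(R)S"):   # SA, SB, SAB, ringed spirals
        return "Source"
    return "Sink"                                   # E, Im, unlisted, ...
```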
Define the luminosity ratio Lr as the ratio of the sum of the luminosities
of all Sources within a limiting distance Dl divided by the sum of the
luminosities of all Sinks within Dl.
²The NED database is available at http://nedwww.ipac.caltech.edu. The data were obtained from
NED on 5 May 2004.
³The LEDA database is available at http://leda.univ-lyon.fr. The data were obtained from LEDA
on 5 May 2004.
Figure 6.2: Plot of the luminosity ratio Lr for galaxies with a distance D less than the
limiting distance Dl versus Dl. (Reprinted with permission of Elsevier (Hodge 2006a).)
Lr = Σ_{i=1}^{Nsource(D<Dl)} Li / Σ_{k=1}^{Nsink(D<Dl)} Lk.    (6.25)
Figure 6.2 shows a plot of Lr versus Dl for the 29,899 sample galaxies.
Because the sample galaxies were limited to zmh < 0.03, the selection of
galaxies for D > 130 Mpc was incomplete. Therefore, from Eq. (6.24),
Kρ = 2.7 ± 0.1 erg Mpc q⁻¹.
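The luminosity ratio behind Fig. 6.2 can be sketched as follows, taking L proportional to 10^(−0.4 Mb) for absolute B magnitude Mb (the standard magnitude-luminosity relation; the input magnitude lists and function name are illustrative):

```python
def luminosity_ratio(source_mb, sink_mb):
    """Lr of Eq. (6.25): summed Source luminosity over summed Sink luminosity,
    with L proportional to 10**(-0.4*Mb) for absolute B magnitudes Mb."""
    lum = lambda mags: sum(10.0 ** (-0.4 * m) for m in mags)
    return lum(source_mb) / lum(sink_mb)
```

Two equal-magnitude Sources against one equal-magnitude Sink give Lr = 2, and each magnitude of brightening multiplies a galaxy's weight by 10^0.4.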
Form the linear relation,

zc = Kscm zm + Kicm,    (6.26)

where Kscm is the least-squares slope and Kicm is the least-squares intercept
of the presumed linear relationship between zc and zm. The constants of
Figure 6.3: Plot of the correlation coefficient Cc of Eq. (6.26) using the best values
of the constants to be discovered versus the distance Dlc (Mpc) limitation of rpi and
rpl. (Reprinted with permission of Elsevier (Hodge 2006a).)
Eq. (6.21) were adjusted to maximize the correlation coefficient of Eq. (6.26)
with Kscm → 1 and Kicm → 0.
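The least-squares constants and the correlation coefficient of Eq. (6.26) follow from the standard formulas; a self-contained sketch (the function name is assumed):

```python
import statistics as st

def fit_line(zm, zc):
    """Least-squares slope Kscm, intercept Kicm, and correlation coefficient
    for the presumed linear relation zc = Kscm*zm + Kicm (Eq. 6.26)."""
    mx, my = st.fmean(zm), st.fmean(zc)
    sxx = sum((x - mx) ** 2 for x in zm)
    syy = sum((y - my) ** 2 for y in zc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(zm, zc))
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5
```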
Figure 6.3 shows a plot of the correlation coefficient Cc of Eq. (6.26),
using the best values of the constants to be discovered for each data point,
versus the distance Dlc (Mpc) limitation of rpi and rpl. Peaks of Cc ≈ 0.88
were obtained at Dlc = 15 Mpc and Dlc = 75 Mpc. Therefore, the rpi and
rpl were limited to 15 Mpc. Of the sample galaxies, 3,480 Source galaxies
and 1,604 Sink galaxies were within 15 Mpc of at least one of the Category A
galaxies. Note the distances to the close, major clusters (Virgo and Fornax)
vary between 15 Mpc and 20 Mpc. The distances of the next farther clusters
(Pisces, Perseus, and Coma) vary between 40 Mpc and 60 Mpc.
Figure 6.4 shows a plot of zc versus zm for the 32 Category A galaxies.
Figure 6.4: Plot of the calculated redshift zc versus the measured redshift zm for 32
Category A galaxies (Freedman et al. 2001; Macri et al. 2001). The straight line is a
plot of zc = zm. The circles indicate the data points for galaxies with (l, b) = (290°
± 20°, 75° ± 15°). (Reprinted with permission of Elsevier (Hodge 2006a).)
Tables 6.1 and 6.2 list the data for the 32 Category A galaxies. Table 6.3
lists the calculated constants of Eq. (6.21). The Kscm = 1.0 ± 0.1 and
Kicm = (0 ± 9) × 10⁻⁵ at 1σ, with a correlation coefficient of 0.88.
The error bars indicate the zc for 1.05Da and 0.95Da, which is consistent
with the error cited in Freedman et al. (2001). For some galaxies, such as
NGC 4548, both of the recalculations yield a higher zc than the
calculation using Da.
If the non-target (other) galaxy is far from the photon's path, it has
little individual effect. If the other galaxy is close to the photon's path,
an error in its distance changes the distance to the photon's path by the
tangent of a small angle. Also, because Mb is calculated using D, the slight
Table 6.1: The data for the 32 Category A galaxies.

Galaxy | Morphology^a | Glon (deg.) | Glat (deg.) | Da (Mpc) | zm (10⁻³) | zH (10⁻³) | zc (10⁻³) | P (10⁻¹⁰ erg Mpc) | F (10⁻¹² erg)
IC 1613 | IB(s)m | 130 | -61 | 0.65 | -0.518 | 0.156 | -1.234 | 0.25 | 0.04
NGC 0224 | SA(s)b LINER | 121 | -22 | 0.79 | -0.408 | 0.190 | -1.327 | 0.34 | 0.02
NGC 0598 | SA(s)cd HII | 134 | -31 | 0.84 | -0.148 | 0.202 | -1.240 | 0.32 | 0.04
NGC 0300 | SA(s)d | 299 | -79 | 2.00 | 0.336 | 0.480 | 0.687 | 0.81 | 0.09
NGC 5253 | Im pec;HII Sbrst | 315 | 30 | 3.15 | 0.903 | 0.756 | 1.219 | 0.82 | 0.15
NGC 2403 | SAB(s)cd HII | 151 | 29 | 3.22 | 0.758 | 0.773 | -0.479 | 0.80 | 0.15
NGC 3031 | SA(s)ab;LINER Sy1.8 | 142 | 41 | 3.63 | 0.243 | 0.871 | -0.249 | 0.92 | 0.15
IC 4182 | SA(s)m | 108 | 79 | 4.49 | 1.231 | 1.078 | 0.735 | 1.01 | 0.21
NGC 3621 | SA(s)d | 281 | 26 | 6.64 | 1.759 | 1.594 | 2.744 | 1.37 | 0.29
NGC 5457 | SAB(rs)cd | 102 | 60 | 6.70 | 1.202 | 1.608 | 1.406 | 2.20 | 0.07
NGC 4258 | SAB(s)bc;LINER Sy1.9 | 138 | 69 | 7.98 | 1.694 | 1.915 | 1.896 | 1.81 | 0.34
NGC 0925 | SAB(s)d HII | 145 | -25 | 9.16 | 2.216 | 2.198 | 1.566 | 1.68 | 0.40
NGC 3351 | SB(r)b;HII Sbrst | 234 | 56 | 10.00 | 2.258 | 2.400 | 2.522 | 1.77 | 0.44
NGC 3627 | SAB(s)b;LINER Sy2 | 242 | 64 | 10.05 | 2.145 | 2.412 | 2.753 | 1.82 | 0.43
NGC 3368 | SAB(rs)ab;Sy LINER | 234 | 57 | 10.52 | 2.659 | 2.525 | 2.703 | 1.89 | 0.45
NGC 2541 | SA(s)cd LINER | 170 | 33 | 11.22 | 1.963 | 2.693 | 1.492 | 1.91 | 0.49
NGC 2090 | SA(rs)b | 239 | -27 | 11.75 | 2.490 | 2.820 | 3.985 | 2.16 | 0.51
NGC 4725^b | SAB(r)ab pec Sy2 | 295 | 88 | 12.36 | 4.026 | 2.966 | 3.848 | 2.32 | 0.52
NGC 3319 | SB(rs)cd HII | 176 | 59 | 13.30 | 2.497 | 3.192 | 2.669 | 2.38 | 0.58
NGC 3198 | SB(rs)c | 171 | 55 | 13.80 | 2.281 | 3.312 | 2.546 | 2.44 | 0.60
NGC 2841 | SA(r)b;LINER Sy1 | 167 | 44 | 14.07 | 2.249 | 3.377 | 2.070 | 2.49 | 0.58
NGC 7331 | SA(s)b LINER | 94 | -21 | 14.72 | 3.434 | 3.533 | 3.850 | 2.12 | 0.62
NGC 4496A^b | SB(rs)m | 291 | 66 | 14.86 | 5.505 | 3.566 | 4.883 | 2.30 | 0.65
NGC 4536^b | SAB(rs)bc HII | 293 | 65 | 14.93 | 5.752 | 3.583 | 4.963 | 2.31 | 0.65
NGC 4321^b | SAB(s)bc;LINER HII | 271 | 77 | 15.21 | 5.087 | 3.650 | 4.306 | 2.29 | 0.65
NGC 4535^b | SAB(s)c HII | 290 | 71 | 15.78 | 6.325 | 3.787 | 4.803 | 2.32 | 0.68
NGC 1326A | SB(s)m | 239 | -56 | 16.14 | 5.713 | 3.874 | 5.478 | 2.51 | 0.71
NGC 4548^b | SBb(rs);LINER Sy | 286 | 77 | 16.22 | 1.476 | 3.893 | 4.408 | 2.18 | 0.69
NGC 4414 | SA(rs)c? LINER | 175 | 83 | 17.70 | 2.416 | 4.248 | 4.557 | 2.94 | 0.75
NGC 1365 | (R)SBb(s)b Sy1.8 | 238 | -55 | 17.95 | 5.049 | 4.308 | 5.589 | 2.57 | 0.77
NGC 1425 | SA(rs)b | 228 | -53 | 21.88 | 4.666 | 5.251 | 5.542 | 2.52 | 0.94
NGC 4639^b | SAB(rs)bc Sy1.8 | 294 | 76 | 21.98 | 3.223 | 5.275 | 5.315 | 2.33 | 0.95

^a Galaxy morphological type from the NED database.
^b The data points for galaxies with (l, b) = (290° ± 20°, 75° ± 15°).
change in rpi and rpl is partially offset by the change in Mb . Therefore, the
error in D of other galaxies is negligible relative to the effect of a Da error
of the target galaxy.
The Category A galaxies are within 22 Mpc of earth. The X term of
Eq. (6.21) predominates and Kmin is relatively small for distances less than
a few Gpc. Therefore, z ≈ exp(−X) − 1 ≈ −X. Figure 6.5 is a plot of
Da versus X. The straight line is a plot of the least-squares fit of the data.
The line is

Da = −(2700 ± 500 Mpc)X − (1.4 ± 0.8 Mpc) ≈ (c/Hspm) z,    (6.27)
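Eq. (6.21) inverts directly for zc, and for small |X| with Kmin negligible it reduces to the linear relation between z and X shown in Fig. 6.5. A minimal sketch (the function name is assumed; the example X value is taken from Table 6.2):

```python
import math

def z_calc(x_exponent: float, k_min: float = 0.0) -> float:
    """Calculated redshift from Eq. (6.21): 1/(zc + 1) = Kmin + exp(X),
    so zc = 1/(Kmin + exp(X)) - 1."""
    return 1.0 / (k_min + math.exp(x_exponent)) - 1.0

# For small |X| and negligible Kmin, zc ~ exp(-X) - 1 ~ -X: the linear regime
# of Fig. 6.5.  X = -7.41e-3 is the Table 6.2 value for NGC 1326A.
```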
Table 6.2: The components of Eq. (6.21) for each Category A galaxy.

Galaxy | Kdp DP (10⁻³) | Kp P (10⁻³) | Kf F (10⁻³) | Kvp P ve (10⁻³) | X (10⁻³)
IC 1613 | 0.009 | -0.563 | -0.204 | 0.06 | -0.70
NGC 0224 | 0.014 | -0.770 | -0.103 | 0.25 | -0.61
NGC 0598 | 0.014 | -0.727 | -0.226 | 0.25 | -0.69
NGC 0300 | 0.086 | -1.836 | -0.492 | -0.38 | -2.63
NGC 5253 | 0.137 | -1.862 | -0.780 | -0.65 | -3.16
NGC 2403 | 0.136 | -1.808 | -0.760 | 0.98 | -1.46
NGC 3031 | 0.177 | -2.091 | -0.778 | 1.00 | -1.69
IC 4182 | 0.240 | -2.292 | -1.067 | 0.45 | -2.67
NGC 3621 | 0.484 | -3.121 | -1.523 | -0.52 | -4.68
NGC 5457 | 0.783 | -5.005 | -0.380 | 1.26 | -3.34
NGC 4258 | 0.767 | -4.117 | -1.761 | 1.28 | -3.83
NGC 0925 | 0.814 | -3.807 | -2.097 | 1.58 | -3.50
NGC 3351 | 0.939 | -4.020 | -2.267 | 0.89 | -4.46
NGC 3627 | 0.969 | -4.129 | -2.254 | 0.72 | -4.69
NGC 3368 | 1.054 | -4.292 | -2.337 | 0.93 | -4.64
NGC 2541 | 1.135 | -4.333 | -2.560 | 2.33 | -3.43
NGC 2090 | 1.344 | -4.899 | -2.661 | 0.29 | -5.92
NGC 4725^a | 1.521 | -5.269 | -2.711 | 0.67 | -5.78
NGC 3319 | 1.681 | -5.414 | -3.021 | 2.15 | -4.61
NGC 3198 | 1.786 | -5.542 | -3.116 | 2.39 | -4.48
NGC 2841 | 1.860 | -5.662 | -3.008 | 2.80 | -4.01
NGC 7331 | 1.656 | -4.818 | -3.234 | 0.61 | -5.79
NGC 4496A^a | 1.812 | -5.223 | -3.376 | -0.03 | -6.82
NGC 4536^a | 1.825 | -5.235 | -3.363 | -0.12 | -6.90
NGC 4321^a | 1.848 | -5.204 | -3.395 | 0.51 | -6.24
NGC 4535^a | 1.943 | -5.274 | -3.524 | 0.12 | -6.74
NGC 1326A | 2.143 | -5.687 | -3.679 | -0.19 | -7.41
NGC 4548^a | 1.875 | -4.952 | -3.605 | 0.34 | -6.34
NGC 4414 | 2.763 | -6.685 | -3.923 | 1.35 | -6.49
NGC 1365 | 2.447 | -5.839 | -3.989 | -0.14 | -7.52
NGC 1425 | 2.919 | -5.715 | -4.910 | 0.23 | -7.47
NGC 4639^a | 2.718 | -5.297 | -4.920 | 0.25 | -7.25

^a The data points for galaxies with (l, b) = (290° ± 20°, 75° ± 15°).
Table 6.3: The calculated constants of Eq. (6.21): Kmin, Kdp, Kd^a, Kp, Kf, Kco, Kvp, Klat, and Klon.
Figure 6.5: Plot of distance Da (Mpc) versus exponent factor X of the redshift calculation
for 32 Category A sample galaxies. The straight line indicates Da = −2600X − 1.
(Reprinted with permission of Elsevier (Hodge 2006a).)
6.4
X factors
The components of X in Eq. (6.22) are a directional effect and effects
calculated from the integration of ρp and ∂ρp/∂x across x. The X value
may be calculated by the summation of the galaxy effects of the various
regions along the light path. The Milky Way effect Xmw is caused by the
Source in the Galaxy. Because it is close to earth, ρ is high and limited to
x < 0.05 Mpc, as shown in the examples NGC 1326A (Fig. 6.6), NGC 4535
(Fig. 6.7), and NGC 1425 (Fig. 6.8).
The direction-dependent effect Xdir is caused by the ve term of Eq. (6.21)
and the Sources and Sinks in the local group, over 0.05 Mpc < x < 2 Mpc.
The variation in Xdir due to the cluster structure of our local group is seen
in Figs. 6.6, 6.7, and 6.8. Figure 6.6 shows the light from the target galaxy
passing first through the outer shell of Source galaxies of the local cluster,
with a large hump in the ρ-x plot, and then through the Sink galaxies of
the core of the local cluster, with a large U in the ρ-x plot. Figure 6.8
shows light from the target galaxy passing first through a portion of the
outer shell of Source galaxies of the local cluster, with a small hump in the
ρ-x plot, and then near but not through the Sink galaxies of the core of
the local cluster, with a barely discernible U in the ρ-x plot. Figure 6.7
shows the light from the target galaxy passing through only a portion of
the outer shell of Source galaxies of the local group, with neither a hump
nor a U being discernible.
Figure 6.6: Plot of scalar potential ρ (10⁻⁹ erg) versus distance x (Mpc) along our line
of sight to NGC 1326A at location (l, b, z) ≈ (239°, −56°, 0.0057134). (Reprinted with
permission of Elsevier (Hodge 2006a).)
Figure 6.7: Plot of scalar potential ρ (10⁻⁹ erg) versus distance x (Mpc) along our
line of sight to NGC 4535 at location (l, b, z) ≈ (290°, 71°, 0.0063253). (Reprinted with
permission of Elsevier (Hodge 2006a).)
Figure 6.8: Plot of scalar potential ρ (10⁻⁹ erg) versus distance x (Mpc) along our line
of sight to NGC 1425 at location (l, b, z) ≈ (228°, −53°, 0.0046664). (Reprinted with
permission of Elsevier (Hodge 2006a).)
The ρ-field Source effect Xsource is caused by the light passing very close
to a Source galaxy outside of a cluster. This effect is seen in the plot of
NGC 4535 (Fig. 6.7) as a sharp peak at x ≈ 10.4 Mpc. The Xsource in
the plot of NGC 4535 (Fig. 6.7) appears to be caused by NGC 4526 (SAB,
Db = 10.41 Mpc, Mb = −19.44 mag., A = 0.5°), where A (arcdegrees) is
the angular separation of other galaxies from the identified target galaxy.
The ρ-field Sink effect Xsink is caused by the light passing very close to a
Sink galaxy outside of a cluster. This effect is seen in the plot of NGC 4535
(Fig. 6.7) as a sharp V at x ≈ 5.6 Mpc. The Xsink in the plot of NGC 4535
(Fig. 6.7) appears to be caused by Messier 095 (E5, De = 5.59 Mpc, Mb =
−18.23 mag., A = 3.9°).
The galaxy group effect Xg is seen in the plots of NGC 1326A (Fig. 6.6)
and NGC 1425 (Fig. 6.8) as sharp steps associated with extended ρ values
slightly higher than Xd at x ≈ 7 Mpc and x ≈ 8.5 Mpc. The step and
extended areas of NGC 1326A and NGC 1425 (Fig. 6.8) are caused by
several low-luminosity Source galaxies intermixed with a few low-luminosity
Sink galaxies. Unlike in clusters, the field galaxies have a higher percentage
of Source galaxies and a lower density (Binney and Merrifield 1998). The
Xg is distinguished from Xsource by the larger extent (x > 0.5 Mpc) of the group.
The total cell effect Xtc is caused by light passing through or near
the center of a cluster. The Xtc is a broad V shape. This is shown
in NGC 1326A (Fig. 6.6) at x = 11.6 Mpc, NGC 4535 (Fig. 6.7) at
x = 11.6 Mpc, and NGC 1425 (Fig. 6.8) at x = 12.6 Mpc. Within 5.0°
and 4 Mpc of the point of the V, NGC 1326A has 14 Sinks and 1 Source,
NGC 4535 has 34 Sinks and 20 Sources, and NGC 1425 has 5 Sinks and
zero Sources.
Another cluster effect Xshell is caused by the high density of Source
galaxies in the shell of a cluster. This is shown in NGC 1326A (Fig. 6.6)
at x = 14.4 Mpc, NGC 4535 (Fig. 6.7) at x = 12.9 Mpc, and NGC 1425
(Fig. 6.8) at x = 14.4 Mpc.
Another cluster effect Xcore is caused by the high density of Sinks
galaxies in the core of a cluster and by the target galaxy being in the
shell of a cluster. This is shown in NGC 4535 (Fig. 6.7) at x = 15 Mpc and
NGC 1425 (Fig. 6.8) at x = 21.1 Mpc. This effect is caused by light passing
through the core of a cluster. Therefore, the core acts as a large Sink. This
effect occurs only if the target galaxy is in the far side of the cluster from
us.
Another cluster effect Xcore+ is caused by the high density of Sources
galaxies in the shell of a cluster and by the target galaxy being in the shell
of a cluster. This is shown in NGC 1326A (Fig. 6.6) at x = 15.5 Mpc. This
effect occurs only if the target galaxy is in the near side of the cluster from
us. Although there is a slight dip in the D plot, the net effect is the
values are higher than the Xd values.
The net core effect is either the Xcore+ or the Xcore depending on whether
the target galaxy is on the near or far side of the cluster, respectively.
The Category A galaxies used in Figs. 6.1 and 6.2 were obtained from the
Key Project results (Freedman et al. 2001) that established the currently
popular Hubble constant value of about 70 km s⁻¹ Mpc⁻¹. Examination of
Fig. 6.1 shows the appearance of two different types of distributions. The
data points for the galaxies in the sample with zH < 0.0024 and Da <
10 Mpc (close sample galaxies) have a tight, linear appearance. The data
points for the galaxies in the sample with zH > 0.0024 and Da > 10 Mpc (far
sample galaxies) have a scattered, nearly unrelated appearance. Figures 6.1
and 6.2 show that close sample galaxies, which are without an intervening
cluster, have an effective Ho ≈ 100 km s⁻¹ Mpc⁻¹ and a non-zero intercept
with a correlation coefficient of 0.93 (see the Category C distance calculation
in the Appendix). The two outlier galaxies omitted in the Category C
calibration have Da > 11 Mpc. The far sample galaxies have at least one
intervening cluster at 11 Mpc < x < 15 Mpc and have a Da–zm correlation
coefficient of 0.53. The SPM suggests this scatter is caused by the Xtc, Xcore,
and Xshell of intervening clusters. A selection bias toward galaxies
with Sink effects is expected because of the cluster cell structure. Further,
gravitational lensing observations have the X factor effects of galaxies close
to the light path.
The target galaxy effect Xt is caused by the Source or Sink in the target
galaxy. The Xt depends on the magnitude of the target galaxy like Xmw .
6.5
Beyond the local group, the major X effects are Xd , Xshell , Xtc , and Xcore .
The Xsource , Xsink , and Xg effects are secondary and tend to partially offset
each other. At x > 2 Mpc the Xd is the major effect on z. The major
variation of zm from the Hubble law is caused by the cluster factors Xtc ,
Xshell , and Xcore .
This effect is seen in Figs. 6.9, 6.10, 6.11, 6.12, and 6.13. The left plots
in Figs. 6.9 and 6.11 and the plots in Fig. 6.13 are z versus A from the target
Sink galaxy. The filled diamonds denote Sinks. The right plots in Figs. 6.9
and 6.11 show D of the galaxies versus A. The filled squares are distances
calculated using the Tully-Fisher relation (Tully and Fisher 1977). The
Category A galaxies (Cepheid galaxies) were used as calibration galaxies
for the Tully-Fisher relation.
Figures 6.10 and 6.12 show Glat versus Glon of the galaxies within approximately six arcdegrees surrounding the identified Sink. The angular
location of the identified Sink is marked by the crosshairs. The filled circles
denote the galaxies within one arcdegree of the identified Sink of Figs. 6.9
and 6.11, respectively. Source galaxies surround the Sink galaxies.
Figure 6.9 shows the Xcore effect around the Sink NGC 5353. The z
value of galaxies closer than the identified Sink is increased due to Xcore+.
The z value of galaxies farther than the identified Sink is decreased due to
Xcore. The overall effect is that the range of z values of galaxies around the
Figure 6.9: The left plot is of the measured redshift zm versus the angle A (arcdegrees)
subtended from NGC 5353 (S0), (l, b, z) = (82.61°, 71.63°, 8.0203×10⁻³). The open
diamonds indicate the data points for Source galaxies. The filled diamonds indicate the
data points for Sink galaxies. The right plot is the distance D (Mpc) from earth versus
A. The open squares indicate the data points for galaxies with the D calculated herein.
The filled squares indicate the data points for galaxies with the D calculated using the
Tully-Fisher relationship. (Reprinted with permission of Elsevier (Hodge 2006a).)
Table 6.4: The data for the Case Nos. of Fig. 6.13.

Case No.  Constellation    Galaxy      Morph.  Glonᵃ(°)  Glatᵃ(°)  D (Mpc)  z (10⁻³)  Ratio
1         Camelopardalis   UGC 05955   E       135       42        13.8     4.2959    1:2:5ᵇ
2         Canes Venatici   NGC 5797    S0/a    85        57        92.2     13.7083   1:2ᶜ
3         Cetus            NGC 0693    S0/a?   148       -54       13.2     5.4335    1:2:3:4:5ᵈ
4         Coma             VCC 0546    dE6     279       72        35.1     6.6833    1:2ᶜ
5         Fornax           NGC 0802    SAB(s)  294       48        17.2     4.4360    1:2:4:5
6         Grus             IC 5267B    S0      350       -62       17.74    5.5945    1:2:4:5ᵉ
7         Hercules         UGC 10086   S0/a?   29        46        21.3     7.6864    1:2:4ᶠ

ᵃ Rounded to the nearest degree.
ᵇ The data point at z ≈ 27.5×10⁻³ is 6.4×.
ᶜ Note the Source galaxies closer to earth.
ᵈ Note the presence of a number of Sinks at 17×10⁻³ < z < 21×10⁻³ causing the lapse in
the discrete ratio.
ᵉ Note the presence of additional Sinks at 1× and 2× and the odd Source at approximately
3.5×.
ᶠ Note the presence of additional Sinks at 2× causing the nearby Sources to have lower z.
identified Sink are tightened toward the z value of the identified Sink.
Figure 6.11 shows the additional effect of Xtc in the z values of more
distant target galaxies that also have nearby Sinks and whose light must
pass through another, closer cluster. The nearer cluster causes an incremental
increase in z, and the farther Sink in the cluster of the target galaxy
causes the distribution of z values to tighten. Therefore, the total effect is
of the two clusters. If the Xtc effects of the clusters are approximately equal,
then the z value of the farther group is nXtc, where n is an integer count of
the clusters the light passes through. The net effect is a near scarcity of z
values halfway between integer multiples of the z of the first cluster from
earth.
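The integer-multiple pattern can be sketched numerically; the per-cluster increment and the redshift values below are hypothetical, not values from the text:

```python
# If each intervening cluster adds roughly the same increment z_tc to a
# galaxy's redshift, measured z values cluster near n * z_tc for integer n,
# leaving a scarcity near half-integer multiples. All values hypothetical.
def nearest_multiple(z, z_tc):
    """Integer count n of intervening clusters that best explains z."""
    return round(z / z_tc)

z_tc = 2.0e-3                        # hypothetical per-cluster z increment
zs = [2.1e-3, 3.9e-3, 6.2e-3]        # hypothetical measured redshifts
counts = [nearest_multiple(z, z_tc) for z in zs]
```

Each measured z is assigned the integer multiple of the first cluster's step that lies nearest to it, mirroring the discrete ratios listed in Table 6.4.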
Figure 6.14 of the Perseus constellation shows the situation may become
very complex. The previous plots used situations with few Sink galaxies for
clarity.
Figure 6.10: Plot of galactic latitude Glat (arcdegrees) versus galactic longitude Glon
(degrees) approximately six arcdegrees around NGC 5353. The open circles indicate
the data points for galaxies more than one arcdegree from NGC 5353. The filled circles indicate the data points for galaxies within one arcdegree of NGC 5353. The +
or crosshairs indicate the position of NGC 5353. (Reprinted with permission of Elsevier (Hodge 2006a).)
Figure 6.11: The left plot is of the measured redshift zm versus the angle A
(arcdegrees) subtended from NGC 2636 (E0), M = −17.9 mag, (l, b, z) =
(140.15°, 34.04°, 7.3163×10⁻³). The open diamonds indicate the data points for Source
galaxies. The filled diamonds indicate the data points for Sink galaxies. The right plot
is the distance D (Mpc) from earth versus A. The open squares indicate the data points
for galaxies with the D calculated herein. The filled squares indicate the data points for
galaxies with the D calculated using the Tully-Fisher relationship.
Figure 6.12: Plot of galactic latitude Glat (arcdegrees) versus galactic longitude Glon
(degrees) approximately six arcdegrees around NGC 2636. The open circles indicate
the data points for galaxies more than one arcdegree from NGC 2636. The filled circles indicate the data points for galaxies within one arcdegree of NGC 2636. The +
or crosshairs indicate the position of NGC 2636. (Reprinted with permission of Elsevier (Hodge 2006a).)
Figure 6.13: Plots of the angle A (arcdegrees) subtended from a target galaxy versus the
measured redshift zm . The target galaxy is shown on the A = 0 axis as a large, filled
diamond. The open diamonds indicate the data points for Source galaxies. The filled
diamonds indicate the data points for Sink galaxies. The data for the target galaxies are
listed in Table 6.4. (Reprinted with permission of Elsevier (Hodge 2006a).)
Figure 6.14: The left plot is of the measured redshift zm versus the angle A (arcdegrees)
subtended from NGC 1282 (E), M = −19.8 mag, (l, b, z) = (150°, −13.34°, 7.4727×10⁻³).
The open diamonds indicate the data points for Source galaxies. The filled diamonds
indicate the data points for Sink galaxies. The right plot is the distance D (Mpc) from
earth versus A. The open squares indicate the data points for galaxies with the D
calculated herein. The filled squares indicate the data points for galaxies with the D
calculated using the Tully-Fisher relationship. (Reprinted with permission of Elsevier
(Hodge 2006a).)
6.6 Discussion
Chapter 7
Pioneer anomaly
That an unexplained blueshift exists in the electromagnetic (EM) radio signal
from the Pioneer 10 (P10) and Pioneer 11 (P11) spacecraft (the Pioneer
Anomaly, PA) is well established (Anderson et al. 2002; Toth and Turyshev
2006). Several models have been proposed to explain the PA (Anderson
et al. 2002). A currently popular interpretation of the PA is that the Pioneer
spacecraft are being subjected to a force that causes a sunward acceleration
aP = (8.74 ± 1.33)×10⁻⁸ cm s⁻². That aP ≈ cHo, where c (cm s⁻¹) is the
speed of light and Ho (s⁻¹) is the Hubble constant, suggests a cosmological
connection to the PA. However, the PA exceeds by at least two orders of
magnitude the general relativity (GR) corrections to Newtonian motion.
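The coincidence aP ≈ cHo can be checked to order of magnitude; the Ho value below is the commonly quoted 70 km s⁻¹ Mpc⁻¹ and the constants are standard, none of them specific to this text:

```python
# Order-of-magnitude check of a_P ~ c * H_o.
C = 2.9979e10            # speed of light (cm/s)
MPC_CM = 3.0857e24       # 1 Mpc in cm
H0 = 70.0e5 / MPC_CM     # H_o = 70 km/s/Mpc expressed in s^-1
a_cosmo = C * H0         # cm/s^2; comparable to a_P ~ 8.74e-8 cm/s^2
```

The product is about 7×10⁻⁸ cm s⁻², the same order as the measured aP, which is the basis of the suggested cosmological connection.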
The PA is experimentally observed as a frequency shift but expressed
as an apparent acceleration. The PA could be an effect other than a real
acceleration, such as a time acceleration (Anderson et al. 2002; Nieto and
Anderson 2005a) or an unmodeled effect on the radio signals.
Although unlikely, a currently unknown systematic effect is not entirely
ruled out.
That the PA is an acceleration is unproven. The acceleration nomenclature
is based on the unsupported hypothesis that the frequency shift is
a Doppler effect. Other phenomena cause frequency shifts, such as gravity
under the Weak Equivalence Principle as shown in the Pound-Rebka
experiment (Pound & Rebka 1960).
Data for the existence of aP from the Galileo, Ulysses, Voyager, and
Cassini spacecraft are inconclusive (Anderson et al. 2002; Nieto et al. 2005b).
There are other characteristics of the PA (Anderson et al. 2002) in
addition to the Sun directed blueshift. The PA has an apparent annual
periodicity.

7.1 Model
ρ = Σᵢ₌₁ᴺ GMᵢ/Rᵢ ,    (7.1)

(7.2)

X = Kdp Dl P + Kp P + Kf F ,    (7.3)

where the terms are defined in Hodge (2006a), Dl = 2D (AU) is the distance the
radio signal travels, and D (AU) is the geocentric distance to the spacecraft.
The P is a measure of the amount of ρ the EM signal travels through.
7.2 Results

7.2.1 Sample
The mass of the Kuiper belt of approximately 0.3 Earth masses (Teplitz
et al. 1999) and the asteroid belt of approximately one Earth mass were
included in the mass of the Sun. The ephemeris including GM of the Sun,
planets, dwarf planets, and moons of Saturn were obtained from JPL's
Horizons web site³ in November and December 2006. The planet barycenter
data were used for the calculation except for the Earth and its moon and
except when considering the Saturn encounter of P11. When considering
the Saturn encounter, the GM of the moons of Saturn without GM data
in the Horizons web site were calculated from the relative volume and mass
of the other moons of Saturn. The data were taken from the ephemeris
for 00h 00m of the date listed except for the Saturn encounter, where hourly
data were used.
The aP data were obtained from Table 2 of Nieto and Anderson (2005a).
The calculation of ρ starting from the surface of the Earth along the line-of-sight
(LOS) to the position of the spacecraft and back used a Visual Basic
program. Note the calculation of F is direction dependent. The Galaxy's
effective mass was calculated from the revolution of the Sun about the center
of the Galaxy and, for simplicity, assumed a spherically symmetric galactic
matter distribution. For the calculations, the Galaxy center of mass was
positioned at 8 kpc from the Sun in the direction of Sgr A*. The ρ field due
to the Source was assumed to be flat across the solar system. Therefore, the
effective mass at the center of the galaxy accounts for both the variation of
ρ from the Source and the true mass within the Galaxy (Hodge & Castelaz
2003b).
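The effective-mass estimate described above can be sketched with assumed solar-orbit values (v ≈ 220 km s⁻¹; the 8 kpc distance is from the text, the speed is not):

```python
# Circular-orbit estimate of the Galaxy's effective mass: M = v^2 * R / G,
# for a spherically symmetric matter distribution. Orbital speed assumed.
G = 6.674e-8             # gravitational constant (cm^3 g^-1 s^-2)
KPC_CM = 3.0857e21       # 1 kpc in cm
M_SUN_G = 1.989e33       # solar mass in g
v = 220.0e5              # assumed solar orbital speed (cm/s)
R = 8.0 * KPC_CM         # Sun-to-Sgr A* distance used in the text (cm)
m_eff_suns = v**2 * R / G / M_SUN_G   # effective mass in solar masses
```

This yields roughly 9×10¹⁰ solar masses, the order of magnitude expected for an enclosed galactic mass at the solar radius.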
Equation (7.2) was used to calculate the zc for each spacecraft on each
date. The calculated PA acceleration aPc (cm s⁻²) (Anderson et al. 2002)
is

aPc = c² zc / Dl .    (7.4)

The components of the zc values are listed in Table 7.2. Figure 7.1 plots the
aP and aPc values versus D for each spacecraft. The correlation coefficient
3 http://ssd.jpl.nasa.gov/horizons.cgi
value        units
1.30×10⁻²⁸   erg⁻¹ AU⁻²
2.10×10⁻²⁶   erg⁻¹ AU⁻¹
5.38×10⁻²²   erg⁻¹
5.98×10⁶     erg AU⁻¹
Table 7.2: The components of Eq. (7.2) and Asun for each data point.

Craft  Date    D (AU)  Kdp Dl P   Kp P      Kf F      X         Asun (°)
                       (10⁻¹⁴)    (10⁻¹⁴)   (10⁻¹⁴)   (10⁻¹³)
P10    82/19   25.80   -1.83      -5.74     15.40      0.78     124
P10    82/347  27.93   -2.06      -5.97     16.77      0.87     165
P10    83/338  30.68   -2.43      -6.42     18.54      0.97     167
P10    84/338  33.36   -2.82      -6.85     20.27      1.06     175
P10    85/138  36.57   -4.43      -9.81     21.17      0.69      12
P10    86/6    36.53   -3.34      -7.42     21.12      1.04     144
P10    87/80   40.92   -4.38      -8.66     23.96      1.09      70
P10    88/68   43.34   -4.76      -8.90     26.69      1.30      82
P10    89/42   45.37   -5.04      -8.99     26.82      1.28     109
P11    77/270   6.49   -0.24      -3.05      2.99     -0.03      43
P11    80/66    8.42   -0.26      -2.49      4.23      0.15     166
P11    82/190  11.80   -0.49      -3.34      6.41      0.26     109
P11    83/159  13.13   -0.55      -3.42      7.25      0.33     148
P11    84/254  17.13   -0.99      -4.68      9.83      0.42      71
P11    85/207  18.36   -1.01      -4.47     10.62      0.51     121
P11    86/344  23.19   -2.09      -7.31     13.74      0.43      16
P11    87/135  22.41   -1.40      -5.06     13.23      0.68     151
P11    88/256  26.62   -2.01      -6.12     15.93      0.78      88
P11    89/316  30.29   -2.85      -7.62     18.29      0.78      36
between the aP and aPc is 0.85 for all data points. Without the P11 80/66
data point, which is the most uncertain measurement (Nieto and Anderson
2005a), the aP and aPc correlation coefficient is 0.93.
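The sensitivity of the correlation coefficient to a single uncertain point can be illustrated with a minimal Pearson correlation; the numbers below are made up, not the aP/aPc data:

```python
# Minimal Pearson correlation coefficient; illustrates how one outlying
# point (like the uncertain P11 80/66 measurement) can depress r.
# The xs/ys values are hypothetical, not values from Table 7.2.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

xs = [1.0, 2.0, 3.0, 4.0, 10.0]          # last point is an outlier
ys = [1.0, 2.1, 2.9, 4.0, 2.0]
r_all = pearson_r(xs, ys)                # correlation with the outlier
r_trim = pearson_r(xs[:-1], ys[:-1])     # correlation without it
```

Dropping the single outlier raises r from near zero to above 0.99, the same qualitative behavior as removing P11 80/66 from the aP-aPc comparison.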
The error bars in Fig. 7.1 reflect the uncertainty from Table II of Anderson
et al. (2002) except for P11 80/66, where the uncertainty is from Nieto
and Anderson (2005a). The stochastic variable of the unmodeled acceleration
was sampled in ten-day or longer batches of data (Anderson et al.
2002). Starting at the P11 80/66 data point, the average extended over
many months (Nieto and Anderson 2005a). The Xs in Fig. 7.1 plot the
calculated data point for a ten-day change from the date of the aP. Some
data points showed little change between the two dates of calculation. Others
showed moderate change. Because the value of aPc depends on the
closeness of matter to the LOS, a change over ten days is due to a body
close to the LOS and to long integration times. Therefore, the closeness of
matter to the LOS introduces an uncertainty for even ten-day integration
times.
Figure 7.1: Plots of the aP data versus geocentric distance D for P10 (left figure) and
P11 (right figure). The solid diamonds reflect the aP from Nieto and Anderson (2005a),
the solid squares are the calculated points for aPc , and the Xs are calculated points
for dates ten days from the date of the aP . (Reprinted with permission of Nova Science
Publishers, Inc. (Hodge 2010).)
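The conversion in Eq. (7.4) can be sketched numerically; the sample zc below is illustrative only (it is of the order of the X values in Table 7.2, not a quoted result):

```python
# Sketch of Eq. (7.4): a_Pc = c^2 * z_c / D_l, with D_l = 2*D converted
# from AU to cm. The z_c value used in the example is illustrative.
C = 2.9979e10        # speed of light (cm/s)
AU_CM = 1.4960e13    # 1 AU in cm

def a_pc(z_c, d_au):
    """Apparent acceleration (cm/s^2) from calculated redshift z_c over
    the round-trip signal path D_l = 2*D (AU)."""
    d_l_cm = 2.0 * d_au * AU_CM
    return C**2 * z_c / d_l_cm

a = a_pc(1.3e-13, 45.37)   # z_c ~ 1e-13 at D ~ 45 AU
```

A zc of order 10⁻¹³ at the outer P10 distances maps to an apparent acceleration of order 10⁻⁷ to 10⁻⁸ cm s⁻², consistent with the scale of the measured aP.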
7.2.2 Annual periodicity

Figure 7.2 shows the ρ value along the LOS versus D on dates when the
angle Asun (degrees) between the LOS and a line from the Earth to the Sun
was < 60° and when Asun > 120°. Table 7.2 lists the Asun for each data
point.
Figure 7.2: Plots of ρ versus geocentric distance D along the LOS. Data is for P10
on 84/338 (left figure), aP = 8.43×10⁻⁸ cm s⁻², and for P10 on 85/138 (right figure),
aP = 7.67×10⁻⁸ cm s⁻². The Asun ≈ 175° on P10 84/338 and Asun ≈ 16° on P10
85/138. (Reprinted with permission of Nova Science Publishers, Inc. (Hodge 2010).)
7.2.3 Difference of aP between the spacecraft
The P10 data included one of nine dates (≈ 0.11) with Asun < 60° and five
of nine dates (≈ 0.56) with Asun > 120°. The P11 data included three of
ten dates (≈ 0.30) with Asun < 60° and four of ten dates (≈ 0.40) with
Asun > 120°. Figure 7.3 shows the trend of aPsun versus D between P10 and
P11 data points. At D > 10 AU, the aPsun appears to be a linear function
of D. At D < 10 AU, the Sun's influence is to lower aP11 more than aP10.
The SPM also suggests the mass of the planets and the mass of the
galaxy have an influence on the ρ field. Figure 7.4 plots the aPc for the
spacecraft, the aPc excluding the outer planets (aPplanets), and the aPc
excluding the galaxy mass (aPgal) versus D.
Because the outer planets are opposite the Sun for P10, the effect of the
planets on the aPc of P10 is less than for P11. However, as the D of P11
increases, aPplanets → aPc.
From the galaxy scale perspective, the spacecraft in the solar system
appear near the large mass of the Sun and inner planets. The effect
of the galaxy mass appears to decrease the aPc nearly uniformly for P11.
The outer P10 data points show a trend toward an increasing relative effect
of the galaxy mass. The orbit of P10 is closer to the Sun-Sgr A* axis than
that of P11 and the D of P10 is greater than the D of P11. However, this
effect is within the uncertainty level.
The difference in aP10 and aP11 noted by Anderson et al. (2002) results
primarily from the effect of the Sun. A secondary effect is the effect caused
Figure 7.3: Plots of the aPc data versus geocentric distance D for P10 (solid diamonds)
and P11 (solid triangles). The stars plot the data points for aPc with the Sun excluded
(aPsun). (Reprinted with permission of Nova Science Publishers, Inc. (Hodge 2010).)
Figure 7.4: Plots the aPc versus geocentric distance D for P10 (solid diamonds) and for
P11 (solid triangles). The data points of aPc excluding the outer planets aPplanets (left),
and the aPc excluding the galaxy mass aPgal (right) are shown as stars. (Reprinted with
permission of Nova Science Publishers, Inc. (Hodge 2010).)
7.2.4 Slow decline in aP
The plot in Fig. 7.3 suggests daPsun/dD at D < 10 AU is nearly zero,
followed by a decline and then a flattening.
The radio signal measurements are from and to Earth. At small D,
the relative effect of the Sun-Earth distance is larger than at farther D.
As D increases, the Solar System effect appears to approach a single mass
located at the barycenter of the Solar System. Therefore, aP declines and
approaches a constant value dictated by R⁻¹. However, the SPM
expects that at much greater D, the effect of the galaxy mass will increase
to cause a difference in the aP values between the spacecraft.
7.2.5 Saturn encounter
Figure 7.5 shows a plot of aPc versus the hours from the closest approach
of P11 to Saturn on P11 79/244 (Asun ≈ 8°). The plot shows the aPc varies
widely over a period of hours. The negative aPc is a redshift. As seen in
Fig. 7.1, the SPM is consistent with the P11 77/270 (Asun ≈ 43°) data
point at the beginning of the Saturn encounter of a near zero blueshift.
7.2.6 Large uncertainty of P11 80/66
Because the P11 80/66 (Asun ≈ 166°) data point extends over a relatively
long time, the rapidly varying aPc seen in Fig. 7.5 is consistent with the
uncertainty in the P11 80/66 data point.
The aPsun data points for P11 77/270 and P11 80/66 in Fig. 7.3 have only
a slightly lower slope than the later data points. The planet's gravity well is
in the larger gravity well of the Sun, which is in an even larger galaxy gravity
well. The change from the Sun's ρ versus D curve to a planet gravity well
causes a smaller Kf F term relative to Kp P. Table 7.2 lists |Kf F| < |Kp P|
for the P11 77/270 data point, where | | means absolute value, and
|Kf F| > |Kp P| for the other data points. Without the Sun's gravity well
in the calculations,
Figure 7.5: Plot of aPc versus the hours from the closest approach of P11 to Saturn.
(Reprinted with permission of Nova Science Publishers, Inc. (Hodge 2010).)
|Kf F | > |Kp P | for all data points. Therefore, the aPsun for the P11 77/270
data point is consistent with the other data points.
7.2.7 Cosmological connection

(7.5)
Figure 7.6: Plot of the distance Dl (10⁻¹¹ Mpc) versus X (10⁻¹³) for both spacecraft.
The line is a plot of Dl = 2800X + 5×10⁻¹¹ Mpc. (Reprinted with permission of Nova
Science Publishers, Inc. (Hodge 2010).)
7.3 Discussion

The uppercase S on Space indicates the view of GR and the ρ field of the SPM. A
lowercase s indicates the neutral backdrop, Euclidean space.
flow. In such a condition, such as between galaxy clusters, the static ρ field of
matter extends throughout the flat ρ field.
For a sample of 19 data points with published Pioneer Anomaly acceleration
values, the SPM was found to be consistent not only with the value of
the Pioneer Anomaly but also with the subtler effects noted in Anderson
et al. (2002). The SPM is consistent with the general value of aP; with the
annual periodicity; with the differing aP between the spacecraft; with the
discrepancy between the Sigma and CHASMP programs at P10 (I) and
their closer agreement at P10 (III); with the slowly declining aP; with the
low value of aP immediately before P11's Saturn encounter; with the high
uncertainty in the value of aP obtained during and after P11's Saturn
encounter; and with the cosmological connection suggested by aP ≈ cHo.
The SPM has outstanding correlation to the observed data when the long
integration time combined with a rapidly changing ρ field is insignificant.
Because the gradient of the ρ field produces a force on matter, the effect of
the ρ field warp appears as the curvature of space proposed by GR that
causes the gravitational potential of matter.
The STOE passes the prime test for a new model in that it has made
predictions that later observations confirm. The paper forming this chapter
was written in 2006. Among the predictions were observations made in 2011:
* The data before the flyby encounters were insufficient to detect the PA,
rather than there being no PA before the encounters as suggested by several
other models (Turyshev et al. 1999).
* "The data favor a temporally decaying anomalous acceleration with an
over 10% improvement in the residuals compared to a constant acceleration
model." (Turyshev et al. 2011). All other models calculate a constant
acceleration.
* "Although the Earth direction is marginally preferred by the solution
(see Table III), the Sun, the Earth, and the spin axis directions cannot be
distinguished." (Turyshev et al. 2011). An Earth directed PA implies a
signal related cause, which the STOE calculates, rather than an acceleration
of the spacecraft, which all other models calculate.
Chapter 8
8.1 Sample
(8.1)
where Ar is the angle observed from the center of the galaxy to the point
at r.
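The pixel-to-radius conversion used later in the analysis can be sketched, assuming Eq. (8.1) is the standard small-angle relation r = Dc tan(Ar) (an assumption; the body of the equation is not reproduced here):

```python
import math

# Hypothetical sketch of the angle-to-radius conversion: projected radius
# r (kpc) at angle A_r (arcsec) from the center of a galaxy at D_c (Mpc).
ARCSEC_RAD = math.pi / (180.0 * 3600.0)   # arcsec to radians

def r_kpc(a_r_arcsec, d_c_mpc):
    """Convert the angle A_r from the galaxy center to r in kpc."""
    return d_c_mpc * 1000.0 * math.tan(a_r_arcsec * ARCSEC_RAD)

pixel_kpc = r_kpc(1.7, 14.1)   # one 1.7-arcsec DSS pixel at D_c = 14.1 Mpc
```

At 14.1 Mpc, a one-pixel error of 1.7 arcsec corresponds to roughly 0.12 kpc, which is why the criteria below require a sufficiently wide central luminous region.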
The criteria for choosing galaxies for the analysis are (1) the galaxy has a
published Cepheid distance, (2) the galaxy has an unambiguous, published
HI rotation curve, (3) the galaxy has a rotation curve with at least two data
points in the outer bulge region, and (4) the galaxy has a sufficiently wide
central luminous region on the Digital Sky Survey (DSS)¹ image such that
an error of one pixel (1.7″ of arc) will be within tolerable r limits. The
DSS data was chosen rather than Hubble Space Telescope (HST) images
because DSS data was available for all the sample galaxies and each image
included both sides of the center.
¹ Based on photographic data of the National Geographic Society - Palomar Observatory Sky Survey
(NGS-POSS) obtained using the Oschin Telescope on Palomar Mountain. The DSS data was obtained
from the SkyView database available at: http://skyview.gsfc.nasa.gov.
The data and references for the 14 chosen galaxies are presented in
Tables 8.1 and 8.2. The morphology type data was obtained from the
NED database². The morphology code, inclination, luminosity code, and
21-cm line width at 20% of the peak value W20 (km s⁻¹) data were taken
from the LEDA database³. The rotation curve data is from HCb.
As seen in Tables 8.1 and 8.2, this sample has low surface brightness
(LSB), medium surface brightness (MSB), and high surface brightness (HSB)
galaxies; galaxies with a range of maximum W20 values from 120 km s⁻¹
to 607 km s⁻¹; LINER, Sy, HII, and less active galaxies; galaxies
which have excellent and poor agreement between distances calculated using
the Tully-Fisher (TF) relation Dtf and Dc; a Dc range from 750 kpc to
18,980 kpc; field and cluster galaxies; and galaxies with rising, flat, and
declining rotation curves.
NGC 1365 has the most rapidly declining rotation curve in the sample,
with a decline of at least 63% of the peak value (Jorsater & van Moorsel
1995). NGC 2841 is a candidate to falsify modified Newtonian dynamics
(MOND) (Bottema et al. 2002). Either a minimum distance of 19 Mpc
compared to Dc = 14.1 Mpc or a high mass/luminosity ratio is needed for
MOND. A sizeable range of TF values is given in the literature. NGC
3031 has significant non-circular motion in its HI gas (Rots & Shane 1975).
NGC 3198 is well suited for testing theories since the data for it has the
largest radius and number of surface brightness scale lengths. The rotation
curve of NGC 4321 at 4 arcmin implies it has a declining shape. NGC
4321 has a large asymmetry in the southwest lobe of 209 km s⁻¹ which
cannot be interpreted as disk rotation (Knapen et al. 1993). The receding
side of the HI rotation curve of NGC 4321 is declining and appears more
normal. The approaching side of the HI rotation curve of NGC 4321 is
lopsided (Knapen et al. 1993) and is sharply rising after a slight decline.
Guhathakurta et al. (1988) suggested the HI rotation curve is declining.
NGC 4321 is very asymmetric in HI. It also has the closest neighbor of
the sample. The NGC 4321 rotation curve has the largest number of data
points in the bulge region in the sample.
8.2 Analysis
Figures 8.2 and 8.3 show examples of the I–r curves along the major axis
of DSS FIT images. Examination of the I–r curves shows a small central area
followed by linearly declining I until a change of slope occurs on each side of
the center at about the same I. The distance between these I points using
DSS data is 2rd. Since the I–r curves are asymmetric, the side-to-side
measurement is taken.
The rd in Table 8.1 was measured from DSS images. The luminosity
of the 1.7″ × 1.7″ pixels in the DSS image is in units of scaled density,
which is how DSS reports. From the image of the galaxy, the I readings
were taken at intervals along the major axis. The Dc as listed in Table 8.2
was used to convert the angular position of each pixel to r in kpc from the
Figure 8.2: Plot of surface brightness I (scaled density of Digital Sky Survey) versus
the distance along the major axis from the center of the galaxy r (kpc) for NGC 0925
using Dc to calculate r. The center region shows the peaked characteristic.
Figure 8.3: Plot of surface brightness I (scaled density of Digital Sky Survey) versus
the distance along the major axis from the center of the galaxy r (kpc) for NGC 4321
using Dc to calculate r. The difference from peak I to I at rd is average compared to
other galaxies. The center region shows the flat characteristic.
center according to Equation (8.1). The data was plotted for each target
galaxy. Examples are shown in Figure 8.2 for NGC 0925 and in Figure 8.3
for NGC 4321. A straight line was fitted by eye to the declining curve
outside the center region for each galaxy. When the curve showed a change
in slope, another straight line was drawn to reflect the new pattern.
The criteria for finding the straight line are: (1) The inner portion of this
curve may be rounded or flat. The region to measure is where a declining,
straight line may be fitted. (2) The straight line is drawn on the trend
(at least three pixels) of the lowest luminosity for each angle. At a given
angle, a pixel may have a higher brightness than the line drawn. (3) The
straight line must have a downward slope from the center. (4) The straight
lines from each side of the galaxy must be approximately symmetrical and
have symmetrical slope. (5) The straight line must yield an rd of between
0.2 kpc and 7.0 kpc. (6) Bright object profiles must be ignored. Bright
object profiles are several pixels wide and show an increase then a decrease
in brightness. (7) Low brightness, isolated pixels must be ignored.
The galaxies with a rising rotation curve in the disk have very diffuse
I profiles. NGC 0224 and NGC 2841 are examples of a false straight line
on one side not symmetrical with the other side. The luminosity profile
of NGC 2841 is nearly flat in the center and has the lowest slope in the
KR in the sample, as noted by Bottema et al. (2002). NGC 2841 and NGC
7331 are examples where a candidate line extends beyond 8.0 kpc and is
not symmetrical. The inner lines are diffuse and only slightly declining but
do satisfy the criteria. NGC 3198, NGC 4536, and NGC 3351 are examples
with a false straight line on one side that is not symmetrical and, hence,
the smaller value is discounted.
The luminosity profiles of the sample galaxies have two general shapes
in the center. The declining slope of the luminosity profile of NGC 0925
in Figure 8.2 is the steepest and declines farther than other galaxies in the
sample. This shape is peaked in the center and starts declining immediately
from the center. The luminosity profile of NGC 4321 as seen in Figure 8.3
is flat in the center. The decline starts a distance from the center. Also
note the profile for NGC 4321 has an asymmetric characteristic on the right
side of the curve that is interpreted as an effect of a close neighbor. NGC
4321 also has the largest asymmetry in the sample. The sample had eight
galaxies with peaked I–r shapes and six galaxies with flat I–r shapes.
To determine the points on the rotation curve in the RR, a characteristic
of the RR must be identified. As an example, a plot of v² vs r for NGC 4321
is presented in Figure 8.4. The RR is identified as the range of r before
the knee of the disk SR region and extends inward until another knee is
encountered. A characteristic of this region is the straight line of the v² vs
r curve,

vr² = Krs rr + Kri ,    (8.2)

where Krs and Kri are the slope and intercept of the linear relationship,
respectively.
The number of data points in the RR and, hence, the first and last data
points were chosen such that the correlation coefficient of the chosen points
was greater than 0.99. The slope was determined to yield the maximum F
test value and the intercept was determined to yield the minimum deviation
between the line and the chosen data points. Therefore, the remarkable fact is
the number of points on the line. The straight line in Figure 8.4 is the fit
obtained for nine data points of NGC 4321. Since rrmax is in the RR region
and not in Trs, vrmax can be found in rising, flat, and falling rotation curves by
fitting a high correlation, straight line to a few points of a vr² vs rr plot. The
vrmax value is the value of the last data point on the line. The last point
on the line was chosen because the next two data points significantly lowered
the correlation coefficient if they were included. The Krs and Kri data for
the chosen galaxies are listed in Table 8.2.
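The RR fit described above can be sketched as an ordinary least-squares line of v² against r; the data below are synthetic, not the NGC 4321 rotation curve:

```python
# Least-squares fit of Eq. (8.2), v_r^2 = Krs*r_r + Kri, to synthetic points.
def fit_line(rs, v2s):
    n = len(rs)
    mr, mv = sum(rs) / n, sum(v2s) / n
    sxy = sum((r - mr) * (v - mv) for r, v in zip(rs, v2s))
    sxx = sum((r - mr) ** 2 for r in rs)
    krs = sxy / sxx              # slope Krs
    kri = mv - krs * mr          # intercept Kri
    return krs, kri

rs = [0.5, 1.0, 1.5, 2.0]                  # kpc (synthetic)
v2s = [21000.0 * r + 500.0 for r in rs]    # exactly linear v^2 (km^2 s^-2)
krs, kri = fit_line(rs, v2s)
```

In the actual procedure the point set is extended along the curve only while the correlation coefficient of the included points stays above 0.99.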
A model of this characteristic may be as follows. Flattened and circular
orbits in the RR (Binney and Merrifield 1998, pages 723-4) give vr² = GMr/rr,
and Equation (8.2) implies Mr is a linear function of rr. If dark matter
causes this, then the distribution of dark matter must be mysteriously
constrained. Alternatively, changes in the type and density of matter in the
RR may cause the required Mr distribution. In the RR, the relative abundance
of oxygen to hydrogen [O/H] and of nitrogen to hydrogen [N/H] of
the interstellar matter (ISM) in spiral galaxies has been shown to be a linear
function of r (Binney and Merrifield 1998, pages 516-520). A similar
condition may exist for more massive particles. If the RR is modeled as a
cylinder of thickness Tr (Binney and Merrifield 1998, page 724), the mass
in an elemental volume Dr Tr 2π rr drr with density Dr is a linear function of
rr. Thus, Dr Tr is a constant. Thus,
Mr = (Dr Tr 2π) rr² + Mr0 ,    (8.3)

(8.4)
Figure 8.4: Plot of HI (diamonds) and Hα (squares) (Sempere et al. 1995) rotation
velocity v² (10³ km² s⁻²) versus the distance from the center of the galaxy r (kpc)
for NGC 4321 using Dc to calculate r. A linear curve has been fitted to the RR. The
highest value of v² on the straight line is the value of v²rmax. The vertical lines mark
the position of discontinuities as noted in the text. NGC 4321 is the only galaxy in the
sample with the HI rotation curve extended inward to rd.
8.2. ANALYSIS
Figure 8.5: Plot of the maximum rotation velocity in the RR vrmax² (km² s⁻²) versus the
discontinuity radius rd (kpc) for the chosen galaxies. The line is a plot of the equation
vrmax² = 21000 rd − 1000.

vrmax² = Krslope rd + Kras ,   (8.5)

where Krslope = 21000 ± 1000 km² s⁻² kpc⁻¹ and Kras = −1000 ± 2000 km²
s⁻² at 1σ, with a correlation coefficient of 0.99 and an F test of 0.99.
The analysis presented herein depends on Dc being secure, standard distances. Freedman et al. (2001) adopted a metallicity PL correction factor of
−0.2 ± 0.2 mag dex⁻¹. Ferrarese et al. (2000) assumed the PL relation was
independent of metallicity. If the Dc used are from the earlier work of Ferrarese
et al. (2000) without NGC 2841 (which used the metallicity correction factor), Krslope = 21000 ± 1000 km² s⁻² kpc⁻¹ and Kras = −2000 ± 3000
km² s⁻² at 1σ with a correlation coefficient of 0.99 and an F test of 0.99.
The Ferrarese et al. (2000) Dc values are higher than those of Freedman et al. (2001).
Thus, the theory and the result presented herein appear robust relative to
systematic variation of Dc.
Since vr reflects the potential from the central region of a galaxy, the
linear vrmax² − rd relation implies rd and Krslope are also related to the
potential from the central region.
An examination of Table 8.1 shows a correlation between rd and the rotation
curve type. Galaxies with rd < 0.5 kpc have rising rotation curves. Another possible division is a group of galaxies with rd < 1.0 kpc, another
group of galaxies with rd > 2.5 kpc, and a gap between the groups. The
galaxies with rd < 1.0 kpc have peaked luminosity profiles (Figure 8.2).
The galaxies with rd > 2.5 kpc have flat luminosity profiles (Figure 8.3).
8.2.2 HST data
The same method used for DSS data may be applied to HST data. HST
FITS images were obtained from MAST4. HST optical I − r curves generally have higher resolution pixels that offer finer detail than DSS images.
Optical HST images of the central few kpc exist for 10 of the 14 sample
galaxies. The HST data for NGC 0224 (U2E2010FT and others) differ
significantly from the DSS data. In addition, the inclination appears to differ
from the LEDA database data. Therefore, the NGC 0224 HST images are not
used. The HST data for NGC 3621 (U29R1I02T and others) have insufficient
exposure for the purpose herein. That is, the I value of the pixels in the
area of interest is too small to provide r definition. The HST data for the
remaining, usable eight sample galaxies are listed in Table 8.3.
The I data values of the pixels along the major axis of the galaxy in
the HST FITS images are recorded as a function of r. Unlike the DSS
images, the data values of the pixels are recorded as they are in the FITS
file. The r value is calculated from angular data from the FITS image, Dc,
and Equation (8.1). A graphical example of the I − r curve result for NGC
4321 is shown in Figure 8.6 and Figure 8.7.
Region 1 (R1) is defined by a linear I − r relation from the peak I to r1.
[Table 8.3, partial: the r3 and r4 columns (kpc) for the eight usable sample galaxies]
r3: 0.068±0.007c, 0.19±0.02c, 0.40±0.04c, 3.8±0.4c, 0.28±0.03, 0.55±0.06, 1.9±0.2c, 2.6±0.2c
r4: 0.39±0.04c, 0.22±0.02c, 0.76±0.08c, 5.8±0.6c, 1.2±0.1, 1.0±0.1, 2.7±0.3c, ND
Figure 8.6: Plot of surface brightness I (data value of the HST FITS image) versus the
distance along the major axis from the center of the galaxy r (kpc) for NGC 4321 using
Dc to calculate r. This figure shows the r1 and r2 positions. The A shows the first
local maximum after r2, which is the first pixel in the third region.
Figure 8.7: Plot of surface brightness I (data value of the HST FITS image) versus the
distance along the major axis from the center of the galaxy r (kpc) for NGC 4321 using
Dc to calculate r. This figure shows the r3 and r4 positions. The curved line is the
plot of the R1 I − r line. The straight line is the fourth-region line.
Figure 8.8: Plot of the maximum rotation velocity in the RR vrmax² (km² s⁻²) versus the
discontinuity radius r (kpc) for the chosen galaxies with HST images. The diamonds
are r4, the squares are r3, and the triangles are r2. The lines are plots of the equations
in Table 8.4.
NGC 3198. The NGC 7331 image lacked an r4s value. The other sample
galaxies had only one side and, therefore, r4 is the r4s for that single side.
The values of r from HST data on each side of the center were asymmetric for all galaxies in the sample. Therefore, a measurement error is
introduced in those images that had only one side with a value.
Figure 8.8 shows a plot of vrmax² versus r values from HST data for R2,
R3, and R4. The equations for the lines in Figure 8.8 are listed in Table 8.4.
Because of the poor correlation coefficient for R1, the vrmax² versus r1 plot
was omitted from Figure 8.8. The equations for the lines are best fits with
an F test of 0.99. The error for the Ks and Ki values is at 1σ.
Table 8.4: Data of the vrmax² = Ks r + Ki equations.

  r a    Corr.   Ks c              Ki d
  rd     0.99    21000 ± 1000      −1000 ± 2000
  r1     0.09    ND                ND
  r2     0.96    170000 ± 20000    7000 ± 5000
  r3     0.99    24000 ± 1000      3000 ± 2000
  r4     0.99    17900 ± 900       −2000 ± 2000
8.3 Discussion
The data required for a vrmax² − r distance calculation are relative to the
target galaxy. That is, the data are independent of extinction and similar
biases. Also, the parameters required are measured by analyzing trends
and finding the point where the trend changes. This makes the parameters relatively robust. Unfortunately, vrmax² is difficult to determine and is
generally unavailable for galaxies with Dc values.
The vr² relations are with galaxy parameters rr and r. Hence, the slopes
of the vr² − rr curves for a galaxy are galaxy related, not instrument related.
The causes of the curious correlations between rd and the I − r
shape in the central region and between rd and the rotation curve type
are unknown. The gap between the rd populations may be the result of a
small sample size.
Ferrarese (2002) discovered a linear relationship between circular velocity vc beyond R25 and bulge velocity dispersion σc (see her Figure 1). NGC
0598 was a clear outlier. Galaxies with σc less than 70 km s⁻¹ (vc less than
150 km s⁻¹) also deviate from the linear relation. Also, galaxies with significant non-circular motion in the HI gas, such as NGC 3031, were omitted in
Ferrarese (2002). The vc for NGC 0598 used in Ferrarese (2002) was 135 km
s⁻¹, which is the highest data point of a rising rotation curve in Corbelli &
Salucci (2000). Most of the curves Ferrarese (2002) used were flat. Therefore, vc ≈ vrmax. If vc = vrmax = 79 km s⁻¹ is used
for NGC 0598, NGC 0598 falls on her plotted line. Thus, the range of the
vc − σc relation extends to 27 km s⁻¹ < σc < 350 km s⁻¹. The M• − σc
relation need not tilt upward to comply with the M• − MDM relation. Apparently, vc = vrmax may improve the correlation of these relationships. The
tightness of the vc − σc and, perhaps, a vrmax − σc relation suggests a tight
correlation between M• and the total mass of the galaxy. If Mrmax ∝ Mbulge,
then M• ∝ Mbulge (Ferrarese 2002; McLure & Dunlop 2002; Merritt &
Ferrarese 2001b) and Equation (8.5) imply the M• − σc relation. Then the
extended relationship Ferrarese (2002) noted for elliptical galaxies implies
the relationships herein may be extended to elliptical galaxies.
The TF relationship is an empirically determined formula. Freeman's
Law states that HSB galaxies have a central surface brightness at a roughly constant Freeman value. If rd could be roughly correlated with an isophotal
level, rd would be proportional to the exponential scale length and rd²
proportional to the total luminosity. Hence, total luminosity would then be proportional
to v⁴ as TF requires. However, the I levels at rd for NGC 2841 and NGC
4321 (both HSB galaxies) are 12993 DSS scaled density units and 10129
DSS scaled density units, respectively. Therefore, rd is not correlated with
an isophotal level.
Further, the central surface brightnesses of low surface brightness (LSB)
and of moderate surface brightness (MSB) galaxies deviate from the Freeman value. Also, non-luminous dark matter is thought to dominate
LSB galaxies and, therefore, should add to their mass. Therefore, if the Freeman argument above were valid, distances to LSB galaxies should deviate
from Dtf. However, Dtf has correlated with Dc for galaxies such as NGC
2403 (MSB) and NGC 0300 (LSB). On the other hand, Dtf for NGC 0925
(MSB) and for NGC 3031 (HSB) differs significantly from Dc. HCa suggests the difference between Dtf and Dc is caused by neighboring galaxies.
Therefore, the above argument using Freeman's Law is invalid.
The vrmax is close to the maximum rotation velocity for galaxies with
flat and declining rotation curves. The W20 in TF may approximate vrmax
for galaxies with flat and declining rotation curves. However, for galaxies
with rising rotation curves, the W20 − vrmax relationship is more complex.
Therefore, the relationship herein is not a restatement of the TF relation.
However, the TF relation as modified by HCa and the vrmax² − rd relation
could both be used to calculate distances correlated with Dc and, therefore,
are related.
It is worth considering whether the relations discovered herein are expected from other theories.
The linear potential model (LPM) explored by Mannheim and Kmetko
(1996) proposes the potential acting on particles is of the form V(r) =
−β/r + γr/2, where β and γ are constants. The LPM and HCb suggest the
intrinsic rotation curve in the RR and SR without the influence of other
galaxies is rising. Hence, v is a linear function of r and V has a linear
r term in the RR and SR. The LPM derives flat rotation curves by mass
distribution but suggests that ultimately the rotation curve is rising. HCb
suggested that since neighboring galaxies form flat and declining rotation
curves, rotation curves do not ultimately rise. If the LPM is to derive declining
curves, it is by the influence of other galaxies. This is difficult to understand
even if the potentials of all galaxies counteract each other to maintain a
delicate balance. The LPM does not predict the relations with rd herein.
The dark matter model (DMM) posits large amounts of non-luminous
matter in a halo around a galaxy to explain rising rotation curves. In
addition, the DMM would consider vrmax as merely another point on the
rotation curve. The DMM is not really a predictive theory but rather
a parameterization of the difference between observation and Newtonian
expectation.
This Chapter's result suggests the species and density of matter change
with radius and suggests matter is arranged in strata which cause discontinuities in the I − r curve. In the MOND model (Bottema et al. 2002),
the Newtonian acceleration is measured in our solar system, which is in the
low-r end of the SR. The MOND data comparison is usually in the far RR
and SR. Hence, the existence of a differing rd among galaxies or a modified acceleration among galaxies is expected in a limited r range as in this
Chapter. Also, since we are near the vrmax in the Galaxy and the relationship of vrmax² − rd is a universal relationship, the MOND proposition that
the modified acceleration is a universal constant for the limited region with
r ≈ rd follows. However, the linear vr² − rr relation is inconsistent with
MOND.
NGC 2841 may falsify MOND. However, it fits in well with this Chapter's result. This chapter has restricted comment on the mass/luminosity
ratio since this ratio is uncertain. Since the chemical species may change
with each radial ring, the mass/luminosity ratio should change, also. The
only use of the mass/luminosity ratio is to say the chemical species is constant in a ring. Therefore, a shift in I − r slope signifies a border between
rings. NGC 2841 has greater luminosity through a low slope in the I − r
curve. This Chapter ignores this.
Therefore, the results herein are inconsistent with current galaxy models.
1. A sample of 14 galaxies with published data was considered.
Chapter 9
However, vrmax² presents a formidable measurement task. The TF and Chapter 8 methods each have one parameter that is easier to use. By combining
the equations of the two methods, an equation may be derived to find D
using the easier-to-measure parameters of a galaxy. These parameters are
the angular radius ra and the measured 21-cm line width at 20 percent of the peak value W
(km s⁻¹) corrected for the inclination between the line of sight and the target
galaxy's polar axis i (degrees). These parameters are measurements relative
to the target galaxy and are independent of magnitude biases such as extinction and reddening. This Chapter suggests a new D calculation method
using ra and W. The results suggest rd is proportional to the intrinsic energy creation rate of a galaxy (from HCa). Therefore, galaxy parameters
of total absolute magnitude, total mass, rotation velocity curves, Dc, and
rd are related to the galaxy's intrinsic energy creation rate, and to those of neighboring galaxies.
The approach is to (1) develop an equation to relate the TF with the
model of Chapter 8, (2) develop an equation to relate ra and W to D,
(3) use the galaxies of Chapter 8 as calibration galaxies to determine the
constants, and (4) apply the model to test galaxies which also have Dc
values for comparison.
9.1 Model
10^(−Mg/2.5) = Ke ε ,   (9.2)

rd = (vrmax² − Kras) / Kreslope ,   (9.4)

where, from HCa, Kreslope = 21000 ± 1000 km² s⁻² kpc⁻¹, and Kras =
−1000 ± 2000 km² s⁻² are the slope and intercept of the vrmax² − rd curve,
respectively.
Combining Equations (9.2), (9.3), and (9.4) and rearranging terms
yields

Mg = −2.5 log[Ke / (K Kreslope)] − 2.5 log(vrmax² − Kras) .   (9.5)
The TF (Tully et al. 1998) combined with the correction factor C from
HCa has the form

C Mb,k,i = K1 − K2 log(W) .   (9.6)

The line fitted to the calibration galaxies is

log(vrmax² − Kras) = Ks log(W) + Ki ,   (9.7)

where Ks and Ki are the slope and intercept of a plot of log(vrmax² − Kras)
versus log(W), respectively.
For a single galaxy, Equation (9.1) requires that W depend on Wo.
Define the residual R as the difference between the line of Equation (9.7)
and the measurement of log(vrmax² − Kras),

R = [Ks log(W) + Ki] − [log(vrmax² − Kras)] ,   (9.8)

and fit

R = Ksr log(W) + Kir ,   (9.9)
where Ksr and Kir are the slope and intercept of the R versus log W curve,
respectively.
By the definition of the slope of a line in a graph of Equation (9.7),

Ks = R / (log W − log Wvc) ,   (9.10)

or,

log W − log Wvc = R / Ks ,   (9.11)

where log Wvc is the value of log(W) at the measured log(vrmax² − Kras).
Solving Equation (9.10) for R, substituting R into Equation (9.8), solving
for vrmax² − Kras, and substituting into Equation (9.4) yields

rd = 10^Ki Wvc^Ks / Kreslope .   (9.12)

The distance then follows as

D = rd / tan(ra) = 10^Ki Wvc^Ks / [Kreslope tan(ra)] = 10^(Ki−Kir) W^(Ks−Ksr) / [Kreslope tan(ra)] ,   (9.13)
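Once the calibration constants are fixed, Equation (9.13) is a one-line distance estimate. The sketch below uses the overall Ks and Ki values quoted later in this Chapter and zeroed residual-zone constants as placeholders; the real Ksr and Kir are zone dependent:

```python
import math

# Sketch of the Eq. (9.13) distance estimate
#   D = 10**(Ki - Kir) * W**(Ks - Ksr) / (Kreslope * tan(ra)).
# Default constants are placeholders (Ksr = Kir = 0), not the calibrated
# zone values; ra is the angular radius in radians.
def distance(W, ra, Ks=2.138, Ki=-1.099, Ksr=0.0, Kir=0.0,
             Kreslope=21000.0):
    return 10.0 ** (Ki - Kir) * W ** (Ks - Ksr) / (Kreslope * math.tan(ra))
```

With Ks > Ksr the estimate grows with line width W and shrinks with apparent angular size, as expected for a distance indicator.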
9.2
[Table 9.1: data of the calibration galaxies]

  Galaxy a    Dc e            RZ
  NGC 0224    0.75 ± 0.02     3
  NGC 0300    2.02 ± 0.07     1
  NGC 0598    0.82 ± 0.04     1
  NGC 0925    9.13 ± 0.17     2
  NGC 1365    17.23 ± 0.40    3
  NGC 2403    3.14 ± 0.14     2
  NGC 2841    14.07 ± 1.57f   3
  NGC 3031    3.55 ± 0.13     3
  NGC 3109    1.13 ± 0.12g    1
  NGC 3198    13.69 ± 0.51    2
  NGC 3319    13.44 ± 0.57    1
  NGC 4321    14.33 ± 0.47    3
  NGC 4535    14.80 ± 0.35    2
  NGC 7331    14.53 ± 0.62    3
Figure 9.1: Plot of log(vrmax² − Kras) from HCa versus log(W). The line is the best fit.
The line is a plot of log(vrmax² − Kras) = 2.138 log(W) − 1.099.
of from 0.75 Mpc to 17.39 Mpc, field and cluster galaxies, and galaxies with
rising, flat, and declining rotation curves.
Figure 9.1 shows a plot of log(vrmax² − Kras) versus log(W). The line in
Figure 9.1 is a plot of Equation (9.7) with Ks = 2.138 ± 0.2 km s⁻¹ and
Ki = −1.099 ± 0.6 km² s⁻² at 1σ. For the final calculation, the constants
used had four significant figures although the error bars indicate fewer
significant figures. The correlation coefficient is 0.93 and the F
test is 0.99. The fit of the line to the data was done by finding the Ks
which yielded an F test of over 0.99. Then the Ki was found to produce
a low total difference between the data and the line. The low total difference
was required because the TF assumption that the average Wo = 0 in the
equations was important in the model.
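The two-step fit described above (choose the slope first, then set the intercept so the total residual is low) can be sketched as follows; ordinary least squares stands in for the F-test search over Ks, which is an assumption:

```python
# Sketch of the two-step fit: pick a slope Ks (here by ordinary least
# squares, standing in for the F-test search described in the text), then
# set the intercept Ki so the mean residual is zero, matching the TF
# assumption that the average Wo offset vanishes.
def fit_slope_then_intercept(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    ks = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    ki = my - ks * mx  # forces the residuals to sum to zero
    return ks, ki
```

Setting the intercept from the means is exactly the "low total difference" condition: the residuals of the fitted line sum to zero by construction.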
NGC 0925, NGC 2403, NGC 3198, and NGC 4535 have similar rd values
(Chapter 8) and, hence, similar intrinsic energy creation rates. Also, from HCa, the average distance of the 10
closest neighbors of each galaxy varies from about 0.75 Mpc to 2.42 Mpc. In
addition, the W values vary from 248 km s⁻¹ to 423 km s⁻¹. Because the rd
of these galaxies is similar and the W has significant variance, the suspicion
is strong that the effect of neighboring galaxies on W is significant. This is
confirmed in HCa and this Chapter.
Figure 9.2 shows a plot of R (km² s⁻²) versus log(W) [log(km s⁻¹)].
The structure has three R zones (RZ) of linear relationships. The data for
the linear lines and the boundaries of each RZ as displayed in Figure 9.2
are presented in Table 9.2. The lines intersect at [R, log(W)] = [0.53 ± 0.05
Figure 9.2: Plot of the residual R of the log(vrmax² − Kras) measurement and the line in
Figure 9.1 versus log W. The lines are the best fit and are described in Table 9.2. The
+ sign, diamond, and square are for galaxies in RZ 1, RZ 2, and RZ 3, respectively.
The R, F, and D denote the galaxies with rising, flat, and declining rotation curves,
respectively.
[Table 9.2, partial: number of galaxies per zone]

  zone   No.
  RZ 1   4
  RZ 2   4
  RZ 3   6
the galaxies with rising rotation curves in the sample are in the RZ 1 and
all the galaxies in RZ 1 have rising curves and r less than one kpc. The
galaxies with flat rotation curves in the other RZs have a low absolute value
of R and all galaxies with low absolute values of R have flat curves in RZ
2 and RZ 3. The galaxies with declining rotation curves in the other RZs
have a high absolute value of R and all galaxies with high absolute values
of R have declining curves in RZ 2 and RZ 3. All the galaxies in RZ 2
have 1 kpc< r < 2 kpc. All galaxies in RZ 3 with flat rotation curves have
2 kpc < r < 4 kpc. All galaxies in RZ 3 with declining rotation curves have
r greater than four kpc. NGC 2841, which is the galaxy with the highest
R, has a nearly flat but slightly declining rotation curve. NGC 0300, which
has the highest R value in RZ 1, has a nearly flat but slightly rising rotation
curve. NGC 7331 has a nearly flat but slightly declining rotation curve. It
has been labeled a flat rotation curve.
The galaxies in RZ 2 and RZ 3 are on their respective line with a high
correlation and high F test. No galaxy occupies a position significantly off
the lines. Apparently, whatever causes the RZ 2 and RZ 3 is digital in
nature.
The log(W ) value between the end of the RZ 1 and beginning of RZ
2 is relatively small. This implies the R value at the beginning of the
RZ 2 is 0.15 km2 s2 . A similar situation exists at the beginning
of the RZ 3 where the R value is 0.30 km2 s2 . This suggests that
the physical mechanism in galaxies in RZ 2 is present with an additional
physical mechanism in galaxies in RZ 3 rather than two, mutually exclusive
physical mechanisms or a physical mechanism with two states.
Examination of the surface brightness profiles in Chapter 8 shows two
distinct shapes. The shape is determined by the trend of the minimum
luminosity for each angle of observation (distance from center). Individual,
low value pixels and high value pixels above the trend line are ignored. One
shape shown in Figure 9.3 for NGC 2403 in RZ 2 has a distinctive spike
or peak shape. This shape has a high slope on the sides and the change
of slope appears to have a large discontinuity at the center. The other
shape shown in Figure 9.4 for NGC 7331 in RZ 3 has a distinctive square
or flat appearance. This shape has lower slope on the sides and is nearly
flat around the center. The surface brightness profiles of all the calibration
galaxies in RZ 1 and RZ 2 have the peak shape. The surface brightness
profiles of all the calibration galaxies in RZ 3 have the flat shape.
If the surface brightness profile is indecisive, an additional method to
decide to which RZ a galaxy belongs is to choose the RZ with the distance
Figure 9.3: Plot of surface brightness I (scaled density of the Digital Sky Survey) versus
the distance from the center of the galaxy r (kpc) for NGC 2403 using Dc to calculate r.
The profile is characteristic of a peak-shaped curve with a peak and high-slope sides.
These profiles are found in RZ 1 and RZ 2 galaxies.
Figure 9.4: Plot of surface brightness I (scaled density of the Digital Sky Survey) versus
the distance from the center of the galaxy r (kpc) for NGC 7331 using Dc to calculate
r. The profile is characteristic of a flat-shaped curve with a flat top and sides with a
lower slope than RZ 2 galaxies. These profiles are found in RZ 3 galaxies.
[Table 9.3: data of the test galaxies; the galaxy-name column did not survive extraction, and rows are aligned across columns]

  NED type a                i b    W20 c   2ra d      Dtf e         Dc f            Dcalc g
  SA(s)m                    27.1   61      44 ± 4     7.60 ± 1.13   4.53 ± 0.13     4.36 ± 0.42
  SB(s)m                    42.5   88      13 ± 2     18.20 ± 2.70  16.15 ± 0.77    14.76 ± 1.53
  SA(rs)b                   69.5   367     45 ± 15    23.57 ± 3.49  20.91 ± 0.47    23.57 ± 5.89
  SA(rs)b HII               68.3   296     96 ± 16    15.26 ± 2.26  11.44 ± 0.21    10.29 ± 2.08
  SA(s)cd LINER             66.8   211     37 ± 4     15.38 ± 2.28  11.23 ± 0.26    10.56 ± 0.97
  SB(r)b HII Sbrst          41.5   281     109 ± 11   15.15 ± 2.25  9.34 ± 0.39     10.07 ± 1.16
  SAB(rs)ab;Sy LINER        54.7   358     452 ± 48   13.76 ± 2.04  9.87 ± 0.28     10.36 ± 1.00
  SA(s)d                    65.6   279     169 ± 25   7.72 ± 1.15   6.55 ± 0.18     5.79 ± 1.02
  SAB(s)b;Sy2 LINER         57.3   378     410 ± 70   11.62 ± 1.73  9.38 ± 0.35     10.91 ± 2.23
  SAB(s)bc;Sy1.9 LINER      72.0   438     146 ± 27   9.62 ± 1.42   7.73 ± 0.26     7.75 ± 1.78
  SA(s)c LINER              54.0   412     221 ± 22   26.33 ± 3.90  16.61 ± 0.38    15.82 ± 1.70
  SB(rs)m                   48.1   175     26 ± 8     14.58 ± 2.16  14.53 ± 0.20    15.92 ± 3.73
  SAB(rs)bc                 58.9   337     82 ± 14    19.66 ± 2.91  14.46 ± 0.27    13.03 ± 1.89
  SBb(rs);SY LINER          37.0   270     76 ± 9     21.60 ± 3.20  15.01 ± 0.35    14.85 ± 1.92
  SA(r)d                    30.0   171     68 ± 4     24.06 ± 3.57  15.15 ± 1.46h   14.93 ± 0.95
  SA(rs)bc                  55.0   376     25 ± 2     35.05 ± 5.20  49.70 ± 16.26i  45.51 ± 3.47
  SAB(rs)bc;Sy1.8           52.0   330     50 ± 5     36.45 ± 5.39  21.00 ± 0.79    21.89 ± 2.42
  SAB(r)ab;Sy2 pec          54.4   415     276 ± 44   18.03 ± 2.67  11.92 ± 0.33    12.64 ± 1.74
  Im pec;HII Sbrst          66.7   93      43 ± 5     2.40 ± 0.36   3.25 ± 0.22     3.08 ± 0.41
  SAB(rs)cd                 22.0   178     179 ± 19   10.82 ± 1.60  6.70 ± 0.35     6.27 ± 0.74j
f Distance in Mpc from Cepheid data from Freedman et al. (2001), unless otherwise noted.
g The calculated distance using Equation (9.13) in Mpc.
h Distance from Pierce et al. (1992).
i Distance from Paturel et al. (2002).
j Distance with W20 = 178 km s⁻¹, which is within its error limit and in RZ 2.
Dcalc1 = 2.513 × 10⁻⁴ W^1.939 / tan(ra) ± 10% ,   W ≤ 246 ;
Dcalc2 = 5.423 × 10⁻⁵ W^0.366 / tan(ra) ± 15% ,   246 < W ≤ 440 ;
Dcalc3 ,   440 < W ,   (9.14)
where Dcalc1 , Dcalc2 , and Dcalc3 are the Dcalc values for the respective RZ.
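A zone-dependent distance estimate in the shape of Equation (9.14) can be sketched as follows. The coefficients and exponents follow the partially recoverable values above, the third-zone row is an outright placeholder, and ra is in radians:

```python
import math

# Sketch of a zone-dependent distance estimate shaped like Eq. (9.14):
# one power law of W per line-width range.  The first two rows use the
# partially recovered coefficients; the third row is an assumed
# placeholder, since that case did not survive extraction.
def d_calc(W, ra, zones=((246.0, 2.513e-4, 1.939),
                         (440.0, 5.423e-5, 0.366),
                         (float("inf"), 1.0e-5, 1.0))):  # last row assumed
    for upper, coeff, exp in zones:
        if W <= upper:
            return coeff * W ** exp / math.tan(ra)
    raise ValueError("W out of range")
```

The routing by W mirrors the RZ membership rule in the text: a galaxy's line width selects which calibrated power law applies.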
The criteria for choosing the test galaxies for the comparison are (1) the
galaxy has a published Dc, (2) the galaxy has a sufficiently wide central
luminous region on the DSS image such that an error of one pixel (2″ of arc)
will be within tolerable ra limits, and (3) the incl, w20, total apparent
corrected magnitude m (either btc or itc), and position angle (pa)
data must be available in the LEDA database.
The data for the test galaxies are shown in Table 9.3.
For each of the test galaxies, Dtf was calculated. The w20 value was
corrected for inclination, and used for WRI in the TF formulas. For the
galaxies IC 4182, NGC 1326A, NGC 4496A, and NGC 4571, the B band
m was used since the I band data was unavailable in LEDA. For the other
test galaxies, Dtf was calculated using I band data and formulas.
Figure 9.5: Plot of the calculated distance Dcalc to the test galaxies versus the reported
Cepheid distance Dc . The line is Dcalc = Dc . The correlation coefficient is 0.99 with an
F test of 0.78.
9.3 Discussion
Shanks et al. (2002) suggested the Cepheid distances may be underestimated due to metallicity and magnitude incompleteness. As the difference
between Ferrarese et al. (2000) and Freedman et al. (2001) Dc values indicated, the effect will be to change only the constants.
The measure of ra is clear for many galaxies. However, some galaxies
are less clear. Careful attention to the criteria is necessary. The future
improvement in the DSS data or data from the Hubble Space Telescope
will improve the accuracy of ra .
The physical cause of the peak or flat surface brightness profile is unknown (see Faber et al. (1997)).
Chapter 10
with the metallicity obtained by extrapolating [O/H] within the disk to the
galactic center. Low-luminosity galaxies tend to be more metal poor than high-luminosity galaxies.
The Newtonian calculation of v of a test particle in a galaxy is

v² = Rmajor R̈major + G M / Rmajor ,   (10.1)

where, for simplicity, the mass of the test particle is assumed to be constant
over time, the effect of neighboring galaxies (the environment) is ignored,
G is the gravitational constant, Rmajor (kpc) is the galactocentric radius of
the test particle along the major axis, the double dot over Rmajor indicates
the second time derivative of Rmajor, M (M☉) is the mass inside the sphere
of radius Rmajor from Newton's spherical property, and, hereinafter, the
inertial mass m equals the gravitational mass mg of the test particle (Will
2001).
Generally, when considering RCs, the orbits of matter in the disk of
galaxies are assumed to be nearly circular (Binney and Merrifield 1998, p.
725), so R̈major = 0. If most of the matter of a galaxy is in the bulge region,
classical Newtonian mechanics predicts a Keplerian declining RC in the
disk. The observation of rising RCs in the RR and of rising, flat, or declining
RCs in the OR poses a perplexing problem. Postulates to model the RCs
have included the existence of a large mass MDM of non-baryonic, dark
matter (DM) and dark energy; the variation of the gravitational acceleration
GM/R² with R; the variation of G with R; and the existence of an added
term on the rhs of Eq. (10.1) opposing gravitational attraction.
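For a bulge-dominated galaxy, the circular-orbit reduction of Eq. (10.1) is the Keplerian curve, which a short sketch makes concrete (toy units; the values are illustrative, not fitted):

```python
# With nearly circular orbits (second time derivative of R equal to zero),
# Eq. (10.1) reduces to the Keplerian v = sqrt(G*M/R): if most mass is
# interior to R, v declines as 1/sqrt(R).  Toy units, illustrative values.
def v_kepler(R, GM=1.0):
    return (GM / R) ** 0.5
```

The observed flat or rising disk RCs deviate from this monotonic decline, which is the problem the postulates above try to solve.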
Repulsive DM has been proposed to provide a repulsive force that has
the form of an ideal, relativistic gas (Goodman 2000). In DM models the
rising RCs of low surface brightness galaxies (LSBs) require larger relative
amounts of DM than flat or declining RCs. This contradiction to the successful Tully-Fisher relation (TF) (Tully and Fisher 1977) is well established
(Sanders and McGaugh 2002). Another mystery of the DM paradigm is the
apparent strong luminous-to-dark matter coupling that is inferred from the
study of RCs (Persic 1996).
Currently, the most popular modified gravity model is the Modified
Newtonian Dynamics (MOND) model (Bottema et al. 2002; Sanders and
McGaugh 2002, and references therein). MOND suggests gravitational acceleration changes with galactic distance scales by a fixed parameter related
to the mass-to-luminosity ratio with units of acceleration. MOND appears
limited to the disk region of spiral galaxies. MOND requires the distance for
NGC 2841 and NGC 3198 to be considerably larger than the Cepheid-based
distance Da (Mpc) (Bottema et al. 2002). Applying MOND to cosmological
scales is being investigated (Bekenstein 2004; Sanders 2005). MOND may
represent an effective force law arising from a broader force law (McGaugh
1998).
ρ = K Σ_{i=1}^{Nsource} εi / ri + K′ Σ_{l=1}^{Nsink} ηl / rl ,   (10.3)

where εi and ηl are numbers representing the strength of the ith Source
and lth Sink, respectively, K and K′ are proportionality constants, and ri
(Mpc) and rl (Mpc) are the distance from a Source and a Sink, respectively,
to the point at which ρ is calculated, ε > 0, η < 0, and Nsource and Nsink
are the number of Sources and Sinks, respectively, used in the calculation.
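The Eq. (10.3) field is a pairwise sum over Sources and Sinks; a minimal sketch, with illustrative strengths and proportionality constants, is:

```python
# Sketch of the Eq. (10.3) field sum: rho at a point is a strength-over-
# distance sum over Sources (positive strengths) plus a corresponding sum
# over Sinks (negative strengths).  K, K2, and the entries are
# illustrative stand-ins, not fitted SPM values.
def rho(point_sources, point_sinks, K=1.0, K2=1.0):
    # each entry: (strength, distance_Mpc)
    s = K * sum(eps / r for eps, r in point_sources)
    s += K2 * sum(eta / r for eta, r in point_sinks)
    return s
```

Because each term falls off as 1/r, nearby galaxies dominate the field at a given point, which is why the neighborhood of a galaxy matters in what follows.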
Hodge (2006a) suggested the ms property of matter is the cross section
of the particle; that ε ∝ Mt, where Mt is the total mass of the Source
galaxy; and that η ∝ Mt, where Mt is the total mass of the Sink galaxy.
This chapter further constrains the SPM to describe another set of observations inconsistent with present models. The SPM finds the RC in the
OR may be rising, flat, or declining depending on the galaxy and its environment. The Source of the scalar field acts as a monopole at distances
of a few kpc from the center of spiral galaxies. In addition to fitting the
RR and OR parameters, the SPM is consistent with EOR and asymmetry
observations. In section 2, the model is discussed and the SPM v 2 calculation equation is developed. The resulting model is used to calculate RC
parameters in Section 3. The discussion and conclusion are in Section 4.
10.1 Model
The coordinate system center was placed at the sample galaxy's kinematical
center and was aligned to our line of sight. The SPM posits the v² of a
particle in orbit of a spiral galaxy is the sum of the effects of Fs opposing
the gravitational force Fg,

v² = G M / R − Gs (ms/m) (L/R) + |K ∇ρ − ao| R ,   (10.4)

where (1) the L term is due to the Fs of the host galaxy; (2) L = K ε =
10^(−0.4 MB) erg s⁻¹ for Source galaxies or L = K η = 2.7 × 10^(−0.4 MB) erg
s⁻¹ for Sink galaxies (Hodge 2006a); (3) the mass of the test particle is
assumed to be constant over time; (4) | | indicates absolute value; and (5)
ao (km s⁻²) is the acceleration caused by neighboring galaxies,

ao = (Gs ms / m) ∇ρ ,   (10.5)

where the number of galaxies excludes the host galaxy. Note that no assumption about the significance of ao has been made.
Because v is measured only along the major axis in the region under
consideration (Binney and Merrifield 1998, p. 725) and if the ∇ρ field is
approximately uniform across a galaxy, Eq. (10.4) becomes

v² = G M / Rmajor − Gs (ms/m) (L/Rmajor) + |K ∇ρ − ao| Rmajor ,   (10.6)
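The terms of Eq. (10.6) can be sketched directly; the constants below are illustrative stand-ins, and grad_term stands in for the |K ∇ρ − ao| magnitude rather than a fitted SPM value:

```python
# Sketch of the Eq. (10.6) rotation-velocity model: a Newtonian term, a
# luminosity (Source) term opposing gravity, and a field/neighbor term
# that grows with radius.  All constants are illustrative stand-ins.
def v2_spm(R, M, L, grad_term, G=1.0, Gs=1.0, ms_over_m=1.0):
    # grad_term stands in for |K * grad(rho) - a_o|
    return G * M / R - Gs * ms_over_m * L / R + abs(grad_term) * R
```

The last term is what allows flat or rising outer curves: it grows linearly with Rmajor while the first two terms fall off as 1/Rmajor.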
The ms/m ratio of stars is changing through changing elemental composition by nucleosynthesis in addition to accretion and emission of matter.
Therefore, the H i RC is preferred to trace the forces influencing a galaxy
outside the bulge. Because only the H i RC is considered in the calculations
herein, the units used were Gs ms/m = 1 kpc km² s⁻¹ erg⁻¹.
10.2 Results

10.2.1 Sample
The elliptical, lenticular, irregular, and spiral galaxies used in the calculations were the same as used in Hodge (2006a). That is, they were selected
from the NED database2 . The selection criteria were that the heliocentric
redshift zmh be less than 0.03 and that the object be classified as a galaxy.
The parameters obtained from the NED database included the name, equatorial longitude Elon (degrees) for J2000.0, equatorial latitude Elat (degrees)
for J2000.0, morphology, the B-band apparent magnitude mb (mag.), and
the extinction Ext (mag.) as defined by NED. The galactocentric redshift
z was calculated from zmh .
The 21-cm line width W20 (km s1 ) at 20 percent of the peak, the
inclination in (arcdegrees), the morphological type code t, the luminosity
class code lc, and the position angle Pa (arcdegrees) for galaxies were
obtained from the LEDA database3 when such data existed.
The selection criteria for the select galaxies were that a H i RC was
available in the literature and that Da was available from Freedman et al.
(2001) or Macri et al. (2001). Data for the 16 select galaxies are shown in
Table 10.1 and their H i RCs are plotted in Fig. 10.1. The L for the select
galaxies was calculated using Da , mb , and Ext . The selection criteria for
the other sample galaxies were that a H i RC was available in the literature
and that the L could be calculated using the TF method with the constants
developed in Hodge (2006a).
This select galaxy sample has LSB, medium surface brightness (MSB),
and high surface brightness (HSB) galaxies; has LINER, Sy, HII, and less
active galaxies; has galaxies that have excellent and poor agreement between
the distance Dtf (Mpc) calculated using the TF relation and Da ; has a Da
2 The NED database is available at http://nedwww.ipac.caltech.edu. The data were obtained from
NED on 5 May 2004.
3 The LEDA database is available at http://leda.univ-lyon.fr. The data were obtained from LEDA
on 5 May 2004.
Figure 10.1: Plot of the square of the H i rotation velocity v² (10³ km² s⁻²) versus galactocentric radius Rmajor (kpc) along the major axis. The straight lines mark the application
of the derived equations to the RCs of the select galaxies. The application of the derived
equations to NGC 0224, NGC 0300, and NGC 0598 was omitted because these galaxies
lacked a |K ∇ρ − ao| value. The references for the RCs are noted in brackets and are as
follows: [1] Gottesman et al. (1966), [2] Carignan & Freeman (1985), [3] Corbelli & Salucci
(2000), [4] Krumm and Salpeter (1979), [5] Begeman et al. (1991), [6] Rots (1975), [7] van
Albada et al. (1985), [8] Bosma (1981), [9] Moore and Gottesman (1998), [10] van Albada
and Shane (1975), [11] Sempere et al. (1995), [12] Braine et al. (1993), [13] Chincarini & de
Souza (1985), [14] Vollmer et al. (1999), [15] Huchtmeier (1975), and [16] Mannheim and
Kmetko (1996). (Reprinted with permission of Nova Science Publishers, Inc. (Hodge
2010).)
10.2. RESULTS
Morphology^a             t^b    lc^c   RC^d   Da^e
SA(s)b                   3      2      F      0.79
SA(s)d                   6.9    6      R      2.00
SA(s)cd                  6      4      R      0.84
SAB(s)d HII              7      4      D      9.16
SAB(s)cd                 6      5      F      3.22
SA(r)b;LINER Sy          3      1      F      14.07^f
SA(s)ab;LINER Sy1.8      2.4    2      D      3.63
SB(rs)c                  5.2    3      F      13.80
SB(rs)cd;HII             5.9    3.3    R      13.30
SAB(s)bc;LINER Sy1.9     4      3.9    D      7.98
SAB(s)bc;LINER HII       4      1      D      15.21
SA(rs)c? LINER           5.1    3.6    D      17.70
SAB(s)c                  5      1.9    D      15.78
SBb(rs);LINER Sy         3.1    2      D      16.22
SAB(rs)cd                5.9    1      D      6.70
SA(s)b;LINER             3.9    2      F      14.72
^a Galaxy
range from 0.79 Mpc to 17.70 Mpc; has field and cluster galaxies; and
has galaxies with rising, flat, and declining RCs.
10.2.2 First approximation
The mass within the Rrr radius is

    Mrr = (Drr Hrr²) Rrr² + M_Δ + Mc,    (10.7)

and

    vrr² = (mg/m) G Drr Hrr² Rrr + [G(M_Δ + Mc) − ms Gs L/m] / Rrr + |K⃗·a⃗o| Rrr,    (10.8)
in all galaxies. Thus, the v²rrmax and Rrrmax are comparable parameters
among galaxies. Posit, as a first approximation,

    v²rrmax / (10³ km² s⁻²) = Sa L / (10⁸ erg s⁻¹) + Ia,    (10.10)
where Sa and Ia are the slope and intercept of the linear relation, respectively.

When the error in calculating distance is a small factor, such as when
using Da or Dtf, more galaxies were sampled. The data of v²rrmax versus L
of 15 select galaxies from Fig. 10.1 and of 80 other galaxies from Begeman
et al. (1991); Broeils (1992); García-Ruiz et al. (2002); Guhathakurta et
al. (1988); Kornreich (2000); Kornreich (2001); Liszt and Dickey (1995);
Mannheim and Kmetko (1996); Rubin et al. (1985); and Swaters et al.
(1999), for which the RR was identified, are plotted in Fig. 10.2. The other
sample galaxies are listed in Table 10.2, denoted by an integer in the a1
column. Some of the galaxies had only two data points and a significant
decline in v² to indicate the end of the RR. The distribution of the data
points in Fig. 10.2 suggested a grouping of the galaxies such that the v²rrmax
versus L relations are linear for each group. This is reflected in the plotted
lines in Fig. 10.2. Call each group a classification. For the calculation, the
(L, v²rrmax) = (0, 0) point was included in all classifications. The relationship
of the slopes of the lines is

    log₁₀ Sa = Servd a1 + Iervd,    (10.11)
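The fitting step just described — a least squares line per classification with the (L, v²rrmax) = (0, 0) point included — can be sketched as follows. This is an illustrative reconstruction, not code from the book; the sample points are hypothetical, chosen to lie near the a1 = 3 line of Table 10.3.

```python
def fit_classification(points, include_origin=True):
    """Least-squares line v2 = Sa*L + Ia for one classification group,
    optionally including the (L, v2) = (0, 0) point as the text prescribes."""
    pts = list(points) + ([(0.0, 0.0)] if include_origin else [])
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    Sa = sxy / sxx            # slope of the classification line
    Ia = my - Sa * mx         # intercept of the classification line
    return Sa, Ia

# Hypothetical group: L in 10^8 erg/s, v2 in 10^3 km^2/s^2,
# lying on v2 = 12.8 L - 0.8 (the a1 = 3 values of Table 10.3).
pts = [(0.5, 5.6), (1.0, 12.0), (2.0, 24.8), (3.0, 37.6)]
Sa, Ia = fit_classification(pts)
```

Including the origin pulls the fitted line toward (0, 0), so the recovered slope and intercept differ slightly from the generating values.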
Table 10.2: Integer values for the non-select galaxies in the sample. [Columns a1, a2,
f1, f2, j1, and j2; the individual entries could not be unambiguously realigned from
this copy. The galaxies listed are: IC 0467, IC 2233, NGC 0701, NGC 0753, NGC 0801,
NGC 0991, NGC 1024, NGC 1035, NGC 1042, NGC 1085, NGC 1087, NGC 1169, NGC 1325,
NGC 1353, NGC 1357, NGC 1417, NGC 1421, NGC 1515, NGC 1560, NGC 1620, NGC 2715,
NGC 2742, NGC 2770, NGC 2844, NGC 2903, NGC 2998, NGC 3054, NGC 3067, NGC 3109,
NGC 3118, NGC 3200, NGC 3432, NGC 3495, NGC 3593, NGC 3600, NGC 3626, NGC 3672,
NGC 3900, NGC 4010, NGC 4051, NGC 4062, NGC 4096, NGC 4138, NGC 4144, NGC 4178,
NGC 4206, NGC 4216, NGC 4222, NGC 4237, NGC 4359, NGC 4378, NGC 4388, NGC 4395,
NGC 4448, NGC 4605, NGC 4647, NGC 4654, NGC 4682, NGC 4689, NGC 4698, NGC 4772,
NGC 4845, NGC 4866, NGC 5023, NGC 5107, NGC 5229, NGC 5297, NGC 5301, NGC 5377,
NGC 5448, NGC 5474, NGC 6503, NGC 6814, NGC 7171, NGC 7217, NGC 7537, NGC 7541,
UGC 01281, UGC 02259, UGC 03137, UGC 03685, UGC 05459, UGC 07089, UGC 07125,
UGC 07321, UGC 07774, UGC 08246, UGC 09242, UGC 10205.]
Table 10.3: Data for the integer values of a1 of Eq. (10.13) shown as the plotted lines in
Fig. 10.2.

a1    No.    Corr.    F Test    Sa               Ia
1     2      1.00     1.00      2.482 ± 0.004    0.006 ± 0.006
2     7      0.97     0.93      6.6 ± 0.6        0.1 ± 0.7
3     20     0.99     0.96      12.8 ± 0.5       −0.8 ± 0.8
4     39     0.98     0.92      23.2 ± 0.7       0.7 ± 0.7
5     24     0.97     0.90      42 ± 2           1 ± 2
6     2      1.00     1.00      84.3 ± 0.3       −0.2 ± 0.2
7     1      1.00     1.00      261              0
Figure 10.2: Plot of the square of the rotation velocity v²rrmax (10³ km² s⁻²) at the
maximum extent of the RR versus B band luminosity L (10⁸ erg s⁻¹) for the 95 sample
galaxies. The 15 select galaxies have error bars that show the uncertainty range in each
section of the plot. The error bars for the remaining galaxies are omitted for clarity.
The straight lines mark the lines whose characteristics are listed in Table 10.3. The
large, filled circle denotes the data point for NGC 5448. The large, filled square
denotes the data point for NGC 3031. (Reprinted with permission of Nova Science
Publishers, Inc. (Hodge 2010).)

correlation coefficient for each of the lines from the origin to the data points
(see Table 10.3 and Fig. 10.2) greater than 0.90, that yielded a minimum
e, and that rejected the null hypothesis of the test in Appendix A with a
confidence greater than 0.95.
Combining Eqs. (10.10) and (10.12) yields

    v²rrmax / (10³ km² s⁻²) = Ka1 Ba1^a1 L / (10⁸ erg s⁻¹) ± e,    (10.13)
where e = 21%. Note e = 16% for only the select galaxies. The large,
filled circle in Fig. 10.2 is the data point for NGC 5448 (Δv²rrmax/v²rrmax =
0.35). The large, filled square in Fig. 10.2 denotes the data point for NGC
3031 (Δv²rrmax/v²rrmax = 0.25).
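The Ka1 Ba1^a1 form of Eq. (10.13) is straightforward to evaluate numerically. The sketch below is illustrative only; the default constants are the a1-row values from Table 10.5, and the luminosity and integer are hypothetical.

```python
def v2_rrmax(L, a1, K_a1=1.3, B_a1=2.06):
    """First-approximation v^2_rrmax (10^3 km^2 s^-2) from the form of
    Eq. (10.13); L is in 10^8 erg s^-1, a1 is the classification integer.
    Defaults are the a1-row constants of Table 10.5."""
    return K_a1 * B_a1 ** a1 * L

# Hypothetical galaxy with L = 1.0 and classification integer a1 = 3.
v2 = v2_rrmax(1.0, 3)
```

The exponential base B raised to the integer classification is what produces the discrete family of lines seen in Fig. 10.2.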
Tables 10.2 and 10.4 list the a1 values for the sample galaxies. Table 10.5 lists the
minimum correlation coefficient Ccmin of the lines, the constants Kx, and the
exponential bases Bx, where the subscript x denotes the generic term.

The average difference between L and the luminosity calculated using
Dtf is 0.38L for the select galaxies. The relative difference in v²rrmax included
the measurement uncertainty and the uncertainty that the RR may extend
farther than the measured point, such as seen for NGC 4258. The relative
Table 10.4: First approximation integer values for the select galaxies.

Galaxy       a1  b1  c1  d1  e1  f1  g1  h1  i1  j1  k1  l1
NGC 0224      5   5   5   3   4   4   3   3   4   –   –   4
NGC 0598      4   6   3   6   6   5   5   3   6   –   –   6
NGC 3031      5   5   5   5   4   3   3   2   4   3   5   7
NGC 0300      5   7   3   6   7   5   5   2   7   –   –   7
NGC 2403      5   6   4   6   6   4   4   2   6   1   4   6
NGC 5457      4   5   5   1   2   1   4   3   1   4   5   4
NGC 4258      3   2   1   4   5   2   2   2   3   2   1   2
NGC 0925      5   6   4   4   2   2   4   2   4   2   3   6
NGC 2841      1   –   –   –   4   4   4   3   2   5   1   –
NGC 3198      4   6   5   2   1   2   4   3   3   2   6   5
NGC 4414      5   4   4   4   4   2   3   2   3   1   2   6
NGC 3319      3   6   4   2   3   3   4   2   5   1   5   4
NGC 7331      5   3   3   5   5   3   3   3   4   3   3   4
NGC 4535      4   4   3   3   4   2   1   1   5   3   4   3
NGC 4321      4   4   4   2   2   2   1   2   4   3   2   3
NGC 4548      6   1   2  10   8   4   2   2   6   9  16   7
differences ΔL/L = 0.3 and Δv²rrmax/v²rrmax = 0.1 are shown as error bars in
Fig. 10.2 for the select galaxies.
Other parametric relationships to L in the RR were calculated using the
same procedure that was used to evaluate the v²rrmax − L relation. Because
these equations involved Rrrmax, the calculation considered only the 15 select
galaxies. The resulting equations are:

    Rrrmax / kpc = Kb1 Bb1^b1 L / (10⁸ erg s⁻¹) ± 14% ;    (10.14)

    Rrrmax v²rrmax / (10³ kpc km² s⁻²) = Kc1 Bc1^c1 L / (10⁸ erg s⁻¹) ± 15% ;    (10.15)

    (v²rrmax / Rrrmax) / (10³ kpc⁻¹ km² s⁻²) = Kd1 Bd1^d1 L / (10⁸ erg s⁻¹) ± 11% ; and    (10.16)

    Srr / (10³ kpc⁻¹ km² s⁻²) = Ke1 Be1^e1 L / (10⁸ erg s⁻¹) ± 13% ;    (10.17)

where first approximation integer values are listed in Table 10.4 and the
Ccmin, Kx, and Bx are listed in Table 10.5.

Equations (10.13)–(10.16) suggest at least two of the four integers a1,
b1, c1, and d1 are functions of the other two.
Table 10.5: Values of the minimum correlation coefficients Ccmin, constants Kx, and
exponent bases Bx for the first approximation equations.

Integer^a   Ccmin^b   Kx^c             Bx^d
a1          0.97      1.3 ± 0.2        2.06 ± 0.07
b1          0.86      0.34 ± 0.04      1.88 ± 0.05
c1          0.96      19 ± 1           1.89 ± 0.05
d1          0.99      1.0 ± 0.1        1.57 ± 0.06
e1          0.97      0.49 ± 0.06      1.71 ± 0.04
f1          0.96      8.6 ± 0.7        1.57 ± 0.04
g1          0.93      4.4 ± 0.4        1.77 ± 0.06
h1          0.97      150 ± 10         2.5 ± 0.1
i1          0.95      0.09 ± 0.02      1.9 ± 0.2
l1          0.96      0.075 ± 0.009    1.95 ± 0.02

^a The integer of the exponential value of the parametric relationship that denotes the
applicable equation.
^b The minimum correlation coefficient rounded to two decimal places of the lines of the
parametric relationship.
^c The constant of proportionality at 1σ of the parametric relationship.
^d The exponent base at 1σ of the parametric relationship.
The end of the EOR was considered to be at the largest radius Reormax
(kpc) along the major axis at which H i is in orbit around the galaxy. Several
galaxies have a H i measured point on one side of the galaxy at a greater
Rmajor than the other side. Because averaging the v from each side of the
galaxy was required to reduce the |K⃗·a⃗o| effect that causes non-circular
rotation, the outermost measured point used in the calculations was at the
Reormax with data points on both sides of the galaxy. At R > Reormax
particles are no longer in orbit. For particles other than H i, the ms/m is
less. Therefore, their maximum radius in a galaxy is less. NGC 4321 has
H i extending farther than a radius of four arcminutes. However, Knapen
et al. (1993) concluded the H i motion beyond four arcminutes is not disk
rotation. Therefore, Reormax was considered to be at four arcminutes for
NGC 4321.

Because Mt ∝ L and M_Δ ∝ L, Meormax ∝ L, where Meormax is
the mass within Reormax of Eq. (10.6). The rotation velocity veormax of the
galaxy at a radius of Reormax can be used to compare parameters among
galaxies.

Parameter relations to L in the EOR were calculated using the same
procedure used to evaluate the v²rrmax − L relation.
The data of v²eormax versus L of 16 select galaxies and of 34 other galaxies
from Begeman et al. (1991); Broeils (1992); García-Ruiz et al. (2002); Kornreich
(2000); Kornreich (2001); Liszt and Dickey (1995); Mannheim and
Kmetko (1996); and Swaters et al. (1999), for a total of 50 sample galaxies,
are plotted in Fig. 10.3. The 34 other galaxies are listed in Table 10.2, denoted
by an integer in the f1 column.
Figure 10.3: Plot of the square of rotation velocity v²eormax (10³ km² s⁻²) at the extreme
outer region versus B band luminosity L (10⁸ erg s⁻¹) for the 50 sample galaxies. The 16
select galaxies have error bars that show the uncertainty level in each section of the plot.
The error bars for the remaining galaxies are omitted for clarity. The large, filled circle
denotes the data point for NGC 5448. The large, filled square denotes the data point for
NGC 3031. (Reprinted with permission of Nova Science Publishers, Inc. (Hodge 2010).)
    v²eormax / (10³ km² s⁻²) = Kf1 Bf1^f1 L / (10⁸ erg s⁻¹),    (10.18)

where the large, filled circle in Fig. 10.3 denotes the data point for NGC
5448 (Δv²eormax/v²eormax = 0.20) and the large, filled square denotes the data
point for NGC 3031 (Δv²eormax/v²eormax = 0.08);
    Reormax / kpc = Kg1 Bg1^g1 L / (10⁸ erg s⁻¹) ± 16% ;    (10.19)

    Reormax v²eormax / (10³ kpc km² s⁻²) = Kh1 Bh1^h1 L / (10⁸ erg s⁻¹) ± 19% ; and    (10.20)

    (v²eormax / Reormax) / (10³ kpc⁻¹ km² s⁻²) = Ki1 Bi1^i1 L / (10⁸ erg s⁻¹) ± 15% ;    (10.21)
where first approximation integer values are listed in Tables 10.2 and 10.4
and the Ccmin, Kx, and Bx are listed in Table 10.5.

The uncertainty in calculating |K⃗·a⃗o| depends on the accuracy of measuring
the distance D (Mpc) to neighbor galaxies that are without a Da and
without a Dtf measurement. The cell structure of galaxy clusters (Aaronson
et al. 1982; Ceccarelli et al. 2005; Hodge 2006a; Hudson et al. 2004;
Lilje et al. 1986; Rejkuba 2004) suggests a systematic error in z relative
to D of a galaxy. Therefore, using z and the Hubble Law to determine D
introduces a large error. However, the cell model also suggests neighboring
galaxies have a similar D/z ratio in the first approximation. For those few
galaxies near our line of sight to the sample galaxy and near the sample
galaxy, Hodge (2006a) found a z change caused by the light passing close to
galaxies. For a single galaxy this effect is small and was ignored. Therefore,

    Di = (zi / zp) Dp,    (10.22)

where zp and Dp (Mpc) are the redshift and distance of the sample galaxy,
respectively, and zi and Di (Mpc) are the redshift and calculated distance
of the ith neighbor galaxy, respectively. The Dp = Da for the select galaxies
and Dp = Dtf for the other sample galaxies. The L of the ith neighbor
galaxy was calculated using Di, mb, and Ext.

Because larger |zi − zp| implies larger error in Di, the Nsources and Nsinks of
Eq. (10.5) were limited to the number N of galaxies with the largest influence
on the vector field of Eq. (10.5), which is evaluated at the center of the
sample galaxy. The N = 7 was chosen because it produced the highest
correlation in the calculations. Also, 6 ≤ N ≤ 9 produced acceptable
correlation in the calculations.

The calculation of |K⃗·a⃗o| also requires knowledge of the orientation of
the sample galaxy. The direction of the sample galaxy's polar unit vector
e⃗polar was defined as northward from the center of the sample galaxy along
the polar axis of the galaxy. Hu et al. (2005); Pereira and Kuhn (2005); and
Trujillo et al. (2006) found an alignment of the polar axis of neighboring
spiral galaxies. Therefore, the orientation of e⃗polar with the higher |e⃗polar·a⃗o|
was chosen for the calculations.

The major axis unit vector e⃗major was defined as eastward from the center
of the sample galaxy along the major axis of the galaxy. The minor axis
unit vector e⃗minor ≡ e⃗major × e⃗polar.
At equal Rmajor > Rrrmax on opposite sides of the galaxy, the L terms of
Eq. (10.6) are equal and the M terms are nearly equal. Define asymmetry
Figure 10.4: Plot of maximum asymmetry Asymax (10³ km² s⁻²) versus |K⃗·a⃗o|
(10³ kpc⁻¹ km² s⁻²) for the 50 sample galaxies. The 13 select galaxies have error bars
that show the uncertainty level in each section of the plot. The error bars for the
remaining galaxies are omitted for clarity. The large, filled circle denotes the data point
for NGC 5448. The large, filled square denotes the data point for NGC 3031. (Reprinted
with permission of Nova Science Publishers, Inc. (Hodge 2010).)

    Asym ≡ (vh² − vl²)|_R,    (10.23)

where vh and vl are the larger v and smaller v at the same Rmajor (|_R) on
opposite sides of the galaxy, respectively.
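The asymmetry of Eq. (10.23) can be computed from matched-radius velocities on the two sides of a RC; the sketch below is illustrative, and the velocity values are hypothetical.

```python
def asymmetry(v_side1, v_side2):
    """Asym(R) = v_h^2 - v_l^2 at each matched radius (Eq. 10.23), where
    v_h is the larger of the two velocities, plus the maximum asymmetry
    Asymax over the sampled radii."""
    asym = [abs(a * a - b * b) for a, b in zip(v_side1, v_side2)]
    return asym, max(asym)

# Hypothetical rotation velocities (km/s) at three matched radii on
# opposite sides of a galaxy.
asym, asymax = asymmetry([100.0, 120.0, 130.0], [105.0, 118.0, 140.0])
```

Taking the absolute value makes the choice of which side is vh automatic at each radius, matching the definition's larger-minus-smaller convention.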
The Asym is a function of Rmajor. The Asym and |K⃗·a⃗o| are sensitive
both to radial (non-circular) motion of particles and to variation of v due
to torque. For the first approximation the maximum asymmetry Asymax in
the OR depends predominately on |K⃗·a⃗o|. Because Asymax is minimally
dependent on R, Asymax is comparable among galaxies.

NGC 0224, NGC 0300, and NGC 0598 were omitted from the sample
because their z < 0 and their neighbor galaxies may have positive or negative
z. Therefore, evaluating the distance of neighbor galaxies of these
galaxies was considered unreliable. Table 10.6 lists the |K⃗·a⃗o|, Asymax, and
Asymax references for the remaining 13 select galaxies. The sample consisted
of 13 select galaxies and of 36 other galaxies from García-Ruiz et al. (2002);
Kornreich (2000); Kornreich (2001); and Swaters et al. (1999) for a total of
49 sample galaxies. The 36 other galaxies are listed in Table 10.2, denoted
by an integer in the j1 column. The plot of Asymax versus |K⃗·a⃗o| is shown
in Fig. 10.4.
The same procedure that was used to evaluate the v²rrmax − L relation
was used to evaluate the Asymax − |K⃗·a⃗o| relation. The result is

    Asymax / (10³ km² s⁻²) = Kj1 Bj1^j1 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²),    (10.24)

where first approximation integer values are listed in Tables 10.2 and 10.4,
Ccmin = 0.98, Kj1 = 1.000 ± 0.001, and Bj1 = 2.94 ± 0.09, and

    K⃗    (10.25)
were found by fitting a least squares straight line to the outermost two or
more data points of the RC with a 0.97 or higher correlation coefficient.
The same procedure that was used to evaluate the v²rrmax − L relation was
used to evaluate the Seor − |K⃗·a⃗o| relation. The result is

    Seor / (10³ kpc⁻¹ km² s⁻²) = Kk1 Bk1^k1 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) + Iseor ± 15% ,    (10.26)

where Ccmin = 0.96, Kk1 = 0.64 ± 0.09, Bk1 = 1.74 ± 0.06, and Iseor =
4.8 ± 0.1 at 1σ. Table 10.4 lists the k1 values for the 13 select galaxies.

As seen in Fig. 10.1 the measured points in the OR appear nearly linear
rather than a smoothly varying curve in most of the select galaxies.
Therefore, the linear approximation of the RC slope in the OR was considered
more appropriate than a smoothly varying function. Define Sor
(km² s⁻² kpc⁻¹) as the slope from the Rrrmax data point to the beginning
of the Seor of the H i RC. For the 15 select galaxies excluding NGC 2841,
following the same procedure as for finding Eq. (10.13) yields

    Sor / (10³ km² s⁻² kpc⁻¹) = Kl1 Bl1^l1 L / (10⁸ erg s⁻¹) + Ior ± 15% ,    (10.27)

where the integer values are listed in Table 10.4. The Ccmin, Kl1, and Bl1
values are listed in Table 10.5 and Ior = 1.6 ± 0.1.
10.2.3 Second approximation
    v²rrmax / (10³ km² s⁻²) = Ka1 Ba1^a1 L / (10⁸ erg s⁻¹)
        + (−1)^sa Ka2 Ba2^a2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± 10% ,    (10.28)
where Ka2 = 0.15 ± 0.02 and Ba2 = 2.00 ± 0.03 at 1σ, for NGC 5448
(Δv²rrmax/v²rrmax = 0.04), and for NGC 3031 (Δv²rrmax/v²rrmax = 0.03);

    Rrrmax / kpc = Kb1 Bb1^b1 L / (10⁸ erg s⁻¹)
        + (−1)^sb Kb2 Bb2^b2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± 2% ,    (10.29)

    Rrrmax v²rrmax / (10³ kpc km² s⁻²) = Kc1 Bc1^c1 L / (10⁸ erg s⁻¹)
        + (−1)^sc Kc2 Bc2^c2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± 3% ,    (10.30)

    (v²rrmax / Rrrmax) / (10³ kpc⁻¹ km² s⁻²) = Kd1 Bd1^d1 L / (10⁸ erg s⁻¹)
        + (−1)^sd Kd2 Bd2^d2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± 3% ,    (10.31)

    Srr / (10³ kpc⁻¹ km² s⁻²) = Ke1 Be1^e1 L / (10⁸ erg s⁻¹)
        + (−1)^se Ke2 Be2^e2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± 2% ,    (10.32)

    v²eormax / (10³ km² s⁻²) = Kf1 Bf1^f1 L / (10⁸ erg s⁻¹)
        + (−1)^sf Kf2 Bf2^f2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± 2% ,    (10.33)

where Kf2 = 0.7 ± 0.1 and Bf2 = 1.9 ± 0.1 at 1σ, for NGC 5448
(Δv²eormax/v²eormax = 0.03), and for NGC 3031 (Δv²eormax/v²eormax = 0.02);

    Reormax / kpc = Kg1 Bg1^g1 L / (10⁸ erg s⁻¹)
        + (−1)^sg Kg2 Bg2^g2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± 3% ,    (10.34)

    Reormax v²eormax / (10³ kpc km² s⁻²) = Kh1 Bh1^h1 L / (10⁸ erg s⁻¹)
        + (−1)^sh Kh2 Bh2^h2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± 3% ,    (10.35)

    (v²eormax / Reormax) / (10³ kpc⁻¹ km² s⁻²) = Ki1 Bi1^i1 L / (10⁸ erg s⁻¹)
        + (−1)^si Ki2 Bi2^i2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± 2% ,    (10.36)
Table 10.7: Second approximation integer values for the select galaxies.

Galaxy       a2  b2  c2  d2  e2  f2  g2  h2  i2  j2  k2  l2
NGC 0224      –   –   –   –   –   –   –   –   –   –   –   –
NGC 0598      –   –   –   –   –   –   –   –   –   –   –   –
NGC 3031      8   1   7   6   4   4   3   6   1   4   3  11
NGC 0300      –   –   –   –   –   –   –   –   –   –   –   –
NGC 2403      3   3   4   4   6   2   4   2   5   5   5   6
NGC 5457      4   6   7   5   8   3   7   7   4   5   3   8
NGC 4258      4   2   1   3   6   2   4   3   2   4   2   1
NGC 0925      5   2   2   2   1   1   3   1   1   4   6   3
NGC 2841      5   –   –   –   8   8   4   2   4   1   –   –
NGC 3198      6   6   9   1   2   5   8   6   4   3   2   7
NGC 4414      4   3   6   4   6   2   5   2   1   1   1   6
NGC 3319      4   4   6   2   4   3   7   3   5   2   6   7
NGC 7331      7   3   6   6   7   3   6   5   5   2   3   3
NGC 4535      7   6   5   4   9   3   4   2   7   4   3   6
NGC 4321      1   1   2   1   2   2   1   3   3   3   1   5
NGC 4548     16  12   8  16  21  13  14  13  16   4   5  18
where Ki2 = 0.024 ± 0.002 and Bi2 = 1.78 ± 0.04 at 1σ;

    Asymax / (10³ km² s⁻²) = Kj1 Bj1^j1 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²)
        + (−1)^sj Kj2 Bj2^j2 L / (10⁸ erg s⁻¹) ± 8% ,    (10.37)

where Kj2 = 0.053 ± 0.008 and Bj2 = 2.4 ± 0.2 at 1σ for the select galaxies,
e(ΔAsymax/Asymax) = 0.04, for NGC 5448 (ΔAsymax/Asymax = 0.02), and for
NGC 3031 (ΔAsymax/Asymax = 0.04);

    Seor / (10³ kpc⁻¹ km² s⁻²) = Kk1 Bk1^k1 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) + Iseor
        + (−1)^sk Kk2 Bk2^k2 L / (10⁸ erg s⁻¹) ± 2% ,    (10.38)

and

    Sor / (10³ km² s⁻² kpc⁻¹) = Kl1 Bl1^l1 L / (10⁸ erg s⁻¹)
        + (−1)^sl Kl2 Bl2^l2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± 2% .    (10.39)
If the measured value of a parameter is larger than the first approximation
calculated value, the sign sx = 0 in Eqs. (10.28)–(10.39). If the measured
value of a parameter is smaller than the first approximation calculated
value, the sign sx = 1 in Eqs. (10.28)–(10.39). The second approximation
integer values are listed in Tables 10.2 and 10.7.

The calculated RCs of the above equations are plotted as solid lines in
Fig. 10.1 for the select galaxies.
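The sx sign rule can be expressed compactly. The sketch below is schematic only — the K2 and B2 values are placeholders, not the book's fitted constants.

```python
def second_approximation(first_approx, measured, K2, B2, x2, Kao):
    """Apply the second-approximation correction with the sign rule of
    Eqs. (10.28)-(10.39): s_x = 0 (add the correction) when the measured
    value exceeds the first approximation, s_x = 1 (subtract) otherwise."""
    s = 0 if measured > first_approx else 1
    return first_approx + (-1) ** s * K2 * B2 ** x2 * Kao

# Placeholder constants; measured > first approximation, so the
# correction is added (s = 0).
val = second_approximation(first_approx=10.0, measured=12.0,
                           K2=0.15, B2=2.00, x2=3, Kao=1.0)
```

The sign bit simply records on which side of the first-approximation line the galaxy's measured value falls, so the neighbor term always moves the prediction toward the measurement.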
Because barred galaxies were part of the sample, the monopole Source
suggests the field is not sourced by matter as gravity is. The Source appears
as a monopole at the beginning of the RR that is a few kpc from the center
of the galaxy.
10.3 Discussion
The second approximation equations show little effect of neighboring galaxies
on the H i RC except when the orientation and closeness of luminous galaxies
produce a higher |K⃗·a⃗|. Although the neighboring galaxy's effect on the RC
is less significant, it does induce small perturbations in the orbits of
particles that cause the observed asymmetry. Because the ms/m is smaller
for higher metallicity particles, the effect of neighboring galaxies on the Hα
RC is less. Therefore, the finding of cluster effects on Hα RCs is subject to
the sample selected and may be detected only when |K⃗·a⃗| is large.

A large |K⃗·a⃗| requires the host galaxy be oriented appropriately and
requires a large |a⃗|. A large |a⃗| is present near the cluster center or where
the near galaxies are large and asymmetrically arranged around the host
galaxy. The former condition is caused by the ε < 0 of the galaxies in
the cluster center surrounded by the ε > 0 of the galaxies in the cluster shell.
Therefore, the SPM is consistent with the finding of Dale et al. (2001). The
latter condition may be caused by a large, near neighbor galaxy.

The SPM's (Gmg Mg − Gs ms L)/(m R²) term has the form of a subtractive
change in Newtonian acceleration. The SPM suggests the ms change with
radius causes the species of matter to change with radius, which causes the
appearance of a gravitational acceleration change with radius that is related
to L. If the |K⃗·a⃗| term is small, the SPM effective force model reduces to
the MOND model as was suggested in Hodge (2006a) for elliptical galaxies.

The deviation of the data of NGC 5448 from Eq. (10.13) suggests a
physical mechanism behind the quantized galaxy parameters. The clear
departure from circular motion and the significant mass transfer inward
(Ṙ ≠ 0) found by Fathi et al. (2005) suggest this galaxy is in transition
from one virial state to another. Further, the noted stellar and gas velocity
difference decreases at larger radii. The better fitting of the v²eormax − L
and of the Asymax − |K⃗·a⃗o| relations is the expected result. NGC 3031
shows strong, non-circular motion in the disk (Gottesman et al. 1966). This
suggests the integer variation is caused by the accumulation of mass at
potential barriers such as at R_Δ and Rrrmax. Continued nucleosynthesis
Chapter 11
Because the amplitude and shape of galaxy rotation curves (RCs) correlate
with galaxy luminosity (Burstein & Rubin 1985; Catinella 2006; Hodge
& Castelaz 2003b; Persic 1996), relationships between galaxy central
parameters and large scale galaxy parameters are unexpected by Newtonian
dynamics.

Whitmore et al. (1979) and Whitmore & Kirshner (1981) found the ratio
of the rotation velocity vc (km s⁻¹) in the flat region of the RC and the
central velocity dispersion σc (km s⁻¹) ≈ 1.7 for a sample of S0 and spiral
galaxies. Gerhard et al. (2001) found the maximum circular velocity of
giant, round, and nearly non-rotating elliptical galaxies is correlated to the
σc. Ferrarese (2002) discovered a power law relationship between the circular
velocity vc25 (km s⁻¹) beyond the radius R25 of the 25th isophote and σc for
a sample that also included elliptical galaxies (see her Fig. 1). Baes et al.
(2003) expanded on the data for spiral galaxies with flat and smooth RCs.
NGC 0598 was a clear outlier. The vc25 for NGC 0598 used in Ferrarese
(2002) was 135 km s⁻¹, which is the highest data point of a rising RC (Corbelli
& Salucci 2000). Galaxies with σc < 70 km s⁻¹ (vc25 < 150 km s⁻¹) also
deviate from the linear relation. NGC 4565 was excluded because of warps
in the H i disk. Also, galaxies with significant non-circular motion of the H i
gas such as NGC 3031, NGC 3079, and NGC 4736 were omitted in Ferrarese
(2002). NGC 3200 and NGC 7171 were also excluded from Ferrarese (2002)
1 Portions
the Mc to the mass MDM of the dark matter halo thought to be around
spiral galaxies is a positive value that decreased with MDM. The Mc can
be distributed over the central volume with a density of at least 10¹² M☉
pc⁻³ (Dunning-Davies 2004; Ghez et al. 1998, 2000, 2003b). The orbits of
stars closest to the center of the Galaxy are approximately 1,169 times the
Schwarzschild radius of a supermassive black hole (SBH) thought to be at
the center of the Galaxy (Ghez et al. 2000; Schodel 2002). The orbits of
stars closest to the center of the Galaxy are following elliptical paths (Ghez
et al. 2000), which suggests a net, attractive central force consistent with the
Newtonian spherical property (Ghez et al. 2003a; Schodel 2003).

That Mc is crowded into a ball with a radius of less than 45 AU is
proven (Ghez et al. 2003b). That the structure of Mc is a SBH is widely
accepted, but unproven (see Kormendy & Richstone 1995 for a discussion).
The Newtonian model implies the Mc must either quickly dissipate or must
quickly collapse into a SBH (Kormendy & Richstone 1995; Magorrian et al.
1998). The long term maintenance of Mc rules out the first possibility.
Mouawad et al. (2004) suggested there is some extended mass around Sgr
A. Observations have ruled out many models of the nature of Mc of galaxies
(Ghez et al. 2003a; Schodel 2003).

Observations inconsistent with the SBH model include shells of outward
flowing, shocked gas around galactic nuclei (Binney and Merrifield
1998, page 595; Konigl 2003). Shu et al. (2003) and Silk and Rees (1998)
suggested a wind (a gas) that exerted a repulsive force acting on the cross
sectional area of particles. Therefore, denser particles such as black holes
move inward relative to less dense particles. Less dense particles such as
hydrogen gas move outward. Other observations inconsistent with the
central SBH model include the apparent inactivity of the central SBH
(Baganoff et al. 2001; Baganoff 2003a; Nayakshin and Sunyaev 2003;
Zhao et al. 2003) and the multitude of X-ray point sources, highly ionized
iron, and radio flares without accompanying large variation at longer
wavelengths reported near the center of the Milky Way (Baganoff et al. 2001;
Baganoff 2003a,b; Binney and Merrifield 1998; Genzel et al. 2003; Zhao
et al. 2003; Wang et al. 2002).

The Mc correlation with Blue band luminosity Lbulge of the host galaxy's
bulge (Kormendy & Richstone 1995) has a large scatter. The Mc ∝
σc^α, where α varies between 5.27 ± 0.40 (Ferrarese & Merritt 2000) and
3.75 ± 0.3 (Gebhardt et al. 2000a). The Mc − σc relation appears to hold for
galaxies of differing Hubble types, for galaxies in varying environments, and
for galaxies with smooth or disturbed morphologies. Tremaine et al. (2002,
    = K1 B1^I1 L / (10⁸ erg s⁻¹) + (−1)^s K2 B2^I2 |K⃗·a⃗o| / (10³ kpc⁻¹ km² s⁻²) ± e,    (11.1)
Section 11.3.
11.1 Sample
The galaxies used in the calculations were those used in Hodge & Castelaz
(2003b). That is, they were selected from the NED database2. The selection
criteria were that the heliocentric redshift zh be less than 0.03 and that
the object be classified as a galaxy. The parameters obtained from the
NED database included the name, equatorial longitude Elon (degrees) for
J2000.0, equatorial latitude Elat (degrees) for J2000.0, morphology, the
B-band apparent magnitude mb (mag.), and the extinction Ext (mag.) as
defined by NED. The galactocentric redshift z was calculated from zh.

The σc, the 21-cm line width W20 (km s⁻¹) at 20 percent of the peak,
the inclination in (arcdegrees), and the position angle pa (arcdegrees) for
galaxies were obtained from the LEDA database3 if such data existed.

The host sample galaxies with σc, mb, W20, in, and pa values were (1)
those used in Hodge & Castelaz (2003b) and Swaters et al. (1999), (2) those
used in Ferrarese (2002), and (3) those specifically excluded from Ferrarese
(2002). A total of 82 host sample galaxies were used for the σc calculation.
Of the host galaxies, 60 are Source galaxies and 22 are Sink galaxies. Tables
11.1 and 11.2 list the host galaxies used in the σc calculation. Table 11.2
lists the 29 host galaxies used in the Mc calculation.
The distance D (Mpc) data for the 29 host sample galaxies used in the Mc
calculation were taken from Merritt & Ferrarese (2001b). The D to nine host
galaxies was calculated using Cepheid stars from Freedman et al. (2001) and
Macri et al. (2001). The D to NGC 3031 and NGC 4258 were from Merritt
& Ferrarese (2001b) rather than from Freedman et al. (2001). The D to the
remaining host sample galaxies was calculated using the Tully-Fisher relation
with the constants developed in Hodge (2006a). The remaining galaxies
from the NED database were neighbor galaxies. The D of these galaxies
was calculated from the relative z and D of the host galaxy as described by
Hodge & Castelaz (2003b). The L for the galaxies was calculated from D,
mb, and Ext.

This host galaxy sample has LSB, medium surface brightness (MSB),
and HSB galaxies; includes LINER, Sy, HII, and less active galaxies; field
2 The Ned database is available at http://nedwww.ipac.caltech.edu. The data were obtained from
NED on 5 May 2004.
3 The LEDA database is available at http://leda.univ-lyon.fr. The data were obtained from LEDA
on 5 May 2004.
Table 11.1: Data for the host sample galaxies used in the σc calculation.

Galaxy    Morph.                  L^a      |K⃗·a⃗|^b   σc^c   m1   m2   Δσc/σc
IC 0342   SAB(rs)cd HII           4.014    d          74     0    d    -0.11
IC 0724   Sa                      1.796    0.057      246    5    11   -0.02
N 0224    SA(s)b LINER            1.125    20.414     170    5    6    0.03
N 0598    SA(s)cd HII             0.219    27.936     37     2    1    0.01
N 0701    SB(rs)c Sbrst           0.425    1.198      73     3    6    0.02
N 0753    SAB(rs)bc               1.255    0.584      116    3    6    0.00
N 0801    Sc                      1.162    0.753      146    4    7    0.00
N 1024    (R)SA(r)ab              1.714    0.713      173    4    8    -0.02
N 1353    SA(rs)bc LINER          1.014    0.386      87     3    9    0.17
N 1357    SA(s)ab                 2.063    0.270      124    3    10   0.08
N 1417    SAB(rs)b                1.580    0.613      140    3    8    0.05
N 1515    SAB(s)bc                0.745    1.896      101    4    7    -0.11
N 1620    SAB(rs)bc               1.120    0.446      124    4    9    -0.11
N 2639    (R)SA(r)a ? Sy1.9       4.588    0.023      198    3    13   0.04
N 2742    SA(s)c                  0.750    2.353      66     2    4    -0.01
N 2775    SA(r)ab                 2.987    0.103      176    3    7    0.00
N 2815    (R)SB(r)b               2.217    3.306      203    4    6    0.01
N 2841    SA(r)b;LINER Sy1        1.822    0.524      206    4    9    0.04
N 2844    SA(r)a                  0.602    0.055      110    4    9    -0.01
N 2903    SB(s)d HII              0.947    0.130      102    3    8    0.02
N 2998    SAB(rs)c                1.022    2.551      91     3    6    -0.06
N 3067    SAB(s)ab? HII           0.309    0.002      80     4    12   -0.01
N 3079    SB(s)c;LINER Sy2        1.300    0.036      146    4    11   -0.07
N 3145    SB(rs)bc                1.529    0.452      169    4    8    0.03
N 3198    SB(rs)c                 0.855    0.272      63     2    7    -0.08
N 3200    SA(rs)bc                1.930    0.009      177    4    14   0.12
N 3593    SA(s)0/a;HII Sy2        0.263    0.059      54     3    8    0.01
N 4051    SAB(rs)bc Sy1.5         2.166    0.653      84     1    7    -0.04
N 4062    SA(s)c HII              0.518    0.869      93     4    7    0.00
N 4216    SAB(s)b HII/LINER       1.731    0.175      207    4    11   -0.05
N 4321    SAB(s)bc;LINER HII      2.209    2.613      86     1    5    0.04
N 4378    (R)SA(s)a Sy2           6.049    0.120      198    2    11   -0.04
N 4388    SA(s)b sp Sy2           0.791    4.067      115    4    6    0.04
N 4414    SA(rs)c? LINER          1.294    2.158      110    3    6    0.01
N 4448    SB(r)ab                 1.163    1.000      173    5    9    -0.07
N 4548    SBb(rs);LINER Sy        1.087    0.002      144    4    12   0.00
N 4565    SA(s)b? sp Sy3 Sy1.9    1.655    12.922     136    3    4    0.01
N 4647    SAB(rs)c                0.748    0.134      98     3    9    0.05
N 4698    SA(s)ab Sy2             1.396    3.587      133    3    6    0.05
N 4736    (R)SA(r)ab;Sy2 LINER    1.230    1.092      104    3    7    -0.02
N 4866    SA(r)0+ sp LINER        1.904    0.416      210    4    10   -0.08
N 5033    SA(s)c Sy1.9            1.302    0.937      131    3    8    0.00
N 5055    SA(rs)bc HII/LINER      1.383    3.582      101    3    7    0.21
N 5297    SAB(s)c sp              0.956    1.432      119    4    8    0.11
N 5457    SAB(rs)cd               2.129    0.370      73     1    7    -0.05
N 6503    SA(s)cd HII/LINER       0.197    0.303      46     3    5    -0.01
N 6814    SAB(rs)bc Sy1.5         0.036    2.830      112    9    7    0.05
N 7171    SB(rs)b                 1.006    0.457      84     2    8    -0.04
N 7217    (R)SA(r)ab;Sy LINER     2.117    0.918      127    3    8    -0.11
N 7331    SA(s)b LINER            1.570    0.711      138    3    8    -0.01
N 7506    (R)SB(r)0+              0.741    0.051      147    5    12   0.15
N 7537    SAbc                    0.693    146.761    78     3    1    -0.01
N 7541    SB(rs)bc pec HII        1.397    125.361    65     1    -1   0.01

a Unit: 10⁸ erg cm⁻² s⁻¹.
b Unit: 10³ kpc⁻¹ km² s⁻².
c Unit: 10³ km² s⁻².
d This galaxy has a z value too small to obtain the |K⃗·a⃗| measurement.
Table 11.2: Data for the host sample galaxies used in the σc and Mc calculations.

Galaxy    Morphology    L^a      |K⃗·a⃗|^b   σc^c   Mc^d     m1   m2   Δσc/σc   n1   n2   ΔMc/Mc
I 1459    E3            3.757    0.809      306    4.600    4    10   0.00     5    10   -0.02
N 0221    cE2           0.021    e          72     0.039    8    e    0.03     6    e    0.05
N 2787    SB(r)0+       0.122    2.230      194    0.410    8    8    -0.04    7    2    -0.01
N 3031    SA(s)ab       1.047    0.352      162    0.680    4    10   -0.11    4    6    0.01
N 3115    S0            1.187    322.121    252    9.200    6    11   0.05     9    13   -0.03
N 3245    SA(r)0        0.956    6.161      210    2.100    5    7    -0.02    6    5    -0.04
N 3379    E1            0.979    14.051     207    1.350    5    6    -0.05    6    3    -0.01
N 3608    E2            1.164    0.708      192    1.100    5    9    0.05     5    6    -0.01
N 4258    SAB(s)bc      1.188    1.536      134    0.390    4    8    0.07     3    2    0.02
N 4261    E2-3          2.972    10.588     309    5.400    5    7    0.04     6    1    0.00
N 4342    S0            0.121    0.108      251    3.300    9    11   0.01     11   5    -0.01
N 4374    E1;LERG       3.595    9.080      282    17.000   4    7    -0.06    8    7    -0.02
N 4473    E5            0.941    0.842      179    0.800    5    8    0.04     5    7    0.01
N 4486    E+0-1         5.075    0.084      333    35.700   4    12   -0.02    8    19   0.00
N 4564    E6            0.362    56.704     157    0.570    6    0    0.00     6    -3   -0.01
N 4649    E2            3.699    1.337      335    20.600   5    9    -0.07    8    9    0.00
N 4697    E6            1.291    1.994      174    1.700    4    7    0.07     6    8    0.07
N 5128    S0 pec        0.655    1.739      120    2.400    4    6    0.04     7    7    0.00
N 5845    E             0.368    0.314      234    2.900    7    10   -0.02    9    11   0.01
N 6251    E;LERG        4.990    0.016      311    5.900    4    10   0.00     5    17   0.03
N 7052    E             3.284    0.232      270    3.700    4    10   0.04     5    11   -0.02
N 3384    SB(s)         0.690    2.424      148    0.140    5    7    0.03     2    -4   0.00
N 4742    E4            0.193    7.798      109    0.140    6    5    0.04     4    -1   -0.02
N 1023    SB(rs)0       0.625    2.266      204    0.440    6    6    0.00     4    3    0.01
N 4291    E3            0.816    1.414      285    1.900    7    9    -0.11    7    8    -0.06
N 7457    SA(rs)0       0.323    0.900      69     0.036    4    7    0.00     1    -2   0.02
N 0821    E6?           1.518    0.329      200    0.390    5    10   -0.15    3    8    -0.04
N 3377    E5-6          0.454    0.442      139    1.100    5    8    0.04     7    9    -0.03
N 2778    E             0.253    2.230      162    0.130    7    7    -0.10    4    1    0.01

a Unit: 10⁸ erg cm⁻² s⁻¹.
b Unit: 10³ kpc⁻¹ km² s⁻².
c Unit: 10³ km² s⁻².
d Unit: 10⁸ M☉.
e This galaxy has a z value too small to obtain the |K⃗·a⃗| measurement.
and cluster galaxies; galaxies with rising, flat, and declining RCs; and galaxies with varying degrees of asymmetry. The host sample includes NGC 0598, NGC 3031, NGC 3079, NGC 3200, NGC 4565, NGC 4736, and NGC 7171, which were excluded from Ferrarese (2002); six galaxies with c < 70 km s^-1; and galaxies that Pizzella et al. (2005) would exclude.
11.2
Results
Applying the same procedure used by Hodge & Castelaz (2003b) for finding the parametric equations yields:

\[
\frac{c^2}{10^3\ \mathrm{km^2\,s^{-2}}}
= K_1 B_1 \left(\frac{L}{10^8\ \mathrm{erg\,s^{-1}}}\right)^{m_1}
+ (-1)^{s} K_2 B_2 \left(\frac{|\vec{K}\cdot\vec{a}_{\mathrm{o}}|}{10^{-3}\ \mathrm{kpc^{-1}\,km^2\,s^{-2}}}\right)^{m_2}
\pm 6\%
\tag{11.2}
\]
and

\[
\frac{M_c}{10^8\ M_\odot}
= K_{M1} B_{M1} \left(\frac{L}{10^8\ \mathrm{erg\,s^{-1}}}\right)^{n_1}
+ (-1)^{s_M} K_{M2} B_{M2} \left(\frac{|\vec{K}\cdot\vec{a}_{\mathrm{o}}|}{10^{-3}\ \mathrm{kpc^{-1}\,km^2\,s^{-2}}}\right)^{n_2}
\pm 3\%.
\tag{11.3}
\]
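The two-term parametric form of Eqs. (11.2) and (11.3) can be sketched numerically. In the sketch below every constant, exponent, and input value is an illustrative placeholder, not a fitted value from Hodge & Castelaz (2003b):

```python
# Sketch of evaluating the parametric relations of Eqs. (11.2) and (11.3).
# All constants and exponents below are illustrative placeholders, not the
# fitted values of Hodge & Castelaz (2003b).

def c_squared(L, K_dot_a, K1=1.0, B1=1.0, m1=4, s=0, K2=1.0, B2=1.0, m2=8):
    """c^2 in 10^3 km^2 s^-2, given L in 10^8 erg s^-1 and
    |K.a_o| in 10^-3 kpc^-1 km^2 s^-2."""
    return K1 * B1 * L ** m1 + (-1) ** s * K2 * B2 * K_dot_a ** m2

def central_mass(L, K_dot_a, KM1=1.0, BM1=1.0, n1=5, sM=0,
                 KM2=1.0, BM2=1.0, n2=10):
    """Mc in 10^8 solar masses, same input units as c_squared."""
    return KM1 * BM1 * L ** n1 + (-1) ** sM * KM2 * BM2 * K_dot_a ** n2

# With unit inputs each term contributes K*B = 1:
print(c_squared(1.0, 1.0))     # 2.0
print(central_mass(1.0, 1.0))  # 2.0
```

Reading the integer columns m1, m2, n1, n2 of the tables as per-galaxy exponents, and the sign factors (-1)^s as selecting whether the |K·a_o| term adds or subtracts, is an assumption of this sketch.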
11.3
Discussion
The SPM speculates that the structure of the central mass and the structure of stellar nuclear clusters are the same. The suggested CMO structure is a central Source of a matter-repulsive force Fs ∝ R^-1, where R is the galactocentric radius, surrounded by a spherical shell of matter. The SPM suggests ε ∝ L, where ε is the Source strength, and, therefore, the Fs at a given R acting on the cross section ms of matter also scales with L. Therefore, the density (ms/mi), where mi is the inertial mass, of particles at a given radius varies with L. Therefore, the galaxies with larger L will have more mass in the central shell to balance the higher Fs with the gravitational force Fg. Therefore, the SPM naturally leads to the smoothness of the Mcmo-Mgal relation for the full range of CMO spiral galaxies.
If this speculation is essentially correct, then the correlation of central parameters with spiral galaxy global and RC parameters suggests not only a similar galaxy formation process but also a self-regulatory, negative-feedback process continually occurring. Feedback processes have been suggested in several recent studies of galaxies with CMOs (e.g. Li et al. 2006; Merritt & Ferrarese 2001b; Robertson et al. 2006). I further speculate that ε is the control of the negative feedback process. If the mass of the CMO increases, Fg increases and mass migrates inward. At very high ε, the high repulsive Fs compresses matter, the mass (black hole) cracks like complex molecules in the high heat and pressure of a fractional distillation process, and matter is reclaimed as radiation and elementary particles that form hydrogen. This accounts for the large amount of hydrogen outflowing from the Galaxy center and the shocked gas near the Galaxy center. A single black hole reclamation event is consistent with the periodic X-ray pulses from the Galaxy center. Further, the feedback loop controlled by ε is the connection among the central parameters, outer RC parameters, and the global
Chapter 12
12.1
Model
Posit energy Qin is injected into our universe (U) through Source portals from hot, thermodynamic reservoirs (HRs). The macro thermodynamic processes were considered the same for all Sources. The Qin flows away from the Source. Some matter remains near the Source as a galaxy and some matter is removed from the Source galaxy by Fs. Gravitational forces Fg cause the matter removed from galaxies to concentrate in a Sourceless galaxy. Eventually, enough matter becomes concentrated to initiate a Sink. Because a minimum amount of matter around a Sink is required to initiate a Sink, a minimum amount of energy Qk and matter in a cell is also required. The Sink ejects energy Qout out of U through Sink portals to cold, thermodynamic reservoirs (CRs). The Sink strength depends on the amount of matter in the local Sink volume. This is a negative feedback mechanism that tightly controls the energy Qu = Qin − Qout in U. Therefore,
\[
Q_u \propto \int_0^{\mathrm{now}} \left( \sum_{i=1}^{N_{\mathrm{sources}}} |\epsilon_i| - \sum_{k=1}^{N_{\mathrm{sink}}} |\eta_k| \right) \mathrm{d}t,
\tag{12.1}
\]
where t is time since the start of U; i and k are indexes; ε_i and η_k are the Source and Sink strengths; Nsources and Nsink are the total number of Sources and Sinks, respectively, in U; and |·| means absolute value.
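The bookkeeping of Eq. (12.1), with the Sink strength tied to the accumulated energy as a negative feedback, can be illustrated with a toy discrete integration. The linear feedback law and every coefficient below are invented for illustration only:

```python
# Toy discrete version of Eq. (12.1): Q_u grows by the total Source strength
# and shrinks by the total Sink strength, with the Sink strength proportional
# to the accumulated Q_u -- a negative feedback.  All numbers are invented.

def integrate_Qu(epsilon_total=1.0, feedback=0.1, dt=0.1, steps=1000):
    Qu = 0.0
    history = []
    for _ in range(steps):
        eta_total = feedback * Qu      # Sink strength tracks accumulated matter
        Qu += (abs(epsilon_total) - abs(eta_total)) * dt
        history.append(Qu)
    return history

h = integrate_Qu()
# Q_u relaxes to the equilibrium of Eq. (12.2): eta_total = epsilon_total,
# i.e. Q_u -> epsilon_total / feedback = 10 here.
print(round(h[-1], 2))  # 10.0
```

The tight control described in the text corresponds to this relaxation toward the balance point of Eq. (12.2).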
Thermodynamic equilibrium for U is when

\[
\sum_{i=1}^{N_{\mathrm{sources}}} |\epsilon_i| = \sum_{k=1}^{N_{\mathrm{sink}}} |\eta_k|.
\tag{12.2}
\]
the elliptical galaxy to the Sink, and time to change . Therefore, the cells
are not in internal, thermal equilibrium.
For simplicity, consider only one cell and posit: (1) The cell consists of a distribution of Sources around the core of Sinks. The core Sinks were considered a monopole (point) Sink. (2) A Gaussian surface may be constructed enclosing the volume wherein all matter flows to the Sink core. The Gaussian surface is the border of the cell.² (3) The temperature v within a cell is a function of distance x from the Sink core and t [v = v(x, t)]. (4) The volume of the cell may be characterized by a linear dimension l, wherein l is the x of the Gaussian surface and a minimum. The cells are not required to share boundaries. The transparency of intercell space supports this assumption. (5) The matter in a cell is gained and lost only at the Sources and Sinks of the cell, respectively. That is, the matter flux across the Gaussian surface is zero at all points of the surface. (6) Only radiation and ρ may leave the cell. (7) The x to the outermost Source is less than l. (8) The v is proportional to the matter density in a cell. (9) The intragalactic medium cluster observations and D 1 suggest the diffusion (heat) equation applies to the flow of matter from Sources to Sinks. (10) The initial temperature of U is zero everywhere. (11) The Sink feedback control mechanism is the amount of matter around the Sink that controls the amount of radiation Q(t) per unit area per unit time emitted from the cell through the Gaussian surface. Because the matter transport from the Sources to the Sink core is considerably slower than the speed of light, the matter transport and cooling (conductivity K) are the time-determining factors of Q(t). (12) Because only vl = v(l, t) was of concern, the initial condition of the distribution of the Sources and Sinks was ignored. Therefore, the v(x, t) for values of x ≠ l was not calculated. (13) The Q(t) is proportional to the departure of vl from V. Thus, Q(t) = C(V − vl), where C is a function of the rate of matter input of the Sources in a cell and was considered a constant. (14) The radiant energy and ρ from other cells influence K. Because only one cell was considered, K was considered a constant. (15) The boundary conditions are

\[
K\,\frac{dv(0,t)}{dx} = C\,(V - v_l), \qquad v(x,0) = 0.
\tag{12.3}
\]

The solution of the heat equation for vl with these boundary conditions has been performed [see Carslaw and Jaeger (2000, §15.8, pp. 407–412)].
² This is a redefinition of a cell. Hodge (2006a) defined a cell with equal Source and Sink strengths.
Figure 12.1 is a plot of vl/V versus kt/l² for a stable value of kl, where k = C/K is a positive constant.
U is the sum of all cells plus the ρ and radiation in the space among cells. There is no other external energy in U.
Posit each cell is a radiator, as in a black box, and all the cells are at the same vl. The redshift of photons is caused by a loss of energy from the photon to the universe caused by the ρ field (Hodge 2006a). The lost photon energy must remain in U and be reabsorbed by U. Full thermalization requires such an emission and absorption process. Therefore, one of two possibilities exists: (1) All cells were formed at the same time and follow identical evolution paths. That is, there is a universal time clock. (2) A communication exists to equalize the temperature of each cell with other cells by means of a feedback mechanism. For example, the ρ field from a neighboring cell may change the Fs in a cell, which changes the rate of matter transport, hence k. The latter may provide a mechanism for the cosmic jerk suggested by Riess et al. (1998).
When vl > V , there is excess Qu . As vl decreases to values less than
V , the excess Qu is removed from U. Therefore, vl converges (hunts)
to V after a number of cycles that depend on kl. If the value of kl is
too low, vl < V always. If kl is too high, the hunting will diverge and
the universe will be unstable. The findings of Riess et al. (1998) suggest
vl is oscillating about V after Qk is established. Therefore, the process of
increasing Qu above Qk is reversible and U is behaving as a thermodynamic,
Carnot engine at 100% efficiency.
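The hunting of vl about V can be illustrated with a lumped sketch of assumptions (11)-(15): the energy input responds to the departure of vl from V only after a matter-transport delay. This delay model and its parameters (gain, tau, the time step) are invented stand-ins for the full Carslaw and Jaeger solution plotted in Figure 12.1:

```python
# Lumped sketch of the cell's feedback behavior: the drive on v_l responds to
# v_l only after a transport delay tau, so v_l overshoots ("hunts" about) V
# before settling.  The delay model and all parameters are invented for
# illustration; the book uses the full heat-equation solution instead.

def simulate_vl(gain=1.0, tau=1.0, V=2.718, dt=0.01, T=60.0):
    steps = int(T / dt)
    lag = int(tau / dt)
    v = [0.0] * (steps + 1)            # initial condition: v_l = 0
    for n in range(steps):
        v_delayed = v[n - lag] if n >= lag else 0.0
        v[n + 1] = v[n] + gain * (V - v_delayed) * dt
    return v

v = simulate_vl()
overshoot = max(v)    # hunting: v_l passes above V before converging
final = v[-1]         # long-time value settles back to V
print(round(overshoot, 2), round(final, 2))
```

With this gain and delay the hunting is damped (vl converges to V); a larger gain-delay product makes the oscillation diverge, matching the unstable high-kl case described above.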
The Kelvin temperature scale is defined for a Carnot engine such that two Kelvin temperatures are to each other as the energies (heats) absorbed and rejected by U,

\[
\frac{Q_{hr}}{Q_u} = \frac{V_{hr}}{V} = \frac{Q_u}{Q_{cr}} = \frac{V}{V_{cr}},
\tag{12.4}
\]
where Qhr and Qcr are the energy in the HR and CR, respectively; Vhr and Vcr are the time-average temperatures of the HR and CR, respectively; and the units of V, Vcr, and Vhr are Kelvin.
The amount of heat in each reservoir is proportional to the amount of
heat in the previous reservoir. Also, if the zero point of the Kelvin scale for
U is defined as Vcr , then
\[
V = e\ \mathrm{K} = 2.718\ldots\ \mathrm{K}.
\tag{12.5}
\]
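Equation (12.4) says the reservoir heats and temperatures form a geometric sequence. A toy numerical check follows; choosing the common ratio to be e, so that the scale of Eq. (12.5) emerges with Vcr = 1, is an assumption of this sketch:

```python
import math

# Eq. (12.4) as a geometric sequence: each reservoir's heat (and temperature)
# is a fixed multiple r of the previous one.  Taking r = e and V_cr = 1 is an
# invented illustration of the zero-point choice behind Eq. (12.5).
r = math.e
V_cr = 1.0
V = r * V_cr
V_hr = r * V

Q_cr, Q_u, Q_hr = 1.0, r, r * r   # heats in the same proportion

# Every ratio in Eq. (12.4) is then the same:
assert abs(Q_hr / Q_u - V_hr / V) < 1e-12
assert abs(Q_u / Q_cr - V / V_cr) < 1e-12
print(V)  # 2.718... as in Eq. (12.5)
```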
Figure 12.1: Behavior of vl with feedback control for intermediate values of kl.
12.2
Conclusion
Chapter 13
allow the necessary Competition without war and must have a means to
match the population with the food supply. The Way suggests the models
on which to organize humanity are present today. The present governing
systems and moralities appear to have reached their limits. The life Spirit
requires a new morality and governing system to progress to the next step.
The new morality must cause Change that will result in abolishing war.
The universe is changing. The Spirit that Changes the universe of physics
also Changes the universe of the Spirit.
The Fundamental Principles of life, social structures, and physics are
the same.
What are you? How will you survive the next 1000 years?
13.1
Basis of morality
Survival is not preordained. Humanity's philosophy and morals have not solved the survival problem for either Individuals or our species. Of all species known to have lived, less than one percent still survives. Nature is against us.
Survival is a difficult goal to achieve. If it is a simple goal, why hasn't it been achieved? Why haven't philosophers agreed? Why is there disagreement on how to survive? Why isn't there only one religion?
If there is an underlying moral of nature, it is to survive. Morals in conformity to nature help their observers survive. The morals that allow survival are not to be made by humans. Rather, they are to be discovered by humans.
The test of any moral proposition is: does it aid or hinder survival? If it does not aid survival, then it hinders survival because it consumes Resources.
Knowledge, mind and brain
Biological life is an extension of the chemical world via the chemical DNA.
This adaptation allowed faster Change and better use of Resources than the
chemicals could do on their own. Animals in societies are also a continuation
of the limits DNA imposes. Fewer genes are required to meet the challenge
of a wider range of environments. Instinctive or DNA Changes to do this
Nature of death
Death is an adaptation of life to develop and gain strength. The members of the mineral world are very long-lived. This means that Change occurs over millions if not billions of years. Life's adaptation of death allows Change to occur much faster. Death, therefore, helps life meet nature's conditions better than everlasting life for an Individual.
That species or Individuals die is neither tragic nor immoral. The death
or crippling of a child may be shocking to human sensibilities. So humans
think the laws of nature are villainous. However, nature is neither good nor
bad. Nature merely is. Nature does not need to follow mere human ideas
of morality. Change will occur. That species die is part of the increase
of the life Spirit of all life. As the death (selection) of Individuals helps
make room and strengthen the Spirit of groups and species, the continued
modification of and death of species strengthens life.
What can be the goal of an Individual or a group if its destiny is to
die and be replaced? The basic argument for a supreme being (God) is
that only a being of superabundant Wisdom and Resources could have the
leisure and humor to create people and set them on the path of becoming
Gods. The goal is to be one more step along a very long path.
Life must follow nature's rules if it is to continue to exist. This applies to the big such as the dinosaur and to the beautiful such as the passenger pigeon.
Nature's rules are at least very difficult for us to know. They may be unknowable. Current physics holds that the early seconds of the universe had very different forces than now. As new organizations of societies are created, new rules of behavior are created. Therefore, nature's rules are constantly Changing.
Meaning of life
Life and all the alleged political rights people have claimed for themselves
are not options for us. Life itself is not an option. Life has to strive for life.
Life costs energy. Death is the only option.
Nature has neither read nor endorsed the American Declaration of Independence, nor the French Declaration of the Rights of Man. We are born
unequal and confined. As civilization grows, inequality also grows. Nature
smiles on the process of selection from diversity.
Current physics holds that the earth, the solar system, and the universe
must eventually die. The only questions preoccupying physicists are when
and how the universe will end. Humans' nobler goal should be to identify that path of Spiritual development leading to a Spirit that outlives the universe.
This message is the exhilarating imperative to live. The only question
is how?
Free will and problem solving
Homo sapiens is just one species among many that have evolved and survived until now. There are other species that have survived longer and, therefore, are more worthy. The study (man's method of survival) of the methods of these more worthy species can offer insights about the nature of nature and about the moral methods we may use.
The study of survival methods yields human ideas of morals, philosophy, and religion. If any conclusion can be drawn from human history, it is that the development of some human views of morality (of grace, of beauty, of faith, of good deeds, etc.) does not yield survival. In fact, it sometimes seems that immoral behavior (war, unfairness, Change, etc.) yields survival. There are only a few concepts that have survival value. Homo sapiens must act consistent with such concepts to survive. These concepts are called the 7 Great Ideas. The pursuit of all the Great Ideas will yield conflicts among them. Therefore, the pursuit must also balance the effort among them.
There are many methods recognized as methods to solve problems. The study, or formal approach, so cherished by Homo sapiens is only one means of solving relatively low-complexity, slow-changing problems. So Homo sapiens has slowly found some Truths. These helped reduce the complexity of problems and so yield the survival problem up to formalistic approaches.
People can plan only to the limit of their forecast ability, of their Understanding. Beyond that limit, all species approach the survival problem by trial and error. So Homo sapiens establish morals and religions. Some survive, most die. Mystically, the greater glory of God is discovered.
Humans cope by determining the horizon of forecast ability, and should
plan within this horizon. Should a plan require a time longer than their
forecasting ability (Understanding), they are really using an experimental
method and run the risk of failure through unforeseen causes. Consider
the economic and political crises afflicting the U.S. and the world during
the past centuries. All have taken us unaware. World War II could have
been prevented had the ending of World War I been approached with more
Wisdom.
[Table: problem-solving methods arranged by problem Complexity (high/low) and Rate of change (slow/fast). The methods listed are Experimental, Focus, Formal, Analogy, Representation, and Piecemeal under slow change, and Serendipity, Synergy, and Parallelism under fast change; the assignment to complexity levels is not recoverable from this extraction.]
state, the feeling of living in a state of oneness with your mother, your religion, your God, the universe is part of Change - Competition.
The "I'm in the milk and the milk is in me" analogy is known to lovers in sex (not Love) with each other, saints, psychotics, druggies, and infants. It is called bliss. I think it no mistake that some of the people in bliss are very low on the life scale (infants, druggies, psychotics, lovers seeking procreation). The longing for union is a longing to return, not to progress.
Life progression is the becoming of an Individual Spirit. Adam and Eve were the infants who by their own act chose to leave Eden. God arranged it thus. God spent considerable effort to plant the apple tree, to tell Adam and Eve of this, and to both forbid and entice them. He didn't have to tell them of the benefits. So they could feel guilt and know there was no return - a prime requirement for growth. Therefore, Eden can be regarded not as connoting paradise but as connoting irresponsible dependence. Eden symbolizes a life Spirit incapable of survival.
The loss of paradise and dependence can be suffered positively if the life force (Spirit) itself initiates the Change. To be thrown out before one is ready is to be permanently in an environment where the Spirit's survival mechanism doesn't function. Thus, the Spirit has a permanent psychosis. The cure is to go back and begin again. Such restarting is common in species, Individuals, and groups. The progress of Rome was halted and the Dark Ages followed.
The progress of a life Spirit demands that the Individual leave the sameness and progress to a union with different Individuals. Electrons and protons united to form the atom Spirit. First an electron must be an electron
and a proton must be a proton.
Maslow (Abraham H. Maslow, Motivation and Personality, New York: Harper and Row, 1954) defined a set of needs that form a hierarchy. When a lower need is satisfied, a stress for a higher need predominates in an Individual's concern. If a higher need is not satisfied, frustrating stress occurs. This frustration is likely to endanger the Individual's survival in a last effort to gain satisfaction.
The bottom two needs (physiological and security) of Maslow's need hierarchy are easily seen to be directly survival-oriented. The survival orientation of social affiliation and self-esteem is less clearly related to survival. The difference is one of time orientation. The lower two needs affect survival for the extent of months. The latter two needs affect survival from months to decades. The highest need, for self-actualization, may require generations to effect the survival of a group. The need to increase
the likelihood of survival for a longer time is the driving force to the higher stages of Maslow's hierarchy.
The group's need for survival over generations has been internalized into the Individual's need for self-actualization. Those groups whose Individuals did not satisfy the group's long-term interest became less able to Compete.
The length of time over which a set of needs is satisfied is defined as the survival horizon. This definition also includes the set of actions that Individuals or groups use to obtain survival. The set of actions or morals of the group in relation to its resulting survival horizon depends on the Competition. Nazi Germany might well have had a longer survival horizon if its Competitors were tribes or feudal states such as the Europeans found in Africa two centuries ago.
As Maslow's needs are related to the survival horizon, so is the survival horizon related to Individuals' needs. Thus, if an Individual's need is self-esteem, he need only begin to live his life with concern for the effect of each decision of each moment on his well-being for the next few years.
Increasing the survival horizon is not as easy as it sounds. The Individual must live each moment with an Understanding of the results of his every action. He must Understand the reactions of others. Note the stress on each moment rather than on long-term plans. To Understand reactions, knowledge of the interaction of conflicting relations (Spirits) is required. The Individual must know his place in his environment. An Understanding of the Spirit of an environment is mostly unknowable.
Beyond Understanding and self-actualization there is a Godlike need to create. To create, the Individual Spirit must not only have Understanding but also have Hope and goals, and the Wisdom to Change the Spiritual interactions.
The turnaround or crisis manager in the business management world has
a reputation for rapid action. If this were his only requirement, competent
crisis managers would be more common. This survival manager must also
have an uncommon Understanding of the likely outcome of decisions.
What an Individual of a Spirit group does in life has a continuing impact on the Spirit of the group life. Individuals need not appear in the history books nor engage in world-shaking enterprises. Each Individual lives in the Spirit as the Individual contributes to the Spirit.
Living adds to greater Spirituality. Upon the death of the Individual, the ability of that Individual to add to the group's Spirit ends. Dying ends the contribution to the Spirit the Individual has made. The Spirit of romantic Love of Tristan and Isolde is gained in life and fixed, never to Change after death. What would have happened had the lovers lived? Their Love would have Changed. Therefore, Wagner had them die at the moment of their reaching their highest Spirituality. The idea of gaining Spirituality in death is not convincing. Through living Love, parts of a Spirit give to the greater survival horizon of the group. Thus, living increases a Spirit's strength.
Who you are is not the Individual now. Tomorrow that Individual will learn and Change. Rather, who you are is what process you have chosen to continue and to advance. This is how you survive for 1000 years.
[Stylized page built from the repeated phrases "REALITY", "TOO LATE", "TODAY", "TOMARROW" (sic), "OUR PROGENY", and the refrain "we must remember / we must change / we must struggle / we may survive"; the original line layout is not recoverable from this extraction.]
13.2
exists and is larger because of the other. The (gravitational) force of the
new Spirit is larger than either of the parts. As this process continues, life
Individuals become larger. The larger new life exerts more force.
The Spirit is stronger if it can encompass more energy. Bringing together
greater amounts of energy means the Spirit becomes bigger. We observe
that, in this state, the Spirit binds the energy less. Suns are created. If
the mass is large enough, it explodes. Therefore, we observe an opposing
mechanism of energy to prevent the Spirit from becoming infinitely large in
the mineral world. Balance is required.
The unity of the atom is that the proton and electron serve to define
only one atom at a time. The atom contracts to share an electron with
another atom and a chemical is formed.
The unity of a society is that the people serve only that society. An
Individual may attempt to serve many societies. If the societies come into
conflict, choices will be made. These choices determine which society survives.
A society Individual includes the animals, plants, minerals and land over
which it can achieve dominion. A society can kill animals for food for other
members. A society can trade with another society any of its Resources
without regard to the survival of the Spirit of the Resource. A society may
commit its Resources to war with another society in an attempt to acquire
more Resources.
When a Spirit reaches a limit, Change allows the formation of another
Spirit that may overcome the limitation of previous Spirits. When a sun
or an atom becomes too large, it explodes. When a society becomes larger
than its Spirit (organizational structure) can contain, it explodes.
Why should any arrangement of matter, any Spirit, be favored over any
other Spirit? The Spirit that survives must be more efficient in the use of
energy. A Spirit that is more efficient must be able to acquire more energy
with less waste than Competing Spirits. With limited Resources, the more
efficient Spirit eventually has more energy and, therefore, has the potential
of surviving longer.
If a Spirit is to survive, it must grow faster than the available Resources
or gain new uses for existing Resources. If a lower growth rate were allowed,
even a minor misfortune would kill the life.
The life Spirit in societies also grows faster than the Resources. Therefore, societies always appear to be more complex than the models can comprehend.
Therefore, for all life there is one condition of existence and two responses of existence. The condition is that there are only limited Resources
available. The two responses are Change and Competition.
Change
A constant of nature is that Change occurs. Even in the world of matter,
energy is being used, the universe is expanding, and matter is continually
changing form.
Change is a building upon previous Spirits and death of Spirits. A baby
Changes into an adult as time passes and it acquires more Resources. The
adult dies.
The types of Change are creation, evolution, revolution, and disintegration. Change is time related. The speed of Change is different for different
Spirits. The speed that DNA can Change is revolutionary for minerals.
The speed that societies can Change is revolutionary for DNA.
Lack of Change is not an option. A Spirit must grow or die because
other Spirits are growing and need Resources.
Creation
Creation is the formation of a new Spirit from other Spirits that will become a part of the new Spirit. Several people may meet and form a new social group, such as the formation of a corporation. Asteroids may attract each other and become one bigger asteroid, a planet, or a sun.
Evolution
Evolution is the formation of a new Spirit by modifying a small part of a
Spirit. The Change from one to another is such that the majority of the
new Spirit is the same as the old Spirit. A planet attracts one asteroid at
a time to become a sun. Part of a baby growing to be an adult is eating.
Little Change occurs each day. The corporation hires new people one by
one.
The faster a Spirit can Change, the more able it is to acquire Resources.
As a Spirit grows, it will become too big. It must either undergo a revolution or disintegration. A caterpillar must become a butterfly through a
revolution. As a corporation grows, it becomes too big for the management
style and structure. Reorganization is required.
Revolution
A revolution is the formation of a new Spirit by rapidly incorporating other
Spirits. This doesnt include killing the old Spirit. It is rapidly building on
the old. The time required to Change differs from one Spirit to another.
The revolution can occur from within by reorganization or from without
by being conquered by another or conquering another. Mass grows by
attracting other mass. It evolves until the mass is big enough to become a
sun. Then a revolution from within occurs and a sun is born. A corporation
may be bought and become a subsidiary of another corporation.
The danger of a rapid Change is that the new Spirit may not be able to
incorporate the new size. If the new size is too big, the Spirit disintegrates.
Disintegration
Disintegration is the death of a Spirit by its components becoming related as less than an Individual. Disintegration may also occur from within or from without. A sun explodes if it becomes too large. A corporation may be bought and dissolved.
Competition
Change adds new forms and uses of Resources. The limited-Resource condition implies there isn't enough for all. Therefore, some process to direct Resources to the more efficient use is necessary. This is Competition.
Competition's place is to select and enlarge the forms of life that are efficient users of Resources. Those less efficient must cease.
One form of being less efficient is to grow at a rate less than or equal to
the rate available Resources will allow. Other Spirits will kill such a Spirit.
The power of Competition is selection rather than disintegration or
death. If no choice exists, there is no selection and no death of less suitable
Individuals. Nature abhors stagnation. Death then is a part of life.
The varieties of Competition are reproduction, repetition, cooperation,
and war.
Reproduction
Reproduction is growth of an existing Spirit by duplication. Because the duplicates use the same type of Resources, they must occupy different areas. Growth
other Spirits. This may mean the warriors will ultimately kill those Spirits unwilling to engage in Competition and Change.
The economics of predation and war, and their place as an acceptable form of Competition, are harsh relative to the morals associated with cooperation. A common theme of human history is to have a less cultured group destroy a more cultured group. Thus, the Mongol herder society conquered many agricultural groups. This is because the agricultural groups had a Spirit that failed to fully comprehend Competition and Change.
Within a society, litigation is a form of war. Resources are spent for the primary purpose of seizing others' Resources.
People's sensibilities may disapprove of war being categorized as a positive process toward life. Life's advance requires diversity and the culling of weaker Spirits. People can be assured that if they don't perform life's goal, nature will with war. So, if people wish to avoid war, then they must do as life dictates. This is difficult to do in society as we know it.
If a society has no warrior class, both its male and female members must die during famine or be conquered by a neighbor. Societies with a soldier/policeman/lawyer class distinct from the producer class can survive only if their neighbors are subject to the same ratio of soldiers to producers or are very isolated. The versatility of requiring all males to be subject to military service could produce more in times of plenty and form a larger military might when required. Standing armies and police forces are tremendous burdens. This has tremendous influence on society's organization and laws. Serf and slave states have tended to fail in war against states that allow most males to be armed.
If war is to be abolished, a method must be found to allow weaker societies to Change or die. The problem of war is the Murder of Individuals. Cooperation and Competition must allow the weaker Individuals to dissolve without taxing the more vital Individuals.
While war is a part of Competition, Murder is not. Murder is the destruction of contributing Individuals' ability to Compete. The slaughter of cattle Individuals is not Murder if the meat is eaten and if the cattle Spirit is maintained by breeding replacements. The slaughter of a steer is not the Murder of the cattle Spirit in a society. The killing of a prey animal is not the Murder of the prey Spirit if the prey Spirit is encouraged to reproduce and thrive.
History has many examples of one society conquering another through
war. So far, so good. Then some conquerors salt the fields, destroy the
plants, and kill the cattle to extinction. After conquering a society, the
ate in finding the most efficient niche positions. Cooperation reduces the Resources used in the Competition and selection process. So long as cooperation helps each Spirit find his most efficient, contributing niche, the group that is cooperating will be stronger. Therefore, cooperation aids Competition. A system not pursuing the goal of more efficient use of Resources is not cooperating and is becoming a weak Competitor. For instance, the systems of welfare and subsidies are not systems of cooperation, although there is organized action, because non-contributing members are not Resources. However, Competition, with or without cooperation, will proceed. It may proceed from within the group or from without.
Predator, prey, parasite, victim
All Spirits feed on other Spirits. Plants feed on minerals. Animals feed on plants. Societies feed on minerals, plants, animals, and other societies. Individuals are incorporated into the predator's Spirit. The Individual's Spirit is destroyed so the predator can acquire the Resources of the Individual. Each Spirit must try to survive. Whether a Spirit survives depends on the larger Spirit of which it is a part, the Spirits to which it is a potential victim, and the Spirits it eats.
Rabbits and foxes cooperate. Predation is a cooperative process between
different Spirits occupying different niches in the use of Resources. The
non-cooperative Competition occurs when Competing Spirits attempt to occupy the same niche.
Humans, like other Spirits at the top of a food chain, are less fortunate than rabbits and foxes. Humans must be held in check by Malthus's agents of famine, pestilence, and war (Thomas Malthus, An Essay on the Principle of Population, 1798). Hope and greater Understanding may refute Malthus.
There are cycles in nature of abundance and scarcity. A population
with abundance can expand to require Resources beyond those available
in scarce periods. The storage of Resources and continued efficient use of
Resources for the scarce periods is one approach to having a larger population. Another approach is to allow the weaker part of the population, as determined by Competition in the ability to contribute, to die during scarcity. If the weaker part is allowed to continue during abundant periods, then more than only the weaker will die during scarce or revolutionary periods. Indeed, the tolerance of the weaker may cause Disintegration from within or without. The storage solution is viable for short cycles. Eventually the latter approach must be used.
allowed to exist, nature will wipe out the whole deer Spirit and start again
on another Spirit.
If an Individual is not contributing to a society, it is not Murder to kill
that Individual.
Balance the responses
The Competition concept leads to the concept of a life cycle: birth, growth, death, birth. The same Spirits exist with differing Individuals. Life needs progress. Change provides that progress, that breaking of the life cycle. The Change concept leads to the concept of advancing toward a better state. If the Change is inefficient, nature starts again.

Competition is the repeating of past occurrences without Change, while Change implies an expansion, a penetrating, a new usage. Competition is a Nurturing of already existing Spirits.
To acquire more energy, a Spirit may Change or may Repeat itself. Each requires energy. Each can further a Spirit's ability to survive. If all energy is devoted to Change, another Spirit may conquer through sheer size. If all energy is devoted to Competition, another Spirit may conquer through efficiency. Therefore, the responses of Change and Competition must be balanced.
At the simplest level, the need to keep order in the universe or in a
society is a need for Competition. The tendency to disorder is the tendency
to Change. Thus, the order/disorder conflict is the Change/Competition
conflict that needs to be balanced for survival.
Indeed, the need to balance the forces of Change and Competition creates the fertile basis for a more efficient approach than mineral and physical
forces allow.
The same laws determine life processes. This is not because the laws or processes of life are identical in different fields (organisms, minerals, the physical, religions, groups, or societies). Rather, it is because the conditions of life processes are the conditions of the functioning of all forms of Spirit and energy. The process at each level of Spirit is different. Speaking of parallel processes for minerals, bacteria, and humans is not justified. However, parallelism is present in the sense that all Spirits must obey the same rules and overcome the same types of forces.
Understanding a physical or Spiritual condition of a life form (process) which we call lower (more primitive) or higher means (1) determining the significance of the processes in the development of the life (past), (2) determining the role that other life phenomena play within the totality of the experiences of the life process under study (present), and (3) determining how that life form is best able to realize its nature to the degree that the life process survives (future). Note the distinctions being made between an Individual's fate and a life process (or Spirit).
But this Understanding, however advanced, is limited. Just determining
the past, present, and future does not mean survival in the presence of
Change. A fourth, timeless factor of a Spirit is necessary. This timeless
factor is the ability to cause Change.
The causing of Change begins with an alteration in the way a life form
realizes its nature. This causes a Change in the role of that life form that
causes a Change in the processes of development of the life form. Out
of these reorientations evolve new niches and Competition results. Bison
hooves cut the grass that helps the grass roots reproduce and that kills
other plants. The bison cause the grassland and the grassland feeds the
bison. This timeless factor that causes the alteration is called Wisdom.
13.3
Solving the survival problems of groups presents very complex issues. The
long-term impact of any philosophical approach can be tested only in the
span of millennia. Even then there are limitations.
Existing biological mechanisms have passed the survival test over very
long periods. Biological systems of survival can show the Spiritual characteristics groups must have to survive. A group is as much a living, breathing
biomass as biological organisms are. The progression from Energy to mineral Life to biological Life to ideologies is not each a separate step. The
growth of a worldview of human kind and human ideologies is merely the
continuation of the increase of the Spirit of Life. Each is fundamentally
Energy and Spirit.
Individuals do only three things
They eat, they breed, and they die. These are biological words. Eating
is the acquiring of Energy into the Spirit. This may be directly acquired
as chemicals develop chemical bonds to make larger molecules. War is one
way for a nation to eat. The U. S. showed another way when it made the Louisiana Purchase, allowed emigration, and killed Indians. The
acquired Energy may be used to Repeat or to build a larger body (Spirit).
anxiety is dealt with in such a manner that a relative balance occurs before the next disturbance, then a slow evolving will occur as Darwin suggests. As more Spirits come into being, as each Changes, or as the environment Changes, the frequency of catastrophic disorders increases. If each disturbance is not dealt with before the next, then the Spirit must completely revolt in a major upheaval oriented toward changing the environment. This is done by eliminating other Spirits' influence, by shrinking its own role, by adjusting to the new order, or by dying. This is a growth crisis.

The only option Life has within its own control is death. A destructive alternative is that a Life Spirit may choose to destroy itself and its Competitor with it. This "Samson syndrome" is seen frequently in history. A strong Life Spirit must allow for and overcome the possible evil of the Competitor's death wish.
Charles S. Elton, a British scientist, wrote a wonderful, sensible, and informative book on this subject, The Ecology of Invasions by Animals and Plants. The biological world has been consistently disproving mankind's view of morality. The various invasions by plants and animals into mankind's ordered world have been very savage. Even the wildest fiction of doom cannot compare with reality. Life is not for the squeamish. The infestations cannot be stopped with poisons or bombs. The only sensible course of action is to Understand the basics of the fight for survival and meet these invasions on their own terms.
Examples in just the last few decades in the Americas are terrifying.
There are fire ants, killer bees, chestnut fungus, the sea lamprey of the
Great Lakes, and many more.
There are also many lessons. Lesson one: very often, a successful invasion is irrevocable. Once a new Life force is established, it will require a new Life force (not poison or bombs) to supplant it. The new species finds crannies to hide in and tricks of survival, so eradication is prohibitively expensive in Resource requirements.
Lesson two: having shed its past and left its own enemies behind, an
invader will often explode to extreme population densities. Often the native
species and older Life forms are victims. The most benign invasion is when
the invader is merely Competitive.
Lesson three: most invaders that achieve calamitous population explosions have come from ecosystems that are more complex and rich in species.
They invaded a habitat that is less rich, less ecologically tangled, and more
simplified. So mankind's effort to simplify his environment to his level of
Groups and societies are a type of Individual, distinct from biological Individuals as biological Individuals are distinct from mineral Individuals. Cattle are Individuals in a group. Traditional thinking in the U.S. has the people in a nation holding power. Private plunder or public graft is equally unhealthy. Societies have different allocations of power. Societies are a matter of organization and relationships of Individuals. Therefore, groups and societies are Individuals of a Life force. Societies must obey the same natural laws and respond to the same conditions as the mineral and biological worlds. Therefore, the society is a Life Spirit as are the mineral and biological Spirits. Further, because more Energy is controlled by the society, it is potentially an advance of the Life Spirit.
There are several organizational types of societies. One type is the
totalitarian society where its members are viewed as members of the state
and little more. Its members are as organs of a body. Each has his function.
Resources are used by the state. If one function fails, all die. Its strength
is that it can be single-mindedly directed to react rapidly to an emergency.
The good of the state must prevail over the Individual. The problem is
that the perceived good of the state does not always mean the survival
of the state. Thus, ants in one colony are one effective Individual rather
than a collection of ants. Competition is restricted and the state loses its Understanding of the requirements for survival.
Another type of society is the opposite extreme. The Individualist view
is that the good of the Individual must come before the state. The citizen
uses Resources. The state is the instrument of the Individual to get his
good. Again, good is subject to Individual desires and may not achieve
survival. Those Individuals who choose to be in conformity to the Vital Way
will survive. Competition restricts the ability of the state to respond to
other states.
A third view is a combination of the two extremes. A well-ordered and just state (whatever that is) serves the Individual. A civic-minded Individual serves the whole. Often consumers that are not Individuals are allowed to remain members of the society. This is a symptom of the causes of the collapse of the society.
The philosophy of anarchism requires Individuals or subgroups to cooperate to achieve survival without any coercive state or authority. Nature's forces are to Compete and Change. Without a coercive authority, the Competition can result in violent destruction of Resources. The decentralization of Roman power was an anarchist's dream. It led to violent, destructive conflict within Europe that slowed Europe's growth (the Dark Ages). There-
13.4
All Spirits must have some response to the 7 dimensions of action. A weak Spirit will have responses that do not aid survival and do no direct harm without Competition. These responses are called passive opposites. Some of the possible responses are active in a sense that inhibits survival or causes self-destruction. These are called active opposites to survival. Mystical terminology would refer to the active opposites as evil. The responses listed above as Justice, Mercy, Truth, Love, Understanding, Hope, and Wisdom are the positive responses to survival and are called the Vital Ways.
Development of the life Spirit
The biological survival-of-the-fittest mechanism is only the last resort of nature's Competition. For species at the top of a food chain with no natural enemies, survival of the fittest implies war, starvation, and destruction of limited Resources. When this answer is applied to groups, the groups war and starve. This entails a huge Disintegration of Resources. Some action to make the selection with less Disintegration is required. Using Justice is much more efficient and is, therefore, necessary for life.
Time       Orientation          Great Idea
PAST       Expanding            Justice
PAST       Nurturing            Mercy
PRESENT    Expanding            Truth
PRESENT    Nurturing            Love
FUTURE     Expanding            Understanding
FUTURE     Nurturing            Hope
ALL TIME   balancing/coupling   Wisdom
ing signs and maps may increase his Understanding of his choice. Once the
choice is made, he is committed to follow that path and experience what
that path holds.
Time
As noted before, the universe is in the process of becoming. Thus, a past,
present, and future coordination is necessary for survival. The condition
of limited Resources and the responses of Change (of becoming) and Competition are ever present and are timeless. Justice and Mercy are concepts
that respond to past events in the balancing of Change and Competition.
The ideas of Truth and Love are statements about the current condition so
Justice and Mercy may be balanced for survival. The ideas of Understanding and Hope are future oriented. As the natural condition and responses
have no time orientation (are always present), Wisdom is concerned with
all time. Wisdom is outside time and is concerned with the application of
forces to influence the flow of events of nature from the past through time
to the future.
The stage a Spirit is in can be likened to the growth of a person. For
example, a Spirit with a well-balanced Justice and Mercy level but weak
or imbalanced Truth and Love level is like a child. An adult Spirit has
Wisdom to the extent it can cause Change and Competition to balance.
[Figure: diagram relating the Spirit to Competition and the Nurturing ways of Mercy, Love, and Hope.]
The passive Nurturing ideals are Grace, Beauty, and Faith. Survival
demands active Nurturing. Not the passivity of Grace, but the activity
of Mercy. Not the passivity of Beauty, but the activity of Love. Not the
passivity of Faith, but the activity of Hope.
The Jews created the word of God, of the balance of Justice and Mercy (substituting grace), through history. Christians were born to carry the word of Truth and Love (Jesus substituted Love for beauty) through history. The next great Change will be the formation of a church to carry the word of Understanding and Hope, as opposed to faith.
If each of the 7 dimensions and 2 responses are Vital Ways, then their opposites must be equally great. There are different types of opposites. One type serves to enhance survivability. Another type serves to inhibit survivability.

Nurturing and Expanding are active opposites required for survival. Each of the 7 Vital Ways has passive opposites that allow an exhaustion of the life Spirit for the eventual conquering by another, more active life Spirit. There are also opposites that are active in a manner that results in the self-disintegration of life. For lack of a better term, these are called evil.
The most obvious is Hate as the active opposite of Love. The telling or believing of lies (unTruths) is not the opposite of Truth. Many times a lie is necessary for Love to help survival. The destructive opposite of Truth is betrayal, or bearing false witness when a Justice/Mercy decision is to be made, because this destroys the Justice/Mercy balance.
The mechanism of opposing forces is necessary for life. The balancing of Vital Way opposites (e.g., Justice and Mercy) is necessary for life.
The avoidance of passive opposites and the fight against active opposites
prolongs life. If Vital Way opposites do not exist, life will not survive.
Sameness can be deadly because diversity is required for Change and Competition. Therefore, the mechanism of opposites must exist. It evolved as
a necessary condition to survival.
The need for opposites and balance solved the greatest pollution problem
of all time. The life process of Plants gave off oxygen. While sufficient
carbon dioxide existed, all was sameness (only plants). However, oxygen
was a poison to plants. Beings that converted oxygen to carbon dioxide
were required. The new beings, animals, required a source of carbon and
energy. The plants had both and were required to sacrifice their own bodies
to the animals so the plant Spirit could live.
The conditions required for the continued existence of plant life may
have been attempted many times by plants. The arrival of animals allowed
the balance to be achieved. The mechanism of life is a mechanism for
achieving coupling and balance.
Great Idea (helps survival)   Self-destructive active opposite
Competition                   Welfare
Change                        Regression
Justice                       Murder
Mercy                         Cruelty
Truth                         Betrayal
Love                          Hate
Understanding                 Obstinacy
Hope                          Death wish
Wisdom
in false prophets results in unfounded faith and illusion. If the theory and
Truth are correct, the prediction will come to pass. If not, the Truth and
the Understanding are wrong. The surviving Spirit will modify its Truth
and Understanding.
An evaluation of the survival potential of an Individual can be made by comparing the Individual's responses to the Vital Ways. The survival potential against Competing ideologies can predict which will survive. This system can also be used to coordinate the ideology of a group so the survival of the group can be enhanced.
For instance, the ideology of parliamentary government and democracy appeared to be the major export of European colonial powers to Africa. However, as the colonies became more self-governing, many non-democratic forms of government replaced the parliamentary ideal. The experience of the U. S. in exporting democracy and freedom ideology has also been less than successful.
Why? The survival-model explanation is that the freedom ideal in practice in the colony did not match the religious, economic, and organizational (e.g., tribal) ideals. Secondly, the export was not really freedom. The U.S. found dealing with a strong, centralized government easier. The true export had many passive and active opposites. Thus, gross imbalance was created in the colony, which could not use Resources, Compete, and Change as effectively as the older, tribal ideals. Note the reaction in the exporting power was to increase its survival potential because the colony, for a short time, gave more than it got. The exporter saw the transaction as a Love transaction. The colony, however, lost Resources and saw the transaction as Hate. That the people of Middle East nations want to kill us is a natural reaction to the Hate toward them that we exported. Therefore, the relation between the exporter and the colony was not a Love transaction.
The Hate in the colony fostered rebellion. In time the colonies required the continued use of an internally oriented standing army from the homeland. The colonies were uneconomical to keep. The idea that the parliamentary system in the colony was working is false. To maintain the Pax Britannica and parliamentary government in the colonies, Britain had to pay too high a price or extract too heavy a tax from the people. Britain was eventually bled of its Resources and its Competitiveness. The bleeding of Resources was more than economic. Many people migrated to other countries like the U. S. As time passed, the Love transaction became imbalanced in that the Truth was lost. People rejected the idea that they were predators. Therefore, grace and beauty as seen in Britain were met
with hate and war in the colonies. The real behavior of the passive and active opposites caused the collapse of the empire. The same is true of the Pax Romana centuries ago and the Pax Americana of this century.

There is a Spirit of all human kind. However, so far in human development, this Spirit has oscillated between the active and passive opposites. Therefore, this Spirit is very weak. Nations continue to war. Much stronger are the Spirits associated with states, nations, and families.
A Spirit must pursue the Vital Ways to survive and avoid the opposites.
The pursuit of any of the opposite ideas will lead to a weakening of the
Spirit.
Soon crime will pay and, therefore, increase. Therefore, Love without Truth
will result in a quest for grace and peace. The Spirit of the state will decay.
The expansion of western civilization often overran societies that were
pursuing peace, grace, illusion, beauty, ignorance, and faith.
Third wrong way
Pursuing an active opposite will result in promoting the active opposites in
the next lower level. For example, the reaction to a lack of Understanding (inaccurate forecasting) while obstinately refusing to correct the models is to place great strain on the Individuals of a Spirit. The Individuals then must resort to betrayal in their interactions with other Individuals in the Spirit lest their own Spirit be threatened.
Fourth wrong way
An Expanding imbalance is pursuing an Expanding method and not the
corresponding Nurturing method. An Expanding imbalance will result in
promoting both the active opposites in the next lower level.
Throughout history there have been many who have placed heavy emphasis on the Expanding ways of Justice without Mercy, Truth without Love, and Understanding without Hope. These often engage not only in war but also in murder. Standing against such a great conqueror is often destructive. The conqueror ends in self-destruction or by causing another to destroy him (a form of self-destruction).
The degree of impact differs among the wrong ways
A significant part of Wisdom is to choose the wrong way it will tolerate.
Making no choice is really choosing the fourth wrong way.
For instance, if a society is suffering unjust, reduced effectiveness of some of its members (murder) on the past level, then the cause and correction can be found on the present level. Love may be misapplied, or betrayal may not be restrained. If the society fails to act sufficiently to reduce bearing false witness (betrayal), murder results.
If a society finds itself too often holding illusions rather than Truth, the cause is having too much Hope and not enough Understanding, of having inappropriate faith and allowing ignorance to persist.
There are limits to the Wisdom a Spirit possesses. Therefore, ignorance and faith exist. Therefore, some acceptance of the passive opposites exists.
The first wrong way must exist. This is felt by a society as the ever-present evil and as a foreboding that Disintegration is near. The goal is to develop a Spirit stronger than other Spirits. A Spirit need only be better, not perfect.

The error toward Expanding ways creates a high risk and rapid self-destruction. The error toward Nurturing ways allows others to conquer. The tendency for societies to err toward Nurturing ways at least assures longer survival if there are no immediate wars.
The Nurture imbalance is worse than allowing only one passive opposite. It is within a Spirit's control to strive for a balance. This second wrong way is unnecessary and can be corrected by a Spirit. Nurture imbalance is worse for a Spirit than the first wrong way.
Pursuing an active opposite causes more destructive responses within a Spirit than passive opposites do. These responses reduce the Spirit's ability to Compete. Therefore, a Competing Spirit need not advance life to conquer. The Competitor need only wait. The Spirit must take action by prohibiting the third wrong way. The third wrong way is worse for a Spirit than the first or second wrong way.
Expanding imbalance is much worse than dealing with only one active
opposite. Expanding imbalance is difficult to prohibit by laws. Expanding
imbalance is total anarchy. The Spirit will soon disintegrate. A Spirit
must exercise care lest another Spirit in Expanding imbalance drags it to
disintegration.
The misapplication of Expanding ways has a much more severe result
than the misapplication of Nurture ways. The risk is much higher. Therefore, societies have tended to imbalance in favor of the Nurturing ways.
This is especially true when the Competing risk (war) appears remote.
The meek shall indeed inherit the earth. The meek are those that pursue
the passive opposite (peace, grace, illusion, beauty, ignorance, and faith).
The meek seek to survive from misapplication, often the imbalance form, of
Nurturing ways. The earth is the mineral world. The meek shall not have
the Great Spirit embodied in the Vital Ways.
Previous attempts to eliminate war have failed. This means the Understanding and Truths of those movements are faulty. Many of these attempts
were and are grounded in the second and third wrong ways. Therefore, they
will continue to fail.
Preventing war means the Competition and Change processes must be
encouraged to continue in a manner that does not destroy Resources. The
second, third and fourth wrong ways must be inhibited. Although the First
wrong way will always be with us in ever smaller degrees, the second, third
and fourth wrong ways can be eliminated by our organizing our Spirit.
Chapter 14
out some galaxies and have outliers in the model. The STOE used all these galaxy data. If some of the STOE parameters assume an average value and are considered just one parameter, over 80% of the galaxy sample is explained. This is approximately the rate of galaxy selection in other papers. By modeling the regions and by including neighbor Source and Sink strength, much greater accuracy is achieved, in addition to incorporating the asymmetric RCs (rotation curves) into the model. This yields a much more cohesive picture of galaxies.

A novel concept in the STOE model is that the philosophies of societies and life have the same Principles as the physics of the universe. The STOE paradigm suggests the energy of the universe is being replaced. Therefore, any entropy consumed to form Spirits and life is replaced. Indeed, the entropy of life comes from the cooling flow of energy from Sources to Sinks. Therefore, the feedback parameter that would change the temperature of the universe is modified. Our society and we are a part of the universe.
Appendix
.1
Category A galaxies consisted of 32 galaxies. The distances Da to Category A galaxies were calculated using Cepheid variable stars as listed by
Freedman et al. (2001) and Macri et al. (2001).
Category B galaxies consisted of 5967 spiral galaxies in the sample that were not Category A galaxies, with W_20, inclination, and m_b values listed in the databases. The distance D_b (Mpc) for each of the Category B galaxies was calculated following the Tully-Fisher method (Tully and Fisher 1977) as follows: (1) For the Category A galaxies, the absolute magnitude M_a was calculated as
    M_a/mag. = m_b/mag. - Ext/mag. - 25 - 5 log10(D_a/Mpc).   (1)
(2) A plot of M_a/mag. versus log(W_20^i/km s^-1), where W_20^i is the inclination-corrected W_20, is presented in Fig. 1. IC 1613 (an Irregular galaxy classified as a Sink), NGC 5253 (an Irregular galaxy classified as a Sink), and NGC 5457 (Freedman et al. (2001) noted the D_a was calculated differently) were omitted from the plot. The straight line in Fig. 1 is a plot of
    M_a/mag. = K_ws log(W_20^i/km s^-1) + K_wi,   (2)
where K_ws = 6.0 ± 0.6 and K_wi = 4 ± 1 at one standard deviation (1σ). The correlation coefficient is -0.90. Tully and Fisher (1977) calculated a similar relation with K_ws = 6.25 and K_wi = 3.5. The circles indicate the data points for galaxies with (l, b) = (290° ± 20°, 75° ± 15°) as in Fig. 6.1. However, unlike Fig. 6.1, the data in Fig. 1 for the outlier galaxies appear consistent with the other sample galaxies' data.
Figure 1: Plot of the absolute magnitude M_a versus the inclination-corrected 21 cm line width W_20^i at 20% of peak value for 29 of the 32 Category A galaxies (Freedman et al. 2001; Macri et al. 2001). The straight line is a plot of Eq. (2). The circles indicate the data points for galaxies with (l, b) = (290° ± 20°, 75° ± 15°).
(3) The D_b is

    D_b = 10^[0.4(m_b - M_b - Ext)],   (3)

where

    M_b/mag. = K_ws log(W_20^i/km s^-1) + K_wi.   (4)
Category C galaxies consisted of galaxies in the sample that were not in the previous categories, with 0.001 < z_m < 0.002. A plot of D_a versus z_m for the Category A galaxies in the Category C range is presented in Fig. 2. Because the goal of this section is to arrive at the initial distance estimation, NGC 2541 and NGC 4548 are outlier galaxies and were omitted from the plot. The straight line in Fig. 2 is a plot of

    D_a = c z_m / K_czs + K_czi,   (5)

where K_czs = 100 ± 10 km s^-1 Mpc^-1 and K_czi = 1.7 ± 0.4 Mpc at 1σ. The correlation coefficient is -0.93.

The distance D_c (Mpc) for Category C galaxies is

    D_c = c z_m / K_czs + K_czi.   (6)
Category D galaxies are galaxies in the sample not in the previous categories and with z_m < 0.001. The distance D_d (Mpc) to Category D galaxies was 1 Mpc.

Category E galaxies are all other galaxies in the sample not in the previous categories. The distance D_e (Mpc) to these galaxies was calculated using the Hubble law with H_o = 70 km s^-1 Mpc^-1.
.2 The many lines in the parameter relations are not random
The many lines in the plot in Fig. 10.2 suggest the data points may be random. The null hypothesis tested was that the data points are indeed random points. This null hypothesis was tested by following the procedure used in discovering Eq. (10.13) as follows: (1) A trial consisted of generating 15 sets of two random numbers between zero and one and subjecting the sets to the slope, correlation coefficient, and e tests the galaxy samples passed. Call the independent variable (X_i) and the dependent variable (Y_i), where i varies from one to 15. (2) The equation tested was

    Y_calc,i = K_1 B^(t_i) X_i,   (7)
where K_1 is the proportionality constant, B is the exponential base determined by the relation to be tested, and t_i is the integer classification for the ith set. (3) The K_1 value for each trial was calculated as the minimum value of K_1 = Y_i/(B X_i) over sets with X_i > 0.4. (4) The t_i value was calculated for each set of (X_i, Y_i) values as
\[ t_i = \mathrm{ROUND}\left\{ \log_2\!\left[ \frac{Y_i}{K_1 X_i} \right] \right\} , \qquad (8) \]
where ROUND means to round the value in the braces to the nearest integer. (5) If any subset lacked a value, that subset was ignored. (6) If the correlation coefficient of any t_i subset of (X_i, Y_i) values, including the (X_0, Y_0) = (0, 0) point, was less than 0.90, the trial failed. (7) The Y_calc_i was calculated according to Eq. (7). If the e > 0.21 between the Y_i and Y_calc_i for the 15 sets, the trial failed. (8) A count was made of the number N of sets of (X_i, Y_i) with X_i^2 + Y_i^2 <= 1 and the number N_{X<Y} of sets of (X_i, Y_i) with X_i < Y_i. (9) The trial was redone with another 15 sets of random numbers, for a total of 30,000 trials.
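Steps (1)-(8) of a single trial can be sketched in Python. The e statistic is not defined in this excerpt, so an RMS-error stand-in is used here (an assumption, flagged in the comments); the 0.90 and 0.21 thresholds and the X_i > 0.4 cut follow the text.

```python
import math
import random

B = 2.0          # exponential base under test
N_SETS = 15      # (X_i, Y_i) pairs per trial

def correlation(pts):
    """Pearson correlation coefficient of a list of (x, y) points."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    if sxx == 0 or syy == 0:
        return 0.0
    return sxy / math.sqrt(sxx * syy)

def run_trial(rng):
    """One trial of the null-hypothesis test; True if the trial passes."""
    pts = [(rng.random(), rng.random()) for _ in range(N_SETS)]
    # Step (3): K_1 as the minimum of Y/(B*X) over sets with X > 0.4.
    cands = [y / (B * x) for x, y in pts if x > 0.4]
    if not cands:
        return False
    k1 = min(cands)
    # Step (4): integer classification t_i = ROUND{log_B[Y/(K_1 X)]}.
    ts = [round(math.log(y / (k1 * x), B)) for x, y in pts]
    # Step (6): each t-subset, with (0, 0) added, must correlate >= 0.90.
    for t in set(ts):
        subset = [(0.0, 0.0)] + [p for p, ti in zip(pts, ts) if ti == t]
        if len(subset) > 2 and correlation(subset) < 0.90:
            return False
    # Step (7): compare Y_calc = K_1 * B**t * X with Y.
    # NOTE: "e" is not defined in this excerpt; RMS error is an assumed
    # stand-in for the book's e statistic.
    e = math.sqrt(sum((y - k1 * B ** t * x) ** 2
                      for (x, y), t in zip(pts, ts)) / N_SETS)
    return e <= 0.21
```

Repeating `run_trial` 30,000 times and counting passes reproduces step (9); under the text's thresholds, random data should pass only rarely.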
For B = 2 the results were: (1) N/(15 × 30,000) = 0.78528 ≈ π/4 and N_{X<Y}/(15 × 30,000) = 0.49999. Therefore, the random number generator performed satisfactorily. (2) Of the 30,000 trials, 22 passed (0.07%); the remainder of the trials failed. Therefore, the null hypothesis was rejected with a confidence of over 99%. For B = 1.65 the confidence level of rejecting the null hypothesis was greater than 0.95.