



Presented in Partial Fulfillment of the Requirements for the Degree Doctor
of Philosophy in the Graduate School of
The Ohio State University
Feras M. Abou-Galala, M.S.
The Ohio State University

Dissertation Committee:

Prof. Betty Lise Anderson, Advisor
Prof. George Valco
Prof. Charles Klein

Approved by

______________________
Graduate Program in Electrical Engineering
In this dissertation we present a new design of an optical performance monitor
(OPM) that utilizes optical correlation techniques to produce real-time measurements of
optical link performance. We also introduce a novel design of a temporal optical correlator
that is based on the White cell. We show the advantages of our method over existing
techniques and outline how our proposed device can be integrated in next generation all-
optical Internet networks.
We build and experimentally demonstrate a proof-of-concept design of an OPM
using a White cell-based time-integrating optical correlator. The experimental apparatus
is analyzed and measurements are compared to their theoretical values. Results show that
our proposed technique produces the expected results with an error margin of less than
5%. Additionally, we show a detailed power loss analysis (measured) and discuss the
feasibility and scalability of our method. Measurements show the design to be promising and indicate that it can be scaled without large power losses (less than 7 dB).


Dedicated to my father Moustafa, my mother Fatima,
my brother Basil and my two sisters Hadeel and Haneen

I would like to take this opportunity to thank everyone that helped me out and
supported me throughout my course of studies. I would like to dedicate special thanks to
my beloved advisor and mentor, Professor Betty Lise Anderson. Her support and
continuous encouragement enabled me to be where I am at right now and played a big
part in making me the person I am now. She was always available to answer questions
when I needed answers, provide mental support when I was down, and above all be the
happy and joyful person she is. Again, thank you.
I would also like to thank all my peers who provided me great insight through our
discussions and conversations. Special thanks to my optics group members, Dave Rabb,
Dr. Rashmi Mital, Dr. Carolyn Warnky and Dr. Victor Argueta-Diaz.
I would especially like to mention that I couldn't have gotten to this point in my life, as a human being or as a professional, without the support and encouragement of my parents, my brother, and my sisters. Their love guided me through any obstacles that I faced and made me a better person. Thank you.
April 22, 1978 ................ Born: Tripoli, Libya

2000 .......................... B.S., Electrical Engineering,
                               University of Qatar, Doha, Qatar

2003 .......................... M.S., Electrical and Computer Engineering,
                               The Ohio State University, Columbus, Ohio

1. D. Rabb, B. L. Anderson, C. M. Warnky, F. Abou-Galala, "Binary White cell true
time delay: demonstration of micro-blocks and folded lens trains as delay elements,"
IEEE Journal of Lightwave Technology, Vol. 24-4, pp. 1886-1895, April 2006.

2. B. L. Anderson, D. J. Rabb, C. M. Warnky, F. M. Abou-Galala, "Binary Optical True
Time Delay Based on the White Cell: Design and Demonstration," IEEE Journal of
Lightwave Technology, July 15, 2005.

3. B. L. Anderson, A. Durresi, D. Rabb, F. Abou-Galala, "Real-Time All-Optical
Quality of Service Monitoring Using Correlation and a Network Protocol to Exploit
It," Applied Optics, 42(5) pp. 1121-1130, March 2004.

4. B. L. Anderson, F. Abou-Galala, D. Rabb, A. Durresi, "All-Optical Quality-of-Signal
Monitoring in Real Time," Paper # 5247-8, SPIE ITCom conference and proceedings,
September 7-11 2003.

5. B. L. Anderson, F. Abou-Galala, V. Argueta-Diaz, G. Radhakrishnan, R. L. Higgins,
"Optical cross-connect based on tip/tilt micromirrors in a White cell," IEEE Journal
of Selected Topics in Quantum Electronics, 9(2), pp. 579-593, March/April 2003.


Major Field: Electrical and Computer Engineering

Abstract ............................................................................................................................. ii
Dedication ......................................................................................................................... iii
Acknowledgments ............................................................................................................. iv
Vita ..................................................................................................................................... v
List of Tables ..................................................................................................................... ix
List of Figures ..................................................................................................................... x

CHAPTER 1 INTRODUCTION..................................................................................... 1
1.1 Optical Networks ................................................................................................ 1
1.2 Optical Impairments ........................................................................................... 3
1.2.1 Linear and non-linear impairments............................................................. 5
1.2.2 Attenuation.................................................................................................. 7
1.2.3 Dispersion ................................................................................................... 8
1.2.4 Noise ......................................................................................................... 10
1.2.5 Jitter........................................................................................................... 11
1.3 Link Quality Measurement ............................................................................... 11
1.4 OPM: Existing Methods .................................................................................. 14
1.5 OPM: Our Proposal .......................................................................................... 19
1.6 Physical Implementation................................................................................... 20
1.7 Routing Protocol based on OPM...................................................................... 21
1.8 Document Organization.................................................................................... 25

CHAPTER 2 THEORY.................................................................................................. 26
2.1 Introduction....................................................................................................... 26
2.2 Principle of Correlation .................................................................... 26
2.3 Optical Correlation for OPM............................................................................ 28
2.4 Time-Integrating Optical Correlator (TOC) ..................................................... 29
2.5 White cell principle........................................................................................... 32
2.5.1 Beam Propagation in the White Cell ........................................................ 34
2.5.2 White cell Imaging Conditions ................................................................. 36
2.6 White cell-based TDL....................................................................................... 37
2.6.1 White cell delay arm................................................................................. 39
2.6.2 Design constraints..................................................................................... 40
2.6.3 Linear White cell-based Tapped Delay Line (TDL)................................. 44
2.6.4 Weighting elements and beam summation ............................................... 46

CHAPTER 3 Simulations and Analysis........................................................................ 49
3.1 Introduction....................................................................................................... 49
3.2 Impairment simulations .................................................................................... 49
3.2.1 Attenuation and Dispersion....................................................................... 50
3.2.2 Modeling Noise and Jitter......................................................................... 51
3.3 Simulation Results and Analysis ...................................................................... 53
3.4 Relating Correlation to BER............................................................................. 55
3.5 Number of Taps in the TOC............................................................................. 57

CHAPTER 4 EXPERIMENTAL IMPLEMENTATION........................................... 59
4.1 Introduction....................................................................................................... 59
4.2 Input System..................................................................................................... 61
4.3 MEMS setup ..................................................................................................... 68
4.4 Impairment generation circuitry ....................................................................... 69
4.5 Linear White cell Design .................................................................................. 71
4.6 Output Optics.................................................................................................... 75
4.7 Optical System Simulation ............................................................................... 79

CHAPTER 5 EXPERIMENTAL RESULTS............................................................... 87
5.1 Introduction....................................................................................................... 87
5.2 Apparatus Alignment ........................................................................................ 89
5.3 Correlation Measurements.............................................................................. 100
5.4 Impairment Measurements ............................................................................. 105
5.4.1 Attenuation Measurements ..................................................................... 105
5.4.2 Dispersion Measurements....................................................................... 106
5.4.3 Noise Measurements............................................................................... 107
5.4.4 Correlation Measurements Analysis ....................................................... 108
5.5 Power Loss Analysis....................................................................................... 110

CHAPTER 6 CONCLUSION...................................................................................... 112
6.1 Accomplishments............................................................................................ 112
6.2 Future Work.................................................................................................... 113
6.2.1 Input System Improvements ................................................................... 113
6.2.2 White cell-based TOC improvements..................................................... 114
6.2.3 Output Summation Improvements.......................................................... 114
6.2.4 Correlation Output Measurement Improvements ................................... 115




Table............................................................................................................................. Page
Table 1.1: Test time required to establish a reliable BER measurement for various bit rates
...................................................................................................................... 12
Table 2.1: Bounce pattern to produce different amounts of delay.................................... 45
Table 5.1: Weights of the optical power associated with each arm in the TOC............. 104
Table 5.3: Power loss measurements of our experimental OPM apparatus ................... 110

Figure ........................................................................................................................... Page
Figure 1.1: Simplified design of next generation all-optical networks............................... 3
Figure 1.2: DWDM telecommunication channel............................................................... 5
Figure 1.3: Illustration of the effects of different types of impairments on a square pulse 7
Figure 1.4: Reproduced from [13], screen shot of an eye diagram taken using a real-time
scope ................................................................................................................................. 13
Figure 1.5: Reproduced from [15], Amplitude histogram generated using asynchronous
(a) and synchronous (b) sampling..................................................................................... 15
Figure 1.6: Reproduced from [18,26], Block diagram of an APS monitor ......................... 17
Figure 1.7: Reproduced from [31] Illustration of the frequency spectrum of both the data
signal and the SC signal.................................................................................................... 18
Figure 1.8: Reproduced from [49], Performance analysis of DORP against availability-
based routing protocol....................................................................................................... 24
Figure 2.1: N-tap time-integrating optical correlator. Figure shows the correlation output
between a degraded input and the weighting elements at each tap................................... 30
Figure 2.2: Two types of tapped delay lines. (a) 1xN splitter followed by N fibers with
different lengths each providing a different delay. (b) 2x2 couplers/splitters with each
splitter amounts for a single tap........................................................................................ 32
Figure 2.3: The original White cell with three spherical mirrors ..................................... 33
Figure 2.4a,b,c: Beam propagation in the original White cell .......................................... 34
Figure 2.5: Single input bounce pattern on mirror M in the White cell............................ 35
Figure 2.6: Multiple inputs bounce pattern on mirror M.................................................. 36
Figure 2.7: Modification made to original White cell (shown in red) ............................. 37
Figure 2.8: Pixels tip angle and deflected beam angle...................................................... 38
Figure 2.9: Delay arm showing the image locations produced by each lens.................... 40
Figure 2.10: White cell-based TDL highlighting the null cell, the switching arm, and the
delay arm........................................................................................................................... 45
Figure 3.1a,b: Auto/cross correlation function. (a) Effect of attenuation on the correlation
function; (b) Effect of dispersion on the correlation function .......................................... 51
Figure 3.2: Effect of noise and jitter on correlation function. (a) One hundred separate
correlations are superimposed for 20% noise; (b) One hundred superimposed correlations,
with jitter varying randomly with standard deviation σ = 10% ........................................ 52
Figure 3.3 Measurement of the area of the correlation function that exceeds a certain
threshold during a specified time interval......................................................................... 54
Figure 3.4: Area of the correlation function that is greater than 50% threshold and within
the time window in which the ideal correlation function exceeds 50%. The independent
variable is jitter, with dispersion as a varying parameter.................................................. 55
Figure 3.5: (a) Simulated eye diagram. The shaded area is the open area of the eye; (b)
Variation in the open area of the eye diagram for combined jitter and dispersion........... 56
Figure 3.6: Effect of number of taps on the correlation function's shape ........................ 57
Figure 4.1: Experimental apparatus block diagram.......................................................... 61
Figure 4.2: Principle of operation of an MZ modulator ..................................................... 62
Figure 4.3: MZ Modulator transfer function..................................................................... 63
Figure 4.4: Cross section of the V-groove fiber array ...................................................... 65
Figure 4.5: Setup of the input optics and the beam propagation path from v-groove fiber
array into the White cell.................................................................................................... 66
Figure 4.6: MEMS pixel close up and the intensity profile of maximum allowed spot size
........................................................................................................................................... 67
Figure 4.7: Subset of MEMS pixels ............................................................................... 68
Figure 4.8: Circuit schematic of the dispersion generation circuitry ................................ 70
Figure 4.9: Circuit schematic of the noise generation circuitry........................................ 71
Figure 4.10: White cell-based tapped delay line............................................................... 72
Figure 4.11: Top view of null cell .................................................................................... 74
Figure 4.12: Side view of delay arm................................................................................. 74
Figure 4.13: Output arm location and the equipment used to sum the beams and view the
correlation output .............................................................................................................. 76
Figure 4.14: Output optics and mounts, units are in mm.................................................. 77
Figure 4.15: Internal connection of the common-node SM05PD4B photodiode ............. 78
Figure 4.16: Optical Simulation of the linear White cell-based TOC, using OSLO........ 81
Figure 4.17: Optical Simulations of the input optics used in the TOC design ................. 83
Figure 4.18: Optical simulation of the output of the White cell part in the TOC............. 84
Figure 4.19: Optical Simulation of the output optics used in the TOC design................. 85
Figure 4.20: Optical Simulation of the entire TOC system.............................................. 86
Figure 5.1: Layout of the experimental apparatus on the optical table (to scale) ............. 88
Figure 5.2a: Alignment procedure to establish delay arm optical axis............................. 89
Figure 5.3: Beam intensity profile of a single beam in the array...................................... 91
Figure 5.4: Gaussian beam propagation and location of measurement points ................. 92
Figure 5.5: Imaging arms locations .................................................................................. 93
Figure 5.6a: Photographic image of the setup showing a top view of the input and output
optics along with a section of the linear WC setup........................................................... 93
Figure 5.7: Magnified image of the MEMS pixels captured using an IR CCD camera... 95
Figure 5.8a,b: The beam array imaged at the MEMS plane. We see all the even-
numbered bounces in (a) and the odd-numbered ones in (b)............................................ 96
Figure 5.9: MEMS pixel matrix showing the locations of pixels used and all
malfunctioning pixels........................................................................................................ 97
Figure 5.10: Input pulse signals and their autocorrelation function as a function of time 99
Figure 5.11a: Oscilloscope screen shot showing the input pulse and the output pulse with
zero delay........................................................................................................................ 101
Figure 5.12: Oscilloscope screen shot showing the input pulse and the output
autocorrelation function.................................................................................................. 103
Figure 5.13: Measured effect of signal attenuation on the correlation output ................ 106
Figure 5.14: Measured effect of signal dispersion on the correlation output ................. 107
Figure 5.15a,b: Comparison between theoretical and experimental results (a) Attenuation
(b) Dispersion.................................................................................................................. 109
1.1 Optical Networks
The purpose of this dissertation work is to introduce a novel approach for optical
performance monitoring (OPM) of data links in next generation all-optical networks.
Our approach introduces a new device that is used to detect signal degradation and/or link
failure in real-time (tens of picoseconds), where the results could be utilized in real-time
protection and provisioning of all-optical links. The device is to be deployed in the
optical domain and be integrated in the design of next generation all-optical networks.
Currently, the increasing demand in the Internet for real-time multimedia data traffic with high quality of service (QoS) is pushing the limits of existing networks. Such demand can be easily noticed in our daily lives with services such as Voice over IP (VoIP), Video on Demand (VoD), IP-TV, and other high-bandwidth, minimal-delay services. A lot of research has been conducted by the Internet
Engineering Task Force (IETF) and other organizations to establish new standards for
future networks that can cope with such continuously increasing demands while
maintaining a high level of reliability.
Next generation networks call on a new paradigm of all-optical dynamically
routed networks, where network links are fully transparent to the data bit rate and format.
The network core is envisioned to comprise a mesh of nodes interconnected using all-optical switches over optical fiber cables. This requires the elimination of the expensive electronic transponders that are used to perform the Optical-Electronic-Optical (OEO) conversion for signal regeneration and reshaping. The speed and price of
electronics are considered a bottleneck in next generation designs as they impose an
upper limit on the network bandwidth and the overall cost effectiveness of a network
design. Additionally, network domain transparency requires the development of new
routing and signaling protocols that take into account the physical layer parameters of the
transmitted optical signal. Several enhancements to standard Internet Protocol (IP) routing and signaling protocols (e.g. OSPF and RSVP, respectively) have taken place in order to be capable of handling the new optical parameters imposed on the network.
This new paradigm proposes a full layer of optical transparency, where the
current existing optical core networks are integrated with edge routers that act as a
gateway for the Internet Service Providers (ISPs) to the core network.
Figure [1.1] shows a simplified view of next generation Internet networks. The
network backbone (core of the network) is represented as a transparent cloud consisting
of pure optical links, which are interconnected via all-optical switches. Edge routers act
as the interface between the optical physical layer (backbone) and the client IP layer
(ISPs, clients, etc.). Other components of all-optical networks, such as optical amplifiers (OAs), optical add-drop multiplexers (OADMs), and wavelength converters, are not shown in this figure for simplicity.

OSPF stands for Open Shortest Path First

RSVP stands for Resource ReSerVation Protocol


Figure 1.1: Simplified design of next generation all-optical networks
1.2 Optical Impairments
Physical layer parameters have been ignored in the past in low bandwidth
networks as they only imposed very minor limitations on the overall network bandwidth
or the maximum span length that a signal has to travel. This is primarily due to the low
signal bandwidth and the large channel spacing between the multiplexed transmitted
signals over a fiber. As the network bandwidth increases, however, the amount of data
transmitted over a single fiber increases and hence the channel spacing decreases. This
calls for modulation techniques such as Dense Wavelength Division Multiplexing (DWDM), where hundreds (commercially available) and even thousands (theoretically) of signals are expected in the near future to be carried on a single fiber, with each signal modulated at bit rates of 10 Gbps to 40 Gbps.
DWDM is becoming the core technology for coping with the rapidly increasing
demand for bandwidth in the Next Generation Internet. An adverse consequence of
boosting network capacity, however, is increasing the chances of large-scale network failures. Therefore, increasing the network capacity demands a corresponding increase in network reliability and stability, and raises the need to continuously monitor DWDM links
for any failures or signal degradation. In addition, once a failure is detected, a real-time
link recovery mechanism has to be put in place either to establish a new data path or to
recover the lost data.
DWDM links comprise several essential active optical components. In a DWDM
link, signals encoded on different wavelengths are first multiplexed onto a single fiber
using DWDM multiplexers (MUXs). The signal routing function is performed using
optical cross connects (OXCs) sometimes called optical switches. Optical amplifiers
(OAs) are placed along the link to extend the distance over which the optical signal can be transmitted. The amplifiers function as power boosters, injecting more photons into the signal as it is attenuated along its path in the optical fiber. Some of the routing functions are
performed by optical add drop multiplexers (OADMs), which are devices used to change
the wavelength at which signals are traveling by adding or dropping wavelengths at
intermediate nodes. They are also used for wavelength conversion routing techniques
described in [50]. At the receiver end, de-multiplexers (DE-MUXs) are placed to decode
the transmitted data. DE-MUXs can also be placed at intermediate nodes to assist in
optical switching functions. Optical performance monitors (OPMs) are expected to be an
integral part of DWDM links in next-generation Internet networks, where reliability and
high QoS of high-bandwidth optical networks are essential aspects of the network's design. Figure [1.2] illustrates a cross section of a DWDM link including its basic
components arranged along a network link. Additionally we show some locations at
which OPMs could be installed. In the figure the red arrow suggests that a portion of the
transmitted power is tapped and sent to the OPM.

Figure 1.2: DWDM telecommunication channel
DWDM networks are usually designed with respect to the maximum span length
that a signal traverses at a given bit rate. Hence, only physical impairments that would
limit the maximum span length are considered. Over the last decade, much research has been conducted to identify the impairments that affect DWDM signal transmission the most and that should be taken into consideration during network design. In the
next section we will discuss some of the most prominent types of impairments that have
been reported.
1.2.1 Linear and non-linear impairments
When an optical signal is transmitted through a transmission link, usually a fiber,
it is subjected to several degradation factors or impairments. At a specified bit rate these
physical impairments limit the number of optical spans (point-to-point links between two
switching nodes, two active elements, or a mixture of both) that the signal traverses throughout the network. Additionally, impairments tend to corrupt the transmitted data signal and make it harder for the receiver to distinguish between a digital "1" and a "0", and hence introduce errors in the received signal and increase the Bit Error Rate (BER).
Different types of degradation factors or signal impairments have different effects
on the transmitted optical signal. These effects can be seen as a reduction in the signal's optical power, noise added to the signal, a change in the shape of the optical pulse, time displacement, or a combination of these effects.
Optical physical layer impairments are often divided into two categories, linear
impairments and non-linear impairments. This classification is based on the dependency on the signal power, where linear impairments affect individual wavelengths and are independent of the transmitted signal power, while non-linear impairments are more complex and very hard to quantify, as they depend on a combination of factors such as the signal power, the number of wavelengths per channel, and the channel's bandwidth. The effect of linear impairments is dominant and is usually the main concern of network designers, although in some situations non-linear effects are more pronounced and need to be considered in the design. In our research we concentrate on quantifying and measuring linear impairments, as they contribute the most to signal degradation. We will always assume that the effect of nonlinear impairments is negligible.
Linear degradation factors can be categorized under four major areas: (i)
attenuation; (ii) dispersion; (iii) noise; (iv) jitter. Figure [1.3] shows how these different
factors affect the shape and/or the amplitude of the transmitted signal.


Figure 1.3: Illustration of the effects of different types of impairments on a square pulse
Clearly from the figure above, we see that different impairments affect the signal
differently. Let us further discuss the source(s) of each of the four types shown above
and elaborate more on how they impose a limitation on optical data transmission.
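Before examining each impairment individually, the four categories can be made concrete with the toy sketch below, which applies each effect to a discretized square pulse in the spirit of Figure 1.3. This is an illustrative model only, not the simulation used later in this work: dispersion is approximated by a moving-average smoothing, jitter by a circular time shift, and all function names and parameter values are hypothetical.

```python
import random

def square_pulse(n=100, start=40, stop=60):
    """Unit-amplitude square pulse sampled at n points."""
    return [1.0 if start <= i < stop else 0.0 for i in range(n)]

def attenuate(pulse, factor):
    """Attenuation: uniform reduction of the optical power."""
    return [x * factor for x in pulse]

def disperse(pulse, width):
    """Dispersion (toy model): pulse spreading via moving-average smoothing."""
    return [sum(pulse[max(0, i - width):i + width + 1]) / (2 * width + 1)
            for i in range(len(pulse))]

def add_noise(pulse, sigma, rng):
    """Noise: additive Gaussian amplitude fluctuations."""
    return [x + rng.gauss(0.0, sigma) for x in pulse]

def jitter(pulse, shift):
    """Jitter (toy model): displacement of the pulse in time."""
    return pulse[-shift:] + pulse[:-shift] if shift else list(pulse)
```

Applying `disperse` smears the pulse edges in time, while `attenuate` lowers the amplitude without changing the shape, matching the qualitative picture in Figure 1.3.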
1.2.2 Attenuation
In this section we refer to attenuation as the reduction in the optical power of the
transmitted signal over an optical fiber. The effect is a function of the distance traveled,
where the longer the distance the higher the attenuation factor. Attenuation in optical
telecommunications is often referred to as the transmission loss and is measured in units
of dB/km, indicating the total power loss accumulated over a given length of fiber. We can quantify attenuation using the following formula:
Attenuation (dB) = 10 · log10 [ Input Power (mW) / Output Power (mW) ]                [1.1]
Note that the signal could get attenuated due to some other factors that we will
discuss in the following sections such as dispersion and noise. Other types of losses that
lead to signal attenuations include bending losses in the fiber or fiber bundle and coupling
losses between fibers at intermediate nodes.
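As a quick illustrative check of Eq. 1.1, the attenuation over a fiber span can be computed as follows. This is a minimal sketch, using the sign convention in which attenuation is a positive dB value; the loss coefficient and span length are hypothetical example numbers, not values from this work.

```python
import math

def attenuation_db(p_in_mw, p_out_mw):
    """Attenuation in dB from input and output optical power (Eq. 1.1)."""
    return 10.0 * math.log10(p_in_mw / p_out_mw)

def output_power_mw(p_in_mw, loss_db_per_km, length_km):
    """Power remaining after a span with a given loss coefficient (dB/km)."""
    return p_in_mw * 10.0 ** (-loss_db_per_km * length_km / 10.0)

# Example: 1 mW launched into 50 km of fiber with 0.2 dB/km loss
p_out = output_power_mw(1.0, 0.2, 50.0)   # 0.1 mW remains
total_loss = attenuation_db(1.0, p_out)   # 10 dB total attenuation
```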
1.2.3 Dispersion
Dispersion in optical communication is a temporal effect that results in pulse
spreading in time and is highly present in optical fibers. A group of pulses traveling as a bit stream will spread in time such that pulses merge together, causing errors at the receiver's end and impairing the signal's transmission. At higher bit rates, more pulses
are multiplexed on the same fiber with smaller channel spacing between pulses, which
results in the effect becoming more evident.
The dispersion effect is caused by different sources including material dispersion,
waveguide dispersion, and polarization mode dispersion. The first two effects are
dependent on the refractive index of the transmission medium and the wavelengths
transmitted in the fiber. Let us consider a transmitted pulse with a finite spectral width
traveling over a dispersive medium, e.g. an optical fiber. We note that any pulse with a
finite spectrum will contain multiple frequency components and hence multiple
wavelengths. In material dispersion, the refractive index seen by different wavelengths
traveling along the fiber varies, resulting in different travel velocities for different wavelengths and causing the pulse to spread in time. Waveguide dispersion is very similar to
material dispersion as it depends on the propagation constant of the transmission medium,
which is a function of the signal's wavelength. It is, however, a much smaller effect and can often be ignored. Polarization mode dispersion, or PMD, occurs when one of the two
polarization components of the optical field traveling down the fiber lags behind the other
component resulting in a spreading in the overall pulse shape.
Let us examine one of these dispersion sources more carefully, namely material dispersion. Material dispersion is a dominant effect in optical fiber transmission and is commonly quantified by the group velocity dispersion (GVD). The group velocity, v_g, of the transmitted signal is defined as the velocity at which the information is conveyed along the optical wave and is usually referred to as the signal velocity. GVD is a result of the dependency of the group velocity on the frequency components present in the transmitted signal, where different frequency components travel through the optical fiber at different speeds. GVD for a uniform medium can be calculated as:

GVD = -(λ/c) (d²n/dλ²)    [1.2]

where λ is the wavelength of the transmitted signal, c is the speed of light, and n is the refractive index of the uniform medium.
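To make Eq. [1.2] concrete, the sketch below evaluates it numerically for fused silica using the Sellmeier index model; the material choice and the finite-difference step are our own illustrative assumptions, not part of this work.

```python
import math

# Sellmeier coefficients for fused silica (an illustrative material choice).
B = (0.6961663, 0.4079426, 0.8974794)
C = (0.0684043**2, 0.1162414**2, 9.896161**2)  # um^2

def n_silica(lam_um):
    """Refractive index of fused silica from the Sellmeier equation."""
    l2 = lam_um**2
    return math.sqrt(1.0 + sum(b * l2 / (l2 - c) for b, c in zip(B, C)))

def dispersion_ps_nm_km(lam_um, h=1e-3):
    """Eq. [1.2], GVD = -(lambda/c) d^2n/dlambda^2, by central differences,
    converted to the customary units of ps/(nm km)."""
    d2n = (n_silica(lam_um + h) - 2 * n_silica(lam_um) + n_silica(lam_um - h)) / h**2
    c = 2.99792458e8              # speed of light, m/s
    lam_m = lam_um * 1e-6         # um -> m
    d2n_m = d2n * 1e12            # per um^2 -> per m^2
    return -(lam_m / c) * d2n_m * 1e6   # s/m^2 -> ps/(nm km)

print(round(dispersion_ps_nm_km(1.55), 1))  # roughly +22 ps/(nm km) at 1550 nm
```

The sign change of the result near 1.27 um reproduces the well-known zero-dispersion wavelength of silica.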
In our analysis, we treat the signal as an analog signal, where we only consider the shape of the signal; hence all types of dispersion mentioned above will have the same effect on our correlation measurement.
1.2.4 Noise
Noise in optical communication links is introduced either in active optical elements along the optical link or at the receiver end. Noise produced along the DWDM optical link is primarily due to Amplified Spontaneous Emission (ASE), which is light produced by spontaneous emission and amplified in an optical gain medium. ASE is generated in active optical elements such as optical amplifiers and light sources. ASE is directly proportional to the signal power and inversely proportional to the amplifier's gain and the link's bandwidth. Optical amplification nodes, such as the Erbium Doped Fiber Amplifier (EDFA), are intended to amplify the amplitude of the optical signal only; however, background noise and transmission link noise get amplified as well, in addition to the generated ASE. The spectrum of the background noise is often wide; however, some of that noise can land near or on the signal's wavelength spectrum and impair the signal through interference between the signal and the noise. This noise affects the receiver's ability to properly decode the optical signal and hence introduces bit errors.
The noise is quantified in terms of the Optical Signal to Noise Ratio (OSNR), which is mathematically defined as:

OSNR = 10 log₁₀ (P_signal / P_noise)    [1.3]

where P_signal is the optical signal power and P_noise is the optical noise power. The
higher the OSNR the better the signal quality is. This measure is very frequently defined
as a design factor when determining the QoS requirements of an optical link.
The noise added to the signal is often approximated as Gaussian noise that affects the entire data stream traversing a specific link with the same probability. We use this approximation in our simulations presented in Chapter 3.
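A small sketch of Eq. [1.3] under the Gaussian-noise approximation above (the test pattern, noise level, and sample count are made-up illustrative values):

```python
import math, random

def osnr_db(p_signal, p_noise):
    """Eq. [1.3]: OSNR = 10 log10(P_signal / P_noise)."""
    return 10.0 * math.log10(p_signal / p_noise)

random.seed(1)
signal = [1.0 if b else 0.0 for b in (0, 1, 0)] * 1000   # hypothetical test pattern
sigma = 0.1                                              # illustrative noise level
noisy = [s + random.gauss(0.0, sigma) for s in signal]   # additive Gaussian noise

p_sig = sum(s * s for s in signal) / len(signal)                      # mean signal power
p_noise = sum((y - s) ** 2 for y, s in zip(noisy, signal)) / len(signal)  # empirical noise power
print(round(osnr_db(p_sig, p_noise), 2))   # near 10*log10((1/3)/0.01) = 15.23 dB
```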
1.2.5 Jitter
In optical telecommunications, jitter is defined as variation in the signal characteristics between consecutive pulses, such as a variation in the pulse width and/or the phase of the pulse. In our analysis, we only consider temporal variations such as variation in the pulse interval or signal frequency variations. Our assumption is based on the way we treat and analyze the output signal, where the signal is considered to be incoherent. Jitter is often quantified based on the type of variation measured, which in our case would be a displacement in the pulse peak value.
1.3 Link Quality Measurement
In the telecommunication industry there already exist several standard methods
for measuring the quality of an optical link and the overall BER of the transmission link.
One standard technique is to directly measure the BER using a BER tester. A bit error occurs when a transmitted signal gets corrupted by an internal or external event that causes, for example, the reception of a 0 when a 1 is transmitted. The BER is a statistical measure of how often these errors occur. For BER measurements to be statistically significant, at least 100 errors need to be collected at the receiver end. This requires a lot of time, from several seconds to several minutes. In optical data communication links the BER is expected to be below 10⁻¹⁰ for a good connection. The test time required for a 95% confidence interval depends on the transmission bit rate. In table [1.1] we show the test time required for the standard bit rates used in optical data networks.
Bit rate Industry standard name Test time
40 Gbps OC-768 1 sec
10 Gbps OC-192 3 sec
2.5 Gbps OC-48 12 sec
155 Mbps OC-3 3.2 min
Table 1.1: Test time required to establish a reliable BER measurement for various
bit rate standards
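The entries in Table 1.1 are consistent with a common zero-error rule of thumb: for 95% confidence, roughly -ln(0.05)/BER ≈ 3×10¹⁰ bits must pass error-free at a target BER of 10⁻¹⁰, so the test time is about 3×10¹⁰ divided by the bit rate. A sketch (the target BER here is our inference from the table, not stated explicitly in the text):

```python
import math

def ber_test_time_s(bit_rate_hz, target_ber=1e-10, confidence=0.95):
    """Zero-error test time: n = -ln(1 - CL) / BER bits must be sent
    error-free for confidence CL that the true BER is below target_ber."""
    n_bits = -math.log(1.0 - confidence) / target_ber
    return n_bits / bit_rate_hz

for name, rate in [("OC-768", 40e9), ("OC-192", 10e9),
                   ("OC-48", 2.5e9), ("OC-3", 155e6)]:
    print(f"{name}: {ber_test_time_s(rate):.1f} s")
```

These reproduce Table 1.1 to within rounding (the 40 Gbps entry, 0.75 s, is rounded up to about 1 s in the table).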
Eye diagram measurement is another common measurement technique. Eye diagram measurements are much faster than BER testers and are considered the current industry standard for measuring and analyzing the performance of optical links. An eye diagram is constructed by superimposing every possible bit sequence, from simple 101s and 010s to isolated ones after long runs of consecutive zeros and other problem sequences that often reveal weaknesses in an optical link. The eye is generated on real-time oscilloscopes by accumulating millions of bit sequences, a process which takes up to tens of seconds using the fastest real-time scopes in the industry. Once generated, data measurements are observed over a time window three data periods wide, which, at ultra-high speeds, can be done in milliseconds. Figure [1.4] illustrates a typical eye diagram screen shot on a real-time scope. Features of the eye,
such as the eye opening, the eye overshoot/undershoot (i.e. amplitude distortion at the top
and the bottom of the eye), and the eye width are commonly used to determine the different types of impairments we discussed earlier and more. (Footnote: A confidence interval is a statistical measure used in BER calculations, where an interval with a given probability is generated multiple times from a random set of samples.) The effects of noise and
dispersion cause the eye opening to shrink, while amplitude distortion is often recorded using the eye overshoot/undershoot thresholds. Jitter and timing synchronization impairments are recorded using the width of the eye. The figure shows a perfect eye diagram where no impairments are added to the signal. The masks labeled mask 1 through mask 5 are used as a pass/fail test to measure the type and amount of impairments present. Mask 1 is used to measure the eye opening, while masks 2 and 3 keep track of the eye overshoot and undershoot. Finally, masks 4 and 5 are used to measure the width of the eye.
Figure 1.4: Reproduced from [13], screen shot of an eye diagram taken using a real-time scope
The results obtained from the eye diagram are related to the BER of the transmitted signal through the quality factor, or Q-factor, of the signal, which is defined as:

Q = |μ₁ − μ₀| / (σ₁ + σ₀)    [1.4]

where μ_x and σ_x are the mean value and the standard deviation of the measured impairment, and the subscript x denotes the bit value of 0 or 1. Once the Q-factor is quantified, the BER is calculated using the formula described in equation [1.5]:

BER = (1/2) erfc(Q / √2)    [1.5]
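Equations [1.4] and [1.5] can be evaluated directly; the eye statistics below are made-up values for illustration:

```python
import math

def q_factor(mu0, mu1, sigma0, sigma1):
    """Eq. [1.4]: Q = |mu1 - mu0| / (sigma1 + sigma0)."""
    return abs(mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """Eq. [1.5]: BER = (1/2) erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Illustrative (made-up) rail statistics from a hypothetical eye measurement.
q = q_factor(mu0=0.05, mu1=1.0, sigma0=0.08, sigma1=0.08)
print(q, ber_from_q(q))   # Q ~ 5.9, BER on the order of 1e-9
```

This reproduces the familiar rule that a Q-factor of about 6 corresponds to a BER near 10⁻⁹.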
Both BER testers and eye diagram measurements are slow, as they require the accumulation of a very large number of samples in order to obtain a valid statistical measurement. In addition, the signal has to be converted to the electronic domain, in which it is analyzed. The need to accumulate a large number of samples, typically millions, to achieve a reliable measurement stands as a bottleneck, as it limits how fast an action can be taken when an error occurs. As we discussed earlier in this chapter, next-generation optical networks require real-time response to signal failure or signal degradation, which existing techniques cannot offer. To solve this problem, new link monitoring methods are being developed, such as the OPM we describe in this dissertation.
1.4 OPM: Existing Methods
Optical performance monitoring (OPM) has been recently introduced in the literature. One approach, based on asynchronous amplitude histograms, has been proposed and shown to be promising. In this method a small amount of power is tapped from
the optical link and used to measure the quality of the link. This eliminates the need for
Optical-Electronic-Optical (O-E-O) conversion of the main data signal and maintains the
signal in the optical domain. The method is based on a statistical approach, where the
tapped optical signal is first collected on a high-bandwidth photodiode. Next, the
generated electrical signal is asynchronously sampled at a lower rate than the rate of the
signal. The samples are then collected and an amplitude histogram is generated
representing the frequency of occurrence of digital 0s and 1s and anything in between.
Figures [1.5a] and [1.5b] show two amplitude histograms: one generated using asynchronous sampling (a) and, to validate the results, one generated using synchronous sampling (b). Using information obtained from the shape and amplitudes of the generated histograms, the Q-factor is calculated and then related to the BER of the transmitted signal.
Figure 1.5: Reproduced from [15], Amplitude histogram generated using
asynchronous (a) and synchronous (b) sampling
The asynchronous amplitude histogram technique requires gathering a large number of samples (at least one million samples are usually needed) before a meaningful result is obtained. This requires a sampling time of multiple milliseconds, which is not acceptable if the device is to be used in real-time protection or provisioning of all-optical links. This technique, however, meets the transparency requirement imposed by next generation networks, as the monitoring technique is independent of the bit rate or modulation format. Amplitude histogram measurements also do not take all kinds of impairments into consideration, such as dispersion; the system always assumes the use of dispersion-compensated fiber.
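As a rough illustration of the histogram idea (not the cited authors' implementation), the sketch below draws amplitude samples from two noisy rails, splits them at a hypothetical decision threshold, and estimates the Q-factor from the per-rail statistics:

```python
import random, statistics

random.seed(7)
# Synthetic amplitude samples: noisy "0" and "1" rails (illustrative values).
zeros = [random.gauss(0.10, 0.02) for _ in range(50_000)]
ones  = [random.gauss(1.00, 0.05) for _ in range(50_000)]
samples = zeros + ones

# Split the amplitude distribution at a hypothetical midpoint threshold,
# then estimate Q from the means and standard deviations of the two modes.
threshold = 0.55
low  = [s for s in samples if s < threshold]
high = [s for s in samples if s >= threshold]
q = abs(statistics.mean(high) - statistics.mean(low)) / (
    statistics.stdev(high) + statistics.stdev(low))
print(round(q, 1))    # close to the true Q of 0.9/0.07 ~ 12.9
```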
Amplitude Power Spectrum (APS) analysis techniques have recently been deployed for monitoring and analyzing the behavior of any type of signal transmitted over an optical dispersive/noisy channel. In such analysis, signals are treated as analog waveforms and are often independent of the data format or bit rate, which is very desirable in monitoring all-optical networks in order to achieve the desired goal of complete transparency.
Figure [1.6] shows the general block diagram of an APS monitor, where a single DWDM channel is shown. A low-frequency subcarrier (SC) is added to the data stream (baseband signal). The baseband signal is combined with the SC either electronically or optically, and the combined signal is then used to modulate the laser transmitter. A unique SC frequency gets transmitted on each DWDM channel (or a separate channel on a different wavelength (λ) can be dedicated for monitoring). The optical fiber channel is then tapped at any point in the transmission line and the SC is filtered out and monitored. The SC tone can be detected either by using an electrical band pass filter (BPF) after photo detection (as shown in fig [1.6]) or by optical pre-filtering prior to photo detection. The idea behind the APS technique is to superimpose a narrow-band spectral signal or an RF tone (referred to as a subcarrier tone or pilot tone) on the optical baseband data signal. The SC signal travels the same complete path as the baseband signal (original data). The subcarrier is extracted at intermediate nodes throughout the optical channel and monitored without disrupting the original signal. The average power and shape of the subcarrier can be directly related to those of the baseband signal, hence providing information about the OSNR and dispersion that the original data encountered. Crosstalk, which is a non-linear impairment, can be measured by measuring the crosstalk encountered by SC tones in adjacent DWDM channels.
Figure 1.6: Block diagram of an APS monitor (reproduced)
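The tone-extraction step can be sketched digitally with a single-bin DFT standing in for the analog BPF of fig [1.6]; the subcarrier bin and modulation depth below are made-up values:

```python
import cmath, math, random

random.seed(0)
N = 4096
k_sc = 200                     # hypothetical subcarrier bin (tone frequency)
depth = 0.2                    # hypothetical SC modulation depth

data = [random.choice((0.0, 1.0)) for _ in range(N)]          # baseband bits
tapped = [d + depth * math.cos(2 * math.pi * k_sc * n / N)    # data + pilot tone
          for n, d in enumerate(data)]

# Single-bin DFT at the known SC frequency (digital analogue of the BPF).
bin_k = sum(x * cmath.exp(-2j * math.pi * k_sc * n / N)
            for n, x in enumerate(tapped))
est_depth = 2.0 * abs(bin_k) / N
print(round(est_depth, 2))     # close to the injected depth of 0.2
```

The small discrepancy from 0.2 comes from broadband data energy leaking into the SC bin, which is exactly the interference constraint discussed below.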
There are several constraints on the SC signal that need to be taken into consideration when using APS techniques. Since the SC tone is transmitted over the baseband data signal, we have to make sure that no interference occurs between the two transmitted signals. For this requirement to hold, the SC tone frequency has to be higher than the spectral tail of the data signal such that no crosstalk can occur between the two. Figure [1.7] shows the frequency spectrum of the baseband signal along with a subcarrier signal for a certain WDM channel.
Figure 1.7: Illustration of the frequency spectrum of both the data signal and the SC signal (reproduced)
Furthermore, the depth of modulation (power or strength of modulation of the SC) has to be sufficiently smaller than that of the baseband signal. Precautions also need to be taken when monitoring the WDM channels' power levels, since power fluctuation (gain or loss) may occur at the transmitter. Therefore, it is important to fix the SC's power level relative to the channel's power. Further constraints are required on the O-E module (e.g. photodetector) of the monitor circuit. Since the signal is not monitored at the receiver and is usually tapped somewhere along the channel, it is important to set the sensitivity of the O-E interface to be much higher than the downstream receiver's sensitivity. This ensures accurate measurements and accounts for any additional signal degradation that may occur between the tap and the downstream receiver.
Although APS monitoring techniques may seem to be a good solution for a lot of
OPM applications, they still suffer from several weaknesses. They require modification
of the transmitter to add the SC generation circuitry, which could be a major problem due
to physical limitations in long-haul networks. The monitoring speed of such techniques is limited by how fast the electronics (the O-E module) can go. This could be a bottleneck in applications that require high-speed fault detection and restoration. In addition, there are several SC-specific obstacles that need to be overcome before any of these techniques can be standardized [2, 12, 13].
Other all-optical monitoring systems have been proposed; however, most of them deal only with a specific type of signal impairment, such as dispersion or jitter, and often make assumptions about the network layout that severely limit their applicability to the existing configuration of the core optical network. In the next section we describe our proposal for OPM in all-optical networks and show how it compares to other existing methods.
1.5 OPM: Our Proposal
Let us first summarize the motivation behind OPM in next-generation all-optical networks. When a link is found to be unhealthy (e.g. a link failure due to a cable cut, or signal degradation due to impairments in the link), an immediate action needs to be taken to either set up an alternative path for the data (link restoration) or switch to a backup path that has already been established (link provisioning). At high bit rates (>10 Gbps) and with DWDM channels of very high bandwidth, a large amount of data can be lost within a very small period of time (e.g. for a link disruption of 1 ms at a bit rate of 40 Gbps, a total of 40 million bits would be lost, which is equivalent to 10,000,000 traditional phone lines). Therefore, using the previously discussed techniques to keep track of the link's health is neither adequate nor scalable, as the network won't be capable of reacting fast enough to link failures.
We can outline the essential features required in any OPM technique that will satisfy the next-generation Internet's network requirements (i.e. transparency, reliability, protection, and real-time link provisioning). The monitoring system should be:
- Independent of the transmitted signal format
- Fully implemented in the optical domain
- Capable of near-instantaneous error detection (picoseconds)
In this dissertation we introduce a novel approach for solving the problem of optical link health monitoring. We propose a new monitoring technique based on the use of optical correlation, where a known bit stream or test signal (e.g. 0 1 0) is continuously transmitted, or transmitted as a burst at a known frequency. It is very important to keep in mind that the data transmitted over the bit stream is irrelevant to our technique, as we treat the signal as a pure analog signal and analyze its amplitude and shape. The test signal is sent over a dedicated channel (i.e. a different wavelength) and gets multiplexed with the data stream in a DWDM system. The test signal is affected by all the impairments that the data channel encounters, as it traverses the same path as the data. The signal is picked up either at intermediate nodes along the link or at the receiver's end by tapping a small portion of the signal's power. The signal is then optically correlated with a clean version of the transmitted bit stream. Information from the correlation output (amplitude, side lobes, rise/fall time, frequency components, and others) is then extracted and used to set a threshold that indicates whether the transmission link meets the quality of service (QoS) and performance requirements specified by the carrier. The thresholding is implemented either in the optical domain using an optical saturable absorber or electronically using a fast comparator.
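The correlate-and-threshold decision described above can be sketched as a toy discrete model; the test pattern length, the dispersion-like impairment, and the 90% threshold are illustrative assumptions, and the real device performs these steps optically:

```python
def correlation_peak(received, reference):
    """Peak of the discrete cross-correlation (incoherent, intensity-only)."""
    n = len(reference)
    return max(
        sum(received[lag + k] * reference[k] for k in range(n))
        for lag in range(len(received) - n + 1)
    )

reference = [0.0, 1.0, 0.0] * 16          # clean "0 1 0" test pattern
# Hypothetical dispersion-like impairment: each pulse spreads into neighbors.
degraded = [
    (reference[k - 1] + reference[k] + reference[(k + 1) % len(reference)]) / 3.0
    for k in range(len(reference))
]

peak_clean = correlation_peak(reference, reference)   # autocorrelation peak
peak_bad = correlation_peak(degraded, reference)
threshold = 0.9 * peak_clean                          # made-up QoS threshold
print(peak_bad >= threshold)                          # prints False -> flag the link
```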
1.6 Physical Implementation
Our approach is based on a time-integrating or temporal optical correlator (TOC). The correlator is physically implemented using an N-tap delay line (TDL), N weight elements with one at each tap, and an N-input summer. In general terms, the TOC is used to measure how different a deteriorated bit sequence (received at its input) is from a clean replica of the transmitted bit sequence (i.e. one not affected by any impairments). At the input of the TOC, the received signal r(t) is split into N copies, where each copy is delayed by a discrete time increment Δτ. Each copy is then optically multiplied by a weight function s_k(t), where s_k represents the weight function at the k-th tap in the TDL. Each weight function is chosen to represent the pulse shape for a [0 1 0] or other specific sequence, and is determined by the transmitted bit sequence and the number of taps implemented. Finally, the amplitudes of the delayed copies are summed incoherently (no phase components), resulting in an output C(t) that represents the cross-correlation function between the delayed copies and the weight functions. Coherent summing could also be used.
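The tap/weight/sum structure can be sketched numerically as follows (the number of taps, the weight samples, and the input pulse are illustrative, not design values):

```python
# Toy N-tap delay-line correlator: delay, weight, and incoherently sum.
N = 8                                   # illustrative number of taps
weights = [0, 1, 1, 1, 1, 1, 1, 0]      # s_k: samples of a "0 1 0"-style pulse
r = [0.0] * 4 + [1.0] * 6 + [0.0] * 8   # received pulse, one sample per delay

def toc_output(r, weights):
    """C(t) = sum_k s_k * r(t - k*dt), evaluated at each output time t."""
    n_out = len(r) + len(weights) - 1
    out = []
    for t in range(n_out):
        acc = 0.0
        for k, w in enumerate(weights):
            if 0 <= t - k < len(r):
                acc += w * r[t - k]     # delayed copy, weighted, accumulated
        out.append(acc)
    return out

C = toc_output(r, weights)
print(max(C))   # correlation peak: 6.0 (full overlap of the six-sample pulse)
```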
Additionally, we propose a new design for an optical correlator based on the White cell that can produce hundreds or even thousands of delays with a tolerable amount of loss. The White cell technology has been adapted by the optics research group at The Ohio State University and used in several applications, such as optical true time delay, optical switching, and others. We will discuss the detailed design of the White-cell-based TOC in Chapter 2.
1.7 Routing Protocol based on OPM
Information obtained from OPMs needs to be included when calculating new routes for data signals, or backup routes when signal failures occur, in next-generation Internet networks. The routing decision needs to be based on how healthy the overall path is between the transmitter and the receiver or between intermediate nodes. Current routing protocols primarily base routing decisions on the shortest available path to the receiver, where the shortest path is measured in terms of the number of hops (or spans), the propagation delay, the blocking probability of intermediate nodes, or a combination of those factors and others. OPM data will need to be added as an additional factor in the routing decision formula in order to sustain the reliability requirements of future networks.
Recently, this topic has been actively addressed in the research field. Most of the proposed ideas are based on pre-knowledge of the network topology and its physical parameters, such as link lengths, the type of fiber used in each link, the number of active elements, etc. Pre-knowledge of network topology and parameters requires a lot of processing and storage power, and furthermore conflicts with the network transparency requirement. Other ideas suggest establishing routes based on worst-case scenarios, meaning that switches have to choose their paths based on the behavior of the worst link along the path. This approach results in low bandwidth utilization, and the literature shows that its results are rarely reliable for making routing decisions due to changing network dynamics.
The final goal of routing in the optical domain is to increase the revenue of the network and maintain the level of QoS promised to customers. The choice of a good route-computation algorithm is essential to the performance of these networks. The physical capacity available in optical data networks has increased (theoretically) to several Tbps with optical switching. How much of this available physical capacity can be utilized reliably depends on the route-computation algorithm used. With each fiber link capable of carrying 40 Gbps or more worth of data, the impact of even a few percent improvement in the usable network capacity is significant, on the order of hundreds of Gbps, if not Tbps.
In a joint work with the Computer and Information Science Department at The Ohio State University, a new route-computation algorithm, called Domain Optical Routing Protocol (DORP), was proposed. The protocol combines intelligent routing with the immediate availability of information about signal quality provided by the optical correlator-based OPM. The route-computation algorithm defines the weight of a link using both its available capacity and its quality. The proposed distributed protocol requires the nodes inside a domain to exchange availability and quality information, where a domain is defined as a sub-section of nodes geographically spaced within a pre-specified distance. Inside the domain, all nodes exchange link state information, which includes availability and quality. Between domains, border nodes advertise the aggregate cost to pass through the corresponding domains. In this way the domain cost is more up to date and more meaningful for the purpose of routing.
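A minimal sketch of quality-aware route computation in this spirit (the topology, the availability/quality numbers, and the specific link-cost formula are our own illustrative choices, not the published DORP definitions):

```python
import heapq

def dijkstra(graph, src, dst):
    """Standard Dijkstra shortest path over weighted edges."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def link_cost(capacity_frac, quality):
    """Hypothetical DORP-style weight penalizing low availability and quality."""
    return 1.0 / (capacity_frac * quality)

# Links as (available capacity fraction, OPM quality in [0, 1]); made-up topology.
links = {
    ("A", "B"): (0.9, 0.95), ("B", "D"): (0.9, 0.10),   # B-D degraded per OPM
    ("A", "C"): (0.8, 0.90), ("C", "D"): (0.8, 0.90),
}
graph = {}
for (u, v), (cap, q) in links.items():
    graph.setdefault(u, []).append((v, link_cost(cap, q)))
    graph.setdefault(v, []).append((u, link_cost(cap, q)))

print(dijkstra(graph, "A", "D"))   # routes around the degraded B-D link
```

An availability-only protocol would take the two-hop A-B-D path here; folding the OPM quality into the weight steers traffic onto A-C-D instead.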
The division of the network into domains is done to avoid problems associated with network scalability. For example, if the network size is extended, then the distribution of the link state information will take more time, making the information itself stale and misleading for the purpose of routing. Also, it is known that the Internet is composed of autonomous networks and that, for administrative reasons, it's impossible to distribute detailed link state information among all such networks. To overcome these two problems the proposed protocol is based on domains.
Some preliminary simulations of the effectiveness of DORP are presented in [49]. Figure [1.8] shows a comparison between availability-based protocols (such as OSPF and RSVP) and DORP, using the NSF network with 16 nodes, 25 links, and 4 wavelengths per link. Results indicate that DORP outperforms the availability-based routing protocols in generating more revenue. In this simulation the revenue is given by the number of accepted calls. The QoS factor is simulated by randomly dropping the quality of only one link below an acceptable threshold for short periods of time (a few seconds). In availability-based routing, calls that use links with quality below the threshold do not generate revenue, whereas DORP, using the information provided by the optical correlator, avoids these links and hence all accepted calls generate revenue. With more than one link with quality below the threshold, the advantage of DORP over the availability-based protocol increases.
Figure 1.8: Reproduced from [49], performance analysis of DORP against an availability-based routing protocol
1.8 Document Organization
The dissertation is organized as follows: In Chapter 2 we will explain the theoretical principles behind optical correlation and discuss the different types of optical correlators available. We then discuss how optical correlation can be used in OPM. Later in the chapter we will introduce a new design for a temporal optical correlator (TOC) based on the White cell and discuss the details of the design.
Chapter 3 will describe some of the simulations performed to support our design. We present simulation results describing how the different types of impairments affect the correlation output. We finally relate our obtained results to industry-standard monitoring techniques and explain how we can relate our measurements to the BER of the transmitted signal.
In Chapter 4, we describe in detail the OPM proof-of-concept experimental apparatus implemented. We divide the setup into five sections, namely the input system, MEMS, impairment generation circuitry, TOC, and output system, and explain the design details of each.
Chapter 5 will discuss the experimental results obtained using the proof-of-concept setup and compare the obtained results to their expected theoretical values. We first describe the alignment procedure used to align the optics in the White cell-based TOC. We then show the correlation output results obtained and how the correlation function responds to each of the impairment types we discussed in Chapter 1. We finally show a detailed power loss analysis of the system and discuss its feasibility.
Finally, in Chapter 6 we conclude the dissertation with suggestions for future work that could be implemented to enhance the current design.
2.1 Introduction
In this chapter we explain the theory behind optical correlation and how we use
correlation techniques in optical performance monitoring (OPM). We also describe in
detail the design of a new optical correlator based on the linear White cell. In section 2.2
we briefly discuss the concept of correlation and its significance in signal processing
applications. We then, in section 2.3, explain how we utilize the correlation function in
OPM applications. Following in sections 2.4, 2.5 and 2.6, we describe in detail the
design of a new White cell-based optical correlator and discuss its advantages.
2.2 Principle of Correlation
The concept of correlation was introduced in 1890 by an English statistician, who defined the relationship between any pair of statistical events or processes through the concept of statistical regression. His definition was considerably extended throughout the twentieth century, and a new time-dependent measure was introduced, now termed the correlation function.
The correlation function is defined depending on the field of study that is being
considered and not all definitions are identical. Although most definitions quantify the
co-relation between two random variables at a specific time or between different time
instants of the same variable, the mathematical representation can vary.
Statistically, the correlation function ρ(X, Y) between two random variables X and Y, with expected values (i.e. mean values) E(X) and E(Y) and standard deviations σ_X and σ_Y, is defined as:

ρ(X, Y) = cov(X, Y) / (σ_X σ_Y) = [E(XY) − E(X)E(Y)] / (√(E(X²) − E²(X)) √(E(Y²) − E²(Y)))    [2.1]
where the numerator represents the covariance between the two variables and the denominator represents the product of their finite, non-zero standard deviations. The absolute value of ρ(X, Y) cannot exceed 1. The value 0 indicates no correlation, or that the variables are independent. For any value of |ρ(X, Y)| that is less than 1, the function is termed a cross-correlation function. Correlation function values that are close to 1 indicate a high degree of similarity between the two variables. A value of 1 indicates a complete match between the two variables and is termed auto-correlation. The values between 0 and 1 define the strength of the correlation function and are usually referred to as the correlation coefficients. The correlation function can be constructed from those coefficients by a direct averaging of the time-dependent function; the averaging process can be thought of as an extension of the mean square value.
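Eq. [2.1] can be checked numerically from sample moments (illustrative random data only):

```python
import math, random

def corr(xs, ys):
    """Eq. [2.1]: cov(X, Y) / (sigma_X * sigma_Y), from sample moments."""
    n = len(xs)
    ex, ey = sum(xs) / n, sum(ys) / n
    exy = sum(x * y for x, y in zip(xs, ys)) / n
    ex2 = sum(x * x for x in xs) / n
    ey2 = sum(y * y for y in ys) / n
    return (exy - ex * ey) / (
        math.sqrt(ex2 - ex**2) * math.sqrt(ey2 - ey**2))

random.seed(42)
x = [random.gauss(0, 1) for _ in range(10_000)]
noise = [random.gauss(0, 1) for _ in range(10_000)]
print(round(corr(x, x), 3))      # 1.0: complete match (auto-correlation)
print(round(corr(x, noise), 3))  # near 0: independent variables
```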
In signal processing, the function is defined somewhat differently: without normalization, that is, without subtracting the mean and dividing by the standard deviation. For this definition we consider the example of a data signal transmitted over an optical transmission medium (e.g. a fiber optic link), which is our interest in this dissertation. The two variables of interest are the transmitted signal s(t) over the fiber optic link and the received signal r(t − τ), where r(t) is delayed by a time variable, τ, due to transmission and t is the reference time. The correlation function, Φ(τ), is defined by the integral:

Φ(τ) = ∫₋∞^∞ s(t) r(t − τ) dt    [2.2]

The infinite limits indicate that the correlation function is continuous over an infinite data stream. In the discrete domain, we can re-write Φ(τ) for a finite number of samples of the received signal as the summation:

Φ(τ) = Σ_{k=1}^{N} s(t_k) r(t_k − τ)    [2.3]

where N is the number of samples of interest. We will be using the definition in equation 2.3 for our analysis and simulations throughout the rest of this document.
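A direct evaluation of Eq. [2.3] on short sampled waveforms (the pulse shapes are illustrative); the peak of Φ(τ) recovers the delay between the two signals:

```python
def phi(s, r, tau):
    """Eq. [2.3]: Phi(tau) = sum_k s(t_k) r(t_k - tau), on sampled signals."""
    return sum(s[k] * r[k - tau]
               for k in range(len(s)) if 0 <= k - tau < len(r))

s = [0.0, 1.0, 1.0, 1.0, 0.0, 0.0]    # transmitted samples s(t_k)
r = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]    # received: the same pulse, delayed

lags = list(range(-len(r) + 1, len(s)))
values = [phi(s, r, tau) for tau in lags]
best = lags[values.index(max(values))]
# r lags s by one sample; under this sign convention the peak sits at tau = -1.
print(best, max(values))    # prints: -1 3.0
```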
2.3 Optical Correlation for OPM
In optics, correlation functions are commonly used in interferometry to quantify the degree of coherence between electromagnetic waves. Correlation functions are also well known in the literature for signal processing applications, primarily as encoders and decoders for optical code division multiple access (OCDMA).
Optical correlators come in two basic styles: spatial and temporal. In temporal correlation, a time-varying signal (e.g., intensity or phase) is compared to a reference time-varying signal using, for example, an acousto-optic device or an optical tapped delay line. The result of the comparison is then summed or integrated to produce the correlation output. Temporal correlators based on tapped delay lines encounter low power attenuation and can produce a large number of delays, ranging from picoseconds to tens of nanoseconds.
Spatial correlators, on the other hand, are widely used in image detection and processing applications. They usually take advantage of holograms, for example, to compare a two-dimensional image with some reference image. Spatial-integrating correlators are much faster than time-integrating ones, processing roughly three orders of magnitude more samples per second than time-based integrators. Their disadvantage is the limited range of spatial shifts (equivalent to delays) possible, since delays are usually produced in a crystal by a spatial shift, corresponding to delays in the range of femtoseconds or tens of femtoseconds. The losses can be much higher too.
Our approach is based on a temporal optical correlator (TOC) using an optical tapped delay line. Although any optical tapped delay line can be used in a temporal-type optical correlator, we introduce a novel one in this dissertation that is based on the White cell. We show that our White cell-based correlator outperforms existing temporal correlators in the number and the range of delays produced by a factor of 100 or more, with power losses below 7 dB.
2.4 Time-Integrating Optical Correlator (TOC)
The TOC is implemented using a tapped delay line (TDL), a set of weighting elements or reference elements, and an optical summer. The correlation takes place between the received signal, r(t), after it passes through the optical link, and a reference signal representing a copy of the original transmitted signal. The reference signal present at the TOC is represented by the weighting elements, s(t). The weights can be amplitude weights or phase weights. Amplitude weights are either 1s or 0s, whereas phase weights are implemented by phase shifts of either 0 or π.
In figure [2.1] we show the structure of a TOC, where the correlator is physically implemented using an N-tap TDL. The input to the correlator is a distorted square pulse with a frequency of 1/T. The signal gets delayed and multiplied by N weighting elements representing the original square pulse. The outputs are then all summed, producing a correlation output with a period of 2T.

Figure 2.1: N-tap time-integrating optical correlator. The figure shows the correlation output between a degraded input and the weighting elements at each tap
The correlation output is generated as follows: the received time-varying signal r(t) enters the TDL, where a small amount of the power is siphoned off at each tap. Each tap is delayed relative to the next tap by a fixed time increment Δ. Each time-shifted replica of r(t) is multiplied by the weight s_i present at that tap, and the resulting products are summed. The result is the correlation function of Equation 2.3 between the deteriorated test signal r(t) and the TDL weights s_i. As described previously, if the two signals are identical, Eq. (2.3) becomes an autocorrelation, and the output will have a high peak in the center of the time slot and low side lobes. If the signals are less well matched, Eq. (2.3) becomes a cross-correlation function: the peak decreases while the energy in the sides of the pulse increases. Information from the correlation output such as peak amplitude, side lobes, rise/fall time, frequency components, and so on is extracted and processed (optically or electronically). The processed data is then compared to a reference threshold or a reference function to indicate whether the tested transmission link meets the signal-quality requirements specified by the carrier.
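The tap-delay-multiply-sum operation above can be sketched numerically. The Python/NumPy fragment below is an illustrative equivalent (our simulations are written in MATLAB, and the six-samples-per-bit resolution here is an arbitrary choice, not the hardware tap count): it forms all relative shifts between the received signal and the weights and sums the products.

```python
import numpy as np

def toc_output(r, s):
    """Time-integrating correlator sketch: shift r(t) one sample per tap,
    multiply by the tap weight, and sum (illustrative, not the optics)."""
    n = len(s)
    out = np.correlate(r, s, mode="full")  # all relative shifts, length 2N-1
    return out / n                         # normalize by the number of taps

# Clean [0 1 0] test pattern sampled with 6 taps per bit (assumed resolution)
s = np.concatenate([np.zeros(6), np.ones(6), np.zeros(6)])
auto = toc_output(s, s)
print(auto.max(), int(np.argmax(auto)))  # peak at zero relative shift
```

For identical signals the peak sits at the zero-shift position (the center of the full correlation), matching the autocorrelation behavior described above.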
The key element of the TOC is the tapped delay line: the more taps, the higher the resolution of the correlation output. One way to implement taps is using fiber splitters. Figure [2.2] shows two common styles: the tree in (a) consists of a 1xN splitter followed by N lengths of fiber, each fiber longer than the previous one by the length corresponding to one delay increment Δ; the other type in (b) uses 2x2 couplers/splitters in various types of lattices. The number of splitters is equal to the number of taps, and for each tap there is a separate, precisely cut length of optical fiber. Such designs are not scalable, as the power loss increases dramatically with each splitter added. Additionally, the fiber lengths have to be cut very precisely in order to ensure correct delays.
Another recent approach uses fiber Bragg gratings, where the gratings are imprinted at various distances along the fiber. As the light beam enters the fiber, portions of the beam are reflected at each grating, resulting in multiple reflected beams with different delays. Such technologies become impractical to implement if a very large number of taps is needed. The largest number of taps reported using fiber Bragg gratings (FBGs) is 63. This means a maximum resolution of only 64 samples, which may not be enough for a high-resolution correlation output. In addition, each grating in the FBG needs to be long in order to achieve high reflectivity, which introduces ambiguity in the time delay.

Figure 2.2: Two types of tapped delay lines. (a) 1xN splitter followed by N fibers of different lengths, each providing a different delay. (b) 2x2 couplers/splitters, where each splitter accounts for a single tap
In our design, the implementation of the correlator is based on a free space
approach rather than fibers. The correlator utilizes the concept of the White cell, which is
described in the next section.
2.5 White cell principle
The White cell was introduced by J. White in 1942 for the purpose of spectroscopy, specifically for measuring low-pressure vapor spectra. Since then, the White cell has been adapted and utilized in many other applications such as optical true time delay, optical computing, and optical reflectometry.
The White cell is a free-space optical device consisting of three spherical mirrors
with equal radii of curvature, R, figure [2.3]. The three mirrors are organized such that
one mirror, mirror M, is facing the other two, mirrors A and B. Mirror M is referred to as
the field mirror and mirrors A and B as the object mirrors. The distance between the
mirrors is equal to their radius of curvature, R = 2f, where f is the focal length. The center of curvature of mirror M, CC(M), is located between mirrors A and B, while the centers of curvature of both A and B, CC(A) and CC(B), are located on mirror M. CC(A) is located a small distance above the center of mirror M, while CC(B) is located the same distance below the center.

Figure 2.3: The original White cell with three spherical mirrors
Figure [2.4] shows how light propagates through the White cell. The light first
enters the White cell using an input turning mirror (ITM). The input light beam is
focused onto the ITM, which is tilted such that the light beam gets directed towards
mirror A as shown in figure [2.4a]. Mirror A sees the input spot on the input turning
mirror as an object and images it to a new spot (bounce 1) on mirror M. As we see in
figure [2.4b], the location of the new spot is formed at an equal and opposite distance, y1,
from the center of curvature of A, CC(A). Meanwhile, mirror M sees the light on mirror
A as an object and re-images it onto mirror B at an equal and opposite distance, y2, from
CC(M). The process repeats and the second bounce is formed similarly on mirror M at
an equal and opposite distance from CC(B).

Figure 2.4a,b,c: Beam propagation in the original White cell
2.5.1 Beam Propagation in the White Cell
The re-imaging process between the field mirror and the object mirrors generates
a spot pattern on mirror M. The number of spots generated is controlled by the separation between the centers of curvature and/or the diameter of mirror M. The locations of these spots are controlled by the location of the input spot(s) and the locations of the centers of curvature of mirrors A and B with respect to the optical axis of mirror M. Figure [2.5] illustrates a specific spot pattern as
viewed on the front of mirror M. As we see in the figure there are eight generated spots
that toggle back and forth around the center of mirror M until the final spot eventually
walks off the edge of mirror M. The final spot is picked up using an output turning
mirror (OTM) that usually directs the beam outside the White cell, where the beam is
analyzed or further processed.
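The re-imaging rule, in which each new spot lands equal and opposite the previous spot about the relevant center of curvature, can be sketched in one dimension with a few lines of Python. The function name, the input spot position, and the CC offset below are illustrative values, not the experimental geometry.

```python
def spot_pattern(y0, delta, n_bounces):
    """Iterate the White cell imaging rule along one axis of mirror M:
    each new spot lands equal and opposite the previous spot about
    CC(A) (at +delta) or CC(B) (at -delta), used alternately."""
    spots = [y0]
    y = y0
    for i in range(n_bounces):
        cc = delta if i % 2 == 0 else -delta  # alternate object mirrors A, B
        y = 2 * cc - y                        # image equal and opposite about cc
        spots.append(y)
    return spots

# Input spot 2.5 units from center, CC offset of 1 unit (assumed values)
print(spot_pattern(-2.5, 1.0, 4))  # [-2.5, 4.5, -6.5, 8.5, -10.5]
```

The spot positions alternate in sign and grow in magnitude, toggling around the center of mirror M until the pattern would walk off the mirror's edge, as described above.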

Figure 2.5: Single input bounce pattern on mirror M in the White cell
It is also possible for multiple beams to circulate in the White cell with each beam
tracing a unique spot pattern. Figure [2.6] shows the spot pattern formed for each of
three input beams indicated by three different spot colors.

Figure 2.6: Multiple inputs bounce pattern on mirror M
Note in figure [2.6] that each input spot follows an independent path without interfering with other beams before leaving mirror M. Now, if each spot is made to land on a pixelated reflective surface whose angle we can control, each beam can be manipulated independently at any given bounce. This property will be exploited in the design of our White cell-based TOC.
2.5.2 White cell Imaging Conditions
In order for a White cell to function as described in section 2.5, there are two imaging conditions that have to be maintained at all times. First, mirror M has to image onto itself through either of the object mirrors A or B with a total magnification of -1. Second, the object mirrors A and B have to image onto each other through mirror M.
2.6 White cell-based TDL
In our design we adapt the White cell to be used as the tapped-delay line in the
temporal correlator. To do so, we perform several modifications to the original White
cell as illustrated in figure [2.7]. The modifications are highlighted in red. First, we
replace White cell mirror M with a Micro-Electro-Mechanical Systems (MEMS) micromirror array and a field lens along each arm. The MEMS consists of a two-dimensional array of mirrors, each of which can be controlled to tip to an angle of +θ or −θ, or to stay flat, with respect to the pixel's normal. In addition, we build an additional White cell arm, arm C, which will eventually be used to produce delays, as shown in the figure.

Figure 2.7: Modifications made to the original White cell (shown in red)
The White cell arms are placed such that one arm, arm B, is located along the MEMS normal, while the other two arms, A and C, are positioned along angles equal to twice the tip angle of the MEMS pixels (i.e., 2θ). In figure [2.8] we show three pixels, where one is flat and the other two are tipped at angles ±θ. Consider a beam coming from object mirror B and striking a pixel along the normal to the MEMS surface plane; the deflection of the beam will depend on the tip angle of the mirror. If the pixel is tipped to +θ, the beam is deflected by twice the tip angle, +2θ, which sends the beam to arm A. Similarly, if the pixel is tipped to −θ, the beam is deflected by −2θ, toward arm C.

Figure 2.8: Pixel tip angles and the resulting deflected beam angles
Note that the mirrors alone provide a reflective surface; however, since they are flat (as opposed to spherical like the original mirror M), the imaging conditions discussed in section 2.5.2 no longer hold. To fix this problem we add a field lens at a calculated distance from the MEMS plane along each of the arms. Ideally, a spherical mirror with a radius of curvature R is equivalent to a flat mirror placed right next to a lens with focal length f_lens = R. The focal lengths of the field lenses we used were slightly different, chosen to compensate for the separation between the field lens and the MEMS while maintaining the White cell imaging conditions.
2.6.1 White cell delay arm
In the original White cell, a beam bounces a fixed number of times and hence encounters a fixed time delay. The total delay is proportional to the separation between the object mirrors and the field mirror, or in other words, the distance light has to travel in the White cell before exiting. In the modified White cell shown in figure [2.7] above, this same delay is produced in the White cell containing arms A and B along with the MEMS. The delay increment of our White cell-based tapped delay line, Δ, discussed in section 2.3, will be produced in arm C. We modify arm C, shown in figure [2.8], to produce a longer time delay by increasing the separation between the MEMS and mirror C. We will refer to arm C in our discussion as the delay arm.
Beams circulating in the White cell TDL can either bounce back and forth between mirrors A and B or be sent to the delay arm. The delay produced in the A-B-MEMS White cell will be considered the null (zero) delay, and we will refer to this White cell as the null cell. Beams that visit the delay arm get delayed by Δ for each round trip in arm C, relative to the time it takes to make a round trip to B. Hence, by controlling the number of times a beam is sent to the delay arm, we can control the total delay a beam accumulates before it exits the White cell.

2.6.2 Design Constraints
Delay arm design

In order to maintain the imaging conditions in the delay arm, an even number of lenses (a lens train) is added between the field lens and mirror C. Other methods of producing time delays in the White cell, such as using glass or silicon blocks, have also been demonstrated.
The lens train contains a group of lenses placed such that the first lens, lens 1, is located at a conjugate plane (CP) of mirrors A and B, that is, at the same distance from the MEMS. The second lens is identical to lens 1 and is placed at a distance equal to twice their focal length (2f). Figure [2.9] describes the optical layout of the delay arm along with the locations of the images produced in the arm.

Figure 2.9: Delay arm showing the image locations produced by each lens
First, the field lens sees the MEMS as an object and forms a virtual image of the MEMS at a plane located behind the MEMS (1st image in the figure). Lens 1 then sees this image as an object and produces a real image of the MEMS between lens 1 and lens 2 of the delay arm (2nd image in the figure). The second lens in the lens train, in turn, treats this image as an object and, through mirror C, produces an image of the MEMS (3rd image in the figure) with a magnification of -1 back at the same location as the 2nd image. The lenses are chosen such that the second image is located at a distance from mirror C equal to the radius of curvature of mirror C. The rays forming the image then follow the same path backwards to the MEMS with a total magnification of -1, hence conforming to the White cell imaging conditions.
Note that the total delay produced is due to the extra distance that the beam travels, which is highlighted in blue in the figure. The delay produced is always going to be a multiple of Δ. The delay increment, Δ, can be calculated as shown in equation (2.4):

Δ = D/c + n1·th1/c + n2·th2/c    [2.4]

where c is the speed of light in air, D is the round-trip distance (D = 8f), th1 and th2 are the thicknesses of lenses 1 and 2 in the delay arm, and n1 and n2 are their respective refractive indices.
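As a quick sanity check on equation (2.4), the fragment below evaluates the delay increment for assumed example values: a 250 mm lens focal length and two 5 mm thick lenses of refractive index 1.5. These numbers are illustrative only, not the actual hardware parameters.

```python
c = 2.998e8  # speed of light in air, m/s (approximate)

def delay_increment(f_lens, th1, n1, th2, n2):
    """Delay increment per round trip in arm C, per Eq. (2.4):
    extra path in air plus the optical path through the two lenses."""
    D = 8 * f_lens                       # round-trip distance in air
    return D / c + n1 * th1 / c + n2 * th2 / c

# Assumed values: f = 250 mm, two 5 mm thick lenses with n = 1.5
dt = delay_increment(0.25, 0.005, 1.5, 0.005, 1.5)
print(f"{dt * 1e9:.2f} ns")
```

With these assumed values the air path dominates: the 2 m round trip contributes about 6.7 ns, while the glass adds only tens of picoseconds.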
Field lens design
There are two constraints that are considered when determining the separation
between the MEMS and the field lens. We first have to make sure that the optical mounts
housing both pieces can fit side by side. Second, we have to make sure that all the beams
leaving the MEMS and diverging towards any of the White cell mirrors will be captured
by the field lens's clear aperture. As a rule of thumb, we always try to keep the separation between the MEMS and the field lens as small as possible in order to reduce the overall size of the entire system.

Input beam considerations

As beams circulate in the White cell, they get focused into a column of spots on the MEMS pixels at each bounce. The size of the focused spot is critical in the design of the White cell components. The spot size has to be small enough to fit on the MEMS pixel to avoid any power loss, but not so small that it diverges too fast and gets apertured at the optics used.
Both constraints were considered when designing the input optics. The input system is designed to produce an input spot size such that the MEMS pixel captures more than 99.99% of the beam's energy. Additionally, the pitch between adjacent beams is set to match the pixel pitch. In our calculations we approximate the input beam as a perfect Gaussian beam with a beam waist of ω0. This approximation holds with little error since the beam enters the input system from a single-mode fiber array.
We calculate the ratio between the spot size and the pixel size by finding the ratio between the power landing on the MEMS pixel and the total power of the same Gaussian beam. We assume square pixels of dimension a to simplify the calculations. The electric field of a Gaussian beam is represented by the following equation:

E(x, y) = A·exp[−(x² + y²)/ω0²]    [2.5]

where A is a constant and x and y are the beam's position variables. We will drop A in the remaining calculations as it won't affect the final result. To calculate the power ratio, we integrate the intensity of the Gaussian beam over the pixel's area and divide by the total power:

η = [∫ from −a/2 to a/2 exp(−2x²/ω0²) dx · ∫ from −a/2 to a/2 exp(−2y²/ω0²) dy] / [∫ from −∞ to ∞ exp(−2x²/ω0²) dx · ∫ from −∞ to ∞ exp(−2y²/ω0²) dy]    [2.6]

Since the x and y integrals are identical for a square pixel, we simplify by substituting u = (√2/ω0)·x, which in turn gives du = (√2/ω0)·dx. Equation [2.6] now becomes

η = [∫ from 0 to u0 exp(−u²) du / ∫ from 0 to ∞ exp(−u²) du]² = erf²(u0),  with u0 = a/(√2·ω0)    [2.7]

We further simplify equation [2.7] and set η = 0.9999. Now the equation becomes

erf²(u0) = 0.9999, which gives u0 = 2.8678    [2.8]

Finally, we rewrite equation [2.8] in terms of the pixel dimension, a, to get

ω0 = a/(√2 · 2.8678) ≈ a/4    [2.9]

Hence we choose the spot size, ω0, to be 1/4 of the pixel dimension.
Further discussion of the design of the input optics is included in chapter 4.
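The numbers in equations [2.8] and [2.9] can be verified with `math.erf`; the fragment below is an illustrative check (the function name is ours) of the captured power fraction for a pixel about four times the spot size.

```python
import math

def captured_fraction(a_over_w0):
    """Fraction of a Gaussian beam's power landing on a square pixel of
    side a, for E ~ exp(-(x^2+y^2)/w0^2): erf(a/(sqrt(2)*w0)) squared."""
    u0 = a_over_w0 / math.sqrt(2)
    return math.erf(u0) ** 2

# Pixel dimension from Eq. [2.9]: a = sqrt(2) * 2.8678 * w0, about 4 * w0
a_over_w0 = math.sqrt(2) * 2.8678
print(a_over_w0, captured_fraction(a_over_w0))  # ratio ~4.06, fraction ~0.9999
```

This confirms that a spot waist of roughly one quarter of the pixel dimension captures 99.99% of the beam's energy on the pixel.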
2.6.3 Linear White Cell-based Tapped Delay Line (TDL)
Recall that a beam encounters a delay of Δ each time it is sent to mirror C instead of B. If the total number of times a beam bounces in the cell is m, where a bounce is defined as each time the beam hits a pixel, then the maximum delay a beam can accumulate is (m/2)·Δ. The delay is termed linear since the number of delays is linear in m. The factor of m/2 arises because the beam has to visit mirror B every other bounce, which means that it takes two bounces to produce one delay.
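This bookkeeping can be sketched in a few lines (the function name and values are illustrative): an m-bounce linear cell offers the tap delays 0, Δ, 2Δ, ..., (m/2)·Δ.

```python
def linear_cell_delays(m, delta):
    """Tap delays available from a linear White cell TDL with m bounces:
    each unit of delay costs two bounces (one round trip to B, one to C),
    so the taps are 0, delta, ..., (m // 2) * delta."""
    return [k * delta for k in range(m // 2 + 1)]

# A 12-bounce budget (14 total minus one input and one output bounce)
taps = linear_cell_delays(12, 1.0)
print(taps)  # null delay plus six delay increments
```

For the six-beam example discussed below, this gives the null delay plus six increments, consistent with 14 total bounces including the input and output bounces.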
Figure [2.10] shows the setup of a linear White cell. As mentioned earlier, the MEMS pixels can be tipped to ±θ. This allows beams to bounce from any arm to another, so beams can bounce between A and B, A and C, or B and C. For example, for a beam coming from arm A to go to arm B, the pixel has to be tipped to +θ; to go from B to C, the pixel is set to −θ, and so forth. We will refer to arm B as the switching arm, or the decision arm, since all beams have to go to arm B before being sent to arm C or arm A, and hence delayed or not.

Figure 2.10: White cell-based TDL highlighting the null cell, the switching arm, and
the delay arm
In table [2.1] we show the beam progression in order to produce the various
delays possible with the linear White cell. The beam is assumed to have entered the
White cell through the ITM and is directed towards mirror B through the MEMS. In the
table we chose the beam array size to be six beams; hence, the beam array uses six pixels on each MEMS bounce. The total number of bounces needed to produce a total of six delays is 14, which includes one input bounce and one output bounce.
Delay amount Beam progression
Table 2.1: Bounce pattern to produce different amounts of delay
Although we only demonstrate a design with a fairly small number of delays, other White cell designs have been successfully demonstrated at The Ohio State University and are capable of producing a larger number of delays, such as the quadratic cell. The quadratic cell has two delay lines instead of one and can produce a number of delays proportional to m². The quartic cell, which consists of four different delay lines, produces a number of delays proportional to m⁴. For example, in a quartic system with 17 bounces, a total of 624 different taps can be produced. This corresponds to a resolution of 624 samples per correlation, which is better than existing optical delay lines by a factor of ten. An octic cell can outperform other existing techniques by a factor of a hundred. The design of the WC-based optical correlator could easily be scaled to use any of the aforementioned cells under the same principles discussed in this document.
We note that our design is a proof-of-concept design that is capable of producing only six to ten delays, primarily limited by the size of the MEMS. The linear cell was implemented in this design due to its simplicity and because of limited funding. For a higher-resolution TDL, a higher-order polynomial cell or a binary cell would need to be used. The design of the output summer of the correlator will depend on which design is chosen. In the next section we only consider a design that is suitable for our proof-of-concept setup.
2.6.4 Weighting Elements and Beam Summation
We discussed the implementation of the TDL of our temporal correlator and
explained how each beam in the input beam array gets delayed separately before exiting
the White cell. We next describe the implementation of the remaining two parts of the
correlator, namely, the weighting elements and the optical summer.
Each beam leaving the TDL gets multiplied by an amplitude or phase weighting
element. In this work we elected to use amplitude weighting. This choice allows us to
sum the beams incoherently on a single photo detector or a photo detector array, which
simplifies the control and stability requirements on the summing optics. In addition, we
assume the data is modulated using the non-return-to-zero (NRZ) format, which is widely used in the optical telecom world. Additionally, we simplify our apparatus by assuming perfectly square pulses, so that the weights are all either ones or zeros (light or no light). In our system, those weights are implemented optically using a shutter, where the ones pass through to the summer and the zeros get blocked. Since we block the beams that we don't want, we can achieve the same result by applying the weights before the TDL: we only generate the beams that we want to pass, and simply do not generate the beams that we would otherwise block.
We now have our delayed replicas of the incoming signal, and they have been
appropriately weighted with the s(t)s. It remains to sum them. We note that each light
beam leaves the TDL at a unique location. We want, however, for each beam to arrive at
the same output spatial location but separated in time. We can do one of two things. If
the number of beams (or taps of the TDL) is small as in the case of the linear cell design
that we implemented, we can focus the spot array with a lens down onto a photodiode
with a relatively large active area (e.g. from 0.5 mm to 5 mm) while still keeping the
beams separate. Note that there is a tradeoff between the photodiode's time response (i.e., speed or bandwidth) and its active-area size, which has to be considered when choosing the right detector for the system based on the data rate used.
If the number of beams (i.e. taps) is larger, however, we can use an optical
summer based on a White cell interconnection device that is very similar to the time
delay device just described. It is called a trap-door summer and was developed and
patented at The Ohio State University
. It uses a micro-mirror array and the union of
several White cells. Such a technique was not implemented in our design and we only
include it in our discussion for completeness.
We point out that the hardware in our design is very simple: just a few mirrors and lenses, a photodetector, and a single MEMS. Note that the MEMS pixel angles are fixed and that the beam path is the same for a given beam array size. Although our design was implemented using a MEMS, we could easily replace the MEMS with a fixed micromirror array whose pixels are micromachined to the desired angles. The latter option would be a much cheaper solution that could easily be scaled to a larger number of taps. Thus, we expect this approach to performance monitoring to be not only much faster, but also far cheaper, than existing solutions such as BER testers or eye-diagram monitors utilizing high-speed real-time scopes.

3.1 Introduction
In this chapter we show simulation results of the effect of various impairments on the shape and amplitude of the correlator's output. All simulations are conducted using MATLAB software from MathWorks Inc. We specifically show the effects of attenuation, dispersion, jitter, and noise. To validate our results, we then compare them with what would be obtained from an eye-diagram measurement made using a real-time oscilloscope. In section 3.2 we present simulation results showing the effect of attenuation, dispersion, noise, and jitter on the correlation function. In section 3.3 we discuss our simulation and analyze our results. In section 3.4 we present a relationship between our results and the BER of the test signal. Finally, we show, in section 3.5, the effect of the number of taps in the TDL on the correlation output.
3.2 Impairment Simulations
The test signal used is a series of three bits, [0 1 0], with each test signal sampled
with 500 samples, which corresponds to 500 taps in the TOC. Consecutive test signals
are each delayed by a time delay element. The test signal is degraded by artificially adding attenuation, dispersion, and so forth. The degraded signals are then correlated with a clean [0 1 0] sample. Simulation results show how each type of impairment affects the correlation output.
3.2.1 Attenuation and Dispersion
Figure [3.1a] shows the correlation function for received signals subjected to attenuation only. From the figure we see that the height of the correlation peak varies linearly with percent attenuation, measured as the percent reduction of the original signal's amplitude, shown in 10% increments.
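This linear dependence is straightforward to reproduce numerically. The sketch below is a Python/NumPy stand-in for our MATLAB simulation (the pulse layout within the 500 samples is illustrative): attenuated copies of the 500-sample test signal are correlated against the clean reference, and the normalized peak falls as 1 minus the attenuation.

```python
import numpy as np

# Clean [0 1 0] reference sampled with 500 points (as in the text)
ref = np.zeros(500)
ref[166:333] = 1.0  # middle-third '1' bit (illustrative sampling)
peak0 = np.correlate(ref, ref).max()  # zero-lag autocorrelation peak

for att in (0.0, 0.1, 0.2, 0.3):          # attenuation in 10% increments
    received = (1.0 - att) * ref          # attenuation scales the amplitude
    peak = np.correlate(received, ref).max()
    print(att, peak / peak0)              # normalized peak equals 1 - att
```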
To model dispersion we used the unattenuated signal and modified the shape of
the sides of the pulse. We used one half-cycle of a raised cosine function to transition
from 0 to 1, and again from 1 to 0. We defined percent dispersion as the fraction of the
actual bit period occupied by the transition. When the dispersion reaches 50%, the rising
and falling transitions meet in the middle of the bit. The raised cosine function can be
expressed using the following formula in equation [3.1]:

s(t) = 1                                  for |t| ≤ (1−β)T/2
s(t) = ½{1 − sin[π(|t| − T/2)/(βT)]}      for (1−β)T/2 < |t| ≤ (1+β)T/2
s(t) = 0                                  for |t| > (1+β)T/2        [3.1]

where T is the pulse width and β is the fractional percentage of dispersion. For example, a value of β = 0.3 indicates 30% added dispersion.
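The transition model can be sketched directly from one plausible reading of equation [3.1] (the piecewise form below is our reconstruction of the garbled original, and the function name is ours): the pulse is flat over the central (1−β)T, rolls off through a raised-cosine half-cycle over βT on each edge, and is zero outside.

```python
import math

def dispersed_pulse(t, T, beta):
    """Unit pulse of width T with raised-cosine transitions occupying a
    fraction beta of the bit period on each edge (Eq. 3.1, reconstructed)."""
    at = abs(t)
    if at <= (1 - beta) * T / 2:
        return 1.0                 # flat top of the pulse
    if at <= (1 + beta) * T / 2:
        # half-cycle raised-cosine transition between 1 and 0
        return 0.5 * (1 - math.sin(math.pi * (at - T / 2) / (beta * T)))
    return 0.0                     # outside the dispersed pulse

T, beta = 1.0, 0.3                 # 30% added dispersion (illustrative)
print(dispersed_pulse(0.0, T, beta), dispersed_pulse(0.5, T, beta))  # 1.0 0.5
```

Note that the transition crosses exactly 50% amplitude at |t| = T/2, the edge of the ideal bit, which is consistent with the 50% dispersion case where the rising and falling transitions meet in the middle of the bit.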
Figure [3.1b] shows the effect of dispersion alone on the correlation signal. The
shape and amplitude of the resulting correlation functions are shown for varying amounts
of dispersion, ranging from 0% to 50%. Here we see two effects: the amplitude is reduced, and the peak becomes more curved as part of the signal's energy is transferred outside the pulse.

Figure 3.1a,b: Auto/cross correlation function. (a) Effect of attenuation on the
correlation function; (b) Effect of dispersion on the correlation function
Information on both attenuation and dispersion can be thus extracted in a time of
3T, where T is the bit period. For example, for a 40 Gbps system, a period of 3T is equal
to 75 picoseconds.
3.2.2 Modeling Noise and Jitter
Noise and jitter must be measured statistically over multiple correlations. Noise
produces a variation in the peak and a slight variation in the shape of the correlation
signal, while jitter only affects the location of the correlation output in time.
In our simulations we assume the noise to be Gaussian and take it to be the same for 1s and 0s. To measure the amount of noise affecting the signal, one might repeat the test signal a hundred or a thousand times and measure the RMS variation in the correlation peak height, either optically or electronically. Figure [3.2a] shows one hundred correlation functions superimposed for the same test signal with 20% noise added (i.e., Gaussian noise with a standard deviation σ = 0.2). Noise has the effect of adding both time offset and amplitude variations to the peak.
Jitter is modeled by shifting the received bit with respect to the reference signal.
The result is a correlation function that is also shifted in time, as shown in figure [3.2b].
For the purpose of simulation, the position of the pulse is shifted by a random number with a standard deviation σ expressed as a fraction of the bit period. As with noise, jitter would be measured over many bits; we have shown 100 correlations superimposed.
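A minimal Monte Carlo version of these two measurements can be sketched as follows (Python/NumPy rather than our MATLAB code; the seed, noise level, and jitter level are illustrative): 100 correlations are formed with 20% Gaussian noise and 10% RMS jitter, and the RMS variations of the peak height and peak position are reported.

```python
import numpy as np

rng = np.random.default_rng(0)       # fixed seed for reproducibility
n = 500
ref = np.zeros(n)
ref[n // 3:2 * n // 3] = 1.0         # clean [0 1 0] reference, 500 samples

peaks, offsets = [], []
for _ in range(100):
    noisy = ref + rng.normal(0.0, 0.2, n)                 # 20% Gaussian noise
    shift = int(round(rng.normal(0.0, 0.1) * (n // 3)))   # 10% RMS jitter
    jittered = np.roll(noisy, shift)   # circular shift; wrap moves only zeros
    corr = np.correlate(jittered, ref, mode="full")
    peaks.append(corr.max())
    offsets.append(int(np.argmax(corr)) - (n - 1))        # peak shift, samples
print(np.std(peaks), np.std(offsets))  # RMS peak variation, RMS time offset
```

Noise spreads the peak heights, while jitter spreads the peak positions, mirroring the two superimposed-correlation plots of figure [3.2].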

Figure 3.2: Effect of noise and jitter on the correlation function. (a) One hundred separate correlations superimposed for 20% noise; (b) one hundred superimposed correlations, with jitter varying randomly with standard deviation σ = 10%
The question is, can one distinguish between the various kinds of impairments? There are a couple of possibilities. First, suppose one draws a threshold at some percentage of the ideal correlation peak amplitude, say 50%. One can observe that although both attenuation and dispersion produce a reduced peak height and reduced area above this threshold, attenuation produces a narrower peak, whereas dispersion maintains the width but introduces curvature. Thus, if one compares the total energy received with the peak height, one can determine the degree to which each effect is present. This requires extra processing time, but even if the signals are converted to electronic ones, the processing time can be on the order of nanoseconds. Another possibility is to perform a second correlation or an optical matched-filtering operation to compare the correlation output with an ideal output, measuring in effect the degree of curvature. On the other hand, it may not be necessary to distinguish these effects at all if the goal is only to determine whether the link currently meets some particular quality threshold.
3.3 Simulation Results and Analysis
In practice, it may be easiest to measure the amount of energy received that
exceeds some threshold. Figure [3.3] shows this measurement for a signal containing
multiple impairments, for an arbitrary threshold of 50%, and for a time window
corresponding to the time interval over which an unimpaired signal would exceed 50%.
Although this assumes the existence of some reference clock, in our final design we aim
for a completely transparent monitoring system.

Figure 3.3: Measurement of the area of the correlation function that exceeds a certain threshold during a specified time interval
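The area-above-threshold measurement can be sketched numerically as follows (illustrative Python; the helper name and pulse parameters are assumptions, and the time window is derived here from the ideal correlation rather than from a reference clock):

```python
import numpy as np

def area_above_threshold(corr, ideal, thresh=0.5):
    """Energy of the correlation exceeding `thresh` (relative to the ideal
    peak), counted only inside the window where the ideal correlation
    itself exceeds that threshold."""
    level = thresh * ideal.max()
    window = ideal > level                     # time window from ideal curve
    excess = np.clip(corr - level, 0.0, None)  # portion above the threshold
    return excess[window].sum()

ref = np.zeros(300)
ref[100:200] = 1.0                             # single '1' bit (illustrative)
ideal = np.correlate(ref, ref, mode="full")
degraded = np.correlate(0.8 * ref, ref, mode="full")   # 20% attenuation
ratio = area_above_threshold(degraded, ideal) / area_above_threshold(ideal, ideal)
print(ratio)
```

As expected, the metric drops well below 1 for an impaired signal, since both the peak height and the width above the threshold shrink.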
So far we have developed an understanding of how each degradation mechanism affects the correlation peak, by considering the effect of each type of impairment separately. We now consider what happens when two or more effects are combined. For
the purpose of illustration we show the correlation area as a function of jitter with varying
amounts of dispersion. We have again used an amplitude threshold of 50% of the ideal
maximum amplitude and a time window that corresponds to the case when the ideal
correlation function is greater than 50%. In figure [3.4] we observe that the correlation
function becomes increasingly sensitive as the impairments become worse for both jitter
and dispersion. We also note that the overall area remains fairly high (70% of maximum)
even for 50% jitter combined with 50% dispersion. This suggests that the signal-to-noise
ratio will remain high even for badly degraded signals.

Figure 3.4: Area of the correlation function that is greater than 50% threshold and
within the time window in which the ideal correlation function exceeds 50%. The
independent variable is jitter, with dispersion as a varying parameter
3.4 Relating Correlation to BER
If correlation is to be an effective measure of QoS, then it must relate directly to
the quality of the signal as measured by conventional means. We use a simplified eye
diagram and compare the open area of the eye with the area of the correlation peak. We
do this for various combinations of impairments.
Figure [3.5a] shows a simulated eye diagram. In an eye diagram, a long series of bits is superimposed on an oscilloscope. Dispersion clips off the corners of the eye, jitter narrows the open area (in time), and noise and attenuation close the eye. The larger the open area, the better the signal. For a particular level of noise σ, expressed as a percentage of the bit amplitude, we draw two lines, one at 2σ below the 1 level and one at 2σ above the 0 level. For dispersion, we take a line whose slope is equal to that of our simulated dispersion (see figure [3.1b]), taking the slope at the point where it crosses 50% amplitude. To include the effects of jitter, we then move these sloping lines towards the inside of the eye by an amount equal to a specified amount of jitter (e.g., 10% of the bit width). We then calculate the area of the eye enclosed by these lines.
Figure [3.5b] shows the eye-opening area calculated by this method for combined jitter and dispersion. This figure can be compared directly with the correlation area of figure [3.4]. Both figures show the area decreasing as the amount of impairment increases.

Figure 3.5: (a) Simulated eye diagram. The shaded area is the open area of the eye;
(b) Variation in the open area of the eye diagram for combined jitter and dispersion
We can see that the correlation function area is a reliable indicator of signal impairment and thus of bit error rate. The advantages of using correlation are bit-rate transparency, data-format transparency, speed (results are generated in a few bit periods instead of minutes), no O-E-O conversion, and significantly reduced hardware.
3.5 Number of Taps in the TOC
In this dissertation we propose the design of a novel optical correlator that is
capable of correlating a very large number of samples. We argue that the more taps we
have in the correlator the higher the resolution of the correlation function. This argument
is probably true; however, how many samples do we really need to achieve a meaningful
correlation result? Note that increasing the number of taps would only require minor
modifications in the TOC design with only little extra hardware needed.
In figure [3.6] we show a simulation of how the shape of the correlation function
varies as the number of samples changes. The figure shows four plots, for tap resolutions
of six, eighteen, thirty, and sixty. As the number of taps increases, so does the resolution
of the correlation. Simulations show that the shape of the correlation function doesn't
vary much for tap resolutions higher than 50.
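This convergence can be illustrated with a small sketch of our own (not the simulation code behind figure [3.6]); the trapezoidal pulse shape and the normalization are assumptions chosen for illustration.

```python
def correlate_taps(n_taps, rise_frac=0.2):
    """Discrete correlation of one sampled bit against an ideal reference."""
    # Trapezoidal pulse: the rising edge occupies rise_frac of the bit.
    pulse = [min((k / (n_taps - 1)) / rise_frac, 1.0) for k in range(n_taps)]
    ref = [1.0] * n_taps                      # ideal (undistorted) reference
    corr = []
    for lag in range(-(n_taps - 1), n_taps):  # sliding dot product
        s = sum(pulse[k] * ref[k + lag]
                for k in range(n_taps) if 0 <= k + lag < n_taps)
        corr.append(s / n_taps)               # normalize across tap counts
    return corr

# Normalized correlation area for coarse vs. fine tap resolutions:
area = lambda n: sum(correlate_taps(n)) / n
coarse, fine = area(6), area(60)              # converges as n grows
```

Running this shows the normalized area changing noticeably between 6 and 60 taps but barely between 50 and 60, consistent with the observation above.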

Figure 3.6: Effect of the number of taps on the correlation function's shape
Higher-resolution TOCs have a more sensitive response to impairments. A
small variation in dispersion, for example, might not even show in a six-tap TOC, but
would be evident in a sixty-tap TOC.

4.1 Introduction
In this chapter we describe the experimental implementation of the optical
performance monitor (OPM) and the procedures followed to obtain the final quality-of-
signal results. In the following sections, we describe the equipment used in the
experimental apparatus and the design specifications of the different parts. Section 4.2
describes the input optics design. In section 4.3 we discuss in detail the specification of
the MEMS used in the setup and how it is integrated in the TOC. In section 4.4 we
introduce the circuitry used to artificially generate the effects of the type of impairments
discussed earlier. Section 4.5 explains the design of the linear White cell, and in section
4.6 we describe the design of our output system. Finally, in section 4.7 we show the
optical simulation results of our system using the OSLO optical design software.
Figure 4.1 shows a block diagram of the experiment. The figure is divided into
three main blocks, the input setup, the White cell-based tapped delay line, and the output
setup. In the following sections we discuss each block in detail and show how the
blocks are integrated.
The experiment utilizes a diode laser at a wavelength of 1550 nm that is in the C-
Band in the International Telecommunication Union (ITU) grid, which occupies the
wavelength range from 1535.04 nm to 1565.50 nm. The continuous-wave output from
the laser is modulated using a Mach-Zehnder (MZ) interferometer. The modulator's RF
input is driven with an external circuit that produces an artificially impaired signal by
adding the effects of dispersion, attenuation, and noise to the modulated signal. The
modulated signal gets split into six copies that enter the White cell-based correlator,
where the beams get delayed by different amounts.
The White cell consists of several spherical mirrors and lenses in addition to
a micro-electromechanical system (MEMS) that is used to control the beam path. Several
computers were also employed to capture the output beam profile and to control the
MEMS pixels.
Finally, an InGaAs high-speed photodetector is used to sum the beam array and
produce the final correlation output. In the figure, part of the output is also shown to be
sent to a saturable absorber device that connects to an optical thresholding device. This is
an alternative approach that could be used in practice but in the interest of cost was not
implemented here. The signal is converted and processed electronically in this setup. All
the equipment used was mounted on a 4 ft by 10 ft optical table.

Figure 4.1: Experimental apparatus block diagram
4.2 Input System
The input to the system comes from a 60 mW continuous-wave (CW) laser with a
center wavelength at 1.55 µm (JDSUniphase laser with fiber pigtail, class IIIb laser). The
output of the laser is butt-coupled to a single-mode (SM) polarization-maintaining (PM)
fiber with a mode field diameter of 9.5 µm.
The output of the laser is then externally modulated using a LiNbO3
interferometer specifically designed for microwave analog intensity modulation (JDSU
AM-150-1-1-C2-l2-O2). The modulator's input is connected to SM/PM fiber (Fujikura
SM 13-P-8/125-UV/UV-100), while the output terminal is coupled into a standard SM
fiber (Corning SMF-28). Both fibers are equipped with FC angle-polished connector
(APC) ends. The modulator has an upper cutoff frequency of 20 GHz, which is
under-utilized in our experiment, as our modulation frequencies are about a factor of
1000 lower (tens of MHz).
Figure 4.2 shows the principle of operation of the MZ interferometer. The input
polarized light enters the modulator and is split at a Y-junction, with half the optical
power passing through each of the two waveguides. The two beams are then recombined
at the output port. The
waveguide material in one of the arms is made out of an electro-optic material, in which
the refractive index varies as a function of applied voltage. If both optical paths have the
same refractive indices, then both beams will undergo the same phase shift and interfere
constructively at the output terminal. If, however, we place a high-voltage electrode on
one of the two arms, we can vary the phase shift encountered by one of the beams with
respect to the other copy. At the output, both beams interfere destructively or
constructively or anything in between, causing the intensity of the output to vary as a
function of the applied voltage.

Figure 4.2: Principle of operation of an MZ modulator

The modulator used here has two electrical inputs, an RF input and a bias input.
The bias port is used to define the operating point on the intensity-voltage curve of the
modulator. The operating point of electro-optic modulators is usually defined as a
function of Vπ, which is the bias voltage required to produce a 180° difference in the
relative phase of the optical wavefronts between the two optical arms. Figure [4.3]
illustrates the effect on the output optical intensity of a change in the bias point. The
figure shows a bias-free modulator, designed such that the operating point is set to half
the maximum output intensity with 0 V applied. Such a setup is desirable in digital
communication, as the output intensity's extinction ratio between a digital one and zero
is maximized with no bias voltage applied.
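The transfer function just described can be sketched compactly. The following is a minimal illustration of the standard quadrature-biased MZ relation, not code from this apparatus, and the sign convention of the quadrature phase is an assumption.

```python
import math

def mz_output(P_in, V, V_pi):
    """Output intensity of a quadrature-biased (bias-free) MZ modulator."""
    # Relative phase between the arms: pi/2 at V = 0 (the quadrature point),
    # plus pi radians of phase per V_pi of applied voltage.
    phase = math.pi / 2 + math.pi * V / V_pi
    return P_in * (1 + math.cos(phase)) / 2

half_power = mz_output(1.0, 0.0, 4.0)   # half the maximum output at 0 V
```

At V = 0 the output sits at half the maximum intensity, and a swing of ±Vπ/2 drives it between the fully-on and fully-off states, as in Figure [4.3].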

Figure 4.3: MZ Modulator transfer function
With the bias voltage fixed, the RF port is used to input the modulating
signal. The voltage swing of the input signal produces a phase-shift difference between
the modulator arms of between 0° and 90°. Note that no DC voltage should be applied to
this port, to avoid heating and instability in the modulator's operating point. The
modulator is driven by the impairment-generation circuitry, described in section 4.4,
which artificially adds noise, dispersion, and attenuation onto the signal.
The modulated optical signal is then split into eight copies using a 1x8 fiber
splitter. The splitter uses FC/APC-connectorized single-mode fibers with core diameters
of 9.5 µm. The eight outputs are connected to an 8-element V-groove fiber array
(Flextronics Fiber Array, PDXWR-08-SNA-081-20DB-1). Figure [4.4] shows a
schematic of the V-groove fiber array. This is a silicon chip with grooves etched into it
photolithographically for high placement precision. Because of the etchant used and the
silicon crystal orientation, the groove width and depth are highly precise. The eight fibers
lie in these grooves and are therefore held in a single row with their axes coplanar. The
fibers in the V-groove are also single-mode, with a mode field diameter of 9 µm, on a
pitch of 250 µm, resulting in a total array length of approximately 1.8 mm. Both the
splitter and the fiber array are designed for a wavelength of 1.3 µm, which is not our
operating wavelength, and therefore we should expect additional optical power loss.
The fiber array is oriented vertically and gets magnified before entering the White
cell. The magnification is set such that the fiber array pitch matches the pitch of the
MEMS pixels. The magnified spot size of each individual fiber is also kept smaller than
the pixel size, such that 99.99% of the beam still lands on the pixel.

Figure 4.4: Cross section of the V-groove fiber array
The magnified beams are then sent into the White cell to be delayed. Figure
[4.5] shows the layout of the input optics and the propagation of the input into the White
cell from the V-groove fiber array to White cell mirror B. The input enters the White cell
from behind the MEMS plane through an opening in the MEMS housing mount. An
input turning mirror (ITM) is used to fold the beam and direct it toward the MEMS pixels
at the correct angle. The input pixels are set to a negative tip angle.

Figure 4.5: Setup of the input optics and the beam propagation path from v-groove
fiber array into the White cell
The spot size at the MEMS plane, as discussed in chapter two, has to be small
enough to allow the pixel to contain at least 99.99% of the beam's intensity. The MEMS
pixels are elliptical, with a major dimension of 400 µm and a minor dimension of 320 µm,
as illustrated in Figure [4.6]. In the figure we show the outline of the square area used to
calculate the maximum allowed spot size on the pixel that sustains the 99.99%
requirement. The figure also shows the intensity profile of the maximum allowed spot.
Note that the square area is an approximation; the spot size can be even larger while still
containing 99.99% of the beam's intensity.

Figure 4.6: MEMS pixel close up and the intensity profile of maximum allowed spot
In figure [4.7] we show a subset of the MEMS pixel array along with the x and y
pitch dimensions and the pixel size. We also show the magnified spot size drawn to scale
with respect to the pixel dimensions. Note that the pixels are spaced differently in the x
and y dimensions: the pitch is 645 µm in x and 616 µm in y.
In the figure we see that we use every other MEMS pixel, i.e., the first spot of the
fiber array hits the pixel located in row one (for example), the second spot hits the pixel
in row three, the third in row five, and so forth. We had to do this in order to avoid
using some malfunctioning pixels in the MEMS.
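The input magnification implied by this every-other-pixel scheme follows directly from the two pitches. A quick sanity check (our own arithmetic; the comparison value is the design figure quoted in section 4.7):

```python
fiber_pitch = 250e-6     # V-groove fiber pitch (m), from the text
mems_pitch_y = 616e-6    # MEMS pixel pitch in y (m), from the text

# Imaging onto every other pixel row doubles the effective MEMS pitch:
magnification = 2 * mems_pitch_y / fiber_pitch
# -> 4.928, consistent with the 4.924 design value given in section 4.7
```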


Figure 4.7: Subset of MEMS pixels
4.3 MEMS Setup
We use an analog MEMS provided by Calient Technologies that was originally
designed for optical routing (The Calient Network Diamondwave Half Switch). The
MEMS pixels can be tipped to any angle between -10° and +10° in an analog fashion in
both the x and the y directions. The tip angle is controlled by the voltage applied to two
electrodes, one in x and the other in y. The pixels are organized in a hexagonal array
comprising 24 rows and 17 columns, resulting in a total of 384 pixels.
Although we originally intended to use eight light beams, and thus have a
correlation resolution of eight samples, we could not propagate eight beams without
landing on a malfunctioning or dead pixel. The maximum number of beams that could be
propagated was six, which limited the total number of delays that could be obtained.
The voltage applied to each individual pixel was controlled using an external
computer connected to the MEMS control board through the serial port. The control
software was provided by Calient (ZOC/PRO 4.12, by EMTec Innovative Software) and
was used along with the voltage vs. angle tables provided by the vendor, shown in
appendix (z), to configure the MEMS pixels angles. In addition, several scripts were
written to control individual columns and pixels, which were useful in the alignment
procedures discussed in appendix (y).
4.4 Impairment Generation Circuitry
The impairment generation circuit is a low-cost stand-in for a real optical link.
The circuit adds adjustable, simulated amounts of dispersion, attenuation, and/or noise
onto the modulated signal.
Figure [4.8] shows the design of a circuit that can adjust the rise/fall times and the
amplitude of a given signal, which emulates the effects of dispersion and attenuation.
The circuit uses a pair of MOSFET transistors and a couple of 10 kΩ digital
potentiometers with standard 1/4-Watt resistors and ceramic capacitors. The RC constant
(controlled by the R and C shown in red in the figure) sets the rise and fall time of the
output signal; hence we can fix the value of the capacitor (for example) and program the
resistor value to obtain the desired rise or fall time. Attenuation is modeled simply by
adjusting the amplitude of the modulated signal from the function generator.
The input to the circuit is a pulse-wave signal generated using an
arbitrary/function generator (Tektronix AFG3252) with a pulse width of 41.16 ns, a rise
time of 5 ns, and a repetition rate of 8.09 MHz.
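For a single-pole RC network the 10-90% rise time is approximately 2.2RC, so the resistance to program for a target rise time follows directly. The capacitor value below is illustrative, not the actual component in the circuit:

```python
def r_for_rise_time(t_rise, C):
    """Resistance giving a 10-90% rise time t_rise with capacitance C
    (single-pole RC: t_rise ~= 2.2 * R * C)."""
    return t_rise / (2.2 * C)

C = 22e-12                     # fixed ceramic capacitor, 22 pF (assumed)
R = r_for_rise_time(5e-9, C)   # ~103 ohms for the 5 ns input rise time
```

Larger programmed resistances then slow the edges, emulating stronger dispersion.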

Figure 4.8: Circuit schematic of a dispersion generation circuitry
The effect of noise is modeled using a noise generator circuit that produces a
reasonably even noise spectrum in the high frequency range. Figure [4.9] shows a
schematic representation of the noise generation circuitry used in this experiment. The
noise is generated when current passes through a Zener diode and then gets amplified
through two cascaded wide-band amplifiers. The amount of current that flows in the
Zener is controlled by a standard 1/4-Watt potentiometer.

Figure 4.9: Circuit schematic of the noise generation circuitry
Note that the noise simulations described in chapter 3 are based on white
Gaussian noise, with a wider noise spectrum than that produced by this circuit.
However, the proposed circuit is a suitable approximation for the proof-of-concept
apparatus described in this chapter.
4.5 Linear White cell Design
The tapped delay line is implemented using a linear White cell with a single delay
arm. Beams traveling in the linear cell can either accumulate no delay, by staying in the
null cell, or be sent to the delay arm some number of times, where a total relative delay
of τ is accumulated each round trip. The delay produced is a function of the length of the
delay arm. The delay element is set to τ = 6.86 ns per round trip, which translates to a
delay arm that is 2.06 m (6.75 ft) longer than the null path.
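The arm length follows directly from the chosen delay; a one-line check of the geometry quoted above:

```python
c = 2.9979e8                     # speed of light in air, m/s (approximate)

def extra_path_length(tau):
    """Extra delay-arm path length (m) for a relative delay tau (s)."""
    return c * tau

L = extra_path_length(6.86e-9)   # ~2.06 m, matching the delay arm above
```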
A top view of the linear White cell design is shown in figure [4.10]. The figure is
to scale and shows the specifications of all the optics used along with the distances
between them. The blue, purple, and green lines are ray traces of beams leaving from an
edge pixel of the MEMS towards arms C, B, and A respectively. The ray trace is used to
determine the size of the beam at each optical surface in the White cell. All numbers
shown are in units of millimeters.

Figure 4.10: White cell-based tapped delay line
The divergence angle of the beam array is governed by the numerical aperture
(NA) of the fibers used and the input optics. The single-mode fibers in the V-groove
have NA = 0.13, corresponding to a half divergence angle of 7.46°. This angle is
reduced to 1.31° after traveling through the input optics, where the beam array is
magnified and therefore the divergence angle is reduced. The latter angle was considered
when determining the size of the lenses and mirrors used in the apparatus, since it defines
the maximum cone of light that can enter or exit any optical surface in the setup. The
beam divergence is also used to determine the minimum separation distance between the
field lens and the MEMS and consequently the overall size of the null cell. Beams
leaving the MEMS directed to a White cell arm have to separate before a field lens can be
placed. This requirement is maintained when designing our linear White cell.
Several considerations were kept in mind when designing the White cell and
choosing the optics. First of all, due to budget constraints, we had to choose
off-the-shelf lenses and mirrors wherever possible. We also had to keep the size of the
White cell optics as small as possible while maintaining an f-number (ratio of focal
length to lens diameter) of seven or more, to satisfy the paraxial approximation used in
the design. Furthermore, we had to consider the beams' divergence angle coming out
of the input fiber array and ensure that all the lenses and mirrors used were large
enough to contain the beams propagating through the White cell. Finally, all optics and
mounts had to fit on a 4 ft by 10 ft optical table.
In the following discussion, we divide the White cell into two parts, the null cell
and the delay arm, and discuss each separately. Figure 4.11 shows a top view of
the null cell, illustrating the specifications of the different optics used and the distances
between them. All units are in millimeters. The two mirrors used in the null cell are
2-inch-diameter concave spherical mirrors with a radius of curvature of 1000 mm (CVI
Laser, SMCC-2037-1.0-C). The mirrors are BK7 glass blanks with a broadband coating,
providing a reflectivity of approximately 99% at a wavelength of 1.55 µm.
The design utilizes multiple field lenses, one per arm, in order to minimize the
overall astigmatism. The distance between the MEMS and each field lens (the object
distance) is calculated such that beams leaving the MEMS from different locations and
heading toward the field lenses are fully separated before hitting the front surface of the
field lens. The field lenses used are standard plano-convex 2-inch BK7 precision lenses
with a focal length of 206 mm (BICX-50.8-205.2-C-1550).


Figure 4.11: Top view of null cell

The delay arm, shown in Figure 4.12, is placed at an angle of -20° with respect to
the MEMS normal. The delay is produced in a lens train consisting of two biconvex
lenses and a spherical mirror. The increased distance added in the delay arm, a total of
2.06 m, results in a time delay τ = 2.06 m / c ≈ 6.86 ns, where c is the speed of light in
air. Note that this delay is added to the null cell delay.
The first lens in the lens train is conjugate to the White cell mirrors in the null cell.
The focal lengths of the lenses are chosen to produce a real image of the MEMS plane
between the first and second lenses. The distances between the optical elements in the
lens train were adjusted to satisfy both imaging conditions described in chapter 2.

Figure 4.12: Side view of delay arm
All six beams bounce back and forth in the White cell, and finally after the last
bounce on the MEMS pixels, the beams get diverted toward the output arm, discussed in
the next section.
4.6 Output Optics
The output arm is located between arms A and B, as shown in Figure [4.13(a)].
The output setup consists of a biconvex lens, an achromatic doublet lens, and a high-
speed InGaAs photodiode. The MEMS mirrors in the output column of the pixel array are
tipped at a +5° angle, hence directing the beams coming from arm B toward the output
arm, located at a +10° angle with respect to the MEMS normal. Figure [4.14] shows the
optical system used in the output arm along with all the mounts utilized to focus all six
spots on the photodiode. A ray trace outlining the two extreme rays leaving the fiber
array is also shown (i.e., rays from spot one and spot six). All dimensions are to scale.
After the last bounce, beams leaving the MEMS are demagnified by a factor of 7.69X in
order to fit on the 0.8 mm active area of the photodiode. The optical system consists of a
2-inch biconvex lens placed at the plane of the null cell mirrors with a focal length of f =
154.83 mm (CVI Laser BICX-50.8-153.4-C-1550) and a 1-inch achromatic doublet with
an effective focal length of 30 mm (ThorLabs AC254-030-C).
The photodiode is an InGaAs IR detector with a PIN structure (ThorLabs
SM05PD4B). The photodiode has a spectral response from 800 nm to 1800 nm and is
capable of producing a rise/fall time of 7.0 ns (typical) with a 5 V bias voltage. The
bandwidth of the detector is calculated using the following formula:

BW = 1/(2πRC) = 0.35/t_r                (4.1)

BW = 0.35/(7.0 ns) = 50 MHz

where R and C in equation [4.1] are the load resistance and diode capacitance, and t_r
is the rise time.
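The rise-time form of equation (4.1) can be checked numerically with a one-line sketch:

```python
def detector_bandwidth(t_rise):
    """-3 dB bandwidth (Hz) from the 10-90% rise time (s): BW = 0.35/t_r."""
    return 0.35 / t_rise

bw = detector_bandwidth(7.0e-9)   # -> 50 MHz for the 7.0 ns rise time
```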

Figure 4.13: Output arm location and the equipment used to sum the beams and
view the correlation output


Figure 4.14: Output optics and mounts, units are in mm
The detector is a common-anode photodiode mounted in a standard ThorLabs
SM05 threaded tube. Figure [4.15] illustrates the internal structure of the photodiode
with its SMA terminated connector and the external bias circuitry connection.

Figure 4.15: Internal connection of the common-anode SM05PD4B photodiode
The detector's output is connected to a low-noise high-frequency photocurrent
amplifier (MellesGriot 13AMP007) to amplify the photocurrent to a level that is
distinguishable by the oscilloscope. The transimpedance gain of the amplifier is 6250
V/A, with an output RMS noise of 3.2 mV. The amplifier also has internal bias
circuitry, and its output voltage signal is fed directly into the oscilloscope.
The photodiode acts as a summer. When multiple optical beams are superimposed,
that is, made to land on the same spot coming from the same direction, there is fan-in
loss: if N beams are superimposed, the output power is reduced to 1/N. We avoid the
fan-in loss discussed in Chapter 2 by taking advantage of the small number of beams
used: we demagnify the array of spots so that they all fall on a single detector but are
still spatially separate. Note that, as they are demagnified, the rays from each spot also
arrive from slightly different directions. The light landing on the detector produces
photocurrent, and the photocurrents are in effect summed in the diode, resulting in a
composite signal, which is the auto/cross-correlation output.
The output of the photodiode is then connected to a high-speed oscilloscope (HP
Infiniium oscilloscope, 1.5 GHz, 8 GSa/s), where the correlation output is viewed and
recorded.
4.7 Optical System Simulation
The optical system was first designed using ray-matrix optics under the
paraxial approximation (see appendices B and C), where all beams are assumed to be
close to the optical axis and all lenses are assumed to be thin. The system was then
simulated using optical design software, OSLO ver. 6.2. The simulation considered all
specifications of the optics used in the design: thickness, material, pixel tip angle,
and the limitations imposed by a beam array as opposed to a single beam. The design was
optimized to reduce the spherical aberration and astigmatism introduced in the White
cell. The main optimization goal, however, was to ensure that all the output beams are
imaged at the detector's plane and that all beams fit on the active area of the detector.
The simulation was split into three phases. First, the input system was
optimized to produce the correct magnification required for each beam in the array to
land on the center of a MEMS pixel in the input column. Next, the White cell-based
TDL was simulated and optimized to ensure that all six beams land on the MEMS
pixels at each bounce. The beam size was also maintained at less than one third of the
pixel's area to ensure that 99.99% of the beam's energy is reflected off of each pixel.
Finally, the output optics was simulated and optimized to demagnify the output beam
array from the White cell such that all beams fit on the 0.8 mm active area of the
detector. All three phases were then combined to simulate the system as a whole, and the
optimization criteria listed above were verified.
Figure [4.16] shows a top view of the simulated design along with the cone of
rays traveling through the system. We clearly see that all beams are well confined within
the optics used in the design.

Figure 4.16: Optical Simulation of the linear White cell-based TOC, using OSLO
As mentioned earlier, we first simulated the input optics system. Figure [4.17]
illustrates the input system along with the beam array spot diagrams. The beams'
vertical pitch was optimized to be 1.231 mm, which is equivalent to twice the pitch of
consecutive pixels. This corresponds to a magnification of 4.924. We also see that the
diameters of all beams are at or near our requirement of less than 106.67 µm ([1/3 ×
pixel's minor dimension] = [1/3 × 320 µm] = 106.67 µm). The simulated beam diameter
is shown to be 108.06 µm. The diffraction-limited spot size (Airy disk) of a Gaussian
beam is also shown. In the beam energy diagram in the bottom left corner of the figure
we show that the total beam energy is confined within a square area with a dimension of
less than 125 µm, which is less than half the size of the square pixel approximation
discussed in Chapter 2. The point spread function (PSF) of the beam is shown in the top
right corner, which describes the intensity distribution of the beam in space. We see that
the beam follows a Gaussian profile and is confined within the pixel diameter.

Figure 4.17: Optical Simulations of the input optics used in the TOC design
The beams leaving the input optics are then fed into the White cell simulation file.
Figure [4.18] (top) shows the PSF of the center beam in the beam array at the last
bounce on the MEMS pixels (bounce 11) before leaving toward the output arm. The
bottom part of the
figure illustrates the variation in the spot size and shape for different spots in the beam
array. The figure shows three spots in x and three in y, where x and y are the coordinates
of the MEMS. We notice that edge beams (beams towards the edge of the array)
encounter more aberration, which is primarily due to spherical aberrations as the beams
end up traveling along the edge of the optics resulting in a slight variation in the beam
focus at the MEMS plane. Additionally, edge beams strike the optics at larger angles
than center beams, and hence experience more astigmatism resulting in the beams being
more elongated.

Figure 4.18: Optical simulation of the output of the White cell part in the TOC
The output setup, consisting of a biconvex lens and an achromatic doublet, is
simulated next. In figure [4.19] we observe the simulated spot diagram of one of the
output beams, where we see the geometrical radius to be less than 32 µm. The top right
corner of the figure shows the PSF of the output spot, where we see the output beam in
focus at the output plane. In the bottom left corner we show that the energy of the
output beam is confined within a circular area with a radius of approximately 62 µm,
which is less than 10% of the active area of the detector.

Figure 4.19: Optical Simulation of the output optics used in the TOC design
The overall system was then combined, and the total magnification was found to be
0.553X, which results in a total beam array size of 0.741 mm ([input array size × total
system magnification] = [1.34 mm × 0.553] = 0.741 mm), which is less than the active
area of the detector. The PSF of one of the final spots on the detector is shown at the top
of figure [4.20], where the beam intensity is confined within an area less than 15% of the
detector's active area. We also show the spot diagram of three beams in the beam array
in the bottom right corner. Although the beams show evidence of experiencing more
aberration as they move away from the center of the array, the size of all six beams
together in the array is still smaller than the active area of the detector. The bottom left
section of the figure shows the energy diagram of the center beam, where 100% of the
beam energy is confined within a radius of less than 0.08 mm, which is less than 10% of
the detector's radius.

Figure 4.20: Optical Simulation of the entire TOC system
Our simulations indicate that all of our design considerations are achievable
without the need for any custom optics. We note that the beam quality could be
improved by correcting for the various types of aberrations present in the cell. As
mentioned earlier, however, our main goal is to focus all six beams within the active
area of the photodetector.


5.1 Introduction
The objective of this chapter is to present the experimental results obtained using
our experimental test bed and show how the results compare to our simulations. We
describe in detail the procedures taken to obtain the correlation results and provide
measurements of the effects of attenuation, dispersion and noise on the correlation output.
We also show a step-by-step procedure of the alignment process used to align the White
cell-based Temporal Optical Correlator (TOC) along with a full analysis of the optical
power losses associated with the setup. We conclude the chapter with a summary of our
work indicating the effectiveness of the TOC technique in optical performance
monitoring.
In figure 5.1 we show the general configuration of the linear White cell-based
OPM setup as it is assembled on the optical table. The figure is to scale and describes the
location of all three arms of the White cell in addition to the input and output arms. The
locations of the laser source along with all the measurement and imaging equipment are
also shown.

Figure 5.1: Layout of the experimental apparatus on the optical table (to scale)
The figure shows that approximately two thirds of the optical table area was
utilized. The laser source and the input setup were placed along the width of the table,
while the White cell-based TOC and the measurement equipment were set up along the
length of the table. The delay arm (the longest arm) was assembled parallel to the length
of the table to minimize the alignment complexity. The MEMS normal and arm B are
located at a 20° angle with respect to the table's length. The beam height was adjusted to
the MEMS center, which is 145 mm above the table surface. The optical axes of all
lenses and mirrors were adjusted to that height.
5.2 Apparatus Alignment
The first step in setting up the apparatus was to establish the optical axis for each
of the White cell arms and the input and output arms. We chose the MEMS normal to be
our reference such that all angles are measured with respect to it. For the initial
alignment process we used a visible HeNe laser (P = 0.5 mW, λ = 633 nm) to
simplify the process. We also replaced the MEMS with a flat mirror (or pseudo-MEMS)
placed on a rotation stage. The angle of the pseudo-MEMS normal was accurately
recorded using the dial on the rotation stage.
We started by placing the pseudo-MEMS flat such that its normal is parallel to the
table's length. Figure [5.2a] shows the setup. This step sets up the optical axis for the
delay arm, arm C. In figure [5.2b], the pseudo-MEMS is rotated around its axis by +10°,
where a positive angle throughout this dissertation indicates an angle above the reference
axis. The beam leaving the laser hits the MEMS and is deflected at a +20° angle.
This step is used to set up the optical axis for arm B and our global reference axis.
Finally, in figure [5.2c], the pseudo-MEMS is rotated by a +20° angle, and hence the
deflected beam leaves at an angle of +40°, establishing the optical axis for arm A.

Figure 5.2a: Alignment procedure to establish delay arm optical axis

Figure 5.2b: Alignment procedure to establish arm B optical axis

Figure 5.2c: Alignment procedure to establish arm A optical axis
After establishing the White cell arms, the next step was to align the input optics
and the input turning mirror such that the input beam array gets directed to the center of
WC mirror B after it enters the cell. The pseudo-MEMS rotation stage was readjusted
such that the MEMS normal is perpendicular to mirror B. The pseudo-MEMS, mounted
on a kinematic stage, was then replaced with the Calient analog MEMS.
Before integrating the input optics into the setup, the profile of the input beam
array was analyzed. The input array was magnified and focused onto a CCD IR camera.
Figure [5.3] shows the beam intensity profile of a single beam in the array along with
comparison to a Gaussian envelope. The imaging magnification was set to 15.6X. The
beam diameter is shown to be 53.8 µm (the measured beam diameter at the camera was
840 µm; with a magnification of 15.6, the actual beam diameter is 840 µm/15.6, which is
equal to 53.8 µm). The theoretical spot size was previously calculated in chapter 3 based
on a Gaussian profile to be 46.77 µm, which leads to approximately 15% experimental
error. The error might seem large at first; however, the measured beam is not perfectly
Gaussian and has a correlation factor of only 84% to a perfect Gaussian. Most
importantly, the vertical pitch between consecutive spots in the array was measured and
was found to be the same as the MEMS pixel pitch, which is equal to 1.231 mm (every
other pixel).
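The spot-size arithmetic above is easy to check; a minimal sketch, using only the values quoted in the text:

```python
# Hedged check of the quoted spot-size numbers (values copied from the text).
magnification = 15.6          # imaging magnification onto the CCD
measured_on_ccd_um = 840.0    # magnified beam diameter at the camera, in microns
theoretical_um = 46.77        # Gaussian spot size calculated in chapter 3

actual_diameter_um = measured_on_ccd_um / magnification          # ~53.8 um
error_pct = (actual_diameter_um - theoretical_um) / theoretical_um * 100
print(round(actual_diameter_um, 1), round(error_pct, 1))         # ~53.8, ~15.1
```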

Figure 5.3: Beam intensity profile of a single beam in the array
We also took multiple measurements of the beam's diameter away from its focus.
Figure [5.4] illustrates the locations of the measurement points along the beam's path.
The divergence angle was calculated and found to be approximately 17% larger than its
theoretical value. Hence, the beam diverges at a rate faster than what we expected and
we needed to take that into consideration when calculating the size of the optics used.

Figure 5.4: Gaussian beam propagation and location of measurement points
The input setup was now installed in the apparatus. The input beam array enters
the White cell from behind the MEMS through the input turning mirror and gets focused
on the MEMS pixels.
The output arm was aligned last and was placed at a +10° angle with respect to the
MEMS normal. To do so, the input beams were first focused onto the MEMS pixels, the
pixels were tipped to a +5° angle, and the reflected beam was used to set the location
of the output setup.
At this point all the angular alignment is completed and all three arms of the
White cell along with the input and output arms are in place. The final step in aligning
the setup is to adjust the longitudinal distances between the optics used in order to
establish the imaging conditions discussed earlier. To assist with placing all lenses and
mirrors at their right location, three imaging arms were introduced to the setup. Figure
[5.5] shows the location of the imaging arms and the magnification associated with each.
The beams were picked up, using partially reflective pellicles (shown in blue in figure
[5.5]), as they returned from each of the White cell arms after being refocused by
spherical mirrors A, B, and C. An additional imaging arm with a much higher magnification
was added to allow observation of the individual spot profiles.

Figure 5.5: Imaging arms locations
The setup is now completely assembled on the optical table. Figures [5.6a] and
[5.6b] show photographic images of the assembled setup along with the control and test
equipment used.

Figure 5.6a: Photographic image of the setup showing a top view of the input and
output optics along with a section of the linear WC setup

Figure 5.6b: Photographic image of the WC setup showing all the test and control
equipment used
To image the MEMS, the MEMS pixels were illuminated using a flashlight with
an IR filter mounted on it such that only wavelengths between 1100 nm and 1600 nm
would pass. Since our operating wavelength is 1550 nm, doing so ensures that the
MEMS image captured by the CCD camera is located at the same plane as the beam
array and hence allows for more accurate alignment. If we had used white light directly
to illuminate the pixels, the image produced would focus at a different plane than the
focus of the beam array. Figure [5.7] shows an image of the MEMS pixels using arm 3,
where pixels tipped to either ±10° appear as blank spots. The illumination source was
placed along arm C.


Figure 5.7: Magnified image of the MEMS pixels captured using an IR CCD camera
Figures [5.8a] and [5.8b] show the returning beams on the MEMS pixels captured
using arm 3 and arm 1, respectively, after going through 10 bounces through the cell. The
even-numbered bounces are seen in arm 3, including the input bounce, bounce 0. The odd
bounces are recorded at arm 1 as beams bounce back from WC mirror A. At the 11th
bounce the beams are directed towards the output arm, located at a +10° angle with
respect to the MEMS normal, where the beams are summed and analyzed.

Figure 5.8a,b: The beam array imaged at the MEMS plane. We see all the even-
numbered bounces in (a) and the odd-numbered ones in (b)
In part a of figure [5.8] we can see the entire six-beam matrix as we are picking
the beam up from arm B, where all beams, whether delayed or not, have to pass through.
On the other hand, in part b we notice that the upper half triangle of the beam matrix is
missing. This is explained by realizing that we are picking the beam up from arm A,
where only beams that don't encounter any delay circulate. Hence, we see five beams on
the first bounce, four on the third and so on.
In figure [5.9] we illustrate the actual MEMS pixels used to produce the bounce
pattern in the TOC. The design choices were limited due to several malfunctioning
pixels scattered all around the pixel matrix. The bad pixels are highlighted in red. The
input and output pixel columns are labeled Bounce 0 and Bounce 11 respectively.

Figure 5.9: MEMS pixel matrix showing the locations of pixels used and all
malfunctioning pixels
The last step before starting to take measurements was to modulate the CW output
of the laser source. The laser output is connected to the MZ modulator, where the
RF port of the modulator is fed from a function generator with a pulse waveform. The
modulator bias was controlled using a DC power source biasing the modulator at half of
its full peak-to-peak voltage range. Doing so allows us to achieve the maximum
modulation depth. The RF pulse width and signal frequency were set to 34.3 ns and
9.71 MHz, respectively. The pulse frequency was chosen such that the pulse width
represents only 33.3% of the total pulse period, thereby generating our test signal,
0 1 0, described in previous chapters. Figure [5.10] shows a schematic of all six pulses
transmitted as a function of time and the autocorrelation result of summing all six signals.
As shown in the figure, the delay increment, τ, is set to 6.86 ns, which is equivalent to
a single roundtrip in the delay arm, or a distance of 2.06 m.
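The timing numbers above are mutually consistent; a quick hedged check (values copied from the text, 3 × 10^8 m/s used for the speed of light in air as elsewhere in this chapter):

```python
# Verify the duty cycle and the delay-arm roundtrip length quoted above.
c = 3.0e8              # speed of light in air, m/s (approximation used in the text)
pulse_width_ns = 34.3
pulse_freq_mhz = 9.71
tau_ns = 6.86          # delay increment = one roundtrip in the delay arm

period_ns = 1e3 / pulse_freq_mhz             # ~103 ns pulse period
duty_pct = pulse_width_ns / period_ns * 100  # ~33.3% -> the 0 1 0 pattern
roundtrip_m = c * tau_ns * 1e-9              # ~2.06 m delay-arm roundtrip
print(round(duty_pct, 1), round(roundtrip_m, 3))
```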

Figure 5.10: Input pulse signals and their autocorrelation function as a function of time
We have now completed all the necessary steps in the design and set up of the
White cell-based TOC. In the next section we describe the correlation outputs obtained
and show the effects of artificially adding different types of impairments onto the signal
on the correlation output.
5.3 Correlation Measurements
The output beams were temporally summed onto the InGaAs photodiode by
focusing all six beams on the 0.8 mm
active area of the detector. The detector current
output was amplified and converted into a voltage signal using a photocurrent amplifier
with internal bias circuitry. The output of the amplifier was then monitored on the CRT
screen of a high speed oscilloscope.
We first needed to test whether the detected output power was sufficient to
produce a signal that could be resolved by the oscilloscope. We disconnected all inputs
except for one allowing only a single beam to circulate in the cell. We chose the beam
with the least amount of delay (circulates only in the null cell) and then repeated the test
for the beam with the longest delay. Doing so allowed us to also ensure that both ends of
the beam array are landing on the photodiode from which we can conclude all the beams
in between are too. Figures [5.11a] and [5.11b] show screen shots of the input pulse
along with the test pulses, where we can see that the amplitudes of both output beams are
detectable and comparable. The green trace represents the input pulse signal and the
yellow trace represents the detected output pulse. Note that the amplitude scale for the
input signal is 40 times the scale of the output. The amplitude of the beam with zero
delay was detected to be 57.58 mV, while the amplitude of the beam with five delays
was found to be 43.37 mV.


Figure 5.11a: Oscilloscope screen shot showing the input pulse and the output pulse
with zero delay

Figure 5.11b: Oscilloscope screen shot showing the input pulse and the output pulse
with five delays

Looking carefully at the two figures above, we see that there is a fixed delay
between the input and output signals introduced by the input setup and the null cell,
which we will refer to as the null delay. This delay can be obtained from figure [5.11a]
and is found to be 57.7 ns. The null delay can be calculated using the following formula:
Null delay = (Null length × No. of roundtrips)/(speed of light in air)
             + (Input & Output length)/(speed of light in air) + Modulator & Fiber delay
           = (1.296 m × 5)/(3 × 10^8 m/s) + (0.58 m)/(3 × 10^8 m/s) + 31.0 ns
           = 21.6 ns + 1.93 ns + 31.0 ns = 54.53 ns

Our results show that the delay is off by 5.8% from its theoretical value.
Furthermore, we also extract the maximum delay associated with our White cell-
based TOC by subtracting the delay accumulated by the beam in figure [5.11a] from the
one in figure [5.11b]. That number was found to be 33.87 ns. We similarly calculate the
delay based on the distance the beam travels:

Maximum delay = (Delay arm length × No. of roundtrips)/(speed of light in air)
              = (2.06 m × 5)/(3 × 10^8 m/s) = 34.33 ns

Comparing the two values, we find the error to be 1.3%.
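Both delay figures above can be recomputed directly from the stated lengths; a hedged sketch, using the same 3 × 10^8 m/s approximation as the text:

```python
# Recompute the null delay and the maximum delay from the quoted distances.
c = 3.0e8  # speed of light in air, m/s

# Null delay: 5 roundtrips of the 1.296 m null cell, 0.58 m of input/output
# path, plus the 31.0 ns modulator & fiber delay quoted in the text.
null_delay_ns = (1.296 * 5) / c * 1e9 + 0.58 / c * 1e9 + 31.0   # ~54.53 ns
err_null = (57.7 - null_delay_ns) / null_delay_ns * 100          # ~5.8%

# Maximum delay: 5 roundtrips of the 2.06 m delay arm.
max_delay_ns = (2.06 * 5) / c * 1e9                              # ~34.33 ns
err_max = (max_delay_ns - 33.87) / max_delay_ns * 100            # ~1.3%

print(round(null_delay_ns, 2), round(max_delay_ns, 2))
```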
We now reconnect all six inputs and measure their sum as a function of time on
the detector, which corresponds to their autocorrelation function. The following figure,
figure [5.12] shows the input pulse signal along with the output autocorrelation function.

Figure 5.12: Oscilloscope screen shot showing the input pulse and the output
autocorrelation function
Note that the correlation function width is less than twice the pulse width, 2T, by a
single roundtrip delay, as we described in earlier chapters. The correlation width was
measured to be 66.32 ns, which when compared to its theoretical value of 68.6 ns
(2 × 34.3 ns) results in a total error of 3.3%. The correlation function amplitude was
measured to be 788 mV, which represents the summation of the amplitudes of all six
input beams.
Note here that all six inputs do NOT have the same weights. This is due to
several reasons, namely:
- Different coupling/insertion losses are associated with each beam at the splitter.
- Beams at the top or the bottom of the beam array experience higher aperturing
losses due to the finite size of the spherical optics used.
- Beams traveling through the center of the optics experience the least amount of
loss. This problem could be overcome by using oversized optics.
- Each beam traverses a different path in the TOC, where the number of optical
surfaces associated with each path varies, and therefore the losses vary too.
Therefore, in order to obtain an accurate measurement of the output correlation
function that we could compare to our simulations, we needed to measure the weight
associated with each beam and reflect the results in our simulations. Table [5.1] shows
the weights associated with each of the six arms in units of power and as a percentage
of the total output power. The total output power of the correlation function
was measured to be 460 µW ± 25 µW.
Beam Number   Output Power (µW)   Factorized weight (% of total power)
Beam 1        16                  3.59
Beam 2        42                  9.43
Beam 3        91                  20.44
Beam 4        93                  20.89
Beam 5        188                 42.24
Beam 6        15                  3.37
Table 5.1: Weights of the optical power associated with each arm in the TOC
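The factorized weights in table 5.1 appear to be each beam's share of the summed output power (445 µW), truncated to two decimals; a hedged arithmetic check:

```python
import math

# Beam powers from table 5.1 (in microwatts); weights = share of summed power.
powers_uw = {1: 16, 2: 42, 3: 91, 4: 93, 5: 188, 6: 15}
total = sum(powers_uw.values())                                    # 445 uW
weights = {b: math.floor(p / total * 1e4) / 100 for b, p in powers_uw.items()}
print(total, weights)   # reproduces the table's factorized column
```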
We clearly see that beams three through five hold the majority of the total power
(>80%), whereas beams at the edge of the array attain less than 7% of the total power. In
addition to the reasons mentioned previously, one might suspect that the edge beams are
not well focused on the detector and hence not fully landing on its active area. This
possibility, however, was eliminated by disconnecting all beams except for the edge
beams and measuring their power one at a time. The detector position was adjusted
in both the lateral and the transverse directions to make sure that each beam landed at
the center of the detector, but no change in the output power was recorded, ruling out
that possibility.
5.4 Impairment Measurements
In this section we describe the correlation results obtained when the input signal is
modified by artificially adding impairments onto it. We investigate the effects of signal
attenuation, dispersion, and noise. Measurements of the crosscorrelation function were
recorded for different values of added impairments and the results were analyzed.
The impairment generation circuitry shown in figure [5.1] was removed from the
setup, since the advanced features of the Tektronix Arbitrary Function Generator
(TEK/AFG3252) allowed us to internally generate all the waveforms we needed
without any external circuitry. We tested our circuit's performance, described earlier
in chapter 2, against the internal functions provided by the function generator, and the
generator's results proved comparable and even more accurate.
We recall that attenuation is modeled by a decrease in the signal's amplitude (or
voltage), while dispersion is modeled as an increase in the signal's rise/fall times. Noise
was added to the signal through a built-in Gaussian noise generator that allowed us to
adjust the percentage of noise added as a fraction of the signal's amplitude.
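As a hedged illustration of these three models (sample counts and impairment levels below are assumed for illustration, not taken from the experiment; a simple moving-average smear stands in for the cosine-window spread used in the appendix code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60                                                           # samples per bit
pulse = np.concatenate([np.zeros(n), np.ones(n), np.zeros(n)])   # 0 1 0 test signal

attenuated = 0.75 * pulse                                  # attenuation: amplitude scaling
dispersed = np.convolve(pulse, np.ones(15) / 15, mode="same")  # dispersion: edge smear
noisy = pulse + 0.2 * rng.standard_normal(pulse.size)      # noise: 20% additive Gaussian

print(attenuated.max(), round(dispersed.max(), 3))         # 0.75, 1.0
```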
5.4.1 Attenuation Measurements
The amplitude of the input signal was reduced to 75%, 50%, and 25% of its
original value and three measurements were recorded and compared to the measured
autocorrelation function. All four waveforms were regenerated using the recorded data
and then superimposed on the same time and amplitude scales. Results show that the
correlation peak decreases linearly with the signal's amplitude, while the shape of the
correlation function remains unaffected, as expected. Figure [5.13] demonstrates the
resultant cross correlation functions for the input signal values mentioned above.

Figure 5.13: Measured effect of signal attenuation on the correlation output

5.4.2 Dispersion Measurements
We varied the amount of artificial dispersion added to the signal by adjusting the
rise and fall times of the pulse signal, which results in a pulse spread (or smear) that we
used to approximate dispersion. Two measurements were taken at 25% and 50% of
added dispersion (as defined earlier in chapter 3). The correlation output was again
compared to the autocorrelation function and the results were analyzed. Figure [5.14]
shows the measured input and output signals as a function of time. We can clearly see from the
figure that as the signal accumulates more dispersion, the crosscorrelation output is
affected in two ways: the correlation peak amplitude decreases and the curvature of the
correlation output changes.

Figure 5.14: Measured effect of signal dispersion on the correlation output
5.4.3 Noise Measurements
Noise was introduced on the original signal using the function generator's built-in
noise generator. The device allows us to vary the amount of Gaussian noise between
20% and 50% relative to the signal's amplitude.
Unfortunately, we ran into an unexpected problem during these measurements.
Recall that we are summing all signals using an InGaAs photodiode with an upper cutoff
frequency of 50 MHz as calculated earlier in equation [3.1]. This means that all
frequencies higher than 50 MHz are going to be filtered out. We found the frequency
bandwidth of the noise signal produced by the generator to be 240 MHz, which is
approximately five times the detector's bandwidth. Hence, the limitations in our
equipment formed a barrier at this point and we weren't able to get any meaningful
noise measurements.
This problem could be solved by using a detector with a cutoff frequency that is at
least twice the signal's frequency. Note that the detector's bandwidth (BW) is a function
of its junction capacitance, and as the BW increases the capacitance has to get smaller.
As a result, the detector head ends up with a much smaller active area. The small area
would complicate the output optics design, as multiple beams would need to be focused
onto a much smaller area while keeping all beams separate. Using higher precision
optics in the output setup, however, should solve this problem.
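The bandwidth-capacitance trade-off can be sketched with the standard RC-limited cutoff, f_3dB = 1/(2πRC); the load resistance and capacitance values below are assumptions chosen purely to illustrate the scaling, not measured properties of our detector:

```python
import math

R_load = 50.0  # ohms; assumed load resistance for illustration

def f3db_mhz(c_junction_pf):
    """RC-limited photodiode cutoff frequency in MHz for a given capacitance in pF."""
    return 1.0 / (2 * math.pi * R_load * c_junction_pf * 1e-12) / 1e6

# A ~5x higher cutoff requires ~5x less junction capacitance, and capacitance
# scales with active area -- hence the smaller detector head noted above.
print(round(f3db_mhz(64.0)), round(f3db_mhz(12.8)))   # ~50 MHz vs ~249 MHz
```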
As an attempt to take a meaningful measurement, we varied the amount of noise
imposed on the signal and observed the output correlation function for any changes. The
output function, however, showed no response. The amplitude of the crosscorrelation
function remained constant with noise levels of up to 50%.
5.4.4 Correlation Measurements Analysis
From the measurements presented in this section so far, we can see an agreement
between the measured correlation function behavior and our previous simulations
described in chapter 3. In figure [5.15] we plot the change in the correlation function's
peak amplitude as a function of both attenuation and dispersion, comparing the measured
results to simulations of the variation in the correlation peak due to both impairments.

Figure 5.15a,b: Comparison between theoretical and experimental results (a)
Attenuation (b) Dispersion
From the figure we conclude that our experimental measurements follow our
simulations with a total error margin of less than 5%. These results demonstrate the
validity of our method and confirm our simulations.
5.5 Power Loss Analysis
Given the number of surfaces each beam circulating in the TOC has to hit, one
might argue that the power losses could be too high and would limit the scalability of the
device. In this section we demonstrate the total losses associated with our setup and
explain the reasoning of each loss. We divide our analysis into three sections, namely:
Losses due to the input setup, losses due to the TOC, and finally losses due to the output
setup. In table [5.2] we describe the power losses associated with each section and
provide a brief explanation for the cause of each.

Section         Component             Power Loss (dB)   Power Loss (% of total)   Cause
Section (I)     MZ Modulator          2.95              13.90%   Insertion loss + loss for
                                                                 operating at quadrature
                1x8 Splitter*         10.6              50.11%   Insertion loss + coupling
                                                                 loss of seven 1x2 splitters
                8x1 V-Groove array*   4.0               18.91%   Combined insertion loss
                                                                 of eight fiber inputs
Section (II)    White cell optics     3.6               17.02%   Combined loss of WC
                                                                 mirrors, field lenses, &
                                                                 MEMS pixels [11]
Section (III)   InGaAs Photodiode     (detector; sensitivity = 0.95 A/W at λ = 1550 nm)

Total                                 21.15 dB ± 1.1 dB

Table 5.2: Power loss measurements of our experimental OPM apparatus
* Both the splitter and the v-groove fiber array were designed for λ = 1310 nm, which
resulted in higher losses, as our working wavelength is λ = 1550 nm.

Table [5.2] shows that the majority of the losses are due to incompatible
equipment, as is the case with both the splitter and the v-groove fiber array, which
account for almost 70% of the total accumulated power loss. These losses could be
largely reduced by replacing the 1xN coupler and the fiber array with ones designed
for our operating wavelength. On a positive note, the losses associated with the
WC-based TOC totaled less than 4 dB, which is less than 0.35 dB per bounce. The
latter observation indicates that we can scale our system to more than 20 bounces
while maintaining the TOC losses below 7 dB.
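The scalability claim above follows directly from the measured TOC loss; a hedged check (the 11-bounce count is assumed from the bounce pattern described in section 5.2, bounce 0 through the 11th output bounce):

```python
# Per-bounce loss of the White cell optics and extrapolation to 20 bounces.
toc_loss_db = 3.6                        # section (II) of table 5.2
bounces = 11                             # assumed: bounce 0 through bounce 11

per_bounce_db = toc_loss_db / bounces    # ~0.33 dB/bounce (< 0.35 dB)
loss_at_20 = per_bounce_db * 20          # ~6.5 dB (< 7 dB)
print(round(per_bounce_db, 3), round(loss_at_20, 2))
```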
6.1 Accomplishments
In this dissertation we presented a complete design of an optical performance
monitor (OPM) based on optical correlation. The design utilized a novel temporal
optical correlator built on White cell technology. The system was simulated, and the
simulation results were analyzed and compared to other existing techniques, where a
relationship was established between the correlation output measurements and the
optical signal's BER.
We also implemented a proof-of-concept experimental apparatus of the OPM
design that utilized a handful of off-the-shelf optics and a single analog MEMS.
Experimental results presented in chapter 5 were very close to theoretical calculations,
with error percentages less than 10%. The correlation output was analyzed using a high-
speed oscilloscope, and the effect of different types of impairments on the correlation
function was presented. The results matched our simulations, validating our optical-
correlation-based OPM technique.
The optical design was simulated using OSLO, where we made sure that all the
imaging conditions in the system are satisfied and that all the beams landed on the output
detector. The design also aimed to minimize aberrations with emphasis on astigmatism
to ensure that all beams fully land on the MEMS pixels during every round trip.
Finally, we conducted a detailed analysis of the optical power losses associated
with our proof-of-concept experimental apparatus and showed that the total losses
associated with the TOC were less than 4 dB. We also showed that the design could be
scaled to include more inputs without a large increase in the optical power loss.
6.2 Future Work
Our proof-of-concept design and demonstration could be expanded and modified
to make the design more realistic and to improve the correlation output's sensitivity
to various impairments. We divide our proposed improvements into four sections: the
input system, the White cell-based TOC, the output summation technique, and the
correlation output measurement technique.
6.2.1 Input System Improvements
Recall that the input to our system was artificially manipulated to add the desired
impairments on the transmitted signal. Such a method would only produce an
approximation of the actual effect of each of the impairments discussed and not the real
effect. A modification could be made to the design to replace the impairment generation
circuitry with a real optical link that includes one or more active optical components such
as an optical amplifier. The link could include several fiber spans of lengths up to tens of
kilometers. Additionally, the input CW laser could be replaced with a tunable laser,
allowing for WDM of multiple signals onto a single fiber. The latter modification would
enable us to examine non-linear optical impairments that are induced by the presence
of multiple channels over a common link. It would also give us a more accurate
measurement of the effect of dispersion on adjacent pulses and give us a realistic measure
of the total allowable dispersion in the link. We can additionally see the effect of noise
present due to different types of noise sources as we test the link with and without an
optical amplifier.
6.2.2 White Cell-based TOC Improvements
The experimental design that we implemented took advantage of the slow
modulation speed of the input signal. The spacing between the optics was fairly large
and the delay arm was very long (>2 m roundtrip). The design could be much more
compact if the delay element required by the TOC's TDL were much smaller. For
example, if our signal were modulated at a bit rate of 10 Gbps, that would require our
delay increment to be much smaller than 100 picoseconds, which would translate to a
delay arm shorter than 15 mm, or 30 mm roundtrip. The entire TOC could then be
designed to fit in a very small area (e.g., a 100 mm x 100 mm box), even including the
other WC mirrors and mounts. Such a design, however, would require most of the optics
and mounts to be custom made, which would increase the overall price of the system.
6.2.3 Output Summation Improvements
The output summer utilized in our design uses a high-speed photodiode that acts
as an O-E module, through which we can temporally sum the beams incident on the
detector's active area and analyze the output using an oscilloscope. This method is
limited by the detector's active area, which at higher data rates would get even smaller.
Hence, such a technique is not scalable and was only implemented in our design for
simplicity and due to budget constraints. A modification could be made to the output
by replacing the single photodiode with a photodiode array, where a larger number of
beams could be summed. However, alignment could be an issue, as the beams leaving
the White cell exit at slightly different angles. Another modification would be the use
of the trap-door device proposed and demonstrated at The Ohio State University. The
device utilizes a White cell system and is independent of the number of input beams to
the system. It takes an array or a bundle of spatially separated optical beams and steers
them in a White cell setup such that they all exit at the same location with the same
angle. The output could then be fed into a single photodiode head without any high-
precision alignment needed.
6.2.4 Correlation Output Measurement Improvements
So far in our results, we were only able to analyze the correlation output in the
electronic domain by utilizing a high-speed oscilloscope. Since we only require a
thresholding measurement as a first indicator of the link's health, an optical thresholding
device could be placed at the output arm. We suggested in chapter 2 the use of a saturable
absorber device, which would output a pulse if there is enough incident light intensity on
it from the correlation output. If, on the other hand, the signal were corrupt and the
intensity present were below the desired correlation threshold, no output would be
present and we could detect the channel failure in real time.

In this appendix we describe the code used to simulate the effects of adding
optical impairments on the correlation function. We divide the code into four sections,
with each section adding one type of impairment (i.e., attenuation, dispersion, noise,
and/or jitter) to a test bit sequence (e.g., [0 1 0]). The code is written in and run with
the MATLAB software.
The program is capable of handling the addition of multiple impairments onto any
chosen bit sequence. The correlation takes place by manually multiplying the delayed
copies of the impaired bit sequence with the desired weights and then summing all the
signals with any chosen tap resolution. The program outputs the final correlation
function in a graphical format. Additionally, the program calculates the area (energy) of
the output correlation function above any given threshold and compares the result to the
area of an eye-diagram affected by the same type of impairments. The program
algorithm works as follows:
1. Create a bit stream with the desired number of samples per bit
2. Generate a new bit stream with one or more types of impairments added to it
3. Select which impaired bit stream to correlate the original bit stream with
4. Sum the delayed copies of the chosen impaired signal to obtain the correlation function
5. Determine the area of the correlation function above a defined threshold
6. Define the impairment thresholds to be used in calculating the area of the eye diagram
using the chosen impaired bit stream
7. Plot the desired output(s)
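Steps 1 through 4 above can be sketched compactly; a hedged Python equivalent (tap count and samples-per-bit chosen to match the text; equal tap weights assumed for simplicity):

```python
import numpy as np

n = 60                                                           # samples per bit
signal = np.concatenate([np.zeros(n), np.ones(n), np.zeros(n)])  # 0 1 0 sequence
taps = 6                                                         # correlator taps
tau = n // taps                                                  # delay increment (samples)

# Step 4: sum equally weighted, successively delayed copies of the (possibly
# impaired) signal to approximate the correlation output.
correlation = np.zeros(signal.size + (taps - 1) * tau)
for k in range(taps):
    correlation[k * tau : k * tau + signal.size] += signal / taps

print(round(correlation.max(), 3))   # peak where all six copies overlap
```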

Simulation Code:
clear all
array = [ 0 1 0 ]; % Test signal to be used
tap_resolution=6; % Number of correlator taps
total_bit_size=60; % Number of points per bit

amplitude=0.7; % Percentage of attenuation
sigma_noise = 0.1; % Percentage noise added
sigma_jitter= 0.01; % Percentage jitter added
pi= 3.14159265;

start = 0.25*total_bit_size;
for i= 1 : array_size;
for x=start : step_size: start+(0.75*total_bit_size-
original_array(x) = array(i);
start= start+(0.75*total_bit_size-0.25*total_bit_size);
*total_bit_size) )=0;

%=============== INITIALIZATION ================
%========= GENERATE DISPERSED ARRAY ============
for k= 1 : array_size; % [Dispersion LOOP]
for i
eight_bit_array(i) = array(k);

%append the remaining part with zeros

%====== CHECK IF BIT IS 1 or 0 ============

if (array(k)==1)
% First: Create an original bit
% Second: Apply cosine function to the new bit
window_size=0.5*total_bit_size;% Percentage Dispersion
newbit_array_size= (window_size*2)+(0.5*total_bit_size -
newbit_array_start= ceil(((total_bit_size -
newbit_array_finish= ceil(((total_bit_size -
for i=newbit_array_start:step_size:(newbit_array_finish)

%append the rest of the array with zeros
first_part= zeros(1,(newbit_array_start-1));
third_part= ones(1,(newbit_array_size-(2*window_size)));
fourth_part=(1+cos(pi* [1:(window_size)]/(window_size)))
fifth_part= zeros(1,(length(newbit_array)-

%======= GENERATE COSINE WINDOW ==========
cosine_window = newbit_array.*[first_part second_part
third_part fourth_part fifth_part];

if (array(k)==0)
%Create the N-bit dispersed array
% First: Shift the beginning of the generated cosine
% window to the end of the dispersed array
% Second: Add the two arrays and adjust the pointer.
% Then repeat the procedure for the next bit
% Third: Append the dispersed array with zeros up to the
% beginning of the next bit and make it ready for the next bit

for j=1 : length(cosine_window)

cosine_window_pointer= cosine_window_pointer+1;
cosine_window_pointer= cosine_window_pointer -

dispersed_array= dispersed_array+shifted_cosine_window;



shifted_cosine_window=0; %reset the shifted cosine window

end; % [END Dispersion LOOP]

array))=0; % match the length of original_array to

% add x% noise to bit 1
noise_1 = (sigma_noise/2)*randn (1,length(original_array));
% add x% noise to bit 0
noise_0 = (sigma_noise/2)*randn (1,length(original_array));
for k=1 : length (original_array)
if (original_array(k)==1)
noisy_array= original_array + noise_1;
noisy_array= original_array + noise_0;

% Add jitter to bit array

for i=1 : length(original_array+jitter_amount)
jitter (i+jitter_amount)= original_array(i);

% Add jitter to bit array

attenuated_array = amplitude*original_array;

% Change this for different impairment effect

chosen_bit_array= dispersed_array; % example

%=============== INITIALIZATION ================
delay_increment= tau;

% Generate two arrays
% 1. Autocorrelation array using the original bit array
% 2. Crosscorrelation array using the chosen bit array
for r=1 : (total_bit_size/tau)
for c=1 : length (original_array)


for c=1 : length(tap_delay_line)




% Determine the correlation peak and upper 50% area

area_start= (total_bit_size - (0.25*total_bit_size));
area_finish= (total_bit_size + (0.25*total_bit_size));

for jj=area_start : area_finish
if (cross_correlation_array(jj) >= 0.5)
if (cross_correlation_array(jj) >
correlation_area1= (correlation_area1 +
correlation_area2= (correlation_area2 +
correlation_area1= correlation_area1 +0;
correlation_area2= correlation_area2 +0;
correlation_area= correlation_area1+correlation_area2;
% Define the eye diagram thresholds to be at +-2*sigma_noise
% and +-3*sigma_jitter of the bit's amplitude, where sigma is
% the standard deviation
% Determine the x-value at each of the thresholds at four
% points, two rising and two falling
% Determine the slope of the dispersed/attenuated bit
% Repeat for variable amounts of each impairment and
% tabulate results

%=============== INITIALIZATION ================
% Assuming that ones and zeros have the same amount of
% distortion, this covers the lower part, as we will multiply
% by two when determining the eye area.

simulated_bit_array= chosen_bit_array;


%========== CALCULATE EYE AREA =================
% NOTE: several right-hand sides below were truncated in the source; the
% completions relative to the 50% level and the (1-(2*sigma_noise)) noise
% boundary, and the reference_bit_area normalization, are plausible
% reconstructions.
for ii=start:finish
    if (chosen_bit_array(ii) > 0.5)
        bit_area(gg)= bit_area(gg)+chosen_bit_array(ii)-0.5;
        if (chosen_bit_array(ii) > (1-(2*sigma_noise)))
            upper_bit_area(gg)= (upper_bit_area(gg) + chosen_bit_array(ii))-(1-(2*sigma_noise));
        end
    end
end

bit_area_with_boundries= bit_area - upper_bit_area;
measured_bit_area= 2*bit_area_with_boundries;
normalized_measured_bit_area(gg)= measured_bit_area / reference_bit_area;

for cc= start : finish
    if (simulated_bit_array(cc) > 0.5)
        if (simulated_bit_array(cc)<= chosen_bit_array(cc))
            half_upper_area_no_boundries(gg)= (half_upper_area_no_boundries(gg) + simulated_bit_array(cc))-0.5;
        else % simulated_bit_array(cc) > chosen_bit_array(cc)
            half_upper_area_no_boundries(gg)= (half_upper_area_no_boundries(gg) + chosen_bit_array(cc))-0.5;
        end
        if (simulated_bit_array(cc) > (1-(2*sigma_noise)))
            if (simulated_bit_array(cc)<= chosen_bit_array(cc))
                half_upper_boundry_area(gg)= (half_upper_boundry_area(gg) + simulated_bit_array(cc))-(1-(2*sigma_noise));
            else % simulated_bit_array(cc) > chosen_bit_array(cc)
                half_upper_boundry_area(gg)= (half_upper_boundry_area(gg) + chosen_bit_array(cc))-(1-(2*sigma_noise));
            end
        end
    end
end

% Calculate the area within the simulated bit until 50% of the bit size,
% then add the second half from the reference array to avoid the error in
% the falling slope of the bit.

upper_area_with_boundries= upper_area_no_boundries - upper_boundry_area;
measurement_area= 2 * upper_area_with_boundries;
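The eye-area bookkeeping above can be condensed into a short, hedged sketch; the function name eye_area and the toy samples are ours, not the dissertation's:

```python
# Sketch of the eye-area metric used above (names are ours, not the
# dissertation's): integrate the trace above the 50% level, remove the
# part above the noise boundary at 1 - 2*sigma_noise, then double it
# because ones and zeros are assumed equally distorted.
def eye_area(bit_trace, sigma_noise):
    boundary = 1.0 - 2.0 * sigma_noise
    area = sum(v - 0.5 for v in bit_trace if v > 0.5)        # above 50% level
    upper = sum(v - boundary for v in bit_trace if v > boundary)
    return 2.0 * (area - upper)

ideal = [0.0, 1.0, 1.0, 1.0, 0.0]   # toy "bit": three samples at the one level
print(round(eye_area(ideal, sigma_noise=0.05), 6))   # 2*(3*0.5 - 3*0.1) = 2.4
```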


subplot(17,1,1),plot (tap_delay_line (10,:))
subplot(17,1,2),plot (tap_delay_line (20,:))
subplot(17,1,3),plot (tap_delay_line (30,:))
subplot(17,1,4),plot (tap_delay_line (35,:))
subplot(17,1,5),plot (tap_delay_line (40,:))
subplot(17,1,6),plot (tap_delay_line (45,:))
subplot(17,1,7),plot (tap_delay_line (50,:))
subplot(17,1,8),plot (tap_delay_line (55,:))
subplot(17,1,9),plot (tap_delay_line (60,:))
subplot(17,1,10),plot (tap_delay_line (65,:))
subplot(17,1,11),plot (tap_delay_line (70,:))
subplot(17,1,12),plot (tap_delay_line (75,:))
subplot(17,1,13),plot (tap_delay_line (80,:))
subplot(17,1,14),plot (tap_delay_line (90,:))

subplot(17,1,15), plot (cross_correlation_array,'r')


In this appendix we describe the concept of ray matrices, which we use to
evaluate the imaging conditions in the White cell-based TOC. Ray matrices, or
matrix optics, are a method of tracing a paraxial optical ray through an
optical system. A ray is described by two values: its position and its angle
with respect to the optical axis. These two values vary as the beam traverses
the refractive and reflective surfaces of the optical system.
In the paraxial approximation, the ray is assumed to travel very close to the
optical axis, so that its angle with respect to the optical axis is very small.
This approximation allows the substitution of sin θ with θ and tan θ with θ,
where θ is the ray angle, and lets the input and output planes of an optical
system be related by only two linear equations. Hence, in a paraxial system we
can describe any optical system with a 2x2 matrix, often referred to as the
transfer matrix of the optical system. The transfer matrix depends on the
properties of the transmission medium, i.e., its refractive index n, and on the
surface curvature of the optical medium, e.g., flat, refractive, or reflective.
Some standard transfer matrices that were used in our initial design are
described below.
Ray Propagation in Free Space:
Beams traveling in free space assume the transmission medium to be air, with
refractive index n = 1. The transfer matrix, M, is determined by the distance
traveled by the optical ray and is often referred to as the translation matrix,
since it affects only the position of the beam. The matrix representation is:

M = | 1  d |
    | 0  1 |  ,  where d is the distance traveled in air.
Ray Propagation through a Thin Lens:
When a beam travels through a lens, the beam is refracted at the spherical
boundary of the lens, causing the ray angle to change while the beam position
remains unchanged. The output angle after refraction depends on the radius of
curvature, R, of the lens surface(s). In the thin-lens approximation, the
thickness of the lens is assumed to be negligible and has no effect on the ray
propagation. The matrix representation is:

M = |  1    0 |
    | -1/f  1 |  ,  where f is the focal length of the lens, given by the
lensmaker's equation 1/f = (n - 1)(1/R1 - 1/R2).

Ray Reflection from a Spherical Mirror:
When a ray is reflected off a spherical mirror, the direction of travel and the
ray angle are altered. The transfer matrix is defined by the radius of
curvature, R, of the spherical mirror:

M = |  1    0 |
    | -2/R  1 |
So far we have described the transfer matrix associated with a single element.
If several optical elements are cascaded such that they all lie in a single
plane (i.e., planar geometry), the resultant transfer matrix of the entire
system is obtained by multiplying all the matrices in reverse order. For
example, for a system of N components (sub-systems) M1, M2, ..., MN, the
resultant transfer matrix is M = MN · MN-1 · ... · M2 · M1. This concept was
used when validating the imaging conditions of our White cell design.
The resultant transfer matrix can be represented by an ABCD matrix, where each
letter represents one of the entries of the 2x2 transfer matrix. A beam
entering the system at an initial position y1 with slope θ1 will exit at a new
position y2 with a new slope θ2. The matrix representation is as follows:

| y2 |     | A  B | | y1 |
| θ2 |  =  | C  D | | θ1 |
Each entry of the ABCD matrix has some significance. The first entry, A, gives
the magnification associated with an imaging system, while B determines the
imaging properties of the system: a value of B = 0 indicates that the system is
an imaging system. We will not use the other two entries, C and D, in our
calculations and will not discuss their functionality here; for a detailed
discussion of the ABCD matrix, please refer to the references.
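As a concrete illustration of the cascading rule and the B = 0 imaging condition, here is a small pure-Python sketch; the helper names and the numerical example are ours, not from the dissertation:

```python
# Paraxial ray-matrix helpers (2x2 matrices as nested lists).
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def translate(d):            # free-space propagation over distance d
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):            # thin lens of focal length f
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def cascade(*elements):
    """Combine elements listed in the order the ray meets them,
    i.e. multiply their matrices in reverse order."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for E in elements:
        M = matmul(E, M)     # each new element multiplies on the left
    return M

# Single lens, f = 100 mm, object at 150 mm: the thin-lens equation puts
# the image at 300 mm, so the cascaded system should have B = 0 and
# magnification A = -di/do = -2.
f, do = 100.0, 150.0
di = 1.0 / (1.0 / f - 1.0 / do)
M = cascade(translate(do), thin_lens(f), translate(di))
A, B = M[0][0], M[0][1]
print(abs(A + 2.0) < 1e-9, abs(B) < 1e-9)   # True True
```

The B = 0 check is exactly the imaging test used in the appendices that follow.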



In this appendix we present the code used to validate the imaging conditions in
the White cell-based TOC and to determine the distances between the optics used
in the design. We also show the code used to determine the divergence angle of
the beam in the TOC based on the input beam size and the MEMS pixel size. The
code is also used to find the minimum size of the optics needed such that
99.99% or more of the beam is captured within the optics. Finally, we show the
code used to determine the minimum separation required between the MEMS and the
field lens of each White cell arm.
The calculations are done using MAPLE 9.01 software and are divided into
several sections. In the first part we verify both imaging conditions described
in chapter 2 for the Null cell and determine the distances between the optics
used. Next, we design the optics in the delay arm, showing the ray matrices
used to design the lens train such that it satisfies both imaging conditions of
the White cell.
C.1 Imaging Between Mirrors A and B Through the MEMS
The first step in calculating the imaging conditions is to determine the
distance between the MEMS and the field lens. We choose the WC mirrors to be
two concave spherical mirrors with a radius of curvature of R = 1000 mm. We
then try different catalog lenses for the field lens and determine which one
results in the smallest WC arm size while maintaining a high F# (seven or
more). The distance between the field lens and the White cell mirror, d1, is
set to the focal length f1 of the field lens. The distance between the MEMS and
the field lens, d0, is then found from the imaging condition to be:

d0 = f1 - f1^2/R = f1 (1 - f1/R),

which for f1 = 412 mm and R = 1000 mm gives d0 = 242.256 mm. Plugging these
values into the transfer matrix of the optical system, we verify both imaging
conditions in the Null cell.

> f1:=412; d0:=d0; d1:=412; R:=1000;
> A1:= Matrix([[1,d0],[0,1]]);

> A2:= Matrix([[1,0],[(-1/f1),1]]);

> A3:= Matrix([[1,d1],[0,1]]);

> A4:= Matrix([[1,0],[(-2/R),1]]);

> M2:=A1.A2.A3.A4.A3.A2.A1;

> B1:=M2[1,2];

> d0:=evalf(solve(B1,d0));

> f1:=412; d0:=242.256; d1:=412; R:=1000;

> A1:= Matrix([[1,d0],[0,1]]);

> A2:= Matrix([[1,0],[(-1/f1),1]]);

> A3:= Matrix([[1,d1],[0,1]]);

> A4:= Matrix([[1,0],[(-2/R),1]]);

> M1:=A3.A2.A1.A1.A2.A3;

> M2:=A1.A2.A3.A4.A3.A2.A1;

The first imaging condition, imaging from mirror A to mirror B through the
MEMS, is shown in matrix M1. The total magnification appears in the A term of
the ABCD matrix, which is -1. The imaging information is carried in the B term,
which has a value of 5.68x10 to a large negative exponent, or almost zero. The
second imaging condition, imaging the MEMS back onto itself through the WC
mirror, is shown in the second matrix, M2, where again we find A = -1 and
B = 0. Therefore, both imaging conditions are satisfied in the Null arm.
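The Maple result can be cross-checked numerically; the following pure-Python sketch (helper names ours) rebuilds M1 and M2 from the values quoted above:

```python
# Re-check the Null-cell imaging conditions of C.1 in pure Python.
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def chain(*Ms):
    """Multiply matrices left to right, matching the Maple products."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for E in Ms:
        M = matmul(M, E)
    return M

f1, d0, d1, R = 412.0, 242.256, 412.0, 1000.0
A1 = [[1.0, d0], [0.0, 1.0]]            # MEMS -> field lens
A2 = [[1.0, 0.0], [-1.0 / f1, 1.0]]     # field lens
A3 = [[1.0, d1], [0.0, 1.0]]            # field lens -> WC mirror
A4 = [[1.0, 0.0], [-2.0 / R, 1.0]]      # spherical WC mirror

M1 = chain(A3, A2, A1, A1, A2, A3)      # mirror A -> MEMS -> mirror B
M2 = chain(A1, A2, A3, A4, A3, A2, A1)  # MEMS -> WC mirror -> back to MEMS
print(abs(M1[0][0] + 1.0) < 1e-9, abs(M1[0][1]) < 1e-9)   # True True
print(abs(M2[0][0] + 1.0) < 1e-9, abs(M2[0][1]) < 1e-9)   # True True
```

Both products give A = -1 and B = 0 to floating-point precision, matching the Maple output.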
C.2 Delay Arm Design
Similarly, the delay arm optics and distances were designed using the same
method as in C.1. The lens train in the delay arm consists of two lenses whose
focal lengths are chosen such that the total delay produced in the arm is
larger than 5 ns (due to limitations in the speed of our detector). Hence, the
added roundtrip distance had to be more than approximately 1.8 m. Using two
lenses with equal focal lengths and 2f-2f imaging in the delay arm, where f is
the focal length of each lens, the lenses had to have f > 200 mm. Choosing two
catalog biconvex lenses with f = 255.28 mm meets this requirement. What remains
is to find the distances between the delay arm lenses, which is done in the
following code; as in C.1, it yields A = -1 and B = 0 for both imaging
conditions. Note that these results are based on the paraxial approximation and
might change in a real system.
> f1:=412; d0:=242.256; d1:=412; f2:=255.28; f3:=255.28;

> A1:= Matrix([[1,d0],[0,1]]);

> A2:= Matrix([[1,0],[(-1/f1),1]]);

> A3:= Matrix([[1,d1],[0,1]]);

> A4:= Matrix([[1,0],[(-1/f2),1]]);

> A5:= Matrix([[1,d2],[0,1]]);

> A6:= Matrix([[1,0],[(-1/f3),1]]);

> A7:= Matrix([[1,d3],[0,1]]);

> A8:= Matrix([[1,0],[-2/(R),1]]);

> M1:=A7.A6.A5.A4.A3.A2.A1.A1.A2.A3.A4.A5.A6.A7;

> M2:=A1.A2.A3.A4.A5.A6.A7.A8.A7.A6.A5.A4.A3.A2.A1;
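The 2f-2f assumption behind the delay arm can be checked the same way; in this sketch (helper names ours) a single lens with the delay-arm focal length sits 2f from both the object and image planes:

```python
# Verify that a 2f-2f relay gives A = -1 (unit inverting magnification)
# and B = 0 (imaging), as assumed for each lens of the delay arm.
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

f = 255.28                              # delay-arm lens focal length (mm)
T = [[1.0, 2.0 * f], [0.0, 1.0]]        # propagate a distance 2f
L = [[1.0, 0.0], [-1.0 / f, 1.0]]       # thin lens
M = matmul(T, matmul(L, T))             # T(2f) . L(f) . T(2f)
print(abs(M[0][0] + 1.0) < 1e-9, abs(M[0][1]) < 1e-9)   # True True
```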


1. B. Rajagopalan, J. Luciani, D. Awduche, B. Cain, B. Jamoussi, "IP over
Optical Networks: A Framework," July 2004.

2. A. Chiu, J. Strand, "Unique Features and Requirements for the Optical Layer
Control Plane," Internet draft, May 2004.

3. D. Awduche, Y. Rekhter, J. Coltun, "Multi-Protocol Lambda Switching:
Combining MPLS Traffic Engineering Control with Optical Crossconnects."

4. R. Ramaswami and K. N. Sivarajan, Optical Networks: A Practical
Perspective, San Francisco, CA: Morgan Kaufmann Publishers, Inc., 1998.

5. Stamatios V. Kartalopoulos, Introduction to DWDM Technology, Data in a
Rainbow, IEEE Press, New York, 2000.

6. J. U. White, "Long optical paths for large aperture," J. Opt. Soc. Amer.,
vol. 32, no. 5, pp. 285-288, May 1942.

7. J. U. White, "Very long optical paths in air," J. Opt. Soc. Amer., vol. 66,
no. 5, pp. 411-416, 1976.

8. F. Galton, "Kinship and correlation," North American Review 150 (1890),
419-431. Reprinted in Statistical Science 4 (1989), 81-86.

9. E. S. Pearson, J. W. Tukey, "Approximate means and standard deviations
based on distances between percentage points of frequency curves," Biometrika
(1965), 38.

10. Rodney Loudon, The Quantum Theory of Light, Oxford University Press.

11. R. Mital, Design and Demonstration of a Novel Optical True Time Delay
Technique using Polynomial Cells based on White Cells, Ph.D. Dissertation.

12. John G. Proakis, Digital Communications, McGraw-Hill, Inc. 2nd. ed., 1989

13. High Speed Digital Design,

14. Politi, C. T.; Haunstein, H.; Schupke, D. A.; Duhovnikov, S.; Lehmann, G.;
Stavdas, A.; Gunkel, M.; Mårtensson, J.; Lord, A., "Integrated Design and
Operation of a Transparent Optical Network: A Systematic Approach to Include
Physical Layer Awareness and Cost Function," IEEE Communications Magazine,
February 2007.

15. P. S. André, J. L. Pinto, A. L. J. Teixeira, M. J. N. Lima, and J. F. da
Rocha, "Bit error rate assessment in DWDM transparent networks using optical
performance monitor based on asynchronous sampling," presented at the 2002
Optical Fiber Communication Conference, Anaheim, Calif., 17-21 March 2002.

16. H. Chen, A. W. Poon, and X. -R. Cao, "Transparent Monitoring of Rise Time
Using Asynchronous Amplitude Histograms in Optical Transmission Systems," J.
Lightwave Technol. 22, 1661- (2004)

17. Y. C. Chung, "Optical monitoring technique for WDM networks," in
Proceedings of IEEE/LEOS Summer Topical Meetings 2000 (IEEE, New York, 2000),
pp. 43-44.

18. G. Rossi, T. E. Dimmick, and D. J. Blumenthal, "Optical performance
monitoring in reconfigurable WDM optical networks using subcarrier
multiplexing," J. Lightwave Technol. 18, 1639-1648 (2000).

19. I. Shake, H. Takara, S. Kawanishi, and Y. Yamabayashi, "Optical signal
quality monitoring method based on optical sampling," Electron. Lett. 34,
2152-2153.

20. Saleh, Bahaa E. A. / Teich, Malvin Carl, Fundamentals of Photonics, Wiley
Series in Pure and Applied Optics, 1st ed., September 1991.

21. Clifford R. Pollock, Fundamentals of Optoelectronics, Richard D. Irwin,
Inc., Chicago (1995).

22. "Telecommunications: A Boost for Fibre Optics," Z. Valy Vardeny, Nature
416, 489-491, 2002.


23. Fisher, R.A. (1956) Statistical Methods and Scientific Inference. Oliver and Boyd,
Edinburgh. (See p. 32.)

24. A. Chiu et al., "Features and Requirements for The Optical Layer Control Plane,"

25. G. R. Hill et al., "A transport network layer based on optical network
elements," J. Lightwave Technol., vol. 11, pp. 667-679, 1993.

26. G. Rossi, T. E. Dimmick, and D. J. Blumenthal, "Optical performance
monitoring in reconfigurable WDM optical networks using subcarrier
multiplexing," J. Lightwave Technol., vol. 18, pp. 1639-1648, 2000.

27. K.-P. Ho and J. M. Kahn, "Methods for crosstalk measurement and reduction
in dense WDM systems," J. Lightwave Technol., vol. 14, pp. 1127-1135, 1996.

28. T. Takahashi, T. Imai, and M. Aiki, "Automatic compensation technique for
timewise fluctuating polarization mode dispersion in in-line amplifier
systems," Electron. Lett., vol. 30, pp. 348-349, 1994.

29. G. Ishikawa and H. Ooi, "Polarization-mode dispersion sensitivity and
monitoring in 40-Gbit/s OTDM and 10-Gbit/s NRZ transmission experiments," in
Conf. Optical Fiber Communication (OFC) 1998, 1998, pp. 117-119.

30. M. Rohde, E.-J. Bachus, and F. Raub, "Monitoring of transmission
impairments in long-haul transmission systems using the novel digital control
modulation technique," in Europ. Conf. Optical Commun. (ECOC), 2002.

31. T. E. Dimmick, G. Rossi, and D. J. Blumenthal, "Optical dispersion
monitoring technique using double sideband subcarriers," IEEE Photon. Technol.
Lett., vol. 12, pp. 900-902, 2000.

32. M. Teshima, M. Koga, and K. I. Sato, "Performance of multiwavelength
simultaneous monitoring circuit employing arrayed-waveguide grating," J.
Lightwave Technol., vol. 14, pp. 2277-2286, 1996.

33. L. E. Nelson, S. T. Cundiff, and C. R. Giles, "Optical monitoring using
data correlation for WDM systems," IEEE Photon. Technol. Lett., vol. 10, pp.
1030-1032, 1998.

34. K. J. Park, S. K. Shin, and Y. C. Chung, "Simple monitoring technique for
WDM networks," Electron. Lett., vol. 35, pp. 415-417, 1999.

35. C. T. Chang, J. A. Cassaboom, and H. F. Taylor, "Fibre-optic delay-line
devices for R.F. signal processing," Electron. Lett. 13, 678-680, 1977.

36. J. E. Bowers, S. A. Newton, W. V. Sorin, and H. J. Shaw, "Filter response
of single-mode fibre recirculating delay lines," Electron. Lett. 18, 110-111,
1982.

37. K. P. Jackson, S. A. Newton, B. Moslehi, M. Tur, C. C. Cutler, J. W.
Goodman, and H. J. Shaw, "Optical fiber delay-line signal processing," IEEE
Trans. Microwave Theory Tech. MTT-33, 193-209, 1985.

38. S. A. Newton, K. P. Jackson, and H. J. Shaw, "Optical fiber V-groove
transversal filter," Appl. Phys. Lett. 43, 149-151, 1983.

39. G. W. Euliss and R. A. Athale, "Time-integrating correlator based on
fiber-optic delay lines," Opt. Lett. 19, 649-651, 1994.

40. A. G. Podoleanu, R. K. Harding, and D. A. Jackson, "Low-cost high-speed
multichannel fiber-optic correlator," Opt. Lett. 20, 112-114, 1995.

41. G. K. Chang, G. Ellinas, J. K. Gamelin, M. Z. Iqbal, and C. A. Brackett,
"Multiwavelength reconfigurable WDM/ATM/SONET network testbed," J. Lightwave
Technol., vol. 14, pp. 1320-1340, June 1996.

42. D. C. Kilper, R. Bach, D. J. Blumenthal, D. Einstein, T. Landolsi, A. W.
Willner, "Optical Performance Monitoring," Journal of Lightwave Technology,
vol. 22, no. 1, 2004.

43. T. Lou, Z. Pan, S. M. R. Motaghian Nezam, L. S. Yan, "PMD Monitoring by
Tracking the Chromatic-Dispersion-Insensitive RF Power of the Vestigial
Sideband," Photonics Technology Letters, vol. 16, no. 9, 2004.

44. K. Asahi, M. Yamashita, T. Hosoi, K. Nakaya, and C. Konoshi, "Optical
performance monitor built into EDFA repeaters for WDM network," in Tech. Dig.
OFC '98, San Jose, CA, Feb. 1998, paper Th02, pp. 318-319.

45. J. L. Wegener, T. A. Strasser, J. R. Pedrazzani, "Fiber grating optical
spectrum analyzer tap," in Eur. Conf. Optical Communication (ECOC '97),
Edinburgh, Scotland, Sept. 1997.

46. R. A. Sprague and C. L. Koliopoulos, "Time integrating acousto-optic
correlator," Appl. Opt. 15, 89-92, 1976.

47. R. J. Berinato, "Acousto-optic tapped delay line filter," Appl. Opt. 32,
5797-5809, 1995.

48. J. Capmany, J. Cascón, D. Pastor, and B. Ortega, "Reconfigurable
fiber-optic delay line filters incorporating electrooptic and electroabsorption
modulators," IEEE Photon. Technol. Lett. 11, 1174-1176, 1999.

49. B. L. Anderson, A. Durresi, D. Rabb, F. Abou-Galala, "Real-Time All-Optical
Quality of Service Monitoring Using Correlation and a Network Protocol to
Exploit It," Applied Optics, 42(5) pp. 1121-1130, March 2004.

50. S. J. B. Yoo, Wavelength conversion technologies for WDM network
applications, J. Lightwave Technol., vol. 14, pp. 955-966, June 1996.

51. Modulator Designer Guide,

52. L.E. Nelson, S. T. Cundiff, and C.R. Giles, Optical Monitoring Using Data
Correlation for WDM Systems, IEEE Photonics Technology Letters, Vol. 10, No.
7, July 1998.

53. P. R. Prucnal and M. A. Santoro, "Spread spectrum fiber-optic local area
network using optical processing," Journal of Lightwave Technology, vol. LT-4,
pp. 547-.

54. D. M. Gookin and M. H. Berry, "Finite impulse response filter with large
dynamic range and high sampling rate," Applied Optics, vol. 29, pp. 1061-1062.

55. G. W. Euliss and R. A. Athale, "Time-integrating correlator based on
fiber-optic delay lines," Optics Letters, vol. 19, pp. 649-651, 1994.

56. A. G. Podoleanu, R. K. Harding, and D. A. Jackson, "Low-cost high-speed
multichannel fiber-optic correlator," Optics Letters, vol. 20, pp. 112-114,
1995.

57. Y. L. Chang and M. E. Marhic, "Fiber-optic ladder networks for inverse
decoding coherent CDMA," Journal of Lightwave Technology, vol. 10, pp.
1952-1062.

58. K. P. Jackson, S. A. Newton, B. Moslehi, M. Tur, C. C. Cutler, J. W.
Goodman, and H. J. Shaw, "Optical fiber delay-line signal processing," IEEE
Transactions on Microwave Theory and Techniques, vol. MTT-33, pp. 193-209,
1985.

59. B. Moslehi, "Fiber-optic filters employing optical amplifiers to provide
design flexibility," Electronics Letters, vol. 28, pp. 226-228, 1992.

60. B. Moslehi and J. W. Goodman, "Novel amplified fiber optic recirculating
delay line processor," Journal of Lightwave Technology, vol. 10, pp.
1142-1146, 1992.

61. P. Petropoulos, N. Wada, P. C. Teh, M. Ibsen, W. Chujo, K. I. Kitayama,
and D. J. Richardson, "Demonstration of a 64-chip OCDMA system using
superstructured fiber gratings and time-gating detection," IEEE Photonics
Technology Letters, vol. 13, pp. 1239-1241, 2001.

62. B. L. Anderson, D. J. Rabb, C. M. Warnky, F. M. Abou-Galala, "Binary
Optical True Time Delay Based on the White Cell: Design and Demonstration,"
IEEE Journal of Lightwave Technology, 24(4), pp. 1886-1895, April 2006.

63. B. L. Anderson, C. D. Liddle, "Optical true-time delay for phased array
antennas: demonstration of a quadratic White cell," Applied Optics, 41(23),
pp. 4912-4921.

64. C. M. Warnky, R. Mital, B. L. Anderson, "Demonstration of a quartic cell,
a true-time-delay device based on the White cell," IEEE Journal of Lightwave
Technology, 24(10), pp. 3849-3855, October 2006.

65. V. Argueta-Diaz, B. L. Anderson, "Optical cross-connect system based on
the White cell and three-state MEMS: Experimental demonstration of the quartic
cell," Applied Optics 45(19), pp. 4658-4668, 2006.