D_GVD = −(λ/c) (d²n/dλ²)   [1.2]
where λ is the wavelength of the transmitted signal, c is the speed of light, and n is the refractive index of the uniform medium.
In our analysis we treat the signal as an analog signal, where we only consider the shape of the signal; hence all types of dispersion mentioned above have the same effect on our correlation measurement.
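To make the units in equation [1.2] concrete, it can be evaluated numerically. The Python sketch below uses a hypothetical value of d²n/dλ² chosen to reproduce the familiar ~17 ps/(nm·km) dispersion of standard single-mode fiber near 1550 nm; neither the numeric value nor the function name comes from this text.

```python
C = 3.0e8  # speed of light, m/s

def dispersion_param(wavelength_m, d2n_dlambda2):
    """Material dispersion D = -(lambda/c) * d^2 n / d lambda^2, in s/m^2."""
    return -(wavelength_m / C) * d2n_dlambda2

# Hypothetical curvature of n(lambda), chosen to give a familiar number:
D = dispersion_param(1.55e-6, -3.3e9)   # s/m^2
print(D * 1e6)   # ~17 ps/(nm km), typical of standard fiber at 1550 nm
```

Multiplying by 1e6 converts s/m² to the engineering units ps/(nm·km) used in fiber data sheets.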
1.2.4 Noise
Noise in optical communication links is introduced either in active optical elements along the optical link or at the receiver end. Noise produced along a DWDM optical link is primarily due to Amplified Spontaneous Emission (ASE) [2,4], which is light produced by spontaneous emission and amplified in an optical gain medium. ASE is generated in active optical elements such as optical amplifiers and light sources. ASE is directly proportional to the signal power and inversely proportional to the amplifier's gain and the link's bandwidth. Optical amplification nodes, such as the Erbium Doped Fiber Amplifier (EDFA), are intended to amplify the amplitude of the optical signal only; however, background noise and transmission link noise get amplified as well, in addition to the generated ASE. The spectrum of the background noise is often wide; however, some of that noise can land near or on the signal's wavelength spectrum and impair the signal through interference between the signal and the noise. This noise affects the receiver's ability to properly decode the optical signal and hence introduces errors.
The noise is quantified in terms of the Optical Signal to Noise Ratio (OSNR), which is mathematically defined as:

OSNR_dB = 10 log₁₀ (P_SIGNAL / P_NOISE)   [1.3]
where P_SIGNAL is the optical signal power and P_NOISE is the optical noise power. The higher the OSNR, the better the signal quality. This measure is frequently used as a design factor when determining the QoS requirements of an optical link.
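As a quick numerical check of equation [1.3], the following Python sketch computes the OSNR for made-up signal and noise powers (the function name and the values are illustrative only):

```python
import math

def osnr_db(p_signal, p_noise):
    """OSNR in dB per equation [1.3]: 10*log10(P_SIGNAL / P_NOISE).
    Both powers must be in the same linear units (e.g. mW)."""
    return 10.0 * math.log10(p_signal / p_noise)

# A 1 mW signal over 1 uW of noise gives a 30 dB OSNR.
print(osnr_db(1e-3, 1e-6))
```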
The noise added to the signal is often approximated as Gaussian noise that affects the entire data stream traversing a specific link with the same probability [12,15,17]. We use this approximation in our simulation presented in Chapter 3.
1.2.5 Jitter
In optical telecommunications, jitter is defined as a variation in signal characteristics between consecutive pulses, such as a variation in the pulse width and/or the phase of the pulse [23]. In our analysis we only consider temporal variations, such as variations in the pulse interval or in the signal frequency. This assumption is based on the way we treat and analyze the output signal, which is considered incoherent. Jitter is often quantified based on the type of variation measured, which in our case is a displacement of the pulse peak.
1.3 Link Quality Measurement
In the telecommunication industry there already exist several standard methods
for measuring the quality of an optical link and the overall BER of the transmission link.
One standard technique is to directly measure the BER using a BER tester. A bit error occurs when a transmitted signal gets corrupted by an internal or external event that causes, for example, the reception of a '0' when a '1' was transmitted. The BER is a statistical measure of how often these errors occur. For BER measurements to be statistically significant, at least 100 errors need to be collected at the receiver end. This
requires a lot of time, typically several seconds to several minutes. In optical data communication links the BER is expected to be below 10⁻⁹ for a good connection, so the test time required for a 95% confidence interval is correspondingly long. A second standard technique estimates the signal quality from the eye diagram using the Q-factor, defined as:

Q_factor = (μ₁ − μ₀) / (σ₁ + σ₀)   [1.4]
where μ_x and σ_x are the mean value and the standard deviation of the impairment measured, and the subscript x denotes the bit value, 0 or 1. Once the Q-factor is quantified, the BER is calculated using the formula described in equation [1.5]:
BER = (1/2) erfc( Q / √2 )   [1.5]
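Equations [1.4] and [1.5] are easy to exercise numerically. The following Python sketch (with invented eye-diagram statistics) shows how a Q-factor maps to a BER; it also reflects the well-known rule of thumb that Q ≈ 6 corresponds to a BER near 10⁻⁹:

```python
import math

def q_factor(mu1, mu0, sigma1, sigma0):
    """Q-factor from the level means and standard deviations, equation [1.4]."""
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """BER from the Q-factor, equation [1.5]: BER = (1/2) erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Hypothetical eye statistics: levels 1.0 and 0.0 with equal spreads of 0.08.
q = q_factor(1.0, 0.0, 0.08, 0.08)   # Q = 6.25
print(q, ber_from_q(q))
```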
Both BER testers and eye diagram measurements are slow, as they require the accumulation of a very large number of samples in order to obtain a valid statistical measurement. In addition, the signal has to be converted to the electronic domain, in which it is analyzed. The need to accumulate a large number of samples, typically millions, to achieve a reliable measurement is a bottleneck, as it limits how fast an action can be taken when an error occurs. As we discussed earlier in this chapter, next-generation optical networks require real-time response to signal failure or signal degradation, which existing techniques cannot offer. To solve this problem, new link monitoring methods are being developed, such as the OPM we describe in this dissertation.
1.4 OPM: Existing Methods
Optical performance monitoring (OPM) has recently been introduced in the literature. One approach, based on asynchronous amplitude histograms, has been proposed and shown to be promising [15,16]. In this method a small amount of power is tapped from the optical link and used to measure the quality of the link. This eliminates the need for Optical-Electronic-Optical (OEO) conversion of the main data signal and maintains the
signal in the optical domain. The method is based on a statistical approach, where the tapped optical signal is first collected on a high-bandwidth photodiode. Next, the generated electrical signal is asynchronously sampled at a rate lower than that of the signal. The samples are then collected and an amplitude histogram is generated, representing the frequency of occurrence of digital '0's and '1's and everything in between. Figures [1.5a] and [1.5b] show two amplitude histograms, one generated using asynchronous sampling (a) and, to validate the results, one generated using synchronous sampling (b). Using information obtained from the shape and amplitudes of the generated histograms, the Q-factor is calculated and then related to the BER of the signal.
Figure 1.5: Reproduced from [15]. Amplitude histograms generated using asynchronous (a) and synchronous (b) sampling
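The statistical idea behind the asynchronous histogram can be sketched with synthetic data. In the Python sketch below, the noise level, sample count, and bin layout are all invented for illustration and do not come from [15]; the point is simply that the two histogram modes correspond to the '0' and '1' levels.

```python
import random

random.seed(1)

# Asynchronously sampled amplitudes of a noisy on-off signal (synthetic data):
# each sample lands on a random bit, '0' or '1', plus Gaussian receiver noise.
samples = [random.choice([0.0, 1.0]) + random.gauss(0.0, 0.05)
           for _ in range(100_000)]

# Build a simple amplitude histogram with 20 bins over [-0.25, 1.25].
bins = [0] * 20
for x in samples:
    i = int((x + 0.25) / 1.5 * 20)
    if 0 <= i < 20:
        bins[i] += 1

# The two modes of the histogram correspond to the '0' and '1' levels;
# their separation and spread yield a Q-factor estimate.
print(bins)
```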
The asynchronous amplitude histogram technique requires gathering a large number of samples (at least one million samples are usually needed) before a meaningful result is obtained [25]. This requires a sampling time of multiple milliseconds, which is not acceptable if the device is to be used in real-time protection or provisioning of all-optical links. This technique, however, meets the transparency requirement imposed by next-generation networks, as the monitoring technique is independent of the bit rate or modulation format. Amplitude histogram measurements also do not take all kinds of impairments into consideration, such as dispersion; the system always assumes the use of dispersion-compensated fiber.
Amplitude Power Spectrum (APS) analysis techniques have recently been deployed for monitoring and analyzing the behavior of any type of signal transmitted over an optical dispersive/noisy channel. In such analysis, signals are treated as analog waveforms, and the techniques are often independent of the data format or bit rate, which is very desirable in monitoring all-optical networks in order to achieve the desired goal of complete transparency.
Figure [1.6] shows the general block diagram of an APS monitor, where a single DWDM channel is shown. A low-frequency subcarrier (SC) is added to the data stream (baseband signal). The baseband signal is combined with the SC either electronically or optically, and the combined signal is then used to modulate the laser transmitter. A unique SC frequency is transmitted on each DWDM channel [18,26,31] (or a separate channel on a different wavelength (λ) can be dedicated to monitoring [13]). The optical fiber channel is then tapped at any point in the transmission line, and the SC is filtered out and monitored. The SC tone can be detected either by using an electrical band-pass filter (BPF) after photodetection (as shown in figure [1.6]) or by optical pre-filtering prior to photodetection. The idea behind the APS technique is to superimpose a narrowband spectral signal or an RF tone (referred to as a subcarrier tone or pilot tone) on the optical baseband data signal. The SC signal travels exactly the same path as the baseband signal (the original data). The subcarrier is extracted at intermediate nodes throughout the optical channel and monitored without disrupting the original signal. The average power
and shape of the subcarrier can be directly related to those of the baseband signal, hence providing information about the OSNR and the dispersion that the original data encountered. Crosstalk, which is a nonlinear impairment, can be measured via the crosstalk encountered by SC tones in adjacent DWDM channels.
Figure 1.6: Reproduced from [18,26]. Block diagram of an APS monitor
There are several constraints on the SC signal that need to be taken into consideration when using APS techniques. Since the SC tone is transmitted over the baseband data signal, we have to make sure that no interference occurs between the two transmitted signals. For this requirement to hold, the SC tone frequency has to be higher than the spectral tail of the data signal, such that no crosstalk can occur between the two signals [31]. Figure [1.7] shows the frequency spectrum of the baseband signal along with a subcarrier signal for a certain WDM channel.
Figure 1.7: Reproduced from [31]. Illustration of the frequency spectrum of both the data signal and the SC signal
Furthermore, the depth of modulation (the power or strength of modulation of the SC) has to be sufficiently smaller than that of the baseband signal. Precautions also need to be taken when monitoring the WDM channels' power levels, since power fluctuations (gain or loss) may occur at the transmitter. Therefore, it is important to fix the SC's power level relative to the channel's power [26]. Further constraints are imposed on the O-E module (e.g. photodetector) of the monitor circuit. Since the signal is not monitored at the receiver and is usually tapped somewhere along the channel, it is important to set the sensitivity of the O-E interface to be much higher than the downstream receiver's sensitivity. This ensures accurate measurements and accounts for any additional signal degradation that may occur between the tap and the downstream receiver.
Although APS monitoring techniques may seem to be a good solution for many OPM applications, they still suffer from several weaknesses. They require modification of the transmitter to add the SC generation circuitry, which could be a major problem due to physical limitations in long-haul networks. The monitoring speed of such techniques is limited by how fast electronics (the O-E module) can go; this could be a bottleneck in applications that require high-speed fault detection and restoration. In addition, there are several SC-specific obstacles that need to be overcome before any of these techniques could be standardized [2,12,13].
Other all-optical monitoring systems have been proposed [42,45,51]; however, most of them deal only with a specific type of signal impairment, such as dispersion or jitter, and often make assumptions about the network layout that severely limit the existing configuration of the core optical network. In the next section we describe our proposal for OPM in all-optical networks and show how it compares to other existing methods.
1.5 OPM: Our Proposal
Let us first summarize the motivation behind OPM in next-generation all-optical networks. When a link is found to be unhealthy (e.g. a link failure due to a cable cut, or signal degradation due to impairments in the link), an immediate action needs to be taken either to set up an alternative path for the data (link restoration) or to switch to a backup path that has already been established (link provisioning). At high bit rates (>10 Gbps) and with DWDM channels of very high bandwidth, a large amount of data can be lost within a very small period of time; for example, in a link disruption of 1 ms at a bit rate of 40 Gbps, a total of 40 million bits would be lost (equivalent to 10,000,000 traditional phone lines). Therefore, using the previously discussed techniques to keep track of a link's health is neither adequate nor scalable, as the network won't be capable of reacting fast enough to link failures.
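As a sanity check on the numbers above, losing 40 million bits at 40 Gbps corresponds to an outage of one millisecond:

```python
bit_rate_bps = 40e9        # 40 Gbps DWDM channel
outage_s = 1e-3            # a 1 ms link disruption
bits_lost = bit_rate_bps * outage_s
print(f"{bits_lost:.0f} bits lost")   # 40 million bits
```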
We can outline the essential features required in any OPM technique that will satisfy the next-generation Internet's network requirements (i.e. transparency, reliability, protection, and real-time link provisioning). The monitoring system should be:
• Independent of the transmitted signal format
• Fully implemented in the optical domain
• Capable of near-instantaneous error detection (picoseconds)
In this dissertation we introduce a novel approach to the problem of optical link health monitoring. We propose a new monitoring technique based on the use of optical correlation, where a known bit stream or test signal (e.g. [0 1 0]) is transmitted either continuously or as a burst at a known frequency. It is very important to keep in mind that the data transmitted over the bit stream is irrelevant to our technique, as we treat the signal as a pure analog signal and analyze its amplitude and shape. The test signal is sent over a dedicated channel (i.e. a different wavelength) and gets multiplexed with the data stream in a DWDM system. The test signal is affected by all the impairments that the data channel encounters, as it travels the same path as the data. The signal is picked up either at intermediate nodes along the link or at the receiver's end by tapping a small portion of the signal's power. The signal is then optically correlated with a clean version of the transmitted bit stream. Information from the correlation output (amplitude, side lobes, rise/fall time, frequency components, and others) is then extracted and used to set a threshold that indicates whether or not the transmission link meets the quality of service (QoS) and performance requirements specified by the carrier. The thresholding is implemented either in the optical domain using an optical saturable absorber or electronically using a fast comparator.
1.6 Physical Implementation
Our approach is based on a time-integrating or temporal optical correlator (TOC). The correlator is physically implemented using an N-tap delay line (TDL), N weight elements (one at each tap), and an N-input summer. In general terms, the TOC is used to measure how different a deteriorated bit sequence (received at its input) is from a clean replica of the transmitted bit sequence (i.e. one not affected by any impairments). At the input of the TOC, the received signal r(t) is split into N copies, where each copy is delayed by a discrete time increment τ. Each copy is then optically multiplied by a weight function s_k(t), where s_k represents the weight function at the k-th tap in the TDL. Each weight function is chosen to represent the pulse shape for a [0 1 0] or other specific sequence and is determined by the transmitted bit sequence and the number of taps implemented. Finally, the amplitudes of the delayed copies are summed incoherently (no phase components), resulting in an output C(t) that represents the cross-correlation function between the delayed copies and the weight functions. Coherent summing could also be used.
Additionally, we propose a new design for an optical correlator based on the White cell that can produce hundreds or even thousands of delays with a tolerable amount of loss. The White cell [7] technology has been adapted by the optics research group at The Ohio State University and used in several applications such as optical true time delay, optical switching, and others. We will discuss the detailed design of the White cell-based TOC in Chapter 2.
1.7 Routing Protocol based on OPM
Information obtained from OPMs needs to be included when calculating new routes for data signals, or backup routes when signal failures occur, in next-generation Internet networks. The routing decision needs to be based on how healthy the overall path is between the transmitter and the receiver or between intermediate nodes. Current routing protocols primarily base routing decisions on the shortest available path to the receiver, where the shortest path is measured in terms of the number of hops (or spans), the propagation delay, the blocking probability of intermediate nodes, or a combination of those and other factors [1,35]. OPM data will need to be added as an additional factor in the routing decision formula in order to sustain the reliability requirements of future networks.
Recently, this topic has been actively addressed in the research field. Most of the proposed ideas are based on pre-knowledge of the network topology and its physical parameters, such as link lengths, the type of fiber used in each link, the number of active elements, etc. [1,3]. Pre-knowledge of network topology and parameters requires a lot of processing and storage power, and furthermore conflicts with the network transparency requirement. Other ideas suggest establishing routes based on worst-case scenarios [3], meaning that switches have to choose their paths based on the behavior of the worst link along the path. This approach results in low bandwidth utilization, and the literature shows that the results are rarely reliable for making routing decisions due to changing network dynamics [35].
The final goal of routing in the optical domain is to increase the revenue of the network and maintain the level of QoS promised to customers. The choice of a good route-computation algorithm is essential to the performance of these networks. The physical capacity available in optical data networks has increased (theoretically) to several Tbps with optical switching. How much of this available physical capacity can be utilized reliably depends on the route-computation algorithm used. With each fiber link capable of carrying 40 Gbps or more of data, the impact of even a few percent improvement in the usable network capacity is significant; it can be of the order of hundreds of Gbps, if not Tbps.
In joint work with the Computer and Information Science Department at The Ohio State University [49], a new route-computation algorithm, called the Domain Optical Routing Protocol (DORP), was proposed. The protocol combines intelligent routing with the immediate availability of information about signal quality provided by the optical correlator-based OPM. The route-computation algorithm defines the weight of a link using both its available capacity and its quality. The proposed distributed protocol requires the nodes inside a domain to exchange availability and quality information, where a domain is defined as a subsection of nodes geographically spaced within a pre-specified distance. Inside the domain, all nodes exchange link state information, which includes availability and quality. Between domains, border nodes advertise the aggregate cost to pass through the corresponding domains. In this way the domain cost is more up to date and more meaningful for the purpose of routing.
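To illustrate how link quality can enter a route computation, the sketch below runs Dijkstra's algorithm over a toy graph whose edges carry both a cost and an OPM-style quality score; links below a quality threshold are excluded from consideration. This is a hypothetical illustration of the idea only, not the actual DORP weight formula from [49], and all names and values are invented.

```python
import heapq

def best_path(graph, src, dst, q_threshold=0.9):
    """Dijkstra over a graph whose edges carry (cost, quality).
    Links below the quality threshold (as an OPM might report) are skipped."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, (cost, quality) in graph.get(u, {}).items():
            if quality < q_threshold:      # unhealthy link: exclude from routing
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Toy 4-node network; the short A-B-D path has one degraded link (quality 0.5),
# so routing falls back to the longer but healthy A-C-D path.
g = {"A": {"B": (1, 0.99), "C": (2, 0.98)},
     "B": {"D": (1, 0.50)},
     "C": {"D": (2, 0.97)}}
print(best_path(g, "A", "D"))   # ['A', 'C', 'D']
```

Lowering the threshold re-admits the degraded link, and the shorter A-B-D path wins again, mimicking the behavior of a purely availability-based protocol.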
The division of the network into domains is done to avoid problems associated with network scalability. For example, if the network size is extended, then the distribution of the link state information takes more time, making the information itself stale and misleading for the purpose of routing. Also, it is known that the Internet is composed of autonomous networks and that, for administrative reasons, it is impossible to distribute detailed link state information among all such networks. To overcome these two problems the proposed protocol is based on domains.
Some preliminary simulations of the effectiveness of DORP are presented in [49]. Figure [1.8] shows a comparison between availability-based protocols (such as OSPF and RSVP) and DORP, using the NSF network with 16 nodes, 25 links, and 4 wavelengths per link. Results indicate that DORP outperforms the availability-based routing protocols in generating more revenue. In this simulation the revenue is given by the number of accepted calls. The QoS factor is simulated by randomly dropping the quality of only one link below an acceptable threshold for short periods of time (a few seconds). In availability-based routing, calls that use links with quality below the threshold do not generate revenue, whereas DORP, using the information provided by the optical correlator, avoids these links and hence all accepted calls generate revenue. For more than one link with quality below the threshold, the advantage of DORP over the availability-based protocol increases.
Figure 1.8: Reproduced from [49]. Performance analysis of DORP against an availability-based routing protocol
1.8 Document Organization
The dissertation is organized as follows. In Chapter 2 we explain the theoretical principles behind optical correlation and discuss the different types of optical correlators available. We then discuss how optical correlation can be used in OPM. Later in the chapter we introduce a new design for a temporal optical correlator (TOC) based on the White cell and discuss the details of the design.
Chapter 3 describes some of the simulations performed to support our design. We present simulation results describing how the different types of impairments affect the correlation output. We finally relate our results to industry-standard monitoring techniques and explain how we can relate our measurements to the BER of the transmitted signal.
In Chapter 4, we describe in detail the OPM proof-of-concept experimental apparatus implemented. We divide the setup into five sections, namely the input system, MEMS, impairment generation circuitry, TOC, and output system, and explain the design details of each.
Chapter 5 discusses the experimental results obtained using the proof-of-concept setup and compares them to their expected theoretical values. We first describe the procedure used to align the optics in the White cell-based TOC. We then show the correlation output results obtained and how the correlation function responds to each of the impairment types discussed in Chapter 1. We finally present a detailed power loss analysis of the system and discuss its feasibility.
Finally, in Chapter 6 we conclude the dissertation with suggestions for future work that could be implemented to enhance the current design.
CHAPTER 2 THEORY
2.1 Introduction
In this chapter we explain the theory behind optical correlation and how we use correlation techniques in optical performance monitoring (OPM). We also describe in detail the design of a new optical correlator based on the linear White cell. In section 2.2 we briefly discuss the concept of correlation and its significance in signal processing applications. Then, in section 2.3, we explain how we utilize the correlation function in OPM applications. In sections 2.4, 2.5 and 2.6, we describe in detail the design of a new White cell-based optical correlator and discuss its advantages.
2.2 Principle of Correlation
The concept of correlation was introduced in 1890 by the English statistician Galton [8]. He defined the relationship between any pair of statistical events or processes through the concept of statistical regression. His definition was considerably extended throughout the twentieth century, and a new time-dependent measure was introduced, now termed the correlation function.
The correlation function is defined depending on the field of study being considered, and not all definitions are identical. Although most definitions quantify the correlation between two random variables at a specific time, or between different time instants of the same variable, the mathematical representation can vary.
Statistically, the correlation function ρ_X,Y between two random variables X and Y, with expected values (i.e. mean values) E(X) and E(Y) and standard deviations σ_X and σ_Y, is defined as:

ρ_X,Y = cov(X, Y) / (σ_X σ_Y) = [E(XY) − E(X) E(Y)] / ( √(E(X²) − E²(X)) · √(E(Y²) − E²(Y)) )   [2.1]
where the numerator represents the covariance between the two variables and the denominator represents the product of their finite, nonzero standard deviations. The absolute value of ρ_X,Y cannot exceed 1. A value of 0 indicates no correlation, as occurs, for example, when the variables are independent. For any value of ρ_X,Y whose magnitude is less than 1, the function is termed a cross-correlation function. Correlation values close to 1 indicate a high degree of similarity between the two variables, and a value of 1 indicates a complete match between the two variables, termed autocorrelation. The values between 0 and 1 define the strength of the correlation and are usually referred to as correlation coefficients. The correlation function can be constructed from those coefficients by direct averaging of the time-dependent function. The averaging process can be thought of as an extension of the mean square value.
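Equation [2.1] can be checked with a few lines of Python; the helper name and the sample data below are invented for illustration:

```python
import math

def pearson(xs, ys):
    """Correlation coefficient per equation [2.1]:
    cov(X, Y) / (sigma_X * sigma_Y), using sample moments."""
    n = len(xs)
    ex, ey = sum(xs) / n, sum(ys) / n
    exy = sum(x * y for x, y in zip(xs, ys)) / n
    ex2 = sum(x * x for x in xs) / n
    ey2 = sum(y * y for y in ys) / n
    return (exy - ex * ey) / (math.sqrt(ex2 - ex**2) * math.sqrt(ey2 - ey**2))

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # perfectly correlated: close to 1.0
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))   # anti-correlated: close to -1.0
```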
In signal processing, the function is defined somewhat differently. The definition is used without normalization, that is, without subtracting the mean and dividing by the standard deviation. For this definition we consider the example of a data signal transmitted over an optical transmission medium (e.g. a fiber optic link), which is our interest in this dissertation. The two variables of interest are the signal s(t) transmitted over the fiber optic link and the received signal r(t), where r(t) is delayed by a time variable τ due to transmission, and t is the reference time. The correlation function, Φ(τ), is defined by the integral:
Φ(τ) = ∫_{−∞}^{+∞} s(t) r(t + τ) dt   [2.2]
The infinite limits indicate that the correlation function is continuous over an infinite data stream. In the discrete domain, we can rewrite Φ for a finite number of samples of the received signal as the summation:
Φ(t) = Σ_{k=0}^{N−1} s_k(t) r(t − kτ)   [2.3]
where N is the number of samples of interest. We will use the definition in equation 2.3 for our analysis and simulations throughout the rest of this document.
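A direct transcription of equation 2.3 into Python (a sketch, with a unit tap delay and a function name of our choosing) shows the autocorrelation peak for a matched [0 1 0] pattern:

```python
def discrete_correlation(s, r):
    """Discrete cross-correlation per equation [2.3]:
    phi[t] = sum_k s[k] * r[t - k], evaluated at each output shift t
    (tap delay normalized to one sample)."""
    n, m = len(s), len(r)
    out = []
    for t in range(n + m - 1):
        acc = 0.0
        for k in range(n):
            if 0 <= t - k < m:
                acc += s[k] * r[t - k]
        out.append(acc)
    return out

# A clean [0, 1, 0] reference against an identical received pattern:
print(discrete_correlation([0, 1, 0], [0, 1, 0]))  # peak of 1 in the center
```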
2.3 Optical Correlation for OPM
In optics, correlation functions are commonly used in interferometry to quantify the degree of coherence between electromagnetic waves [10]. Correlation functions are also well known in the signal processing literature, primarily as encoders and decoders for optical code division multiple access (OCDMA) [22].
Optical correlators come in two basic styles: spatial and temporal. In temporal correlation, a time-varying signal (e.g., intensity or phase) is compared to a reference time-varying signal using, for example, an acousto-optic device [46,47,48] or an optical tapped delay line [35,36,37,38,39,40]. The result of the comparison is then summed or integrated to produce the correlation output. Temporal correlators based on tapped delay lines exhibit low power attenuation and can produce a large number of delays, ranging from picoseconds to tens of nanoseconds.
Spatial correlators, on the other hand, are widely used in image detection and processing applications. They usually take advantage of holograms, for example, to compare a two-dimensional image with some reference image [20]. Spatial-integrating correlators are much faster than time-integrating ones, processing approximately 10¹⁰ samples/sec, which is about three orders of magnitude higher than time-based integrators [xx]. Their disadvantage is the limited range of spatial shifts (equivalent to delays) possible, since delays are usually produced in a crystal by a spatial shift, which is in the range of femtoseconds or tens of femtoseconds. The losses can be much higher too.
Our approach is based on a temporal optical correlator (TOC) using an optical tapped delay line. Although any optical tapped delay line can be used in a temporal-type optical correlator, we introduce a novel one in this dissertation that is based on the White cell. We show that our White cell-based correlator outperforms existing temporal correlators in the number and range of delays produced by a factor of 100 or more, with power losses below 7 dB.
2.4 Time-Integrating Optical Correlator (TOC)
The TOC is implemented using a tapped delay line (TDL), a set of weighting or reference elements, and an optical summer. The correlation takes place between the received signal r(t), after going through the optical link, and a reference signal representing a copy of the original transmitted signal. The reference signal present at the TOC is represented by the weighting elements, s(t). The weights can be amplitude weights or phase weights: amplitude weights are either 1 or 0, whereas phase weights are implemented by phase shifts of either 0 or π.
In figure [2.1] we show the structure of a TOC, where the correlator is physically implemented using an N-tap TDL. The input to the correlator is a distorted square pulse with a frequency of 1/T. The signal is delayed and multiplied by N weighting elements representing the original square pulse. The outputs are then all summed, producing a correlation output with a period of 2T.
Figure 2.1: N-tap time-integrating optical correlator. The figure shows the correlation output between a degraded input and the weighting elements at each tap
The correlation output is generated as follows. The received time-varying signal r(t) enters the TDL, where a small amount of the power is siphoned off at each tap. Each tap is delayed relative to the next by a fixed time increment τ. Each time-shifted replica r_k is multiplied by the weight s_k present at each tap, and the resulting products are summed. The result is the correlation function of equation 2.3 between the deteriorated test signal r_k and the TDL weights s_k. As described previously, if the two signals are identical, Eq. (2.3) becomes an autocorrelation, and Φ(t) will have a high peak in the center of the time slot and low side lobes. If the signals are less well matched, Eq. (2.3) becomes a cross-correlation function: the peak decreases, whereas the information in the sides of the pulse increases. Information from the correlation output, such as peak amplitude, side lobes, rise/fall time, frequency components, and others, is extracted and processed (optically or electronically). The processed data is then compared to a reference threshold or a reference function to indicate whether or not the tested transmission link meets the signal quality requirements specified by the carrier.
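The behavior described above can be mimicked numerically. In the Python sketch below, a clean and a degraded version of the same pulse pass through a 5-tap delay-line model with amplitude weights; the degraded pulse shape is invented, and the point is simply that its correlation peak drops below the autocorrelation peak:

```python
def toc_output(received, weights):
    """Time-integrating correlator: delayed copies of the received signal,
    multiplied by the tap weights and summed incoherently (amplitudes only)."""
    n, m = len(weights), len(received)
    return [sum(weights[k] * received[t - k]
                for k in range(n) if 0 <= t - k < m)
            for t in range(n + m - 1)]

weights  = [0, 1, 1, 1, 0]             # reference pulse sampled at 5 taps (hypothetical)
clean    = [0, 1, 1, 1, 0]             # undistorted received pulse
degraded = [0.1, 0.7, 0.9, 0.7, 0.1]   # noisy, dispersed version (made up)

print(max(toc_output(clean, weights)))     # autocorrelation peak: 3
print(max(toc_output(degraded, weights)))  # lower cross-correlation peak
```

Thresholding the peak value then decides whether the link meets the required quality, exactly as described in the text.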
The key element of the TOC is the tapped delay line: the more taps, the higher the resolution of the correlation output. One way to implement taps is with fiber splitters. Figure [2.2] shows two common styles: the tree in (a) consists of a 1xN splitter followed by N lengths of fiber, each fiber longer than the previous one by a distance of one span [53,54,55,56]; the other type in (b) uses 2x2 couplers/splitters in various types of lattices [57,58,59,60]. The number of splitters is equal to the number of taps, and for each tap there is a separate, precisely cut length of optical fiber. Such designs are not scalable, as the amount of power loss increases dramatically with each splitter added. Additionally, the lengths of the fibers have to be cut very precisely in order to ensure correct delays.
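The scalability problem is visible in a one-line power budget: an ideal 1xN split alone costs 10·log₁₀(N) dB, before any excess loss (the excess-loss parameter below is a hypothetical add-on, not a figure from this text):

```python
import math

def splitter_loss_db(n_taps, excess_db=0.0):
    """Ideal 1xN split loss: each tap carries 1/N of the input power,
    i.e. 10*log10(N) dB, plus any excess loss (hypothetical model)."""
    return 10.0 * math.log10(n_taps) + excess_db

for n in (8, 64, 1024):
    print(n, round(splitter_loss_db(n), 1))   # 9.0, 18.1, 30.1 dB
```

A thousand-tap splitter tree would therefore lose roughly 30 dB per tap from the split alone, which motivates the free-space White cell approach described next.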
Another recent approach uses fiber Bragg gratings (FBGs) [61], where the gratings are imprinted at various distances along the fiber. As the light beam enters the fiber, portions of the beam are reflected at each grating, resulting in multiple reflected beams with different delays. Such technologies become impractical to implement if a very large number of taps is needed. The largest number of taps reported using FBGs is 63 [61]. This means a maximum resolution of only 64 samples, which may not be enough for a high-resolution correlation output. In addition, each grating in an FBG needs to be long in order to achieve high reflectivity, which introduces ambiguity in the time delay.
Figure 2.2: Two types of tapped delay lines. (a) 1xN splitter followed by N fibers
of different lengths, each providing a different delay. (b) 2x2 couplers/splitters,
where each splitter accounts for a single tap
In our design, the implementation of the correlator is based on a free space
approach rather than fibers. The correlator utilizes the concept of the White cell, which is
described in the next section.
2.5 White cell principle
The White cell [6] was introduced by J. White in 1942 for the purpose of
spectroscopy, specifically for measuring low-pressure vapor spectra. Since then, the
White cell has been adapted and utilized in many other applications such as optical true
time delay [62,63], optical computing [7], and optical reflectometry [7].
The White cell is a free-space optical device consisting of three spherical mirrors
with equal radii of curvature, R, as shown in figure [2.3]. The three mirrors are arranged such that
one mirror, mirror M, faces the other two, mirrors A and B. Mirror M is referred to as
the field mirror, and mirrors A and B as the object mirrors. The distance between the
mirrors is equal to their radius of curvature, R = 2f, where f is the focal length. The center
of curvature of mirror M, CC(M), is located between mirrors A and B, while the centers
of curvature of both A and B, CC(A) and CC(B), are located on mirror M. CC(A) is
located a small distance above the center of mirror M, while CC(B) is located the same
distance below the center.
Figure 2.3: The original White cell with three spherical mirrors
Figure [2.4] shows how light propagates through the White cell. The light first
enters the White cell using an input turning mirror (ITM). The input light beam is
focused onto the ITM, which is tilted such that the light beam gets directed towards
mirror A as shown in figure [2.4a]. Mirror A sees the input spot on the input turning
mirror as an object and images it to a new spot (bounce 1) on mirror M. As we see in
figure [2.4b], the location of the new spot is formed at an equal and opposite distance, y1,
from the center of curvature of A, CC(A). Meanwhile, mirror M sees the light on mirror
A as an object and reimages it onto mirror B at an equal and opposite distance, y2, from
CC(M). The process repeats and the second bounce is formed similarly on mirror M at
an equal and opposite distance from CC(B).
Figure 2.4a,b,c: Beam propagation in the original White cell
2.5.1 Beam Propagation in the White Cell
The reimaging process between the field mirror and the object mirrors generates
a spot pattern on mirror M. The number of spots generated is controlled by the distance
and/or the diameter of mirror M. The location of these spots is controlled by the location of
the input spot(s) and the locations of the centers of curvature of mirrors A and B with
respect to the optical axis of mirror M. Figure [2.5] illustrates a specific spot pattern as
viewed on the front of mirror M. As we see in the figure, there are eight generated spots
that toggle back and forth around the center of mirror M until the final spot eventually
walks off the edge of mirror M. The final spot is picked up by an output turning
mirror (OTM), which usually directs the beam outside the White cell, where the beam is
analyzed or further processed.
Figure 2.5: Single input bounce pattern on mirror M in the White cell
It is also possible for multiple beams to circulate in the White cell with each beam
tracing a unique spot pattern. Figure [2.6] shows the spot pattern formed for each of
three input beams indicated by three different spot colors.
Figure 2.6: Multiple inputs bounce pattern on mirror M
Note in figure [2.6] that each input spot follows an independent path without
interfering with the other beams before leaving mirror M. Now, if each spot is made to
land on a pixelated reflective surface whose angle we can control, each beam can then be
manipulated independently at any given bounce. This property will be exploited in the
design of our White cell-based TOC.
2.5.2 White cell Imaging Conditions
In order for a White cell to function as described above, two imaging
conditions have to be maintained at all times. First, mirror M has to
image onto itself through either of the object mirrors A or B with a total magnification of
1. Second, each of the object mirrors A and B has to image onto the other through
mirror M.
2.6 White cell-based TDL
In our design we adapt the White cell for use as the tapped delay line of the
temporal correlator. To do so, we make several modifications to the original White
cell, as illustrated in figure [2.7] and highlighted in red. First, we
replace White cell mirror M with a Micro-Electro-Mechanical System (MEMS) and a
field lens along each arm. The MEMS consists of a two-dimensional array of mirrors,
each of which can be tipped to a certain angle, either +θ or −θ, or left flat with
respect to the pixel's normal. In addition, we add an additional White cell arm, arm C,
which will eventually be used to produce delays, as shown in the figure.
Figure 2.7: Modifications made to the original White cell (shown in red)
The White cell arms are placed such that one arm, arm B, lies along the
MEMS normal, while the other two arms, A and C, are positioned at angles equal to
twice the tip angle of the MEMS pixels (i.e. 2θ). In figure [2.8] we show three pixels,
where one is flat and the other two are tipped at angles ±θ. Consider a beam coming
from object mirror B and striking a pixel along the normal to the MEMS surface plane;
the deflection angle of the beam will depend on the tip angle of the mirror. If the pixel
is tipped to an angle of +θ, the beam is deflected by twice the tip angle, +2θ, which
sends the beam to arm A. Similarly, if the pixel is tipped to −θ, the beam is deflected
by −2θ, toward arm C.
Figure 2.8: Pixel tip angles and deflected beam angles
Note that the mirrors alone provide a reflective surface; however, since they are
flat (as opposed to spherical, like the original mirror M), the imaging conditions discussed
in section 2.5.2 no longer hold. To fix this problem we add a field lens placed at a
calculated distance from the MEMS plane along each of the arms. Ideally, a spherical
mirror with a radius of curvature R_f_lens is equivalent to a flat mirror right next to a lens
with a focal length f_f_lens = R_f_lens / 2. The focal lengths of the field lenses we used were
slightly different, chosen to compensate for the separation between the field lens and the
MEMS while maintaining the White cell imaging conditions.
2.6.1 White cell delay arm
In the original White cell, a beam bounces a fixed number of times and hence
encounters a fixed time delay. The total delay is proportional to the separation between
the object mirrors and the field mirror, in other words, to the distance light has to travel
in the White cell before exiting. In the modified White cell shown in figure [2.7] above,
this same delay is produced in the White cell containing arms A and B along with the
MEMS. The delay increment of our White cell-based tapped delay line, Δτ, discussed in
section 2.3, is produced in arm C. We modify arm C, shown in figure [2.8], to
produce a longer time delay by increasing the separation between the MEMS and mirror
C. We will refer to arm C in our discussion as the delay arm.
Beams circulating in the White cell TDL can either bounce back and forth
between mirrors A and B or get sent to the delay arm. The delay produced in the White
cell A-B-MEMS will be considered a null delay or zero delay, and we will refer to this
White cell as the null cell. Beams that visit the delay arm get delayed by Δτ for each round
trip in arm C, compared to the time it takes to make a round trip to B. Hence, by
controlling the number of times a beam is sent to the delay arm, we can control the total
delay a beam accumulates before it exits the White cell.
2.6.2 Design Constraints
Delay arm design
In order to maintain the imaging conditions in the delay arm, an even number of
lenses (a lens train) is added between the field lens and mirror C. Other methods of
producing time delays in the White cell, such as using glass or silicon blocks [62], have
been demonstrated.
The lens train contains a group of lenses placed such that the first lens, lens 1, is
located at a conjugate plane (CP) of mirrors A and B, that is, at the same distance
from the MEMS. The second lens is identical to lens 1 and is placed at a distance equal to
twice their focal length (2f_delay). Figure [2.9] shows the optical layout of the delay arm
along with the locations of the images produced in the arm.
Figure 2.9: Delay arm showing the image locations produced by each lens
First, the field lens sees the MEMS as an object and forms a virtual image of the
MEMS at a plane located behind the MEMS (1st image in the figure). Lens 1 then sees
this image as an object and produces a real image of the MEMS between lens 1 and lens 2
of the delay arm (2nd image in the figure). The second lens in the lens train, in turn, treats
this image as an object and, through mirror C, produces an image of the MEMS (3rd image
in the figure) with a magnification of 1 back at the same location as the 2nd image.
The lenses are chosen such that the second image is located at a distance from mirror C
equal to the radius of curvature of mirror C. The rays forming the image then
follow the same path backwards to the MEMS with a total magnification of 1, hence
conforming to the White cell imaging conditions.
Note that the total delay produced is due to the extra distance that the beam
travels, which is highlighted in blue in the figure. The delay produced is always going to
be a multiple of Δτ. The delay increment, Δτ, can be calculated as shown in equation (2.4):

Δτ = D/c + 2·th1·n1/c + 2·th2·n2/c    [2.4]

where c is the speed of light in air, D is the round-trip distance (D = 8·f_delay), th1 and th2 are
the thicknesses of lenses 1 and 2 in the delay arm, and n1 and n2 are their respective
refractive indices.
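As a rough numerical illustration, the delay increment can be evaluated directly. The focal length, lens thicknesses, and refractive indices below are assumptions chosen for illustration, not the values used in the apparatus:

```python
# Hedged numerical sketch of Eq. (2.4). The values of f_delay, th1, th2,
# n1, and n2 below are illustrative assumptions, not the thesis values.
C = 299_792_458.0  # speed of light, m/s (the text uses the speed in air)

def delay_increment(f_delay, th1, n1, th2, n2):
    """Delta-tau = D/c + 2*th1*n1/c + 2*th2*n2/c, with D = 8*f_delay."""
    D = 8.0 * f_delay  # round-trip distance in the delay arm, meters
    return D / C + 2.0 * th1 * n1 / C + 2.0 * th2 * n2 / C

# Example: 250 mm delay-lens focal length, 5 mm thick lenses with n = 1.5
dt = delay_increment(0.250, 0.005, 1.5, 0.005, 1.5)  # a few nanoseconds
```

With these assumed numbers the free-space term D/c dominates; the glass of the two lenses adds only a small correction.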
Field lens design
There are two constraints to consider when determining the separation
between the MEMS and the field lens. First, we have to make sure that the optical mounts
housing both pieces can fit side by side. Second, we have to make sure that all the beams
leaving the MEMS and diverging towards any of the White cell mirrors are captured
by the field lens's clear aperture. As a rule of thumb, we always try to keep the
separation between the MEMS and the field lens as small as possible in order to reduce
the overall size of the entire system.
Input beam considerations
As beams circulate in the White cell, they are focused onto a column of spots
on the MEMS pixels at each bounce. The size of the focused spot is critical in the
design of the White cell components. The spot size has to be small enough to fit on a
MEMS pixel to avoid power loss, but not so small that it diverges too fast and gets
apertured by the optics used.
Both constraints were considered when designing the input optics. The input
system is designed to produce an input spot size such that the MEMS pixel captures more
than 99.99% of the beam's energy. Additionally, the pitch between adjacent beams is set to
match the pixel pitch. In our calculations we approximate the input beam as a perfect
Gaussian beam with a beam waist ω₀. This approximation holds with little error since
the beam enters the input system from a single-mode fiber array.
We calculate the ratio between the spot size and the pixel size by finding the ratio
between the power landing on the MEMS pixel and the total power of the same Gaussian
beam. We assume square pixels of dimension a to simplify the calculations. The
electric field of a Gaussian beam is represented by the following equation:

E(x, y) = A·e^{−(x² + y²)/ω₀²}    [2.5]

where A is a constant and x and y are the beam's position variables. We will drop A in
the remaining calculations as it won't affect the final result. To calculate the power ratio,
we integrate the intensity of the Gaussian beam over the pixel's area and divide by the
total power [11].
P_pixel / P_total = [ ∫_{−a/2}^{+a/2} e^{−2x²/ω₀²} dx · ∫_{−a/2}^{+a/2} e^{−2y²/ω₀²} dy ] / [ ∫_{−∞}^{+∞} e^{−2x²/ω₀²} dx · ∫_{−∞}^{+∞} e^{−2y²/ω₀²} dy ]    [2.6]
Since x = y for a square pixel, we simplify the integral by substituting
u = (√2/ω₀)·x = (√2/ω₀)·y, which in turn gives du = (√2/ω₀)·dx = (√2/ω₀)·dy.
Equation [2.6] now becomes
P_pixel / P_total = [ ∫_{−a/(√2·ω₀)}^{+a/(√2·ω₀)} e^{−u²} du / ∫_{−∞}^{+∞} e^{−u²} du ]² = erf²( a / (√2·ω₀) )    [2.7]
We further simplify equation [2.7] and set P_pixel / P_total = 0.9999. The equation
becomes
erf²( a / (√2·ω₀) ) = 0.9999    [2.8]
Finally, we rewrite equation [2.8] in terms of the pixel dimension, a, to get

a = 2.8678·√2·ω₀ ≈ 4·ω₀    [2.9]

Hence we choose the beam waist, ω₀, to be 1/4th of the pixel dimension.
Further discussion of the design of the input optics is included in chapter 4.
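The ratio in equation [2.9] can be checked numerically. The following is a sketch of that check (our own code, using only the standard-library error function, not part of the thesis apparatus):

```python
import math

# Numerical check (a sketch, not the thesis code) of Eqs. (2.8)-(2.9):
# find the pixel-size-to-beam-waist ratio a/w0 at which a centered
# square pixel captures 99.99% of a Gaussian beam's power,
# i.e. erf(a / (sqrt(2) * w0))**2 = 0.9999.
def capture_ratio(a_over_w0):
    z = a_over_w0 / math.sqrt(2.0)
    return math.erf(z) ** 2

# Bisection for the ratio giving exactly 99.99% capture.
lo, hi = 1.0, 8.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if capture_ratio(mid) < 0.9999:
        lo = mid
    else:
        hi = mid
a_over_w0 = 0.5 * (lo + hi)  # about 4.06, i.e. a is roughly 4 * w0
```

The bisection converges to a/ω₀ ≈ 4.06, consistent with the ≈ 4ω₀ result of equation [2.9].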
2.6.3 Linear White Cell-based Tapped Delay Line (TDL)
Recall that a beam encounters a delay of Δτ each time it is sent to mirror C instead
of B. If the total number of times a beam bounces in the cell is m, where a bounce is
defined as each time the beam hits a pixel, then the maximum delay a beam can
accumulate is N_linear·Δτ, with N_linear = m/2. The delay line is termed linear since the
number of delays is linear in m. The factor of m/2 arises because the beam has to visit
mirror B every other bounce, which means that it takes two bounces to produce one delay.
Figure [2.10] shows the setup of a linear White cell. As mentioned earlier, the
MEMS pixels can be tipped to ±θ. This allows beams to bounce from any arm to
any other, so beams can bounce between A and B, A and C, or B and C. For example, for
beams coming from arm A to go to arm B, the pixel has to be tipped to +θ, and for beams
going from B to C, the pixel is set to −θ, and so forth. We will refer to arm B as the switching
arm or the decision arm, since all beams have to go to arm B before being sent to arm C or
arm A, and hence delayed or not.
Figure 2.10: White cellbased TDL highlighting the null cell, the switching arm, and
the delay arm
In table [2.1] we show the beam progression required to produce the various
delays possible with the linear White cell. The beam is assumed to have entered the
White cell through the ITM and to be directed towards mirror B via the MEMS. In the
table we chose a beam array size of six beams; hence, the beam array uses six
pixels on each MEMS bounce. The total number of bounces needed to produce a total of six
delays is 14, which includes one input bounce and one output bounce.
Delay amount    Beam progression
0    ITM→B→A→B→A→B→A→B→A→B→A→B→A→B→OTM
1    ITM→B→C→B→A→B→A→B→A→B→A→B→A→B→OTM
2    ITM→B→C→B→C→B→A→B→A→B→A→B→A→B→OTM
:    :
6    ITM→B→C→B→C→B→C→B→C→B→C→B→C→B→OTM
Table 2.1: Bounce pattern to produce different amounts of delay
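The bookkeeping behind Table 2.1 can be sketched in a few lines. The encoding below (a list of arm labels, one per round trip through mirror B) is our own illustration, not the control software of the apparatus:

```python
# Sketch of the Table 2.1 bookkeeping: every other bounce visits the
# switching mirror B; each B->C round trip adds one delay increment
# relative to a B->A round trip. This encoding is our own illustration.
def bounce_pattern(total_round_trips, n_delays):
    """Arm visited after each trip to B: first n_delays go to C, rest to A."""
    assert 0 <= n_delays <= total_round_trips
    return ["C"] * n_delays + ["A"] * (total_round_trips - n_delays)

def total_delay(pattern, dt):
    """Accumulated delay: one increment dt per visit to the delay arm C."""
    return pattern.count("C") * dt

# Six round trips reproduce the rows of Table 2.1: 0 to 6 increments.
```

For example, `bounce_pattern(6, 2)` reproduces the "delay 2" row, and its total delay is 2·Δτ.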
Although we only demonstrate a design with a fairly small number of delays,
other White cell designs capable of producing a larger number of delays have been
successfully demonstrated at The Ohio State University, such as the quadratic
cell. The quadratic cell has two delay lines instead of one and can produce a number of
delays proportional to m² [11]. The quartic cell, which consists of four different delay
lines, produces a number of delays proportional to m⁴ [63]. For example, a quartic
system with 17 bounces can produce a total of 624 different taps. This corresponds
to a resolution of 624 samples per correlation, which is better than existing optical delay lines
by a factor of ten. An octic cell [64] can outperform other existing techniques by a factor
of a hundred. The design of the WC-based optical correlator could easily be scaled to use
any of the aforementioned cells under the same principles discussed in this document.
We note that our design is a proof-of-concept design capable of producing
only six to ten delays, limited primarily by the size of the MEMS. The linear cell was
implemented in this design because of its simplicity and because of limited funding. For a
higher-resolution TDL, a higher-order polynomial cell or a binary cell [62] would need to be
implemented.
The design of the output summer of the correlator depends on which cell design
is chosen. In the next section we consider only a design that is suitable for our proof-of-concept setup.
2.6.4 Weighting Elements and Beam Summation
We discussed the implementation of the TDL of our temporal correlator and
explained how each beam in the input beam array gets delayed separately before exiting
the White cell. We next describe the implementation of the remaining two parts of the
correlator, namely, the weighting elements and the optical summer.
Each beam leaving the TDL gets multiplied by an amplitude or phase weighting
element. In this work we elected to use amplitude weighting. This choice allows us to
sum the beams incoherently on a single photodetector or a photodetector array, which
simplifies the control and stability requirements on the summing optics. In addition, we
assume the data is modulated using the Non-Return-to-Zero (NRZ) modulation format, which
is widely used in optical telecommunications. Additionally, we
simplify our apparatus by assuming perfectly square pulses, so that the weights are all
either ones or zeros (light or no light). In our system these weights are implemented
optically with a simple shutter: the ones pass through to the summer and
the zeros get blocked. Since we block the beams that we don't want, we can achieve the
same result by applying the weights before the TDL. Hence, we only generate the beams
that we want to pass and simply do not generate the beams we would block.
We now have our delayed replicas of the incoming signal, and they have been
appropriately weighted with the s(t)s. It remains to sum them. We note that each light
beam leaves the TDL at a unique location. We want, however, for each beam to arrive at
the same output spatial location but separated in time. We can do one of two things. If
the number of beams (or taps of the TDL) is small as in the case of the linear cell design
that we implemented, we can focus the spot array with a lens down onto a photodiode
with a relatively large active area (e.g. from 0.5 mm to 5 mm) while still keeping the
beams separate. Note that there is a tradeoff between the photodiode's time response (i.e.
speed or bandwidth) and its active-area size, which has to be considered when choosing
the right detector for the system based on the data rate used.
If the number of beams (i.e. taps) is larger, however, we can use an optical
summer based on a White cell interconnection device that is very similar to the time
delay device just described. It is called a trapdoor summer and was developed and
patented at The Ohio State University [65]. It uses a micromirror array and the union of
several White cells. This technique was not implemented in our design and is included
here only for completeness.
We point out that the hardware in our design is very simple: just a few mirrors
and lenses, a photodetector, and a single MEMS. Note that the MEMS pixel angles are
fixed and that the beam path is the same for a given beam array size. Thus, although our
design was implemented using a MEMS, we could easily replace the MEMS
with a fixed micromirror array in which the pixels are micromachined to the desired angles.
The latter option would be a much cheaper solution that could easily be scaled to a
larger number of taps. We therefore expect this approach to performance monitoring to be not
only much faster, but also far cheaper than existing solutions such as BER testers or eye-diagram
monitors utilizing high-speed real-time scopes.
CHAPTER 3 SIMULATIONS AND ANALYSIS
3.1 Introduction
In this chapter we show simulation results for the effect of various impairments
on the shape and amplitude of the correlator's output. All simulations were conducted
using MATLAB (The MathWorks, Inc.). We specifically show the effects of
attenuation, dispersion, jitter, and noise. To validate our results, we then compare them
with what would be obtained from an eye-diagram measurement made with a
real-time oscilloscope. In section 3.2 we present simulation results
showing the effect of attenuation, dispersion, noise, and jitter on the correlation function.
In section 3.3 we discuss our simulations and analyze our results. In section 3.4 we
present a relationship between our results and the BER of the test signal. Finally, in
section 3.5, we show the effect of the number of taps in the TDL on the correlation
function.
3.2 Impairment Simulations
The test signal used is a series of three bits, [0 1 0], with each test signal sampled
with 500 samples, which corresponds to 500 taps in the TOC. Consecutive test signals
are each delayed by a time-delay element. The test signal is impaired by
artificially adding attenuation, dispersion, and so forth. The degraded signals are then
correlated with a clean [0 1 0] sample. The simulation results show how each type
of impairment affects the correlation output.
3.2.1 Attenuation and Dispersion
Figure [3.1a] shows the correlation function for received signals subjected to
attenuation only. From the figure we see that the height of the correlation peak varies
linearly with percent attenuation, measured as the percent reduction of the original signal's
amplitude and shown in 10% increments.
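The original simulations were done in MATLAB; the following NumPy sketch is our own reconstruction of the attenuation experiment, correlating attenuated copies of a 500-sample [0 1 0] signal against a clean reference:

```python
import numpy as np

# Sketch (our reconstruction, not the original MATLAB code) of the
# attenuation simulation: correlate attenuated copies of a [0 1 0] test
# signal, 500 samples long, against a clean reference and record the
# correlation peak for each attenuation level.
n = 500
ref = np.zeros(n)
ref[n // 3: 2 * n // 3] = 1.0           # clean [0 1 0] pattern

peaks = []
for atten in (0.0, 0.1, 0.2, 0.3):      # attenuation in 10% increments
    sig = (1.0 - atten) * ref
    corr = np.correlate(sig, ref, mode="full")
    peaks.append(corr.max())
# The peak height falls linearly with the attenuation fraction.
```

Because correlation is linear in the signal, 10% attenuation reduces the peak by exactly 10%, matching the linear behavior seen in figure [3.1a].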
To model dispersion we used the unattenuated signal and modified the shape of
the sides of the pulse. We used one half-cycle of a raised-cosine function to transition
from 0 to 1, and again from 1 to 0. We defined percent dispersion as the fraction of the
actual bit period occupied by the transition. When the dispersion reaches 50%, the rising
and falling transitions meet in the middle of the bit. The raised-cosine function can be
expressed as shown in equation [3.1] [12]:
s(t) = 1,                                   |t| < (1 − α)T/2
s(t) = ½ [1 − sin( π(2|t| − T) / (2αT) )],  (1 − α)T/2 ≤ |t| ≤ (1 + α)T/2
s(t) = 0,                                   |t| > (1 + α)T/2          [3.1]
where T is the pulse width and α is the fractional dispersion. For example,
a value of α = 0.3 indicates 30% added dispersion.
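A vectorized sketch of this pulse model follows. It is our own reconstruction of the MATLAB simulation described in the text, implementing the piecewise raised-cosine shape of equation [3.1]:

```python
import numpy as np

# Sketch (our reconstruction) of the dispersion model in Eq. (3.1):
# a [0 1 0] pulse whose rising and falling edges are raised-cosine
# half-cycles occupying a fraction alpha of the bit period T.
def raised_cosine_pulse(t, T, alpha):
    """Eq. (3.1): flat top, raised-cosine transitions, zero outside."""
    at = np.abs(t)
    s = np.zeros_like(t)
    flat = at < (1 - alpha) * T / 2
    edge = (at >= (1 - alpha) * T / 2) & (at <= (1 + alpha) * T / 2)
    s[flat] = 1.0
    s[edge] = 0.5 * (1 - np.sin(np.pi * (2 * at[edge] - T) / (2 * alpha * T)))
    return s

t = np.linspace(-1.0, 1.0, 1001)           # time in bit periods, T = 1
pulse = raised_cosine_pulse(t, 1.0, 0.3)   # alpha = 0.3, 30% dispersion
```

The pulse is 1 over the flat top, passes through 0.5 exactly at |t| = T/2, and reaches 0 at |t| = (1 + α)T/2, as the piecewise definition requires.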
Figure [3.1b] shows the effect of dispersion alone on the correlation signal. The
shape and amplitude of the resulting correlation functions are shown for varying amounts
of dispersion, ranging from 0% to 50%. Here we see two effects: the amplitude is
reduced, and the peak becomes more curved as part of the signal's energy is transferred
outside the pulse.
Figure 3.1a,b: Auto/cross correlation function. (a) Effect of attenuation on the
correlation function; (b) Effect of dispersion on the correlation function
Information on both attenuation and dispersion can thus be extracted in a time of
3T, where T is the bit period. For example, for a 40 Gbps system, a period of 3T is equal
to 75 picoseconds.
3.2.2 Modeling Noise and Jitter
Noise and jitter must be measured statistically over multiple correlations. Noise
produces a variation in the peak and a slight variation in the shape of the correlation
signal, while jitter only affects the location of the correlation output in time.
In our simulations we assume the noise to be Gaussian and take the noise to be the
same for 1s and 0s. To measure the amount of noise affecting the signal, one might
repeat the test signal a hundred or a thousand times and measure the RMS variation in the
correlation peak height, either optically or electronically. Figure [3.2a] shows one
hundred correlation functions superimposed for the same test signal with 20% noise added
(i.e. Gaussian noise with a standard deviation σ = 0.2). Noise has the effect of adding
both time-offset and amplitude variations to the peak.
Jitter is modeled by shifting the received bit with respect to the reference signal.
The result is a correlation function that is also shifted in time, as shown in figure [3.2b].
For the purpose of simulation, the position of the pulse is shifted by a random amount
with a standard deviation σ_j expressed as a fraction of the bit period. As with noise, jitter
would be measured over many bits; we have shown 100 correlations superimposed.
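The repeated-correlation procedure described above can be sketched as a small Monte-Carlo loop. This is our own reconstruction; the circular shift used to model jitter is a simplifying assumption:

```python
import numpy as np

# Monte-Carlo sketch (our reconstruction) of the noise and jitter
# measurements: correlate 100 noisy, jittered copies of the test signal
# against a clean reference and collect the peak statistics.
rng = np.random.default_rng(0)
n = 500
bit = n // 3                            # samples per bit period
ref = np.zeros(n)
ref[bit: 2 * bit] = 1.0                 # clean [0 1 0] pattern

peak_heights, peak_positions = [], []
for _ in range(100):
    noisy = ref + rng.normal(0.0, 0.2, n)            # sigma = 0.2 noise
    shift = int(round(rng.normal(0.0, 0.10) * bit))  # sigma_j = 10% of bit
    sig = np.roll(noisy, shift)                      # circular shift as jitter
    corr = np.correlate(sig, ref, mode="full")
    peak_heights.append(float(corr.max()))
    peak_positions.append(int(corr.argmax()))

rms_peak = float(np.std(peak_heights))     # noise -> peak-height spread
rms_pos = float(np.std(peak_positions))    # jitter -> peak-position spread
```

As in figure [3.2], the noise shows up mainly as a spread in peak height, while the jitter shows up as a spread in peak position.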
Figure 3.2: Effect of noise and jitter on the correlation function. (a) One hundred separate
correlations superimposed for 20% noise; (b) one hundred superimposed
correlations, with jitter varying randomly with standard deviation σ_j = 10%
The question is, can one distinguish between the various kinds of impairments?
There are a couple of possibilities. First, suppose one draws a threshold at some
percentage of the ideal correlation peak amplitude, say 50%. Although both attenuation
and dispersion produce a reduced peak height and a reduced area above this threshold,
attenuation produces a narrower peak, whereas dispersion maintains the width but
introduces curvature. Thus, if one compares the total energy received with
the peak height, one can determine the degree to which each effect is present. This
requires extra processing time, but even if the signals are converted to electronic ones, the
processing time can be on the order of nanoseconds. Another possibility is to perform a
second correlation or optical matched-filtering operation to compare the correlation
output with an ideal output, measuring in effect the degree of curvature. On the other
hand, it may not be necessary to distinguish these effects at all, if the goal is only to
determine whether the link currently meets some particular quality threshold.
3.3 Simulation Results and Analysis
In practice, it may be easiest to measure the amount of energy received that
exceeds some threshold. Figure [3.3] shows this measurement for a signal containing
multiple impairments, for an arbitrary threshold of 50%, and for a time window
corresponding to the time interval over which an unimpaired signal would exceed 50%.
Although this assumes the existence of some reference clock, in our final design we aim
for a completely transparent monitoring system.
Figure 3.3: Measurement of the area of the correlation function that exceeds a
certain threshold during a specified time interval
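The measurement sketched in figure [3.3] is straightforward to express in code. The following is our own reconstruction, using attenuation as the example impairment:

```python
import numpy as np

# Sketch (our reconstruction) of the measurement in figure [3.3]: the
# area of the correlation output above a 50% threshold, restricted to
# the window where the *ideal* correlation exceeds that threshold.
def area_above_threshold(corr, ideal, threshold=0.5):
    level = threshold * ideal.max()
    window = ideal > level                       # reference time window
    return float(np.clip(corr[window] - level, 0.0, None).sum())

n = 500
ref = np.zeros(n)
ref[n // 3: 2 * n // 3] = 1.0
ideal = np.correlate(ref, ref, mode="full")              # unimpaired
impaired = np.correlate(0.8 * ref, ref, mode="full")     # 20% attenuation

ratio = area_above_threshold(impaired, ideal) / area_above_threshold(ideal, ideal)
# ratio < 1: the impaired signal keeps less area above the threshold.
```

The single number `ratio` plays the role of the quality metric plotted in figure [3.4]: it is 1 for a clean signal and falls as the impairments worsen.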
So far we have developed an understanding of how each degradation mechanism
affects the correlation peak, by considering the effect of each type of impairment
separately. We now consider what happens when two or more effects are combined. For
the purpose of illustration we show the correlation area as a function of jitter with varying
amounts of dispersion. We have again used an amplitude threshold of 50% of the ideal
maximum amplitude and a time window corresponding to the interval in which the ideal
correlation function exceeds 50%. In figure [3.4] we observe that the correlation
function becomes increasingly sensitive as the impairments worsen, for both jitter
and dispersion. We also note that the overall area remains fairly high (70% of maximum)
even for 50% jitter combined with 50% dispersion. This suggests that the signal-to-noise
ratio will remain high even for badly degraded signals.
Figure 3.4: Area of the correlation function that is greater than 50% threshold and
within the time window in which the ideal correlation function exceeds 50%. The
independent variable is jitter, with dispersion as a varying parameter
3.4 Relating Correlation to BER
If correlation is to be an effective measure of QoS, then it must relate directly to
the quality of the signal as measured by conventional means. We use a simplified eye
diagram and compare the open area of the eye with the area of the correlation peak. We
do this for various combinations of impairments.
Figure [3.5a] shows a simulated eye diagram. In an eye diagram, a long series of
bits is superimposed on an oscilloscope. Dispersion clips off the corners of the eye,
jitter narrows the open area (in time), and noise and attenuation close the eye. The larger
the open area, the better the signal. For a particular level of noise σ, expressed as a
percentage of the bit amplitude, we draw two lines, one at 2σ below the 1 level and one
at 2σ above the 0 level. For dispersion, we take a line whose slope is equal to that of
our simulated dispersion (see figure [3.1b]), taking the slope at the point where it crosses
50% amplitude. To include the effects of jitter, we then move these sloping lines towards
the inside of the eye by an amount equal to a specified amount of jitter (e.g. 10% of the
bit width). We then calculate the area of the eye enclosed by these lines.
Figure [3.5b] shows the eye-opening area calculated by this method for combined
jitter and dispersion. This figure can be compared directly with the correlation area
of figure [3.4]. Both figures show a decrease in area with an increase in the
amount of impairment.
Figure 3.5: (a) Simulated eye diagram. The shaded area is the open area of the eye;
(b) Variation in the open area of the eye diagram for combined jitter and dispersion
We can see that the correlation-function area is a reliable indicator of signal
impairment and thus of bit error rate. The advantages of using correlation are bit-rate
transparency, data-format transparency, speed (results are generated in a few bit periods
instead of in minutes), no OEO conversion, and significantly reduced hardware.
3.5 Number of Taps in the TOC
In this dissertation we propose the design of a novel optical correlator that is
capable of correlating a very large number of samples. We argue that the more taps the
correlator has, the higher the resolution of the correlation function. This is true; however,
how many samples do we really need to achieve a meaningful correlation result? Note
that increasing the number of taps would require only minor modifications to the TOC
design, with little extra hardware needed.
In figure [3.6] we show a simulation of how the shape of the correlation function
varies as the number of samples changes. The figure shows four plots, for tap resolutions
of six, eighteen, thirty, and sixty. As the number of taps increases, the resolution of the
correlation increases. Simulations show that the shape of the correlation function doesn't
vary much for tap resolutions higher than 50.
Figure 3.6: Effect of the number of taps on the correlation function's shape
Higher-resolution TOCs would have a more sensitive response to impairments. A
small variation in dispersion, for example, might not even show in a six-tap TOC, but
would be evident in a sixty-tap TOC.
CHAPTER 4 EXPERIMENTAL IMPLEMENTATION
4.1 Introduction
In this chapter we describe the experimental implementation of the optical
performance monitor (OPM) and the procedures followed to obtain the final quality-of-signal
results. In the following sections, we describe the equipment used in the
experimental apparatus and the design specifications of its different parts. Section 4.2
describes the input optics design. In section 4.3 we discuss in detail the specifications
of the MEMS used in the setup and how it is integrated into the TOC. In section 4.4 we
introduce the circuitry used to artificially generate the effects of the types of impairments
discussed earlier. Section 4.5 explains the design of the linear White cell. In section 4.6
we describe the design of our output system. Finally, in section 4.7, we show the optical
simulation results for our system, obtained using the OSLO optical design software.
Figure 4.1 shows a block diagram of the experiment. The figure is divided into
three main blocks: the input setup, the White cell-based tapped delay line, and the output
setup. In the following sections we will discuss each block in detail and show how the
blocks are integrated.
The experiment utilizes a diode laser at a wavelength of 1550 nm, which is in the
C band of the International Telecommunication Union (ITU) grid, occupying the
wavelength range from 1535.04 nm to 1565.50 nm. The continuous-wave output from
the laser is modulated using a Mach-Zehnder (MZ) interferometer. The modulator's RF
input is driven with an external circuit that produces an artificially impaired signal by
adding the effects of dispersion, attenuation, and noise to the modulated signal. The
modulated signal is split into six copies that enter the White cell-based correlator,
where the beams are delayed by different amounts.
The White cell consists of a handful of spherical mirrors and lenses in addition to
a microelectromechanical system (MEMS) that is used to control the beam path. Several
computers were also employed to capture the output beam profile and to control the
MEMS pixels.
Finally, an InGaAs high-speed photodetector is used to sum the beam array and
produce the final correlation output. In the figure, part of the output is also shown being
sent to a saturable absorber device that connects to an optical thresholding device. This is
an alternative approach that could be used in practice but, in the interest of cost, was not
implemented here; in this setup the signal is instead converted and processed electronically. All
the equipment used was mounted on a 4 ft by 10 ft optical table.
Figure 4.1: Experimental apparatus block diagram
4.2 Input System
The input to the system comes from a 60 mW continuous-wave (CW) laser with a
center wavelength at 1.55 µm (JDS Uniphase laser with fiber pigtail, class IIIb laser). The
output of the laser is butt-coupled to a single-mode (SM) polarization-maintaining (PM)
fiber with a mode field diameter of 9.5 µm.
The output of the laser is then externally modulated using a LiNbO3 MZ
interferometer specifically designed for microwave analog intensity modulation (JDSU
AM15011C2l2O2). The modulator's input is connected to SM/PM fiber (Fujikara
SM 13P8/125UV/UV100) while the output terminal is coupled into a standard SM
fiber (Corning SMF28). Both fibers were equipped with FC angle-polished
connectorized (APC) ends. The modulator has an upper cutoff frequency of 20 GHz,
which is underutilized in our experiment since our modulation frequencies are about a
factor of 1000 lower (tens of MHz).
Figure 4.2 shows the principle of operation of the MZ interferometer. The input
polarized light enters the modulator and is split at a Y-junction. Half the optical power
passes through each of the two waveguides, and the two beams are recombined at the
output port. The waveguide in one of the arms is made of an electro-optic material, in which
the refractive index varies as a function of applied voltage. If both optical paths have the
same refractive indices, then both beams undergo the same phase shift and interfere
constructively at the output terminal. If, however, we place a high-voltage electrode on
one of the two arms, we can vary the phase shift encountered by one of the beams with
respect to the other copy. At the output, the two beams interfere destructively,
constructively, or anywhere in between, causing the intensity of the output to vary as a
function of the applied voltage.
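The intensity-versus-voltage behavior just described follows the standard raised-cosine transfer function of an ideal MZ intensity modulator. A minimal sketch (the half-wave voltage V_PI here is a made-up illustrative value, not the specification of the JDSU device):

```python
import math

def mz_intensity(v, v_pi, i_in=1.0):
    """Ideal Mach-Zehnder transfer function: a drive voltage v induces a
    relative phase of pi*v/v_pi between the arms, and the recombined
    intensity follows cos^2 of half that phase."""
    return i_in * math.cos(math.pi * v / (2.0 * v_pi)) ** 2

V_PI = 5.0  # hypothetical half-wave voltage, volts

full = mz_intensity(0.0, V_PI)       # constructive interference: full power
null = mz_intensity(V_PI, V_PI)      # destructive interference: ~zero
quad = mz_intensity(V_PI / 2, V_PI)  # quadrature bias point: half power
```

Biasing at the quadrature point (V_PI/2), where the curve is steepest and most nearly linear, corresponds to the half-range bias used later in the experiment to maximize modulation depth.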
Figure 4.2: Principle of operation of an MZ modulator
[51]
The modulator used here has two electrical inputs, an RF input and a bias input.
The bias port is used to define the operating point on the intensity-voltage curve of the
modulator. The operating point of electro-optical modulators is usually defined as a
function of Vπ. The detector bandwidth is set by its rise time:

f_BW = 0.35 / t_r = 0.35 / (2.2 R_LOAD C_J) = 0.35 / 7.0 ns = 50 MHz    [4.1]
where R_LOAD and C_J in equation [4.1] are the load resistance and the diode junction
capacitance, respectively.
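As a quick sanity check of equation [4.1], the single-pole relations t_r = 2.2·R·C and f_BW = 0.35/t_r can be evaluated directly (the 7.0 ns rise time is the value implied by the 50 MHz cutoff; the R and C values below are illustrative, not datasheet numbers):

```python
def rc_rise_time(r_load, c_j):
    """10-90% rise time of a single-pole RC response: t_r = 2.2 * R * C."""
    return 2.2 * r_load * c_j

def bandwidth(t_r):
    """3 dB bandwidth of a single-pole system: f_BW = 0.35 / t_r."""
    return 0.35 / t_r

t_r = 7.0e-9                  # 7.0 ns rise time, seconds
f_bw = bandwidth(t_r)         # ~50 MHz, matching equation [4.1]
```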
Figure 4.13: Output arm location and the equipment used to sum the beams and
view the correlation output
Figure 4.14: Output optics and mounts, units are in mm
The detector is a common-anode photodiode mounted in a standard Thorlabs
SM05 threaded tube. Figure [4.15] illustrates the internal structure of the photodiode
with its SMA-terminated connector and the external bias circuitry connection.
Figure 4.15: Internal connections of the common-anode SM05PD4B photodiode
The detector's output is connected to a low-noise high-frequency photocurrent
amplifier (Melles Griot 13AMP007) to amplify the photocurrent to a level that is
distinguishable by the oscilloscope. The transimpedance gain of the amplifier is specified
as 6250 V/A, with an output RMS noise of 3.2 mV. The amplifier also has internal bias
circuitry, and it outputs a voltage signal that is fed directly into the oscilloscope.
The photodiode acts as a summer. When multiple optical beams are superimposed
such that they all land on the same spot, coming from the same direction, there
is fan-in loss; that is, if N beams are superimposed, the output power is reduced by 1/N
[65]. We avoid the fan-in loss discussed in Chapter 2 by taking advantage of the small number of
beams used: we demagnify the array of spots so that they all fall on a single detector, but
are still spatially separate. Note that, as they are demagnified, the rays from each spot
also arrive from slightly different directions. Each photon of light that lands on the
detector produces photocurrent, and the photocurrents are in effect temporally summed in
the diode, resulting in a composite signal, which is then the auto/cross-correlation output.
The output of the photodiode is then connected to a high-speed oscilloscope (HP
Infinium oscilloscope, 1.5 GHz, 8 GSa/s), where the correlation output is viewed and
analyzed.
4.7 Optical System Simulation
The optical system was first designed using ray-matrix optics under the
paraxial approximation (see appendices B and C), where all beams are assumed to be
close to the optical axis and all lenses are assumed to be thin. The system was then simulated
using optical design software, OSLO ver. 6.2. The simulation considered all
specifications of the optics used in the design: thickness, material, pixel tip angle,
and the limitations imposed by the beam array as opposed to a single beam. The design was
optimized to reduce the spherical aberration and astigmatism introduced in the White
cell. The main optimization goal, however, was to ensure that all the output beams are
imaged at the detector's plane and that all beams fit on the active area of the detector
(0.8 mm²).
The simulation was split into three phases. First, the input system was
optimized to produce the correct magnification required for each beam in the array to
land on the center of a MEMS pixel in the input column. Next, the White cell-based
TDL was simulated and optimized to ensure that all six beams land on the MEMS
pixels on each bounce. The beam size was also maintained at less than one third of the
pixel's area to ensure that 99.99% of the beam's energy is reflected off each pixel.
Finally, the output optics were simulated and optimized to demagnify the output beam
array from the White cell such that all beams fit on the 0.8 mm² active area of the
detector. All three phases were then combined to simulate the system as a whole, and the
optimization criteria listed above were verified.
Figure [4.16] shows a top view of the simulated design along with the cone of
rays traveling through the system. We clearly see that all beams are well confined within
the optics used in the design.
Figure 4.16: Optical Simulation of the linear White cellbased TOC, using OSLO
As mentioned earlier, we first simulated the input optics system. Figure [4.17]
illustrates the input system along with the beam-array spot diagrams. The beams' vertical
pitch was optimized to be 1.231 mm, which is equivalent to twice the pitch of
consecutive pixels. This corresponds to a magnification of 4.924. We also see that the
diameters of all beams are within our requirement of less than 110.0 µm ([1/3 ×
pixel's minor dimension] = [1/3 × 330 µm] = 110.0 µm). The simulated beam diameter is
shown to be 108.06 µm. The diffraction-limited spot size (Airy disk) of a Gaussian beam
is also shown. In the beam energy diagram in the bottom left corner of the figure we
show that the total beam energy is confined within a square area with a dimension of less
than 125 µm, which is less than half the size of the square pixel approximation discussed in
Chapter 2. The point spread function (PSF) of the beam, which describes the intensity
distribution of the beam in space, is shown in the top right corner. We see that the beam
follows a Gaussian profile and is confined within the pixel diameter.
Figure 4.17: Optical Simulations of the input optics used in the TOC design
The beams leaving the input optics are then fed into the White cell simulation file.
Figure [4.18] (top) shows the PSF of the center beam in the beam array at the last bounce on
the MEMS pixels (bounce 11) before exiting to the output arm. The bottom part of the
figure illustrates the variation in spot size and shape for different spots in the beam
array. The figure shows three spots in x and three in y, where x and y are the coordinates
of the MEMS. We notice that edge beams (beams toward the edge of the array)
encounter more aberration, primarily spherical aberration, as these beams travel along
the edge of the optics, resulting in a slight variation in the beam focus at the MEMS plane.
Additionally, edge beams strike the optics at larger angles than center beams, and hence
experience more astigmatism, resulting in more elongated spots.
Figure 4.18: Optical simulation of the output of the White cell part in the TOC
The output setup, consisting of a biconvex lens and an achromatic doublet, was
simulated next. In figure [4.19] we observe the simulated spot diagram of one of the
output beams, where we see the geometrical radius to be less than 32 µm. The top right
corner of the figure shows the PSF of the output spot, where we see the output beam in
focus at the output plane. Notice in the bottom left corner we show that the energy of the
output beam is confined within a circular area with a radius of approximately 62 µm,
which is less than 10% of the active area of the detector.
Figure 4.19: Optical Simulation of the output optics used in the TOC design
The overall system was then combined, and the total magnification was found to be
0.553×, which results in a total beam-array size of 0.741 mm ([input array size × total
system magnification] = [1.34 mm × 0.553] = 0.741 mm), which is less than the active area of
the detector. The PSF of one of the final spots on the detector is shown at the top of
figure [4.20], where the beam intensity is confined within an area less than 15% of the
detector's active area. We also show the spot diagram of three beams in the beam array
in the bottom right corner. Although the beams show evidence of experiencing more
aberration as they move away from the center of the array, the size of all six beams
together in the array is still smaller than the active area of the detector. The bottom left
section of the figure shows the energy diagram of the center beam, where 100% of the
beam energy is confined within a radius of less than 0.08 mm, which is less than 10% of the
detector's radius.
Figure 4.20: Optical Simulation of the entire TOC system
Our simulations indicate that all of our design considerations are achievable
without the need for any custom optics. We note that the beam quality could be
improved by correcting for the various types of aberrations present in the cell. As we
mentioned earlier, however, our main goal is to focus all six beams within the active area
of the photodetector used.
CHAPTER 5 EXPERIMENTAL RESULTS
5.1 Introduction
The objective of this chapter is to present the experimental results obtained using
our experimental test bed and show how the results compare to our simulations. We
describe in detail the procedures taken to obtain the correlation results and provide
measurements of the effects of attenuation, dispersion and noise on the correlation output.
We also show a step-by-step procedure of the alignment process used to align the White
cell-based temporal optical correlator (TOC), along with a full analysis of the optical
power losses associated with the setup. We conclude the chapter with a summary of our
work, indicating the effectiveness of the TOC technique in optical performance
monitoring.
In figure 5.1 we show the general configuration of the linear White cell-based
OPM setup as it is assembled on the optical table. The figure is to scale and shows the
locations of all three arms of the White cell in addition to the input and output arms. The
locations of the laser source along with all the measurement and imaging equipment are
also shown.
Figure 5.1: Layout of the experimental apparatus on the optical table, to scale
The figure shows that approximately two thirds of the optical table area was
utilized. The laser source and the input setup were placed along the width of the table,
while the White cell-based TOC and measurement equipment were set up along the length
of the table. The delay arm (the longest arm) was assembled parallel to the length of the
table to minimize the alignment complexity. The MEMS normal and arm B are located
at a 20° angle with respect to the table's length. The beam height was adjusted to the
MEMS center, which is 145 mm above the table surface. The optical axes of all lenses
and mirrors were adjusted to that height.
5.2 Apparatus Alignment
The first step in setting up the apparatus was to establish the optical axis for each
of the White cell arms and the input and output arms. We chose the MEMS normal as
our reference, such that all angles are measured with respect to it. For the initial
alignment process we used a visible HeNe laser (P_out = 0.5 mW, λ = 633 nm) to
simplify the process. We also replaced the MEMS with a flat mirror (or pseudo-MEMS)
placed on a rotation stage. The angle of the pseudo-MEMS normal was accurately
recorded using the dial on the rotation stage.
We started by placing the pseudo-MEMS flat such that its normal is parallel to the
table's length. Figure [5.2a] shows the setup. This step sets up the optical axis for the
delay arm, arm C. In figure [5.2b], the pseudo-MEMS is rotated around its axis by +10°,
where a positive angle throughout this dissertation indicates an angle above the reference
axis. The beam leaving the laser hits the MEMS and gets deflected at a +20° angle.
This step is used to set up the optical axis for arm B and our global reference axis.
Finally, in figure [5.2c] the pseudo-MEMS is rotated by a +20° angle and hence the
deflected beam leaves at an angle of +40°, establishing the optical axis for arm A.
Figure 5.2a: Alignment procedure to establish delay arm optical axis
Figure 5.2b: Alignment procedure to establish arm B optical axis
Figure 5.2c: Alignment procedure to establish arm A optical axis
After establishing the White cell arms, the next step was to align the input optics
and the input turning mirror such that the input beam array is directed to the center of
WC mirror B after it enters the cell. The pseudo-MEMS rotation stage was readjusted
such that the MEMS normal is perpendicular to mirror B. The pseudo-MEMS, mounted
on a kinematic stage, was then substituted with the Calient analog MEMS.
Before integrating the input optics into the setup, the profile of the input beam
array was analyzed. The input array was magnified and focused onto a CCD IR camera.
Figure [5.3] shows the beam intensity profile of a single beam in the array along with a
comparison to a Gaussian envelope. The imaging magnification was set to 15.6×. The
beam diameter is shown to be 53.8 µm (the measured beam diameter was 840 µm
at a magnification of 15.6, hence the actual beam diameter is 840 µm/15.6 = 53.8 µm).
The theoretical spot size was previously calculated in chapter 3, based
on a Gaussian profile, to be 46.77 µm, which leads to approximately 15% experimental
error. The error might seem large at first; however, the measured beam is not perfectly
Gaussian and has a correlation factor of only 84% to a perfect Gaussian. Most
importantly, the vertical pitch between consecutive spots in the array was measured and
found to be 1.231 mm, the same as the MEMS pixel pitch taken at every other pixel.
Figure 5.3: Beam intensity profile of a single beam in the array
We also took multiple measurements of the beam's diameter away from its focus.
Figure [5.4] illustrates the locations of the measurement points along the beam's path.
The divergence angle was calculated and found to be approximately 17% larger than its
theoretical value. Hence, the beam diverges at a faster rate than expected, and
we needed to take that into consideration when calculating the size of the optics used.
Figure 5.4: Gaussian beam propagation and location of measurement points
The input setup was then installed in the apparatus. The input beam array enters
the White cell from behind the MEMS through the input turning mirror and is focused
onto the MEMS pixels.
The output arm was aligned last and was placed at +10° with respect to the
MEMS normal. To do so, the input beams were first focused onto the MEMS pixels,
those pixels were tipped to a +5° angle, and the reflected beams were used to set the location
of the output setup.
At this point all the angular alignment is complete and all three arms of the
White cell, along with the input and output arms, are in place. The final step in aligning
the setup is to adjust the longitudinal distances between the optics in order to
establish the imaging conditions discussed earlier. To assist with placing all lenses and
mirrors at their correct locations, three imaging arms were introduced into the setup. Figure
[5.5] shows the location of the imaging arms and the magnification associated with each.
The beams were picked up, as they return from each of the White cell arms after getting
refocused by spherical mirrors A, B, and C, using partially reflective pellicles, shown
in blue in figure [5.5]. An additional imaging arm with a much higher magnification was
added to make it possible to observe the individual spot profiles.
Figure 5.5: Imaging arms locations
The setup is now completely assembled on the optical table. Figures [5.6a] and
[5.6b] show photographic images of the assembled setup along with the control and test
equipment used.
Figure 5.6a: Photographic image of the setup showing a top view of the input and
output optics along with a section of the linear WC setup
Figure 5.6b: Photographic image of the WC setup showing all the test and control
equipment used
To image the MEMS, the MEMS pixels were illuminated using a flashlight with
an IR filter mounted on it, such that only wavelengths between 1100 nm
and 1600 nm pass. Since our operating wavelength is 1550 nm, doing so
ensures that the MEMS image captured by the CCD camera is located at the same plane
as the beam array, allowing for a more accurate alignment. If we used white light
(i.e., λ_illumination ≠ λ_beam array) directly to illuminate the pixels, the image produced
would focus at a different plane than the beam array. Figure [5.7]
shows an image of the MEMS pixels captured using arm 3, where pixels tipped to either ±10°
appear as blank spots. The illumination source was placed along arm C.
Figure 5.7: Magnified image of the MEMS pixels captured using an IR CCD camera
Figures [5.8a] and [5.8b] show the returning beams on the MEMS pixels, captured
using arm 3 and arm 1 respectively, after going through 10 bounces in the cell. The
even-numbered bounces are seen in arm 3, including the input bounce, bounce 0. The odd
bounces are recorded at arm 1 as the beams bounce back from WC mirror A. At the 11th
bounce the beams are directed toward the output arm, located at a +10° angle with
respect to the MEMS normal, where the beams are summed and analyzed.
Figure 5.8a,b: The beam array imaged at the MEMS plane. We see all the even-numbered
bounces in (a) and the odd-numbered ones in (b)
In part (a) of figure [5.8] we can see the entire six-beam matrix, since we are picking
the beams up from arm B, through which all beams, whether delayed or not, have to pass.
On the other hand, in part (b) we notice that the upper triangular half of the beam matrix is
missing. This is explained by realizing that we are picking the beams up from arm A,
where only beams that don't encounter any delay circulate. Hence, we see five beams on
the first bounce, four on the third, and so on.
In figure [5.9] we illustrate the actual MEMS pixels used to produce the bounce
pattern in the TOC. The design choices were limited by several malfunctioning
pixels scattered around the pixel matrix. The bad pixels are highlighted in red. The
input and output pixel columns are labeled Bounce 0 and Bounce 11, respectively.
Figure 5.9: MEMS pixel matrix showing the locations of pixels used and all
malfunctioning pixels
The last step before taking measurements was to modulate the CW output of
the laser source. The laser output is connected to the MZ modulator, whose
RF port is fed from a function generator with a pulse waveform. The
modulator bias was controlled using a DC power source, biasing the modulator at half of
its full peak-to-peak voltage range. Doing so allows us to achieve the maximum
modulation depth. The RF pulse width and signal frequency were set to 34.3 ns and
9.71 MHz, respectively. The pulse frequency was chosen such that the pulse width
represents only 33.3% of the total pulse period, thereby generating our test signal,
0 1 0, described in previous chapters. Figure [5.10] shows a schematic of all six pulses
transmitted as a function of time and the autocorrelation result of summing all six signals.
As shown in the figure, the delay element, τ, is set to 6.86 ns, which is equivalent to
a single round trip in the delay arm, or a distance of 2.06 m.
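The pulse train of figure [5.10] can be reproduced numerically. The sketch below sums six rectangular 34.3 ns pulses spaced by τ = 6.86 ns and checks that the resulting envelope spans 68.6 ns; the pulse and delay values come from the text, while the 0.01 ns sampling grid is an arbitrary choice.

```python
import numpy as np

DT = 0.01            # sample spacing, ns (arbitrary grid)
PULSE_W = 34.3       # pulse width, ns
TAU = 6.86           # single round-trip delay, ns
N_BEAMS = 6

n_pulse = round(PULSE_W / DT)
n_tau = round(TAU / DT)

# Sum six unit-amplitude copies of the pulse, each delayed by one more tau.
total = np.zeros(n_pulse + (N_BEAMS - 1) * n_tau)
for k in range(N_BEAMS):
    total[k * n_tau : k * n_tau + n_pulse] += 1.0

envelope_ns = len(total) * DT   # 34.3 + 5 * 6.86 = 68.6 ns
```

Since 5τ happens to equal the pulse width exactly, the first and last copies just touch, and at most five copies overlap at any instant on this grid.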
Figure 5.10: Input pulse signals and their autocorrelation function as a function of
time
We have now completed all the necessary steps in the design and setup of the
White cell-based TOC. In the next section we describe the correlation outputs obtained
and show the effects on the correlation output of artificially adding different types of
impairments to the signal.
5.3 Correlation Measurements
The output beams were temporally summed on the InGaAs photodiode by
focusing all six beams on the 0.8 mm² active area of the detector. The detector's current
output was amplified and converted into a voltage signal using a photocurrent amplifier
with internal bias circuitry. The output of the amplifier was then monitored on the CRT
screen of a high-speed oscilloscope.
We first needed to test whether the detected output power was sufficient to
produce a signal that could be resolved by the oscilloscope. We disconnected all inputs
except one, allowing only a single beam to circulate in the cell. We chose the beam
with the least amount of delay (which circulates only in the null cell) and then repeated the test
for the beam with the longest delay. Doing so also allowed us to ensure that both ends of
the beam array land on the photodiode, from which we can conclude that all the beams
in between do too. Figures [5.11a] and [5.11b] show screen shots of the input pulse along
with the test pulses, where we can see that the amplitudes of both output beams are
detectable and comparable. The green trace represents the input pulse signal and the
yellow trace represents the detected output pulse. Note that the amplitude scale for the
input signal is 40 times that of the output. The amplitude of the beam with zero delay
was detected to be 57.58 mV, while the amplitude of the beam with five delays was
found to be 43.37 mV.
Figure 5.11a: Oscilloscope screen shot showing the input pulse and the output pulse
with zero delay
Figure 5.11b: Oscilloscope screen shot showing the input pulse and the output pulse
with five delays
Looking carefully at the two figures above, we see that there is a fixed delay
between the input and output signals, introduced by the input setup and the null cell,
which we will refer to as the null delay. The delay can be obtained from figure [5.11a] and
is found to be 57.7 ns. The null delay can be calculated using the following formula:

Null delay = (Null length × No. of round trips)/(speed of light in air)
           + (Input & output length)/(speed of light in air)
           + Modulator & fiber delay (measured)
         = (1.296 m × 5)/(3 × 10^8 m/s) + (0.58 m)/(3 × 10^8 m/s) + 31.0 ns
         = 21.6 ns + 1.93 ns + 31.0 ns = 54.53 ns
Our results show that the delay is off by 5.8% from its theoretical value.
Furthermore, we also extract the maximum delay associated with our White cell-based
TOC by subtracting the delay accumulated by the beam in figure [5.11a] from the
one in figure [5.11b]. That number was found to be 33.87 ns. We similarly calculate the delay
based on the distance the beam travels:

Maximum delay = (Delay length × No. of round trips)/(speed of light in air)
              = (2.06 m × 5)/(3 × 10^8 m/s) = 34.33 ns

from which we find the error to be 1.3%.
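Both delay figures above are simple time-of-flight calculations and can be checked with a few lines (using the same 3 × 10^8 m/s approximation for the speed of light in air as the text):

```python
C_AIR = 3.0e8  # speed of light in air, m/s (approximation used in the text)

def path_delay_ns(length_m, roundtrips=1):
    """Free-space time-of-flight delay, in nanoseconds."""
    return length_m * roundtrips / C_AIR * 1e9

# Null delay: 5 null-cell round trips + input/output path + the measured
# 31 ns modulator/fiber latency.
null_delay = path_delay_ns(1.296, 5) + path_delay_ns(0.58) + 31.0  # ~54.53 ns

# Maximum delay: 5 round trips through the 2.06 m delay arm.
max_delay = path_delay_ns(2.06, 5)                                 # ~34.33 ns
```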
We now reconnect all six inputs and measure their sum as a function of time on
the detector, which corresponds to their autocorrelation function. Figure [5.12] shows
the input pulse signal along with the output autocorrelation function.
Figure 5.12: Oscilloscope screen shot showing the input pulse and the output
autocorrelation function
Note that the correlation function width is less than twice the pulse width, 2T, by a
single round-trip delay, as described in earlier chapters. The correlation
width was measured to be 66.32 ns which, when compared to its theoretical value of 68.6
ns (2 × 34.3 ns), results in a total error of 3.3%. The correlation function amplitude was
measured to be 788 mV, which represents the summation of the amplitudes of all six
input beams.
Note that all six inputs do NOT have the same weights. This is due to
several reasons, namely:
- Different coupling/insertion losses are associated with each beam at the splitter.
- Beams at the top or the bottom of the beam array experience higher aperturing
losses due to the finite size of the spherical optics used.
- Beams traveling through the center of the optics experience the least amount of
loss. This problem could be overcome by using oversized optics.
- Each beam traverses a different path in the TOC, where the number of optical
surfaces associated with each path varies, and therefore the losses vary too.
Therefore, in order to obtain an accurate measurement of the output correlation
function that we can compare to our simulations, we needed to measure the weights
associated with each beam and reflect the results in our simulations. Table [5.1] lists
the weights associated with each of the six arms, in units of power and as a fraction
of the total output power. The total output power of the correlation function
was measured to be 460 µW ± 25 µW.
Beam Number | Output Power (µW) | Fractional weight (P_i/P_total) %
Beam 1 | 16 | 3.59
Beam 2 | 42 | 9.43
Beam 3 | 91 | 20.44
Beam 4 | 93 | 20.89
Beam 5 | 188 | 42.24
Beam 6 | 15 | 3.37
Table 5.1: Weights of the optical power associated with each arm in the TOC
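The measured weights of table [5.1] can be folded directly into a correlation model. The sketch below normalizes the tabulated powers and forms the weighted sum of the six delayed pulse copies, giving the asymmetric correlation output one would expect in place of the ideal equal-weight triangle (pulse and delay values are from the text; the sampling grid is arbitrary):

```python
import numpy as np

# Measured per-beam output powers in microwatts (table 5.1).
powers_uw = np.array([16.0, 42.0, 91.0, 93.0, 188.0, 15.0])
weights = powers_uw / powers_uw.sum()    # normalized tap weights

DT = 0.01                                # ns per sample (arbitrary grid)
n_pulse = round(34.3 / DT)               # 34.3 ns pulse
n_tau = round(6.86 / DT)                 # 6.86 ns round-trip delay

# Weighted sum of six delayed copies: the non-uniform correlation output.
out = np.zeros(n_pulse + 5 * n_tau)
for k, w in enumerate(weights):
    out[k * n_tau : k * n_tau + n_pulse] += w
```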
We clearly see that beams three through five carry the majority of the total power
(>80%), whereas the beams at the edges of the array receive less than 7% of the total power. In
addition to the reasons mentioned previously, one might suspect that the edge beams are not
well focused on the detector and hence not fully landing on its active area.
This possibility, however, was eliminated by disconnecting all beams except the edge
beams and measuring their power one at a time. The detector position was adjusted
in both the lateral and the transverse directions to make sure that each beam was landing at the
center of the detector, but no change in the output power was recorded, ruling out
that explanation.
5.4 Impairment Measurements
In this section we describe the correlation results obtained when the input signal is
modified by artificially adding impairments to it. We investigate the effects of signal
attenuation, dispersion, and noise. Measurements of the cross-correlation function were
recorded for different values of added impairments, and the results were analyzed.
The impairment generation circuitry shown in figure [5.1] was removed from the
setup, since the advanced features of the Tektronix Arbitrary Function Generator
(TEK/AFG3252) allowed us to internally generate all the waveforms we needed
without any external circuitry. We tested the performance of our circuit,
described earlier in chapter 2, against the internal functions provided by the function
generator, and the results proved to be comparable, and even more accurate with the generator.
We recall that attenuation is modeled by a decrease in the signal's amplitude (or
voltage), while dispersion is modeled as an increase in the signal's rise/fall times. Noise
was added to the signal through a built-in Gaussian noise generator that allowed us to
adjust the percentage of noise added as a function of the signal's amplitude.
5.4.1 Attenuation Measurements
The amplitude of the input signal was reduced to 75%, 50%, and 25% of its
original value, and the three measurements were recorded and compared to the measured
autocorrelation function. All four waveforms were regenerated using the recorded data
and then superimposed on the same time and amplitude scales. The results show that the
correlation peak decreases linearly with the signal's amplitude, while the shape of the
correlation function remains unaffected, as expected. Figure [5.13] shows the
resultant cross-correlation functions for the input signal values mentioned above.
Figure 5.13: Measured effect of signal attenuation on the correlation output
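The linear scaling just observed is exactly what a discrete correlation model predicts. A minimal check (rectangular pulses on an arbitrary grid, not the recorded waveforms):

```python
import numpy as np

def xcorr(signal, reference):
    """Cross-correlation as convolution with the time-reversed reference,
    the discrete analogue of the optical tapped-delay-line summation."""
    return np.convolve(signal, reference[::-1])

ref = np.ones(100)          # unit-amplitude reference pulse
auto = xcorr(ref, ref)      # autocorrelation: triangle with peak 100

for scale in (0.75, 0.50, 0.25):
    attenuated = xcorr(scale * ref, ref)
    # Peak drops linearly with amplitude; normalized shape is unchanged.
    assert np.isclose(attenuated.max(), scale * auto.max())
    assert np.allclose(attenuated, scale * auto)
```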
5.4.2 Dispersion Measurements
We varied the amount of artificial dispersion added to the signal by adjusting the
rise and fall times of the pulse signal, which results in a pulse spread (or smear) that we
used to approximate dispersion. Two measurements were taken, at 25% and 50% of added
dispersion (as defined earlier in chapter 3). The correlation output was again compared
to the autocorrelation function, and the results were analyzed. Figure [5.14] shows the
measured input and output signals as a function of time. We can clearly see from the
figure that as the signal accumulates more dispersion, the cross-correlation output is
affected in two ways: the correlation peak amplitude decreases by a certain amount, and
the curvature of the correlation output changes.
Figure 5.14: Measured effect of signal dispersion on the correlation output
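The two effects seen in figure [5.14] (lower peak, changed curvature) also fall out of a simple model in which dispersion is emulated, as in the experiment, by lengthening the rise and fall edges of the pulse. A sketch under that assumption (trapezoidal pulses on an arbitrary sample grid):

```python
import numpy as np

def trapezoid(n_total, n_rise):
    """Unit-height pulse of n_total samples with linear rise/fall edges of
    n_rise samples each, a crude stand-in for dispersion-induced smearing."""
    edge = np.linspace(0.0, 1.0, n_rise, endpoint=False)
    flat = np.ones(n_total - 2 * n_rise)
    return np.concatenate([edge, flat, edge[::-1]])

ref = trapezoid(343, 1)                 # nearly rectangular reference pulse
auto = np.convolve(ref, ref[::-1])      # ideal autocorrelation

for n_rise in (86, 171):                # ~25% and ~50% smearing
    smeared = trapezoid(343, n_rise)
    cross = np.convolve(smeared, ref[::-1])
    # Smearing removes energy from the pulse, so the correlation peak drops
    # and the flanks of the correlation curve become more rounded.
    assert cross.max() < auto.max()
```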
5.4.3 Noise Measurements
Noise was introduced on the original signal using the function generator's built-in
noise generator. The device allows us to vary the amount of Gaussian noise between 20%
and 50% relative to the signal's amplitude.
Unfortunately, we ran into an unexpected problem during these measurements.
Recall that we are summing all signals using an InGaAs photodiode with an upper cutoff
frequency of 50 MHz, as calculated earlier in equation [3.1]. This means that all
frequencies higher than 50 MHz are filtered out. We found the frequency
bandwidth of the noise signal produced by the generator to be 240 MHz, which is
approximately five times the detector's bandwidth. Hence, the limitations of our
equipment formed a barrier at this point, and we weren't able to obtain any meaningful
measurements.
This problem could be solved by using a detector with a cutoff frequency that is at
least twice the signal's frequency. Note that the detector's bandwidth (BW) is a function
of its junction capacitance, and as the BW increases the capacitance has to get smaller.
As a result, the detector head will end up with a much smaller active area. The small area
would complicate the output optics design, as multiple beams would have to be focused
onto a much smaller area while keeping all beams separate. Using higher precision
optics in the output setup, however, should solve this problem.
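The bandwidth-capacitance trade-off above can be illustrated with a short calculation. The sketch below assumes a simple RC-limited front end with a 50-ohm load (an assumption, not a measured value from our setup); `cutoff_frequency` and `max_capacitance` are illustrative helper names:

```python
import math

def cutoff_frequency(r_load_ohm, c_junction_farad):
    """RC-limited upper cutoff frequency of a photodiode front end."""
    return 1.0 / (2.0 * math.pi * r_load_ohm * c_junction_farad)

def max_capacitance(r_load_ohm, f_cutoff_hz):
    """Largest junction capacitance that still meets a target bandwidth."""
    return 1.0 / (2.0 * math.pi * r_load_ohm * f_cutoff_hz)

R_LOAD = 50.0  # ohms; assumed 50-ohm termination

c_now = max_capacitance(R_LOAD, 50e6)      # our 50 MHz detector
c_target = max_capacitance(R_LOAD, 480e6)  # twice the 240 MHz noise bandwidth

print(f"C_j for  50 MHz: {c_now * 1e12:.1f} pF")
print(f"C_j for 480 MHz: {c_target * 1e12:.2f} pF")
print(f"Capacitance (and roughly the active area) must shrink by {c_now / c_target:.1f}x")
```

The roughly tenfold drop in allowable junction capacitance is what forces the much smaller active area discussed above.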
In an attempt to take a meaningful measurement, we varied the amount of noise
imposed on the signal and observed the output correlation function for any changes. The
output function, however, showed no response: the amplitude of the cross-correlation
function remained constant with noise levels of up to 50%.
5.4.4 Correlation Measurements Analysis
From the measurements presented in this section so far, we can see an agreement
between the measured correlation function behavior and our previous simulations
described in chapter 3. In figure [5.15] we plot the change in the correlation
function's peak amplitude as a function of both attenuation and dispersion. We compare
the measured results to simulations for the variation in the correlation peak due to both
effects.
Figure 5.15a,b: Comparison between theoretical and experimental results (a)
Attenuation (b) Dispersion
From the figure we conclude that our experimental measurements follow our
simulations with a total error margin of < 5%. These results demonstrate the validity of
our method and confirm our simulations.
5.5 Power Loss Analysis
Given the number of surfaces each beam circulating in the TOC has to hit, one
might argue that the power losses would be too high and would limit the scalability of the
device. In this section we quantify the total losses associated with our setup and
explain the cause of each. We divide our analysis into three sections: losses due to the
input setup, losses due to the TOC, and finally losses due to the output setup. In table
[5.2] we list the power losses associated with each section and provide a brief
explanation of the cause of each.
INPUT POWER: 60 mW
OUTPUT POWER: 460 µW ± 25 µW

Source                 | Power Loss (dB) | Power Loss (% of total) | Explanation
-----------------------|-----------------|-------------------------|---------------------------------
Section (I)            |                 |                         |
MZ Modulator           | 2.95            | 13.90%                  | Insertion loss + loss for operating at quadrature
1x8 Splitter**         | 10.6            | 50.11%                  | Insertion loss + coupling loss of seven 1x2 splitters
8x1 V-groove array**   | 4.0             | 18.91%                  | Combined insertion loss of eight fiber inputs
Section (II)           |                 |                         |
White cell optics      | 3.6             | 17.02%                  | Combined loss of WC mirrors, field lenses, & MEMS pixels [11 bounces]
Section (III)          |                 |                         |
InGaAs Photodiode      | Sensitivity = 0.95 A/W at λ = 1550 nm

TOTAL POWER LOSS: 21.15 dB ± 1.1 dB

Table 5.2: Power loss measurements of our experimental OPM apparatus
** Both the splitter and the v-groove fiber array were designed for λ = 1310 nm, which
resulted in higher losses, as our working wavelength is λ = 1550 nm.
Table [5.2] shows that the majority of the losses are due to incompatible
equipment, as is the case for both the splitter and the v-groove fiber array, which
together account for almost 70% of the total accumulated power loss. These losses could
be largely eliminated by replacing the 1xN coupler and the fiber array with ones designed
for our operating wavelength. On the positive side, the losses associated with the
WC-based TOC totaled less than 4 dB, which is less than 0.35 dB per bounce. The latter
observation indicates that we can scale our system to more than 20 bounces while
maintaining the TOC losses below 7 dB.
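The scaling claim above can be checked directly with the numbers from table [5.2]. A minimal sketch (the helper name is ours):

```python
def db_to_fraction(loss_db):
    """Fraction of optical power remaining after a loss given in dB."""
    return 10 ** (-loss_db / 10.0)

wc_loss_db = 3.6   # measured White cell TOC loss over 11 bounces (table 5.2)
bounces = 11
per_bounce_db = wc_loss_db / bounces  # ~0.33 dB per bounce

scaled_bounces = 20
scaled_loss_db = per_bounce_db * scaled_bounces

print(f"Per-bounce loss: {per_bounce_db:.3f} dB")
print(f"Projected loss over {scaled_bounces} bounces: {scaled_loss_db:.2f} dB "
      f"({db_to_fraction(scaled_loss_db) * 100:.0f}% of the power transmitted)")
```

Twenty bounces at the measured per-bounce loss stay well under the 7 dB budget quoted above.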
CHAPTER 6 CONCLUSION
6.1 Accomplishments
In this dissertation we presented a complete design of an optical performance
monitor (OPM) based on optical correlation. The design utilized a novel temporal
optical correlator based on the White cell technology. The system was simulated, and
the simulation results were analyzed and compared to other existing techniques, where a
relationship was established between the correlation output measurements and the
optical signal's BER.
We also implemented a proof-of-concept experimental apparatus of the OPM
design that utilized a handful of off-the-shelf optics and a single analog MEMS device.
The experimental results presented in chapter 5 were very close to theoretical
calculations, with error percentages less than 10%. The correlation output was analyzed
using a high-speed oscilloscope, and the effect of different types of impairment on the
correlation function was presented. The results matched our simulations, validating our
technique of OPM based on optical correlation.
The optical design was simulated using OSLO, where we verified that all the
imaging conditions in the system were satisfied and that all the beams landed on the output
detector. The design also aimed to minimize aberrations with emphasis on astigmatism
to ensure that all beams fully land on the MEMS pixels during every round trip.
Finally, we conducted a detailed analysis of the optical power losses associated
with our proof-of-concept experimental apparatus and showed that the total losses
associated with the TOC were less than 4 dB. We also showed that the design could be
scaled to include more inputs without a large increase in the optical power loss.
6.2 Future Work
Our proof-of-concept design and demonstration could be expanded and modified
to make the design more realistic and to improve the correlation output's sensitivity
to various impairments. We divide our proposed improvements into four sections: the
input system, the White cell-based TOC, the output summation technique, and the
correlation output measurement technique.
6.2.1 Input System Improvements
Recall that the input to our system was artificially manipulated to add the desired
impairments on the transmitted signal. Such a method would only produce an
approximation of the actual effect of each of the impairments discussed and not the real
effect. A modification could be made to the design to replace the impairment generation
circuitry with a real optical link that includes one or more active optical components such
as an optical amplifier. The link could include several fiber spans of lengths up to tens of
kilometers. Additionally, the input CW laser could be replaced with a tunable laser,
allowing for WDM of multiple signals onto a single fiber. The latter modification would
enable us to examine nonlinear optical impairments that are induced due to the presence
of multiple channels over a common link. It would also give us a more accurate
measurement of the effect of dispersion on adjacent pulses and a realistic measure of the
total allowable dispersion in the link. We could additionally observe the effect of
different noise sources by testing the link with and without an optical amplifier.
6.2.2 White Cellbased TOC Improvements
The experimental design that we implemented took advantage of the slow
modulation speed of the input signal. The spacing between the optics was fairly large
and the delay arm was very long (>2 m round trip). The design could be made much more
compact if the delay element required by the TOC's TDL were much smaller. For
example, if our signal were modulated at a bit rate of 10 Gbps, our delay increment
would have to be much smaller than 100 picoseconds, corresponding to a delay arm
shorter than 15 mm (30 mm round trip). The entire TOC could then be designed to fit in
a very small area (e.g. a 100 mm x 100 mm box), even including the other WC mirrors
and mounts. Such a design, however, would require most of the optics and mounts to be
custom made, which would increase the overall price of the system.
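The 15 mm figure follows directly from free-space propagation at the speed of light; a quick sketch of the arithmetic (the helper name is ours, and free-space propagation is assumed):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def arm_length_mm(bit_rate_bps):
    """One-way delay-arm length whose round trip delays the beam by one
    bit period, assuming free-space propagation."""
    bit_period_s = 1.0 / bit_rate_bps
    round_trip_m = C * bit_period_s
    return round_trip_m * 1000.0 / 2.0  # halve for the round trip, convert to mm

# 10 Gb/s: one 100 ps bit period of round-trip delay -> ~15 mm one-way arm
print(f"{arm_length_mm(10e9):.1f} mm")
```

A delay increment that is only a fraction of the bit period shrinks the arm proportionally further.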
6.2.3 Output Summation Improvements
The output summer utilized in our design uses a high-speed photodiode that acts
as an OE module, through which we can temporally sum the beams incident on the
detector's active area and analyze the output using an oscilloscope. This method is
limited by the detector's active area, which gets even smaller at higher data rates.
This implies that such a technique is not scalable; it was only implemented in our
design for simplicity and due to budget constraints. A modification could be made to the
output by replacing the single photodiode with a photodiode array, where a larger number
of beams could be summed. However, alignment could be an issue, as the beams leaving
the White cell exit at slightly different angles. Another option is the use of the
trapdoor device proposed and demonstrated at The Ohio State University [65].
The device utilizes a White cell system and is independent of the number of input beams
to the system. It takes an array or a bundle of spatially separated optical beams
and steers them in a White cell setup such that they all exit at the same location with the
same angle. The output could then be fed into a single photodiode head without any
high-precision alignment needed.
6.2.4 Correlation Output Measurement Improvements
So far, we were only able to analyze the correlation output in the electronic
domain using a high-speed oscilloscope. Since we only require a thresholding
measurement as a first indicator of the link's health, an optical thresholding
device could be placed at the output arm. We suggested in chapter 2 the use of a saturable
absorber device, which would output a pulse if there is enough incident light intensity
from the correlation output. If, instead, the signal were corrupted and the incident
intensity fell below the desired correlation threshold, no output would be present, and
the channel failure could be detected in real time.
APPENDIX (A)
MATLAB CODE FOR CORRELATION SIMULATIONS
APPENDIX (A): MATLAB SIMULATION CODE
In this appendix we describe the code used to simulate the effects of adding
optical impairments on the correlation function. We divide the code into four sections,
with each section adding one type of impairment (i.e. attenuation, dispersion, noise,
and/or jitter) to a test bit sequence (e.g. [0 1 0]). The code is written in and executed
using MATLAB.
The program is capable of handling the addition of multiple impairments onto any
chosen bit sequence. The correlation takes place by manually multiplying the delayed
copies of the impaired bit sequence with the desired weights and then summing all the
signals with any chosen tap resolution. The program outputs the final correlation
function in graphical format. Additionally, the program calculates the area (energy) of
the output correlation function above any given threshold and compares the result to the
area of an eye diagram affected by the same type of impairments. The program's
algorithm works as follows:
1. Create a bit stream with the desired number of samples per bit
2. Generate a new bit stream with one or more types of impairment added to it
3. Select which impaired bit stream to correlate the original bit stream with
4. Sum the delayed copies of the impaired signal chosen to obtain the correlation
function
5. Determine the area of the correlation function over a defined threshold
6. Define the impairment thresholds to be used in calculating the area of the eye diagram
using the chosen impaired bit stream
7. Plot the desired output(s)
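The steps above can be sketched in miniature. The following Python snippet (an illustrative sketch of ours, not the MATLAB program listed below; the function names and the 30% attenuation figure are our own choices) runs steps 1 through 5 for an attenuated copy of the [0 1 0] test sequence:

```python
def make_stream(bits, spb=20):
    """Step 1: rectangular pulses with `spb` samples per bit."""
    return [float(b) for b in bits for _ in range(spb)]

def correlate(ref, sig):
    """Step 4: sum of delayed products of the two streams at every lag."""
    n = len(ref)
    out = []
    for lag in range(-n + 1, n):
        lo, hi = max(0, -lag), min(n, n - lag)
        out.append(sum(ref[i] * sig[i + lag] for i in range(lo, hi)))
    return out

def area_above(corr, threshold):
    """Step 5: energy of the correlation function above a threshold."""
    return sum(v - threshold for v in corr if v > threshold)

original = make_stream([0, 1, 0])
attenuated = [0.7 * v for v in original]  # step 2: 30% attenuation

auto = correlate(original, original)      # reference autocorrelation
cross = correlate(original, attenuated)   # steps 3-4: impaired correlation

peak_ratio = max(cross) / max(auto)
print(f"Correlation peak drops to {peak_ratio:.2f} of the autocorrelation peak")
```

As expected, a 30% attenuation scales the correlation peak, and its above-threshold area, by the same factor.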
Simulation Code:
% PROGRAM TO SIMULATE THE EFFECT OF ADDING OPTICAL IMPAIRMENTS ONTO A GIVEN BIT STREAM
%================================
% VARIABLE INITIALIZATION
%================================
clear all
array = [ 0 1 0 ];      % Test bit sequence to be used
tap_resolution=6;       % Correlator tap spacing (samples)
total_bit_size=60;      % Number of samples per bit
Original_area=0;
amplitude=0.7;          % Attenuation factor (fraction of amplitude kept)
bit_array=[];
bit_value=1;
step_size=1;
sigma_noise = 0.1;      % Fractional noise added
sigma_jitter= 0.01;     % Fractional jitter added
jitter_amount= round(sigma_jitter*total_bit_size); % Jitter shift in samples
%===============================================
% ARRAYS INITIALIZATION
%===============================================
array_size=length(array);
%===============================================
% FIRST: GENERATING A BIT ARRAY
%===============================================
start = 0.25*total_bit_size;
for i= 1 : array_size;
    for x=start : step_size : start+(0.75*total_bit_size-0.25*total_bit_size)
        original_array(x) = array(i);
    end;
    start= start+(0.75*total_bit_size-0.25*total_bit_size);
end;
original_array((length(original_array)+1):(length(original_array)+1)+(0.25*total_bit_size))=0;
%===============================================
% SECOND: GENERATING IMPAIRMENTS
%===============================================
%===============================================
% SECOND(1): GENERATING DISPERSION
%===============================================
%=============== INITIALIZATION ================
dispersed_array_pointer=1;
cosine_window_pointer=1;
dispersed_array(dispersed_array_pointer:(dispersed_array_pointer+(total_bit_size-1)))=0;
dispersed_array_pointer=dispersed_array_pointer+(total_bit_size);
%========= GENERATE DISPERSED ARRAY ============
for k= 1 : array_size; % [Dispersion LOOP]
    for i=(total_bit_size*0.25):step_size:(total_bit_size*0.75)
        eight_bit_array(i) = array(k);
    end;
    eight_bit_array((total_bit_size*0.75)+1:(total_bit_size))=0; % append the remaining part with zeros
%====== CHECK IF BIT IS 1 or 0 ============
%IF BIT IS 1
if (array(k)==1)
% First: Create an original bit
% Second: Apply cosine function to the new bit
window_size=0.5*total_bit_size; % Fraction of the bit spread by dispersion
newbit_array=[];
%===========================================
%First:
newbit_array_size= (window_size*2)+(0.5*total_bit_size - window_size);
newbit_array_start= ceil(((total_bit_size - newbit_array_size)+1)/2);
newbit_array_finish= ceil(((total_bit_size - newbit_array_size))/2)+newbit_array_size;
for i=newbit_array_start:step_size:(newbit_array_finish)
    newbit_array(i)=bit_value;
end;
newbit_array((newbit_array_finish+1):(total_bit_size))=0; % append the rest of the array with zeros
%==========================================
%Second:
first_part= zeros(1,(newbit_array_start-1));
second_part=(1+cos(pi*(-1+[1:(window_size)]/(window_size))))/2;
third_part= ones(1,(newbit_array_size-(2*window_size)));
fourth_part=(1+cos(pi*[1:(window_size)]/(window_size)))/2;
fifth_part= zeros(1,(length(newbit_array)-newbit_array_finish));
%======= GENERATE COSINE WINDOW ==========
cosine_window = newbit_array.*[first_part second_part third_part fourth_part fifth_part];
end;
%IF BIT IS 0
if (array(k)==0)
    cosine_window(1:total_bit_size)=0;
end;
%==========================================
%Create the N-bit dispersed array
% First: Shift the beginning of the generated cosine window to the end of the dispersed array
% Second: Add the two arrays and adjust the pointer. Then repeat the procedure for the next bit
% Third: Append the dispersed array with zeros up to the beginning of the next bit
%First:
for j=1 : length(cosine_window)
    shifted_cosine_window(cosine_window_pointer)=cosine_window(j);
    cosine_window_pointer= cosine_window_pointer+1;
end;
cosine_window_pointer= cosine_window_pointer - (0.75*total_bit_size-0.25*total_bit_size);
%Second:
dispersed_array= dispersed_array+shifted_cosine_window;
%Third:
dispersed_array(dispersed_array_pointer:(dispersed_array_pointer+(0.75*total_bit_size-0.25*total_bit_size)-1))=0;
dispersed_array_pointer=dispersed_array_pointer+(0.75*total_bit_size-0.25*total_bit_size);
shifted_cosine_window=0; %reset the shifted cosine window
end; % [END Dispersion LOOP]
original_array((length(original_array)+1):length(dispersed_array))=0; % match the length of original_array to dispersed_array
%===============================================
% SECOND(2): GENERATING NOISE
%===============================================
% add x% noise to bit 1
noise_1 = (sigma_noise/2)*randn(1,length(original_array));
% add x% noise to bit 0
noise_0 = (sigma_noise/2)*randn(1,length(original_array));
for k=1 : length(original_array)
    if (original_array(k)==1)
        noisy_array(k)= original_array(k) + noise_1(k);
    else
        noisy_array(k)= original_array(k) + noise_0(k);
    end;
end;
%===============================================
% SECOND(3): GENERATING JITTER
%===============================================
% Add jitter to bit array
for i=1 : length(original_array)
    jitter(i+jitter_amount)= original_array(i);
end;
%===============================================
% SECOND(4): GENERATING ATTENUATION
%===============================================
% Attenuate the bit array
Attenuated_array = amplitude*original_array;
%===============================================
% THIRD: CHOOSING IMPAIRED ARRAY
%===============================================
% Change this to study a different impairment
chosen_bit_array= dispersed_array; % example
%===============================================
% FOURTH: CORRELATION FUNCTION
%===============================================
%=============== INITIALIZATION ================
tau=tap_resolution;
delay_increment= tau;
%========= GENERATE CORRELATION ARRAY =========
% Generate two arrays:
% 1. Autocorrelation array using the original bit array
% 2. Cross-correlation array using the chosen bit array
for r=1 : (total_bit_size/tau)
    for c=1 : length(original_array)
        tap_delay_line(r,c+delay_increment)=chosen_bit_array(c);
        tap_delay_line_original_bit(r,c+delay_increment)=original_array(c);
    end;
    delay_increment=delay_increment+tau;
end;
tap_delay_line(1:(0.25*total_bit_size/tap_resolution),:)=0;
tap_delay_line((0.75*total_bit_size/tap_resolution):(total_bit_size/tap_resolution),:)=0;
tap_delay_line_original_bit(1:(0.25*total_bit_size/tap_resolution),:)=0;
tap_delay_line_original_bit((0.75*total_bit_size/tap_resolution):(total_bit_size/tap_resolution),:)=0;
for c=1 : length(tap_delay_line)
    cross_correlation_array(c)=(sum(tap_delay_line(1:(total_bit_size/tau),c)))/50;
    auto_correlation_array(c)=(sum(tap_delay_line_original_bit(1:(total_bit_size/tau),c)))/50;
end;
%===============================================
%===============================================
% FIFTH: CORRELATION AREA
%===============================================
% Determine the correlation peak and upper 50% area (normalized)
correlation_area1=0;
correlation_area2=0;
area_start= (total_bit_size - (0.25*total_bit_size));
area_finish= (total_bit_size + (0.25*total_bit_size));
for jj=area_start : area_finish
    if (cross_correlation_array(jj) >= 0.5)
        if (cross_correlation_array(jj) > auto_correlation_array(jj))
            correlation_area1= (correlation_area1 + auto_correlation_array(jj)) - 0.5;
        else
            correlation_area2= (correlation_area2 + cross_correlation_array(jj)) - 0.5;
        end;
    else
        correlation_area1= correlation_area1 + 0;
        correlation_area2= correlation_area2 + 0;
    end;
end;
correlation_area= correlation_area1+correlation_area2;
%===============================================
% SIXTH: EYE DIAGRAM AREA
%===============================================
% Define the eye diagram thresholds to be at +2*sigma_noise and +3*sigma_jitter
% of the bit's amplitude, where sigma is the standard deviation
% Determine the x-value at each of the thresholds at four points, two rising and two falling
% Determine the slope of the dispersed/attenuated bit
% Repeat for a variable amount of each impairment and tabulate the results
%=============== INITIALIZATION ================
% Assuming that ones and zeros have the same amount of distortion, this covers
% the lower part as we will multiply by two when determining the eye area.
simulated_bit_array= chosen_bit_array;
half_upper_area_no_boundries=0;
half_upper_boundry_area=0;
bit_area=0;
upper_bit_area=0;
start=((.25*total_bit_size)+(0*sigma_jitter*total_bit_size));
finish=((.75*total_bit_size)+(0*sigma_jitter*total_bit_size));
%========== CALCULATE EYE AREA =================
gg=1;
for ii=start:finish
    if (chosen_bit_array(ii) > 0.5)
        bit_area(gg)= bit_area(gg)+chosen_bit_array(ii)-0.5;
    end;
    if (chosen_bit_array(ii) > (1-(2*sigma_noise)))
        upper_bit_area(gg)= (upper_bit_area(gg) + chosen_bit_array(ii))-(1-2*sigma_noise);
    end;
end;
bit_area_with_boundries= bit_area - upper_bit_area;
measured_bit_area= 2*bit_area_with_boundries;
normalized_measured_bit_area(gg)= measured_bit_area / (total_bit_size);
for cc= start : finish
    if (simulated_bit_array(cc) > 0.5)
        if (simulated_bit_array(cc)<= chosen_bit_array(cc))
            half_upper_area_no_boundries(gg)= (half_upper_area_no_boundries(gg) + simulated_bit_array(cc))-0.5;
        else % (simulated_bit_array(cc) > chosen_bit_array(cc))
            half_upper_area_no_boundries(gg)= (half_upper_area_no_boundries(gg) + chosen_bit_array(cc))-0.5;
        end;
    end;
    if (simulated_bit_array(cc) > (1-(2*sigma_noise)))
        if (simulated_bit_array(cc)<= chosen_bit_array(cc))
            half_upper_boundry_area(gg)= (half_upper_boundry_area(gg) + simulated_bit_array(cc))-(1-2*sigma_noise);
        else % (simulated_bit_array(cc) > chosen_bit_array(cc))
            half_upper_boundry_area(gg)= (half_upper_boundry_area(gg) + chosen_bit_array(cc))-(1-2*sigma_noise);
        end;
    end;
end;
% Calculate the area within the simulated bit until 50% of the bit size and then
% add the second half from the reference array to avoid the error in the falling
% slope of the bit
upper_area_no_boundries=half_upper_area_no_boundries+(bit_area/2);
upper_boundry_area=half_upper_boundry_area+(upper_bit_area/2);
upper_area_with_boundries= upper_area_no_boundries - upper_boundry_area;
measurement_area= 2 * upper_area_with_boundries;
%===============================================
% SEVENTH: PLOT
%===============================================
figure
subplot(17,1,1),plot (tap_delay_line (10,:))
subplot(17,1,2),plot (tap_delay_line (20,:))
subplot(17,1,3),plot (tap_delay_line (30,:))
subplot(17,1,4),plot (tap_delay_line (35,:))
subplot(17,1,5),plot (tap_delay_line (40,:))
subplot(17,1,6),plot (tap_delay_line (45,:))
subplot(17,1,7),plot (tap_delay_line (50,:))
subplot(17,1,8),plot (tap_delay_line (55,:))
subplot(17,1,9),plot (tap_delay_line (60,:))
subplot(17,1,10),plot (tap_delay_line (65,:))
subplot(17,1,11),plot (tap_delay_line (70,:))
subplot(17,1,12),plot (tap_delay_line (75,:))
subplot(17,1,13),plot (tap_delay_line (80,:))
subplot(17,1,14),plot (tap_delay_line (90,:))
subplot(17,1,15), plot (cross_correlation_array,'r')
APPENDIX (B)
RAY MATRIX OPTICS
APPENDIX (B): RAY MATRIX OPTICS
In this appendix we describe the concept of ray matrices, which we utilize to
evaluate the imaging conditions in the White cell-based TOC. Ray matrices, or matrix
optics, is a method of tracing a paraxial optical ray through an optical system. A ray is
described by two values: its position and its angle with respect to the optical axis.
These two values vary as the beam traverses refractive and reflective surfaces throughout
the optical system [19].
In the paraxial approximation, the ray is assumed to travel very close to the
optical axis such that its angle with respect to the optical axis is very small. This
approximation allows for the substitution of sin(θ) with θ, and tan(θ) with θ, where θ is
the ray angle, and relates the input and output planes of an optical system using
only two linear equations. Hence, in a paraxial system we can describe any optical
system with a 2x2 matrix, often referred to as the transfer matrix of the
optical system. The transfer matrix of an optical system depends on the properties of the
optical transmission medium, i.e. its refractive index n, and on the surface curvature of
the optical medium, e.g. flat, refractive, or reflective.
Some standard transfer matrices that were used in our initial design are described
next:
Ray Propagation in Free Space:
Beams traveling in free space assume the transmission medium to be air with a
refractive index n0 = 1. The transfer matrix, M, is quantified by the distance of travel of
the optical ray and is often referred to as the translation matrix, as it only affects the
position of the beam. The matrix representation is:

M = \begin{pmatrix} 1 & d \\ 0 & 1 \end{pmatrix}, where d is the distance traveled in air
Ray Propagation through a Thin Lens:
When a beam travels through a lens, the beam gets refracted at the spherical
boundary of the lens, causing the ray angle to change while the beam position remains
unchanged. The output angle after refraction depends on the radius of curvature, R, of
the lens surface(s). In the thin-lens approximation the thickness of the lens is assumed
to be negligible and has no effect on the ray propagation. The matrix representation is as
follows:

M = \begin{pmatrix} 1 & 0 \\ -1/f & 1 \end{pmatrix}, where f is the focal length of the lens and is equal to R/2
Ray Reflection from a Spherical Mirror:
When a ray gets reflected off a spherical mirror, the direction of travel and the ray
angle are altered. The transfer matrix is defined by the radius of curvature, R, of the
spherical mirror. We similarly obtain the transfer matrix to be:

M = \begin{pmatrix} 1 & 0 \\ -2/R & 1 \end{pmatrix}
So far, we have described the transfer matrices associated with a single system. If
several optical systems are cascaded, such that they all lie in a single plane (i.e. planar
geometry), the resultant transfer matrix of the entire system is obtained by multiplying all
matrices in reverse order. For example, for a system with N components (subsystems)
M_1, M_2, ..., M_N, the resultant transfer matrix of the system is
M_SYSTEM = M_N · M_{N-1} ⋯ M_2 · M_1. This concept was used when validating the
imaging conditions of our White cell design.
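As an illustration of this reverse-order multiplication (a Python sketch of ours, separate from the Maple code in appendix C), the following composes translation and thin-lens matrices and verifies the 2f-2f imaging condition used in the delay arm:

```python
def matmul(a, b):
    """2x2 matrix product (row-major nested lists)."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def free_space(d):
    """Translation matrix for a distance d."""
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    """Thin-lens matrix for focal length f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def cascade(*elements):
    """Multiply in reverse order: list elements in the order the ray meets them."""
    m = [[1.0, 0.0], [0.0, 1.0]]
    for e in elements:
        m = matmul(e, m)  # newest element goes on the left
    return m

# 2f-2f imaging through a single lens: expect B = 0 and A = -1 (inverted image)
f = 255.28  # mm, the delay-arm lens focal length from appendix C
m = cascade(free_space(2 * f), thin_lens(f), free_space(2 * f))
A, B = m[0][0], m[0][1]
print(f"A = {A:.3f}, B = {B:.3g}")
```

B = 0 confirms imaging and A gives the magnification, mirroring the checks done symbolically in Maple in appendix C.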
The resultant transfer matrix can be represented by an ABCD matrix, where each
letter represents one of the entries of the 2x2 transfer matrix. A beam entering the
system at an initial position y_0 and slope θ_0 will exit at a new position y_m and a new
slope θ_m. The matrix representation is as follows:

\begin{pmatrix} y_m \\ θ_m \end{pmatrix} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} y_0 \\ θ_0 \end{pmatrix}

Each entry of the ABCD matrix has some significance. The first entry, A, determines the
magnification associated with an imaging system, while B determines the imaging
properties of the system; a value of B = 0 indicates that the system is an imaging system.
We will not consider the other two entries, C and D, in our calculations and we will
refrain from discussing their functionality. For a detailed discussion of the ABCD matrix
please refer to [20].
APPENDIX (C)
MAPLE CODE FOR WHITE CELL DESIGN
MAPLE CODE FOR WHITE CELL DESIGN
In this appendix we present the code used to validate the imaging conditions in
the White cell-based TOC and to determine the distances between the optics used in the
design. We also show the code used to determine the divergence angle of the beam in the
TOC based on the input beam size and the MEMS pixel size. The code is also used to
find the minimum size of the optics needed such that 99.99% or more of the beam is
captured within the optics. Finally, we show the code used to determine the minimum
separation required between the MEMS and the field lens of each White cell arm.
The calculations are done using MAPLE 9.01 software and are divided into
several sections. In the first part we verify both imaging conditions described in chapter
2 for the Null cell and determine the distances between the optics used. Next, we design
the optics in the delay arm. We show the ray matrices used to design the lens train such
that it satisfies both of the imaging conditions of the White cell.
C.1 Imaging between Mirrors A and B through the MEMS
The first step in calculating the imaging conditions is to determine the distance
between the MEMS and the field lens. We chose the two WC mirrors to be concave
spherical mirrors with a radius of curvature of R = 1000 mm. We then tried different
catalog lenses for the field lens to determine which would result in the smallest WC arm
size while maintaining a high F# (seven or more). The distance between the field lens
and the White cell mirror, d1, is set to the focal length of the field lens. The distance
between the MEMS and the field lens, d0, is then found by solving the imaging condition
B = 0 (d0 = 242.256 mm in the code below). Plugging these values into the transfer
matrix of the optical system, we achieve both imaging conditions in the Null cell.
> f1:=412; d0:='d0'; d1:=412; R:=1000;
> A1:= Matrix([[1,d0],[0,1]]);
> A2:= Matrix([[1,0],[(-1/f1),1]]);
> A3:= Matrix([[1,d1],[0,1]]);
> A4:= Matrix([[1,0],[(-2/R),1]]);
> M2:=A1.A2.A3.A4.A3.A2.A1;
> B1:=M2[1,2];
> d0:=evalf(solve(B1,d0));
> f1:=412; d0:=242.256; d1:=412; R:=1000;
> A1:= Matrix([[1,d0],[0,1]]);
> A2:= Matrix([[1,0],[(-1/f1),1]]);
> A3:= Matrix([[1,d1],[0,1]]);
> A4:= Matrix([[1,0],[(-2/R),1]]);
> M1:=A3.A2.A1.A1.A2.A3;
> M2:=A1.A2.A3.A4.A3.A2.A1;
The first imaging condition, imaging from A to B through the MEMS, is shown in
matrix M1. The total magnification appears in the A term of the ABCD matrix, which is
-1. The imaging information is carried in the B term, which has a value of 5.68x10^-14,
or almost zero. The second imaging condition, imaging the MEMS back through the WC
mirror, is shown in the second matrix, M2, where again we find A = -1 and B = 0.
Therefore, both imaging conditions are satisfied in the Null arm.
C.2 Delay Arm Design
Similarly, the delay arm optics and distances were designed using the same
method as in C.1. The lens train in the delay arm consists of two lenses whose focal
lengths are chosen such that the total delay produced in the arm is larger than 5 ns (due to
limitations in the speed of our detector). Hence, the added distance (round trip) had to be
more than approximately 1.8 m. Using two lenses with equal focal lengths and 2f-2f
imaging in the delay arm, where f is the focal length of each lens, our lenses had to have
f > 200 mm. Choosing two catalog biconvex lenses with f = 255.28 mm, we achieve our
requirement. What remains is to find the distances between the delay arm lenses, which
is shown in the following code:
As in C.1, we see that A = -1 and B = 0 for both imaging conditions. Note
that these results are based on the paraxial approximation and might change in a real system.
> f1:=412; d0:=242.256; d1:=412; f2:=255.28; f3:=255.28; d2:=2*f2; d3:=2*f3; R:=1000;
> A1:= Matrix([[1,d0],[0,1]]);
> A2:= Matrix([[1,0],[(-1/f1),1]]);
> A3:= Matrix([[1,d1],[0,1]]);
> A4:= Matrix([[1,0],[(-1/f2),1]]);
> A5:= Matrix([[1,d2],[0,1]]);
> A6:= Matrix([[1,0],[(-1/f3),1]]);
> A7:= Matrix([[1,d3],[0,1]]);
> A8:= Matrix([[1,0],[(-2/R),1]]);
> M1:=A7.A6.A5.A4.A3.A2.A1.A1.A2.A3.A4.A5.A6.A7;
> M2:=A1.A2.A3.A4.A5.A6.A7.A8.A7.A6.A5.A4.A3.A2.A1;
REFERENCES
1. B. Rajagopalan, J. Luciani, D. Awduche, B. Cain, B. Jamoussi, IP Over Optical
Networks A Framework, July 2004.
2. A. Chiu, J. Strand, Unique Features and Requirements for The Optical Layer
Control Plane, http://www.ietf.org/internet-drafts/draft-chiu-strand-unique-olcp-05.txt,
May 2004.
3. D. Awduche, Y. Rekhter, J. Coltun, "Multi-Protocol Lambda Switching:
Combining MPLS Traffic Engineering Control with Optical Cross-connects."
4. R. Ramaswami and K. N. Sivarajan, Optical Networks: A Practical Perspective,
San Francisco, CA, Morgan Kaufmann Publishers, Inc., 1998.
5. Stamatios V. Kartalopoulos, Introduction to DWDM Technology, Data in a
Rainbow, IEEE Press, New York, 2000.
6. J. White, Long optical paths for large aperture, J. Opt. Soc. Amer., vol. 32, no. 5,
pp. 285-288, May 1942.
7. J. U. White, Very long optical paths in air, J. Opt. Soc. Amer., vol. 66, no. 5,
pp. 411-416, 1976.
8. F. Galton, "Kinship and correlation", North American Review 150 (1890), 419-431.
Reprinted in Statistical Science 4 (1989), 81-86.
9. E. S. Pearson, J. W. Tukey, Approximate means and standard deviations based on
distances between percentage points of frequency curves, Biometrika (1965), 38,
219-47.
10. Rodney Loudon, The Quantum Theory of Light (Oxford University Press,
2000)
11. R. Mital, Design and Demonstration of a Novel Optical True Time Delay
Technique using Polynomial Cells based on White Cells, Ph.D. Dissertation,
2005.
12. John G. Proakis, Digital Communications, McGrawHill, Inc. 2nd. ed., 1989
13. High Speed Digital Design,
http://www.coe.montana.edu/ee/lameres/research/research.html
14. C. T. Politi, H. Haunstein, D. A. Schupke, S. Duhovnikov, G. Lehmann,
A. Stavdas, M. Gunkel, J. Mårtensson, A. Lord, "Integrated Design and
Operation of a Transparent Optical Network: A Systematic Approach to Include
Physical Layer Awareness and Cost Function," IEEE Communications Magazine,
February 2007.
15. P. S. André, J. L. Pinto, A. L. J. Teixeira, M. J. N. Lima, and J. F. da Rocha, Bit
error rate assessment in DWDM transparent networks using optical performance
monitor based on asynchronous sampling, presented at the 2002 Optical
Fiber Communication Conference, Anaheim, Calif., 17-21 March 2002.
16. H. Chen, A. W. Poon, and X. R. Cao, "Transparent Monitoring of Rise Time Using Asynchronous Amplitude Histograms in Optical Transmission Systems," J. Lightwave Technol., vol. 22, p. 1661, 2004.
17. Y. C. Chung, "Optical monitoring technique for WDM networks," in Proceedings of IEEE/LEOS Summer Topical Meetings 2000 (IEEE, New York, 2000), pp. 43-44.
18. G. Rossi, T. E. Dimmick, and D. J. Blumenthal, "Optical performance monitoring in reconfigurable WDM optical networks using subcarrier multiplexing," J. Lightwave Technol., vol. 18, pp. 1639-1648, 2000.
19. I. Shake, H. Takara, S. Kawanishi, and Y. Yamabayashi, "Optical signal quality monitoring method based on optical sampling," Electron. Lett., vol. 34, pp. 2152-2153, 1998.
20. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics, Wiley Series in Pure and Applied Optics, 1st ed., September 1991, p. 119.
21. Clifford R. Pollock, Fundamentals of Optoelectronics, Richard D. Irwin, Inc., Chicago, 1995.
22. Z. Valy Vardeny, "Telecommunications: A Boost for Fibre Optics," Nature, vol. 416, pp. 489-491, 2002.
23. R. A. Fisher, Statistical Methods and Scientific Inference, Oliver and Boyd, Edinburgh, 1956. (See p. 32.)
24. A. Chiu et al., "Features and Requirements for the Optical Layer Control Plane," http://www.ietf.org/internet-drafts/draft-chiu-strand-unique-olcp-05.txt
25. G. R. Hill et al., "A transport network layer based on optical network elements," J. Lightwave Technol., vol. 11, pp. 667-679, 1993.
26. G. Rossi, T. E. Dimmick, and D. J. Blumenthal, "Optical performance monitoring in reconfigurable WDM optical networks using subcarrier multiplexing," J. Lightwave Technol., vol. 18, pp. 1639-1648, 2000.
27. K.-P. Ho and J. M. Kahn, "Methods for crosstalk measurement and reduction in dense WDM systems," J. Lightwave Technol., vol. 14, pp. 1127-1135, 1996.
28. T. Takahashi, T. Imai, and M. Aiki, "Automatic compensation technique for timewise fluctuating polarization mode dispersion in in-line amplifier systems," Electron. Lett., vol. 30, pp. 348-349, 1994.
29. G. Ishikawa and H. Ooi, "Polarization-mode dispersion sensitivity and monitoring in 40-Gbit/s OTDM and 10-Gbit/s NRZ transmission experiments," in Conf. Optical Fiber Communication (OFC) 1998, pp. 117-119.
30. M. Rohde, E.-J. Bachus, and F. Raub, "Monitoring of transmission impairments in long-haul transmission systems using the novel digital control modulation technique," in Europ. Conf. Optical Commun. (ECOC), 2002.
31. T. E. Dimmick, G. Rossi, and D. J. Blumenthal, "Optical dispersion monitoring technique using double sideband subcarriers," IEEE Photon. Technol. Lett., vol. 12, pp. 900-902, 2000.
32. M. Teshima, M. Koga, and K. I. Sato, "Performance of multiwavelength simultaneous monitoring circuit employing arrayed-waveguide grating," J. Lightwave Technol., vol. 14, pp. 2277-2286, 1996.
33. L. E. Nelson, S. T. Cundiff, and C. R. Giles, "Optical monitoring using data correlation for WDM systems," IEEE Photon. Technol. Lett., vol. 10, pp. 1030-1032, 1998.
34. K. J. Park, S. K. Shin, and Y. C. Chung, "Simple monitoring technique for WDM networks," Electron. Lett., vol. 35, pp. 415-417, 1999.
35. C. T. Chang, J. A. Cassaboom, and H. F. Taylor, "Fibre-optic delay-line devices for R.F. signal processing," Electron. Lett., vol. 13, pp. 678-680, 1977.
36. J. E. Bowers, S. A. Newton, W. V. Sorin, and H. J. Shaw, "Filter response of single-mode fibre recirculating delay lines," Electron. Lett., vol. 18, pp. 110-111, 1982.
37. K. P. Jackson, S. A. Newton, B. Moslehi, M. Tur, C. C. Cutler, J. W. Goodman, and H. J. Shaw, "Optical fiber delay-line signal processing," IEEE Trans. Microwave Theory Tech., vol. MTT-33, pp. 193-209, 1985.
38. S. A. Newton, K. P. Jackson, and H. J. Shaw, "Optical fiber V-groove transversal filter," Appl. Phys. Lett., vol. 43, pp. 149-151, 1983.
39. G. W. Euliss and R. A. Athale, "Time-integrating correlator based on fiber-optic delay lines," Opt. Lett., vol. 19, pp. 649-651, 1994.
40. A. G. Podoleanu, R. K. Harding, and D. A. Jackson, "Low-cost high-speed multichannel fiber-optic correlator," Opt. Lett., vol. 20, pp. 112-114, 1995.
41. G.-K. Chang, G. Ellinas, J. K. Gamelin, M. Z. Iqbal, and C. A. Brackett, "Multiwavelength reconfigurable WDM/ATM/SONET network testbed," J. Lightwave Technol., vol. 14, pp. 1320-1340, June 1996.
42. D. C. Kilper, R. Bach, D. J. Blumenthal, D. Einstein, T. Landolsi, and A. W. Willner, "Optical Performance Monitoring," J. Lightwave Technol., vol. 22, no. 1, 2004.
43. T. Luo, Z. Pan, S. M. R. Motaghian Nezam, and L. S. Yan, "PMD Monitoring by Tracking the Chromatic-Dispersion-Insensitive RF Power of the Vestigial Sideband," IEEE Photon. Technol. Lett., vol. 16, no. 9, 2004.
44. K. Asahi, M. Yamashita, T. Hosoi, K. Nakaya, and C. Konishi, "Optical performance monitor built into EDFA repeaters for WDM networks," in Tech. Dig. OFC '98, San Jose, CA, Feb. 1998, paper Th02, pp. 318-319.
45. J. L. Wegener, T. A. Strasser, and J. R. Pedrazzani, "Fiber grating optical spectrum analyzer tap," in Eur. Conf. Optical Communication (ECOC '97), Edinburgh, Scotland, Sept. 1997.
46. R. A. Sprague and C. L. Koliopoulos, "Time-integrating acousto-optic correlator," Appl. Opt., vol. 15, pp. 89-92, 1976.
47. R. J. Berinato, "Acousto-optic tapped delay line filter," Appl. Opt., vol. 32, pp. 5797-5809, 1995.
48. J. Capmany, J. Cascón, D. Pastor, and B. Ortega, "Reconfigurable fiber-optic delay line filters incorporating electrooptic and electroabsorption modulators," IEEE Photon. Technol. Lett., vol. 11, pp. 1174-1176, 1999.
49. B. L. Anderson, A. Durresi, D. Rabb, and F. Abou-Galala, "Real-Time All-Optical Quality of Service Monitoring Using Correlation and a Network Protocol to Exploit It," Applied Optics, vol. 42, no. 5, pp. 1121-1130, March 2004.
50. S. J. B. Yoo, "Wavelength conversion technologies for WDM network applications," J. Lightwave Technol., vol. 14, pp. 955-966, June 1996.
51. "Modulator Designer Guide," http://www.jdsu.com
52. L. E. Nelson, S. T. Cundiff, and C. R. Giles, "Optical Monitoring Using Data Correlation for WDM Systems," IEEE Photon. Technol. Lett., vol. 10, no. 7, July 1998.
53. P. R. Prucnal and M. A. Santoro, "Spread spectrum fiber-optic local area network using optical processing," J. Lightwave Technol., vol. LT-4, pp. 547-554, 1986.
54. D. M. Gookin and M. H. Berry, "Finite impulse response filter with large dynamic range and high sampling rate," Applied Optics, vol. 29, pp. 1061-1062, 1990.
55. G. W. Euliss and R. A. Athale, "Time-integrating correlator based on fiber-optic delay lines," Optics Letters, vol. 19, pp. 649-651, 1994.
56. A. G. Podoleanu, R. K. Harding, and D. A. Jackson, "Low-cost high-speed multichannel fiber-optic correlator," Optics Letters, vol. 20, pp. 112-114, 1995.
57. Y. L. Chang and M. E. Marhic, "Fiber-optic ladder networks for inverse decoding coherent CDMA," J. Lightwave Technol., vol. 10, pp. 1952-1962, 1992.
58. K. P. Jackson, S. A. Newton, B. Moslehi, M. Tur, C. C. Cutler, J. W. Goodman, and H. J. Shaw, "Optical fiber delay-line signal processing," IEEE Trans. Microwave Theory Tech., vol. MTT-33, pp. 193-209, 1985.
59. B. Moslehi, "Fiber-optic filters employing optical amplifiers to provide design flexibility," Electron. Lett., vol. 28, pp. 226-228, 1992.
60. B. Moslehi and J. W. Goodman, "Novel amplified fiber-optic recirculating delay line processor," J. Lightwave Technol., vol. 10, pp. 1142-1146, 1992.
61. P. Petropoulos, N. Wada, P. C. Teh, M. Ibsen, W. Chujo, K. I. Kitayama, and D. J. Richardson, "Demonstration of a 64-chip OCDMA system using superstructured fiber gratings and time-gating detection," IEEE Photon. Technol. Lett., vol. 13, pp. 1239-1241, 2001.
62. B. L. Anderson, D. J. Rabb, C. M. Warnky, and F. M. Abou-Galala, "Binary Optical True Time Delay Based on the White Cell: Design and Demonstration," IEEE Journal of Lightwave Technology, vol. 24, no. 4, pp. 1886-1895, April 2006.
63. B. L. Anderson and C. D. Liddle, "Optical true-time delay for phased array antennas: demonstration of a quadratic White cell," Applied Optics, vol. 41, no. 23, pp. 4912-4921, 2002.
64. C. M. Warnky, R. Mital, and B. L. Anderson, "Demonstration of a quartic cell, a true-time-delay device based on the White cell," IEEE Journal of Lightwave Technology, vol. 24, no. 10, pp. 3849-3855, October 2006.
65. V. Argueta-Diaz and B. L. Anderson, "Optical cross-connect system based on the White cell and three-state MEMS: Experimental demonstration of the quartic cell," Applied Optics, vol. 45, no. 19, pp. 4658-4668, 2006.