
FINAL PROJECT REPORT

ON

Decoherence and Quantum to Classical Transition


At
BIRLA INSTITUTE OF TECHNOLOGY & SCIENCE, PILANI
Submitted in partial fulfillment of the Study Oriented Project (BITS C323)

By YUVRAAJ KUMAR 2010B5A8634P

Under the guidance of
Jayendra N Bandyopadhyay
Assistant Professor, Department of Physics

BIRLA INSTITUTE OF TECHNOLOGY & SCIENCE, PILANI APRIL, 2013

Acknowledgement
I take this opportunity to express my sincere gratitude to Dr. Jayendra N
Bandyopadhyay (Assistant Professor, Department of Physics, BITS-Pilani) for
allowing me to undertake this Study Oriented Project and for guiding me throughout the semester toward a better understanding of the topic. I am extremely grateful to the Department of Physics, BITS-Pilani for offering the courses Quantum Information and Computation, Quantum Physics, Modern Physics, Physics-I and Physics-II; the knowledge gained in these courses has been of utmost importance for understanding the project content. I am also very grateful to Dr. Tapomoy Guha Sarkar (Assistant Professor, Department of Physics, BITS-Pilani) for agreeing to evaluate my project.

Table of Contents

1. Acknowledgement
2. Abstract
3. Introduction
4. The Measurement Problem
   - Von Neumann Measurement Scheme
   - Dividing the Measurement Problem
     A) The Problems of Definite Outcomes and of Collapse
     B) The Problem of Interference
     C) The Problem of the Preferred Basis
   - Quantum to Classical Transition
   - The Stern-Gerlach Experiment
5. The Decoherence Program
   1) Environment-Induced Decoherence
   2) Environment-Induced Superselection or Einselection
      A) Stability Criterion and Pointer Basis
      B) Einselection in Phase Space
6. The EPR Paradox and Bell's Inequality
7. Applications
8. Conclusion
9. References

Abstract: The aim of this Study Oriented Project is to gain insight into the
quantum measurement problem, its relation to the quantum to classical transition, and to study Zurek's formulation of decoherence as a possible resolution. Along the way we will encounter the EPR paradox, Bell's Inequality, and the Stern-Gerlach experiment, which will be used as a running example for the quantum measurement problem and the quantum to classical transition.

INTRODUCTION
The idea of decoherence in quantum mechanics is based on the insight that, unlike idealized classical systems, realistic quantum systems are never isolated: they are immersed in a surrounding environment and interact continuously with it. Decoherence studies, entirely within the basic quantum formalism, the resulting formation of quantum correlations between the states of the system and its environment, and the effects of these system-environment interactions. As an example, consider a small particle immersed in a gas of photons. In a classical setting, when we consider the movement of the particle, it is perfectly safe to ignore the scattering of the photons: the momentum transferred from a photon to the particle per collision is very small, and even when the interaction is strong, the incident photons are usually distributed isotropically in position and direction, so the momentum transfer averages out to zero. In a quantum setting, by contrast, taking the state of the particle to be |psi>, with wave function psi(x) = <x|psi>, essentially every collision entangles a photon with the particle. This is the basic underlying idea of the theory of environment-induced decoherence. It is the purpose of this report to study the theory of decoherence and its implications for the quantum measurement problem, which is intimately related to the quantum to classical transition.

THE MEASUREMENT PROBLEM


The measurement problem arises quite naturally from the success of quantum theory in describing the realm of microscopic particles and in allowing them to have indefinite values for quantities like position and momentum. The problem is that there is nothing in quantum theory that forbids the same indefiniteness from occurring for macroscopic objects like books, tables or cats, which does not agree with our perception. Put more generally, quantum theory has not been able to draw any specific boundary between the classical and the quantum world.

Von Neumann Measurement Scheme:


Von Neumann devised a scheme for describing a quantum measurement in 1932. The scheme is based on entanglement between the quantum system under measurement and the measuring apparatus. Von Neumann therefore treated not only the system but also the apparatus as a quantum-mechanical object. In his scheme a microscopic system S, represented by basis vectors |s_n> in a Hilbert space H_S, interacts with a measurement apparatus A, described by basis vectors |a_n> spanning a Hilbert space H_A, where the |a_n> are assumed to correspond to macroscopically distinguishable pointer positions that register the outcome of a measurement if S is in the state |s_n>. Now, if S is in a superposition sum_n c_n |s_n>, and A is in the initial ready state |a_r>, the linearity of the Schrödinger equation entails that the total system SA, represented by the tensor product Hilbert space H_S x H_A, evolves by

(sum_n c_n |s_n>) |a_r>  -->  sum_n c_n |s_n>|a_n>    ...eq. (i)

In this equation we see that an initial superposition of the system states leads to a superposition of the apparatus pointer states. But if the apparatus pointer is an actual macroscopic pointer on the display of our apparatus, this means that the pointer is in multiple positions at the same time! This is of course nothing other than a formal description of the infamous Schrödinger cat paradox, where the system takes the form of a superposition of a decayed and an undecayed unstable atom, and the pointer takes the form of a cat that is either alive or dead. The measurement problem, then, is all about making sense of the above equation.
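The linearity step in eq. (i) can be made concrete with a small numerical sketch (a hypothetical two-state system and three-state apparatus, using NumPy; the labels s0, s1, a_r, a0, a1 are illustrative, not from the report):

```python
import numpy as np

# System basis |s0>, |s1>; apparatus states |a_r> (ready), |a0>, |a1>.
s0, s1 = np.array([1.0, 0]), np.array([0, 1.0])
a_r, a0, a1 = np.eye(3)          # three orthonormal apparatus states

# Initial product state (c0|s0> + c1|s1>) |a_r>
c0, c1 = 1/np.sqrt(2), 1/np.sqrt(2)
psi_initial = np.kron(c0*s0 + c1*s1, a_r)

# Linearity carries each branch |s_n>|a_r> to |s_n>|a_n>, giving eq. (i):
psi_final = c0*np.kron(s0, a0) + c1*np.kron(s1, a1)

# The final state is entangled: reshaped as a 2x3 matrix, its (Schmidt) rank
# is 2, whereas any product state would give rank 1.
print(np.linalg.matrix_rank(psi_initial.reshape(2, 3)))  # 1
print(np.linalg.matrix_rank(psi_final.reshape(2, 3)))    # 2
```

The rank check makes the point of the paradox quantitative: after the interaction, neither the system nor the pointer has a state of its own.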

DIVIDING THE MEASUREMENT PROBLEM

Von Neumann Measurement Scheme:

(sum_n c_n |s_n>) |a_r>  -->  sum_n c_n |s_n>|a_n>    ...eq. (i)

A) The Problems of Definite Outcomes and of Collapse: The problem of outcomes: Why does one perceive a single outcome among the many possible ones in equation (i)? The final state in equation (i) is the state after the von Neumann pre-measurement. Yet we must somehow end up with

|s_1>|a_1>  or  |s_2>|a_2>  or  ...

in which we perceive the `or' to be mutually exclusive. After measurement, one of the outcomes n is realized with corresponding probability |c_n|^2, and the state of the combined system-apparatus becomes |s_n>|a_n>. However, this shifts the problem to the new problem of what constitutes a measurement. One traditional response posits a strict dualism between the system under measurement (to be described by quantum theory) and the apparatus/observer (obeying classical physics). However, this again shifts the problem, for where, on the way from the microscopic system to the ever larger assembly of atoms making up the apparatus, does one draw the line?

B) The Problem of Interference - Ignorance Interpretability:

The superposition sum_n c_n |s_n>|a_n> in the measurement scheme represents a genuine linear superposition of multiple states, unlike classical ensembles, where the system is in fact in just one state and we assign probabilities to the states only because of ignorance on our part. Taking the example of the double-slit experiment with electrons, the interference pattern cannot be described by either of the individual electron states from slit 1 or slit 2, i.e. |1> or |2> respectively, but only by the superposition of the wave functions, i.e. (|1> + |2>)/sqrt(2). One of the traditional interpretations, the Copenhagen interpretation proposed by Niels Bohr (1928), insists that a classical apparatus is necessary to carry out measurements; thus quantum theory was not to be universal. The key feature of the Copenhagen interpretation is a mobile border between the quantum and the classical world. Another interpretation, the many-worlds interpretation developed by Hugh Everett III, claims that there is no need for the border at all: every potential outcome is accommodated by the ever-proliferating branches of the wave function of the Universe. The similarity between the difficulties faced by these two interpretations becomes apparent when both seem unable to answer clearly why an observer perceives only one of the outcomes.
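A short sketch of why the cross term matters in the double-slit example (illustrative amplitudes with a Gaussian envelope and point-source phases; not a full diffraction calculation):

```python
import numpy as np

# Screen coordinate and two illustrative slit amplitudes psi1(x), psi2(x):
# spherical-wave-like phases from slits at +/- d/2, with a common envelope.
x = np.linspace(-10, 10, 1000)
d, k = 2.0, 3.0
envelope = np.exp(-x**2 / 50)
psi1 = envelope * np.exp(1j * k * np.hypot(x - d/2, 20.0))
psi2 = envelope * np.exp(1j * k * np.hypot(x + d/2, 20.0))

# Quantum superposition (|1> + |2>)/sqrt(2): cross terms give fringes.
I_quantum = np.abs(psi1 + psi2)**2 / 2

# Classical ignorance (slit 1 OR slit 2, each with probability 1/2):
I_classical = (np.abs(psi1)**2 + np.abs(psi2)**2) / 2

# The difference is exactly the interference term Re(psi1* psi2):
print(np.allclose(I_quantum - I_classical, (psi1.conj() * psi2).real))  # True
```

Plotting I_quantum against I_classical shows fringes in the first and a smooth hump in the second; the entire difference sits in the off-diagonal (cross) term, which is precisely what decoherence will later suppress.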

C) The Problem of the Preferred Basis:


According to the biorthogonal (Schmidt) decomposition theorem, the expansion of the final pre-measurement system-apparatus state of the von Neumann measurement scheme,

|psi> = sum_n c_n |s_n>|a_n>,

is unique, but only if all the coefficients c_n are distinct in magnitude. Otherwise we can in general rewrite the state in terms of different state vectors,

|psi> = sum_n c'_n |s'_n>|a'_n>.

If we assume, as we did before, that the states |s_n> are orthonormal (we are measuring an observable) and that we have designed our apparatus such that the states |a_n> are orthonormal (we want to be able to distinguish between the possible measurement outcomes), the decomposition of the final state is in fact unique (by the Schmidt decomposition), unless two or more of the coefficients are equal in magnitude. In that case the state of the system can be represented using two different sets of bases, which contradicts how we design apparatuses, for it raises the question: which basis, i.e. which observable, has actually been measured? To make this concrete, suppose that in the first basis the apparatus tells us the spin of an electron along the x-direction, while in the second basis it tells us the spin along the z-direction. The same apparatus would then be measuring spin in both the x- and z-directions, which clearly contradicts quantum mechanics: these spin observables do not commute and require different spatial arrangements of the apparatus.
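When two coefficients are equal, the ambiguity can be exhibited directly; a sketch for a qubit system and a two-state apparatus (the z-basis/x-basis labels are illustrative):

```python
import numpy as np

# z-basis states and the rotated x-basis states of a single qubit
up, down = np.array([1.0, 0]), np.array([0, 1.0])
plus  = (up + down) / np.sqrt(2)
minus = (up - down) / np.sqrt(2)

# Equal-coefficient pre-measurement state: (|up,up> + |down,down>)/sqrt(2)
psi_z = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# The SAME vector expanded in the rotated basis: (|+,+> + |-,->)/sqrt(2)
psi_x = (np.kron(plus, plus) + np.kron(minus, minus)) / np.sqrt(2)

print(np.allclose(psi_z, psi_x))  # True
# Two equally valid Schmidt decompositions of one state: the state vector
# alone cannot say whether spin-z or spin-x was "recorded" by the apparatus.
```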

QUANTUM TO CLASSICAL TRANSITION


The role of a measurement is to convert quantum states and quantum correlations into classical, definite outcomes. To get an idea of how this works mathematically, consider a spin-1/2 particle as it passes through a Stern-Gerlach apparatus.

A spin-1/2 particle is prepared in a state |psi>_S = alpha|up> + beta|down> and is passed into a reversible Stern-Gerlach apparatus. The magnetic field separates the particle's trajectory based upon spin: the spin-up component follows the top track and the spin-down component follows the bottom track. The beams pass near a detector in an initial ready state |d_r>, which acts as a measurement device. The interaction is idealized so that the path of the particle is not affected by the detector, but the state of the detector comes to bear a record of the particle. The Hilbert space H_S of the system is spanned by the orthonormal states |up> and |down>, while the states |d_up> and |d_down> span the Hilbert space H_D of the detector. For example, |up>|d_r> --> |up>|d_up>: if an electron in the state |up> interacts with the detector in the ready state |d_r>, it changes the state of the detector to |d_up>, which we can observe only by reading the state of the detector. Let |psi>_S = alpha|up> + beta|down>, with the complex coefficients satisfying |alpha|^2 + |beta|^2 = 1. The composite system starts as |phi_i> = |psi>_S |d_r>. The interaction changes |phi_i> into a correlated state |phi_c>:

|phi_i> = (alpha|up> + beta|down>)|d_r>  -->  alpha|up>|d_up> + beta|down>|d_down> = |phi_c>

This pre-measurement can be achieved by means of a Schrödinger equation with an appropriate interaction. Now, the correlated state vector |phi_c> implies that if the detector is found in the state |d_up>, the system is guaranteed to be found in the state |up>. But the difference between |phi_c> and an actual measurement outcome is simple and fundamental: in the real world, even if we are unaware of the outcome of a measurement, we do know the possible alternatives which may occur, and such an assumption is wrong for a system described by |phi_c>. In order to express our ignorance in terms of probabilities, a density matrix can be used to describe the probability distribution over the alternative outcomes. We start from the pure-state density matrix:

rho_c = |phi_c><phi_c| = |alpha|^2 |up><up| |d_up><d_up| + alpha beta* |up><down| |d_up><d_down| + alpha* beta |down><up| |d_down><d_up| + |beta|^2 |down><down| |d_down><d_down|


Now, dropping the off-diagonal terms that express purely quantum correlations (entanglement), we obtain a reduced density matrix with only classical correlations:

rho_r = |alpha|^2 |up><up| |d_up><d_up| + |beta|^2 |down><down| |d_down><d_down|

rho_r is easier to interpret as a description of a completed measurement than rho_c: the coefficients of rho_r may be interpreted as classical probabilities. The reduced matrix rho_r can be used to describe the alternative states of a composite spin-detector system that has purely classical correlations. The pure-state matrix rho_c, on the other hand, cannot represent such classical ignorance.

Indeed, not even the set of alternative outcomes is decided by rho_c, which, being constructed from the correlated pure state |phi_c>, is a projection operator, rho_c = |phi_c><phi_c|. This can be shown by choosing alpha = -beta = 1/sqrt(2), so that

|phi_c> = (|up>|d_up> - |down>|d_down>)/sqrt(2),

and introducing the rotated states of the electron

|left> = (|up> - |down>)/sqrt(2),   |right> = (|up> + |down>)/sqrt(2),

which yields

|phi_c> = (|right>|d_left> + |left>|d_right>)/sqrt(2),

where

|d_left> = (|d_up> - |d_down>)/sqrt(2),   |d_right> = (|d_up> + |d_down>)/sqrt(2)

are, as a consequence of the superposition principle, perfectly legitimate states in the Hilbert space of the quantum detector. Therefore, the density matrix rho_c = |phi_c><phi_c| could have many different states of the subsystems on the diagonal.
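A numerical sketch of the step from rho_c to rho_r for the case alpha = -beta = 1/sqrt(2) (the detector is idealized as a single two-state system):

```python
import numpy as np

up, down = np.array([1.0, 0]), np.array([0, 1.0])
d_up, d_down = up.copy(), down.copy()    # idealized two-state detector

alpha, beta = 1/np.sqrt(2), -1/np.sqrt(2)
phi_c = alpha*np.kron(up, d_up) + beta*np.kron(down, d_down)

# Pure-state density matrix rho_c = |phi_c><phi_c| (4x4; the off-diagonal
# entries carry the quantum coherence between the two branches).
rho_c = np.outer(phi_c, phi_c.conj())

# "Reduced" matrix: keep only the diagonal (branch) terms.
rho_r = np.diag(np.diag(rho_c))

print(np.allclose(rho_c @ rho_c, rho_c))  # True : rho_c is a projector (pure)
print(np.allclose(rho_r @ rho_r, rho_r))  # False: rho_r is a mixed state
```

The projector check is exactly the point made above: rho_c describes one pure entangled state, not a set of mutually exclusive alternatives, while rho_r does admit the ignorance interpretation.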

THE DECOHERENCE PROGRAM


The theory of decoherence is based on a study of the effects brought about by the interaction of physical systems with their environment. In classical physics, the environment is usually viewed as a kind of disturbance or noise that perturbs the measurement of the objective properties of the system under consideration. Quantum correlations, however, can also disperse information throughout degrees of freedom that are, in effect, inaccessible to the observer. Interaction with the degrees of freedom external to the system, which we shall summarily refer to as the environment, offers such a possibility.

The decoherence program has dealt with the following two main consequences of environmental interaction:

(1) Environment-induced decoherence. Interaction with the environment typically leads to a rapid vanishing of the off-diagonal terms in the local density matrix describing the probability distribution for the outcomes of measurements on the system; this process has become known as environment-induced decoherence. It is the fast local suppression of interference between different states of the system. However, since only unitary time evolution is employed, global phase coherence is not actually destroyed: it becomes absent from the local density matrix that describes the system alone (the reduced density matrix), but remains fully present in the total system-environment composite. The decohered local density matrix describing the probability distribution of the outcomes of a measurement on the system-apparatus combination is formally (approximately) identical to the corresponding mixed-state density matrix. For this reason environment-induced decoherence has frequently been claimed to imply at least a partial solution to the measurement problem, or even to constitute an interpretation of quantum mechanics by itself.

In the Stern-Gerlach experiment discussed above, the reduction of the state vector, rho_c --> rho_r, decreases the information available to the observer about the composite system SD, and thereby lets the outcome be predicted in terms of classical probabilities. This loss of information about the quantum correlations increases the entropy H = -Tr(rho lg rho), because the initial state described by rho_c was pure, so that H(rho_c) = 0, while the reduced state is mixed. Information gain, the objective of the measurement, is accomplished only when the observer interacts and becomes correlated with the detector in the already pre-measured state rho_r.
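The entropy increase under rho_c --> rho_r can be verified directly (a sketch reusing the idealized two-state detector above; entropy in bits, matching the "lg" in the formula):

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -Tr(rho lg rho), computed from the eigenvalues of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                    # drop numerical zeros
    return float(-np.sum(p * np.log2(p)))

up, down = np.array([1.0, 0]), np.array([0, 1.0])
phi_c = (np.kron(up, up) - np.kron(down, down)) / np.sqrt(2)
rho_c = np.outer(phi_c, phi_c)
rho_r = np.diag(np.diag(rho_c))         # decohered (branch-diagonal) matrix

print(von_neumann_entropy(rho_c))       # 0.0: pure state, no missing information
print(von_neumann_entropy(rho_r))       # 1.0: one bit of classical ignorance
```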

To illustrate the process of environment-induced decoherence, consider, just as in the Stern-Gerlach experiment, a system S, a detector D, and an environment E. The environment is also a quantum system. Following the first step of the measurement process, the establishment of a correlation, the environment similarly interacts and becomes correlated with the apparatus:

(alpha|up>|d_up> + beta|down>|d_down>) |E_0>  -->  alpha|up>|d_up>|E_up> + beta|down>|d_down>|E_down>

The final state of the combined SDE extends the correlation beyond the system-detector pair into a von Neumann chain of correlated systems. When the states of the environment |E_up> and |E_down> corresponding to the states |d_up> and |d_down> of the detector are orthogonal, <E_i|E_j> = delta_ij, the reduced density matrix for the detector-system combination is obtained by ignoring (tracing over) the information in the uncontrolled and unknown degrees of freedom of the environment:

rho_r = Tr_E |Psi><Psi| = |alpha|^2 |up><up| |d_up><d_up| + |beta|^2 |down><down| |d_down><d_down|
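A sketch of the trace-over-the-environment step, idealizing system, detector and environment as single qubits with <E_up|E_down> = 0 (the coefficients 0.6 and 0.8 are arbitrary):

```python
import numpy as np

up, down = np.array([1.0, 0]), np.array([0, 1.0])
alpha, beta = 0.6, 0.8                  # |alpha|^2 + |beta|^2 = 1

# Chain state alpha|up>|d_up>|E_up> + beta|down>|d_down>|E_down>
psi = alpha*np.kron(np.kron(up, up), up) + beta*np.kron(np.kron(down, down), down)
rho_SDE = np.outer(psi, psi)

# Partial trace over the environment: group indices as (SD, E, SD', E')
# and sum the diagonal in the environment indices.
rho_SD = np.trace(rho_SDE.reshape(4, 2, 4, 2), axis1=1, axis2=3)

print(np.round(rho_SD, 2))
# diag(0.36, 0, 0, 0.64): only the branch probabilities survive; the
# off-diagonal interference terms vanish because <E_up|E_down> = 0.
```

If the two environment states were not exactly orthogonal, the off-diagonal entries would shrink by the factor <E_up|E_down> rather than vanish; decoherence is only as complete as that overlap is small.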

The resulting rho_r is precisely the reduced density matrix that von Neumann called for. Now, in contrast to the situation before decoherence, a superposition of the records of the detector states is no longer a record of a superposition of the states of the system. A preferred basis of the detector, sometimes called the pointer basis, has emerged. The preferred basis of the detector (of any open quantum system) is selected by the dynamics.

(2) Environment-induced superselection or einselection. The concept of environment-induced superselection, or einselection, was introduced into quantum mechanics by W. Zurek as a solution to the preferred basis problem: the problem of selecting one from among the arbitrarily many possible sets of basis vectors spanning the Hilbert space of a system of interest. It is a quantum process associated with a selective loss of information.

A) Stability Criterion and Pointer Basis: Einselected pointer states are stable: they can retain correlations with the rest of the Universe in spite of the environment. The decoherence program has attempted to define such a stability criterion based on the interaction with the environment and the ideas of robustness and preservation of correlations. The environment thus plays a double role in suggesting a solution to the preferred-basis problem: it selects a preferred pointer basis, and it guarantees its uniqueness via the tridecompositional uniqueness theorem.

The tridecompositional uniqueness theorem ensures that the expansion of the final state is unique, which fixes the ambiguity in the choice of the set of possible outcomes. It demonstrates that the inclusion of (at least) a third system, the environment, is necessary to remove the basis ambiguity. But given any pure state in the composite Hilbert space H_S x H_A x H_E, the tridecompositional uniqueness theorem neither tells us whether such a decomposition exists nor specifies the unique expansion itself (if the decomposition is possible), and since the precise states of the environment are generally not known, an additional criterion is needed to determine what the preferred states will be. We now obtain the basic criterion for this selection. For the composite Hilbert space H_S x H_A x H_E, the linearity of the Schrödinger equation yields the following time evolution of the entire system SAE:

(sum_n c_n |s_n>|a_n>) |e_0>  -->  sum_n c_n |s_n>|a_n>|e_n>

where the |e_n> represent the states of the environment corresponding to the apparatus pointer states |a_n>. This form expresses that the environment carries out a faithful measurement: it does not disturb the established correlations between the states of the system and of the apparatus. This idea of a faithful measurement is the criterion for the selection of the preferred pointer basis, since it is the only way the system-apparatus correlations can be preserved, i.e., the only way a reliable record of the state of the system S can be kept. Stated mathematically, the corresponding pointer state projection operators P_n = |a_n><a_n| should commute with the apparatus-environment interaction Hamiltonian H_AE:

[P_n, H_AE] = 0  for all n.

This implies that the environment, through the form of the interaction Hamiltonian H_AE, determines a preferred apparatus observable O_A = sum_n lambda_n |a_n><a_n|, and thereby also the states of the system that are measured by the apparatus, that is, reliably recorded through the formation of dynamically stable quantum correlations. The tridecompositional uniqueness theorem then guarantees the uniqueness of the expansion of the final state.
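The commutativity criterion is easy to test on a toy model; here a hypothetical apparatus-environment coupling H_AE = sigma_z x sigma_z (chosen for illustration, not derived from any physical setup), for which sigma_z eigenprojectors commute with H_AE while sigma_x projectors do not:

```python
import numpy as np

sz = np.diag([1.0, -1.0])                      # Pauli sigma_z
H_AE = np.kron(sz, sz)                         # toy apparatus-environment coupling

def is_pointer_projector(P):
    """Check the stability criterion [P x 1_E, H_AE] = 0."""
    P_full = np.kron(P, np.eye(2))
    return np.allclose(P_full @ H_AE - H_AE @ P_full, 0)

P_z = np.outer([1.0, 0], [1.0, 0])             # |a_up><a_up|, sigma_z eigenstate
P_x = 0.5 * np.array([[1.0, 1], [1, 1]])       # |+><+|, sigma_x eigenstate

print(is_pointer_projector(P_z))   # True : sigma_z eigenstates are pointer states
print(is_pointer_projector(P_x))   # False: superpositions are destabilized
```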

Another criterion for the selection of the preferred pointer basis is based on the von Neumann entropy, or on the purity of states. This method finds the pointer states that become least entangled with the states of the environment, and it leads to a ranking of the possible pointer states with respect to their classicality, i.e., their robustness under the interaction with the environment. Selecting the preferred pointer basis in terms of the most classical pointer states in this way is known as the predictability sieve.

B) Einselection in Phase Space: The interaction between the apparatus and the environment singles out a set of mutually commuting observables; the detector-environment interaction Hamiltonian H_int plays the decisive role. In particular, when the interaction with the environment dominates, eigenspaces of any observable O that commutes with the interaction Hamiltonian, [O, H_int] = 0, invariably end up on the diagonal of the reduced density matrix. This implies that the pointer states should be eigenstates of H_int, which typically means eigenstates of position, since most interaction Hamiltonians depend on position. The commutation relation has a simple physical implication: it guarantees that the pointer observable will be a constant of motion, a conserved quantity under the evolution generated by the interaction Hamiltonian. When, on the other hand, the interaction with the environment is weak and the system's self-Hamiltonian H_S dominates the evolution (that is, when the environment is slow in the above sense), a case that frequently occurs in the microscopic domain, the pointer states that arise are energy eigenstates of H_S.

In the intermediate case, when the evolution of the system is governed by H_int and H_S in roughly equal strength, the resulting preferred states represent a compromise between the first two cases; for instance, the frequently studied model of quantum Brownian motion shows the emergence of pointer states localized in phase space, i.e., in both position and momentum.
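A minimal predictability-sieve sketch, assuming a pure-dephasing toy interaction H_int = sigma_z x sigma_x and an arbitrary coupling timescale: candidate pointer states are ranked by how well the reduced state retains its purity Tr(rho^2).

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0])
sx = np.array([[0, 1.0], [1, 0]])
H = np.kron(sz, sx)                       # toy interaction-dominated evolution

def purity_after(system_state, t):
    """Evolve |system>|env0> under H for time t, trace out env, return Tr(rho^2)."""
    env0 = np.array([1.0, 0])             # fixed initial environment state
    psi = expm(-1j * H * t) @ np.kron(system_state, env0)
    rho = np.trace(np.outer(psi, psi.conj()).reshape(2, 2, 2, 2),
                   axis1=1, axis2=3)      # partial trace over the environment
    return float(np.real(np.trace(rho @ rho)))

up = np.array([1.0, 0])                   # sigma_z eigenstate (candidate pointer)
plus = np.array([1.0, 1]) / np.sqrt(2)    # sigma_x eigenstate (superposition)

for t in (0.5, 1.0, 2.0):
    print(t, purity_after(up, t), purity_after(plus, t))
# The sigma_z eigenstate keeps purity 1 at all times (it passes the sieve);
# |+> entangles with the environment and its purity decays below 1.
```

In this toy model the sieve picks out exactly the sigma_z eigenstates singled out by the commutation criterion above, which is the consistency one would hope for between the two formulations.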

So we can say that environment-induced superselection of a preferred basis: (i) proposes a criterion for the selection of the pointer basis, by claiming that the pointer basis is the one that leads to stable, and thus perceivable, records once the apparatus-environment interactions are taken into account; and (ii) claims that the preferred basis will be the one experienced as classical, since the governing interaction Hamiltonian determines the pointer states. But it does not tell us precisely what the pointer basis will be in any given physical situation, since it is difficult to write down the interaction Hamiltonian in realistic cases; this makes it difficult to argue that any proposed criterion based on the interaction with the environment will generally lead to the determinate outcomes we experience.

The clear merit of the approach of environment-induced superselection lies in the fact that the preferred basis is not chosen ad hoc simply to make our measurement records determinate (for example, in position). Instead, the selection is motivated on physical, observer-free grounds, namely through the system-environment interaction Hamiltonian H_int.

THE EPR PARADOX AND BELL'S INEQUALITY

*The EPR Paradox: In 1935 Albert Einstein and two colleagues, Boris Podolsky and Nathan Rosen (EPR), developed a thought experiment to demonstrate what they felt was a lack of completeness in quantum mechanics. This so-called "EPR paradox" has led to much subsequent, and still ongoing, research. In the EPR paradox, Einstein and his colleagues imagined a scenario that would let you measure, say, both the position and the momentum of a particle with absolute certainty, a big no-no in quantum mechanics.

A convenient example is the decay of the neutral pion, a subatomic particle with no spin that decays into two photons with opposite spins. When the pion decays it is no longer a pion: it splits into two photons that shoot away from each other in opposite directions. Photons have spin, but these two photons came from a pion with no spin, so if one knows the spin of one photon, one can deduce the spin of the other, because their spins have to add up to no spin at all. To reach the paradox, follow these steps. We measure the spin along the x-axis with absolute certainty for the photon that flew to the right (quite possible). But, alas, quantum mechanics won't let us also measure its y-axis spin, since we already know its x-axis spin. So we go to the second photon, which flew to the left. We already know its x-axis spin without even measuring it: it is the exact opposite of the other photon's. The paradox is this: can we measure the y-axis spin of the second photon with absolute certainty even though we already know its x-axis spin without measuring it? Quantum mechanics says we can't. It doesn't matter whether the two photons are separated by an inch or by 10 miles: the very instant we measure the first photon's x-axis spin, the y-axis spin of the second photon becomes impossible to measure with certainty. Einstein believed that one should be able to carry out the measurement, for how would the second photon "know" that the first photon has been measured? Relativity says that the "knowledge" of the measurement of the first photon can travel at most at the speed of light, but quantum mechanics requires this "knowledge" to be instantaneous, because the photons are entangled. Einstein called this "spooky action at a distance". To explain away this quirky paradox, some scientists proposed that there are "hidden variables" in the photons, yet to be discovered, that allow them to behave this way: aspects of each of the photons that are the same, since the photons were produced together, but that do not depend on the other photon. Niels Bohr, one of the founders of quantum mechanics, held the opposite view, that there are no hidden variables.

* Bell's Inequality: In 1964 John Bell proposed a mechanism to test for the existence of these hidden variables, and he developed his famous inequality as the basis for such a test. He showed that if the inequality were ever not satisfied, then

it would be impossible to have a local hidden variable theory that accounted for the spin experiment.

Using the example of two photons configured in the singlet state, consider this: in a hidden variable theory, after separation, each photon has a definite spin value for each of the three axes of space, and each spin has one of two values; call them "+" and "-". Call the axes x, y and z, and call the spin on the x-axis x+ if it is "+" on that axis; otherwise call it x-. Use similar definitions for the other two axes. Now perform the experiment: measure the spin along one axis of one photon and the spin along another axis of the other photon. If EPR were correct, each photon simultaneously has definite properties for spin along each of the axes x, y and z. Performing the measurements on a number of pairs of photons, we use the symbol N(x+, y-) to designate the words "the number of photons with x+ and y-", and similarly for N(x+, y+), N(y-, z+), etc. Also use the designation N(x+, y-, z+) to mean "the number of photons with x+, y- and z+", and so on. It is easy to demonstrate that for a set of photons

(1) N(x+, y-) = N(x+, y-, z+) + N(x+, y-, z-),

because z+ and z- exhaust all the possibilities. Let n[x+, y+] designate "the number of measured pairs of photons in which the first photon measured x+ and the second photon measured y+", with similar designations for the other possible results. This pair notation is necessary because it is all that can actually be measured: one cannot measure both x and y for the same photon. Bell demonstrated that in an actual experiment, if (1) is true (indicating real properties), then the following must also be true:

(2) n[x+, y+] <= n[x+, z+] + n[y-, z-].

Additional inequality relations can be written by making the appropriate permutations of the letters x, y and z and of the two signs. This is Bell's Inequality, and it must hold if there are real (perhaps hidden) variables to account for the measurements. The question, then, is whether quantum mechanics violates Bell's Inequality, or whether something is wrong with the inequality's assumptions, which are:

- Valid logic.
- There is a reality separate from its observation.
- No information can travel faster than light.

In experiment after experiment, Bell's Inequality is violated, so no local hidden variable theory can account for the results; instantaneous correlation, or "spooky action at a distance", seems to occur. The most interesting experiment was carried out by Nicolas Gisin, a physicist at the University of Geneva, Switzerland, in 1997. He split a single photon into two "smaller" photons (which were therefore entangled) and sent them down fiber-optic cable in opposite directions. When the photons were about 10 kilometers apart, they ran into detectors. Gisin found that even though a large distance separated the photons, something done to one photon at one end very much affected the photon at the other end... instantaneously.
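The violation can be checked against the quantum prediction for singlet spin-1/2 pairs, P(+,+) = (1/2) sin^2(theta/2) for measurement axes an angle theta apart (and P(-,-) = P(+,+) by the symmetry of the singlet). A sketch, assuming hypothetical coplanar axes at 0, 45 and 90 degrees, with the n[...] counts of inequality (2) read as rates per pair:

```python
import numpy as np

def p_plus_plus(theta_deg):
    """Singlet-state probability of '+' on both sides, axes theta_deg apart."""
    theta = np.radians(theta_deg)
    return 0.5 * np.sin(theta / 2) ** 2

# Hypothetical coplanar axes: x at 0 deg, z at 45 deg, y at 90 deg.
angle = {'x': 0.0, 'z': 45.0, 'y': 90.0}

lhs = p_plus_plus(angle['y'] - angle['x'])        # n[x+, y+] rate
rhs = (p_plus_plus(angle['z'] - angle['x'])       # n[x+, z+] rate
       + p_plus_plus(angle['y'] - angle['z']))    # n[y-, z-] rate (= P(+,+))

print(f"lhs = {lhs:.3f}, rhs = {rhs:.3f}, inequality holds: {lhs <= rhs}")
# lhs = 0.250, rhs = 0.146: quantum mechanics violates Bell's Inequality.
```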

APPLICATIONS
The importance of decoherence theory is that it brings some understanding of the process of apparent wave collapse: the tendency of a system to fall out of quantum superposition can be directly calculated. There are significant applications in the field of quantum computing, where the goal is to build an entire computing device that remains in coherent superposition. Decoherence places limits on the amount of time a system can be expected to remain in a superposition, although quantum error correction techniques could possibly provide a workaround. Although decoherence does not provide the last word on the measurement problem, it does bring some light to the matter.
The general guideline of einselection provides insight into how a system "knows" which observable is being measured, and also into why position eigenstates seem to be favored in general. It is still not known at which point the wave actually collapses, but the arena has been expanded: environmental entanglement provides a mechanism by which wave collapse can propagate into the system from far away.

CONCLUSION
The theory of decoherence is based on a study of the effects brought about by the interaction of physical systems with their environment. In classical physics, the environment is usually viewed as a kind of disturbance or noise that perturbs the measurement of the objective properties of the system under consideration. The decoherence program has recognized the possibility of a realistic description of measurement within quantum theory itself. Quantum entanglement demonstrates that correlations between systems can give the composite system objective properties that cannot be composed from the properties of the individual systems. Nonetheless, classical physics, which is based on the idealization of exactly describable, isolated physical systems, influenced quantum theory for a long time. It is the decoherence program that emphasized the role of system-environment correlations, providing a realistic physical modeling and a generalization of the quantum measurement process.

REFERENCES

Hensen, Bas, 2010, Decoherence, the measurement problem, and interpretations of quantum mechanics.
Schlosshauer, M., 2005, Decoherence, the measurement problem, and interpretations of quantum mechanics, arXiv:quant-ph/0312059v4.
Nielsen, M. A., and I. L. Chuang, 2000, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge).
Preskill, John, 2001, Lecture Notes, Quantum Information and Computation (California Institute of Technology).
Zeh, H. D., 2000, in Decoherence: Theoretical, Experimental, and Conceptual Problems.
Zurek, W. H., 1991, Physics Today 44, 36.
Zurek, W. H., 2002, Decoherence and the Transition from Quantum to Classical - Revisited.
