QUANTUM COMPUTING - FULL REPORT
11-02-2010, 08:58 PM
Quantum Computing.doc (Size: 105 KB / Downloads: 683)
1. INTRODUCTION

The history of computer technology has involved a sequence of changes from one type of physical realization to another: from gears to relays to valves to transistors to integrated circuits and so on. Today's advanced lithographic techniques can squeeze fraction-of-a-micron-wide logic gates and wires onto the surface of silicon chips. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms; i.e. the size of the logic gates becomes comparable to the size of atoms.
On the atomic scale matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, quantum technology must replace or supplement what we have now. The point is, however, that quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock-speed of microprocessors. It can support an entirely new kind of computation with qualitatively new algorithms based on quantum principles!
The story of quantum computation started as early as 1982, when the physicist Richard Feynman considered simulation of quantum-mechanical objects by other quantum systems. However, the unusual power of quantum computation was not really anticipated until 1985, when David Deutsch of the University of Oxford published a crucial theoretical paper in which he described a universal quantum computer. After the Deutsch paper, the hunt was on for something interesting for quantum computers to do. At the time all that could be found were a few rather contrived mathematical problems and the whole issue of quantum computation seemed little more than an academic curiosity. It all changed rather suddenly in 1994 when Peter Shor from AT&T's Bell Laboratories in New Jersey devised the first quantum algorithm that, in principle, can perform
efficient factorization. This became a 'killer application' - something very useful that only a quantum computer could do.
2. GENERAL CONCEPT OF INFORMATION
To explain what makes quantum computers so different from their classical counterparts we begin by having a closer look at a basic chunk of information, namely one bit. A bit is the basic unit of information in a digital computer. From a physical point of view, a bit is a physical system which can be prepared in one of two different states representing two logical values - no or yes, false or true, or simply 0 or 1. For example, in digital computers, the voltage between the plates in a capacitor represents a bit of information: a charged capacitor denotes bit value 1 and an uncharged capacitor bit value 0. One bit of information can also be encoded using two different polarizations of light or two different electronic states of an atom. In any of the systems listed above, a bit can store a value of logical 1 or logical 0 using some method which depends on the system used. The size of the register to be used is determined by the maximum value m that is to be stored, and the number of bits k in each register is given by the equation

k = log2 n

where n is the smallest power of 2 greater than or equal to m.
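The register-size rule above can be sketched in Python (a minimal illustration; the function name is our own):

```python
import math

def register_bits(m):
    """Number of bits k needed so that 2**k >= m, i.e. k = log2 n
    where n is the smallest power of 2 greater than or equal to m."""
    if m <= 1:
        return 1
    return math.ceil(math.log2(m))

# A register that must hold values up to 200 needs 8 bits (2**8 = 256).
print(register_bits(200))  # 8
```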
3. CONCEPT OF INFORMATION IN QUANTUM COMPUTERS - THE QUBIT
In quantum computers also, the basic unit of information is a bit. The concept of quantum computing first arose when the use of an atom as a bit was suggested. If we choose an atom as a physical bit then quantum mechanics tells us that, apart from the two distinct electronic states (the excited state and the ground state), the atom can also be prepared in what is known as a coherent superposition of the two states. This means that the atom can be in both state 0 and state 1 simultaneously. It is at this point that the concept of a quantum bit, or qubit, arises. This concept is the backbone of the idea of quantum computing. For the same reason, let's see in detail what a coherent superposition actually is.
4. COHERENT SUPERPOSITION
In any quantum mechanical system, a particular state of the system is represented by a mathematical function called the wave function of that state. A wave function is a complex exponential which includes all possible phases of existence of that particular state.
Considering any quantum mechanical system, let ψ1 and ψ2 be two wave functions that represent any two independent states of the system. Then quantum mechanics tells us that there exists a state of the same system that can be represented by the wave function c1ψ1 + c2ψ2. This state is called a superposition of the two states represented by ψ1 and ψ2. This would mean that the system would be in both the states ψ1 and ψ2 simultaneously. Not all superpositions of two quantum states of a system are stable. If the superposition is to be stable, then there should be some sort of coherence between the two states that are being superposed. Such a superposition is called a coherent superposition.
There can be more than one coherent superposition for a particular pair of states of a quantum mechanical system. So in our talk the term 'coherent superposition' would refer to that superposition which is the most stable one.
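As an illustrative sketch, a superposition c1ψ1 + c2ψ2 of two basis states can be written with NumPy vectors standing in for the wave functions (the variable names are our own; the amplitudes must be normalized so probabilities sum to one):

```python
import numpy as np

# Basis states |0> and |1> as state vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A coherent superposition c1|0> + c2|1>; the amplitudes must satisfy
# |c1|^2 + |c2|^2 = 1 so the probabilities of the two outcomes sum to one.
c1, c2 = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = c1 * ket0 + c2 * ket1

probs = np.abs(psi) ** 2   # probability of reading out 0 or 1
print(probs)               # [0.5 0.5]
```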
5. ADVANTAGE OF USING COHERENT-SUPERPOSITIONED MEMORY
The importance of coherent-superpositioned storage can be understood from the following example. Consider a register composed of three physical bits. Any classical register of that type can store, at a given moment of time, only one out of eight different numbers, i.e. the register can be in only one of eight possible configurations such as 000, 001, 010, ... 111. Now consider a quantum register of the same size. Since a qubit can hold the values 0 and 1 simultaneously, a quantum register composed of three qubits can store all eight numbers at once in a quantum superposition. Note that the memory capacity grows exponentially as qubits are added, compared to the linear growth of classical registers. Also, once the register is prepared in such a superposition, operations can be performed on all the numbers simultaneously.
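The three-qubit example can be sketched numerically: the register's state is a vector of 2^3 = 8 amplitudes, one per number 000 ... 111 (a toy state-vector picture, not a physical model):

```python
import numpy as np

L = 3                        # three qubits
dim = 2 ** L                 # 8 basis states |000> ... |111>

# Uniform superposition: all eight numbers 0..7 are present at once,
# each with amplitude 1/sqrt(8), so each is read out with probability 1/8.
psi = np.ones(dim) / np.sqrt(dim)

assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)
print(len(psi))              # 8 amplitudes -- storage grows as 2**L
```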
6. ROLE OF COHERENT SUPERPOSITION IN COMPUTING OPERATIONS
We have seen that once a register is prepared in a superposition of different numbers, we can perform operations on all of them. For example, if qubits are atoms then suitably tuned laser pulses affect atomic electronic states and evolve initial superpositions of encoded numbers into different superpositions. During such an evolution each number in the superposition is affected, and as a result we generate a massive parallel computation, albeit in one piece of quantum hardware. This means that a quantum computer can, in only one computational step, perform the same mathematical operation on 2^L different input numbers encoded in coherent superpositions of L qubits. In order to accomplish the same task, any classical computer has to repeat the same computation 2^L times, or one has to use 2^L different processors working in parallel. In other words, a quantum computer offers an enormous gain in the use of computational resources such as time and memory.
But this, after all, sounds like yet another piece of purely technological progress. It looks like classical computers can do the same computations as quantum computers but simply need more time or more memory. The catch is that classical computers need exponentially more time or memory to match the power of quantum computers, and this is really asking for too much, because an exponential increase is really fast and we run out of available time or memory very quickly.
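The quantum parallelism described above can be caricatured in a small NumPy simulation: a single matrix multiply acts on all 2^L amplitudes at once (the shift operation below is just an arbitrary reversible example of "the same operation on every encoded number"):

```python
import numpy as np

L = 3
dim = 2 ** L

# Register prepared in a superposition of all 2**L inputs.
psi = np.ones(dim) / np.sqrt(dim)

# A reversible "function evaluation": a permutation matrix mapping basis
# state |x> to |x+1 mod 8>. One matrix multiply applies it to all eight
# encoded numbers simultaneously -- one computational step, 2**L inputs.
U = np.roll(np.eye(dim), 1, axis=0)

psi_out = U @ psi
# The operation is unitary: total probability is preserved.
assert np.allclose(np.abs(psi_out) ** 2, np.abs(psi) ** 2)
```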
7. ALGORITHMS FOR QUANTUM COMPUTERS
In order to solve a particular problem, computers follow a precise set of instructions that can be mechanically applied to yield the solution to any given instance of the problem. A specification of this set of instructions is called an algorithm. Examples of algorithms are the procedures taught in elementary schools for adding and multiplying whole numbers; when these procedures are mechanically applied, they always yield the correct result for any pair of whole numbers. Some algorithms are fast (e.g. multiplication), others are very slow (e.g. factorization). Consider, for example, the following factorization problem:
[ ] x [ ] = 29083
How long would it take you, using paper and pencil, to find the two whole numbers which should be written into the two boxes (the solution is unique)? Probably about one hour. At the same time, solving the reverse problem
127 x 229 = ?
again using paper and pencil technique, takes less than a minute. All because we know fast algorithms for multiplication but we do not know equally fast ones for factorization.
What really counts for a "fast" or a "usable" algorithm, according to the standard definition, is not the actual time taken to multiply a particular pair of numbers but the fact that the time does not increase too sharply when we apply the same method to ever larger numbers. The standard text-book method of multiplication requires little extra work when we switch from two 3-digit numbers to two 30-digit numbers. By contrast, factoring a thirty-digit number using the simplest trial division method is about 10^13 times more time- or memory-consuming than factoring a three-digit number. The use of computational resources is enormous when we keep increasing the number of digits. The largest number that has been factorized as a mathematical challenge, i.e. a number whose factors were secretly chosen by mathematicians in order to present a challenge to other mathematicians, had 129 digits. No one can even conceive of how one might factorize, say, thousand-digit numbers; the computation would take much more than the estimated age of the universe.
Apart from the standard definitions of a fast or a usable algorithm, computer scientists have a rigorous way of defining what makes an algorithm fast (and usable) or slow (and unusable). For an algorithm to be fast, the time it takes to execute the algorithm must increase no faster than a polynomial function of the size of the input. Informally, think of the input size as the total number of bits needed to specify the input to the problem - for example, the number of bits needed to encode the number we want to factorize. If the best algorithm we know for a particular problem has an execution time (viewed as a function of the size of the input) bounded by a polynomial, then we say that the problem belongs to class P. Problems outside class P are known as hard problems. Thus we say, for example, that multiplication is in P whereas factorization is not in P, and that is why it is a hard problem. Hard does not mean "impossible to solve" or "non-computable" - factorization is perfectly computable using a classical computer; however, the physical resources needed to factor a large number are such that, for all practical purposes, it can be regarded as intractable.

Purely technological progress can only increase the computational speed by a fixed multiplicative factor, which does not help to change the exponential dependence between the size of the input and the execution time. Such a change requires inventing new, better algorithms. Although quantum computation requires new quantum technology, its real power lies in new quantum algorithms which exploit quantum superposition, which can contain an exponential number of different terms. Quantum computers can be programmed in a qualitatively new way. For example, a quantum program can incorporate instructions such as "... and now take a superposition of all numbers from the previous operations ...". This instruction is meaningless for any classical data processing device but makes lots of sense to a quantum computer.
As a result we can construct new algorithms for solving problems, some of which can turn difficult mathematical problems, such as factorization, into easy ones!
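The gap between fast multiplication and slow factorization can be felt even in a toy sketch: trial division must test divisors up to the square root of n, while checking the answer is a single multiply (a minimal illustration, not an optimized factoring routine):

```python
import math

def trial_division(n):
    """Factor n by trial division -- the 'slow' algorithm: the number of
    candidate divisors to test grows exponentially in the number of digits."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return n, 1  # n is prime

# The worked example from the text: the two boxed factors of 29083.
p, q = trial_division(29083)
print(p, q)        # 127 229
print(p * q)       # the reverse direction, multiplication, is one fast step
```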
8. DEMONSTRATING QUANTUM COMPUTING
Due to technical obstacles, a full-scale quantum computer has not yet been realized to date. But the concepts and ideas of quantum computing have been demonstrated using various methods. Here, we discuss four of the most important technologies used to demonstrate quantum computing. They are
i. Nuclear Magnetic Resonance
ii. Ion Trap
iii. Quantum Dot
iv. Optical Methods
While reading the following "top four technologies", two things should be kept in mind. The first is that the list will change over time. Some of the approaches valuable for exploring quantum computing in the laboratory are fundamentally unscalable, and so will drop out of contention over the next few years. The second thing to keep in mind is that although there is a bewildering number of proposed methods for demonstrating quantum computing (a careful search will yield many more options than what is listed here), all of them are variations on three central themes:
(a) manipulating the spin of a nucleus or subatomic particle
(b) manipulating electrical charge
(c) manipulating the polarization of a photon.
In variation "a" a qubit is derived from a superposition of up and down spins. In variation "b" a qubit is derived from a superposition of two or more discrete locations of the charge. In the last variation, a qubit is derived from a superposition of polarization angles. Of the three, manipulation of spin is generally viewed as the most promising for practical large-scale application. Let's now see each of these techniques in detail.
i. Nuclear Magnetic Resonance
Using nuclear magnetic resonance (NMR) techniques, invented in the 1940s and widely used in chemistry and medicine today, nuclear spins can be manipulated, initialized and measured. Most NMR applications treat spins as little "bar magnets", whereas in reality the naturally well-isolated nuclei are non-classical objects.
A Nuclear Magnetic Resonance (NMR) quantum computer is based on control of nuclear spin. In demonstrations to date this has been achieved by manipulating the nuclear spins of several atoms making up a molecule - the most recent effort using a molecule of five fluorine atoms and two carbon atoms. The spin manipulation is accomplished by application of magnetic pulses within a magnetic field produced by the NMR chamber.
The quantum behavior of the spins can be exploited to perform quantum computation. The magnetic field produced by the NMR chamber can be used to manipulate the spin state of the nucleus. The manipulation of the spin is accomplished by application of magnetic pulses in the chamber. The entanglement of spins required to establish a qubit is created by the chemical bonds between neighboring atoms within a molecule - within 10^18 molecules, to be more precise, since a measurable signal (relative to the background noise from kinetic energy) is achieved by using a test tube of "processing liquid" rather than a single molecule.
For example, consider the carbon and hydrogen nuclei in a chloroform molecule to be representing two qubits. Applying a radio-frequency pulse to the hydrogen nucleus addresses that qubit, and causes it to rotate from a |0> state to a superposition state. Interactions through chemical bonds allow multiple-qubit logic to be performed. In this manner, applying newly developed techniques to allow bulk samples with many molecules to be used, small-scale quantum algorithms have been experimentally demonstrated with molecules such as Alanine, an amino acid. This includes the quantum search algorithm, and a predecessor to the quantum factoring algorithm.
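A resonant pulse acting on a spin can be modelled, very schematically, as a 2x2 rotation matrix applied to the qubit's state vector. This is a simplified real-valued sketch (actual NMR dynamics involve complex phases and coupled spins); a pi/2 pulse rotates |0> into an equal superposition:

```python
import numpy as np

# |0>: spin aligned with the static field; |1>: spin anti-aligned.
ket0 = np.array([1.0, 0.0])

def rf_pulse(theta):
    """Rotation produced by a resonant radio-frequency pulse of 'area'
    theta; a pi/2 pulse takes |0> to an equal superposition of |0> and |1>."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

psi = rf_pulse(np.pi / 2) @ ket0
print(np.abs(psi) ** 2)    # [0.5 0.5] -- equal superposition
```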
The major drawback of this method is scalability; the signal strength of the answer decreases exponentially with the number of qubits.
ii. Ion Trap
An Ion Trap quantum computer is also based on control of nuclear spin (although using vibrational modes, or "phonons", has also been considered). In this approach the individual ions are, as the name implies, trapped, or isolated, by an electromagnetic field produced within an electromagnetic chamber.
Ordinarily, the energy difference between different spin states in nuclei is so small relative to the kinetic energy of the ions that it is not measurable. In the prior technique (NMR) this problem was overcome by operating simultaneously on a large number of atoms. But in this case, the solution is a bit different. The trapped ions are cooled to the point where motion is essentially eliminated. They are then manipulated by laser pulses, and a qubit arises from the superposition of lower and higher energy spin states.
This technique is potentially scalable, but a great disadvantage is that it requires a cryogenic environment - not to mention that to date no more than single qubit systems have been demonstrated.
iii. Quantum Dot

A quantum dot is a particle of matter so small that the addition or removal of an electron changes its properties in some useful way. All atoms are, of course, quantum dots, but multi-molecular combinations can have this characteristic. In biochemistry, quantum dots are called redox groups. Quantum dots typically have dimensions measured in nanometers, where one nanometer is 10^-9 meter, or a millionth of a millimeter.
A Quantum Dot quantum computer can involve manipulation of electrical charge, spin, or energy state - the Australians have a patent on a spin-based version. The idea is that a small number of electrons, or possibly an individual electron, is confined within a quantum dot, the quantum dot typically being a small "hill" of molecules grown on a silicon substrate. A computer would be made up of a regular array of such dots. As with the prior two methods, the most popular approach is to have spin up counted as zero, spin down counted as one, and use a superposition of spin states to create the qubit. Techniques for self-assembly of large arrays of quantum dots have already been demonstrated and can be realized on the industry-standard silicon substrate. Thus, of all the approaches listed here, this one seems to have the highest potential for commercial scalability.
iv. Optical Methods

As the name indicates, an optical quantum computer uses the two different polarizations of a light beam to represent two logical states. As an example, we can consider the polarization of a light beam in the vertical plane to represent a logical 1 and the polarization of the beam in the horizontal plane to represent a logical 0. An optical quantum computer would be based on manipulating the polarization of individual photons. Entanglement is achieved by coincident creation of identical photons. Identical photons in this context means photons having the same energy as well as the same polarization. The superposition of polarization or phase state is manipulated using polarizing lenses, phase shifters, and beam splitters.
It was originally believed that non-linear optics would be required in order to create a working quantum computer, and this was considered to be the major technical obstacle to a photon-based approach. However, recent theoretical advances indicate that linear optics is sufficient. Several laboratories are working on a practical demonstration. The new stumbling block has been creating beam splitters of sufficient accuracy. The entire process can be carried out at room temperature and there is no reason, in principle, that it is not scalable to large numbers of qubits.
9. ADVANTAGES OF QUANTUM COMPUTING
Quantum computing uses the principle of coherent-superpositioned storage. As stated in the example above, it is quite remarkable that all eight numbers are physically present in the register, but it should be no more surprising than a qubit being in both state 0 and state 1 at the same time. If we keep adding qubits to the register we increase its storage capacity exponentially: three qubits can store 8 different numbers at once, four qubits can store 16, and so on; in general, L qubits can store 2^L numbers at once. And, as described in Section 6, once the register is prepared in a superposition of different numbers, a single computational step can act on all 2^L of them at once. A classical computer could match this only by repeating the computation 2^L times or by using 2^L processors working in parallel; since such exponential resources run out very quickly, the quantum computer offers an enormous gain in computational resources such as time and memory.
10. WHAT WILL QUANTUM COMPUTERS BE GOOD AT?
These are the most important applications currently known:
• Cryptography: perfectly secure communication.
• Searching, especially algorithmic searching (Grover's algorithm).
• Factorizing large numbers very rapidly (Shor's algorithm).
• Simulating quantum-mechanical systems efficiently.
11. OBSTACLES AND RESEARCH
The field of quantum information processing has made numerous promising advancements since its conception, including the building of two- and three-qubit quantum computers capable of some simple arithmetic and data sorting. However, a few potentially large obstacles still remain that prevent us from "just building one", or more precisely, building a quantum computer that can rival today's modern digital computer. Among these difficulties, error correction, decoherence, and hardware architecture are probably the most formidable.
i. Decoherence

We have seen that if a superposition of any two states of a quantum-mechanical system is to be stable over a period of time, there should be some sort of coherence between the states that are being superposed. Still, no superposition of any pair of given states is perfectly stable. This is because of a phenomenon known as decoherence, which forces a quantum-mechanically superposed system to decay from a given coherent superposition into an incoherent state as it entangles, or interacts, with its surroundings. The final effect is that the coherence between the two superposed states is gradually lost. For the same reason, no quantum memory can, at present, be used to hold data for operations that take a long time. The time taken by the system to lose the coherence between the two states is known as the decoherence time.
ii. Error Correction
Error correction is rather self-explanatory, but what errors need correction? The answer is primarily those errors that arise as a direct result of decoherence, the tendency of a quantum computer to decay from a given quantum state into an incoherent state as it interacts, or entangles, with the state of the environment. These interactions between the environment and qubits are unavoidable, and induce the breakdown of information stored in the quantum computer, and thus errors in computation. Before any quantum computer will be capable of solving hard problems, research must devise a way to keep decoherence and other potential sources of error at an acceptable level.
Thanks to the theory (and now reality) of quantum error correction, first proposed in 1995 and continually developed since, small-scale quantum computers have been built and the prospects of large quantum computers are looking up. Probably the most important idea in this field is the application of error correction in phase coherence as a means to extract information and reduce error in a quantum system without actually measuring that system. In 1998, researchers at Los Alamos National Laboratory and MIT managed to spread a single bit of quantum information (a qubit) across three nuclear spins in each molecule of a liquid solution of alanine or trichloroethylene molecules. They accomplished this using the techniques of nuclear magnetic resonance (NMR). This experiment is significant because spreading out the information actually made it harder to corrupt. Quantum mechanics tells us that directly measuring the state of a qubit invariably destroys the superposition of states in which it exists, forcing it to become either a 0 or a 1.
The technique of spreading out the information allows researchers to utilize the property of entanglement to study the interactions between states as an indirect method for analyzing the quantum information. Rather than a direct measurement, the group compared the spins to see if any new differences arose between them without learning the information itself. This technique gave them the ability to detect and fix errors in a qubit's phase coherence, and thus maintain a higher level of coherence in the quantum system. This milestone has provided argument against skeptics, and hope for believers. At this point, only a few of the benefits of quantum computation and quantum computers are readily obvious, but before more possibilities are uncovered theory must be put to the test. In order to do this, devices capable of quantum computation must be constructed. Quantum computing hardware is, however, still in its infancy.
As a result of several significant experiments, nuclear magnetic resonance (NMR) has become the most popular component in quantum hardware architecture. Only within the past year, a group from Los Alamos National Laboratory and MIT constructed the first experimental demonstrations of a quantum computer using nuclear magnetic resonance (NMR) technology. Currently, research is underway to discover methods for battling the destructive effects of decoherence, to develop optimal hardware architecture for designing and building a quantum computer, and to further uncover quantum algorithms that utilize the immense computing power available in these devices. Naturally this pursuit is intimately related to quantum error correction codes and quantum algorithms, so a number of groups are doing simultaneous research in several of these fields.
To date, designs have involved ion traps, cavity quantum electrodynamics (QED), and NMR. Though these devices have had mild success in performing interesting experiments, the technologies each have serious limitations. Ion trap computers are limited in speed by the vibration frequency of the modes in the trap. NMR devices have an exponential attenuation of signal to noise as the number of qubits in a system increases. Cavity QED is slightly more promising; however, it still has only been demonstrated with a few qubits. The future of quantum computer hardware architecture is likely to be very different from what we know today; however, the current research has helped to provide insight as to what obstacles the future will hold for these devices.
iii. Lack of Reliable Reading Mechanism
The techniques that exist to date share a big problem: trying to read from a superpositioned qubit invariably makes it lose its superposition and behave just as a classical bit - i.e. it stores only one of the values 0 and 1.
Also, if we are given a quantum register comprising n bits of which m are superpositioned, none of the reading mechanisms available today can determine which value from the superposition is to be read out. I.e., if we are given a 3-bit register that contains, say, 4 values in a particular superposition (let them be 4, 5, 6, 7), the reading mechanisms available today are unable to access a specific chosen value from the superposition.
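The read-out problem can be illustrated with a toy simulation: measuring a register that holds 4, 5, 6 and 7 in superposition returns one of them at random, with no way to select which (a sketch, assuming equal amplitudes):

```python
import numpy as np

rng = np.random.default_rng()

# A 3-qubit register holding 4, 5, 6 and 7 in equal superposition.
amps = np.zeros(8)
amps[4:8] = 0.5            # |amp|^2 = 0.25 each; probabilities sum to 1

# Reading the register collapses it: we obtain ONE of the four values at
# random and cannot choose which; the rest of the superposition is lost.
outcome = rng.choice(8, p=np.abs(amps) ** 2)
print(outcome)             # one of 4, 5, 6, 7
```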
12. FUTURE OUTLOOK
At present, quantum computers and quantum information technology remain in their pioneering stage. At this very moment, obstacles are being surmounted that will provide the knowledge needed to thrust quantum computers up to their rightful position as the fastest computational machines in existence. Error correction has made promising progress to date, nearing a point now where we may have the tools required to build a computer robust enough to adequately withstand the effects of decoherence. Quantum hardware, on the other hand, remains an emerging field, but the work done thus far suggests that it will only be a matter of time before we have devices large enough to test Shor's and other quantum algorithms. Thereby, quantum computers will emerge as the superior computational devices at the very least, and perhaps one day make today's modern computer obsolete. Quantum computation has its origins in highly specialized fields of theoretical physics, but its future undoubtedly lies in the profound effect it will have on the lives of all mankind.
13. DESIRABLE FEATURES OF AN IDEAL SYSTEM
Hoping for the best, if ever a quantum computer is practically realized, an ideal one should have some desirable features. They are listed below.
i. Be a Scalable Physical System
Scalability in this context refers to the change in the signal strength of the system as the number of qubits increases. The clause 'a scalable physical system' refers to a system which, under ideal conditions, does not suffer any loss of signal strength as the number of qubits increases. If a system suffers such a loss, it cannot handle more than a fixed number of qubits in total, which would be a large obstacle to the construction of an efficient quantum computer.
ii. Be Initializable to a Simple State
The term initialization has the same meaning as for classical computers, i.e. given a particular address or memory location, we should be able to initialize it to a particular state - be it a single state or a superpositioned state - in a single step, or at least in a very small number of steps.
iii. Have Long Decoherence Times
As we have seen, no quantum memory to date is completely free from the grip of decoherence. For the same reason, we cannot avoid this evil completely.
So our aim should be to reduce the trouble it causes. One method is to use memory with a much longer decoherence time, so that by the time the coherence between the states is lost, the required operation has already been performed on the stored data. Another approach is to improve the algorithms used, so that the time for which data must be stored in a location decreases. But the latter approach is left to the person who designs the algorithm, and is therefore outside the control of the hardware designer or manufacturer.
14. CONCLUSION

Experimental and theoretical research in quantum computation is accelerating world-wide. New technologies for realizing quantum computers are being proposed, and new types of quantum computation with various advantages over classical computation are continually being discovered and analysed, and we believe some of them will bear technological fruit. From a fundamental standpoint, however, it does not matter how useful quantum computation turns out to be, nor does it matter whether we build the first quantum computer tomorrow, next year or centuries from now. The quantum theory of computation must in any case be an integral part of the world view of anyone who seeks a fundamental understanding of the quantum theory and the processing of information.
Active In SP
Joined: Feb 2011
22-02-2011, 10:57 AM
rani.docx (Size: 37.83 KB / Downloads: 166)
Imagine a computer whose memory is exponentially larger than its apparent physical size; a computer that can manipulate an exponential set of inputs simultaneously; a computer that computes in the twilight zone of Hilbert space. You would be thinking of a quantum computer. Relatively few and simple concepts from quantum mechanics are needed to make quantum computers a possibility. The subtlety has been in learning to manipulate these concepts. Is such a computer an inevitability, or will it be too difficult to build?
The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, the review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the EPR experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from classical information theory and, arguably, quantum from classical physics. Basic quantum information ideas are described, including key distribution, teleportation, data compression, quantum error correction, the universal quantum computer and quantum algorithms. The common theme of all these ideas is the use of quantum entanglement as a computational resource. Experimental methods for small quantum processors are briefly sketched, concentrating on ion traps, high-Q cavities, and NMR. The review concludes with an outline of the main features of quantum information physics, and avenues for future research.
What is quantum computing?
It's something that could have been thought up a long time ago - an idea whose time has come. For any physical theory one can ask: what sort of machines will do useful computation? Or, what sort of processes will count as useful computational acts? Alan Turing thought about this in 1936 with regard (implicitly) to classical mechanics, and gave the world the paradigm classical computer: the Turing machine. But even in 1936 classical mechanics was known to be false. Work is now under way - mostly theoretical, but tentatively, hesitantly groping towards the practical - in seeing what quantum mechanics means for computers and computing.
The basic idea behind quantum computing is the merging of computer science, information theory and quantum mechanics, moving beyond classical physics, for better efficiency in the operation of systems. Information is the most important element of any system. Information can be represented in many ways. It can be analyzed provided we know how it was encoded. For example, the two statements ``the quantum computer is very interesting'' and ``l'ordinateur quantique est tres interessant'' have something in common, although they share no words. The thing they have in common is their information content. Essentially the same information could be expressed in many other ways, for example by substituting numbers for letters in a scheme such as a -> 97, b -> 98, c -> 99 and so on, in which case the English version of the above statement becomes 116 104 101 32 113 117 97 110 116 117 109... . It is very significant that information can be expressed in different ways without losing its essential nature, since this leads to the possibility of the automatic manipulation of information: a machine need only be able to manipulate quite simple things like integers in order to do surprisingly powerful information processing, from document preparation to differential calculus, even to translating between human languages.
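The letter-to-number scheme described above (a -> 97, b -> 98, c -> 99) is exactly the ASCII/Unicode code-point mapping, which Python exposes directly:

```python
# Encode text as the code-point scheme from the text (a -> 97, b -> 98, ...)
codes = [ord(c) for c in "the quantum"]
# codes == [116, 104, 101, 32, 113, 117, 97, 110, 116, 117, 109],
# matching the digit string quoted in the passage.

# The mapping is reversible, so no information is lost:
text = "".join(chr(n) for n in codes)   # round-trips back to "the quantum"
```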
We are familiar with this now, because of the ubiquitous computer, but even fifty years ago such a widespread significance of automated information processing was not foreseen. However, there is one thing that all ways of expressing information must have in common: they all use real physical things to do the job. Spoken words are conveyed by air pressure fluctuations, written ones by arrangements of ink molecules on paper, even thoughts depend on neurons (Landauer 1991). The rallying cry of the information physicist is ``no information without physical representation!'' Conversely, the fact that information is insensitive to exactly how it is expressed, and can be freely translated from one form to another, makes it an obvious candidate for a fundamentally important role in physics, like energy and momentum and other such abstractions. However, until the second half of this century, the precise mathematical treatment of information, especially information processing, was undiscovered, so the significance of information in physics was only hinted at in concepts such as entropy in thermodynamics. It now appears that information may have a much deeper significance. Historically, much of fundamental physics has been concerned with discovering the fundamental particles of nature and the equations which describe their motions and interactions. It now appears that a different programme may be equally important: to discover the ways that nature allows, and prevents, information to be expressed and manipulated, rather than particles to move. For example, the best way to state exactly what can and cannot travel faster than light is to identify information as the speed-limited entity. In quantum mechanics, it is highly significant that the state vector must not contain, whether explicitly or implicitly, more information than can meaningfully be associated with a given system.
Among other things this produces the wave function symmetry requirements which lead to Bose-Einstein and Fermi-Dirac statistics, the periodic structure of atoms, and so on.
The history of computer technology has involved a sequence of changes from one type of physical realization to another --- from gears to relays to valves to transistors to integrated circuits and so on. Today's advanced lithographic techniques can squeeze fraction-of-a-micron-wide logic gates and wires onto the surface of silicon chips. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms. On the atomic scale matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, new, quantum technology must replace or supplement what we have now. The point is, however, that quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock-speed of microprocessors. It can support an entirely new kind of computation with qualitatively new algorithms based on quantum principles!
A bit is a fundamental unit of information, classically represented as a 0 or 1 in your digital computer. Each classical bit is physically realized through a macroscopic physical system, such as the magnetization on a hard disk or the charge on a capacitor. A document, for example, comprised of n characters stored on the hard drive of a typical computer is accordingly described by a string of 8n zeros and ones. Herein lies a key difference between your classical computer and a quantum computer.
Where a classical computer obeys the well understood laws of classical physics, a quantum computer is a device that harnesses physical phenomena unique to quantum mechanics (especially quantum interference) to realize a fundamentally new mode of information processing. In a quantum computer, the fundamental unit of information (called a quantum bit or qubit) is not binary but rather more quaternary in nature. This qubit property arises as a direct consequence of its adherence to the laws of quantum mechanics, which differ radically from the laws of classical physics. A qubit can exist not only in a state corresponding to the logical state 0 or 1, as in a classical bit, but also in states corresponding to a blend or superposition of these classical states. In other words, a qubit can exist as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient representing the probability for each state. This may seem counterintuitive because everyday phenomena are governed by classical physics, not quantum mechanics -- which takes over at the atomic level.
To explain what makes quantum computers so different from their classical counterparts we begin by having a closer look at a basic chunk of information, namely one bit. From a physical point of view a bit is a physical system which can be prepared in one of two different states representing two logical values --- no or yes, false or true, or simply 0 or 1. For example, in digital computers, the voltage between the plates in a capacitor represents a bit of information: a charged capacitor denotes bit value 1 and an uncharged capacitor bit value 0. One bit of information can also be encoded using two different polarizations of light or two different electronic states of an atom. However, if we choose an atom as a physical bit then quantum mechanics tells us that apart from the two distinct electronic states the atom can also be prepared in a coherent superposition of the two states.
This means that the atom is both in state 0 and state 1.
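As a minimal sketch of what "both in state 0 and state 1" means formally, a qubit can be modelled as a two-component vector of amplitudes whose squared magnitudes give the measurement probabilities:

```python
import numpy as np

# A qubit state a|0> + b|1> represented as a 2-component vector (a, b).
# |a|^2 and |b|^2 are the probabilities of measuring 0 and 1.
state = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition of 0 and 1

p0, p1 = np.abs(state) ** 2
# p0 == p1 == 0.5: until measured, the atom carries both possibilities
```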
In an experiment like that in figure a, where a photon is fired at a half-silvered mirror, it can be shown that the photon does not actually split, by verifying that if one detector registers a signal, then no other detector does. With this piece of information, one might think that any given photon travels vertically or horizontally, randomly choosing between the two paths. However, quantum mechanics predicts that the photon actually travels both paths simultaneously, collapsing down to one path only upon measurement. This effect, known as single-particle interference, can be better illustrated in a slightly more elaborate experiment, outlined in figure b.
Figure b depicts an interesting experiment that demonstrates the phenomenon of single-particle interference. In this case, experiment shows that the photon always reaches detector A, never detector B! If a single photon travels vertically and strikes the mirror, then, by comparison to the experiment in figure a, there should be an equal probability that the photon will strike either detector A or detector B. The same goes for a photon traveling down the horizontal path. However, the actual result is drastically different. The only conceivable conclusion is therefore that the photon somehow traveled both paths simultaneously, creating interference at the point of intersection that destroyed the possibility of the signal reaching B. This is known as quantum interference and results from the superposition of the possible photon states, or potential paths. So although only a single photon is emitted, it appears as though an identical photon exists and travels the 'path not taken,' only detectable by the interference it causes with the original photon when their paths come together again. If, for example, either of the paths is blocked with an absorbing screen, then detector B begins registering hits again, just as in the first experiment!
This unique characteristic, among others, makes the current research in quantum computing not merely a continuation of today's idea of a computer, but rather an entirely new branch of thought. And it is because quantum computers harness these special characteristics that they have the potential to be incredibly powerful computational devices.
The most exciting really new feature of quantum computing is quantum parallelism. A quantum system is in general not in one "classical state", but in a "quantum state" consisting (crudely speaking) of a superposition of many classical or classical-like states. This superposition is not just a figure of speech, covering up our ignorance of which classical-like state it's "really" in. If that was all the superposition meant, you could drop all but one of the classical-like states (maybe only later, after you deduced retrospectively which one was "the right one") and still get the time evolution right. But actually you need the whole superposition to get the time evolution right. The system really is, in some sense, in all the classical-like states at once! If the superposition can be protected from unwanted entanglement with its environment (known as decoherence), a quantum computer can output results depending on details of all its classical-like states. This is quantum parallelism - parallelism on a serial machine. And if that wasn't enough, machines that would already, in architectural terms, qualify as parallel can benefit from quantum parallelism too - at which point the mind begins to seriously boggle!
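The interferometer experiment of figure b can be sketched numerically. The model below is a simplification: the half-silvered mirror is represented by a real Hadamard-like matrix (a physical beam splitter introduces complex phases, but the interference bookkeeping is the same):

```python
import numpy as np

# Idealized interferometer: two beam splitters in sequence.
# A simplification -- real beam splitters carry complex phases.
B = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

photon = np.array([1.0, 0.0])             # photon enters on one path
after_both = B @ (B @ photon)             # passes both beam splitters
probs_open = np.abs(after_both) ** 2      # [1, 0]: always detector A

# Block one path between the splitters with an absorbing screen:
blocked = B @ photon                      # superposition after first splitter
blocked[1] = 0.0                          # the screen removes one path
probs_blocked = np.abs(B @ blocked) ** 2  # [0.25, 0.25]: detector B fires again
```

With both paths open the amplitudes interfere destructively at detector B, so it never fires; blocking one path destroys the interference and B registers hits again, exactly as described above.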
The Potential and Power of Quantum Computing
Let us start by describing the problem at hand: factoring a number N into its prime factors (e.g., the number 51688 may be decomposed as 2^3 x 7 x 13 x 71). A convenient way to quantify how quickly a particular algorithm may solve a problem is to ask how the number of steps to complete the algorithm scales with the size of the ``input'' the algorithm is fed. For the factoring problem, this input is just the number N we wish to factor; hence the length of the input is log N. (The base of the logarithm is determined by our numbering system. Thus a base of 2 gives the length in binary; a base of 10 in decimal.) `Reasonable' algorithms are ones which scale as some small-degree polynomial in the input size (with a degree of perhaps 2 or 3).
On conventional computers the best known factoring algorithm runs in exp[O((ln N)^(1/3) (ln ln N)^(2/3))] steps. This algorithm, therefore, scales exponentially with the input size log N. For instance, in 1994 a 129 digit number (known as RSA129) was successfully factored using this algorithm on approximately 1600 workstations scattered around the world; the entire factorization took eight months. Using this to estimate the prefactor of the above exponential scaling, we find that it would take roughly 800,000 years to factor a 250 digit number with the same computer power; similarly, a 1000 digit number would require a time significantly longer than the age of the universe. The difficulty of factoring large numbers is crucial for public-key cryptosystems, such as ones used by banks. There, such codes rely on the difficulty of factoring.
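The gap between these scalings can be sketched numerically. The constants below are chosen purely for illustration (the passage does not give them); only the shape of the growth matters:

```python
import math

# Step counts, up to illustrative constants, as a function of the
# number of digits d of N (so ln N ~ d * ln 10):
# - the sub-exponential scaling of the best known classical algorithm,
# - a small-degree polynomial, as for a `reasonable' algorithm.
def sieve_steps(d):
    ln_n = d * math.log(10)
    return math.exp(1.9 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def poly_steps(d, degree=3):
    return d ** degree

# Doubling the input size blows up the ratio between the two:
ratio_130 = sieve_steps(130) / poly_steps(130)
ratio_260 = sieve_steps(260) / poly_steps(260)
# ratio_260 is many orders of magnitude larger than ratio_130
```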
Active In SP
Joined: Feb 2011
12-03-2011, 12:10 PM
QUANTUM COMPUTING.doc (Size: 413.5 KB / Downloads: 180)
1.1 GENERAL INTRODUCTION
Quantum computing is a combination of physics, mathematics and computer science. Quantum algorithms can exponentially “speed up” classical computation.
The basic paradigm for quantum algorithms is the quantum circuit model, which is composed of the basic quantum units of information (qubits) and quantum gates.
By interacting with each other while being isolated from the external environment, qubits can perform certain calculations exponentially faster than conventional computers.
The quantum computer, following the laws of quantum physics, would gain enormous processing power through the ability to be in multiple states, and to perform tasks using all possible permutations simultaneously.
By doing a computation on many different numbers at once, then interfering the results to get a single answer, a quantum computer has the potential to be much more powerful than a classical computer of the same size.
1.2 QUANTUM ALGORITHMS
The main quantum algorithms are:
Quantum circuit based algorithm
The Deutsch Oracle
The Deutsch-Jozsa Oracle
The Simon Oracle
Measurement based algorithm
Topological quantum field theory (TQFT) algorithm
1.3 ENEMIES OF QUANTUM COMPUTING
There are two known enemies of quantum computing:
a) Decoherence
If we keep on putting quantum gates together into circuits we will quickly run into some serious practical problems. The more interacting qubits are involved, the harder it tends to be to engineer the interaction that would display the quantum interference. Apart from the technical difficulties of working at single-atom and single-photon scales, one of the most important problems is that of preventing the surrounding environment from being affected by the interactions that generate quantum superpositions. The more components there are, the more likely it is that the quantum computation will spread outside the computational unit and irreversibly dissipate useful information to the environment. This process is called “decoherence”. Even though we try to isolate the quantum system from the environment as much as we can, we cannot achieve total isolation. The interaction of the quantum system with the environment therefore results in decoherence of the quantum state, which is equivalent to a partial measurement of the state by the environment.
b) Gate Inaccuracies
Decoherence is not the only problem in quantum computing. Gates, whether classical or quantum, are not perfect. Gates are usually combined together, so small errors in individual gates can accumulate during a computation and eventually cause failure, and it is not clear how to correct these small errors.
The simplest example of an error correcting code is a repetition code: we replace the bit we want to protect with 3 copies of that bit,
0 → (000)
1 → (111)
Now an error may occur that causes one of the three bits to flip; if it's the first bit, say,
(000) → (100)
(111) → (011)
In spite of the error, the bit can still be decoded correctly, by majority voting.
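The encode / flip / majority-vote cycle above can be sketched in a few lines:

```python
# Minimal sketch of the 3-bit repetition code described in the text.
def encode(bit):
    """0 -> (0,0,0), 1 -> (1,1,1)"""
    return (bit, bit, bit)

def decode(triple):
    """Majority vote: recovers the bit despite any single flip."""
    return 1 if sum(triple) >= 2 else 0

codeword = encode(0)              # (0, 0, 0)
corrupted = (1,) + codeword[1:]   # first bit flips: (1, 0, 0)
recovered = decode(corrupted)     # majority voting still yields 0
```

A single flip is always outvoted by the two untouched copies; only two or more simultaneous flips defeat the code.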
CONCEPTS OF QUANTUM COMPUTING
2.1 ELEMENTS OF QUANTUM COMPUTING
Generally we’ll think of a quantum computer as a classical computer with a quantum circuit attached to it, with some kind of interface between conventional and quantum logic. Since there are only a few things a quantum computer does better than a classical computer, it makes sense to do the bulk of the processing on the classical machine.
1) Bits and Qubits
These are the “nuts and bolts” of quantum computing: qubits, gates, and circuits. Quantum computers perform operations on qubits, which are analogous to conventional bits but have the additional property that they can be in a superposition.
A quantum register with 3 qubits can store 8 numbers in superposition simultaneously, and a 250 qubit register holds more numbers (superposed) than
there are atoms in the universe.
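A minimal sketch of such a register, using the standard state-vector picture: three Hadamard gates applied to |000> produce a vector of 8 equal amplitudes, one for each of the numbers 0 through 7:

```python
import numpy as np

# Three Hadamard gates applied to |000> fill an 8-amplitude register.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
H3 = np.kron(np.kron(H, H), H)       # the 8x8 operator H (x) H (x) H

zero3 = np.zeros(8)
zero3[0] = 1.0                       # the state |000>
register = H3 @ zero3                # uniform superposition of 0..7
# all 8 amplitudes equal 1/sqrt(8): the register holds 0..7 at once
```

Each extra qubit doubles the length of this vector, which is why the number of superposed values grows exponentially with the register size.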
Representation of data-qubits
2) Single Qubit
Classical computers use two discrete states to represent a unit of information; this unit is called a binary digit (or bit for short). A bit has the following two values:
0 and 1
There is no intermediate state between them, i.e. the value of the bit cannot be in a superposition.
Quantum bits, or qubits, can on the other hand be in a state ”between” 0 and 1, but only during the computational phase of a quantum operation. When measured, a qubit collapses to either:
|0> or |1>
The | > symbolic notation is part of the Dirac notation.
3) Multiple Qubit
The potential amount of information available during the computational phase grows exponentially with the size of the system, i.e. the number of qubits. This is because if we have n qubits the number of basis states is 2^n. E.g. if we have two qubits, forming a quantum register, then there are four (= 2^2) computational basis states: |00>, |01>, |10> and |11>.
Here |01> means that qubit 1 is in state |0> and qubit 2 is in state |1>, etc.
2.2 CONCEPTS OF QUANTUM COMPUTING
The following concepts are important for quantum computing:
Superposition means a system can be in two or more of its states simultaneously. For example a single particle can be traveling along two different paths at once. This implies that the particle has wave-like properties, which can mean that the waves from the different paths can interfere with each other. Interference can cause the particle to act in ways that are impossible to explain without these wave-like properties.
The ability for the particle to be in a superposition is where we get the parallel nature of quantum computing: If each of the states corresponds to a different value then, if we have a superposition of such states and act on the system, we effectively act on all the states simultaneously.
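A small illustration of this parallelism in the state-vector picture: a single matrix multiplication updates every amplitude at once. The chosen operation, x → (x + 1) mod 4 on a 2-qubit register, is an arbitrary example:

```python
import numpy as np

# Permutation matrix for x -> (x + 1) mod 4 on a 2-qubit (4-state) register.
n = 4
inc = np.zeros((n, n))
for x in range(n):
    inc[(x + 1) % n, x] = 1.0

# A superposition holding the values 0 and 1 with different amplitudes:
superposition = np.array([0.8, 0.6, 0.0, 0.0])
shifted = inc @ superposition     # -> [0, 0.8, 0.6, 0]
# one matrix product incremented *both* superposed values in a single step
```

The permutation matrix is unitary (its inverse is its transpose), so this "act on all states at once" operation is also reversible, as quantum evolution requires.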
In 1935 Einstein (along with colleagues Podolsky and Rosen) demonstrated a paradox (named EPR after them) in an attempt to refute the undefined nature of quantum systems. The results of their thought experiment seemed to show that quantum systems were defined, having local state BEFORE measurement. Although the original hypothesis was later proven wrong (i.e. it was proven that quantum systems do not have local state before measurement), the effect they demonstrated was still important, and later became known as entanglement.
Entanglement is the ability of pairs of particles to interact over any distance instantaneously. Particles don't exactly communicate, but there is a statistical correlation between the results of measurements on each particle that is hard to understand using classical physics. To become entangled, two particles are allowed to interact; they then separate and, on measuring, say, the velocity of one of them (regardless of the distance between them), we can be sure of the value of the velocity of the other one (before it is measured). The reason we say that they communicate instantaneously is that they store no local state and only have well defined state once they are measured. Because of this limitation, entangled particles can't be used to transmit classical messages faster than the speed of light, as we only know the states upon measurement. Entanglement has applications in a wide variety of quantum algorithms and machinery.
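The perfect correlation can be mimicked with a toy sampler. This sketch only reproduces the measurement statistics of the Bell state (|00> + |11>)/√2; it does not capture the genuinely non-classical features that distinguish entanglement from a shared coin flip:

```python
import random

# Toy sampler for the Bell state (|00> + |11>)/sqrt(2): each joint
# measurement yields 00 or 11 with equal probability, never 01 or 10,
# no matter how far apart the two qubits are.
def measure_bell_pair(rng):
    outcome = rng.choice([0, 1])   # 50/50 collapse of the first qubit
    return outcome, outcome        # the partner always agrees

rng = random.Random(0)             # seeded for reproducibility
samples = [measure_bell_pair(rng) for _ in range(1000)]
all_correlated = all(a == b for a, b in samples)   # perfect correlation
```

Each individual outcome is random, which is why no classical message can be sent this way; only when the two records are compared does the correlation show up.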
The quantum world is irreducibly small so it’s impossible to measure a quantum
system without having an effect on that system as our measurement device is also quantum mechanical. As a result there is no way of accurately predicting all of the properties of a particle. There is a trade off - the properties occur in complementary pairs (like position and momentum, or vertical spin and horizontal spin) and if we know one property with a high degree of certainty then we must know almost nothing about the other property.
That unknown property’s behaviour is essentially random. An example of this is a particle’s position and velocity: if we know exactly where it is then we know nothing about how fast it is going. This indeterminacy is exploited in quantum cryptography.
Active In SP
Joined: Feb 2011
16-03-2011, 02:09 PM
quantumComputers.ppt (Size: 223.5 KB / Downloads: 213)
What is a quantum computer?
A quantum computer is a machine that performs calculations based on the laws of quantum mechanics, which is the behavior of particles at the sub-atomic level.
“I think I can safely say that nobody understands quantum mechanics” - Feynman
1982 - Feynman proposed the idea of creating machines based on the laws of quantum mechanics instead of the laws of classical physics.
1985 - David Deutsch developed the quantum Turing machine, showing that quantum circuits are universal.
1994 - Peter Shor came up with a quantum algorithm to factor very large numbers in polynomial time.
1997 - Lov Grover developed a quantum search algorithm with O(√N) complexity
Representation of Data - Qubits
A bit of data is represented by a single atom that is in one of two states denoted by |0> and |1>. A single bit of this form is known as a qubit
A physical implementation of a qubit could use the two energy levels of an atom. An excited state representing |1> and a ground state representing |0>.
In general, an n qubit register can represent the numbers 0 through 2^n-1 simultaneously.
Sound too good to be true?…It is!
If we attempt to retrieve the values represented within a superposition, the superposition randomly collapses to represent just one of the original values.
Relationships among data – Entanglement
Entanglement is the ability of quantum systems to exhibit correlations between states within a superposition.
Imagine two qubits, each in the state |0> + |1> (a superposition of 0 and 1). We can entangle the two qubits such that the measurement of one qubit is always correlated to the measurement of the other qubit.
Quantum Gates are similar to classical gates, but do not have a degenerate output, i.e. their original input state can be derived uniquely from their output state. They must be reversible.
This means that a deterministic computation can be performed on a quantum computer only if it is reversible. Luckily, it has been shown that any deterministic computation can be made reversible.(Charles Bennet, 1973)
Quantum Gates – Hadamard
The simplest gate involves one qubit and is called the Hadamard gate (sometimes loosely called a square-root of NOT gate). It is used to put qubits into superposition.
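In the state-vector picture the Hadamard gate is a small matrix, and its self-inverse property (H·H = I) illustrates the reversibility demanded above:

```python
import numpy as np

# The Hadamard gate: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2).
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

zero = np.array([1.0, 0.0])
plus = H @ zero    # equal superposition of |0> and |1>
back = H @ plus    # H is its own inverse: returns exactly to |0>
```

Applying the gate once puts the qubit into superposition; applying it again undoes that, so no information is ever destroyed.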
Active In SP
Joined: May 2011
13-05-2011, 10:35 PM
May I know the author of this article?
Thinking To Register
22-09-2012, 07:13 AM
I want a seminar and presentation report on quantum computing under nanotechnology.