Sunday, August 30, 2009

Reversible Computing: Sometimes the best step forward is backward


I have been fascinated with the idea of Reversible Computing since being introduced to it recently while reading Ray Kurzweil's book on the Singularity. This post is a quick primer on the subject for folks discovering it late (like me). One exciting reason for pursuing reversible computing is that it offers a way to build extremely energy-efficient computers. As per the von Neumann-Landauer limit, every irreversible bit operation releases energy of at least kT ln 2 (k being Boltzmann's constant and T the absolute temperature). The key idea is to not destroy input states but simply move them around, releasing no additional information/energy to the environment.
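To get a feel for how tiny this limit is, here is a quick calculation of kT ln 2 at room temperature (300 K is an assumed value for T):

```python
import math

# Landauer limit: minimum energy dissipated per irreversible bit
# operation, E = k * T * ln(2).
k = 1.380649e-23  # Boltzmann's constant, in J/K
T = 300.0         # assumed room temperature, in K

energy_per_bit = k * T * math.log(2)
print(f"Landauer limit at {T} K: {energy_per_bit:.3e} J per bit erased")
```

This comes out to roughly 3e-21 joules per bit, far below what today's hardware dissipates per operation, which is exactly why the limit is interesting as a long-term target.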

A necessary condition for reversibility is that the transition function mapping input states to output states be one-to-one. This makes sense: if the function were many-to-one, it would not be possible to recreate state t from state t+1. It's easy to see that the AND and OR gates are not reversible (for OR, the inputs 01, 10 and 11 all map to 1). In an ordinary gate, input states are lost since there is less information in the output than in the input, and this lost information is released as heat. Since charges are grounded and flow away, energy is lost. In a reversible gate, input information is not lost and thus energy need not be released.

The NOT gate is reversible, and Landauer (at IBM) showed that the NOT operation could be performed without putting energy in or taking heat out. Think about the implications of that for a second. But the NOT gate is, of course, not universal.

The Fredkin & Toffoli gates are examples of reversible gates that are also universal.
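The Toffoli gate (controlled-controlled-NOT) is easy to sketch: it flips its third bit only when both control bits are 1. A minimal Python model (the function name and tuple representation are my own choices, not standard notation):

```python
def toffoli(a, b, c):
    """Controlled-controlled-NOT: flips target c iff both controls a, b are 1."""
    return a, b, c ^ (a & b)

# Universality in action: fixing the target bit c to 0 makes the
# third output equal to a AND b, so Toffoli can emulate AND.
assert toffoli(1, 1, 0) == (1, 1, 1)
assert toffoli(1, 0, 0) == (1, 0, 0)

# Toffoli is its own inverse: applying it twice restores the input.
assert toffoli(*toffoli(1, 0, 1)) == (1, 0, 1)
```

The trick of fixing one input to a constant to recover an irreversible gate (AND here) is the standard way reversible gates simulate ordinary Boolean logic.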

The Fredkin gate is a simple controlled-swap gate: if the control bit is 1, the other two input bits are swapped in the output. The control bit itself passes through unchanged.



The logic function is, for inputs A, B, C and outputs P, Q, R (A being the control bit):
P = A
Q = B XOR Swap
R = C XOR Swap
where Swap = (B XOR C) AND A
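These equations translate directly into a few lines of Python (a sketch; the function name and bit-tuple representation are my own):

```python
def fredkin(a, b, c):
    """Fredkin (controlled-swap) gate: if control bit a is 1, swap b and c."""
    swap = (b ^ c) & a          # Swap = (B XOR C) AND A
    return a, b ^ swap, c ^ swap  # P = A, Q = B XOR Swap, R = C XOR Swap

# Control bit set: b and c are swapped.
assert fredkin(1, 0, 1) == (1, 1, 0)
# Control bit clear: inputs pass through unchanged.
assert fredkin(0, 0, 1) == (0, 0, 1)
```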

| A | B | C | P | Q | R |
|---|---|---|---|---|---|
| 1 | 0 | 0 | 1 | 0 | 0 |
| 1 | 0 | 1 | 1 | 1 | 0 |
| 1 | 1 | 0 | 1 | 0 | 1 |
| 1 | 1 | 1 | 1 | 1 | 1 |
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 0 | 1 |
| 0 | 1 | 0 | 0 | 1 | 0 |
| 0 | 1 | 1 | 0 | 1 | 1 |

Points to note from the truth table:
1. The number of 0s and 1s is preserved from input to output, showing
how the model is not wasteful.

2. If you feed the output back through the gate, you recover the input
state that created it (trivially so for the last four rows, where
nothing is swapped).
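Both observations can be checked exhaustively over all eight input states. A quick sketch (the gate definition restates the logic equations above so the check is self-contained):

```python
from itertools import product

def fredkin(a, b, c):
    # Controlled swap: if control bit a is 1, swap b and c.
    swap = (b ^ c) & a
    return a, b ^ swap, c ^ swap

states = list(product((0, 1), repeat=3))
outputs = [fredkin(*s) for s in states]

# 1. Conservative: every output has the same number of 1s as its input.
assert all(sum(s) == sum(o) for s, o in zip(states, outputs))

# 2. Reversible: the mapping is a bijection, and the Fredkin gate is its
#    own inverse -- feeding an output back in recovers the input.
assert sorted(outputs) == states
assert all(fredkin(*fredkin(*s)) == s for s in states)

print("Fredkin gate is conservative and reversible on all 8 states")
```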

So why is reversible computing interesting?

Current computing paradigms rely on irreversible computing: we destroy input states as we move to subsequent states, storing only the intermediate results we need. When we selectively erase input information, energy is released as heat into the surrounding environment, increasing its entropy. With reversible computing the input bit stays inside the computer and merely changes location (see the truth table above), so no heat is released into the environment and no energy is required from outside.

Some caveats, as pointed out by Ray:
1. Even though in theory energy might not be required for the computation itself, we will still need energy to transmit the results of the computation, which is an irreversible process.

2. Logic operations have an inherent error rate. Standard error-detection and correction codes are irreversible, and these will dissipate energy.

Even if we can't reach the ideal theoretical limit of reversible computing in terms of energy-efficient processing, we can get close, which should be extremely exciting!


[Image courtesy: http://bit-player.org/wp-content/Fredkingate.png,
http://www.ocam.cl/blog/wp-content/uploads/2007/11/computer_cooling.jpg]