
COMPUTING SCIENCE

Reverse Engineering

If computers are to run faster, they may need to run both forward and backward

Brian Hayes

Back to the Future

Making a computer run backwards is not just a problem for hardware hackers and logic designers. There are also issues at higher levels of abstraction. What will it be like to write and run a program on a machine that can bounce back and forth in time?

Figure: The evolution of a computational process.

Most of us already have some experience with a rudimentary form of reversibility—namely the undo command. Even this simple facility for unwinding a computation has its semantic hazards. (Should "undo undo" leave you two steps back, or none?) In many programs, undo is a bit of a sham; it doesn't turn the program around but just revisits saved snapshots. Software that really runs in reverse has some harder problems to solve.
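To make the snapshot style concrete, here is a minimal sketch (my own illustration, not any particular program's machinery) of undo implemented by saving copies of past states rather than by reversing operations:

```python
class SnapshotUndo:
    """Undo by remembering snapshots of state, not by running operations backward."""

    def __init__(self, state):
        self.state = state
        self.history = []                 # saved snapshots, oldest first

    def do(self, new_state):
        self.history.append(self.state)   # remember where we were
        self.state = new_state

    def undo(self):
        if self.history:
            self.state = self.history.pop()
```

In this design the undo itself leaves no record, so "undo undo" steps back twice; a design that logs the undo as just another operation would instead turn the second undo into a redo. Neither choice reverses the computation, it merely revisits stored states.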

Consider the matter of rounding. Most numerical calculations are done with limited precision; all numbers have to fit into 32 bits, say. When that's not enough room, the least-significant digits are discarded. But dropping digits is a no-no in the world of reversible computing. If 1/3 gets converted into 0.33333, then reversing the process must somehow yield exactly 1/3 again.
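A short sketch (in Python, with an exact rational standing in for 1/3) shows why rounding is a one-way street: distinct inputs collapse onto the same rounded value, so nothing in the result says which one you started from.

```python
from fractions import Fraction

x = Fraction(1, 3)
y = Fraction(33333, 100000)            # a different number entirely

assert round(float(x), 5) == 0.33333   # both inputs round to the same
assert round(float(y), 5) == 0.33333   # five-digit value, so the rounding
assert x != y                          # step cannot be run backward
```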

Even arithmetic with exact numbers can get us into trouble, if one of those numbers happens to be zero. Division by zero is forbidden everywhere in civil society; in a reversible machine, we might also have to outlaw multiplication by zero, because it amounts to erasure, yielding a result of zero no matter what the value of the multiplicand. Personally, I rather like this idea of treating division and multiplication symmetrically, but it runs counter to hundreds of years of habit.
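The asymmetry shows up in a line or two of code (an illustrative sketch, not drawn from the article):

```python
x = 7
y = x * 3    # reversible: y / 3 recovers exactly the x we started with
z = x * 0    # z is 0 no matter what x was; undoing it would mean computing 0 / 0
```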

Henry G. Baker, in a thoughtful essay that touches on several issues of reversibility, cites another example: Newton's method for approximating square roots. In Newton's algorithm, you approach the square root of N by starting with an arbitrary guess, x, and then calculating (x + N/x)/2. The result of this expression becomes a new guess, and you repeat the procedure until you reach the desired accuracy. Almost any guess will converge on the true root; in other words, the algorithm "exemplifies a large class of computations in which the output appears to be independent of the input." The method also appears to be highly irreversible, since all those different inputs lead to the same output. If the computation were reversed, how could the algorithm find its way back from the one output to each of the many possible inputs? But in fact it can (at least if the initial x is greater than the square root of N). If the computation is done without rounding or truncating intermediate results, all of the information needed to reverse the process is preserved in the "insignificant" decimal places of the answers. Each initial guess can be reconstructed exactly.
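Baker's claim can be checked directly if we assume exact rational arithmetic; the sketch below uses Python's fractions module, and the helper names newton_step and newton_unstep are my own. Solving y = (x + N/x)/2 for x gives x = y + sqrt(y&sup2; - N) on the branch where x > &radic;N, and with exact numbers that square root is itself an exact rational, so each earlier guess can be reconstructed.

```python
from fractions import Fraction
from math import isqrt

def newton_step(x, N):
    """One forward step of Newton's method for sqrt(N), in exact arithmetic."""
    return (x + N / x) / 2

def newton_unstep(y, N):
    """Invert one step, recovering the previous guess (assumes that guess > sqrt(N)).
    If y came from a forward step, y*y - N equals ((x - N/x)/2)**2 exactly,
    so its numerator and denominator are perfect squares."""
    d = y * y - N
    root = Fraction(isqrt(d.numerator), isqrt(d.denominator))
    return y + root

N = Fraction(2)
x0 = Fraction(7, 4)                    # an arbitrary starting guess > sqrt(2)
x1 = newton_step(x0, N)
x2 = newton_step(x1, N)
assert newton_unstep(x2, N) == x1
assert newton_unstep(x1, N) == x0      # the original guess comes back exactly
```

Round any intermediate result to a fixed number of digits, however, and the "insignificant" decimal places that encode the starting guess are gone, along with the reversibility.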

Baker and others also point out that a reversible computer would be a wonderful device for debugging programs. With conventional hardware, you can set a breakpoint to stop a program when it goes off the rails, but then it's a struggle to figure out what path brought it to the trouble spot. With a reversible device, you can just back it up. Running a compiler in reverse might also be fun. Generating source code from an executable program is a handy trick (which the vendors of commercial software would no doubt want to disable).
