
COMPUTING SCIENCE

Accidental Algorithms

A strange new family of algorithms probes the boundary between easy and hard problems

Brian Hayes

Conscientious Cheating

The letters NP might well be translated "notorious problems," but the abbreviation actually stands for "nondeterministic polynomial." The term refers to a hypothetical computing machine that can solve problems through systematic guesswork. For the problems in NP, you may or may not be able to compute an answer in polynomial time, but if you happen to guess the answer, or if someone whispers it in your ear, then you can quickly verify its correctness. NP is the complexity class for conscientious cheaters—students who don't do their own homework but who at least check their cribbed answers before they turn them in.

Detecting a Hamiltonian circuit is one example of a problem in NP. Even though I don't know how to solve the problem efficiently for all graphs, if you show me a purported Hamiltonian circuit, I can readily check whether it passes through every vertex exactly once.


(Note that this verification scheme works only when the answer to the decision problem is "yes." If you claim that a graph doesn't have a Hamiltonian circuit, the only way to prove it is to enumerate all possible paths.)
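The asymmetry between finding and checking can be made concrete. Here is a minimal sketch of such a verifier (the function and variable names are my own, not from the article): given a graph's edge list and a claimed circuit, it confirms in polynomial time that the circuit visits every vertex once and uses only genuine edges.

```python
def is_hamiltonian_circuit(edges, circuit):
    """Verify a claimed Hamiltonian circuit in polynomial time.

    `edges` is a list of vertex pairs; `circuit` is an ordered list of
    vertices, implicitly closing back to its starting point.
    """
    vertices = {v for edge in edges for v in edge}
    # Every vertex of the graph must appear exactly once in the circuit.
    if sorted(circuit) != sorted(vertices):
        return False
    edge_set = {frozenset(edge) for edge in edges}
    # Each consecutive pair (wrapping around) must be an actual edge.
    n = len(circuit)
    return all(frozenset((circuit[i], circuit[(i + 1) % n])) in edge_set
               for i in range(n))

# A square with one diagonal: 0-1-2-3 (and back to 0) is Hamiltonian.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(is_hamiltonian_circuit(edges, [0, 1, 2, 3]))  # True
print(is_hamiltonian_circuit(edges, [0, 1, 3, 2]))  # False: 1-3 is not an edge
```

The check costs time proportional to the size of the graph, which is what puts the problem in NP; nothing here helps you find a circuit when no one has whispered one in your ear.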

Within the class NP dwells the elite group of problems labeled NP-complete. They have an extraordinary property: If any one of these problems has a polynomial-time solution, then that method can be adapted to quickly solve all problems in NP (both the complete ones and the rest). In other words, such an algorithm would establish that P = NP. The two categories would merge.

The very concept of NP-completeness has a whiff of the miraculous about it. How can you possibly be sure that a solution to one problem will work for every other problem in NP as well? After all, you can't even know in advance what all those problems are. The answer is so curious and improbable that it's worth a brief digression.

The first proof of NP-completeness, published in 1971 by Stephen A. Cook of the University of Toronto, concerns a problem called satisfiability. You are given a formula in Boolean logic, constructed from a set of variables, each of which can take on the values true or false, and the logical connectives AND, OR and NOT. The decision problem asks: Is there a way of assigning true and false values to the variables that makes the entire formula true? With n variables there are 2^n possible assignments, so the brute-force approach is exponential and unappealing. But a lucky guess is easily verified, so the problem qualifies as a member of NP.
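The gap between the 2^n search and the quick check is easy to exhibit in code. In this sketch (the clause representation is my own choice, not the article's), a formula is written in conjunctive normal form: an AND of clauses, each clause an OR of literals, each literal a variable paired with the truth value that satisfies it.

```python
from itertools import product

def satisfies(clauses, assignment):
    """Polynomial-time check: does this assignment make the formula true?

    Each clause is a list of (variable, wanted_value) literals; the
    formula is the AND of its clauses, each clause an OR of its literals.
    """
    return all(any(assignment[var] == wanted for var, wanted in clause)
               for clause in clauses)

def brute_force_sat(clauses, variables):
    """Exponential search: try all 2^n true/false assignments."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if satisfies(clauses, assignment):  # the fast NP-style check
            return assignment
    return None

# (x OR y) AND (NOT x OR y) AND (NOT y OR x) -- satisfied by x = y = true.
clauses = [[("x", True), ("y", True)],
           [("x", False), ("y", True)],
           [("y", False), ("x", True)]]
print(brute_force_sat(clauses, ["x", "y"]))  # {'x': True, 'y': True}
```

Each call to `satisfies` runs in time proportional to the formula's length; it is the loop over all assignments that blows up exponentially, which is precisely the distinction between verifying a guess and solving the problem.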

Cook's proof of NP-completeness is beautiful in its conception and a shambling Rube Goldberg contraption in its details. The key insight is that Boolean formulas can describe the circuits and operations of a computer. Cook showed how to write an intricate formula that encodes the entire operation of a computer executing a program to guess a solution and check its correctness. If and only if the Boolean formula has a satisfying assignment, the simulated computer program succeeds. Thus if you could determine in polynomial time whether or not any Boolean formula is satisfiable, you could also solve the encoded decision problem. The proof doesn't depend on the details of that problem, only on the fact that it has a polynomial-time checking procedure.

Thousands of problems are now known to be NP-complete. They form a vast fabric of interdependent computations. Either all of them are hard, or everything in NP is easy.



