COMPUTING SCIENCE

# On the Threshold

Where the Hard Problems Are

The new algorithm weaves together threads from at least three disciplines: mathematics, computer science and physics. The theme that binds them all together is the presence of sudden transitions from one kind of behavior to another.

The mathematical thread begins in the 1960s with the study of random graphs, initiated by Paul Erdős and Alfréd Rényi. In this context a graph is not a chart or plot but a more abstract mathematical structure—a collection of vertices and edges, generally drawn as a network of dots (the vertices) and connecting lines (the edges). To draw a random graph, start by sprinkling *n* vertices on the page, then consider all possible pairings of the vertices, choosing randomly with probability *p* whether or not to draw an edge connecting each pair. When *p* is near 0, edges are rare, and the graph consists of many small, disconnected pieces, or components. As *p* increases, the graph comes to be dominated by a single "giant" component, which includes most of the vertices. The existence of this giant component is hardly a surprise, but the manner in which it develops is not obvious. The component does not evolve gradually as *p* increases but emerges suddenly when a certain threshold is crossed. The threshold is defined by a simple ratio: the number of edges divided by the number of vertices. The giant component is born when this ratio is about 1/2.
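The sudden emergence of the giant component is easy to see in a small simulation. The sketch below (an illustrative Python experiment, not from the article) samples Erdős–Rényi random graphs at several edge-to-vertex ratios and reports what fraction of the vertices the largest component holds.

```python
import random

def random_graph(n, p, rng):
    """Erdős–Rényi G(n, p): include each of the n(n-1)/2 possible edges
    independently with probability p."""
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                edges.append((i, j))
    return edges

def largest_component(n, edges):
    """Size of the largest connected component, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

rng = random.Random(0)
n = 2000
for ratio in (0.25, 0.5, 0.75, 1.5):   # ratio = expected edges per vertex
    p = 2 * ratio / (n - 1)            # chosen so the expected edge count is ratio * n
    edges = random_graph(n, p, rng)
    # Below a ratio of 1/2 the largest component is a tiny fraction of the
    # graph; above it, a single component quickly comes to span most vertices.
    print(ratio, largest_component(n, edges) / n)
```

Running this shows the transition directly: the largest-component fraction stays near zero below the critical ratio and jumps toward 1 above it.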

In computer science, a similar threshold phenomenon came to widespread attention in the early 1990s. In this case the threshold governs the likelihood that certain computational problems have a solution. One of these problems comes straight out of graph theory: It is the *k*-coloring problem, which asks you to paint each vertex of a graph with one of *k* colors, under the rule that two vertices joined by an edge may not have the same color. Finding an acceptable coloring gets harder as the ratio of edges to vertices increases, because there are more edges imposing constraints on each vertex. Again, the threshold is sharp: Below a certain ratio, almost all graphs are *k*-colorable, and above it almost none are. Moreover, the threshold affects not only the existence of solutions but also the difficulty of finding them. The computational effort needed to decide whether a graph is *k*-colorable has a dramatic peak near the critical ratio. (An influential paper about this effect was aptly titled "Where the *really* hard problems are.")
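For concreteness, here is a minimal backtracking decision procedure for *k*-colorability (an illustrative sketch, not an algorithm from the article). This kind of exhaustive search is exactly what becomes so costly near the threshold, where neither a coloring nor a short proof of impossibility is easy to find.

```python
def k_colorable(n, edges, k):
    """Decide k-colorability by backtracking: assign colors vertex by vertex,
    never giving two adjacent vertices the same color."""
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    color = [None] * n

    def assign(v):
        if v == n:           # every vertex colored without conflict
            return True
        for c in range(k):
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if assign(v + 1):
                    return True
        color[v] = None      # dead end: undo and backtrack
        return False

    return assign(0)

# A triangle needs 3 colors; a 4-cycle needs only 2.
print(k_colorable(3, [(0, 1), (1, 2), (2, 0)], 2))  # False
print(k_colorable(3, [(0, 1), (1, 2), (2, 0)], 3))  # True
```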

Physicists also know something about threshold phenomena; they call them phase transitions. But are the changes of state observed in random graphs and in constraint-satisfaction problems truly analogous to physical events such as the freezing of water and the onset of magnetization in iron? Or is the resemblance a mere coincidence? For a time there was controversy over this issue, but it's now clear that the threshold phenomena in graphs and other mathematical structures are genuine phase transitions. The tools and techniques of statistical physics are ideally suited to them. In particular, the *k*-coloring problem can be mapped directly onto a model of a magnetic system in solid-state physics. The survey-propagation algorithm draws on ideas developed originally to describe such physical models.
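The magnetic system in question is the antiferromagnetic Potts model: each vertex carries a "spin" that can take one of *k* states, and the energy simply counts the edges whose endpoints agree. A proper *k*-coloring is then precisely a zero-energy ground state. A minimal sketch of that energy function:

```python
def potts_energy(spins, edges):
    """Antiferromagnetic Potts energy: the number of edges whose two
    endpoints are in the same state. A proper k-coloring is exactly a
    zero-energy spin configuration."""
    return sum(1 for i, j in edges if spins[i] == spins[j])

triangle = [(0, 1), (1, 2), (2, 0)]
print(potts_energy([0, 1, 2], triangle))  # 0: a proper 3-coloring
print(potts_energy([0, 0, 1], triangle))  # 1: one monochromatic edge
```

Under this mapping, asking whether a graph is *k*-colorable becomes asking whether the Potts model on that graph has a ground state of energy zero, which is what lets the machinery of statistical physics be brought to bear.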
