COMPUTING SCIENCE

# How to Avoid Yourself

## Random Thoughts on Self-Avoidance

Unless a new algorithm comes along, exhaustive enumerations of self-avoiding walks seem unlikely to advance much beyond the current limit of *n*=51. Knowledge of longer walks has come mainly from random sampling. This process too is computationally intensive.

For most purposes, a random sample is meant to be selected with uniform probability from the set of all *n*-step self-avoiding walks. Unfortunately, the most obvious algorithm does not yield walks with this distribution. It's easy enough to build an *n*-step walk one step at a time by choosing directions at random; the question is what to do if the walk collides with itself before reaching *n* steps. The temptation is simply to back up one step and try another direction, but that practice leads to a biased sample of walks. To ensure a fair sample you have to abandon a failed walk entirely and start over.
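The restart-on-failure rule can be sketched in a few lines. In this minimal Python sketch (the function name and the square-lattice setup are illustrative, not from the article), every attempted walk chooses each step uniformly from the four lattice directions, so each completed *n*-step walk has the same probability; conditioning on success therefore yields a uniform sample of self-avoiding walks.

```python
import random

def sample_saw(n):
    """Sample an n-step self-avoiding walk on the square lattice
    by rejection: grow the walk one random step at a time, and on
    any self-intersection abandon the whole walk and start over
    (backing up one step instead would bias the sample)."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while True:
        walk = [(0, 0)]
        visited = {(0, 0)}
        for _ in range(n):
            x, y = walk[-1]
            dx, dy = random.choice(steps)
            nxt = (x + dx, y + dy)
            if nxt in visited:
                break          # collision: discard everything, retry
            walk.append(nxt)
            visited.add(nxt)
        else:
            return walk        # completed all n steps without collision
```

Because an immediate reversal always collides with the previous site, most attempts fail early; the cost of fairness is many discarded partial walks.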

My own experiments with random sampling have relied on the ternary-number representation. I choose an *n*-digit balanced-ternary number at random, then check the corresponding walk for self-intersections. If the walk fails the test, I generate a new random number and try again.
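The ternary scheme can be sketched as follows. Here each balanced-ternary digit is read as a relative turn (−1 left, 0 straight, +1 right); that digit-to-turn mapping and the initial northward heading are assumptions for illustration. Note that a turn-based encoding never produces an immediate reversal, so fewer candidates are wasted than with naive direction-by-direction sampling.

```python
import random

def walk_from_ternary(digits):
    """Interpret balanced-ternary digits (-1 = turn left, 0 = straight,
    +1 = turn right) as a square-lattice walk, one step per digit."""
    headings = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
    h, x, y = 0, 0, 0
    walk = [(0, 0)]
    for d in digits:
        h = (h + d) % 4           # turn relative to current heading
        dx, dy = headings[h]
        x, y = x + dx, y + dy
        walk.append((x, y))
    return walk

def is_self_avoiding(walk):
    return len(set(walk)) == len(walk)

def sample_saw_ternary(n):
    """Draw random n-digit balanced-ternary numbers until the
    corresponding walk passes the self-avoidance test."""
    while True:
        digits = [random.choice((-1, 0, 1)) for _ in range(n)]
        walk = walk_from_ternary(digits)
        if is_self_avoiding(walk):
            return walk
```

Every digit string has probability 3⁻ⁿ, and every self-avoiding walk corresponds to exactly one digit string, so the accepted walks are again uniformly distributed.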

Algorithms like this one readily produce large samples of 60- or 70-step walks, or smaller numbers of 100-step walks. As the walks get longer, however, the proportion of candidates that pass the self-avoidance test declines sharply. At *n*=100 you are proposing more than 50,000 walks for every one that turns out to be self-avoiding. At *n*=200 the acceptable walks would be rarer than one in a billion.

Other algorithms extend the range of exploration into the thousands of steps. Thirty years ago Zeev Alexandrowicz of the Weizmann Institute of Science suggested a method called dimerization, which exploits a divide-and-conquer strategy familiar in many other areas of computer science. Dimerization works because it's much easier to create two 50-step walks than a single 100-step walk. You build the two shorter walks and string them together end-to-end. Of course the two half-walks may collide, in which case you have to start over, but failure turns out to be much less likely than in the step-by-step technique. The procedure can be invoked recursively to build the 50-step walks from 25-step components, and so on. What's particularly sweet about this algorithm is that it lends itself to a very simple and transparent implementation; I found it easier to get right than the less-efficient step-by-step methods.
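The recursion bottoms out somewhere, and there the walks must be sampled directly. In this sketch (the base-case threshold and function names are my own choices, not Alexandrowicz's), short walks come from simple rejection sampling, and longer walks are glued together from two uniform halves, retrying whenever the halves overlap:

```python
import random

def sample_short(n):
    """Base case: sample a short self-avoiding walk by simple
    rejection (restart on any collision)."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while True:
        walk = [(0, 0)]
        for _ in range(n):
            x, y = walk[-1]
            dx, dy = random.choice(steps)
            nxt = (x + dx, y + dy)
            if nxt in walk:
                break
            walk.append(nxt)
        if len(walk) == n + 1:
            return walk

def dimerize(n, cutoff=3):
    """Dimerization: build an n-step walk from two half-length
    walks generated recursively; if the joined walk intersects
    itself, throw both halves away and start over."""
    if n <= cutoff:
        return sample_short(n)
    while True:
        first = dimerize(n // 2, cutoff)
        second = dimerize(n - n // 2, cutoff)
        x0, y0 = first[-1]          # translate the second half so it
        shifted = [(x0 + x, y0 + y) for x, y in second]  # starts at the
        combined = first + shifted[1:]                   # end of the first
        if len(set(combined)) == len(combined):
            return combined
```

The divide-and-conquer structure is what makes the implementation so transparent: the whole algorithm is the three-line loop of generate, concatenate, test.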

Another technique, called the pivot algorithm, also goes back 30 years; it was first described by Moti Lal of the Unilever Research Laboratory and more recently has been refined and extended by Neal Madras of York University and Alan D. Sokal of New York University. The pivot algorithm is quite different from all the others described here. It does not actually generate a self-avoiding walk but instead takes one walk and transforms it into another. The idea is to randomly choose a pivot point somewhere along the walk, and then rotate or reflect or reverse the segment on one side of the pivot. If the result is a self-avoiding path, the move is accepted; otherwise you choose a new pivot and try again. Successive walks in the sequence are highly correlated, but repeating the transformation many times wipes out all memory of former configurations.
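A single pivot move can be sketched like this. The transformation set here (the seven nontrivial symmetries of the square lattice) and the rule of returning the old walk when a move is rejected are standard choices, but the details are mine rather than a transcription of Lal's or Madras and Sokal's code:

```python
import random

# The seven nontrivial symmetries of the square lattice:
# three rotations and four reflections about the pivot.
SYMMETRIES = [
    lambda x, y: (-y, x),    # rotate 90 degrees
    lambda x, y: (-x, -y),   # rotate 180 degrees
    lambda x, y: (y, -x),    # rotate 270 degrees
    lambda x, y: (-x, y),    # reflect across vertical axis
    lambda x, y: (x, -y),    # reflect across horizontal axis
    lambda x, y: (y, x),     # reflect across diagonal
    lambda x, y: (-y, -x),   # reflect across anti-diagonal
]

def pivot_step(walk):
    """One attempted pivot move: choose a random interior pivot
    point, apply a random lattice symmetry to the segment beyond
    it, and accept the result only if it is still self-avoiding.
    On rejection the old walk is kept (and would be counted again
    when estimating averages)."""
    i = random.randrange(1, len(walk) - 1)
    px, py = walk[i]
    op = random.choice(SYMMETRIES)
    head = walk[:i + 1]
    tail = [(px + a, py + b)
            for a, b in (op(x - px, y - py) for x, y in walk[i + 1:])]
    new = head + tail
    if len(set(new)) == len(new):
        return new
    return walk
```

Starting from any self-avoiding walk, even a straight line, repeated pivot moves scramble the configuration; each accepted move changes the walk globally, which is why the chain forgets its starting point so quickly.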
