Traditional SETI searches suffer from two limitations: First, they assume intelligent aliens (if they exist) are trying to talk directly to us. Second, they assume that we’d recognize those messages if we found them. Recent advances in artificial intelligence (AI) are opening up exciting ways to reexamine decades of archived observation data in search of subtle anomalies that have been overlooked. This idea is at the heart of a new SETI strategy: scanning for anomalous patterns that are not necessarily communication signals, but rather are the by-products of a technologically advanced civilization going about its business. The goal is to develop a versatile and intelligent anomaly engine that can work out which data values and interconnecting patterns are unusual when compared with a baseline.
This strategy helps mitigate a struggle that has dogged SETI from the start: the tension between making assumptions about what you are looking for, so that you can search efficiently, and the intuition that our conception of technology is still very nascent, so the less we assume the better.
The AI anomaly engine assumes only that the activities of an alien civilization might have some detectable effect on our observable universe. Best of all, the anomaly engine is a win-win proposition: Even if a strange observation has nothing to do with alien technology, it demands an explanation that could expand our understanding of the natural universe.
For example, in the early morning of July 25, 2001, a powerful burst of radio energy, less than 5 milliseconds in duration, swept through the Solar System and washed over the Southern Hemisphere of Earth. This extraordinary event went unnoticed for more than six years. It was not until November 2007 that astronomer Duncan Lorimer and his research student stumbled across the evidence for this intense spike of radio energy. The evidence had been hiding in plain sight among the mountains of archived data from the Parkes radio telescope in Australia. Perhaps an AI anomaly engine could have found the evidence for these fast radio bursts much sooner. More important, other surprises that human eyes have missed may still be waiting to be uncovered in data archives.
Indeed, as the capabilities of AI improve, new computer applications, such as deep-learning models, are expected to intelligently isolate similar anomalies within the huge data archives that have been collected across space science disciplines.
However, anomaly detection within multivariate data remains a dark art for even the best human experts, so developing an intelligent and flexible anomaly engine will be no easy feat. One approach is to train a deep neural network as an autoencoder that flags unusual examples of data. The input data must be compressed down to flow through a pinch-point in the neural net, like sand flowing through the waist of an hourglass. The more often this AI system is shown data of a similar nature, the better it becomes at compressing and accurately restoring the information. But if it is shown data that are unusual in some way, the output will be a poor replica of the input, and the example can be flagged as anomalous.
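The hourglass idea can be sketched in a few lines of code. The following is a toy illustration only, not a real SETI pipeline: a linear autoencoder (NumPy, gradient descent) with a one-dimensional pinch-point is trained on synthetic "normal" data that lie near a line in five-dimensional space. The dimensions, learning rate, and synthetic data are all invented for the sketch. A point that fits the learned pattern is restored almost perfectly; a point unlike the training data is restored poorly, and its large reconstruction error is the anomaly flag.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" data: 500 points near a 1-D subspace of 5-D space.
t = rng.normal(size=(500, 1))
basis = rng.normal(size=(1, 5))
X = t @ basis + 0.01 * rng.normal(size=(500, 5))

# Tiny linear autoencoder: 5 inputs -> 1-unit pinch-point -> 5 outputs.
W_enc = 0.1 * rng.normal(size=(5, 1))
W_dec = 0.1 * rng.normal(size=(1, 5))
lr = 0.01
for _ in range(2000):
    Z = X @ W_enc                 # compress through the pinch-point
    X_hat = Z @ W_dec             # attempt to restore the input
    err = X_hat - X
    grad_dec = Z.T @ err / len(X)             # gradient of mean squared error
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

def reconstruction_error(x):
    """Mean squared difference between input and its restored copy."""
    x = np.atleast_2d(x)
    return float(np.mean((x @ W_enc @ W_dec - x) ** 2))

normal_err = reconstruction_error(X)          # typical data: restored well
anomaly = 3.0 * rng.normal(size=5)            # a point unlike the training data
anomaly_err = reconstruction_error(anomaly)   # restored poorly -> flag it
print(normal_err, anomaly_err)
```

A real anomaly engine would use a deep, nonlinear network and far richer data, but the principle is the same: whatever the network cannot squeeze through the pinch-point and rebuild is, by definition, unusual relative to what it was trained on.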
The problem is that these simple autoencoders work best within a narrow domain of data and still lack the broad flexibility we need. However, AI research is making great strides forward. Could it be that, waiting within the petabytes of space observational data we have already collected, is the unnoticed evidence that we’ve had company all along?