Alice and Bob in Cipherspace
A new form of encryption allows you to compute with data you cannot read
Computing in cipherspace is a cute theoretical novelty, but can it ever become a practical technology? Questions of computational efficiency and overhead are more challenging in FHE than in other kinds of cryptography. When encryption is used only to create a secure communications channel, it has no direct effect on the efficiency of computations done at either end of the connection. Homomorphic encryption is different: The cryptosystem becomes the computing platform, and any inefficiency slows the entire process.
Many homomorphic schemes exact a high price for security. During encryption, data undergo a kind of cosmic inflation: A single bit of plaintext may blow up to become thousands or even millions of bits of ciphertext. The encryption key can also become huge—from megabytes to gigabytes. Merely transmitting such bulky items would be costly; computing with the inflated ciphertext makes matters worse. Whereas adding or multiplying a few bits of plaintext can be done with a single machine instruction, performing the same operation on the inflated ciphertext requires elaborate software for high-precision arithmetic.
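The blow-up is easy to make concrete. The sketch below is a toy in the spirit of the integer-based somewhat homomorphic schemes (following the well-known pattern c = p·q + 2·r + m, with p the secret key, q large and random, and r small noise); the parameters are illustrative only and far too small for real security, but they show a single plaintext bit swelling into a ciphertext thousands of bits long.

```python
import secrets

# Toy "somewhat homomorphic" encryption over the integers:
# a ciphertext is c = p*q + 2*r + m, where p is the secret key,
# q is large and random, and r is small noise.  Toy parameters,
# chosen for illustration, not for security.
P_BITS, Q_BITS, R_BITS = 200, 2000, 20

def keygen():
    # Large odd secret integer with its top bit set.
    return secrets.randbits(P_BITS) | (1 << (P_BITS - 1)) | 1

def encrypt(p, bit):
    q = secrets.randbits(Q_BITS)
    r = secrets.randbits(R_BITS)
    return p * q + 2 * r + bit

def decrypt(p, c):
    # Works as long as the noise term 2*r + m stays below p.
    return (c % p) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)

# One plaintext bit has become thousands of ciphertext bits:
print(c1.bit_length())            # roughly P_BITS + Q_BITS bits

# Homomorphic arithmetic: + acts as XOR, * as AND on the hidden bits.
assert decrypt(p, c0 + c1) == 1   # 0 XOR 1
assert decrypt(p, c0 * c1) == 0   # 0 AND 1
assert decrypt(p, c1 * c1) == 1   # 1 AND 1
```

Note that the homomorphic sum and product are full big-integer operations on 2,000-bit-plus numbers, which is exactly the software arithmetic overhead described above.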
Much current work is directed toward mitigating these problems. For example, instead of encrypting each plaintext bit separately, multiple bits can be packed together, thereby “amortizing” the encryption effort and reducing overhead.
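One way to picture the packing idea is to give the same toy integer scheme several secret prime moduli, one "slot" per plaintext bit, glued into a single ciphertext with the Chinese remainder theorem. This is a hedged illustration of the batching principle, not any particular published parameter set; the Mersenne primes are used purely for convenience and would be secret, randomly generated values in a real scheme.

```python
import secrets

# Toy batched variant: one "slot" per secret prime modulus, combined
# into a single big-integer ciphertext via the Chinese remainder
# theorem.  Toy parameters; not secure.
PRIMES = [2**61 - 1, 2**89 - 1, 2**107 - 1, 2**127 - 1]
N = 1
for prime in PRIMES:
    N *= prime

def encrypt(bits):
    """Pack one bit per slot into a single ciphertext."""
    c = 0
    for prime, m in zip(PRIMES, bits):
        residue = 2 * secrets.randbits(20) + m   # bit plus small even noise
        Ni = N // prime
        c += residue * Ni * pow(Ni, -1, prime)   # CRT recombination
    return c % N + secrets.randbits(512) * N     # hide behind a multiple of N

def decrypt(c):
    return [(c % prime) % 2 for prime in PRIMES]

a = encrypt([1, 0, 1, 1])
b = encrypt([1, 1, 0, 1])
print(decrypt(a + b))   # slot-wise XOR: [0, 1, 1, 0]
print(decrypt(a * b))   # slot-wise AND: [1, 0, 0, 1]
```

A single homomorphic addition or multiplication now acts on all four slots at once, which is the "amortizing" effect: the per-bit cost of the big-integer arithmetic drops by a factor equal to the number of slots.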
The ultimate test of practicality is to create a working implementation. Nigel P. Smart of the University of Bristol and Frederik Vercauteren of the Catholic University of Leuven were the first to try this. They built a somewhat homomorphic system, but could not extend it to full homomorphism; the bottleneck was an unwieldy process for generating huge encryption keys.
Gentry and Halevi, working with a somewhat different variant of the lattice-based algorithm, did manage to get a full system running. And they didn’t need to build it on IBM’s Blue Gene supercomputer, as they had initially planned; a desktop workstation was adequate. Nevertheless, the public key ballooned to 2.3 gigabytes, and generating it took two hours. The noise-abating re-encryptions took 30 minutes each.
In another implementation effort, Kristin Lauter of Microsoft Research, Michael Naehrig of the Eindhoven University of Technology and Vaikuntanathan show that large gains in efficiency are possible if you are willing to compromise on the requirement of full homomorphism. They do not promise to evaluate circuits of unbounded depth, but instead commit only to some small, fixed number of multiplications, along with unlimited additions. They have working code based on the learning-with-errors paradigm. Except at the highest security levels, key sizes are roughly a megabyte. Homomorphic addition takes milliseconds, multiplication generally less than a second. These timings are a vast improvement over earlier efforts, but it’s sobering to reflect that they are still an order of magnitude slower than the performance of the ENIAC in 1946.
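The learning-with-errors idea behind such systems can itself be sketched in a few lines. What follows is a toy symmetric variant with illustrative parameters: a bit m is hidden as b = ⟨a, s⟩ + e + m·(q/2) mod q, where s is the secret vector and e is small noise. The actual implementation described above is far more elaborate (it uses a ring variant and supports a bounded number of multiplications); this sketch shows only basic encryption and homomorphic addition.

```python
import random

# Toy symmetric learning-with-errors (LWE) encryption of single bits.
# Illustrative parameters; real systems use much larger dimensions.
n, q = 64, 1 << 15

def keygen():
    return [random.randrange(q) for _ in range(n)]

def encrypt(s, m):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-8, 9)    # small random noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def decrypt(s, ct):
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q
    # Is the leftover value nearer to q/2 (bit 1) or to 0 (bit 0)?
    return 1 if q // 4 <= d < 3 * q // 4 else 0

def add(ct1, ct2):
    """Homomorphic addition: decrypts to the XOR of the two bits."""
    (a1, b1), (a2, b2) = ct1, ct2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q

s = keygen()
c1, c0 = encrypt(s, 1), encrypt(s, 0)
assert decrypt(s, add(c1, c0)) == 1   # 1 XOR 0
assert decrypt(s, add(c1, c1)) == 0   # 1 XOR 1: q/2 + q/2 wraps to 0
```

Each addition merely sums the noise terms, so many additions are cheap and safe; it is multiplication that inflates the noise rapidly, which is why schemes of this kind ration the multiplicative depth.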