LETTERS TO THE EDITORS
To the Editors:
“Delving into Deep Learning” by Brian Hayes (May–June, Computing Science) provides a well-written background for posing a simple yet fundamental question about deep-learning artificial neural networks. After describing the success of such a network constructed by Geoffrey Hinton’s team, which achieved much greater accuracy in a recent artificial intelligence competition than the runner-up system, Hayes correctly points out that, without considerable practical experience of the kind Hinton’s team has accumulated, constructing such a successful network requires exploring an “immense space of network architectures” (page 189).
The essential question is this: How much of the success of such a breakthrough artificial neural network should be credited to the learning within the human minds of the construction team, trained via feedback from many experiments with such networks, and how much to the machine learning of the resulting network, ultimately trained on the examples within the actual data?
University of Richmond, Virginia
Mr. Hayes responds:
Professor Charlesworth frames the question very well. The superior performance of neural networks built by groups with deep experience does indeed suggest that the networks are not entirely autonomous agents, and that some of the learning takes place in the mind of the programmer. I would add, however, that I find this observation highly encouraging. It’s a relief to know that machine-learning systems are not totally opaque black boxes, offering answers but no explanations. If we can learn to build them better, then we must be learning something about how they work.