Hinton Publishes "A Fast Learning Algorithm for Deep Belief Nets"
Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh publish "A Fast Learning Algorithm for Deep Belief Nets" in Neural Computation (vol. 18, no. 7, pp. 1527-1554), demonstrating that deep neural networks with many hidden layers can be trained effectively by greedy layer-wise unsupervised pretraining with Restricted Boltzmann Machines (RBMs). The paper argued that the longstanding failure to train deep networks was not a fundamental flaw of depth itself but a problem of initialization: backpropagation from random weights became trapped in poor local minima, while layer-wise pretraining supplied good starting points that backpropagation could then fine-tune.

Hinton had co-authored the landmark 1986 backpropagation paper in Nature, then spent two decades as one of the few researchers who refused to abandon neural networks while Support Vector Machines and kernel methods dominated machine learning. The new paper achieved state-of-the-art results on MNIST and was accompanied by a companion paper in Science with Ruslan Salakhutdinov ("Reducing the Dimensionality of Data with Neural Networks," vol. 313, pp. 504-507, July 28, 2006) showing that deep autoencoders outperformed PCA.

Together these papers shattered the prevailing consensus that deep networks were impractical, rebranded the field as "deep learning," and triggered an avalanche of research that led directly to AlexNet (developed in Hinton's own group), the ImageNet revolution, and ultimately every large-scale neural network architecture that followed. Cited over 18,000 times, the paper is widely regarded as the moment the second AI winter ended and the modern deep learning era began. Hinton, together with Yann LeCun and Yoshua Bengio, received the 2018 ACM Turing Award for this body of work.
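The core idea of greedy layer-wise pretraining can be sketched in a few dozen lines. Below is a minimal NumPy illustration, not the paper's exact procedure: each RBM is trained with one-step contrastive divergence (CD-1), and its hidden-unit activations then serve as the "data" for the next RBM in the stack. All names (`RBM`, `pretrain_dbn`, the layer sizes and learning rate) are illustrative choices, and the paper's subsequent fine-tuning stage is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann Machine trained with 1-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, rng, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr
        self.rng = rng

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities and a binary sample given the data.
        ph0 = self.hidden_probs(v0)
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back to the visible layer and up again.
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # CD-1 approximation to the log-likelihood gradient.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

def pretrain_dbn(data, layer_sizes, epochs=20, rng=None):
    """Greedy layer-wise pretraining: each RBM learns from the layer below's activations."""
    rng = rng or np.random.default_rng(0)
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden, rng)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)  # activations become "data" for the next layer
    return rbms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = (rng.random((64, 16)) < 0.3).astype(float)  # toy binary data
    stack = pretrain_dbn(data, [8, 4], rng=rng)
    print([r.W.shape for r in stack])
```

In the full method, the pretrained weights would initialize a deep network that is then fine-tuned on labels; the key point the sketch captures is that no layer's training depends on gradients flowing through the layers above it.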