
Backpropagation Paper Published in Nature

Rumelhart, Hinton, and Williams publish "Learning representations by back-propagating errors" in Nature (vol. 323, pp. 533-536), demonstrating that multi-layer neural networks can learn useful internal representations by propagating error gradients backward through hidden layers. The result overcomes the limitations of single-layer perceptrons identified by Minsky and Papert in 1969 and reignites the connectionist revolution that would eventually produce modern deep learning.
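
The core mechanism can be illustrated with a small sketch. The snippet below (NumPy; the layer sizes, random seed, learning rate, and XOR task are illustrative choices, not details taken from the paper) trains a two-layer network by computing the output error and propagating it backward through the hidden layer to update both weight matrices, the essential idea the paper demonstrated.

import numpy as np

# Minimal two-layer network trained with backpropagation on XOR,
# a task a single-layer perceptron cannot solve. Illustrative sketch
# only; all hyperparameters are arbitrary assumptions.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):        # may need more iterations for some seeds
    # Forward pass
    h = sigmoid(X @ W1 + b1)           # hidden-layer activations
    out = sigmoid(h @ W2 + b2)         # network output

    # Backward pass: error deltas flow from output back to hidden units
    err_out = (out - y) * out * (1 - out)        # delta at output units
    err_hid = (err_out @ W2.T) * h * (1 - h)     # delta at hidden units

    # Gradient-descent weight updates
    W2 -= lr * (h.T @ err_out)
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * (X.T @ err_hid)
    b1 -= lr * err_hid.sum(axis=0)

print(np.round(out, 2))   # approaches [[0], [1], [1], [0]]

The hidden-layer delta is obtained by pushing the output delta back through the output weights and scaling by the sigmoid derivative, which is exactly the "back-propagating errors" step named in the paper's title.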

Year 1986
Date October 9
Location La Jolla, California, United States
Layer 2
Visibility PUBLIC
Tags artificial-intelligence neural-networks deep-learning machine-learning computer-science cognitive-science

Key Figures

David Rumelhart · Geoffrey Hinton · Ronald Williams

Related Moments

ImageNet: A Large-Scale Hierarchical Image Database · thematic
Minsky & Papert Publish "Perceptrons" · thematic
Rosenblatt Demonstrates the Perceptron · thematic
Shannon Publishes "A Mathematical Theory of Communication" · thematic
Hinton Publishes "A Fast Learning Algorithm for Deep Belief Nets" · same figure
Turing Publishes "Computing Machinery and Intelligence" · thematic
Publication of "Attention Is All You Need" · thematic