Backpropagation Paper Published in Nature
Rumelhart, Hinton, and Williams publish "Learning representations by back-propagating errors" in Nature (vol. 323, pp. 533-536), demonstrating that multi-layer neural networks can learn useful internal representations by propagating error gradients backward through their hidden layers. The result overcomes the limitations of single-layer perceptrons identified by Minsky and Papert in 1969 and reignites the connectionist revolution that would eventually produce modern deep learning.
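The procedure can be sketched in a few lines: run a forward pass, compute the output error, and propagate its gradient backward through each layer to obtain weight updates. Below is a minimal NumPy illustration on the classic XOR problem; the network size, random seed, and learning rate are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# one hidden layer of 4 sigmoid units and one sigmoid output unit
# (hypothetical toy sizes chosen for this sketch)
W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros((1, 1))
lr = 1.0
losses = []

for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)              # hidden representations
    out = sigmoid(h @ W2 + b2)            # network output
    err = out - y
    losses.append(0.5 * float((err ** 2).sum()))
    # backward pass: propagate the error gradient through the layers
    d_out = err * out * (1.0 - out)       # delta at the output unit
    d_h = (d_out @ W2.T) * h * (1.0 - h)  # delta at the hidden units
    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0, keepdims=True)
```

After training, the hidden units have learned an internal representation that makes XOR linearly separable at the output, which is exactly the kind of learned feature the paper highlights.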