In the past 10 years, the best-performing artificial-intelligence systems, such as the speech recognizers on smartphones or Google’s latest automatic translator, have resulted from a technique called “deep learning.” Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Neural networks were first proposed in 1944 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department. Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.