In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.” Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Neural networks were first proposed in 1944 by Warren McCullough and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department. Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.