Many astounding feats of computer intelligence, from automated language translation to self-driving cars, are based on neural networks: machine learning systems that figure out how to solve problems with minimal human guidance. But that autonomy makes the inner workings of a neural network a black box, opaque even to the engineers who initiate the machine learning process. “If you look at the neural network, there won’t be any logical flow that a human can understand, which is very different from traditional software,” explains Guy Katz, a postdoctoral research fellow in computer science at Stanford. That opacity can be worrisome when neural networks are used in safety-critical applications, such as preventing aircraft collisions. Running hundreds of thousands of successful simulations isn’t enough to inspire full confidence, because even one system failure in a million can spell disaster.
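To see why large test suites can still miss rare faults, consider a quick back-of-the-envelope calculation (the failure rate and test count below are illustrative assumptions, not figures from the article): even if a system fails on one input in a million, a batch of 100,000 independent random simulations will pass without a single failure about 90% of the time.

```python
# Illustrative sketch (not from the article): probability that random
# testing sees zero failures, assuming independent test inputs.
failure_rate = 1e-6        # hypothetical: one failure per million inputs
num_simulations = 100_000  # "hundreds of thousands" of test runs

# Chance that every single simulation passes: (1 - p)^n
p_all_pass = (1 - failure_rate) ** num_simulations
print(f"Chance of seeing zero failures: {p_all_pass:.1%}")  # ~90.5%
```

Under these assumptions, nine times out of ten the test campaign reports a clean bill of health even though the dangerous input exists, which is the gap that formal verification aims to close.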