Finally, a peek inside the ‘black box’ of machine learning systems

Many astounding feats of computer intelligence, from automated language translation to self-driving cars, are based on neural networks: machine learning systems that figure out how to solve problems with minimal human guidance. But that autonomy makes a neural network’s inner workings a black box, opaque even to the engineers who initiate the machine learning process. “If you look at the neural network, there won’t be any logical flow that a human can understand, which is very different from traditional software,” explains Guy Katz, a postdoctoral research fellow in computer science at Stanford. That opacity can be worrisome when it comes to using neural networks in safety-critical applications, such as preventing aircraft collisions. Running hundreds of thousands of successful simulations isn’t enough to inspire full confidence, because even one system failure in a million can spell disaster.
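To make that last point concrete, here is a back-of-the-envelope calculation (an illustration, not from the article; the failure rate and test count are hypothetical): if a system fails on one input in a million, even a few hundred thousand random simulations will very likely all pass, giving false confidence.

```python
# Illustrative arithmetic (hypothetical numbers): probability that random
# testing misses a rare failure mode entirely.
failure_rate = 1e-6        # assume one failing input in a million
num_simulations = 100_000  # "hundreds of thousands" of random tests

# Chance that every single simulation happens to pass:
p_all_pass = (1 - failure_rate) ** num_simulations
print(f"P(all {num_simulations:,} tests pass): {p_all_pass:.3f}")
# ~0.905 -- roughly a 90% chance the tests reveal nothing, even though
# the failure rate is unacceptable for a safety-critical system. This is
# why formal verification of the network, rather than sampling, matters.
```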
