A TensorFlow Lite Micro Speech model that detects wake words and lights a different coloured LED to emulate traffic lights.

Introduction and Motivation

Machine learning typically demands a lot of computing power, usually in the form of large data centers full of GPUs, and the cost of training a deep neural network can be astronomical. The emergence of tiny neural networks, some as small as 14 KB, opens a plethora of doors to new applications that can analyze data right on the microprocessor itself and derive actionable insights (Warden and Situnayake, 2019). This saves time and reduces latency because we do not have to transmit data to a cloud data center for processing and wait for the result to come back (Warden and Situnayake, 2019). This approach is called edge computing, and it allows data to be processed on the device where it is stored (Lea, 2020).

Learning Process: The Model Training

For starters, I had no idea what edge computing was, or what an Arduino was, before I started this project. As the list of technologies demonstrates, I had to work with and orchestrate an entire ecosystem of tools to achieve my goal: deploying a speech-recognition model on an Arduino board that worked fairly well.

The first thing I did was download the VS Code IDE and ensure that the PlatformIO extension was installed. In tandem, I downloaded the Arduino IDE and included the TensorFlow Lite library. In VS Code, I imported the built-in micro_speech example from the Arduino IDE to serve as a working reference. This model responds to the input “yes” by turning on the green LED on the Arduino; “no” by turning on the red LED; any other word by turning on the blue LED; and silence by leaving the LED off.
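The command-to-LED mapping described above can be sketched as a small function. This is a simplified, hypothetical version of the respond-to-command step in the micro_speech example: the function and enum names here are my own, and the real example drives the board's RGB LED pins directly rather than returning a value.

```cpp
#include <cstring>

// Hypothetical LED states for the board's RGB LED.
enum Led { LED_OFF, LED_GREEN, LED_RED, LED_BLUE };

// Map a recognised command label to an LED colour, mirroring the
// micro_speech behaviour described above: "yes" -> green,
// "no" -> red, silence -> no LED, any other word -> blue.
Led CommandToLed(const char* command) {
  if (std::strcmp(command, "yes") == 0) return LED_GREEN;
  if (std::strcmp(command, "no") == 0) return LED_RED;
  if (std::strcmp(command, "silence") == 0) return LED_OFF;
  return LED_BLUE;  // "unknown": any other word
}
```

On the board itself, the returned state would translate into `digitalWrite` calls on the RGB LED pins inside the sketch's response callback.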
