In a world where communication is the cornerstone of human connection, there is a gap that separates Deaf people from hearing people.

I was deeply troubled by how isolated Deaf individuals can be in a busy suburban town, unable to communicate with hearing people and vice versa. Motivated by that vision, and inspired by the welfare of my own child, I set out to build the Deaf Link Project.

What is the Project About?
“Deaf Link” emerges as a bridge between spoken language and sign language. This groundbreaking project offers real-time translation, empowering everyone to express themselves freely, fostering understanding, and promoting inclusivity. Welcome to Deaf Link Communicator—where words meet signs, and hearts connect.

Why Did I Decide to Make It?
What inspired me was the startling reality of how difficult it is for hearing people like myself to communicate with Deaf people, since most of us don’t understand sign language. Seeing the need for a technology-driven solution to prevent misunderstandings between the two communities, I set out to develop a system that translates spoken words into sign language and sign language into spoken words.

How Does It Work?
It runs on a Raspberry Pi and offers two modes: “Sign to Speech” and “Speech to Sign”.

1. Sign to Speech:

In this mode, the Deaf person signs in front of the Pi Camera, which captures the gestures as visual input. OpenCV and Mediapipe process the frames, and a TensorFlow model, trained with Google’s Teachable Machine on hundreds of photos of different hand signs, classifies which sign is being shown.
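To make the pipeline concrete, here is a minimal sketch of the capture-and-classify step. It assumes a Teachable Machine image-model export (the default “keras_model.h5” and “labels.txt” file names) and that the Pi Camera appears as camera index 0; the 224x224 input size and the [-1, 1] pixel scaling follow Teachable Machine’s export defaults:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Teachable Machine's default export: a Keras model plus a labels
# file with lines like "0 Hello".
model = load_model("keras_model.h5", compile=False)
labels = [line.strip() for line in open("labels.txt")]

cap = cv2.VideoCapture(0)  # Pi Camera exposed as video device 0

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Match Teachable Machine's preprocessing: RGB, 224x224,
        # pixel values scaled to [-1, 1].
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = cv2.resize(img, (224, 224)).astype(np.float32)
        img = img / 127.5 - 1.0
        probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
        best = int(np.argmax(probs))
        print(labels[best], round(float(probs[best]), 2))
except KeyboardInterrupt:
    pass  # stop with Ctrl+C
finally:
    cap.release()
```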

Once a sign has been matched, the corresponding text is generated and converted to audio with gTTS (the “Google Text-to-Speech” API), and that audio is played through the speaker so the hearing person hears the signed message as spoken words.
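For the audio side, a minimal sketch assuming the gTTS Python package and a command-line MP3 player such as mpg123 installed on the Pi (the player choice is my assumption, not part of the original build):

```python
import subprocess
from gtts import gTTS

def speak(text):
    """Convert the recognized sign text to speech and play it."""
    gTTS(text=text, lang="en").save("speech.mp3")
    subprocess.run(["mpg123", "-q", "speech.mp3"], check=False)

speak("hello, how are you")  # spoken aloud for the hearing person
```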

2. Speech to Sign:

In this mode, the hearing person speaks into the microphone, which captures the audio input. The “Google Speech to Text” API converts the audio into text, which is then sent through the MQTT broker to the Arduino Nano 33 IoT as commands such as “index finger”, “number two”, or “thumb finger”.
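A minimal sketch of this path on the Pi side, assuming the SpeechRecognition and paho-mqtt Python packages (SpeechRecognition needs PyAudio for microphone access); the broker host and the “deaflink/hand” topic name are hypothetical placeholders:

```python
import speech_recognition as sr
import paho.mqtt.publish as publish

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    print("Listening...")
    audio = recognizer.listen(mic)

# recognize_google() sends the audio to Google's free speech
# recognition web API and returns the transcript.
command = recognizer.recognize_google(audio).lower()
print("Heard:", command)

# Publish the command for the Arduino Nano 33 IoT, which subscribes
# to the same topic on the broker.
publish.single("deaflink/hand", payload=command, hostname="localhost")
```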

The Arduino code recognizes these commands as instructions to move the servo motors, and the six servos of the robotic hand move accordingly to form the corresponding symbol or hand sign as the sign output for the Deaf person.
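The firmware on the Nano is an Arduino sketch; to keep the examples here in one language, below is a Python illustration of the command-to-servo-angle lookup the firmware implements. The channel layout, command names, and angles are all hypothetical:

```python
# The real firmware is Arduino C++; this only illustrates the lookup
# it performs. Hypothetical layout: channels 0-4 drive one finger
# each, channel 5 drives the wrist.
REST = [0, 0, 0, 0, 0, 0]  # all fingers curled into a fist

GESTURES = {
    "index finger": [0, 90, 0, 0, 0, 0],   # raise the index finger
    "number two":   [0, 90, 90, 0, 0, 0],  # index + middle fingers
    "thumb finger": [90, 0, 0, 0, 0, 0],   # thumb extended
}

def angles_for(command):
    """Return the six servo angles for a recognized command."""
    return GESTURES.get(command, REST)

print(angles_for("number two"))  # -> [0, 90, 90, 0, 0, 0]
```

Keeping the gesture table on the Arduino side means the Pi only ever transmits short text commands, which keeps the MQTT messages simple and makes new hand signs easy to add.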
