Priyanjali Gupta, a third-year computer science student specializing in data science at Vellore Institute of Technology (VIT) in Tamil Nadu, was challenged by her mother last year "to do something now that she's studying engineering". In response, she built an artificial intelligence-powered model that translates American Sign Language (ASL) into English in real time. The model was developed using the TensorFlow Object Detection API.
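For context, a detection model exported from the TensorFlow Object Detection API as a SavedModel can be called directly on webcam frames. The sketch below shows what real-time inference of this kind might look like; the model path, label map, and confidence threshold are illustrative assumptions, not details from Gupta's project.

```python
import cv2
import numpy as np
import tensorflow as tf

# Hypothetical path to an exported detection model and a hand-written label map.
detect_fn = tf.saved_model.load("exported_model/saved_model")
LABELS = {1: "Hello", 2: "I Love You", 3: "Thank you",
          4: "Please", 5: "Yes", 6: "No"}

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # The API expects a batched uint8 RGB tensor of shape [1, H, W, 3].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    detections = detect_fn(tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8))
    score = float(detections["detection_scores"][0][0])
    cls = int(detections["detection_classes"][0][0])
    if score > 0.6:  # arbitrary confidence threshold
        cv2.putText(frame, f"{LABELS.get(cls, '?')} {score:.2f}",
                    (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("ASL detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```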
The dataset is generated manually by running an Image Collection Python file that captures images from your webcam for each of the following American Sign Language signs: Hello, I Love You, Thank you, Please, Yes and No. In other words, rather than tracking a user's entire webcam video feed, the model works on single frames (a rough sketch of such a collection script follows below). What's next? Gupta is currently working on video detection, which would require the use of Long Short-Term Memory networks (LSTMs).
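As an illustration of this collection step, the following sketch captures webcam frames into one folder per sign using OpenCV; the folder layout, image count, and delays are assumptions rather than Gupta's actual script. The saved images would then be annotated (for example with a bounding-box labelling tool) before training.

```python
import os
import time
import uuid
import cv2

SIGNS = ["Hello", "I Love You", "Thank you", "Please", "Yes", "No"]
IMAGES_PER_SIGN = 15  # assumed count per sign

cap = cv2.VideoCapture(0)
for sign in SIGNS:
    folder = os.path.join("collected_images", sign)
    os.makedirs(folder, exist_ok=True)
    print(f"Collecting images for '{sign}' in 3 seconds...")
    time.sleep(3)  # time to get the sign ready in front of the camera
    for _ in range(IMAGES_PER_SIGN):
        ok, frame = cap.read()
        if not ok:
            break
        # Save each frame under a unique name inside the sign's folder.
        cv2.imwrite(os.path.join(folder, f"{uuid.uuid4()}.jpg"), frame)
        cv2.imshow("capture", frame)
        cv2.waitKey(500)  # brief pause between captures
cap.release()
cv2.destroyAllWindows()
```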
"The dataset is manually made with a computer webcam and given annotations. The model, for now, is trained on single frames. To detect videos, the model has to be trained on multiple frames, for which I'm likely to use LSTM. I'm currently researching it. Researchers and developers are trying their best to find a solution that can be implemented. However, I think the first step would be to normalize sign languages and other modes of communication with the specially-abled and work on bridging the communication gap," said Gupta.
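For readers curious what that LSTM step might involve, the sketch below stacks per-frame feature vectors (for example, keypoints or CNN embeddings) into fixed-length sequences and classifies them with a small Keras LSTM. All shapes, layer sizes, and the dummy data are assumptions for illustration, not Gupta's design.

```python
import numpy as np
import tensorflow as tf

SEQ_LEN = 30     # frames per clip (assumed)
FEATURES = 256   # per-frame feature size (assumed)
NUM_SIGNS = 6    # Hello, I Love You, Thank you, Please, Yes, No

# Stacked LSTMs read the frame sequence; the final dense layer picks a sign.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy tensors just to show the expected input/label shapes.
X = np.random.rand(8, SEQ_LEN, FEATURES).astype("float32")
y = np.random.randint(0, NUM_SIGNS, size=(8,))
model.fit(X, y, epochs=1, verbose=0)
```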