Google Creates Sign Language AI for Smartphones
Google has recently published a set of artificial-intelligence-based programmes that enable smartphones to detect sign language through their cameras. Notably, the new algorithms have been released as open-source software: rather than Google shipping its own app for its own smartphone OS (Android), other developers can build their own apps using as much of Google's code as they like. Previously, this kind of software existed only on desktop computers.
Google’s software chains together three AI systems. First, a palm detector model called BlazePalm scans the images taken by the camera and crops each one down to a box containing just the hand. Next, a hand landmark model takes the cropped picture of the hand and works out a set of 3D points on it. Finally, a gesture recogniser detects which gestures have been performed by analysing the movement of these key points on the hand.
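To make the final stage more concrete, here is a minimal, hypothetical sketch of a gesture recogniser working from 21 hand landmarks. The landmark ordering follows MediaPipe's published convention (index 0 is the wrist; indices 4, 8, 12, 16 and 20 are the finger tips), but the thresholds, gesture names and overall logic below are illustrative assumptions, not Google's actual algorithm:

```python
# Hypothetical gesture classification from 21 hand landmarks.
# Landmark indices follow the MediaPipe convention; everything else
# (thresholds, gesture names) is an illustrative assumption.

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tips
FINGER_PIPS = [6, 10, 14, 18]   # the corresponding middle (PIP) joints

def count_extended_fingers(landmarks):
    """landmarks: list of 21 (x, y) tuples in image coordinates,
    where y grows downwards, assuming an upright hand facing the
    camera. A finger counts as extended when its tip lies above
    (smaller y than) its PIP joint."""
    extended = 0
    for tip, pip in zip(FINGER_TIPS, FINGER_PIPS):
        if landmarks[tip][1] < landmarks[pip][1]:
            extended += 1
    return extended

def classify_gesture(landmarks):
    """Map a landmark set to a coarse gesture label."""
    n = count_extended_fingers(landmarks)
    if n == 0:
        return "fist"
    if n == 4:
        return "open palm"
    return f"{n} finger(s) extended"
```

A real recogniser would also track landmark movement over time, since many signs are defined by motion rather than a single pose.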
Training these AI programmes took a lot of time and effort. BlazePalm initially had to be trained to recognise palms rather than whole hands, because hands are much harder to detect: the fingers can produce a huge range of movements. Only afterwards was the BlazePalm team able to add finger detection to the AI. In addition, the hand landmark model had to be trained on over 30,000 real images of hands that were manually labelled. To teach the AI to distinguish hands from their background, the algorithm was also trained on highly detailed synthetic hand models rendered over a range of backgrounds and labelled with coordinates.
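As an illustration of what one manually labelled sample might look like, the sketch below pairs an image reference with 21 annotated 3D coordinates and a flag for real versus synthetic data. The field names and file path are assumptions for illustration, not Google's actual training format:

```python
# Hypothetical representation of one labelled training sample for the
# hand landmark model. Field names and path are illustrative only.

def validate_sample(sample):
    """Basic sanity checks: exactly 21 landmarks, each an (x, y, z)
    triple with x and y normalised to the [0, 1] image range."""
    pts = sample["landmarks"]
    assert len(pts) == 21, "a hand annotation needs 21 keypoints"
    for x, y, z in pts:
        assert 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0
    return True

labelled_sample = {
    "image": "hands/real_00042.png",       # assumed file path
    "is_synthetic": False,                 # real photo vs rendered model
    "landmarks": [(0.5, 0.5, 0.0)] * 21,   # placeholder coordinates
}
```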
The algorithms are powerful enough that Google claims they can identify up to 21 distinct 3D points on a hand from a single image captured on a mobile phone.
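The figure of 21 points corresponds to one wrist landmark plus four joints on each of the five fingers, per MediaPipe's published landmark layout. The joint names below are simplified for illustration (the thumb's joints are named slightly differently in the real layout):

```python
# Quick check that one wrist point plus four joints per finger gives
# the 21 keypoints the model predicts. Joint names are simplified.

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
JOINTS = ["mcp", "pip", "dip", "tip"]  # base, middle, distal joints, tip

LANDMARK_NAMES = ["wrist"] + [f"{f}_{j}" for f in FINGERS for j in JOINTS]

print(len(LANDMARK_NAMES))  # 21
```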
To find out more, visit Google’s blog post.
Thumbnail source: signlanguagemaster.com