Franz (l) and Mathias (r) keeping "Astro" company
Okay, the first line should probably read "part of the Astromobile project" but I'm too excited to consider small details like that. :)
While our project partner, the ARTS lab of the Scuola Superiore Sant'Anna, has extended their work on navigation and localization by another couple of weeks to really finish it, the voice and touchscreen interaction, and with it Simon Listens' part of the project, has been developed, deployed and tested successfully on the robot prototype.
Have a look at the video below and see how Simon, Simone, Simontouch and even a bit of ownCloud fit together.
(Direct link to the video)
3 comments:
Very nice demonstration. Great job.
How hard is it to add support for new languages?
Hi Wiglot,
it really depends on the language and the application (the number of words that need to be recognized).
If you have to start from scratch (like we did for Italian) you need to record training samples - for general recognition you should cover a fair number of different speakers from your target user group. For the Astromobile project we had around 3 work days of recording with 2 teams recording in parallel (using SSC and three microphones each). The resulting Italian speech model worked extremely well.
But of course there are existing models for some languages: http://www.simon-listens.org/wiki/index.php/English:_Base_models
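To give a rough idea of what such a pre-trained model gives you, here is a minimal sketch that decodes a WAV file against one. Note this is not simon's own tooling (simon manages models through its GUI); it uses the generic CMU PocketSphinx Python bindings instead, and all model and file paths are placeholders:

    # Minimal sketch, assuming the pocketsphinx Python bindings are installed.
    # All paths below are placeholders for wherever your base model lives.
    import wave
    from pocketsphinx import Decoder

    config = Decoder.default_config()
    config.set_string('-hmm', 'model/en-us')          # acoustic model dir (placeholder)
    config.set_string('-lm', 'model/en-us.lm.bin')    # language model (placeholder)
    config.set_string('-dict', 'model/cmudict.dict')  # pronunciation dict (placeholder)
    decoder = Decoder(config)

    # Feed a 16 kHz, 16-bit mono WAV file through the decoder chunk by chunk.
    wav = wave.open('recording.wav', 'rb')
    decoder.start_utt()
    while True:
        chunk = wav.readframes(1024)
        if not chunk:
            break
        decoder.process_raw(chunk, False, False)
    decoder.end_utt()

    # Print the best hypothesis, if the decoder produced one.
    if decoder.hyp() is not None:
        print('Recognized:', decoder.hyp().hypstr)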
Best regards,
Peter
Amazing!