Abstract
We present a novel implementation of an embedded Linux system in Shimi, a musical robot companion. We discuss the challenges and benefits of this transition and provide a system and technical overview. We also present a unique approach to robotic gesture generation and a new voice-generation system that enables the robot to vocalize any MIDI file. Our interactive system combines NLP, audio capture and processing, and emotion and contour analysis of human speech input. Shimi ultimately serves as an exploration of how a robot can use music as a driver for human engagement.
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Linux Audio Conference 2019 |
| Publisher | Stanford University |
| Pages | 101-105 |
| Number of pages | 5 |
| ISBN (Electronic) | 9780359463879 |
| Publication status | Published - 2019 |
| Externally published | Yes |
| Event | Linux Audio Conference (17th : 2019), Stanford University, United States. Duration: 23 Mar 2019 → 26 Mar 2019 |
Conference
| Conference | Linux Audio Conference (17th : 2019) |
| --- | --- |
| Country/Territory | United States |
| Period | 23/03/19 → 26/03/19 |