This page lists earlier academic research projects and speech corpora. All of the research listed here was done prior to my current employment. See Google Scholar for my full publication list.
Speech Corpora
TaL: The Tongue and Lips Corpus
The Tongue and Lips (TaL) corpus is a multi-speaker corpus of ultrasound images of the tongue and video images of the lips. It contains synchronised imaging data of extraoral (lips) and intraoral (tongue) articulators from 82 native speakers of English. The TaL corpus was collected under the Silent Speech Interfaces for all project (Carnegie Trust for the Universities of Scotland Research Incentive Grant – grant number RIG008585). [paper | documentation | code | data]
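To make the synchronisation concrete, here is a minimal sketch of pairing two imaging streams recorded at different frame rates by nearest timestamp. The frame rates and the function are illustrative assumptions, not the corpus's actual synchronisation procedure, which is described in its documentation.

```python
# Minimal sketch: pairing ultrasound and lip-video frames by timestamp.
# The frame rates below are assumed for illustration only.
import numpy as np

def nearest_frame_pairs(ultra_times: np.ndarray, video_times: np.ndarray) -> np.ndarray:
    """For each ultrasound frame time, return the index of the closest video frame.

    Both inputs are 1-D arrays of timestamps in seconds, sorted ascending.
    """
    idx = np.searchsorted(video_times, ultra_times)
    idx = np.clip(idx, 1, len(video_times) - 1)
    left = video_times[idx - 1]
    right = video_times[idx]
    # Step back one index wherever the left neighbour is closer in time.
    idx -= (ultra_times - left) < (right - ultra_times)
    return idx

# Illustrative rates: ~81.5 fps ultrasound vs 60 fps video (assumptions).
ultra_times = np.arange(0, 5, 1 / 81.5)
video_times = np.arange(0, 5, 1 / 60.0)
pairs = nearest_frame_pairs(ultra_times, video_times)
print(pairs[:10])
```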
The UltraSuite Repository
UltraSuite is a repository of synchronised ultrasound and acoustic data from child speech therapy sessions. Ultrasound tongue imaging (UTI) uses standard medical ultrasound to visualise the tongue surface during speech production. It is increasingly used in speech therapy, which makes it important to develop automatic methods to assist with the time-consuming manual tasks currently performed by speech therapists. The UltraSuite repository includes three data sets: one from typically developing children and two from children with speech sound disorders. [paper | documentation | code | data]
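As a rough illustration of working with this kind of data, the sketch below reads a raw ultrasound file as a sequence of frames. The file name, dtype, and shape parameters are assumptions for illustration; the authoritative format is specified in the UltraSuite documentation.

```python
# Minimal sketch: reading raw ultrasound data as (frames, scanlines, echoes).
# Dtype and default shape parameters are assumptions, not the documented format.
import numpy as np

def load_ultrasound(path: str, n_scanlines: int = 63, n_echoes: int = 412) -> np.ndarray:
    """Return an array of shape (n_frames, n_scanlines, n_echoes)."""
    raw = np.fromfile(path, dtype=np.uint8)  # assumed 8-bit echo intensities
    frame_size = n_scanlines * n_echoes
    n_frames = raw.size // frame_size
    # Drop any trailing partial frame, then reshape into per-frame scanline grids.
    return raw[: n_frames * frame_size].reshape(n_frames, n_scanlines, n_echoes)

frames = load_ultrasound("speaker01_utterance01.ult")  # hypothetical file name
print(frames.shape)
```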
Parallel Audiobook Corpus
The Parallel Audiobook Corpus (version 1.0) is a collection of parallel readings of audiobooks. The corpus consists of approximately 121 hours of data across 4 books and 59 speakers. It was prepared for research on speech synthesis, voice conversion, and prosody modelling. [documentation | data]
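A common use of parallel readings in voice conversion and prosody work is frame-level alignment between speakers. The sketch below aligns two parallel readings with dynamic time warping over MFCC features using librosa; the file names are hypothetical, and this is one standard recipe rather than a procedure tied to this corpus.

```python
# Minimal sketch: DTW alignment of two parallel readings of the same text.
import librosa

y_a, sr = librosa.load("reader_a_chapter1.wav", sr=16000)  # hypothetical files
y_b, _ = librosa.load("reader_b_chapter1.wav", sr=16000)

# MFCC matrices of shape (n_mfcc, n_frames), one per reading.
mfcc_a = librosa.feature.mfcc(y=y_a, sr=sr, n_mfcc=13)
mfcc_b = librosa.feature.mfcc(y=y_b, sr=sr, n_mfcc=13)

# D is the cumulative cost matrix; wp is the frame-level warping path
# (returned in reverse order), mapping frames of one reading to the other.
D, wp = librosa.sequence.dtw(X=mfcc_a, Y=mfcc_b, metric="cosine")
print(f"alignment path length: {len(wp)}, total cost: {D[-1, -1]:.2f}")
```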
SIWIS Multilingual Database
The SIWIS database is a parallel multilingual speech database with acted emphasis. It includes recordings of 36 bilingual and trilingual speakers of English, French, German, and Italian, with applications to speech-to-speech translation (S2ST). The database was designed for various scenarios: training cross-language speaker adaptation (CLSA) systems, conveying emphasis through S2ST systems, and evaluating TTS systems. [paper | data]
Projects
Silent Speech Interfaces for all: Recognising speech from ultrasound images of the tongue
Silent speech interfaces perform speech recognition and synthesis from articulatory data in order to restore spoken communication for users with voice impairments (for example, after laryngectomy) or to allow silent communication in situations where audible speech is undesirable. Much of the previous work in this area has focused on models learned from the data of a single speaker (speaker-dependent models), which do not generalise to unknown speakers. This project investigates the first speaker-independent silent speech interface for continuous speech recognition from ultrasound images of the tongue, benchmarked against a system trained on high-quality data from a single speaker (a speaker-dependent model). Additionally, the project investigates speaker adaptation techniques, which use small amounts of speaker-specific data to bridge the gap between speaker-dependent and speaker-independent systems. Funded by the Carnegie Trust for the Universities of Scotland Research Incentive Grant – grant number RIG008585 (“Silent speech interfaces for all – recognising speech from ultrasound images of the tongue”).
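One way to picture the adaptation idea: start from a pretrained speaker-independent model and fine-tune a small part of it on the new speaker's limited data. The PyTorch sketch below is an illustrative assumption about the architecture and adaptation strategy, not the project's actual system; the image size, layer sizes, and checkpoint name are all hypothetical.

```python
# Minimal sketch of one speaker-adaptation strategy: fine-tune only the
# input encoder of a pretrained speaker-independent recogniser on a small
# amount of speaker-specific data. Architecture is illustrative only.
import torch
import torch.nn as nn

class UltrasoundRecogniser(nn.Module):
    def __init__(self, n_classes: int = 40):
        super().__init__()
        # Frame encoder over single-channel ultrasound images (assumed size).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.rnn = nn.LSTM(32 * 4 * 4, 128, batch_first=True)
        self.classifier = nn.Linear(128, n_classes)  # e.g. phone posteriors

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 1, height, width)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.classifier(out)

model = UltrasoundRecogniser()
# model.load_state_dict(torch.load("speaker_independent.pt"))  # hypothetical

# Adaptation: freeze everything except the encoder, then fine-tune briefly
# on the new speaker's data so the input representation shifts to them.
for p in model.parameters():
    p.requires_grad = False
for p in model.encoder.parameters():
    p.requires_grad = True
optimiser = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```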
Other projects
During my PhD, I collaborated with the SIWIS (Spoken Interaction with Interpretation in Switzerland) project (link). During my post-doc, I was funded by the Ultrax2020 project (link).