Andrea Gulli

+39 0432 558411

Room: NS4

Research Project

Deep Hearing: Intelligent Sound Processing for Cochlear Implants in Pediatric Patients

Audio signal processing is a subfield of discrete-time signal processing that deals with the digital manipulation of audio signals. Processing methods and application areas include storage, data compression, music information retrieval, speech processing, localization, acoustic detection, transmission, noise cancellation, acoustic fingerprinting, sound recognition, synthesis, and enhancement.

Intelligent sound processing is a signal processing approach that uses machine learning tools to create algorithms that adapt to the acoustic scene in which they operate. The first step of my Ph.D. is the design of a hybrid model in which artifact cancellation is first handled efficiently and economically by traditional sound processing techniques and then refined with the aid of machine learning.
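One traditional technique that could play the role of the cheap first stage in such a hybrid model is spectral subtraction. The sketch below is purely illustrative (it is not the project's actual algorithm) and assumes a noise-only reference segment is available for estimating the noise spectrum:

```python
import numpy as np

def spectral_subtraction(noisy, noise_ref, frame_len=256):
    """Classical spectral subtraction: subtract an estimated noise
    magnitude spectrum from each frame, keeping the noisy phase."""
    # Noise magnitude estimated from a noise-only reference segment.
    noise_mag = np.abs(np.fft.rfft(noise_ref[:frame_len]))
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        spec = np.fft.rfft(noisy[start:start + frame_len])
        # Half-wave rectification: clip negative magnitudes to zero.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        out[start:start + frame_len] = np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), n=frame_len)
    return out
```

A known weakness of this method is the "musical noise" it leaves behind; in a hybrid scheme, a learned second stage could be trained to suppress exactly those residual artifacts.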

My application of intelligent sound processing algorithms will target persons with sensorineural hearing loss, providing them with an enhanced sense of sound perception. Assistive listening techniques allow audio communication between a device and its user. The most prominent devices available are hearing aids and cochlear implants; we will focus on the latter. Moreover, assistive listening techniques are especially needed by patients of pediatric age (from birth to adolescence), when auditory perception is crucial for developing essential auditory and linguistic skills. The opportunity to apply this study has been made possible by the collaboration with the Otorhinolaryngology and Audiology laboratory of the Burlo Garofolo Pediatric Institute of Trieste.

My Ph.D. research deals with auditory improvement in the still largely neglected field of signal enhancement beyond speech. To date, the major cochlear implant manufacturers aim to approximate natural hearing by adapting speech enhancement algorithms to different acoustic scenes. My end goal is therefore to integrate a neural filter for signal enhancement with a model able to identify acoustic environments and classify acoustic scenes, possibly exploiting communication between different devices (IoT), so that the algorithm can adapt to enhance specific classes of signals.
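To make the scene-classification idea concrete, here is a deliberately toy sketch (not the envisioned model, and the feature extractor is a crude stand-in for log-mel features): a nearest-centroid classifier over per-band spectral energies.

```python
import numpy as np

def band_energies(signal, n_bands=8, frame_len=512):
    """Tiny feature extractor: log energy in n_bands equal-width
    frequency bands of one frame (a crude stand-in for log-mel)."""
    spec = np.abs(np.fft.rfft(signal[:frame_len])) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

class NearestCentroidScene:
    """Toy acoustic-scene classifier: one centroid per labeled scene,
    prediction by nearest centroid in feature space."""
    def fit(self, features, labels):
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array([
            np.mean([f for f, l in zip(features, labels) if l == lab], axis=0)
            for lab in self.labels_])
        return self

    def predict(self, feature):
        dists = np.linalg.norm(self.centroids_ - feature, axis=1)
        return self.labels_[int(np.argmin(dists))]
```

In a deployed system, the predicted scene label would select or re-weight the enhancement algorithm, which is the adaptation loop described above.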