Real-Time Audio Processing for Hearing Aids Using a Model-Based Bayesian Inference Framework
May 25, 2020
Martin Roa Villescas
Bert de Vries
Sander Stuijk
Henk Corporaal
Abstract
Development of hearing aid (HA) signal processing algorithms entails iterating between two design steps, namely algorithm development and embedded implementation. Algorithm designers favor high-level programming languages for several reasons, including higher productivity, code readability, and, perhaps most importantly, the availability of state-of-the-art signal processing frameworks that open new research directions. Embedded software, on the other hand, is preferably implemented in a low-level programming language to allow finer control of the hardware, an essential trait in real-time processing applications. In this paper, we present a technique for deploying DSP algorithms written in Julia, a modern high-level programming language, on a real-time HA processing platform known as openMHA. We demonstrate this technique by using a model-based Bayesian inference framework to perform real-time audio processing.
Publication
In Proceedings of the 23rd International Workshop on Software and Compilers for Embedded Systems

Authors
Teacher & Researcher
Martin Roa Villescas holds a BSc in Electronic Engineering from the National
University of Colombia and an MSc in Embedded Systems from Eindhoven
University of Technology (TU/e). He worked at Philips Research as an
embedded software designer from 2013 to 2018. He later returned to TU/e for
his doctoral research in model-based machine learning, carried out within
the PhD-Teaching Assistant trajectory combining research and teaching. Since
2023, he has been working at Fontys University of Applied Sciences in the
Netherlands, where he teaches in the Information and Communication
Technology program and conducts research in robotics and smart industry.