EarPCG: Recovering Heart Sounds from in-Ear Audio via Physics-Informed Neural Network  

Project Description:

While earables present a promising avenue for cardiac sensing, whether they can replace the stethoscope for heart sound (a.k.a. phonocardiogram, PCG) monitoring remains questionable. The latest effort attempts to generate PCG-like waveforms from in-ear audio collected via earphones, yet its data-driven approach is not grounded in the underlying physics. To this end, this paper introduces EarPCG, a system for continuous PCG monitoring leveraging physics-informed neural models. As opposed to the debatable belief that bone-conducted PCG appears within the ear canal, EarPCG generates PCG waveforms from the (actually existing) photoplethysmography (PPG) waveforms conveyed via blood vessels. Arising from pressure variations induced by heartbeats, PPG can be mathematically described by a Partial Differential Equation (PDE). Therefore, solving this PDE inversely may reconstruct the cardiac dynamics, which in turn enables the generation of PCG waveforms via another PDE characterizing the pressure oscillations propagating through soft tissues. Pipelining the two PDE-solving neural models, EarPCG achieves accurate PCG monitoring from in-ear audio while requiring minimal training. Our extensive experiments with a custom-built prototype demonstrate the efficacy of the proposed system. Furthermore, we have conducted clinical trials, with clinicians reporting no perceptible difference between authentic PCG and the sounds reconstructed by EarPCG.
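
For readers unfamiliar with physics-informed neural networks, the sketch below illustrates the general recipe such models typically follow: a small network approximates the unknown field, and training minimizes a data-fit term plus a PDE-residual term computed by automatic differentiation. This is a minimal illustration only, assuming PyTorch and a 1-D wave equation as a stand-in PDE; the actual PDEs, architectures, and training details of EarPCG are not reproduced here, and the names FieldNet, pde_residual, and pinn_loss are hypothetical.

```python
import torch
import torch.nn as nn

class FieldNet(nn.Module):
    """Small MLP u(x, t): the usual PINN ansatz for the unknown pressure field."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def pde_residual(model, x, t, c=1.0):
    """Residual of u_tt - c^2 * u_xx = 0 (illustrative 1-D wave equation, not EarPCG's PDE)."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(x, t)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_tt = torch.autograd.grad(u_t, t, torch.ones_like(u_t), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_tt - (c ** 2) * u_xx

def pinn_loss(model, x_obs, t_obs, u_obs, x_col, t_col, lam=1.0):
    """Data-fit term on observed samples plus PDE-residual term on collocation points."""
    data_loss = ((model(x_obs, t_obs) - u_obs) ** 2).mean()
    physics_loss = (pde_residual(model, x_col, t_col) ** 2).mean()
    return data_loss + lam * physics_loss
```

The key design point is that the physics term acts as a regularizer derived from the governing equation, which is why such models can reach acceptable accuracy with relatively little training data.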

The early diagnosis and effective management of Pulmonary Hypertension (PH) critically depend on accurate monitoring of Pulmonary Arterial Pressure (PAP). However, current clinical methods are either invasive or resource-intensive, which limits their application to continuous PAP monitoring. To overcome this challenge, we introduce a novel, noninvasive method for PAP monitoring utilizing in-ear audio. Our approach recovers the Heart Sound (HS) from in-ear audio through physics-guided modeling. Subsequently, we derive PAP indicators from the second heart sound (S2) using a signal processing pipeline that incorporates mode decomposition, spectral clustering, and Morphological Component Analysis (MCA). Finally, a Dendritic Neuron Regression (DNR) network is employed to estimate absolute PAP values from the extracted indicators. Validation on real-world data from 26 participants demonstrates clinically acceptable error margins in estimating mean PAP (mPAP), with an error of 1.31 ± 2.83 mmHg. These promising results highlight the potential of our in-ear audio approach for developing wearable, noninvasive, and continuous PAP monitoring devices, which could significantly improve PH management by providing a more accessible and patient-friendly monitoring solution.
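
To make the feature-then-regress idea concrete, the sketch below band-passes an isolated S2 segment, takes its dominant spectral peak as a toy PAP indicator, and fits a least-squares regressor to mPAP labels. This is a hedged, minimal stand-in: the actual pipeline (mode decomposition, spectral clustering, MCA, and the DNR network) is not reproduced, and s2_dominant_freq, fit_pap_regressor, and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, periodogram

def s2_dominant_freq(s2_segment, fs=1000, band=(20.0, 150.0)):
    """Dominant spectral peak (Hz) of an isolated S2 segment after band-pass filtering."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, s2_segment)
    freqs, power = periodogram(filtered, fs=fs)
    return freqs[np.argmax(power)]

def fit_pap_regressor(features, mpap_labels):
    """Ordinary least squares from per-beat S2 features to mPAP (mmHg), with intercept.
    A simple stand-in for the Dendritic Neuron Regression network described above."""
    X = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(X, mpap_labels, rcond=None)
    return coef

# Toy usage on synthetic segments (shapes only; values are not physiological):
rng = np.random.default_rng(0)
t = np.arange(0, 0.1, 1 / 1000)
segs = [np.sin(2 * np.pi * (50 + i) * t) + 0.1 * rng.standard_normal(t.size) for i in range(26)]
feats = np.array([s2_dominant_freq(s) for s in segs]).reshape(-1, 1)
coef = fit_pap_regressor(feats, 20 + 0.1 * feats.ravel() + rng.standard_normal(26))
```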

Acknowledgement:

  • This project was filed in December 2021 and has been patented. For more information, please visit our git repository.

People:

  • Dr. Chao Cai (Associate Professor, College of Life Science & Engineering) - Huazhong University of Science and Technology

Related Publications:

  • [1] Junyi Zhou, Chao Cai, et al. "EarPCG: Recovering Heart Sounds from in-Ear Audio via Physics-Informed Neural Network", in ACM SenSys, 2026.
  • [2] Junyi Zhou, Chao Cai, et al. "Continuous Pulmonary Artery Pressure Monitoring using In-ear Microphone", in IEEE INFOCOM, 2026.
Demo (GIF)