Leveraging pre-trained autoencoders for interpretable prototype learning of music audio.

Alonso-Jiménez, Pablo; Pepino, Leonardo; Batlle-Roca, Roser; Zinemanas, Pablo; Bogdanov, Dmitry; Serra, Xavier; Rocamora, Martín

Abstract:

We present PECMAE, an interpretable model for music audio classification based on prototype learning. Our model builds on a previous method, APNet, which jointly learns an autoencoder and a prototypical network. In contrast, we propose decoupling the two training processes. This enables us to leverage existing self-supervised autoencoders pre-trained on much larger data (EnCodecMAE), providing representations with better generalization. APNet reconstructs prototypes to waveforms for interpretability by relying on the nearest training samples. Instead, we explore a diffusion decoder that allows reconstruction without such dependency. We evaluate our method on datasets for music instrument classification (Medley-Solos-DB) and genre recognition (GTZAN and a larger in-house dataset), the latter being a more challenging task not previously addressed with prototypical networks. We find that the prototype-based models preserve most of the performance achieved with the autoencoder embeddings, while the sonification of prototypes aids understanding of the classifier's behavior.
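The core idea of classifying with prototypes in a frozen embedding space can be illustrated with a minimal sketch. The function names (`fit_prototypes`, `classify`) and the class-mean prototype initialization below are illustrative assumptions for a toy nearest-prototype classifier, not the paper's actual training procedure (which learns prototypes jointly with the classifier and decodes them with a diffusion model):

```python
import numpy as np

def fit_prototypes(embeddings, labels, n_classes):
    """Toy prototype per class: the mean embedding of that class.
    (Assumption for illustration; PECMAE learns prototypes by training.)"""
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(embeddings, prototypes):
    """Assign each embedding to its nearest prototype (squared L2 distance),
    so each decision is traceable to an inspectable prototype."""
    dists = ((embeddings[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)
```

Because the prototypes live in the same space as the autoencoder embeddings, a decoder can map them back to audio, which is what makes the classifier's decisions sonifiable.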


Bibliographic Details
2024
Ministerio de Ciencia, Innovación y Universidades (Spain) and Agencia Estatal de Investigación (AEI).
Prototypical learning
Self-supervised learning
Music audio classification
Interpretable AI
English
Universidad de la República
COLIBRI
https://hdl.handle.net/20.500.12008/45254
Open access
Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND 4.0)