Leveraging pre-trained autoencoders for interpretable prototype learning of music audio.

Alonso-Jiménez, Pablo - Pepino, Leonardo - Batlle-Roca, Roser - Zinemanas, Pablo - Bogdanov, Dmitry - Serra, Xavier - Rocamora, Martín

Abstract:

We present PECMAE, an interpretable model for music audio classification based on prototype learning. Our model builds on a previous method, APNet, which jointly learns an autoencoder and a prototypical network. Instead, we propose to decouple the two training processes. This enables us to leverage existing self-supervised autoencoders pre-trained on much larger datasets (EnCodecMAE), providing representations with better generalization. For interpretability, APNet allows reconstructing prototypes to waveforms, relying on the nearest training data samples. In contrast, we explore using a diffusion decoder that allows reconstruction without such dependency. We evaluate our method on datasets for music instrument classification (Medley-Solos-DB) and genre recognition (GTZAN and a larger in-house dataset), the latter being a more challenging task not previously addressed with prototypical networks. We find that the prototype-based models preserve most of the performance achieved with the autoencoder embeddings, while the sonification of prototypes aids understanding of the classifier's behavior.
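To illustrate the decoupled setup the abstract describes, below is a minimal sketch of a prototypical classification head operating on frozen, pre-trained embeddings. This is not the authors' exact architecture: the embedding dimension, the number of classes, and the number of prototypes per class are hypothetical placeholders, and the frozen encoder (e.g. EnCodecMAE) is assumed to exist outside the snippet.

```python
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    """Minimal prototypical classification head over frozen, pre-trained
    audio embeddings. Sizes are hypothetical, not the paper's exact values."""

    def __init__(self, embed_dim=768, n_classes=10, protos_per_class=5):
        super().__init__()
        # Learnable prototypes living in the (frozen) embedding space;
        # after training, each can be decoded/sonified for interpretation.
        self.prototypes = nn.Parameter(
            torch.randn(n_classes * protos_per_class, embed_dim)
        )
        # Maps per-prototype similarities to class logits.
        self.classifier = nn.Linear(n_classes * protos_per_class, n_classes)

    def forward(self, z):
        # z: (batch, embed_dim) embeddings from the frozen encoder.
        # Squared Euclidean distance from each embedding to each prototype.
        d = torch.cdist(z, self.prototypes) ** 2   # (batch, n_prototypes)
        # Negative distance acts as similarity: closer prototypes score higher.
        return self.classifier(-d)

# Usage sketch: embed audio with the frozen autoencoder, then classify.
# z = frozen_encoder(audio)     # pre-trained encoder, not shown here
z = torch.randn(8, 768)          # stand-in batch of clip embeddings
logits = PrototypeClassifier()(z)
print(logits.shape)              # torch.Size([8, 10])
```

Because only the prototypes and the linear layer are trained, the embedding space itself is untouched, which is what lets a separately trained decoder (a diffusion decoder in the paper) turn the learned prototypes back into audio.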


Bibliographic Details
2024
Ministerio de Ciencia, Innovación y Universidades (Spain) and Agencia Estatal de Investigación (AEI).
Prototypical learning
Self-supervised learning
Music audio classification
Interpretable AI
English
Universidad de la República
COLIBRI
https://hdl.handle.net/20.500.12008/45254
Open access
Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0)
Affiliations:
Alonso-Jiménez, Pablo - Universidad Pompeu Fabra, Spain.
Pepino, Leonardo - CONICET-UBA, Argentina.
Batlle-Roca, Roser - Universidad Pompeu Fabra, Spain.
Zinemanas, Pablo - Universidad Pompeu Fabra, Spain.
Bogdanov, Dmitry - Universidad Pompeu Fabra, Spain.
Serra, Xavier - Universidad Pompeu Fabra, Spain.
Rocamora, Martín - Universidad de la República (Uruguay), Facultad de Ingeniería.
Date deposited: 2024-08-09
Extent: 5 p.
Format: application/pdf
Citation: Alonso-Jiménez, P., Pepino, L., Batlle-Roca, R. et al. Leveraging pre-trained autoencoders for interpretable prototype learning of music audio [Preprint]. Published in: IEEE ICASSP 2024 Workshop on Explainable AI for Speech and Audio (XAI-SA), 15 Apr. 2024, pp. 1-5.
Document type: Preprint (submitted version)