Universal priors for sparse modeling
Abstract:
Sparse data models, in which data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical to the success of such models. In this work, we use tools from information theory to propose a sparsity regularization term which has several theoretical and practical advantages over the more standard ℓ0 or ℓ1 ones, and which leads to improved coding performance and accuracy in reconstruction tasks. We also briefly report on further improvements obtained by imposing low mutual coherence and Gram matrix norm on the learned dictionaries.
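The sparse model described in the abstract — a signal approximated as a linear combination of a few dictionary atoms — can be sketched as an ℓ1-regularized coding step, i.e. the standard baseline the paper proposes to improve on. This is a minimal illustrative sketch using ISTA (iterative soft-thresholding), not the paper's universal-prior method; all names and parameters below are assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code_ista(D, y, lam=0.1, n_iter=200):
    """Minimize 0.5*||y - D x||^2 + lam*||x||_1 via ISTA.

    D: (m, k) dictionary with unit-norm columns (atoms).
    y: (m,) signal to encode. Returns the sparse code x.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example: a signal built from 2 of 8 random dictionary atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)             # normalize atoms
x_true = np.zeros(8)
x_true[[1, 5]] = [1.0, -0.7]
y = D @ x_true
x_hat = sparse_code_ista(D, y, lam=0.05)
```

The ℓ1 penalty makes the coding step a convex problem, which is why it is the common stand-in for the combinatorial ℓ0 count; the paper's information-theoretic regularizer is positioned as an alternative to both.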
Date: 2009
Language: English
Institution: Universidad de la República
Repository: COLIBRI
Handle: https://hdl.handle.net/20.500.12008/38679
Access: Open access
License: Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0)
Similar results
- Second generation sparse models
  Author(s): Ramirez, Ignacio
  Publication date: 2011
- Image inpainting using patch consensus and DCT priors.
  Author(s): Ramírez Paulino, Ignacio
  Publication date: 2021
- Universal Reliability Bounds for Sparse Networks
  Author(s): Romero, Pablo
  Publication date: 2021
- Model selection techniques & Sparse Markov Chains
  Author(s): Fraiman, Nicolás
  Publication date: 2008
- Collaborative sources identification in mixed signals via hierarchical sparse modeling
  Author(s): Sprechmann, Pablo
  Publication date: 2011