CNN-assisted coverings in the space of tilts: Best affine invariant performances with the speed of CNNs

Rodriguez, Mariano; Facciolo, Gabriele; Grompone von Gioi, Rafael; Musé, Pablo; Delon, Julie; Morel, Jean-Michel

Abstract:

The classic approach to image matching consists of the detection, description and matching of keypoints. The description encodes the local information surrounding each keypoint. This locality enables affine invariant methods. Indeed, smooth deformations caused by viewpoint changes are well approximated by affine maps. Despite numerous efforts, affine invariant descriptors have remained elusive. This has led to the development of IMAS (Image Matching by Affine Simulation) methods that simulate viewpoint changes to attain the desired invariance. Yet, recent CNN-based methods seem to provide a way to learn affine invariant descriptors. Still, as a first contribution, we show that current CNN-based methods are far from the state-of-the-art performance provided by IMAS. This confirms that there is still room for improvement for learned methods. Second, we show that recent advances in affine patch normalization can be used to create adaptive IMAS methods that select their affine simulations depending on query and target images. The proposed methods are shown to attain a good compromise: on the one hand, they reach the performance of state-of-the-art IMAS methods but are faster; on the other hand, they perform significantly better than non-simulating methods, including recent ones. Source code is available at https://rdguez-mariano.github.io/pages/adimas.
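The affine simulations mentioned in the abstract rest on the classical camera-motion decomposition A = λ R(ψ) diag(t, 1) R(φ), where t = 1/cos(θ) is the absolute tilt induced by the camera latitude θ. The following minimal numpy sketch (illustrative only, not the authors' implementation; the sampling parameters are assumptions modeled on the ASIFT-style tilt/angle schedule) shows how such simulated viewpoints can be built and enumerated:

```python
import numpy as np

def tilt_rotation_affine(t, phi, psi=0.0, lam=1.0):
    """Build the 2x2 affine map A = lam * R(psi) @ diag(t, 1) @ R(phi).

    This is the classical decomposition used by affine-simulation
    methods: t >= 1 is the absolute tilt (t = 1/cos(theta) for a
    camera latitude theta), phi the in-plane rotation applied before
    tilting, psi a final rotation, and lam a zoom factor.
    """
    def rot(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s], [s, c]])
    return lam * rot(psi) @ np.diag([t, 1.0]) @ rot(phi)

def simulated_views(max_tilt=4.0, tilt_step=np.sqrt(2.0)):
    """Enumerate (t, phi) pairs covering the space of tilts.

    Geometric tilt sampling with a denser phi grid at higher tilts
    (roughly a 72/t degree step), as in ASIFT-style coverings.
    These exact constants are assumptions for illustration.
    """
    views = [(1.0, 0.0)]  # the untilted (identity) view
    t = tilt_step
    while t <= max_tilt + 1e-9:
        n_phi = max(1, int(round(2.5 * t)))  # ~180 / (72/t) samples
        for k in range(n_phi):
            views.append((t, k * np.pi / n_phi))
        t *= tilt_step
    return views
```

Each (t, phi) pair would then be turned into a warp of the query or target image before running a standard descriptor such as SIFT or RootSIFT; an adaptive IMAS method, as described above, restricts this enumeration to the simulations suggested by the image pair instead of the full covering.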


Bibliographic Details
2020
Cameras
Adaptation models
Image matching
Mathematical model
Estimation
Optical imaging
Distortion
Image comparison
Affine invariance
IMAS
SIFT
RootSIFT
Convolutional neural networks
English
Universidad de la República
COLIBRI
https://hdl.handle.net/20.500.12008/27062
Open access
Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND 4.0)
Affiliations:
Rodriguez, Mariano: Centre Borelli, ENS Paris-Saclay, Université Paris-Saclay, CNRS, France
Facciolo, Gabriele: Centre Borelli, ENS Paris-Saclay, Université Paris-Saclay, CNRS, France
Grompone von Gioi, Rafael: Centre Borelli, ENS Paris-Saclay, Université Paris-Saclay, CNRS, France
Musé, Pablo: Universidad de la República (Uruguay), Facultad de Ingeniería
Delon, Julie: Université de Paris, CNRS, MAP5 and Institut Universitaire de France
Morel, Jean-Michel: Centre Borelli, ENS Paris-Saclay, Université Paris-Saclay, CNRS, France
The PDF corresponds to a preprint deposited at https://hal.archives-ouvertes.fr/hal-02494121
Presented at the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25-28 Oct. 2020, pp. 2201-2205
Extent: 5 p.
Format: application/pdf
Citation: Rodriguez, M., Facciolo, G., Grompone von Gioi, R. et al. CNN-assisted coverings in the space of tilts: Best affine invariant performances with the speed of CNNs [Preprint]. Published in: 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25-28 Oct. 2020, pp. 2201-2205. DOI: 10.1109/ICIP40778.2020.9191245.
HAL identifier: hal-02494121
Type: Preprint