Deep learning in confocal fluorescence microscopy for synthetic image generation and processing pipeline
Abstract:
Machine learning has had a significant impact on microscopy, enabling faster and more accurate analysis of biological imaging data. In particular, Generative Adversarial Networks (GANs) and U-Net have emerged as powerful tools in this field. GANs (I. Goodfellow et al. 2020) are deep learning models consisting of two neural networks: a generator and a discriminator. The generator creates synthetic images, while the discriminator attempts to distinguish synthetic images from real ones. Through this adversarial process, the generator improves its ability to produce realistic images, and the discriminator improves its ability to tell real and synthetic images apart. In microscopy, GANs can be used to generate synthetic images or to fill in missing or degraded image data, as we show in this work. U-Net (O. Ronneberger et al. 2015) is a convolutional neural network designed specifically for image segmentation. Its architecture consists of an encoder and a decoder, with skip connections between corresponding layers. In microscopy, U-Net has been used to segment objects of interest, such as cells or subcellular structures, enabling more accurate analysis of the images. Overall, the integration of machine learning techniques, particularly GANs and U-Net, into microscopy has allowed researchers to analyze imaging data more efficiently and effectively, leading to new insights and advances in biology (K. Dunn 2019; F. Long 2020). In this work, a GAN architecture is trained to generate synthetic confocal fluorescence microscopy images from blood monocyte stacks of control individuals and patients, in which nuclei and mitochondria were labeled with different fluorescent probes.
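The adversarial training loop described above can be illustrated with a toy sketch. The following minimal NumPy example (a hypothetical 1-D illustration, not the GAN architecture trained in this work) pits a linear generator against a logistic discriminator, updating both with hand-derived gradients:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 0.0          # generator parameters: g(z) = a*z + b
w, c = 0.1, 0.0          # discriminator parameters: D(x) = sigmoid(w*x + c)
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(3.0, 1.0, 64)         # samples from the "real" distribution
    z = rng.normal(0.0, 1.0, 64)            # latent noise
    fake = a * z + b                        # generator output

    # Discriminator gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator gradient ascent on log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)
```

The generator maximizes log D(g(z)) rather than minimizing log(1 − D(g(z))); this non-saturating variant gives stronger gradients early in training, when the discriminator easily rejects generated samples.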
These images are then processed by a custom pipeline consisting of deconvolution, segmentation and feature extraction for mitochondria classification. In the deconvolution stage, the methods implemented in the ImageJ plugin "DeconvolutionLab2" (D. Sage et al. 2017) are used, and their performance is analyzed according to the parameters used and their characteristics, such as whether they are regularized and whether they are iterative or non-iterative. For segmentation, different approaches are evaluated: traditional histogram-based thresholding methods (Otsu, Huang, Li, among others), unsupervised clustering methods such as K-Means (Lloyd 1957; MacQueen 1967), and deep learning methods such as the U-Net neural network. In feature extraction, morphological and connectivity features are obtained. The morphological features are the usual ones (volume, area, sphericity, among others). The connectivity features are derived from skeletonization, pruning and graph modeling (M. Zanin et al. 2020); the parameters obtained include the number of nodes, link density and efficiency, among others. Finally, for mitochondrial classification, classical approaches such as Decision Trees, Logistic Regression and Support Vector Machines (SVM) were used. The work was implemented in Python, and we are currently working on making the framework publicly available. The final result is an end-to-end pipeline with different processing options in the deconvolution and segmentation stages, usable for different microscopy data; a synthetic data generator that performs well at simulating the fluorescence effect on binary masks; and an application of both products to mitochondrial classification with an accuracy greater than 70%.
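As an illustration of the histogram-based thresholding family mentioned above, Otsu's method (which picks the threshold maximizing between-class variance) can be sketched in a few lines of NumPy. This is a generic sketch on synthetic data, not the pipeline's exact implementation:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the intensity threshold that maximizes between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()          # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2.0

    w0 = np.cumsum(p)                            # background class weight
    w1 = 1.0 - w0                                # foreground class weight
    mu0 = np.cumsum(p * centers)                 # cumulative background mass
    mu_total = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        m0 = mu0 / w0                            # background mean
        m1 = (mu_total - mu0) / w1               # foreground mean
        between = w0 * w1 * (m0 - m1) ** 2       # between-class variance
    between = np.nan_to_num(between)             # empty classes contribute 0
    return centers[np.argmax(between)]

# Synthetic bimodal "image": dim background plus a bright fluorescent-like signal
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 15, 1000)])
t = otsu_threshold(img)
mask = img > t                                   # binary segmentation
```

On a clearly bimodal intensity histogram like this one, the selected threshold lands between the two modes, separating background from signal.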
It is concluded that neural networks play a fundamental role in the processing of medical and biological images, and can be used for data augmentation, segmentation and classification.
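To make the connectivity features concrete, the graph-level parameters named in the abstract (number of nodes, link density, global efficiency) can be computed from an adjacency matrix of the graph obtained after skeletonization and pruning. A minimal sketch follows; the function name and example graph are illustrative, not the pipeline's code:

```python
import numpy as np
from collections import deque

def graph_features(adj):
    """Node count, link density and global efficiency of an undirected graph.

    adj: symmetric 0/1 adjacency matrix (e.g. built from a pruned skeleton).
    """
    n = adj.shape[0]
    edges = int(np.triu(adj, 1).sum())
    density = 2.0 * edges / (n * (n - 1)) if n > 1 else 0.0

    # Global efficiency: mean of 1/d(i, j) over node pairs, distances via BFS
    inv_dist_sum = 0.0
    for src in range(n):
        dist = [-1] * n
        dist[src] = 0
        q = deque([src])
        while q:
            u = q.popleft()
            for v in np.nonzero(adj[u])[0]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        inv_dist_sum += sum(1.0 / d for d in dist if d > 0)
    efficiency = inv_dist_sum / (n * (n - 1)) if n > 1 else 0.0
    return {"nodes": n, "density": density, "efficiency": efficiency}

# Example: a 3-node path graph (two links), like a short skeleton branch
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
feats = graph_features(adj)
```

For the 3-node path, density is 2/3 and global efficiency is 5/6, since the two adjacent pairs contribute distance 1 and the end-to-end pair contributes distance 2.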
Year: 2023
Keywords: Confocal microscopy; Data generation; Deconvolution; Segmentation; Morphological characteristics and classification
Language: English
Institution: Universidad de la República
Repository: COLIBRI
https://www.mmc-series.org.uk/abstract/1068-deep-learning-in-confocal-fluorescence-microscopy-for-synthetic-image-generation-and-processing-pipeline.html
https://hdl.handle.net/20.500.12008/40840
Access: Open access
License: Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0)
Authors: Silvera, Diego; Millán, María José; Merlo, Emiliano; Lecumberry, Federico; Gómez, Alvaro; Cassina, Patricia; Winiarski, Erik
Affiliations: Silvera Diego, Millán María José, Merlo Emiliano, Lecumberry Federico, Gómez Alvaro: Universidad de la República (Uruguay), Facultad de Ingeniería. Cassina Patricia, Winiarski Erik: Universidad de la República (Uruguay), Facultad de Medicina.
Extent: 15 p. (application/pdf)
Citation: Silvera, D., Millán, M., Merlo, E. et al. Deep learning in confocal fluorescence microscopy for synthetic image generation and processing pipeline. [online]. Poster, 2023.
Publisher: Royal Microscopical Society
Published in: Microscience Microscopy Congress 2023 incorporating EMAG 2023, Manchester, United Kingdom, 4-6 Jul. 2023, pp. 1-15.
Type: Poster (conference object, published version)