Property Checking with Interpretable Error Characterization for Recurrent Neural Networks

Mayr, Franz - Yovine, Sergio - Visca, Ramiro

Abstract:

This paper presents a novel on-the-fly, black-box, property-checking-through-learning approach for verifying requirements of recurrent neural networks (RNN) in the context of sequence classification. Our technique builds on a tool for learning probably approximately correct (PAC) deterministic finite automata (DFA). The sequence classifier inside the black-box consists of a Boolean combination of several components, including the RNN under analysis together with the requirements to be checked, possibly modeled as RNNs themselves. On one hand, if the output of the algorithm is an empty DFA, there is a proven upper bound (as a function of the algorithm parameters) on the probability that the language of the black-box is nonempty. This implies the property holds on the RNN with probabilistic guarantees. On the other hand, if the DFA is nonempty, the language of the black-box is certainly nonempty, which proves that the RNN does not satisfy the requirement. In this case, the output automaton serves as an explicit and interpretable characterization of the error. Our approach does not rely on a specific property specification formalism and is capable of handling nonregular languages as well. Moreover, it neither explicitly builds individual representations of any of the components of the black-box nor resorts to any external decision procedure for verification. This paper also improves previous theoretical results regarding the probabilistic guarantees of the underlying learning algorithm.
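To make the setup concrete, the following is a minimal sketch, in Python, of the sampling-based emptiness argument the abstract alludes to, under stated assumptions: the RNN and the requirement are treated as opaque executable oracles, their Boolean combination defines the "error" language (sequences the RNN accepts but the requirement forbids), and a standard PAC-style sample bound is used. The paper's actual method goes further, learning a PAC DFA of this black-box on the fly, which yields an interpretable automaton when the error language is nonempty; the function names (error_language, pac_sample_size, check_property), the toy classifier, and the sample-size formula below are illustrative assumptions, not the paper's algorithm or API.

```python
import math
import random
from typing import Callable, List, Optional

Word = List[str]

def error_language(rnn: Callable[[Word], bool],
                   prop: Callable[[Word], bool]) -> Callable[[Word], bool]:
    """Black-box 'error' language: sequences the RNN accepts but the
    requirement rejects. Both components are opaque executable oracles;
    neither needs an explicit automaton or a specification formalism."""
    return lambda w: rnn(w) and not prop(w)

def pac_sample_size(epsilon: float, delta: float) -> int:
    """Standard PAC-style bound (assumption, not the paper's refined bound):
    if m >= (1/epsilon) * ln(1/delta) i.i.d. samples all miss the target
    language, its probability mass is below epsilon with confidence 1 - delta."""
    return math.ceil(math.log(1.0 / delta) / epsilon)

def check_property(black_box: Callable[[Word], bool],
                   sampler: Callable[[], Word],
                   epsilon: float, delta: float) -> Optional[Word]:
    """On-the-fly emptiness test of the black-box error language.
    Returns None if no violation was found (property probably holds),
    otherwise a concrete counterexample sequence."""
    for _ in range(pac_sample_size(epsilon, delta)):
        w = sampler()
        if black_box(w):
            return w           # certain violation: the RNN misclassifies w
    return None                # error language is probably empty

if __name__ == "__main__":
    # Hypothetical stand-ins for the RNN under analysis and the requirement
    # "no accepted sequence may contain the token 'stop'".
    rnn = lambda w: w.count("a") % 2 == 0          # toy sequence classifier
    requirement = lambda w: "stop" not in w
    alphabet = ["a", "b", "stop"]
    sampler = lambda: random.choices(alphabet, k=random.randint(1, 8))
    cex = check_property(error_language(rnn, requirement), sampler,
                         epsilon=0.01, delta=0.001)
    print("counterexample:", cex)
```

Under these assumptions, a None result means that, with confidence 1 - delta, the error language has probability mass below epsilon with respect to the sampling distribution, while any returned word is a certain violation; in the paper's approach such violations are generalized into a DFA that characterizes the error in an interpretable way.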


Bibliographic Details
2021
recurrent neural networks
probably approximately correct learning
black-box explainability
Natural and Exact Sciences
Computer and Information Sciences
English
Agencia Nacional de Investigación e Innovación
REDI
https://hdl.handle.net/20.500.12381/457
https://doi.org/10.3390/make3010010
Open access
Attribution 4.0 International (CC BY)
bitstream.url.fl_str_mv https://redi.anii.org.uy/jspui/bitstream/20.500.12381/457/2/license.txt
https://redi.anii.org.uy/jspui/bitstream/20.500.12381/457/1/make-03-00010.pdf
collection REDI
dc.creator.none.fl_str_mv Mayr, Franz
Yovine, Sergio
Visca, Ramiro
dc.date.accessioned.none.fl_str_mv 2021-09-30T13:22:36Z
dc.date.available.none.fl_str_mv 2021-09-30T13:22:36Z
dc.date.issued.none.fl_str_mv 2021-02
dc.description.abstract.none.fl_txt_mv This paper presents a novel on-the-fly, black-box, property-checking through learning approach as a means for verifying requirements of recurrent neural networks (RNN) in the context of sequence classification. Our technique steps on a tool for learning probably approximately correct (PAC) deterministic finite automata (DFA). The sequence classifier inside the black-box consists of a Boolean combination of several components, including the RNN under analysis together with requirements to be checked, possibly modeled as RNN themselves. On one hand, if the output of the algorithm is an empty DFA, there is a proven upper bound (as a function of the algorithm parameters) on the probability of the language of the black-box to be nonempty. This implies the property probably holds on the RNN with probabilistic guarantees. On the other, if the DFA is nonempty, it is certain that the language of the black-box is nonempty. This entails the RNN does not satisfy the requirement for sure. In this case, the output automaton serves as an explicit and interpretable characterization of the error. Our approach does not rely on a specific property specification formalism and is capable of handling nonregular languages as well. Besides, it neither explicitly builds individual representations of any of the components of the black-box nor resorts to any external decision procedure for verification. This paper also improves previous theoretical results regarding the probabilistic guarantees of the underlying learning algorithm.
dc.identifier.anii.es.fl_str_mv POS_ICT4V_2016_1_15, FSDA_1_2018_1_154419, FMV_1_2019_1_155913.
dc.identifier.doi.none.fl_str_mv https://doi.org/10.3390/make3010010
dc.identifier.uri.none.fl_str_mv https://hdl.handle.net/20.500.12381/457
dc.language.iso.none.fl_str_mv eng
dc.publisher.es.fl_str_mv MDPI
dc.rights.es.fl_str_mv Acceso abierto
dc.rights.license.none.fl_str_mv Reconocimiento 4.0 Internacional. (CC BY)
dc.rights.none.fl_str_mv info:eu-repo/semantics/openAccess
dc.source.es.fl_str_mv Machine Learning and Knowledge Extraction
dc.source.none.fl_str_mv reponame:REDI
instname:Agencia Nacional de Investigación e Innovación
instacron:Agencia Nacional de Investigación e Innovación
dc.subject.anii.none.fl_str_mv Ciencias Naturales y Exactas
Ciencias de la Computación e Información
dc.subject.es.fl_str_mv recurrent neural networks
probably approximately correct learning
black-box explainability
dc.title.none.fl_str_mv Property Checking with Interpretable Error Characterization for Recurrent Neural Networks
dc.type.es.fl_str_mv Artículo
dc.type.none.fl_str_mv info:eu-repo/semantics/article
dc.type.version.es.fl_str_mv Publicado
dc.type.version.none.fl_str_mv info:eu-repo/semantics/publishedVersion