On-the-fly Black-Box Probably Approximately Correct Checking of Recurrent Neural Networks

Mayr, Franz - Yovine, Sergio - Visca, Ramiro

Abstract:

We propose a procedure for checking properties of recurrent neural networks without any access to their internal structure or code. Our approach is a case of black-box checking based on learning a probably approximately correct, regular approximation of the intersection of the language of the black box (the network) with the complement of the property to be checked, without explicitly building automata-based individual representations of either. When the algorithm returns an empty language, there is a proven upper bound on the probability that the network does not satisfy the requirement. When the returned language is nonempty, it is certain that the network does not satisfy the property. In this case, a regular language approximating the intersection is output, together with true sequences of the network that violate the property. We show that this approach offers better guarantees than post-learning verification, where the property is checked on a learned model of the network alone. Besides, it does not require resorting to an external decision procedure for verification, nor fixing a specific requirement specification formalism.
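
The abstract describes a statistical, black-box check: query the network and the property on sampled sequences, and either return concrete violating sequences or conclude, with quantified confidence, that violations are unlikely. The following minimal Python sketch illustrates only this sampling/emptiness-test semantics, not the authors' on-the-fly automata-learning algorithm; rnn_accepts, property_holds, and sample_sequence are hypothetical toy stand-ins for the black-box network, the requirement, and the sampling distribution.

import math
import random

# Toy stand-ins (assumptions, not the paper's models): the black-box network
# and the property are arbitrary boolean predicates over sequences.
ALPHABET = ["a", "b"]

def rnn_accepts(seq):
    # Hypothetical black box: accepts sequences with an even number of 'a's.
    return seq.count("a") % 2 == 0

def property_holds(seq):
    # Hypothetical requirement: no run of three consecutive 'b's.
    return "bbb" not in seq

def sample_sequence(max_len, rng):
    # Draws a word under a fixed, arbitrary distribution over sequences.
    length = rng.randint(0, max_len)
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def pac_check(epsilon, delta, max_len=20, seed=0):
    # Statistical emptiness test of L(network) intersected with complement(property).
    # If no violation is found among n >= ln(1/delta)/epsilon samples, then with
    # confidence 1 - delta the probability of a violation (under the sampling
    # distribution) is below epsilon. Any returned sequence is a true
    # counterexample: the network accepts it and the property fails on it.
    rng = random.Random(seed)
    n = math.ceil(math.log(1.0 / delta) / epsilon)
    violations = []
    for _ in range(n):
        w = sample_sequence(max_len, rng)
        if rnn_accepts(w) and not property_holds(w):
            violations.append(w)
    return violations

if __name__ == "__main__":
    counterexamples = pac_check(epsilon=0.01, delta=0.01)
    if counterexamples:
        print("Property violated; sample counterexamples:", counterexamples[:5])
    else:
        print("No violation found (violation probability < 0.01 with confidence 0.99)")

With epsilon = delta = 0.01 the sketch draws 461 samples; an empty result plays the role of the "empty language" verdict described in the abstract, while any returned sequence is a concrete violation of the property accepted by the (toy) network.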


Bibliographic Details

Date issued: 2020-08
Source: Machine Learning and Knowledge Extraction - International Cross-Domain Conference, CD-MAKE 2020
Type: Conference paper (submitted version)
Sponsor: Agencia Nacional de Investigación e Innovación
Funding identifiers: POS_ICT4V_2016_1_15, FSDA_1_2018_1_154419, FMV_1_2019_1_155913
Subjects: Artificial intelligence; Recurrent neural networks; Verification; Natural and Exact Sciences; Computer and Information Sciences
Language: English
Repository: REDI (Agencia Nacional de Investigación e Innovación)
Handle: https://hdl.handle.net/20.500.12381/466
DOI: https://doi.org/10.1007/978-3-030-57321-8_19
Full text: https://redi.anii.org.uy/jspui/bitstream/20.500.12381/466/1/On_the_fly_Verification_of_Recurrent_Neural_Networks_through_Automata_Learning.pdf
Access: Open access
License: Attribution 4.0 International (CC BY)