On-the-fly Black-Box Probably Approximately Correct Checking of Recurrent Neural Networks

Mayr, Franz - Yovine, Sergio - Visca, Ramiro

Abstract:

We propose a procedure for checking properties of recurrent neural networks without any access to their internal structure or code. Our approach is a case of black-box checking based on learning a probably approximately correct, regular approximation of the intersection of the language of the black-box (the network) with the complement of the property to be checked, without explicitly building automata-based individual representations of them. When the algorithm returns an empty language, there is a proven upper bound on the probability of the network not verifying the requirement. When the returned language is nonempty, it is certain that the network does not satisfy the property. In this case, a regular language approximating the intersection is output together with true sequences of the network violating the property. We show that this approach offers better guarantees than post-learning verification, where the property is checked on a learned model of the network alone. Besides, it does not require resorting to an external decision procedure for verification or fixing a specific requirement specification formalism.
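To make the PAC guarantee concrete, the following is a minimal illustrative sketch, not the authors' actual learning-based algorithm: it only shows the sampling side of a black-box check, where the network and the property are queried as opaque membership oracles. The names `network_accepts`, `property_holds`, and `sample` are hypothetical stand-ins for the black-box network, the requirement, and a fixed sampling distribution over sequences.

```python
import math

def pac_check(network_accepts, property_holds, sample,
              epsilon=0.05, delta=0.01):
    """Black-box PAC-style check (illustrative sketch).

    Draws n >= ln(1/delta)/epsilon sequences from a fixed distribution.
    If some sampled sequence is accepted by the network but violates the
    property, it is returned as a certain counterexample (a true run of
    the network). If none is found, with confidence 1 - delta the
    probability mass of violating sequences is below epsilon.
    """
    n = math.ceil(math.log(1 / delta) / epsilon)  # standard PAC sample size
    for _ in range(n):
        w = sample()  # query the black-box only through sampled sequences
        if network_accepts(w) and not property_holds(w):
            return w  # nonempty result: the network provably violates the property
    return None  # empty result: PAC upper bound on the violation probability
```

Note the asymmetry the abstract describes: a returned counterexample is certain, while an empty result is only probabilistically correct, with the guarantee controlled by `epsilon` and `delta`.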


Bibliographic Details
2020
Agencia Nacional de Investigación e Innovación
Artificial intelligence
Recurrent neural networks
Verification
Ciencias Naturales y Exactas
Ciencias de la Computación e Información
English
REDI
https://hdl.handle.net/20.500.12381/466
https://doi.org/10.1007/978-3-030-57321-8_19
Open access
Attribution 4.0 International (CC BY)