Safexplain: Safe and Explainable Critical Embedded Systems Based on AI

J Abella, J Perez, C Englund, B Zonooz, G Giordana, C Donzella, FJ Cazorla, E Mezzetti, I Serra, A Brando, I Agirre, F Eizaguirre, TH Bui, E Arani, F Sarfraz, A Balasubramaniam, A Badar, I Bloise, L Feruglio, I Cinelli, D Brighenti, D Cunial, IEEE (Editor)

Scientific output: Book/Report › Book › Research › Peer-reviewed

Abstract

Deep Learning (DL) techniques are at the heart of most future advanced software functions in Critical Autonomous AI-based Systems (CAIS), where they also represent a major competitive factor. Hence, the economic success of CAIS industries (e.g., automotive, space, railway) depends on their ability to design, implement, qualify, and certify DL-based software products under bounded effort/cost. However, there is a fundamental gap between Functional Safety (FUSA) requirements on CAIS and the nature of DL solutions. This gap stems from the development process of DL libraries and affects high-level safety concepts such as (1) explainability and traceability, (2) suitability for varying safety requirements, (3) FUSA-compliant implementations, and (4) real-time constraints. In fact, the data-dependent and stochastic nature of DL algorithms clashes with current FUSA practice, which instead builds on deterministic, verifiable, and pass/fail test-based software. The SAFEXPLAIN project tackles these challenges by providing a flexible approach to allow the certification (and hence the adoption) of DL-based solutions in CAIS, building on: (1) DL solutions that provide end-to-end traceability, with specific approaches to explain whether predictions can be trusted and strategies to reach (and prove) correct operation, in accordance with certification standards; (2) alternative and increasingly sophisticated design safety patterns for DL with varying criticality and fault-tolerance requirements; (3) DL library implementations that adhere to safety requirements; and (4) computing platform configurations, to regain determinism, and probabilistic timing analyses, to handle the remaining non-determinism.
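To make the idea of a design safety pattern for DL concrete, the sketch below shows a minimal "doer/checker" arrangement in Python: a DL component produces a prediction, and a deterministic safety monitor either accepts it or switches to a safe fallback action. All names (`dl_component`, `safety_monitor`, the confidence threshold, the fallback action) are hypothetical illustrations, not part of the SAFEXPLAIN project's actual design.

```python
from dataclasses import dataclass

# Hypothetical sketch of a doer/checker safety pattern, in the spirit of
# the design safety patterns the abstract alludes to. All names and the
# toy logic below are illustrative assumptions, not project artifacts.

@dataclass
class Prediction:
    label: str
    confidence: float  # model-reported confidence in [0, 1]

def dl_component(sensor_input: float) -> Prediction:
    """Stand-in ("doer") for a DL inference call, e.g. obstacle detection."""
    # Toy logic: strong signal yields a confident detection,
    # weak signal yields an uncertain one.
    if sensor_input > 0.8:
        return Prediction("obstacle", 0.95)
    return Prediction("clear", 0.40)

def safety_monitor(pred: Prediction, threshold: float = 0.9) -> str:
    """Deterministic "checker": accept the DL output only when its
    confidence clears a fixed threshold; otherwise command a safe
    degraded-mode action whose behavior can be verified conventionally."""
    if pred.confidence >= threshold:
        return pred.label
    return "fallback_safe_stop"

# Usage: the monitor passes through a trusted prediction and
# overrides an uncertain one.
print(safety_monitor(dl_component(0.9)))  # -> obstacle
print(safety_monitor(dl_component(0.2)))  # -> fallback_safe_stop
```

The point of the pattern is that only the small, deterministic monitor needs to be certified to the highest integrity level; the stochastic DL component sits behind it at a lower criticality, which is one way to reconcile DL with pass/fail test-based FUSA practice.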
Original language: English
Number of pages: 6
DOIs
Publication status: Published - 2023
