TY - BOOK
T1 - SAFEXPLAIN: Safe and Explainable Critical Embedded Systems Based on AI
AU - Abella, J
AU - Perez, J
AU - Englund, C
AU - Zonooz, B
AU - Giordana, G
AU - Donzella, C
AU - Cazorla, FJ
AU - Mezzetti, E
AU - Serra, I
AU - Brando, A
AU - Agirre, I
AU - Eizaguirre, F
AU - Bui, TH
AU - Arani, E
AU - Sarfraz, F
AU - Balasubramaniam, A
AU - Badar, A
AU - Bloise, I
AU - Feruglio, L
AU - Cinelli, I
AU - Brighenti, D
AU - Cunial, D
A2 - IEEE
PY - 2023
Y1 - 2023
N2 - Deep Learning (DL) techniques are at the heart of most future advanced software functions in Critical Autonomous AI-based Systems (CAIS), where they also represent a major competitive factor. Hence, the economic success of CAIS industries (e.g., automotive, space, railway) depends on their ability to design, implement, qualify, and certify DL-based software products under bounded effort/cost. However, there is a fundamental gap between Functional Safety (FUSA) requirements on CAIS and the nature of DL solutions. This gap stems from the development process of DL libraries and affects high-level safety concepts such as (1) explainability and traceability, (2) suitability for varying safety requirements, (3) FUSA-compliant implementations, and (4) real-time constraints. As a matter of fact, the data-dependent and stochastic nature of DL algorithms clashes with current FUSA practice, which instead builds on deterministic, verifiable, and pass/fail test-based software. The SAFEXPLAIN project tackles these challenges by providing a flexible approach to allow the certification - hence adoption - of DL-based solutions in CAIS, building on: (1) DL solutions that provide end-to-end traceability, with specific approaches to explain whether predictions can be trusted and strategies to reach (and prove) correct operation, in accordance with certification standards; (2) alternative and increasingly sophisticated design safety patterns for DL with varying criticality and fault tolerance requirements; (3) DL library implementations that adhere to safety requirements; and (4) computing platform configurations, to regain determinism, and probabilistic timing analyses, to handle the remaining non-determinism.
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=uab_pure&SrcAuth=WosAPI&KeyUT=WOS:001027444200173&DestLinkType=FullRecord&DestApp=WOS_CPL
UR - https://www.scopus.com/pages/publications/85162662708
U2 - 10.23919/DATE56975.2023.10137128
DO - 10.23919/DATE56975.2023.10137128
M3 - Book
SN - 979-8-3503-9624-9
BT - SAFEXPLAIN: Safe and Explainable Critical Embedded Systems Based on AI
ER -