CAREER: Explainability for Integrated Cyber-Physical Systems

This project develops novel architectures and methodologies for explainable, reliable, and trustworthy integrated cyber-physical systems (i-CPS), with a specific focus on societal-scale systems that combine deep learning components, human expertise, real-time interactions, and dynamic physical environments. The core hypothesis is that rigorous explainability fosters system understanding and builds trust by ensuring transparent, well-documented system behaviors.

The project's key innovation is a hierarchical approach to developing explainers using formal methods that analyze and quantify uncertainty across sensing, perception, control, and actuation. By integrating domain experts' knowledge and developing novel formal methods, the research seeks to improve the trustworthiness of CPS in real-world deployments.

The research is structured across four primary thrusts: (1) developing an explainable CPS architecture with formal interpretation of system components and interactions, (2) formalizing and incorporating domain knowledge to enhance learning-enabled components, (3) evaluating methods through real-world emergency response deployment in Nashville, TN, and (4) creating an interdisciplinary educational approach to CPS explainability.

The broader impacts of this research promise to enhance trust, safety, and accountability in critical domains such as healthcare, transportation, and industrial automation by making complex system decision-making processes transparent and understandable. By empowering users and facilitating cross-disciplinary collaboration, the project aims to advance human-centered design science and promote broader acceptance of cyber-physical technologies.

Sponsors
NSF
Lead PI
Meiyi Ma