On Safety, Assurance and Reliability: A Software Engineering Perspective
By
Marsha Chechik
University of Toronto
Thursday, March 31, 2022, 15:30-16:30 EST
On Zoom
To attend the talk, fill out the Google form by Wednesday, March 30, 7:00 p.m.
Abstract:
From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security and privacy. In this environment, the compliance of software development with standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully, according to well-justified methods, and articulate these justifications in an assurance case that is ultimately judged by a human.

Building safety arguments for traditional software systems is difficult: they are lengthy and expensive to maintain, especially as software undergoes change. Safety is also notoriously non-compositional: each subsystem might be safe on its own, but together they may create unsafe behaviors. It is also easy to miss cases, which in the simplest instance means developing an argument for when a condition is true while failing to argue the case when it is false. Furthermore, many ML-based systems are becoming safety-critical. For example, Tesla self-driving cars recently misclassified emergency vehicles and caused multiple crashes. ML-based systems typically do not have precisely specified and machine-verifiable requirements. While some safety requirements can be stated clearly, e.g., "the system should detect all pedestrians at a crossing", these requirements apply to the entire system, making them too high-level for safety analysis of individual components. Thus, systems with ML components (MLCs) add a significant layer of complexity for safety assurance.

I argue that safety assurance should be an integral part of building safe and reliable software systems, but this process needs support from advanced software engineering and software analysis. In this talk, I outline a few approaches for the development of principled, tool-supported methodologies for creating and managing assurance arguments. I then describe recent work on specifying and verifying reliability requirements for machine-learned components in safety-critical domains.
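As an illustrative aside, not drawn from the talk itself: an assurance case can be thought of as a tree of claims, each supported by subclaims and linked evidence. The minimal Python sketch below (all names hypothetical) shows how support for a top-level claim propagates from that evidence, and why a single piece of stale evidence, such as a test report invalidated by a software change, undermines the whole argument.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: a toy claim/evidence tree in the spirit of
# assurance cases (e.g., Goal Structuring Notation). Names are hypothetical.

@dataclass
class Evidence:
    description: str   # e.g., a test report or a verification proof
    valid: bool        # whether the evidence currently holds

@dataclass
class Claim:
    statement: str
    evidence: List[Evidence] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        """A claim is supported if all its evidence is valid
        and every subclaim is itself supported."""
        return (all(e.valid for e in self.evidence)
                and all(c.supported() for c in self.subclaims))

# A system-level safety claim decomposed into component-level subclaims.
# Covering only one operating condition and forgetting another is exactly
# the kind of missed case the abstract warns about.
top = Claim(
    "The system detects all pedestrians at a crossing",
    subclaims=[
        Claim("Detector meets accuracy target in daylight",
              evidence=[Evidence("test suite T1 passed", True)]),
        Claim("Detector meets accuracy target at night",
              evidence=[Evidence("test suite T2 passed", False)]),  # stale
    ],
)
print(top.supported())  # False: one piece of evidence no longer holds
```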
Biography:
Marsha Chechik is Professor and Chair of the Department of Computer Science at the University of Toronto. She received her Ph.D. from the University of Maryland in 1996. Prof. Chechik's research interests are in the application of formal methods to improve the quality of software. She has authored over 200 papers in formal methods, software specification and verification, computer safety and security, and requirements engineering. Marsha Chechik is Program Chair of the 2023 International Symposium on Formal Methods (FM'23). She has been Program Committee Co-Chair of the 2021 International Conference on Foundations of Software Engineering (ESEC/FSE'21), the 2018 International Conference on Software Engineering (ICSE'18), the 2016 International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS'16), the 2016 Working Conference on Verified Software: Theories, Tools, and Experiments (VSTTE'16), the 2014 International Conference on Automated Software Engineering (ASE'14), the 2009 International Conference on Fundamental Approaches to Software Engineering (FASE'09), the 2008 International Conference on Concurrency Theory (CONCUR'08), and the 2008 Conference of the Centre for Advanced Studies on Collaborative Research (CASCON'08). She is a Distinguished Member of ACM, a Vice Chair of ACM SIGSOFT, and holds a Bell University Chair in Software Engineering.