Working Group "Health-Aware Control Design and Safe Learning for Safety-Critical Systems"

The working group will focus on Health-Aware Control (HAC) design and safe control learning, with a special focus on safety-critical dynamical systems, emphasizing the development of approaches that incorporate system health or remaining useful life estimates into control design to preserve or extend system lifespan while ensuring specified performance levels. The working group will assemble researchers and practitioners from domains including reliability, maintenance, diagnosis, and prognostics, as well as control theory, covering both model-based approaches (model predictive control, optimal control) and learning-based approaches (reinforcement learning, safe RL, robust RL, and deep learning). The working group will encourage fundamental work encompassing estimation theory and control theory, along with the fundamental principles of reinforcement learning and the emerging field of safe control learning. At the same time, the working group will apply the results of this fundamental research to contemporary problems in safety-critical systems such as power networks, transportation systems, robotics, and autonomous vehicles.

Description

In recent decades, research efforts have predominantly centered on fault detection, fault isolation, and failure prognostics. Failure prognostics typically involves accurate estimation of the state of health (SOH) of a system, followed by its projection into the future to predict the remaining useful life (RUL) of assets. Hybrid approaches that combine data and models remain the most effective, owing to severe nonlinearities and the unavailability of exact models. Most existing works in prognostics, whether based on Artificial Intelligence (AI) or estimation theory, have focused on open-loop dynamical systems, neglecting the influence of controller inputs/actions and controller reconfiguration.
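As a concrete illustration of the SOH-projection step described above, the following minimal sketch fits a degradation trend to an SOH history and projects it to a failure threshold. The linear degradation model, threshold value, and function names are illustrative assumptions, not a prescribed method:

```python
import numpy as np

def estimate_rul(soh_history, dt, failure_threshold=0.6):
    """Illustrative RUL estimate: fit a linear degradation trend to the
    observed state-of-health (SOH) history and project it forward to the
    time at which it crosses the failure threshold."""
    t = np.arange(len(soh_history)) * dt
    slope, intercept = np.polyfit(t, soh_history, 1)  # linear trend fit
    if slope >= 0:
        return np.inf  # no degradation trend observed
    t_fail = (failure_threshold - intercept) / slope  # threshold crossing
    return max(t_fail - t[-1], 0.0)

# Example: SOH decaying linearly from 1.0 at 0.01 per step (dt = 1)
soh = 1.0 - 0.01 * np.arange(20)
print(estimate_rul(soh, dt=1.0))  # ≈ 21 steps remaining
```

In practice the projection would use a nonlinear or stochastic degradation model and report an RUL distribution rather than a point estimate; the linear fit is only the simplest instance of the idea.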

Closed-loop operation is indispensable for ensuring stability, efficiency, and autonomy in safety-critical systems such as power, water, and transport infrastructure. For such systems to remain sensitive to functional degradation, it becomes imperative to comprehensively integrate prognostics-based information into control design and learning processes.

To address this scientific challenge, Health-Aware Control (HAC) has recently emerged as a promising axis wherein system SOH estimation and prognostics are taken into account for controller design as well as reconfiguration, in order to preserve or extend system lifespan while ensuring specified performance levels. HAC promises enhanced reliability and extended RUL for systems under functional degradation. Over the past decade, there has been a significant surge in interest in integrating the state of health within the control design process. This heightened attention is evident across various control domains, including model predictive control, linear matrix inequality-based approaches, linear parameter-varying model-based methods, set-based strategies, and learning-based approaches. These diverse methodologies incorporate predictions of the state of health, remaining useful life, or system reliability within the control framework. However, the practical viability of such approaches remains limited in the absence of exact system models under healthy and faulty conditions and of knowledge of degradation/failure models.
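The core HAC idea, trading control performance against degradation, can be sketched in a few lines. The scalar plant, cost weights, and effort-driven degradation law below are illustrative assumptions chosen only to show how a health-dependent penalty makes the controller more conservative as SOH declines:

```python
import numpy as np

a, b = 0.9, 0.5          # assumed scalar plant: x+ = a*x + b*u
x_ref = 1.0              # tracking reference
x, soh = 0.0, 1.0        # state and actuator state of health in (0, 1]
u_grid = np.linspace(-2, 2, 401)  # candidate control inputs

for k in range(50):
    # Health-aware cost: tracking term plus effort term scaled by 1/SOH,
    # so control effort is penalized more heavily as health degrades.
    x_next = a * x + b * u_grid
    cost = (x_next - x_ref) ** 2 + (0.1 / soh) * u_grid ** 2
    u = u_grid[np.argmin(cost)]          # one-step receding-horizon choice
    x = a * x + b * u
    soh = max(soh - 1e-3 * u ** 2, 1e-3) # assumed effort-driven degradation
```

A full HAC scheme would replace the one-step grid search with a multi-step optimization (e.g. MPC) and the toy degradation law with a prognostics model, but the structure of the trade-off is the same.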

On the other hand, as most degradation mechanisms are nonlinear and unknown in nature, it is essential to exploit data for prognostics as well as control reconfiguration, calling for the development of data-driven approaches. In this context, Reinforcement Learning (RL)-based approaches for learning optimal control policies/laws are well suited to designing optimal controllers from input-output data, without exact knowledge of system models. However, assuring safety during the learning phase (exploration) as well as the operational phase (exploitation) is of paramount importance when RL is applied to dynamical systems. As such, there has recently been an unprecedented and swift development of control learning approaches, termed Safe Reinforcement Learning (Safe RL), which prioritize the safety, stability, and optimality of systems, providing guarantees over an infinite time horizon.
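One simple Safe RL mechanism, among many studied in this field, is a safety filter that masks unsafe actions during both exploration and exploitation. The toy gridworld, unsafe set, and hyperparameters below are illustrative assumptions; the point is only that the constraint is enforced throughout learning, not just at convergence:

```python
import numpy as np

rng = np.random.default_rng(0)
size, goal = 4, (3, 3)                       # assumed 4x4 gridworld
unsafe = {(1, 1), (2, 2)}                    # assumed known unsafe cells
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)] # up, down, left, right
Q = np.zeros((size, size, len(actions)))

def safe_actions(s):
    # Safety filter: allow only actions whose successor stays on the
    # grid and outside the unsafe set.
    out = []
    for i, (dr, dc) in enumerate(actions):
        r, c = s[0] + dr, s[1] + dc
        if 0 <= r < size and 0 <= c < size and (r, c) not in unsafe:
            out.append(i)
    return out

visited_unsafe = False
for ep in range(500):
    s = (0, 0)
    for t in range(60):
        allowed = safe_actions(s)
        if rng.random() < 0.2:               # epsilon-greedy, filtered
            i = int(rng.choice(allowed))
        else:
            i = max(allowed, key=lambda j: Q[s[0], s[1], j])
        s2 = (s[0] + actions[i][0], s[1] + actions[i][1])
        r = 1.0 if s2 == goal else -0.01
        Q[s[0], s[1], i] += 0.5 * (r + 0.95 * Q[s2[0], s2[1]].max()
                                   - Q[s[0], s[1], i])
        s = s2
        visited_unsafe |= s in unsafe
        if s == goal:
            break

print(visited_unsafe)  # False: the filter kept every episode safe
```

Here the filter relies on knowing the unsafe set exactly; much of the Safe RL literature addressed by the working group concerns the harder setting where safety must be certified under model uncertainty (e.g. via control barrier functions or reachability analysis).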

Activities:

- Specific workshops / invited sessions in IFAC TC 6.4 events and the World Congress, as well as at relevant IFAC Symposia

- Special issues in JCR-indexed journals

- A survey in Annual Reviews in Control

- Organization of seminar and round-table discussion sessions at the national level (e.g. GDR MACS in France, World Class Maintenance in the Netherlands) and the international level (during conferences)

Members

  • Vicenç Puig (Spain)
  • C. Bérenguer (France)
  • Silvio Simani (Italy)
  • Didier Theilliol (France)
  • Youmin Zhang (Canada)
  • K. Valavanis (USA)
  • J. Sun (USA)
  • Gautam Biswas (USA)
  • Chetan Kulkarni (USA)
  • Olga Fink (Switzerland)
  • Tiedo Tinga (Netherlands) 
  • Alfredo Nuñez Vicencio (Netherlands)
  • Jin Jiang (Canada)
  • Marcos Orchard (Chile)
  • M. Galeotta (ESA, Italy)
  • S. Le Gonec (ArianeGroup, France)
  • M. Mensler (Nissan Automotive Europe, France)
  • Sohaïb El Outmani (Entroview, France)

Contact Persons

Mayank S JHA1, Vasso Reppa2, John-Jairo Martinez Molina3

1CRAN, UMR CNRS 7039, Université de Lorraine, France

2Maritime & Transport Technology Dept, TU Delft, Netherlands

3GIPSA-lab, Université de Grenoble, France