Reinforcement Learning-based Event-Triggered Model Predictive Control for Autonomous Vehicle Path Following

Document Type

Conference Proceeding

Publication Date

1-1-2022

Abstract

Event-triggered model predictive control (MPC) has been proposed in the literature to alleviate the high computational demand of MPC. In contrast to conventional time-triggered MPC, event-triggered MPC solves the optimal control problem only when an event is triggered. Several event-trigger policies have been studied in the literature, typically requiring prior knowledge of the MPC closed-loop system behavior. This paper addresses this limitation by investigating the use of model-free reinforcement learning (RL) to trigger MPC. Specifically, the optimal event-trigger policy is learned by an RL agent through interactions with the MPC closed-loop system, whose dynamical behavior is assumed to be unknown to the RL agent. A reward function is defined to balance closed-loop control performance against event frequency. As an illustrative example, the autonomous vehicle path-following problem is used to demonstrate the applicability of using RL to learn and execute a trigger policy for event-triggered MPC.
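The idea described in the abstract can be sketched in miniature: an RL agent treats "trigger the MPC solve" vs. "hold the previous control input" as a binary action, and its reward penalizes both tracking error and trigger events. The sketch below uses tabular Q-learning on a toy scalar plant; the dynamics, the one-step stand-in for the MPC solve, and all parameter values (e.g. the trigger penalty `lam`) are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unstable scalar plant standing in for the vehicle path-following loop:
# x[k+1] = a*x[k] + b*u[k] + noise. (Assumed dynamics, for illustration only.)
a, b = 1.1, 1.0

def mpc_solve(x):
    """Stand-in for the MPC optimal control problem: drive the state to zero."""
    return -a * x / b

def discretize(x, n_bins=11, lim=2.0):
    """Map the continuous tracking error to a discrete state for tabular Q-learning."""
    return int(np.clip((x + lim) / (2 * lim) * (n_bins - 1), 0, n_bins - 1))

n_states, n_actions = 11, 2          # action 0: hold input, action 1: trigger MPC
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps, lam = 0.1, 0.95, 0.2, 0.05  # lam penalizes each trigger event

for episode in range(300):
    x, u = rng.uniform(-1.5, 1.5), 0.0
    s = discretize(x)
    for k in range(30):
        # Epsilon-greedy action selection over the trigger decision.
        act = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        if act == 1:                  # event triggered: re-solve the MPC problem
            u = mpc_solve(x)
        x = a * x + b * u + 0.01 * rng.standard_normal()
        s_next = discretize(x)
        # Reward trades off closed-loop performance against event frequency.
        r = -x**2 - lam * act
        Q[s, act] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, act])
        s = s_next

# After training, inspect the greedy trigger decision at a large tracking error.
print(int(np.argmax(Q[discretize(1.8)])))
```

Under these assumptions, large tracking errors should come to favor triggering (since holding lets the unstable plant diverge), while small errors favor holding, which is the performance-vs-frequency trade-off the reward encodes.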

Publication Source (Journal or Book title)

Proceedings of the American Control Conference

First Page

3342

Last Page

3347
