eScholarship
Open Access Publications from the University of California

UC Davis Electronic Theses and Dissertations

Constrained Control of a Process Network using Multi-Agent Reinforcement Learning

Abstract

This thesis examines the application of multi-agent reinforcement learning for autonomous model-free control and constrained optimization of process networks comprising multiple interconnected processes. First, the main types of multi-agent reinforcement learning algorithms are outlined, and the challenges in applying them to process control are identified. Reinforcement learning is then combined with optimal control: the reward function is designed using quadratic penalty functions, which reduces computation time and improves performance relative to the sparse rewards proposed in previous studies. Several multi-agent reinforcement learning strategies, including centralized, decentralized, and mixed-type (centralized action and decentralized execution), are considered and compared, and the benefits and drawbacks of each control strategy are assessed. Finally, the robustness of the control system to parametric uncertainties and sensor noise, both of practical significance, is evaluated. Throughout the thesis, a model system comprising two interconnected non-isothermal chemical reactors with a recycle stream and multiple reactions is used to illustrate the design and implementation of the reinforcement learning agents. The control objective is to regulate the reactors' temperatures and concentrations at desired set-points. The proposed framework can be applied to more general process networks.
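The contrast between the quadratic-penalty reward described above and the sparse rewards of earlier studies can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the state layout, set-points, weights, and tolerance are all assumed for the example.

```python
import numpy as np

def quadratic_penalty_reward(state, setpoint, weights):
    """Dense reward: negative weighted squared deviation from the set-point.

    Every time step yields an informative gradient toward the target,
    which is what makes this shaping denser than a sparse reward.
    """
    error = np.asarray(state, dtype=float) - np.asarray(setpoint, dtype=float)
    return -float(np.dot(weights, error ** 2))

def sparse_reward(state, setpoint, tol=0.05):
    """Sparse alternative: reward only when every state is within tolerance."""
    error = np.abs(np.asarray(state, dtype=float) - np.asarray(setpoint, dtype=float))
    return 1.0 if np.all(error < tol) else 0.0

# Illustrative state for two reactors: [T1, C1, T2, C2]
state    = [350.2, 0.48, 365.9, 0.31]
setpoint = [350.0, 0.50, 366.0, 0.30]
weights  = [1.0, 100.0, 1.0, 100.0]   # concentrations weighted more heavily

dense = quadratic_penalty_reward(state, setpoint, weights)   # small negative value
sparse = sparse_reward(state, setpoint)                      # 0.0: T1 is out of tolerance
```

Because the quadratic penalty decreases smoothly as the controller approaches the set-point, the agent receives feedback at every step, whereas the sparse reward stays flat until the target band is reached.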
