List of English Publications

2019
Jeong Woo Kim, Hyungbo Shim, Insoon Yang: On Improving the Robustness of Reinforcement Learning-Based Controllers Using Disturbance Observer. In: Proc. of 2019 IEEE 58th Conference on Decision and Control, pp. 8487-852, IEEE, Nice, France, 2019.

Abstract: Because reinforcement learning (RL) may cause stability and safety issues when applied directly to physical systems, a simulator is often used to learn a control policy. However, control performance can deteriorate in the real plant because of the discrepancy between the simulator and the plant. In this paper, we propose enhancing the robustness of such RL-based controllers by means of a disturbance observer (DOB). The method compensates for the mismatch between the plant and the simulator and rejects disturbances, maintaining the nominal performance while guaranteeing robust stability. Furthermore, the proposed approach can be applied to partially observable systems. We also characterize conditions under which the learned controller has a provable performance bound when connected to the physical system.
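The idea described in the abstract can be illustrated with a minimal sketch (this is not the paper's algorithm or its analysis): a fixed policy is tuned against a nominal model, the true plant has parameter mismatch plus an external disturbance, and a simple low-pass-filtered disturbance observer estimates the lumped discrepancy and cancels it from the control input. All dynamics, gains, and the `policy` function below are hypothetical scalar choices for illustration only.

```python
# Hypothetical scalar example (not from the paper): a policy designed on the
# nominal (simulator) model runs on a mismatched plant with a disturbance.
a_n, b_n = 0.9, 1.0          # nominal dynamics: x+ = a_n*x + b_n*u
a_p, b_p = 1.0, 1.2          # actual plant dynamics (parameter mismatch)

def policy(x):
    # Stand-in for an RL-learned policy; here just a stabilizing linear gain.
    return -0.8 * x

def simulate(steps=200, use_dob=True, disturbance=0.5):
    x, d_hat = 5.0, 0.0
    for _ in range(steps):
        # DOB compensation: subtract the current disturbance estimate.
        u = policy(x) - (d_hat if use_dob else 0.0)
        # Plant evolves with mismatched parameters and an input disturbance.
        x_next = a_p * x + b_p * (u + disturbance)
        # The gap between the measured state and the nominal prediction,
        # scaled by the nominal input gain, estimates the lumped disturbance.
        residual = x_next - (a_n * x + b_n * u)
        d_hat = 0.7 * d_hat + 0.3 * (residual / b_n)  # low-pass filter
        x = x_next
    return x

print(abs(simulate(use_dob=False)))  # leaves a steady-state offset
print(abs(simulate(use_dob=True)))   # offset largely removed
```

In this toy setting the uncompensated loop settles at a nonzero offset, while the DOB-compensated loop drives the state back toward the nominal behavior, which mirrors the abstract's claim that the DOB rejects the plant-simulator mismatch as a disturbance.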