
    Keynote speaker: Zhong-Ping Jiang

    Title: Robust Adaptive Dynamic Programming with Applications to Power Systems and Neuroscience

    Abstract:

    • Bellman's dynamic programming is a powerful theory for addressing multi-stage decision-making problems and has long been used to solve optimal control problems. However, it suffers from the ‘curse of dimensionality’ and the ‘curse of modeling’. In this talk, a new framework of robust adaptive dynamic programming (RADP) is proposed to relax these two restrictions; in contrast to most of the past literature, the focus is exclusively on continuous-time dynamic systems. By means of reinforcement learning and nonlinear control techniques, tools for the design of adaptive optimal nonlinear controllers will be developed. We will show that RADP is also a significant extension of existing work in approximate/adaptive dynamic programming (ADP), in that the order of the dynamic processes in question need not be known; the mismatch between the real plant and the simplified model is called dynamic uncertainty. Applications to power systems and biological motor control are presented to illustrate the effectiveness of RADP. (A minimal sketch of the underlying policy-iteration idea appears below.)
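    To make the idea concrete, here is a minimal sketch of the policy-iteration scheme that underlies continuous-time ADP, shown in its linear-quadratic special case (Kleinman's algorithm). This is an illustrative, model-based baseline, not the RADP method of the talk: RADP replaces the explicit use of the system matrices A and B below with quantities estimated from measured state and input trajectories, and additionally accounts for dynamic uncertainty. The system and the initial gain in the example are hypothetical.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def kleinman_policy_iteration(A, B, Q, R, K0, n_iter=20):
            # Model-based policy iteration for the continuous-time LQR problem.
            # Starting from a stabilizing gain K0, it alternates policy
            # evaluation (a Lyapunov equation) with policy improvement,
            # converging to the solution of the algebraic Riccati equation.
            K = K0
            for _ in range(n_iter):
                Ak = A - B @ K
                # Policy evaluation: solve Ak' P + P Ak + Q + K' R K = 0
                P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
                # Policy improvement: K <- R^{-1} B' P
                K = np.linalg.solve(R, B.T @ P)
            return K, P

        # Hypothetical example: a double integrator with quadratic costs.
        A = np.array([[0.0, 1.0], [0.0, 0.0]])
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)
        R = np.eye(1)
        K0 = np.array([[1.0, 1.0]])  # stabilizing initial gain
        K, P = kleinman_policy_iteration(A, B, Q, R, K0)

    Each iteration performs a policy evaluation (the Lyapunov solve) followed by a policy improvement; starting from any stabilizing gain, the iterates converge to the optimal LQR gain.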

    Bio:

    • Zhong-Ping Jiang received his PhD degree in automatic control and applied mathematics from the École des Mines de Paris in 1993. He is currently a Professor of Electrical and Computer Engineering at the Tandon School of Engineering, New York University. His main research interests include stability theory; robust, adaptive, and distributed nonlinear control; adaptive dynamic programming; and their applications to information, mechanical, and biological systems. In these fields he has authored over 180 journal papers and numerous conference papers, with an h-index of 61 on Google Scholar. He is coauthor of the books Stability and Stabilization of Nonlinear Systems (with Dr. I. Karafyllis, Springer, 2011) and Nonlinear Control of Dynamic Networks (with Drs. T. Liu and D. J. Hill, Taylor & Francis, 2014). Professor Jiang is an IEEE Fellow and an IFAC Fellow.

      ADPRL 1

      Special session 2

    • Hao Xu
    • Finite Horizon Optimal Control and Communication Co-design for Uncertain Networked Control System with Transmit Power Constraint
    • Avijit Das, Zhen Ni and Xiangnan Zhong
    • Near Optimal Control for Microgrid Energy Systems Considering Battery Lifetime Characteristics
    • Gokhan Cetin, M.Sami Fadali and Hao Xu
    • Model-free Q-learning Optimal Resource Allocation in Uncertain Communication Networks
    • Dongbin Zhao, Haitao Wang, Kun Shao and Yuanheng Zhu
    • Deep Reinforcement Learning with Experience Replay Based on SARSA

      Regular session

    • Tommaso Mannucci and Erik-Jan van Kampen
    • A Hierarchical Maze Navigation Algorithm with Reinforcement Learning and Mapping
    • Arryon Tijsma, Madalina M. Drugan and Marco Wiering
    • Comparing Exploration Strategies for Q-learning in Random Stochastic Mazes

      ADPRL 2

      Special session 3

    • Wang Chong
    • Frequency Stabilization Design for Interconnected Microgrid based on T-S Fuzzy Model with Multiple Time Delays
    • Jingwei Hu, Qiuye Sun and Fei Teng
    • A Game-Theoretic Pricing Model for Energy Internet in Day-Ahead Trading Market Considering Distributed Generations Uncertainty
    • Yihui Zuo and Xiangjun Li
    • Game Theory Applied in System of Renewable Power Generation with HVDC Out-sending Facilitated by Hundred Megawatts Battery Energy Storage Station
    • Qinglai Wei, Ruizhuo Song and Derong Liu
    • Iterative Q-Learning-Based Nonlinear Optimal Tracking Control

      Regular session

    • Suhas Shyamsundar, Tommaso Mannucci and Erik-Jan van Kampen
    • Reinforcement Learning based Algorithm with Safety Handling and Risk Perception
    • Mathijs Pieters and Marco Wiering
    • Q-learning with Experience Replay in a Dynamic Environment

      ADPRL 3

      Keynote speaker

      Regular session

    • Simone Parisi, Alexander Blank, Tobias Viernickel and Jan Peters
    • Local-utopia Policy Selection for Multi-objective Reinforcement Learning
    • Toru Hishinuma and Kei Senda
    • Robust and Explorative Behavior in Model-based Bayesian Reinforcement Learning
    • Zhentao Tang, Dongbin Zhao, Kun Shao and Le Lv
    • ADP with MCTS algorithm for Gomoku

      Poster session

    • Feng Liu, Tao Zheng and Xia Hua
    • A Multi-Criteria Value Iteration Algorithm for POMDP problems
    • Weisheng Qian, Quan Liu, Zongzhang Zhang, Zhiyuan Pan and Shan Zhong
    • Policy Graph Pruning and Optimization in Monte Carlo Value Iteration for Continuous-State POMDPs