2019 IEEE Symposium Series on Computational Intelligence

December 6-9, 2019, Xiamen, China

IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (IEEE ADPRL)

Adaptive dynamic programming (ADP) and reinforcement learning (RL) are two related paradigms for solving decision-making problems where a performance index must be optimized over time. ADP and RL methods enjoy growing popularity and success in applications, fueled by their ability to handle general and complex problems featuring uncertainty, stochastic effects, and nonlinearity.
ADP tackles these challenges by developing optimal control methods that adapt to uncertain systems over time. A user-defined cost function is optimized with respect to an adaptive control law, conditioned on prior knowledge of the system and its state, in the presence of uncertainties. A numerical search over the present value of the control minimizes a nonlinear cost function forward in time, providing a basis for real-time, approximate optimal control. The ability to improve performance over time, subject to new or unexplored objectives or dynamics, has made ADP successful in applications ranging from optimal control and estimation to operations research and computational intelligence.
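As a concrete illustration of this forward-in-time search, the following minimal sketch is hypothetical: the scalar dynamics, quadratic cost, grids, and parameters are all illustrative assumptions, not taken from any symposium material. A value function is approximated on a state grid, and the control applied at each instant is then found by a numerical search over candidate inputs at the current state.

```python
# Minimal ADP sketch (assumed dynamics and cost, for illustration only):
# regulate x_{k+1} = 0.9*x_k + u_k with stage cost x^2 + u^2.
import numpy as np

xs = np.linspace(-2.0, 2.0, 81)        # state grid
us = np.linspace(-1.0, 1.0, 41)        # control grid
gamma = 0.95                           # discount factor
V = np.zeros_like(xs)                  # value-function estimate on the grid

def step(x, u):
    return 0.9 * x + u                 # assumed linear dynamics

def cost(x, u):
    return x**2 + u**2                 # assumed quadratic stage cost

# Offline approximation: V(x) <- min_u [ cost(x, u) + gamma * V(f(x, u)) ]
X, U = np.meshgrid(xs, us, indexing="ij")
Xn = np.clip(step(X, U), xs[0], xs[-1])
C = cost(X, U)
for _ in range(200):
    Vn = np.interp(Xn.ravel(), xs, V).reshape(Xn.shape)
    V = (C + gamma * Vn).min(axis=1)

def control(x):
    """Forward-in-time numerical search over the present control value."""
    xn = np.clip(step(x, us), xs[0], xs[-1])
    return us[np.argmin(cost(x, us) + gamma * np.interp(xn, xs, V))]

x = 1.5
for _ in range(10):                    # closed-loop rollout
    x = step(x, control(x))
print(f"state after 10 steps: {x:.4f}")  # regulated toward the origin
```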
RL takes the perspective of an agent that optimizes its behavior by interacting with its environment and learning from the feedback received. Long-term performance is optimized by learning a value function that predicts the future accumulation of rewards. A core feature of RL is that it requires no a priori knowledge of the environment; the agent must therefore explore parts of the environment it does not know well, while at the same time exploiting its knowledge to maximize performance. RL thus provides a framework for learning to behave optimally in unknown environments, and it has already been applied to robotics, game playing, network management, and traffic control.
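To make the exploration-exploitation trade-off concrete, here is a minimal sketch of tabular Q-learning on a hypothetical five-state chain (the environment, rewards, and hyperparameters are illustrative assumptions): the agent learns an action-value function purely from sampled interaction, with no model of the environment, acting greedily most of the time and exploring at random with small probability.

```python
# Minimal RL sketch (hypothetical 5-state chain MDP, for illustration only):
# tabular Q-learning with epsilon-greedy exploration.
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))   # action-value estimates
alpha, gamma, eps = 0.1, 0.95, 0.1    # step size, discount, exploration rate

def env_step(s, a):
    """Unknown to the agent: move left/right; reward 1 at the right end."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for episode in range(500):
    s = 0
    for t in range(50):
        # Explore with probability eps, otherwise exploit current knowledge.
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = env_step(s, a)
        # Temporal-difference update toward the one-step bootstrapped target.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1))  # learned greedy policy: move right in every state
```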
The goal of the IEEE Symposium on ADPRL is to provide an outlet and a forum for interaction between researchers and practitioners in ADP and RL, in which the clear parallels between the two fields are brought together and exploited. We equally welcome contributions from control theory, computer science, operations research, computational intelligence, and neuroscience, as well as other novel perspectives on ADPRL. We welcome original papers on methods, analysis, applications, and overviews of ADPRL, and we are interested in applications from engineering, artificial intelligence, economics, medicine, and other relevant fields.

Topics

  • Deep learning combined with ADPRL
  • Convergence and performance analysis
  • RL and ADP-based control
  • Function approximation and value function representation
  • Complexity issues in RL and ADP
  • Policy gradient and actor-critic methods
  • Direct policy search
  • Planning and receding-horizon methods
  • Monte-Carlo tree search and other Monte-Carlo methods
  • Adaptive feature discovery
  • Parsimonious function representation
  • Statistical learning and PAC bounds for RL
  • Learning rules and architectures
  • Bandit techniques for exploration
  • Bayesian RL and exploration
  • Finite-sample analysis
  • Partially observable Markov decision processes
  • Neuroscience and biologically inspired control
  • ADP and RL for multiplayer games and multiagent systems
  • Distributed intelligent systems
  • Multi-objective optimization for ADPRL
  • Transfer learning
  • Applications of ADP and RL

Accepted Special Sessions


Title: Knowledge Representation and Transfer for Efficient Reinforcement Learning

Special Session Chairs

Prof. Minwoo Jake Lee, University of North Carolina, Charlotte
Email:

Content

To ensure an RL agent's reliable and robust operation in diverse environments, the agent must possess and maintain the requisite knowledge or skills to perform its tasks properly.
Of the three stages of cognitive information processing (knowledge acquisition, retention, and transfer), this special session focuses on the retention (or representation) and transfer of acquired knowledge for efficiency.

Many recent successes in reinforcement learning have come with the challenges of long learning times and a lack of generalized solutions, even after extensive training. Transfer learning has been attempted in various ways to reuse previously learned knowledge in a new, similar task. Modeling or developing an effective and generalizable representation has also emerged as a key to improving knowledge transfer.

This session is expected to promote discussion of diverse approaches to accelerating and generalizing reinforcement learning through knowledge retention and transfer. Our goal is to attract researchers who are working on different aspects of improving transfer learning. We intend to make this an exciting event not only for discussing diverse transfer and retention approaches but also for exchanging future research directions and challenges. Discussion of the topics below will help researchers scale up reinforcement learning methods to solve complex and, eventually, real-world problems.

Scope and Topics

Submissions are expected on, but are not limited to, the following topics:

  • Transfer learning for (deep) reinforcement learning
  • Transfer in heterogeneous environments
  • Online transfer learning in (deep) reinforcement learning
  • Knowledge sharing/transfer learning for multi-agent reinforcement learning
  • Simulation-to-real transfer
  • Skill/option learning and transfer
  • Meta learning
  • Imitation/human demonstration/inverse reinforcement learning
  • Lifelong reinforcement learning
  • Knowledge representation
  • Representation learning
  • Experience replay

Title: Online Data Learning Designs for Networked Multi-Agent Systems under Constraints

Special Session Chairs

Hao Xu, University of Nevada, NV, USA
Email:

Avimanyu Sahoo, Oklahoma State University, OK, USA
Email:

Content

In recent years, online data learning approaches and frameworks, including reinforcement learning, adaptive dynamic programming, and biologically inspired reinforcement learning, have been developed to effectively address intelligence and resilience issues in networked multi-agent systems, especially under constraints such as limited computation and communication resources. Many emerging smart, network-connected multi-agent systems, such as smart transportation systems, smart grids, smart communities, networked control systems, and cyber-physical systems, use online data learning approaches to find real-time optimal control. These approaches can efficiently solve optimal and reliable design problems for networked multi-agent systems in a distributed and timely manner, while relaxing the impractical requirement of exact knowledge of the system dynamics. This special session will enhance discussion among different societies to explore more challenging cross-disciplinary topics along this direction.

Scope and Topics

This special session will provide a forum to deliver and discuss original research results and new techniques in online data learning for networked multi-agent systems under constraints. We are particularly interested in the following topics:

  • Online data learning design for networked multi-agent systems
  • Online data learning based optimal control with constraints
  • Online data learning based robust adaptive control with constraints
  • Online data learning based event-triggered/self-triggered control
  • Online data learning based network and control co-design
  • Biologically inspired online data learning for multiplayer games
  • Novel online data learning algorithms, stability analysis and convergence
  • New data/self-learning for smart transportation systems

Title: Deep Reinforcement Learning and Adaptive Dynamic Programming for Autonomous Driving

Special Session Chairs

Dr. Qichao Zhang, Institute of Automation, Chinese Academy of Sciences, China
E-mail:

Dr. Yaran Chen, Institute of Automation, Chinese Academy of Sciences, China
E-mail:

Content

Recently, autonomous driving has received considerable attention from many companies and research institutions. Autonomous vehicles are expected to play a key role in future urban transportation systems, as they can increase driving safety, ease traffic congestion, reduce energy consumption, free the driver, and so on. However, the intelligence of current self-driving cars is not yet sufficient for complex urban scenarios. Deep reinforcement learning (DRL) and adaptive dynamic programming (ADP), which enable agents to learn how to act through interaction with the environment, are strong AI paradigms. Recently, approaches based on DRL and ADP have been used in place of traditional methods to address navigation, perception, prediction, planning, and control tasks for autonomous driving in complex or rare scenarios. In addition, neural architecture search with reinforcement learning can provide optimal models for perception and prediction tasks. This special session aims to discuss recent developments and open problems of ADP and DRL in autonomous driving.

Scope and Topics

The aim of this special session is to provide an account of the state of the art in the fast-moving, cross-disciplinary field of adaptive dynamic programming and reinforcement learning for autonomous driving. It is expected to bring together researchers in relevant areas to discuss the latest progress and to propose new problems for future research. All original papers related to ADPRL and autonomous driving are welcome.

The topics of the special session include, but are not limited to:
  • Deep reinforcement learning algorithms
  • Inverse reinforcement learning algorithms
  • Multi-agent reinforcement learning algorithms
  • Adaptive dynamic programming algorithms
  • Navigation/Perception/Planning/Control schemes for autonomous driving
  • Reinforcement learning for autonomous driving
  • Deep reinforcement learning for autonomous driving
  • Transfer learning for autonomous driving
  • Datasets, hardware implementation, and algorithm acceleration for autonomous driving
  • Neural architecture search with reinforcement learning

Symposium Chairs

Dongbin Zhao

Chinese Academy of Sciences, China.

Email:

Hao Xu

University of Nevada, Reno, USA.

Email:

Jagannathan Sarangapani

Missouri University of Science and Technology, USA.

Email:

Program Committee

Zhuo Wang [email protected] Beijing University of Aeronautics and Astronautics
Dazi Li [email protected] Beijing University of Chemical Technology
Daoyi Dong [email protected] The University of New South Wales
Ming Feng [email protected] University of Nevada, Reno
Howard Schwartz [email protected] Carleton University
Zejian Zhou [email protected] University of Nevada, Reno
Yanjie Li [email protected] Harbin Institute of Technology Shenzhen
Qichao Zhang [email protected] Chinese Academy of Sciences
Xuesong Wang [email protected] China University of Mining and Technology
Mohammad Jafari [email protected] University of California, Santa Cruz
Yaran Chen [email protected] Chinese Academy of Sciences
Lucian Busoniu [email protected] Technical University of Cluj-Napoca
Minwoo Lee [email protected] UNC Charlotte
Xiong Luo [email protected] University of Science and Technology Beijing
El-Sayed M. El-Alfy [email protected] King Fahd University of Petroleum and Minerals
Bo An [email protected] Nanyang Technological University
Ding Wang [email protected] Institute of Automation, Chinese Academy of Sciences
Zengguang Hou [email protected] State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Yuhu Cheng [email protected] China University of Mining and Technology
Zongzhang Zhang [email protected] Soochow University
Ana Madureira [email protected] Departamento de Engenharia Informática
Robert Babuska [email protected] Delft University of Technology
Marco Wiering [email protected] University of Groningen
Yuanheng Zhu [email protected] Chinese Academy of Sciences, Institute of Automation
Warren Powell [email protected] Princeton University
Zhanshan Wang [email protected] Northeastern University
Wen Yu [email protected] Cinvestav
Wulong Liu [email protected] Huawei Noah's Ark Lab
Yanhong Luo [email protected] Northeastern University
Qing-Shan Jia [email protected] Tsinghua University
Koichi Moriyama [email protected] Nagoya Institute of Technology
Jennie Si [email protected] Arizona State University
Xiangnan Zhong [email protected] University of North Texas
Kun Shao [email protected] Institute of Automation Chinese Academy of Sciences
Dong Li [email protected]
Yongliang Yang [email protected] University of Science and Technology Beijing
Liu Yong [email protected] Zhejiang University
Shengbo Li [email protected] Tsinghua University
Jens Kober [email protected] Delft University of Technology
Zhen Zhang [email protected] Qingdao University
Jiajun Duan [email protected]
Sanket Lokhande [email protected]
Avimanyu Sahoo [email protected]
Weinan Gao [email protected] Georgia Southern University
Zhen Ni [email protected]
Ali Heydari [email protected]
Kang Li [email protected]
Weirong Liu [email protected] Central South University
Athanasios Vasilakos [email protected]
Boris Defourny [email protected] Lehigh University
Chaoxu Mu [email protected]
Jianye Hao [email protected] Massachusetts Institute of Technology
Xiaojun Ban [email protected] Harbin Institute of Technology