2019 IEEE Symposium Series on Computational Intelligence

December 6-9, 2019, Xiamen, China

IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (IEEE ADPRL)

Adaptive dynamic programming (ADP) and reinforcement learning (RL) are two related paradigms for solving decision-making problems in which a performance index must be optimized over time. ADP and RL methods are enjoying growing popularity and success in applications, fueled by their ability to deal with general and complex problems, including features such as uncertainty, stochastic effects, and nonlinearity.
ADP tackles these challenges by developing optimal control methods that adapt to uncertain systems over time. A user-defined cost function is optimized with respect to an adaptive control law, conditioned on prior knowledge of the system and its state, in the presence of uncertainties. A numerical search over the present value of the control minimizes a nonlinear cost function forward in time, providing a basis for real-time, approximate optimal control. The ability to improve performance over time, subject to new or unexplored objectives or dynamics, has made ADP successful in applications in optimal control and estimation, operations research, and computational intelligence.
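
As a toy illustration of this numerical search (not part of the symposium material), the sketch below runs a simple value-iteration scheme for a scalar system with quadratic cost; the dynamics, cost weights, discount factor, and grids are all hypothetical choices made for the example.

```python
import numpy as np

# Minimal ADP-flavored value iteration on a toy scalar system
# x' = a*x + b*u with stage cost q*x^2 + r*u^2. Every constant here
# is a hypothetical choice for illustration only.
a, b, q, r, gamma = 0.9, 0.5, 1.0, 0.1, 0.95

xs = np.linspace(-2.0, 2.0, 81)     # discretized state grid
us = np.linspace(-1.0, 1.0, 41)     # candidate controls to search over
V = np.zeros_like(xs)               # current value-function estimate

def interp_V(x):
    """Evaluate the value estimate between grid points (clamped at the ends)."""
    return np.interp(x, xs, V)

for sweep in range(200):
    V_new = np.empty_like(V)
    for i, x in enumerate(xs):
        # numerical search over the present control: stage cost plus
        # discounted value of the successor state
        costs = q * x**2 + r * us**2 + gamma * interp_V(a * x + b * us)
        V_new[i] = costs.min()
    V = V_new

# an approximately optimal control at a given state, read off greedily
x0 = 1.5
u_star = us[np.argmin(q * x0**2 + r * us**2 + gamma * interp_V(a * x0 + b * us))]
print(f"approximate optimal control at x={x0}: u={u_star:.3f}")
```
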
RL takes the perspective of an agent that optimizes its behavior by interacting with its environment and learning from the feedback received. The long-term performance is optimized by learning a value function that predicts the future intake of rewards over time. A core feature of RL is that it does not require any a priori knowledge about the environment. Therefore, the agent must explore parts of the environment it does not know well, while at the same time exploiting its knowledge to maximize performance. RL thus provides a framework for learning to behave optimally in unknown environments, which has already been applied to robotics, game playing, network management and traffic control.
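
As a toy illustration of this learning loop (again, not part of the symposium text), the sketch below runs tabular Q-learning with epsilon-greedy exploration on a hypothetical five-state chain; the agent is given no model of the environment and learns a value function from reward feedback alone.

```python
import random

# Tabular Q-learning with epsilon-greedy exploration on a hypothetical
# 5-state chain. The agent has no a priori model; it learns a value
# function purely from the rewards it receives.
N_STATES, ACTIONS = 5, (-1, +1)            # actions: step left / step right
alpha, gamma, epsilon = 0.1, 0.9, 0.2      # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Environment dynamics, hidden from the agent: reward 1 at the right end."""
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def greedy_action(s):
    """Exploit current knowledge: highest-valued action, ties broken randomly."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # exploration vs. exploitation trade-off
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy_action(s)
        s2, r = step(s, a)
        # value-function update: the estimate moves toward the reward
        # plus the discounted prediction of future rewards
        target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

print({s: round(max(Q[(s, a)] for a in ACTIONS), 2) for s in range(N_STATES)})
```

A deep RL method would replace the table with a neural network approximator, but the exploration-exploitation loop and the value update remain the same.
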
The goal of the IEEE Symposium on ADPRL is to provide an outlet and a forum for interaction between researchers and practitioners in ADP and RL, in which the clear parallels between the two fields are brought together and exploited. We equally welcome contributions from control theory, computer science, operations research, computational intelligence, neuroscience, as well as other novel perspectives on ADPRL. We invite original papers on methods, analysis, applications, and overviews of ADPRL. We are interested in applications from engineering, artificial intelligence, economics, medicine, and other relevant fields.

Topics

  • Deep learning combined with ADPRL
  • Convergence and performance analysis
  • RL and ADP-based control
  • Function approximation and value function representation
  • Complexity issues in RL and ADP
  • Policy gradient and actor-critic methods
  • Direct policy search
  • Planning and receding-horizon methods
  • Monte-Carlo tree search and other Monte-Carlo methods
  • Adaptive feature discovery
  • Parsimonious function representation
  • Statistical learning and PAC bounds for RL
  • Learning rules and architectures
  • Bandit techniques for exploration
  • Bayesian RL and exploration
  • Finite-sample analysis
  • Partially observable Markov decision processes
  • Neuroscience and biologically inspired control
  • ADP and RL for multiplayer games and multiagent systems
  • Distributed intelligent systems
  • Multi-objective optimization for ADPRL
  • Transfer learning
  • Applications of ADP and RL

Accepted Special Sessions


Title: Knowledge Representation and Transfer for Efficient Reinforcement Learning

Special Session Chairs

Prof. Minwoo Jake Lee, University of North Carolina at Charlotte
Email: minwoo.lee@uncc.edu

Content

To ensure an RL agent's reliable and robust operation in diverse environments, the agent must possess and maintain the requisite knowledge or skills to perform its tasks properly.
Of the three stages of cognitive information processing (knowledge acquisition, retention, and transfer), this special session focuses on the retention (or representation) and transfer of acquired knowledge for efficiency.

Many recent successes in reinforcement learning have come at the cost of long training times and a lack of generalized solutions, even after extensive training. Transfer learning has been explored in various ways to reuse previously learned knowledge in a new, similar task. Modeling or developing effective, generalizable representations has also emerged as a key to improving knowledge transfer.
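
As a minimal sketch of the idea (illustrative only, not a method endorsed by the session), the snippet below reuses a source task's learned Q-values to initialize a new, similar task, the simplest tabular form of "jumpstart" transfer; the tasks, states, and values are hypothetical.

```python
def transfer_q(source_Q, target_states, target_actions, default=0.0):
    """Initialize a target task's Q-table from source-task knowledge.

    State-action pairs shared with the source task are copied (a
    'jumpstart'); pairs unseen in the source fall back to a default.
    """
    return {
        (s, a): source_Q.get((s, a), default)
        for s in target_states
        for a in target_actions
    }

# source task: values learned on a small, related problem (hypothetical)
source_Q = {(0, "right"): 0.8, (1, "right"): 0.9, (1, "left"): 0.1}

# target task: overlapping but larger state space
target_Q = transfer_q(source_Q, target_states=range(4),
                      target_actions=("left", "right"))
print(target_Q[(1, "right")])   # reused knowledge: 0.9
print(target_Q[(3, "left")])    # novel state: default 0.0
```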

This session is expected to promote discussion of diverse approaches to accelerating and generalizing reinforcement learning through knowledge retention and transfer. Our goal is to attract researchers who are working on different aspects of improving transfer learning. We intend to make this an exciting event not only for discussing diverse transfer and retention approaches but also for exchanging future research directions and challenges. Discussion of the topics below will help researchers scale up reinforcement learning methods to solve complex and, eventually, real-world problems.

Scope and Topics

Submissions are expected on, but not limited to, the following topics:

  • Transfer learning for (deep) reinforcement learning
  • Transfer in heterogeneous environments
  • Online transfer learning in (deep) reinforcement learning
  • Knowledge sharing/transfer learning for multi-agent reinforcement learning
  • Simulation-to-real transfer
  • Skill/option learning and transfer
  • Meta-learning
  • Imitation learning, learning from human demonstration, and inverse reinforcement learning
  • Lifelong reinforcement learning
  • Knowledge representation
  • Representation learning
  • Experience replay

Title: Online Data Learning Designs for Networked Multi-Agent Systems under Constraints

Special Session Chairs

Hao Xu, University of Nevada, Reno, NV, USA
Email: haox@unr.edu

Avimanyu Sahoo, Oklahoma State University, OK, USA
Email: avimanyu.sahoo@okstate.edu

Content

In recent years, online data learning approaches and frameworks, including reinforcement learning, adaptive dynamic programming, and biologically inspired reinforcement learning, have been developed to address intelligence and resilience issues in networked multi-agent systems, especially under constraints such as limited computation and communication resources. Many emerging smart, network-connected multi-agent systems, such as smart transportation systems, smart grids, smart communities, networked control systems, and cyber-physical systems, use online data learning approaches to find real-time optimal controls. These approaches can solve optimal and reliable design problems for networked multi-agent systems in a distributed and timely manner, while relaxing the impractical requirement of exact knowledge of the system dynamics. This special session will enhance discussion among different communities to explore more challenging cross-disciplinary topics along this direction.

Scope and Topics

This special session will provide a forum to deliver and discuss original research results and new techniques in online data learning for networked multi-agent systems under constraints. We are particularly interested in the following topics:

  • Online data learning design for networked multi-agent systems
  • Online data learning based optimal control with constraints
  • Online data learning based robust adaptive control with constraints
  • Online data learning based event-triggered/self-triggered control
  • Online data learning based network and control co-design
  • Biologically inspired online data learning for multiplayer games
  • Novel online data learning algorithms, stability analysis and convergence
  • New data/self-learning for smart transportation systems

Title: Deep Reinforcement Learning and Adaptive Dynamic Programming for Autonomous Driving

Special Session Chairs

Dr. Qichao Zhang, Institute of Automation, Chinese Academy of Sciences, China
Email: zhangqichao2014@ia.ac.cn

Dr. Yaran Chen, Institute of Automation, Chinese Academy of Sciences, China
Email: chenyaran2013@ia.ac.cn

Content

Recently, autonomous driving has received considerable attention from many companies and research institutions. Autonomous cars are expected to play a key role in future urban transportation systems, as they can increase driving safety, ease traffic congestion, reduce energy consumption, free the driver, and so on. However, the intelligence of self-driving cars is not yet sufficient for complex urban scenarios. Deep reinforcement learning (DRL) and adaptive dynamic programming (ADP), which enable agents to learn how to act through interaction with the environment, are powerful AI paradigms. Recently, approaches based on DRL and ADP have been used in place of traditional methods to address navigation, perception, prediction, planning, and control tasks for autonomous driving in complex or rare scenarios. In addition, neural architecture search with reinforcement learning can provide optimal models for perception and prediction tasks. This special session aims to discuss recent developments and open problems of ADP and DRL in autonomous driving.

Scope and Topics

The aim of this special session is to provide an account of the state of the art in the fast-moving, cross-disciplinary field of adaptive dynamic programming and reinforcement learning for autonomous driving. It is expected to bring together researchers in relevant areas to discuss the latest progress and propose new problems for future research. All original papers related to ADPRL and autonomous driving are welcome.

The topics of the special session include, but are not limited to:

  • Deep reinforcement learning algorithms
  • Inverse reinforcement learning algorithms
  • Multi-agent reinforcement learning algorithms
  • Adaptive dynamic programming algorithms
  • Navigation/Perception/Planning/Control schemes for autonomous driving
  • Reinforcement learning for autonomous driving
  • Deep reinforcement learning for autonomous driving
  • Transfer learning for autonomous driving
  • Dataset, Hardware implementation and algorithms acceleration for autonomous driving
  • Neural architecture search with reinforcement learning

Symposium Chairs

Dongbin Zhao

Chinese Academy of Sciences, China.

Email: dongbin.zhao@ia.ac.cn


Hao Xu

University of Nevada, Reno, USA.

Email: haoxu@unr.edu


Jagannathan Sarangapani

Missouri University of Science and Technology, USA.

Email: sarangap@mst.edu


Program Committee

Zhuo Wang zwang8381@foxmail.com Beijing University of Aeronautics and Astronautics
Dazi Li lidz@mail.buct.edu.cn Beijing University of Chemical Technology
Daoyi Dong daoyidong@gmail.com The University of New South Wales
Ming Feng mingf@nevada.unr.edu University of Nevada, Reno
Howard Schwartz schwartz@sce.carleton.ca Carleton University
Zejian Zhou zejianz@nevada.unr.edu University of Nevada, Reno
Yanjie Li autolyj@hit.edu.cn Harbin Institute of Technology Shenzhen
Qichao Zhang zhangqichao2014@ia.ac.cn Chinese Academy of Sciences
Xuesong Wang wangxuesongcumt@163.com China University of Mining and Technology
Mohammad Jafari mo.jafari@nevada.unr.edu University of California, Santa Cruz
Yaran Chen chenyaran2013@ia.ac.cn Chinese Academy of Sciences
Lucian Busoniu lucian@busoniu.net Technical University of Cluj-Napoca
Minwoo Lee minwoo.lee@uncc.edu UNC Charlotte
Xiong Luo xluo@ustb.edu.cn University of Science and Technology Beijing
El-Sayed M. El-Alfy alfy@kfupm.edu.sa King Fahd University of Petroleum and Minerals
Bo An boancqu@gmail.com Nanyang Technological University
Ding Wang ding.wang@ia.ac.cn Institute of Automation, Chinese Academy of Sciences
Zengguang Hou zengguang.hou@ia.ac.cn State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Yuhu Cheng chengyuhu@163.com China University of Mining and Technology
Zongzhang Zhang zzzhang@suda.edu.cn Soochow University
Ana Madureira amd@isep.ipp.pt Department of Informatics Engineering, ISEP
Robert Babuska r.babuska@tudelft.nl Delft University of Technology
Marco Wiering m.a.wiering@rug.nl University of Groningen
Yuanheng Zhu yuanheng.zhu@ia.ac.cn Chinese Academy of Sciences, Institute of Automation
Warren Powell powell@princeton.edu Princeton University
Zhanshan Wang wangzhanshan@ise.neu.edu.cn Northeastern University
Wen Yu yuw@ctrl.cinvestav.mx Cinvestav
Wulong Liu liuwulong@huawei.com Huawei Noah's Ark Lab
Yanhong Luo luoyanhong@ise.neu.edu.cn Northeastern University
Qing-Shan Jia jiaqs@tsinghua.edu.cn Tsinghua University
Koichi Moriyama moriyama.koichi@nitech.ac.jp Nagoya Institute of Technology
Jennie Si si@asu.edu Arizona State University
Xiangnan Zhong Xiangnan.Zhong@unt.edu University of North Texas
Kun Shao shaokun2014@ia.ac.cn Institute of Automation Chinese Academy of Sciences
Dong Li lidong2014@ia.ac.cn Institute of Automation, Chinese Academy of Sciences
Yongliang Yang y.yang.2016@ieee.org University of Science and Technology Beijing
Liu Yong cckaffe@hotmail.com Zhejiang University
Shengbo Li lishbo@tsinghua.edu.cn Tsinghua University
Jens Kober j.kober@tudelft.nl Delft University of Technology
Zhen Zhang tbsunshine8@163.com Qingdao University
Jiajun Duan jiajunduan.ee@gmail.com
Sanket Lokhande slokhande@nevada.unr.edu University of Nevada, Reno
Avimanyu Sahoo avimanyu.sahoo@okstate.edu Oklahoma State University
Weinan Gao wgao@georgiasouthern.edu Georgia Southern University
Zhen Ni zhen.ni@sdstate.edu South Dakota State University
Ali Heydari aheydari@lyle.smu.edu Southern Methodist University
Kang Li k.li@qub.ac.uk Queen's University Belfast
Weirong Liu weirong_liu@126.com Central South University
Athanasios Vasilakos vasilako@ath.forthnet.gr
Boris Defourny defourny@lehigh.edu Lehigh University
Chaoxu Mu cxmu@tju.edu.cn Tianjin University
Jianye Hao haojianye@gmail.com Massachusetts Institute of Technology
Xiaojun Ban banxiaojun@hit.edu.cn Harbin Institute of Technology