Title: The State of the Art of Neurodynamic Optimization – Past, Present and Future
Speaker: Jun Wang (王钧) (IEEE Fellow, IAPR Fellow)
Time: 10:30, December 31, 2013
Venue: Conference Room 311 (middle section), South Building No. 1
Host: National Key Laboratory of Multispectral Information Processing Technology
Abstract:
Optimization is omnipresent in nature and society, and an important tool for problem-solving in science, engineering, and commerce. Optimization problems arise in a wide variety of applications such as the design, planning, control, operation, and management of engineering systems. In many applications (e.g., online pattern recognition and in-chip signal processing in mobile devices), real-time optimization is necessary or desirable.
For such applications, conventional optimization techniques may not be adequate due to stringent requirements on computation time. It is computationally challenging when optimization procedures have to be performed in real time to optimize the performance of dynamical systems.
The brain is a profoundly dynamic system whose neurons remain active from birth to death. When a decision is to be made, many of its neurons are highly activated to gather information, search memory, compare differences, and make inferences and decisions. Recurrent neural networks are brain-like nonlinear dynamic system models; they can be designed to imitate their biological counterparts and serve as goal-seeking parallel computational models for solving optimization problems in a variety of settings. Neurodynamic optimization can be realized physically in dedicated hardware such as application-specific integrated circuits (ASICs), where optimization is carried out in a parallel and distributed manner and the convergence rate of the optimization process is independent of the problem dimensionality. Because of this inherently parallel and distributed information processing, neurodynamic optimization can handle large-scale problems. In addition, it can be used to optimize dynamic systems on multiple time scales with a parameter-controlled convergence rate.
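The goal-seeking behavior described above can be illustrated with a minimal gradient-flow sketch: the network state x(t) evolves as dx/dt = -∇f(x), so its equilibria are stationary points of the objective f. This is a toy illustration of the general idea, not a specific model from the talk; the quadratic objective and Euler discretization are assumptions for the example.

```python
import numpy as np

# Toy neurodynamic (gradient-flow) optimizer: the state x(t) follows
# dx/dt = -grad f(x) for f(x) = 0.5 x'Qx - b'x with Q positive definite,
# so the unique equilibrium is the global minimizer x* = Q^{-1} b.
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # positive definite
b = np.array([1.0, 1.0])

def grad(x):
    return Q @ x - b

x = np.zeros(2)                   # initial neuronal state
dt = 0.01                         # Euler step for the continuous dynamics
for _ in range(5000):
    x = x - dt * grad(x)          # discretized dx/dt = -grad f(x)

x_star = np.linalg.solve(Q, b)    # closed-form minimizer for comparison
print(np.allclose(x, x_star, atol=1e-6))  # True
```

In hardware, the same flow would be implemented by analog integrators running in parallel, which is why the convergence rate does not depend on the problem dimensionality.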
These salient features are particularly desirable for dynamic optimization in decentralized decision-making scenarios. While population-based evolutionary approaches to optimization have emerged as prevailing heuristic and stochastic methods in recent years, neurodynamic optimization deserves attention in its own right due to its close ties with optimization and dynamical systems theories, as well as its biological plausibility and circuit implementability with VLSI or optical technologies.
The past three decades witnessed the birth and growth of neurodynamic optimization. Although a few circuit-based optimization methods were developed earlier, it was perhaps Hopfield and Tank who spearheaded neurodynamic optimization research in the context of neural computation with their seminal works in the mid-1980s. Tank and Hopfield extended the continuous-time Hopfield network for linear programming, and Kennedy and Chua developed a neural network for nonlinear programming. It was proven that the state of such neurodynamics is globally convergent to an equilibrium corresponding to an approximate optimal solution of the given optimization problem. Over the years, neurodynamic optimization research has made significant progress, producing numerous models with improved features for solving various optimization problems. Substantial improvements of neurodynamic optimization theory and models have been made along the following dimensions:
(i) Solution quality: Designed on the basis of smooth penalty methods with finite penalty parameters, the earliest neurodynamic optimization models could converge only to approximate solutions. Later models, built on other design principles, guarantee state or output convergence to the exact optima of solvable optimization problems.
(ii) Solvability scope: The solvability scope of our neurodynamic optimization models has been expanded from linear programming problems, to quadratic programming, to smooth convex programming problems with various constraints, to nonsmooth convex optimization problems, and recently to nonsmooth optimization with generalized convex objective functions or constraints.
(iii) Convergence property: The convergence property of our neurodynamic optimization models has been strengthened from near-optimum convergence, to conditional exact-optimum global convergence, to guaranteed global convergence, to faster global exponential convergence, and even to the more desirable finite-time convergence, with increasing convergence rates.
(iv) Model complexity: Neurodynamic optimization models for constrained optimization are essentially multi-layer, owing to the introduction of instrumental variables for constraint handling (e.g., Lagrange multipliers or dual variables). The architectures of our recent models for solving linearly constrained optimization problems have been reduced from multi-layer to single-layer structures, decreasing model complexity and facilitating implementation.
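The single-layer idea in (iv) can be sketched with a projection-type dynamics for a box-constrained quadratic program: the state evolves as dx/dt = -x + P(x - a·∇f(x)), where P projects onto the feasible box, and an equilibrium satisfies the fixed-point optimality condition x* = P(x* - a·∇f(x*)) without any auxiliary multiplier layer. This is a simplified illustration in the spirit of such models; the particular problem data, step size a, and Euler discretization are assumptions for the example.

```python
import numpy as np

# Sketch of a single-layer projection dynamics for the box-constrained QP
#   minimize 0.5 x'Qx - b'x   subject to  lo <= x <= hi.
# Dynamics: dx/dt = -x + P(x - a*(Qx - b)), with P the projection onto
# the box; no Lagrange-multiplier (dual) neurons are needed.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 4.0])
lo, hi = np.zeros(2), np.ones(2)      # box constraints 0 <= x <= 1

def project(x):
    return np.clip(x, lo, hi)         # P: projection onto the box

a, dt = 0.2, 0.05
x = np.full(2, 0.5)                   # initial state inside the box
for _ in range(2000):
    x = x + dt * (-x + project(x - a * (Q @ x - b)))

print(x)  # approaches the constrained minimizer [1, 1]
```

For this data the unconstrained minimizer of f happens to lie at the corner [1, 1] of the box, so the dynamics settle there; with tighter bounds the same fixed-point condition yields the projected (constrained) optimum.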
In this talk, starting with the idea and motivation of neurodynamic optimization, we will review its history and present the state of the art with many models and selected applications. Theoretical results on the state stability, output convergence, and solution optimality of neurodynamic optimization models will be given, along with many illustrative examples and simulation results. Four classes of neurodynamic optimization model design methodologies (i.e., penalty methods, Lagrange methods, duality methods, and optimality methods) will be delineated, with discussions of their characteristics. In addition, it will be shown that many real-time computational optimization problems in information processing, system control, and robotics (e.g., parallel data selection and sorting, robust pole assignment in linear feedback control systems, robust model predictive control for nonlinear systems, collision-free motion planning and control of kinematically redundant robot manipulators with or without torque optimization, and grasping force optimization of multi-fingered robotic hands) can be solved by means of neurodynamic optimization. Finally, prospective future research directions will be discussed.
 
Speaker Biography:
Jun Wang is a Professor and the Director of the Computational Intelligence Laboratory in the Department of Mechanical and Automation Engineering at the Chinese University of Hong Kong. Prior to this position, he held various academic positions at Dalian University of Technology, Case Western Reserve University, and the University of North Dakota. He also held short-term visiting positions at the USAF Armstrong Laboratory (1995), the RIKEN Brain Science Institute (2001), Universite Catholique de Louvain (2001), the Chinese Academy of Sciences (2002), Huazhong University of Science and Technology (2006–2007), and Shanghai Jiao Tong University (2008–2011, as a Changjiang Chair Professor). Since 2011, he has been a National Thousand-Talent Chair Professor at Dalian University of Technology on a part-time basis. He received a B.S. degree in electrical engineering and an M.S. degree in systems engineering from Dalian University of Technology, Dalian, China, and a Ph.D. degree in systems engineering from Case Western Reserve University, Cleveland, Ohio, USA. His current research interests include neural networks and their applications. He has published 160 journal papers, 13 book chapters, 8 edited books, and numerous conference papers in these areas. He has been an Associate Editor of the IEEE Transactions on Cybernetics (and its predecessor) since 2003 and a member of the editorial board of Neural Networks since 2012. He also served as an Associate Editor of the IEEE Transactions on Neural Networks (1999–2009) and the IEEE Transactions on Systems, Man, and Cybernetics – Part C (2002–2005), as a member of the editorial advisory board of the International Journal of Neural Systems, and as a guest editor of special issues of the European Journal of Operational Research (1996), the International Journal of Neural Systems (2007), Neurocomputing (2008), and the International Journal of Fuzzy Systems (2010, 2011).
He has organized several international conferences, serving as General Chair of the 13th International Conference on Neural Information Processing (2006) and of the 2008 IEEE World Congress on Computational Intelligence. He was an IEEE Computational Intelligence Society Distinguished Lecturer (2010–2012). In addition, he served as President of the Asia Pacific Neural Network Assembly (APNNA) in 2006 and on many committees and boards, including the IEEE Fellow Committee (2011–2012), the IEEE Computational Intelligence Society Awards Committee (2008, 2012), and the IEEE Systems, Man, and Cybernetics Society Board of Directors (2013–2015). He is an IEEE Fellow and an IAPR Fellow, and a recipient of an IEEE Transactions on Neural Networks Outstanding Paper Award and the APNNA Outstanding Achievement Award in 2011, as well as Natural Science Awards from the Shanghai Municipal Government (2009) and the Ministry of Education of China (2012), among others.