Chapter 1 Introduction
1.1 A brief history of optimal control theory
1.2 Basic concepts of control theory
1.3 Optimal control problem
1.4 Some background on finite-dimensional optimization
Chapter 2 Calculus of Variations in Optimal Control
2.1 Fundamentals of the calculus of variations
2.2 The Euler-Lagrange equation
2.3 Solving the optimal control problem with the calculus of variations
2.4 Summary of Chapter 2
2.5 Exercises for Chapter 2
Chapter 3 The Maximum Principle
3.1 Limitations of the classical variational method in optimal control
3.2 The maximum principle for continuous systems
3.3 Two applications of the maximum principle
3.4 Summary of Chapter 3
3.5 Exercises for Chapter 3
Chapter 4 Dynamic Programming
4.1 The basic principle of dynamic programming
4.2 Dynamic programming for discrete systems
4.3 Dynamic programming for continuous systems
4.4 The relationship between dynamic programming, the variational method, and the maximum principle
4.5 Summary of Chapter 4
4.6 Exercises for Chapter 4
Chapter 5 Linear Quadratic Optimal Control Problem
5.1 Linear quadratic problem
5.2 State regulator
5.3 Output regulator
5.4 Output tracker
5.5 Linear quadratic optimal control of discrete systems
5.6 Summary of Chapter 5
5.7 Exercises for Chapter 5
References