The development and application of theory and methods for solving optimal control and planning problems; the development of numerical optimization methods; and the closed-loop implementation of optimal controllers on real-time computer systems and networked architectures. Particular methods include, but are not limited to, the calculus of variations, Pontryagin's maximum principle, dynamic programming, model predictive control, reinforcement learning, optimization-based estimation, and differential games. Control methodologies can be based on first-principles models, data-driven models, or a combination of both.
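As a toy illustration of one of the methods named above, dynamic programming solves a finite-horizon, discrete-time linear-quadratic optimal control problem via a backward Riccati recursion. The sketch below is illustrative only: the double-integrator system matrices, horizon, and cost weights are assumptions chosen for the example, not part of this scope.

```python
import numpy as np

# Assumed toy problem: discretized double integrator x_{k+1} = A x_k + B u_k,
# stage cost x'Qx + u'Ru, terminal cost x'Qf x.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
Qf = 10 * np.eye(2)
N = 50  # horizon length

# Backward Riccati recursion: dynamic programming for the LQR problem.
P = Qf
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()  # gains[k] is the optimal feedback gain at stage k

# Closed-loop simulation from an initial state.
x = np.array([[1.0], [0.0]])
for k in range(N):
    u = -gains[k] @ x       # time-varying state feedback u_k = -K_k x_k
    x = A @ x + B @ u

print(np.linalg.norm(x))    # the state is regulated toward the origin
```

Receding-horizon application of such a finite-horizon solution, re-solved at each sampling instant from the measured state, is the basic idea behind model predictive control.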