IEEE Huntsville JRACS Seminar on "An [almost] invisible controller for the unexpected unexpected!" by Dr. Farbod Fahimi
Talk Abstract: Model-Free Online Reinforcement Learning (MFORL) controllers learn how to control a system solely by interacting with it in real time, the same way humans learn to operate machines. MFORL controllers have great potential for control design for complex nonlinear systems, where model-based methods fall short because formulating a mathematical model of the system is impractical. With MFORL control available, unmodeled complex systems can be automated. Even when a model could be formulated, deriving an MFORL controller is far more economical for two major reasons. First, the lengthy process of system modeling, identification, and verification is eliminated. Second, once an MFORL controller is found for a dynamic system, it can easily be reused for any system whose governing differential equation resembles that of the original system. In addition, MFORL controllers can rapidly relearn a completely new control law if the system dynamics suddenly changes due to an "unexpected" component breakdown. Thus, MFORL controllers can successfully deal with the unexpected unexpected (situations that cannot be foreseen at the design stage), whereas robust/adaptive controllers can, at best, deal only with the expected unexpected (known ranges of change in system parameters). In this talk, the theory behind MFORL controllers is discussed, and some sample benchmark applications are presented.