Model Predictive Control

Hacking the fortune-telling

Marko Švec
6 min read · Nov 30, 2020
MPC uses a model of a system to tell its fortune

A bit of history

Model Predictive Control (MPC) is an advanced method of process control that takes into account the dynamical model of a system, as well as its constraints. It is the only advanced control strategy that has had a significant and far-reaching impact on the industry. The idea was born in the 1960s, but the real success in the process industry and theoretical development began in the 1980s.

Since it requires computationally expensive real-time optimization, it was in the past applied mainly in the chemical process industry, where the system dynamics are relatively slow [1]. However, with the rapid increase in the computational power of modern hardware and the development of new control approaches, MPC has also found applications on milli-, micro- and nanosecond time scales in the automotive industry, in energy systems and in computer control [2].

MPC against the world

Classical control methods, such as the PID controller or the lead-lag compensator, focus predominantly on disturbance rejection, noise insensitivity and handling of model uncertainty in the frequency domain. However, a controller often also has to satisfy constraints. These can be physical constraints (e.g. actuator limits), performance constraints (e.g. overshoot) or safety constraints (e.g. temperature/pressure limits). With the classical approach the controller does not know that the constraints exist, so the safest way to avoid violating them is to place the set point sufficiently far away from them. This often leads to suboptimal plant operation.

Classical controller response (left) vs. MPC response (right)

On the other hand, the main issue MPC addresses is precisely the satisfaction of control and process constraints (in the time domain). A predictive controller has the information about the constraints built into its design, so it can hold the plant at a set point close to the limits and thus provide optimal plant operation.

How does it work?

In MPC, the control action is obtained by solving a constrained finite-horizon optimal control problem for the current state of the plant at each sampling time. The optimal control input sequence is computed over a predicted evolution of the system model on a finite horizon (i.e. for a finite number of samples), but only the first element of the sequence is applied. The state of the system is then re-measured or re-estimated at the next sampling time and the whole procedure is repeated. This strategy, called Receding Horizon Control (RHC), introduces feedback into the system, making the loop closed and allowing compensation of modelling errors and disturbances [3].
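To make the solve–apply–re-measure loop concrete, here is a minimal Python sketch of receding-horizon control. It is my own toy illustration, not the author's example: the system is a hypothetical double integrator, and the finite-horizon problem is "solved" by brute-force enumeration over a coarse input grid instead of a proper QP solver, purely to keep the loop explicit.

```python
import itertools
import numpy as np

# Hypothetical double-integrator model (illustration only):
# state = [position, velocity], scalar input.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
N = 4                          # prediction horizon
U_LEVELS = (-1.0, 0.0, 1.0)    # coarse input grid stands in for a real solver

def horizon_cost(x0, seq):
    # Quadratic stage cost x'x + 0.1*u^2, summed over the predicted horizon.
    x, cost = x0, 0.0
    for u in seq:
        x = A @ x + B @ np.array([u])
        cost += x @ x + 0.1 * u * u
    return cost

def rhc_step(x):
    # Solve the finite-horizon problem (here: by enumerating all 3^N input
    # sequences), then return only the FIRST input: the receding-horizon idea.
    best = min(itertools.product(U_LEVELS, repeat=N),
               key=lambda seq: horizon_cost(x, seq))
    return best[0]

x = np.array([3.0, 0.0])
for _ in range(30):
    u = rhc_step(x)            # re-solve at every sample: this is the feedback
    x = A @ x + B @ np.array([u])
```

Enumerating 3⁴ sequences is obviously not how real MPC solvers work; it only makes the structure of the loop visible. For this toy setup the closed loop settles near the origin within a few steps.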

Mathematically speaking, it solves the optimization problem

    minimize    V_f(x(N)) + Σ_{k=0}^{N-1} l(x(k), u(k))
    subject to  x(k+1) = f(x(k), u(k)),   k = 0, …, N-1,
                x(k) ∈ X,  u(k) ∈ U,
                x(N) ∈ X_f,
                x(0) = current state,

where l is the stage cost, V_f the terminal cost and X_f the terminal set. After the problem is solved, the first input from the optimal input sequence is applied.

Wait, what?

In case the words in the last section did not make much sense, let us examine this more closely:

Cost function
One of the most common choices for the cost function is a quadratic norm

    J = x(N)' P x(N) + Σ_{k=0}^{N-1} ( x(k)' Q x(k) + u(k)' R u(k) )

where P, Q and R are weight matrices of corresponding dimensions. MPC with a quadratic cost can be translated into a quadratic program (QP) [3].
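To see the translation into a QP concretely, here is a hedged NumPy sketch of the standard "condensed" formulation (my own illustration, reusing the same A, B, Q, R and N as the Matlab example further below). Substituting the dynamics into the cost expresses the stacked predicted states as X = Sx·x0 + Su·U, which turns the cost into a quadratic function of the stacked input vector U alone:

```python
import numpy as np

# Same data as the Matlab example in this article.
A = np.array([[2.0, -1.0],
              [1.0,  0.2]])
B = np.array([[1.0],
              [0.0]])
Q, R, N = np.eye(2), np.array([[2.0]]), 5
nx, nu = 2, 1

# Prediction matrices: stacked states X = Sx @ x0 + Su @ U,
# where X = [x(1); ...; x(N)] and U = [u(0); ...; u(N-1)].
Sx = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Su = np.zeros((N * nx, N * nu))
for k in range(N):
    for j in range(k + 1):
        Su[k*nx:(k+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, k - j) @ B

# Substituting X into the quadratic cost gives the QP objective
#   J(U) = U' H U + 2 (F x0)' U + const.
Qbar = np.kron(np.eye(N), Q)   # block-diagonal state weights
Rbar = np.kron(np.eye(N), R)   # block-diagonal input weights
H = Su.T @ Qbar @ Su + Rbar
F = Su.T @ Qbar @ Sx

x0 = np.array([3.0, 1.0])
# Minimizer when no constraint is active (plain NumPy has no QP solver):
U_unconstrained = np.linalg.solve(H, -F @ x0)
```

With the input and state constraints added as linear inequalities on U, this becomes exactly the QP that a solver works on; here only the unconstrained minimizer is computed for illustration.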

System dynamics
A discrete-time linear system is described by

    x(k+1) = A x(k) + B u(k)

and is subject to constraints

    x(k) ∈ X,  u(k) ∈ U   (e.g. box constraints x_min ≤ x(k) ≤ x_max, u_min ≤ u(k) ≤ u_max),

where x(k) is the state and u(k) is the control input. If we want our MPC to be translated into a quadratic program, the constraints must be linear (sorry, but that is what a QP is).
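As a tiny sanity check (my own snippet, reusing the A and B from the Matlab example further below), one step of these dynamics already shows why constraints matter for this system: its A matrix is open-loop unstable (complex eigenvalues with modulus √1.4 ≈ 1.18), so a careless input quickly pushes the state out of bounds.

```python
import numpy as np

# System matrices from the article's Matlab example.
A = np.array([[2.0, -1.0],
              [1.0,  0.2]])
B = np.array([[1.0],
              [0.0]])

x = np.array([3.0, 1.0])    # initial state used in the simulation below
u = np.array([1.0])         # input sitting at its upper bound

x_next = A @ x + B @ u      # one step of x(k+1) = A x(k) + B u(k)
# x_next = [6.0, 3.2]: the first state already exceeds the bound -5 <= x <= 5,
# so a constraint-aware controller must anticipate this in its predictions.
```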

Terminal cost and terminal constraint

Both the cost function and the constraints normally differ for the state variable at the end of the prediction horizon, x(N). This has a strong theoretical background and ensures both optimality and feasibility (the ability to solve the problem at any time instant), but it is a bit too much for the moment. You can find more about this in [2] or [3]. In the following example we assume that these differences do not exist.

Let’s code!

The following example is coded in Matlab and uses YALMIP for parsing [4]. I also borrowed the example from the official YALMIP website.

YALMIP was created by Johan Löfberg in 2004 and is making life easier for engineers, students and optimization enthusiasts ever since. It has definitely made my life easier many times and I would like to thank Johan for that.

Numerical data for our control problem:

yalmip('clear')
clear all

% Model data
A = [2 -1;1 0.2];
B = [1;0];
nx = 2; % Number of states
nu = 1; % Number of inputs

% MPC data
Q = eye(2);
R = 2;
N = 5;

After we have defined the system, the MPC formulation looks like this:

u = sdpvar(repmat(nu,1,N),repmat(1,1,N)); % YALMIP syntax
x0 = sdpvar(2,1); % YALMIP syntax

constraints = [];
objective = 0;
x = x0;
for k = 1:N
    x = A*x + B*u{k};
    objective = objective + x'*Q*x + u{k}'*R*u{k};
    constraints = [constraints, -1 <= u{k} <= 1, -5 <= x <= 5];
end

Finally, for some initial state xInit, the optimization is started like this:

optimize([constraints, x0 == xInit], objective);

Simulation

The simulation and result display is performed with the following code:

x = [3;1]; % initial state
nSim = 50;
input = zeros(nSim,nu);
states = zeros(nSim+1,nx);
states(1,:) = x';
for k = 1:nSim
    optimize([constraints, x0 == x], objective);
    U = value(u{1}); % apply the first input from the optimal sequence
    x = A*x + B*U;
    input(k) = U;
    states(k+1,:) = x';
end
% Plot states and inputs
figure;
plot([0:nSim], states(:,1)); hold on;
plot([0:nSim], states(:,2)); grid; xlabel('sample'); ylabel('states');
legend('x_1','x_2');
figure;
stairs([0:nSim-1], input); grid; xlabel('sample'); ylabel('input');
Simulated evolution of the states
The optimal input signal

The code given in this example is far from optimized in terms of computational speed and complexity; it serves as introductory material. More efficient formulations, e.g. using YALMIP's optimizer object to precompile the parametrized problem, can of course be found among the YALMIP examples.

Wrap-Up

What to take home?

  1. MPC is a powerful combination of control engineering and optimization.
  2. It enables us to deal with constraints, minimize a desired cost function and predict the future behavior of a system.
  3. It is a time domain based control approach and is, therefore, quite intuitive.

It can be used for different control and even some non-control purposes (if we think of it as a way to formulate optimization problems), but that is a topic for some other article.

[1] Maciejowski, J. Predictive Control with Constraints. Harlow: Pearson Education Limited, Prentice Hall, 2002.
[2] Jones, C., Borrelli, F., Morari, M. Model Predictive Control Part I — Introduction, https://engineering.utsa.edu/ataha/wp-content/uploads/sites/38/2017/10/MPC_Intro.pdf
[3] Zeilinger, M. N. Real-time Model Predictive Control. Doctoral thesis. ETH Zürich, 2011.
[4] Löfberg, J. YALMIP: A toolbox for modeling and optimization in Matlab, 2004 IEEE international conference on robotics and automation. IEEE, 2004, pp. 284–289.
