
# Regulation problem

## Finite horizon MPC

To create an MPC controller in MPT3, use the MPCController constructor:

mpc = MPCController(sys)

where sys represents the prediction model, created by the LTISystem, PWASystem, or MLDSystem constructors.

As an example, let us consider an LTI prediction model, represented by the state-update equation {$x^+ = A x + B u$} with {$A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$} and {$B = \begin{bmatrix} 1 \\ 0.5 \end{bmatrix}$}

model = LTISystem('A', [1 1; 0 1], 'B', [1; 0.5]);

Next we specify more properties of the model:

• Add state constraints {$\begin{bmatrix} -5 \\ -5 \end{bmatrix} \le x \le \begin{bmatrix} 5 \\ 5 \end{bmatrix}$}
model.x.min = [-5; -5];
model.x.max = [5; 5];
• Add input constraints {$-1 \le u \le 1$}:
model.u.min = -1;
model.u.max = 1;
• Add a quadratic state penalty, i.e., penalize {$x_k^T Q x_k$} with {$Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$}:
model.x.penalty = QuadFunction([1 0; 0 1]);
• Add a quadratic input penalty, i.e., penalize {$u_k^T R u_k$} with {$R=1$}:
model.u.penalty = QuadFunction(1);

A number of other constraints and penalties can be added using the concept of filters.
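For instance, a penalty on input increments {$u_k - u_{k-1}$} can be enabled via a filter. The following is only a sketch, not part of the running example, and assumes the deltaPenalty filter with an illustrative weight:

```matlab
% Sketch: penalize input increments (assumes the 'deltaPenalty' filter)
model.u.with('deltaPenalty');
model.u.deltaPenalty = QuadFunction(0.1);  % weight on (u_k - u_{k-1})^2 is illustrative
```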

### Online solution

Finally, we can synthesize an MPC controller with a prediction horizon of, say, {$N=5$}:

N = 5;
mpc = MPCController(model, N)

Now the mpc object is ready to be used as an MPC controller. For instance, we can ask it for the optimal control input for a given value of the initial state:

x0 = [4; 0];
u = mpc.evaluate(x0)

u =

-1

You can also retrieve the full open-loop predictions:

[u, feasible, openloop] = mpc.evaluate(x0)

openloop =

cost: 32.4898
U: [-1 -1 0.1393 0.3361 -5.2042e-16]
X: [2x6 double]
Y: [0x5 double]
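Beyond one-step evaluation, the controller can also be simulated in closed loop with the prediction model. A minimal sketch using MPT3's ClosedLoop helper (the same helper is used in the regulation example below); the initial state and simulation length are illustrative:

```matlab
% Closed-loop simulation of the regulator from x0 = [4; 0]
loop = ClosedLoop(mpc, model);
data = loop.simulate([4; 0], 20);  % simulate for 20 steps
plot(data.X');                     % states should converge towards the origin
```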

### Explicit solution

Once you are satisfied with the controller's performance, you can convert it to an explicit form by calling the toExplicit() method:

expmpc = mpc.toExplicit();

This will generate an instance of the EMPCController class, which represents all explicit MPC controllers in MPT3. Explicit controllers can be used for evaluation/simulation in exactly the same way, i.e.,

[u, feasible, openloop] = expmpc.evaluate(x0)

openloop =

cost: 32.4898
U: [-1 -1 0.1393 0.3361 -5.2042e-16]
X: [2x6 double]
Y: [0x5 double]
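Since the state here is two-dimensional, the explicit solution can also be inspected graphically. A sketch, assuming the standard MPT3 plotting methods:

```matlab
expmpc.partition.plot();   % polyhedral partition of the explicit controller
figure;
expmpc.feedback.fplot();   % PWA feedback law u = f(x) over the partition
```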

## Finite horizon MPC with terminal penalty and terminal set constraints

Consider a two-dimensional oscillator model.

A = [ 0.5403, -0.8415; 0.8415,  0.5403];
B = [ -0.4597; 0.8415];
C = [1 0];
D = 0;

Linear discrete-time model with sample time 1

sys = ss(A,B,C,D,1);
model = LTISystem(sys);

Set constraints on output

model.y.min = -10;
model.y.max = 10;

Set constraints on input

model.u.min = -1;
model.u.max = 1;

Include weights on states/inputs
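The weights are not given explicitly here; as an illustration, unit quadratic penalties could be used. The values below are assumptions — note that the LQR computations in the next steps rely on these weights being set:

```matlab
model.x.penalty = QuadFunction(eye(2));  % state weight Q (illustrative)
model.u.penalty = QuadFunction(1);       % input weight R (illustrative)
```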

Compute terminal set

Tset = model.LQRSet;

Compute terminal weight

PN = model.LQRPenalty;

Add terminal set and terminal penalty

model.x.with('terminalSet');
model.x.terminalSet = Tset;
model.x.with('terminalPenalty');
model.x.terminalPenalty = PN;

Formulate finite horizon MPC problem

ctrl = MPCController(model,5);
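To check the design, the resulting controller can be simulated in closed loop. A sketch with an assumed initial state and simulation length:

```matlab
loop = ClosedLoop(ctrl, model);
data = loop.simulate([3; 0], 20);  % x0 = [3; 0] is an assumption
plot(data.X');                     % states should converge to the origin
```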

## Regulation to a nonzero steady state

The next example demonstrates how to achieve regulation to a nonzero steady state. Consider a model of an aircraft discretized with 0.2s sampling time

A = [0.99 0.01 0.18 -0.09 0; 0 0.94 0 0.29 0; 0 0.14 0.81 -0.9 0; 0 -0.2 0 0.95 0; 0 0.09 0 0 0.9];
B = [0.01  -0.02; -0.14 0; 0.05 -0.2; 0.02 0; -0.01 0];
C = [0 1 0 0 -1; 0 0 1 0 0; 0 0 0 1 0; 1 0 0 0 0];
model = LTISystem('A', A, 'B', B, 'C', C, 'Ts', 0.2);

Given the dynamics of the model, one can compute the steady state corresponding to a particular value of the input: at steady state {$x_s = A x_s + B u_s$}, hence {$x_s = (I - A)^{-1} B u_s$} and {$y_s = C x_s$}. For example,

us = [0.8; -0.3];
ys = C*( (eye(5)-A)\B*us );

The computed values will be used as references for the desired steady state, which can be added using the "reference" filter:

model.u.with('reference');
model.u.reference = us;
model.y.with('reference');
model.y.reference = ys;

To formulate the MPC optimization problem, one must provide constraints and penalties on the system signals. In this example, only input constraints are considered:

model.u.min = [-5; -6];
model.u.max = [5; 6];

The penalties on outputs and inputs are specified as quadratic functions:

model.u.penalty = QuadFunction( diag([3, 2]) );
model.y.penalty = QuadFunction( diag([10, 10, 10, 10]) );

which yields a quadratic optimization problem to be solved by the MPC controller. The online MPC controller object is constructed with a horizon of 6

ctrl = MPCController(model, 6)

and simulated from zero initial conditions over 30 samples:

loop = ClosedLoop(ctrl, model);
x0 = [0; 0; 0; 0; 0];
Nsim = 30;
data = loop.simulate(x0, Nsim);

The simulation results can be plotted to verify that convergence to the new steady state has been achieved:

subplot(2,1,1)
plot(1:Nsim, data.Y);
hold on;
plot(1:Nsim, ys*ones(1, Nsim), 'k--')
title('outputs')
subplot(2,1,2)
plot(1:Nsim, data.U);
hold on;
plot(1:Nsim, us*ones(1, Nsim), 'k--')
title('inputs')