# Parametric optimization

Say you want to find the optimal value of {$x$} as a function of the parameter {$\theta$} which solves the following parametric linear problem: {$$\min \ (c + D \theta)^T x \\ \text{s.t.} \ A x \le b,\\ \ x \ge 0,\\ \ 0 \le \theta \le u,$$} where {$A$}, {$b$}, {$c$}, {$D$}, {$u$} are fixed problem data, e.g.:

% problem data
A = [eye(2); -eye(2)];
b = [1; 1; -0.5; -0.5];
u = 2;
c = [1; 1];
D = [1; 1];

We first formulate the problem in YALMIP:

% optimization variables
nx = size(A, 2);
x = sdpvar(nx, 1);

% parameter
theta = sdpvar(1, 1);

% objective function
J = (c + D*theta)'*x;

% constraints
C = [ A*x <= b, x >= 0, 0 <= theta <= u ];

Then we convert the problem into the MPT format:

plp = Opt(C, J, theta, x);

and tell MPT to construct the parametric solution:

solution = plp.solve();
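Before proceeding, it is good practice to verify that the parametric solver succeeded. The sketch below assumes the returned structure exposes `exitflag` and `how` fields, as in MPT3:

% sanity check: exitflag and how are assumed fields of the MPT3 result
if solution.exitflag ~= 1
    error('Parametric solver failed: %s', solution.how);
end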

The parametric solution maps the parameters onto the optimization variables. For a parametric linear problem, this map takes the form of a piecewise affine function {$x^{\star} = F_i \theta + g_i$} if {$\theta \in \mathcal{R}_i$}, where {$\mathcal{R}_i$}, {$i = 1, \ldots, N$} are so-called critical regions. The function can be plotted using the fplot() method:

for i = 1:nx
figure;
solution.xopt.fplot('primal', 'position', i);
xlabel('t');
ylabel(sprintf('x_%d(t)', i));
end
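The critical regions and the affine laws {$F_i$}, {$g_i$} can also be inspected programmatically. The sketch below assumes the MPT3 interface, where solution.xopt is a PolyUnion with a Num property and a Set array, and getFunction() returns an affine function object with F and g fields:

% number of critical regions (Num is assumed from the MPT3 PolyUnion API)
N = solution.xopt.Num;

% affine law x = F_1*theta + g_1 in the first region
% (getFunction() and the F/g fields are assumed from the MPT3 API)
law = solution.xopt.Set(1).getFunction('primal');
disp(law.F);
disp(law.g);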

We can also plot the optimal value function {$J^{\star}(\theta)$} versus the parameter:

figure;
solution.xopt.fplot('obj');
xlabel('t');
ylabel('J(t)');

To plot just the critical regions, use the plot() method:

solution.xopt.plot();

Finally, the parametric solution can be used to easily obtain the values of the optimization variables for a particular value of the parameter using the feval() method:

t0 = 0.5;
x_t0 = solution.xopt.feval(t0, 'primal')
J_t0 = solution.xopt.feval(t0, 'obj')
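As a cross-check, one can fix {$\theta = t_0$} and solve the resulting (non-parametric) LP directly with YALMIP's optimize(); the optimizer and objective value should agree with the feval() results above:

% cross-check: solve the LP at theta = t0 directly with YALMIP
x0 = sdpvar(nx, 1);
J0 = (c + D*t0)'*x0;
optimize([A*x0 <= b, x0 >= 0], J0);
% value(x0) should match x_t0, and value(J0) should match J_t0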