Suppose there were feasible \(\mathbf{x_0}, \mathbf{y_0}\) for which \(f(\mathbf{x_0})= g(\mathbf{y_0})\text{.}\) What then must be true about these solutions? Can we prove our assertion?
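One route (a sketch, assuming the weak duality inequality \(f(\mathbf{x})\leq g(\mathbf{y})\) for all feasible \(\mathbf{x}, \mathbf{y}\) from our earlier work): for any feasible \(\mathbf{x}\) we would have
\begin{equation*}
f(\mathbf{x}) \leq g(\mathbf{y_0}) = f(\mathbf{x_0}),
\end{equation*}
so \(\mathbf{x_0}\) maximizes \(f\text{;}\) symmetrically, \(g(\mathbf{y}) \geq f(\mathbf{x_0}) = g(\mathbf{y_0})\) for any feasible \(\mathbf{y}\text{,}\) so \(\mathbf{y_0}\) minimizes \(g\text{.}\)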
Recall Activity 2.1.12 and Exploration 4.1.1. Consider the primal max and dual min of the associated problems. How does our assertion fit these problems?
Come up with a primal max problem (and corresponding dual min problem) where \(A, b, c, d, \mathbf{x}, \mathbf{y}\) all have integer values, so that the primal max and dual min problems achieve optimal solutions \(\mathbf{x_0}, \mathbf{y_0}\text{,}\) where \(f(\mathbf{x_0}) \lt g(\mathbf{y_0})\text{.}\)
Using the same values of \(A, b, c, d\) as in the problem we just constructed, suppose we relax the condition that all our values must be integers. What can we say about the optimal solutions then?
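One possible construction (a sketch; the specific numbers are our own choice): maximize \(f(x)=x\) subject to \(2x\leq 1\text{,}\) \(x\geq 0\text{,}\) whose dual min problem is to minimize \(g(y)=y\) subject to \(2y\geq 1\text{,}\) \(y\geq 0\text{.}\) Over the integers, the optimal solutions are \(x_0=0\) and \(y_0=1\text{,}\) so \(f(x_0)=0 \lt 1 = g(y_0)\text{.}\) Once the integrality condition is relaxed, both problems are optimized at \(x^*=y^*=\frac{1}{2}\text{,}\) where \(f(x^*)=g(y^*)=\frac{1}{2}\text{.}\)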
We have now shown that if \(f=g\) for a pair of feasible solutions, then we have optimality for both problems. It would be good if the converse were also true; this is encapsulated by the following theorem.
Given a pair of primal max-dual min problems, the primal max problem has an optimal solution \(\x^*\) if and only if the dual min problem has an optimal solution \(\y^*\text{.}\) Moreover, \(f(\x^*)=g(\y^*)\text{.}\)
To prove this, we recall the idea we explored in Section 4.1 that dual variables were coefficients or weights of the normal vectors of the bounding hyperplanes.
Before we delve into the proof, we illustrate the idea with a concrete case. Consider a canonical optimization problem captured by the following tableau (for simplicity's sake we let \(d=0\)):
Let \(y_i^*=-c_k^*\) if \(c_k^*\) is the coefficient of \(t_i\) in the optimal tableau, and let \(y^*_i=0\) if \(t_i\) does not appear there. Similarly, let \(s_j^*=-c_k^*\) if \(c_k^*\) is the coefficient of \(x_j\) in the optimal tableau, and let \(s^*_j=0\) otherwise.
Recall that the optimal tableau represents a reformulation of the original problem where \(f=c_1x_1+c_2x_2+c_3x_3+c_4x_4\) is instead rewritten as \(f=c_1^*t_3+c_2^*x_3+c_3^*x_1+c_4^*t_2-d^*\text{.}\)
Use the fact that \(t_i = b_i-\left(\displaystyle \sum_{j=1}^4 a_{ij}x_j\right)\) to rewrite the above equality without \(t_i\)’s. Regroup the right hand side so we have the form:
\begin{equation*}
\displaystyle \sum_{j=1}^4 c_jx_j = (\text{linear function in terms of }x_j\text{'s})+(\text{constant}).
\end{equation*}
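For the tableau above, this computation can be sketched as follows (assuming, as the rewritten expression for \(f\) indicates, that the nonbasic variables in the optimal tableau are \(t_3, x_3, x_1, t_2\)). Substituting for \(t_2\) and \(t_3\text{,}\)
\begin{equation*}
\displaystyle \sum_{j=1}^4 c_jx_j = c_3^*x_1 + c_2^*x_3 + c_1^*\left(b_3 - \sum_{j=1}^4 a_{3j}x_j\right) + c_4^*\left(b_2 - \sum_{j=1}^4 a_{2j}x_j\right) - d^*,
\end{equation*}
and regrouping, using \(y_3^* = -c_1^*\text{,}\) \(y_2^* = -c_4^*\text{,}\) \(s_1^* = -c_3^*\text{,}\) \(s_3^* = -c_2^*\) as defined above (all other \(y_i^*, s_j^*\) being \(0\)),
\begin{equation*}
\displaystyle \sum_{j=1}^4 c_jx_j = \sum_{j=1}^4\left(\sum_{i=1}^4 y_i^*a_{ij} - s_j^*\right)x_j - \left(\sum_{i=1}^4 y_i^*b_i + d^*\right).
\end{equation*}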
Recall that the equality we established above is an equality as functions of the \(x_j\)’s. What must the constant portion of the right hand side be equal to? What must the linear function in terms of \(x_j\)’s on the right hand side be equal to?
We now need to show that the \(y_i^*\)’s we found form a feasible solution to the dual problem. Why do we have that \(c_j = \displaystyle \sum_{i=1}^4 y_i^*a_{ij}-s_j^*\text{?}\)
Why is each \(y_i^*, s_j^*\geq 0\text{?}\) Why does this show that \(\displaystyle \sum_{i=1}^4 y_i^*a_{ij} \geq c_j\text{?}\) This shows that the \(s_j^*\)’s are nonnegative slack variables for the \(y_i^*\)’s and that the \(y_i^*\)’s form a feasible solution.
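Concretely, the identity above rearranges (a one-line sketch) to
\begin{equation*}
\displaystyle \sum_{i=1}^4 y_i^*a_{ij} = c_j + s_j^* \geq c_j,
\end{equation*}
since each \(s_j^*\geq 0\text{.}\)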
Suppose \(\x\) is a feasible extreme point, and \(\y\) and \(\vs\) are the corresponding coefficients for the normal vectors. If \(x_j\neq 0\text{,}\) what does that say about \(s_j\text{?}\)
\(s_j\) is the coefficient of the normal vector for the plane \(-x_j=0\text{.}\) If the feasible solution does not lie on \(-x_j=0\text{,}\) what can we say about \(s_j\text{?}\)
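In this geometric reading (an interpretive sketch), only hyperplanes that the point actually lies on can carry nonzero weight: if \(\x\) does not lie on \(-x_j=0\text{,}\) then \(s_j=0\text{,}\) and likewise \(y_i=0\) whenever \(\x\) does not lie on \(\displaystyle\sum_{j} a_{ij}x_j=b_i\text{.}\)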
Give an example of a linear optimization primal-dual problem, and feasible solutions \(\x, \y\) with slack variables \(\vt, \vs\) where \(\vs^\top\x + \y^\top\vt > 0\text{.}\)
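One such example (a sketch, with numbers of our own choosing): maximize \(f(x)=x\) subject to \(x\leq 1\text{,}\) \(x\geq 0\text{,}\) whose dual is to minimize \(g(y)=y\) subject to \(y\geq 1\text{,}\) \(y\geq 0\text{.}\) The feasible solutions \(x=0\) and \(y=2\) have slack variables \(t=1\) and \(s=1\text{,}\) so \(\vs^\top\x + \y^\top\vt = (1)(0)+(2)(1) = 2 > 0\text{.}\)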
If \(\x^*, \y^*\) are feasible optimal solutions with slack variables \(\vt^*, \vs^*\text{,}\) what must be true about \((\vs^*)^\top\x^* + (\y^*)^\top\vt^*\text{?}\)
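One way to see the answer (a sketch, assuming the conventions \(\vt = b - A\x\) for the primal slack variables and \(\vs^\top = \y^\top A - c^\top\) for the dual slack variables):
\begin{equation*}
(\vs^*)^\top\x^* + (\y^*)^\top\vt^* = \left((\y^*)^\top A - c^\top\right)\x^* + (\y^*)^\top\left(b - A\x^*\right) = (\y^*)^\top b - c^\top\x^* = g(\y^*) - f(\x^*),
\end{equation*}
where the constants \(d\) cancel in the difference, and the theorem above forces this quantity to equal \(0\text{.}\)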