
Section 4.4 Summary of Chapter 4

We recall that each corner or extreme point solution lies on the intersection of \(m\) bounding hyperplanes of the feasible region. We also note that the normal vector for the objective plane may be written as a linear combination of the normal vectors for these bounding hyperplanes, and in particular, the coefficients of the linear combination are exactly the negatives of the \(c_j\)'s in the corresponding tableau.
\begin{equation*}
\begin{array}{ccccc|l}
x_1 & x_2 & \cdots & x_m & -1 & \\ \hline
a_{11} & a_{12} & \cdots & a_{1m} & b_1 & =-t_1\\
a_{21} & a_{22} & \cdots & a_{2m} & b_2 & =-t_2\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nm} & b_n & =-t_n\\ \hline
c_1 & c_2 & \cdots & c_m & d & =f
\end{array}
\end{equation*}
In some sense, what \(-c_j\) tells us is how much \(f\) increases if the corresponding \(x_j\) decreases by one, i.e. if the bound associated with \(x_j\) were relaxed by one. This gives us a way of comparing the relative value of relaxing each of the constraints. So in the case of Exploration 4.1.1, we have
\begin{equation*}
\begin{array}{ccc|l}
x_1 & x_2 & -1 & \\ \hline
3 & 1 & 34 & =-t_1\\
1 & 1 & 14 & =-t_2\\ \hline
3 & 2 & 0 & =f
\end{array}
\end{equation*}
We have that the normal vector for the objective plane, \(\begin{bmatrix} 3\\2\end{bmatrix}\text{,}\) may be written as \(\begin{bmatrix} 3\\2\end{bmatrix} = -3\begin{bmatrix} -1\\0\end{bmatrix}+(-2)\begin{bmatrix} 0\\-1\end{bmatrix}\text{,}\) and lowering \(x_1\) by 1 decreases the objective by 3, while lowering \(x_2\) by 1 decreases it by 2.
But suppose we pivot to
\begin{equation*}
\begin{array}{ccc|l}
t_1 & t_2 & -1 & \\ \hline
1/2 & -1/2 & 10 & =-x_1\\
-1/2 & 3/2 & 4 & =-x_2\\ \hline
-1/2 & -3/2 & -38 & =f
\end{array}
\end{equation*}
Then \(\begin{bmatrix} 3\\2\end{bmatrix} = \frac{1}{2}\begin{bmatrix} 3\\1\end{bmatrix}+\frac{3}{2}\begin{bmatrix} 1\\1\end{bmatrix}\text{,}\) and decreasing \(t_1\) by 1 increases the objective by \(\frac{1}{2}\text{,}\) while decreasing \(t_2\) by 1 increases the objective by \(\frac{3}{2}\text{.}\) Note that this is equivalent to saying how much value would be gained should there be one more newt eye or frog, respectively.
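As a quick numerical check of this decomposition, here is a minimal Python sketch; the hyperplane normals and dual weights are read directly off the tableaus above.

```python
from fractions import Fraction as F

# Normal vectors of the bounding hyperplanes 3x1 + x2 <= 34
# and x1 + x2 <= 14, and the objective normal c = (3, 2).
a1 = (F(3), F(1))
a2 = (F(1), F(1))
c = (F(3), F(2))

# Weights read off the bottom row of the pivoted tableau:
# y1 = -(-1/2) = 1/2 and y2 = -(-3/2) = 3/2.
y1, y2 = F(1, 2), F(3, 2)

# The objective normal is recovered as y1*a1 + y2*a2.
combo = tuple(y1 * u + y2 * v for u, v in zip(a1, a2))
assert combo == c
```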
This inspires us to define the dual problem of a primal maximization problem in Definition 4.1.5, where the \(y_i\) are the weights or coefficients of the normal vectors for the bounding hyperplanes of the feasible region. We may define \(s_j\) to be the slack variable for each dual constraint, so that \(\displaystyle \left( \sum_{i=1}^n a_{ij}y_i\right)-s_j=c_j\text{,}\) or equivalently:
\begin{align*} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{bmatrix} & = y_1\begin{bmatrix} a_{11} \\ a_{12}\\ \vdots \\ a_{1m} \end{bmatrix}+y_2\begin{bmatrix} a_{21} \\ a_{22}\\ \vdots \\ a_{2m} \end{bmatrix} + \cdots + y_n\begin{bmatrix} a_{n1} \\ a_{n2}\\ \vdots \\ a_{nm} \end{bmatrix}\\ & -s_1\begin{bmatrix} 1 \\ 0\\ \vdots \\ 0 \end{bmatrix}-s_2\begin{bmatrix} 0 \\ 1\\ \vdots \\ 0 \end{bmatrix}-\cdots-s_m\begin{bmatrix} 0 \\ 0\\ \vdots \\ 1 \end{bmatrix} \end{align*}
and we have feasibility if and only if each \(y_i, s_j\geq 0\text{.}\)
Figure 4.4.1. Normal Vectors and Sensitivity Analysis.
We then note that, by reformulating the primal and dual problems using matrix algebra, for feasible \(\x, \y\text{,}\) we have that
\begin{equation*} f=\vc^\top\x \leq \y^\top A\x\leq \y^\top \vb=g \end{equation*}
which is the statement of the Weak Duality Theorem Proposition 4.2.3. In particular, if \(f=g\) for feasible \(\x, \y\text{,}\) then both \(f, g\) are optimal.
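With the data from Exploration 4.1.1, we can verify this chain of inequalities for one feasible pair; a minimal Python sketch follows, where the particular feasible points \(\x=(5,5)\) and \(\y=(1,1)\) are illustrative choices of our own, not from the text.

```python
# Primal data from the example: maximize c.x subject to Ax <= b, x >= 0.
A = [[3, 1], [1, 1]]
b = [34, 14]
c = [3, 2]

x = [5, 5]  # primal feasible: 3*5 + 5 = 20 <= 34 and 5 + 5 = 10 <= 14
y = [1, 1]  # dual feasible: A^T y = (4, 2) >= (3, 2)

f = sum(cj * xj for cj, xj in zip(c, x))  # c^T x
mid = sum(y[i] * sum(A[i][j] * x[j] for j in range(2)) for i in range(2))  # y^T A x
g = sum(yi * bi for yi, bi in zip(y, b))  # y^T b

# Weak duality: f <= y^T A x <= g, here 25 <= 30 <= 48.
assert f <= mid <= g
```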
It turns out the converse to this statement is true as well. To show this, we suppose that \(f\) achieves optimality at
\begin{equation*}
\begin{array}{ccccc|l}
t_? & x_? & \cdots & t_? & -1 & \\ \hline
a_{11}^* & a_{12}^* & \cdots & a_{1m}^* & b_1^* & =-x_?\\
a_{21}^* & a_{22}^* & \cdots & a_{2m}^* & b_2^* & =-x_?\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
a_{n1}^* & a_{n2}^* & \cdots & a_{nm}^* & b_n^* & =-t_?\\ \hline
c_1^* & c_2^* & \cdots & c_m^* & d^* & =f
\end{array}
\end{equation*}
For each column of this tableau, if \(c_j^*\) is the coefficient of some \(t_i\text{,}\) we let \(y_i = -c_j^*\text{,}\) and if it is the coefficient of some \(x_j\text{,}\) we let \(s_j = -c_j^*\text{.}\) By letting every other \(y_i, s_j=0\text{,}\) we have that
\begin{align*} f = \sum_{j=1}^m c_jx_j -d & = \sum_{i=1}^n (-y_i)t_i + \sum_{j=1}^m (-s_j)x_j -d^*\\ & = \sum_{i=1}^n (-y_i)\left(b_i-\sum_{j=1}^m a_{ij}x_j\right) + \sum_{j=1}^m (-s_j)x_j -d^* \\ & = \sum_{j=1}^m\left(\left(\sum_{i=1}^n y_ia_{ij}\right)-s_j\right)x_j -\left(d^*+\sum_{i=1}^n y_ib_i\right) \end{align*}
We note that since the above tableau satisfies the conditions for an optimal basic solution, each \(y_i, s_j\geq0\text{.}\) Since the above equality is an equality of functions of \(x_1,\ldots, x_m\text{,}\) by comparing coefficients of \(x_j\text{,}\) we have \(\displaystyle \sum_{i=1}^n y_ia_{ij}\geq c_j\text{,}\) so \(\y\) is a feasible solution. Comparing constant terms, we also have that \(-d^*=\displaystyle \sum_{i=1}^ny_ib_i-d\text{.}\) The left hand side is the optimal value of \(f\text{,}\) and the right hand side is the value of \(g\) for this choice of \(\y\text{.}\) So by Proposition 4.2.3, we have proven the Strong Duality Theorem Theorem 4.2.4.
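For the running example, the dual solution read off the bottom row of the pivoted tableau attains the primal optimum exactly, as strong duality predicts; a minimal Python check:

```python
from fractions import Fraction as F

b = [F(34), F(14)]

# Dual solution read off the optimal tableau: y_i = -c_i*.
y = [F(1, 2), F(3, 2)]

# Dual objective g = y^T b = 17 + 21 = 38.
g = sum(yi * bi for yi, bi in zip(y, b))

f_opt = F(38)  # optimal primal value, f = -d* = 38
assert g == f_opt
```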
Figure 4.4.2. Proofs of the duality theorems.
We then note that the dual problem may be recorded simultaneously with the primal problem within the same tableau
\begin{equation*}
\begin{array}{c|cccc|l}
 & x_1 & \cdots & x_m & -1 & \\ \hline
y_1 & a_{11} & \cdots & a_{1m} & b_1 & =-t_1\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
y_n & a_{n1} & \cdots & a_{nm} & b_n & =-t_n\\ \hline
-1 & c_1 & \cdots & c_m & d & =f\\
 & =s_1 & \cdots & =s_m & =g &
\end{array}
\end{equation*}
and that by using similar arguments to those in Chapter 2, we may establish conditions for determining the feasibility, optimality, and boundedness of the dual problem. A basic dual solution is feasible if and only if each \(c_j\leq0\text{,}\) and if we had some \(c_j > 0\) but each \(a_{ij}\leq 0\) in that column, then the dual problem is infeasible, since the corresponding dual constraint can never be satisfied. If the basic dual solution is feasible, then it is also optimal if and only if each \(b_i\geq 0\text{.}\) However, if the basic dual solution is feasible but there is a row \(i\) where each \(a_{ij} \geq0\text{,}\) then the dual region is unbounded, and if additionally \(b_i<0\text{,}\) then the dual problem is unbounded below. These boundedness conditions are sufficient but not necessary; the reader is encouraged to come up with their own counterexamples to necessity.
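These conditions are easy to state programmatically. Below is a minimal Python sketch (the function name and return strings are our own, not from the text) that classifies the basic dual solution read off a primal maximization tableau:

```python
def dual_diagnosis(A, b, c):
    """Classify the basic dual solution read off a primal max tableau.

    A is the list of tableau rows [a_ij], b the constants b_i,
    and c the bottom-row coefficients c_j.
    """
    n, m = len(b), len(c)
    # Infeasibility: some c_j > 0 while every a_ij <= 0 in that column,
    # so the dual constraint (sum_i a_ij y_i) >= c_j can never hold.
    for j in range(m):
        if c[j] > 0 and all(A[i][j] <= 0 for i in range(n)):
            return "dual problem infeasible"
    if any(cj > 0 for cj in c):
        return "basic dual solution not feasible"
    if all(bi >= 0 for bi in b):
        return "basic dual solution feasible and optimal"
    return "basic dual solution feasible but not optimal"

# The initial tableau of the example is not dual feasible,
# while the pivoted tableau is dual feasible and optimal.
print(dual_diagnosis([[3, 1], [1, 1]], [34, 14], [3, 2]))
print(dual_diagnosis([[0.5, -0.5], [-0.5, 1.5]], [10, 4], [-0.5, -1.5]))
```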
We conclude by observing that if a primal problem has an unconstrained variable, the corresponding dual variable \(s_j\) must be zero, since the associated hyperplane is not a bounding hyperplane of the feasible region. Similarly, if we had an equality constraint in the primal, with the corresponding slack variable set to zero, then the corresponding dual variable is allowed to be unconstrained, since it is no longer relevant whether pivoting away from this hyperplane improves the primal objective.
Figure 4.4.3. The dual problem as recorded in tableaus.