
Section 4.4 Summary of Chapter 4

We recall that each corner or extreme point solution lies on the intersection of m bounding hyperplanes of the feasible region. We also note that the normal vector for the objective plane may be written as a linear combination of the normal vectors for these bounding hyperplanes, and in particular, the coefficients of the linear combination are exactly the negatives of the c_j's in the corresponding tableau:
\[
\begin{array}{cccc|cl}
x_1 & x_2 & \cdots & x_m & -1 & \\ \hline
a_{11} & a_{12} & \cdots & a_{1m} & b_1 & = -t_1 \\
a_{21} & a_{22} & \cdots & a_{2m} & b_2 & = -t_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nm} & b_n & = -t_n \\ \hline
c_1 & c_2 & \cdots & c_m & d & = f
\end{array}
\]
In some sense, -c_j tells us how much f increases if the corresponding x_j decreases by one, i.e. if the bound associated with x_j were relaxed by one. This gives us a way of comparing how valuable relaxing each of the constraints is relative to the others. So in the case of Exploration 4.1.1, we have
\[
\begin{array}{cc|cl}
x_1 & x_2 & -1 & \\ \hline
3 & 1 & 34 & = -t_1 \\
1 & 1 & 14 & = -t_2 \\ \hline
3 & 2 & 0 & = f
\end{array}
\]
We have that the normal vector for the objective plane, \(\begin{bmatrix} 3 \\ 2 \end{bmatrix}\), may be written as
\[
\begin{bmatrix} 3 \\ 2 \end{bmatrix} = -3\begin{bmatrix} -1 \\ 0 \end{bmatrix} + (-2)\begin{bmatrix} 0 \\ -1 \end{bmatrix},
\]
and lowering x_1 by 1 decreases the objective by 3, while lowering x_2 by 1 decreases it by 2.
But suppose we pivot to
\[
\begin{array}{cc|cl}
t_1 & t_2 & -1 & \\ \hline
1/2 & -1/2 & 10 & = -x_1 \\
-1/2 & 3/2 & 4 & = -x_2 \\ \hline
-1/2 & -3/2 & -38 & = f
\end{array}
\]
Then
\[
\begin{bmatrix} 3 \\ 2 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 3 \\ 1 \end{bmatrix} + \frac{3}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix},
\]
and decreasing t_1 by 1 increases the objective by 1/2, while decreasing t_2 by 1 increases it by 3/2. Note that this is equivalent to asking how much value would be gained should there be one more newt eye or frog, respectively.
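Both claims here can be checked numerically. The following Python sketch (the numbers come from the tableau above; the function and variable names are ours) solves for the weights y_1, y_2 by Cramer's rule, then re-solves the optimal corner after relaxing each bound by one to confirm the objective rises by exactly those weights:

```python
# Normals of the two bounding hyperplanes and of the objective plane.
a1, a2, c = (3, 1), (1, 1), (3, 2)

# Solve y1*a1 + y2*a2 = c by Cramer's rule (a 2x2 linear system).
det = a1[0] * a2[1] - a2[0] * a1[1]
y1 = (c[0] * a2[1] - a2[0] * c[1]) / det   # 1/2
y2 = (a1[0] * c[1] - c[0] * a1[1]) / det   # 3/2

# The optimal corner solves 3*x1 + x2 = b1 and x1 + x2 = b2.
def optimum(b1, b2):
    x1 = (b1 - b2) / 2
    x2 = b2 - x1
    return 3 * x1 + 2 * x2

base = optimum(34, 14)         # f = 38 at (10, 4)
print(y1, y2)                  # 0.5 1.5
print(optimum(35, 14) - base)  # one more newt eye: +0.5
print(optimum(34, 15) - base)  # one more frog: +1.5
```

The sensitivity interpretation falls out directly: each extra unit of resource i raises the optimal objective by y_i.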
This inspires us to define the dual problem of a primal maximization problem in Definition 4.1.5, where the y_i are the weights or coefficients of the normal vectors for the bounding hyperplanes of the feasible region. We may define s_j to be the slack variable for each dual constraint, so that \(\left(\sum_{i=1}^n a_{ij}y_i\right) - s_j = c_j\), or equivalently (recalling that the normal vectors of the hyperplanes x_j ≥ 0 are the negated standard basis vectors):
\[
\begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{bmatrix}
= y_1 \begin{bmatrix} a_{11} \\ a_{12} \\ \vdots \\ a_{1m} \end{bmatrix}
+ y_2 \begin{bmatrix} a_{21} \\ a_{22} \\ \vdots \\ a_{2m} \end{bmatrix}
+ \cdots
+ y_n \begin{bmatrix} a_{n1} \\ a_{n2} \\ \vdots \\ a_{nm} \end{bmatrix}
+ s_1 \begin{bmatrix} -1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
+ s_2 \begin{bmatrix} 0 \\ -1 \\ \vdots \\ 0 \end{bmatrix}
+ \cdots
+ s_m \begin{bmatrix} 0 \\ 0 \\ \vdots \\ -1 \end{bmatrix}
\]
and we have feasibility if and only if each yi,sj≥0.
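This feasibility condition is mechanical enough to test in code. A small Python sketch (the function name and data layout are ours; the data is the running example), checking that y, s ≥ 0 and that the dual constraints hold with the given slacks:

```python
def is_dual_feasible(A, c, y, s, tol=1e-9):
    """Check y, s >= 0 and sum_i A[i][j]*y[i] - s[j] == c[j] for each j."""
    if any(v < -tol for v in list(y) + list(s)):
        return False
    n, m = len(A), len(c)
    return all(
        abs(sum(A[i][j] * y[i] for i in range(n)) - s[j] - c[j]) <= tol
        for j in range(m)
    )

A = [[3, 1], [1, 1]]   # row i holds the normal vector of constraint i
c = [3, 2]
print(is_dual_feasible(A, c, y=[0.5, 1.5], s=[0, 0]))  # True
print(is_dual_feasible(A, c, y=[0, 0], s=[-3, -2]))    # False: s < 0
```

The second call corresponds to reading the dual solution off the initial (origin) tableau: the representation of c holds, but the negative slacks certify that this corner is not dual feasible.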
Figure 4.4.1. Normal Vectors and Sensitivity Analysis.
We then note that, by reformulating the primal and dual problems using matrix algebra, for feasible x, y we have that
\[
f = c^\top x \le y^\top A x \le y^\top b = g,
\]
which is the statement of the Weak Duality Theorem Proposition 4.2.3. In particular, if f=g for feasible x,y, then both f,g are optimal.
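A quick numeric illustration of the chain of inequalities, using the running example with deliberately suboptimal feasible points (the point choices are ours):

```python
# Weak duality: for primal-feasible x and dual-feasible y,
#   f = c.x  <=  y.(Ax)  <=  y.b = g.
A = [[3, 1], [1, 1]]
b = [34, 14]
c = [3, 2]
x = [4, 2]   # primal feasible: 3*4 + 2 = 14 <= 34 and 4 + 2 = 6 <= 14
y = [1, 1]   # dual feasible: 3*1 + 1 >= 3 and 1 + 1 >= 2

f = sum(cj * xj for cj, xj in zip(c, x))
Ax = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
middle = sum(yi * vi for yi, vi in zip(y, Ax))
g = sum(yi * bi for yi, bi in zip(y, b))
print(f, middle, g)      # 16 20 48
print(f <= middle <= g)  # True
```

The gap 48 − 16 reflects how far both points are from optimal; at the optimal pair the chain collapses to equality.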
It turns out the converse to this statement is true as well. To show this, we suppose that f achieves optimality at a tableau of the form
\[
\begin{array}{cccc|cl}
t_? & x_? & \cdots & t_? & -1 & \\ \hline
a^*_{11} & a^*_{12} & \cdots & a^*_{1m} & b^*_1 & = -x_? \\
a^*_{21} & a^*_{22} & \cdots & a^*_{2m} & b^*_2 & = -x_? \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
a^*_{n1} & a^*_{n2} & \cdots & a^*_{nm} & b^*_n & = -t_? \\ \hline
c^*_1 & c^*_2 & \cdots & c^*_m & d^* & = f
\end{array}
\]
where, after pivoting, the column and row variables are some mixture of the t_i's and x_j's.
We let y_i = -c^*_j if the jth column variable is t_i, and s_k = -c^*_j if it is x_k; setting every other y_i and s_k to 0, we have that
\begin{align*}
f = \sum_{j=1}^m c_j x_j - d &= \sum_{i=1}^n (-y_i) t_i + \sum_{j=1}^m (-s_j) x_j - d^* \\
&= \sum_{i=1}^n (-y_i)\left(b_i - \sum_{j=1}^m a_{ij} x_j\right) + \sum_{j=1}^m (-s_j) x_j - d^* \\
&= \sum_{j=1}^m \left(\left(\sum_{i=1}^n y_i a_{ij}\right) - s_j\right) x_j - \left(d^* + \sum_{i=1}^n y_i b_i\right).
\end{align*}
We note that since the above tableau satisfies the conditions for an optimal basic solution, each y_i, s_j ≥ 0. Since the above equality is an equality of functions of x_1, …, x_m, comparing coefficients of x_j gives \(c_j = \left(\sum_{i=1}^n y_i a_{ij}\right) - s_j\), so \(\sum_{i=1}^n y_i a_{ij} = c_j + s_j \ge c_j\) and y is a feasible dual solution. Comparing the constant terms gives \(d = d^* + \sum_{i=1}^n y_i b_i\), that is, \(-d^* = \sum_{i=1}^n y_i b_i - d\). The left-hand side is the optimal value of f, and the right-hand side is the value of g for this choice of y. So by Proposition 4.2.3, we have proved the Strong Duality Theorem, Theorem 4.2.4.
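Strong duality can also be confirmed by brute force on the running example: enumerate the corner points of each feasible region and compare the two optima. A Python sketch (helper name and data encoding are ours; each half-plane is stored as a pair (normal, bound) meaning n·z ≤ r):

```python
from itertools import combinations

def vertices(halfplanes):
    """Corner points of {z in R^2 : n.z <= r for each (n, r)}."""
    pts = []
    for (n1, r1), (n2, r2) in combinations(halfplanes, 2):
        det = n1[0] * n2[1] - n2[0] * n1[1]
        if abs(det) < 1e-9:
            continue  # parallel boundaries: no intersection point
        z = ((r1 * n2[1] - r2 * n1[1]) / det,
             (n1[0] * r2 - n2[0] * r1) / det)
        if all(n[0] * z[0] + n[1] * z[1] <= r + 1e-9 for n, r in halfplanes):
            pts.append(z)
    return pts

# Primal: max 3x1 + 2x2  s.t.  3x1 + x2 <= 34, x1 + x2 <= 14, x >= 0.
primal = [((3, 1), 34), ((1, 1), 14), ((-1, 0), 0), ((0, -1), 0)]
# Dual:   min 34y1 + 14y2  s.t.  3y1 + y2 >= 3, y1 + y2 >= 2, y >= 0
# (each >= constraint rewritten as <= by negating both sides).
dual = [((-3, -1), -3), ((-1, -1), -2), ((-1, 0), 0), ((0, -1), 0)]

f_max = max(3 * x1 + 2 * x2 for x1, x2 in vertices(primal))
g_min = min(34 * y1 + 14 * y2 for y1, y2 in vertices(dual))
print(f_max, g_min)  # 38.0 38.0
```

The primal maximum and dual minimum coincide at 38, attained at x = (10, 4) and y = (1/2, 3/2), exactly as read off the pivoted tableau.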
Figure 4.4.2. Proofs of the duality theorems.
We then note that the dual problem may be recorded simultaneously with the primal problem within the same tableau:
\[
\begin{array}{c|ccc|cl}
 & x_1 & \cdots & x_m & -1 & \\ \hline
y_1 & a_{11} & \cdots & a_{1m} & b_1 & = -t_1 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
y_n & a_{n1} & \cdots & a_{nm} & b_n & = -t_n \\ \hline
-1 & c_1 & \cdots & c_m & d & = f \\
 & = s_1 & \cdots & = s_m & = g &
\end{array}
\]
and that by using arguments similar to those in Chapter 2, we may establish conditions for determining the feasibility, optimality, and boundedness of the dual problem. The basic dual solution is feasible if and only if each c_j ≤ 0, and if there is a c_j > 0 but each a_ij ≤ 0 in that column, then the dual problem is infeasible, since that dual constraint can never be satisfied. If the basic dual solution is feasible, then it is also optimal if and only if each b_i ≥ 0. However, if the basic dual solution is feasible but there is a row i where each a_ij ≥ 0, then the dual region is unbounded, and if additionally b_i < 0, then the dual problem is unbounded below. These boundedness conditions are sufficient but not necessary; the reader is encouraged to come up with their own counterexamples to necessity.
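These tests are mechanical enough to state as code. A Python sketch, assuming a tableau stored as the body A (the a_ij), the right column b, and the bottom row c; the function name and return strings are ours, not the text's:

```python
def dual_status(A, b, c):
    """Classify the basic dual solution read off a maximization tableau."""
    n, m = len(A), len(c)
    # A column with c_j > 0 but no positive a_ij can never be repaired
    # by pivoting, so the dual problem itself is infeasible.
    for j in range(m):
        if c[j] > 0 and all(A[i][j] <= 0 for i in range(n)):
            return "dual problem infeasible"
    if any(cj > 0 for cj in c):
        return "basic dual solution infeasible"
    if all(bi >= 0 for bi in b):
        return "dual feasible and optimal"
    # A nonnegative row with b_i < 0 certifies unboundedness below.
    for i in range(n):
        if b[i] < 0 and all(aij >= 0 for aij in A[i]):
            return "dual problem unbounded below"
    return "dual feasible, not yet optimal"

# Pivoted tableau of the running example (columns t1, t2):
print(dual_status([[1/2, -1/2], [-1/2, 3/2]], [10, 4], [-1/2, -3/2]))
# dual feasible and optimal

# Initial tableau of the running example (columns x1, x2):
print(dual_status([[3, 1], [1, 1]], [34, 14], [3, 2]))
# basic dual solution infeasible
```

Note the order of the checks matters: the infeasibility certificate is tested before the weaker "basic solution infeasible" verdict, mirroring the case analysis in the paragraph above.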
We conclude by observing that if a primal problem has an unconstrained variable x_j, the corresponding dual slack variable s_j must be zero, since the associated hyperplane is not a bounding hyperplane of the feasible region. Similarly, if we had an equality constraint in the primal, with the corresponding slack t_i set to zero, then the corresponding dual variable y_i is allowed to be unconstrained, since it is no longer relevant whether pivoting away from this hyperplane improves the primal objective.
Figure 4.4.3. The dual problem as recorded in tableaus.