
Section 2.5 Summary of Chapter 2

We recall that the canonical linear optimization problem
\begin{align*} \textbf{Maximize: } f(\mathbf{x}) = c_1x_1 + c_2x_2 +\cdots+ c_mx_m-d \amp= \left(\displaystyle\sum_{j=1}^m c_jx_j \right)-d\\ \textbf{subject to: } a_{1,1}x_1 + a_{1,2}x_2 +\cdots+ a_{1,m}x_m\amp \leq b_1\\ a_{2,1}x_1 + a_{2,2}x_2 +\cdots+ a_{2,m}x_m\amp \leq b_2\\ \vdots \amp \vdots\\ a_{n,1}x_1 + a_{n,2}x_2 +\cdots+ a_{n,m}x_m\amp \leq b_n\\ x_1, x_2, \ldots, x_m\amp \geq 0. \end{align*}
may be written as
\begin{align*} \textbf{Maximize: } f(\mathbf{x}) = c_1x_1 + c_2x_2 +\cdots+ c_mx_m-d \amp= \left(\displaystyle\sum_{j=1}^m c_jx_j \right)-d\\ \textbf{subject to: } a_{1,1}x_1 + a_{1,2}x_2 +\cdots+ a_{1,m}x_m + t_1 \amp = b_1\\ a_{2,1}x_1 + a_{2,2}x_2 +\cdots+ a_{2,m}x_m+t_2\amp = b_2\\ \vdots \amp \vdots\\ a_{n,1}x_1 + a_{n,2}x_2 +\cdots+ a_{n,m}x_m +t_n \amp = b_n\\ x_1, x_2, \ldots, x_m, t_1, \ldots, t_n\amp \geq 0. \end{align*}
where the \(t_i\) are called the slack variables for each constraint. We refer to the original \(x_j\) as decision variables. We noted that, by construction, for any point \(\x\in \mathbb{R}^m\text{,}\) the slack \(t_i\) is zero if and only if \(\x\) lies on the corresponding bounding hyperplane, and is positive if \(\x\) lies on the “correct” side of that hyperplane.
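As a small illustration (the numbers here are chosen for this summary, not taken from the chapter), a single constraint gains a slack variable as follows:

```latex
\begin{equation*}
2x_1 + 3x_2 \leq 12
\qquad\longrightarrow\qquad
2x_1 + 3x_2 + t_1 = 12, \quad t_1 \geq 0.
\end{equation*}
```

At \((x_1,x_2) = (3,2)\) we get \(t_1 = 0\text{,}\) so the point lies on the bounding hyperplane; at \((1,1)\) we get \(t_1 = 7 > 0\text{,}\) so the point lies strictly on the feasible side.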
Figure 2.5.1. An introduction to slack variables.
All of this information may be written in a condensed form, the Tucker tableau.
\begin{equation*}
\begin{array}{cccc|c|l}
x_1 \amp x_2 \amp \cdots \amp x_m \amp -1 \amp \\ \hline
a_{11} \amp a_{12} \amp \cdots \amp a_{1m} \amp b_1 \amp = -t_1\\
a_{21} \amp a_{22} \amp \cdots \amp a_{2m} \amp b_2 \amp = -t_2\\
\vdots \amp \vdots \amp \ddots \amp \vdots \amp \vdots \amp \vdots\\
a_{n1} \amp a_{n2} \amp \cdots \amp a_{nm} \amp b_n \amp = -t_n\\ \hline
c_1 \amp c_2 \amp \cdots \amp c_m \amp d \amp
\end{array}
\end{equation*}
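For instance (an example constructed for this summary, not taken from the chapter), the problem of maximizing \(f = x_1 + x_2\) subject to \(x_1 + 2x_2 \leq 4\) and \(3x_1 + x_2 \leq 6\) has the tableau:

```latex
\begin{equation*}
\begin{array}{cc|c|l}
x_1 \amp x_2 \amp -1 \amp \\ \hline
1 \amp 2 \amp 4 \amp = -t_1\\
3 \amp 1 \amp 6 \amp = -t_2\\ \hline
1 \amp 1 \amp 0 \amp
\end{array}
\end{equation*}
```

Setting \(x_1 = x_2 = 0\) reads off the basic solution \(t_1 = 4\text{,}\) \(t_2 = 6\text{,}\) \(f = 0\text{.}\)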
We also noted that, although our inequalities and objective are written in terms of the \(x_j\text{,}\) the equality
\begin{equation*} a_{i1}x_1+\cdots+a_{ij}x_j+\cdots+a_{im}x_m=b_i-t_i \end{equation*}
lets us solve for \(x_j\text{:}\)
\begin{equation*} x_j=\frac{1}{a_{ij}}\left( b_i-t_i-a_{i1}x_1-\cdots-a_{i,j-1}x_{j-1}-a_{i,j+1}x_{j+1}-\cdots-a_{im}x_m \right) \end{equation*}
provided \(a_{ij}\neq0\text{.}\) This allows us to rewrite all the pertinent equalities and inequalities, replacing \(x_j\) with \(t_i\text{.}\) This process is called a pivot transformation (Definition 2.1.9), after which \(t_i\) is a decision variable and \(x_j\) is a slack variable.
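The pivot arithmetic can be sketched in code. This is a minimal illustration, not code from the text: the function name `tucker_pivot` and the use of exact `Fraction` arithmetic are choices made here. The tableau is stored as the matrix of entries \(a_{ij}\) with the \(b_i\) in the last column and the objective row \(c_1,\ldots,c_m,d\) appended at the bottom.

```python
from fractions import Fraction

def tucker_pivot(T, i, j):
    """Pivot the tableau T on the entry T[i][j], exchanging the roles
    of decision variable x_j and slack variable t_i.  The standard
    Tucker rules: pivot entry -> 1/p, pivot row -> row/p,
    pivot column -> column/(-p), other entries a_rk -> a_rk - a_rj*a_ik/p."""
    p = Fraction(T[i][j])
    if p == 0:
        raise ValueError("cannot pivot on a zero entry")
    rows, cols = len(T), len(T[0])
    return [[1 / p if (r, k) == (i, j)                 # pivot entry
             else Fraction(T[i][k]) / p if r == i      # pivot row
             else -Fraction(T[r][j]) / p if k == j     # pivot column
             else Fraction(T[r][k]) - Fraction(T[r][j]) * T[i][k] / p
             for k in range(cols)] for r in range(rows)]
```

For example, for the tableau of maximizing \(x_1+x_2\) subject to \(x_1+2x_2\leq 4\text{,}\) \(3x_1+x_2\leq 6\text{,}\) pivoting on the entry \(a_{21}=3\) produces a tableau with bottom-right entry \(-2\text{,}\) i.e. the new basic solution has \(f = -d = 2\text{.}\)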
We note that by setting all the decision variables equal to zero, we obtain a candidate point called a basic solution, and the pivot transformation is really moving from basic solution to basic solution. Assuming that all \(b_i\geq0\text{,}\) the basic solution is feasible, and we established a rule to identify pivots: picking a column with \(c_j > 0\text{,}\) we choose a positive entry \(a_{ij}\) minimizing the ratio \(\frac{b_i}{a_{ij}}\text{.}\) When each \(c_j\leq 0\text{,}\) we have obtained an optimal solution. This is summarized in the Simplex Algorithm (Definition 2.2.11).
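Under the assumption that all \(b_i \geq 0\text{,}\) the loop can be sketched as follows. This is an illustrative sketch, not the book's statement of the algorithm: the function name `simplex`, the choice of the first positive \(c_j\text{,}\) and exact `Fraction` arithmetic are assumptions made here, and no anticycling rule is included.

```python
from fractions import Fraction

def simplex(T):
    """Simplex algorithm on a Tucker tableau with every b_i >= 0.
    Rows hold a_i1..a_im, b_i; the last row holds c_1..c_m, d.
    Returns the optimal tableau, or None if the objective is unbounded."""
    T = [[Fraction(x) for x in row] for row in T]
    n, m = len(T) - 1, len(T[0]) - 1
    while True:
        # Optimal once every c_j <= 0.
        pivot_cols = [j for j in range(m) if T[n][j] > 0]
        if not pivot_cols:
            return T
        j = pivot_cols[0]
        # Minimum-ratio rule over the positive entries of column j.
        rows = [i for i in range(n) if T[i][j] > 0]
        if not rows:
            return None  # c_j > 0 but no positive a_ij: unbounded objective
        i = min(rows, key=lambda r: T[r][m] / T[r][j])
        p = T[i][j]
        # Tucker pivot on (i, j).
        T = [[1 / p if (r, k) == (i, j)
              else T[i][k] / p if r == i
              else -T[r][j] / p if k == j
              else T[r][k] - T[r][j] * T[i][k] / p
              for k in range(m + 1)] for r in range(n + 1)]
```

For the problem of maximizing \(f = x_1+x_2\) subject to \(x_1+2x_2\leq 4\) and \(3x_1+x_2\leq 6\text{,}\) this terminates with \(d = -\frac{14}{5}\text{,}\) i.e. optimal value \(f = \frac{14}{5}\) at \(x_1 = \frac{8}{5}\text{,}\) \(x_2 = \frac{6}{5}\text{.}\)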
Figure 2.5.2. Pivot transformations and the simplex algorithm.
Note that a feasible region may be unbounded, which we may detect by seeing a column \(j\) in which each \(a_{ij} \leq 0\) while each \(b_i\geq 0\text{.}\) However, this does not mean the objective function is unbounded; that occurs only when we also have \(c_j > 0\text{,}\) so that increasing \(x_j\) increases \(f\) without bound while preserving feasibility. Note that this condition is sufficient but not necessary.
Figure 2.5.3. Unbounded regions and objective functions.
Similarly, whenever some \(b_i < 0\text{,}\) the basic solution is not feasible, but the problem need not be infeasible. The problem is certainly infeasible (i.e. the feasible region is empty) when \(a_{ij}\geq0\) for each \(a_{ij}\) in that row, since then \(a_{i1}x_1+\cdots+a_{im}x_m \geq 0 > b_i\) cannot be satisfied by any \(x_j \geq 0\text{.}\) This condition is again sufficient but not necessary.
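These two sufficient conditions can be sketched as boolean tests on a tableau. This is an illustration with hypothetical helper names (`unbounded_column`, `infeasible_row`), not code from the text; the tableau layout assumed is rows \(a_{i1},\ldots,a_{im},b_i\) with the objective row last.

```python
def unbounded_column(T, j):
    """Sufficient test for an unbounded objective: assuming every
    b_i >= 0, a column j with every a_ij <= 0 and c_j > 0 lets x_j,
    and hence f, grow without bound."""
    n = len(T) - 1                     # last row is the objective row
    return T[n][j] > 0 and all(T[i][j] <= 0 for i in range(n))

def infeasible_row(T, i):
    """Sufficient test for an empty feasible region: if b_i < 0 while
    every a_ij in row i is nonnegative, then
    a_i1 x_1 + ... + a_im x_m <= b_i < 0 is impossible for x >= 0."""
    m = len(T[0]) - 1                  # last column holds the b_i
    return T[i][m] < 0 and all(T[i][j] >= 0 for j in range(m))
```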
Figure 2.5.4. Infeasible basic solutions and feasible regions.
Finally, it is possible to pivot from basic solution to basic solution, represented as the intersection of different hyperplanes, without actually changing the point in \(\mathbb{R}^m\text{,}\) a phenomenon called cycling. We introduced and proved Bland’s Anticycling Theorem (Theorem 2.3.5), which shows that by ordering the variables and always using the first eligible variable to break any ties, we may avoid this issue.