
Section 1.4 Summary of Chapter 1

We introduced some basic examples of linear optimization problems in Activity 1.2.1 and Activity 1.2.2. In these examples, we were able to analyze the problems graphically, show that they attain optimality, and observe that the optimal solutions occur on the boundary of the feasible region. This idea is critical to the development of our theory.
To show how this result generalizes, we introduced the notion of convexity in Definition 1.3.4. A convex set is a set which contains every line segment between points in the set. We then showed that a half-space is convex, that an intersection of half-spaces is convex, and thus that the feasible region of a linear optimization problem is always convex.
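For instance, the convexity of a half-space \(H=\{\mathbf{x} : \mathbf{a}\cdot\mathbf{x} \le b\}\) follows from a one-line computation: for \(\mathbf{x}, \mathbf{y}\in H\) and \(t\in[0,1]\text{,}\)
\begin{equation*} \mathbf{a}\cdot\bigl(t\mathbf{x}+(1-t)\mathbf{y}\bigr) = t\,(\mathbf{a}\cdot\mathbf{x}) + (1-t)\,(\mathbf{a}\cdot\mathbf{y}) \le tb + (1-t)b = b\text{,} \end{equation*}
so every point on the segment between \(\mathbf{x}\) and \(\mathbf{y}\) also lies in \(H\text{.}\)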
Note then that for a linear function \(f(x_1, x_2, \ldots, x_n)=\displaystyle \sum_{j=1}^n c_j x_j\text{,}\) and for any output \(k\) of \(f\text{,}\) the equation
\begin{equation*} \displaystyle \sum_{j=1}^n c_j x_j = k \end{equation*}
forms a hyperplane in \(\mathbb{R}^n\text{,}\) and for any \(\mathbf{x}'\) lying on this hyperplane, \(f(\mathbf{x}')=k\) as well. We may increase the value of \(k\) by moving in one of the two directions orthogonal to this hyperplane, until we reach the boundary of the feasible region. But even then, we can still increase the value of \(k\) by moving along the boundary until we either arrive at a subset of the boundary on which \(f\) is constant, or at a corner point, also called an extreme point (Definition 1.3.18).
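To see why moving orthogonally to the hyperplane increases \(k\text{,}\) note that for a point \(\mathbf{x}'\) with \(f(\mathbf{x}')=k\) and nonzero coefficient vector \(\mathbf{c}=(c_1,\ldots,c_n)\text{,}\) stepping a small amount \(\varepsilon>0\) in the direction of \(\mathbf{c}\) gives
\begin{equation*} f(\mathbf{x}'+\varepsilon\mathbf{c}) = \sum_{j=1}^n c_j\bigl(x'_j + \varepsilon c_j\bigr) = k + \varepsilon\sum_{j=1}^n c_j^2 > k\text{,} \end{equation*}
so the objective strictly increases until a constraint prevents further movement in that direction.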
Thus, if a linear optimization problem admits a maximum, it must attain it at some extreme point. Each extreme point is an intersection of hyperplanes, but in \(\mathbb{R}^n\) with \(k\) additional bounding hyperplanes, there could be as many as \({n+k\choose n}\) such intersections, not all of which are feasible, and not all of which are optimal. There is also no guarantee that a linear optimization problem admits an optimal solution at all. This motivates us to find a more systematic approach to obtaining optimal solutions.
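The brute-force idea above can be sketched in a few lines of Python for a two-variable problem. The problem data below are a hypothetical example (not one of the text's activities): we enumerate every pairwise intersection of the bounding lines, discard the infeasible ones, and keep the feasible intersection with the best objective value.

```python
from itertools import combinations
import numpy as np

# Hypothetical example problem (not from the text):
#   maximize 3x + 2y
#   subject to x + y <= 4, x <= 3, x >= 0, y >= 0
# Each constraint is written as a row of A dotted with (x, y) being <= b.
A = np.array([[1.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 0.0, 0.0])
c = np.array([3.0, 2.0])  # objective coefficients

# Enumerate all C(4, 2) = 6 pairwise intersections of the boundary lines.
candidates = []
for i, j in combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue  # parallel boundary lines: no intersection point
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):  # keep only feasible intersections
        candidates.append(x)

# The maximum over the feasible extreme points.
best = max(candidates, key=lambda x: float(c @ x))
print(best, float(c @ best))
```

This enumeration grows combinatorially with the number of constraints, which is exactly why a more systematic approach is needed.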
Figure 1.4.1. A summary of Chapter 1.