Given \(\mathbf{p}, \mathbf{q}\in \mathbb{R}^m\text{,}\) we call the set \(\{t\cdot \mathbf{p}+(1-t)\cdot \mathbf{q} : t\in [0,1]\}\) the line segment between \(\mathbf{p}\) and \(\mathbf{q}\text{.}\)
Let \(S\subseteq \mathbb{R}^m\text{.}\) We say that \(S\) is convex if, given any \(\mathbf{p}, \mathbf{q}\in S\text{,}\) \(S\) also contains the line segment between \(\mathbf{p}\) and \(\mathbf{q}\text{.}\)
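These two definitions can be illustrated numerically. The sketch below (our own example, not part of the text: the set tested is the closed unit disc in \(\mathbb{R}^2\)) samples points on the segment between two members of the disc and checks that each sample stays inside, consistent with convexity:

```python
# Numeric illustration of convexity: sample the line segment between
# two points of the closed unit disc and check every sample lies in it.
import math

def segment_point(p, q, t):
    """Return t*p + (1-t)*q componentwise, for t in [0, 1]."""
    return tuple(t * pi + (1 - t) * qi for pi, qi in zip(p, q))

def in_unit_disc(x):
    # closed unit disc: points with Euclidean norm at most 1
    return math.hypot(*x) <= 1.0

p, q = (0.6, 0.0), (0.0, 0.8)   # both points lie inside the disc
samples = [segment_point(p, q, k / 10) for k in range(11)]
assert all(in_unit_disc(x) for x in samples)
```

Sampling is only evidence, not a proof; the exercises that follow carry out the actual argument for half-spaces.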
Let \(S\subseteq \mathbb{R}^m\) be defined by \(S:=\{\mathbf{x}\in \mathbb{R}^m: a_1x_1+a_2x_2+\cdots + a_mx_m\leq b\}\) for some \(a_i, b\in \mathbb{R}\) (i.e. \(S\) is a half-space of \(\mathbb{R}^m\)).
Show that for any \(\mathbf{x}=(x_1, x_2, \ldots, x_m)\) and \(\mathbf{y} = (y_1, y_2, \ldots, y_m)\text{,}\) we have \(f(\mathbf{x}+\mathbf{y}) = f(\mathbf{x})+f(\mathbf{y})\text{,}\) where \(f(\mathbf{x}) = a_1x_1+a_2x_2+\cdots+a_mx_m\text{.}\)
Let \(\mathbf{x}, \mathbf{y}\in S\text{;}\) that is, there are \(k_x, k_y\leq b\) such that \(f(\mathbf{x})=a_1x_1+a_2x_2+\cdots+a_mx_m = k_x\) and \(f(\mathbf{y})= a_1y_1+a_2y_2+\cdots+a_my_m =k_y\text{.}\) Show that \(f(t\mathbf{x}+(1-t)\mathbf{y})\leq b\) for \(t\in [0,1]\text{.}\)
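The computation this exercise asks for can be previewed numerically. In the sketch below the coefficients \(a\text{,}\) bound \(b\text{,}\) and points are arbitrary choices of ours; the key step, shown in the comment, is that linearity gives \(f(t\mathbf{x}+(1-t)\mathbf{y}) = tf(\mathbf{x})+(1-t)f(\mathbf{y})\leq tb+(1-t)b = b\):

```python
# Check numerically that a half-space {x : a.x <= b} contains the
# segment between any two of its points.
def f(a, x):
    # the linear function f(x) = a_1 x_1 + ... + a_m x_m
    return sum(ai * xi for ai, xi in zip(a, x))

a, b = (2.0, -1.0, 3.0), 5.0
x = (1.0, 0.0, 1.0)   # f(a, x) = 5  <= b, so x is in S
y = (0.0, 2.0, 0.0)   # f(a, y) = -2 <= b, so y is in S
assert f(a, x) <= b and f(a, y) <= b

for k in range(11):
    t = k / 10
    z = tuple(t * xi + (1 - t) * yi for xi, yi in zip(x, y))
    # linearity: f(z) = t*f(x) + (1-t)*f(y) <= t*b + (1-t)*b = b
    assert abs(f(a, z) - (t * f(a, x) + (1 - t) * f(a, y))) < 1e-9
    assert f(a, z) <= b + 1e-9
```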
Let \(S\subseteq \mathbb{R}^2\) be the feasible region of a canonical maximization linear programming problem, and let \(f(x_1, x_2) = 2x_1+3x_2\) be the objective function.
Let \(\mathbf{x}\in S\) be such that there is an \(r > 0\) with \(B(\mathbf{x}, r)\subseteq S\) (the open ball of radius \(r\) centered at \(\mathbf{x}\)). Explain why \(\mathbf{x}\) cannot be a maximizer of \(f\text{.}\)
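The idea behind this exercise can be illustrated with a numeric sketch (the interior point and radius below are invented for illustration, and \(f\) is the objective above): stepping a small distance from \(\mathbf{x}\) in the direction of the coefficient vector of \(f\) stays inside \(B(\mathbf{x}, r)\) yet strictly increases \(f\text{,}\) so \(\mathbf{x}\) cannot be a maximizer.

```python
import math

def f(x):
    # the objective function f(x1, x2) = 2*x1 + 3*x2
    return 2 * x[0] + 3 * x[1]

c = (2.0, 3.0)      # coefficient (gradient) vector of f
x = (1.0, 1.0)      # hypothetical interior point with B(x, r) in S
r = 0.5
norm = math.hypot(*c)
# step half of r in the direction of c (normalized)
step = tuple(xi + (r / 2) * ci / norm for xi, ci in zip(x, c))
# the new point stays inside the ball B(x, r) ...
assert math.hypot(step[0] - x[0], step[1] - x[1]) < r
# ... yet strictly increases the objective, so x is not a maximizer
assert f(step) > f(x)
```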
Let \(S\subseteq \mathbb{R}^m\) be a convex set. We say \(\mathbf{e}\in S\) is an extreme point of \(S\) if there are no distinct \(\mathbf{x}, \mathbf{y}\in S\) so that \(\mathbf{e}\in \{t\cdot \mathbf{x}+(1-t)\cdot \mathbf{y} : t\in (0,1)\}\text{.}\)
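As a concrete illustration (the unit square is our example, not the text's): an edge midpoint of the unit square is not extreme, since it lies strictly between two distinct points of the square, while a corner admits no such representation. The brute-force sketch below only searches a coarse grid, so the negative check is evidence rather than a proof:

```python
# Search a grid of the unit square for witnesses that a point lies
# strictly between two distinct points of the square.
from itertools import product

grid = [(i / 4, j / 4) for i, j in product(range(5), repeat=2)]

def proper_combinations(x, y, steps=9):
    # convex combinations with parameter t strictly inside (0, 1)
    return [tuple(t * a + (1 - t) * b for a, b in zip(x, y))
            for t in (k / (steps + 1) for k in range(1, steps + 1))]

def witnessed_non_extreme(e):
    return any(e in proper_combinations(x, y)
               for x in grid for y in grid if x != y)

# the midpoint of the bottom edge lies between (0,0) and (1,0) ...
assert witnessed_non_extreme((0.5, 0.0))
# ... but no grid pair witnesses the corner (0,0) as a combination
assert not witnessed_non_extreme((0.0, 0.0))
```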
If \(S\subseteq \mathbb{R}^m\) is a closed and bounded region and \(f:S\to \mathbb{R}\) is a continuous function, then there are \(\mathbf{x}_1, \mathbf{x}_2\in S\) such that \(f(\mathbf{x}_1)\geq f(\mathbf{x})\) and \(f(\mathbf{x}_2)\leq f(\mathbf{x})\) for every \(\mathbf{x}\in S\text{.}\)
(We will assume without proof that linear functions \(f:\mathbb{R}^m\to \mathbb{R}\) are continuous, and that the feasible region of a linear optimization problem is always closed.)
If \(S\) is the feasible region of a canonical problem and is bounded, then \(S\) contains an optimal solution which is an extreme point of \(S\text{.}\)
If the problem is a maximization problem and if there is an \(M\) so that \(f(s)\leq M\) for all \(s\in S\text{,}\) then \(S\) contains an optimal solution which is an extreme point of \(S\text{.}\)
If the problem is a minimization problem and if there is an \(M\) so that \(f(s)\geq M\) for all \(s\in S\text{,}\) then \(S\) contains an optimal solution which is an extreme point of \(S\text{.}\)
Let \(\mathbf{x}_1, \mathbf{x}_2 \in S\) be optimal solutions of a canonical linear programming problem, giving optimal value \(f(\mathbf{x}_1) = f^* =f(\mathbf{x}_2)\text{.}\) Show that for any \(t\in [0,1]\text{,}\) \(f(t\mathbf{x}_1+(1-t)\mathbf{x}_2)=f^*\) and that \(t\mathbf{x}_1+(1-t)\mathbf{x}_2\in S\text{.}\)
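A numeric sketch of the first claim (the linear objective and the two equal-value points below are invented for illustration): every convex combination of two points where a linear \(f\) takes the same value \(f^*\) also takes the value \(f^*\text{.}\)

```python
def f(x):
    # a sample linear objective f(x1, x2) = 2*x1 + 3*x2
    return 2 * x[0] + 3 * x[1]

x1, x2 = (3.0, 0.0), (0.0, 2.0)   # both give the value f* = 6
f_star = f(x1)
assert f(x2) == f_star

for k in range(11):
    t = k / 10
    z = (t * x1[0] + (1 - t) * x2[0], t * x1[1] + (1 - t) * x2[1])
    # by linearity, f(z) = t*f* + (1-t)*f* = f*
    assert abs(f(z) - f_star) < 1e-9
```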
Let \(f(x,y)=x\) be the objective function for a canonical maximization problem. Find a set of constraints so that the feasible region is unbounded but the problem still has an optimal solution. Why doesn’t this contradict Theorem 1.3.22?
Suppose a canonical linear programming problem in \(\mathbb{R}^m\) is bounded by the usual \(m\) hyperplanes corresponding to \(x_i\geq 0\) as well as \(k\) additional hyperplanes. How many potential points of intersection could there be?
So for a canonical linear programming problem in \(\mathbb{R}^{10}\) bounded by an additional 15 hyperplanes, how many potential extreme points are there?
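One natural reading of this count (our interpretation, not stated in the text): each candidate point of intersection comes from choosing \(m\) of the \(m+k\) hyperplanes, giving \(\binom{m+k}{m}\) possibilities. A short computation for the case above:

```python
from math import comb

def potential_extreme_points(m, k):
    # choose m of the m + k bounding hyperplanes to intersect
    return comb(m + k, m)

# canonical problem in R^10 with 15 additional hyperplanes
print(potential_extreme_points(10, 15))  # C(25, 10) = 3268760
```

Not every such intersection is a single point, and not every intersection point is feasible, so this is an upper bound on the number of extreme points.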