# Theory of Optimization: Projected (Sub)Gradient Descent

In this post, we continue our analysis of gradient descent. In the previous lecture, we assumed that every function has an $L$-Lipschitz gradient. For general $L$-smooth functions, gradient descent reaches a first-order $\epsilon$-critical point in $O(\frac{1}{\epsilon^2})$ iterations. When the function is convex, we showed that $O(\frac{1}{\epsilon})$ iterations suffice to obtain a solution within $\epsilon$ of the optimum. When the function is strongly convex and smooth, the number of iterations can be reduced to $O(\mathrm{poly}(\log \frac{1}{\epsilon}))$.

However, in the previous post we assumed that the function is smooth, which implies that it has a gradient at every point. In this post, we will first assume that the function is convex but not necessarily smooth. Moreover, while the previous post focused on the unconstrained case, this post also introduces the analysis for constrained minimization.

In this post, we assume that the convex optimization problem has the following form:

$$\min_{x\in\mathcal X} f(x).$$

In the problem, $\mathcal X\subseteq \mathbb R^n$ is the constraint set, which could be all of $\mathbb R^n$.

Subgradient: Let $\mathcal X\subseteq \mathbb R^n$, and $f:\mathcal X\to \mathbb R$. Then $g\in\mathbb R^n$ is a subgradient of $f$ at $x\in\mathcal X$ if for any $y\in\mathcal X$, one has

$$f(y) \ge f(x) + g^\top (y - x).$$

We use $\partial f(x)$ to denote the set of subgradients at $x$, i.e.

$$\partial f(x) = \left\{g\in\mathbb R^n : f(y) \ge f(x) + g^\top (y - x)\ \text{for all } y\in\mathcal X\right\}.$$

Note that if $f$ is convex and differentiable at a point $x$, then $\partial f(x) = \{\nabla f(x)\}$. So the notion of subgradient is compatible with that of the usual gradient.

We will also introduce the projection operator $\Pi_{\mathcal X}$ on $\mathcal X$, defined by

$$\Pi_{\mathcal X}(y) = \operatorname*{argmin}_{x\in\mathcal X} ||x - y||.$$

We have the following lemma for the projection operator.

Let $x\in\mathcal X$ and $y\in\mathbb R^n$, then

$$(\Pi_{\mathcal X}(y) - x)^\top (\Pi_{\mathcal X}(y) - y) \le 0,$$

which also implies $||\Pi_{\mathcal X}(y) - x||^2 + ||y-\Pi_{\mathcal X}(y)||^2 \le ||y-x||^2.$
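As a quick sanity check, both inequalities of the lemma can be verified numerically for the Euclidean ball, where the projection has a closed form. The ball instance and the specific points below are illustrative assumptions, not part of the post:

```python
import numpy as np

def project_ball(y, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius}."""
    norm = np.linalg.norm(y)
    return y if norm <= radius else y * (radius / norm)

rng = np.random.default_rng(0)
y = 3.0 * rng.normal(size=5)            # a point typically outside the ball
x = project_ball(rng.normal(size=5))    # an arbitrary point of the constraint set
p = project_ball(y)                     # Pi_X(y)

# (Pi_X(y) - x)^T (Pi_X(y) - y) <= 0
first = (p - x) @ (p - y)
# ||Pi_X(y) - x||^2 + ||y - Pi_X(y)||^2 <= ||y - x||^2
second = np.sum((p - x) ** 2) + np.sum((y - p) ** 2) - np.sum((y - x) ** 2)
```

Both quantities are nonpositive for any convex set, which is exactly the first-order optimality condition of the projection.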

Then, we will introduce the projected (sub)gradient descent algorithm. The algorithm works as follows:

1. For $t = 1,2,\dots$
2. $y^{(t+1)} = x^{(t)} - \eta g_t, g_t\in\partial f(x^{(t)})$ and $x^{(t+1)} = \Pi_{\mathcal X}(y^{(t+1)})$
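The two-line iteration above can be sketched in a few lines. The $\ell_1$ objective over the unit ball below, and the averaging of the iterates, are illustrative assumptions chosen so that the minimizer $(1, 0)$ and optimal value $1$ are known in closed form:

```python
import numpy as np

def projected_subgradient_descent(subgrad, project, x0, eta, t):
    """y^{(s+1)} = x^{(s)} - eta * g_s,  x^{(s+1)} = Pi_X(y^{(s+1)}).
    Returns the average of x^{(1)}, ..., x^{(t)}."""
    x = np.asarray(x0, dtype=float)
    total = x.copy()
    for _ in range(t - 1):
        g = subgrad(x)
        x = project(x - eta * g)
        total += x
    return total / t

# Assumed instance: minimize f(x) = ||x - c||_1 over the unit Euclidean ball.
c = np.array([2.0, 0.0])
subgrad = lambda x: np.sign(x - c)      # a valid subgradient of the l1 distance
project = lambda y: y if np.linalg.norm(y) <= 1 else y / np.linalg.norm(y)

t, L, R = 2000, np.sqrt(2), 2.0         # ||g|| <= sqrt(n) = L; ||x^{(1)} - x*|| <= R
x_bar = projected_subgradient_descent(subgrad, project, np.zeros(2),
                                      R / (L * np.sqrt(t)), t)
gap = np.abs(x_bar - c).sum() - 1.0     # f(x_bar) - f(x*), since f* = 1 at (1, 0)
```

With the step size $\eta = R/(L\sqrt t)$ used here, the analysis below bounds `gap` by $RL/\sqrt t \approx 0.063$.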

### Analysis for Lipschitz Functions

Suppose $f$ is convex. Furthermore, suppose that for any $x\in\mathcal X, g\in\partial f(x)$, we have $||g|| \le L$, and that $||x^{(1)} - x^*|| \le R$. Then the projected subgradient method with $\eta = \frac{R}{L\sqrt{t}}$ satisfies

$$f\left(\frac{1}{t}\sum_{s=1}^{t} x^{(s)}\right) - f(x^*) \le \frac{RL}{\sqrt{t}}.$$

Proof: By the convexity of $f$ and $x^{(s)} - y^{(s+1)} = \eta g_s$, we have

$$f(x^{(s)}) - f(x^*) \le g_s^\top (x^{(s)} - x^*) = \frac{1}{\eta}(x^{(s)} - y^{(s+1)})^\top (x^{(s)} - x^*) = \frac{1}{2\eta}\left(\eta^2 ||g_s||^2 + ||x^{(s)} - x^*||^2 - ||y^{(s+1)} - x^*||^2\right).$$

Now, from $||g_s|| \le L$ and $||x^{*} - y^{(s+1)}||^2 \ge ||x^{*} - x^{(s+1)}||^2$ (by the projection lemma), we have

$$f(x^{(s)}) - f(x^*) \le \frac{1}{2\eta}\left(||x^{(s)} - x^*||^2 - ||x^{(s+1)} - x^*||^2\right) + \frac{\eta L^2}{2}.$$

Summing over $s = 1,\dots,t$, the first terms telescope, so

$$\frac{1}{t}\sum_{s=1}^{t}\left(f(x^{(s)}) - f(x^*)\right) \le \frac{R^2}{2\eta t} + \frac{\eta L^2}{2} = \frac{RL}{\sqrt{t}},$$

and applying Jensen's inequality to the average of the iterates completes the proof.

### Analysis for Smooth Functions (Lipschitz Gradient)

In this section, we assume that $f$ is convex and $\beta$-smooth. In the previous post, when the optimization problem is unconstrained, we derived the following inequality (with the parameters adapted to the notation here) for the step size $\eta = \frac{1}{\beta}$:

$$f(x^{(s+1)}) - f(x^{(s)}) \le -\frac{1}{2\beta}||\nabla f(x^{(s)})||^2.$$

However, this inequality may not hold in the constrained case, since we need to apply the projection operation. The next lemma identifies the 'right' quantity to measure the progress of the descent procedure.

Let $x,y\in\mathcal X$, $x^{+} = \Pi_{\mathcal X}\left(x-\frac{1}{\beta}\nabla f(x)\right)$ and $g_{\mathcal X}(x) = \beta (x-x^{+})$. Then the following holds true:

$$f(x^{+}) - f(y) \le g_{\mathcal X}(x)^\top (x - y) - \frac{1}{2\beta}||g_{\mathcal X}(x)||^2.$$

Proof: We first observe that

$$\nabla f(x)^\top (x^{+} - y) \le g_{\mathcal X}(x)^\top (x^{+} - y),$$

since the above inequality is equivalent to

$$\left(x^{+} - \left(x - \tfrac{1}{\beta}\nabla f(x)\right)\right)^\top (x^{+} - y) \le 0,$$

which is exactly the projection lemma applied to the point $x - \frac{1}{\beta}\nabla f(x)$. Then, using $\beta$-smoothness and convexity, we have

$$\begin{aligned} f(x^{+}) - f(y) &= f(x^{+}) - f(x) + f(x) - f(y)\\ &\le \nabla f(x)^\top (x^{+} - x) + \frac{\beta}{2}||x^{+} - x||^2 + \nabla f(x)^\top (x - y)\\ &= \nabla f(x)^\top (x^{+} - y) + \frac{1}{2\beta}||g_{\mathcal X}(x)||^2\\ &\le g_{\mathcal X}(x)^\top (x^{+} - y) + \frac{1}{2\beta}||g_{\mathcal X}(x)||^2\\ &= g_{\mathcal X}(x)^\top (x - y) - \frac{1}{2\beta}||g_{\mathcal X}(x)||^2, \end{aligned}$$

where the last step uses $g_{\mathcal X}(x)^\top (x^{+} - x) = -\frac{1}{\beta}||g_{\mathcal X}(x)||^2$.
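The lemma can be sanity-checked numerically. The $\beta$-smooth convex quadratic over the unit ball below is an assumed instance chosen for illustration:

```python
import numpy as np

# Assumed instance: f(x) = (1/2) x^T A x - b^T x over the unit ball,
# with beta = largest eigenvalue of A = 4.
A = np.diag([1.0, 4.0])
b = np.array([3.0, 2.0])
beta = 4.0
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
project = lambda y: y if np.linalg.norm(y) <= 1 else y / np.linalg.norm(y)

rng = np.random.default_rng(1)
holds = True
for _ in range(100):
    x, y = project(rng.normal(size=2)), project(rng.normal(size=2))
    x_plus = project(x - grad(x) / beta)
    g_X = beta * (x - x_plus)           # the gradient mapping g_X(x)
    lhs = f(x_plus) - f(y)
    rhs = g_X @ (x - y) - g_X @ g_X / (2 * beta)
    holds = holds and lhs <= rhs + 1e-10
```

Note that when $\mathcal X = \mathbb R^n$, the mapping $g_{\mathcal X}(x)$ reduces to $\nabla f(x)$, recovering the unconstrained inequality.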

Now we can show the following theorem for the convergence of PGD.

Let $f$ be convex and $\beta$-smooth on $\mathcal X$. Then projected gradient descent with $\eta = \frac{1}{\beta}$ satisfies

$$f(x^{(t)}) - f(x^*) \le \frac{3\beta ||x^{(1)} - x^*||^2 + f(x^{(1)}) - f(x^*)}{t}.$$

Proof: From the previous lemma with $x = y = x^{(s)}$, we have

$$f(x^{(s+1)}) - f(x^{(s)}) \le -\frac{1}{2\beta}||g_{\mathcal X}(x^{(s)})||^2,$$

and with $x = x^{(s)}, y = x^*$,

$$f(x^{(s+1)}) - f(x^*) \le g_{\mathcal X}(x^{(s)})^\top (x^{(s)} - x^*) - \frac{1}{2\beta}||g_{\mathcal X}(x^{(s)})||^2 \le ||g_{\mathcal X}(x^{(s)})||\cdot ||x^{(s)} - x^*||.$$

Then we show that $||x^{(s)} - x^*||$ is decreasing in $s$. From the previous lemma with $x = x^{(s)}, y = x^*$, since $f(x^{(s+1)}) - f(x^*) \ge 0$, we can also find

$$g_{\mathcal X}(x^{(s)})^\top (x^{(s)} - x^*) \ge \frac{1}{2\beta}||g_{\mathcal X}(x^{(s)})||^2,$$

then, using $x^{(s+1)} = x^{(s)} - \frac{1}{\beta}g_{\mathcal X}(x^{(s)})$, we have

$$||x^{(s+1)} - x^*||^2 = ||x^{(s)} - x^*||^2 - \frac{2}{\beta} g_{\mathcal X}(x^{(s)})^\top (x^{(s)} - x^*) + \frac{1}{\beta^2}||g_{\mathcal X}(x^{(s)})||^2 \le ||x^{(s)} - x^*||^2.$$

Let $\epsilon_s = f(x^{(s)}) - f(x^*)$. Combining the inequalities above (and bounding $||x^{(s)} - x^*|| \le ||x^{(1)} - x^*||$), we have

$$\epsilon_{s+1} \le \epsilon_s - \frac{1}{2\beta}||g_{\mathcal X}(x^{(s)})||^2 \quad\text{and}\quad \epsilon_{s+1} \le ||g_{\mathcal X}(x^{(s)})||\cdot ||x^{(1)} - x^*||,$$

so $\epsilon_{s+1} \le \epsilon_s - \frac{\epsilon_{s+1}^2}{2\beta ||x^{(1)} - x^*||^2}$. Then we can finish the proof by simple induction.
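Two of the guarantees in this proof are easy to observe numerically: the monotone decrease of $f$ along the iterates (the lemma with $x = y = x^{(s)}$). The quadratic instance over the unit ball below is an assumption for illustration:

```python
import numpy as np

# Assumed instance: a beta-smooth convex quadratic over the unit ball.
A = np.diag([1.0, 4.0])                 # eigenvalues 1 and 4, so beta = 4
b = np.array([3.0, 2.0])
beta = 4.0
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
project = lambda y: y if np.linalg.norm(y) <= 1 else y / np.linalg.norm(y)

x = np.zeros(2)
values = [f(x)]
for _ in range(200):
    x = project(x - grad(x) / beta)     # PGD step with eta = 1/beta
    values.append(f(x))

# f(x^{(s)}) never increases, as the lemma with x = y = x^{(s)} guarantees
monotone = all(v2 <= v1 + 1e-12 for v1, v2 in zip(values, values[1:]))
```

Note that plain gradient descent on a constrained problem can overshoot the feasible set; it is the projection step that keeps the descent property intact.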

### Analysis for Strongly Convex Functions

In this section, we consider the projected gradient descent with time-varying step size $(\eta_t)_{t\ge 1}$, that is

1. For $t = 1,2,\dots$
2. $y^{(t+1)} = x^{(t)} - \eta_t g_t, g_t\in\partial f(x^{(t)})$ and $x^{(t+1)} = \Pi_{\mathcal X}(y^{(t+1)})$
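A sketch of this time-varying-step variant, on an assumed strongly convex, Lipschitz instance with a known minimizer so that the suboptimality of the weighted average of the iterates can be checked:

```python
import numpy as np

# Assumed instance: f(x) = ||x - c||^2 over the unit ball, so alpha = 2, and on
# the ball ||grad f(x)|| = 2||x - c|| <= 6 = L for c = (2, 0). The minimizer is
# x* = (1, 0) with f* = 1.
alpha, L, t = 2.0, 6.0, 1000
c = np.array([2.0, 0.0])
grad = lambda x: 2 * (x - c)
project = lambda y: y if np.linalg.norm(y) <= 1 else y / np.linalg.norm(y)

x = np.zeros(2)
weighted_sum = np.zeros(2)
for s in range(1, t + 1):
    weighted_sum += 2 * s * x                  # weight x^{(s)} by 2s / (t(t+1))
    eta_s = 2.0 / (alpha * (s + 1))            # time-varying step size
    x = project(x - eta_s * grad(x))
x_bar = weighted_sum / (t * (t + 1))

gap = np.sum((x_bar - c) ** 2) - 1.0           # f(x_bar) - f(x*)
```

The weighting $\frac{2s}{t(t+1)}$ matches the weights produced by the analysis below.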

Then we have the following theorem

Let $f$ be $\alpha$-strongly convex and $L$-Lipschitz on $\mathcal X$. Then projected subgradient descent with $\eta_s = \frac{2}{\alpha (s+1)}$ satisfies

$$f\left(\sum_{s=1}^{t} \frac{2s}{t(t+1)}\, x^{(s)}\right) - f(x^*) \le \frac{2L^2}{\alpha (t+1)}.$$

Proof: Similar to the projected subgradient analysis for Lipschitz functions, but now using the $\alpha$-strong convexity of $f$, we have

$$f(x^{(s)}) - f(x^*) \le g_s^\top (x^{(s)} - x^*) - \frac{\alpha}{2}||x^{(s)} - x^*||^2 \le \frac{\eta_s L^2}{2} + \left(\frac{1}{2\eta_s} - \frac{\alpha}{2}\right)||x^{(s)} - x^*||^2 - \frac{1}{2\eta_s}||x^{(s+1)} - x^*||^2.$$

Plugging in $\eta_s = \frac{2}{\alpha(s+1)}$ and multiplying the inequalities by $s$, we have

$$s\left(f(x^{(s)}) - f(x^*)\right) \le \frac{L^2}{\alpha} + \frac{\alpha}{4}\left(s(s-1)||x^{(s)} - x^*||^2 - s(s+1)||x^{(s+1)} - x^*||^2\right).$$

Summing up the above inequalities, the second terms telescope, so

$$\sum_{s=1}^{t} s\left(f(x^{(s)}) - f(x^*)\right) \le \frac{tL^2}{\alpha},$$

and applying Jensen's inequality with the weights $\frac{2s}{t(t+1)}$ completes the proof.

### Analysis for Strongly Convex and Smooth Functions

The key improvement compared with the convex and smooth case is that one can strengthen the previous lemma to

$$f(x^{+}) - f(y) \le g_{\mathcal X}(x)^\top (x - y) - \frac{1}{2\beta}||g_{\mathcal X}(x)||^2 - \frac{\alpha}{2}||x - y||^2.$$

Based on this result, we have the following theorem

Let $f$ be $\alpha$-strongly convex and $\beta$-smooth on $\mathcal X$. Then projected gradient descent with $\eta = \frac{1}{\beta}$ satisfies

$$||x^{(t+1)} - x^*||^2 \le \left(1 - \frac{\alpha}{\beta}\right)^{t} ||x^{(1)} - x^*||^2.$$

Proof: Using the previous inequality with $x = x^{(t)}$ and $y = x^*$, and noting $f(x^{(t+1)}) - f(x^*) \ge 0$, we have

$$g_{\mathcal X}(x^{(t)})^\top (x^{(t)} - x^*) \ge \frac{1}{2\beta}||g_{\mathcal X}(x^{(t)})||^2 + \frac{\alpha}{2}||x^{(t)} - x^*||^2.$$

Therefore,

$$||x^{(t+1)} - x^*||^2 = ||x^{(t)} - x^*||^2 - \frac{2}{\beta} g_{\mathcal X}(x^{(t)})^\top (x^{(t)} - x^*) + \frac{1}{\beta^2}||g_{\mathcal X}(x^{(t)})||^2 \le \left(1 - \frac{\alpha}{\beta}\right)||x^{(t)} - x^*||^2,$$

and iterating the inequality completes the proof.
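The per-step contraction $||x^{(t+1)} - x^*||^2 \le (1 - \frac{\alpha}{\beta})||x^{(t)} - x^*||^2$ can be observed numerically. The quadratic instance and the long-run PGD reference minimizer below are assumptions for illustration:

```python
import numpy as np

# Assumed instance: quadratic with alpha = 1, beta = 4 over the unit ball.
A = np.diag([1.0, 4.0])
b = np.array([3.0, 2.0])
alpha, beta = 1.0, 4.0
grad = lambda x: A @ x - b
project = lambda y: y if np.linalg.norm(y) <= 1 else y / np.linalg.norm(y)

# High-accuracy reference minimizer from a long PGD run.
x_star = np.zeros(2)
for _ in range(2000):
    x_star = project(x_star - grad(x_star) / beta)

# Check the contraction of the squared distance along 40 PGD steps.
x = np.array([-1.0, 0.0])
contracts = True
for _ in range(40):
    x_next = project(x - grad(x) / beta)
    d2 = np.sum((x - x_star) ** 2)
    d2_next = np.sum((x_next - x_star) ** 2)
    contracts = contracts and d2_next <= (1 - alpha / beta) * d2 + 1e-9
    x = x_next
```

The squared distance shrinks by a factor of at least $1 - \alpha/\beta = 0.75$ per step, matching the theorem.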
