In many optimization problems, nonsmooth objective functions often appear smooth on naturally arising manifolds. This observation has led to the development of optimization algorithms that attempt to exploit this smoothness. Many of these algorithms follow the same two-step pattern: first predict a direction of decrease, and second make a correction step to return to the manifold. In this article, we examine some of the theoretical components used in such predictor-corrector methods. We begin our examination under the minimal assumption that the restriction of the function to the manifold is smooth. At the second stage, we add the condition of 'partial smoothness' relative to the manifold. Finally, we examine the case when the function is both 'prox-regular' and partly smooth. In this final setting, we show that the proximal point mapping can be used to return to the manifold, and argue that returning in this manner is preferable to returning via the projection mapping. We finish by developing sufficient conditions for quadratic convergence of predictor-corrector methods using a proximal point correction step. © 2006 Taylor & Francis.
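The two-step pattern described above can be illustrated on a toy problem. The sketch below is not the paper's method: the function f(x) = |x1| + (x2 - 1)^2, the manifold M = {x : x1 = 0}, and the stand-in predictor step are all hypothetical choices made for illustration. It shows how a closed-form proximal point step, acting as the corrector, pulls an off-manifold iterate back onto M (here the soft-threshold in the prox zeroes the x1 component).

```python
import numpy as np

# Illustrative toy problem (not the paper's algorithm):
# f(x) = |x1| + (x2 - 1)^2 is partly smooth relative to the
# manifold M = {x : x1 = 0}; the minimizer (0, 1) lies on M.

def prox(y, lam):
    """Proximal point mapping of f, available in closed form here.

    The |x1| term gives a soft-threshold in the first coordinate;
    the quadratic term gives a simple weighted average in the second.
    """
    x1 = np.sign(y[0]) * max(abs(y[0]) - lam, 0.0)  # soft threshold
    x2 = (y[1] + 2.0 * lam) / (1.0 + 2.0 * lam)
    return np.array([x1, x2])

def predictor(x, step):
    """Predictor: a plain gradient step on the smooth part, used here
    as a stand-in for the Riemannian/Newton predictor of the paper."""
    g = np.array([0.0, 2.0 * (x[1] - 1.0)])  # gradient of (x2 - 1)^2
    return x - step * g

x = np.array([0.3, -0.5])  # start off the manifold M
for _ in range(10):
    x = predictor(x, 0.5)  # predict a direction of decrease
    x = prox(x, 0.4)       # correct: proximal step returns to M

print(x)  # first coordinate is exactly 0: the prox step lands on M
```

Note that the corrector sets x1 to exactly zero rather than merely small, which is the 'identification' behaviour that motivates preferring the proximal point mapping as a correction step.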
Publication status: Published - 1 Oct 2006
- Newton-type methods
- Partly smooth function
- Proximal algorithm
- Riemannian gradient