Gradient descent and fast artificial time integration
1 Department of Computer Science, University of British Columbia, Vancouver, Canada.
2 Department of Mathematics, University of British Columbia, Vancouver, Canada. email@example.com
3 Institute of Pure and Applied Mathematics (IMPA), Rio de Janeiro, Brazil. firstname.lastname@example.org
Abstract. The integration to steady state of many initial value ODEs and PDEs using the forward Euler method can alternatively be considered as gradient descent for an associated minimization problem. Greedy step-size strategies such as steepest descent reach steady state as slowly as forward Euler integration with the best uniform step size. Other, much faster methods using bolder step-size selection exist, however. We investigate several such alternatives from both theoretical and practical points of view. The steepest descent method is also known for the regularizing, or smoothing, effect that its first few steps have for certain inverse problems, amounting to a finite-time regularization. We further investigate whether the faster gradient descent variants retain this property in the context of two applications. When the combination of regularization and accuracy demands more than a dozen or so steepest descent steps, the alternatives offer an advantage, even though (indeed because) the absolute stability limit of forward Euler is carefully yet severely violated.
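To illustrate the phenomenon the abstract describes, here is a minimal sketch (not the paper's experiments) on the model quadratic f(x) = ½ xᵀAx with A symmetric positive definite, for which gradient descent xₖ₊₁ = xₖ − αₖ∇f(xₖ) is forward Euler applied to dx/dt = −Ax. It compares the greedy exact-line-search (steepest descent) step with a "lagged" variant that reuses the step length computed from the previous gradient; the matrix, tolerance, and iteration counts are illustrative choices, not values from the paper.

```python
import numpy as np

def descend(A, x0, rule, tol=1e-8, max_iter=10000):
    """Gradient descent on f(x) = 0.5 x^T A x; return iterations to ||g|| < tol."""
    x = x0.copy()
    g = A @ x                       # gradient of the quadratic
    alpha_prev = None
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return k
        alpha_sd = (g @ g) / (g @ (A @ g))   # exact line-search (steepest descent) step
        if rule == "sd" or alpha_prev is None:
            alpha = alpha_sd                  # greedy: use the current optimal step
        else:
            alpha = alpha_prev                # lagged: reuse the previous step length
        alpha_prev = alpha_sd
        x = x - alpha * g
        g = A @ x
    return max_iter

rng = np.random.default_rng(0)
# Moderately ill-conditioned diagonal test matrix (condition number 100)
A = np.diag(np.logspace(0, 2, 30))
x0 = rng.standard_normal(30)

iters_sd = descend(A, x0, "sd")
iters_lagged = descend(A, x0, "lagged")
print(f"steepest descent:        {iters_sd} iterations")
print(f"lagged steepest descent: {iters_lagged} iterations")
```

On such problems the lagged variant typically needs far fewer iterations, even though individual steps can exceed the forward Euler stability limit 2/λ_max(A) and the residual decreases non-monotonically, which is exactly the trade-off the paper examines.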
Mathematics Subject Classification: 65F10 / 65F50
Key words: Steady state / artificial time / gradient descent / forward Euler / lagged steepest descent / regularization.
© EDP Sciences, SMAI, 2009