class HCL_UMin_lbfgs_d : public HCL_UMin_d

HCL_UMin_lbfgs_d implements the limited memory BFGS algorithm for unconstrained minimization

Inheritance:

HCL_Base -> HCL_UMin_d -> HCL_UMin_lbfgs_d

Public Methods

HCL_UMin_lbfgs_d ( HCL_LineSearch_d * linesearch = NULL, char * fname = NULL )
Usual constructor
Table& Parameters () const
Access to parameter table
virtual HCL_EvaluateFunctional_d& LastEval () const
Returns a reference to the last evaluation object.
virtual void SetScaling ( HCL_LinearOp_d * S, HCL_LinearSolver_d * lsolver )
SetScaling defines a new inner product in terms of a symmetric, positive definite operator S: <x,y> = (x,Sy)
virtual void UnSetScaling ()
UnSetScaling returns the inner product to the default.
int Minimize ( HCL_Functional_d & f, HCL_Vector_d & x )
Minimize attempts to find a local minimizer of the functional f, using x as a starting guess
virtual ostream& Write ( ostream & str ) const
Prints description of the object

Inherited from HCL_UMin_d:

Public

enum Return values for the method Minimize.

PossibleMinimizer
Possible local minimizer
PossibleConvergence
Possible convergence
LineSearchFailed
Line search failed
PossibleDivergence
Possible divergence
IterationLimit
Iteration limit reached
InaccurateGradient
Possible inaccurate gradient calculation

Inherited from HCL_Base:

Public Methods

void IncCount() const
void DecCount() const
int Count() const

Documentation

HCL_UMin_lbfgs_d implements the limited memory BFGS algorithm for unconstrained minimization. This algorithm is due to Nocedal (see, for example, D. C. Liu and J. Nocedal, ``On the limited memory BFGS method for large scale optimization,'' Mathematical Programming 45 (1989), pp. 503-528).

The minimization algorithm is based on the limited memory BFGS inverse Hessian approximation (represented by a class, but hidden from the user), and on a line search algorithm.

The line search is represented by an abstract base class, HCL_LineSearch_d; this allows the algorithm to be executed with different choices of the line search algorithm.

The purpose of the algorithm is to take a functional with gradient and a starting guess in the domain of the functional, and to produce a local minimizer via a descent algorithm. If desired, the calling routine may provide a (symmetric positive definite) linear operator (and a corresponding linear solver, if needed) to be used to change the inner product on the solution space. If these are provided, it is assumed that the functional computes the gradient with respect to the default inner product (i.e. the one provided by the vector class).
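
To make the scaled inner product concrete, here is a small self-contained sketch (not HCL code; the 2x2 operator S and the function names applyS, scaledInner, and scaledGradient are invented for illustration). With a symmetric positive definite operator S, the new inner product is <x,y> = (x,Sy), and the gradient with respect to it is obtained from the default gradient g by solving S g_S = g, which is why SetScaling also takes a linear solver.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec2 = std::array<double, 2>;

// A fixed symmetric positive definite 2x2 operator S (illustrative).
static const double S[2][2] = { {4.0, 1.0}, {1.0, 3.0} };

// y -> S*y
Vec2 applyS(const Vec2& y) {
    return { S[0][0]*y[0] + S[0][1]*y[1],
             S[1][0]*y[0] + S[1][1]*y[1] };
}

// The scaled inner product <x,y> = x^T (S y).
double scaledInner(const Vec2& x, const Vec2& y) {
    Vec2 Sy = applyS(y);
    return x[0]*Sy[0] + x[1]*Sy[1];
}

// Solve S * g_S = g directly (2x2 Cramer's rule stands in for the
// HCL_LinearSolver_d that SetScaling would use).
Vec2 scaledGradient(const Vec2& g) {
    double det = S[0][0]*S[1][1] - S[0][1]*S[1][0];
    return { ( S[1][1]*g[0] - S[0][1]*g[1]) / det,
             (-S[1][0]*g[0] + S[0][0]*g[1]) / det };
}
```

Note that with the identity operator the scaled gradient reduces to the default gradient, so leaving the scaling unset (or calling UnSetScaling) recovers the ordinary algorithm.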

The primary methods of this class are listed above; Minimize does the work, while Parameters, SetScaling, and LastEval configure and query the minimizer.

Use of this minimizer typically involves the following steps: construct the functional to be minimized and a starting guess, construct a line search object (optional), construct the minimizer (optionally reading algorithmic parameters from a file), call Minimize, and inspect the result via LastEval.

Here's an example:

   MyFcnl f;            // MyFcnl must be derived from HCL_Functional_d
   MyVector x( "x0" );  // Assume MyVector has a constructor which
                        // reads from a file
   HCL_LineSearch_DS_d lsearch( "lsearch.dat" );  // Dennis-Schnabel
                                                  // line search
   HCL_UMin_lbfgs_d umin( &lsearch,"umin.dat" );  // Parameters read
                                                  // from "umin.dat"
   umin.Minimize( f,x );  // Search for local minimum
   cout << "Final value of f: " << umin.LastEval().ValueRef() << endl;
   cout << "Final gradient norm: "
        << umin.LastEval().GradientRef().Norm() << endl;
(If the display flag is turned on, the final function value and gradient norm are displayed anyway, so the last two lines are not strictly necessary; they illustrate that the last function value and gradient are stored in case they are needed.)

The algorithmic parameters, with default values in parentheses, are:
int MaxItn
(100) Maximum number of iterations.

double Typf
(1.0) Typical value of the functional near the solution. Setting this makes the gradient stopping tolerance more meaningful.

double TypxNorm
(1.0) Typical norm of the unknown vector x near the solution. Setting this makes the gradient stopping tolerance more meaningful.

double GradTol
(1.0e-2) Gradient tolerance. The algorithm attempts to locate a point where the relative gradient norm (i.e. the norm of the gradient scaled by the size of the vector x and by f(x)) is less than this value.
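
The exact stopping formula is not spelled out here, so the following is only a sketch of a typical Dennis-Schnabel style relative gradient test, showing how Typf and TypxNorm enter (the function name gradientConverged is invented for this illustration):

```cpp
#include <algorithm>
#include <cmath>

// Illustrative only: scale the gradient norm by the size of x and of
// f(x), so that GradTol is a dimensionless, relative tolerance.  Typf
// and TypxNorm guard against x or f(x) being atypically small.
bool gradientConverged(double gradNorm, double xNorm, double fVal,
                       double Typf, double TypxNorm, double GradTol) {
    double relGrad = gradNorm * std::max(xNorm, TypxNorm)
                   / std::max(std::fabs(fVal), Typf);
    return relGrad < GradTol;
}
```

With the defaults Typf = TypxNorm = 1, this reduces to an absolute test on the gradient norm whenever |x| and |f(x)| are of order one.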

double MinStep
(1e-20) Minimum allowable step. If the algorithm takes a (relative) step less than this value in norm, it halts and reports "Possible convergence".

double MaxStep
(1e+20) Maximum allowable step. If the algorithm takes too many consecutive steps of length MaxStep, it assumes that the iterates are diverging and reports "Possible divergence".

int CscMaxLimit
(5) Maximum number of steps of length MaxStep allowed before the algorithm decides that the iterates are diverging.
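
The divergence heuristic described by MaxStep and CscMaxLimit can be sketched as follows (illustrative only; this is not HCL's internal code, and the struct name is invented):

```cpp
// Count consecutive steps that hit the maximum allowed length;
// CscMaxLimit such steps in a row suggests the iterates are diverging.
struct DivergenceCheck {
    double MaxStep;        // maximum allowable step length
    int    CscMaxLimit;    // consecutive max-length steps tolerated
    int    consecutive = 0;

    // Record one step; returns true when the iterates look divergent.
    bool step(double stepNorm) {
        if (stepNorm >= MaxStep) ++consecutive;
        else consecutive = 0;  // any ordinary step resets the counter
        return consecutive >= CscMaxLimit;
    }
};
```

A single short step resets the counter, so only an unbroken run of maximum-length steps triggers the "Possible divergence" report.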

int DispFlag
(0) Display level. This determines how much information should be displayed during the execution of the algorithm. Possible values are: 0 - No output; 1 - Function value and gradient norm after final iteration; 2 - Function value and gradient norm after every iteration.

int DumpFlag
(0) Dump level. This determines how much information should be sent to the dump file during the execution of the algorithm. Possible values are: 0 - No output; 1 - Function value and gradient norm after final iteration; 2 - Function value and gradient norm after every iteration.

char DumpFile[81]
(HCL_UMin_lbfgs.DumpFile) Dump file name.

int DispPrecision
(6) Display precision---the number of digits sent to the screen.

int DumpPrecision
(6) Dump precision---the number of digits sent to the dump file.

int TraceSteps
(0) If nonzero, the iterates are sent to a file using the Write method from the vector class.

char StepFile[81]
(HCL_UMin_lbfgs.StepFile) File name for recording iterates.

double InvHessianScale
(1) Scale for the initial inverse Hessian approximation. The initial inverse Hessian approximation is a multiple of the identity. This scale is adjusted after each iteration using information gained from the step taken; however, its initial value must be provided.

int MaxUpdates
(4) Maximum number of BFGS updates. See paper by Liu and Nocedal for details.

HCL_UMin_lbfgs_d( HCL_LineSearch_d * linesearch = NULL, char * fname = NULL )
Usual constructor
Parameters:
linesearch - Line search object (optional). If no line search is specified, then the Dennis-Schnabel line search (HCL_LineSearch_DS_d) will be used.
fname - Parameter file name or NULL. If given, this file contains algorithmic parameters (such as stopping tolerances), a full list of which is given elsewhere in the documentation of this class. For example, to set the maximum number of iterations to 100, the file should contain a line of the form UMinlbfgs::MaxItn = 100 or, alternatively, UMin::MaxItn = 100. If the line search is not provided, then this file will also be passed to the line search constructor, and so can contain entries such as LineSearch::DispFlag = 1.

Table& Parameters() const
Access to parameter table. Algorithmic parameters can be accessed with Parameters().GetValue( "NAME",val ) or changed with Parameters().PutValue( "NAME",val ).

virtual HCL_EvaluateFunctional_d& LastEval() const
Returns a reference to the last evaluation object.

virtual void SetScaling( HCL_LinearOp_d * S, HCL_LinearSolver_d * lsolver )
SetScaling defines a new inner product in terms of a symmetric, positive definite operator S: <x,y> = (x,Sy)

virtual void UnSetScaling()
UnSetScaling returns the inner product to the default.

int Minimize( HCL_Functional_d & f, HCL_Vector_d & x )
Minimize attempts to find a local minimizer of the functional f, using x as a starting guess. The return value is the termination code, as defined in the base class.

virtual ostream& Write( ostream & str ) const
Prints description of the object


This class has no child classes.



this page has been generated automatically by doc++

(c)opyright by Malte Zöckler, Roland Wunderling
contact: doc++@zib.de