class HCL_UMinNLCG_d : public HCL_UMin_d

HCL_UMinNLCG_d implements the nonlinear conjugate gradient algorithm (Polak-Ribiere form) for unconstrained minimization

Public Methods

HCL_UMinNLCG_d ( HCL_LineSearch_d * linesearch = NULL, char *fname = NULL )
Usual constructor
Table& Parameters () const
Access to parameter table
virtual HCL_EvaluateFunctional_d& LastEval () const
Returns a reference to the last evaluation object.
virtual void SetScaling ( HCL_LinearOp_d * S, HCL_LinearSolver_d * lsolver=NULL )
SetScaling defines a new inner product in terms of a symmetric, positive definite operator S: <x,y> = (x,Sy)
virtual void UnSetScaling ()
UnSetScaling returns the inner product to the default.
int Minimize ( HCL_Functional_d & f, HCL_Vector_d & x0)
Conjugate gradient minimizer
virtual ostream& Write ( ostream & str ) const
Prints description of the object

Public

return codes (TermCode)

Inherited from HCL_UMin_d:

Public

enum TermCode: return values for the method Minimize.

PossibleMinimizer
Possible local minimizer
PossibleConvergence
Possible convergence
LineSearchFailed
Line search failed
PossibleDivergence
Possible divergence
IterationLimit
Iteration limit reached
InaccurateGradient
Possible inaccurate gradient calculation

Inherited from HCL_Base:

Public Methods

void IncCount() const
void DecCount() const
int Count() const

Documentation

HCL_UMinNLCG_d implements the nonlinear conjugate gradient algorithm (Polak-Ribiere form) for unconstrained minimization. For background on the algorithm, see, for example, "Practical Methods of Optimization" (2nd edition) by R. Fletcher, Wiley, 1987.

This algorithm depends on a fairly accurate line search. The line search is represented by an abstract base class, HCL_LineSearch_d; this allows the algorithm to be executed with different choices of the line search algorithm. The default is HCL_LineSearch_Fl_d, which implements Fletcher's line search (see the above reference).

The purpose of the algorithm is to take a functional with gradient and a starting guess in the domain of the functional, and to produce a local minimizer via a descent algorithm.

The primary methods of this class are the constructor, Parameters, LastEval, SetScaling, UnSetScaling, Minimize, and Write, each documented below.

Use of this minimizer typically involves the following steps:

1. Construct a line search object (optional; Fletcher's line search is used by default).
2. Construct the minimizer, optionally supplying the line search and a parameter file.
3. Call Minimize with the functional and the initial guess; the guess is overwritten by the computed solution.
4. If desired, retrieve the final function value and gradient via LastEval.

Here's an example:

   MyFcnl f;  // MyFcnl must be derived from HCL_Functional_d
   MyVector x( "x0" );  // Assume MyVector has a constructor which
                        // reads from a file
   HCL_LineSearch_MT_d lsearch( "lsearch.dat" );  // More'-Thuente
                                                  // line search
   HCL_UMinNLCG_d umin( &lsearch,"umin.dat" );    // Parameters read
                                                  // from "umin.dat"
   umin.Minimize( f,x );  // Search for local minimum
   cout << "Final value of f: " << umin.LastEval().ValueRef() << endl;
   cout << "Final gradient norm: "
        << umin.LastEval().GradientRef().Norm() << endl;
(If the display flag is turned on, the final function value and gradient norm will be displayed anyway, so the last two lines are not necessary. But they illustrate that the last function value and gradient are stored in case they are needed.)
int MaxItn
(100) maximum number of iterations

double Typf
(1.0) Typical value of the functional near the solution. Setting this makes the gradient stopping tolerance more meaningful.

double TypxNorm
(1.0) Typical norm of the unknown vector x near the solution. Setting this makes the gradient stopping tolerance more meaningful.

double GradTol
(1.0e-2) Gradient tolerance. The algorithm attempts to locate a point where the relative gradient norm (i.e. the norm of the gradient scaled by the size of the vector x and by f(x)) is less than this value.
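This page does not spell out the exact scaling. A common convention consistent with the Typf and TypxNorm parameters above (an assumption for illustration, not a statement of the precise test HCL performs) is to stop when

```latex
\frac{\|\nabla f(x)\| \cdot \max(\|x\|,\ \mathrm{TypxNorm})}
     {\max(|f(x)|,\ \mathrm{Typf})} \;\le\; \mathrm{GradTol},
```

which makes the test roughly invariant under rescaling of f and of x.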

double MinStep
(1e-20) Minimum allowable step. If the algorithm takes a (relative) step less than this value in norm, it halts and reports "Possible convergence".

double MaxStep
(1e+20) Maximum allowable step. If the algorithm takes too many consecutive steps of length MaxStep, it assumes that the iterates are diverging and reports "Possible divergence".

int CscMaxLimit
(5) Maximum number of steps of length MaxStep allowed before the algorithm decides that the iterates are diverging

int DumpFlag
(0) Dump level. This determines how much information should be sent to the dump file during the execution of the algorithm. Possible values are: 0 - No output; 1 - Function value and gradient norm after final iteration; 2 - Function value and gradient norm after every iteration.

char DumpFile[81]
(HCL_UMin_lbfgs.DumpFile) Dump file name.

int DispPrecision
(6) Display precision: the number of digits sent to the screen.

int DumpPrecision
(6) Dump precision: the number of digits sent to the dump file.

int TraceSteps
(0) If nonzero, the iterates are sent to a file using the Write method from the vector class

char StepFile[81]
(HCL_UMinNLCG.StepFile) File name for recording iterates.

int TermCode
Termination code.


HCL_UMinNLCG_d( HCL_LineSearch_d * linesearch = NULL, char *fname = NULL )
Usual constructor
Parameters:
linesearch - Linesearch class (optional). If no line search is specified, then Fletcher's line search (HCL_LineSearch_Fl_d) will be used.
fname - Parameter file name or NULL. If given, this file contains algorithmic parameters (such as stopping tolerances), a full list of which is given elsewhere in the documentation of this class. For example, to set the maximum number of iterations to 100, the file should contain a line of the form UMinNLCG::MaxItn = 100 or, alternately, UMin::MaxItn = 100. If the line seach is not provided, then this file will also be passed to the line search constructor, and so can contain entries such as LineSearch::DispFlag = 1.
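For concreteness, a parameter file combining minimizer and line-search settings might look like the following (the parameter names are those listed above; whether the file format admits comments or other syntax is not specified on this page):

```
UMinNLCG::MaxItn = 100
UMinNLCG::GradTol = 1.0e-2
UMinNLCG::DispPrecision = 6
LineSearch::DispFlag = 1
```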

Table& Parameters() const
Access to parameter table. Algorithmic parameters can be accessed with Parameters().GetValue( "NAME",val ) or changed with Parameters().PutValue( "NAME",val ).

virtual HCL_EvaluateFunctional_d& LastEval() const
Returns a reference to the last evaluation object.

virtual void SetScaling( HCL_LinearOp_d * S, HCL_LinearSolver_d * lsolver=NULL )
SetScaling defines a new inner product in terms of a symmetric, positive definite operator S: <x,y> = (x,Sy)
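The role of the linear solver can be sketched as follows (a standard calculation, not spelled out on this page): if g is the gradient with respect to the original inner product, i.e. f'(x)h = (g,h) for all h, then the gradient g_S with respect to the new inner product satisfies <g_S,h> = (S g_S, h) = (g,h) for all h, hence

```latex
g_S = S^{-1} g ,
```

so each gradient computation requires applying the inverse of S, which is presumably what the optional lsolver argument supplies.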

virtual void UnSetScaling()
UnSetScaling returns the inner product to the default.

int Minimize( HCL_Functional_d & f, HCL_Vector_d & x0)
Conjugate gradient minimizer. This algorithm uses the Polak-Ribiere formula to update the search direction (see Fletcher, 1987).
Parameters:
f - the functional to be minimized
x0 - starting point; on successful completion, this holds the computed minimizer
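For reference, the Polak-Ribiere update has the standard form: with g_k = grad f(x_k) and initial direction d_0 = -g_0,

```latex
\beta_k = \frac{\langle g_{k+1},\, g_{k+1} - g_k \rangle}{\langle g_k,\, g_k \rangle},
\qquad
d_{k+1} = -g_{k+1} + \beta_k\, d_k ,
```

where x_{k+1} = x_k + alpha_k d_k and the step length alpha_k is chosen by the line search.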

virtual ostream& Write( ostream & str ) const
Prints description of the object


This class has no child classes.



this page has been generated automatically by doc++

Copyright by Malte Zöckler, Roland Wunderling
contact: doc++@zib.de