RVLUmin::TRGNStep< Scalar, Policy > Class Template Reference

Generic trust region (truncated GN) step.

#include <TRGNAlg.hh>

Inheritance diagram for RVLUmin::TRGNStep< Scalar, Policy >: inherits RVLAlg::Algorithm.

Public Member Functions

void run ()
 here op = b-F(x), i.e. the residual; use the ResidualOp wrapper.
atype getDelta ()
 TRGNStep (Policy const &_pol, OperatorEvaluation< Scalar > &_opeval, atype _eta1, atype _eta2, atype _gamma1, atype _gamma2, atype _minstep, bool &_nostep, atype &_predred, atype &_actred, atype &_jval, atype &_agnrm, atype &_rgnrm, ostream &_str)
 constructor
 ~TRGNStep ()

Detailed Description

template<typename Scalar, typename Policy>
class RVLUmin::TRGNStep< Scalar, Policy >

Generic trust region (truncated GN) step.

Trust region decision parameters are constants, so pass by value.

Operator evaluation, function value, absolute and relative gradient norms, predicted and actual reduction, and trust radius are all altered by this object, so it stores mutable references to these items.
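For orientation, the problem this step addresses can be written out as follows. This is a sketch inferred from the descriptions on this page; the symbols b, F, DF, and the trust radius Delta follow the run() documentation below.

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % Underlying nonlinear least-squares problem:
    \[ \min_{x}\ \tfrac12\,\lVert b - F(x)\rVert^{2} \]
    % One trust-region Gauss-Newton step approximately solves the
    % linearized subproblem within the current trust radius:
    \[ p \approx \operatorname*{arg\,min}_{\lVert p\rVert \le \Delta}
       \tfrac12\,\lVert b - F(x) - DF(x)\,p\rVert^{2},
       \qquad x \leftarrow x + p \ \text{if the step is accepted}. \]
    \end{document}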

Parameters / data members:

Outline of run() method:

Definition at line 57 of file TRGNAlg.hh.


Constructor & Destructor Documentation

template<typename Scalar, typename Policy>
RVLUmin::TRGNStep< Scalar, Policy >::TRGNStep(Policy const & _pol,
    OperatorEvaluation< Scalar > & _opeval,
    atype _eta1,
    atype _eta2,
    atype _gamma1,
    atype _gamma2,
    atype _minstep,
    bool & _nostep,
    atype & _predred,
    atype & _actred,
    atype & _jval,
    atype & _agnrm,
    atype & _rgnrm,
    ostream & _str)

constructor

The step depends on a least-squares solver, chosen by the policy argument; the type of this policy object is passed as the Policy template parameter. For its required characteristics, see the TRGNAlg docs.

Pass an already-initialized operator evaluation to this constructor. This evaluation stores the RHS (opeval.getValue()) and the LinearOp (opeval.getDeriv()) of the G-N least squares problem. The evaluation point is updated as part of the step, so the opeval reference must be mutable. (A construction sketch follows the parameter list below.)

Parameters:
_pol - solver policy; see descriptions in the TRGNAlg docs
_opeval - updated reference; 0.5*getValue().normsq() = objective function
_eta1 - lower G-A parameter, > 0
_eta2 - upper G-A parameter, > eta1
_gamma1 - trust region reduction factor, < 1
_gamma2 - trust region expansion factor, > 1, with gamma1*gamma2 < 1
_minstep - minimum permitted step, as a fraction of the trust radius
_nostep - set true if the step norm was too small relative to the trust radius
_predred - predicted reduction - updated reference
_actred - actual reduction - updated reference
_jval - objective function value - updated reference
_agnrm - absolute gradient norm - updated reference
_rgnrm - gradient norm scaled by the reciprocal of its value at construction - updated reference
_str - verbose output stream
Requirements on parameters:

Definition at line 234 of file TRGNAlg.hh.
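As an illustration of the calling pattern only, here is a hedged sketch built around the signature above. The helper takeOneStep, the residual operator res (standing in for the ResidualOp wrapper mentioned under run()), the vector x, the policy type CGPolicy, and all numerical values are assumptions made for this example; only the TRGNStep argument list itself follows the documented interface.

    #include <TRGNAlg.hh>   // TRGNStep; other RVL/RVLUmin headers assumed available
    #include <ostream>

    // Sketch: res is assumed to implement op = b - F(x) (the "ResidualOp"
    // wrapper mentioned in the run() documentation); CGPolicy is any
    // least-squares solver policy acceptable to TRGNAlg.
    template<typename CGPolicy>
    void takeOneStep(RVL::Operator<double> const & res,
                     RVL::Vector<double> & x,
                     CGPolicy const & pol,
                     std::ostream & str) {

      // Evaluation of the residual operator at the current point:
      // opeval.getValue() = b - F(x), opeval.getDeriv() = its derivative.
      RVL::OperatorEvaluation<double> opeval(res, x);

      // Output slots, updated through the mutable references stored by the step.
      bool nostep = false;
      double predred = 0.0, actred = 0.0, jval = 0.0, agnrm = 0.0, rgnrm = 0.0;

      RVLUmin::TRGNStep<double, CGPolicy>
        step(pol, opeval,
             0.01, 0.9,     // _eta1 < _eta2 (G-A parameters, illustrative values)
             0.5,  1.8,     // _gamma1 < 1 < _gamma2, with gamma1*gamma2 < 1
             0.01,          // _minstep: minimum step as fraction of trust radius
             nostep, predred, actred, jval, agnrm, rgnrm, str);

      step.run();           // take one trust-region Gauss-Newton step
    }

In practice the TRGNAlg driver would wrap such a step in an outer loop with its own stopping criteria; the single run() call above is only meant to show how the arguments bind.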

template<typename Scalar, typename Policy>
RVLUmin::TRGNStep< Scalar, Policy >::~TRGNStep (  ) 

Definition at line 305 of file TRGNAlg.hh.


Member Function Documentation

template<typename Scalar, typename Policy>
void RVLUmin::TRGNStep< Scalar, Policy >::run (  )  [virtual]

Here op = b-F(x), i.e. the residual; use the ResidualOp wrapper.

NOTE: this is the "optimistic" no-copy form - i.e. it assumes that the step is likely to succeed, so the restore-xsave-compute branch is unlikely to be taken, hence not costly. A "pessimistic" form would be its mirror image. The optimal form would involve a deep copy operation on OpEvals, which is certainly a possible future modification.
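Schematically, the optimistic form looks like the sketch below. This is only a paraphrase of the control flow described in this note, filled out with the standard trust-region acceptance test suggested by the constructor parameters; none of the names are taken from the implementation, and predred is the predicted reduction defined by the quadratic model below.

    #include "space.hh"    // RVL::Vector (header name assumed)

    // Schematic of the "optimistic" no-copy form: save the point once, apply
    // the step in place, and only restore on the (assumed unlikely) rejection
    // branch. Eval is any callable returning 0.5*|b - F(x)|^2 at the given point.
    template<typename Eval>
    bool acceptOrRestore(RVL::Vector<double> & x,        // current point, updated in place
                         RVL::Vector<double> const & p,  // trial G-N step
                         Eval objective,
                         double predred,                 // predicted reduction (model below)
                         double eta1, double eta2,       // G-A acceptance parameters
                         double gamma1, double gamma2,   // radius shrink/expand factors
                         double & delta) {               // trust radius, updated in place
      double jold = objective(x);
      RVL::Vector<double> xsave(x.getSpace());
      xsave.copy(x);                    // the single saved copy
      x.linComb(1.0, p);                // optimistic: x += p without a working copy
      double actred = jold - objective(x);
      if (actred < eta1 * predred) {    // too little actual reduction:
        x.copy(xsave);                  //   the restore branch (assumed unlikely)
        delta *= gamma1;                //   shrink the trust radius
        return false;                   //   step rejected
      }
      if (actred > eta2 * predred)
        delta *= gamma2;                // strong agreement: expand the radius
      return true;                      // step accepted
    }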

The quadratic model is

0.5*\|b-F(x)\|^2 - p^Tg(x) + 0.5*p^TH(x)p

where H(x) = DF(x)^T DF(x), and the step p (approximately) satisfies H(x)p = g(x) = DF(x)^T(b-F(x)).

The predicted reduction is

0.5*\|b-F(x)\|^2 - 0.5*p^Tg(x)
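For the record, here is how the quantity above follows from these definitions; this is just a worked reading of the formulas on this page, not additional documentation.

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % With r = b - F(x), g = DF(x)^T r, and H = DF(x)^T DF(x), expanding the
    % linearized least-squares objective gives the quadratic model:
    \[
      \tfrac12\lVert r - DF(x)\,p\rVert^{2}
      = \tfrac12\lVert r\rVert^{2} - p^{T}g + \tfrac12\,p^{T}Hp .
    \]
    % If p (approximately) satisfies the Gauss-Newton equation Hp = g, then
    % p^T H p = p^T g, and the model evaluated at the step collapses to
    \[
      \tfrac12\lVert r\rVert^{2} - \tfrac12\,p^{T}g ,
    \]
    % the quantity quoted above.
    \end{document}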

Implements RVLAlg::Algorithm.

Definition at line 82 of file TRGNAlg.hh.

References RVL::Vector< Scalar >::copy(), RVL::Vector< Scalar >::linComb(), RVLAlg::Terminator::query(), and RVLAlg::Algorithm::run().

template<typename Scalar, typename Policy>
atype RVLUmin::TRGNStep< Scalar, Policy >::getDelta (  ) 

Definition at line 194 of file TRGNAlg.hh.


The documentation for this class was generated from the following file:
TRGNAlg.hh