HCL_LeastSquaresFcnlHess creates least squares objective functions from operators and data
Domain () | Domain space access |
HCL_LeastSquaresFcnlHess_d (HCL_OpDerivAdj_d & Operator, HCL_Vector_d & Data) | Usual constructor for Nonlinear Operator input. |
HCL_LeastSquaresFcnlHess_d (HCL_LinearOpAdj_d & Operator, HCL_Vector_d & Data) | Usual constructor for Linear Operator input. |
HCL_LeastSquaresFcnlHess creates least squares objective functions from operators and data. This is a so-called bridge class, i.e. it exists merely to make objects of one class behave like objects of another class. In this case, the input objects are a linear or nonlinear operator A with both derivative image and adjoint image methods (an instance of HCL_LinearOpAdj, HCL_OpDerivAdj, or HCL_OpDeriv2Adj) and a data vector d. The interface to these objects constructed by the class is the least squares functional

   J(x) = (1/2) ||A(x) - d||^2,

its gradient

   grad J(x) = DA(x)^* (A(x) - d),

and either the Gauss-Newton approximation to its Hessian operator

   Hess J(x) ~ DA(x)^* DA(x),

or the full Hessian operator

   Hess J(x) = DA(x)^* DA(x) + D^2A(x)^* (A(x) - d)

(not yet implemented, version 1.0), expressed as an HCL_FunctionalHess, an input class for optimization methods based on second derivative information.
So this class can be used to solve linear or nonlinear least squares problems straight from the implementation of the underlying operator, without constructing the least squares functional by hand, as follows:
If you have a linear operator L (an instance of HCL_LinearOpAdj) and a data vector d (an instance of HCL_Vector), construct the least squares functional as

   HCL_LeastSquaresFcnlHess_d F(L, d);

whereas if you have a nonlinear operator N instead (an instance of HCL_OpDerivAdj or HCL_OpDeriv2Adj), use

   HCL_LeastSquaresFcnlHess_d F(N, d);

That is, the constructors look the same - the code does the appropriate thing according to the types of the arguments submitted to the constructor.
The HCL_FunctionalHess_d F so constructed is suitable for submission to an optimization algorithm using second derivative information, such as Steihaug-Toint (HCL_UMinTR), or to any of the algorithms using first derivative information, such as BFGS (HCL_UMin_lbfgs) or nonlinear conjugate gradients (HCL_UMinNLCG).
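A minimal sketch of this usage in the linear case follows; the header file name and the assumption that L and d are instances of user-written subclasses of HCL_LinearOpAdj_d and HCL_Vector_d are not confirmed by this page, and the optimizer calling sequences are deliberately left out.

   // Sketch only.  The header name "HCL_LeastSquaresFcnlHess_d.h" is an
   // assumption; L and d are assumed to come from user-written subclasses
   // of HCL_LinearOpAdj_d and HCL_Vector_d.
   #include "HCL_LeastSquaresFcnlHess_d.h"

   void SolveLinearLeastSquares( HCL_LinearOpAdj_d & L, HCL_Vector_d & d )
   {
      // Builds J(x) = (1/2) ||L x - d||^2 together with its gradient and
      // Hessian, without writing the functional by hand.
      HCL_LeastSquaresFcnlHess_d F( L, d );

      // F is an HCL_FunctionalHess_d, so it can be handed to a second
      // derivative method such as HCL_UMinTR, or to a first derivative
      // method such as HCL_UMin_lbfgs or HCL_UMinNLCG; the exact calling
      // sequences of those optimizers are not shown here.
   }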
Note that if the nonlinear operator submitted as an argument to the constructor does not have second derivatives defined (i.e. is an instance of HCL_OpDerivAdj but not of HCL_OpDeriv2Adj) the code constructs the Gauss-Newton approximation to the Hessian, rather than the (full) Hessian.
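The nonlinear case looks the same at the call site; the sketch below (with the same assumptions as above) only illustrates which Hessian is then available.

   // Sketch only.  N is assumed to be an instance of a user-written subclass
   // of HCL_OpDerivAdj_d that does NOT implement HCL_OpDeriv2Adj_d, so the
   // Hessian exposed by F is the Gauss-Newton approximation DA(x)^* DA(x)
   // rather than the full Hessian.
   void BuildGaussNewtonFunctional( HCL_OpDerivAdj_d & N, HCL_Vector_d & d )
   {
      HCL_LeastSquaresFcnlHess_d F( N, d );

      // Submitting F to a trust region method such as HCL_UMinTR then
      // amounts to a Gauss-Newton iteration for the nonlinear problem.
   }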