My Project
cuda_mlp::CudaLBFGS Class Reference

Limited-memory BFGS with Armijo backtracking line search.

Inheritance: cuda_mlp::CudaLBFGS derives from cuda_mlp::CudaMinimizerBase.

Public Member Functions

 CudaLBFGS (CublasHandle &handle)
 	Construct the optimizer.
 
void setMemory (size_t m)
 	Set the history size (memory).
 
void solve (int n, CudaScalar *params, const CudaScalar *input, const CudaScalar *target, int batch, const LossGradFun &loss_grad) override
 	Run L-BFGS optimization.
 
- Public Member Functions inherited from cuda_mlp::CudaMinimizerBase
 CudaMinimizerBase (CublasHandle &handle)
 	Construct with a cuBLAS handle reference.
 
virtual ~CudaMinimizerBase ()=default
 
int iterations () const noexcept
 	Return the number of iterations performed in the last solve.
 
void setRecorder (::IterationRecorder< CudaBackend > *recorder)
 	Attach a recorder for loss/grad norm history.
 
void setMaxIterations (int iters)
 	Set the maximum number of iterations.
 
void setTolerance (CudaScalar tol)
 	Set the stopping tolerance (interpretation depends on the optimizer).
 
void setLineSearchParams (int max_iters, CudaScalar c1, CudaScalar rho)
 	Configure Armijo line search parameters.
 

Additional Inherited Members

- Public Types inherited from cuda_mlp::CudaMinimizerBase
using LossGradFun = std::function< CudaScalar(const CudaScalar *params, CudaScalar *grad, const CudaScalar *input, const CudaScalar *target, int batch)>
 	Loss and gradient callback signature.
 
using IterHook = std::function< void(int)>
 	Optional per-iteration hook signature.
 
- Protected Attributes inherited from cuda_mlp::CudaMinimizerBase
CublasHandle & handle_
 	cuBLAS handle used by the optimizer.
 
int max_iters_ = 200
 	Maximum number of outer iterations.
 
int max_line_iters_ = 20
 	Maximum number of line-search iterations.
 
CudaScalar tol_ = 1e-6f
 	Stopping tolerance.
 
CudaScalar c1_ = 1e-4f
 	Armijo sufficient-decrease constant.
 
CudaScalar rho_ = 0.5f
 	Backtracking shrink factor.
 
int last_iterations_ = 0
 	Iterations performed in the last run.
 
::IterationRecorder< CudaBackend > * recorder_ = nullptr
 	Optional recorder for diagnostics.
 

Detailed Description

Limited-memory BFGS with Armijo backtracking line search.

The L-BFGS method builds a low-rank approximation of the inverse Hessian from the last m curvature pairs (s_k, y_k), where

    s_k = x_{k+1} - x_k
    y_k = grad_{k+1} - grad_k

The search direction is then computed with the two-loop recursion.

Constructor & Destructor Documentation

◆ CudaLBFGS()

cuda_mlp::CudaLBFGS::CudaLBFGS ( CublasHandle &  handle )
inline explicit

Construct the optimizer.

Member Function Documentation

◆ setMemory()

void cuda_mlp::CudaLBFGS::setMemory ( size_t  m)
inline

Set the history size (memory). A larger m gives a better inverse-Hessian approximation at the cost of storing 2m vectors of length n.
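The eviction behaviour implied by a fixed history size can be pictured with a small host-side sketch. The `History` struct and `push` below are illustrative names, not the class's internals.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

using Vec = std::vector<float>;

// After each accepted step, store the newest curvature pair
// (s_k, y_k) and evict the oldest once more than m pairs are held.
struct History {
    std::size_t m = 10;    // value passed to setMemory()
    std::deque<Vec> s, y;  // curvature pairs, newest at the back

    void push(const Vec &s_k, const Vec &y_k) {
        s.push_back(s_k);
        y.push_back(y_k);
        if (s.size() > m) {  // keep only the last m pairs
            s.pop_front();
            y.pop_front();
        }
    }
};
```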

◆ solve()

void cuda_mlp::CudaLBFGS::solve ( int  n,
		CudaScalar *  params,
		const CudaScalar *  input,
		const CudaScalar *  target,
		int  batch,
		const LossGradFun &  loss_grad 
	)
inline override virtual

Run L-BFGS optimization.

Parameters
    n	Number of parameters
    params	Parameter vector (device)
    input	Input batch (device)
    target	Target batch (device)
    batch	Batch size
    loss_grad	Callback returning loss and gradient

Implements cuda_mlp::CudaMinimizerBase.

