My Project
cuda_mlp::CudaMinimizerBase Class Reference (abstract)

Abstract base class for CUDA-based minimizers.

Inheritance diagram for cuda_mlp::CudaMinimizerBase: (diagram omitted)
Collaboration diagram for cuda_mlp::CudaMinimizerBase: (diagram omitted)

Public Types

using LossGradFun = std::function< CudaScalar(const CudaScalar *params, CudaScalar *grad, const CudaScalar *input, const CudaScalar *target, int batch)>
 Loss and gradient callback signature.
 
using IterHook = std::function< void(int)>
 Optional per-iteration hook signature.
 

Public Member Functions

 CudaMinimizerBase (CublasHandle &handle)
 Construct with a cuBLAS handle reference.
 
virtual ~CudaMinimizerBase ()=default
 
int iterations () const noexcept
 Return the number of iterations performed in the last solve.
 
void setRecorder (::IterationRecorder< CudaBackend > *recorder)
 Attach a recorder for loss/grad norm history.
 
void setMaxIterations (int iters)
 Set maximum number of iterations.
 
void setTolerance (CudaScalar tol)
 Set stopping tolerance (interpretation depends on the optimizer).
 
void setLineSearchParams (int max_iters, CudaScalar c1, CudaScalar rho)
 Configure Armijo line search parameters.
 
virtual void solve (int n, CudaScalar *params, const CudaScalar *input, const CudaScalar *target, int batch, const LossGradFun &loss_grad)=0
 Solve the optimization problem.
 

Protected Attributes

CublasHandle & handle_
 cuBLAS handle used by the optimizer.
 
int max_iters_ = 200
 
int max_line_iters_ = 20
 Iteration limits.
 
CudaScalar tol_ = 1e-6f
 
CudaScalar c1_ = 1e-4f
 
CudaScalar rho_ = 0.5f
 Stopping and line-search params.
 
int last_iterations_ = 0
 Iterations performed in last run.
 
::IterationRecorder< CudaBackend > * recorder_ = nullptr
 Optional recorder for diagnostics.
 

Detailed Description

Abstract base class for CUDA-based minimizers.

Member Typedef Documentation

◆ IterHook

using cuda_mlp::CudaMinimizerBase::IterHook = std::function<void(int)>

Optional per-iteration hook signature.

◆ LossGradFun

using cuda_mlp::CudaMinimizerBase::LossGradFun = std::function<CudaScalar( const CudaScalar *params, CudaScalar *grad, const CudaScalar *input, const CudaScalar *target, int batch)>

Loss and gradient callback signature.
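As an illustration, a host-side callback matching this signature might compute a mean-squared-error loss and its gradient for a one-parameter linear model. The aliases below are local stand-ins for the project's types, and plain host arrays replace the device pointers the real API expects; this is a sketch of the calling convention, not code from the library.

```cpp
#include <functional>

// Local stand-ins for the project's types, for illustration only.
using CudaScalar = float;
using LossGradFun = std::function<CudaScalar(const CudaScalar* params,
                                             CudaScalar* grad,
                                             const CudaScalar* input,
                                             const CudaScalar* target,
                                             int batch)>;

// Mean-squared-error loss for a one-parameter model y = w * x.
// Host memory is used for clarity; the real callback receives device pointers.
LossGradFun mse = [](const CudaScalar* params, CudaScalar* grad,
                     const CudaScalar* input, const CudaScalar* target,
                     int batch) -> CudaScalar {
    CudaScalar w = params[0], loss = 0, g = 0;
    for (int i = 0; i < batch; ++i) {
        CudaScalar r = w * input[i] - target[i];  // residual
        loss += r * r;
        g += 2 * r * input[i];
    }
    grad[0] = g / batch;   // mean gradient w.r.t. w
    return loss / batch;   // mean loss
};
```

The callback writes the gradient into `grad` and returns the scalar loss, which is the contract the optimizers rely on each iteration.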

Constructor & Destructor Documentation

◆ CudaMinimizerBase()

cuda_mlp::CudaMinimizerBase::CudaMinimizerBase ( CublasHandle &  handle )
inline explicit

Construct with a cuBLAS handle reference.

◆ ~CudaMinimizerBase()

virtual cuda_mlp::CudaMinimizerBase::~CudaMinimizerBase ( )
virtual default

Member Function Documentation

◆ iterations()

int cuda_mlp::CudaMinimizerBase::iterations ( ) const
inline noexcept

Return the number of iterations performed in the last solve.

◆ setLineSearchParams()

void cuda_mlp::CudaMinimizerBase::setLineSearchParams ( int  max_iters,
CudaScalar  c1,
CudaScalar  rho 
)
inline

Configure Armijo line search parameters.

Parameters
max_iters	Maximum line-search iterations
c1	Armijo sufficient decrease constant
rho	Backtracking factor in (0,1)
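These three parameters drive a standard Armijo backtracking loop. The scalar sketch below is illustrative only (it is not the class's member code, which applies the same rule to the device-side loss): shrink the step by `rho` until the sufficient-decrease condition with constant `c1` holds, giving up after `max_iters` attempts.

```cpp
// Armijo backtracking on a scalar function. `slope` is the directional
// derivative f'(x) * dir, which must be negative for a descent direction.
double armijo_step(double (*f)(double), double x, double fx,
                   double slope, double dir,
                   int max_iters, double c1, double rho) {
    double alpha = 1.0;                 // start from a full step
    for (int i = 0; i < max_iters; ++i) {
        if (f(x + alpha * dir) <= fx + c1 * alpha * slope)
            return alpha;               // sufficient decrease achieved
        alpha *= rho;                   // otherwise backtrack
    }
    return alpha;                       // budget exhausted: last step tried
}
```

With the defaults shown in the attributes below (`c1_ = 1e-4f`, `rho_ = 0.5f`, `max_line_iters_ = 20`), each rejection halves the step, so the search spans roughly six orders of magnitude in step length.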

◆ setMaxIterations()

void cuda_mlp::CudaMinimizerBase::setMaxIterations ( int  iters)
inline

Set maximum number of iterations.

◆ setRecorder()

void cuda_mlp::CudaMinimizerBase::setRecorder ( ::IterationRecorder< CudaBackend > *  recorder)
inline

Attach a recorder for loss/grad norm history.

◆ setTolerance()

void cuda_mlp::CudaMinimizerBase::setTolerance ( CudaScalar  tol)
inline

Set stopping tolerance (interpretation depends on the optimizer).

◆ solve()

virtual void cuda_mlp::CudaMinimizerBase::solve ( int  n,
CudaScalar *  params,
const CudaScalar *  input,
const CudaScalar *  target,
int  batch,
const LossGradFun &  loss_grad 
)
pure virtual

Solve the optimization problem.

Parameters
n	Number of parameters
params	Parameter vector (device pointer)
input	Input data (device pointer)
target	Target data (device pointer)
batch	Batch size
loss_grad	Callback that returns loss and writes gradient

Implemented in cuda_mlp::CudaSGD, cuda_mlp::CudaGD, and cuda_mlp::CudaLBFGS.
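As a sketch of the contract an implementation fulfills, the host-side function below performs fixed-step gradient descent with the same signature shape: call `loss_grad` each iteration, step against the gradient, and stop once the gradient norm drops below the tolerance or the iteration budget runs out. All names here are illustrative stand-ins; the real cuda_mlp::CudaGD, cuda_mlp::CudaSGD, and cuda_mlp::CudaLBFGS operate on device memory through cuBLAS.

```cpp
#include <cmath>
#include <functional>
#include <vector>

using CudaScalar = float;  // assumed alias, for illustration
using LossGradFun = std::function<CudaScalar(const CudaScalar*, CudaScalar*,
                                             const CudaScalar*,
                                             const CudaScalar*, int)>;

// Fixed-step gradient descent mirroring solve()'s contract.
// Returns the iteration count, as iterations() would report it.
int gd_solve(int n, CudaScalar* params, const CudaScalar* input,
             const CudaScalar* target, int batch,
             const LossGradFun& loss_grad,
             int max_iters = 200, CudaScalar tol = 1e-6f,
             CudaScalar step = 0.1f) {
    std::vector<CudaScalar> grad(n);
    int it = 0;
    for (; it < max_iters; ++it) {
        loss_grad(params, grad.data(), input, target, batch);
        CudaScalar norm2 = 0;
        for (int i = 0; i < n; ++i) norm2 += grad[i] * grad[i];
        if (std::sqrt(norm2) < tol) break;              // converged
        for (int i = 0; i < n; ++i) params[i] -= step * grad[i];
    }
    return it;
}
```

The concrete subclasses differ only in how they choose the step (line search, momentum, L-BFGS direction), not in this outer loop shape.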

Member Data Documentation

◆ c1_

CudaScalar cuda_mlp::CudaMinimizerBase::c1_ = 1e-4f
protected

◆ handle_

CublasHandle& cuda_mlp::CudaMinimizerBase::handle_
protected

cuBLAS handle used by the optimizer.

◆ last_iterations_

int cuda_mlp::CudaMinimizerBase::last_iterations_ = 0
protected

Iterations performed in last run.

◆ max_iters_

int cuda_mlp::CudaMinimizerBase::max_iters_ = 200
protected

◆ max_line_iters_

int cuda_mlp::CudaMinimizerBase::max_line_iters_ = 20
protected

Iteration limits.

◆ recorder_

::IterationRecorder<CudaBackend>* cuda_mlp::CudaMinimizerBase::recorder_ = nullptr
protected

Optional recorder for diagnostics.

◆ rho_

CudaScalar cuda_mlp::CudaMinimizerBase::rho_ = 0.5f
protected

Stopping and line-search params.

◆ tol_

CudaScalar cuda_mlp::CudaMinimizerBase::tol_ = 1e-6f
protected

The documentation for this class was generated from the following file: