cuda_mlp::CudaMinimizerBase

Abstract base class for CUDA-based minimizers.


Public Types

  using LossGradFun = std::function<CudaScalar(const CudaScalar *params, CudaScalar *grad, const CudaScalar *input, const CudaScalar *target, int batch)>
      Loss and gradient callback signature.

  using IterHook = std::function<void(int)>
      Optional per-iteration hook signature.
Public Member Functions

  explicit CudaMinimizerBase(CublasHandle &handle)
      Construct with a cuBLAS handle reference.

  virtual ~CudaMinimizerBase() = default

  int iterations() const noexcept
      Return the number of iterations performed in the last solve.

  void setRecorder(::IterationRecorder<CudaBackend> *recorder)
      Attach a recorder for loss/grad-norm history.

  void setMaxIterations(int iters)
      Set the maximum number of iterations.

  void setTolerance(CudaScalar tol)
      Set the stopping tolerance (interpretation depends on the optimizer).

  void setLineSearchParams(int max_iters, CudaScalar c1, CudaScalar rho)
      Configure Armijo line-search parameters.

  virtual void solve(int n, CudaScalar *params, const CudaScalar *input, const CudaScalar *target, int batch, const LossGradFun &loss_grad) = 0
      Solve the optimization problem.
Protected Attributes

  CublasHandle &handle_
      cuBLAS handle used by the optimizer.

  int max_iters_ = 200
  int max_line_iters_ = 20
      Iteration limits.

  CudaScalar tol_ = 1e-6f
  CudaScalar c1_ = 1e-4f
  CudaScalar rho_ = 0.5f
      Stopping and line-search parameters.

  int last_iterations_ = 0
      Iterations performed in the last run.

  ::IterationRecorder<CudaBackend> *recorder_ = nullptr
      Optional recorder for diagnostics.
Detailed Description

Abstract base class for CUDA-based minimizers.

Member Typedef Documentation

  using cuda_mlp::CudaMinimizerBase::IterHook = std::function<void(int)>

      Optional per-iteration hook signature.

  using cuda_mlp::CudaMinimizerBase::LossGradFun = std::function<CudaScalar(const CudaScalar *params, CudaScalar *grad, const CudaScalar *input, const CudaScalar *target, int batch)>

      Loss and gradient callback signature.
Member Function Documentation

  CudaMinimizerBase(CublasHandle &handle) (inline, explicit)

      Construct with a cuBLAS handle reference.
  ~CudaMinimizerBase() (virtual, default)
  iterations() (inline, noexcept)

      Return the number of iterations performed in the last solve.
  setLineSearchParams() (inline)

      Configure Armijo line-search parameters.

      Parameters
          max_iters   Maximum line-search iterations
          c1          Armijo sufficient-decrease constant
          rho         Backtracking factor in (0, 1)
  setMaxIterations() (inline)

      Set the maximum number of iterations.
  setRecorder() (inline)

      Attach a recorder for loss/grad-norm history.
  setTolerance() (inline)

      Set the stopping tolerance (interpretation depends on the optimizer).
  solve() (pure virtual)

      Solve the optimization problem.

      Parameters
          n           Number of parameters
          params      Parameter vector (device pointer)
          input       Input data (device pointer)
          target      Target data (device pointer)
          batch       Batch size
          loss_grad   Callback that returns the loss and writes the gradient

      Implemented in cuda_mlp::CudaSGD, cuda_mlp::CudaGD, and cuda_mlp::CudaLBFGS.