ADAMax optimizer class.
#include <dnn_opt.h>
Public Member Functions

- opt_adamax (DNN_Dtype s, DNN_Dtype l=0.0, DNN_Dtype a=0.0, DNN_Dtype b1=0.9, DNN_Dtype b2=0.999, DNN_Dtype e=1e-8)
  ADAMax constructor.
- ~opt_adamax ()
- void apply (arma::Cube< DNN_Dtype > &W, arma::Mat< DNN_Dtype > &B, const arma::Cube< DNN_Dtype > &Wgrad, const arma::Mat< DNN_Dtype > &Bgrad)
  Apply the optimizer to the layer parameters.
- std::string get_algorithm (void)
  Get the optimizer algorithm information.

Inherited from dnn::opt:

- opt ()
- ~opt ()
- void set_learn_rate_alg (LR_ALG alg, DNN_Dtype a=0.0, DNN_Dtype b=10.0)
  Set learning rate algorithm.
- void update_learn_rate (void)
  Update learning rate.
- DNN_Dtype get_learn_rate (void)
  Get the learning rate.
ADAMax optimizer class.
ADAMax algorithm, see https://arxiv.org/pdf/1609.04747.pdf
Definition at line 453 of file dnn_opt.h.
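For reference, the AdaMax update described in the overview linked above can be written as follows, where g_t is the gradient, m_t the first-moment estimate, u_t the exponentially weighted infinity norm, and eta the step size:

```latex
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
u_t &= \max\!\left(\beta_2 u_{t-1},\; |g_t|\right) \\
\theta_{t+1} &= \theta_t - \frac{\eta}{1 - \beta_1^{t}} \cdot \frac{m_t}{u_t}
\end{aligned}
```

Unlike Adam, the infinity-norm term u_t needs no bias correction; only the first moment m_t is corrected by the 1 - beta1^t factor.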
◆ opt_adamax()
ADAMax constructor.
- Parameters
  - [in] s: Step size (learning rate)
  - [in] l: Regularisation parameter lambda
  - [in] a: Regularisation parameter alpha
  - [in] b1: beta1 parameter
  - [in] b2: beta2 parameter
  - [in] e: eps parameter
Definition at line 474 of file dnn_opt.h.
◆ ~opt_adamax()
dnn::opt_adamax::~opt_adamax ( ) [inline]
◆ apply()
Apply the optimizer to the layer parameters.
- Parameters
  - [in,out] W, B: Learnable parameters
  - [in] Wgrad, Bgrad: Gradient of the learnable parameters
Implements dnn::opt.
Definition at line 493 of file dnn_opt.h.
◆ get_algorithm()
std::string dnn::opt_adamax::get_algorithm ( void ) [inline, virtual]
Get the optimizer algorithm information.
- Returns
- Algorithm information string
Reimplemented from dnn::opt.
Definition at line 536 of file dnn_opt.h.
Member Data: beta1, beta2, eps, mB, vB
The documentation for this class was generated from the following file: dnn_opt.h