DaNNet
dnn::layer | Layer base class
    dnn::layer_act | Activation layer base class
        dnn::act_LReLU | Leaky ReLU activation class
        dnn::act_ReLU | ReLU activation class
        dnn::act_sigmoid | Sigmoid activation class
        dnn::act_softmax | Softmax activation class
        dnn::act_softplus | Softplus activation class
        dnn::act_tanh | Tanh activation class
    dnn::layer_conv | Convolution layer class
    dnn::layer_cost | Cost/output layer base class
        dnn::cost_CE | Cross Entropy cost/output layer class with linear activation
        dnn::cost_CE_sigmoid | Cross Entropy cost/output layer class with sigmoid activation
        dnn::cost_CE_softmax | Cross Entropy cost/output layer class with softmax activation
        dnn::cost_MSE | Mean Square Error cost/output layer class
        dnn::cost_MSE_sigmoid | Mean Square Error cost/output layer class with sigmoid activation
    dnn::layer_dense | Fully connected/dense layer class
    dnn::layer_drop | Dropout layer class
    dnn::layer_input | Input/data layer class
    dnn::layer_norm | Batch normalization layer class
    dnn::layer_pool | Pooling layer base class
        dnn::pool_average | Average pooling layer class
        dnn::pool_max | Max pooling layer class
dnn::net | Neural network model class
dnn::opt | Optimizer base class
    dnn::opt_adadelta | AdaDelta optimizer class
    dnn::opt_adagrad | AdaGrad optimizer class
    dnn::opt_adam | Adam optimizer class
    dnn::opt_adamax | AdaMax optimizer class
    dnn::opt_rmsprop | RMSprop optimizer class
    dnn::opt_SGD | Stochastic Gradient Descent optimizer class
    dnn::opt_SGD_momentum | Stochastic Gradient Descent with momentum optimizer class
    dnn::opt_SGD_nesterov | Stochastic Gradient Descent with Nesterov momentum optimizer class