Overfitting
L1 regularization: add the sum of absolute weights |w| to the loss (drives weights toward zero, gives sparse models)
L2 regularization: add the sum of squared weights w² to the loss (keeps weights small)
Dropout regularization: randomly drop units during training so the network cannot over-rely on any single one
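A minimal sketch of all three, assuming PyTorch; the layer sizes, dropout rate, and penalty strength `lam` are illustrative, not from the notes:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # dropout regularization: randomly zero 50% of activations during training
    nn.Linear(32, 2),
)

# L2 regularization via the weight_decay term built into the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# L1 regularization added to the loss by hand
def loss_with_l1(loss, model, lam=1e-4):
    l1 = sum(p.abs().sum() for p in model.parameters())
    return loss + lam * l1
```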
Activation function (AF)
linear / nonlinear
Common AFs: ReLU, sigmoid, tanh
The AF must be differentiable (backpropagation passes the error back through it)
With many layers, choose the AF so as to avoid exploding/vanishing gradients
CNN: ReLU
RNN: ReLU or tanh
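A small NumPy sketch of these AFs and their derivatives (the derivative is what backpropagation multiplies the error by, which is where vanishing/exploding gradients come from):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivatives used by backpropagation to pass the error backward
def relu_grad(x):
    return (x > 0).astype(float)  # exactly 1 for x > 0, so gradients do not shrink

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1 - s)            # at most 0.25, so deep stacks can vanish

def tanh_grad(x):
    return 1 - np.tanh(x) ** 2    # at most 1, vanishes more slowly than sigmoid
```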
Classifier
How well each feature fits the classification task
Avoid information that is meaningless for the classification
Avoid redundant information
Avoid overly complex information
Feature normalization
Linear regression
Min-max normalization -> (0, 1)
Standard (z-score) normalization -> (mean = 0, std = 1)
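A quick sketch of both normalizations, assuming scikit-learn; the sample matrix `X` is made up for illustration:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

X_minmax = MinMaxScaler().fit_transform(X)  # each column rescaled into [0, 1]
X_std = StandardScaler().fit_transform(X)   # each column: mean 0, std 1
```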
Training data: 70%
Test data: 30%
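A sketch of the 70/30 split, assuming scikit-learn; `X` and `y` here are hypothetical placeholder data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 5)         # hypothetical features
y = np.random.randint(0, 2, 100)   # hypothetical binary labels

# 70% training / 30% test, as in the notes above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
```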
Error
Epochs (number of training iterations)
Accuracy
R2 score
F1 score
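A sketch of computing these metrics with scikit-learn; the label arrays are made-up examples (note R² applies to regression outputs, while accuracy/F1 apply to class labels):

```python
from sklearn.metrics import accuracy_score, f1_score, r2_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
print(accuracy_score(y_true, y_pred))  # fraction of correct predictions
print(f1_score(y_true, y_pred))        # harmonic mean of precision and recall

# R2 score is for regression outputs, not class labels
y_reg_true = [2.5, 0.0, 2.1]
y_reg_pred = [2.4, 0.1, 2.0]
print(r2_score(y_reg_true, y_reg_pred))
```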
Overfitting
Cross-validation
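A minimal cross-validation sketch, assuming scikit-learn; the iris dataset and logistic regression model are stand-ins for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: each fold takes a turn as the validation set
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())
```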
Autoencoder
Encoder
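A minimal autoencoder sketch, assuming PyTorch; the 784/32 dimensions are illustrative (e.g. flattened MNIST images):

```python
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        # Encoder: compress the input down to a small code
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        # Decoder: reconstruct the input from the code
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Trained by minimizing reconstruction error, e.g. nn.MSELoss()(model(x), x)
```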
LSTM: Long Short-Term Memory
Gradient explosion
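A sketch of one common guard against gradient explosion in an LSTM, gradient norm clipping, assuming PyTorch; the tensor shapes and placeholder loss are illustrative:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
x = torch.randn(8, 20, 16)   # (batch, sequence length, features)
out, (h, c) = lstm(x)

loss = out.sum()             # placeholder loss, just to produce gradients
loss.backward()
# Clip the gradient norm so a single step cannot blow up the weights
torch.nn.utils.clip_grad_norm_(lstm.parameters(), max_norm=1.0)
```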
Fully connected (FC) layer: every unit connects to every unit in the previous layer
Pooling: downsample feature maps (e.g. max pooling keeps the largest value in each window)
image -> convolution -> max pooling -> convolution -> max pooling -> FC -> FC -> classifier
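A sketch of that exact pipeline, assuming PyTorch; the channel counts and the 1×28×28 input (e.g. MNIST) are assumptions for illustration:

```python
import torch.nn as nn

# Mirrors the pipeline above; shapes assume a 1x28x28 input and 10 classes
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolution
    nn.ReLU(),
    nn.MaxPool2d(2),                              # max pooling -> 16x14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # convolution
    nn.ReLU(),
    nn.MaxPool2d(2),                              # max pooling -> 32x7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 128),                   # FC
    nn.ReLU(),
    nn.Linear(128, 10),                           # FC -> classifier logits
)
```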
Input layer
Hidden layer
Output layer
Activation
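The same three-layer structure as a minimal sketch, assuming PyTorch; the dimensions are arbitrary:

```python
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(4, 16),   # input layer -> hidden layer
    nn.ReLU(),          # activation
    nn.Linear(16, 3),   # hidden layer -> output layer
)
```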
Semi-supervised learning
Reinforcement learning
Genetic algorithm
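A toy genetic algorithm sketch in plain Python; the all-ones fitness target, population size, and mutation rate are all made up for illustration:

```python
import random

# Toy genetic algorithm: evolve bit strings toward all ones (maximum fitness)
def fitness(ind):
    return sum(ind)

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                   # selection: keep the fittest half
    children = []
    while len(children) < 10:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, 10)
        child = a[:cut] + b[cut:]          # crossover: splice two parents
        if random.random() < 0.1:          # mutation: flip one random bit
            i = random.randrange(10)
            child[i] ^= 1
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print(best, fitness(best))
```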