《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
(In this case, the classes are the digits 0, 1, 2, and so on up to 9.) We use the sparse variant of the categorical cross-entropy loss function so that we can pass the index of the correct class for each example as the label, instead of a one-hot vector. The training log shows validation accuracy climbing above 98% within the first three of fifteen epochs:

    Epoch 1/15 ... sparse_categorical_accuracy: 0.9500 - val_loss: 0.0753 - val_sparse_categorical_accuracy: 0.9789
    Epoch 2/15 469/469 - 2s 5ms/step - loss: 0.0570 - sparse_categorical_accuracy: ... - val_sparse_categorical_accuracy: 0.9855
    Epoch 3/15 469/469 - 2s 5ms/step - loss: 0.0412 - sparse_categorical_accuracy: 0.9873 - val_loss: 0.0486 - val_sparse_categorical_accuracy: ...
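A minimal sketch of the point this excerpt makes, assuming a standard MNIST-style setup (the model and the dummy data here are illustrative, not taken from the book): with sparse_categorical_crossentropy the labels stay as integer class indices, whereas categorical_crossentropy would require one-hot vectors.

    import numpy as np
    from tensorflow import keras

    # Illustrative classifier for 10 classes (digits 0-9).
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(10, activation='softmax'),
    ])

    # Integer labels (e.g. 7) work directly with the sparse variant;
    # no one-hot conversion is needed.
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['sparse_categorical_accuracy'])

    x = np.random.rand(32, 28, 28).astype('float32')  # dummy inputs
    y = np.random.randint(0, 10, size=(32,))          # class indices 0..9
    model.fit(x, y, epochs=1, verbose=0)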
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

The excerpt builds the model with an Adam optimizer and the sparse categorical cross-entropy loss:

    adam = optimizers.Adam(learning_rate=LEARNING_RATE)
    model.compile(
        optimizer=adam,
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model

    model = create_model()
    model.summary()

The preprocessing code frames the input signal (frame_length=256, frame_step=128) and converts the labels to one-hot vectors:

    # Convert the labels to a one-hot vector.
    y = keras.utils.to_categorical(y, num_classes)
    return x, y

    x_train, y_train = get_processed_ds(data_ds['train'], 16000)

Training uses the standard ModelCheckpoint callback to keep track of and save the best checkpoint, with categorical accuracy as the metric to monitor; the callback saves the checkpoint with the maximum accuracy.
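A minimal sketch of the checkpointing setup the excerpt describes, using Keras's standard ModelCheckpoint API (the file path and the exact metric name are illustrative assumptions):

    from tensorflow import keras

    # Save only the best model, judged by validation categorical accuracy.
    checkpoint_cb = keras.callbacks.ModelCheckpoint(
        filepath='best_model.h5',             # illustrative path
        monitor='val_categorical_accuracy',   # metric to monitor
        mode='max',                           # keep the checkpoint with the maximum value
        save_best_only=True)

    # model.fit(x_train, y_train,
    #           validation_data=(x_test, y_test),
    #           epochs=15,
    #           callbacks=[checkpoint_cb])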
Keras: 基于 Python 的深度学习库 (Keras: a Python-based deep learning library)

The matching table-of-contents entries cover the hinge-family and cross-entropy losses and the corresponding accuracy metrics:

    hinge ................................... 134
    7.2.7  categorical_hinge ................ 135
    7.2.8  logcosh .......................... 135
    7.2.9  categorical_crossentropy ......... 135
    7.2.10 sparse_categorical_crossentropy .. 137
    8.2.2  categorical_accuracy ............. 137
    8.2.3  sparse_categorical_accuracy ...... ...
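The split between the categorical_* and sparse_categorical_* entries mirrors the label format they expect; a small sketch of the difference using the two accuracy metrics (the label and prediction arrays are made up):

    import numpy as np
    import tensorflow as tf

    y_true_onehot = np.array([[0., 0., 1.], [0., 1., 0.]])  # one-hot labels
    y_true_sparse = np.array([2, 1])                        # same labels as class indices
    y_pred = np.array([[0.1, 0.2, 0.7], [0.3, 0.6, 0.1]])   # predicted probabilities

    # categorical_* expects one-hot labels; sparse_categorical_* expects indices.
    m1 = tf.keras.metrics.categorical_accuracy(y_true_onehot, y_pred)
    m2 = tf.keras.metrics.sparse_categorical_accuracy(y_true_sparse, y_pred)
    print(m1.numpy(), m2.numpy())  # both: [1. 1.]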
keras tutorial

to_categorical converts a class vector into a binary class matrix (one-hot encoding):

    >>> from keras.utils import to_categorical
    >>> labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> to_categorical(labels)

The available loss functions include mean_squared_logarithmic_error, squared_hinge, hinge, categorical_hinge, logcosh, huber_loss, categorical_crossentropy, sparse_categorical_crossentropy, binary_crossentropy, kullback_leibler_divergence, poisson, and cosine_proximity. All of the above loss functions accept two arguments: y_true (the true labels as tensors) and y_pred (the predictions).
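A runnable version of that REPL fragment with the resulting matrix shape shown; the import path assumes the tensorflow.keras packaging of the same utility:

    from tensorflow.keras.utils import to_categorical

    labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    onehot = to_categorical(labels)
    print(onehot.shape)  # (10, 10): one row per label, one column per class
    print(onehot[3])     # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]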
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

The matching training log shows training categorical accuracy holding around 0.60-0.66 while validation accuracy slowly catches up over the first five of fifty epochs:

    Epoch 1/50 ... loss: 1.9458 - categorical_accuracy: 0.6004 - val_loss: 2.7220 - val_categorical_accuracy: 0.1080
    Epoch 2/50 125/125 - 3s 26ms/step - loss: 1.5526 - ... - val_categorical_accuracy: 0.2292
    Epoch 3/50 125/125 - 4s 32ms/step - loss: 1.3961 - categorical_accuracy: 0.6434 - val_loss: 2.0764 - val_categorical_accuracy: ...
    Epoch 4/50 ... loss: 1.2690 - categorical_accuracy: 0.6570 - val_loss: 1.6459 - val_categorical_accuracy: 0.4417
    Epoch 5/50 125/125 - 3s 25ms/step - loss: 1.1453 - categorical_accuracy: ...
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

The excerpt compiles three text-classification variants that differ only in their embedding layer: a bag-of-words model with a trainable pretrained (word2vec) embedding, the same model with a randomly initialized embedding, and a CNN model with the pretrained embedding:

    bow_model_w2v = get_bow_model(get_pretrained_embedding_layer(trainable=True))
    bow_model_w2v.compile(
        loss='sparse_categorical_crossentropy', optimizer='adam', metrics=["accuracy"])
    bow_model_w2v.summary()  # Model: "bow"

    bow_model_no_w2v = get_bow_model(get_untrained_embedding_layer())
    bow_model_no_w2v.compile(
        loss='sparse_categorical_crossentropy', optimizer='adam', metrics=["accuracy"])
    bow_model_no_w2v.summary()  # Model: "bow"

    cnn_model_w2v = get_cnn_model(get_pretrained_embedding_layer(trainable=True))
    cnn_model_w2v.compile(
        loss='sparse_categorical_crossentropy', optimizer='adam', metrics=["accuracy"])
    cnn_model_w2v_history = cnn_model_w2v...
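The helper names come from the book's notebook; a minimal sketch of what such a pretrained-embedding helper could look like, assuming a (vocab_size × embedding_dim) word2vec weight matrix already loaded as a NumPy array (embedding_matrix, VOCAB_SIZE, and EMBEDDING_DIM here are illustrative, not the book's actual values):

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    VOCAB_SIZE, EMBEDDING_DIM = 20000, 100                         # illustrative sizes
    embedding_matrix = np.random.rand(VOCAB_SIZE, EMBEDDING_DIM)   # stand-in for word2vec weights

    def get_pretrained_embedding_layer(trainable=False):
        # Initialize the layer from pretrained vectors; `trainable` controls
        # whether they are fine-tuned during training or kept frozen.
        return layers.Embedding(
            VOCAB_SIZE, EMBEDDING_DIM,
            embeddings_initializer=keras.initializers.Constant(embedding_matrix),
            trainable=trainable)

    def get_untrained_embedding_layer():
        # Same shape, but randomly initialized weights.
        return layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM)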
Deep Learning Modeling Practice on Alibaba Cloud (阿里云上深度学习建模实践) - 程孟力

The slides define categorical hyperparameter search spaces for the DeepFM and MIND models; each parameter is given as a type, a name, and a list of candidate values:

    DeepFM: [
      { "type": "Categorical", "name": "f1.embed_dim",
        "candidates": ["16", "32", "48", "64", "80"] },
      { "type": "Categorical", "name": "f2.embed_dim",
        "candidates": [..., "80"] }
    ]

    MIND: [
      { "type": "Categorical", "name": "capsule_config.routing_logits_scale",
        "candidates": [10, 20, 30] },
      { "type": "Categorical", "name": "capsule_config.squash_pow", ... }
    ]
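A minimal sketch of how a tuner might sample one trial configuration from such a categorical search space; the space entries are copied from the slide, but the sampling function is illustrative, not the Alibaba Cloud API:

    import random

    # Search space in the slide's format: each entry is a categorical choice.
    SEARCH_SPACE = [
        {"type": "Categorical", "name": "f1.embed_dim",
         "candidates": ["16", "32", "48", "64", "80"]},
        {"type": "Categorical", "name": "capsule_config.routing_logits_scale",
         "candidates": [10, 20, 30]},
    ]

    def sample_trial(space):
        # One trial = one candidate picked uniformly for every Categorical parameter.
        return {p["name"]: random.choice(p["candidates"])
                for p in space if p["type"] == "Categorical"}

    print(sample_trial(SEARCH_SPACE))
    # e.g. {'f1.embed_dim': '48', 'capsule_config.routing_logits_scale': 20}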
QCon Beijing 2018 - "Deep Learning in Weibo Feed Ranking" (深度学习在微博信息流排序的应用) - 刘博

The reading model scores each item as a weighted sum of three predicted engagement probabilities, of the form Score = w1 * p1 + w2 * p2 + w3 * p3. Feature engineering is critically important:
• manual feature combination based on expert knowledge;
• categorical features: one-hot representation;
• continuous features: discretization / normalization;
• feature evaluation via hypothesis testing and correlation-coefficient analysis;
• feature combination: GBDT + mutual information to effectively mine nonlinear features and combinations.

Deep models offer strong expressive power and a flexible network structure. The architecture diagram shows user, relation, contextual, and content features feeding the network: continuous features are normalized, categorical features are one-hot encoded or embedded, followed by fully connected ReLU layers (256 -> 128 -> 64). A later slide, "deep learning in practice - DeepFM", applies the same feature pipeline to a DeepFM model.
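A minimal sketch of the categorical/continuous preprocessing split the slide describes, using Keras preprocessing layers (feature names, vocabularies, and sizes are illustrative, not the talk's pipeline):

    import tensorflow as tf
    from tensorflow.keras import layers

    # Continuous feature: normalize to zero mean / unit variance.
    age = tf.constant([[23.0], [35.0], [41.0]])
    normalizer = layers.Normalization()
    normalizer.adapt(age)

    # Categorical feature: map strings to indices, then one-hot encode...
    gender = tf.constant([["m"], ["f"], ["m"]])
    lookup = layers.StringLookup(vocabulary=["m", "f"], output_mode="one_hot")

    # ...or embed high-cardinality categoricals instead of one-hot encoding them.
    user_id = tf.constant([[12], [7], [3]])
    embed = layers.Embedding(input_dim=10000, output_dim=16)

    print(normalizer(age).shape, lookup(gender).shape, embed(user_id).shape)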
《TensorFlow 快速入门与实战》6 - 实战 TensorFlow 验证码识别 (TensorFlow Quick Start and Practice, Part 6 - CAPTCHA Recognition with TensorFlow)

Designing the model's loss function: cross-entropy (CE). We use cross-entropy as the loss function for this model. Although Categorical / Binary CE are the more commonly used loss functions, both are variants of CE. For C' classes with ground-truth labels t_i and predicted scores s_i, CE is defined as

    CE = -\sum_{i=1}^{C'} t_i \log(s_i)

For a binary classification problem (C' = 2), CE reduces to

    CE = -t_1 \log(s_1) - (1 - t_1) \log(1 - s_1)

Categorical CE loss (softmax loss) is commonly used for multi-class classification where the output is a one-hot vector (Multi-Class...
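A small numeric check of those two formulas (a sketch; the label and score values are made up), showing that the binary form is just the general sum written out for two classes:

    import numpy as np

    def cross_entropy(t, s):
        # CE = -sum_i t_i * log(s_i), with t one-hot and s predicted probabilities.
        return -np.sum(t * np.log(s))

    t = np.array([0.0, 1.0, 0.0])   # ground truth: class 1
    s = np.array([0.1, 0.8, 0.1])   # softmax outputs
    print(cross_entropy(t, s))      # 0.2231...

    # Binary case (C' = 2) reduces to the two-term form.
    t1, s1 = 1.0, 0.8
    print(-t1 * np.log(s1) - (1 - t1) * np.log(1 - s1))  # 0.2231...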
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

The excerpt wires the same loss into two automation setups. First, a hyperparameter-tuning build function:

    adam = optimizers.Adam(learning_rate=learning_rate)
    model.compile(
        optimizer=adam,
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model

    # model = create_model()
    model = build_hp_model(kt...

Second, a child-model trainer that reads its learning rate from a parameter dictionary:

    optimizer = Adam(learning_rate=CHILD_PARAMS['learning_rate'])
    model.compile(
        optimizer=optimizer,
        loss='sparse_categorical_crossentropy',
        metrics='accuracy')
    model.summary()
    return model

    def train(self, model):
        history = ...
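The truncated build_hp_model(kt... fragment suggests KerasTuner; a minimal sketch of such a build function under that assumption (the layer sizes and learning-rate candidates are illustrative, not the book's values):

    import keras_tuner as kt
    from tensorflow import keras

    def build_hp_model(hp: kt.HyperParameters):
        model = keras.Sequential([
            keras.layers.Flatten(input_shape=(28, 28)),
            keras.layers.Dense(hp.Choice('units', [64, 128, 256]), activation='relu'),
            keras.layers.Dense(10, activation='softmax'),
        ])
        # The tuner picks the learning rate from a categorical set of candidates.
        learning_rate = hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])
        adam = keras.optimizers.Adam(learning_rate=learning_rate)
        model.compile(optimizer=adam,
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
        return model

    tuner = kt.RandomSearch(build_hp_model, objective='val_accuracy', max_trials=10)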