Using the LSTM Layer
… x: [seq, b, vec] ▪ h/c: [num_layers, b, h] ▪ out: [seq, b, h] ▪ nn.LSTM vs. nn.LSTMCell ▪ __init__, LSTMCell.forward() ▪ ht, ct = lstmcell(xt, [ht_1, ct_1]) ▪ xt: [b, vec] ▪ ht/ct: [b, h] ▪ Single layer ▪ Two layers ▪ Next lesson
11 pages | 643.79 KB | 1 year ago
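
A minimal PyTorch sketch matching the shapes in this snippet; the concrete sizes (feature dim 100, hidden dim 20, 2 layers, batch 3, sequence length 10) are illustrative assumptions, not values from the slides:

    import torch
    import torch.nn as nn

    # nn.LSTM consumes the whole sequence at once.
    lstm = nn.LSTM(input_size=100, hidden_size=20, num_layers=2)
    x = torch.randn(10, 3, 100)        # x: [seq, b, vec]
    out, (h, c) = lstm(x)              # out: [10, 3, 20]; h/c: [2, 3, 20]

    # nn.LSTMCell advances a single layer one time step at a time.
    cell = nn.LSTMCell(input_size=100, hidden_size=20)
    ht = torch.zeros(3, 20)            # ht: [b, h]
    ct = torch.zeros(3, 20)            # ct: [b, h]
    for xt in x:                       # xt: [b, vec]
        ht, ct = cell(xt, (ht, ct))    # the slides write the state as [ht_1, ct_1]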

Using the RNN Layer
Presenter: Long Liangqu ▪ Folded model: x_t @ W_xh + h_t @ W_hh, h_0 = [0, 0, 0, …] ▪ x: [seq len, batch, feature dim] ▪ [batch, feature dim] @ [hidden dim, feature dim]^T + [batch, hidden dim] @ [hidden dim, hidden dim]^T ▪ h: [num layers, b, h dim] ▪ out: [seq len, b, h dim] ▪ Single-layer RNN ▪ Two-layer RNN: layer 1 computes x_t @ W_xh^1 + h_t^1 @ W_hh^1, layer 2 computes h_t^1 @ W_xh^2 + h_t^2 @ W_hh^2, each starting from h_0 = [0, 0, 0, …] ▪ out: [T, b, h_dim], h: [layers, b, h_dim] ▪ nn.RNNCell
15 pages | 883.60 KB | 1 year ago
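
The shape bookkeeping above can be verified directly in PyTorch; a short sketch with assumed sizes (feature dim 100, hidden dim 20):

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=100, hidden_size=20, num_layers=2)
    print(rnn.weight_ih_l0.shape)   # [hidden dim, feature dim] -> torch.Size([20, 100])
    print(rnn.weight_hh_l0.shape)   # [hidden dim, hidden dim]  -> torch.Size([20, 20])

    x = torch.randn(10, 3, 100)     # [seq len, batch, feature dim]
    out, h = rnn(x)                 # out: [10, 3, 20] = [T, b, h_dim]
                                    # h:   [2, 3, 20]  = [layers, b, h_dim]

    # nn.RNNCell runs one step of x_t @ W_xh^T + h_t @ W_hh^T.
    cell = nn.RNNCell(input_size=100, hidden_size=20)
    ht = torch.zeros(3, 20)         # initial state [0, 0, 0, …]
    for xt in x:
        ht = cell(xt, ht)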

《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
Overview of Compression: One of the simplest approaches towards efficiency is compression to reduce data size. For the longest time in the history of computing, scientists have worked tirelessly towards … A popular example of a lossless data compression algorithm is Huffman Coding, where we assign unique strings of bits (codes) to the symbols based on their frequency in the data. More frequent symbols are assigned … and the path to that symbol is the bit-string assigned to it. This allows us to encode the given data in as few bits as possible, since the most frequent symbols will take the fewest bits to …
33 pages | 1.96 MB | 1 year ago
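
The construction behind Huffman Coding is easy to sketch: repeatedly merge the two least frequent subtrees so that frequent symbols end up near the root with short codes. A minimal Python illustration (my sketch, not the book's code):

    import heapq
    from collections import Counter

    def huffman_codes(text):
        # One heap entry per symbol: [frequency, tie-breaker, merged symbols].
        heap = [[f, i, s] for i, (s, f) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        codes = {entry[2]: "" for entry in heap}
        tie = len(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)            # least frequent subtree -> bit 0
            hi = heapq.heappop(heap)            # next least frequent    -> bit 1
            for s in lo[2]:
                codes[s] = "0" + codes[s]
            for s in hi[2]:
                codes[s] = "1" + codes[s]
            heapq.heappush(heap, [lo[0] + hi[0], tie, lo[2] + hi[2]])
            tie += 1
        return codes

    print(huffman_codes("aaaabbc"))  # {'a': '1', 'b': '01', 'c': '00'}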

keras tutorial
… basics of deep learning, Keras models, Keras layers, Keras modules, and finally conclude with some real-time applications. Audience: This tutorial is prepared for professionals who are aspiring to … ▪ Multi-Layer Perceptron ▪ Layer …
98 pages | 1.57 MB | 1 year ago

[PyTorch Deep Learning — Teacher Longlong] — Beta Edition 202112
… a Gaussian distribution with standard deviation 0.01: y = 1.477x + 0.089 + ε, with ε ∼ N(0, 0.01²). Sampling n = 100 times yields a training set D_train of n points:

    data = []                             # list to hold the samples
    for i in range(100):                  # draw 100 sample points
        x = np.random.uniform(-10., 10.)  # randomly sample the input
        eps = np.random.normal(0., 0.01)  # sample the observation noise
        y = 1.477 * x + 0.089 + eps       # compute the model output
        data.append([x, y])               # store the sample point
    data = np.array(data)                 # convert to a 2D numpy array

The for loop draws 100 samples, each time sampling an input x from the uniform distribution U(−10, 10) and the noise from a Gaussian with mean … 1000 iterations, returning the optimal w*, b* and the descent curve of the training loss:

    [b, w] = gradient_descent(data, initial_b, initial_w, lr, num_iterations)
    loss = mse(b, w, data)                # mean squared error at the optimal w, b
    print(f'Final loss:{loss}, w:{w}, b:{b}')

439 pages | 29.91 MB | 1 year ago
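
The snippet calls mse() and gradient_descent(), which the book defines elsewhere; a minimal numpy reconstruction consistent with the call signatures shown (my sketch, not the book's code):

    import numpy as np

    def mse(b, w, data):
        # Mean squared error of the model y = w*x + b over all sample points.
        x, y = data[:, 0], data[:, 1]
        return np.mean((w * x + b - y) ** 2)

    def gradient_descent(data, b, w, lr, num_iterations):
        # Step along the negative gradient of the MSE loss.
        x, y = data[:, 0], data[:, 1]
        for _ in range(num_iterations):
            err = w * x + b - y
            b, w = b - lr * 2.0 * np.mean(err), w - lr * 2.0 * np.mean(err * x)
        return [b, w]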

TensorFlow Quick Start and Practice, Part 7: Hands-on TensorFlow Face Recognition
… Schroff, F., Kalenichenko, D. and Philbin, J., 2015. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition ▪ … ▪ FDDB: Face Detection Data Set and Benchmark ▪ … ▪ [8] Florian Schroff, Dmitry Kalenichenko, James Philbin. FaceNet: A unified embedding for face recognition and clustering. 2015, Computer Vision and Pattern Recognition ▪ Facebook …
81 pages | 12.64 MB | 1 year ago
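
The cited FaceNet paper trains its embedding with a triplet loss, pushing each anchor closer to a matching face than to a non-matching one by a margin α (0.2 in the paper). A small numpy illustration of that formula (my sketch, not code from the slides):

    import numpy as np

    def triplet_loss(anchor, positive, negative, alpha=0.2):
        # Embeddings have shape (n, d); distances are squared L2.
        pos_dist = np.sum((anchor - positive) ** 2, axis=-1)
        neg_dist = np.sum((anchor - negative) ** 2, axis=-1)
        return np.maximum(pos_dist - neg_dist + alpha, 0.0).mean()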

Li Dongliang: Deep Learning Models and Applications for Cloud Image Technology
Tracking ▪ Core ▪ SACC2017 ▪ The three core challenges of image technology: small, fast, accurate ▪ Small models ▪ Fast online inference ▪ Accurate prediction ▪ Frequent remote upgrade ▪ CPU-constrained, real-time ▪ Cloud processing ▪ Visual perception models ▪ Segmentation ▪ Forward Block ▪ deconvolution ▪ 2–5 ms (K40) ▪ Model, Data, Engineering ▪ Model reduction ▪ Architecture evolution ▪ Single-scale convolution kernels ▪ Multi-scale convolution kernels
26 pages | 3.69 MB | 1 year ago

StarCraft and Artificial Intelligence
State and Action Space ▪ Long-Term Planning ▪ Temporal and Spatial Reasoning ▪ Adversarial Real-time Strategy ▪ Multiagent Cooperation ▪ StarCraft AI Research and Competitions ▪ Classic AI ▪ Modern AI
24 pages | 2.54 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
… features in the input. Recurrent Neural Nets (RNNs) facilitated learning from sequences and temporal data. These breakthroughs contributed to bigger and bigger models. Although they improved the quality of … you see, books you read, food you enjoy, and so on), without needing to know all the encyclopedic data about them. When working with deep learning models and inputs such as text, which are not in numerical … high-dimensional data into a low dimension, while retaining the properties of the high-dimensional representation. It is useful because it is often computationally infeasible to work with data that has a large …
53 pages | 3.92 MB | 1 year ago
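
An embedding table is just a matrix whose rows are the low-dimensional vectors; lookup is row indexing. A tiny numpy sketch with assumed sizes (vocabulary 10000, dimension 64):

    import numpy as np

    V, d = 10000, 64                        # assumed vocabulary size and dimension
    table = np.random.randn(V, d).astype(np.float32)

    token_ids = np.array([42, 7, 1337])     # hypothetical token ids
    vectors = table[token_ids]              # shape: (3, 64)

    # Related items should land close together, e.g. under cosine similarity.
    a, b = vectors[0], vectors[1]
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))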

Keras: A Python-based Deep Learning Library
… 4.2.3.10 predict_generator ▪ 4.2.3.11 get_layer ▪ 4.3 Functional API … 4.3.3.10 predict_generator ▪ 4.3.3.11 get_layer ▪ 5 About Keras Layers ▪ 5.1 About Keras …

    metrics=['accuracy'])

    # Generate dummy data
    import numpy as np
    data = np.random.random((1000, 100))
    labels = np.random.randint(2, size=(1000, 1))

    # Train the model, iterating on the data in batches of 32 samples
    model.fit(data, labels, epochs=10, batch_size=32)

257 pages | 1.19 MB | 1 year ago
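
The fragment is cut off before the model definition; a plausible completion, assuming the standard two-layer binary classifier from the Keras getting-started guide (layer sizes and optimizer are assumptions):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(32, activation='relu', input_dim=100))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

    # Generate dummy data.
    data = np.random.random((1000, 100))
    labels = np.random.randint(2, size=(1000, 1))

    # Train the model, iterating on the data in batches of 32 samples.
    model.fit(data, labels, epochs=10, batch_size=32)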

86 results in total