IT文库
This search took 0.075 seconds and found about 45 results.
  • PDF document: Batch Norm

    Batch Norm. Presenter: Liangqu Long (龙良曲). Slide outline: Intuitive explanation; Feature scaling (Image Normalization, Batch Normalization); https://medium.com/syncedreview/facebook-ai-proposes-group-normalization-alternative-to-batch-normalization-fb0699bffae7; Pipeline; nn.BatchNorm2d; Class variables; Test; Visualization. Advantages: converges faster, better performance, robust (stable).
    0 points | 16 pages | 1.29 MB | 1 year ago
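    As context for the nn.BatchNorm2d reference in this outline, here is a minimal PyTorch sketch (not taken from the slides; the layer size and input shape are illustrative):

        import torch
        import torch.nn as nn

        # BatchNorm2d normalizes each channel over the batch and spatial dimensions.
        bn = nn.BatchNorm2d(num_features=16)

        x = torch.randn(8, 16, 32, 32)  # batch of 8 feature maps with 16 channels
        y = bn(x)                       # normalized output, same shape as x

        # Learned affine parameters and running statistics (the "class variables"
        # the outline mentions): one value per channel.
        print(bn.weight.shape, bn.bias.shape)             # torch.Size([16]) each
        print(bn.running_mean.shape, bn.running_var.shape)

        # At test time, eval() switches to the running statistics instead of
        # per-batch statistics.
        bn.eval()
        y_test = bn(x)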
  • PDF document: PyTorch Release Notes

    … language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. This model is based on the BERT: Pre-training of Deep Bidirectional Transformers …
    0 points | 365 pages | 2.94 MB | 1 year ago
  • PDF document: keras tutorial

    … algorithm which best fits the type of learning process (e.g., image classification, text processing, etc.) and the available input data. An algorithm is represented by a Model in Keras. Text processing: provides functions to convert text into a NumPy array suitable for machine learning; we can use it in the data preparation phase. Image processing: provides functions to convert images into a NumPy array suitable for machine learning. Sequence processing: provides functions to generate time-based data from the given input data; we can use it in data …
    0 points | 98 pages | 1.57 MB | 1 year ago
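    A minimal sketch of the text-processing utilities the snippet describes, assuming the classic tf.keras preprocessing API (the example strings are invented):

        from tensorflow.keras.preprocessing.text import Tokenizer
        from tensorflow.keras.preprocessing.sequence import pad_sequences

        texts = ["the cat sat", "the dog barked loudly"]

        # Tokenizer maps words to integer indices...
        tok = Tokenizer(num_words=100)
        tok.fit_on_texts(texts)
        seqs = tok.texts_to_sequences(texts)  # lists of word indices

        # ...and pad_sequences turns them into a fixed-shape NumPy array
        # suitable for machine learning, as the tutorial describes.
        padded = pad_sequences(seqs, maxlen=5)
        print(padded.shape)                   # (2, 5)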
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    … vectorization layer to build the vocabulary: train_text_ds = tf.data.Dataset.from_tensor_slices(x_train).batch(512); vectorization_layer.adapt(train_text_ds). Let's check out the top ten words in the vocabulary … Let's train this model! bow_model_w2v_history = bow_model_w2v.fit(x_train_vectorized, y_train, batch_size=64, epochs=10, validation_data=(x_test_vectorized, y_test)) … Epoch 1/10 313/313 […] … observe its progress: bow_model_no_w2v_history = bow_model_no_w2v.fit(x_train_vectorized, y_train, batch_size=64, epochs=10, validation_data=(x_test_vectorized, y_test)) … Epoch 1/10 313/313 […]
    0 points | 53 pages | 3.92 MB | 1 year ago
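    Reading the fragments above together, adapt() builds the layer's vocabulary from the training text before the model is trained. A minimal sketch under that assumption (the layer configuration and strings are illustrative, not the book's):

        import tensorflow as tf

        vectorization_layer = tf.keras.layers.TextVectorization(
            max_tokens=10000, output_mode="int")

        # adapt() scans a (batched) text dataset and builds the vocabulary,
        # mirroring the snippet's train_text_ds pipeline.
        train_text_ds = tf.data.Dataset.from_tensor_slices(
            ["deep learning", "efficient deep learning"]).batch(2)
        vectorization_layer.adapt(train_text_ds)

        # The top tokens by frequency (index 0 is padding, 1 is [UNK]).
        print(vectorization_layer.get_vocabulary()[:10])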
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    LeCun, Yann, John Denker, and Sara Solla. "Optimal brain damage." Advances in Neural Information Processing Systems 2 (1989). As you can deduce, the parameter changes the influence of the previous value … we can ignore the first row in the weight matrix. If the input was of shape [n, 6], where n is the batch size, and the weight matrix was of shape [6, 6], we can now treat this problem to be of input [n, … [model.summary() excerpt: … 147586 … prune_low_magnitude_batch_no… (None, 32, 32, 128) 513 …]
    0 points | 34 pages | 3.18 MB | 1 year ago
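    The prune_low_magnitude_... layer names in that summary excerpt come from the TensorFlow Model Optimization toolkit's pruning wrapper. A minimal sketch of how such a model is typically produced (the architecture and schedule here are illustrative, not the book's):

        import tensorflow as tf
        import tensorflow_model_optimization as tfmot

        base = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
            tf.keras.layers.Dense(10),
        ])

        # prune_low_magnitude wraps each layer so its smallest-magnitude weights
        # are progressively zeroed out during training.
        pruned = tfmot.sparsity.keras.prune_low_magnitude(
            base,
            pruning_schedule=tfmot.sparsity.keras.PolynomialDecay(
                initial_sparsity=0.0, final_sparsity=0.5,
                begin_step=0, end_step=1000))

        pruned.summary()  # layer names gain a prune_low_magnitude_ prefix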
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    … compression technique that has been used across different parts of computer science, especially in signal processing. It is a process of converting high-precision continuous values to low-precision discrete values … exercise. We use NumPy for this solution. It supports vector operations which operate on a vector (or a batch) of x variables (vectorized execution) instead of one variable at a time. Although it is possible … of each dimension) of X as [batch size, D1], that of W as [D1, D2], and b is the bias vector with shape [D2]. Hence, the shape of the result of the operation (XW + b) is [batch size, D2]. σ is a nonlinear …
    0 points | 33 pages | 1.96 MB | 1 year ago
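    The shape bookkeeping in that snippet can be checked directly in NumPy. A minimal sketch, with sigmoid standing in for the nonlinearity σ (the snippet does not say which one is used):

        import numpy as np

        batch_size, D1, D2 = 4, 6, 3
        X = np.random.randn(batch_size, D1)  # [batch size, D1]
        W = np.random.randn(D1, D2)          # [D1, D2]
        b = np.random.randn(D2)              # [D2], broadcast over the batch

        def sigma(z):
            return 1.0 / (1.0 + np.exp(-z))  # sigmoid as an example nonlinearity

        # Vectorized execution: the whole batch of x vectors at once.
        out = sigma(X @ W + b)               # [batch size, D2]
        print(out.shape)                     # (4, 3)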
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    … as parameters. It also has two hyperparameters: batch_size and epochs. We use a small batch size because our dataset has just 1020 samples. A large batch size, say 256, will result in a small number (5) …
    def train(model, tds, vds, batch_size=24, epochs=100):
        tds = tds.shuffle(1000, reshuffle_each_iteration=True)
        batch_tds = tds.batch(batch_size).prefetch(tf.data.AUTOTUNE)
        batch_vds = vds.batch(256).prefetch(tf…
        … ModelCheckpoint(tmpl, save_best_only=True, monitor="val_accuracy")
        history = model.fit(batch_tds, validation_data=batch_vds, epochs=epochs, callbacks=[checkpoints])
        return history
    Let's run a baseline …
    0 points | 56 pages | 18.93 MB | 1 year ago
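    Filling in the extraction gaps of that fragment, a runnable reconstruction might look as follows; tmpl (the checkpoint path) is not shown in the snippet, so the value here is a placeholder:

        import tensorflow as tf
        from tensorflow.keras.callbacks import ModelCheckpoint

        tmpl = "best_model.h5"  # hypothetical checkpoint path; not in the snippet

        def train(model, tds, vds, batch_size=24, epochs=100):
            # Shuffle, batch, and prefetch the training and validation datasets.
            tds = tds.shuffle(1000, reshuffle_each_iteration=True)
            batch_tds = tds.batch(batch_size).prefetch(tf.data.AUTOTUNE)
            batch_vds = vds.batch(256).prefetch(tf.data.AUTOTUNE)
            # Keep only the checkpoint with the best validation accuracy.
            checkpoints = ModelCheckpoint(tmpl, save_best_only=True,
                                          monitor="val_accuracy")
            history = model.fit(batch_tds, validation_data=batch_vds,
                                epochs=epochs, callbacks=[checkpoints])
            return history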
  • PDF document: 动手学深度学习 v2.0 (Dive into Deep Learning v2.0)

    … In equation (3.1.10), both w and x are vectors. Here, the more elegant vector notation is more readable than coefficient notation (such as w1, w2, ..., wd). |B| denotes the number of examples in each minibatch, also known as the batch size, and η denotes the learning rate. The values of the batch size and learning rate are usually specified manually in advance rather than obtained through model training. Such parameters, which can be tuned but are not updated during training, are called hyperparameters … samples and fetch data in minibatches. In the code below, we define a data_iter function that takes the batch size, a feature matrix, and a label vector as input, and generates minibatches of size batch_size. Each minibatch consists of a set of features and labels.
    def data_iter(batch_size, features, labels):
        num_examples = len(features)
        indices = list(range(num_examples))
        for i in range(0, num_examples, batch_size):
            batch_indices = torch.tensor(
                indices[i: min(i + batch_size, num_examples)])
            yield features[batch_indices], labels[batch_indices]
    Usually, we take advantage of the parallelism of GPU computation to process …
    0 points | 797 pages | 29.45 MB | 1 year ago
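    The printed book also shuffles the indices before slicing; that line appears to have been lost in extraction above. A self-contained version under that assumption, with a short usage example on synthetic data:

        import random
        import torch

        def data_iter(batch_size, features, labels):
            num_examples = len(features)
            indices = list(range(num_examples))
            random.shuffle(indices)  # read examples in random order
            for i in range(0, num_examples, batch_size):
                batch_indices = torch.tensor(
                    indices[i: min(i + batch_size, num_examples)])
                yield features[batch_indices], labels[batch_indices]

        # Usage: iterate over synthetic data in minibatches of 10.
        features = torch.randn(1000, 2)
        labels = torch.randn(1000, 1)
        for X, y in data_iter(10, features, labels):
            print(X.shape, y.shape)  # torch.Size([10, 2]) torch.Size([10, 1])
            break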
  • PDF document: 【PyTorch深度学习-龙龙老师】-测试版202112 (PyTorch Deep Learning, by Liangqu Long; beta edition, December 2021)

    [Timeline figure residue: DQN (2015), AlphaGo (2016), AlphaGo Zero (2017), OpenAI Five (2019), VGG and GoogLeNet (2014), ResNet and Batch Normalization (2015), Pluribus for Texas hold'em poker (2019), BERT for machine translation (2018), TensorFlow released (2015), PyTorch 0.1 released (2017), PyTorch … (2018).] Such an algorithm is certainly simple and direct, but it is extremely inefficient, and essentially infeasible, for large-scale, high-dimensional optimization problems. Gradient descent is the most commonly used optimization algorithm in neural network training; combined with the parallel acceleration of powerful graphics processing units (GPUs), it is well suited to optimizing neural network models on massive data, and hence also to optimizing the linear neuron model here. We first apply gradient descent in a simple form to solve the neuron model's prediction problem. Since gradient descent is one of the core algorithms of deep learning …
    from matplotlib import pyplot as plt  # plotting utilities
    from utils import plot_image, plot_curve, one_hot  # convenience plotting helpers
    batch_size = 512  # batch size
    # Training set: automatically downloads MNIST and saves it to the mnist_data folder
    train_db = torchvision.datasets.MNIST('mnist_data' …
    0 points | 439 pages | 29.91 MB | 1 year ago
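    A minimal sketch of the gradient-descent update the excerpt describes, on a toy one-parameter objective (the objective and learning rate are invented for illustration):

        import torch

        # Gradient descent: w <- w - lr * dL/dw, repeated until convergence.
        w = torch.tensor(0.0, requires_grad=True)
        lr = 0.1
        for step in range(100):
            loss = (w - 3.0) ** 2  # toy convex objective with minimum at w = 3
            loss.backward()
            with torch.no_grad():
                w -= lr * w.grad   # take a step against the gradient
                w.grad.zero_()     # clear the gradient for the next iteration
        print(w.item())            # approximately 3.0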
  • PDF document: Machine Learning Pytorch Tutorial

    … batches, enables multiprocessing. dataset = MyDataset(file); dataloader = DataLoader(dataset, batch_size, shuffle=True). More info about batches and shuffling here. Dataset & Dataloader: dataset = MyDataset(file); dataloader = DataLoader(dataset, batch_size=5, shuffle=False). [Diagram residue: the DataLoader gathers items from the Dataset via __getitem__(0), __getitem__(1), __getitem__(2), __getitem__(3), … into mini-batches of size batch_size.] torch.optim: optimizer = torch.optim.SGD(model.parameters(), lr, momentum=0). For every batch of data: 1. Call optimizer.zero_grad() to reset gradients of model parameters. 2. Call loss.backward() …
    0 points | 48 pages | 584.86 KB | 1 year ago
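    Putting the tutorial's pieces together, a minimal end-to-end sketch; MyDataset here is a synthetic stand-in for the tutorial's MyDataset(file), and the model and data shapes are invented:

        import torch
        from torch.utils.data import Dataset, DataLoader

        class MyDataset(Dataset):
            def __init__(self):
                # Synthetic data standing in for whatever MyDataset(file) loads.
                self.x = torch.randn(20, 4)
                self.y = torch.randn(20, 1)

            def __len__(self):
                return len(self.x)

            def __getitem__(self, idx):
                # DataLoader calls __getitem__ once per index and stacks
                # batch_size items into one mini-batch.
                return self.x[idx], self.y[idx]

        dataset = MyDataset()
        dataloader = DataLoader(dataset, batch_size=5, shuffle=False)

        model = torch.nn.Linear(4, 1)
        criterion = torch.nn.MSELoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0)

        for x, y in dataloader:    # for every batch of data:
            optimizer.zero_grad()  # 1. reset gradients of model parameters
            loss = criterion(model(x), y)
            loss.backward()        # 2. compute gradients of the loss
            optimizer.step()       # 3. update parameters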