PyTorch Release Notes
…similar to the model that is discussed in the paper "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". This model script is available on GitHub and NGC. Known … (PyTorch Release 23.06; PyTorch RN-08516-001_v23.07)
0 credits | 365 pages | 2.94 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
…quality is an important benchmark to evaluate the performance of a deep learning model. A language translation application that uses a low-quality model would struggle with consumer adoption because it wouldn't … speak different languages. An application that employs a high-quality model with a reasonable translation accuracy would garner better consumer support. In this chapter, our focus will be on the techniques … techniques use models to generate samples for labels. Consider a training sample for English-to-Spanish translation: [English: "I am doing really well", Spanish: "Estoy muy bien"]. Let's say we have another model …
0 credits | 56 pages | 18.93 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
…sequence across n time steps. RNNs are also used for sequence-to-sequence applications like machine translation, where both the input and output are sequences. Consider the task of training a model to translate … position q in the second sequence. Figure 4-16 shows a sample attention matrix for an English-Spanish translation task. Matrix cells represent the attention scores between the respective English and Spanish words. … Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014). [19] Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term …
0 credits | 53 pages | 3.92 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction
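The Chapter 4 excerpt above mentions an attention matrix whose cells hold attention scores between English and Spanish positions. A minimal sketch of how such scores can be computed with scaled dot-product attention (the dimensions and tensors here are hypothetical illustrations, not values from the book):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 8                          # hypothetical embedding size
spanish = torch.randn(4, d)    # 4 target (Spanish) positions act as queries
english = torch.randn(5, d)    # 5 source (English) positions act as keys

# Scaled dot-product attention: row i holds the attention scores of
# Spanish position i over all English positions; each row sums to 1.
scores = F.softmax(spanish @ english.T / d ** 0.5, dim=-1)
print(scores.shape)  # torch.Size([4, 5])
```

Visualizing `scores` as a heatmap yields exactly the kind of attention matrix the excerpt's Figure 4-16 describes.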
…experience in low- or no-connectivity areas. This is made possible with an efficient on-device translation model. Explosion of Models: often there might be multiple ML models being served concurrently … networks for common datasets like CIFAR-10, ImageNet, WMT, etc. An example network for machine translation is shown in Figure 1-14, where, using Neural Architecture Search, the authors improve over the … Encoder architecture that is the leading architecture being used for complex NLP tasks such as translation. The NAS-generated architecture, which is named the Evolved Transformer, achieves better quality at …
0 credits | 21 pages | 3.17 MB | 1 year ago

动手学深度学习 v2.0
…is an affine transformation of the input features. An affine transformation is characterized by a linear transformation of the features through a weighted sum, plus a translation through the bias term. Given a dataset, our goal is to find weights w and bias b such that the model's predictions roughly match the true prices in the data. The predicted output is determined by an affine transformation of the input features under the linear model, and the affine transformation is determined by the chosen weights and bias. … learn useful representations. (Chapter 6: Convolutional Neural Networks; Figure 6.1.1: an example image from the "Where's Waldo" game.) Now let us summarize the ideas above to help design neural network architectures suited to computer vision. 1. Translation invariance: no matter where the object of interest appears in the image, the first few layers of the network should respond similarly to the same image region. 2. Locality: the network's front … models play a crucial role in all kinds of modern AI applications, so we make them the focus of the remainder of this chapter and of Section 10. To that end, this section introduces the machine translation problem and the dataset used later in the text. Machine translation refers to automatically translating a sequence from one language into another. In fact, this field of research dates back to the 1940s, shortly after the invention of digital computers, particularly the use of computers to break language codes during World War II. For decades, …
0 credits | 797 pages | 29.45 MB | 1 year ago

Machine Learning Pytorch Tutorial
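The affine transformation described in the excerpt above — a weighted sum (the linear part) plus a bias term (the translation) — can be sketched as follows; the weights, bias, and inputs are made-up values for illustration only:

```python
import torch

w = torch.tensor([2.0, -3.4])        # hypothetical weights
b = torch.tensor(4.2)                # hypothetical bias (the "translation")
X = torch.tensor([[1.0, 2.0],
                  [3.0, 4.0]])       # two samples, two features each

# Affine transformation: linear transformation (X @ w) plus translation (+ b).
y_hat = X @ w + b
print(y_hat)  # tensor([-0.6000, -3.4000])
```

Training then amounts to adjusting `w` and `b` so that `y_hat` matches the observed targets, as the excerpt states.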
…BERT, GPT, …) ○ Fairseq (sequence modeling for NLP & speech) ○ ESPnet (speech recognition, translation, synthesis, …) ○ Most implementations of recent deep learning papers ○ … References ● Machine …
0 credits | 48 pages | 584.86 KB | 1 year ago

复杂环境下的视觉同时定位与地图构建 (Visual Simultaneous Localization and Mapping in Complex Environments)
…the total frame number), and the tracking success ratio after initialization. Group A: simple translation; Group B: there are loops; Group C: slow and nearly pure rotation; Group D: fast motion with strong …
0 credits | 60 pages | 4.61 MB | 1 year ago

Keras: 基于 Python 的深度学习库 (Keras: A Python-Based Deep Learning Library)
…PDF version, please visit https://github.com/wanzhenchn/keras-docs-zh. Thanks for the Chinese translation work done by keras-team; this document is produced based on it. Statement: This document can … "…are deceitful, bearing messages that will not be fulfilled; those that come forth through polished horn carry truth behind them, to be accomplished for those who see them." Homer, Odyssey 19.562 ff (Shewring translation). … 2. Why choose Keras? Among the countless deep learning frameworks available today, why use Keras rather than another? Below are some comparisons between Keras and the existing alternatives. … Encoder-Decoder for Statistical Machine Translation • On the Properties of Neural Machine Translation: Encoder-Decoder Approaches (About Keras Layers) • Empirical Evaluation of Gated Recurrent Neural …
0 credits | 257 pages | 1.19 MB | 1 year ago

【PyTorch深度学习-龙龙老师】-测试版202112 (PyTorch Deep Learning, by Teacher Long Long; beta, 2021-12)
…face swapping, super night mode, and a series of other highly practical and impressive tasks; for reasons of space, we do not elaborate further. (Figure 1.17: automatically generated images; Figure 1.18: artistic style transfer results.) 1.4.2 Natural Language Processing. Machine Translation: most past machine translation algorithms were based on statistical machine translation models, which was also the technology used by Google's translation system before 2016. In November 2016, based on the Seq2Seq model, Google launched a neural machine … 2016, pp. 2172-2180. [4] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks," in Computer Vision (ICCV), 2017 IEEE International …
0 credits | 439 pages | 29.91 MB | 1 year ago
9 results in total