How Deep Do You Go?
Contributing to the os Package. Oliver Stenbom, GopherCon 2019, July 25th. On an unsuspecting Monday last July, the team I was working on at the time received a bug report. The report…
70 pages | 14.56 MB | 1 year ago

VMware SIG Deep Dive into Kubernetes Scheduling
Performance and high availability options for vSphere. Steve Wong, Michael Gasch. KubeCon North America, December 13, 2018. …Open Source Community Relations…
28 pages | 1.85 MB | 1 year ago

Prometheus Deep Dive - Monitoring. At scale.
Richard Hartmann & Frederic Branczyk (@TwitchiH & @fredbrancz), 2018-12-12. Agenda: Intro, 2.0 to 2.2.1, 2.4 to 2.6, Beyond, Outro. Show of hands: who has heard of Prometheus? …Prometheus in production? …Prometheus 101: inspired by Google's…
34 pages | 370.20 KB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
…about a variety of techniques in the last few chapters to improve efficiency and boost the quality of deep learning models. These techniques are just a small subset of the available techniques. It is often… …of these four options to make an informed decision. Blessed with a large research community, the deep learning field is growing at a rapid pace. Over the past few years, we have seen newer architectures… …the performance benchmarks higher. Figure 7-1 shows some of the choices we face when working on a deep learning problem in the vision domain, for instance. Some of these choices are boolean, others have…
33 pages | 2.48 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction
Introduction to Efficient Deep Learning. Welcome to the book! This chapter is a preview of what to expect in the book. We start off by providing an overview of the state of deep learning, its applications, and rapid growth. We will establish our motivation behind seeking efficiency in deep learning models. We will also introduce core areas of efficiency techniques (compression techniques, learning techniques, …) …that even if you just read this chapter, you would be able to appreciate why we need efficiency in deep learning models today, how to think about it in terms of metrics that you care about, and finally…
21 pages | 3.17 MB | 1 year ago

8 4 Deep Learning with Python - 费良宏 (Fei Lianghong)
Goals for 2016: web crawlers + deep learning + natural language processing = ? Microsoft, Apple, AWS. The most exciting event of this year? 2016.1.28: "Mastering the game of Go with deep neural networks and tree search". The most exciting event of this year? March 2016: AlphaGo defeats Lee Sedol (9-dan) 4:1. Artificial intelligence vs. machine learning vs. deep learning. Torch (NYU, 2002), Facebook AI, Google DeepMind; Theano (University of Montreal, ~2010), academic; Keras, "Deep Learning library for Theano and TensorFlow"; Caffe (Berkeley), convolutional neural networks, Yangqing Jia; TensorFlow (Google); Spark…
49 pages | 9.06 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
…footprint or quality, we should consider employing suitable efficient architectures. The progress of deep learning is characterized by phases of architectural breakthroughs that improve on previous results… …enjoy, and so on), without needing to know all the encyclopedic data about them. When working with deep learning models and inputs such as text, which are not in numerical format, having an algorithmic… …inputs should have a larger distance between each other. Embeddings form a crucial part of modern deep-learning models, and we are excited to explain how they work. In the following section we will explain…
53 pages | 3.92 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
…you'll go.” ― Dr. Seuss. Model quality is an important benchmark to evaluate the performance of a deep learning model. A language translation application that uses a low-quality model would struggle with… …is because, firstly, regularization and dropout are fairly straightforward to enable in any modern deep learning framework. Secondly, data augmentation and distillation can bring significant efficiency… …Now, let's dive into these learning techniques to understand what they are and how to employ them in deep learning workflows. We start with data augmentation in the next section. Data Augmentation: Data…
56 pages | 18.93 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
“…to make it shorter.” ― Blaise Pascal. In the last chapter, we discussed a few ideas to improve deep learning efficiency. Now, we will elaborate on one of those ideas: compression techniques. Compression… …a gentle introduction to the idea of compression. Details of quantization and its applications in deep learning follow right after. The quantization section delves into the implementation details using… …compression might lead to degradation in quality. In our case, we are concerned with compressing deep learning models. What do we really mean by compressing, though? As mentioned in chapter 1, we can break…
33 pages | 1.96 MB | 1 year ago

Harbor Deep Dive - Open source trusted cloud native registry
Henry Zhang, Chief Architect, VMware R&D China; Steven Zou, Staff Engineer, VMware R&D China. Nov. 2018. goharbor.io. Initiated by VMware… …LDAP/Active Directory, supporting services, Harbor packaging, Docker, Kubernetes, Cloud Foundry… Deep dive into Harbor through panel discussion! Q1: What other features should Harbor provide? Q2: How does…
15 pages | 8.40 MB | 1 year ago
1,000 results in total