《TensorFlow 2项目进阶实战》 Part 6 - Business Deployment: Building a Shelf-Insight Web Application
Business deployment: building a shelf-insight web application (video course for 《TensorFlow 2项目进阶实战》). Outline: connecting the AI pipeline, theory and practice — product detection and product recognition; presenting AI results, theory and practice — visualizing recognition results with OpenCV ("Hello TensorFlow"); building the AI SaaS, theory — web framework selection: Python web frameworks, Flask, common Flask extensions, a typical Flask project layout, and a sample manage.py startup file; building the AI SaaS, theory — databases.
54 pages | 6.30 MB | 1 year ago
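The outline above mentions a sample manage.py startup file for a Flask-based AI SaaS. Below is a minimal, hypothetical sketch of such a file; the route name, port, and response fields are my own illustration, not taken from the course:

```python
# Hypothetical manage.py for a shelf-insight service; illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/detect", methods=["POST"])
def detect():
    # A real service would run product detection/recognition here and
    # draw the results onto the shelf image with OpenCV.
    image_bytes = request.files["image"].read()
    return jsonify({"received_bytes": len(image_bytes), "products": []})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
```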
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
… results. For example, between quantization and clustering, which one is preferable? What is the performance impact when both are used together? We have four options: none, quantization, clustering, and both. … past few years, we have seen newer architectures, techniques and training procedures pushing the performance benchmarks higher. Figure 7-1 shows some of the choices we face when working on a deep learning … process of learning are called hyperparameters to differentiate them from model parameters. The performance of deep learning relies on a set of good hyperparameters. Some of the commonly tuned hyperparameters …
33 pages | 2.48 MB | 1 year ago
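The snippet enumerates four compression options (none, quantization, clustering, or both). A small sketch of how such a two-flag search space could be enumerated for an automated comparison — my own illustration, not code from the book:

```python
# Enumerate the 2x2 space of (quantization, clustering) choices.
import itertools

labels = {
    (False, False): "none",
    (True, False): "quantization",
    (False, True): "clustering",
    (True, True): "both",
}

for quantize, cluster in itertools.product([False, True], repeat=2):
    config = labels[(quantize, cluster)]
    # In a real study, each configuration would be trained and its accuracy,
    # model size, and latency recorded for comparison.
    print(config)
```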
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
… models posed deployment challenges. What good is a model that cannot be deployed in practical applications! Efficient Architectures aim to improve model deployability by proposing novel ways to reduce … Naturally, increasing d will increase the quality of the embeddings, which might lead to better performance in downstream tasks, but it will also increase the size of the embedding table. Size of the vocabulary … epochs. However, we should discuss a couple of follow-up topics around how to scale them to NLP applications and beyond. My embedding table is huge! Help me! While embedding tables help in dimensionality …
53 pages | 3.92 MB | 1 year ago
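The snippet points out that the embedding table grows with both the vocabulary size and the embedding width d. A back-of-the-envelope sketch; the numbers are illustrative, not from the book:

```python
# Embedding table size = vocabulary size x embedding dimension x bytes per value.
vocab_size = 100_000      # number of tokens in the vocabulary
d = 256                   # embedding dimension
bytes_per_param = 4       # float32

num_params = vocab_size * d
size_mb = num_params * bytes_per_param / 1e6
print(f"{num_params:,} parameters ~= {size_mb:.1f} MB")  # 25,600,000 parameters ~= 102.4 MB
```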
keras tutorial
… learning, Keras models, Keras layers, Keras modules and finally conclude with some real-time applications. Audience: This tutorial is prepared for professionals who are aspiring to make a career … 15. Keras ― Applications (page 83) … designed to quickly define deep learning models. Well, Keras is an optimal choice for deep learning applications. Features: Keras leverages various optimization techniques to make high level neural network …
98 pages | 1.57 MB | 1 year ago
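Since the tutorial's selling point is that Keras is "designed to quickly define deep learning models", here is a minimal sketch of what that looks like. The layer sizes and input shape are illustrative, and the tf.keras namespace is assumed rather than standalone Keras:

```python
import tensorflow as tf

# A tiny fully connected classifier, e.g. for 28x28 images flattened to 784 values.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```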
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
… more places you'll go." ― Dr. Seuss. Model quality is an important benchmark to evaluate the performance of a deep learning model. A language translation application that uses a low quality model would … samples, including repeats, seen by the model to reach the desired performance threshold (in terms of accuracy, precision, recall or other performance metrics). We designate a new model training setup to be more sample efficient if it achieves similar or better performance with fewer data samples when compared to the baseline. Think of it as teaching a child to recognize common household objects such as a …
56 pages | 18.93 MB | 1 year ago
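The quoted definition can be made concrete with a tiny helper that reports how many samples a training run needed to cross a target metric; the setup that crosses it with fewer samples is the more sample-efficient one. This is my own illustration, not code from the book:

```python
def samples_to_reach(history, threshold):
    """history: list of (samples_seen, metric) pairs recorded during training."""
    for samples_seen, metric in history:
        if metric >= threshold:
            return samples_seen
    return None  # the threshold was never reached

baseline = [(10_000, 0.71), (20_000, 0.80), (40_000, 0.86)]
improved = [(10_000, 0.78), (20_000, 0.86)]
print(samples_to_reach(baseline, 0.85))  # 40000
print(samples_to_reach(improved, 0.85))  # 20000 -> more sample efficient
```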
Lecture 1: Overview
… Concepts of Machine Learning (Feng Li, SDU, September 6, 2023). Instructor: Prof. Feng Li, Web: https://funglee.github.io, Office: N3-312-1, Education: 2010-2015, PhD, Nanyang Technological University … program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. [Tom Mitchell, Machine Learning] … What is Machine Learning? (Contd.) Improve on task T, with respect to performance metric P, based on experience E. …
57 pages | 2.41 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
… a new task: 1. Data Efficiency: It relies heavily on labeled data, and hence achieving a high performance on a new task requires a large number of labels. 2. Compute Efficiency: Training for new tasks … labeling. Therefore, we can simply use e-books, Wikipedia and other sources for NLU related models, and web images & videos for computer vision models. We can then construct the final dataset for the pretext … getting to the final model such as experiments with architectures, hyper-parameter tuning, and model performance debugging. However, since the pre-trained model is intended to be generalizable across many downstream …
31 pages | 4.03 MB | 1 year ago
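The fragments above contrast training a new task from scratch with reusing a pre-trained, generalizable model. A minimal fine-tuning sketch of the latter, assuming tf.keras and an ImageNet-pretrained MobileNetV2 backbone; the backbone, head size, and class count are my own choices, not from the chapter:

```python
import tensorflow as tf

# Reuse a pre-trained backbone and train only a small task-specific head,
# which needs far fewer labels and much less compute than training from scratch.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # small downstream task
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```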
PyTorch Release Notes
… 8-bit floating point (FP8) precision on Hopper GPUs, which provides better training and inference performance with lower memory utilization. Transformer Engine also includes a collection of highly optimized … Core Examples: The tensor core examples provided in GitHub and NGC focus on achieving the best performance and convergence from NVIDIA Volta™ tensor cores by using the latest deep learning example networks … This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time. ‣ ResNeXt101-32x4d model: This model was introduced in the Aggregated Residual Transformations …
365 pages | 2.94 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction
… to expect in the book. We start off by providing an overview of the state of deep learning, its applications, and rapid growth. We will establish our motivation behind seeking efficiency in deep learning … deep learning models. Introduction to Deep Learning: Machine learning is being used in countless applications today. It is a natural fit in domains where there might not be a single algorithm that works perfectly … Unlike traditional algorithm problems where we expect exact optimal answers, machine learning applications can often tolerate approximate responses, since often there are no exact answers. Machine learning …
21 pages | 3.17 MB | 1 year ago
【PyTorch深度学习-龙龙老师】-测试版202112 (PyTorch Deep Learning, preview edition 2021-12)
… Bradbury, James; Chanan, Gregory; …; Chintala, Soumith (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. In: Wallach, H.; Larochelle, H.; Beygelzimer, A.; d'Alché-Buc, F. … Curran Associates, Inc. Retrieved from: http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf … Chapter 4, PyTorch Basics: "I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines." ― Claude Shannon … computation logic can be defined freely and is therefore more general; we will see the advantages of custom networks in the chapter on convolutional neural networks. 8.5 Model Zoo: for commonly used network models such as ResNet and VGG, there is no need to build the network by hand; these classic models can be created and used with a single line of code from the keras.applications submodule, and pretrained weights can be loaded by setting the weights parameter, which is very convenient. 8.5.1 Loading a model: taking the ResNet50 network as an example, one typically …
439 pages | 29.91 MB | 1 year ago
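Section 8.5.1 quoted above describes creating a classic model in one line from the keras.applications submodule and loading pretrained weights via the weights argument. A minimal sketch of that idea, using the tf.keras namespace; the book's own code may differ:

```python
import tensorflow as tf

# Create ResNet50 in one line and load ImageNet-pretrained weights.
resnet = tf.keras.applications.ResNet50(weights="imagenet")
resnet.summary()
```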
37 documents in total.