Learning Gulp
0 credits | 45 pages | 977.19 KB | 1 year ago

Machine Learning
Machine Learning, Lecture 10: Neural Networks and Deep Learning. Feng Li (fli@sdu.edu.cn, https://funglee.github.io), School of Computer Science and Technology, Shandong University, Fall 2018. … Deep Feedforward … usually a highly non-linear function • Feedforward networks are of extreme importance to machine learning practitioners • The convolutional neural networks (CNNs) used for object recognition from photos are … units), and output layer … Neural Feedforward Networks (Contd.) • We approximate f*(x) by learning f(x) from the given training data • In the output layer, f(x) ≈ y for each training data point, but the …
0 credits | 19 pages | 944.40 KB | 1 year ago

Learning Laravel
0 credits | 216 pages | 1.58 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
Chapter 3 - Learning Techniques. “The more that you read, the more things you will know. The more that you learn, the more places you'll go.” ― Dr. Seuss … Model quality is an important benchmark to evaluate the performance of a deep learning model. A language-translation application that uses a low-quality model would struggle with consumer adoption because it wouldn’t serve its intended purpose … flexibility to trade off some quality for smaller footprints. In the first chapter, we briefly introduced learning techniques such as regularization, dropout, data augmentation, and distillation to improve quality …
0 credits | 56 pages | 18.93 MB | 1 year ago

Learning Socket.IO
0 credits | 15 pages | 870.16 KB | 1 year ago

Machine Learning Pytorch Tutorial
Machine Learning Pytorch Tutorial. TA: Yuan Tseng (曾元), 2022.02.18. Outline: ● Background: Prerequisites & What is Pytorch? ● Training & Testing Neural Networks in Pytorch ● Dataset & Dataloader ● Tensors … link3 … 2. Deep Learning Basics ■ Prof. Lee’s 1st & 2nd lecture videos from last year ■ ref: link1, link2 … Some knowledge of NumPy will also be useful! What is PyTorch? ● A machine learning framework … in computing with more cores for arithmetic calculations ○ See “What is a GPU and do you need one in deep learning?” … Tensors – Gradient Calculation: >>> x = torch.tensor([[1., 0.], [-1., 1.]], requires_grad=True) …
0 credits | 48 pages | 584.86 KB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
Advanced Learning Techniques. “Tell me and I forget, teach me and I may remember, involve me and I learn.” – Benjamin Franklin … This chapter is a continuation of Chapter 3, where we introduced learning techniques. To recap, learning techniques can help us meet our model quality goals. Techniques like distillation and data augmentation improve the model quality without increasing the footprint of the model … this chapter by presenting self-supervised learning, which has been instrumental in the success of natural language models like BERT. Self-supervised learning helps models to quickly achieve impressive …
0 credits | 31 pages | 4.03 MB | 1 year ago

Machine Learning with ClickHouse
Machine Learning with ClickHouse. Nikolai Kochetov, ClickHouse developer. … Experimental dataset: NYC Taxi and Uber Trips › Where to download: https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page … ClickHouse: stochasticLinearRegression(parameters)(target, x1, ..., xN). Available parameters: › learning_rate › l2_regularization › batch_size › optimizer: Adam, SGD, Momentum, Nesterov. All parameters … = df['total_amount'] … # Initialize CatBoostRegressor: model = CatBoostRegressor(iterations=1000, learning_rate=0.1, depth=6) … # Fit model: model.fit(train_data, train_labels, cat_features=[1]) … # Save model …
0 credits | 64 pages | 1.38 MB | 1 year ago
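The parameters this excerpt lists for stochasticLinearRegression (learning_rate, l2_regularization, batch_size, optimizer) map directly onto plain minibatch stochastic gradient descent on squared error. The following is an illustrative pure-Python sketch of that idea, not ClickHouse's implementation: the function name, defaults, and toy data are all my own.

```python
import random

def sgd_linear_regression(xs, ys, learning_rate=0.05, l2_regularization=0.0,
                          batch_size=4, epochs=2000, seed=0):
    # Minibatch SGD for y ~ w*x + b with an optional L2 penalty on w,
    # mirroring the parameter names from the ClickHouse excerpt.
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    order = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(order)
        for start in range(0, len(order), batch_size):
            batch = order[start:start + batch_size]
            grad_w = grad_b = 0.0
            for i in batch:
                err = (w * xs[i] + b) - ys[i]   # prediction error
                grad_w += err * xs[i]
                grad_b += err
            n = len(batch)
            w -= learning_rate * (grad_w / n + l2_regularization * w)
            b -= learning_rate * (grad_b / n)
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0, 11.0]   # generated from y = 2x + 1
w, b = sgd_linear_regression(xs, ys)
```

With noiseless data the fitted parameters converge close to the generating values w = 2, b = 1; the Adam, Momentum, and Nesterov optimizers named in the excerpt would replace the plain update step with variants that track gradient history.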
Solving Nim by the Use of Machine Learning
Solving Nim by the Use of Machine Learning: Exploring How Well Nim Can be Played by a Computer. Mikael Nielsen Røykenes. Thesis submitted for the degree of Master in Informatics: Programming and Networks. … © 2019 Mikael Nielsen Røykenes. Solving Nim by the Use of Machine Learning, http://www.duo.uio … Contents (excerpt): 3.4 The Sprague-Grundy Theorem, p. 6 … 4 Machine Learning, p. 6 … 4.1 Reinforcement learning, p. 7 … 4.1.1 The Principle …
0 credits | 109 pages | 6.58 MB | 1 year ago
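The table of contents above points at the Sprague-Grundy theorem. For plain Nim its classical consequence is that a position is lost for the player to move exactly when the XOR ("nim-sum") of the heap sizes is zero, and otherwise some move restores a zero nim-sum. A minimal sketch of that strategy (the function names are mine, not the thesis's):

```python
from functools import reduce
from operator import xor

def nim_sum(heaps):
    # XOR of all heap sizes; zero means the player to move
    # loses under optimal play.
    return reduce(xor, heaps, 0)

def winning_move(heaps):
    # Return (heap_index, stones_to_remove) restoring a zero
    # nim-sum, or None if the position is already losing.
    s = nim_sum(heaps)
    if s == 0:
        return None
    for i, h in enumerate(heaps):
        target = h ^ s
        if target < h:          # this heap can be shrunk to cancel s
            return (i, h - target)

print(winning_move([3, 4, 5]))  # -> (0, 2)
```

Applying that move leaves heaps [1, 4, 5], whose nim-sum is zero, handing the losing position to the opponent.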
1,000 results in total