Machine Learning Course (Wenzhou University) - 02 Review of Mathematical Foundations - 2. CS229-Prob
(12 pages, 1.17 MB, 1 year ago)

Zabbix 1.8 Manual
…acknowledged PROBLEM events for all triggers disregarding their state. Supported since 1.8.3. {TRIGGER.PROBLEM.EVENTS.PROBLEM.UNACK} X Number of unacknowledged PROBLEM events for triggers in PROBLEM state. Supported since 1.8.3. {TRIGGER.EVENTS.ACK} X X Number of acknowledged events for a map element in maps, or for the trigger which generated… acknowledged PROBLEM events for all triggers disregarding their state. Supported since 1.8.3. {TRIGGER.PROBLEM.EVENTS.PROBLEM.ACK} X Number of acknowledged PROBLEM events…
(485 pages, 9.28 MB, 1 year ago)

SQLite, Firefox, and our small IMDB movie database
CAST (aid, mid, role), MOVIE_DIRECTOR (did, mid), MOVIE_GENRE (mid, genre), DIRECTOR_GENRE (did, genre, prob). The data we use for this class is only a small subset of the large IMDB movie database, thus you …
Tables: Cast (aid, mid, role); Movie_director (did, mid); Movie_genre (mid, genre); Director_genre (did, genre, prob)
Small IMDB Movie Database: Example Tuples
  Actor (id, fname, lname, gender): 933, Lewis, Abernathy, M; 2547, …
  Director (id, fname, lname): 429, Andrew, Adamson; 2931, Darren, Aronofsky; …
  Director_genre (did, genre, prob): 429, Adventure, 0.75; 429, Music, 0.25; …
  Movie_director (did, mid): 11652, 10920; 44291, 17173
(22 pages, 1.83 MB, 1 year ago)

Deep Learning with PyTorch (Teacher Long Long) - Preview Edition 202112
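The IMDB schema fragments and example tuples in the snippet above can be loaded into an in-memory SQLite database. This is an illustrative sketch, not the course's own setup: only the Director_genre relation is created, using the two example tuples shown in the snippet.

```python
import sqlite3

# Build an in-memory database with the Director_genre relation from the snippet.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Director_genre (did INTEGER, genre TEXT, prob REAL)")
conn.executemany(
    "INSERT INTO Director_genre VALUES (?, ?, ?)",
    [(429, "Adventure", 0.75), (429, "Music", 0.25)],
)

# Most likely genre for director 429, by the prob column.
row = conn.execute(
    "SELECT genre, prob FROM Director_genre WHERE did = 429 "
    "ORDER BY prob DESC LIMIT 1"
).fetchone()
print(row)  # ('Adventure', 0.75)
```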
for r, log_prob in self.data[::-1]:  # traverse the trajectory data in reverse
    R = r + gamma * R  # accumulate the return at each time step
    # a gradient is computed at every time step:
    # grad_R = -log_P * R * grad_theta
    loss = -log_prob * R
The full training and update code is as follows: … tape):  # compute gradients and update the policy network parameters; tape is the gradient recorder
    R = 0  # the return of the terminal state starts at 0
    for r, log_prob in self.data[::-1]:  # traverse the trajectory data in reverse
        R = r + gamma * R  # accumulate the return at each time step
        # grad_R = -log_P * R * grad_theta
        loss = -log_prob * R
        with tape.stop_recording():
            # optimize the policy network
            grads = …
(from Preview Edition 202112, Ch. 14: Reinforcement Learning)
(439 pages, 29.91 MB, 1 year ago)

Machine Learning Course (Wenzhou University) - Scikit-learn
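The reversed loop in the snippet above computes discounted returns for a REINFORCE-style update. A minimal pure-Python sketch of just that loop, with a hypothetical three-step trajectory and an assumed gamma of 0.9 (the reward and log-probability values are made up for illustration):

```python
# data holds (reward, log_prob) pairs for one episode, in time order.
data = [(1.0, -0.5), (0.0, -0.7), (2.0, -0.2)]  # hypothetical trajectory
gamma = 0.9

R = 0.0       # the return of the terminal state starts at 0
returns = []  # discounted return at each time step, latest step first
losses = []   # per-step policy-gradient loss terms: -log_prob * R
for r, log_prob in data[::-1]:  # traverse the trajectory in reverse
    R = r + gamma * R           # accumulate the discounted return
    returns.append(R)
    losses.append(-log_prob * R)

# returns is approximately [2.0, 1.8, 2.62]: each step's return folds in
# the discounted returns of all later steps.
print(returns)
print(losses)
```

Summing the per-step losses and differentiating with respect to the policy parameters gives the gradient the snippet's tape computes.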
_depth=5)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)
Using the decision-tree classifier for a binary classification problem; y_prob holds each sample's predicted probabilities for class "0" and class "1".
1. Scikit-learn Overview: logistic regression, support vector machines, naive Bayes, k-nearest neighbors; linear_model …
RandomForestClassifier(n_estimators=20)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)
AdaBoost, gradient boosting: ensemble.AdaBoostClassifier, ensemble.AdaBoostRegressor
(31 pages, 1.18 MB, 1 year ago)

Zabbix 2.0 Manual
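The scikit-learn calls in the snippet above (fit, predict, predict_proba on a depth-limited decision tree) can be run end to end. The tiny one-feature dataset below is made up for illustration; the API calls are the ones shown in the snippet.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical, cleanly separable toy data: class 0 near 0, class 1 near 10.
X_train = [[0.0], [1.0], [2.0], [9.0], [10.0], [11.0]]
y_train = [0, 0, 0, 1, 1, 1]
X_test = [[1.5], [9.5]]

clf = DecisionTreeClassifier(max_depth=5)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)        # hard class labels
y_prob = clf.predict_proba(X_test)  # per sample: [P(class 0), P(class 1)]

print(y_pred)   # [0 1]
print(y_prob)   # one row per test sample; each row sums to 1
```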
…acknowledged PROBLEM events for all triggers disregarding their state. Supported since 1.8.3. {TRIGGER.EVENTS.PROBLEM.UNACK} X X Number of unacknowledged PROBLEM events… acknowledged PROBLEM events for triggers in PROBLEM state. Supported since 1.8.3. {TRIGGER.PROBLEM.EVENTS.PROBLEM.UNACK} X Number of unacknowledged PROBLEM events for triggers in PROBLEM state. Supported since 1.8.3. {TRIGGER.EXPRESSION} X Trigger expression. Supported since 1.8.12. {TRIGGER.HOSTGROUP…
(791 pages, 9.66 MB, 1 year ago)

Zabbix 2.4 Manual
…in progress or failed). {EVENT.ACK.HISTORY} X Log of acknowledgements on the problem. {EVENT.ACK.STATUS} X Acknowledgement status of the event (Yes/No). {EVENT.AGE} X … acknowledged PROBLEM events for all triggers disregarding their state. Supported since 1.8.3. {TRIGGER.EVENTS.PROBLEM.UNACK} X X Number of unacknowledged PROBLEM events… acknowledged PROBLEM events for triggers in PROBLEM state. Supported since 1.8.3. {TRIGGER.PROBLEM.EVENTS.PROBLEM.UNACK} X Number of unacknowledged PROBLEM events…
(910 pages, 10.81 MB, 1 year ago)

Zabbix 2.2 Manual
…in progress or failed). {EVENT.ACK.HISTORY} X Log of acknowledgements on the problem. {EVENT.ACK.STATUS} X Acknowledgement status of the event (Yes/No). {EVENT.AGE} X … acknowledged PROBLEM events for all triggers disregarding their state. Supported since 1.8.3. {TRIGGER.EVENTS.PROBLEM.UNACK} X X Number of unacknowledged PROBLEM events… acknowledged PROBLEM events for triggers in PROBLEM state. Supported since 1.8.3. {TRIGGER.PROBLEM.EVENTS.PROBLEM.UNACK} X Number of unacknowledged PROBLEM events…
(918 pages, 11.28 MB, 1 year ago)

Deep Learning and PyTorch Hands-On Introduction - 35. Early-stopping-Dropout
https://github.com/MorvanZhou/PyTorch-Tutorial Clarification ▪ torch.nn.Dropout(p=dropout_prob) ▪ tf.nn.dropout(keep_prob) Behavior between train and test BatchNorm Stochastic Gradient Descent ▪ Stochastic…
(16 pages, 1.15 MB, 1 year ago)

Machine Learning Course (Wenzhou University) - 05 Deep Learning - Deep Learning in Practice
a[L] Dropout. Dropout serves a purpose similar to L2 regularization, but it is applied in a different way, behaves differently, and can even be better suited to certain input ranges. keep-prob = 1 (no dropout); keep-prob = 0.5 (a common value: half the neurons are kept). Dropout is applied during training and disabled at test time! Dropout regularization. Regularization: early stopping means stopping neural-network training early…
(19 pages, 1.09 MB, 1 year ago)
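The keep-prob mechanics described in the snippet above (e.g. keep-prob = 0.5, applied only at training time) can be sketched as inverted dropout in NumPy. The scaling by 1/keep_prob is what lets the network run unchanged at test time; the activation values and seed below are purely illustrative, and note that torch.nn.Dropout's p is the drop probability, i.e. 1 - keep_prob.

```python
import numpy as np

def dropout_train(a, keep_prob, rng):
    """Inverted dropout: zero units with prob 1 - keep_prob, rescale the rest."""
    mask = rng.random(a.shape) < keep_prob  # keep each unit with prob keep_prob
    return a * mask / keep_prob             # rescale so the expected output equals a

def dropout_test(a):
    """At test time dropout is disabled: activations pass through unchanged."""
    return a

rng = np.random.default_rng(0)
a = np.ones(4)  # hypothetical layer activations a[L]
out = dropout_train(a, keep_prob=0.5, rng=rng)

# Surviving units are scaled to 2.0 (= 1 / keep_prob); dropped units are 0.
print(out)
print(dropout_test(a))  # unchanged at test time
```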
369 results in total