Lecture 5: Gaussian Discriminant Analysis, Naive Bayes and EM Algorithm
Feng Li, Shandong University, fli@sdu.edu.cn, September 27, 2023. Outline: 1. Probability Theory Review; 2. A Warm-Up Case; 3. Gaussian Discriminant Analysis; 4. Naive Bayes; 5. Expectation-Maximization (EM) Algorithm. ... distributions, joint probability distribution, independence, conditional probability distribution, Bayes' Theorem ... Sample Space, Events and ...
122 pages | 1.35 MB | 1 year ago
Lecture Notes on Gaussian Discriminant Analysis, Naive Bayes and EM Algorithm
Feng Li, fli@sdu.edu.cn, Shandong University, China. 1. Bayes' Theorem and Inference: Bayes' theorem is stated mathematically as the following ... p_{X|Y}(x | 0) and p_{X|Y}(x | 1) according to our assumptions (5)–(7), and make predictions according to Bayes' theorem (see Eq. (2)). Specifically, given a test data point featured by x̃, we compare P(Y = ỹ | X = ... always do better than GDA. In practice, logistic regression is used more often than GDA. 4. Naive Bayes. 4.1 Assumption: Again, we assume that the m training data are denoted by {x(i), y(i)}_{i=1,...,m}, ...
19 pages | 238.80 KB | 1 year ago
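The GDA recipe sketched in these notes — estimate the class prior P(Y) and the class-conditional Gaussians, then predict via Bayes' theorem — can be illustrated in one dimension. This is a minimal sketch; the data points and parameter names below are made up for illustration:

```python
import math

def fit_gda_1d(xs, ys):
    """Estimate class priors and per-class Gaussian means, with a shared variance."""
    n = len(xs)
    params = {}
    for c in (0, 1):
        xc = [x for x, y in zip(xs, ys) if y == c]
        params[c] = {"prior": len(xc) / n, "mu": sum(xc) / len(xc)}
    # GDA conventionally shares one covariance (here: variance) across classes
    var = sum((x - params[y]["mu"]) ** 2 for x, y in zip(xs, ys)) / n
    for c in (0, 1):
        params[c]["var"] = var
    return params

def predict_gda(params, x):
    """Pick the class maximizing prior * Gaussian likelihood, i.e. Bayes' theorem."""
    def score(c):
        p = params[c]
        return (p["prior"]
                * math.exp(-(x - p["mu"]) ** 2 / (2 * p["var"]))
                / math.sqrt(2 * math.pi * p["var"]))
    return max((0, 1), key=score)

# Toy data: class 0 clustered near 1.0, class 1 near 3.0
xs = [1.0, 1.2, 0.8, 3.0, 3.2, 2.8]
ys = [0, 0, 0, 1, 1, 1]
model = fit_gda_1d(xs, ys)
print(predict_gda(model, 1.1))  # -> 0
print(predict_gda(model, 2.9))  # -> 1
```

Sharing one variance across classes gives the linear decision boundary of classic GDA; fitting a separate variance per class would make the boundary quadratic instead.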
Machine Learning Course, Wenzhou University – 04 Machine Learning: Naive Bayes
... [3] ... Recognition and Machine Learning. New York: Springer, 2006. [4] Zhang H. The optimality of naive Bayes. In: Proceedings of the 17th International Florida Artificial Intelligence Research Society Conference ... [5] ... naive Bayes. In: Advances in Neural Information Processing Systems 14 (NIPS), MIT Press, Cambridge, MA, 841–848, 2002. [6] Kohavi R. Scaling up the accuracy of naive Bayes classifiers: ...
31 pages | 1.13 MB | 1 year ago
Lecture 4: Regularization and Bayesian Statistics
... the Bayes rule: p(θ | D) = p(θ) p(D | θ) / p(D), where p(θ) is the prior probability of θ (without having seen any data) and p(D) is the probability of the data (independent of θ), p(D) = ∫_θ p(θ) p(D | θ) dθ. The Bayes rule ...
25 pages | 185.30 KB | 1 year ago
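For a discrete parameter, the integral in p(D) becomes a sum and the posterior can be computed directly. A minimal sketch, assuming a hypothetical coin whose heads-probability θ takes one of three values under a uniform prior, with a binomial likelihood:

```python
from math import comb

# Hypothetical setup: theta is 0.3, 0.5, or 0.7 with uniform prior,
# and we observe D = 8 heads in 10 flips.
thetas = [0.3, 0.5, 0.7]
prior = {t: 1 / 3 for t in thetas}
heads, flips = 8, 10

def likelihood(t):
    # p(D | theta): binomial probability of the observed flips
    return comb(flips, heads) * t**heads * (1 - t)**(flips - heads)

# p(D) = sum over theta of p(theta) p(D | theta)
evidence = sum(prior[t] * likelihood(t) for t in thetas)

# p(theta | D) = p(theta) p(D | theta) / p(D)
posterior = {t: prior[t] * likelihood(t) / evidence for t in thetas}
print(posterior)  # mass shifts toward theta = 0.7
```

After 8 heads in 10 flips, the posterior concentrates on the largest bias, as the Bayes rule predicts.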
MATLAB Integration with Spark/Hadoop: Big Data Processing and Value Mining
... Validation (cvpartition) – Linear Support Vector Machine (SVM) Classification (fitclinear) – Naive Bayes Classification (fitcnb) – Random Forest Ensemble Classification (TreeBagger) – Lasso Linear Regression ...
17 pages | 1.64 MB | 1 year ago
TensorFlow Quick Start and Practice, Part 4: Hands-On TensorFlow House Price Prediction
... Logistic Regression, Decision Tree, Random Forest, k-Nearest Neighbors (k-NN), Naive Bayes, Support Vector Machine (SVM), Perceptron, Deep Neural Network (DNN). Prerequisite: linear regression. In statistics, linear regression uses a least-squares function, called the linear regression equation, to model the relationship between one or more independent variables and the depend...
46 pages | 5.71 MB | 1 year ago
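The least-squares view of linear regression mentioned above reduces, in the one-variable case, to two closed-form coefficients: the slope is cov(x, y)/var(x) and the intercept is ȳ − slope · x̄. A minimal sketch with made-up, noise-free house-price-style data:

```python
def least_squares(xs, ys):
    """Fit y = b0 + b1*x by ordinary least squares (one feature)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    b0 = ybar - b1 * xbar
    return b0, b1

# Made-up data generated by price = 50 + 30 * area, with no noise,
# so the fit should recover the coefficients exactly.
areas = [1.0, 2.0, 3.0, 4.0]
prices = [80.0, 110.0, 140.0, 170.0]
b0, b1 = least_squares(areas, prices)
print(b0, b1)  # -> 50.0 30.0
```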
Machine Learning Course, Wenzhou University – Scikit-learn
... probabilities of the "0" and "1" classes. 1. Scikit-learn Overview. Supervised classification algorithms: logistic regression – linear_model.LogisticRegression; support vector machine – svm.SVC; naive Bayes – naive_bayes.GaussianNB; k-nearest neighbors – neighbors.NearestNeighbors. 2. Main usage of Scikit-learn. Supervised ensemble-learning algorithms: sklearn.en...
31 pages | 1.18 MB | 1 year ago
Machine Learning Course, Wenzhou University – Probability Theory Review
... P(B|A) denotes the probability that B occurs given that A has occurred. (2) Law of total probability: P(A) = Σ_{i=1}^{n} P(A|B_i)P(B_i), where B_iB_j = ∅ for i ≠ j and ∪_{i=1}^{n} B_i = Ω. (3) Bayes' formula: P(B_j|A) = P(A|B_j)P(B_j) / Σ_{i=1}^{n} P(A|B_i)P(B_i), j = 1, 2, ..., n. (4) Multiplication formula: P(A_1A_2) = P(A_1)P(A_2| ...
45 pages | 862.61 KB | 1 year ago
Machine Learning Course, Wenzhou University – 05 Machine Learning: Machine Learning in Practice
... and Machine Learning. New York: Springer, 2006. [6] Kohavi R. Scaling up the accuracy of naive Bayes classifiers: a decision-tree hybrid. In: Proceedings of the 2nd International Conference on Knowledge ...
33 pages | 2.14 MB | 1 year ago
Greenplum Machine Learning Toolset and Case Studies
... Linear Regression • Logistic Regression • Marginal Effects • Multinomial Regression • Naive Bayes • Ordinal Regression • Robust Variance. Tree Methods: • Decision Tree • Random Forest ...
58 pages | 1.97 MB | 1 year ago
30 documents in total