
Machine Learning and Probabilistic Graphical Models: Lecture Series at the Institute of Automation, Chinese Academy of Sciences (王立威 / Wang Liwei)


Graphical Models

• Definition of Bayesian Networks: a Bayesian network is a pair (P, G) of a probability distribution and a DAG, where P factorizes according to G and all of the "local" conditional probability distributions are given.

Learning Bayesian Networks
• BN = structure (graph) + local conditional distributions.
• Learning a BN involves two problems:
  • Learning the distribution's parameters from data: relatively simple, standard parameter estimation.
  • Learning the structure from data: very difficult! Why?

Structure Learning
• Structure (graph): which edges exist between nodes.
• Is the structure of a BN learnable?
• Note: for three nodes A, B, C, whether the edge $A - C$ exists is equivalent to whether the following equation holds strictly:
  $P(A, C \mid B) = P(A \mid B)\,P(C \mid B)$, i.e., $A \perp C \mid B$.
• Useful structure learning methods (both are sketched in code after this excerpt):
  • Constraint-based structure learning: use hypothesis tests to obtain conditional independencies, then construct the graph.
  • Score-based structure learning: penalize "dense" graphs, using likelihood scores, BIC, MDL, or AIC.

Open Problems
• Robust learning of structures?

Outline
• A brief overview of Machine Learning
• Graphical Models
  • Representation
  • Learning
  • Inference

Chapter II: Inference
• What is inference in GM
• The hardness of inference in GM
• Exact inference algorithms
• Approximate inference algorithms
• Future research directions

What Is Inference in GM
• Input: a graph and the local conditional distributions.
• Goal: two types of inference:
  • Conditional probability: $\Pr(X = x \mid E = e)$
  • MAP inference: $\max_x \Pr(X = x \mid E = e)$

The Hardness of Inference in GM
• Exact inference in GM is hard:
  • The decision version of exact inference is NP-complete.
  • Exact inference is #P-complete.
• Approximate inference in GM is hard: $\epsilon$-approximate inference is NP-hard for every $\epsilon > 0$.
• Deciding whether $\Pr(X = x) > 0$ is NP-complete. Proof: reduction from 3-SAT.
• Computing $\Pr(X = x)$ is #P-complete. Proof: use the reduction from 3-SAT above; it is a Levin reduction, and the certificates are in one-to-one correspondence.
• For every $\epsilon > 0$, computing an $\epsilon$-approximation of $\Pr(X = x)$ is NP-hard.
  Proof: $\hat{p}$ is an $\epsilon$-approximation of $\Pr(X = x)$ means that
  $\Pr(X = x)/(1 + \epsilon) \le \hat{p} \le \Pr(X = x)\,(1 + \epsilon)$.
  Clearly, anyone who has such an approximation can decide the NP-complete problem $\Pr(X = x) > 0$.
• Remark: absolute approximation is still NP-hard.
• Exact and approximate MAP inference are likewise hard.
• Conclusion: the worst-case complexity of inference, both exact and approximate, is NP-hard.

Exact Inference Algorithms
• The relation between BN and MRF.
• From BN to MRF (factors):
  BN: $P(X_1, \ldots, X_n) = \prod_i P(X_i \mid \mathrm{Pa}(X_i))$
  MRF: $\tilde{P}(X_1, \ldots, X_n) = \frac{1}{Z} \prod_i \phi_i(C_i)$
  Taking $\phi_i(X_i, \mathrm{Pa}(X_i)) = P(X_i \mid \mathrm{Pa}(X_i))$ turns the BN into an MRF with $Z = 1$.
• From BN to MRF (graphs): the moral graph (聯(lián)姻圖): delete the directions of the edges and connect ("marry") the parents of every node.
• Recall (active trails, d-separation): a trail $x = x_1 \rightleftharpoons \cdots \rightleftharpoons x_m = y$ is active given $Z$ if 1) for every v-structure $x_{i-1} \rightarrow x_i \leftarrow x_{i+1}$ along it, $x_i$ or one of its descendants is in $Z$, and 2) no other node along the path is in $Z$.
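A minimal sketch of the constraint-based step above, assuming discrete data in a NumPy integer array: a chi-squared test of $A \perp C \mid B$, stratified on the conditioning variable. The function name, array layout, significance level, and synthetic data are illustrative assumptions, not anything from the slides.

```python
import numpy as np
from scipy.stats import chi2

def ci_test(data, a, c, b, alpha=0.05):
    """Test A independent of C given B; a, c, b are column indices.
    Returns True if independence is NOT rejected at level alpha."""
    stat, dof = 0.0, 0
    for v in np.unique(data[:, b]):                 # stratify on B = v
        sub = data[data[:, b] == v]
        avals, cvals = np.unique(sub[:, a]), np.unique(sub[:, c])
        # observed contingency table of A vs C within this stratum
        obs = np.array([[np.sum((sub[:, a] == x) & (sub[:, c] == y))
                         for y in cvals] for x in avals], dtype=float)
        exp = obs.sum(1, keepdims=True) * obs.sum(0) / obs.sum()
        ok = exp > 0
        stat += ((obs - exp) ** 2 / np.where(ok, exp, 1.0))[ok].sum()
        dof += (len(avals) - 1) * (len(cvals) - 1)
    return (chi2.sf(stat, dof) > alpha) if dof > 0 else True

# Synthetic check: A and C are generated independently given B, so the
# edge A -- C should be dropped (True expected with high probability).
rng = np.random.default_rng(1)
B = rng.integers(0, 2, 2000)
A = (rng.random(2000) < 0.2 + 0.6 * B).astype(int)
C = (rng.random(2000) < 0.7 - 0.4 * B).astype(int)
print(ci_test(np.column_stack([A, C, B]), a=0, c=1, b=2))
```

A PC-style learner runs such tests over candidate conditioning sets and removes the edge $A - C$ whenever some set makes the test accept independence.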
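A matching sketch of the score-based side: the BIC score of a candidate structure, i.e. the maximum log-likelihood of the data minus $(\log N)/2$ per free parameter, which is what penalizes dense graphs. The input conventions (a parents dict, a list of cardinalities) are my own assumptions.

```python
import numpy as np
from itertools import product

def bic_score(data, cards, parents):
    """BIC(G) = max log-likelihood - (log N / 2) * number of free parameters.
    data: (N, n) int array; cards[i]: number of states of variable i;
    parents[i]: tuple of parent indices of variable i in the candidate DAG."""
    N = data.shape[0]
    loglik, n_params = 0.0, 0
    for i, pa in parents.items():
        n_params += (cards[i] - 1) * int(np.prod([cards[j] for j in pa]))
        for pa_vals in product(*[range(cards[j]) for j in pa]):
            rows = (np.all(data[:, list(pa)] == pa_vals, axis=1)
                    if pa else np.ones(N, dtype=bool))
            counts = np.bincount(data[rows, i], minlength=cards[i])
            tot = counts.sum()
            if tot:
                nz = counts > 0
                loglik += (counts[nz] * np.log(counts[nz] / tot)).sum()
    return loglik - 0.5 * np.log(N) * n_params

# Example: score the chain 0 -> 1 on four binary observations.
data = np.array([[0, 0], [0, 0], [1, 1], [1, 0]])
print(bic_score(data, cards=[2, 2], parents={0: (), 1: (0,)}))
```

Structure search then maximizes this score over DAGs; MDL and AIC differ only in the penalty term.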
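The two query types above on a toy chain $A \to B \to C$ with binary variables, answered by brute-force enumeration; the CPT numbers are made up for illustration. Enumeration is exponential in the number of variables in general, which is exactly where the hardness results above bite.

```python
from itertools import product

# Toy BN A -> B -> C over binary variables; all CPT numbers are illustrative.
p_a = [0.6, 0.4]                      # P(A)
p_b_a = [[0.7, 0.3], [0.2, 0.8]]      # P(B | A)
p_c_b = [[0.9, 0.1], [0.5, 0.5]]      # P(C | B)

def joint(a, b, c):
    """The BN factorization: P(a, b, c) = P(a) P(b | a) P(c | b)."""
    return p_a[a] * p_b_a[a][b] * p_c_b[b][c]

# Conditional probability query: Pr(A = 1 | C = 1).
num = sum(joint(1, b, 1) for b in (0, 1))
den = sum(joint(a, b, 1) for a, b in product((0, 1), repeat=2))
print("Pr(A=1 | C=1) =", num / den)

# MAP query: argmax over (A, B) of Pr(A, B | C = 1); the normalizer
# cancels, so maximizing the joint with C clamped to 1 suffices.
print("MAP (A,B) | C=1:", max(product((0, 1), repeat=2),
                              key=lambda ab: joint(ab[0], ab[1], 1)))
```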
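Moralization as code, a small sketch assuming the DAG is given as a node-to-parents dict: drop the edge directions, then marry the parents of every child. The result is the undirected graph on which the equivalent MRF lives.

```python
from itertools import combinations

def moral_graph(parents):
    """parents: dict mapping each node to the set of its parents (a DAG).
    Returns the moral graph as a dict mapping each node to its neighbors."""
    nbrs = {v: set() for v in parents}
    for child, pa in parents.items():
        for p in pa:                       # 1) drop directions: child -- parent
            nbrs[child].add(p)
            nbrs[p].add(child)
        for u, w in combinations(pa, 2):   # 2) marry co-parents of child
            nbrs[u].add(w)
            nbrs[w].add(u)
    return nbrs

# The classic v-structure A -> C <- B: moralization adds the edge A -- B.
print(moral_graph({"A": set(), "B": set(), "C": {"A", "B"}}))
```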
Machine Learning and Graphical Models (Lecture I)
王立威 (Wang Liwei), 北京大學(xué) 信息科學(xué)技術(shù)學(xué)院 (Peking University)

Outline
• A brief overview of Machine Learning
• Graphical Models
  • Representation
  • Inference
  • Learning

• Definition of Machine Learning: learning from experience.
  "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." (Tom Mitchell)

"Classical" Machine Learning Tasks
• Classification: $f: \mathbb{R}^n \to \{-1, +1\}$ (spam filters, face recognition, ...)
• Regression: $f: \mathbb{R}^n \to \mathbb{R}$ (Hooke's law, Kepler's laws, ...)
• Ranking (search engines)
• Probability (distribution) estimation

"Classical" Machine Learning Algorithms
• Classification: SVM, Boosting, Random Forest, Bagging, (Deep) Neural Networks
• Regression: Lasso, Boosting

Support Vector Machines (SVMs)
• SVM: the large-margin classifier.
• SVM: hinge loss minimization + regularization ($\ell_2/\ell_2$).

Boosting
• Boosting: an (implicit) large-margin classifier.
• Boosting: exp loss minimization (+ regularization) ($\ell_1/\ell_\infty$).
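To make "hinge loss minimization + regularization" concrete, a minimal subgradient-descent sketch for the linear SVM objective $\frac{\lambda}{2}\|w\|^2 + \frac{1}{m}\sum_i \max(0, 1 - y_i \langle w, x_i \rangle)$; the step size, $\lambda$, epoch count, and the synthetic data are arbitrary illustrative choices, not anything from the talk.

```python
import numpy as np

def svm_subgradient(X, y, lam=0.1, lr=0.01, epochs=200):
    """Minimize (lam/2)||w||^2 + (1/m) sum_i max(0, 1 - y_i <w, x_i>).
    X: (m, d) float array; y: (m,) array with labels in {-1, +1}."""
    m, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        active = y * (X @ w) < 1            # margin violations
        # subgradient: lam * w - (1/m) * sum over violators of y_i x_i
        w -= lr * (lam * w - (y[active][:, None] * X[active]).sum(axis=0) / m)
    return w

# Sanity check on linearly separable data (no bias term, for brevity).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (20, 2)), rng.normal(-2, 1, (20, 2))])
y = np.r_[np.ones(20), -np.ones(20)]
print("train acc:", np.mean(np.sign(X @ svm_subgradient(X, y)) == y))
```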
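And the boosting objective: AdaBoost with decision stumps, which performs greedy coordinate descent on the exponential loss $\frac{1}{m}\sum_i e^{-y_i F(x_i)}$. The exhaustive stump search, round count, and the tiny demo are illustrative choices.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """AdaBoost: each round fits the stump minimizing weighted error,
    then reweights examples by exp(-alpha * y * pred) (the exp loss)."""
    m, d = X.shape
    w = np.ones(m) / m
    ensemble = []                            # (alpha, feature, threshold, sign)
    for _ in range(rounds):
        best = None
        for j in range(d):                   # exhaustive stump search
            for t in X[:, j]:
                for s in (1.0, -1.0):
                    pred = s * np.sign(X[:, j] - t + 1e-12)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s, pred)
        err, j, t, s, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)       # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    F = sum(a * s * np.sign(X[:, j] - t + 1e-12) for a, j, t, s in ensemble)
    return np.sign(F)

# Tiny demo on one feature: a single threshold separates the labels.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
print(predict(adaboost_stumps(X, y, rounds=5), X))
```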
"Classical" Machine Learning Theories
• VC theory: capacity of the hypothesis space
• PAC theory
• Margin theory: confidence
• Empirical processes: capacity
• PAC-Bayes theory: PAC in a Bayesian framework
• Regularization: capacity, smoothness

ML theories: quantifications of Occam's Razor.

[Figure: force vs. length, Hooke's law as a regression example]

Comparison of "Classical" Machine Learning Theories
• Regularization:
  • Bayesian optimality
  • Only asymptotic (convergence, rate, nonuniform)