
Mastering Probabilistic Graphical Models Using Python (eBook)


Author: Ankur Ankan

Publisher: Packt Publishing

Publication date: 2015-08-03

Word count: 1,525,000

If you are a researcher, a machine learning enthusiast, or a data science practitioner with a basic grasp of Bayesian learning or probabilistic graphical models, this book will help you understand the details of graphical models and apply them to your data science problems.

Mastering Probabilistic Graphical Models Using Python

Table of Contents

Mastering Probabilistic Graphical Models Using Python

Credits

About the Authors

About the Reviewers

www.PacktPub.com

Support files, eBooks, discount offers, and more

Why subscribe?

Free access for Packt account holders

Preface

What this book covers

What you need for this book

Who this book is for

Conventions

Reader feedback

Customer support

Downloading the example code

Downloading the color images of this book

Errata

Piracy

Questions

1. Bayesian Network Fundamentals

Probability theory

Random variable

Independence and conditional independence

Installing tools

IPython

pgmpy

Representing independencies using pgmpy

Representing joint probability distributions using pgmpy

Conditional probability distribution

Representing CPDs using pgmpy

Graph theory

Nodes and edges

Walk, paths, and trails

Bayesian models

Representation

Factorization of a distribution over a network

Implementing Bayesian networks using pgmpy

Bayesian model representation

Reasoning pattern in Bayesian networks

D-separation

Direct connection

Indirect connection

Relating graphs and distributions

IMAP

IMAP to factorization

CPD representations

Deterministic CPDs

Context-specific CPDs

Tree CPD

Rule CPD

Summary

2. Markov Network Fundamentals

Introducing the Markov network

Parameterizing a Markov network – factor

Factor operations

Gibbs distributions and Markov networks

The factor graph

Independencies in Markov networks

Constructing graphs from distributions

Bayesian and Markov networks

Converting Bayesian models into Markov models

Converting Markov models into Bayesian models

Chordal graphs

Summary

3. Inference – Asking Questions to Models

Inference

Complexity of inference

Variable elimination

Analysis of variable elimination

Finding elimination ordering

Using the chordal graph property of induced graphs

Minimum fill/size/weight/search

Belief propagation

Clique tree

Constructing a clique tree

Message passing

Clique tree calibration

Message passing with division

Factor division

Querying variables that are not in the same cluster

MAP inference

MAP using variable elimination

Factor maximization

MAP using belief propagation

Finding the most probable assignment

Predictions from the model using pgmpy

A comparison of variable elimination and belief propagation

Summary

4. Approximate Inference

The optimization problem

The energy function

Exact inference as an optimization

The propagation-based approximation algorithm

Cluster graph belief propagation

Constructing cluster graphs

Pairwise Markov networks

Bethe cluster graph

Propagation with approximate messages

Message creation

Inference with approximate messages

Sum-product expectation propagation

Belief update propagation

MAP inference

Sampling-based approximate methods

Forward sampling

Conditional probability distribution

Likelihood weighting and importance sampling

Importance sampling

Importance sampling in Bayesian networks

Computing marginal probabilities

Ratio likelihood weighting

Normalized likelihood weighting

Markov chain Monte Carlo methods

Gibbs sampling

Markov chains

The multiple transitioning model

Using a Markov chain

Collapsed particles

Collapsed importance sampling

Summary

5. Model Learning – Parameter Estimation in Bayesian Networks

General ideas in learning

The goals of learning

Density estimation

Predicting the specific probability values

Knowledge discovery

Learning as an optimization

Empirical risk and overfitting

Discriminative versus generative training

Learning task

Model constraints

Data observability

Parameter learning

Maximum likelihood estimation

Maximum likelihood principle

The maximum likelihood estimate for Bayesian networks

Bayesian parameter estimation

Priors

Bayesian parameter estimation for Bayesian networks

Structure learning in Bayesian networks

Methods for the learning structure

Constraint-based structure learning

Structure score learning

The likelihood score

The Bayesian score

The Bayesian score for Bayesian networks

Summary

6. Model Learning – Parameter Estimation in Markov Networks

Maximum likelihood parameter estimation

Likelihood function

Log-linear model

Gradient ascent

Learning with approximate inference

Belief propagation and pseudo-moment matching

Structure learning

Constraint-based structure learning

Score-based structure learning

The likelihood score

Bayesian score

Summary

7. Specialized Models

The Naive Bayes model

Why does it even work?

Types of Naive Bayes models

Multivariate Bernoulli Naive Bayes model

Multinomial Naive Bayes model

Choosing the right model

Dynamic Bayesian networks

Assumptions

Discrete timeline assumption

The Markov assumption

Model representation

The Hidden Markov model

Generating an observation sequence

Computing the probability of an observation

The forward-backward algorithm

Computing the state sequence

Applications

The acoustic model

The language model

Summary

Index
