[Probability] Approximate inference in BN

Exact inference is NP-hard, so approximate methods are the best option for loopy (multiply connected) BNs.

Stochastic simulation in BN

A belief network defines a “generative model”: sampling each node in topological order, conditioned on its already-sampled parents, yields a sample from the joint distribution with no extra work.

However, it is hard to draw samples directly from a posterior distribution such as P(Q|E=e).
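To make the easy direction concrete, here is a minimal sketch of ancestral (forward) sampling. The network (the classic sprinkler example), the CPT values, and the names cpt, parents, order, and joint_sample are all illustrative assumptions, not from the post.

```python
import random

# CPTs for a toy "sprinkler" network; values are made up for illustration.
# Each entry maps a tuple of parent values to P(node = True).
cpt = {
    "Cloudy":    {(): 0.5},
    "Sprinkler": {(True,): 0.1, (False,): 0.5},    # parent: Cloudy
    "Rain":      {(True,): 0.8, (False,): 0.2},    # parent: Cloudy
    "WetGrass":  {(True, True): 0.99, (True, False): 0.90,
                  (False, True): 0.90, (False, False): 0.0},  # parents: Sprinkler, Rain
}
parents = {"Cloudy": (), "Sprinkler": ("Cloudy",),
           "Rain": ("Cloudy",), "WetGrass": ("Sprinkler", "Rain")}
order = ["Cloudy", "Sprinkler", "Rain", "WetGrass"]  # a topological order

def joint_sample():
    """Draw one sample from the joint: visit nodes in topological order,
    so every node's parents are already assigned when it is sampled."""
    s = {}
    for node in order:
        pvals = tuple(s[p] for p in parents[node])
        s[node] = random.random() < cpt[node][pvals]
    return s
```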

Rejection sampling

  • Generate N samples from the joint distribution.
  • Count the number of samples N(Q=q, E=e) and N(E=e).
  • Estimate P(Q=q|E=e) ≈ N(Q=q, E=e) / N(E=e).

This estimate converges to the true posterior as N reaches infinity, but the method is very inefficient:

  • Discards every sample in which E ≠ e
  • Requires a very large number of samples for queries with unlikely evidence
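A sketch of rejection sampling, reusing the toy network and joint_sample() from the sketch above (so the network, CPT values, and function names remain assumptions):

```python
def rejection_sample(query, evidence, n=100_000):
    """Estimate P(query=True | evidence) by keeping only matching samples."""
    n_e, n_qe = 0, 0
    for _ in range(n):
        s = joint_sample()
        if all(s[var] == val for var, val in evidence.items()):
            n_e += 1           # N(E=e)
            n_qe += s[query]   # N(Q=q, E=e), counting query=True
    return n_qe / n_e if n_e else float("nan")

# Usage: rejection_sample("Rain", {"WetGrass": True})
```

Note that every sample with a mismatched evidence value is thrown away, which is exactly the inefficiency described above.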

Likelihood weighting

  • Instantiate evidence nodes to their observed values instead of sampling them
  • Weight each sample using the CPTs
    • The weight is the product of P(E=e|Parents(E)) over all evidence nodes; it appears in both the numerator and the denominator of the estimate

This also converges as N reaches infinity.
Likelihood weighting is much faster than rejection sampling since every sample is used. However, it is still slow for rare evidence: most weights become tiny, so a few samples dominate the estimate.
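A sketch of likelihood weighting on the same assumed toy network (cpt, parents, order, and the function names below are illustrative):

```python
def weighted_sample(evidence):
    """One weighted sample: evidence nodes are clamped, not sampled.
    The weight is the product of P(E=e | Parents(E)) over evidence nodes."""
    s, w = {}, 1.0
    for node in order:
        pvals = tuple(s[p] for p in parents[node])
        p_true = cpt[node][pvals]
        if node in evidence:
            s[node] = evidence[node]                       # clamp to observed value
            w *= p_true if evidence[node] else 1.0 - p_true
        else:
            s[node] = random.random() < p_true             # sample as usual
    return s, w

def likelihood_weighting(query, evidence, n=100_000):
    """Estimate P(query=True | evidence) as a weighted average."""
    num = den = 0.0
    for _ in range(n):
        s, w = weighted_sample(evidence)
        den += w
        num += w * s[query]
    return num / den if den else float("nan")

# Usage: likelihood_weighting("Rain", {"WetGrass": True})
```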


MCMC Simulation

  • Fix evidence nodes to their observed values
  • Initialize non-evidence nodes to arbitrary values
  • Repeat N times:
    • Pick a non-evidence node X at random
    • Use Bayes’ rule to compute P(X=x|Bx), where Bx is the Markov blanket of X (its parents, children, and children’s other parents), fixed to its current values: P(X=x|Bx) ∝ P(X=x|Parents(X)) × the product over children Y of X of P(Y=y|Parents(Y))
    • Resample X ~ P(X=x|Bx)

MCMC simulation reformulates sampling as a Markov chain over complete assignments whose stationary distribution is the posterior; the particular scheme above is Gibbs sampling. The term is rather ambiguous, but the link below seems helpful!

LW samples non-evidence nodes from P(X|Parents(X))
MCMC samples non-evidence nodes from P(X|Bx)
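A sketch of the Gibbs sampler described above, again reusing the assumed toy network and the cpt/parents/order definitions from the first sketch; burn-in and thinning are omitted for brevity:

```python
# Children of each node, needed for the Markov-blanket computation.
children = {"Cloudy": ("Sprinkler", "Rain"), "Sprinkler": ("WetGrass",),
            "Rain": ("WetGrass",), "WetGrass": ()}

def prob_given_parents(node, value, state):
    """P(node = value | Parents(node)) under the current assignment."""
    pvals = tuple(state[p] for p in parents[node])
    p_true = cpt[node][pvals]
    return p_true if value else 1.0 - p_true

def gibbs(query, evidence, n=100_000):
    """Resample one non-evidence node at a time from P(X | Bx), where
    P(X=x|Bx) ∝ P(X=x|Parents(X)) * prod of P(Y=y|Parents(Y)) over children Y."""
    state = dict(evidence)                          # evidence stays fixed
    nonevidence = [v for v in order if v not in evidence]
    for v in nonevidence:
        state[v] = random.random() < 0.5            # arbitrary initialization
    hits = 0
    for _ in range(n):
        x = random.choice(nonevidence)
        scores = []
        for val in (True, False):                   # unnormalized P(x=val | Bx)
            state[x] = val
            score = prob_given_parents(x, val, state)
            for c in children[x]:
                score *= prob_given_parents(c, state[c], state)
            scores.append(score)
        p_true = scores[0] / (scores[0] + scores[1])
        state[x] = random.random() < p_true         # resample X ~ P(X | Bx)
        hits += state[query]
    return hits / n

# Usage: gibbs("Rain", {"WetGrass": True})
```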

Good material

