“This is the 13th day of my participation in the November Gwen Challenge. Check out the event details: The Last Gwen Challenge of 2021”

Introduction

A Bayesian network is a probabilistic graphical model that describes uncertain causal relationships between variables, and it is one of the most effective theoretical models in the field of uncertain knowledge representation and inference. It offers powerful knowledge reasoning, intuitive expressiveness, a clear topological structure, and a convenient decision-making mechanism, and it is widely used in prediction, inference, diagnosis, decision risk analysis, and reliability analysis.

Bayesian definitions

Understanding probabilistic graphical models

A probabilistic graphical model is a theory for expressing probabilistic dependencies among variables. It combines probability theory and graph theory, using a graph to express the joint probability distribution of the variables related to the model. The graph uses observed nodes to represent observed data, hidden nodes to represent latent knowledge, and edges to describe the relationships between knowledge and data; a probability distribution is then derived from this graph.

Nodes in a probabilistic graph are divided into hidden nodes and observed nodes, and edges are divided into directed edges and undirected edges. From the perspective of probability theory, nodes correspond to random variables and edges correspond to dependence or correlation between random variables, where a directed edge represents one-way dependence and an undirected edge represents mutual dependence.

Bayes’ theorem

Conditional probability (also known as posterior probability) is the probability that event A occurs given that another event B has already occurred. It is written P(A|B) and read “the probability of A given B”.

For example, if a randomly chosen element of the sample space is known to be a member of B, the probability that this element is also a member of A is defined as the conditional probability of A given B: P(A|B) = P(A∩B) / P(B).

Joint probability: P(A∩B) or P(A,B)

Marginal probability (prior probability): P(A) or P(B)
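These quantities combine through Bayes’ theorem, P(A|B) = P(B|A) * P(A) / P(B). As a quick sketch, the example below uses entirely made-up numbers for a diagnostic test (prior, likelihood, and false-positive rate are all hypothetical):

```python
# Hypothetical diagnostic-test numbers, chosen purely for illustration.
p_a = 0.01               # prior (marginal) probability P(A): having the disease
p_b_given_a = 0.95       # likelihood P(B|A): test is positive if diseased
p_b_given_not_a = 0.05   # P(B|~A): false-positive rate

# Marginal probability P(B) via the law of total probability
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: posterior P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))
```

Even with a fairly accurate test, the small prior P(A) keeps the posterior P(A|B) at roughly 16% here; automating this kind of reasoning over many variables is exactly what Bayesian networks do.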

Bayes structure

Head-to-head

According to the structure above, we have P(a,b,c) = P(a) * P(b) * P(c|a,b). That is, when c is unknown, a and b are blocked and therefore independent; this is called head-to-head conditional independence.
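This head-to-head behaviour can be checked by exact enumeration. The sketch below (all probability tables are made-up illustrative numbers) builds the joint P(a,b,c) = P(a) * P(b) * P(c|a,b) over binary variables and verifies that a and b are independent when c is marginalised out, but become dependent once c is observed:

```python
from itertools import product

# Illustrative CPTs for the head-to-head structure a -> c <- b (binary variables).
p_a = {0: 0.7, 1: 0.3}
p_b = {0: 0.6, 1: 0.4}
p_c1_given_ab = {(0, 0): 0.1, (0, 1): 0.8, (1, 0): 0.8, (1, 1): 0.95}  # P(c=1|a,b)

# Joint distribution: P(a,b,c) = P(a) * P(b) * P(c|a,b)
joint = {}
for a, b, c in product([0, 1], repeat=3):
    pc = p_c1_given_ab[(a, b)] if c == 1 else 1 - p_c1_given_ab[(a, b)]
    joint[(a, b, c)] = p_a[a] * p_b[b] * pc

def marg(a=None, b=None, c=None):
    """Marginal/joint probability with any subset of the variables fixed."""
    return sum(p for (av, bv, cv), p in joint.items()
               if (a is None or av == a)
               and (b is None or bv == b)
               and (c is None or cv == c))

# c unobserved: P(a,b) = P(a) * P(b), so a and b are independent.
assert abs(marg(a=1, b=1) - marg(a=1) * marg(b=1)) < 1e-12

# c observed: P(a,b|c=1) != P(a|c=1) * P(b|c=1), so a and b become dependent.
pc1 = marg(c=1)
lhs = marg(a=1, b=1, c=1) / pc1
rhs = (marg(a=1, c=1) / pc1) * (marg(b=1, c=1) / pc1)
assert abs(lhs - rhs) > 1e-6
```

Observing the common child c couples its two parents: this is the “explaining away” effect.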

Tail-to-tail

Consider the two cases where c is unknown and where c is given:

  1. When c is unknown, we have P(a,b,c) = P(c) * P(a|c) * P(b|c). From this, P(a,b) = P(a)P(b) cannot be derived, so when c is unknown, a and b are not independent.
  2. When c is known, we have P(a,b|c) = P(a,b,c) / P(c). Substituting P(a,b,c) = P(c) * P(a|c) * P(b|c) into this formula gives P(a,b|c) = P(c) * P(a|c) * P(b|c) / P(c) = P(a|c) * P(b|c); that is, given c, a and b are independent. This is called tail-to-tail conditional independence.
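The same enumeration idea confirms the tail-to-tail case. In the sketch below (made-up numbers), a and b share a common cause c, so they are dependent marginally but independent once c is given:

```python
from itertools import product

# Illustrative CPTs for the tail-to-tail structure a <- c -> b (binary variables).
p_c = {0: 0.5, 1: 0.5}
p_a1_given_c = {0: 0.9, 1: 0.2}  # P(a=1|c)
p_b1_given_c = {0: 0.8, 1: 0.1}  # P(b=1|c)

# Joint distribution: P(a,b,c) = P(c) * P(a|c) * P(b|c)
joint = {}
for a, b, c in product([0, 1], repeat=3):
    pa = p_a1_given_c[c] if a == 1 else 1 - p_a1_given_c[c]
    pb = p_b1_given_c[c] if b == 1 else 1 - p_b1_given_c[c]
    joint[(a, b, c)] = p_c[c] * pa * pb

def marg(a=None, b=None, c=None):
    """Marginal/joint probability with any subset of the variables fixed."""
    return sum(p for (av, bv, cv), p in joint.items()
               if (a is None or av == a)
               and (b is None or bv == b)
               and (c is None or cv == c))

# c unknown: P(a,b) != P(a) * P(b) -> a and b are dependent.
assert abs(marg(a=1, b=1) - marg(a=1) * marg(b=1)) > 1e-6

# c known (here c=0): P(a,b|c) = P(a|c) * P(b|c) -> conditionally independent.
pc0 = marg(c=0)
assert abs(marg(a=1, b=1, c=0) / pc0
           - (marg(a=1, c=0) / pc0) * (marg(b=1, c=0) / pc0)) < 1e-12
```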

Head-to-tail

Again, consider the two cases where c is unknown and where c is given:

  1. When c is unknown, we have P(a,b,c) = P(a) * P(c|a) * P(b|c), but P(a,b) = P(a)P(b) cannot be derived from it, so when c is unknown, a and b are not independent.

  2. When c is known, we have P(a,b|c) = P(a,b,c) / P(c). Using P(a,c) = P(a) * P(c|a) = P(c) * P(a|c), this simplifies step by step:

P(a,b|c)

= P(a,b,c) / P(c)

= P(a) * P(c|a) * P(b|c) / P(c)

= P(a,c) * P(b|c) / P(c)

= P(a|c) * P(b|c)

Therefore, given c, a and b are blocked and therefore independent; this is called head-to-tail conditional independence.
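The derivation above can also be verified numerically. The sketch below enumerates the chain a -> c -> b with made-up tables and checks that P(a,b|c) = P(a|c) * P(b|c) while a and b remain dependent marginally:

```python
from itertools import product

# Illustrative CPTs for the head-to-tail chain a -> c -> b (binary variables).
p_a = {0: 0.6, 1: 0.4}
p_c1_given_a = {0: 0.3, 1: 0.9}  # P(c=1|a)
p_b1_given_c = {0: 0.2, 1: 0.7}  # P(b=1|c)

# Joint distribution: P(a,b,c) = P(a) * P(c|a) * P(b|c)
joint = {}
for a, b, c in product([0, 1], repeat=3):
    pc = p_c1_given_a[a] if c == 1 else 1 - p_c1_given_a[a]
    pb = p_b1_given_c[c] if b == 1 else 1 - p_b1_given_c[c]
    joint[(a, b, c)] = p_a[a] * pc * pb

def marg(a=None, b=None, c=None):
    """Marginal/joint probability with any subset of the variables fixed."""
    return sum(p for (av, bv, cv), p in joint.items()
               if (a is None or av == a)
               and (b is None or bv == b)
               and (c is None or cv == c))

# c unknown: a and b are dependent in general.
assert abs(marg(a=1, b=1) - marg(a=1) * marg(b=1)) > 1e-6

# Given c=1: P(a,b|c) factorises as P(a|c) * P(b|c).
pc1 = marg(c=1)
lhs = marg(a=1, b=1, c=1) / pc1
rhs = (marg(a=1, c=1) / pc1) * (marg(b=1, c=1) / pc1)
assert abs(lhs - rhs) < 1e-12
```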

The head-to-tail structure forms a chain network, as shown in the following figure:

Given x_i, the distribution of x_{i+1} is conditionally independent of x_1, x_2, …, x_{i-1}. What does that mean? It means that the distribution of x_{i+1} depends only on x_i and is independent of the other variables. In layman’s terms, the current state is related only to the previous state, not to any earlier states. A stochastic process that evolves sequentially in this way is called a Markov chain.
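A minimal sketch of this “current state depends only on the previous state” property: evolve a probability distribution over two states through a hypothetical transition matrix, where each update uses only the current distribution:

```python
# Hypothetical two-state transition matrix: P[i][j] = P(next=j | current=i).
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(dist):
    """One Markov-chain step: the new distribution depends only on `dist`."""
    return [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

dist = [1.0, 0.0]  # start in state 0 with certainty
for _ in range(100):
    dist = step(dist)

# After many steps the chain approaches its stationary distribution (5/6, 1/6).
print([round(x, 4) for x in dist])  # → [0.8333, 0.1667]
```

Because each step discards everything except the current distribution, the long-run behaviour is governed entirely by the transition matrix, regardless of the starting state.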

Remarks

With the introduction above, I believe everyone now has a preliminary understanding of Bayesian networks. In the coming days I will systematically introduce the structure, definition, and branches of Bayesian networks, including the difference between static and dynamic Bayesian networks, and I will also share the significance of Bayesian networks in practical research.

Thank you all for your support of Little Leaves along the way. Little Leaves is also a beginner, and I hope to make progress together with you.