**Link**: Bayesian statistics · Prior probability and posterior probability · Conditional probabilities

## What is Bayes’ theorem?

**Bayes’ theorem** is the basis of **Bayesian statistics**.

The equation:

$P(A∣B)=\frac{P(A∩B)}{P(B)}=\frac{P(A)×P(B∣A)}{P(B)}$

where:

- $P(A)$ = the prior probability of A occurring
- $P(A∣B)$ = the conditional probability of A, given that B occurs
- $P(B∣A)$ = the conditional probability of B, given that A occurs
- $P(B)$ = the probability of B occurring

## Derive Bayes’ theorem
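The theorem follows directly from the definition of conditional probability, applied in both directions:

$P(A∣B)=\frac{P(A∩B)}{P(B)}$ and $P(B∣A)=\frac{P(A∩B)}{P(A)}$

Rearranging the second equation gives $P(A∩B)=P(B∣A)×P(A)$. Substituting this into the first equation yields Bayes’ theorem:

$P(A∣B)=\frac{P(A)×P(B∣A)}{P(B)}$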

## A simple example

Chance of a medical condition with test result

- Prior probability $P(A)$: the initial belief about having the medical condition before taking the test, e.g. a 2% chance (0.02) based on symptoms and family history.
- Likelihood $P(E∣A)$: the probability that the test is positive given that you actually have the condition, e.g. 95% (assuming the test is fairly accurate).
- Total evidence $P(E)$: the overall probability of getting a positive test result, taking into account both scenarios: having the condition ($A$) and not having it ($\neg A$). So:
$P(E)=P(E∣A)×P(A)+P(E∣\neg A)×P(\neg A)$

If we apply Bayes’ theorem:

$P(A∣E)=\frac{P(A)×P(E∣A)}{P(E)}$

- $P(A∣E)$: the updated probability of actually having the condition after getting a positive test result
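The calculation above can be sketched numerically. Note that the false-positive rate $P(E∣\neg A)$ is not given in the example, so the 5% used below is an assumed value for illustration only:

```python
# Posterior probability of having the condition given a positive test,
# via Bayes' theorem.
p_a = 0.02              # prior P(A): 2% chance of having the condition
p_e_given_a = 0.95      # likelihood P(E|A): test sensitivity
p_e_given_not_a = 0.05  # ASSUMED false-positive rate P(E|not A), for illustration

# Total evidence: P(E) = P(E|A)P(A) + P(E|not A)P(not A)
p_e = p_e_given_a * p_a + p_e_given_not_a * (1 - p_a)

# Bayes' theorem: P(A|E) = P(A) * P(E|A) / P(E)
p_a_given_e = p_a * p_e_given_a / p_e
print(f"P(A|E) = {p_a_given_e:.3f}")  # roughly 0.279
```

Even with a seemingly accurate test, the posterior is only about 28% here, because the condition is rare: most positive results come from the much larger group of people without the condition.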

## Summary

In short, Bayes’ theorem lets us compute the **posterior probability** from the **prior probability**, the **likelihood**, and the **total evidence**.