# probability space

## 1. definition

A probability space is a triple \((\Omega, \mathcal{F}, \mathbb{P})\) where:

- \(\Omega\) is the *sample space*, which is the set of possible outcomes
- \(\mathcal{F}\) is a \(\sigma\)-*algebra* or \(\sigma\)-*field*, which is a collection of subsets of \(\Omega\) (see Sigma Field)
- \(\mathbb{P}\) is the *probability measure*, which is a function that assigns a non-negative probability to every set in \(\mathcal{F}\) (see Measures and Probability Measures)

## 2. sample space

The sample space is a set of *elementary outcomes* \(\Omega = \{\omega_1, \omega_2, \dots\}\). The sample space can be finite, countably infinite, or uncountable.

## 3. discrete probability space

Consider the special case where \(\Omega\) is finite or countable. Then, we have a *discrete probability space* where:

- \(\Omega = \{\omega_1, \omega_2, \dots\}\) is finite or countable
- The \(\sigma\)-field \(\mathcal{F}\) is the set of all subsets of \(\Omega\)
- The probability measure assigns a number from \([0,1]\) to every element of \(\mathcal{F}\). It satisfies:
  - \(\mathbb{P}(A) = \sum_{\omega \in A} \mathbb{P}(\{\omega\})\) for every \(A \subset \Omega\), and
  - \(\sum_{\omega \in \Omega} \mathbb{P}(\{\omega\}) = 1\)
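As a minimal sketch of these conditions, here is a discrete probability space in Python: a fair six-sided die with the uniform measure. The names `omega`, `p`, and `prob` are illustrative, not from the notes.

```python
# A sketch of a discrete probability space: a fair six-sided die.
# Omega is finite, F is (implicitly) the power set, and P assigns 1/6
# to each elementary outcome.
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}                  # sample space
p = {w: Fraction(1, 6) for w in omega}      # P({w}) for each elementary outcome

def prob(event):
    """P(A) = sum of P({w}) over w in A, for any A subset of Omega."""
    return sum(p[w] for w in event)

# The elementary probabilities sum to 1 over the whole sample space:
assert prob(omega) == 1

# Example event: the outcome is even.
even = {2, 4, 6}
print(prob(even))  # 1/2
```

Using `Fraction` keeps the arithmetic exact, so the normalization check holds with equality rather than up to floating-point error.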

## 4. sigma algebra

See Sigma Field

## 5. probability measure

See Measures and Probability Measures

## 6. finite additivity

So far, a probability space as we have defined it has required a \(\sigma\)-field, which is closed under countable unions, and a probability measure, which is countably additive. What if we relax these properties so they only deal with finite unions and finite additivity? We end up with a different structure, but one that is easier to verify.

### 6.1. definition: field (also called an algebra)

Given a sample space \(\Omega\)…

- A *field* (to be contrasted with a Sigma Field) is a collection \(\mathcal{F}_0\) of subsets of \(\Omega\) with the following properties:
  - \(\emptyset \in \mathcal{F}_0\)
  - If \(A \in \mathcal{F}_0\), then \(A^C \in \mathcal{F}_0\)
  - If \(A, B \in \mathcal{F}_0\), then \(A \cup B \in \mathcal{F}_0\)

- Let \(\mathcal{F}_0\) be a field of subsets of \(\Omega\). A function \(\mathbb{P}: \mathcal{F}_0 \rightarrow [0,1]\) is *finitely additive* (to be contrasted with Countably Additive) if: \[ A, B \in \mathcal{F}_0,\ A \cap B = \emptyset \Rightarrow \mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B) \]
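These axioms can be checked by brute force on a small example. Here is a sketch in Python, taking \(\mathcal{F}_0\) to be the power set of a three-element \(\Omega\) with the uniform measure; all names (`F0`, `is_field`, `finitely_additive`) are illustrative.

```python
# Brute-force check of the field axioms and finite additivity on the
# power set of a small sample space, under the uniform measure.
from fractions import Fraction
from itertools import chain, combinations

omega = frozenset({1, 2, 3})
# F0 = all subsets of omega (the power set is always a field).
F0 = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(omega), r) for r in range(len(omega) + 1))]

def is_field(sets, omega):
    sets = set(sets)
    if frozenset() not in sets:            # contains the empty set
        return False
    for A in sets:
        if omega - A not in sets:          # closed under complement
            return False
        for B in sets:
            if A | B not in sets:          # closed under finite union
                return False
    return True

# Uniform measure: P(A) = |A| / |omega|.
p = {A: Fraction(len(A), len(omega)) for A in F0}

def finitely_additive(p, sets):
    return all(p[A | B] == p[A] + p[B]
               for A in sets for B in sets if not (A & B))

assert is_field(F0, omega)
assert finitely_additive(p, F0)
print("field axioms and finite additivity hold")
```

The check is quadratic in \(|\mathcal{F}_0|\), which is fine for toy examples but only meant to make the definitions concrete.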

#### 6.1.1. Note

Finite additivity between two events can be used to prove finite additivity for \(n\) pairwise disjoint events by induction.
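A sketch of that induction, assuming \(A_1, \dots, A_n \in \mathcal{F}_0\) are pairwise disjoint: the base case \(n = 2\) is the definition itself. For the inductive step, \(\bigcup_{i=1}^{n-1} A_i\) is disjoint from \(A_n\) (and belongs to \(\mathcal{F}_0\), since a field is closed under finite unions), so two-set additivity plus the inductive hypothesis gives

\[ \mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) = \mathbb{P}\left(\bigcup_{i=1}^{n-1} A_i\right) + \mathbb{P}(A_n) = \sum_{i=1}^{n-1} \mathbb{P}(A_i) + \mathbb{P}(A_n) = \sum_{i=1}^{n} \mathbb{P}(A_i) \]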

#### 6.1.2. Note

Finite additivity is weaker than countable additivity, but it is often easier to verify.

### 6.2. Continuity of Probabilities

It turns out that finite additivity (see above) together with continuity (see below) is sufficient to establish \(\sigma\)-additivity. This is a handy tool for proofs.

#### 6.2.1. Theorem 1: \(\sigma\)-additivity \(\Leftrightarrow\) continuity

Let \(\mathcal{F}\) be a field (note: not a \(\sigma\)-field) and suppose that \(\mathbb{P}: \mathcal{F} \rightarrow [0,1]\) satisfies \(\mathbb{P}(\Omega) = 1\) as well as the finite additivity property. Then the following are equivalent:

- \(\mathbb{P}\) is \(\sigma\)-additive on \(\mathcal{F}\)
- If \(\{A_i\}\) is an *increasing sequence* of sets, i.e. \(A_i \subset A_{i+1}\), and \(A = \cup_{i=1}^{\infty} A_i\) belongs to \(\mathcal{F}\), then \(\mathbb{P}(A) = \lim_{i\rightarrow \infty} \mathbb{P}(A_i)\)
- If \(\{A_i\}\) is a *decreasing sequence* of sets, i.e. \(A_i \supset A_{i+1}\), and \(A = \cap_{i=1}^{\infty} A_i\) belongs to \(\mathcal{F}\), then \(\mathbb{P}(A) = \lim_{i\rightarrow \infty} \mathbb{P}(A_i)\)
- If \(\{A_i\}\) is a *decreasing sequence* of sets, i.e. \(A_i \supset A_{i+1}\), and \(\cap_{i=1}^{\infty} A_i = \emptyset\), then \(\lim_{i\rightarrow \infty} \mathbb{P}(A_i) = 0\)

In our discrete probability space example above, we did not verify that \(\mathbb{P}\) satisfies countable additivity. We can do that now by showing that (i) \(\mathbb{P}\) is finitely additive and (ii) \(\mathbb{P}\) satisfies the continuity property of Theorem 1.
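As a numerical sketch of continuity from below, consider the geometric measure \(\mathbb{P}(\{k\}) = 2^{-k}\) on \(\Omega = \{1, 2, 3, \dots\}\) and the increasing sets \(A_i = \{1, \dots, i\}\), for which \(\mathbb{P}(A_i) = 1 - 2^{-i} \rightarrow 1 = \mathbb{P}(\Omega)\). This particular measure is my example, not one from the notes.

```python
# Continuity from below: for an increasing sequence A_1 ⊂ A_2 ⊂ ...
# with union Omega, P(A_i) should converge to P(Omega) = 1.
# Here P({k}) = 2**(-k) on Omega = {1, 2, 3, ...}.
from fractions import Fraction

def prob(event):
    """P(A) = sum of P({k}) = 2**(-k) over k in A (A finite here)."""
    return sum(Fraction(1, 2**k) for k in event)

# A_i = {1, ..., i} for a few increasing values of i.
limits = [prob(range(1, i + 1)) for i in (1, 5, 10, 20)]
print([float(x) for x in limits])  # approaches 1 = P(Omega)

# The sequence P(A_i) is increasing, and P(A_i) = 1 - 2**(-i) exactly:
assert all(limits[j] < limits[j + 1] for j in range(len(limits) - 1))
assert limits[-1] == 1 - Fraction(1, 2**20)
```

Of course, a finite computation can only illustrate the limit, not prove it; the exact identity \(\mathbb{P}(A_i) = 1 - 2^{-i}\) is what certifies convergence.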

## 7. Monotone class theorem

### 7.1. definition

A collection of sets \(\mathcal{M}\) is a *monotone class* if it is closed under limits of monotone sequences: if \(A_i \in \mathcal{M}\) is an increasing sequence, then \(\cup_{i=1}^{\infty} A_i \in \mathcal{M}\), and if \(A_i \in \mathcal{M}\) is a decreasing sequence, then \(\cap_{i=1}^{\infty} A_i \in \mathcal{M}\). The minimal monotone class containing a collection \(C\) is denoted by \(\mu(C)\).

### 7.2. Theorem 2: If \(\mathcal{A}\) is an algebra (note: not a \(\sigma\)-algebra) then \(\mu(\mathcal{A}) = \sigma(\mathcal{A})\)

## 8. Initial misunderstandings

When I first learned about probabilities, I was confused about the difference between events and elementary outcomes. I was also confused about the semantics of joint probabilities and random variables: "What does \(P(A, B)\) mean? What if we later want to consider a new event \(C\)? Won't we have to re-form our sample space and our probability measure if we want to find \(P(A,B,C)\)?"

What cleared up my confusion was realizing that in all our discussions about \(P(A)\), \(P(A,B)\), \(P(A,B,C)\), the sample space \(\Omega\) stayed fixed. In a way, I can think of \(\Omega\) as holding every possible trajectory of the world as an elementary outcome \(\omega\). The event \(A = \text{it's raining outside today}\) is the set of these outcomes where it's raining today. The joint event where \(A = \text{it's raining today}\) *and* \(B = \text{the grass is wet}\) is the set of outcomes in the intersection of the two sets.