# Neyman-Pearson Lemma

If we know the costs and prior probabilities, the best decision rule is given by the Bayesian decision rule (see bayesian hypothesis test). But if we don't know those things, then what is the best decision rule?

One place we could start is by trying to find a decision rule that maximizes the detection probability \(P_D\), subject to the constraint that the false positive rate \(P_F\) is \(\leq\) some level \(\alpha\) (see receiver operating characteristic for definitions).

It turns out that the decision rule that does this is also a Likelihood Ratio Test, just like the bayesian hypothesis test.

## 1. Theorem

To maximize \(P_D\) subject to the constraint that \(P_F \leq \alpha\), it is sufficient to use a Likelihood Ratio Test \(L\) where the threshold \(\eta\) is chosen so that \[ P_F = \mathbb{P}(L(\mathbf{y}) \geq \eta \mid \mathsf{H}_0) = \alpha \]
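As a concrete sketch of how the threshold is chosen, consider the standard scalar Gaussian mean-shift example (this specific setup is my illustration, not from the note): under \(\mathsf{H}_0\) we observe \(y \sim \mathcal{N}(0,1)\), under \(\mathsf{H}_1\) we observe \(y \sim \mathcal{N}(\mu,1)\). The likelihood ratio is monotone in \(y\), so the test \(L(y) \geq \eta\) is equivalent to \(y \geq \gamma\), and we pick \(\gamma\) so that \(P_F = \alpha\):

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection (illustrative, not optimized)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical example: H0: y ~ N(0, 1) vs H1: y ~ N(mu, 1).
# The likelihood ratio is monotone increasing in y, so the LRT
# "L(y) >= eta" reduces to a simple threshold test "y >= gamma".
alpha = 0.05
mu = 2.0

# Choose gamma so that P_F = P(y >= gamma | H0) = alpha.
gamma = phi_inv(1.0 - alpha)

P_F = 1.0 - phi(gamma)        # false-alarm probability under H0
P_D = 1.0 - phi(gamma - mu)   # detection probability under H1

print(gamma, P_F, P_D)
```

By the lemma, no other decision rule with \(P_F \leq \alpha\) can achieve a larger \(P_D\) than this threshold test; sweeping \(\alpha\) from 0 to 1 traces out the receiver operating characteristic.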

### 1.1. things to pay attention to

- This is telling us that the optimum decision rule takes the form of a Likelihood Ratio Test specifically, which is perhaps surprising: the same form of test is optimal in both the Bayesian and the Neyman-Pearson settings, even though the criteria are different

## 2. useful links

- proof 1
- proof 2
- 6.437 Lecture notes (local copy)
- (all proofs involve the method of Lagrange multipliers)