cramer-rao bound
The variance of any unbiased estimator \(\hat{\theta}\) of \(\theta\) is bounded below by the reciprocal of the fisher information: \[ Var(\hat{\theta}) \geq \frac{1}{I(\theta)} \]
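For a concrete check, take a single observation \(x \sim N(\theta, \sigma^2)\) with known \(\sigma^2\) and the estimator \(\hat{\theta} = x\): \[ I(\theta) = \frac{1}{\sigma^2}, \qquad Var(\hat{\theta}) = \sigma^2 = \frac{1}{I(\theta)}, \] so in this case the bound holds with equality.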
Write \(l'(x; \theta) = \frac{\partial}{\partial \theta} \log p(x \mid \theta)\) for the score. The covariance of the estimator and the score is: \[\begin{align} Cov(\hat{\theta}, l'(x; \theta)) &= E[\hat{\theta}l'(x; \theta)] - E[\hat{\theta}]E[l'(x; \theta)]\\ &= E[\hat{\theta}l'(x; \theta)] \end{align}\] The second line uses the fact that the expectation of the score is 0 (see score).
Then, \[\begin{align*} E[\hat{\theta}l'(x; \theta)] &= \int \hat{\theta}\, l'(x; \theta)\, p(x\mid \theta)\, dx \\ &= \int \hat{\theta}\, \frac{\partial \log p(x \mid \theta)}{\partial \theta}\, p(x\mid \theta)\, dx \\ &= \int \hat{\theta}\, \frac{1}{p(x\mid \theta)}\frac{\partial p(x \mid \theta)}{\partial \theta}\, p(x\mid \theta)\, dx \\ &= \int \hat{\theta}\, \frac{\partial p(x \mid \theta)}{\partial \theta}\, dx \\ &= \frac{\partial}{\partial \theta} \int \hat{\theta}\, p(x \mid \theta)\, dx \\ &= \frac{\partial}{\partial \theta} E[\hat{\theta}] \\ &= \frac{\partial}{\partial \theta} \theta \\ &= 1 \\ \end{align*}\] The third line is the log-derivative trick, \(\frac{\partial \log p}{\partial \theta} = \frac{1}{p}\frac{\partial p}{\partial \theta}\). The fifth line moves the derivative outside the integral (see leibniz rule), which is valid because \(\hat{\theta}\) is a function of \(x\) alone. The seventh line uses the fact that the estimator is unbiased, so its expectation is \(\theta\).
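Continuing the gaussian example, the score is \(l'(x; \theta) = \frac{x - \theta}{\sigma^2}\), and indeed \[ E[\hat{\theta}\, l'(x; \theta)] = E\left[\frac{x(x - \theta)}{\sigma^2}\right] = \frac{E[x^2] - \theta E[x]}{\sigma^2} = \frac{(\sigma^2 + \theta^2) - \theta^2}{\sigma^2} = 1. \]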
Then, from the cauchy schwarz inequality: \[ Var(\hat{\theta})\,Var(l'(x; \theta)) \geq Cov(\hat{\theta}, l'(x; \theta))^2 = 1 \]
So \[ Var(\hat{\theta}) \geq \frac{1}{Var(l'(x; \theta))} = \frac{1}{I(\theta)} \] since the fisher information is the variance of the score.
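A quick numerical sanity check of the pieces above (a minimal sketch using numpy, assuming the gaussian example with \(\hat{\theta} = x\)):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n = 2.0, 1.5, 200_000

# Single observation x ~ N(theta, sigma^2); unbiased estimator theta_hat = x.
x = rng.normal(theta, sigma, size=n)
theta_hat = x

# Score of the gaussian likelihood: l'(x; theta) = (x - theta) / sigma^2.
score = (x - theta) / sigma**2

print(score.mean())                    # ~0: the score has zero mean
print(np.cov(theta_hat, score)[0, 1])  # ~1: Cov(theta_hat, score) = 1
print(theta_hat.var() * score.var())   # ~1: cauchy schwarz holds with equality here
print(1 / score.var())                 # ~sigma^2: the bound, attained by Var(theta_hat)
```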