
Markov's inequality

In probability theory, Markov's inequality gives an upper bound for the probability that a non-negative function of a random variable is greater than or equal to some positive constant. It is named after the Russian mathematician Andrey Markov.

Markov's inequality (like other similar inequalities) relates probabilities to expectations, and provides bounds for the distribution function of a random variable that are frequently loose but still useful.


Definition

Markov's inequality states that if X is a random variable and a is some positive constant, then

<math>\textrm{Pr}(|X| \geq a) \leq \frac{\textrm{E}(|X|)}{a}.</math>
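The inequality can be checked numerically. The following sketch (not part of the original article; the exponential distribution is chosen here purely as an illustration, so that E(X) = 1) compares an empirical estimate of Pr(X ≥ a) with the bound E(X)/a:

```python
import random

# Illustrative sanity check of Markov's inequality:
# estimate Pr(X >= a) and E(X)/a for X ~ Exponential(1), so E(X) = 1.
random.seed(0)
n = 100_000
samples = [random.expovariate(1.0) for _ in range(n)]

a = 3.0
empirical = sum(1 for x in samples if x >= a) / n   # estimate of Pr(X >= a)
markov_bound = (sum(samples) / n) / a               # estimate of E(X) / a

print(f"Pr(X >= {a}) ~ {empirical:.4f} <= bound {markov_bound:.4f}")
```

For this distribution the true probability is e^(−3) ≈ 0.05, well below the bound 1/3: the inequality holds but, as noted above, the bound is loose.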

A Generalisation

Markov's inequality is actually just one of a wider class of inequalities relating probabilities and expectations, all of which are instances of a single theorem.

Theorem

Let X be a random variable and a be some positive constant (a > 0). If

<math>h:\mathbb{R} \rightarrow [0,\infty)</math>

is a measurable function, then
<math>\textrm{Pr}(h(X) \geq a) \leq \frac{\textrm{E}(h(X))}{a}.</math>
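As an illustration of the theorem's reach (not stated in the original article): taking h(x) = (x − E(X))<sup>2</sup> and a = k<sup>2</sup>Var(X), so that E(h(X)) = Var(X), recovers Chebyshev's inequality:

<math>\textrm{Pr}\left(|X - \textrm{E}(X)| \geq k\sqrt{\textrm{Var}(X)}\right) = \textrm{Pr}\left((X - \textrm{E}(X))^2 \geq k^2\,\textrm{Var}(X)\right) \leq \frac{\textrm{Var}(X)}{k^2\,\textrm{Var}(X)} = \frac{1}{k^2}.</math>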

Proof

Let A be the set {x: h(x) ≥ a}, and let IA(x) be the indicator function of A. (That is, IA(x) = 1 if x ∈ A, and IA(x) = 0 otherwise.) Then, for every x,

<math>aI_A(x) \leq h(x).</math>
The theorem follows by taking the expectation of both sides of this inequality, dividing by a, and observing that
<math>\textrm{E}(I_A(X)) = \textrm{Pr}(h(X) \geq a).</math>

Examples
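The content of this section is missing from the dump; the following worked example is an illustrative sketch. Suppose X is a non-negative random variable with E(X) = 1. Then, whatever the distribution of X,

<math>\textrm{Pr}(X \geq 5) \leq \frac{\textrm{E}(X)}{5} = \frac{1}{5}.</math>

Concretely, if the mean income in a population is $50,000, then at most one fifth of the population can have an income of $250,000 or more.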

wikipedia.org dumped 2003-03-17 with terodump