Cumulant

From Wikipedia, the free encyclopedia

Cumulants of a random variable

In probability theory and statistics, a random variable X has an expected value μ = E(X) and a variance σ² = E((X − μ)²). These are the first two cumulants: μ = κ₁ and σ² = κ₂.

The cumulants κₙ are defined by the cumulant-generating function:

g(t)=\log(E\left(\exp(tX)\right))=\sum_{n=1}^\infty\frac{\kappa_n t^n}{n!}=\mu t + \frac{\sigma^2 t^2}{2} +\cdots\,

The derivative of the cumulant-generating function is simply:

g'(t)=\sum_{n=0}^\infty\frac{\kappa_{n+1} t^n}{n!}=\mu + \sigma^2 t + \cdots \,\!

so that the cumulants are the derivatives at t = 0:

κ₁ = μ = g′(0),
κ₂ = σ² = g″(0),
κₙ = g⁽ⁿ⁾(0).

A distribution with given cumulants κn can be approximated through the Edgeworth series.
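
Since the cumulants are derivatives of g at t = 0, they can be recovered numerically whenever the cumulant-generating function is available in closed form. The following sketch is our illustration, not part of the article: it applies central finite differences to the CGF of a Poisson distribution, all of whose cumulants equal its mean, and the step size h = 1e-3 is an arbitrary choice.

import math

def poisson_cgf(t, lam=3.0):
    # CGF of a Poisson distribution with mean lam:
    # g(t) = log E(exp(tX)) = lam*(exp(t) - 1), so every cumulant equals lam.
    return lam * (math.exp(t) - 1.0)

def nth_derivative_at_zero(f, n, h=1e-3):
    # Central finite-difference estimate of the nth derivative of f at 0.
    return sum((-1) ** k * math.comb(n, k) * f((n / 2.0 - k) * h)
               for k in range(n + 1)) / h ** n

for n in (1, 2, 3):
    print(n, nth_derivative_at_zero(poisson_cgf, n))  # each is approximately 3.0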

Some properties of cumulants

Invariance and equivariance

The first cumulant is shift-equivariant; all of the others are shift-invariant. To state this less tersely, denote by κₙ(X) the nth cumulant of the probability distribution of the random variable X. The statement is that if c is constant then κ₁(X + c) = κ₁(X) + c and κₙ(X + c) = κₙ(X) for n ≥ 2, i.e., c is added to the first cumulant, but all higher cumulants are unchanged.

Homogeneity

The nth cumulant is homogeneous of degree n, i.e., if c is any constant, then

κₙ(cX) = cⁿ·κₙ(X).

Additivity

If X and Y are independent random variables then κₙ(X + Y) = κₙ(X) + κₙ(Y).

Cumulants and moments

The moment-generating function is:

1+\sum_{n=1}^\infty \frac{\mu'_n t^n}{n!}=\exp\left(\sum_{n=1}^\infty \frac{\kappa_n t^n}{n!}\right).

The first cumulant is simply the expected value; the second and third cumulants are respectively the second and third central moments (the second central moment is the variance); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments.

The cumulants are related to the moments by the following recursion formula:

\kappa_n=\mu'_n-\sum_{k=1}^{n-1}{n-1 \choose k-1}\kappa_k \mu_{n-k}'.
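
The recursion translates directly into a few lines of code. Below is a minimal sketch (ours, not the article's; the function name cumulants_from_moments is our own). The test values are the standard raw moments of a Poisson distribution with mean 2, all of whose cumulants equal 2.

from math import comb

def cumulants_from_moments(m):
    # m[i] holds the raw moment mu'_{i+1}; returns [kappa_1, ..., kappa_n].
    kappa = []
    for n in range(1, len(m) + 1):
        kappa.append(m[n - 1] - sum(comb(n - 1, k - 1) * kappa[k - 1] * m[n - k - 1]
                                    for k in range(1, n)))
    return kappa

print(cumulants_from_moments([2, 6, 22, 94]))  # [2, 2, 2, 2]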

The nth moment μ′ₙ is an nth-degree polynomial in the first n cumulants:

\mu'_1=\kappa_1\,
\mu'_2=\kappa_2+\kappa_1^2\,
\mu'_3=\kappa_3+3\kappa_2\kappa_1+\kappa_1^3\,
\mu'_4=\kappa_4+4\kappa_3\kappa_1+3\kappa_2^2+6\kappa_2\kappa_1^2+\kappa_1^4\,
\mu'_5=\kappa_5+5\kappa_4\kappa_1+10\kappa_3\kappa_2 +10\kappa_3\kappa_1^2+15\kappa_2^2\kappa_1 +10\kappa_2\kappa_1^3+\kappa_1^5\,
\mu'_6=\kappa_6+6\kappa_5\kappa_1+15\kappa_4\kappa_2+15\kappa_4\kappa_1^2 +10\kappa_3^2+60\kappa_3\kappa_2\kappa_1+20\kappa_3\kappa_1^3+15\kappa_2^3 +45\kappa_2^2\kappa_1^2+15\kappa_2\kappa_1^4+\kappa_1^6\,

The coefficients are precisely those that occur in Faà di Bruno's formula.

The "prime" distinguishes the moments μ′n from the central moments μn. To express the central moments as functions of the cumulants, just drop from these polynomials all terms in which κ1 appears as a factor:

\mu_1=0\,
\mu_2=\kappa_2\,
\mu_3=\kappa_3\,
\mu_4=\kappa_4+3\kappa_2^2\,
\mu_5=\kappa_5+10\kappa_3\kappa_2\,
\mu_6=\kappa_6+15\kappa_4\kappa_2+15\kappa_2^3\,
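
One way to check this: X − μ has the same cumulants as X except that its first cumulant is zero, so the central moments are the moment polynomials evaluated with κ₁ = 0. A hedged sketch, inverting the recursion above (the helper name moments_from_cumulants is ours):

from math import comb

def moments_from_cumulants(kappa):
    # kappa[i] holds kappa_{i+1}; returns raw moments [mu'_1, ..., mu'_n].
    m = []
    for n in range(1, len(kappa) + 1):
        m.append(kappa[n - 1] + sum(comb(n - 1, k - 1) * kappa[k - 1] * m[n - k - 1]
                                    for k in range(1, n)))
    return m

# With kappa_1 = 0 the raw moments are the central moments; here
# mu_4 = kappa_4 + 3*kappa_2**2 = 2 + 3*25 = 77.
print(moments_from_cumulants([0, 5, 1, 2]))  # [0, 5, 1, 77]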

Cumulants and set-partitions

These polynomials have a remarkable combinatorial interpretation: the coefficients count certain partitions of sets. A general form of these polynomials is

\mu'_n=\sum_{\pi}\prod_{B\in\pi}\kappa_{\left|B\right|}

where

  • π runs through the list of all partitions of a set of size n;
  • "B ∈ π" means B is one of the "blocks" into which the set is partitioned; and
  • |B| is the size of the set B.

Thus each monomial is a constant times a product of cumulants in which the sum of the indices is n (e.g., in the term κ₃κ₂²κ₁, the sum of the indices is 3 + 2 + 2 + 1 = 8; this term appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). Each term corresponds to a partition of the integer n; the coefficient in each term is the number of partitions of a set of n members that collapse to that partition of the integer n when the members of the set become indistinguishable.
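
The partition formula can be checked by brute force. The sketch below is our own illustration, not from the article: it enumerates all partitions of a set recursively and sums the products of cumulants, reproducing the value obtained from the polynomials above.

from math import prod

def set_partitions(elements):
    # Yield every partition of a list as a list of blocks.
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # Put `first` into each existing block in turn ...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # ... or into a new singleton block.
        yield [[first]] + partition

def moment_from_cumulants(n, kappa):
    # mu'_n as a sum over set partitions; kappa[i] holds kappa_{i+1}.
    return sum(prod(kappa[len(block) - 1] for block in partition)
               for partition in set_partitions(list(range(n))))

print(moment_from_cumulants(4, [0, 5, 1, 2]))  # 77, matching the recursion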

Cumulants of particular probability distributions

  • For the constant random variable X = μ, the derivative of the cumulant-generating function is g′(t) = μ. The cumulants are κ₁ = μ and κₙ = 0 for n = 2, 3, 4, ...
  • For the Bernoulli distribution (number of successes in one trial) with expectation p, the derivative of the cumulant-generating function is g′(t) = ((p⁻¹ − 1)·e⁻ᵗ + 1)⁻¹.
The cumulants satisfy the recursion formula \kappa_1=p,\,\kappa_{n+1}=p(1-p){d\kappa_n \over dp}\, (see the sketch after this list).
  • For the binomial distribution (number of successes in n independent trials with probability p of success on each trial), every cumulant is just n times the corresponding cumulant of the Bernoulli distribution. The derivative of the cumulant-generating function is g′(t) = n·((p⁻¹ − 1)·e⁻ᵗ + 1)⁻¹. The first cumulants are μ = g′(0) = n·p and σ² = g″(0) = μ·(1 − p). The special case n = 1 is the Bernoulli distribution. Substituting p = μ·n⁻¹ gives g′(t) = ((μ⁻¹ − n⁻¹)·e⁻ᵗ + n⁻¹)⁻¹. The limiting case n⁻¹ = 0 is the Poisson distribution, with g′(t) = μ·eᵗ.
  • For the geometric distribution (number of failures before one success), the derivative of the cumulant-generating function is g′(t) = ((1 − p)⁻¹·e⁻ᵗ − 1)⁻¹.
  • For the negative binomial distribution (number of failures before n successes), the derivative of the cumulant-generating function is g′(t) = n·((1 − p)⁻¹·e⁻ᵗ − 1)⁻¹. The special case n = 1 is the geometric distribution. The first cumulants are μ = g′(0) = n·((1 − p)⁻¹ − 1)⁻¹ = n·(p⁻¹ − 1)⁻¹·(1 − p)·p⁻¹·p = n·(1 − p)·p⁻¹, and σ² = g″(0) = μ·p⁻¹. The limiting case n⁻¹ = 0 is the Poisson distribution.
  • For the Poisson distribution, the derivative of the cumulant-generating function is g′(t) = λ·eᵗ. All cumulants are equal to the parameter: κₙ = λ for n = 1, 2, 3, ...
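
The Bernoulli recursion above is easy to verify symbolically. The following sketch is ours, not the article's, and assumes the SymPy library:

import sympy as sp

p = sp.symbols('p')

kappa = p  # kappa_1 = p
for n in range(1, 4):
    print(f'kappa_{n} =', kappa)
    kappa = sp.expand(p * (1 - p) * sp.diff(kappa, p))

# kappa_2 expands to p - p**2 = p(1-p), the Bernoulli variance, and
# kappa_3 to 2*p**3 - 3*p**2 + p = p(1-p)(1-2p), its third central moment.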

Introducing the ratio of the variance to the mean as a new parameter, the eccentricity ε = μ⁻¹·σ², the above probability distributions admit a unified formula for the derivative of the cumulant-generating function:

g′(t) = μ·(ε·e⁻ᵗ − ε + 1)⁻¹,

and the second derivative is

g″(t) = g′(t)·(1 + (ε⁻¹ − 1)·eᵗ)⁻¹,

confirming that the first cumulant is g′(0) = μ and the second cumulant is g″(0) = μ·ε. The constant random variable has eccentricity ε = 0. The binomial distribution has eccentricity ε = 1 − p, so that 0 < ε < 1. The Poisson distribution has eccentricity ε = 1. The negative binomial distribution has eccentricity ε = p⁻¹, so that ε > 1. Note the analogy to the eccentricity of conic sections: circle ε = 0, ellipse 0 < ε < 1, parabola ε = 1, hyperbola ε > 1.
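
As a quick symbolic sanity check of the unified formula (again assuming SymPy; this is our illustration, not the article's):

import sympy as sp

t, mu, eps = sp.symbols('t mu epsilon', positive=True)

g1 = mu / (eps * sp.exp(-t) - eps + 1)  # the unified g'(t)

print(sp.simplify(g1.subs(t, 0)))              # mu, the first cumulant
print(sp.simplify(sp.diff(g1, t).subs(t, 0)))  # mu*epsilon, the second cumulant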

  • For the normal distribution with expected value μ and variance σ², the derivative of the cumulant-generating function is g′(t) = μ + σ²·t. The cumulants are κ₁ = μ, κ₂ = σ², and κₙ = 0 for n > 2. The special case σ² = 0 is the constant random variable X = μ.

Joint cumulants

The joint cumulant of several random variables X₁, ..., Xₙ is

\kappa(X_1,\dots,X_n) =\sum_\pi (|\pi|-1)!(-1)^{|\pi|-1}\prod_{B\in\pi}E\left(\prod_{i\in B}X_i\right)

where π runs through the list of all partitions of { 1, ..., n }, and B runs through the list of all blocks of the partition π. For example,

\kappa(X,Y,Z)=E(XYZ)-E(XY)E(Z)-E(XZ)E(Y)-E(YZ)E(X)+2E(X)E(Y)E(Z).\,

The joint cumulant of just one random variable is its expected value, and that of two random variables is their covariance. If some of the random variables are independent of all of the others, then the joint cumulant is zero. If all n random variables are the same, then the joint cumulant is the nth ordinary cumulant.
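
The defining sum over partitions can also be applied to empirical mixed moments to estimate joint cumulants from data. A minimal sketch (the function name joint_cumulant is ours, and it assumes the set_partitions generator from the earlier sketch is in scope):

from math import factorial, prod
import random

def joint_cumulant(samples):
    # Estimate kappa(X_1, ..., X_n) from equal-length sample lists by plugging
    # empirical mixed moments into the partition formula.
    n, size = len(samples), len(samples[0])

    def mixed_moment(indices):
        return sum(prod(samples[i][j] for i in indices)
                   for j in range(size)) / size

    return sum((-1) ** (len(pi) - 1) * factorial(len(pi) - 1)
               * prod(mixed_moment(block) for block in pi)
               for pi in set_partitions(list(range(n))))

random.seed(0)
x = [random.gauss(0, 1) for _ in range(100000)]
print(joint_cumulant([x, x]))     # ~1: kappa(X, X) = var(X)
print(joint_cumulant([x, x, x]))  # ~0: third cumulant of a normal variable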

The combinatorial meaning of the expression of moments in terms of cumulants is easier to understand than that of cumulants in terms of moments:

E(X_1\cdots X_n)=\sum_\pi\prod_{B\in\pi}\kappa(X_i : i \in B).

For example:

E(XYZ)=\kappa(X,Y,Z)+\kappa(X,Y)\kappa(Z)+\kappa(X,Z)\kappa(Y) +\kappa(Y,Z)\kappa(X)+\kappa(X)\kappa(Y)\kappa(Z).\,

Another important property of joint cumulants is multilinearity:

\kappa(X+Y,Z_1,Z_2,\dots)=\kappa(X,Z_1,Z_2,\dots)+\kappa(Y,Z_1,Z_2,\dots).\,

Just as the second cumulant is the variance, the joint cumulant of two random variables is the covariance. The familiar identity

\operatorname{var}(X+Y)=\operatorname{var}(X) +2\operatorname{cov}(X,Y)+\operatorname{var}(Y)\,

generalizes to cumulants:

\kappa_n(X+Y)=\sum_{j=0}^n {n \choose j} \kappa(\,\underbrace{X,\dots,X}_{j},\underbrace{Y,\dots,Y}_{n-j}).\,

Conditional cumulants and the law of total cumulance

The law of total expectation and the law of total variance generalize naturally to conditional cumulants. The case n = 3, expressed in the language of (central) moments rather than that of cumulants, says

\mu_3(X)=E(\mu_3(X\mid Y))+\mu_3(E(X\mid Y)) +3\,\operatorname{cov}(E(X\mid Y),\operatorname{var}(X\mid Y)).

The general result stated below first appeared in 1969 in "The Calculation of Cumulants via Conditioning" by David R. Brillinger, Annals of the Institute of Statistical Mathematics, vol. 21, pp. 215–218.

In general, we have

\kappa(X_1,\dots,X_n)=\sum_\pi \kappa(\kappa(X_{\pi_1}\mid Y),\dots,\kappa(X_{\pi_b}\mid Y))

where

  • the sum is over all partitions π of the set { 1, ..., n } of indices, and
  • π1, ..., πb are all of the "blocks" of the partition π; the expression κ(Xπk ∣ Y) denotes the joint cumulant, conditional on Y, of the random variables whose indices are in that block of the partition.

History

Cumulants were first introduced by the Danish astronomer, actuary, mathematician, and statistician Thorvald N. Thiele (1838–1910) in 1889. Thiele called them half-invariants. They were first called cumulants in a 1931 paper, "The derivation of the pattern formulae of two-way partitions from those of simpler patterns" (Proceedings of the London Mathematical Society, Series 2, v. 33, pp. 195–208), by the statistical geneticist Sir Ronald Fisher and the statistician John Wishart, eponym of the Wishart distribution. The historian Stephen Stigler has said that the name cumulant was suggested to Fisher in a letter from Harold Hotelling. In an earlier paper, published in 1929, Fisher had called them cumulative moment functions.

Formal cumulants

More generally, the cumulants of a sequence { mₙ : n = 1, 2, 3, ... }, not necessarily the moments of any probability distribution, are given by

1+\sum_{n=1}^\infty m_n t^n/n!=\exp\left(\sum_{n=1}^\infty\kappa_n t^n/n!\right)

where the values of κₙ for n = 1, 2, 3, ... are found formally, i.e., by algebra alone, ignoring questions of whether any series converges. All of the difficulties of the "problem of cumulants" are absent when one works formally. The simplest example: the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero. Formal cumulants are subject to no such constraints.

One well-known example

In combinatorics, the nth Bell number is the number of partitions of a set of size n. All of the cumulants of the sequence of Bell numbers are equal to 1. The Bell numbers are the moments of the Poisson distribution with expected value 1.
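
This is easy to check with the moment recursion given earlier, reusing the hypothetical moments_from_cumulants helper from the sketch above:

# All formal cumulants equal to 1 yield the Bell numbers as formal moments
# (assumes moments_from_cumulants from the earlier sketch is in scope):
print(moments_from_cumulants([1] * 6))  # [1, 2, 5, 15, 52, 203]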

Cumulants of a polynomial sequence of binomial type

For any sequence { κₙ : n = 1, 2, 3, ... } of scalars in a field of characteristic zero, considered as formal cumulants, there is a corresponding sequence { μ′ₙ : n = 1, 2, 3, ... } of formal moments, given by the polynomials above. For those polynomials, construct a polynomial sequence in the following way. Out of the polynomial

\begin{matrix}\mu'_6= & \kappa_6+6\kappa_5\kappa_1+15\kappa_4\kappa_2+15\kappa_4\kappa_1^2 +10\kappa_3^2+60\kappa_3\kappa_2\kappa_1 \\  \\ & {}+20\kappa_3\kappa_1^3+15\kappa_2^3 +45\kappa_2^2\kappa_1^2+15\kappa_2\kappa_1^4+\kappa_1^6\end{matrix}

make a new polynomial in these plus one additional variable x:

\begin{matrix}p_6(x)= & (\kappa_6)\,x+(6\kappa_5\kappa_1+15\kappa_4\kappa_2+10\kappa_3^2)\,x^2 +(15\kappa_4\kappa_1^2+60\kappa_3\kappa_2\kappa_1+15\kappa_2^3)\,x^3 \\  \\ & {}+(20\kappa_3\kappa_1^3+45\kappa_2^2\kappa_1^2)\,x^4+(15\kappa_2\kappa_1^4)\,x^5 +(\kappa_1^6)\,x^6\end{matrix}

... and generalize the pattern. The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on x. Each coefficient is a polynomial in the cumulants; these are the Bell polynomials, named after Eric Temple Bell.
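
The grouping by number of blocks can be generated mechanically. The sketch below is ours (it assumes the set_partitions generator from the earlier sketch is in scope); it tallies the partitions of a 6-element set by block count and block-size pattern, reproducing the coefficients of p₆(x):

from collections import Counter

tally = Counter()
for partition in set_partitions(list(range(6))):
    sizes = tuple(sorted((len(block) for block in partition), reverse=True))
    tally[(len(partition), sizes)] += 1

for (num_blocks, sizes), count in sorted(tally.items()):
    term = '*'.join(f'kappa_{s}' for s in sizes)
    print(f'x^{num_blocks}: {count}*{term}')

# The x^2 lines, for example, read 6*kappa_5*kappa_1, 15*kappa_4*kappa_2,
# and 10*kappa_3*kappa_3, matching the x^2 coefficient above.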

This sequence of polynomials is of binomial type. In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants.

Free cumulants

In the identity

E(X_1\cdots X_n)=\sum_\pi\prod_{B\in\pi}\kappa(X_i : i\in B)

one sums over all partitions of the set { 1, ..., n }. If instead one sums only over the noncrossing partitions, one obtains the "free cumulants" rather than the conventional cumulants treated above. These play a central role in free probability theory. In that theory, rather than considering independence of random variables, defined in terms of Cartesian products of algebras of random variables, one considers instead "freeness" of random variables, defined in terms of free products of algebras.

The ordinary cumulants of degree higher than 2 of the normal distribution are zero. The free cumulants of degree higher than 2 of the Wigner semicircle distribution are zero. This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory.
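
This connection can be demonstrated by brute force: if the only nonzero free cumulant is κ₂ = 1, the sum over noncrossing partitions reduces to counting noncrossing pair partitions, giving the Catalan numbers 1, 2, 5, 14, ..., which are the even moments of the standard semicircle law. A sketch (ours, assuming set_partitions from the earlier sketch is in scope):

def is_noncrossing(partition):
    # A partition of {0, ..., n-1} is crossing if there are a < b < c < d with
    # a, c in one block and b, d in a different block.
    block_of = {}
    for idx, block in enumerate(partition):
        for x in block:
            block_of[x] = idx
    n = len(block_of)
    for a in range(n):
        for b in range(a + 1, n):
            for c in range(b + 1, n):
                for d in range(c + 1, n):
                    if (block_of[a] == block_of[c] != block_of[b]
                            and block_of[b] == block_of[d]):
                        return False
    return True

for n in (2, 4, 6, 8):
    count = sum(1 for pi in set_partitions(list(range(n)))
                if all(len(block) == 2 for block in pi) and is_noncrossing(pi))
    print(n, count)  # 1, 2, 5, 14: the Catalan numbers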
