
# Generalized inverse Gaussian distribution

Article Id: WHEBN0002682998
Author: World Heritage Encyclopedia
Language: English
Publisher: World Heritage Encyclopedia

### Generalized inverse Gaussian distribution

| Quantity | Expression |
|---|---|
| Parameters | a > 0, b > 0, p real |
| Support | x > 0 |
| PDF | f(x) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})} x^{p-1} e^{-(ax + b/x)/2} |
| Mean | \frac{\sqrt{b}\ K_{p+1}(\sqrt{ab})}{\sqrt{a}\ K_p(\sqrt{ab})} |
| Mode | \frac{(p-1)+\sqrt{(p-1)^2+ab}}{a} |
| Variance | \left(\frac{b}{a}\right)\left[\frac{K_{p+2}(\sqrt{ab})}{K_p(\sqrt{ab})}-\left(\frac{K_{p+1}(\sqrt{ab})}{K_p(\sqrt{ab})}\right)^2\right] |
| MGF | \left(\frac{a}{a-2t}\right)^{\frac{p}{2}}\frac{K_p(\sqrt{b(a-2t)})}{K_p(\sqrt{ab})} |
| Characteristic function | \left(\frac{a}{a-2it}\right)^{\frac{p}{2}}\frac{K_p(\sqrt{b(a-2it)})}{K_p(\sqrt{ab})} |
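The closed-form mean and variance in the table can be sanity-checked numerically. The sketch below (parameter values a, b, p are arbitrary choices, not from the article) compares the Bessel-function expressions against direct numerical integration of the density:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function of the second kind

# Arbitrary example parameters (not from the article).
a, b, p = 2.0, 3.0, 1.5
s = np.sqrt(a * b)

def pdf(x):
    return (a / b) ** (p / 2) / (2 * kv(p, s)) * x ** (p - 1) * np.exp(-(a * x + b / x) / 2)

# Closed-form mean and variance from the table above.
mean_formula = np.sqrt(b) * kv(p + 1, s) / (np.sqrt(a) * kv(p, s))
var_formula = (b / a) * (kv(p + 2, s) / kv(p, s) - (kv(p + 1, s) / kv(p, s)) ** 2)

# The same moments by direct numerical integration.
mean_num = quad(lambda x: x * pdf(x), 0, np.inf)[0]
var_num = quad(lambda x: (x - mean_num) ** 2 * pdf(x), 0, np.inf)[0]

print(mean_formula, mean_num)
print(var_formula, var_num)
```

The two columns of output should agree to quadrature precision.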

In probability theory and statistics, the generalized inverse Gaussian distribution (GIG) is a three-parameter family of continuous probability distributions with probability density function

f(x) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})} x^{(p-1)} e^{-(ax + b/x)/2},\qquad x>0,

where K_p is a modified Bessel function of the second kind, a > 0, b > 0, and p is a real parameter. It is used extensively in geostatistics, statistical linguistics, and finance. The distribution was first proposed by Étienne Halphen, and was rediscovered and popularised by Ole Barndorff-Nielsen, who named it the generalized inverse Gaussian distribution. It is also known as the Sichel distribution, after Herbert Sichel. Its statistical properties are discussed in Bent Jørgensen's lecture notes.
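For readers working in Python, the density can be transcribed directly with `scipy.special.kv`. SciPy also ships this distribution as `scipy.stats.geninvgauss`; its two-parameter form (p, √(ab)) with a scale of √(b/a) should coincide with the (a, b, p) parameterization used here. A minimal sketch, with arbitrary example parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv
from scipy.stats import geninvgauss

a, b, p = 2.0, 3.0, 1.5  # arbitrary example parameters

def gig_pdf(x):
    # Direct transcription of the density f(x) above.
    s = np.sqrt(a * b)
    return (a / b) ** (p / 2) / (2 * kv(p, s)) * x ** (p - 1) * np.exp(-(a * x + b / x) / 2)

# The density integrates to 1 on (0, inf).
total = quad(gig_pdf, 0, np.inf)[0]

# SciPy's geninvgauss(p, sqrt(ab)) with scale sqrt(b/a) is the same distribution.
x = np.linspace(0.1, 5.0, 50)
ref = geninvgauss.pdf(x, p, np.sqrt(a * b), scale=np.sqrt(b / a))

print(total)
print(np.allclose(gig_pdf(x), ref))
```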

## Contents

• 1 Properties
  • 1.1 Summation
  • 1.2 Entropy
  • 1.3 Differential equation
• 2 Related distributions
  • 2.1 Special cases
  • 2.2 Conjugate prior for Gaussian
• 3 Notes
• 4 References

## Properties

### Summation

Barndorff-Nielsen and Halgreen proved that the GIG distribution is infinitely divisible.

### Entropy

The entropy of the generalized inverse Gaussian distribution is given as

H(f(x))=\frac{1}{2} \log \left(\frac{b}{a}\right)+\log \left(2 K_p\left(\sqrt{a b}\right)\right)- (p-1) \frac{\left[\frac{d}{d\nu}K_\nu\left(\sqrt{ab}\right)\right]_{\nu=p}}{K_p\left(\sqrt{a b}\right)}+\frac{\sqrt{a b}}{2 K_p\left(\sqrt{a b}\right)}\left( K_{p+1}\left(\sqrt{a b}\right) + K_{p-1}\left(\sqrt{a b}\right)\right)

where \left[\frac{d}{d\nu}K_\nu\left(\sqrt{a b}\right)\right]_{\nu=p} is the derivative of the modified Bessel function of the second kind with respect to the order \nu, evaluated at \nu=p.
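SciPy exposes no closed form for the order-derivative of K_\nu, but it can be approximated by a central finite difference, and the entropy expression can then be checked against direct numerical integration of -f \log f. A sketch with arbitrary example parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

a, b, p = 2.0, 3.0, 1.5  # arbitrary example parameters
s = np.sqrt(a * b)

def logpdf(x):
    # log of the GIG density, written directly to avoid underflow in the tails
    return (p / 2) * np.log(a / b) - np.log(2 * kv(p, s)) \
        + (p - 1) * np.log(x) - (a * x + b / x) / 2

# Order-derivative of K_nu at nu = p by central finite difference.
h = 1e-6
dK = (kv(p + h, s) - kv(p - h, s)) / (2 * h)

H_formula = (0.5 * np.log(b / a) + np.log(2 * kv(p, s))
             - (p - 1) * dK / kv(p, s)
             + s / (2 * kv(p, s)) * (kv(p + 1, s) + kv(p - 1, s)))

# Differential entropy by direct integration of -f log f.
H_num = quad(lambda x: -np.exp(logpdf(x)) * logpdf(x), 0, np.inf)[0]
print(H_formula, H_num)
```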

### Differential equation

The pdf of the generalized inverse Gaussian distribution is a solution to the following differential equation:

\left\{\begin{array}{l} f(x) (x (a x-2 p+2)-b)+2 x^2 f'(x)=0, \\ f(1)=\frac{e^{\frac{1}{2} (-a-b)} \left(\frac{a}{b}\right)^{p/2}}{2 K_p\left(\sqrt{a b}\right)} \end{array}\right\}
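The equation can be verified directly from the closed-form density: since f'(x)/f(x) = (p-1)/x - a/2 + b/(2x^2), substituting into the left-hand side cancels identically. A numerical sketch with arbitrary parameters:

```python
import numpy as np
from scipy.special import kv

a, b, p = 2.0, 3.0, 1.5  # arbitrary example parameters
s = np.sqrt(a * b)

def pdf(x):
    return (a / b) ** (p / 2) / (2 * kv(p, s)) * x ** (p - 1) * np.exp(-(a * x + b / x) / 2)

def dpdf(x):
    # From log f(x): f'(x)/f(x) = (p - 1)/x - a/2 + b/(2 x^2)
    return pdf(x) * ((p - 1) / x - a / 2 + b / (2 * x ** 2))

# The ODE residual should vanish (up to rounding) on the whole support.
x = np.linspace(0.2, 5.0, 50)
residual = pdf(x) * (x * (a * x - 2 * p + 2) - b) + 2 * x ** 2 * dpdf(x)
print(np.max(np.abs(residual)))
```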

## Related distributions

### Special cases

The inverse Gaussian and gamma distributions are special cases of the generalized inverse Gaussian distribution for p = -1/2 and b = 0, respectively. Specifically, an inverse Gaussian distribution of the form

f(x;\mu,\lambda) = \left[\frac{\lambda}{2 \pi x^3}\right]^{1/2} \exp\left(\frac{-\lambda (x-\mu)^2}{2 \mu^2 x}\right)

is a GIG with a = \lambda/\mu^2, b = \lambda, and p=-1/2. A Gamma distribution of the form

g(x;\alpha,\beta) = \beta^{\alpha}\frac{1}{\Gamma(\alpha)} x^{\alpha-1} e^{-\beta x}

is a GIG with a = 2 \beta, b = 0, and p = \alpha.

Other special cases include the inverse-gamma distribution, for a=0, and the hyperbolic distribution, for p=0.
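These reductions are easy to confirm numerically against SciPy's `invgauss` and `gamma` distributions. Note that b = 0 cannot be plugged into the density directly (K_p(0) diverges), so the gamma case is approached with a very small b. A sketch, assuming SciPy's (mu, scale) convention for `invgauss`; parameter values are arbitrary:

```python
import numpy as np
from scipy.special import kv
from scipy.stats import gamma, invgauss

def gig_pdf(x, a, b, p):
    s = np.sqrt(a * b)
    return (a / b) ** (p / 2) / (2 * kv(p, s)) * x ** (p - 1) * np.exp(-(a * x + b / x) / 2)

x = np.linspace(0.1, 5.0, 50)

# Inverse Gaussian(mu, lambda) == GIG(a = lambda/mu^2, b = lambda, p = -1/2).
mu, lam = 1.3, 2.0
ig = invgauss.pdf(x, mu / lam, scale=lam)  # SciPy's (mu, scale) convention
match_ig = np.allclose(ig, gig_pdf(x, lam / mu ** 2, lam, -0.5))

# Gamma(alpha, rate beta) is the b -> 0 limit with a = 2*beta, p = alpha.
alpha, beta = 2.5, 1.5
g = gamma.pdf(x, alpha, scale=1 / beta)
match_gamma = np.allclose(g, gig_pdf(x, 2 * beta, 1e-12, alpha))

print(match_ig, match_gamma)
```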

### Conjugate prior for Gaussian

The GIG distribution is conjugate to the normal distribution when serving as the mixing distribution in a normal variance-mean mixture. Let the prior distribution for some hidden variable, say z, be GIG:

P(z|a,b,p) = \text{GIG}(z|a,b,p)

and let there be T observed data points, X=x_1,\ldots,x_T, with normal likelihood function, conditioned on z:

P(X|z,\alpha,\beta) = \prod_{i=1}^T N(x_i|\alpha+\beta z,z)

where N(x|\mu,v) denotes the normal distribution with mean \mu and variance v. Then the posterior for z, given the data, is also GIG:

P(z|X,a,b,p,\alpha,\beta) = \text{GIG}(z|a+T\beta^2,\,b+S,\,p-\tfrac{T}{2})

where \textstyle S = \sum_{i=1}^T (x_i-\alpha)^2.[note 1]
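The conjugacy can be checked numerically by multiplying the GIG prior by the normal likelihood, normalizing by quadrature, and comparing against the GIG density with the updated parameters. A sketch with arbitrary synthetic data (all parameter values are illustrative, not from the article):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def gig_pdf(z, a, b, p):
    s = np.sqrt(a * b)
    return (a / b) ** (p / 2) / (2 * kv(p, s)) * z ** (p - 1) * np.exp(-(a * z + b / z) / 2)

def norm_pdf(x, m, v):
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

# Arbitrary setup: GIG prior on z, normal likelihood with mean alpha + beta*z.
a, b, p = 2.0, 3.0, 1.5
alpha, beta = 0.5, 1.2
x = np.random.default_rng(0).normal(size=5)  # T = 5 synthetic "observations"
T, S = len(x), np.sum((x - alpha) ** 2)

def unnorm_post(z):
    return gig_pdf(z, a, b, p) * np.prod(norm_pdf(x, alpha + beta * z, z))

Z = quad(unnorm_post, 0, np.inf)[0]  # normalizing constant by quadrature
zs = np.array([0.5, 1.0, 2.0])
numeric = np.array([unnorm_post(z) for z in zs]) / Z
closed = gig_pdf(zs, a + T * beta ** 2, b + S, p - T / 2)
print(np.allclose(numeric, closed))
```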