
# Rice distribution

Author: World Heritage Encyclopedia

### Rice distribution

In the 2D plane, pick a fixed point at distance ν from the origin. Generate a distribution of 2D points centered around that point, where the x and y coordinates are chosen independently from a Gaussian distribution with standard deviation σ. If R is the distance from these points to the origin, then R has a Rice distribution.

| | |
|---|---|
| Parameters | ν ≥ 0 — distance between the reference point and the center of the bivariate distribution; σ ≥ 0 — scale |
| Support | x ∈ [0, +∞) |
| PDF | \frac{x}{\sigma^2}\exp\left(\frac{-(x^2+\nu^2)}{2\sigma^2}\right)I_0\left(\frac{x\nu}{\sigma^2}\right) |
| CDF | 1-Q_1\left(\frac{\nu}{\sigma},\frac{x}{\sigma}\right), where Q_1 is the Marcum Q-function |
| Mean | \sigma \sqrt{\pi/2}\,\,L_{1/2}(-\nu^2/2\sigma^2) |
| Variance | 2\sigma^2+\nu^2-\frac{\pi\sigma^2}{2}L_{1/2}^2\left(\frac{-\nu^2}{2\sigma^2}\right) |
| Skewness | (complicated) |
| Ex. kurtosis | (complicated) |

In probability theory, the Rice distribution or Rician distribution is the probability distribution of the magnitude of a circular bivariate normal random variable with potentially non-zero mean. It was named after Stephen O. Rice.

## Contents

• Characterization
• Properties
  • Moments
  • Differential equation
• Related distributions
• Limiting cases
• Parameter estimation (the Koay inversion technique)
• Applications
• Notes
• References

## Characterization

The probability density function is

f(x\mid\nu,\sigma) = \frac{x}{\sigma^2}\exp\left(\frac{-(x^2+\nu^2)} {2\sigma^2}\right)I_0\left(\frac{x\nu}{\sigma^2}\right),

where I0(z) is the modified Bessel function of the first kind with order zero.
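As a quick sanity check, the density above can be compared against samples drawn as the magnitude of a bivariate normal. This is only a sketch with illustrative parameter values ν = 2, σ = 1; the Bessel series `i0` is hand-rolled rather than taken from a library:

```python
import math
import random

def i0(z):
    # Modified Bessel function of the first kind, order 0, via its power series.
    # Adequate for the moderate arguments used here.
    return sum((z * z / 4) ** k / math.factorial(k) ** 2 for k in range(60))

def rice_pdf(x, nu, sigma):
    s2 = sigma ** 2
    return (x / s2) * math.exp(-(x * x + nu * nu) / (2 * s2)) * i0(x * nu / s2)

# Monte-Carlo cross-check: R = sqrt(X^2 + Y^2) with X ~ N(nu, sigma^2), Y ~ N(0, sigma^2)
# should follow the density above.
random.seed(1)
nu, sigma, n = 2.0, 1.0, 200_000
samples = [math.hypot(random.gauss(nu, sigma), random.gauss(0.0, sigma))
           for _ in range(n)]
mean_mc = sum(samples) / n

# Mean of the density by a simple Riemann sum over [0, 12] (the tail beyond is negligible here).
dx = 0.001
mean_pdf = sum(i * dx * rice_pdf(i * dx, nu, sigma) * dx for i in range(1, 12_000))
```

The empirical mean and the mean computed from the density agree to within Monte-Carlo error.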

The characteristic function is:

\begin{align} &\chi_X(t\mid\nu,\sigma) \\ & \quad = \exp \left( -\frac{\nu^2}{2\sigma^2} \right) \left[ \Psi_2 \left( 1; 1, \frac{1}{2}; \frac{\nu^2}{2\sigma^2}, -\frac{1}{2} \sigma^2 t^2 \right) \right. \\[8pt] & \left. {} \qquad + i \sqrt{2} \sigma t \Psi_2 \left( \frac{3}{2}; 1, \frac{3}{2}; \frac{\nu^2}{2\sigma^2}, -\frac{1}{2} \sigma^2 t^2 \right) \right], \end{align}

where \Psi_2 \left( \alpha; \gamma, \gamma'; x, y \right) is one of Horn's confluent hypergeometric functions with two variables and convergent for all finite values of x and y. It is given by:

\Psi_2 \left( \alpha; \gamma, \gamma'; x, y \right) = \sum_{n=0}^{\infty}\sum_{m=0}^\infty \frac{(\alpha)_{m+n}}{(\gamma)_m(\gamma')_n} \frac{x^m y^n}{m!n!},

where

(x)_n = x(x+1)\cdots(x+n-1) = \frac{\Gamma(x+n)}{\Gamma(x)}

is the rising factorial.
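For positive arguments, the product form and the Gamma-function form of the rising factorial can be checked against each other numerically (a small illustrative sketch):

```python
import math

def rising_factorial(x, n):
    # (x)_n = x (x+1) ... (x+n-1), with the empty product (x)_0 = 1
    result = 1.0
    for k in range(n):
        result *= x + k
    return result

# The product form agrees with Gamma(x+n)/Gamma(x) for positive x.
lhs = rising_factorial(2.5, 4)                   # 2.5 * 3.5 * 4.5 * 5.5
rhs = math.gamma(2.5 + 4) / math.gamma(2.5)
```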

## Properties

### Moments

The first few raw moments are:

\mu_1^{'}= \sigma \sqrt{\pi/2}\,\,L_{1/2}(-\nu^2/2\sigma^2)
\mu_2^{'}= 2\sigma^2+\nu^2\,
\mu_3^{'}= 3\sigma^3\sqrt{\pi/2}\,\,L_{3/2}(-\nu^2/2\sigma^2)
\mu_4^{'}= 8\sigma^4+8\sigma^2\nu^2+\nu^4\,
\mu_5^{'}=15\sigma^5\sqrt{\pi/2}\,\,L_{5/2}(-\nu^2/2\sigma^2)
\mu_6^{'}=48\sigma^6+72\sigma^4\nu^2+18\sigma^2\nu^4+\nu^6\,

and, in general, the raw moments are given by

\mu_k^{'}=\sigma^k2^{k/2}\,\Gamma(1\!+\!k/2)\,L_{k/2}(-\nu^2/2\sigma^2). \,

Here Lq(x) denotes a Laguerre polynomial:

L_q(x)=L_q^{(0)}(x)=M(-q,1,x)=\,_1F_1(-q;1;x)

where M(a,b,z) = _1F_1(a;b;z) is the confluent hypergeometric function of the first kind. When k is even, the raw moments become simple polynomials in σ and ν, as in the examples above.

For the case q = 1/2:

\begin{align} L_{1/2}(x) &=\,_1F_1\left( -\frac{1}{2};1;x\right) \\ &= e^{x/2} \left[\left(1-x\right)I_0\left(\frac{-x}{2}\right) -xI_1\left(\frac{-x}{2}\right) \right]. \end{align}

The second central moment, the variance, is

\mu_2= 2\sigma^2+\nu^2-(\pi\sigma^2/2)\,L^2_{1/2}(-\nu^2/2\sigma^2) .

Note that L^2_{1/2}(\cdot) indicates the square of the Laguerre polynomial L_{1/2}(\cdot), not the generalized Laguerre polynomial L^{(2)}_{1/2}(\cdot).
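These relations can be spot-checked numerically. The sketch below (illustrative values ν = 1.5, σ = 2; hand-rolled Bessel series) evaluates the general moment formula for k = 1 and k = 2, using L_1(x) = 1 − x, and recovers the variance:

```python
import math

def i0(z):
    # Modified Bessel functions of the first kind via their power series
    return sum((z * z / 4) ** k / math.factorial(k) ** 2 for k in range(60))

def i1(z):
    return sum((z / 2) ** (2 * k + 1) / (math.factorial(k) * math.factorial(k + 1))
               for k in range(60))

def laguerre_half(x):
    # L_{1/2}(x) = e^{x/2} [ (1 - x) I0(-x/2) - x I1(-x/2) ]
    return math.exp(x / 2) * ((1 - x) * i0(-x / 2) - x * i1(-x / 2))

nu, sigma = 1.5, 2.0
x = -nu ** 2 / (2 * sigma ** 2)

# k = 1 from the general formula: mu_1' = sigma * sqrt(pi/2) * L_{1/2}(x)
mean = sigma * math.sqrt(math.pi / 2) * laguerre_half(x)

# k = 2 from the general formula, using L_1(x) = 1 - x ...
mu2_general = sigma ** 2 * 2 * math.gamma(2) * (1 - x)
# ... which matches the polynomial form mu_2' = 2 sigma^2 + nu^2
mu2_poly = 2 * sigma ** 2 + nu ** 2

# Variance = mu_2' - (mu_1')^2, i.e. the stated formula with the squared Laguerre term.
variance = mu2_poly - mean ** 2
```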

### Differential equation

The pdf of the Rice distribution is a solution of the following differential equation:

\left\{\begin{array}{l} \sigma^4 x^2 f''(x)+\left(2\sigma^2 x^3-\sigma^4 x\right) f'(x)+f(x) \left(\sigma^4-\nu^2 x^2+x^4\right)=0 \\[10pt] f(1)=\frac{\exp\left(-\frac{\nu^2+1}{2\sigma^2}\right) I_0\left(\frac{\nu}{\sigma^2}\right)}{\sigma^2} \\[10pt] f'(1)=\frac{\exp\left(-\frac{\nu^2+1}{2\sigma^2}\right) \left(\left(\sigma^2-1\right) I_0\left(\frac{\nu}{\sigma^2}\right)+\nu I_1\left(\frac{\nu}{\sigma^2}\right)\right)}{\sigma^4} \end{array}\right\}
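This can be verified numerically: approximating f′ and f″ by central differences, the left-hand side of the equation should evaluate to roughly zero. A sketch with illustrative parameters ν = 2, σ = 1.5:

```python
import math

NU, SIGMA = 2.0, 1.5  # illustrative parameter values

def i0(z):
    # Modified Bessel function of the first kind, order 0 (power series)
    return sum((z * z / 4) ** k / math.factorial(k) ** 2 for k in range(60))

def f(x):
    # Rice pdf with the parameters fixed above
    s2 = SIGMA ** 2
    return (x / s2) * math.exp(-(x * x + NU * NU) / (2 * s2)) * i0(x * NU / s2)

def ode_residual(x, h=1e-4):
    # sigma^4 x^2 f'' + (2 sigma^2 x^3 - sigma^4 x) f' + (sigma^4 - nu^2 x^2 + x^4) f
    fp = (f(x + h) - f(x - h)) / (2 * h)              # central first derivative
    fpp = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)  # central second derivative
    s4 = SIGMA ** 4
    return (s4 * x * x * fpp
            + (2 * SIGMA ** 2 * x ** 3 - s4 * x) * fp
            + (s4 - NU ** 2 * x * x + x ** 4) * f(x))
```

The residual stays at finite-difference noise level across the support, consistent with f solving the equation.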

## Related distributions

• R \sim \mathrm{Rice}\left(\nu,\sigma\right) has a Rice distribution if R = \sqrt{X^2 + Y^2} where X \sim N\left(\nu\cos\theta,\sigma^2\right) and Y \sim N\left(\nu \sin\theta,\sigma^2\right) are statistically independent normal random variables and \theta is any real number.
• Another case where R \sim \mathrm{Rice}\left(\nu,\sigma\right) comes from the following steps:
1. Generate P having a Poisson distribution with parameter (also mean, for a Poisson) \lambda = \frac{\nu^2}{2\sigma^2}.
2. Generate X having a chi-squared distribution with 2P + 2 degrees of freedom.
3. Set R = \sigma\sqrt{X}.
• If R \sim \text{Rice}\left(\nu,1\right) then R^2 has a noncentral chi-squared distribution with two degrees of freedom and noncentrality parameter \nu^2.
• If R \sim \text{Rice}\left(\nu,1\right) then R has a noncentral chi distribution with two degrees of freedom and noncentrality parameter \nu.
• If R \sim \text{Rice}\left(0,\sigma\right) then R \sim \text{Rayleigh}\left(\sigma\right), i.e., for the special case of the Rice distribution given by ν = 0, the distribution becomes the Rayleigh distribution, for which the variance is \mu_2= \frac{4-\pi}{2}\sigma^2.
• If R \sim \text{Rice}\left(0,\sigma\right) then R^2 has an exponential distribution.
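The Poisson/chi-squared construction above can be sketched and compared against the direct two-Gaussian construction. Illustrative values ν = 2, σ = 1; the Knuth multiplication method for the Poisson draw is an implementation choice:

```python
import math
import random

random.seed(7)
NU, SIGMA, N = 2.0, 1.0, 100_000
LAM = NU ** 2 / (2 * SIGMA ** 2)

def poisson(lam):
    # Knuth's multiplication method -- fine for small lambda
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

def rice_via_poisson():
    p = poisson(LAM)
    # chi-squared with 2p + 2 degrees of freedom == Gamma(shape = p + 1, scale = 2)
    x = random.gammavariate(p + 1, 2.0)
    return SIGMA * math.sqrt(x)

def rice_direct():
    return math.hypot(random.gauss(NU, SIGMA), random.gauss(0.0, SIGMA))

a = [rice_via_poisson() for _ in range(N)]
b = [rice_direct() for _ in range(N)]
# both second moments should be near 2 sigma^2 + nu^2 = 6
m2_a = sum(r * r for r in a) / N
m2_b = sum(r * r for r in b) / N
```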

## Limiting cases

For large values of the argument, the Laguerre polynomial becomes

\lim_{x\rightarrow -\infty}L_\nu(x)=\frac{|x|^\nu}{\Gamma(1+\nu)}.

It is seen that as ν becomes large or σ becomes small, the mean approaches ν and the variance approaches σ².
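A numerical illustration of the limits, using the Bessel-function form of L_{1/2} from the Moments section (hand-rolled Bessel series, illustrative values): at ν = 0 the mean reduces to the Rayleigh value σ√(π/2), while at ν/σ = 8 the mean is already within about σ²/(2ν) of ν and the variance is close to σ²:

```python
import math

def i0(z):
    # Modified Bessel functions of the first kind via their power series
    return sum((z * z / 4) ** k / math.factorial(k) ** 2 for k in range(60))

def i1(z):
    return sum((z / 2) ** (2 * k + 1) / (math.factorial(k) * math.factorial(k + 1))
               for k in range(60))

def rice_mean(nu, sigma):
    x = -nu ** 2 / (2 * sigma ** 2)
    lag = math.exp(x / 2) * ((1 - x) * i0(-x / 2) - x * i1(-x / 2))  # L_{1/2}(x)
    return sigma * math.sqrt(math.pi / 2) * lag

def rice_var(nu, sigma):
    # variance = mu_2' - mean^2 = 2 sigma^2 + nu^2 - mean^2
    return 2 * sigma ** 2 + nu ** 2 - rice_mean(nu, sigma) ** 2
```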

## Parameter estimation (the Koay inversion technique)

There are three different methods for estimating the parameters of the Rice distribution: (1) the method of moments, (2) the method of maximum likelihood, and (3) the method of least squares. In the first two methods the interest is in estimating the parameters of the distribution, ν and σ, from a sample of data. This can be done using the method of moments, e.g., the sample mean and the sample standard deviation. The sample mean is an estimate of \mu_1^{'} and the sample standard deviation is an estimate of \mu_2^{1/2}.

The following is an efficient method, known as the "Koay inversion technique", for solving the estimating equations based on the sample mean and the sample standard deviation simultaneously. This inversion technique is also known as the fixed-point formula of SNR. Earlier works on the method of moments usually used a root-finding method to solve the problem, which is not efficient.

First, the ratio of the sample mean to the sample standard deviation is defined as r, i.e., r=\mu^{'}_1/\mu^{1/2}_2. The fixed-point formula of SNR is expressed as

g(\theta) = \sqrt{ \xi{(\theta)} \left[ 1+r^2\right] - 2},

where \theta is the ratio of the parameters, i.e., \theta = \frac{\nu}{\sigma}, and \xi{\left(\theta\right)} is given by:

\xi{\left(\theta\right)} = 2 + \theta^2 - \frac{\pi}{8} \exp{(-\theta^2/2)}\left[ (2+\theta^2) I_0 (\theta^2/4) + \theta^2 I_1(\theta^{2}/4)\right]^2,

where I_0 and I_1 are modified Bessel functions of the first kind.

Note that \xi{\left(\theta\right)} is a scaling factor of \sigma and is related to \mu_{2} by:

\mu_2 = \xi{\left(\theta\right)} \sigma^2.\,

To find the fixed point \theta^{*} of g, an initial solution \theta_{0} is selected that is greater than the lower bound \theta_{\mathrm{lower\ bound}} = 0, which occurs when r = \sqrt{\pi/(4-\pi)} (notice that this is the r=\mu^{'}_1/\mu^{1/2}_2 of a Rayleigh distribution). This provides a starting point for the iteration, which uses functional composition and continues until \left|g^{i}\left(\theta_{0}\right)-\theta_{i-1}\right| is less than some small positive value. Here, g^{i} denotes the composition of the same function g with itself i times. In practice, the final \theta_{n} for some integer n is taken as the fixed point \theta^{*}, i.e., \theta^{*} = g\left(\theta^{*}\right).

Once the fixed point is found, the estimates of \nu and \sigma are found through the scaling function \xi{\left(\theta\right)}, as follows:

\sigma = \frac{\mu^{1/2}_2}{\sqrt{\xi\left(\theta^{*}\right)}},

and

\nu = \sqrt{ \mu^{'\,2}_1 + \left(\xi\left(\theta^{*}\right) - 2\right)\sigma^2 }.

To speed up the iteration even more, one can use Newton's method of root-finding. This particular approach is highly efficient.
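The whole procedure can be sketched as follows — a minimal fixed-point version (plain iteration rather than the Newton acceleration; hand-rolled Bessel series; the starting value θ₀ = 1 is an arbitrary choice above the lower bound):

```python
import math
import random

def i0(z):
    # Modified Bessel functions of the first kind via their power series
    return sum((z * z / 4) ** k / math.factorial(k) ** 2 for k in range(60))

def i1(z):
    return sum((z / 2) ** (2 * k + 1) / (math.factorial(k) * math.factorial(k + 1))
               for k in range(60))

def xi(theta):
    # scaling function: mu_2 = xi(theta) * sigma^2
    t2 = theta ** 2
    b = (2 + t2) * i0(t2 / 4) + t2 * i1(t2 / 4)
    return 2 + t2 - (math.pi / 8) * math.exp(-t2 / 2) * b ** 2

def koay_estimate(mean, sd, tol=1e-12, max_iter=500):
    """Estimate (nu, sigma) from the sample mean and standard deviation."""
    r = mean / sd
    if r <= math.sqrt(math.pi / (4 - math.pi)):
        # at or below the Rayleigh bound: nu is estimated as 0
        return 0.0, sd / math.sqrt(xi(0.0))
    theta = 1.0  # arbitrary starting value above the lower bound
    for _ in range(max_iter):
        new = math.sqrt(max(xi(theta) * (1 + r * r) - 2, 0.0))  # g(theta)
        if abs(new - theta) < tol:
            theta = new
            break
        theta = new
    sigma_hat = sd / math.sqrt(xi(theta))
    nu_hat = math.sqrt(max(mean ** 2 + (xi(theta) - 2) * sigma_hat ** 2, 0.0))
    return nu_hat, sigma_hat

# Demo on simulated data with true nu = 2, sigma = 1
random.seed(3)
data = [math.hypot(random.gauss(2.0, 1.0), random.gauss(0.0, 1.0))
        for _ in range(50_000)]
m = sum(data) / len(data)
sd = math.sqrt(sum((d - m) ** 2 for d in data) / len(data))
nu_hat, sigma_hat = koay_estimate(m, sd)
```

The estimates recover the true parameters up to sampling error, and the returned ratio ν̂/σ̂ is a fixed point of g by construction.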