lMoment.Rd
Estimate the \(r\)'th \(L\)-moment from a random sample.
lMoment(x, r = 1, method = "unbiased",
plot.pos.cons = c(a = 0.35, b = 0), na.rm = FALSE)
x: numeric vector of observations.

r: positive integer specifying the order of the moment.

method: character string specifying what method to use to compute the
\(L\)-moment. The possible values are "unbiased" (method based on the
U-statistic; the default) or "plotting.position" (method based on the
plotting position formula). See the DETAILS section for more information.
plot.pos.cons: numeric vector of length 2 specifying the constants used in
the formula for the plotting positions when method="plotting.position".
The default value is plot.pos.cons=c(a=0.35, b=0). If this vector has a
names attribute with the value c("a","b") or c("b","a"), then the elements
will be matched by name in the formula for computing the plotting positions
(see the sketch following these argument descriptions). Otherwise, the
first element is mapped to the name "a" and the second element to the name
"b". See the DETAILS section for more information. This argument is ignored
if method="unbiased".
na.rm: logical scalar indicating whether to remove missing values from x.
If na.rm=FALSE (the default) and x contains missing values, then a missing
value (NA) is returned. If na.rm=TRUE, missing values are removed from x
prior to computing the \(L\)-moment.
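As a quick illustration of the name-matching behavior of plot.pos.cons, the
following sketch (assuming the package is loaded and using an arbitrary
sample) makes two calls that should return the same value:

x <- rexp(20)
lMoment(x, 2, method = "plotting.position", plot.pos.cons = c(b = 0, a = 0.35))
lMoment(x, 2, method = "plotting.position", plot.pos.cons = c(0.35, 0))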
Definitions: \(L\)-Moments and \(L\)-Moment Ratios
The definition of an \(L\)-moment given by Hosking (1990) is as follows.
Let \(X\) denote a random variable with cdf \(F\), and let \(x(p)\)
denote the \(p\)'th quantile of the distribution. Furthermore, let
$$x_{1:n} \le x_{2:n} \le \ldots \le x_{n:n}$$
denote the order statistics of a random sample of size \(n\) drawn from the
distribution of \(X\). Then the \(r\)'th \(L\)-moment is given by:
$$\lambda_r = \frac{1}{r} \sum^{r-1}_{k=0} (-1)^k {r-1 \choose k} E[X_{r-k:r}]$$
for \(r = 1, 2, \ldots\).
Hosking (1990) shows that the above equation can be rewritten as:
$$\lambda_r = \int^1_0 x(u) P^*_{r-1}(u) du$$
where
$$P^*_r(u) = \sum^r_{k=0} p^*_{r,k} u^k$$
$$p^*_{r,k} = (-1)^{r-k} {r \choose k} {r+k \choose k} = \frac{(-1)^{r-k} (r+k)!}{(k!)^2 (r-k)!}$$
The first four \(L\)-moments are given by:
$$\lambda_1 = E[X]$$
$$\lambda_2 = \frac{1}{2} E[X_{2:2} - X_{1:2}]$$
$$\lambda_3 = \frac{1}{3} E[X_{3:3} - 2X_{2:3} + X_{1:3}]$$
$$\lambda_4 = \frac{1}{4} E[X_{4:4} - 3X_{3:4} + 3X_{2:4} - X_{1:4}]$$
Thus, the first \(L\)-moment is a measure of location, and the second
\(L\)-moment is a measure of scale.
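As a quick numerical check of these definitions, the following base R
sketch (not part of this package) approximates
\(\lambda_2 = \frac{1}{2} E[X_{2:2} - X_{1:2}]\) for the standard
exponential distribution, whose second \(L\)-moment equals 1/2:

# Approximate lambda_2 for Exp(1): average half the absolute difference
# of simulated pairs, since |X1 - X2| = X_{2:2} - X_{1:2}.
set.seed(1)
x1 <- rexp(100000)
x2 <- rexp(100000)
mean(abs(x1 - x2)) / 2
# Should be close to 0.5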
Hosking (1990) defines the \(L\)-moment ratios of \(X\) to be:
$$\tau_r = \frac{\lambda_r}{\lambda_2}$$
for \(r = 3, 4, \ldots\). He shows that for a non-degenerate random
variable with a finite mean, these quantities lie in the interval
\((-1, 1)\). The quantity
$$\tau_3 = \frac{\lambda_3}{\lambda_2}$$
is the \(L\)-moment analog of the coefficient of skewness, and the quantity
$$\tau_4 = \frac{\lambda_4}{\lambda_2}$$
is the \(L\)-moment analog of the coefficient of kurtosis. Hosking (1990)
also defines an \(L\)-moment analog of the coefficient of variation
(denoted the \(L\)-CV) as:
$$\tau = \frac{\lambda_2}{\lambda_1}$$
He shows that for a positive-valued random variable, the \(L\)-CV lies in
the interval \((0, 1)\).
Relationship Between \(L\)-Moments and Probability-Weighted Moments
Hosking (1990) and Hosking and Wallis (1995) show that \(L\)-moments can be
written as linear combinations of probability-weighted moments:
$$\lambda_r = (-1)^{r-1} \sum^{r-1}_{k=0} p^*_{r-1,k} \alpha_k = \sum^{r-1}_{j=0} p^*_{r-1,j} \beta_j$$
where
$$\alpha_k = M(1, 0, k) = \frac{1}{k+1} E[X_{1:k+1}]$$
$$\beta_j = M(1, j, 0) = \frac{1}{j+1} E[X_{j+1:j+1}]$$
See the help file for pwMoment for more information on probability-weighted
moments.
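To make the connection concrete, here is an illustrative base R check of
the special case \(\lambda_2 = 2 \beta_1 - \beta_0\), with the unbiased
estimates of \(\beta_0\) and \(\beta_1\) computed by hand rather than via
pwMoment:

# Unbiased PWM estimates b0 and b1 from a sorted sample
set.seed(1)
x <- sort(rexp(30))
n <- length(x)
b0 <- mean(x)
b1 <- sum(x[2:n] * (1:(n - 1)) / (n - 1)) / n
2 * b1 - b0   # should agree with lMoment(x, 2)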
Estimating \(L\)-Moments
The two commonly used methods for estimating \(L\)-moments are the
“unbiased” method based on U-statistics (Hoeffding, 1948;
Lehmann, 1975, pp. 362-371), and the “plotting-position” method.
Hosking and Wallis (1995) recommend using the unbiased method for almost all
applications.
Unbiased Estimators (method="unbiased")
Using the relationship between \(L\)-moments and probability-weighted moments
explained above, the unbiased estimator of the \(r\)'th \(L\)-moment is based on
unbiased estimators of probability-weighted moments and is given by:
$$l_r = (-1)^{r-1} \sum^{r-1}_{k=0} p^*_{r-1,k} a_k = \sum^{r-1}_{j=0} p^*_{r-1,j} b_j$$
where
$$a_k = \frac{1}{n} \sum^{n-k}_{i=1} x_{i:n} \frac{{n-i \choose k}}{{n-1 \choose k}}$$
$$b_j = \frac{1}{n} \sum^{n}_{i=j+1} x_{i:n} \frac{{i-1 \choose j}}{{n-1 \choose j}}$$
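These formulas translate directly into a few lines of R. The following is a
minimal sketch of the unbiased estimator via the \(b_j\) statistics (the
function name lmomUnbiased is illustrative, not part of this package):

# Unbiased estimate of the r'th L-moment from the b_j statistics
lmomUnbiased <- function(x, r) {
  x <- sort(x)
  n <- length(x)
  # b_j = (1/n) sum_{i=j+1}^{n} x_{i:n} choose(i-1,j) / choose(n-1,j)
  bj <- function(j) {
    i <- (j + 1):n
    sum(x[i] * choose(i - 1, j) / choose(n - 1, j)) / n
  }
  # p*_{r-1,j} = (-1)^(r-1-j) choose(r-1,j) choose(r-1+j,j)
  j <- 0:(r - 1)
  pstar <- (-1)^(r - 1 - j) * choose(r - 1, j) * choose(r - 1 + j, j)
  sum(pstar * vapply(j, bj, numeric(1)))
}
# For a numeric vector x, lmomUnbiased(x, r) should match lMoment(x, r).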
Plotting-Position Estimators (method="plotting.position")
Using the relationship between \(L\)-moments and probability-weighted moments
explained above, the plotting-position estimator of the \(r\)'th \(L\)-moment
is based on the plotting-position estimators of probability-weighted moments and
is given by:
$$\tilde{\lambda}_r = (-1)^{r-1} \sum^{r-1}_{k=0} p^*_{r-1,k} \tilde{\alpha}_k = \sum^{r-1}_{j=0} p^*_{r-1,j} \tilde{\beta}_j$$
where
$$\tilde{\alpha}_k = \frac{1}{n} \sum^n_{i=1} (1 - p_{i:n})^k x_{i:n}$$
$$\tilde{\beta}_j = \frac{1}{n} \sum^{n}_{i=1} p^j_{i:n} x_{i:n}$$
and
$$p_{i:n} = \hat{F}(x_{i:n})$$
denotes the plotting position of the \(i\)'th order statistic in the random
sample of size \(n\), that is, a distribution-free estimate of the cdf of
\(X\) evaluated at the \(i\)'th order statistic. Typically, plotting
positions have the form:
$$p_{i:n} = \frac{i-a}{n+b}$$
where \(b > -a > -1\). For this form of plotting position, the
plotting-position estimators are asymptotically equivalent to their
unbiased estimator counterparts.
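A matching sketch of the plotting-position estimator (again with an
illustrative function name, and the default constants a = 0.35 and b = 0):

# Plotting-position estimate of the r'th L-moment
lmomPlotPos <- function(x, r, a = 0.35, b = 0) {
  x <- sort(x)
  n <- length(x)
  p <- (seq_len(n) - a) / (n + b)   # plotting positions p_{i:n}
  # tilde beta_j = (1/n) sum_i p_{i:n}^j x_{i:n}
  betaj <- function(j) mean(p^j * x)
  j <- 0:(r - 1)
  pstar <- (-1)^(r - 1 - j) * choose(r - 1, j) * choose(r - 1 + j, j)
  sum(pstar * vapply(j, betaj, numeric(1)))
}
# lmomPlotPos(x, r) should match lMoment(x, r, method = "plotting.position").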
Estimating \(L\)-Moment Ratios
\(L\)-moment ratios are estimated by simply replacing the population
\(L\)-moments with the estimated \(L\)-moments. The estimated ratios
based on the unbiased estimators are given by:
$$t_r = \frac{l_r}{l_2}$$
and the estimated ratios based on the plotting-position estimators are given by:
$$\tilde{\tau}_r = \frac{\tilde{\lambda}_r}{\tilde{\lambda}_2}$$
In particular, the \(L\)-moment skew is estimated by:
$$t_3 = \frac{l_3}{l_2}$$
or
$$\tilde{\tau}_3 = \frac{\tilde{\lambda}_3}{\tilde{\lambda}_2}$$
and the \(L\)-moment kurtosis is estimated by:
$$t_4 = \frac{l_4}{l_2}$$
or
$$\tilde{\tau}_4 = \frac{\tilde{\lambda}_4}{\tilde{\lambda}_2}$$
Similarly, the \(L\)-moment coefficient of variation can be estimated using
the unbiased \(L\)-moment estimators:
$$t = \frac{l_2}{l_1}$$
or using the plotting-position \(L\)-moment estimators:
$$\tilde{\tau} = \frac{\tilde{\lambda}_2}{\tilde{\lambda}_1}$$
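Putting these together, the sample \(L\)-moment ratios can be computed with
repeated calls to lMoment. A brief sketch, assuming the package is loaded
(for the standard exponential distribution the population values are
\(\tau_3 = 1/3\), \(\tau_4 = 1/6\), and \(L\)-CV \(= 1/2\)):

set.seed(1)
x <- rexp(100)
t3  <- lMoment(x, 3) / lMoment(x, 2)   # L-skewness
t4  <- lMoment(x, 4) / lMoment(x, 2)   # L-kurtosis
lcv <- lMoment(x, 2) / lMoment(x, 1)   # L-CV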
A numeric scalar: the value of the \(r\)'th \(L\)-moment as defined by Hosking (1990).
Fill, H.D., and J.R. Stedinger. (1995). \(L\) Moment and Probability Plot Correlation Coefficient Goodness-of-Fit Tests for the Gumbel Distribution and Impact of Autocorrelation. Water Resources Research 31(1), 225–229.
Hosking, J.R.M. (1990). L-Moments: Analysis and Estimation of Distributions Using Linear Combinations of Order Statistics. Journal of the Royal Statistical Society, Series B 52(1), 105–124.
Hosking, J.R.M., and J.R. Wallis. (1995). A Comparison of Unbiased and Plotting-Position Estimators of \(L\) Moments. Water Resources Research 31(8), 2019–2025.
Vogel, R.M., and N.M. Fennessey. (1993). \(L\) Moment Diagrams Should Replace Product Moment Diagrams. Water Resources Research 29(6), 1745–1752.
Hosking (1990) introduced the idea of \(L\)-moments, which are expectations of certain linear combinations of order statistics, as the basis of a general theory of describing theoretical probability distributions, computing summary statistics from observed data, estimating distribution parameters and quantiles, and performing hypothesis tests. The theory of \(L\)-moments parallels the theory of conventional moments. \(L\)-moments have several advantages over conventional moments, including:
\(L\)-moments can characterize a wider range of distributions because they always exist as long as the distribution has a finite mean.
\(L\)-moments are estimated by linear combinations of order statistics, so estimators based on \(L\)-moments are more robust to the presence of outliers than estimators based on conventional moments.
Based on the author's and others' experience, \(L\)-moment estimators are less biased and approximate their asymptotic distribution more closely in finite samples than estimators based on conventional moments.
\(L\)-moment estimators are sometimes more efficient (smaller RMSE) than even maximum likelihood estimators for small samples.
Hosking (1990) presents a table with formulas for the \(L\)-moments of common probability distributions. Articles that illustrate the use of \(L\)-moments include Fill and Stedinger (1995), Hosking and Wallis (1995), and Vogel and Fennessey (1993).
Hosking (1990) and Hosking and Wallis (1995) show the relationship between probability-weighted moments and \(L\)-moments.
# Generate 20 observations from a generalized extreme value distribution
# with parameters location=10, scale=2, and shape=.25, then compute the
# first four L-moments.
# (Note: the call to set.seed simply allows you to reproduce this example.)
set.seed(250)
dat <- rgevd(20, location = 10, scale = 2, shape = 0.25)
lMoment(dat)
#[1] 10.59556
lMoment(dat, 2)
#[1] 1.0014
lMoment(dat, 3)
#[1] 0.1681165
lMoment(dat, 4)
#[1] 0.08732692
#----------
# Now compute some L-moments based on the plotting-position estimators:
lMoment(dat, method = "plotting.position")
#[1] 10.59556
lMoment(dat, 2, method = "plotting.position")
#[1] 1.110264
lMoment(dat, 3, method = "plotting.position", plot.pos.cons = c(0.325, 1))
#[1] -0.4430792
#----------
# Clean up
#---------
rm(dat)