elnormAltCensored.Rd
Estimate the mean and coefficient of variation of a lognormal distribution given a sample of data that has been subjected to Type I censoring, and optionally construct a confidence interval for the mean.
elnormAltCensored(x, censored, method = "mle", censoring.side = "left",
ci = FALSE, ci.method = "profile.likelihood", ci.type = "two-sided",
conf.level = 0.95, n.bootstraps = 1000, pivot.statistic = "z", ...)
x. Numeric vector of observations. Missing (NA), undefined (NaN), and infinite (Inf, -Inf) values are allowed but will be removed.
censored. Numeric or logical vector indicating which values of x are censored. This must be the same length as x. If the mode of censored is "logical", TRUE values correspond to elements of x that are censored, and FALSE values correspond to elements of x that are not censored. If the mode of censored is "numeric", it must contain only 1's and 0's; 1 corresponds to TRUE and 0 corresponds to FALSE. Missing (NA) values are allowed but will be removed.
method. Character string specifying the method of estimation.

For singly censored data, the possible values are: "mle" (maximum likelihood; the default), "qmvue" (quasi minimum variance unbiased estimation), "bcmle" (bias-corrected maximum likelihood), "rROS" or "impute.w.qq.reg" (moment estimation based on imputation using quantile-quantile regression; also called robust regression on order statistics and abbreviated rROS), "impute.w.qq.reg.w.cen.level" (moment estimation based on imputation using the qq.reg.w.cen.level method), "impute.w.mle" (moment estimation based on imputation using the mle), and "half.cen.level" (moment estimation based on setting the censored observations to half the censoring level).

For multiply censored data, the possible values are: "mle" (maximum likelihood; the default), "qmvue" (quasi minimum variance unbiased estimation), "bcmle" (bias-corrected maximum likelihood), "rROS" or "impute.w.qq.reg" (moment estimation based on imputation using quantile-quantile regression; also called robust regression on order statistics and abbreviated rROS), and "half.cen.level" (moment estimation based on setting the censored observations to half the censoring level).

See the DETAILS section for more information.
censoring.side. Character string indicating on which side the censoring occurs. The possible values are "left" (the default) and "right".
ci. Logical scalar indicating whether to compute a confidence interval for the mean. The default value is ci=FALSE.
ci.method. Character string indicating what method to use to construct the confidence interval for the mean. The possible values are "profile.likelihood" (profile likelihood; the default), "cox" (Cox's approximation), "delta" (normal approximation based on the delta method), "normal.approx" (normal approximation), and "bootstrap" (based on bootstrapping).

The confidence interval method "profile.likelihood" is valid only when method="mle". The confidence interval methods "delta" and "cox" are valid only when method is one of "mle", "bcmle", or "qmvue". The confidence interval method "normal.approx" is valid only when method is one of "rROS", "impute.w.qq.reg", "impute.w.qq.reg.w.cen.level", "impute.w.mle", or "half.cen.level".

See the DETAILS section for more information. This argument is ignored if ci=FALSE.
ci.type. Character string indicating what kind of confidence interval to compute. The possible values are "two-sided" (the default), "lower", and "upper". This argument is ignored if ci=FALSE.
conf.level. A scalar between 0 and 1 indicating the confidence level of the confidence interval. The default value is conf.level=0.95. This argument is ignored if ci=FALSE.
n.bootstraps. Numeric scalar indicating how many bootstraps to use to construct the confidence interval for the mean when ci.method="bootstrap". This argument is ignored if ci=FALSE and/or ci.method does not equal "bootstrap".
pivot.statistic. Character string indicating which pivot statistic to use in the construction of the confidence interval for the mean when ci.method is equal to "delta", "cox", or "normal.approx" (see the DETAILS section). The possible values are pivot.statistic="z" (the default) and pivot.statistic="t". When pivot.statistic="t" you may supply the argument ci.sample.size (see below). The argument pivot.statistic is ignored if ci=FALSE.
... . Additional arguments to pass to other functions, including the following:
prob.method. Character string indicating what method to use to compute the plotting positions (empirical probabilities) when method is one of "rROS", "impute.w.qq.reg", "impute.w.qq.reg.w.cen.level", or "impute.w.mle". Possible values are "kaplan-meier" (product-limit method of Kaplan and Meier (1958)), "nelson" (hazard plotting method of Nelson (1972)), "michael-schucany" (generalization of the product-limit method due to Michael and Schucany (1986)), and "hirsch-stedinger" (generalization of the product-limit method due to Hirsch and Stedinger (1987)). The default value is prob.method="hirsch-stedinger". The "nelson" method is only available for censoring.side="right". See the DETAILS section and the help file for ppointsCensored for more information.
plot.pos.con. Numeric scalar between 0 and 1 containing the value of the plotting position constant to use when method is one of "rROS", "impute.w.qq.reg", "impute.w.qq.reg.w.cen.level", or "impute.w.mle". The default value is plot.pos.con=0.375. See the DETAILS section and the help file for ppointsCensored for more information.
ci.sample.size. Numeric scalar indicating what sample size to assume to construct the confidence interval for the mean if pivot.statistic="t" and ci.method is equal to "delta", "cox", or "normal.approx". When method equals "mle", "bcmle", or "qmvue", the default value is the expected number of uncensored observations; otherwise it is the observed number of uncensored observations.
lb.impute. Numeric scalar indicating the lower bound for imputed observations when method is one of "rROS", "impute.w.qq.reg", "impute.w.qq.reg.w.cen.level", or "impute.w.mle". Imputed values smaller than this value will be set to this value. The default is lb.impute=-Inf.
ub.impute. Numeric scalar indicating the upper bound for imputed observations when method is one of "rROS", "impute.w.qq.reg", "impute.w.qq.reg.w.cen.level", or "impute.w.mle". Imputed values larger than this value will be set to this value. The default is ub.impute=Inf.
If x or censored contain any missing (NA), undefined (NaN), or infinite (Inf, -Inf) values, they will be removed prior to performing the estimation.
Let \(\underline{x}\) denote a vector of \(N\) observations from a lognormal distribution with parameters mean=\(\theta\) and cv=\(\tau\). Let \(\eta\) denote the standard deviation of this distribution, so that \(\eta = \theta \tau\). Set \(\underline{y} = log(\underline{x})\). Then \(\underline{y}\) is a vector of observations from a normal distribution with parameters mean=\(\mu\) and sd=\(\sigma\). See the help file for LognormalAlt for the relationship between \(\theta, \tau, \eta, \mu\), and \(\sigma\).
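To make the parameterization concrete, here is a small sketch (values chosen purely for illustration) of the mapping between \((\theta, \tau)\) and \((\mu, \sigma)\):

  # Hypothetical values of the mean (theta) and cv (tau), for illustration:
  theta <- 10
  tau <- 2
  mu <- log(theta / sqrt(tau^2 + 1))   # log-scale mean
  sigma <- sqrt(log(tau^2 + 1))        # log-scale standard deviation
  # Back-transforming recovers theta and tau:
  c(mean = exp(mu + sigma^2 / 2), cv = sqrt(exp(sigma^2) - 1))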
Assume \(n\) (\(0 < n < N\)) of the \(N\) observations are known and \(c\) (\(c=N-n\)) of the observations are all censored below (left-censored) or all censored above (right-censored) at \(k\) fixed censoring levels $$T_1, T_2, \ldots, T_k; \; k \ge 1 \;\;\;\;\;\; (1)$$ For the case when \(k \ge 2\), the data are said to be Type I multiply censored. For the case when \(k=1\), set \(T = T_1\). If the data are left-censored and all \(n\) known observations are greater than or equal to \(T\), or if the data are right-censored and all \(n\) known observations are less than or equal to \(T\), then the data are said to be Type I singly censored (Nelson, 1982, p.7), otherwise they are considered to be Type I multiply censored.
Let \(c_j\) denote the number of observations censored below or above censoring level \(T_j\) for \(j = 1, 2, \ldots, k\), so that $$\sum_{i=1}^k c_j = c \;\;\;\;\;\; (2)$$ Let \(x_{(1)}, x_{(2)}, \ldots, x_{(N)}\) denote the “ordered” observations, where now “observation” means either the actual observation (for uncensored observations) or the censoring level (for censored observations). For right-censored data, if a censored observation has the same value as an uncensored one, the uncensored observation should be placed first. For left-censored data, if a censored observation has the same value as an uncensored one, the censored observation should be placed first.
Note that in this case the quantity \(x_{(i)}\) does not necessarily represent the \(i\)'th “largest” observation from the (unknown) complete sample.
Finally, let \(\Omega\) (omega) denote the set of \(n\) subscripts in the
“ordered” sample that correspond to uncensored observations.
ESTIMATION
This section explains how each of the estimators of mean=\(\theta\) and cv=\(\tau\) is computed. The approach is to first compute estimates of \(\theta\) and \(\eta^2\) (the mean and variance of the lognormal distribution), say \(\hat{\theta}\) and \(\hat{\eta}^2\), and then compute the estimate of the cv \(\tau\) as \(\hat{\tau} = \hat{\eta}/\hat{\theta}\).
Maximum Likelihood Estimation (method="mle"
)
The maximum likelihood estimators of \(\theta\), \(\tau\), and \(\eta\) are
computed as:
$$\hat{\theta}_{mle} = exp(\hat{\mu}_{mle} + \frac{\hat{\sigma}^2_{mle}}{2}) \;\;\;\;\;\; (3)$$
$$\hat{\tau}_{mle} = [exp(\hat{\sigma}^2_{mle}) - 1]^{1/2} \;\;\;\;\;\; (4)$$
$$\hat{\eta}_{mle} = \hat{\theta}_{mle} \; \hat{\tau}_{mle} \;\;\;\;\;\; (5)$$
where \(\hat{\mu}_{mle}\) and \(\hat{\sigma}_{mle}\) denote the maximum likelihood estimators of \(\mu\) and \(\sigma\). See the help file for enormCensored for information on how \(\hat{\mu}_{mle}\) and \(\hat{\sigma}_{mle}\) are computed.
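As a minimal sketch of equations (3)-(5), assuming the EnvStats package is loaded and using the manganese data from the EXAMPLES section below (elnormAltCensored with method="mle" performs the equivalent computation internally):

  # Estimate mu and sigma on the log scale with enormCensored:
  fit <- with(EPA.09.Ex.15.1.manganese.df,
    enormCensored(log(Manganese.ppb), Censored))
  mu.hat <- fit$parameters["mean"]
  sigma.hat <- fit$parameters["sd"]
  theta.hat <- exp(mu.hat + sigma.hat^2 / 2)   # equation (3)
  tau.hat <- sqrt(exp(sigma.hat^2) - 1)        # equation (4)
  eta.hat <- theta.hat * tau.hat               # equation (5)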
Quasi Minimum Variance Unbiased Estimation Based on the MLE's (method="qmvue")
The maximum likelihood estimators of \(\theta\) and \(\eta^2\) are biased. Even for complete (uncensored) samples these estimators are biased (see equation (12) in the help file for elnormAlt). The bias tends to 0 as the sample size increases, but it can be considerable for small sample sizes. (Cohn et al., 1989, demonstrate the bias for complete data sets.)
For the case of complete samples, the minimum variance unbiased estimators (mvue's)
of \(\theta\) and \(\eta^2\) were derived by Finney (1941) and are discussed in
Gilbert (1987, pp.164-167) and Cohn et al. (1989). These estimators are computed as:
$$\hat{\theta}_{mvue} = e^{\bar{y}} g_{n-1}(\frac{s^2}{2}) \;\;\;\;\;\; (6)$$
$$\hat{\eta}^2_{mvue} = e^{2 \bar{y}} \{g_{n-1}(2s^2) - g_{n-1}[\frac{(n-2)s^2}{n-1}]\} \;\;\;\;\;\; (7)$$
where
$$\bar{y} = \frac{1}{n} \sum_{i=1}^n y_i \;\;\;\;\;\; (8)$$
$$s^2 = \frac{1}{n-1} \sum_{i=1}^n (y_i - \bar{y})^2 \;\;\;\;\;\; (9)$$
$$g_m(z) = \sum_{i=0}^\infty \frac{m^i (m+2i)}{m(m+2) \cdots (m+2i)} (\frac{m}{m+1})^i (\frac{z^i}{i!}) \;\;\;\;\;\; (10)$$
(see the help file for elnormAlt).
For Type I censored samples, the quasi minimum variance unbiased estimators (qmvue's) of \(\theta\) and \(\eta^2\) are computed using equations (6) and (7) and estimating \(\mu\) and \(\sigma\) with their mle's (see elnormCensored).
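As a purely illustrative sketch (not the package's internal code), the series in equation (10) can be truncated and substituted into equation (6); g.fcn is a hypothetical helper written for this sketch, and taking m = n - 1 with n the number of uncensored observations is an assumption made here:

  # Truncated version of the series g_m(z) in equation (10):
  g.fcn <- function(m, z, n.terms = 50) {
    i <- 1:n.terms
    terms <- m^i * (m + 2 * i) / (m * cumprod(m + 2 * i)) *
      (m / (m + 1))^i * z^i / factorial(i)
    1 + sum(terms)   # the i = 0 term of the series equals 1
  }
  # Quasi-mvue of the mean (equation (6)), substituting the mle's
  # mu.hat and sigma.hat from the sketch above:
  n <- with(EPA.09.Ex.15.1.manganese.df, sum(!Censored))
  theta.qmvue <- exp(mu.hat) * g.fcn(n - 1, sigma.hat^2 / 2)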
For singly censored data, this qmvue approach is apparently the LM method of Gilliom and Helsel (1986, p.137) (it is not clear from their description on page 137 whether their LM method is the straight method="mle" described above or the method="qmvue" described here). This method was also used by Newman et al. (1989, p.915, equations 10-11).

For multiply censored data, this is apparently the MM method of Helsel and Cohn (1988, p.1998). (It is not clear from their description on page 1998 and the description in Gilliom and Helsel, 1986, page 137, whether Helsel and Cohn's (1988) MM method is the straight method="mle" described above or the method="qmvue" described here.)
Bias-Corrected Maximum Likelihood Estimation (method="bcmle")
This method was derived by El-Shaarawi (1989) and can be applied to complete or
censored data sets. For complete data, the exact relative bias of the mle of
the mean \(\theta\) is given as:
$$B_{mle} = \frac{E[\hat{\theta}_{mle}]}{\theta} = exp[\frac{-(n-1)\sigma^2}{2n}] (1 - \frac{\sigma^2}{n})^{-(n-1)/2} \;\;\;\;\;\; (11)$$
(see equation (12) in the help file for elnormAlt).
For the case of complete or censored data, El-Shaarawi (1989) proposed the
following “bias-corrected” maximum likelihood estimator:
$$\hat{\theta}_{bcmle} = \frac{\hat{\theta}_{mle}}{\hat{B}_{mle}} \;\;\;\;\;\; (12)$$
where
$$\hat{B}_{mle} = exp[\frac{1}{2}(\hat{V}_{11} + 2\hat{\sigma}_{mle} \hat{V}_{12} + \hat{\sigma}^2_{mle} \hat{V}_{22})] \;\;\;\;\;\; (13)$$
and \(V\) denotes the asymptotic variance-covariance matrix of the mle's of \(\mu\) and \(\sigma\), which is based on the observed information matrix, formulas for which are given in Cohen (1991). El-Shaarawi (1989) does not propose a bias-corrected estimator of the variance \(\eta^2\), so the mle of \(\eta\) is computed when method="bcmle".
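A sketch of equations (12)-(13); V.hat (the estimated variance-covariance matrix of the mle's of \(\mu\) and \(\sigma\)) is assumed to be available and is treated as a given input here, along with mu.hat, sigma.hat, and theta.hat from the MLE sketch above:

  # Estimated relative bias of the mle of the mean (equation (13)):
  B.hat <- exp(0.5 * (V.hat[1, 1] + 2 * sigma.hat * V.hat[1, 2] +
                      sigma.hat^2 * V.hat[2, 2]))
  # Bias-corrected mle of the mean (equation (12)):
  theta.bcmle <- theta.hat / B.hat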
Robust Regression on Order Statistics (method="rROS"
) or
Imputation Using Quantile-Quantile Regression (method ="impute.w.qq.reg"
)
This is the robust Regression on Order Statistics (rROS) method discussed in USEPA (2009)
and Helsel (2012). This method involves using quantile-quantile regression on the
log-transformed observations to fit a regression line (and thus initially estimate the mean
\(\mu\) and standard deviation \(\sigma\) in log-space), imputing the
log-transformed values of the \(c\) censored observations by predicting them
from the regression equation, transforming the log-scale imputed values back to
the original scale, and then computing the method of moments estimates of the
mean and standard deviation based on the observed and imputed values.
The steps are:

1. Estimate \(\mu\) and \(\sigma\) by computing the least-squares estimates in the following model: $$y_{(i)} = \mu + \sigma \Phi^{-1}(p_i) + \epsilon_i, \; i \in \Omega \;\;\;\;\;\; (14)$$ where \(p_i\) denotes the plotting position associated with the \(i\)'th largest value, \(a\) denotes the plotting position constant (\(0 \le a \le 1\); the default value is 0.375), \(\Phi\) denotes the cumulative distribution function (cdf) of the standard normal distribution, and \(\Omega\) denotes the set of \(n\) subscripts associated with the uncensored observations in the ordered sample. The plotting positions are computed by calling the function ppointsCensored.

2. Compute the log-scale imputed values as: $$\hat{y}_{(i)} = \hat{\mu}_{qqreg} + \hat{\sigma}_{qqreg} \Phi^{-1}(p_i), \; i \not \in \Omega \;\;\;\;\;\; (15)$$

3. Retransform the log-scale imputed values: $$\hat{x}_{(i)} = exp[\hat{y}_{(i)}], \; i \not \in \Omega \;\;\;\;\;\; (16)$$

4. Compute the usual method of moments estimates of the mean and variance: $$\hat{\theta} = \frac{1}{N} [\sum_{i \not \in \Omega} \hat{x}_{(i)} + \sum_{i \in \Omega} x_{(i)}] \;\;\;\;\;\; (17)$$ $$\hat{\eta}^2 = \frac{1}{N-1} [\sum_{i \not \in \Omega} (\hat{x}_{(i)} - \hat{\theta})^2 + \sum_{i \in \Omega} (x_{(i)} - \hat{\theta})^2] \;\;\;\;\;\; (18)$$ Note that the estimate of variance is actually the usual unbiased one (not the method of moments one) in the case of complete data. A short worked sketch of these steps appears after this list.
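Here is a hedged sketch of these four steps using the manganese data from the EXAMPLES section; the component names of the ppointsCensored result (Order.Statistics, Cumulative.Probabilities, Censored) are assumed here, and elnormAltCensored with method="rROS" carries out the equivalent steps internally:

  pp <- with(EPA.09.Ex.15.1.manganese.df,
    ppointsCensored(Manganese.ppb, Censored,
      prob.method = "hirsch-stedinger", plot.pos.con = 0.375))
  p <- pp$Cumulative.Probabilities   # plotting positions
  y <- log(pp$Order.Statistics)      # log-scale "ordered" observations
  cen <- pp$Censored
  # Step 1: least-squares fit of equation (14) to the uncensored values
  fit <- lm(y ~ qnorm(p), subset = !cen)
  # Step 2: impute log-scale values for censored observations (equation (15))
  y.imp <- coef(fit)[1] + coef(fit)[2] * qnorm(p[cen])
  # Step 3: retransform (equation (16)); Step 4: moment estimates (17)-(18)
  x.all <- c(exp(y.imp), exp(y[!cen]))
  c(mean = mean(x.all), cv = sd(x.all) / mean(x.all))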
For singly censored data, this method is discussed by Hashimoto and Trussell (1983), Gilliom and Helsel (1986), and El-Shaarawi (1989), and is referred to as the LR (Log-Regression) or Log-Probability Method.
For multiply censored data, this is the MR method of Helsel and Cohn (1988, p.1998). They used it with the probability method of Hirsch and Stedinger (1987) and Weibull plotting positions (i.e., prob.method="hirsch-stedinger" and plot.pos.con=0).
The argument plot.pos.con (see the entry for ... in the ARGUMENTS section above) determines the value of the plotting positions computed in equations (14) and (15) when prob.method equals "hirsch-stedinger" or "michael-schucany". The default value is plot.pos.con=0.375. See the help file for ppointsCensored for more information.
The arguments lb.impute and ub.impute (see the entry for ... in the ARGUMENTS section above) determine the lower and upper bounds for the imputed values. Imputed values smaller than lb.impute are set to this value. Imputed values larger than ub.impute are set to this value. The default values are lb.impute=0 and ub.impute=Inf.
Imputation Using Quantile-Quantile Regression Including the Censoring Level (method="impute.w.qq.reg.w.cen.level")

This method is only available for singly censored data. It was proposed by El-Shaarawi (1989), who denoted it the Modified LR Method. It is exactly the same method as imputation using quantile-quantile regression (method="impute.w.qq.reg"), except that the quantile-quantile regression includes the censoring level. For left singly censored data, the modification involves adding the point \([\Phi^{-1}(p_c), T]\) to the plot before fitting the least-squares line. For right singly censored data, the point \([\Phi^{-1}(p_{n+1}), T]\) is added to the plot before fitting the least-squares line.
Imputation Using Maximum Likelihood (method="impute.w.mle")

This method is only available for singly censored data. It is exactly the same method as robust regression on order statistics (i.e., the same as using method="rROS" or method="impute.w.qq.reg"), except that the maximum likelihood method (method="mle") is used to compute the initial estimates of the mean and standard deviation. In the context of lognormal data, this method is discussed by El-Shaarawi (1989), who denoted it the Modified Maximum Likelihood Method.
Setting Censored Observations to Half the Censoring Level (method="half.cen.level")

This method is applicable only to left censored data that are bounded below by 0. It involves simply replacing all the censored observations with half their detection limit and then computing the usual moment estimators of the mean and variance. That is, all censored observations are imputed to be half the detection limit, and then equations (17) and (18) are used to estimate the mean and variance.

This method is included only for comparison with other methods. Setting left-censored observations to half the censoring level is not recommended. In particular, El-Shaarawi and Esterby (1992) show that these estimators are biased and inconsistent (i.e., the bias remains even as the sample size increases).
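For illustration only (recall that this substitution is not recommended), a sketch using the manganese data from the EXAMPLES section below:

  # Replace each censored observation by half its censoring level, then
  # compute the ordinary moment estimates (equations (17) and (18)):
  x <- with(EPA.09.Ex.15.1.manganese.df, {
    x <- Manganese.ppb
    x[Censored] <- x[Censored] / 2
    x
  })
  c(mean = mean(x), cv = sd(x) / mean(x))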
CONFIDENCE INTERVALS
This section explains how confidence intervals for the mean \(\theta\) are
computed.
Likelihood Profile (ci.method="profile.likelihood")
This method was proposed by Cox (1970, p.88), and Venzon and Moolgavkar (1988)
introduced an efficient method of computation. This method is also discussed by
Stryhn and Christensen (2003) and Royston (2007).
The idea behind this method is to invert the likelihood-ratio test to obtain a
confidence interval for the mean \(\theta\) while treating the coefficient of
variation \(\tau\) as a nuisance parameter.
For Type I left censored data, the likelihood function is given by: $$L(\theta, \tau | \underline{x}) = {N \choose c_1 c_2 \ldots c_k n} \prod_{j=1}^k [F(T_j)]^{c_j} \prod_{i \in \Omega} f[x_{(i)}] \;\;\;\;\;\; (19)$$ where \(f\) and \(F\) denote the probability density function (pdf) and cumulative distribution function (cdf) of the population. That is, $$f(t) = \phi(\frac{t-\mu}{\sigma}) \;\;\;\;\;\; (20)$$ $$F(t) = \Phi(\frac{t-\mu}{\sigma}) \;\;\;\;\;\; (21)$$ where $$\mu = log(\frac{\theta}{\sqrt{\tau^2 + 1}}) \;\;\;\;\;\; (22)$$ $$\sigma = [log(\tau^2 + 1)]^{1/2} \;\;\;\;\;\; (23)$$ and \(\phi\) and \(\Phi\) denote the pdf and cdf of the standard normal distribution, respectively (Cohen, 1963; 1991, pp.6, 50). For left singly censored data, equation (19) simplifies to: $$L(\theta, \tau | \underline{x}) = {N \choose c} [F(T)]^{c} \prod_{i = c+1}^N f[x_{(i)}] \;\;\;\;\;\; (24)$$
Similarly, for Type I right censored data, the likelihood function is given by: $$L(\theta, \tau | \underline{x}) = {N \choose c_1 c_2 \ldots c_k n} \prod_{j=1}^k [1 - F(T_j)]^{c_j} \prod_{i \in \Omega} f[x_{(i)}] \;\;\;\;\;\; (25)$$ and for right singly censored data this simplifies to: $$L(\theta, \tau | \underline{x}) = {N \choose c} [1 - F(T)]^{c} \prod_{i = 1}^n f[x_{(i)}] \;\;\;\;\;\; (26)$$
Following Stryhn and Christensen (2003), denote the maximum likelihood estimates of the mean and coefficient of variation by \((\theta^*, \tau^*)\). The likelihood ratio test statistic (\(G^2\)) of the hypothesis \(H_0: \theta = \theta_0\) (where \(\theta_0\) is a fixed value) equals the drop in \(2 log(L)\) between the “full” model and the reduced model with \(\theta\) fixed at \(\theta_0\), i.e., $$G^2 = 2 \{log[L(\theta^*, \tau^*)] - log[L(\theta_0, \tau_0^*)]\} \;\;\;\;\;\; (27)$$ where \(\tau_0^*\) is the maximum likelihood estimate of \(\tau\) for the reduced model (i.e., when \(\theta = \theta_0\)). Under the null hypothesis, the test statistic \(G^2\) follows a chi-squared distribution with 1 degree of freedom.
Alternatively, we may
express the test statistic in terms of the profile likelihood function \(L_1\)
for the mean \(\theta\), which is obtained from the usual likelihood function by
maximizing over the parameter \(\tau\), i.e.,
$$L_1(\theta) = max_{\tau} L(\theta, \tau) \;\;\;\;\;\; (28)$$
Then we have
$$G^2 = 2 \{log[L_1(\theta^*)] - log[L_1(\theta_0)]\} \;\;\;\;\;\; (29)$$
A two-sided \((1-\alpha)100\%\) confidence interval for the mean \(\theta\) consists of all values of \(\theta_0\) for which the test is not significant at level \(\alpha\):
$$\theta_0: G^2 \le \chi^2_{1, 1-\alpha} \;\;\;\;\;\; (30)$$
where \(\chi^2_{\nu, p}\) denotes the \(p\)'th quantile of the
chi-squared distribution with \(\nu\) degrees of freedom.
One-sided lower and one-sided upper confidence intervals are computed in a similar
fashion, except that the quantity \(1-\alpha\) in Equation (30) is replaced with
\(1-2\alpha\).
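A conceptual sketch of the profiling step in equation (28), written for left singly censored data (equation (24)) with a single censoring level T; loglik.fcn and profile.loglik are hypothetical helpers written for this illustration, not the package's internal code:

  # Log-likelihood of equation (24), up to the constant binomial term:
  loglik.fcn <- function(theta, tau, x, censored, T) {
    mu <- log(theta / sqrt(tau^2 + 1))   # equation (22)
    sigma <- sqrt(log(tau^2 + 1))        # equation (23)
    sum(dlnorm(x[!censored], mu, sigma, log = TRUE)) +
      sum(censored) * plnorm(T, mu, sigma, log.p = TRUE)
  }
  # Profile likelihood for theta (equation (28)): maximize over tau
  profile.loglik <- function(theta, x, censored, T) {
    # search bounds for tau chosen arbitrarily for this sketch
    optimize(function(tau) loglik.fcn(theta, tau, x, censored, T),
      interval = c(0.01, 10), maximum = TRUE)$objective
  }
  # The confidence limits are then the two solutions theta.0 of
  # 2 * (profile.loglik(theta.star, ...) - profile.loglik(theta.0, ...))
  #   == qchisq(conf.level, df = 1)      (equation (30)), e.g. via uniroot().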
Direct Normal Approximations (ci.method="delta"
or ci.method="normal.approx"
)
An approximate \((1-\alpha)100\%\) confidence interval for \(\theta\) can be
constructed assuming the distribution of the estimator of \(\theta\) is
approximately normally distributed. That is, a two-sided \((1-\alpha)100\%\)
confidence interval for \(\theta\) is constructed as:
$$[\hat{\theta} - t_{1-\alpha/2, m-1}\hat{\sigma}_{\hat{\theta}}, \; \hat{\theta} + t_{1-\alpha/2, m-1}\hat{\sigma}_{\hat{\theta}}] \;\;\;\;\;\; (31)$$
where \(\hat{\theta}\) denotes the estimate of \(\theta\),
\(\hat{\sigma}_{\hat{\theta}}\) denotes the estimated asymptotic standard
deviation of the estimator of \(\theta\), \(m\) denotes the assumed sample
size for the confidence interval, and \(t_{p,\nu}\) denotes the \(p\)'th
quantile of Student's t-distribution with \(\nu\)
degrees of freedom. One-sided confidence intervals are computed in a
similar fashion.
The argument ci.sample.size determines the value of \(m\) (see the entry for ... in the ARGUMENTS section above). When method equals "mle", "qmvue", or "bcmle" and the data are singly censored, the default value is the expected number of uncensored observations; otherwise it is \(n\), the observed number of uncensored observations. This is simply an ad-hoc method of constructing confidence intervals and is not based on any published theoretical results.
When pivot.statistic="z"
, the \(p\)'th quantile from the
standard normal distribution is used in place of the
\(p\)'th quantile from Student's t-distribution.
Direct Normal Approximation Based on the Delta Method (ci.method="delta")
This method is usually applied with the maximum likelihood estimators (method="mle"). It should also work approximately for the quasi minimum variance unbiased estimators (method="qmvue") and the bias-corrected maximum likelihood estimators (method="bcmle").
When method="mle"
, the variance of the mle of \(\theta\) can be estimated
based on the variance-covariance matrix of the mle's of \(\mu\) and \(\sigma\)
(denoted \(V\)), and the delta method:
$$\hat{\sigma}^2_{\hat{\theta}} = (\frac{\partial \theta}{\partial \underline{\lambda}})^{'}_{\hat{\underline{\lambda}}} \hat{V} (\frac{\partial \theta}{\partial \underline{\lambda}})_{\hat{\underline{\lambda}}} \;\;\;\;\;\; (32)$$
where
$$\underline{\lambda}' = (\mu, \sigma) \;\;\;\;\;\; (33)$$
$$\frac{\partial \theta}{\partial \mu} = exp(\mu + \frac{\sigma^2}{2}) \;\;\;\;\;\; (34)$$
$$\frac{\partial \theta}{\partial \sigma} = \sigma exp(\mu + \frac{\sigma^2}{2}) \;\;\;\;\;\; (35)$$
(Shumway et al., 1989). The variance-covariance matrix \(V\) of the mle's of
\(\mu\) and \(\sigma\) is estimated based on the inverse of the observed Fisher
Information matrix, formulas for which are given in Cohen (1991).
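In code, equation (32) reduces to a quadratic form; a sketch assuming mu.hat, sigma.hat, and the estimated covariance matrix V.hat are available (as in the sketches above):

  # Gradient of theta with respect to (mu, sigma) (equations (34)-(35)):
  grad <- c(exp(mu.hat + sigma.hat^2 / 2),
            sigma.hat * exp(mu.hat + sigma.hat^2 / 2))
  # Delta-method variance of the mle of theta (equation (32)):
  var.theta.hat <- drop(t(grad) %*% V.hat %*% grad)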
Direct Normal Approximation Based on the Moment Estimators (ci.method="normal.approx")
This method is valid only for the moment estimators based on imputed values (i.e., method="impute.w.qq.reg" or method="half.cen.level"). For these cases, the standard deviation of the estimated mean is assumed to be approximated by
$$\hat{\sigma}_{\hat{\theta}} = \frac{\hat{\eta}}{\sqrt{m}} \;\;\;\;\;\; (36)$$
where, as already noted, \(m\) denotes the assumed sample size.
This is simply an ad-hoc method of constructing confidence intervals and is not
based on any published theoretical results.
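A sketch of the resulting interval (equations (31) and (36)), assuming theta.hat and eta.hat here are the moment estimates from one of the imputation methods, m is the assumed sample size, and a 95% confidence level:

  alpha <- 1 - 0.95
  sd.theta.hat <- eta.hat / sqrt(m)   # equation (36)
  # Two-sided interval with the t pivot (equation (31)):
  theta.hat + c(-1, 1) * qt(1 - alpha / 2, df = m - 1) * sd.theta.hat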
Cox's Method (ci.method="cox"
)
This method may be applied with the maximum likelihood estimators (method="mle"), the quasi minimum variance unbiased estimators (method="qmvue"), and the bias-corrected maximum likelihood estimators (method="bcmle").
This method was proposed by El-Shaarawi (1989) and is an extension of the method derived by Cox and presented in Land (1972) for the case of complete data (see the explanation of ci.method="cox" in the help file for elnormAlt). The idea is to construct an approximate \((1-\alpha)100\%\) confidence interval for the quantity
$$\beta = \mu + \frac{\sigma^2}{2} \;\;\;\;\;\; (37)$$
assuming the estimate of \(\beta\)
$$\hat{\beta} = \hat{\mu} + \frac{\hat{\sigma}^2}{2} \;\;\;\;\;\; (38)$$
is approximately normally distributed, and then exponentiate the confidence limits.
That is, a two-sided \((1-\alpha)100\%\) confidence interval for \(\theta\)
is constructed as:
$$[ exp(\hat{\beta} - h), \; exp(\hat{\beta} + h) ]\;\;\;\;\;\; (39)$$
where
$$h = t_{1-\alpha/2, m-1}\hat{\sigma}_{\hat{\beta}} \;\;\;\;\;\; (40)$$
and \(\hat{\sigma}_{\hat{\beta}}\) denotes the estimated asymptotic standard deviation of the estimator of \(\beta\), \(m\) denotes the assumed sample size for the confidence interval, and \(t_{p,\nu}\) denotes the \(p\)'th quantile of Student's t-distribution with \(\nu\) degrees of freedom.
El-Shaarawi (1989) shows that the standard deviation of the mle of \(\beta\) can be estimated by: $$\hat{\sigma}_{\hat{\beta}} = \sqrt{ \hat{V}_{11} + 2 \hat{\sigma} \hat{V}_{12} + \hat{\sigma}^2 \hat{V}_{22} } \;\;\;\;\;\; (41)$$ where \(V\) denotes the variance-covariance matrix of the mle's of \(\mu\) and \(\sigma\) and is estimated based on the inverse of the Fisher Information matrix.
One-sided confidence intervals are computed in a similar fashion.
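A sketch of equations (38)-(41), again assuming mu.hat, sigma.hat, and V.hat from the sketches above, an assumed sample size m, and a 95% confidence level:

  alpha <- 1 - 0.95
  beta.hat <- mu.hat + sigma.hat^2 / 2               # equation (38)
  sd.beta.hat <- sqrt(V.hat[1, 1] + 2 * sigma.hat * V.hat[1, 2] +
                      sigma.hat^2 * V.hat[2, 2])     # equation (41)
  h <- qt(1 - alpha / 2, df = m - 1) * sd.beta.hat   # equation (40)
  exp(beta.hat + c(-h, h))                           # equation (39)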
Bootstrap and Bias-Corrected Bootstrap Approximation (ci.method="bootstrap")
The bootstrap is a nonparametric method of estimating the distribution
(and associated distribution parameters and quantiles) of a sample statistic,
regardless of the distribution of the population from which the sample was drawn.
The bootstrap was introduced by Efron (1979) and a general reference is
Efron and Tibshirani (1993).
In the context of deriving an approximate \((1-\alpha)100\%\) confidence interval for the population mean \(\theta\), the bootstrap can be broken down into the following steps:
1. Create a bootstrap sample by taking a random sample of size \(N\) from the observations in \(\underline{x}\), where sampling is done with replacement. Note that because sampling is done with replacement, the same element of \(\underline{x}\) can appear more than once in the bootstrap sample. Thus, the bootstrap sample will usually not look exactly like the original sample (e.g., the number of censored observations in the bootstrap sample will often differ from the number of censored observations in the original sample).
2. Estimate \(\theta\) based on the bootstrap sample created in Step 1, using the same method that was used to estimate \(\theta\) using the original observations in \(\underline{x}\). Because the bootstrap sample usually does not match the original sample, the estimate of \(\theta\) based on the bootstrap sample will usually differ from the original estimate based on \(\underline{x}\).
3. Repeat Steps 1 and 2 \(B\) times, where \(B\) is some large number. The number of bootstraps \(B\) is determined by the argument n.bootstraps (see the section ARGUMENTS above). The default value of n.bootstraps is 1000.

4. Use the \(B\) estimated values of \(\theta\) to compute the empirical cumulative distribution function of this estimator of \(\theta\) (see ecdfPlot), and then create a confidence interval for \(\theta\) based on this estimated cdf.
The two-sided percentile interval (Efron and Tibshirani, 1993, p.170) is computed as:
$$[\hat{G}^{-1}(\frac{\alpha}{2}), \; \hat{G}^{-1}(1-\frac{\alpha}{2})] \;\;\;\;\;\; (42)$$
where \(\hat{G}(t)\) denotes the empirical cdf evaluated at \(t\) and thus
\(\hat{G}^{-1}(p)\) denotes the \(p\)'th empirical quantile, that is,
the \(p\)'th quantile associated with the empirical cdf. Similarly, a one-sided lower
confidence interval is computed as:
$$[\hat{G}^{-1}(\alpha), \; \infty] \;\;\;\;\;\; (43)$$
and a one-sided upper confidence interval is computed as:
$$[0, \; \hat{G}^{-1}(1-\alpha)] \;\;\;\;\;\; (44)$$
The function elnormAltCensored calls the R function quantile to compute the empirical quantiles used in Equations (42)-(44).
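As a hedged illustration of the percentile interval in equation (42) (the function's internal bootstrap also computes the bias-corrected and accelerated interval described next), using the manganese data from the EXAMPLES section:

  set.seed(47)   # arbitrary seed, for reproducibility of the sketch
  boot.means <- with(EPA.09.Ex.15.1.manganese.df, {
    N <- length(Manganese.ppb)
    replicate(1000, {
      i <- sample.int(N, replace = TRUE)   # resample with replacement
      elnormAltCensored(Manganese.ppb[i], Censored[i])$parameters["mean"]
    })
  })
  quantile(boot.means, c(0.025, 0.975))   # two-sided 95% interval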
The percentile method bootstrap confidence interval is only first-order accurate (Efron and Tibshirani, 1993, pp.187-188), meaning that the probability that the confidence interval will contain the true value of \(\theta\) can be off by \(k/\sqrt{N}\), where \(k\) is some constant. Efron and Tibshirani (1993, pp.184-188) proposed a bias-corrected and accelerated interval that is second-order accurate, meaning that the probability that the confidence interval will contain the true value of \(\theta\) may be off by \(k/N\) instead of \(k/\sqrt{N}\). The two-sided bias-corrected and accelerated confidence interval is computed as: $$[\hat{G}^{-1}(\alpha_1), \; \hat{G}^{-1}(\alpha_2)] \;\;\;\;\;\; (45)$$ where $$\alpha_1 = \Phi[\hat{z}_0 + \frac{\hat{z}_0 + z_{\alpha/2}}{1 - \hat{a}(\hat{z}_0 + z_{\alpha/2})}] \;\;\;\;\;\; (46)$$ $$\alpha_2 = \Phi[\hat{z}_0 + \frac{\hat{z}_0 + z_{1-\alpha/2}}{1 - \hat{a}(\hat{z}_0 + z_{1-\alpha/2})}] \;\;\;\;\;\; (47)$$ $$\hat{z}_0 = \Phi^{-1}[\hat{G}(\hat{\theta})] \;\;\;\;\;\; (48)$$ $$\hat{a} = \frac{\sum_{i=1}^N (\hat{\theta}_{(\cdot)} - \hat{\theta}_{(i)})^3}{6[\sum_{i=1}^N (\hat{\theta}_{(\cdot)} - \hat{\theta}_{(i)})^2]^{3/2}} \;\;\;\;\;\; (49)$$ where the quantity \(\hat{\theta}_{(i)}\) denotes the estimate of \(\theta\) using all the values in \(\underline{x}\) except the \(i\)'th one, and $$\hat{\theta}_{(\cdot)} = \frac{1}{N} \sum_{i=1}^N \hat{\theta}_{(i)} \;\;\;\;\;\; (50)$$ A one-sided lower confidence interval is given by: $$[\hat{G}^{-1}(\alpha_1), \; \infty] \;\;\;\;\;\; (51)$$ and a one-sided upper confidence interval is given by: $$[0, \; \hat{G}^{-1}(\alpha_2)] \;\;\;\;\;\; (52)$$ where \(\alpha_1\) and \(\alpha_2\) are computed as for a two-sided confidence interval, except \(\alpha/2\) is replaced with \(\alpha\) in Equations (46) and (47).
The constant \(\hat{z}_0\) incorporates the bias correction, and the constant \(\hat{a}\) is the acceleration constant. The term “acceleration” refers to the rate of change of the standard error of the estimate of \(\theta\) with respect to the true value of \(\theta\) (Efron and Tibshirani, 1993, p.186). For a normal (Gaussian) distribution, the standard error of the estimate of \(\theta\) does not depend on the value of \(\theta\), hence the acceleration constant is not really necessary.
When ci.method="bootstrap"
, the function elnormAltCensored
computes both
the percentile method and bias-corrected and accelerated method bootstrap confidence
intervals.
This method of constructing confidence intervals for censored data was studied by Shumway et al. (1989).
a list of class "estimateCensored"
containing the estimated parameters
and other information. See estimateCensored.object
for details.
Bain, L.J., and M. Engelhardt. (1991). Statistical Analysis of Reliability and Life-Testing Models. Marcel Dekker, New York, 496pp.
Cohen, A.C. (1959). Simplified Estimators for the Normal Distribution When Samples are Singly Censored or Truncated. Technometrics 1(3), 217–237.
Cohen, A.C. (1963). Progressively Censored Samples in Life Testing. Technometrics 5, 327–339.
Cohen, A.C. (1991). Truncated and Censored Samples. Marcel Dekker, New York, New York, 312pp.
Cox, D.R. (1970). Analysis of Binary Data. Chapman & Hall, London. 142pp.
Efron, B. (1979). Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics 7, 1–26.
Efron, B., and R.J. Tibshirani. (1993). An Introduction to the Bootstrap. Chapman and Hall, New York, 436pp.
El-Shaarawi, A.H. (1989). Inferences About the Mean from Censored Water Quality Data. Water Resources Research 25(4), 685–690.
El-Shaarawi, A.H., and D.M. Dolan. (1989). Maximum Likelihood Estimation of Water Quality Concentrations from Censored Data. Canadian Journal of Fisheries and Aquatic Sciences 46, 1033–1039.
El-Shaarawi, A.H., and S.R. Esterby. (1992). Replacement of Censored Observations by a Constant: An Evaluation. Water Research 26(6), 835–844.
El-Shaarawi, A.H., and A. Naderi. (1991). Statistical Inference from Multiply Censored Environmental Data. Environmental Monitoring and Assessment 17, 339–347.
Gibbons, R.D., D.K. Bhaumik, and S. Aryal. (2009). Statistical Methods for Groundwater Monitoring, Second Edition. John Wiley & Sons, Hoboken.
Gilliom, R.J., and D.R. Helsel. (1986). Estimation of Distributional Parameters for Censored Trace Level Water Quality Data: 1. Estimation Techniques. Water Resources Research 22, 135–146.
Gleit, A. (1985). Estimation for Small Normal Data Sets with Detection Limits. Environmental Science and Technology 19, 1201–1206.
Haas, C.N., and P.A. Scheff. (1990). Estimation of Averages in Truncated Samples. Environmental Science and Technology 24(6), 912–919.
Hashimoto, L.K., and R.R. Trussell. (1983). Evaluating Water Quality Data Near the Detection Limit. Paper presented at the Advanced Technology Conference, American Water Works Association, Las Vegas, Nevada, June 5-9, 1983.
Helsel, D.R. (1990). Less than Obvious: Statistical Treatment of Data Below the Detection Limit. Environmental Science and Technology 24(12), 1766–1774.
Helsel, D.R. (2012). Statistics for Censored Environmental Data Using Minitab and R, Second Edition. John Wiley & Sons, Hoboken, New Jersey.
Helsel, D.R., and T.A. Cohn. (1988). Estimation of Descriptive Statistics for Multiply Censored Water Quality Data. Water Resources Research 24(12), 1997–2004.
Hirsch, R.M., and J.R. Stedinger. (1987). Plotting Positions for Historical Floods and Their Precision. Water Resources Research 23(4), 715–727.
Korn, L.R., and D.E. Tyler. (2001). Robust Estimation for Chemical Concentration Data Subject to Detection Limits. In Fernholz, L., S. Morgenthaler, and W. Stahel, eds. Statistics in Genetics and in the Environmental Sciences. Birkhauser Verlag, Basel, pp.41–63.
Krishnamoorthy K., and T. Mathew. (2009). Statistical Tolerance Regions: Theory, Applications, and Computation. John Wiley and Sons, Hoboken.
Michael, J.R., and W.R. Schucany. (1986). Analysis of Data from Censored Samples. In D'Agostino, R.B., and M.A. Stephens, eds. Goodness-of Fit Techniques. Marcel Dekker, New York, 560pp, Chapter 11, 461–496.
Millard, S.P., P. Dixon, and N.K. Neerchal. (2014; in preparation). Environmental Statistics with R. CRC Press, Boca Raton, Florida.
Nelson, W. (1982). Applied Life Data Analysis. John Wiley and Sons, New York, 634pp.
Newman, M.C., P.M. Dixon, B.B. Looney, and J.E. Pinder. (1989). Estimating Mean and Variance for Environmental Samples with Below Detection Limit Observations. Water Resources Bulletin 25(4), 905–916.
Pettitt, A. N. (1983). Re-Weighted Least Squares Estimation with Censored and Grouped Data: An Application of the EM Algorithm. Journal of the Royal Statistical Society, Series B 47, 253–260.
Regal, R. (1982). Applying Order Statistic Censored Normal Confidence Intervals to Time Censored Data. Unpublished manuscript, University of Minnesota, Duluth, Department of Mathematical Sciences.
Royston, P. (2007). Profile Likelihood for Estimation and Confidence Intervals. The Stata Journal 7(3), pp. 376–387.
Saw, J.G. (1961b). The Bias of the Maximum Likelihood Estimators of Location and Scale Parameters Given a Type II Censored Normal Sample. Biometrika 48, 448–451.
Schmee, J., D. Gladstein, and W. Nelson. (1985). Confidence Limits for Parameters of a Normal Distribution from Singly Censored Samples, Using Maximum Likelihood. Technometrics 27(2), 119–128.
Schneider, H. (1986). Truncated and Censored Samples from Normal Populations. Marcel Dekker, New York, New York, 273pp.
Shumway, R.H., A.S. Azari, and P. Johnson. (1989). Estimating Mean Concentrations Under Transformations for Environmental Data With Detection Limits. Technometrics 31(3), 347–356.
Singh, A., R. Maichle, and S. Lee. (2006). On the Computation of a 95% Upper Confidence Limit of the Unknown Population Mean Based Upon Data Sets with Below Detection Limit Observations. EPA/600/R-06/022, March 2006. Office of Research and Development, U.S. Environmental Protection Agency, Washington, D.C.
Stryhn, H., and J. Christensen. (2003). Confidence Intervals by the Profile Likelihood Method, with Applications in Veterinary Epidemiology. Contributed paper at ISVEE X (November 2003, Chile). https://gilvanguedes.com/wp-content/uploads/2019/05/Profile-Likelihood-CI.pdf.
Travis, C.C., and M.L. Land. (1990). Estimating the Mean of Data Sets with Nondetectable Values. Environmental Science and Technology 24, 961–962.
USEPA. (2009). Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities, Unified Guidance. EPA 530/R-09-007, March 2009. Office of Resource Conservation and Recovery Program Implementation and Information Division. U.S. Environmental Protection Agency, Washington, D.C. Chapter 15.
USEPA. (2010). Errata Sheet - March 2009 Unified Guidance. EPA 530/R-09-007a, August 9, 2010. Office of Resource Conservation and Recovery, Program Information and Implementation Division. U.S. Environmental Protection Agency, Washington, D.C.
Venzon, D.J., and S.H. Moolgavkar. (1988). A Method for Computing Profile-Likelihood-Based Confidence Intervals. Journal of the Royal Statistical Society, Series C (Applied Statistics) 37(1), pp. 87–94.
A sample of data contains censored observations if some of the observations are reported only as being below or above some censoring level. In environmental data analysis, Type I left-censored data sets are common, with values being reported as “less than the detection limit” (e.g., Helsel, 2012). Data sets with only one censoring level are called singly censored; data sets with multiple censoring levels are called multiply or progressively censored.
Statistical methods for dealing with censored data sets have a long history in the field of survival analysis and life testing. More recently, researchers in the environmental field have proposed alternative methods of computing estimates and confidence intervals in addition to the classical ones such as maximum likelihood estimation.
Helsel (2012, Chapter 6) gives an excellent review of past studies of the properties of various estimators based on censored environmental data.
In practice, it is better to use a confidence interval for the mean or a joint confidence region for the mean and standard deviation, rather than rely on a single point-estimate of the mean. Since confidence intervals and regions depend on the properties of the estimators for both the mean and standard deviation, the results of studies that simply evaluated the performance of the mean and standard deviation separately cannot be readily extrapolated to predict the performance of various methods of constructing confidence intervals and regions. Furthermore, for several of the methods that have been proposed to estimate the mean based on Type I left-censored data, standard errors of the estimates are not available, hence it is not possible to construct confidence intervals (El-Shaarawi and Dolan, 1989).
Few studies have been done to evaluate the performance of methods for constructing confidence intervals for the mean or joint confidence regions for the mean and standard deviation on the original scale, not the log-scale, when data are subjected to single or multiple censoring. See, for example, Singh et al. (2006).
# Chapter 15 of USEPA (2009) gives several examples of estimating the mean
# and standard deviation of a lognormal distribution on the log-scale using
# manganese concentrations (ppb) in groundwater at five background wells.
# In EnvStats these data are stored in the data frame
# EPA.09.Ex.15.1.manganese.df.
# Here we will estimate the mean and coefficient of variation
# ON THE ORIGINAL SCALE using the MLE, QMVUE,
# and robust ROS (imputation with Q-Q regression).
# First look at the data:
#-----------------------
EPA.09.Ex.15.1.manganese.df
#> Sample Well Manganese.Orig.ppb Manganese.ppb Censored
#> 1 1 Well.1 <5 5.0 TRUE
#> 2 2 Well.1 12.1 12.1 FALSE
#> 3 3 Well.1 16.9 16.9 FALSE
#> 4 4 Well.1 21.6 21.6 FALSE
#> 5 5 Well.1 <2 2.0 TRUE
#> 6 1 Well.2 <5 5.0 TRUE
#> 7 2 Well.2 7.7 7.7 FALSE
#> 8 3 Well.2 53.6 53.6 FALSE
#> 9 4 Well.2 9.5 9.5 FALSE
#> 10 5 Well.2 45.9 45.9 FALSE
#> 11 1 Well.3 <5 5.0 TRUE
#> 12 2 Well.3 5.3 5.3 FALSE
#> 13 3 Well.3 12.6 12.6 FALSE
#> 14 4 Well.3 106.3 106.3 FALSE
#> 15 5 Well.3 34.5 34.5 FALSE
#> 16 1 Well.4 6.3 6.3 FALSE
#> 17 2 Well.4 11.9 11.9 FALSE
#> 18 3 Well.4 10 10.0 FALSE
#> 19 4 Well.4 <2 2.0 TRUE
#> 20 5 Well.4 77.2 77.2 FALSE
#> 21 1 Well.5 17.9 17.9 FALSE
#> 22 2 Well.5 22.7 22.7 FALSE
#> 23 3 Well.5 3.3 3.3 FALSE
#> 24 4 Well.5 8.4 8.4 FALSE
#> 25 5 Well.5 <2 2.0 TRUE
longToWide(EPA.09.Ex.15.1.manganese.df,
"Manganese.Orig.ppb", "Sample", "Well",
paste.row.name = TRUE)
#> Well.1 Well.2 Well.3 Well.4 Well.5
#> Sample.1 <5 <5 <5 6.3 17.9
#> Sample.2 12.1 7.7 5.3 11.9 22.7
#> Sample.3 16.9 53.6 12.6 10 3.3
#> Sample.4 21.6 9.5 106.3 <2 8.4
#> Sample.5 <2 45.9 34.5 77.2 <2
# Now estimate the mean and coefficient of variation
# using the MLE:
#---------------------------------------------------
with(EPA.09.Ex.15.1.manganese.df,
elnormAltCensored(Manganese.ppb, Censored))
#>
#> Results of Distribution Parameter Estimation
#> Based on Type I Censored Data
#> --------------------------------------------
#>
#> Assumed Distribution: Lognormal
#>
#> Censoring Side: left
#>
#> Censoring Level(s): 2 5
#>
#> Estimated Parameter(s): mean = 23.003987
#> cv = 2.300772
#>
#> Estimation Method: MLE
#>
#> Data: Manganese.ppb
#>
#> Censoring Variable: Censored
#>
#> Sample Size: 25
#>
#> Percent Censored: 24%
#>
# Now compare the MLE with the QMVUE and the
# estimator based on robust ROS
#-------------------------------------------
with(EPA.09.Ex.15.1.manganese.df,
elnormAltCensored(Manganese.ppb, Censored))$parameters
#> mean cv
#> 23.003987 2.300772
with(EPA.09.Ex.15.1.manganese.df,
elnormAltCensored(Manganese.ppb, Censored,
method = "qmvue"))$parameters
#> mean cv
#> 21.566945 1.841366
with(EPA.09.Ex.15.1.manganese.df,
elnormAltCensored(Manganese.ppb, Censored,
method = "rROS"))$parameters
#> mean cv
#> 19.886180 1.298868
#----------
# The method used to estimate quantiles for a Q-Q plot is
# determined by the argument prob.method. For the function
# elnormAltCensored, for any estimation method that involves
# Q-Q regression, the default value of prob.method is
# "hirsch-stedinger" and the default value for the
# plotting position constant is plot.pos.con=0.375.
# Both Helsel (2012) and USEPA (2009) also use the Hirsch-Stedinger
# probability method but set the plotting position constant to 0.
with(EPA.09.Ex.15.1.manganese.df,
elnormAltCensored(Manganese.ppb, Censored,
method = "rROS", plot.pos.con = 0))$parameters
#> mean cv
#> 19.827673 1.304725
#----------
# Using the same data as above, compute a confidence interval
# for the mean using the profile-likelihood method.
with(EPA.09.Ex.15.1.manganese.df,
elnormAltCensored(Manganese.ppb, Censored, ci = TRUE))
#>
#> Results of Distribution Parameter Estimation
#> Based on Type I Censored Data
#> --------------------------------------------
#>
#> Assumed Distribution: Lognormal
#>
#> Censoring Side: left
#>
#> Censoring Level(s): 2 5
#>
#> Estimated Parameter(s): mean = 23.003987
#> cv = 2.300772
#>
#> Estimation Method: MLE
#>
#> Data: Manganese.ppb
#>
#> Censoring Variable: Censored
#>
#> Sample Size: 25
#>
#> Percent Censored: 24%
#>
#> Confidence Interval for: mean
#>
#> Confidence Interval Method: Profile Likelihood
#>
#> Confidence Interval Type: two-sided
#>
#> Confidence Level: 95%
#>
#> Confidence Interval: LCL = 12.37629
#> UCL = 69.87694
#>