This paper considers Bayesian estimation of the loss and risk functions of the unknown parameter of the binomial distribution under a loss function different from that of Rukhin (1988). The estimation uses the beta distribution, the natural conjugate prior for the unknown parameter. The estimators obtained are conservatively biased and have finite frequentist risk.
## I. INTRODUCTION
Rukhin (1988) introduced the loss function
$$
L (\theta , \delta , \gamma) = w (\theta , \delta) \gamma^ {- \frac {1}{2}} + \gamma^ {\frac {1}{2}} \tag {1.1}
$$
where $\gamma$ is an estimator of the loss function $w(\theta,\delta)$, which is non-negative. Guobing (2016) used this loss function to derive estimates of the loss and risk functions of the parameter of Maxwell's distribution. Singh (2021) considered various forms of $w(\theta,\delta)$ and derived estimates of the loss and risk functions of the parameter of a continuous distribution that includes the half-normal, Rayleigh, and Maxwell distributions as particular cases. Rukhin (1988) considered Bayesian estimation of the unknown parameter $\theta$ of the binomial distribution by taking
$$
w (\theta , \delta) = (\theta - \delta) ^ {2} \tag {1.2}
$$
In this paper, the Bayes estimate of the unknown parameter $\theta$ of the binomial distribution is obtained by replacing $w(\theta, \delta)$ with $w_1(\theta, \delta)$, given by
$$
w _ {1} (\theta , \delta) = h (\theta) (\theta - \delta) ^ {2} \tag {1.3}
$$
where
$$
h (\theta) = \frac {1}{\{\theta (1 - \theta) \}} \tag {1.4}
$$
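As a quick numerical illustration, the loss (1.1) with $w_1$ from (1.3)-(1.4) can be evaluated directly. The following Python sketch (function names are illustrative, not from the paper) encodes these definitions:

```python
def w1(theta, delta):
    """Weighted squared-error loss (1.3)-(1.4): (theta - delta)^2 / {theta(1 - theta)}."""
    return (theta - delta) ** 2 / (theta * (1.0 - theta))

def loss(theta, delta, gamma):
    """Rukhin-type loss (1.1) with w replaced by w1: w1 * gamma^{-1/2} + gamma^{1/2}."""
    return w1(theta, delta) * gamma ** (-0.5) + gamma ** 0.5
```

Note that the weight $h(\theta)$ inflates the squared error as $\theta$ approaches $0$ or $1$, which is what drives the finite-risk results below.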
## II. ESTIMATION OF LOSS AND RISK OF THE PARAMETER OF BINOMIAL DISTRIBUTION
Let the random variable $X$ follow a binomial distribution with parameters $n$ and $\theta$, where $\theta$ is unknown and satisfies $0 \leq \theta \leq 1$. The prior p.d.f. of $\theta$, denoted by $\pi_1(\theta)$, is as follows:
$$
\pi_{1} (\theta) = \left\{ \begin{array}{l l} \frac{\theta^{\alpha - 1} (1 - \theta)^{\beta - 1}}{B (\alpha , \beta)} & \text{if } \alpha \geq 0, \beta \geq 0, 0 < \theta < 1 \\ 0 & \text{Otherwise} \end{array} \right. \tag{2.1}
$$
Under the assumption of the prior probability density function (p.d.f.) for $\theta$ as above, the Bayes estimates of $\theta$ derived by Rukhin (1988) are as follows. For $\alpha \geq 0$, $\beta \geq 0$,
$$
\delta_ {B} (X) = \frac {(X + \alpha)}{(n + \alpha + \beta)} \tag {2.2}
$$
$$
\gamma_ {B} (X) = \frac {(X + \alpha) (n + \beta - X)}{(n + \alpha + \beta) ^ {2} (n + \alpha + \beta + 1)} \tag {2.3}
$$
and for $\alpha = 0, \beta = 0$
$$
\delta_ {0} (X) = \frac {X}{n} \tag {2.4}
$$
$$
\gamma_ {0} (X) = \frac {X (n - X)}{n ^ {2} (n + 1)} \tag {2.5}
$$
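Estimates (2.2) and (2.3) are simply the posterior mean and posterior variance of the Beta$(X+\alpha,\ n-X+\beta)$ posterior, which can be confirmed numerically. A minimal sketch (illustrative function names, standard library arithmetic only):

```python
def delta_B(x, n, a, b):
    """Bayes estimate (2.2): posterior mean of Beta(x + a, n - x + b)."""
    return (x + a) / (n + a + b)

def gamma_B(x, n, a, b):
    """Bayes loss estimate (2.3)."""
    return (x + a) * (n + b - x) / ((n + a + b) ** 2 * (n + a + b + 1))

def beta_variance(p, q):
    """Variance of a Beta(p, q) random variable: pq / {(p + q)^2 (p + q + 1)}."""
    return p * q / ((p + q) ** 2 * (p + q + 1))
```

For example, with $x=3$, $n=10$, $\alpha=\beta=2$, `gamma_B` agrees with `beta_variance(5, 9)` exactly.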
It was shown that
$$
E _ {\theta} L \left(\theta , \delta_ {0}, \gamma_ {0}\right) = \infty \tag {2.6}
$$
Under $w_{1}(\theta,\delta)$ as above, the corresponding Bayes estimates for $\alpha \geq 0$, $\beta \geq 0$ are given by
$$
\delta_ {1 B} (X) = \frac {E \left\{\theta h (\theta) / X \right\}}{E \left\{h (\theta) / X \right\}} \tag {2.7}
$$
$$
\delta_ {1 B} (X) = \frac {(X + \alpha - 1)}{A - 2} \tag {2.8}
$$
on simplification, provided $A = n + \alpha + \beta > 2$. Further,
$$
\gamma_ {1 B} (X) = E \left\{\theta^{2} h (\theta) / X \right\} - \left\{\delta_ {1 B} (X) \right\} ^ {2} E \left\{h (\theta) / X \right\} \tag {2.9}
$$
$$
\gamma_ {1 B} (X) = \frac {1}{A - 2} \tag {2.10}
$$
on simplification, again provided $A = n + \alpha + \beta > 2$.
We see that, in this case, $\gamma_{1B}(X)$ does not depend on $X$ and is a function of $n$, $\alpha$, and $\beta$ alone.
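The closed forms (2.8) and (2.10) follow from ratios of Beta functions, since the posterior of $\theta$ given $X$ is Beta$(X+\alpha,\ n-X+\beta)$ and $E\{\theta^p(1-\theta)^q \mid X\}$ is a Beta-function ratio. Here $\gamma_{1B}$ is the posterior expectation of $w_1(\theta,\delta_{1B})$, which expands to $E\{\theta^2 h(\theta)\mid X\} - \delta_{1B}^2\, E\{h(\theta)\mid X\}$. A short numerical check (illustrative names, stdlib only, hypothetical values of $x$, $n$, $\alpha$, $\beta$):

```python
import math

def beta_fn(a, b):
    """Beta function computed via log-gamma."""
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def post_moment(x, n, a, b, p, q):
    """E{theta^p (1 - theta)^q | X = x} under the Beta(x + a, n - x + b) posterior."""
    return beta_fn(x + a + p, n - x + b + q) / beta_fn(x + a, n - x + b)

# hypothetical values for the check
x, n, a, b = 4, 10, 1.5, 2.5
A = n + a + b
E_h   = post_moment(x, n, a, b, -1, -1)   # E{h(theta) | X}
E_th  = post_moment(x, n, a, b, 0, -1)    # E{theta h(theta) | X} = E{1/(1-theta) | X}
E_t2h = post_moment(x, n, a, b, 1, -1)    # E{theta^2 h(theta) | X}
delta_1B = E_th / E_h                     # should equal (x + a - 1)/(A - 2), as in (2.8)
gamma_1B = E_t2h - delta_1B ** 2 * E_h    # should equal 1/(A - 2), as in (2.10)
```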
$$
E_{\theta} L\left(\theta,\delta_{1B},\gamma_{1B}\right) = E_{\theta}\left[h(\theta)\left(\theta - (X + \alpha - 1)(A - 2)^{-1}\right)^{2}\right](A - 2)^{1/2} + (A - 2)^{-1/2} \tag{2.11}
$$
Or,
$$
E_{\theta} L\left(\theta,\delta_{1B},\gamma_{1B}\right) = \left[n+h(\theta)\left(1-\alpha+\theta(\alpha+\beta-2)\right)^{2}\right]\left(A-2\right)^{-3/2} + \left(A-2\right)^{-1/2} < \infty \tag{2.12}
$$
In this case,
$$
R (\theta , \delta_ {1 B}) = E _ {\theta} \left\{h (\theta) (\theta - \delta_ {1 B}) ^ {2} \right\} \tag {2.13}
$$
Or,
$$
R (\theta , \delta_ {1 B}) = [ n + h (\theta) \{1 - \alpha + \theta (\alpha + \beta - 2) \} ^ {2} ] (A - 2) ^ {- 2} \tag {2.14}
$$
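The closed form (2.14) can be cross-checked by computing the risk as an exact finite sum over the binomial distribution of $X$. A sketch (illustrative function names):

```python
import math

def risk_by_sum(theta, n, a, b):
    """Frequentist risk E_theta{ w1(theta, delta_1B(X)) } by direct summation."""
    A = n + a + b
    h = 1.0 / (theta * (1.0 - theta))
    total = 0.0
    for x in range(n + 1):
        pmf = math.comb(n, x) * theta ** x * (1.0 - theta) ** (n - x)
        delta = (x + a - 1.0) / (A - 2.0)   # delta_1B(x) from (2.8)
        total += pmf * h * (theta - delta) ** 2
    return total

def risk_closed_form(theta, n, a, b):
    """Closed form (2.14)."""
    A = n + a + b
    h = 1.0 / (theta * (1.0 - theta))
    return (n + h * (1.0 - a + theta * (a + b - 2.0)) ** 2) / (A - 2.0) ** 2
```

The two agree because $E_\theta\{(\theta(A-2) - X - \alpha + 1)^2\} = \mathrm{Var}(X) + (1-\alpha+\theta(\alpha+\beta-2))^2$ and $h(\theta)\,\mathrm{Var}(X) = n$.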
As noted by Kiefer (1977), an estimator $\gamma(X)$ is said to be conservatively biased if
$$
E _ {\theta} \left\{\gamma (X) \right\} \geq R (\theta , \delta) = E _ {\theta} \left\{w (\theta , \delta) \right\} \tag {2.15}
$$
In light of this condition, $\gamma_0(X)$ as given by Rukhin (1988) is not conservatively biased. In the present case,
$$
E _ {\theta} \left\{\gamma_ {1 B} (X) \right\} = \frac {1}{A - 2} \tag {2.16}
$$
Let $\delta_{0B}(X)$ and $\gamma_{0B}(X)$ denote the values of $\delta_{1B}(X)$ and $\gamma_{1B}(X)$, respectively, when $\alpha = \beta = 0$. Suppose, if possible, that
$$
E _ {\theta} \left\{\gamma_ {0 B} (X) \right\} \geq R (\theta , \delta_ {0 B}) \tag {2.17}
$$
which holds if,
$$
- 2 \theta^ {2} + 2 \theta - 1 \geq 0 \tag {2.18}
$$
which is a contradiction, since $0 < \theta < 1$ and the maximum value of $-2\theta^2 + 2\theta - 1$ over this interval is $-\frac{1}{2}$, attained at $\theta = \frac{1}{2}$; moreover, $-2\theta^2 + 2\theta - 1 = -1$ at $\theta = 0$ and $\theta = 1$. Thus, $\gamma_{0B}(X)$ is not conservatively biased.
When $\alpha = \beta = 1$, we have,
$$
E _ {\theta} \left\{\gamma_ {1 B} (X) \right\} = R (\theta , \delta_ {1 B}) = \frac {1}{n} \tag {2.19}
$$
When $\alpha = \beta > 1$ and $\theta = 0.5$,
$$
E _ {\theta} \left\{\gamma_ {1 B} (X) \right\} \geq R (\theta , \delta_ {1 B}) \tag {2.20}
$$
When $\alpha = \beta > 1$ and $\theta \neq 0.5$,
$$
E _ {\theta} \left\{\gamma_ {1 B} (X) \right\} \geq R (\theta , \delta_ {1 B}) \tag {2.21}
$$
which holds if
$$
\alpha \leq 1 + g (\theta) \tag {2.22}
$$
where
$$
g (\theta) = \frac {2 \theta (1 - \theta)}{(2 \theta - 1) ^ {2}} \tag {2.23}
$$
The function $g(\theta)$ is symmetric about $\theta = 0.5$: it increases monotonically on $(0, 0.5)$, decreases monotonically on $(0.5, 1)$, and tends to infinity as $\theta \to 0.5$, so the condition $\alpha \leq 1 + g(\theta)$ is least restrictive for $\theta$ near $0.5$. Hence, $\gamma_{1B}(X)$ as above presents a valid 'frequentist report' in the sense of Berger (1985).
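Conditions (2.20)-(2.23) for the case $\alpha = \beta$ can be verified numerically by comparing $E_\theta\{\gamma_{1B}(X)\} = 1/(A-2)$ from (2.16) with the risk (2.14). The sketch below (illustrative names) checks that, for $\alpha = \beta > 1$, conservative bias coincides with $\alpha \leq 1 + g(\theta)$ away from the boundary:

```python
def g(theta):
    """g(theta) from (2.23)."""
    return 2.0 * theta * (1.0 - theta) / (2.0 * theta - 1.0) ** 2

def is_conservative(theta, n, alpha):
    """Check E_theta{gamma_1B} >= R(theta, delta_1B) when alpha = beta, via (2.10), (2.14)."""
    A = n + 2.0 * alpha
    h = 1.0 / (theta * (1.0 - theta))
    risk = (n + h * (1.0 - alpha + theta * (2.0 * alpha - 2.0)) ** 2) / (A - 2.0) ** 2
    return 1.0 / (A - 2.0) >= risk
```

For instance, $g(0.4) = 12$, so any $\alpha = \beta \leq 13$ is conservatively biased at $\theta = 0.4$, whereas $g(0.1) \approx 0.28$ rules out $\alpha = \beta = 5$ at $\theta = 0.1$.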
The results are summarized in the following theorem.
THEOREM. Let $(\delta_{1B},\gamma_{1B})$ be the Bayes estimators of the unknown parameter $\theta$ of the binomial distribution under the loss function $L(\theta,\delta,\gamma) = \frac{(\theta -\delta)^2}{\theta(1 - \theta)} \gamma^{-\frac{1}{2}} + \gamma^{\frac{1}{2}}$ and a beta prior density with known parameters $\alpha$ and $\beta$. Then the frequentist risk $E_{\theta}L(\theta,\delta_{1B},\gamma_{1B})$ is finite for all values of $\alpha$ and $\beta$, provided $0 < \theta < 1$. For $\alpha = \beta = 0$, $\gamma_{1B}(X)$ is not conservatively biased. The estimator $\gamma_{1B}(X)$ is conservatively biased for $\alpha = \beta = 1$, and for $\alpha = \beta > 1$ satisfying $\alpha \leq 1 + \frac{2\theta(1 - \theta)}{(2\theta - 1)^2}$, $\theta \neq 0.5$. If $\alpha = \beta > 1$ and $\theta = 0.5$, $\gamma_{1B}(X)$ is also conservatively biased.
References
Berger, J. (1985). The frequentist viewpoint and conditioning.
Fan, G. (2016). Estimation of the loss and risk functions of the parameter of Maxwell's distribution.
Kiefer, J. (1977). Conditional confidence statements and confidence estimators.
Singh, R. (2021). On Bayesian estimation of loss and risk functions.
Rukhin, A. (1988). Estimating the loss of estimators of a binomial parameter.
No ethics committee approval was required for this article type.
Data Availability
Not applicable for this article.
How to Cite This Article
Randhir Singh. 2026. "On Bayesian Estimation of Loss of Estimators of Unknown Parameter of Binomial Distribution". Global Journal of Science Frontier Research - F: Mathematics & Decision, GJSFR-F Volume 22, Issue F4.