International Journal of Probability and Statistics 2015, 4(2): 37-41
DOI: 10.5923/j.ijps.20150402.01
Estimation of the Mean and Variance of a Univariate
Normal Distribution Using Least-Squares via the
Differential and Integral Techniques
C. R. Kikawa*, M. Y. Shatalov, P. H. Kloppers
Department of Mathematics and Statistics, Tshwane University of Technology, Pretoria, South Africa
Abstract Two new approaches (methods I and II) for estimating the parameters of a univariate normal probability density
function are proposed. Their performance is evaluated on two simulated normally distributed univariate datasets, one small
(n = 24) and one large (n = 1200), and the results are compared with those obtained from the maximum likelihood (ML) and
method of moments (MM) approaches on the same samples. The proposed methods I and II give good results that are
comparable to those from the standard methods in a practical setting, and they perform as well as the ML method on large
samples. Their major advantage over the ML method is that they do not require initial approximations for the unknown
parameters. We therefore propose that, in practice, the proposed methods be used alongside the standard methods to supply
initial approximations at the appropriate step of their algorithms.
Keywords Maximum likelihood, Method of moments, Normal distribution, Bootstrap samples
1. Introduction
Statistical inference is largely concerned with making
logical conclusions about a population using an observed
section or part of the entire population referred to as the
sample [1]. The reference population can always be
represented using an appropriate probability framework
which is usually written in terms of unknown parameters.
For instance, the crop yield obtained when a certain fertilizer
is applied can be assumed to follow a normal distribution
with mean μ and standard deviation σ; it is then
required to make inferences about the parameters μ and σ
using the statistics x̄ and s estimated from the
sample of crop yields, and then to draw inferences about the total
crop yield. Note that in this work we deal with only one
aspect of statistical inference, namely estimation, and two
novel approaches are discussed.
Let x be a single realisation from a univariate normal
density function with mean μ and standard deviation σ,
which implies that x ~ N(μ, σ²) with −∞ < μ < ∞, σ > 0. In this
paper, simple and computationally attractive methods for
estimating both μ and σ of a univariate normal
distribution function are proposed. Methods for
estimating the sufficient parameters of a univariate normal
density function are well known, such as the method of
* Corresponding author:
richard.kikawa@gmail.com (C. R. Kikawa)
Published online at http://journal.sapub.org/ijps
Copyright © 2015 Scientific & Academic Publishing. All Rights Reserved
moments and the maximum likelihood method [2, 3], but
these are computationally intensive. Moreover, although the
maximum likelihood estimators have a high probability of
lying in the neighbourhood of the parameters to be
estimated, in some instances the likelihood equations are
intractable without access to computing machinery.
The method of moments, by contrast, can quickly be
computed by hand, but its estimators are often far
from the required quantities, and for small samples the
estimates often fall outside the parameter space [4, 5].
On the whole, it is not advisable to rely solely on estimates from the
method of moments.
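To make the comparison concrete, the following sketch computes the method-of-moments and maximum likelihood estimates of μ and σ for a simulated normal sample. The true parameter values and seed here are illustrative assumptions, not the paper's simulation settings; for the normal distribution both methods admit the closed forms shown, so no iterative solver is needed.

```python
import math
import random

random.seed(42)

# Simulated normal sample; mu = 5 and sigma = 2 are assumed for
# illustration only (the paper's simulated datasets may differ).
mu_true, sigma_true = 5.0, 2.0
sample = [random.gauss(mu_true, sigma_true) for _ in range(1200)]
n = len(sample)

# Method-of-moments estimators: match the first two sample moments.
mm_mean = sum(sample) / n
mm_var = sum(x * x for x in sample) / n - mm_mean ** 2

# Maximum likelihood estimators: for the normal these coincide with
# the moment estimators (sample mean, and the 1/n variance).
ml_mean = mm_mean
ml_var = sum((x - ml_mean) ** 2 for x in sample) / n

print("mean estimate:", round(mm_mean, 3))
print("sigma estimate:", round(math.sqrt(ml_var), 3))
```

For small samples the 1/n variance estimator is biased downward, which is one reason the paper treats small (n = 24) and large (n = 1200) samples separately.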
1.1. Generalized Probability Density Function
When a dataset is presented and critically examined for
any characteristics it may exhibit (a step statistically known as
exploratory data analysis), we usually want to study its
pattern, which can point us toward a possible probability
density function (pdf) to serve as the probability
framework for the data. If, however, an entirely new
framework or model must be built, a great deal of demanding
work is required. In this section we present a framework
that suits nearly all pdfs of continuous random variables,
f(x) = Λ ∅(x) exp(−ℶ(x)),  a ≤ x ≤ b,  (1.1)
where a and b indicate the domain of applicability,
oftentimes from −∞ to ∞ or from 0 to ∞ depending on
the framework under consideration, and ∅(x) is the actual shape