{"id":19474,"date":"2025-01-15T21:35:08","date_gmt":"2025-01-15T16:35:08","guid":{"rendered":"https:\/\/itfeature.com\/?p=19474"},"modified":"2025-01-12T22:18:08","modified_gmt":"2025-01-12T17:18:08","slug":"unbiasedness","status":"publish","type":"post","link":"https:\/\/itfeature.com\/estimation\/properties\/unbiasedness\/","title":{"rendered":"Unbiasedness"},"content":{"rendered":"\n<p><a href=\"https:\/\/itfeature.com\/estimation\/properties\/unbiasedness-of-the-estimator\/\" target=\"_blank\" rel=\"noreferrer noopener\">Unbiasedness<\/a> is a statistical concept that describes the accuracy of an <a href=\"https:\/\/itfeature.com\/estimation\/\" target=\"_blank\" rel=\"noreferrer noopener\">estimator<\/a>. An estimator is said to be an unbiased estimator if its expected value (or average value over many samples) equals the corresponding population parameter, that is, $E(\\hat{\\theta}) = \\theta$.<\/p>\n\n\n\n<p>If the expected value of an estimator $\\hat{\\theta}$ is not equal to the corresponding parameter, the estimator is said to be biased. The bias of an estimator $\\hat{\\theta}$ is defined as<\/p>\n\n\n\n<p>$$Bias = E(\\hat{\\theta}) - \\theta$$<\/p>\n\n\n\n<p>Note that $\\overline{X}$ is an unbiased estimator of the mean of a population. 
Therefore,<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>$\\overline{X}$ is an unbiased estimator of the parameter $\\mu$ in the Normal distribution.<\/li>\n\n\n\n<li>$\\overline{X}$ is an unbiased estimator of the parameter $p$ in the Bernoulli distribution.<\/li>\n\n\n\n<li>$\\overline{X}$ is an unbiased estimator of the parameter $\\lambda$ in the Poisson distribution.<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><a href=\"https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2025\/01\/unbiasedness.jpg?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"593\" height=\"240\" src=\"https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2025\/01\/unbiasedness.jpg?resize=593%2C240&#038;ssl=1\" alt=\"Unbiasedness, positive bias, negative bias, unbiased\" class=\"wp-image-19475\" srcset=\"https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2025\/01\/unbiasedness.jpg?w=593&amp;ssl=1 593w, https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2025\/01\/unbiasedness.jpg?resize=300%2C121&amp;ssl=1 300w\" sizes=\"(max-width: 593px) 100vw, 593px\" \/><\/a><\/figure>\n<\/div>\n\n\n<p>However, the expected value of the sample variance $S^2=\\frac{\\sum\\limits_{i=1}^n (X_i - \\overline{X})^2 }{n}$ is not equal to the population variance; in fact, $E(S^2) = \\frac{n-1}{n}\\sigma^2 \\ne \\sigma^2$.<\/p>\n\n\n\n<p>Therefore, the sample variance is not an unbiased estimator of the population variance $\\sigma^2$.<\/p>\n\n\n\n<p>Note that it is possible to have more than one unbiased estimator for an unknown parameter. 
For example, the sample mean and sample median are both unbiased estimators of the population mean $\\mu$ if the population distribution is symmetrical.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Question:<\/strong> Show that the sample mean is an unbiased estimator of the population mean.<\/pre>\n\n\n\n<p>Solution:<\/p>\n\n\n\n<p>Let $X_1, X_2, \\cdots, X_n$ be a random sample of size $n$ from a population having mean $\\mu$. The sample mean $\\overline{X}$ is<\/p>\n\n\n\n<p>$$\\overline{X} = \\frac{1}{n} \\sum\\limits_{i=1}^n X_i$$<\/p>\n\n\n\n<p>We must show that $E(\\overline{X})=\\mu$. Taking the expectation of both sides,<\/p>\n\n\n\n<p>\\begin{align*}<br \/>E(\\overline{X}) &amp;= E\\left[\\frac{1}{n} \\Sigma X_i \\right]\\\\<br \/>&amp;= \\frac{1}{n} E\\left(\\Sigma X_i\\right) = \\frac{1}{n} E(X_1 + X_2 + \\cdots + X_n)\\\\<br \/>&amp;= \\frac{1}{n} \\left[E(X_1) + E(X_2) + \\cdots + E(X_n) \\right]<br \/>\\end{align*}<\/p>\n\n\n\n<p>Since the random variables $X_1, X_2, \\cdots, X_n$ in a random sample are independent and each has the same distribution as the population, $E(X_1)=E(X_2)=\\cdots=E(X_n)=\\mu$. So,<\/p>\n\n\n\n<p>$$E(\\overline{X}) = \\frac{1}{n}(\\mu+\\mu+\\cdots + \\mu) = \\mu$$<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Why Unbiasedness is Important<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Accuracy:<\/strong> Unbiasedness is a measure of accuracy, not precision. Unbiased estimators provide accurate estimates on average, reducing the risk of systematic errors. However, an unbiased estimator can still have a large variance, meaning its individual estimates can be far from the true value.<\/li>\n\n\n\n<li><strong>Consistency:<\/strong> An unbiased estimator is not necessarily consistent. 
Consistency refers to the tendency of an estimator to converge to the true value as the sample size increases.<\/li>\n\n\n\n<li><strong>Foundation for Further Analysis:<\/strong> Unbiased estimators are often used as building blocks for more complex statistical procedures.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Unbiasedness Example<\/h3>\n\n\n\n<p>Imagine you&#8217;re trying to estimate the average height of students in your university. If you randomly sample 100 students and calculate their average height, this average is an estimator of the true average height of all students in that university. If the expected value of this sample average equals the true average height of the entire student population, then your estimator is unbiased.<\/p>\n\n\n\n<p>Unbiasedness is the state of being free from bias, prejudice, or favoritism. It can also mean being able to judge fairly without being influenced by one&#8217;s own opinions. In statistics, it also refers to (i) a sample that is not affected by extraneous factors or selectivity, and (ii) an estimator whose expected value equals the parameter being estimated.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Applications and Uses of Unbiasedness<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Parameter Estimation:<\/strong>\n<ul class=\"wp-block-list\">\n<li><span style=\"text-decoration: underline;\">Mean:<\/span> The sample mean is an unbiased estimator of the population mean.<\/li>\n\n\n\n<li><span style=\"text-decoration: underline;\">Variance:<\/span> The sample variance, with a slight adjustment (Bessel&#8217;s correction), is an unbiased estimator of the population variance.<\/li>\n\n\n\n<li><strong><span style=\"text-decoration: underline;\">Regression Coefficients: <\/span><\/strong>In linear regression, the ordinary least squares (OLS) estimators of the regression coefficients are unbiased under certain assumptions.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Hypothesis 
Testing:<\/strong>\n<ul class=\"wp-block-list\">\n<li>Unbiased estimators are often used in hypothesis tests to make inferences about population parameters. For example, the t-test for comparing means relies on the assumption that the sample means are unbiased estimators of the population means.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Machine Learning:<\/strong> In some machine learning algorithms, unbiased estimators are preferred for model parameters to avoid systematic errors.<\/li>\n\n\n\n<li><strong>Survey Sampling:<\/strong> Unbiased sampling techniques, such as simple random sampling, are used to ensure that the sample is representative of the population and that the estimates obtained from the sample are unbiased.<\/li>\n<\/ul>\n\n\n\n<p class=\"has-text-align-center\"><a href=\"https:\/\/gmstat.com\" target=\"_blank\" rel=\"noreferrer noopener\">Online MCQs and Quiz Website<\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a href=\"https:\/\/rfaqs.com\" target=\"_blank\" rel=\"noreferrer noopener\">R Language FAQs and Interview Questions<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Unbiasedness is a statistical concept that describes the accuracy of an estimator. An estimator is said to be an unbiased estimator if its expected value (or average value over many samples) equals the corresponding population parameter, that is, $E(\\hat{\\theta}) = \\theta$. 
If the expected value &#8230; <\/p>\n<p class=\"read-more-container\"><a title=\"Unbiasedness\" class=\"read-more button\" href=\"https:\/\/itfeature.com\/estimation\/properties\/unbiasedness\/#more-19474\" aria-label=\"Read more about Unbiasedness\">Read Complete Post<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[270],"tags":[],"class_list":["post-19474","post","type-post","status-publish","format-standard","hentry","category-properties"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p3aDMc-546","jetpack-related-posts":[{"id":1119,"url":"https:\/\/itfeature.com\/estimation\/properties\/unbiasedness-of-the-estimator\/","url_meta":{"origin":19474,"position":0},"title":"Unbiasedness of the Estimator (2013)","author":"Muhammad Imdad Ullah","date":"Jul 27, 2013","format":false,"excerpt":"The unbiasedness of the estimator is probably the most important property that a good estimator should possess. In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. 
An estimator is said to be\u2026","rel":"","context":"In &quot;Estimator Properties&quot;","block_context":{"text":"Estimator Properties","link":"https:\/\/itfeature.com\/estimation\/properties\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2013\/07\/Unbiasedness-of-the-Estimator.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2013\/07\/Unbiasedness-of-the-Estimator.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2013\/07\/Unbiasedness-of-the-Estimator.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2013\/07\/Unbiasedness-of-the-Estimator.png?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":17711,"url":"https:\/\/itfeature.com\/estimation\/online-estimation-mcqs-1\/","url_meta":{"origin":19474,"position":1},"title":"Best Online Estimation MCQs 1","author":"Muhammad Imdad Ullah","date":"Sep 5, 2024","format":false,"excerpt":"Online Estimation MCQs for Preparation of PPSC and FPSC Statistics Lecturer Post. There are 20 multiple-choice questions covering the topics related to properties of a good estimation (unbiasedness, efficiency, sufficiency, consistency, and invariance), expectation, point estimate, and interval estimate. Let us start with the Online Estimation MCQs Quiz. 
Online Estimation\u2026","rel":"","context":"In &quot;Estimates and Estimation&quot;","block_context":{"text":"Estimates and Estimation","link":"https:\/\/itfeature.com\/estimation\/"},"img":{"alt_text":"Online Estimation MCQs with Answers","src":"https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2024\/09\/MCQs-Estimation-with-Answers-1.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":281,"url":"https:\/\/itfeature.com\/estimation\/bias-in-statistics\/","url_meta":{"origin":19474,"position":2},"title":"Truth about Bias in Statistics","author":"Muhammad Imdad Ullah","date":"Jun 29, 2012","format":false,"excerpt":"Bias in Statistics is defined as the difference between the expected value of a statistic and the true value of the corresponding parameter. Therefore, the bias is a measure of the systematic error of an estimator. The bias indicates the distance of the estimator from the true value of the\u2026","rel":"","context":"In &quot;Estimates and Estimation&quot;","block_context":{"text":"Estimates and Estimation","link":"https:\/\/itfeature.com\/estimation\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2024\/03\/Types-of-Selection-Bias.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":18692,"url":"https:\/\/itfeature.com\/estimation\/properties\/properties-of-a-good-estimator\/","url_meta":{"origin":19474,"position":3},"title":"Properties of a Good Estimator","author":"Muhammad Imdad Ullah","date":"Oct 21, 2024","format":false,"excerpt":"Introduction (Properties of a Good Estimator) The post is about a comprehensive discussion of the Properties of a Good Estimator. In statistics, an estimator is a function of sample data used to estimate an unknown population parameter. A good estimator is both efficient and unbiased. 
An estimator is considered as\u2026","rel":"","context":"In &quot;Estimator Properties&quot;","block_context":{"text":"Estimator Properties","link":"https:\/\/itfeature.com\/estimation\/properties\/"},"img":{"alt_text":"Properties of a Good Estimator","src":"https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2024\/10\/properties-of-a-good-estimator.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":1614,"url":"https:\/\/itfeature.com\/estimation\/properties\/sufficient-statistics\/","url_meta":{"origin":19474,"position":4},"title":"Sufficient Estimators and Sufficient Statistics","author":"Muhammad Imdad Ullah","date":"Jun 20, 2014","format":false,"excerpt":"Introduction to Sufficient Estimator and Sufficient Statistics An estimator $\\hat{\\theta}$ is sufficient if it makes so much use of the information in the sample that no other estimator could extract from the sample, additional information about the population parameter being estimated. Introduction to Sufficient Estimator and Sufficient StatisticsSufficient Statistics ExampleMathematical\u2026","rel":"","context":"In &quot;Estimator Properties&quot;","block_context":{"text":"Estimator Properties","link":"https:\/\/itfeature.com\/estimation\/properties\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1396,"url":"https:\/\/itfeature.com\/estimation\/properties\/consistent-estimator\/","url_meta":{"origin":19474,"position":5},"title":"Consistent Estimator: Easy Learning","author":"Muhammad Imdad Ullah","date":"Feb 1, 2014","format":false,"excerpt":"Statistics is a consistent estimator of a population parameter if \"as the sample size increases, it becomes almost certain that the value of the statistics comes close (closer) to the value of the population parameter\". 
If an estimator (statistic) is considered consistent, it becomes more reliable with a large sample\u2026","rel":"","context":"In &quot;Estimator Properties&quot;","block_context":{"text":"Estimator Properties","link":"https:\/\/itfeature.com\/estimation\/properties\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/itfeature.com\/wp-content\/uploads\/2024\/02\/Consistent-Estimator.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/itfeature.com\/wp-json\/wp\/v2\/posts\/19474","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/itfeature.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/itfeature.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/itfeature.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/itfeature.com\/wp-json\/wp\/v2\/comments?post=19474"}],"version-history":[{"count":0,"href":"https:\/\/itfeature.com\/wp-json\/wp\/v2\/posts\/19474\/revisions"}],"wp:attachment":[{"href":"https:\/\/itfeature.com\/wp-json\/wp\/v2\/media?parent=19474"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/itfeature.com\/wp-json\/wp\/v2\/categories?post=19474"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/itfeature.com\/wp-json\/wp\/v2\/tags?post=19474"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}