Answer:
(d) in many samples, the values of the statistic are centered at the value of the parameter.
True. Both conditions are satisfied: the sampling distribution is centered at the parameter, and the expected value of the statistic equals the parameter.
Step-by-step explanation:
We say that a statistic is an unbiased estimator of a parameter when the expected value of the statistic is equal to the parameter, or equivalently, when the sampling distribution of the statistic is centered at the parameter.
For example, the sample mean [tex]\bar X[/tex] is an unbiased estimator of the true mean [tex]\mu[/tex], since:
[tex]E(\bar X) =E(\frac{\sum_{i=1}^n X_i}{n})=\frac{1}{n} \sum_{i=1}^n E(X_i)[/tex]
And assuming that [tex]E(X_i)=\mu[/tex] for each observation (for example, if [tex]X_i \sim N(\mu, \sigma)[/tex]), we have:
[tex]E(\bar X) = \frac{1}{n} n\mu = \mu[/tex]
Another interpretation: if we take many samples and compute the statistic for each one, the values of the statistic are centered exactly at the parameter, as happens with the sample mean.
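This "many samples" interpretation can be checked with a small simulation (a sketch, not part of the original answer; the population parameters mu, sigma and the sample sizes are arbitrary choices): draw many samples, compute the sample mean of each, and verify that the average of those sample means lands on [tex]\mu[/tex].

```python
import random

# Hypothetical population: Normal with true mean mu = 10 and sd sigma = 2.
random.seed(42)
mu, sigma = 10.0, 2.0
n = 30            # size of each sample
n_samples = 20000 # number of repeated samples

# Compute the sample mean x-bar for each of many samples.
sample_means = []
for _ in range(n_samples):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    sample_means.append(sum(sample) / n)

# Individual sample means vary, but their average is centered at mu,
# illustrating E(X-bar) = mu.
avg_of_means = sum(sample_means) / n_samples
print(avg_of_means)  # very close to mu = 10
```

Any single sample mean can be off from 10, but averaged over the 20,000 samples the values center on the parameter, which is exactly what "unbiased" means.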
With this we can analyze each of the options:
(a) the statistic is calculated from a random sample.
False; random sampling alone does not guarantee that a statistic is unbiased.
(b) in a single sample, the value of the statistic is equal to the value of the parameter.
False; unbiasedness is a property of the statistic across many samples, not a guarantee that any single sample produces the parameter's value.
(c) in many samples, the values of the statistic are very close to the value of the parameter.
False; being "very close" is not the definition. We need the values of the statistic to be centered at the parameter, i.e., the expected value equal to it.
(d) in many samples, the values of the statistic are centered at the value of the parameter.
True. Both conditions are satisfied: the sampling distribution is centered at the parameter, and the expected value of the statistic equals the parameter.
(e) in many samples, the distribution of the statistic has a shape that is approximately Normal.
False; an approximately Normal shape is neither necessary nor sufficient for a statistic to be unbiased.