Lesson 88 – The One-Sample Hypothesis Test – Part III

On Variance

H_{0}: \sigma^{2} = \sigma^{2}_{0}

H_{A}: \sigma^{2} > \sigma^{2}_{0}

H_{A}: \sigma^{2} < \sigma^{2}_{0}

H_{A}: \sigma^{2} \neq \sigma^{2}_{0}

Joe is cheery after an intense semester at his college. He is meeting Devine today for a casual conversation. We all know that their casual conversation always turns into something interesting. Are we in for a new concept today?

Devine: So, how did you fare in your exams?

Joe: Hmm, I did okay, but, interestingly, you are asking me about my performance in exams and not what I learned in my classes.

Devine: Well, Joe, these days, the college prepares you to be a good test taker. Learning is a thing of the past. I am glad you are still learning in your classes.

Joe: That is true to a large extent. We have exams after exams after exams, and our minds are compartmentalized to regurgitate one module after the other — no time to sit back and absorb what we see in classes.

By the way, I heard of an intriguing phenomenon from one of my professors. It might be of interest to you.

Devine: What is it?

Joe: In his eons of teaching, he has observed that the standard deviation of his class’s performance is 16.5 points. He told me that, over the years, this observation has fed back into the way he prepares exams. It seems that he subconsciously designs exams where the students’ grades will have a standard deviation of 16.5.

Devine: That is indeed an interesting phenomenon. Do you want to verify his hypothesis?

Joe: How can we do that?

Devine: Assuming that his test scores are normally distributed, we can conduct a hypothesis test on the variance of the distribution — H_{0}: \sigma^{2} = \sigma^{2}_{0}

Joe: Using a hypothesis testing framework?

Devine: Yes. Let’s first outline the null and alternate hypotheses. Since your professor claims that his exams are subconsciously designed for a standard deviation of 16.5, we will take this as the null hypothesis.

H_{0}: \sigma^{2} = 16.5^{2}

We can falsify this claim if the standard deviation is greater than or less than 16.5, i.e.,

H_{A}: \sigma^{2} \neq 16.5^{2}

The alternate hypothesis is two-sided. Deviation in either direction (less than or greater than) will reject the null hypothesis.

Would you be able to get some data on his recent exam scores?

Joe: I think I can ask some of my friends and get a sample of up to ten scores. Let me make some calls.

Here is a sample of ten exam scores from our most recent test with him.

60, 41, 70, 61, 69, 95, 33, 34, 82, 82

Devine: Fantastic. We can compute the standard deviation/variance from this sample and verify our hypothesis — whether this data provides evidence for the rejection of the null hypothesis.
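As a quick aside, the sample mean and sample variance are easy to compute with a few lines of Python. This is a minimal sketch using only the standard library; the variable names are illustrative.

```python
# Joe's sample of ten exam scores
scores = [60, 41, 70, 61, 69, 95, 33, 34, 82, 82]

n = len(scores)
xbar = sum(scores) / n                                 # sample mean
# sample variance with the (n - 1) divisor
s2 = sum((x - xbar) ** 2 for x in scores) / (n - 1)
s = s2 ** 0.5                                          # sample standard deviation

print(round(xbar, 2), round(s2, 2), round(s, 2))       # 62.7 452.01 21.26
```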

Joe: Over the past few weeks, I was learning that we call it a parametric hypothesis test if we know the limiting form of the null distribution. I already know that we are doing a one-sample hypothesis test, but how do we know the type of the null distribution?

Devine: The sample variance (s^{2}) is a random variable that can be described using a probability distribution. Several weeks ago, in lesson 73 where we derived the T-distribution, and in lesson 75 where we derived the confidence interval of the variance, we learned that \frac{(n-1)s^{2}}{\sigma^{2}} follows a Chi-square distribution with (n-1) degrees of freedom.

Since it was more than ten lessons ago, let’s go through the derivation once again. Ofttimes, repetition helps reinforce the ideas.

Joe: I think I remember it vaguely. Let me take a shot at the derivation 🙂

I will start with the equation of the sample variance s^{2}.

s^{2} = \frac{1}{n-1} \sum(x_{i}-\bar{x})^{2}

I will move the n-1 term over to the left-hand side and do some algebra.

(n-1)s^{2} = \sum(x_{i}-\bar{x})^{2}

(n-1)s^{2} = \sum(x_{i} - \mu -\bar{x} + \mu)^{2}

(n-1)s^{2} = \sum((x_{i} - \mu) -(\bar{x} - \mu))^{2}

(n-1)s^{2} = \sum[(x_{i} - \mu)^{2} + (\bar{x} - \mu)^{2} -2(x_{i} - \mu)(\bar{x} - \mu)]

(n-1)s^{2} = \sum(x_{i} - \mu)^{2} + \sum (\bar{x} - \mu)^{2} -2(\bar{x} - \mu)\sum(x_{i} - \mu)

(n-1)s^{2} = \sum(x_{i} - \mu)^{2} + n (\bar{x} - \mu)^{2} -2(\bar{x} - \mu)(\sum x_{i} - \sum \mu)

(n-1)s^{2} = \sum(x_{i} - \mu)^{2} + n (\bar{x} - \mu)^{2} -2(\bar{x} - \mu)(n\bar{x} - n \mu)

(n-1)s^{2} = \sum(x_{i} - \mu)^{2} + n (\bar{x} - \mu)^{2} -2n(\bar{x} - \mu)(\bar{x} - \mu)

(n-1)s^{2} = \sum(x_{i} - \mu)^{2} + n (\bar{x} - \mu)^{2} -2n(\bar{x} - \mu)^{2}

(n-1)s^{2} = \sum(x_{i} - \mu)^{2} - n (\bar{x} - \mu)^{2}

Let me divide both sides of the equation by \sigma^{2}.

\frac{(n-1)s^{2}}{\sigma^{2}} = \frac{1}{\sigma^{2}}(\sum(x_{i} - \mu)^{2} - n (\bar{x} - \mu)^{2})

\frac{(n-1)s^{2}}{\sigma^{2}} = \sum(\frac{x_{i} - \mu}{\sigma})^{2} - \frac{n}{\sigma^{2}} (\bar{x} - \mu)^{2}

\frac{(n-1)s^{2}}{\sigma^{2}} = \sum(\frac{x_{i} - \mu}{\sigma})^{2} - (\frac{\bar{x} - \mu}{\sigma/\sqrt{n}})^{2}

The right-hand side now is the sum of squared standard normal distributions — assuming x_{i} are draws from a normal distribution.

\frac{(n-1)s^{2}}{\sigma^{2}} = Z_{1}^{2} + Z_{2}^{2} + Z_{3}^{2} + … + Z_{n}^{2} - Z^{2}, where Z_{i} = \frac{x_{i} - \mu}{\sigma} and Z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}}

This behaves as the sum of squares of (n - 1) standard normal random variables — the subtracted Z^{2} term removes one degree of freedom.

We learned in lesson 53 that if there are n standard normal random variables, Z_{1}, Z_{2}, …, Z_{n}, their sum of squares is a Chi-square distribution with n degrees of freedom. Its probability density function is f(\chi)=\frac{\frac{1}{2}(\frac{1}{2} \chi)^{\frac{n}{2}-1}e^{-\frac{1}{2}\chi}}{\Gamma(\frac{n}{2})} for \chi > 0 and 0 otherwise, where \Gamma(\frac{n}{2}) = (\frac{n}{2}-1)! when n is even.

Since we have \frac{(n-1)s^{2}}{\sigma^{2}} = Z_{1}^{2} + Z_{2}^{2} + Z_{3}^{2} + … + Z_{n}^{2} - Z^{2}

\frac{(n-1)s^{2}}{\sigma^{2}} follows a Chi-square distribution with (n-1) degrees of freedom.

\frac{(n-1)s^{2}}{\sigma^{2}} \sim \chi^{2}_{n-1} with a probability density function f(\chi) = \frac{\frac{1}{2}(\frac{1}{2} \chi)^{\frac{n-1}{2}-1}e^{-\frac{1}{2}\chi}}{\Gamma(\frac{n-1}{2})}

Depending on the degrees of freedom, the distribution of \frac{(n-1)s^{2}}{\sigma^{2}} looks like this.

Smaller sample sizes imply lower degrees of freedom, and the distribution is highly skewed (asymmetric).

Larger sample sizes, i.e., higher degrees of freedom, bring the distribution closer to symmetry.
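One way to convince ourselves of this result without re-deriving it is a small Monte Carlo check: draw many samples of size n from a normal distribution, compute \frac{(n-1)s^{2}}{\sigma^{2}} for each, and compare the simulated moments with those of a Chi-square distribution with (n-1) degrees of freedom (mean n-1, variance 2(n-1)). Here is a sketch using only the Python standard library; the population mean of 70 and the number of trials are arbitrary choices for illustration.

```python
import random
import statistics

random.seed(1)
n, mu, sigma = 10, 70, 16.5
num_trials = 20000

# For each trial, draw a sample of size n from N(mu, sigma^2) and
# compute the statistic (n - 1) * s^2 / sigma^2
chi2_stats = []
for _ in range(num_trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    s2 = statistics.variance(sample)          # uses the (n - 1) divisor
    chi2_stats.append((n - 1) * s2 / sigma ** 2)

# Chi-square with (n - 1) = 9 degrees of freedom has mean 9, variance 18;
# the simulated moments should land close to these values.
print(round(statistics.mean(chi2_stats), 2))
print(round(statistics.variance(chi2_stats), 2))
```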

Devine: Excellent job, Joe. As you have shown, \frac{(n-1)s^{2}}{\sigma^{2}} is our test statistic, \chi^{2}_{0}, which we will compare against a Chi-square distribution with (n-1) degrees of freedom.

Have you already decided on a rejection rate \alpha?

Joe: I will go with a 5% Type I error. If my professor’s assumption is indeed true, I am willing to commit a 5% error in my decision-making as I may get a sample from my friends that drives me to reject his null hypothesis.

Devine: Okay. Let’s then compute the test statistic.

s^{2} = \frac{1}{n-1} \sum(x_{i}-\bar{x})^{2}=452.01

\chi^{2}_{0} = \frac{(n-1)s^{2}}{\sigma^{2}} = \frac{9 \times 452.01}{16.5^{2}} = 14.94

Since we have a sample of ten exam scores, the null distribution is a Chi-square distribution with nine degrees of freedom.
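In Python, the computation of the test statistic is a direct translation of the formula (a minimal sketch; only the hypothesized standard deviation of 16.5 and the ten scores come from the lesson):

```python
scores = [60, 41, 70, 61, 69, 95, 33, 34, 82, 82]
n = len(scores)

xbar = sum(scores) / n
s2 = sum((x - xbar) ** 2 for x in scores) / (n - 1)   # sample variance

sigma2_0 = 16.5 ** 2                # hypothesized variance under H0
chi2_0 = (n - 1) * s2 / sigma2_0    # test statistic

print(round(chi2_0, 2))             # 14.94
```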

Under the null hypothesis H_{0}: \sigma^{2} = 16.5^{2}, for a two-sided hypothesis test at the 5% rejection level, \frac{(n-1)s^{2}}{\sigma^{2}} can vary between \chi^{2}_{0.025} and \chi^{2}_{0.975}, the lower and the upper percentiles of the Chi-square distribution.

If our test statistic \chi^{2}_{0} is either less than, or greater than the lower and the upper percentiles respectively, we reject the null hypothesis.

The lower and upper critical values at the 5% rejection rate (or a 95% confidence interval) are 2.70 and 19.02.

In lesson 75, we learned how to read this off the standard Chi-square table.
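The decision rule itself is a simple comparison. In this sketch, the critical values 2.70 and 19.02 are the \chi^{2}_{0.025} and \chi^{2}_{0.975} percentiles for nine degrees of freedom, read from a standard Chi-square table:

```python
# Critical values of the chi-square distribution with 9 degrees of
# freedom at the 5% rejection rate (two-sided), from a standard table
chi2_lower, chi2_upper = 2.70, 19.02

chi2_0 = 14.94   # test statistic computed earlier

if chi2_0 < chi2_lower or chi2_0 > chi2_upper:
    print("Reject H0")
else:
    print("Cannot reject H0")   # this branch runs for our sample
```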

Joe: Aha. Since our test statistic \chi^{2}_{0} = 14.94 lies between the two critical values, we cannot reject the null hypothesis.

Devine: You are right. Look at this visual.

The rejection region based on the lower and the upper critical values (percentiles \chi^{2}_{0.025} and \chi^{2}_{0.975}) is shown in red triangles. The test statistic lies inside.

It is now easy to see that the p-value, i.e., P(\chi^{2} > \chi^{2}_{0}) or P(\chi^{2} \le \chi^{2}_{0}), is greater than \frac{\alpha}{2}.

Since we have a two-sided test, we compare the p-value with \frac{\alpha}{2}.
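For reference, the upper-tail probability P(\chi^{2} > \chi^{2}_{0}) can be computed in pure Python using the series expansion of the regularized lower incomplete gamma function. This is a sketch: the helper chi2_sf is our own function, not part of the standard library.

```python
import math

def chi2_sf(x, k):
    """P(X > x) for a chi-square random variable with k degrees of
    freedom, via the series for the regularized lower incomplete gamma."""
    a, t = k / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    j = 1
    while term > total * 1e-12:
        term *= t / (a + j)     # next term of the series t^j / (a...(a+j))
        total += term
        j += 1
    p_lower = total * math.exp(a * math.log(t) - t - math.lgamma(a))
    return 1.0 - p_lower

chi2_0 = 14.94
p_upper = chi2_sf(chi2_0, 9)    # upper-tail p-value, P(chi^2 > chi2_0)
print(round(p_upper, 3))
```

The printed value is well above \frac{\alpha}{2} = 0.025, so, consistent with the critical-value comparison, we cannot reject the null hypothesis.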

Hence we cannot reject the null hypothesis.

Joe: Looks like I cannot refute my professor’s observation that the standard deviation of his test scores is 16.5 points.

Devine: Yes, at the 5% rejection level, and assuming that his test scores are normally distributed.

Joe: Got it. If the test scores are not normally distributed, our assumption that \frac{(n-1)s^{2}}{\sigma^{2}} follows a Chi-square distribution is questionable. How then can we test the hypothesis?

Devine: We can use a non-parametric test using a bootstrap approach.

Joe: How is that done?

Devine: You will have to wait until the non-parametric hypothesis test lessons for that. But let me ask you a question based on today’s lesson. What is the main difference between the hypothesis test on the mean, which you learned in lesson 87, and the hypothesis test on the variance which you learned here?

Joe: 😕 😕 😕

For the hypothesis test on the mean, we looked at the difference between \bar{x} and \mu. For the hypothesis on the variance, we examine the ratio of s^{2} to \sigma^{2} and reject the null hypothesis if this ratio differs too much from what we expect under the null hypothesis, i.e., when H_{0} is true.

If you find this useful, please like, share and subscribe.
You can also follow me on Twitter @realDevineni for updates on new lessons.

