# Testing Normal Means


In many applications one is interested in the mean of the population distribution of a particular attribute (random variable). Statistical estimation theory tells us how best to estimate the expectation for a given distribution shape, yet it does not help us assess the uncertainty of the estimated average: an average computed from a sample of size ${\displaystyle n=5}$ is a single number, as is one based on a sample of size ${\displaystyle n=5,000}$. Intuition (and the law of large numbers) leads us to believe that the latter estimate is ‘probably’ more representative than the former, in that, on average, the sample mean (e.g. the arithmetic mean) of large samples is closer to the population mean than that of small samples. That is, sample means computed from large samples are statistically more reliable. A way of quantifying the average closeness to the population parameter is to compute the standard error of the statistic under consideration (here: the mean), i.e. the square root of the estimated average squared deviation of the estimator from the population parameter. The sample mean for a given sample, in conjunction with its standard error, specifies an interval (the sample mean plus/minus one or more standard errors) into which the sample mean is not ‘unlikely’ to fall, given that the theoretical mean equals the one estimated from the observed sample. Now suppose a scientist proposes a value for the theoretical mean derived from some theory or prior data analysis. If the hypothetical value turns out to be close to the sample mean and, in particular, within a certain range around the sample mean such as the one specified by the standard error, he is more likely to propose it as the true population mean than if he had initially proposed a more distant value. 
But how can the distance of the sample mean from the hypothetical population mean be assessed in probabilistic terms suitable for decision making based on the ${\displaystyle \alpha }$ error concept? In other words: how can we construct a statistical test for the mean of a random variable? Our goal is to test for a specific value of the expectation ${\displaystyle \mu ={\text{E}}\left(X\right)}$ of a population distribution. Our data are a randomly drawn sample of size ${\displaystyle n}$, theoretically represented by the random variables ${\displaystyle X_{1},\ldots ,X_{n}}$, and we want to base the test decision on a significance level of ${\displaystyle \alpha }$.

### Hypotheses

We can construct one- and two-sided tests.

1) Two-sided test: ${\displaystyle {\text{H}}_{0}:\mu =\mu _{0}\quad {\text{ versus }}\quad {\text{H}}_{1}:\mu \neq \mu _{0}.}$

2) Right-sided test: ${\displaystyle {\text{H}}_{0}:\mu \leq \mu _{0}\quad {\text{ versus }}\quad {\text{H}}_{1}:\mu >\mu _{0}.}$

3) Left-sided test: ${\displaystyle {\text{H}}_{0}:\mu \geq \mu _{0}\quad {\text{ versus }}\quad {\text{H}}_{1}:\mu <\mu _{0}.}$

In a one-sided statistical hypothesis testing problem the scientific conjecture to be validated is usually stated as ${\displaystyle {\text{H}}_{1}}$ rather than as the null hypothesis ${\displaystyle {\text{H}}_{0}}$. That is, the researcher tries to statistically verify that the negation of the hypothesis to be tested does not hold at a certain significance level ${\displaystyle \alpha }$. This is due to the ‘nature’ of the significance level we have mentioned earlier: rejecting the null hypothesis at a given significance level only means that the probability of rejecting it although it is true is no greater than ${\displaystyle \alpha }$. Yet ${\displaystyle \alpha }$ is chosen small (most commonly ${\displaystyle 0.05}$ or ${\displaystyle 0.01}$), as one tries to control the ${\displaystyle \alpha }$ error in order to be ‘reasonably certain’ that an ‘unwanted’ proposition is not true. This makes sense if one thinks of critical applications that rely on this approach. In testing a new drug for harmful side effects, for example, one wants to have a rationale for rejecting their systematic occurrence. In doing so one accepts the converse claim that side effects are ‘negligible’. Underlying this approach is the (unknown) relationship between ${\displaystyle \alpha }$ and ${\displaystyle \beta }$: whereas we can control the former, the latter is a function not only of the former but also of other test conditions such as the underlying distribution. For these reasons it is common to speak of not rejecting a hypothesis instead of accepting it. 

### Test statistic, its distribution and derived decision regions

We need a quantity that condenses the information in the random sample required to make a probabilistic statement about the unknown distribution characteristic (in the present case the population mean). For parametric tests, this is an estimator of the parameter. We have already shown that the arithmetic mean ${\displaystyle {\overline {X}}={\frac {1}{n}}\sum _{i=1}^{n}\,X_{i}}$ is a statistically ‘reasonable’ point estimator of the unknown population mean, i.e. the unknown expectation ${\displaystyle E\left(X\right)}$; in particular, it is unbiased and consistent. The variance and standard deviation of ${\displaystyle {\overline {X}}}$ computed from an i.i.d. (independent and identically distributed) random sample are given by ${\displaystyle Var\left({\overline {X}}\right)=\sigma ^{2}\left({\overline {X}}\right)=\sigma _{\overline {X}}^{2}={\frac {\sigma _{X}^{2}}{n}}}$

${\displaystyle \sigma \left({\overline {X}}\right)={\frac {\sigma _{X}}{\sqrt {n}}}.}$ We will construct our test statistic around the sample mean ${\displaystyle {\overline {X}}}$. In order to derive the rejection and non-rejection regions corresponding to a given significance level, we need to make an assumption concerning the distribution of the sample mean. Either

• The random variable under investigation ${\displaystyle X}$ is normally distributed, implying normal distribution of ${\displaystyle {\overline {X}}}$ ; or
• ${\displaystyle n}$ is sufficiently large to justify the application of the central limit theorem: If the ${\displaystyle X_{i}}$ are i.i.d. with finite mean and variance, ${\displaystyle {\overline {X}}}$ is approximately normally distributed regardless of the underlying (continuous or discrete, symmetric or skewed) distribution. In this case, our test will in turn be an approximate one, i.e. has additional imprecision.
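The second condition can be illustrated by simulation. The following sketch is our addition: it draws samples from a deliberately skewed (exponential) population with mean 1 and variance 1 (the distribution choice and the sizes are illustrative assumptions) and checks that the simulated sample means are centred on ${\displaystyle \mu }$ with variance close to ${\displaystyle \sigma _{X}^{2}/n}$:

```python
import random
import statistics

random.seed(42)

n = 500        # size of each sample
reps = 5000    # number of simulated samples

# X is exponential with mean 1 and variance 1 (a skewed distribution).
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]

# The simulated means should be centred on mu = 1 with variance ~ 1/n = 0.002.
print(statistics.fmean(means), statistics.pvariance(means))
```

A histogram of `means` would look approximately bell-shaped even though the underlying exponential distribution is strongly skewed.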

We thus postulate: ${\displaystyle {\overline {X}}}$ is (at least approximately) normally distributed with expectation ${\displaystyle E\left({\overline {X}}\right)=\mu }$ and variance ${\displaystyle Var\left({\overline {X}}\right)=\sigma _{X}^{2}/n}$. Thus, the distribution of the estimator of the population mean ${\displaystyle \mu }$ depends on exactly the unknown parameter we are seeking to test, ${\displaystyle \mu }$. The only way to overcome this circular reference is to assign a numerical value to ${\displaystyle \mu }$. The least arbitrary value to take is the boundary value in the null hypothesis, i.e. the value that separates the parameter ranges for ${\displaystyle {\text{H}}_{0}}$ and ${\displaystyle {\text{H}}_{1}}$: ${\displaystyle \mu _{0}}$. This approach does in fact make sense if you recall the principle of rejecting the null hypothesis in order to accept the alternative: basing the decision on a postulated distribution of our test statistic with parameter ${\displaystyle \mu _{0}}$ enables us to test this particular ${\displaystyle \mu }$ by removing the uncertainty about the distribution of the test statistic. Note that in the two-sided test this ${\displaystyle \mu _{0}}$ makes up the entire parameter space of the null hypothesis. In one-sided tests, it is the boundary value. Let’s put our assumption into practice and set the expectation of ${\displaystyle X}$, i.e. 
${\displaystyle \mu }$, to ${\displaystyle \mu _{0}}$: given the null hypothesis ${\displaystyle {\text{H}}_{0}:\mu =\mu _{0}}$ is true (respectively, ${\displaystyle \mu }$ equals the boundary value of the null hypothesis in a one-sided test), we can write: ${\displaystyle {\overline {X}}}$ is (at least approximately) normally distributed with expectation ${\displaystyle E\left({\overline {X}}\right)=\mu _{0}}$ and variance ${\displaystyle Var\left({\overline {X}}\right)=\sigma _{X}^{2}/n}$, or, using common notation for normal distribution functions: ${\displaystyle {\overline {X}}{\overset {{\text{H}}_{0}}{\thicksim }}\mathbb {N} \left(\mu _{0};\,\sigma /{\sqrt {n}}\right).}$ So far, we have focused on the location parameter ${\displaystyle \mu }$. But what about the second central moment that specifies a particular normal distribution, the variance (respectively standard deviation) of the random variable? As you will see, it is critical to the construction of a decision rule to distinguish between situations in which we can regard ${\displaystyle \sigma }$ as known and those in which we can’t. Given a known ${\displaystyle \sigma }$, the distribution of ${\displaystyle {\overline {X}}}$ is completely specified. As we cannot analytically integrate the normal density function to get a closed-form normal distribution function, we rely on tables of numerical solutions for ${\displaystyle \mathbb {N} \left(\mu =0,\,\sigma =1\right)}$. We thus standardize ${\displaystyle {\overline {X}}}$ and take ${\displaystyle V={\frac {{\overline {X}}-\mu _{0}}{\sigma }}\,{\sqrt {n}}}$ as our test statistic. Given ${\displaystyle {\text{H}}_{0}}$ is true, ${\displaystyle V}$ has (approximately) a standard normal distribution: ${\displaystyle V{\overset {{\text{H}}_{0}}{\thicksim }}\mathbb {N} \left(0,\,1\right).}$ The critical value corresponding to the relevant significance level ${\displaystyle \alpha }$ can thus be taken from a standard normal distribution table. 
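As a concrete illustration of this construction, here is a minimal sketch for the two-sided case with known ${\displaystyle \sigma }$; the sample values, ${\displaystyle \mu _{0}}$, ${\displaystyle \sigma }$ and ${\displaystyle \alpha }$ are all illustrative assumptions of ours, not values from the text:

```python
import math
from statistics import NormalDist, fmean

# Illustrative (hypothetical) data and test parameters.
sample = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3]
mu0 = 10.0      # hypothesised mean under H0
sigma = 0.3     # population standard deviation, assumed known
alpha = 0.05

n = len(sample)
# Test statistic V = (x_bar - mu0) / sigma * sqrt(n).
v = (fmean(sample) - mu0) / sigma * math.sqrt(n)

# Two-sided critical value z_{1-alpha/2} from the standard normal quantile.
z = NormalDist().inv_cdf(1 - alpha / 2)

# Reject H0 iff |v| exceeds the critical value.
print(round(v, 3), round(z, 3), abs(v) > z)
```

Here ${\displaystyle v\approx 1.414}$ stays inside ${\displaystyle \left(-1.960,\,1.960\right)}$, so this particular sample would not lead to a rejection of H${\displaystyle _{0}}$.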
We can now write down the decision regions for the three types of test at significance level ${\displaystyle \alpha }$, given the boundary expectation from ${\displaystyle {\text{H}}_{0}}$, i.e. ${\displaystyle \mu _{0}}$, is the true population mean.

1) Two-sided test

The probability of ${\displaystyle V}$ falling into the rejection region for H${\displaystyle _{0}}$ must equal the given significance level ${\displaystyle \alpha }$: ${\displaystyle P\left(V<c_{l}\,|\,\mu _{0}\right)+P\left(V>c_{u}\,|\,\mu _{0}\right)=\alpha /2+\alpha /2=\alpha .}$ For ${\displaystyle P\left(V\leq c_{u}\right)=1-\alpha /2}$ we can retrieve the upper critical value from the cumulative standard normal distribution table ${\displaystyle \mathbb {N} \left(0,\,1\right)}$: ${\displaystyle c_{u}=z_{1-\alpha /2}}$. Symmetry of the normal (bell) curve implies ${\displaystyle c_{l}=-z_{1-\alpha /2}}$. The rejection region for H${\displaystyle _{0}}$ is thus given by ${\displaystyle \left\{v|v<-z_{1-\alpha /2}\,{\text{ or }}\,v>z_{1-\alpha /2}\right\}.}$ The non-rejection region for H${\displaystyle _{0}}$ is then ${\displaystyle \left\{v|-z_{1-\alpha /2}\leq v\leq z_{1-\alpha /2}\right\}.}$ The probability of ${\displaystyle V}$ assuming a value from the non-rejection region for H${\displaystyle _{0}}$ is ${\displaystyle P\left(c_{l}\leq V\leq c_{u}|\mu _{0}\right)=P\left(-z_{1-\alpha /2}\leq V\leq z_{1-\alpha /2}|\mu _{0}\right)=1-\alpha .}$

2) Right-sided test

Deviations of the standardized test statistic ${\displaystyle V}$ from ${\displaystyle E\left(V\right)=0}$ to the ‘right side’ (i.e. positive ${\displaystyle \left(V-0\right)}$) tend to falsify ${\displaystyle {\text{H}}_{0}}$. The rejection region will thus be a range of positive test statistic realizations ${\displaystyle v}$ (i.e. a positive critical value). 
The probability of observing a realization of ${\displaystyle V}$ within this region must equal the given significance level ${\displaystyle \alpha }$: ${\displaystyle P\left(V>c|\mu _{0}\right)=\alpha .}$ For ${\displaystyle P\left(V\leq c\right)=1-\alpha }$ we find the critical value in the table for the cumulative standard normal distribution ${\displaystyle \mathbb {N} \left(0,\,1\right)}$: ${\displaystyle c=z_{1-\alpha }}$. The rejection region for H${\displaystyle _{0}}$ is given by ${\displaystyle \left\{v|v>z_{1-\alpha }\right\},}$ and the non-rejection region for H${\displaystyle _{0}}$ is ${\displaystyle \left\{v|v\leq z_{1-\alpha }\right\}.}$ The probability of ${\displaystyle V}$ assuming a value within the non-rejection region for H${\displaystyle _{0}}$ is ${\displaystyle P\left(V\leq c\,|\,\mu _{0}\right)=P\left(V\leq z_{1-\alpha }\,|\,\mu _{0}\right)=1-\alpha .}$

3) Left-sided test

Sample means smaller than ${\displaystyle \mu _{0}}$ imply negative realizations of the test statistic ${\displaystyle V}$, that is, deviations of ${\displaystyle V}$ from ${\displaystyle E\left(V\right)=0}$ to the left side of the real line. In this case, the rejection region for H${\displaystyle _{0}}$ therefore consists of negative ${\displaystyle V}$ outcomes, and the critical value will be negative. Once again, we require the probability of observing a realization of ${\displaystyle V}$ within the rejection region to equal ${\displaystyle \alpha }$: ${\displaystyle P\left(V<-c|\mu _{0}\right)=\alpha .}$ Using the symmetry property of the normal distribution, we can translate ${\displaystyle P\left(V<-c\right)=\alpha }$ into ${\displaystyle P\left(V\leq c\right)=1-\alpha }$. Thus, the absolute value of the critical value, ${\displaystyle |-c|=c}$, is the value of the cumulative normal distribution function for probability ${\displaystyle \left(1-\alpha \right)}$, i.e. 
${\displaystyle c=z_{1-\alpha }}$, and ${\displaystyle -c=-z_{1-\alpha }}$. The rejection region for H${\displaystyle _{0}}$ is given by ${\displaystyle \left\{v|v<-z_{1-\alpha }\right\},}$ and the non-rejection region for H${\displaystyle _{0}}$ is ${\displaystyle \left\{v|v\geq -z_{1-\alpha }\right\}.}$ The probability of ${\displaystyle V}$ taking on a value within the non-rejection region for H${\displaystyle _{0}}$ is ${\displaystyle P\left(V\geq -c\,|\,\mu _{0}\right)=P\left(V\geq -z_{1-\alpha }\,|\,\mu _{0}\right)=1-\alpha .}$

If we don’t have any a priori knowledge about the standard deviation of the random variable under investigation, we need to plug an estimator of it into the test statistic ${\displaystyle V={\frac {{\overline {X}}-\mu _{0}}{\sigma }}\,{\sqrt {n}}.}$ An unbiased estimator of the population variance is ${\displaystyle S^{2}={\frac {\sum _{i=1}^{n}\left(X_{i}-{\overline {X}}\right)^{2}}{n-1}}.}$ Replacing ${\displaystyle \sigma }$ by the square root of ${\displaystyle S^{2}}$ yields our new test statistic: ${\displaystyle T={\frac {{\overline {X}}-\mu _{0}}{S}}\,{\sqrt {n}}.}$ If the null hypothesis ${\displaystyle {\text{H}}_{0}}$ is true, ${\displaystyle T}$ has (at least approximately) a ${\displaystyle t}$ distribution with ${\displaystyle n-1}$ degrees of freedom. For a given significance level ${\displaystyle \alpha }$ and ${\displaystyle n-1}$ degrees of freedom, the critical values can be read from the ${\displaystyle t}$ distribution table. If we denote the quantile of the cumulative ${\displaystyle t}$ distribution with ${\displaystyle n-1}$ degrees of freedom for probability ${\displaystyle p}$ by ${\displaystyle t_{p;n-1}}$, and assume ${\displaystyle \mu _{0}}$ is the true population mean, we have the following decision regions for the test situations under consideration. 
1) Two-sided test

Rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{t|t<-t_{1-\alpha /2;n-1}\,{\text{ or }}\,t>t_{1-\alpha /2;n-1}\right\},}$ where ${\displaystyle t}$ is a realization of the random variable ${\displaystyle T}$ computed from an observed sample of size ${\displaystyle n}$. Non-rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{t|-t_{1-\alpha /2;n-1}\leq t\leq t_{1-\alpha /2;n-1}\right\}.}$

2) Right-sided test

Rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{t|t>t_{1-\alpha ;n-1}\right\}.}$ Non-rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{t|t\leq t_{1-\alpha ;n-1}\right\}.}$

3) Left-sided test

Rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{t|t<-t_{1-\alpha ;n-1}\right\}.}$ Non-rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{t|t\geq -t_{1-\alpha ;n-1}\right\}.}$

Note: If the sample size is sufficiently large (${\displaystyle n>30}$), the ${\displaystyle t}$ distribution can be adequately approximated by the standard normal distribution. That is, ${\displaystyle T}$ is approximately ${\displaystyle \mathbb {N} \left(0;\,1\right)}$ distributed. Critical values can then be read from the normal table, and the decision regions equal those derived for known population standard deviation ${\displaystyle \sigma }$. Hence, for large ${\displaystyle n}$ we can estimate ${\displaystyle \sigma }$ by ${\displaystyle S}$ and abstract from the estimation error (which will occur with probability one, even if the estimator hits the correct parameter on average, i.e. is unbiased).
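The ${\displaystyle t}$-based decision rule can be sketched as follows; the sample and test parameters are illustrative assumptions of ours, and the result is cross-checked against SciPy’s built-in `scipy.stats.ttest_1samp`:

```python
import math
from statistics import fmean, stdev

from scipy import stats

# Illustrative (hypothetical) data; sigma is treated as unknown.
sample = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3]
mu0 = 10.0
alpha = 0.05
n = len(sample)

# T = (x_bar - mu0) / s * sqrt(n), with s the (n-1)-denominator estimate.
t_stat = (fmean(sample) - mu0) / stdev(sample) * math.sqrt(n)

# Two-sided critical value t_{1-alpha/2; n-1} from the t distribution.
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
print(abs(t_stat) > t_crit)   # reject H0 iff |t| exceeds the critical value

# Cross-check against SciPy's one-sample t test.
res = stats.ttest_1samp(sample, popmean=mu0)
print(math.isclose(res.statistic, t_stat))
```

With these numbers ${\displaystyle t\approx 1.732<t_{0.975;7}\approx 2.365}$, so the same sample that the ${\displaystyle z}$ test would not reject is also not rejected when ${\displaystyle \sigma }$ has to be estimated.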

### Calculating the test statistic from an observed sample

When we have obtained a random sample ${\displaystyle x_{1},\ldots ,x_{n}}$, we can compute the empirical counterparts of the theoretical test statistics on which we have based our test procedures. On the theoretical level, we have expressed them in terms of (theoretical) random variables, i.e. ${\displaystyle X_{1},\ldots ,X_{n}}$; that is, we have denoted them by capital letters: ${\displaystyle {\overline {X}}}$, ${\displaystyle V}$ and ${\displaystyle S}$. Actual values calculated from a sample of size ${\displaystyle n}$, ${\displaystyle x_{1},\ldots ,x_{n}}$, are denoted by ${\displaystyle {\overline {x}}}$, ${\displaystyle v}$ and ${\displaystyle s}$ and differ from their theoretical counterparts only in that the symbols now stand for real numbers rather than a range of theoretically permissible values. Hence, the respective empirical formulae for the sample mean and sample standard deviation are ${\displaystyle {\overline {x}}={\frac {1}{n}}\sum _{i=1}^{n}\,x_{i}}$ and ${\displaystyle s={\sqrt {\frac {\sum _{i=1}^{n}\left(x_{i}-{\overline {x}}\right)^{2}}{n-1}}}.}$ Accordingly, the two realized test statistics for testing normal means with known and unknown variance respectively are ${\displaystyle v={\frac {{\overline {x}}-\mu _{0}}{\sigma }}\,{\sqrt {n}}}$ and ${\displaystyle t={\frac {{\overline {x}}-\mu _{0}}{s}}\,{\sqrt {n}}.}$ You may have recognized that we have already applied this notation when specifying the decision regions.

### Test decision and interpretation

If the test statistic ${\displaystyle v}$ falls into the rejection region, the null hypothesis ${\displaystyle {\text{H}}_{0}}$ is rejected on the basis of a sample of size ${\displaystyle n}$ at a given significance level ${\displaystyle \alpha }$: ${\displaystyle {\text{'H'}}_{1}}$. Statistically, we have concluded that the true expectation ${\displaystyle E\left(X\right)=\mu }$ does not equal the hypothetical ${\displaystyle \mu _{0}}$. If the true parameter does belong to the range postulated in the null hypothesis (${\displaystyle {\text{H}}_{0}}$), we have made a type I error: ${\displaystyle '{\text{H}}_{1}^{'}|{\text{H}}_{0}}$. In fact, in choosing a particular significance level, we are really deciding about the probability of making exactly this error, since the decision regions are constructed such that the probability of making a type I error equals the significance level: ${\displaystyle P\left('{\text{H}}_{1}^{'}|{\text{H}}_{0}\right)=\alpha }$. If, on the other hand, ${\displaystyle v}$ falls into the non-rejection region, the particular sample leads to a non-rejection of the null hypothesis at the given significance level: ${\displaystyle {\text{'H'}}_{0}}$. Thus, we are not able to show statistically that the true parameter ${\displaystyle E\left(X\right)=\mu }$ deviates from the hypothetical one (${\displaystyle \mu _{0}}$). There is, though, a non-trivial chance that we are making a type II error, i.e. that the alternative hypothesis correctly describes reality: ${\displaystyle '{\text{H}}_{0}^{'}|{\text{H}}_{1}}$. As already pointed out, the probability of making a ${\displaystyle \beta }$ error is, in general, unknown and has to be computed for individual alternative parameter values ${\displaystyle \mu _{1}}$.

### Power

How can we assess the ‘goodness’ of a test? We have seen that in setting up a test procedure we control the probability of making an ${\displaystyle \alpha }$ error (by assigning a value to the significance level ${\displaystyle \alpha }$). The probability of making a ${\displaystyle \beta }$ error is then determined by the true (and unknown) parameter. The smaller ${\displaystyle \beta }$ is for a given true parameter ${\displaystyle \mu }$, the more reliable the test is, in that it more frequently rejects the null hypothesis when the alternative hypothesis is really true. Hence, given a specific significance level, we want ${\displaystyle \beta }$ to be as small as possible for true parameter values outside the range specified in the null hypothesis, or, equivalently, we want to maximize the probability of making the correct decision ${\displaystyle \left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)}$, that is, maximize the quantity ${\displaystyle \left(1-\beta \right)}$ for any given true ${\displaystyle \mu }$ outside the null hypothesis region, i.e. inside that of the alternative hypothesis. This notion of ‘goodness’ of a test is conceptualized by the so-called power, a function assigning probabilities of rejecting ${\displaystyle {\text{H}}_{0}}$, ${\displaystyle \left(1-\beta \right)}$, to true parameter values ${\displaystyle \mu }$ within the ${\displaystyle {\text{H}}_{1}}$ parameter region for given ${\displaystyle \alpha }$ and hypothetical parameter ${\displaystyle \mu _{0}}$. These probabilities represent the theoretical averages of making a right decision in rejecting ${\displaystyle {\text{H}}_{0}}$ over all possible samples (given ${\displaystyle \alpha }$ and ${\displaystyle \mu }$). They can thus be computed without utilizing actual samples; in fact, the power is computed because we can obtain only a limited sample and aim to quantify the expected ‘accuracy’ of the individual test procedure. 
Technically, the power ${\displaystyle P\left(\mu \right)}$ yields the probability of rejecting ${\displaystyle {\text{H}}_{0}}$ given a hypothetical true parameter ${\displaystyle \mu }$: ${\displaystyle P\left(\mu \right)=P\left(V\in {\text{rejection region for H}}_{0}|\mu \right)=P\left('{\text{H}}_{1}^{'}|\mu \right).}$

1) Two-sided test

In a two-sided test, the null hypothesis is true if and only if ${\displaystyle \mu =\mu _{0}}$. Rejecting ${\displaystyle {\text{H}}_{0}}$ given that it is true means we have made a type I error: ${\displaystyle P\left(\mu _{0}\right)=P\left(V\in {\text{rejection region for H}}_{0}|\mu =\mu _{0}\right)=P\left('{\text{H}}_{1}^{'}|{\text{H}}_{0}\right)=\alpha .}$ For all other possible parameter values, rejecting ${\displaystyle {\text{H}}_{0}}$ is a right decision: ${\displaystyle P\left(\mu \right)=P\left(V\in {\text{rejection region for H}}_{0}|\mu \neq \mu _{0}\right)=P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)=1-\beta .}$ We thus have ${\displaystyle P\left(\mu \right)={\begin{cases}P\left('{\text{H}}_{1}^{'}|{\text{H}}_{0}\right)=\alpha ,&{\text{if }}\mu =\mu _{0}\\P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)=1-\beta ,&{\text{if }}\mu \neq \mu _{0}.\end{cases}}}$ Using our normality assumption about the underlying probability distribution, we can analytically calculate the power for the case of a two-sided test: ${\displaystyle P\left(\mu \right)=1-\left[P\left(V\leq z_{1-\alpha /2}-{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\right)-P\left(V\leq -z_{1-\alpha /2}-{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\right)\right].}$ The probability of a type II error can be calculated from the power: ${\displaystyle P\left('{\text{H}}_{0}^{'}|{\text{H}}_{1}\right)=1-P\left(\mu \right)=\beta \quad {\text{for }}\mu \neq \mu _{0}.}$ Properties of the power for a two-sided test:

• For ${\displaystyle \mu =\mu _{0}}$, the power assumes its minimum, ${\displaystyle \alpha }$.
• The power is symmetrical around the hypothetical parameter value ${\displaystyle \mu _{0}}$.
• The power increases with growing distance of the true parameter ${\displaystyle \mu }$ from the hypothetical ${\displaystyle \mu _{0}}$ and converges to one as the distance increases to ${\displaystyle \infty }$ or ${\displaystyle -\infty }$ respectively.
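These properties can be checked numerically from the analytic power formula above; a minimal sketch with illustrative parameter values of our choosing:

```python
from math import sqrt
from statistics import NormalDist


def power_two_sided(mu, mu0, sigma, n, alpha):
    """Power of the two-sided test for the mean with known sigma."""
    Phi = NormalDist().cdf
    z = NormalDist().inv_cdf(1 - alpha / 2)
    shift = (mu - mu0) / (sigma / sqrt(n))
    return 1 - (Phi(z - shift) - Phi(-z - shift))


# At mu = mu0 the power attains its minimum alpha; it is symmetric around
# mu0 and grows with the distance |mu - mu0|.
print(power_two_sided(0.0, 0.0, 1.0, 25, 0.05))   # minimum: alpha = 0.05
print(power_two_sided(0.5, 0.0, 1.0, 25, 0.05))
print(power_two_sided(-0.5, 0.0, 1.0, 25, 0.05))  # same as for +0.5
```

Plotting `power_two_sided` over a grid of `mu` values reproduces the U-shaped power curve discussed next.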

The above characteristics are illustrated in the following power curve diagram.

In the above diagram, two alternative true parameter values ${\displaystyle \mu _{1}}$ and ${\displaystyle \mu _{2}}$ are depicted. If ${\displaystyle \mu _{1}}$ is the true parameter, the distance ${\displaystyle \mu _{1}-\mu _{0}}$ is comparatively large. Consequently, the probability ${\displaystyle 1-\beta }$ of making a right decision by correctly rejecting the null hypothesis is relatively high, and the probability of making a type II error, ${\displaystyle \beta }$, small. The distance of the alternative true parameter value ${\displaystyle \mu _{2}}$ from the hypothetical value ${\displaystyle \mu _{0}}$, ${\displaystyle \mu _{2}-\mu _{0}}$, is relatively small. Hence, the probability of making a right decision in rejecting the null hypothesis, ${\displaystyle 1-\beta }$, is smaller than in the first example, and the probability of making a type II error, ${\displaystyle \beta }$, greater. This is intuitively plausible: relatively small deviations are less easily discovered by the test.

2) Right-sided test

In a right-sided test, the null hypothesis is true if the true parameter is less than or equal to the hypothetical boundary value ${\displaystyle \mu _{0}}$, i.e. if ${\displaystyle \mu \leq \mu _{0}}$. If this is the case, the maximum probability of rejecting the null hypothesis, and hence making a type I error, equals the significance level ${\displaystyle \alpha }$: ${\displaystyle P\left(V\in {\text{rejection region for H}}_{0}|\mu \leq \mu _{0}\right)=P\left('{\text{H}}_{1}^{'}|{\text{H}}_{0}\right)\leq \alpha .}$ If the alternative hypothesis, i.e. 
${\displaystyle \mu >\mu _{0}}$, is true, rejecting the null hypothesis, and hence making a right decision, occurs with probability ${\displaystyle P\left(V\in {\text{rejection region for H}}_{0}|\mu >\mu _{0}\right)=P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)=1-\beta .}$ Combining these formulae for the two disjoint subsets of the parameter space gives the power: ${\displaystyle P\left(\mu \right)={\begin{cases}P\left('{\text{H}}_{1}^{'}|{\text{H}}_{0}\right)\leq \alpha ,&{\text{if }}\mu \leq \mu _{0}\\P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)=1-\beta ,&{\text{if }}\mu >\mu _{0}.\end{cases}}}$ We can explicitly calculate the power of our right-sided test problem for all possible true parameter values ${\displaystyle \mu }$: ${\displaystyle P\left(\mu \right)=1-P\left(V\leq z_{1-\alpha }-{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\right).}$ The following diagram displays the typical shape of the power for a right-sided test problem.

For all values within the parameter set of the alternative hypothesis, the power increases monotonically to one. The greater the distance ${\displaystyle \mu -\mu _{0}}$, the higher the probability ${\displaystyle 1-\beta }$ of correctly rejecting the null hypothesis, and hence the smaller the probability ${\displaystyle \beta }$ of making a type II error. At the point ${\displaystyle \mu =\mu _{0}}$ the power is ${\displaystyle \alpha }$, the given significance level. For all other values associated with the null hypothesis, i.e. ${\displaystyle \mu <\mu _{0}}$, the power is less than ${\displaystyle \alpha }$. That is what we required when we constructed the test: we want ${\displaystyle \alpha }$ to be the maximum probability of rejecting the null hypothesis when the null hypothesis is true. As you can see from the graph, this probability decreases with growing distance ${\displaystyle \mu _{0}-\mu }$ inside the null hypothesis region.

3) Left-sided test

In a left-sided test, the null hypothesis is true if the true parameter is greater than or equal to the hypothetical boundary value, that is, if ${\displaystyle \mu \geq \mu _{0}}$. In this case, rejecting the null hypothesis, and hence making a type I error, will happen with probability no greater than ${\displaystyle \alpha }$: ${\displaystyle P\left(V\in {\text{rejection region for H}}_{0}|\mu \geq \mu _{0}\right)=P\left('{\text{H}}_{1}^{'}|{\text{H}}_{0}\right)\leq \alpha .}$ If the alternative hypothesis is true, i.e. 
${\displaystyle \mu <\mu _{0}}$, the researcher makes a right decision in rejecting the null hypothesis with probability ${\displaystyle P\left(V\in {\text{rejection region for H}}_{0}|\mu <\mu _{0}\right)=P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)=1-\beta .}$ For the entire parameter space we thus have: ${\displaystyle P\left(\mu \right)={\begin{cases}P\left('{\text{H}}_{1}^{'}|{\text{H}}_{0}\right)\leq \alpha ,&{\text{if }}\mu \geq \mu _{0}\\P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)=1-\beta ,&{\text{if }}\mu <\mu _{0}.\end{cases}}}$ For our normally distributed population we can calculate the probability of rejecting ${\displaystyle {\text{H}}_{0}}$ as a function of the true parameter value ${\displaystyle \mu }$ (the power) explicitly: ${\displaystyle P\left(\mu \right)=P\left(V\leq -z_{1-\alpha }-{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\right).}$ A typical graph of the power for a left-sided test is depicted in the following diagram.

The graph is interpreted similarly to the right-sided case.

Suppose we consider the following right-sided test: ${\displaystyle {\text{H}}_{0}:\mu \leq 0\quad {\text{ versus }}\quad {\text{H}}_{1}:\mu >0.}$ The standard deviation in the population is known to be ${\displaystyle \sigma =8}$. In this interactive example you can study the impact of the significance level ${\displaystyle \alpha }$ and the sample size ${\displaystyle n}$ on the size of the type II error. You can specify

• the sample size ${\displaystyle n}$,
• the significance level ${\displaystyle \alpha }$,
• and a true ${\displaystyle \mu }$ that will give rise to a type II error (that is, a ${\displaystyle \mu }$ greater than zero).

After you have made your choices you will be presented a display containing

• the distribution of the sample mean under ${\displaystyle {\text{H}}_{0}}$ (red bell curve),
• the distribution of the sample mean under ${\displaystyle {\text{H}}_{1}}$ using your chosen ${\displaystyle \mu }$ (blue bell curve),
• the critical value for rejecting the null hypothesis (black vertical line),
• the probability of making a type I error (red area under the red bell curve),
• and the probability of making a type II error (blue area under the blue bell curve).
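The type II error probability shown in the display can also be computed directly from the right-sided power formula. A sketch using ${\displaystyle \sigma =8}$ from the example; the particular values of ${\displaystyle n}$, ${\displaystyle \alpha }$ and the true ${\displaystyle \mu }$ are illustrative choices of ours:

```python
from math import sqrt
from statistics import NormalDist

sigma = 8.0     # known standard deviation from the example
mu0 = 0.0       # boundary value of H0: mu <= 0
n = 64          # illustrative sample size (our choice)
alpha = 0.05    # illustrative significance level (our choice)
mu_true = 2.0   # illustrative true mean inside H1 (our choice)

# Right-sided test: beta = P(V <= z_{1-alpha} - (mu - mu0)/(sigma/sqrt(n))).
z = NormalDist().inv_cdf(1 - alpha)
beta = NormalDist().cdf(z - (mu_true - mu0) / (sigma / sqrt(n)))
print(round(beta, 3))
```

Increasing `n` or `alpha` in this sketch shrinks `beta`, which is exactly the trade-off the interactive display illustrates.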

By varying ${\displaystyle n}$, ${\displaystyle \alpha }$ and ${\displaystyle \mu }$, you can explore the impact of these test parameters on the type II error probability. To isolate the impacts we recommend changing the value of only one parameter in successive trials. To facilitate easy comparison you will be shown a display for the current run (lower display) and the previous run (upper display).

Assume that the random variable ${\displaystyle X={\text{'size of credit line in currency'}}}$ in a population of ${\displaystyle N=3,000}$ overdraft facilities has a normal distribution with unknown expectation ${\displaystyle \mu }$ and known standard deviation ${\displaystyle \sigma =1,174{\text{ Deutschmarks (DM)}}}$. Based on a simple random sample, the hypothesis that ${\displaystyle \mu }$ equals the hypothetical value ${\displaystyle \mu _{0}=1,800{\text{ DM}}}$ has to be tested at a significance level of ${\displaystyle \alpha }$: ${\displaystyle {\text{H}}_{0}:\mu =1,800{\text{ DM}}\quad {\text{ versus }}\quad {\text{H}}_{1}:\mu \neq 1,800{\text{ DM}}.}$ You can carry out this test as often as you like: for every new run a new sample is drawn from the population. It is up to you to control the significance level ${\displaystyle \alpha }$ and the sample size ${\displaystyle n}$. You can vary them as you like and isolate their effects by holding either of these test parameters constant. In particular, you can

• Hold both the significance level ${\displaystyle \alpha }$ and sample size ${\displaystyle n}$ constant to observe different test decisions based on different samples;
• Vary the significance level ${\displaystyle \alpha }$ for a fixed sample size ${\displaystyle n}$;
• Change the sample size ${\displaystyle n}$ and leave the ${\displaystyle \alpha }$ fixed to your chosen level; or
• Vary both the significance level ${\displaystyle \alpha }$ and the sample size ${\displaystyle n}$.
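The repeated-sampling experiment just described can be sketched in Python. The sample size ${\displaystyle n=50}$, the number of runs and the seed are illustrative choices not taken from the text, and the normal quantile is obtained by bisection so the sketch needs only the standard library:

```python
import math
import random

def z_quantile(p):
    """p-quantile of N(0, 1), found by bisection on the CDF (via erf)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def two_sided_z_test(sample, mu0, sigma, alpha=0.05):
    """Return (v, reject H0?) for H0: mu = mu0 with known sigma."""
    n = len(sample)
    v = (sum(sample) / n - mu0) / sigma * math.sqrt(n)
    return v, abs(v) > z_quantile(1 - alpha / 2)

# illustrative parameters from the credit-line example; n = 50 is assumed
mu0, sigma, n, alpha = 1_800.0, 1_174.0, 50, 0.05
random.seed(1)
# under H0 the test should reject in roughly alpha of all runs
rejections = sum(
    two_sided_z_test([random.gauss(mu0, sigma) for _ in range(n)], mu0, sigma, alpha)[1]
    for _ in range(1_000)
)
```

Holding ${\displaystyle \alpha }$ fixed and rerunning with different seeds mimics drawing new samples in the interactive display: the rejection frequency hovers around ${\displaystyle \alpha }$ when the null hypothesis is true.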

We will now illustrate how information about the population can influence the choice of the test statistic, the decision regions and—depending on the sample at hand—the test decision. A car tire producer alters the mix of raw material entering the production process in an attempt to increase the average life of the output. After the first new tires have been sold, competitors criticize that the average life of the new tires doesn’t exceed that of the old ones, which is known to be ${\displaystyle 38,000{\text{ km}}}$. The random variable under investigation is the actual life of the population of new tires, measured in km, denoted by ${\displaystyle X}$, and the producers’ claim is that its expectation ${\displaystyle E\left(X\right)=\mu }$ is higher than the historical one of the old types, ${\displaystyle \mu _{0}=38,000{\text{ km}}}$. Management wishes to scientifically test this claim and commissions a statistical investigation hoping to verify that the average life has in fact increased, i.e. that ${\displaystyle \mu >\mu _{0}}$. But they also want to minimize the risk of making a wrong decision so as not to be exposed to competitors’ (justified) counter arguments.

## Hypothesis

Since deviations in one direction are the subject of the dispute, a one-sided test will be conducted. We state the negation of the producers’ claim as the null hypothesis, with the hope that the sample rejects it, yielding a right-sided test: ${\displaystyle {\text{H}}_{0}:\mu \leq \mu _{0}\quad {\text{ versus }}\quad {\text{H}}_{1}:\mu >\mu _{0},}$ where ${\displaystyle \mu _{0}=38,000{\text{ km}}}$. Does this operationalisation support the producers’ intention? We can answer this question by analyzing the possible errors. Rejecting ${\displaystyle {\text{H}}_{0}}$ gives rise to the possibility of a type I error. Not rejecting the null hypothesis exposes the decision-maker to a type II error. The producers’ emphasis is on keeping the type I error small, as its implications are more severe than those of the type II error: with the production process going ahead and thus the available sample of tires gradually increasing, an actual average life below the claimed one would sooner or later be revealed. The maximum probability of the type I error, ${\displaystyle P\left('{\text{H}}_{1}^{'}|{\text{H}}_{0}\right)}$, is given by the significance level ${\displaystyle \alpha }$, a parameter the producer can control. Thus, the test is in line with the producers’ requirements. The probability of making a type II error, ${\displaystyle P\left('{\text{H}}_{0}^{'}|{\text{H}}_{1}\right)=\beta }$, is unknown, as the true average life of the new process’s output is unknown. The probability of not verifying an increase in the average life of the tires that has actually taken place can be substantial. That’s the price the producer has to pay for choosing the conservative approach of stating the claim as the alternative hypothesis and actively controlling the significance level, thus keeping the crucial type I error small. This trade-off makes sense, as the perceived long-term reliability of the producer is more important than short-term sales gains.

## 1st alternative

### Significance level and sample size

The test will be conducted at a ${\displaystyle 0.05}$ significance level. A sample of size ${\displaystyle n=10}$ is taken from the output. As the population is reasonably large (a couple of thousand tires have already been produced), the sample can be regarded as a simple random sample.

### Test statistic and its distribution; decision regions

Sample-based investigations into the tires’ properties, carried out prior to the implementation of changes in the production process, indicate that the fluctuations in the life of the tires can be described ‘reasonably’ well by a normal distribution with standard deviation ${\displaystyle \sigma =1,500{\text{ km}}}$. Assuming this variability is still valid in the new production regime, we have for the distribution of the sample mean under the null hypothesis: ${\displaystyle {\overline {X}}{\overset {{\text{H}}_{0}}{\thicksim }}\mathbb {N} \left(38,000;\,1,500/(10)^{1/2}\right)=\mathbb {N} \left(38,000;\,474.34\right).}$ Under ${\displaystyle {\text{H}}_{0}}$, the test statistic ${\displaystyle V={\frac {{\overline {X}}-\mu _{0}}{\sigma }}\,{\sqrt {n}},}$ follows the standard normal distribution: ${\displaystyle V{\overset {{\text{H}}_{0}}{\thicksim }}\mathbb {N} \left(0;\,1\right).}$ The critical value ${\displaystyle c}$ that satisfies ${\displaystyle P\left(V\leq c\right)=1-\alpha =0.95}$ can be found from the cumulative standard normal distribution table as the 95 % quantile: ${\displaystyle c=z_{0.95}=1.645}$. The resulting decision regions are: Non-rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{v\,|\,v\leq 1.645\right\}}$. Rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{v\,|\,v>1.645\right\}}$.

### Sampling and computing the test statistic

Suppose the average life of ${\displaystyle 10}$ randomly selected tires is ${\displaystyle {\overline {x}}=39,100{\text{ km}}}$. Then the realized test statistic value is ${\displaystyle v={\frac {39,100-38,000}{1,500}}\,{\sqrt {10}}=2.32.}$
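This computation can be checked in a few lines of Python (1.645 is the 95 % standard normal quantile derived above):

```python
import math

# right-sided z-test for the tire example, alpha = 0.05
xbar, mu0, sigma, n = 39_100, 38_000, 1_500, 10
v = (xbar - mu0) / sigma * math.sqrt(n)   # realized test statistic, about 2.32
reject_h0 = v > 1.645                     # compare with the critical value z_0.95
```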

### Test decision and interpretation

As ${\displaystyle 2.32}$ is an element of the rejection region for H${\displaystyle _{0}}$, the null hypothesis is rejected. Based on a sample of size ${\displaystyle n=10}$ and a significance level of ${\displaystyle \alpha =0.05}$, we have shown statistically that the new tires can be used significantly longer than the old ones, that is, that the true expectation ${\displaystyle E\left(X\right)=\mu }$ of the tires’ life is greater than the hypothetical value ${\displaystyle \mu _{0}=38,000{\text{ km}}}$. The test decision is thus in favour of the alternative hypothesis ${\displaystyle {\text{H}}_{1}:{\text{'average life has increased'}}}$. The producer makes a type I error (${\displaystyle '{\text{H}}_{1}^{'}|{\text{H}}_{0}}$) if the null hypothesis correctly describes reality (${\displaystyle {\text{H}}_{0}:{\text{'average life has not increased'}}}$). But the probability of an occurrence of this error has intentionally been kept small with the significance level ${\displaystyle \alpha =0.05}$. If the alternative hypothesis is true, a right decision has been made: ${\displaystyle '{\text{H}}_{1}^{'}|{\text{H}}_{1}}$. The probability ${\displaystyle P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)}$ of this situation can only be computed for specific true population parameters. Assuming this value is ${\displaystyle \mu =39,000{\text{ km}}}$, the power is ${\displaystyle {\begin{aligned}P\left(39,000\right)&=1-P\left(V\leq 1.645-{\frac {39,000-38,000}{1,500}}\,{\sqrt {10}}\right)\\&=1-P\left(V\leq -0.463\right)=1-\left[1-P\left(V\leq 0.463\right)\right]\\&=0.6783=1-\beta .\end{aligned}}}$ The greater the increase in average life, the higher the power of the test, i.e. the probability ${\displaystyle 1-\beta }$. For example, if an increase to ${\displaystyle \mu =40,000{\text{ km}}}$ had been achieved, the power would be ${\displaystyle P\left(40,000\right)=1-\beta =0.9949}$.
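The power computation generalizes to any assumed true mean. A minimal sketch, expressing the standard normal CDF through the error function:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(mu, mu0=38_000, sigma=1_500, n=10, z_crit=1.645):
    """P('H1' | true mean mu) for the right-sided z-test with known sigma."""
    return 1 - phi(z_crit - (mu - mu0) / sigma * math.sqrt(n))
```

Here `power(39_000)` evaluates to about 0.678 (the tiny discrepancy to the 0.6783 above stems from rounding the intermediate value 0.463), and `power(40_000)` to about 0.9949.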

## 2nd alternative

The significance level ${\displaystyle \alpha =0.05}$ and sample size ${\displaystyle n=10}$ remain constant, and we continue to assume a normal distribution of the new tires’ lives. But we drop the restrictive assumption of a constant standard deviation. We now allow for it to have changed with the introduction of the new production process.

### Test statistic and its distribution; decision regions

Since we now have to estimate the unknown standard deviation with its empirical counterpart, the square root of the sample variance, ${\displaystyle S}$, we must employ the ${\displaystyle T}$-statistic ${\displaystyle T={\frac {{\overline {X}}-\mu _{0}}{S}}\,{\sqrt {n}},}$ which, under ${\displaystyle {\text{H}}_{0}}$, has a ${\displaystyle t}$-distribution with ${\displaystyle n-1=9}$ degrees of freedom. We can look up the critical value ${\displaystyle c}$ satisfying ${\displaystyle P\left(T\leq c\right)=1-\alpha =0.95}$ as the upper ${\displaystyle 5}$ per cent quantile of the ${\displaystyle t}$-distribution with ${\displaystyle 9}$ degrees of freedom in a ${\displaystyle t}$-distribution table and find it to be ${\displaystyle t_{0.95;9}=1.833}$. Thus, our decision regions are: Non-rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{t\,|\,t\leq 1.833\right\}}$. Rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{t\,|\,t>1.833\right\}}$. You will notice that the size of the non-rejection region has increased. This is due to the added uncertainty about the unknown dispersion parameter ${\displaystyle \sigma }$. Consequently, there must be a larger allowance for variability in the test statistic for the same significance level and sample size than in the normal test for known standard deviation.

### Sampling and computing the test statistic

Along with the sample mean ${\displaystyle {\overline {x}}}$, the sample standard deviation ${\displaystyle s}$ has to be computed. Suppose their realized values are ${\displaystyle {\overline {x}}=38,900{\text{ km}}}$ and ${\displaystyle s=1,390{\text{ km}}}$. Thus, the realized test statistic value is ${\displaystyle t={\frac {{\overline {x}}-\mu _{0}}{s}}\,{\sqrt {n}}={\frac {38,900-38,000}{1,390}}\,{\sqrt {10}}=2.047.}$
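The realized ${\displaystyle t}$-statistic and the resulting decision can be verified directly (1.833 is the tabulated ${\displaystyle t_{0.95;9}}$):

```python
import math

# one-sample t-test for the tire example, alpha = 0.05, n - 1 = 9 d.o.f.
xbar, s, mu0, n = 38_900, 1_390, 38_000, 10
t = (xbar - mu0) / s * math.sqrt(n)   # realized t-statistic, about 2.047
reject_h0 = t > 1.833                 # critical value t_{0.95;9} from a t-table
```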

### Test decision and interpretation

As ${\displaystyle t=2.047}$ falls into the rejection region, the null hypothesis is rejected. Based on a sample of size ${\displaystyle n=10}$ and a significance level of ${\displaystyle \alpha =0.05}$, we were again able to statistically show that the true (and unknown) expectation ${\displaystyle E\left(X\right)=\mu }$ of the new tires’ lives has increased from its former (i.e. hypothetical) level of ${\displaystyle \mu _{0}=38,000{\text{ km}}}$. Of course, we still don’t know the true parameter ${\displaystyle \mu }$, and if it happens to be less than (or equal to) ${\displaystyle 38,000{\text{ km}}}$, we have made a type I error, for we have rejected a true null hypothesis: ${\displaystyle '{\text{H}}_{1}^{'}|{\text{H}}_{0}}$. In choosing a significance level of ${\displaystyle 5}$ per cent we have restricted the probability of this error to a maximum of ${\displaystyle 5}$ per cent (the actual value depending on the true parameter ${\displaystyle \mu }$). If the true parameter ${\displaystyle \mu }$ does lie within the region specified by the alternative hypothesis, we have made a right decision in rejecting the null hypothesis: ${\displaystyle '{\text{H}}_{1}^{'}|{\text{H}}_{1}}$. The probability of this event, ${\displaystyle P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)=1-\beta }$, can be (approximately) computed for alternative true population means ${\displaystyle \mu }$ if we assume the sample standard deviation ${\displaystyle s}$ to be the true one in the population, i.e. ${\displaystyle s=\sigma }$.
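Setting ${\displaystyle \sigma =s=1,390{\text{ km}}}$, this approximate power can be sketched as below; the evaluation point of 39,000 km used in the note afterwards is an illustrative assumption, not a value from the text:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(mu, mu0=38_000, sigma=1_390, n=10, c=1.833):
    # c = t_{0.95;9}; the normal approximation treats s as the true sigma
    return 1 - phi(c - (mu - mu0) / sigma * math.sqrt(n))
```

Under this assumption, `approx_power(39_000)` is roughly 0.67, and the power again increases with the assumed true mean.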

## 3rd alternative

Suppose we now drop the assumption of normality, a situation more relevant to practical applications. In order to conduct an approximate test about ${\displaystyle \mu }$, we require the sample size to be greater than ${\displaystyle 30}$. If the sample size is smaller than ${\displaystyle 30}$, we cannot justify the application of the central limit theorem, as the approximation wouldn’t be good enough. The managers decide to pick a sample of ${\displaystyle n=35}$ tires, incurring further sampling costs as the price of employing a more suitable and therefore reliable statistical procedure. Further, suppose that the significance level is chosen to be ${\displaystyle \alpha =0.025}$.

### Test statistic and its distribution; decision regions

As in the 2nd alternative, the ${\displaystyle T}$-statistic ${\displaystyle T={\frac {{\overline {X}}-\mu _{0}}{S}}\,{\sqrt {n}},}$ has to be used. Having chosen ${\displaystyle n>30}$ independent observations, we are justified in employing the central limit theorem and approximating the distribution of this standardized statistic by a standard normal distribution: ${\displaystyle T{\overset {\text{as}}{\thicksim }}\mathbb {N} \left(0;\,1\right).}$ In the above statement, ‘as’ stands for ‘asymptotically’: ${\displaystyle T}$ is asymptotically standard normal, that is, the standard normal distribution is the limit it converges to as ${\displaystyle n}$ tends to infinity. For finite samples, the standard normal distribution serves as an approximation. The critical value ${\displaystyle c}$ satisfying ${\displaystyle P\left(T\leq c\right)=1-\alpha =0.975}$ is then (approximately) the upper ${\displaystyle 2.5}$ per cent quantile of the standard normal distribution, ${\displaystyle z_{0.975}=1.96}$, and we have the following decision regions: Non-rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{t\,|\,t\leq 1.96\right\}}$. Rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{t\,|\,t>1.96\right\}}$.

### Sampling and computing the test statistic

As in alternative 2, we have to compute both the sample mean ${\displaystyle {\overline {x}}}$ and the sample standard deviation ${\displaystyle s}$ as estimates of their population counterparts ${\displaystyle \mu }$ and ${\displaystyle \sigma }$. Suppose their values are ${\displaystyle {\overline {x}}=38,500{\text{ km}}}$ and ${\displaystyle s=1,400{\text{ km}}}$ for our new sample of size ${\displaystyle 35}$. Then the realized test statistic value is: ${\displaystyle t={\frac {{\overline {x}}-\mu _{0}}{s}}\,{\sqrt {n}}={\frac {38,500-38,000}{1,400}}\,{\sqrt {35}}=2.11.}$
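Again the arithmetic can be reproduced directly; 1.96 is the approximate critical value taken from the standard normal distribution:

```python
import math

# large-sample test for the tire example, alpha = 0.025, CLT approximation
xbar, s, mu0, n = 38_500, 1_400, 38_000, 35
t = (xbar - mu0) / s * math.sqrt(n)   # realized test statistic, about 2.11
reject_h0 = t > 1.96                  # z_{0.975}, valid approximately for n > 30
```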

### Test decision and interpretation

As ${\displaystyle t=2.11}$ lies within the rejection region, the null hypothesis is rejected. On the basis of a particular sample of size ${\displaystyle n=35}$ and a significance level of ${\displaystyle \alpha =0.025}$ we were able to statistically verify that the true population mean ${\displaystyle E\left(X\right)=\mu }$ of the new tires’ lives is greater than the tires’ expected life before the implementation of the new process, ${\displaystyle \mu _{0}=38,000{\text{ km}}}$. If the null hypothesis is in fact true, we have made a type I error. Fortunately, the probability of this happening (given we have rejected ${\displaystyle {\text{H}}_{0}}$, as is the case here) has been chosen not to exceed ${\displaystyle \alpha =0.025}$ for any true population mean ${\displaystyle \mu }$ within the parameter space specified in ${\displaystyle {\text{H}}_{0}}$. Given the small (maximum) type I error probability of ${\displaystyle 0.025}$, it is much more likely that we are right in rejecting the null hypothesis: ${\displaystyle '{\text{H}}_{1}^{'}|{\text{H}}_{1}}$. But the associated probability, ${\displaystyle P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)=1-\beta }$, can only be computed for specific true parameter values. As in alternative 2, we have to assume a known ${\displaystyle \sigma }$ in order to calculate this quantity, by setting ${\displaystyle \sigma =s=1,400{\text{ km}}}$.

A company is packing wheat flour. The machine has been set up to fill ${\displaystyle 1,000}$ grams (g) into each bag. Of course, the probability of any bag containing exactly 1 kg is zero (as weight is a continuous variable), and even if we take into account the limited precision of measurement, we will still expect some fluctuation around the desired (theoretical) content of 1 kg in actual output. But without prior knowledge we can’t even be sure whether the average weight of the output is actually 1 kg. Fortunately, we have means of testing this statistically.
Denote by ${\displaystyle X}$ the actual net weight per bag. We are interested in the expectation of this random variable, i.e. the average net bag weight, ${\displaystyle E\left(X\right)=\mu }$. Is it sufficiently close to ${\displaystyle \mu _{0}=1{\text{ kg}}}$, the ideal quantity we want the machine to fill into each bag? As the machine has to be readjusted from time to time to produce output statistically close enough to the required weight, the producer regularly takes samples to assess the then current precision of the packing process. If the mean of any of these samples statistically differs significantly from the hypothetical value ${\displaystyle \mu _{0}}$, the machine has to be readjusted.

## Hypothesis

Management is interested in deviations of the actual from the desired weight of ${\displaystyle \mu _{0}=1{\text{ kg}}}$ in both directions. Filling in too much isn’t cost-effective and putting in too little may trigger investigations from consumer organizations, with all the negative publicity that comes with it. Thus, a two-sided test is indicated: ${\displaystyle {\text{H}}_{0}:\mu =\mu _{0}\quad {\text{ versus }}\quad {\text{H}}_{1}:\mu \neq \mu _{0},}$ where ${\displaystyle \mu _{0}=1,000{\text{ g}}}$.

## Sample size and significance level

The statistician decides to test at a ${\displaystyle 0.05}$ level and asks a technician to extract a sample of ${\displaystyle n=25}$ bags. As the population, that is, the overall production, is large compared to the sample size, the statistician can regard the sample as a simple random sample.

## Test statistic and its distribution; decision regions

The estimator of the unknown population mean ${\displaystyle E\left(X\right)=\mu }$ is the sample mean ${\displaystyle {\overline {X}}}$. Experience has shown that the actual weight can be approximated sufficiently closely by a normal distribution with standard deviation ${\displaystyle \sigma =10{\text{ g}}}$. The estimator ${\displaystyle {\overline {X}}}$ is then normally distributed with standard deviation ${\displaystyle \sigma /{\sqrt {n}}=10/(25)^{1/2}=2{\text{ g}}}$. Under ${\displaystyle {\text{H}}_{0}}$, i.e. given that the true population parameter ${\displaystyle \mu }$ equals the hypothetical (desired) one, ${\displaystyle \mu _{0}}$, ${\displaystyle {\overline {X}}}$ is thus normally distributed with mean ${\displaystyle 1,000{\text{ g}}}$ and standard deviation ${\displaystyle 2{\text{ g}}}$: ${\displaystyle {\overline {X}}{\overset {{\text{H}}_{0}}{\thicksim }}\mathbb {N} \left(1,000;\,2\right).}$ The test statistic ${\displaystyle V}$ is the standardization of the sample mean, ${\displaystyle V={\frac {{\overline {X}}-\mu _{0}}{\sigma }}\,{\sqrt {n}},}$ and follows the standard normal distribution: ${\displaystyle V{\overset {{\text{H}}_{0}}{\thicksim }}\mathbb {N} \left(0;\,1\right).}$ We can look up the upper critical value in the cumulative standard normal distribution table as ${\displaystyle c_{u}=z_{0.975}=1.96}$ to satisfy ${\displaystyle P\left(V\leq c_{u}\right)=1-\alpha /2=0.975}$. Using the symmetry of the normal curve, ${\displaystyle c_{l}=-z_{1-\alpha /2}=-1.96}$. We thus have: the non-rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{v\,|\,-1.96\leq v\leq 1.96\right\}}$ and the rejection region for H${\displaystyle _{0}}$: ${\displaystyle \left\{v\,|\,v<-1.96\,{\text{ or }}\,v>1.96\right\}}$.

Rejection region for H${\displaystyle _{0}}$ | non-rejection region for H${\displaystyle _{0}}$ | rejection region for H${\displaystyle _{0}}$

## Drawing the sample and calculating the test statistic

${\displaystyle 25}$ bags are selected randomly and their net content is weighed. The arithmetic mean of these measurements is ${\displaystyle {\overline {x}}=996.4{\text{ g}}}$. The realized test statistic value is thus ${\displaystyle v={\frac {996.4-1,000}{2}}=-1.8.}$
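A quick check of this computation (the critical values ${\displaystyle \pm 1.96}$ were derived above):

```python
import math

# two-sided z-test for the flour example, alpha = 0.05
xbar, mu0, sigma, n = 996.4, 1_000, 10, 25
v = (xbar - mu0) / sigma * math.sqrt(n)   # realized test statistic, -1.8
reject_h0 = abs(v) > 1.96                 # reject if v falls outside [-1.96, 1.96]
```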

## Test decision and interpretation

As ${\displaystyle v=-1.8}$ lies within the non-rejection region for H${\displaystyle _{0}}$, the null hypothesis is not rejected. Based on a sample of size ${\displaystyle n=25}$, the hypothetical mean value ${\displaystyle \mu _{0}=1,000{\text{ g}}}$ couldn’t be shown to differ statistically significantly from the true parameter value ${\displaystyle \mu }$, i.e. we couldn’t verify that the packing process is not precise.

## Power

Not having rejected the null hypothesis, we are inevitably taking the risk of making a type II error: ${\displaystyle '{\text{H}}_{0}^{'}\,|\,{\text{H}}_{1}}$, i.e. the alternative hypothesis is true, yet we have not rejected the null hypothesis. We should therefore assess the reliability of our decision in terms of type II error probabilities for parameter values different from the one stated in the null hypothesis, i.e. ${\displaystyle \mu \neq \mu _{0}}$. They are given by ${\displaystyle 1-P\left(\mu \right)}$. Suppose ${\displaystyle 1,002{\text{ g}}}$ is the true average weight and the alternative hypothesis therefore a true statement. As the power assigns probabilities of right decisions to alternative true parameter values, ${\displaystyle P\left(1,002\right)}$ is the probability of making a right decision (correctly rejecting the null hypothesis): ${\displaystyle P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)=1-\beta .}$ Plugging ${\displaystyle \mu _{0}=1,000}$, ${\displaystyle \alpha =0.05}$, ${\displaystyle \sigma =10}$ and ${\displaystyle n=25}$ into the formula for the power gives ${\displaystyle {\begin{aligned}P\left(1,002\right)&=1-\left[P\left(V\leq 1.96-{\frac {1,002-1,000}{2}}\right)-P\left(V\leq -1.96-{\frac {1,002-1,000}{2}}\right)\right]\\&=1-\left[P\left(V\leq 0.96\right)-P\left(V\leq -2.96\right)\right]\\&=1-\left[P\left(V\leq 0.96\right)-\left(1-P\left(V\leq 2.96\right)\right)\right]\\&=1-\left[0.831472-\left(1-0.998462\right)\right]\\&=1-0.829934\\&=0.17=1-\beta .\end{aligned}}}$ The probability of making a type II error if the true population mean is ${\displaystyle 1,002{\text{ g}}}$ is therefore ${\displaystyle P\left('{\text{H}}_{0}^{'}|{\text{H}}_{1}\right)=\beta \left(1,002\right)=1-P\left(1,002\right)=0.83.}$ Thus, if the true average weight is ${\displaystyle 1,002{\text{ g}}}$, 83 % of all samples of size ${\displaystyle n=25}$ would not convert that fact into a correct test decision (rejection of the null) at the given significance level of ${\displaystyle \alpha =0.05}$. Since ${\displaystyle 1,002-1,000}$ is only a relatively small difference in statistical terms, the probability of a type II error is large. If, on the other hand, ${\displaystyle 989}$ grams is the true average weight, ${\displaystyle P\left(989\right)}$ returns the probability of making a right decision in rejecting the null hypothesis: ${\displaystyle P\left('{\text{H}}_{1}^{'}|{\text{H}}_{1}\right)=1-\beta }$, and we can calculate ${\displaystyle P\left(989\right)=1-\beta =0.9998\,{\text{ and }}\,\beta \left(989\right)=0.0002.}$ In this case, only 0.02 % of all samples will result in a non-rejection of the null hypothesis and hence a wrong decision. The probability of a type II error is small, because the difference ${\displaystyle 989-1,000}$ is large in statistical terms. The following table lists values of ${\displaystyle P\left(\mu \right)}$ and ${\displaystyle 1-P\left(\mu \right)}$ for selected true population averages ${\displaystyle \mu }$, given the above ${\displaystyle \mu _{0}}$, ${\displaystyle \alpha }$ and ${\displaystyle \sigma }$.

| ${\displaystyle \mu }$ | True hypothesis | ${\displaystyle P\left(\mu \right)}$ | ${\displaystyle 1-P\left(\mu \right)}$ |
| --- | --- | --- | --- |
| ${\displaystyle 988.00}$ | ${\displaystyle {\text{H}}_{1}}$ | ${\displaystyle 0.999973=1-\beta }$ | ${\displaystyle 0.000027=\beta }$ |
| ${\displaystyle 990.40}$ | ${\displaystyle {\text{H}}_{1}}$ | ${\displaystyle 0.997744=1-\beta }$ | ${\displaystyle 0.002256=\beta }$ |
| ${\displaystyle 992.80}$ | ${\displaystyle {\text{H}}_{1}}$ | ${\displaystyle 0.949497=1-\beta }$ | ${\displaystyle 0.050503=\beta }$ |
| ${\displaystyle 995.20}$ | ${\displaystyle {\text{H}}_{1}}$ | ${\displaystyle 0.670038=1-\beta }$ | ${\displaystyle 0.329962=\beta }$ |
| ${\displaystyle 997.60}$ | ${\displaystyle {\text{H}}_{1}}$ | ${\displaystyle 0.224416=1-\beta }$ | ${\displaystyle 0.775584=\beta }$ |
| ${\displaystyle 1,000.00}$ | ${\displaystyle {\text{H}}_{0}}$ | ${\displaystyle 0.05=\alpha }$ | ${\displaystyle 0.95=1-\alpha }$ |
| ${\displaystyle 1,002.40}$ | ${\displaystyle {\text{H}}_{1}}$ | ${\displaystyle 0.224416=1-\beta }$ | ${\displaystyle 0.775584=\beta }$ |
| ${\displaystyle 1,004.80}$ | ${\displaystyle {\text{H}}_{1}}$ | ${\displaystyle 0.670038=1-\beta }$ | ${\displaystyle 0.329962=\beta }$ |
| ${\displaystyle 1,007.20}$ | ${\displaystyle {\text{H}}_{1}}$ | ${\displaystyle 0.949497=1-\beta }$ | ${\displaystyle 0.050503=\beta }$ |
| ${\displaystyle 1,009.60}$ | ${\displaystyle {\text{H}}_{1}}$ | ${\displaystyle 0.997744=1-\beta }$ | ${\displaystyle 0.002256=\beta }$ |
| ${\displaystyle 1,012.00}$ | ${\displaystyle {\text{H}}_{1}}$ | ${\displaystyle 0.999973=1-\beta }$ | ${\displaystyle 0.000027=\beta }$ |
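The table entries can be reproduced with a short sketch of the two-sided power function; varying the `n` argument likewise reproduces the sample-size comparison given below:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sided(mu, mu0=1_000, sigma=10, n=25, z=1.96):
    """P('H1' | true mean mu) for the two-sided z-test (z = z_{1 - alpha/2})."""
    d = (mu - mu0) / sigma * math.sqrt(n)
    return 1 - (phi(z - d) - phi(-z - d))

# e.g. power_two_sided(1_002.40) reproduces the 0.224416 table entry
```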

The following diagram shows the graph of the power curve.

We can alter the shape of the power curve for a (given) fixed significance level ${\displaystyle \alpha }$ in our favour by increasing the sample size ${\displaystyle n}$ . We will illustrate the effect of a change in the sample size for the two ‘hypothetically’ true parameter values ${\displaystyle 1,002}$ and ${\displaystyle 989}$. The other test parameters remain constant: ${\displaystyle \mu _{0}=1,000}$, ${\displaystyle \alpha =0.05}$ and ${\displaystyle \sigma =10}$ .

|  | ${\displaystyle n=9}$ | ${\displaystyle n=16}$ | ${\displaystyle n=25}$ | ${\displaystyle n=36}$ |
| --- | --- | --- | --- | --- |
| ${\displaystyle P\left(1,002\right)=1-\beta }$ | ${\displaystyle 0.0921}$ | ${\displaystyle 0.126}$ | ${\displaystyle 0.17}$ | ${\displaystyle 0.224}$ |
| ${\displaystyle \beta \left(1,002\right)}$ | ${\displaystyle 0.9079}$ | ${\displaystyle 0.874}$ | ${\displaystyle 0.83}$ | ${\displaystyle 0.776}$ |
| ${\displaystyle P\left(989\right)=1-\beta }$ | ${\displaystyle 0.91}$ | ${\displaystyle 0.993}$ | ${\displaystyle 0.9998}$ | ${\displaystyle 0.999998}$ |
| ${\displaystyle \beta \left(989\right)}$ | ${\displaystyle 0.09}$ | ${\displaystyle 0.007}$ | ${\displaystyle 0.0002}$ | ${\displaystyle 0.000002}$ |

The next diagram displays the power of the two-sided test for these ${\displaystyle 4}$ alternative sample sizes.

When there is reason to believe that the machine produces output with only small deviations from the desired weight, an increase of the sample size is advisable in order to statistically ‘discover’ these deviations reliably and minimize the type II error risk, provided the extra sampling costs incurred are outweighed by the information gain.

## Formulating the Hypotheses

Let us illustrate the problem of choosing an appropriate null (and hence alternative) hypothesis with a real-world example. Consider a company manufacturing car tires. Alterations in the production process are undertaken in order to increase the tires’ lives. Yet competitors will not hesitate to claim that the average life of the tires hasn’t increased from the initial, pre-restructuring value of ${\displaystyle 38,000}$ kilometers (km). The producers’ management wants to justify the investment into the new production process and subsequent advertising campaign (i.e. save their necks) and commissions a scientific, i.e. statistical, investigation. That’s our part. The variable of interest is the life of an individual tire measured in km, denoted by, say, ${\displaystyle X}$. It is a random variable, because its fluctuations in magnitude depend on many known and unknown factors that cannot practically be taken into account (such as speed, weight of the individual car, driving patterns, weather conditions, and even slight variations in the production process). Before the ‘improvements’ in the production process, the average life of this particular type of car tire was ${\displaystyle 38,000}$ km; in theoretical terms, the expectation was ${\displaystyle E\left(X\right)=\mu _{0}=38,000{\text{ km}}}$. The mean value under the new production process is unknown and, in fact, the quantity we want to compare in statistical terms with ${\displaystyle \mu _{0}}$: the producer pays the statistician(s) to show objectively whether ${\displaystyle \mu >\mu _{0}=38,000{\text{ km}}}$. Note that we denote the true expectation under the new regime by ${\displaystyle \mu }$, as this is the parameter we are interested in and thus want to test. The ‘old’ mean ${\displaystyle \mu _{0}}$ ‘merely’ serves as a benchmark, and the actual output it represents (the old tires) doesn’t receive further attention (and in particular neither do its fluctuations around the mean).
The statement that management hopes the statistician will ‘prove’ scientifically, ${\displaystyle \mu >\mu _{0}}$, looks very much like a readily testable hypothesis. But as we have emphasized earlier, there is a crucial difference between formalized statements of scientific interest and the means of testing them by stating a null hypothesis suitable for making a reliable decision, that is, a decision that is backed by acceptable type I and II errors. So which hypothesis shall we test? It should be clear that the problem at hand is a one-sided one; only deviations of the new expected life from the historical expected life in one direction are of interest. In deciding whether to test the hypothesis as it is already formalized using a left-sided test procedure or to test its negation, ${\displaystyle \mu \leq \mu _{0}}$, on a right-sided basis, we have to focus on the actual aim of the investigation: the tire producer intends to verify the claim of ${\displaystyle \mu }$ being greater than ${\displaystyle \mu _{0}}$, whilst at the same time controlling the risk of making a wrong decision (type I error) to a level that allows him to regard the (hopefully positive, i.e. a rejection of the null) test decision as statistically proven. This would be the case if the reverse claim of the new tires being less durable can be rejected at an acceptable (i.e. small) significance level, for this would imply that there is only a small probability that the null hypothesis, ${\displaystyle \mu \leq \mu _{0}}$, is true and hence the alternative hypothesis, ${\displaystyle \mu >\mu _{0}}$, not true. But that’s exactly the result the managers want to see. Let’s therefore state the negation of the statement to be tested as the null hypothesis (and hope it will be rejected at the given significance level): ${\displaystyle {\text{H}}_{0}:\mu \leq \mu _{0}\quad {\text{ versus }}\quad {\text{H}}_{1}:\mu >\mu _{0},}$ with ${\displaystyle \mu _{0}=38,000{\text{ km}}}$.
If the sample of ${\displaystyle n}$ new tires’ usable lives leads to a rejection of the null hypothesis ${\displaystyle {\text{H}}_{0}}$ (${\displaystyle '{\text{H}}_{1}^{'}}$), a type I error will be made if the null hypothesis is true. If the null hypothesis is not rejected on the basis of a particular sample of size ${\displaystyle n}$, the conjecture stated in the alternative hypothesis may still be true, in which case the researcher has (unknowingly) made a type II error. Comparing the implications of the type I and type II errors for this example shows that the former’s impact on the manufacturer’s fortunes is the crucial one, for

• the competitors can carry out (more or less) similar investigations using a left-sided test, leading to the PR nightmare associated with a possible contradiction of the producers’ test result,
• future investigation into tires subsequently produced would reveal the actual properties of the tires as the sample size inevitably increases with the amount sold, triggering even more embarrassing questions concerning the integrity and reliability of the manufacturer.

For these reasons, the tire manufacturer is best advised to keep the probability of a type I error, ${\displaystyle P\left('{\text{H}}_{1}^{'}|{\text{H}}_{0}\right)}$, small, by controlling the significance level, e.g. setting it to ${\displaystyle \alpha =0.05}$.

## Decision regions

When testing ${\displaystyle \mu }$ with either single- or two-sided tests the size of the non-rejection and rejection regions on the ${\displaystyle V}$ or ${\displaystyle T}$ (standardized test statistic) axis depends only on:

• the given (chosen) level of significance ${\displaystyle \alpha }$: ceteris paribus, increasing ${\displaystyle \alpha }$ will increase the size of the rejection region for H${\displaystyle _{0}}$, and will reduce the size of the non-rejection region (and vice versa).

Alternatively, when testing ${\displaystyle \mu }$ with either single- or two-sided tests the size of the non-rejection and rejection regions on the ${\displaystyle X}$ (our original random variable) axis depends on:

• the given (chosen) level of significance ${\displaystyle \alpha }$: ceteris paribus, increasing ${\displaystyle \alpha }$ will increase the size of the rejection region for H${\displaystyle _{0}}$, and will reduce the size of the non-rejection region (and vice versa);
• the sample size ${\displaystyle n}$: ceteris paribus, the larger the sample size, the greater the size of the rejection region for H${\displaystyle _{0}}$, and the smaller the size of the non-rejection region (and vice versa); and
• the dispersion ${\displaystyle \sigma }$ of the variable in the population and therefore ${\displaystyle S}$ in the sample: ceteris paribus, an increased variability ${\displaystyle \sigma }$ or ${\displaystyle S}$ leads to a decrease in the size of the rejection region for H ${\displaystyle _{0}}$, and increases the size of the non-rejection region (and vice versa).
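These ceteris paribus statements are easy to verify numerically. A small sketch (all parameter values below are made up for illustration) computes the half-width ${\displaystyle z_{1-\alpha /2}\cdot \sigma /{\sqrt {n}}}$ of the two-sided non-rejection region on the sample-mean axis and varies one input at a time:

```python
from statistics import NormalDist

def half_width(alpha, sigma, n):
    # Half-width of the two-sided non-rejection region on the X-bar axis:
    # z_{1-alpha/2} * sigma / sqrt(n)
    return NormalDist().inv_cdf(1 - alpha / 2) * sigma / n ** 0.5

w_base = half_width(alpha=0.05, sigma=10.0, n=25)       # reference case
w_larger_n = half_width(alpha=0.05, sigma=10.0, n=100)  # bigger sample
w_larger_s = half_width(alpha=0.05, sigma=20.0, n=25)   # more dispersion
w_larger_a = half_width(alpha=0.10, sigma=10.0, n=25)   # higher alpha
```

Comparing the widths confirms the bullets: the non-rejection region shrinks with growing ${\displaystyle n}$ and growing ${\displaystyle \alpha }$, and widens with growing ${\displaystyle \sigma }$.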

That is, the critical values on the standardized test statistic axis are independent of the size of ${\displaystyle n}$ or ${\displaystyle \sigma }$ (alternatively, ${\displaystyle S}$). The same cannot be said for the “equivalent” critical values on the original ${\displaystyle X}$ axis, where sample size and dispersion affect the magnitude of “acceptable” expected deviations from the null. If the population standard deviation ${\displaystyle \sigma }$ is known, the critical values and therefore the non-rejection/rejection regions for H${\displaystyle _{0}}$ can easily be calculated for the sample mean ${\displaystyle {\overline {X}}}$. We will do this for a two-sided test. We have derived the test statistic ${\displaystyle V}$ as a standardization of the estimator ${\displaystyle {\overline {X}}}$: ${\displaystyle V={\frac {{\overline {X}}-\mu _{0}}{\sigma }}\,{\sqrt {n}},}$ and, in terms of realizations ${\displaystyle x_{i}}$ of the sample variables ${\displaystyle X_{i}}$: ${\displaystyle v={\frac {{\overline {x}}-\mu _{0}}{\sigma }}\,{\sqrt {n}}.}$ In a two-sided test the non-rejection region for H${\displaystyle _{0}}$ consists of all realizations ${\displaystyle v}$ of ${\displaystyle V}$ greater than or equal to ${\displaystyle -z_{1-\alpha /2}}$ and less than or equal to ${\displaystyle z_{1-\alpha /2}}$: ${\displaystyle \left\{v|-z_{1-\alpha /2}\leq v\leq z_{1-\alpha /2}\right\}.}$ Thus, the critical values ${\displaystyle -z_{1-\alpha /2}}$ and ${\displaystyle z_{1-\alpha /2}}$ are possible realizations of the test statistic ${\displaystyle V}$. 
They are subject to the same standardization carried out to convert ${\displaystyle {\overline {X}}}$ into ${\displaystyle V}$, expressing them in units comparable with standard normal quantiles: ${\displaystyle -z_{1-\alpha /2}={\frac {{\overline {X}}_{l}-\mu _{0}}{\sigma }}\,{\sqrt {n}}\,,\quad z_{1-\alpha /2}={\frac {{\overline {X}}_{u}-\mu _{0}}{\sigma }}\,{\sqrt {n}}.}$ As ${\displaystyle -z_{1-\alpha /2}}$ is the lower critical value with respect to ${\displaystyle V}$, we have similarly denoted the lower critical value for ${\displaystyle {\overline {X}}}$ by ${\displaystyle {\overline {X}}_{l}}$ (the same applies to the upper bound of the non-rejection region, denoted by the subindex ${\displaystyle u}$). We can isolate the upper and lower bounds of the non-rejection region for H${\displaystyle _{0}}$ in terms of the units of the sample mean: ${\displaystyle {\overline {X}}_{l}=\mu _{0}-z_{1-\alpha /2}\cdot {\frac {\sigma }{\sqrt {n}}}\,,\quad {\overline {X}}_{u}=\mu _{0}+z_{1-\alpha /2}\cdot {\frac {\sigma }{\sqrt {n}}}.}$ The resulting non-rejection region for H${\displaystyle _{0}}$ in terms of ${\displaystyle {\overline {X}}}$ is: ${\displaystyle \left\{{\overline {X}}\,|\,{\overline {X}}_{l}\leq {\overline {X}}\leq {\overline {X}}_{u}\right\}=\left\{{\overline {X}}\,|\,\mu _{0}-z_{1-\alpha /2}\cdot {\frac {\sigma }{\sqrt {n}}}\leq {\overline {X}}\leq \mu _{0}+z_{1-\alpha /2}\cdot {\frac {\sigma }{\sqrt {n}}}\right\},}$ and the associated rejection region is given by the complement ${\displaystyle \left\{{\overline {X}}\,|\,{\overline {X}}<{\overline {X}}_{l}\,{\text{ or }}\,{\overline {X}}>{\overline {X}}_{u}\right\}=\left\{{\overline {X}}\,|\,{\overline {X}}<\mu _{0}-z_{1-\alpha /2}\cdot {\frac {\sigma }{\sqrt {n}}}\,{\text{ or }}\,{\overline {X}}>\mu _{0}+z_{1-\alpha /2}\cdot {\frac {\sigma }{\sqrt {n}}}\right\}.}$ Similar transformations can be applied to the estimators for one-sided tests.
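Both formulations define the same test: a sample mean falls outside ${\displaystyle [{\overline {X}}_{l},{\overline {X}}_{u}]}$ exactly when its standardized value falls outside ${\displaystyle [-z_{1-\alpha /2},z_{1-\alpha /2}]}$. A sketch with illustrative numbers (the inputs are assumptions, not data from the text):

```python
from statistics import NormalDist

# Two-sided non-rejection region for H0 on the sample-mean axis.
# Illustrative inputs (assumed): mu_0, sigma, n, alpha
mu_0, sigma, n, alpha = 38_000.0, 2_000.0, 50, 0.05

z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{1-alpha/2}
x_bar_l = mu_0 - z * sigma / n ** 0.5     # lower bound X-bar_l
x_bar_u = mu_0 + z * sigma / n ** 0.5     # upper bound X-bar_u

# Deciding on the X-bar axis is equivalent to deciding on the V axis:
x_bar = 38_600.0                          # assumed observed sample mean
v = (x_bar - mu_0) / sigma * n ** 0.5     # standardized realization
reject_on_x = x_bar < x_bar_l or x_bar > x_bar_u
reject_on_v = abs(v) > z
```

The two boolean decisions always agree, since the bounds on the ${\displaystyle {\overline {X}}}$ axis are just the de-standardized critical values.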

## Power curve

We will derive the power curve for a two-sided population mean test. The power is calculated as {\displaystyle {\begin{aligned}P\left(\mu \right)&=P\left(V\in {\text{rejection region for H}}_{0}\,|\,\mu \right)\\&=1-P\left(V\in {\text{non-rejection region for H}}_{0}\,|\,\mu \right).\end{aligned}}} Assuming ${\displaystyle \mu }$ to be the true population mean, we have {\displaystyle {\begin{aligned}P\left(\mu \right)&=1-P\left(-z_{1-\alpha /2}\leq V\leq z_{1-\alpha /2}\,|\,\mu \right)\\&=1-P\left(-z_{1-\alpha /2}\leq {\frac {{\overline {X}}-\mu _{0}}{\sigma /{\sqrt {n}}}}\leq z_{1-\alpha /2}\,|\,\mu \right).\end{aligned}}} Adding ${\displaystyle \mu -\mu }$ to the numerator of the middle term yields {\displaystyle {\begin{aligned}P\left(\mu \right)&=1-P\left(-z_{1-\alpha /2}\leq {\frac {{\overline {X}}-\mu _{0}+\mu -\mu }{\sigma /{\sqrt {n}}}}\leq z_{1-\alpha /2}\,|\,\mu \right)\\&=1-P\left(-z_{1-\alpha /2}\leq {\frac {{\overline {X}}-\mu }{\sigma /{\sqrt {n}}}}+{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\leq z_{1-\alpha /2}\,|\,\mu \right)\\&=1-P\left(-z_{1-\alpha /2}-{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\leq {\frac {{\overline {X}}-\mu }{\sigma /{\sqrt {n}}}}\leq z_{1-\alpha /2}-{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\,|\,\mu \right)\\&=1-P\left(-z_{1-\alpha /2}-{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\leq V\leq z_{1-\alpha /2}-{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\,|\,\mu \right)\\&=1-\left[P\left(V\leq z_{1-\alpha /2}-{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\,|\,\mu \right)-P\left(V\leq -z_{1-\alpha /2}-{\frac {\mu -\mu _{0}}{\sigma /{\sqrt {n}}}}\,|\,\mu \right)\right].\end{aligned}}} Here ${\displaystyle V=\left({\overline {X}}-\mu \right)/\left(\sigma /{\sqrt {n}}\right)}$ denotes the statistic standardized with the true mean ${\displaystyle \mu }$, which follows a standard normal distribution. The power for the one-sided tests can be derived in a similar fashion. 
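The final expression translates directly into code. A minimal sketch using the standard-library normal distribution (the function name is our own):

```python
from statistics import NormalDist

def power_two_sided(mu, mu_0, sigma, n, alpha):
    # P(mu) = 1 - [Phi(z - d) - Phi(-z - d)], with z = z_{1-alpha/2}
    # and d = (mu - mu_0) / (sigma / sqrt(n))
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    d = (mu - mu_0) / (sigma / n ** 0.5)
    return 1 - (nd.cdf(z - d) - nd.cdf(-z - d))
```

At ${\displaystyle \mu =\mu _{0}}$ the expression collapses to ${\displaystyle 1-\left(1-\alpha \right)=\alpha }$, i.e. the power curve touches the significance level at the hypothetical value, as it should.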
From a decision-theoretical point of view it is desirable that the probability of correctly rejecting the null hypothesis increases quickly with a growing distance between the true parameter ${\displaystyle \mu }$ and the hypothetical value ${\displaystyle \mu _{0}}$; that is, we want the graph of the power curve to be as steep as possible in that range of the true parameter value. For a given estimator and test statistic, there are two possible ways of improving the ‘shape’ of the power curve.

1) Increasing the sample size ${\displaystyle n}$

The above formula for the power of a two-sided test for the mean is clearly positively related to the size of the sample ${\displaystyle n}$. In general, ceteris paribus, the graph of the power curve becomes steeper with growing ${\displaystyle n}$: for any true parameter value within the ${\displaystyle {\text{H}}_{1}}$ region (i.e. ${\displaystyle \mu \neq \mu _{0}}$ for the two-sided, ${\displaystyle \mu >\mu _{0}}$ for the right-sided and ${\displaystyle \mu <\mu _{0}}$ for the left-sided test), the probability ${\displaystyle 1-\beta }$ of rejecting the null hypothesis, and hence making a right decision, increases with growing ${\displaystyle n}$. That is mirrored by a decreasing probability ${\displaystyle \beta }$ of making a type II error. Thus, the probability of correctly discriminating between the true and the hypothetical parameter value grows with increasing sample size. Given a fixed significance level ${\displaystyle \alpha }$, the probability of a type II error can be reduced by ‘simply’ enlarging the sample. The following diagram displays the graphs of ${\displaystyle 4}$ power curves based on four distinct sample sizes, with ${\displaystyle n_{1}<n_{2}<n_{3}<n_{4}}$.
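The sample-size effect can be checked numerically. The sketch below restates the power formula and evaluates it at a fixed true mean in the ${\displaystyle {\text{H}}_{1}}$ region for four growing sample sizes (all parameter values are illustrative assumptions):

```python
from statistics import NormalDist

def power(mu, mu_0, sigma, n, alpha=0.05):
    # Power of the two-sided test, as derived above
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    d = (mu - mu_0) / (sigma / n ** 0.5)
    return 1 - (nd.cdf(z - d) - nd.cdf(-z - d))

# Fixed true mean mu = 0.5 in the H1 region (mu_0 = 0, sigma = 1, assumed)
powers = [power(mu=0.5, mu_0=0.0, sigma=1.0, n=n) for n in (10, 20, 40, 80)]
```

The resulting sequence is strictly increasing: ceteris paribus, each doubling of ${\displaystyle n}$ raises the probability ${\displaystyle 1-\beta }$ of correctly rejecting the null.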

2) Varying the significance level ${\displaystyle \alpha }$

Ceteris paribus, allowing for a higher probability of making a type I error, i.e. increasing the significance level ${\displaystyle \alpha }$, will shift the graph of the power curve upwards. This means that a higher ${\displaystyle \alpha }$ leads to an increase in the probability of rejecting the null hypothesis for all possible true parameter values ${\displaystyle \mu }$. If the true parameter value lies within the H${\displaystyle _{1}}$ region (${\displaystyle \mu \neq \mu _{0}}$ for the two-sided, ${\displaystyle \mu >\mu _{0}}$ for the right-sided and ${\displaystyle \mu <\mu _{0}}$ for the left-sided test), rejecting the null is a right decision: the probability ${\displaystyle 1-\beta }$ of correctly rejecting the null hypothesis has increased, and the probability ${\displaystyle \beta }$ of making a type II error has decreased. But the probability of rejecting the null hypothesis has also increased for true parameter values within the ${\displaystyle {\text{H}}_{0}}$ region, increasing the probability of making a type I error. Hence, we encounter a trade-off between the probabilities of making a type I and a type II error, a problem that cannot be overcome mechanically but has to be tackled within some sort of preference-based decision-theoretical approach. In the diagram below the power curve of a two-sided test with fixed sample size is depicted for two alternative significance levels. The red graph represents ${\displaystyle P\left(\mu \right)}$ for ${\displaystyle \alpha =0.05}$, the blue one ${\displaystyle P\left(\mu \right)}$ for ${\displaystyle \alpha =0.10}$.
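The upward shift can likewise be verified numerically for the two significance levels in the diagram. In the sketch below the grid of true means, ${\displaystyle \mu _{0}}$, ${\displaystyle \sigma }$ and ${\displaystyle n}$ are illustrative assumptions:

```python
from statistics import NormalDist

def power(mu, alpha, mu_0=0.0, sigma=1.0, n=25):
    # Power of the two-sided test, as derived above
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    d = (mu - mu_0) / (sigma / n ** 0.5)
    return 1 - (nd.cdf(z - d) - nd.cdf(-z - d))

# Evaluate both power curves on a grid of possible true means
grid = [-1.0, -0.4, -0.1, 0.0, 0.1, 0.4, 1.0]
curve_05 = [power(mu, alpha=0.05) for mu in grid]  # red graph in the diagram
curve_10 = [power(mu, alpha=0.10) for mu in grid]  # blue graph
```

Pointwise comparison shows the ${\displaystyle \alpha =0.10}$ curve lying strictly above the ${\displaystyle \alpha =0.05}$ curve everywhere, including at ${\displaystyle \mu =\mu _{0}}$, where each curve equals its own ${\displaystyle \alpha }$: the very trade-off described above.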