Testing the Proportion in a Binary Population

From MM*Stat International

Revision as of 19 March 2020 by Siskosth


Assume a random variable X has only two possible outcomes. We call the statistical population of X binary. If X is an indicator variable storing the information about the existence (or non-existence) of a feature, we can carry out statistical inference about the proportion of elements within the population possessing the property of interest (\pi ) or not (
1-\pi ). As in other parametric tests, the inference relates to a hypothetical value, here \pi _{0}, that represents a hypothetical proportion of population elements having the property of interest. We will introduce statistical test procedures based on a simple random sample of size n. This ensures that the sample variables X_{1},\ldots ,X_{n}, which are indicator variables with outcomes measured as either 0 or 1, are independent and identically distributed Bernoulli variables. As usual the significance level is denoted by \alpha .

Hypotheses

Depending on the application at hand, one- or two-sided tests are formulated: 1) \text{H}_{0}:\pi =\pi _{0},\text{H}_{1}:\pi \neq \pi _{0}. 2) \text{H}_{0}:\pi \leq \pi _{0},\text{H}_{1}:\pi >\pi _{0}. 3) \text{H}_{0}:\pi \geq \pi _{0},\text{H}_{1}:\pi <\pi _{0}. Our earlier remarks on the choice of null and alternative hypothesis in the section on testing population means also apply in this environment.

Test statistic and its distribution; decision regions

The sample proportion \widehat{\pi }=\frac{X}{n}=\frac{1}{n}\sum_{i=1}^{n}\,X_{i} is a suitable estimator of the population parameter \pi . The estimator X=\sum_{i=1}^{n}\,X_{i}, is a simple transformation of \widehat{\pi } (X=n\cdot \widehat{\pi }), which contains all the important information. It counts the number of elements in the sample possessing the property of interest. As has already been shown, X follows a Binomial distribution with parameters n and \pi
: X\thicksim B\left( n;\,\pi \right) . As n is chosen by the decision-maker, \pi is the only remaining parameter needed to completely specify the Binomial distribution. Following the logic applied in all parametric hypothesis testing problems, we assume \pi to be \pi _{0}, that is, we determine the distribution of the test statistic given the hypothetical proportion \pi _{0} is the one prevailing in the population: 
\pi =\pi _{0}. Hence, the estimator X becomes our test statistic, since it has a Binomial distribution with parameters n and \pi _{0} under \text{H}_{0}: V=X\overset{\text{H}_{0}}{\thicksim }B\left( n;\,\pi _{0}\right) . The rejection region of the null hypothesis contains all realizations of V for which the cumulated probabilities don’t exceed the significance level \alpha . The critical values can be read from the numerical table of the cumulative distribution function F_{B}\left(
x\right) of B\left( n;\,\pi _{0}\right) , by following these rules: 1) The lower critical value c_{l} is the realization x of X, for which the cumulative distribution function just exceeds the value \alpha /2: F_{B}\left( c_{l}-1\right) \leq \alpha /2 and F_{B}\left( c_{l}\right) >\alpha /2. The upper critical value c_{u} is the argument x of the cumulative distribution function that returns a probability equal to or greater than 1-\alpha /2: F_{B}\left( c_{u}-1\right) <1-\alpha /2 and F_{B}\left( c_{u}\right) \geq 1-\alpha /2. The rejection region for H_{0} is given by\left\{ v\,|\,v<c_{l}\,\text{ or }\,v>c_{u}\right\} , such that P\left( V<c_{l}|\pi _{0}\right) +P\left( V>c_{u}|\pi _{0}\right)
\leq \alpha . For the non-rejection region for H_{0} we have\left\{ v\,|\,c_{l}\leq v\leq c_{u}\right\} , such that P\left( c_{l}\leq V\leq c_{u}|\pi _{0}\right) \geq 1-\alpha . 2) The critical value c is the smallest realization of the test statistic that occurs with cumulated probability of at least 1-\alpha  : F_{B}\left( c-1\right) <1-\alpha and F_{B}\left( c\right) \geq
1-\alpha . The rejection region for H_{0} is then\left\{ v\,|\,v>c\right\} , such that P\left( V>c|\pi _{0}\right) \leq \alpha . The non-rejection region for H_{0} is\left\{ v\,|\,v\leq c\right\} , such that P\left( V\leq c|\pi _{0}\right) \geq 1-\alpha . 3) The critical value c is determined as the smallest realization of the test statistic that occurs with cumulated probability of at least \alpha : 
F_{B}\left( c-1\right) \leq \alpha and F_{B}\left( c\right) >\alpha . The rejection region for H_{0} is \left\{ v\,|\,v<c\right\} , such that P\left( V<c|\pi _{0}\right) \leq \alpha . The non-rejection region for H_{0} is given by\left\{ v\,|\,v\geq c\right\} , such that P\left( V\geq c|\pi _{0}\right) \geq 1-\alpha . As V=X is a discrete random variable, the given significance level \alpha
will generally not be fully utilized (exhausted). The actual significance level \alpha _{a} will usually be smaller than \alpha , reaching it only by chance. The above tests are thus conservative with respect to the utilization of the allowance for the maximum probability of the type I error. Given the sample size n is sufficiently large, the estimator \widehat{\pi} can be standardized to give the test statistic V=\frac{\widehat{\pi}-\pi_{0}}{\sigma_{0}\left( \widehat{\pi}\right)}=\frac{
\widehat{\pi}-\pi_{0}}{\sqrt{\frac{\pi_{0}\,\left( 1-\pi_{0}\right)}{n}}}. Here, \sigma_{0}\left( \widehat{\pi}\right) is the standard deviation of the estimator \widehat{\pi} under \text{H}_{0}. Under \text{H}_{0}, V approximately follows a standard normal distribution (i.e. normal with mean 0 and variance 1). Critical values for the given significance level can be taken from the cumulative standard normal distribution table. Decision regions for the one- and two-sided tests are determined in the same way as those for the approximate population mean test for unknown \sigma : In fact, a hypothesis about a proportion is a hypothesis about an expectation (of a binary indicator variable): E\left(
\widehat{\pi }\right) =\pi .
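The decision rules above can be checked numerically. The following sketch (not part of the original text; the helper names are ours) implements the exact two-sided critical values and the large-sample statistic, using only the Python standard library:

```python
from math import comb, sqrt

def binom_cdf(x, n, p):
    """F_B(x): cumulative distribution function of B(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def two_sided_critical_values(n, pi0, alpha):
    """Smallest c_l with F_B(c_l) > alpha/2 and smallest c_u with
    F_B(c_u) >= 1 - alpha/2; the rejection region for H0 is
    {v | v < c_l or v > c_u}."""
    c_l = next(x for x in range(n + 1) if binom_cdf(x, n, pi0) > alpha / 2)
    c_u = next(x for x in range(n + 1) if binom_cdf(x, n, pi0) >= 1 - alpha / 2)
    return c_l, c_u

def approx_statistic(pi_hat, pi0, n):
    """Large-sample standardized test statistic V, approximately N(0, 1) under H0."""
    return (pi_hat - pi0) / sqrt(pi0 * (1 - pi0) / n)
```

For instance, `two_sided_critical_values(30, 0.5, 0.05)` returns `(10, 20)`: with n=30 and \pi_{0}=0.5, the null hypothesis is rejected for fewer than 10 or more than 20 successes.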

Sampling and computing the test statistic

Once a sample of size n has been drawn, we have realizations x_{1},\ldots
,x_{n} of the sampling variables X_{1},\ldots ,X_{n}, and can compute the realized value v of the test statistic V.

Test decision and interpretation

See the remarks for the \mu test.

Power Curve P\left( \pi \right)

The power curve of the large-sample test based on V=\frac{\widehat{\pi }-\pi _{0}}{\sigma _{0}\left( \widehat{\pi }\right) }=
\frac{\widehat{\pi }-\pi _{0}}{\sqrt{\frac{\pi _{0}\,\left( 1-\pi
_{0}\right) }{n}}} can be calculated explicitly for all test situations in the same manner as the power curve for the population mean tests. The power curve of the exact test based on V=X is computed using the Binomial distribution (as this is the distribution underlying the test statistic) for all 0\leq \pi \leq 1 and fixed n. From the definition P\left( \pi \right) =P\left( V=X\in \,\text{rejection region for H}
_{0}\,|\,\pi \right) it follows 1) for the two-sided test P\left( \pi\right)=P\left( V<c_{l}\,|\,\pi \right)+P\left(
V>c_{u}\,|\,\pi \right)=P\left( V\leq c_{l}-1\,|\,\pi \right)+\left[1-
P\left( V\leq c_{u}\,|\,\pi \right)\right], 2) for the right-sided test P\left( \pi\right)=P\left( V>c\,|\,\pi \right)=1- P\left( V\leq
c\,|\,\pi \right), 3) for the left-sided test P\left( \pi\right)=P\left( V<c\,|\,\pi \right)=P\left( V\leq
c-1\,|\,\pi \right). Given the respective critical values, the probabilities can be looked up in the numerical table of the cumulative Binomial distribution function. For \pi =\pi _{0}, the power curve equals the actual significance level \alpha _{a}.

Imagine a ‘binary population’ of N=3,250 economics students, an unknown proportion of which is enthusiastic about statistics. We define the random variable X to assume one if the statistical element (‘economics student’) likes statistics and zero if not. We believe that half of the students fancy learning statistical concepts (our hypothetical proportion is thus \pi _{0}=0.5) and want to test whether this informed guess is true in statistical terms, at a significance level of \alpha on the basis of a random sample of size n: \text{H}_{0}:\pi =\pi _{0}=0.5\quad \text{ versus }\quad \text{H}_{1}:\pi
\neq \pi _{0}=0.5. In this interactive example you can repeat this test as often as you like. In each run a new sample is simulated (drawn). You can interact by deciding about \alpha and n in each repetition. In particular, you can try the following combinations:

Nl s2 52 e 5.gif

One of the raisons d’être of financial intermediaries is their ability to efficiently assess the credit standing (‘creditworthiness’) of potential borrowers. The management of ABC bank decides to introduce an extended credit checking scheme if the proportion of customers with repayment irregularities isn’t below 20 per cent. The in-house statistician conducting the statistical test is asked to keep the probability of not deciding to improve the credit rating procedure even though the proportion ‘really’ is at least 20 per cent low (i.e. to keep \alpha low). The random variable X: ‘credit event’ or ‘repayment problems’ is defined as an indicator variable taking on zero (‘no’) or one (‘yes’). The actual proportion \pi of clients having trouble servicing their debt is unknown. The hypothetical boundary value for testing this population proportion is \pi _{0}=0.2.

Hypothesis

Deviations from the hypothetical parameter into one direction are of interest; thus, a one-sided test will be employed. As the bank hopes to prove that the evaluation processes in place are sufficient, i.e. the proportion of debtors displaying irregularities in repaying their loans is less than 20 per cent, this claim is formulated as the alternative hypothesis: \text{H}_{0}:\pi \geq \pi _{0}=0.2\quad \text{ versus }\quad \text{H}
_{1}:\pi <\pi _{0}=0.2. The properties of this test with respect to the bank managers’ requirements have to be evaluated to ensure the test really meets their needs. The type I error, which can be made if the null hypothesis is rejected, is here: '\text{H}_{1}^{' }|\text{H}_{0}=\text{'conclude that the proportion of problematic debtors is}<0.2; \text{no new guidelines}\,|\,
\text{in reality, unreliable debtors make up at least 20 per cent; credit process has to be reviewed}. If the test results in the non-rejection of the null hypothesis, a type II error might occur: 
'\text{H}_{0}^{' }|\text{H}_{1}=\text{'conclude that the proportion of problematic debtors is} \geq 0.2; \text{new evaluation process to be developed}\,|\,\text{in reality, unreliable debtors make up less than 20 per cent; no need for action}. The type I error represents the risk the managers of the ABC bank want to cap. Its maximum level is given by the significance level, which has been set to a sufficiently low level of 0.05. The type II error represents the risk of a costly introduction of new credit evaluation processes without a management-approved need. The impact of this scenario on the bank’s profitability is difficult to assess, as the new process will lead to a repricing of credits and thus may also generate cost savings. The following two alternatives are both based on the above test. A random sample is drawn from the population of 10,000 debtors without replacement. This is reasonable if n/N\leq 0.05, as the random sample can then be regarded as ‘simple’.

1st alternative

To curb costs, a sample size of n=30 is chosen. The sampling-theoretical requirement n/N\leq 0.05 is fulfilled.

Test statistic and its distribution; decision regions

The estimator X: ‘Number of clients with irregularities in debt servicing in sample of size 30’ can directly serve as our test statistic V. Under 
\text{H}_{0}, V=X has Binomial distribution B\left( 30;\,0.2\right) . A small V supports the alternative hypothesis. The critical value c is the smallest realization of 
X for which F_{B}\left( x\right) exceeds \alpha , i.e. it has to satisfy: F_{B}\left( c-1\right) \leq \alpha
=0.05 and F_{B}\left( c\right) >\alpha =0.05. In the numerical table of the cumulative distribution function of B\left(
30;\,0.2\right) we find c=3, and thus we have the following decision regions: Rejection region for H_{0}:\left\{ v\,|\,v<3\right\} =\left\{ 0,1,2\right\} , with P\left( V<3|0.2\right) =0.0442. Non-rejection region for H_{0}:\left\{ v\,|\,v\geq 3\right\} =\left\{ 3,4,\ldots ,30\right\} , with P\left( V\geq 3|0.2\right) =0.9558. Because V=X is a discrete random variable, the given significance level isn’t exhausted: \alpha _{a}=0.0442 <\alpha =0.05.
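As an illustrative check (not part of the original text), the critical value and the actual significance level of this exact left-sided test can be reproduced with a few lines of standard-library Python:

```python
from math import comb

def binom_cdf(x, n, p):
    """F_B(x): cumulative distribution function of B(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

n, pi0, alpha = 30, 0.2, 0.05
# Left-sided rule: c is the smallest x with F_B(x) > alpha,
# i.e. F_B(c-1) <= alpha and F_B(c) > alpha; rejection region {v | v < c}.
c = next(x for x in range(n + 1) if binom_cdf(x, n, pi0) > alpha)
alpha_a = binom_cdf(c - 1, n, pi0)  # P(V < c | pi0): actual significance level
print(c, round(alpha_a, 4))  # 3 0.0442
```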

Sampling and computing the test statistic

30 randomly selected debtors are investigated with respect to reliability in debt servicing. Assume 5 of them haven’t always fulfilled their contractual obligations: v=5.

Test decision and interpretation

As v=5 belongs to the non-rejection region for H_{0}, the null hypothesis is not rejected. Even though the sample proportion 
x/n=5/30=0.167 is smaller than the hypothetical boundary proportion \pi
_{0}=0.2, which should favour H_{1}, we cannot conclude \text{H}_{0} is false: at a significance level of 0.05, the difference cannot be regarded as statistically significant. In other words: it is far too likely that the difference has arisen from sampling variability due to the small sample size to be able to reject the null hypothesis. It is important to observe that it is not merely the value of the point estimator compared to the hypothetical value that leads to a non-rejection or rejection of the null hypothesis; the decision rests on intervals that take the random character of the estimator into account, i.e. the difference is compared to an appropriate, case-specific statistical yardstick that determines which deviations are statistically significant. Based on a random sample of size n=30 and a significance level \alpha =0.05, we were unable to show statistically that the proportion of troubled debtors is significantly smaller than 20 per cent. Consequently, the ABC bank will review and try to improve the credit approval procedures.

Power

Not having rejected the null hypothesis, we are vulnerable to a type II error, which occurs when the alternative hypothesis is true: '\text{H}_{0}^{' }|\text{H}
_{1}. Let’s calculate the type II error probability for a true parameter value 
\pi =0.15: What is the probability of not rejecting the null hypothesis in a left-sided test with \pi _{0}=0.2, n=30, \alpha =0.05 and c=3, given the true population proportion is \pi =0.15 and hence the null hypothesis is actually wrong? \beta \left( \pi =0.15\right) =P\left( '\text{H}_{0}^{' }|\text{H}
_{1}\right) =P\left( V=X\in \,\text{non-rejection region for H}
_{0}\,|\,\pi =0.15\right) =P\left( V\geq 3\,|\,\pi =0.15\right) . We compute P\left( V\geq 3\,|\,\pi=0.15\right)=1-P\left(
V<3\,|\,\pi=0.15\right)=1-P\left( V\leq 2
\,|\,\pi=0.15\right)=1-0.1514=0.8486, where P\left( V\leq 2\,|\,\pi=0.15\right) is taken from the table of the cumulative distribution function B\left( 30; \,
0.15\right) for c=2, that is F_{B}\left( 2\right). Interpretation: Given the true proportion is \pi =0.15, 84.86\% of all samples of size n=30 will not be able to discriminate between the true parameter and the hypothetical \pi _{0}=0.20, inducing the bank to undertake suboptimal improvements of the credit assessment process with probability 0.8486. In deciding to control only the maximum type I error probability, the bank is accepting type II error probabilities of this magnitude. Statisticians can provide management with power function graphs for any desired true parameter value \pi . Of course, not rejecting the null hypothesis can also be the right decision: '\text{H}_{0}^{' }|\text{H}_{0}. Suppose, for example, that the true proportion of unreliable debtors is \pi =0.25. The probability of not rejecting the null hypothesis and hence (unknowingly) making the right decision given our current test setting (left sided with \pi _{0}=0.20, 
n=30, \alpha =0.05 and thus c=3) is P\left( V=X\in \,\text{non-rejection region for H}_{0}\,|\,\pi
=0.25\right) =P\left( V\geq 3\,|\,\pi =0.25\right) =P\left( '\text{H}
_{0}^{' }|\text{H}_{0}\right) \geq 1-\alpha . We have P\left( V\geq 3\,|\,\pi=0.25\right)=1-P\left(
V<3\,|\,\pi=0.25\right)=1-P\left( V\leq 2
\,|\,\pi=0.25\right)=1-0.0106=0.9894, where P\left( V\leq 2\,|\,\pi=0.25\right) can be looked up in a numerical table of B\left( 30; \, 0.25\right) as the cumulative probability for values less than or equal to c=2, i.e. F_{B}\left(
2\right). These calculations can be carried out for any desired parameter value within the overall parameter space (here: \pi \in \left( 0,1\right) ). Depending on which hypothesis the individual parameter adheres to, the power curve 
P\left( \pi \right) or 1-P\left( \pi \right) returns probabilities for making a right decision or a type I or type II error.

\pi True hypothesis P\left( \pi\right) 1-P\left( \pi\right)
0 \text{H}_{1} 1=1-\beta 0=\beta
0.05 \text{H}_{1} 0.8122=1-\beta 0.1878=\beta
0.10 \text{H}_{1} 0.4114=1-\beta 0.5886=\beta
0.15 \text{H}_{1} 0.1514=1-\beta 0.8486=\beta
0.20 \text{H}_{0} 0.0442=\alpha_{a} 0.9558=1-\alpha_{a}
0.25 \text{H}_{0} 0.0106=\alpha 0.9894=1-\alpha
0.30 \text{H}_{0} 0.0021=\alpha 0.9979=1-\alpha
0.35 \text{H}_{0} 0.0003=\alpha 0.9997=1-\alpha
0.40 \text{H}_{0} 0=\alpha 1=1-\alpha
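The entries of the table above can be reproduced numerically. This is an illustrative sketch (the function name `power_left` is ours, not from the text): the power of the exact left-sided test is P\left( \pi \right) =P\left( V<c\,|\,\pi \right) =F_{B}\left( c-1\right) evaluated under B\left( 30;\,\pi \right) :

```python
from math import comb

def binom_cdf(x, n, p):
    """F_B(x): cumulative distribution function of B(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def power_left(pi, n=30, c=3):
    """P(pi) = P(V < c | pi) = F_B(c - 1) for the exact left-sided test."""
    return binom_cdf(c - 1, n, pi)

# Reproduce the table rows for selected true proportions pi
for pi in (0.05, 0.10, 0.15, 0.20, 0.25, 0.30):
    print(f"{pi:.2f}  {power_left(pi):.4f}")
```

At \pi =\pi _{0}=0.20 the power equals the actual significance level 0.0442, as stated above.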

The following display shows the graph of the power curve in the left-sided test with parameters \pi _{0}=0.20, n=30, \alpha =0.05 and c=3.

Nl s2 52 e 4.gif

2nd alternative

Now the statistician tries to both satisfy the parameter \alpha=0.05 set by the management to contain the probability of the crucial type I error and keep the type II error probability as low as possible. She is aware of the trade-off between the \alpha and \beta errors and focuses on reducing the associated probabilities simultaneously by increasing the sample size n, which makes the decision an economic one. Cost projections in conjunction with a valuation of the benefit of higher reliability lead to a choice of n=350, still small enough to satisfy 
n/N\leq 0.05 as a basis for simple random sampling without replacement.

Test statistic and its distribution; decision regions

The standardized test statistic V=\frac{\widehat{\pi }-\pi _{0}}{\sigma _{0}\left( \widehat{\pi }\right) }=
\frac{\widehat{\pi }-\pi _{0}}{\sqrt{\frac{\pi _{0}\,\left( 1-\pi
_{0}\right) }{n}}} is used. Under \text{H}_{0}, it is approximately normally distributed with parameters \mu =0 and \sigma =1. Large sample theory suggests that the approximation is sufficiently accurate for a sample size of n=350. From the cumulative standard normal distribution table we can take c=z_{0.95}=1.645 to satisfy P\left( V\leq c\right) =1-\alpha
=0.95. From symmetry it follows that -c=-1.645, and we have \left\{
v\,|\,v<-1.645\right\} as the approximate rejection region for H_{0} and \left\{ v\,|\,v\geq -1.645\right\} as the approximate non-rejection region for H_{0}.

Sampling and computing the test statistic

From the universe of 10,000 debtors, 350 are selected at random, of which 63 turn out to have displayed problems in debt servicing at least once in their repayment history. Their proportion within the sample is thus 
0.18. Plugging this into the test statistic yields v=\frac{0.18-0.2}{\sqrt{\frac{0.2\,\cdot (\,0.8)}{350}}}=-0.935.

Test decision and interpretation

As v=-0.935 falls into the non-rejection region for H_{0}, the null hypothesis is not rejected. On the basis of this particular sample of size 
n=350, it cannot be claimed statistically that the proportion of problematic debtors is less than 20 per cent. The ABC bank management will thus initiate a review of their credit procedures.

Type II error probability

As the bank management has been induced to not-reject the statement in the null hypothesis, it may have made a type II error, which occurs if the true proportion amongst the 10,000 is actually smaller than 
0.2: '\text{H}_{0}^{' }|\text{H}_{1}. Let’s examine the probability of this happening for a ‘hypothetical’ true population proportion of \pi
=0.15, i.e. P\left( '\text{H}_{0}^{' }|\text{H}
_{1}\right) =\beta \left( \pi =0.15\right) . First we must determine the critical proportion p_{c} corresponding to the critical value calculated using the normal approximation. From -c=\left( p_{c}-\pi _{0}\right) /\sigma _{0}\left( \hat{\pi}\right) it follows that p_{c}=\pi _{0}-c\cdot \sigma _{0}\left( \hat{\pi}\right) =0.2-1.645\cdot \sqrt{
0.2\cdot 0.8/350} =0.1648. \beta \left( \pi =0.15\right) is the probability of the sample function 
\widehat{\pi } assuming a value from the non-rejection region of the null hypothesis, given the true parameter \pi belongs to the alternative hypothesis: \beta \left( \pi =0.15\right) =P\left( \widehat{\pi }\geq p_{c}\,|\,\pi
=0.15\right) =P\left( \widehat{\pi }\geq 0.1648\,|\,\pi =0.15\right) . In order to determine this probability on the basis of a numerical table for the standard normal distribution, we must standardize using E\left(
\widehat{\pi }\right) =\pi =0.15 and Var\left( \widehat{\pi }\right) =\pi
\left( 1-\pi \right) /n=0.15\cdot 0.85/350: \begin{align}
\beta \left( \pi =0.15\right) & =P\left( \widehat{\pi }\geq
p_{c}\,|\,\pi =0.15\right) =P\left( \frac{\widehat{\pi }-\pi }{\sqrt{
\frac{\pi \,\left( 1-\pi \right) }{n}}}\geq \frac{p_{c}-\pi }{\sqrt{
\frac{\pi \,\left( 1-\pi \right) }{n}}}\,|\,\pi =0.15\right) \\
& =P\left( V\geq \frac{0.1648-0.15}{\sqrt{\frac{0.15\cdot 0.85}{350}}}
\,|\,\pi =0.15\right) =P\left( V\geq 0.775\,|\,\pi =0.15\right) .\end{align} In the standard normal distribution table we find P\left( V\leq 0.775
\right)=0.7808 and thus have \beta\left( \pi=0.15\right)=1-P\left( V\leq 0.775
\right)=1-0.7808=0.2192. Thus, compared to \beta\left( \pi=0.15\right)=0.8486 from the 1st alternative, the increase in the sample size has resulted in a sizeable reduction in the type II error probability for a true population proportion of \pi=0.15.
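The normal-approximation calculation above can be sketched in a few lines of standard-library Python (the variable names are ours; p_{c} is rounded to four decimals as in the text):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

n, pi0, c = 350, 0.2, 1.645
sigma0 = sqrt(pi0 * (1 - pi0) / n)    # sd of pi_hat under H0
p_c = round(pi0 - c * sigma0, 4)      # critical proportion, 0.1648
pi_true = 0.15                        # assumed true proportion under H1
z = (p_c - pi_true) / sqrt(pi_true * (1 - pi_true) / n)
beta = 1 - phi(z)                     # type II error probability at pi = 0.15
print(p_c, round(z, 3), round(beta, 3))  # 0.1648 0.775 0.219
```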

Nl s2 52 f 2.gif

A statistics professor has the impression that in the last year the university library has bought proportionally fewer new statistics books than in the past. Over the last couple of years the relative amount of statistics books amongst new purchases has consistently been more than 10 per cent. He asks one of his assistants to investigate whether this has changed in favour of other departments. Acting on behalf of his students, for whom he wants to secure as many new books as possible, he asks his assistant to minimize the risk of not complaining to the head of the library when the proportion of statistics books has decreased. The assistant decides to have a sample of 25 books taken from the file containing the new purchases over the last 12 months. He wants to know how many of these are statistics books. He is thus dichotomizing the random variable ‘subject matter’ into the outcomes ‘statistics’ and ‘not statistics’. Of course, if you regard the purchases as an outcome of a decision-making process conducted by the librarians, this is anything but a random variable. But for the statisticians, who rely on a sample because they don’t have access to all relevant information, it appears to be one. From the proportion of statistics books in the sample the assistant wants to draw inferences about the population of all newly purchased books, using a statistical test to allow for deviations of the proportion in the sample from that in the population. In particular, he wants to verify whether the proportion has indeed dropped below the past average of 10 per cent. He will thus test the population proportion \pi and chooses a ‘standard’ significance level of 0.05.

Hypothesis

As the assistant wants to verify whether the proportion has dropped below 
0.1, he has to employ a one-sided test. He recalls that the professor wants him to minimize the probability of not disclosing that the proportion has decreased below \pi_{0}=0.1 when in reality it has. He thus opts for a right-sided test, i.e. puts the professors’ claim as null hypothesis in the hope of not rejecting it: \text{H}_{0}: \pi \leq \pi_{0}=0.1 \quad \text{ versus } \quad\text{H}_{1}:
\pi > \pi_{0}=0.1. The assistant undertakes an investigation into the properties of this test with respect to the professor’s intention of minimizing the probability of not detecting a relative decrease in the statistics book supply. A real-world decrease can only go undetected if the null hypothesis has been rejected even though it is really true. This situation is called a type I error: '\text{H}_{1}^{' }|\text{H}_{0}=\text{'conclude proportion of statistics books has not decreased'}|\text{in reality, the proportion has decreased}. The maximum probability of this situation, P\left( '\text{H}
_{1}^{' }|\text{H}_{0}\right) , is given by the significance level \alpha , which has been set to 0.05. Thus, the risk the professor wanted to ‘minimize’ is under control. If the null hypothesis is not rejected, then a type II error can arise: '\text{H}_{0}^{' }|\text{H}_{1}=\text{'conclude proportion of statistics books has decreased'}|\text{in reality, the proportion has not decreased}. The probability of this happening (conditional on the null hypothesis not having been rejected), P\left( '\text{H}_{0}^{' }|\text{H}
_{1}\right) =\beta , is unknown, because the true proportion \pi (which is an element of the parameter set specified by the alternative hypothesis) is unknown. As we have already seen in other examples, it can be substantial, but the professor’s priority is to control the type I error, at the price of a possibly large type II error.

Test statistic and its distribution; decision regions

The estimator X: ‘number of statistics books in a sample of 25 books’ can serve as test statistic V. Under \text{H}_{0}, 
V=X has Binomial distribution with parameters n=25 and \pi _{0}=0.1: 
V\thicksim B\left( 25;\,0.1\right) . A relatively high number of statistics books in the sample supports the alternative hypothesis, that the proportion of statistics books has not decreased. The critical value c is the smallest realization of X for which F_{B}\left( c\right) equals or exceeds 1-\alpha =0.95, that is, we require F_{B}\left( c-1\right) <1-\alpha =0.95 and F_{B}\left(
c\right) \geq 1-\alpha =0.95. In the table of the cumulative distribution function of B\left( 25;\,0.1\right) you will find c=5. The rejection region for H_{0} is thus\left\{ v\,|\,v>5\right\} =\left\{ 6,7,\ldots ,25\right\} , such that P\left( V>5|0.1\right) =0.0334=\alpha _{a}<\alpha . As V=X is a discrete random variable, the given significance level isn’t fully utilized: \alpha _{a}=0.0334<\alpha =0.05. The non-rejection region for H_{0} is given by\left\{ v\,|\,v\leq 5\right\} =\left\{ 0,1,2,3,4,5\right\} , such that P\left( V\leq 5|0.1\right) =0.9666.
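As before, a short standard-library sketch (illustrative, not part of the original text) reproduces the right-sided critical value and the actual significance level:

```python
from math import comb

def binom_cdf(x, n, p):
    """F_B(x): cumulative distribution function of B(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

n, pi0, alpha = 25, 0.1, 0.05
# Right-sided rule: c is the smallest x with F_B(x) >= 1 - alpha,
# i.e. F_B(c-1) < 1 - alpha and F_B(c) >= 1 - alpha; rejection region {v | v > c}.
c = next(x for x in range(n + 1) if binom_cdf(x, n, pi0) >= 1 - alpha)
alpha_a = 1 - binom_cdf(c, n, pi0)  # P(V > c | pi0): actual significance level
print(c, round(alpha_a, 4))  # 5 0.0334
```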

Sampling and computing the test statistic

A subset of 25 books is selected at random from the list of last year’s new purchases and categorized into statistics and non-statistics books. As the total number of new books is sufficiently large from a sampling-theoretical point of view, the sampling can be carried out without replacement and the sample can still be regarded as a simple random sample. The number of statistics books in the sample is counted to be x=3, which will serve as the realized test statistic value v.

Test decision and interpretation

As v=3 falls into the non-rejection region for H_{0}, the null hypothesis cannot be rejected. On the basis of a random sample of size n=25 and a significance level of \alpha =0.05, the assistant couldn’t verify statistically that the proportion of statistics books is still above 10 per cent. This test result means that a complaint to the library seems to be merited.

Power

Given our test parameters (\pi _{0}=0.1, n=25, \alpha =0.05 and c=5 ), what is the probability of not rejecting the null hypothesis if the true proportion of statistics books is \pi =0.2? That is, we want to calculate the probability of the type II error given a specific element of the parameter set associated with the alternative hypothesis, 
\pi =0.2: \beta \left( 0.2\right) =P\left( '\text{H}_{0}^{' }|\text{H}
_{1}\right) =P\left( V=X\in \,\text{non-rejection region for H}
_{0}\,|\,\pi =0.2\right) =P\left( V\leq 5\,|\,\pi =0.2\right) . In the table of the cumulative Binomial distribution B\left(
25;\,0.2\right) we find this probability to be 0.6167. Alas, if the true proportion has increased to 20 per cent, there is still a 61.67 per cent chance of not discovering a significant deviation from the hypothetical boundary proportion of 10 per cent. This is the probability of an unjustified complaint issued by the professor given the proportion has risen to 0.2, a substantial relative increase. The probability of making a type II error contingent on alternative true proportions \pi can be computed via the power curve. Levels of P\left(
\pi \right) and 1-P\left( \pi \right) for several values of \pi are listed in the following table.

\pi True hypothesis P\left( \pi\right) 1-P\left( \pi\right)
0 \text{H}_{0} 0=\alpha 1=1-\alpha
0.05 \text{H}_{0} 0.0012=\alpha 0.9988=1-\alpha
0.1 \text{H}_{0} 0.0334=\alpha_{a} 0.9666=1-\alpha_{a}
0.15 \text{H}_{1} 0.1615=1-\beta 0.8385=\beta
0.20 \text{H}_{1} 0.3833=1-\beta 0.6167=\beta
0.25 \text{H}_{1} 0.6217=1-\beta 0.3783=\beta
0.30 \text{H}_{1} 0.8065=1-\beta 0.1935=\beta
0.35 \text{H}_{1} 0.9174=1-\beta 0.0826=\beta
0.40 \text{H}_{1} 0.9706=1-\beta 0.0294=\beta
0.45 \text{H}_{1} 0.9914=1-\beta 0.0086=\beta
0.50 \text{H}_{1} 0.9980=1-\beta 0.0020=\beta
0.60 \text{H}_{1} 0.9999=1-\beta 0.0001=\beta
0.70 \text{H}_{1} 1=1-\beta 0=\beta
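The table entries can again be verified numerically. In this illustrative sketch (the function name `power_right` is ours), the power of the exact right-sided test is P\left( \pi \right) =P\left( V>c\,|\,\pi \right) =1-F_{B}\left( c\right) under B\left( 25;\,\pi \right) :

```python
from math import comb

def binom_cdf(x, n, p):
    """F_B(x): cumulative distribution function of B(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def power_right(pi, n=25, c=5):
    """P(pi) = P(V > c | pi) = 1 - F_B(c) for the exact right-sided test."""
    return 1 - binom_cdf(c, n, pi)

# Reproduce the table rows for selected true proportions pi
for pi in (0.10, 0.15, 0.20, 0.25, 0.30, 0.35):
    print(f"{pi:.2f}  {power_right(pi):.4f}")
```

At \pi =\pi _{0}=0.10 the power equals the actual significance level 0.0334, as stated above.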

For example, if the true proportion (and therefore absolute amount) of statistics books is \pi =0, the sample cannot contain any statistics books; we expect x=0 and won’t reject the null hypothesis. The rejection of the null hypothesis ('\text{H}_{1}^{' }) is an impossible event with associated probability of zero. The power is the conditional probability of rejecting the null hypothesis given the relative amount is zero: P\left( 0\right) =P\left( V=X\in \,\text{rejection region for H}
_{0}\,|\,\pi =0\right) =P\left( '\text{H}_{1}^{' }\,|\,0\right) =0. If, on the other hand, the true proportion of statistics books is \pi=0.35 , the power is calculated as P\left( 0.35\right)=P\left( V>5\,|\,\pi=0.35\right)=1-P\left(
V\leq5\,|\,\pi=0.35\right)=1-0.0826=0.9174, where P\left( V\leq5\,|\,\pi=0.35\right) can be looked up in the table of the cumulative distribution function as the value of 
B\left( 25; \, 0.35\right) for c=5. P\left( 0.35\right) is the probability of correctly rejecting the null hypothesis, P\left( '\text{H}_{1}^{' }|\text{H}_{1}\right) . The probabilities of rejecting the null hypothesis and not rejecting it must always sum to one for any given true parameter value within the range specified by the alternative hypothesis: P\left( '\text{H}_{0}^{' }|\text{H}_{1}\right) +P\left( '\text{H}_{1}^{' }|\text{H}_{1}\right) =1. For a true proportion of \pi =0.35, the former sampling result amounts to making a type II error, the probability of which is denoted by \beta \left(
0.35\right) . Thus, we can write \beta \left( 0.35\right) +P\left( '\text{H}_{1}^{' }|\text{H}
_{1}\right) =1 or P\left( '\text{H}_{1}^{' }|\text{H}_{1}\right) =1-\beta \left(
0.35\right) . As P\left( '\text{H}_{1}^{' }|\text{H}_{1}\right) is the value of the power at point \pi =0.35, we can calculate the probability of making a type II error as \beta \left( 0.35\right) =1-P\left( 0.35\right) =0.0826. If the true proportion of statistics books is 35 per cent, 8.26 per cent of all samples of size n=25 will lead to a non-rejection of the null hypothesis, i.e. won’t detect the significant difference between 
\pi =0.35 and \pi _{0}=0.10. The following display depicts the graph of the power curve for the right-sided test we have just discussed: \pi _{0}=0.10, n=25, \alpha
=0.05 and c=5.

Nl s2 52 f 1.gif