How does hypothesis testing work?

How are hypothesis testing and confidence intervals related, and how does hypothesis testing contribute to scientific knowledge?
DrKateBesant, United States, Researcher
Published: 06-07-2017
Hypothesis Testing (ECE 3530, Spring 2010, Antonio Paiva)

What is hypothesis testing?

A statistical hypothesis is an assertion or conjecture concerning one or more populations. To prove that a hypothesis is true, or false, with absolute certainty, we would need absolute knowledge; that is, we would have to examine the entire population. Instead, hypothesis testing concerns how to use a random sample to judge whether the sample is evidence that supports the hypothesis or not.

Hypothesis testing is formulated in terms of two hypotheses:
- H0: the null hypothesis;
- H1: the alternative hypothesis.

The hypothesis we want to test is whether H1 is "likely" true. So, there are two possible outcomes:
- Reject H0 and accept H1, because of sufficient evidence in the sample in favor of H1;
- Do not reject H0, because of insufficient evidence to support H1.

Very important: failure to reject H0 does not mean the null hypothesis is true. There is no formal outcome that says "accept H0." It only means that we do not have sufficient evidence to support H1.

Example: In a jury trial the hypotheses are:
- H0: the defendant is innocent;
- H1: the defendant is guilty.
H0 (innocent) is rejected if H1 (guilty) is supported by evidence beyond "reasonable doubt." Failure to reject H0 (to prove guilt) does not imply innocence, only that the evidence is insufficient to convict.

Case study

A company manufacturing RAM chips claims that the defective rate of the population is 5%. Let p denote the true defective probability. We want to test:
- H0: p = 0.05
- H1: p > 0.05
We are going to use a sample of 100 chips from the production line for the test. Let X denote the number of defective chips in the sample of 100. Reject H0 if X ≥ 10 (a critical value chosen "arbitrarily" in this case). X is called the test statistic.
[Diagram: the possible values of X from 0 to 100; the values X ≥ 10 form the critical region ("reject H0, p > 0.05"), with the critical value at 10; for X < 10 we do not reject H0.]

Why did we choose a critical value of 10 for this example? Because this is a Bernoulli process, the expected number of defectives in a sample is np. So, if p = 0.05 we should expect 100 × 0.05 = 5 defectives in a sample of 100 chips. Therefore, 10 defectives would be strong evidence that p > 0.05. The problem of how to find a critical value for a desired level of significance of the hypothesis test will be studied later.

Types of errors

Because we are making a decision based on a finite sample, there is a possibility that we will make mistakes. The possible outcomes are:

                      H0 is true        H1 is true
  Do not reject H0    Correct decision  Type II error
  Reject H0           Type I error      Correct decision

Definition: The acceptance of H1 when H0 is true is called a Type I error. The probability of committing a Type I error is called the level of significance and is denoted by α.

Example: Convicting the defendant when he is innocent.

The lower the significance level α, the less likely we are to commit a Type I error. Generally, we would like small values of α; typically 0.05 or smaller.

Case study (continued):

α = Pr(Type I error) = Pr(reject H0 when H0 is true)
  = Pr(X ≥ 10 when p = 0.05)
  = Σ_{x=10}^{100} b(x; n = 100, p = 0.05)        (binomial distribution)
  = Σ_{x=10}^{100} C(100, x) × 0.05^x × 0.95^(100−x)
  = 0.0282

So, the level of significance is α = 0.0282.

Definition: Failure to reject H0 when H1 is true is called a Type II error. The probability of committing a Type II error is denoted by β. Note: it is impossible to compute β unless we have a specific alternative hypothesis.

Case study (continued): We cannot compute β for H1: p > 0.05 because the true p is unknown. However, we can compute it for testing H0: p = 0.05 against a specific alternative hypothesis, for instance H1: p = 0.1.
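The significance level can be checked numerically. A minimal sketch using only the standard library (the helper name binom_pmf is mine; math.comb requires Python 3.8+):

```python
from math import comb

def binom_pmf(x: int, n: int, p: float) -> float:
    """Binomial probability b(x; n, p) = C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# alpha = Pr(X >= 10) when X ~ Binomial(n = 100, p = 0.05)
alpha = sum(binom_pmf(x, 100, 0.05) for x in range(10, 101))
print(round(alpha, 4))  # 0.0282, matching the slide's level of significance
```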
β = Pr(Type II error) = Pr(do not reject H0 when H1 is true)
  = Pr(X < 10 when p = 0.1)
  = Σ_{x=0}^{9} b(x; n = 100, p = 0.1)
  = 0.4513

Case study (continued): What is the probability of a Type II error if p = 0.15?

β = Pr(Type II error) = Pr(X < 10 when p = 0.15)
  = Σ_{x=0}^{9} b(x; n = 100, p = 0.15)
  = 0.0551

Effect of the critical value

Moving the critical value provides a trade-off between α and β. A reduction in β is always possible by increasing the size of the critical region, but this increases α. Likewise, reducing α is possible by decreasing the critical region, at the cost of a larger β.

Case study (continued): Let's see what happens when we change the critical value from 10 to 8; that is, we reject H0 if X ≥ 8.

[Diagram: the old critical region was X ≥ 10; the new critical region X ≥ 8 is larger, with the critical value now at 8.]

The new significance level is

α = Pr(X ≥ 8 when p = 0.05)
  = Σ_{x=8}^{100} b(x; n = 100, p = 0.05)
  = 0.128.

As expected, this is a larger value than before (it was 0.0282).

Testing against the alternative hypothesis H1: p = 0.1,

β = Pr(X < 8 when p = 0.1) = Σ_{x=0}^{7} b(x; n = 100, p = 0.1) = 0.206,

which is lower than before (0.4513). Testing against the alternative hypothesis H1: p = 0.15,

β = Σ_{x=0}^{7} b(x; n = 100, p = 0.15) = 0.012,

again lower than before (0.0551).

Effect of the sample size

Both α and β can be reduced simultaneously by increasing the sample size.

Case study (continued): Consider that now the sample size is n = 150 and the critical value is 12. Then, reject H0 if X ≥ 12, where X is now the number of defectives in the sample of 150 chips.
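The β values and the critical-value trade-off above can be reproduced with the same binomial sums (a minimal sketch using only the standard library; the helper name binom_cdf is mine):

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """Pr(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k + 1))

n = 100

# Critical value 10: beta = Pr(X <= 9 | p) for specific alternatives.
print(round(binom_cdf(9, n, 0.10), 4))   # 0.4513 for H1: p = 0.10
print(round(binom_cdf(9, n, 0.15), 4))   # 0.0551 for H1: p = 0.15

# Lowering the critical value to 8 enlarges the critical region:
# alpha goes up, beta goes down, as on the slides.
alpha_new = 1 - binom_cdf(7, n, 0.05)
print(round(alpha_new, 3))               # 0.128 (was 0.0282)
print(round(binom_cdf(7, n, 0.10), 3))   # 0.206 (was 0.4513)
print(round(binom_cdf(7, n, 0.15), 3))   # 0.012 (was 0.0551)
```

Each printed value matches the corresponding number in the slides, which makes the trade-off concrete: the same change of critical value that raises α lowers β for every specific alternative.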
