Hypothesis testing is the method psychologists and research scientists use to evaluate whether the evidence supports or contradicts their hypotheses.
Research seeks to answer questions or to test theories by experimentally manipulating variables. A hypothesis is a tentative, testable prediction of a possible result or explanation. Experiments are designed to test whether a hypothesis is valid. With carefully controlled experimental methods, analysis, and interpretation, researchers try to draw general conclusions that apply in other situations as well.
When psychologists engage in research, they generate specific questions called hypotheses. Research hypotheses are informed speculations about the likely results of a project. In a typical research design, researchers might want to know whether people in two groups differ in their behavior.
Psychologists often construct research protocols to determine which of two competing conclusions is more likely. Consider the question of whether organizing related information helps memory. The possibility that organizing related information makes no difference is called the null hypothesis, because it states that there is no effect on learning. The other possibility, that organizing related information helps learning, is called the research hypothesis, or alternative hypothesis. To test the research hypothesis, people are randomly assigned to one of two groups that differ in the way they are taught to learn, and the memory performance of the two groups is then compared.
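The comparison described above is usually made with a statistical test. As a minimal sketch, the following Python code computes Welch's two-sample t statistic for the memory study; the recall scores are invented for illustration, not data from an actual experiment.

```python
import math
import statistics

# Hypothetical recall scores (words remembered) for the two groups
# in the memory study described above; the numbers are illustrative only.
organized = [14, 16, 15, 18, 17, 15, 19, 16]    # taught to organize information
unorganized = [12, 13, 15, 11, 14, 12, 13, 14]  # no organization strategy

def welch_t(a, b):
    """Welch's t statistic: (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(organized, unorganized)
print(round(t, 2))  # → 4.33
```

A large t statistic (here about 4.33) indicates that the difference between the group means is big relative to the variability within the groups, which is evidence against the null hypothesis.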
As a rule, psychologists attempt to reject the null hypothesis in favor of the research hypothesis, because their research typically focuses on detecting a change from one situation to the next, not on the absence of change. In hypothesis testing, psychologists are aware that they may draw erroneous conclusions. For example, they might reject the null hypothesis and conclude that the performance of the two groups differs; that is, that one group remembers more than the other because it organizes the information differently. In reality, one group might simply have been lucky, and if the study were performed a second time, the result might be different. In hypothesis testing, this mistaken conclusion, rejecting a null hypothesis that is in fact true, is called a Type I error.
Sometimes researchers erroneously conclude that the difference between the two groups is not real; that is, they fail to reject the null hypothesis when they should. This kind of error is called a Type II error.
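Both error rates can be made concrete by simulation. The sketch below, assuming a simple two-sample z test with known variance (an assumption chosen for simplicity, not a method named in this article), shows that when the null hypothesis is true the test still rejects it about 5% of the time (Type I), and that when a real but modest difference exists the test often misses it (Type II).

```python
import math
import random

random.seed(1)

def z_test_rejects(group_a, group_b, sigma=1.0, critical=1.96):
    """Two-sample z test with known sigma: reject H0 when |z| > 1.96 (alpha = .05)."""
    n = len(group_a)
    z = (sum(group_a) / n - sum(group_b) / n) / math.sqrt(2 * sigma**2 / n)
    return abs(z) > critical

trials, n = 4000, 30

# Type I error: both groups are drawn from the SAME population (H0 is true),
# yet chance alone makes the test reject H0 about 5% of the time.
type1 = sum(
    z_test_rejects([random.gauss(0, 1) for _ in range(n)],
                   [random.gauss(0, 1) for _ in range(n)])
    for _ in range(trials)
) / trials

# Type II error: the populations really differ (H0 is false),
# but the test sometimes fails to detect the difference.
type2 = sum(
    not z_test_rejects([random.gauss(0.5, 1) for _ in range(n)],
                       [random.gauss(0.0, 1) for _ in range(n)])
    for _ in range(trials)
) / trials

print(f"Type I rate  ~ {type1:.3f} (close to alpha = .05)")
print(f"Type II rate ~ {type2:.3f} (depends on effect size and sample size)")
```

The Type II rate in the second simulation is substantial (roughly half the trials), illustrating why researchers care about sample size and statistical power, not only about the significance threshold.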
When researchers conduct a single experiment, they may be making an error without realizing it. This is why researchers repeat experiments and test the work of others in an attempt to isolate variables, determine cause-and-effect relationships, evaluate the validity of hypotheses, and arrive at outcomes that can be generalized or replicated.
See also Independent variable; Probability; Scientific method; Validity.