Empirical Methods in Political Science: An Introduction

5 hypothesis testing.

By Zhihang Ruan

5.1 Introduction

Either in our daily lives or in scientific research, we come across a lot of claims. We may formulate our own hypotheses based on our knowledge, available information, or existing theory. These hypotheses can be descriptive, e.g., we may hypothesize that a certain percent of U.S. voters support the policy of universal basic income. Or the hypothesis can be causal, e.g., we may believe that education leads to stronger support for gender equality. The measures (for example, mean or standard deviation) used to describe a population distribution are called population parameter . If we have access to everyone among the population we are interested in, then we may easily tell whether our hypothesis of a population parameter is true or false (e.g., if we know every voter’s support for the policy of universal basic income, then we can prove/disprove our hypothesis concerning the support rate for the policy). But in many cases, we do not have access to the population to firmly prove or disprove our hypotheses. For example, it may cost too much to ask each U.S. voter about their opinions on specific policies. In these cases, statistical theory and methods provide us some effective ways to test a hypothesis, or more accurately, assess whether the observed data is or is not consistent with a claim of interest concerning the population. In this chapter, we will go through the idea of hypothesis testing in statistics and how it is applied in political science.

5.2 Background

There are different understandings of hypothesis testing. In this chapter, we will follow the Neyman-Pearson paradigm ( Rice 2007, 331 ) , which casts hypothesis testing as a decision problem. Within this paradigm, we first have a null hypothesis and an alternative hypothesis concerning the population. A null hypothesis is a claim or hypothesis we plan to test, or more specifically, something we decide whether to reject or not. It can be descriptive (e.g., the support rate for the president among all U.S. voters) or causal (education leads to stronger support for gender equality among all human beings). An alternative hypothesis is also called the research hypothesis, which is opposite to the null hypothesis. It is what we believe to be true if we reject the null hypothesis. Then with what we observe in a random sample from the population, we make a decision to reject or not reject the null hypothesis concerning the population. This approach does not enable us to say the exact probability that the null or alternative hypothesis is true. 7 To do that, we need more information and maybe another paradigm (e.g., so-called prior probability within the Bayesian paradigm), and we will not go in details in this chapter. But, even though the approach we discuss in this chapter does not directly tell us how likely a hypothesis is true or false, the approach is very useful in scientific studies as well as daily lives, as you will see in this chapter.

As mentioned in the introduction of this chapter, the classic idea of hypothesis testing concerns a sample and a population. In the previous chapter, we learned what the terms population, random sample and random sampling mean. The techniques we discuss in this chapter mostly assume a random sample. Below, we will quickly review the idea of random sampling and random sample and explains how random sampling enables us to make inference about the population with what we observe in the sample.

5.3 Samples and Sampling

As mentioned in the beginning of this chapter, in many cases, we do not have the access to all the units of the population we are interested in. For example, if we are interested in the support rate for the president, it would be perfect if we know the opinion of every single person (i.e., unit of the population) in the U.S. However, it is almost impossible to get access to everyone’s opinion. In many cases, we can only get access to a small group of individuals, which we call a sample from the population. When the sample is randomly chosen from the population (i.e., everyone in the population has an equal chance to be selected, or at least has a specific chance known to the researchers before the sample is drawn), then we may learn about the population with what we observe in the random sample we have. More specifically, statistical theory enables us to make inference about the population from the random sample. In the next part, I will explain how we may make inference from a random sample to the population and test a hypothesis concerning the population with a random sample.

5.3.1 Magic of the Central Limit Theorem

Let’s say, we roll a fair die. We know the probability of getting 1 is 1/6. In other words, the probability that the mean of the number we get from one trial equals 1 is 1/6. Then, if we roll the same die twice, we get two numbers. We can calculate the mean of the two numbers. What is the probability that the mean equals 1? Is the probability still 1/6? No, because if the mean is 1, we have to get 1 twice, the probability of which would be 1/36 (which equals 1/6 times 1/6). Very likely, the mean we get is larger than 1. Similarly, if we roll the die three times, the mean of the three numbers we get would probably be larger than 1. If we roll the die many times (e.g. 1,000 times), it is almost impossible that the mean would be 1 or even close to 1 (since it means we need to get 1 in all or most of the trials). Then what would the mean be? The mean would not be an extreme number like 1 or 6. Instead, it would be very close to the expected value we get from rolling it once, which is 3.5, the average of all possible numbers we get. Among the 1,000 trials, the number of 1s we get would be close to the amount of 2s we get, or the amount of 3s, etc. If we take the average of all numbers we get in the 1,000 trials, we would get a number very close to 3.5, which equals (1+2+3+4+5+6)/6.

This is what we call the weak law of large numbers: the sample average converges in probability towards the expected value or the population average, or in other words, the average of the sample gets close to the population average when the sample size is large (e.g., when rolling the die 1000 times).

One step further from the law of large numbers, we can rely on something called the central limit theorem to make inference. The central limit theorem suggests that the mean of a sufficiently large number of independent draws from any distribution will be normally distributed. A normal distribution is a bell-shaped probability density. From the example above, we already know the mean of a large amount of draws is very close to the expected value of the population. But in most cases, the average of the draws will not be exactly equal to the expected value of the population (which is 3.5 in the example of rolling a fair die). The central limit theorem enables us to calculate/quantify the probability that the sample average falls into intervals around the expected value of the population. As long as the expected value and variance of a normal distribution is known, we can calculate the probability that we get a sample mean within a specified interval. For example, with some calculation based on the central limit theorem (which we will not go into details here), we know that if we roll a fair die 1,000 times, the chance that the mean of the 1,000 numbers we get falls between 3.394 and 3.606 is roughly 0.95 (or 95 percent).

What if, after rolling the die 1,000 times, the average of the 1,000 numbers we get is much smaller than 3.394 or much larger than 3.606? Then we may want to check whether there is some problem with the rolling process, or whether the die is fair. Similarly, if we hypothesize that the support rate for the president is 50 percent, but after interviewing 1,000 people randomly drawn from the population, we find that the support rate is much lower than 50 percent, then we may doubt whether the support rate is really 50 percent. This makes sense when the sample is drawn randomly from the population. But if the sample is not drawn randomly (e.g., all the people in the sample are drawn from a gathering of a specific party), then the result does not tell us much about the support rate among the population. This is like a magician who uses tricks and gets 1 every time rolling a fair die. We cannot learn anything about the die based on the mean the magician gets.

These examples show us how central limit theorem works and how it makes hypothesis testing possible. In the next part, I will explain more specifically how we may estimate the population average/expected value based on what we observe from the sample, as well as how to test a hypothesis.

5.4 Estimates and Certainty

Based on the central limit theorem, we can make inferences about the population with the data we observe. One way to estimate the population parameter is called point estimate , which is a sample statistic used to estimate the exact value of a population parameter. We may consider the point estimate as our best guess to the population parameter based on what we observe in the sample. For example, if we learn that the mean of a random sample from simple random sampling is 3.5, then we may say that the point estimate of the population mean is 3.5.

But in most cases, the point estimate does not equal the true value of the population parameter (e.g., the population mean can be 3.5001, 3.4986 or other number when the sample mean is 3.5). Another way to estimate the population parameter is interval estimation. With the information we learn from the sample, we may calculate an interval that may include the population average. The central limit theorem enables us to quantify how confident we are that the interval will include the population average. The interval is called confidence interval , which defines a range of values within which the population parameter is estimated to fall. If we want to estimate the confidence interval of the population mean, we need the sample mean, the estimated population variance, and the sample size. A 95 percent confidence interval for the population mean equals \(\bar{X}\pm 1.96 * (S_{\bar{X}})\) . \(S_{\bar{X}}\) is the estimated standard error of the sampling distribution of the sample mean. It is equal to the standard error (or the square root of the variance) of the population divided by the square root of the sample size. 8 We can see from the formula that the range of the interval will decrease when the population variance is small, and the sample size is large. This makes sense intuitively because when there is little variation among the population, or when we have a large sample, the sample mean may be close to the population mean, and thus our estimation will be more precise.

In short, we can estimate the confidence interval of the population mean based on the sample we get. Similarly, if we have a hypothesis about the population average, then we can calculate an interval which the sample mean may fall into, and quantify how confident we are that the sample average will fall onto this interval.

It is intuitive to say that if we increase the range of our estimated interval, we are more confident that the interval will include the population mean. The trade-off is that our estimation is less precise. The likelihood, expressed as a percentage or a probability, that a specified interval will contain the population parameter, is called confidence level . For example, if we learn from a random sample (with a sample size of 1,000) that the support rate for the president is 52 percent, then a 95 percent confidence interval of the support rate among the population is between 50.5 and 53.5. And a 99 percent confidence interval is roughly 50.0 to 54.0 percent. As we can see, the confidence interval becomes wider (in other words, our estimation becomes less precise) if we want to be more confident that the population mean is within the confidence interval we estimate (i.e., we have a higher confidence level). More specifically, a 99 percent confidence interval for the population mean equals \(\bar{X}\pm 2.58 * (S_{\bar{X}})\) . 9 As we can see, the interval is wider than the 95 percent confidence interval, which is \(\bar{X}\pm 1.96 * (S_{\bar{X}})\) , and the 90 percent confidence interval, which is \(\bar{X}\pm 1.64 * (S_{\bar{X}})\) .

5.5 Steps of Hypothesis Testing

Hypothesis testing becomes more straightforward once we understand the central limit theorem and confidence interval. As mentioned earlier, if we have a hypothesis of the population mean, then we can calculate a confidence interval that the sample average will fall into. But if the sample average is very different from the population average we hypothesize, or in other words, falls outside the confidence interval at a specific confidence level, then we may reject the null hypothesis with a specific level of confidence. For example, if we hypothesize a die is a fair one, then the expected value (or the population mean) we get from rolling the die once is 3.5. However, if we roll the die many times (e.g., 1000 times), and the mean of all the numbers we get is 2.003, then we may be very confident to say that the die is not a fair die (i.e., we will reject the null hypothesis that the die is a fair one).

More specifically, there are four steps of hypothesis testing. First, we need to have a statement about a population parameter evaluated with a test statistic. The parameter can be the population mean (e.g., the average number of basketball games Americans go to), proportion (e.g., the support rate for the president among all U.S. voters), or some other characteristics of the population, like the variance of heights among all first-grade children. Any statement concerning the population implies a null hypothesis and an alternative/research hypothesis concerning the population. The research hypothesis is the hypothesis we’re putting forward to test, which reflects the substantive hypothesis. It is also called ‘alternative hypothesis’, but some prefer ‘research’ to convey that this hypothesis comes from an understanding of the subject area and is often derived from theory. The research/alternative hypothesis is in contrast to the null hypothesis , which is the ‘default’ one that we wish to challenge. For example, if most people believe that on average individuals in the U.S. go to more than 1 basketball game annually, and we hypothesize that on average Americans go to fewer than 1 basketball game every year. Then we can set our hypothesis as the research hypothesis and the common belief as the null hypothesis.

Then, we collect a random sample, calculate the statistic from the sample, and compare the statistic with the null hypothesis and the alternative hypothesis. What kind of statistic is calculated depends on the kind of hypothesis we have and statistical methods we use in hypothesis testing. For example, if we are interested in the population mean, then we need to calculate the mean and standard error of the sample.

Then we determine the rejection of the null hypothesis or of failure to reject the null. If the statistic we observe differs significantly from what we hypothesize, then we will reject the null hypothesis. Otherwise, we fail to reject the null hypothesis. As stated earlier, in most cases what we get from the sample is different from what we state in the null hypothesis. But we reject the null hypothesis only when what we observe in the sample is really weird or significantly different from the null hypothesis. What counts as weird, depends on the rule we set before, as well as common practices in the field. In social science, we usually take a pretty strict standard concerning the rejection of the null hypothesis. In many cases, only when the sample mean is outside the 95 percent, 99 percent or 99.9 percent of the confidence interval, do we reject the null hypothesis. This means, that we would expect to get a result as ‘weird’ as ours less than 5% of the time, if the null hypothesis is true. Since the probability is so low (e.g., 0.05), we reject the null hypothesis.

We tend to be conservative and decide not to reject the null hypothesis. Thus, failing to reject the null hypothesis does not mean the hypothesis is true, but just means that we do not have enough evidence to reject it. Similarly, rejecting the null hypothesis does not mean we prove that it is false, but it only suggest that we have pretty strong evidence (that we feel confident about) that it is false, if all the assumptions of the sampling process and statistical methods we use are met (e.g., the sample is a random sample from the population).

5.6 Types of Hypothesis testing

In our lives, we may have different types of claims or hypotheses. It can be a hypothesis about the mean of the population (e.g., the support rate for the president, the average income, etc.) or the variance of the population (e.g., the variance of people’s income). Or it can be a hypothesis concerning the difference between two groups, or a hypothesis about the correlation between two variables. Statisticians have developed different tests for different types of hypotheses. In this section, we will introduce some basic methods of hypothesis testing.

5.6.1 Single Mean Hypothesis Testing

The single mean hypothesis testing concerns the mean of the population we care about. In many cases, we are interested in the population average. For example, in an election, we may want to know the support rate for a specific candidate, which is important for the development of campaign strategy. We may hypothesize that the support rate for the candidate is a specific number, and we can test the hypothesis with a random sample we get from the population. If the support rate for the candidate among the sample is very different from the rate stated in our hypothesis, we may reject the hypothesis. If the rate we get from the sample is not very different from the number stated in the hypothesis, we may fail to reject the hypothesis.

Here is how a single mean hypothesis works. As we have discussed, the central limit theorem suggests that the mean of a random sample with a sufficiently large sample size is normally distributed. The normal distribution of the sample mean is an example of sampling distribution , which is a theoretical distribution of all possible sample values for the statistic which we are interested. For example, when we have a sample (with the sample size of 1,000), we can calculate the sample mean. If we do the sampling multiple times (e.g., 1 million times), we get 1 million samples and 1 million sample means (each sample still has 1000 cases). From the central limit theorem, we know that the 1 million sample means follow a normal distribution. This distribution is the sampling distribution of the sample mean, for samples with the sample size of 1,000.

If we get a simple random sample (explained in the previous chapter), the expected value of the sampling distribution of the mean equals the population mean, and the variance of the sampling distribution is determined by the population variance and the size of the sample. When there is less variation among the population, or we have a larger sample, the variance of the sampling distribution is smaller, which means the sample mean is expected to be closer to the population mean.

Since the sampling distribution of the mean is a normal distribution, we can calculate the probability that the sample mean falls into a specific range given the hypothesis is true. If the sample mean we get is very different from the hypothesized population mean, we may think there is some problem with the null hypothesis and we may reject the null hypothesis. Statisticians have learned a lot about the normal distribution, and we know that if we randomly draw a number from a normal distribution, we have roughly 95 percent chance of getting a number within two (or more accurately, 1.96) standard deviations (which equals the square root of the variance) away from the expected value of the normal distribution. Since the sampling distribution of the sample mean is a normal distribution, the chance that the distance between the sample mean we observe and the expected value of the normal distribution is more than two standard deviations of the normal distribution is roughly 5 percent. Thus, if we observe a difference between the sample mean and the hypothesized population mean that is larger than twice the standard deviations of the sampling distribution, we may reject the null hypothesis at the significance level of 95 percent. It is weird (e.g., less than 5 percent chance) to get a sample mean as extreme as the one we have if the null hypothesis is true, so we decide to reject the null hypothesis. We can also set a stricter standard (e.g., a significance level of 99 percent, or 99.9 percent) and reject the null only when the difference between the sample mean and the population mean is more extreme.

5.6.2 Difference of Means Hypothesis Testing

Sometimes, we are not interested in the mean of a single group, but more interested in the difference of means between two groups. Testing the difference of means is especially useful when we aim to make causal inference with an experiment. It can also be useful when we compare two groups without aiming to make causal inference. For example, in an election, especially an election within the majority system, we may be interested in whether one candidate has a higher support rate than another candidate. In this case, we are dealing with a hypothesis concerning the difference of means. The hypothesis may take the forms of \(A>B\) , \(A<B\) , \(A=B\) , or \(A-B=c\) . If our research hypothesis is \(A>B\) , the null hypothesis would be \(A<B\) . Then we test the hypothesis with what we observe in the random sample. For example, if the null hypothesis is that Candidate A has a higher support rate than Candidate B and we get a random sample in which Candidate A has a support rate much lower than Candidate B, then we may reject the hypothesis.

Similar to the single mean test, testing the difference of means hypothesis requires the standard deviation of the sampling distribution. We observe the difference of means among the two samples (groups), and then compare the difference to the standard deviation of the sampling distribution. If the difference is much larger than (e.g. more than two times) the standard deviation, then we may reject the null hypothesis that there is no difference between the two groups and suggest that there is statistically significant difference between the two groups.

5.6.3 Regression Coefficients Hypothesis Testing

In other cases, we are not only interested in describing the population, but analyzing the correlations of different variables concerning the population. We may want to test whether two characteristics or variables within the population are correlated with each other. To test the correlations, we may put them into a regression model, which we will discuss more in later chapters on regressions. Here we can briefly explain how testing regression coefficients works.

A bivariate regression model is like this. \[Y=\beta_{0} + \beta_{1} X\] If there is no correlation between a variable X and another variance Y, then any change of X will not be correlated to any change of Y. Thus, \(\beta_{1}\) in the regression model should be 0, which implies the value of Y will not change with the value of X. When we do the hypothesis testing, the null hypothesis is that the coefficient is 0. Then we put the data we get from a random sample into the regression model. The model will provide us an estimate of the coefficient. Then we do statistic tests (e.g., \(t\) test which compares the difference with the standard deviation) to see whether the coefficient estimated differs significantly from 0. If it differs significantly from 0, we may reject the null hypothesis and suggest that there may be some correlation between X and Y.

5.6.4 Conclusions you can draw based on the type of test

Based on the type of tests we conduct, we may draw certain types of conclusions. For example, with the single mean test, we may reject the null hypothesis that the single mean is a specific number or within a specific interval. With the test of the difference of means, we may reject that the null hypothesis that there is no difference between two groups. Based on the test of the regression coefficient, we may reject the null hypothesis that there is no correlation between two variables. But as stated above, in many cases we may fail to reject the null hypothesis. This does not suggest the null hypothesis is true, but that we do not have strong enough evidence to reject it.

5.7 Applications

The single mean hypothesis testing is very straightforward in statistics and one of the basic tools in social science research. Once we get a random sample and get the sample mean and sample variance, we can easily estimate the confidence interval for the population mean, e.g., the public opinions on specific policies. Then we can compare the null hypothesis with the sample mean or the confidence interval, and decide whether to reject the null hypothesis or not. The main challenge in these descriptive works is not statistical theory or method per se, but the sampling process. As we emphasize earlier in this chapter, to make inference about the population with a sample, we need to first have a random sample from the population, otherwise it is like trying to make inference based on magicians’ tricks. But it is extremely difficult to get a random sample in real lives. Many factors, like the non-response rate, lack of access to specific groups, financial and time constraints, make it unlikely to get a perfect random sample from the population. Researchers have tried different techniques to get a representative and random sample from the population. To test whether a sampling method is reliable, one way is to compare the findings we get with the new technique with census data or others authoritative data. In an article by Ansolabehere and Schaffner, they compare three sampling techniques (over the Internet, by telephone with live interviews, and by mail) with other data sources ( Ansolabehere and Schaffner 2014 ) . Comparing the confidence interval estimated from the sample with validating source, provides us some inputs on whether the sampling process provides a good enough (though not perfect) sample.

Testing hypotheses concerning the difference of means and regression coefficients are even more widely used in political science. In most studies in political science nowadays, researchers care about correlations or causal relations between different variables. Different methods, like regression and experiments, have been developed to explore the relations between different variables in the world, e.g., democracy and economic growth ( Boix and Stokes 2003 ) , social network and welfare provision ( Tsai 2007 ) , media frame strategy and public opinion ( Bonilla and Mo 2018 ) , etc. In these works which aim to explore relations between different variables, we often have a null hypothesis that there are no correlations between two variables, and researchers aim to find strong evidence to reject the null hypothesis.

More specifically, in an experiment, the null hypothesis is often that there are no difference between the treatment group and the control group. If we find statistically significant difference in the means between the treatment and the control groups, we may reject the null hypothesis and suggest that there are some difference between the two groups. And since the two groups differ in getting the treatment or not, researchers may suggest that the treatment is the cause for the difference between the two groups. Here is an example for of an experiment. As some may know, the general support for aid to foreign countries is low among U.S. citizens. This is a descriptive finding. But what explains the low support? Some researchers ( Scotto et al. 2017 ) suggest, one reason is that people in the United States and other developed countries tend to overestimate the percent of government budget spend on overseas aid. To test this research hypothesis, they designed an experiment in the United States and Great Britain, in which one group of people (i.e., the control group) are provided the amount of dollars/pounds spent on foreign aid each year, and the other group (i.e., the treatment group) of people are provided the amount of money as well as the percentage of government budget on overseas aid. Then they ask the two groups of people about their opinions on foreign aid, and test the difference of means between the two groups. They find out that the group of people informed the percentage as well as the amount of overseas aid are less likely to think that the governments have spent "too much" on foreign aid. The difference is statistically significant at the confidence level of 99 percent, which enables them to reject the null hypothesis that there are no difference between the two groups and argue that overestimating the percentage of budget spent on aid is one cause for the low support for foreign aid.

In many cases, we cannot randomly assign people into different groups and change the treatment they get. Other techniques, like regression discontinuity designs (RDD), may be used for testing whether there are differences between groups that were similar before the treatment. For example, some researchers are interested in whether advantaged individuals may see the world through the lens of the poor after engagement with disadvantaged populations ( Mo and Conn 2018 ) . To do that, they surveyed top college graduates who were accepted into Teach For America program and those who were not. The former group of students had selection scores just above the threshold score and the later group had scores fall just short of the threshold score. Since the two groups differed only slightly in the scores, so it may be reasonable to suggest that the two groups were similarly to each other, and then we can see whether the experience in the program changes how the students view the world.

When we use regressions based on observational data instead of experiments, the idea of hypothesis testing is similar. Researchers often have a null hypothesis that the coefficient for a specific variable \(X\) is 0, which implies no correlations between the explanatory variable \(X\) and the outcome variable \(Y\) . If from the sample we find that the estimated coefficient differs significantly from 0, then we may decide to reject the null hypothesis and suggest that there is some correlation between \(X\) and \(Y\) . Whether the correlation implies causal relations, requires a closer look on the research design, but is not something hypothesis testing can tell. For example, a study explores the correlation between anti-Muslim hostility and the support for ISIS in Western Europe; on Twitter, ISIS followers who are in constituencies with high vote shares for far-right parties are more likely to support ISIS. But the correlation does not necessarily mean that anti-Muslim hostility causes the support, and thus the researcher looks closer into the tweets before and after major events related to ISIS to show that the support is indeed linked to the anti-Muslim hostility ( Mitts 2019 ) . Another example is from the field of American politics; a researcher tests whether people whose family members are arrested or incarcerated become mobilized to vote or not ( A. White 2019 ) .

5.8 “Is it weird?”

The idea of hypothesis testing can be formulated as some kind of "Is it weird" question. We start from a hypothesis concerning the population, then we observe the data from a sample, and ask ourselves, someone with training in statistical methods, "is it weird that we get a sample like this, if the null hypothesis is true?" If it is weird (AKA statistically unlikely), in the sense of statistical method, then we will reject the null hypothesis. Otherwise, we decide not to reject the null hypothesis, though that does not mean we prove or accept the null hypothesis.

5.9 Broader significance/use in political science

The Neyman-Pearson paradigm of hypothesis testing may be a bit obscure if we have not gone through the idea behind it. Students without a firm understanding of the statistical theory behind may make mistakes when interpreting the result of hypothesis testing. In recent years, there have been some heated discussions on whether we should continue this paradigm and use some jargon with this paradigm, e.g., \(p\) value, statistical significance, et al. ( Ziliak and McCloskey 2008 ; Amrhein, Greenland, and McShane 2019 ) . One concern with this paradigm is whether we should set a threshold value (e.g., the confidence level of 95 percent) to reject the null hypothesis and suggest there is statistically significant correlation once the threshold is met, since this may mislead someone without much training in statistical methods to think that we are more than 95 percent confident that the alternative hypothesis is true. 10 Another concern is that the paradigm of hypothesis testing may not tell us much about substantial relationship. When the sample size is very large, it may be very easy to reject the null hypothesis and suggest that one variable may have statistically significant correlation with another variable, but the effect/correlation may be trivial. 11 Besides, the paradigm may bring the problem of publication bias. Researchers and journal editors may tend to report findings that show statistically significant correlations, but not findings that do not show significant correlations. This may make our understanding of the world biased.

Other than that, for studies that do not involve random sampling, how the Neyman-Pearson paradigm of hypothesis testing works is not very clear. For example, when we have a sample which is not randomly drawn from the population, we cannot test a hypothesis concerning the population with the sample we have. And if we have access to information concerning every unit of the population (e.g., if the unit of interest is country, then in many cases we get access to the whole population as long as we learn specific information of all countries in the world), what hypothesis testing means and how the method we introduced above tells us about the population is less clear.

Other paradigms of hypothesis testing, like Bayesian approach, may provide more intuitive ways for us to understand and explain hypothesis testing and quantitative results to new learners and the general public. But these paradigms are not necessarily incompatible with the paradigm introduced in this chapter. The main issue is when we use this approach of hypothesis testing, we should be clear what each step and the results mean, and what we can and cannot say with our findings.

5.10 Conclusion

Hypothesis testing is a basic tool in contemporary political science studies, especially in quantitative political science. In the following chapters, we will introduce specific methods that explore the relations between different variables in our society. Hypothesis testing is the basic idea behind most of these methods. Understanding how hypothesis testing works will make it easier for us to understand experiments, large-N analysis and other quantitative methods.

5.11 Application Questions

Before an election, a political analyst argues that the support rate for a candidate is above 60 percent. With a sample from all voters (assuming the sample is a random one), researchers find that the 95 percent confident interval of the support rate for the candidate is between 56.2 percent and 58.9 percent. Does this provide strong evidence that the analyst is wrong? Why or why not?

In an experiment, 80 students are randomly divided into two groups. The first group of students are asked to read a news article on the negative effects of climate change on peasants in developing countries, and the other group of students are asked to read an article on a new electronic device. Then both groups of students are asked about their opinions on the role of the United States in fighting climate change. Researchers find compared to the second group, the first group of students show slightly higher support for the U.S. government to take more responsibility in fighting climate change, but the difference is not statistically significant at the level of 95 percent. Does it mean that reading the news article on climate change has no effects on students’ opinions on U.S.’s responsibility in fighting climate change? Why or why not?

A student is interested in the average amount of courses Northwestern undergrads took last quarter. In total, there were 8,231 Northwestern undergrads last quarter. With a random sample from all NU undergrads, whose sample size is 196, she learned that on average, a student took 4.0 courses last quarter. With the sample, she estimated that the population variance is 1.21. Can you calculate a 95 percent confidence interval for the average amount of courses Northwestern undergrads took last quarter?

5.12 Key Terms

Central Limit Theorem

confidence interval

null hypothesis

population parameter

point estimate

quantitative data

random sample

regression coefficient

research hypothesis

standard deviation

standard error

statistically significant difference

5.13 Answers to Application Questions

Yes. This provides strong evidence that the analyst is wrong. The confidence interval of the support rate among the population suggests that we are 95 confident that the support rate will not be higher than 58.9 percent or lower than 56.2 percent. Since the prediction of the analyst (higher than 60 percent) is well beyond the confidence interval we calculated from the random sample, we are pretty confident the prediction is wrong. But this is based on assumptions that the sample is a random one, respondents in the survey tell their true preference for the candidate, etc. If these assumptions are not met, the sample does not tell us anything about the population and we cannot tell whether the analyst is right or wrong.

Finding no statistically significant difference between the two groups makes us fail to reject the null hypothesis, which is that there are no difference between the two groups. However, it does not tell us that the null hypothesis is true. We can only say that we do not find enough evidence to show that there are difference between the two groups based on one study, but we cannot say the difference is exactly 0.

A 95 confidence interval is \(\bar{X}\pm 1.96 * (S_{\bar{X}})\) . The sample mean is 4.0. The estimated standard error of the sampling distribution equals the square root of the population variance divided by the square root of the sample size, which is \(\sqrt{1.21}/\sqrt{196}=0.0785\) . Thus the 95 confidence interval is \(\bar{X}\pm 1.96 * (S_{\bar{X}}) = 4.0\pm 1.96* 0.0785= [3.846, 4.154]\) .

This may be a bit confusing. But you may consider it this way. Let’s say, we hypothesize that the average height of all Northwestern undergrads is 5.7 feet. If we do the hypothesis testing as we will learn in this chapter, we will not reject the null hypothesis unless we get a random sample whose average height is much higher than 5.7 or much lower than 5.7 feet. In many cases, we may not reject the hypothesis. However, how likely is the hypothesis true, even if we do not reject it? Almost 0, because the exact average height can be any number slightly different from 5.7 feet, e.g., 5.700001 or 5.697382. As a result, the hypothesis is almost always wrong, but we do not always reject it. Thus, whether to reject the hypothesis or not does not tell us whether it is true or false. Nor does it tell us the probability that it is true. ↩︎

We have 1.96 in the formula because statisticians tell us if we randomly draw a number from a normal distribution, we have a 95 percent chance of getting a number no more than 1.96 standard errors above or below the mean of the distribution. ↩︎

We have 2.58 in the formula because if we randomly draw a number from a normal distribution, we have a 99 percent chance of getting a number no more than 2.58 standard errors above or below the mean of the distribution. ↩︎

As I have tried to explain, the level of significance is not the probability that the research hypothesis is true. ↩︎

For example, the finding that 1 million investment in education for one student may increase her annual income by 100 dollars after graduation may be statistically significant, but the effect is too small to tell any substantial relations. ↩︎

  • View sidebar

A Political Science Guide

For students, researchers, and others interested in doing the work of political science, formulating/extracting hypotheses.

Formulating hypotheses, which are defined as propositions set forth to explain a group of facts or phenomena, is a fundamental component to any research scholarship. Hypotheses lay out the central arguments that will be tested and either verified or rejected in the body of a paper. Papers may address multiple competing or supporting hypotheses in order to account for the full spectrum of explanations that could account for the phenomenon being studied. As such, hypotheses often include statements about a presumed impact of an independent variable on a dependent variable.

Hypotheses should not emanate from preconceived perceptions about a given relationship between variables, but rather should come about as a product of research. Thus, hypotheses should be formed after developing an understanding of the relevant literature to a given topic rather than before conducting research. Beginning research with a specific argument in mind can lead to discounting other evidence that could either run counter to this preconceived argument or could point to other potential explanations.

There are a number of different types of hypotheses utilized in political science research:

  • Null hypothesis: states that there is no relationship between two concepts
  • Correlative hypothesis: states that there is a relationship, between two or more concepts or variables, but doesn’t specify the nature of a relationship
  • Directional hypothesis: states the nature of the relationship between concepts or variables. These types of relationships can include positive, negative (inverse), high or low levels of influence, etc.
  • Causal hypothesis: states that one variable causes the other

A good hypothesis should be both correlative and directional and most hypotheses in political science research will also be causal, asserting the impact of an independent variable on a dependent variable.

There are a number of additional considerations that must be taken into account in order to make a hypothesis as strong as possible:

  • Hypotheses  must be falsifiable , that is able to be empirically tested. They cannot attribute causation to something like a supernatural entity whose existence can neither be proven nor denied.
  • Hypotheses must be internally consistent , that is that they must be proving what they claim to be proving and must not contain any logical or analytical contradiction
  • Hypotheses must have clearly defined outcomes (dependent variables) that are both dependent and vary based on the dependent variable.
  • Hypotheses must be general and should aim to explain as much as possible with as little as possible. As such, hypotheses should have as few exceptions as possible and should not rely on amorphous concepts like ‘national interest.’
  • Hypotheses must be empirical statements that are propositions about relationships that exist in the real world.
  • Hypotheses must be plausible (there must be a logical reason why they might be true) and should be specific (the relationship between variables must be expressed as explicitly as possible) and directional.
  • Fearon, James D. 1991. Counterfactuals and Hypothesis Testing in Political Science . World Politics 43 (2): 169-195.

Abstract : “Scholars in comparative politics and international relations routinely evaluate causal hypotheses by referring to counterfactual cases where a hypothesized causal factor is supposed to have been absent. The methodological status and the viability of this very common procedure are unclear and are worth examining. How does the strategy of counterfactual argument relate, if at all, to methods of hypothesis testing based on the comparison of actual cases, such as regression analysis or Mill’s Method of Difference? Are counterfactual thought experiments a viable means of assessing hypotheses about national and international outcomes, or are they methodologically invalid in principle? The paper addresses the first question in some detail and begins discussion of the second. Examples from work on the causes of World War I, the nonoccurrence of World War III, social revolutions, the breakdown of democratic regimes in Latin America, and the origins of fascism and corporatism in Europe illustrate the use, problems and potential of counterfactual argument in small-N-oriented political science research.” – Jstor.org

  • King, Gary, Robert Owen Keohane, and Sidney Verba. 1994. Designing social inquiry: scientific inference in qualitative research. Princeton, NJ: Princeton University Press.
  • Palazzolo, David and Dave Roberts. 2010. What is a Good Hypothesis? University of Richmond Writing Center.

Contributor: Harrison Polans

updated July 12, 2017 – MN

Share this:

' src=

  • Already have a WordPress.com account? Log in now.
  • Subscribe Subscribed
  • Copy shortlink
  • Report this content
  • View post in Reader
  • Manage subscriptions
  • Collapse this bar

University Libraries

Psci 3300: introduction to political research.

  • Library Accounts
  • Selecting a Topic for Research
  • From Topic to Research Question
  • From Question to Theories, Hypotheses, and Research Design
  • Annotated Bibliographies
  • The Literature Review
  • Search Strategies for Ann. Bibliographies & Lit. Reviews
  • Find PSCI Books for Ann. Bibliographies & Lit. Reviews
  • Databases & Electronic Resources for Your Lit. Review
  • Methods, Data Analysis, Results, Limitations, and Conclusion
  • Finding Data and Statistics for the Data Analysis
  • Citing Sources for the Reference Page

Profile Photo

Need Help with Basic and Advance Research?

Visit the Basic and Advanced Library Research Guide to learn more about the library. 

If you select "no," please send me an email so I can improve this guide.

Hypothesis in Political Science

"A generalization predicting that a relationship exists between variables. Many generalizations about politics are a sort of folklore. Others proceed from earlier work carried out by social scientists. Within the social sciences most statements about behaviour relate to large groups of people. Hence, testing any hypothesis in the field of political science will involve statistical method. It will be dealing with probabilities.

To test a hypothesis one must pose a null hypothesis. If we wanted to test the validity of the common generalization, 'manual workers tend to vote for the Labour Party' we would begin by assuming the statement was untrue. The investigation would require a sample survey in which manual workers were identified and questions put to them. It would need to be done in several constituencies in different parts of the country. Having collated the data we would use the evidence to test the null hypothesis, employing statistical techniques to assess the probability of acquiring such data if the null hypothesis were correct. These techniques are known as 'significance tests'. They estimate the probability that the rejection of a null hypothesis is a mistake. If the statistical tests indicates that the odds against it being a mistake are 1000 to one, then this is stated as a '.001 level of significance'.

The fact that the research showed that it was highly likely that manual workers 'tend' to vote for the Labour vote would not satisfy most political scientists. They also want to understand those who did not. Consequently much more work would need to be done to refine the hypothesis and define the tendency with more accuracy. Whatever the case, a hypothesis in the social sciences about a group or socio-demographic category can never tell us about the behaviour of an individual in that group or category."

Hypothesis. (1999). In F. Bealey. The Blackwell Dictionary of Political Science , Oxford, United Kingdom: Blackwell Publishers.

What a Quantitative Research Design?

Quantitative research studies produce results that can be used to describe or note numerical changes in measurable characteristics of a population of interest; generalize to other, similar situations; provide explanations of predictions; and explain causal relationships. The fundamental philosophy underlying quantitative research is known as positivism, which is based on the scientific method of research. Measurement is necessary if the scientific method is to be used. The scientific method involves an empirical or theoretical basis for the investigation of populations and samples. Hypotheses must be formulated, and observable and measurable data must be gathered. Appropriate mathematical procedures must be used for the statistical analyses required for hypothesis testing.

Quantitative methods depend on the design of the study (experimental, quasi-experimental, non-experimental). Study design takes into account all those elements that surround the plan for the investigation, such as research question or problem statement, research objectives, operational definitions, scope of inferences to be made, assumptions and limitations of the study, independent and dependent variables, treatment and controls, instrumentation, systematic data collection actions, statistical analysis, time lines, and reporting procedures. The elements of a research study and experimental, quasi-experimental, and nonexperimental designs are discussed here.

Elements of Quantitative Design

Problem statement.

First, an empirical or theoretical basis for the research problem should be established. This basis may emanate from personal experiences or established theory relevant to the study. From this basis, the researcher may formulate a research question or problem statement.

Operational Definitions

Operational definitions describe the meaning of specific terms used in a study. They specify the procedures or operations to be followed in producing or measuring complex constructs that hold different meanings for different people. For example, intelligence may be defined for research purposes by scores on the Stanford-Binet Intelligence Scale.

Population and Sample

Quantitative methods include the target group (population) to which the researcher wishes to generalize and the group from which data are collected (sample). Early in the planning phase, the researcher should determine the scope of inference for results of the study. The scope of inference pertains to populations of interest, procedures used to select the sample(s), method for assigning subjects to groups, and the type of statistical analysis to be conducted.

Formulation of Hypotheses

Complex questions to compare responses of two or more groups or show relationships between  two or more variables are best answered by hypothesis testing. A hypothesis is a statement of the researcher's expectations about a relationship between variables.

Hypothesis Testing

Statements of hypotheses may be written in the alternative or null form. A directional alternative hypothesis states the researcher's predicted direction of change, difference between two or more sample means, or relationship among variables. An example of a directional alternative hypothesis is as follows:

Third-grade students who use reading comprehension strategies will score higher on the State Achievement Test than their counterparts who do not use reading comprehension strategies.

A nondirectional alternative hypothesis states the researcher's predictions without giving the direction of the difference. For example:

There will be a difference in the scores on the State Achievement Test between third-grade students who use reading comprehension strategies and those who do not.

Stated in the null form, hypotheses can be tested for statistically significant differences between groups on the dependent variable(s) or statistically significant relationships between and among variables. The null hypothesis uses the form of “no difference” or “no relationship.” Following is an example of a null hypothesis:

There will be no difference in the scores on the State Achievement Test between third-grade students who use reading comprehension strategies and those who do not.

It is important that hypotheses to be tested are stated in the null form because the interpretation of the results of inferential statistics is based on probability. Testing the null hypothesis allows researchers to test whether differences in observed scores are real, or due to chance or error; thus, the null hypothesis can be rejected or retained.

Organization and Preparation of Data for Analysis

Survey forms, inventories, tests, and other data collection instruments returned by participants should be screened prior to the analysis. John Tukey suggested that exploratory data analysis be conducted using graphical techniques such as plots and data summaries in order to take a preliminary look at the data. Exploratory analysis provides insight into the underlying structure of the data. The existence of missing cases, outliers, data entry errors, unexpected or interesting patterns in the data, and whether or not assumptions of the planned analysis are met can be checked with exploratory procedures.

Inferential Statistical Tests

Important considerations for the choice of a statistical test for a particular study are (a) type of research questions to be answered or hypotheses to be tested; (b) number of independent and dependent variables; (c) number of covariates; (d) scale of the measurement instrument(s) (nominal, ordinal, interval, ratio); and (e) type of distribution (normal or non-normal). Examples of statistical procedures commonly used in educational research are  t  test for independent samples, analysis of variance, analysis of covariance, multivariate procedures, Pearson product-moment correlation, Mann–Whitney  U  test, Kruskal–Wallis test, and Friedman's chi-square test.

Results and Conclusions

The level of statistical significance that the researcher sets for a study is closely related to hypothesis testing. This is called the alpha level. It is the level of probability that indicates the maximum risk a researcher is willing to take that observed differences are due to chance. The alpha level may be set at .01, meaning that 1 out of 100 times the results will be due to chance; more commonly, the alpha level is set at .05, meaning that 5 out of 100 times observed results will be due to chance. Alpha levels are often depicted on the normal curve as the critical region, and the researcher must reject the null hypothesis if the data fall into the predetermined critical region. When this occurs, the researcher must conclude that the findings are statistically significant. If the  researcher rejects a true null hypothesis (there is, in fact, no difference between the means), a Type I error has occurred. Essentially, the researcher is saying there is a difference when there is none. On the other hand, if a researcher fails to reject a false null (there is, in fact, a difference), a Type II error has occurred. In this case, the researcher is saying there is no difference when a difference exists. The power in hypothesis testing is the probability of correctly rejecting a false null hypothesis. The cost of committing a Type I or Type II error rests with the consequences of the decisions made as a result of the test. Tests of statistical significance provide information on whether to reject or fail to reject the null hypothesis; however, an effect size ( R 2 , eta 2 , phi, or Cohen's  d ) should be calculated to identify the strength of the conclusions about differences in means or relationships among variables.

Salkind, Neil J. 2010.  Encyclopedia of Research Design . Thousand Oaks, CA: SAGE Publications, Inc. doi: 10.4135/9781412961288 .

Some Terms in Statistics that You Should Know

Bivariate Regression

Central Tendacy, Measures of

Chi-Square Test

Cohen's d Statistic

Cohen's f Statistic

Correspondence Analysis

Cross-Sectional Design

Descriptive Statistics

Effect Size, Measure of

Eta-Squared

Factor Loadings

False Positive

Frequency Tables

Alternative Hypotheses

Null Hypothesis

Krippendorff's Alpha

Multiple Regression

Multivariate Analysis of Variance (MANOVA)

Multivariate Normal Distribution

Partial Eta-Squared

Percentile Rank

Random Error

Reliability 

Regression Discontinuity

Regression to the Mean

Standard Deviation

Significance, Statistical

Trimmed Mean

Variability, Measure of

Is the term you are looking for not here? Review the Encyclopedia of Research Design below. 

SAGE Research Methods is a research methods tool created to help researchers, faculty and students with their research projects. SAGE Research Methods links over 175,000 pages of SAGE’s renowned book, journal and reference content. Researchers can explore methods concepts to help them design research projects, understand particular methods or identify a new method, conduct their research, and write up their findings. Since SAGE Research Methods focuses on methodology rather than disciplines, it can be used across the social sciences, health sciences, and more. Subject coverage includes sociology, health, criminology, education, anthropology, psychology, business, political science, history, economics, among others.

Sage Research Methods has a feature called a Methods Map that can help you explore different types of Research Designs .

hypothesis in political analysis

You can also explore Cases to see real research using your selected research method to learn how other authors are writing up their findings.

hypothesis in political analysis

  • << Previous: From Topic to Research Question
  • Next: Annotated Bibliographies >>

Copyright © University of North Texas. Some rights reserved. Except where otherwise indicated, the content of this library guide is made available under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license . Suggested citation for citing this guide when adapting it:

This work is a derivative of "PSCI 3300: Introduction to Political Research" , created by [author name if apparent] and © University of North Texas, used under CC BY-NC 4.0 International .

  • Last Updated: Nov 13, 2023 4:28 PM
  • URL: https://guides.library.unt.edu/PSCI3300

Additional Links

UNT: Apply now UNT: Schedule a tour UNT: Get more info about the University of North Texas

UNT: Disclaimer | UNT: AA/EOE/ADA | UNT: Privacy | UNT: Electronic Accessibility | UNT: Required Links | UNT: UNT Home

  • Search Menu
  • Browse content in Arts and Humanities
  • Browse content in Architecture
  • Theory of Architecture
  • Browse content in History
  • History of Education
  • Regional and National History
  • Browse content in Philosophy
  • Feminist Philosophy
  • Philosophy of Language
  • Browse content in Religion
  • Religious Studies
  • Browse content in Society and Culture
  • Cultural Studies
  • Ethical Issues and Debates
  • Technology and Society
  • Browse content in Law
  • Comparative Law
  • Criminal Law
  • Environment and Energy Law
  • Human Rights and Immigration
  • Browse content in International Law
  • Public International Law
  • Legal System and Practice
  • Medical and Healthcare Law
  • Browse content in Medicine and Health
  • Browse content in Public Health and Epidemiology
  • Public Health
  • Browse content in Science and Mathematics
  • Browse content in Earth Sciences and Geography
  • Environmental Geography
  • Urban Geography
  • Environmental Science
  • Browse content in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Browse content in Business and Management
  • Business Ethics
  • Business History
  • Corporate Social Responsibility
  • Human Resource Management
  • Industry Studies
  • Information and Communication Technologies
  • Knowledge Management
  • Criminology and Criminal Justice
  • Browse content in Economics
  • Behavioural Economics and Neuroeconomics
  • Economic Systems
  • Economic History
  • Economic Development and Growth
  • Financial Markets
  • History of Economic Thought
  • Public Economics
  • Browse content in Education
  • Educational Strategies and Policy
  • Higher and Further Education
  • Philosophy and Theory of Education
  • Browse content in Human Geography
  • Political Geography
  • Browse content in Politics
  • Asian Politics
  • Comparative Politics
  • Conflict Politics
  • Environmental Politics
  • European Union
  • Indian Politics
  • International Relations
  • Middle Eastern Politics
  • Political Sociology
  • Political Economy
  • Political Theory
  • Public Policy
  • Russian Politics
  • Security Studies
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • Asian Studies
  • Browse content in Social Work
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Developmental and Physical Disabilities Social Work
  • Direct Practice and Clinical Social Work
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Urban and Rural Studies
  • Reviews and Awards
  • Journals on Oxford Academic
  • Books on Oxford Academic

Public policy analysis

  • < Previous chapter
  • Next chapter >

Eleven Research and working hypotheses

  • Published: March 2011
  • Cite Icon Cite
  • Permissions Icon Permissions

This chapter constructs hypotheses that underlie all of the processes involved in policy analysis, irrespective of whether its purpose is explanatory, evaluative, or predicative. It recapitulates the main analytical dimensions previously identified for the definition of the six products of a public policy found in political-administrative and social reality. It presents three possible access points for the formulation of working hypotheses to be tested in the course of an empirical analysis of the explanatory factors behind the six policy products. It makes direct reference to the logic of the analysis model and to the basic elements on which the public policy approach, which is inspired by actor-centered institutionalism, is based. It attempts to describe the six products of a public policy.

Signed in as

Institutional accounts.

  • GoogleCrawler [DO NOT DELETE]
  • Google Scholar Indexing

Personal account

  • Sign in with email/username & password
  • Get email alerts
  • Save searches
  • Purchase content
  • Activate your purchase/trial code

Institutional access

  • Sign in with a library card Sign in with username/password Recommend to your librarian
  • Institutional account management
  • Get help with access

Access to content on Oxford Academic is often provided through institutional subscriptions and purchases. If you are a member of an institution with an active account, you may be able to access content in one of the following ways:

IP based access

Typically, access is provided across an institutional network to a range of IP addresses. This authentication occurs automatically, and it is not possible to sign out of an IP authenticated account.

Sign in through your institution

Choose this option to get remote access when outside your institution. Shibboleth/Open Athens technology is used to provide single sign-on between your institution’s website and Oxford Academic.

  • Click Sign in through your institution.
  • Select your institution from the list provided, which will take you to your institution's website to sign in.
  • When on the institution site, please use the credentials provided by your institution. Do not use an Oxford Academic personal account.
  • Following successful sign in, you will be returned to Oxford Academic.

If your institution is not listed or you cannot sign in to your institution’s website, please contact your librarian or administrator.

Sign in with a library card

Enter your library card number to sign in. If you cannot sign in, please contact your librarian.

Society Members

Society member access to a journal is achieved in one of the following ways:

Sign in through society site

Many societies offer single sign-on between the society website and Oxford Academic. If you see ‘Sign in through society site’ in the sign in pane within a journal:

  • Click Sign in through society site.
  • When on the society site, please use the credentials provided by that society. Do not use an Oxford Academic personal account.

If you do not have a society account or have forgotten your username or password, please contact your society.

Sign in using a personal account

Some societies use Oxford Academic personal accounts to provide access to their members. See below.

A personal account can be used to get email alerts, save searches, purchase content, and activate subscriptions.

Some societies use Oxford Academic personal accounts to provide access to their members.

Viewing your signed in accounts

Click the account icon in the top right to:

  • View your signed in personal account and access account management features.
  • View the institutional accounts that are providing access.

Signed in but can't access content

Oxford Academic is home to a wide variety of products. The institutional subscription may not cover the content that you are trying to access. If you believe you should have access to that content, please contact your librarian.

For librarians and administrators, your personal account also provides access to institutional account management. Here you will find options to view and activate subscriptions, manage institutional settings and access options, access usage statistics, and more.

Our books are available by subscription or purchase to libraries and institutions.

  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Rights and permissions
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

1. Introduction: The Nature of Politics and Political Analysis

This chapter discusses the nature of politics and political analysis. It first defines the nature of politics and explains what constitutes ‘the political’ before asking whether politics is an inevitable feature of all human societies. It then considers the boundary problems inherent in analysing the political and whether politics should be defined in narrow terms, in the context of the state, or whether it is better defined more broadly by encompassing other social institutions. It also addresses the question of whether politics involves consensus among communities, rather than violent conflict and war. The chapter goes on to describe empirical, normative, and semantic forms of political analysis as well as the deductive and inductive methods of the study of politics. Finally, it examines whether politics can be a science.

  • Related Documents

This introductory chapter examines the nature of politics and the political, and more specifically whether politics is an inevitable feature of all human societies. It begins by addressing questions useful when asking about ‘who gets what, when, how?’; for example, why those taking decisions are able to enforce them. The discussion proceeds by focusing on the boundary problems inherent in an analysis of the nature of the political. One such problem is whether politics is equivalent to consensus and cooperation, so that it does not exist in the event of conflict and war. The chapter then explores different forms of political analysis — the empirical, the normative, and the semantic—as well as deductive and inductive methods of studying politics. Finally, it asks whether politics can ever be a science to rival subjects in the natural sciences.

Introduction: The Nature of Politics and Political Analysis

This introductory chapter examines the nature of politics and the political, and more specifically whether politics is an inevitable feature of all human societies. It begins by addressing questions useful when asking about ‘who gets what, when, how?’; for example, why those taking decisions are able to enforce them. The discussion proceeds by focusing on the boundary problems inherent in an analysis of the nature of the political. One such problem is whether politics is equivalent to consensus and cooperation, so that it does not exist in the event of conflict and war. The chapter then explores different forms of political analysis — the empirical, the normative, and the semantic — as well as deductive and inductive methods of studying politics. Finally, it asks whether politics can ever be a science to rival subjects in the natural sciences.

Devbans, Caste, Gender and the State:Political Ecology of a Sacred Grove of Himachal Pradesh

Devbans are the parts of forest territory that have been traditionally conserved in reverence to the local deities in various parts of Himachal Pradesh. Today, they stand at the intersection of tradition and modernity. This paper endeavours to study the political ecology of a Devban in the contemporary times by looking at the power dynamics between various stakeholders with respect to their relative decision making power in the realm of managing the Devban of Parashar Rishi Devta. It further looks at howcertain political and administrative factors can contribute towards the growth or even decline of any Devban. The study argues that in the contemporary times when the capitalist doctrines have infiltrated every sphere of the social institutions including the religion, Devbans have a greater probability of survival when both the state and the community have shared conservatory idealsand powers to preserve them.

Politicheskaia organizatsiia SShA. Obshchestvennye instituty i ikh vzaimodeistvie s gosudarstvom (The Political Organization of the USA: Social Institutions and their Interaction with the State). by Galina G. Boichenko. (Minsk: V. I. Lenin Byelorussian State University Press, 1970. Pp. 539.)

The final stage of the minority revolution in the united states. upward mobility of minorities to the corridors of power.

This article is the third and final in a series dealing with the birth of a new political elite in the United States, the minority elite. In previous articles, the mechanism of its appearance was analyzed, as well as its ideology, goals, program and values. The black movement, as the most co-organized of all protest movements, is entering the final phase of its development, being engaged in the placement of its representatives in state and federal governments, political parties and other social institutions. The women’s movement has recently been taken over by ethnic movements, primarily blacks, and has become their vanguard. This article describes new social elevators for the promotion of minority representatives into the corridors of power. The logic of promoting people of their own race, gender and nationality to the highest branches of power began to prevail over other criteria for recruiting personnel. During the 2020 election campaign, a new mechanism for promoting minorities in all branches of government was formed. It is based on numerous violations of local and federal electoral legislation. The mechanism of pressure on the US electoral system is analyzed using the example of the state of Georgia and the activities of politician Stacey Abrams. The article describes Abrams’ strategy to create a network of NGOs that are focused on one mission - to arrange for the political shift of the state in the elections. These organizations circumvented existing laws, making the state of Georgia the record holder for electoral irregularities and lawsuits. The article shows that Abrams’ struggle with the electoral laws of her state is based on the political myth of the voter suppression of minorities. The author identifies a number of common characteristics of the new elite. The minority elite does not show any interest in social reconciliation and overcoming racial conflict, but rather makes efforts to incite the latter, to attract the government to its side and increase its role in establishing “social justice” through racial quotas and infringement of the rights of those social strata that it has appointed bearers of systematic racism in society. As the colored elite increases and the government’s role in resolving racial conflicts grows, the minority movement is gradually condemned, it ceases to be a true grassroots movement and turns into astroturfing.

The Political Analysis on Chinese Governments’ Work Reports during the Transitional Period: The Cases of the State Council Work Reports from 1993 to 2007

The state owes us: social exclusion and collective actions of china’s bereaved parents.

China’s one-child policy has come to an end, but its ramifications continue. This article explores the issue of bereaved parents who lost their only child as an unintended consequence of the one-child policy and provides a social and political analysis of their suffering and resistance. Chinese social institutions and cultural norms traditionally discriminate against bereaved parents, excluding them from social support and social relations. As a result, bereaved parents often cut ties with their extended families, networks, and other social relations; they are stigmatized, marginalized, and isolated. However, through the internet and social media, these parents have begun forging new social relations and associations among themselves. They mobilize collective protests fueled by shared grief and the moral appeal of their patriotic sacrifice for the state, demanding official government recognition and compensation for their losses. This article treats bereavement as a subject of analysis and shows how state power shapes or even damages people’s everyday lives in a specific family-centered cultural context.

MANHAJ IJTIHAD PADA ASPEK POLITIK

This article discusses the application of the manhaj or the ijtihad method to the political aspects of the state. The focus of the problem is, can ijtihad be applied to the political aspects of the state, not only to the aspects of fiqh or religious law? Some Muslims still understand that the position of ijtihad is limited to the aspect of fiqh alone. for example, matters of the law of religious observances, marriage and other social institutions). During the period of the Prophet Muhammad, when he moved to Medina, the Prophet made a political commitment as a nation and state involving various ethnic, ethnic and religious layers in Medina. This political commitment is called Shahifah Madinah or Watsiqah Madinah (Medina charter), which consists of 47 articles as the basis for living together with the nation and state. Until now, in a very modern world, the Medina Charter is still considered the most modern political monumental ijtihad ever practiced by the Prophet Muhammad. Based on the above thought background, ijtihad can be used as a method of approach in formulating the concepts of state politics.

INSTITUTIONAL CONDITIONALITY OF THE ENDOGENIZATION OF ECONOMIC DEVELOPMENT

At the end of the XXth century, in the countries of the former socialist camp, the capitalist reforms of the fundamental content of the principles of ensuring the right to liberty were carried out, including the economic one, that was realized in accordance with the existence and protection of the rights for a private property. This choice was made because there was a fundamental desire to overcome the dependence on the leadership of the political sovereign, which, in fact, ensured the receipt of «rents» through the implementation of a centralized management system on a planning and distribution basis, restraining the desire to gain freedom by providing opportunities for self-realization. In place of the ideology of the political «sovereign», the new ways of human activity coordination had to come, based on the principles of the ideology of liberalism. At the initial stages of reforms, the problems of institutionalization of activity of both the state and business, remained out of attention, since freedom was «above all».Capitalism, that develops without control and restrictions, is guided by a single criterion - by the private interest of the strongest and remains hostile to any form of public interest of the majority. At the same time, the development of the social institutions requires the formation of an institutional space for the implementation of the civic initiatives and the protection of freedoms from the manifestations of power and the weakly controlled monopoly organized business in the limitation of the civic activity. For this reason, in the process of development of society, the state should establish the long-term social mechanisms not only to consolidate the new spirit of capitalization and further economic growth, but also development through the social mechanisms of the social space that will not break, but will stabilize the society on the basis of the social values.

Educational role of the clergy according to the teachings of the UGCC

The theme of the Church's influence on the political life of the state is one that is constantly focused on the attention of the scientific community, the media and its own politics. The current legislation in Ukraine clearly separates the church from the state. However, both the church and the state are important social institutions that can not but influence one another. The official position of the state in the relevant relations is outlined again by the law. Each of the confessions of the country, through democratic freedoms and within them, is able to implement its own concept of relations with the state. Moreover, the positions of even the largest churches in Ukraine here are significantly different and significantly affect the social realities, which determines the relevance of the topic.

Export Citation Format

Share document.

An Introduction to Political and Social Data Analysis Using R

Chapter 9 hypothesis testing, 9.1 getting started.

In this chapter, we extend the concepts used in Chapters 7 & 8 to focus more squarely on making statistical inferences through the process of hypothesis testing. The focus here is on taking the abstract ideas that are the foundation for hypothesis testing and apply them to some concrete example. The only thing you need to load in order to follow among is the anes20.rda data set.

9.2 The Logic of Hypothesis Testing

When engaged in the process of of hypothesis testing, we are essentially asking “what is the probability that the statistic found in the sample could have come from a population in which it is equal to some other, specified, value?” As discussed in chapter 8, social scientists want to know something about a population value of interest but frequently are only able to work with sample data. We generally think the sample data represent the population fairly well but we know that there will be some sampling error. In Chapter 8, we took this into account using confidence intervals around sample statistics. In this chapter, we apply some of the same logic to determine if the sample statistic is different enough from a hypothesize population parameter that we can be confident it did not occur just due to sampling error. (Come back and reread this paragraph when you are done with this chapter; it will make a lot more sense then).

We generally consider two different types of hypotheses, the null and alternative (or research) hypotheses?

Null Hypothesis (H 0 ): This is the hypothesis that is tested directly. This hypothesis usually states that the sample finding ( \(\bar{x}\) ) is the same as some hypothetical population parameter ( \(\mu\) ), even though it may appear that they are different. We usually hope to reject the null hypothesis (H 0 ). I know this sounds strange, but it will make more sense to you soon.

Alternative (research) Hypothesis (H 1 ): This is a substantive hypothesis that we think is true. Usually, the alternative hypothesis posits that a sample statistic \(\bar{x}\) does not equal some specified population parameter ( \(\mu\) ). We don’t actually test this hypothesis directly. Rather, we try to build a case for it by showing that the sample statistic is different enough from the population value hypothesize in H 0 that it is unlikely that the null hypothesis is true.

We can use what we know about the z-distribution to test the validity of the null hypothesis by stating and testing hypotheses about specific values of population parameters. Consider the following problem:

An analyst in the Human Resources department for a large metropolitan county is asked to evaluate the impact of a new method of documenting sick leave among county employees. The new policy is intended to cut down on the number of sick leave hours taken by workers. Last year, the average number of hours of sick leave taken by workers was 59.2 (about 7.4 days), a level determined to be too high. To evaluate if the new policy is working, the analyst took a sample of 100 workers at the end of one year under the new rules and found that a sample mean of 54.8 hours (about 6.8 days), and a standard deviation of 15.38. The question is, does this sample mean represent a real change in sick leave use, or or does it only refect sampling error? To answer this, we need to determine how likely is is to fet a sample mean if 54.8 from a population in which \(\mu=59.2\) .

9.2.1 Using Confidence Intervals

As alluded to at the end of Chapter 8, you already know one way to test hypotheses about population parameters by using confidence intervals In this case, we can calculate the lower- and upper-limits of a 95% confidence interval around the sample mean (54.8) to see if it includes \(\mu\) (59.2):

\[c.i._{.95}=54.8\pm {1.96(S_{\bar{x}})}\] \[S_{\bar{x}}=\frac{15.38}{\sqrt{100}}=1.538\] \[c.i._{.95}=54.8 \pm 1.96(1.538)\] \[c.i._{.95}=54.8 \pm 3.01\] \[51.78\le \mu \le57.81\]

From this sample of 100 employees, after one year of the new policy in place we estimate that there is a 95% chance that \(\mu\) is between 51.78 and 57.81, and the probability that \(\mu\) is outside this range is less than .05. Based on this alone we can say there is less than a 5% chance that the number of hours of sick leave taken is the the same that it was in the previous year. In other words, there is a fairly high probability that fewer sick leave hours were used in the year after that policy change than in the previous year.

9.2.2 Direct Hypothesis Tests

We can be a bit more direct and precise by setting this up as a hypothesis test and then calculating the probability that the null hypothesis is true. First, the null hypothesis.

\[H_{0}:\mu=59.2\]

Note that this is saying is that there is no real difference between last year’s mean number of sick days ( \(\mu\) ) and the sample we’ve drawn from this year ( \(\bar{x}\) ). Even though the sample mean looks different from 59.2, the true population mean is 59.2 and the sample statistic is just a result of random sampling error. After all, if the population mean is equal to 59.2, any sample drawn from that population will produce a mean that is different from 59.2, due to sampling error. In other words, H 0 , is saying that the new policy had no effect, even though the sample mean suggests otherwise.

Because the county analyst is interested in whether the new policy reduced the use of sick leave hours, the alternative hypothesis is:

\[H_{1}:\mu < 59.2\]

Here, we are saying that the sample statistic is different enough from the hypothesized population value (59.2) that it is unlikely to be the result of random chance, and the population value is less than 59.2.

Note here that we are not testing whether the number of sick days is equal to 54.8 (the sample mean). Instead, we are testing whether the average hours of sick leave taken this year is lower than the number of sick days taken last year. The alternative hypothesis reflects what we really think is happening; it is what we’re really interested in. However, we cannot test the alternative hypotheses directly. Instead, we examine the null hypothesis as a way of gathering evidence to support the alternative.

So, the question we need to answer in order to test the null hypothesis is, how likely is it that a sample mean of this magnitude (54.8) could be drawn from a population in which \(\mu\text{= 59.2}\) ? We know that we would get lots of different mean outcomes if we took repeated samples from this population. We also know that most of them would be clustered near \(\mu\) and a few would be relatively far away from \(\mu\) at both ends of the distribution. All we have to do is estimate the probability of getting a sample mean of 54.8 from a population in which \(\mu\text{= 59.2}\) If the probability of drawing \(\bar{x}\) from \(\mu\) is small enough, then we can reject H 0 .

How do we assess this probability? By using what we know about the sampling distributions. Check out the figure below, which illustrates the logic of hypothesis testing using a theoretical distribution:

The Logic of Hypothesis Testing

Figure 9.1: The Logic of Hypothesis Testing

Suppose we draw a sample mean equal to -1.96 from a population in which \(\mu=0\) and the standard error equals 1 (this, of course, is a normal distribution). We can calculate the probability of \(\bar{x}\le0\) by estimating the area under the curve to the left of -1.96. The area on the tail of the distribution used for hypothesis testing is referred to as the \(\alpha\) (alpha) area. We know that this \(\alpha\) area is equal to .025 (How do we know this? Check out the discussion of the z-distribution from the earlier chapters), so we can say that the probability of drawing a sample mean less than or equal to -1.96 from a population in which \(\mu=0\) is about .025. What does this mean in terms of H 0 ? It means that probability that \(\mu=0\) is about .025, which is pretty low, so we reject the null hypothesis and conclude that \(\mu<0\) . The smaller the p-value, the less likely it is the H 0 is true.

Critical Values. A common and fairly quick way to use the z-score in hypothesis testing is by comparing it the to critical value (c.v.) for z. The c.v. is the z-score associated with the probability level required to reject the null hypothesis. To determine the critical value of z, we need to determine what the probability threshold is for rejecting the null hypothesis. It is fairly standard to consider any probability level lower than .05 sufficient for rejecting the null hypothesis in the social sciences. This probability level is also known as the significance level .

So, typically, the critical value is the z-score that gives us .05 as the area on the (left in this case) tail of the normal distribution. Looking at the z-score table from Chapter 6, or using the qnorm function in R, we see that this is z = -1.645. The area beyond the critical value is referred to as the critical region, and is sometimes also called the area of rejection: if the z-score fall in this region, the null hypothesis is rejected.

Once we have the \(c.v.\) we can calculate the z-score for the difference between \(\bar{x}\) and \(\mu\) . If \(|z| > |z_{cv}|\) , then we reject the null hypothesis:

So let’s get back to the sick leave example.

  • First, what’s the critical value? -1.65 (make sure you understand why this is the value)
  • What is the obtained value of z?

\[z=\frac{\bar{x}-\mu}{S_{\bar{x}}} = \frac{54.8-59.2}{1.538} = \frac{-4.4}{1.538}= -2.86\]

  • If the |z| is greater than the |c.v.|, then reject H 0 . If the |z| is less than the critical value, then fail to reject H 0

In this case z (-2.86) is of much greater (absolute) magnitude than c.v. (-1.65), so we reject the null hypothesis and conclude that \(\mu\) is probably < 59.2. By rejecting the null hypothesis we build a case for the alternative hypothesis, though we never test the alternative directly. One way of thinking about this is that there is less than a .05 probability that H 0 is true. We are saying that this probability is small enough that we are confident in rejecting H 0 .

We can be a bit more precise about the level of confidence in rejecting the null hypothesis (the level of significance) by estimating the alpha area to the left of z=-2.86:

This alpha area (or p-value ) is close to zero, meaning that there is little chance that the there was no change in sick leave usage. Check out Figure 9.2 as an illustration of how unlikely it is to get a sample mean of 54.8 (thin solid line) from a population in which \(\mu=59.2\) , (thick solid line) based on our sample statistics. Remember, the area to the left of the critical value (dashed line) is the critical region, equal to .05 of the the area under the curve in this case, and the sample mean is far to the left of this point.

One useful way to think about this p-value is that if we took 1000 samples of 100 workers from a population in which \(\mu=59.2\) and calculated the mean mean hours of sick leave taken for each sample, only two samples would give you a result equal to or less than than 54.8 simply due to sampling error. In other words, there is a 2/1000 chance that the sample mean was the result of random variation instead of representing a real difference form the hypothesized value.

An Illustration of Key Concepts in Hypothesis Testing

Figure 9.2: An Illustration of Key Concepts in Hypothesis Testing

9.2.3 One-tail or Two?

Note that we were explicitly testing a one-tailed hypothesis in the example above. We were saying that we expect a reduction in the number of sick days due to the new policy. But suppose someone wanted to argue that there was a loophole in the new policy that might make it easier for people to take sick days. These sort of unintended consequences almost always occur with new policies. Given that it could go either way ( \(\mu\) could be higher or lower than 59.2), we might want to test a two-tailed hypothesis, that the new policy could create a difference in sick day use–maybe positive, maybe negative.

\(H_{1}:\mu \ne 59.2\)

The process for testing two-tailed hypotheses is exactly the same, except that we use a larger critical value because even though the \(\alpha\) area is the same (.05), we must now split it between two tails of the distribution. Again, this is because we are not sure if the policy will increase or decrease sick leave. When the alternative hypothesis does not specify a direction, we use the two-tailed test.

Critical Values for One and Two-tailed Tests

Figure 9.3: Critical Values for One and Two-tailed Tests

The figure below illustrates the difference in critical values for one- and two-tailed hypothesis tests. Since we are splitting .05 between the two tails, the c.v. for a two-tailed test is now the z-score that gives us .025 as the area beyond z at the tails of the distribution. Using the qnorm function in R (below), we see that this is z= 1.96, so the critical value for the two-tailed test is 1.96.

If we obtain a z-score (positive or negative) that is larger in absolute magnitude than this, we reject H 0 . Using a two-tailed test requires a larger z-score, making it slightly harder to reject the null hypothesis. However, since the z-score in the sick leave example is -2.86, we would still reject H 0 under a two-tailed test.

In truth, the choice between a one- or two-tailed test rarely makes a difference in rejecting or failing to reject the null hypothesis. The choice matters most when the p-value from a one-tailed test is greater than .025, in which case it would be greater than .05 in a two-tailed test. It is worth scrutinizing findings from one-tailed tests that are just barely statistically significant to see if a two-tailed test would be more appropriate. Because the two-tailed test provides a more conservative basis for rejecting the null hypothesis, researchers often choose to report two-tailed significance levels even when a one-tailed test could be justified. Many statistical programs, including R, report two-tailed p-values by default.

9.3 T-Distribution

Thus far, we have focused on using z-scores and the z-distribution for testing hypotheses and constructing confidence intervals. Another distribution available to us is the t-distribution. The t-distribution has an important advantage over the z-distribution: it does not assume that we know the population standard error. This is very important because we rarely know the population standard error. In other words, the t-distribution assumes that we are using an estimate of the standard error. The estimate of the standard error is:

\[\hat{\sigma}_{\bar{x}}=S_{\bar{x}}=\frac{S}{\sqrt{N}}\]

\(S_{\bar{x}}\) is our best guess for \(\sigma_{\bar{x}}\) , but it is a sample statistic, so it does involve some level of error.

In recognition of the fact that we are estimating the standard error with sample data rather than the population, the t-distribution is somewhat flatter (see Figure 9.4 below) than the z-distribution. This means that the critical value for a given level of significance will be larger in magnitude for a t-score than for a z-score. This difference is especially noticeable for small samples and virtually disappears for samples greater than 100, at which point the t-distribution becomes almost indistinguishable from the z-distribution (see Figure 9.5 below). Comparing the two distributions, you can see that they are both perfectly symmetric but that the t-distribution is a bit more squat and has slightly fatter tails.

Comparison of Normal and t-Distributions

Figure 9.4: Comparison of Normal and t-Distributions

Now, here’s the fun part—the t-score is calculated the same way as the z-score. We do nothing different than what we did to calculate the z-score.

\[t=\frac{\bar{x}-\mu}{S_{\bar{x}}}\]

We use the t-score and the t-distribution in the same way and for the same purposes that we use the z-score.

Choose a p-value or level of significance ( \(\alpha\) ) for rejecting H 0 . (Usually .05)

Find the critical value of t associated with \(\alpha\) (depends on degrees of freedom)

Calculate the t-score from the sample data.

Compare t-score to c.v. If \(|t| > c.v.\) , then reject H 0 ; if \(|t| < c.v.\) , then fail to reject.

While everything else looks about the same as the process for hypothesis testing with z-scores, determining the critical value for a t-distribution is somewhat different and depends upon sample size. This is because we have to consider something called degrees of freedom (df), essentially taking into account the issue discussed in Chapter 8, that sample data tend to slightly underestimated the variance and standard deviation and that this underestimation is a bigger problem with small samples. For testing hypotheses about a single mean, degrees of freedom equal:

So for the sick leave example used above:

In the figure below, you can see the impact of sample size (through degrees of freedom) on the shape of the t-distribution: as sample size and degrees of freedom increase, the t-distribution grows more and more similar to the normal distribution. At df=100 (not shown here) the t-distribution is virtually indistinguishable from the z-distribution.

Degrees of Freedom and Resemblance of t-distribution to the Normal Distribution

Figure 9.5: Degrees of Freedom and Resemblance of t-distribution to the Normal Distribution

There are two different methods you can use to find the critical value of t for a given level of degrees of freedom. We can go “old school” and look it up in a t-distribution table (below) 23 , or we can ask R to figure it out for us. It’s easier to rely on R for this, but there is some benefit to going old school at least once. In particular, it helps reinforce how degrees of freedom, significance levels, and critical values fit together. You should follow along.

hypothesis in political analysis

Alternatively, we could ask R to provide this information using the qt function. for this, you need to declare the desired p-value and specify the degrees of freedom, and R reports the critical value:

By default, qt() provides the critical values for a specified alpha area at the lower tail of the distribution (hence, -1.66). For a two-tailed test, you need to cut the alpha area in half:

Here, R reports a critical value of \(\pm 1.984\) for a two-tailed test from a sample with df=99. Again, this is slightly larger than the critical value for a z-score (1.96). If you used the t-score table to do this the old-school way, you would find the critical value is t=1.99, for df=90. The results from using the qt function are more accurate than using the t-table since you are able to specify the correct degrees of freedom.

One-tailed or two-tailed,the conclusion for the sick leave example is unaffected: the t-score obtained from the sample is in the critical region, so reject H 0 .

We can also get a bit more precise estimate of the probability of getting a sample mean of 54.8 from a population in which \(\mu\) =59.2 by asking R to tell us the area under the curve to the left of t=-2.86:

Note that this result is very similar to what we obtained when using the z-distribution (.002118). For a two-tailed test using the t-distribution, we double this to find a p-value equal to .005167.

9.4 Proportions

As discussed in Chapter 8, the logic of hypothesis testing about mean values also applies to proportions. For example, in the sick leave example, instead of testing whether \(\mu=59.2\) we could test a hypothesis regarding the proportion of employees who take a certain number of sick days. Let’s suppose that in the year before the new policy went into effect, 50% of employees took at least 7 sick days. If the new policy has an impact, then the proportion of employees taking at least 7 days sick leave should be lower than .50. In the sample of 100 employees used above, the proportion of employees taking at least 7 sick days was .41. In this case, the null and alternative hypotheses are:

H 0 : P=.50

H 1 : P<.50

To review, in the previous example, to test the null hypothesis we established a desired level of statistical significance (.05), determined the critical value for the t-score (-1.66), and calculated the t-statistic. If |t| is greater than the |c.v.|, we can reject H 0 ; if |t| is less than the |c.v.|, fail to reject H 0 . There are a couple differences, however, when working with this proportion.

In this case, because we can calculate the population standard deviation based on the hypothesized value of P (.5), we can use the z-distribution rather than the t-distribution. First, to calculate the z-score, we use the same formula as before

\[z=\frac{p-P}{S_{p}}\] Where:

\[S_{p}=\sqrt{\frac{P(1-P)}{n}}\]

Using the data from the problem, this give us:

\[z=\frac{p-P}{S_{p}}=\frac{.41-.5}{\sqrt{\frac{.5(.5))}{100}}}=\frac{-.09}{.05}=-1.8\]

We know from before that the critical value for a one-tailed test using the z-distribution is -1.65. Since this z-score is larger (in absolute terms) than the critical value, we can reject the null hypothesis and conclude that the proportion of employees using at least 7 days of sick leave per year is lower than it was in the year before the new sick leave policy went into effect.

Again, we can be a bit more specific about the p-value:

Here are a couple of things to think about with this finding. First, while the p-value is lower than .05, it is not much lower. In this case, if you took 1000 samples of 100 workers from a population in which \(P=.50\) and calculated the proportion who took 7 or more sick days, approximately 36 of those samples would produce a proportion equal to .41 or lower, just due to sampling error. This still means that the probability of getting this sample finding from a population in which the null hypothesis were true is pretty small (.03593), so we should be comfortable rejecting the null hypothesis. But what if there were good reasons to use a two-tailed test? Would we still reject the null hypothesis? No, because the critical value (-1.96) would be larger in absolute terms than the z-score, and the p-value would be .07186. These findings stand in contrast to those findings from the analysis of the average number of sick days taken, where the p-values for both one- and two-tailed tests were well below the .05 cut-off level.

One of the take-home messages from this example is that our confidence in findings is sometimes fragile, since “significance” can be a function of how you frame the hypothesis test (one- or two-tailed test?) or how you measure your outcomes (average hours of sick days taken, or proportion who take a certain number of sick days). For this reason, it is always a good idea to be mindful of how the choices you make might influence your findings.

9.5 T-test in R

Let’s say you are looking at data on public perceptions of the presidential candidates in 2020 and you have a sense that people had mixed feelings about Democratic nominee, Joe Biden, going into the election. This leads you to conclude that his average rating was probably about 50 on the 0 to 100 feeling thermometer scale from the ANES. You decide to test this directly with the anes20 data set.

The null hypothesis is:

H 0 : \(\mu=50\)

Because there are good arguments for expecting the mean to be either higher or lower than 50, the alternative hypothesis is two-tailed:

H 1 : \(\mu\ne50\)

First, you get the sample mean:

Here, you see that the mean feeling thermometer rating for Biden in the fall of 2020 was 53.41. This is higher than what you thought it would be (50), but you know that it’s possible to could get a sample outcome of 53.41 from a population in which the mean is actually 50, so you need to do a t-test to rule out sampling error as reason for the difference.

In R, the command for a one-sample two-tailed t-test is relatively simple, you just have to specify the variable of interest and the value of \(\mu\) under the null hypothesis:

These results are pretty conclusive, the t-score is 8.2 and the p-value is very close to 0 24 Also, if it makes more sense for you to think of this in terms of a confidence interval, the 95% confidence interval ranges from about 52.6 to 54.2, which does not include 50. We should reject the null hypothesis and conclude instead that Biden’s feeling thermometer rating in the fall of 2020 was greater than 50.

9.6 Next Steps

The last three chapters have given you a foundation in the principles and mechanics of sampling, statistical inference, and hypothesis testing. Everything you have learned thus far is interesting and important in its own right, but what is most exciting is that it prepares you for testing hypotheses about outcomes of a dependent variable across two or more categories of an independent variable. In other words, you now have the tools necessary to begin looking at relationships among variables. We take this up in the next chapter by looking at differences in outcomes across two groups. Following that, we test hypotheses about outcomes across multiple groups in Chapters 11 through 13. In each of the next several chapters, we continue to focus methods of statistical inference, exploring alternative ways to evaluate statistical significance. At the same time, we also introduce the idea of evaluating the strength of relationships by focusing on measures of effect size . Both of these concepts–statistical significance and effect size–continue to play an important role in the remainder of the book.

9.7 Exercises

9.7.1 concepts and calculations.

The survey of 300 college student introduced in exercises in Chapter 8 found that the average semester expenditure was $350 with a standard deviation of $78. At the same time, campus administration has done an audit of required course materials and claims that the average cost of books and supplies for a single semester should be no more that $340. In other words, the administration is saying the the population value is $340.

State a null and alternative hypothesis to test the administration’s claim. Did you use a one- or two-tailed alternative hypothesis? Explain your choise

Test the null hypothesis and discuss the findings. Show all calculations

In the same survey reports that among the 300 students, 55% reported being satisfied with the university’s response to the COVID-19 pandemic. The administration hailed this finding as evidence that a majority of students support the course they’ve taken in reaction to the pandemic. (Hint: this is a “proportion” problem)

9.7.2 R Problems

For this assignment, you should use the feeling thermometers for Donald Trump ( anes20$V202144 ), liberals ( anes20$V20261 ), and conservatives ( anes20$V202164 ).

Using descriptive statistics and either a histogram, boxplot, or density plot, describe the central tendency and distribution of each feeling thermometer.

Use the t.test function to test the null hypotheses that the mean for each of these variables in the population is equal to 50. State the null and alternative hypotheses and interpret the findings from the t-test.

Taking these findings into account, along with the analysis of the Joe Biden’s feeling thermometer at the end of the chapter, do you notice any apparent contradictions in American public opinion? Explain

The code for generating this table comes from Ben Bolker via stackoverflow ( https://stackoverflow.com/questions/31637388/ ). ↩︎

Remember that 3e-16 is scientific notation and means that you should move the decimal point 16 places to the left of 3. This means that p=.0000000000000003. ↩︎

  • Help and information
  • Comparative Politics
  • Environmental Politics
  • European Politics
  • European Union
  • Foreign Policy
  • Human Rights and Politics
  • International Political Economy
  • International Relations
  • Introduction to Politics
  • Middle Eastern Politics
  • Political Economy
  • Political Institutions
  • Political Methodology
  • Political Philosophy
  • Political Theory
  • Politics of Development
  • Security Studies
  • UK Politics
  • US Politics
  • Share This Facebook LinkedIn Twitter

Politics

Politics (1st edn)

  • Acknowledgements
  • List of Boxes
  • List of Tables
  • About the authors
  • How to Use This Book
  • How to Use the Online Resources
  • 1. Introduction: The Nature of Politics and Political Analysis
  • 2. Politics and the State
  • 3. Political Power, Authority, and the State
  • 4. Democracy
  • 5. Democracies, Democratization, and Authoritarian Regimes
  • 6. Nations and Nationalism
  • 7. The Ideal State
  • 8. Ideologies
  • 9. Political Economy: National and Global Perspectives
  • 10. Institutions and States
  • 11. Laws, Constitutions, and Federalism
  • 12. Votes, Elections, Legislatures, and Legislators
  • 13. Political Parties
  • 14. Executives, Bureaucracies, Policy Studies, and Governance
  • 15. Media and Politics
  • 16. Civil Society, Interest Groups, and Populism
  • 17. Security Insecurity, and the State
  • 18. Governance and Organizations in Global Politics
  • 19. Conclusion: Politics in the Age of Globalization

p. 1 1. Introduction: The Nature of Politics and Political Analysis

  • Peter Ferdinand , Peter Ferdinand Emeritus Reader in Politics and International Studies, University of Warwick
  • Robert Garner Robert Garner Professor of Politics, University of Leicester
  •  and  Stephanie Lawson Stephanie Lawson Professor of Politics and International Studies, Macquarie University
  • https://doi.org/10.1093/hepl/9780198787983.003.0001
  • Published in print: 12 April 2018
  • Published online: August 2018

This chapter discusses the nature of politics and political analysis. It first defines the nature of politics and explains what constitutes ‘the political’ before asking whether politics is an inevitable feature of all human societies. It then considers the boundary problems inherent in analysing the political and whether politics should be defined in narrow terms, in the context of the state, or whether it is better defined more broadly by encompassing other social institutions. It also addresses the question of whether politics involves consensus among communities, rather than violent conflict and war. The chapter goes on to describe empirical, normative, and semantic forms of political analysis as well as the deductive and inductive methods of the study of politics. Finally, it examines whether politics can be a science.

  • political analysis
  • empirical analysis
  • normative analysis
  • semantic analysis
  • deductive method
  • inductive method

You do not currently have access to this chapter

Please sign in to access the full content.

Access to the full content requires a subscription

Printed from Oxford Politics Trove. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 25 April 2024

  • Cookie Policy
  • Privacy Policy
  • Legal Notice
  • Accessibility
  • [66.249.64.20|195.216.135.184]
  • 195.216.135.184

Characters remaining 500 /500

The Relevance of Political Awareness: A Literature Review with Meta-Analysis

  • First Online: 10 December 2021

Cite this chapter

hypothesis in political analysis

  • Carl Görtz 5  

409 Accesses

3 Citations

Has research overlooked causes of citizens’ political awareness—and are the empirical merits, connected to political awareness, not that convincing as we thought? Recently, such topics have been discussed among scholars. Although, information about the state and development of the research is substandard. Therefore, this chapter provides an extensive literature review that focuses on how studies have theoretically employed the concept of political awareness and on results about the relevance of political awareness. The results from analyzing 78 articles are as follows. (1) Most of the research on political awareness uses political awareness as a moderating variable (38.4%), followed by nearly equal proportions of studies using political awareness as either an independent (20.5%) or dependent variable (24.3%), and a small number of studies using political awareness as an intervening variable. (2) The assessment of the empirical evidence brought forward in these studies, through meta-analysis, shows that an overwhelming majority of the research report positive and significant results. Suggesting that the field is essentially in agreement, the influence of the social world on public opinion and political behavior is far from equally distributed among the citizenry. Such effects are depended on citizens’ (levels of) political awareness.

This is a preview of subscription content, log in via an institution to check access.

Access this chapter

  • Available as PDF
  • Read on any device
  • Instant download
  • Own it forever
  • Available as EPUB and PDF
  • Compact, lightweight edition
  • Dispatched in 3 to 5 business days
  • Free shipping worldwide - see info
  • Durable hardcover edition

Tax calculation will be finalised at checkout

Purchases are for personal use only

Institutional subscriptions

Studies Marked * are Included in the Analyses

*Abdo-Katsipis, C. B. (2017). Women, political participation, and the Arab spring: Political awareness and participation in democratizing Tunisia. Journal of Women, Politics & Policy, 38 (4), 413–429.

Google Scholar  

*Adkins, T., Layman, G. C., Campbell, D. E., & Green, J. C. (2013). Religious group cues and citizen policy attitudes in United States. Politics and Religion, 6 (2), 235–263.

Ahmadov, A. K. (2014). Oil, democracy, and context: A meta-analysis. Comparative Political Studies, 47 (9), 1238–1267.

Article   Google Scholar  

*Al-Dayel, N. (2019). “Now is the time to wake up”: Islamic state’s narratives of political awareness. Terrorism and Political Violence (online First) . https://doi.org/10.1080/09546553.2019.1603145

*Al-Thubetat, Q. J. A. (2018). Impact of the subject of political science on students awareness in Petra university: A case of Jordan. Journal of Politics and Law, 11 (4), 170–180.

Amer, M. (2009). Political awareness and its implications on participatory behaviour—a study of Naga women voters in Nagaland. Indian Journal of Gender Studies, 16 (3), 359–374.

*Anduiza, E., Gallego, A., & Munoz, J. (2013). Turning a blind eye: Experimental evidence of partisan bias in attitudes toward corruption. Comparative Political Studies, 46 (12), 1664–1692.

Aneshensel, C. S. (2013). Theory-based data analysis for the social sciences . SAGE publications.

Book   Google Scholar  

*Arceneaux, K. (2008). Can partisan cues diminish democratic accountability? Political Behavior, 30 (2), 139–160.

*Arias, C. R., Garcia, J., & Corpeno, A. (2015). Population as auditor of an election process in Honduras: The case of VotoSocial crowdsourcing platform. Policy & Internet, 7 (2), 185–200.

*Arnold, J. R. (2012). Political awareness, corruption perception and democratic accountability in Latin America. Acta Politica, 47 (1), 67–90.

*Ayers, J. W., & Hofstetter, R. C. (2008). American Muslim political participation following 9/11: Religious belief, political resources, social structures, and political awareness. Politics and Religion, 1 (1), 3–26.

Bartels, L. M. (2012). The political education of John Zaller. Critical Review, 24 (4), 463–488.

*Bartle, J. (2000). Political awareness, opinion constraint and the stability of ideological position. Political Studies, 48 (3), 467–484.

*Bayulgen, O. (2008). Muhammad Yunus, Grameen Bank and the Nobel peace prize: What political science can contribute to and learn from the study of microcredit. International Studies Review, 10 (3), 525–547.

*Berg, L., & Chambers, J. (2019). Bet out the vote: Prediction markets as a tool to promote undergraduate political engagement. Journal of Political Science Education, 15 (1), 2–16.

*Berry, W. D., Fording, R. C., Ringquist, J. E., Hanson, R. L., & Klarner, C. (2012). A new measure of state government ideology, and evidence that both the new measure and an old measure are valid. State Politics & Politics Quarterly, 13 (2), 164–182.

Boulianne, S. (2009). Does internet use affect engagement? A meta-analysis of research. Political Communication, 26 (2), 193–211.

*Busemeyer, M. R., Lergetporer, P., & Woessmann, L. (2018). Public opinion and the political economy of educational reforms: A survey. European Journal of Political Economy, 53 (3), 161–185.

Bushman, B. J. (1994). Vote-counting procedures in meta-analysis. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 194–213). Russell Sage Foundation.

*Çakir, A. A., & Şekercioğlu, E. (2016). Public confidence in judiciary: The interaction between political awareness and levels of democracy. Democratization, 23 (4), 634–656.

Cancela, J., & Geys, B. (2016). Explaining voter turnout: A meta-analysis of national and subnational elections. Electoral Studies, 42 (2), 264–275.

*Carrubba, C. J., & Murrah, L. (2005). Legal integration and use of the preliminary ruling process in the European Union. International Organization, 59 (2), 399–418.

*Cassel, C. A., & Lo, C. C. (1997). Theories of political literacy. Political Behavior, 19 (4), 317–335.

*Claassen, R. L. (2011a). Political awareness and electoral campaigns: Maximum effects for minimum citizens. Political Behavior, 33 (3), 203–223.

*Claassen, R. L. (2011b). Political awareness and partisan realignment: Are the unaware unevolved? Political Research Quarterly, 64 (4), 818–830.

*Cobb, M. D., & Kuklinski, J. H. (1997). Changing minds: Political arguments and political persuasion. American Journal of Political Science, 41 (1), 88–121.

*Collet, C., & Kato, G. (2014). Does NHK make you smarter (and super news make you ‘softer’): an examination of Japanese political knowledge and the potential influence of TV news. Japanese Journal of Political Science, 15 (1), 23–50.

Converse, P. E. (2006). The nature of belief system in mass publics (1964). Critical Review, 18 (1–3), 1–74.

*Croke, K., Grossman, G., Larreguy, H. A., & Marshall, J. (2016). Deliberate disengagement: How education can decrease political participation in electoral authoritarian regimes. American Political Science Review, 110 (3), 579–600.

Dahl, R. (1989). Democracy and its critics . Yale University Press.

de Almeida Teles J. (2017). The Araguaia Guerrilla War (1972–1974): Armed Resistance to the Brazilian Dictatorship. Latin American Perspectives, 44 (5), 30–52.

*Denemark, D. (2002). Television effects and voter decision making in Australia: A re-examination of the converse model. British Journal of Political Science, 32 (4), 663–690.

*Dobrzynska, A., & Blais, A. (2008). Testing Zaller’s reception and acceptance model in an intense election campaign. Political Behavior, 30 (3), 259–276.

Doucouliagos, H., & Ulubaşuğlu, M. A. (2008). Democracy and economic growth: A meta-analysis. American Journal of Political Science, 52 (1), 61–83.

*Dragojlovic, N. (2011). Priming and the Obama effect on public evaluations on the United States. Political Psychology, 32 (6), 989–1006.

*Dragojlovic, N. (2013). Leaders without borders: Familiarity as a moderator of transnational Source cue effects. Political Communication, 30 (2), 297–316.

*Dragojlovic, N. (2015). Listening to outsiders: The Impact of messenger nationality on transnational persuasion in the United States. International Studies Quarterly, 59 (1), 73–85.

*Drury, A. C., Overby, L. M., Ang, A., & Li, Y. (2010). ‘Pretty prudent’ or rhetorically responsive? The American public’s support for military action. Political Research Quarterly, 63 (1), 83–96.

Enns, P. K., & Kellstedt, P. M. (2008). Policy mood and political sophistication: Why everybody moves mood. British Journal of Political Science, 38 (4), 433–454.

Fink, A. (2014). Conducting research literature reviews: From the internet to paper . SAGE Publications.

*Gabel, M., & Scheve, K. (2007). Estimating the effect of elite communication on public opinion using instrumental variables. American Journal of Political Science, 51 (4), 1013–1028.

*Gattermann, K., de Vreese, C. H., & van der Brug, W. (2016). Evaluations of the Spitzenkandidaten: the role of information and news exposure in citizens’ preference formation. Politics and Governance, 4 (1), 37–54.

Geys, B. (2006). Explaining voter turnout: A review of aggregate-level research. Electoral Studies, 25 (4), 637–663.

*Geys, B., Heinemann, F., & Kalb, A. (2010). Voter involvement, fiscal autonomy and public sector efficiency: evidence from German municipalities. European Journal of Political Economy, 26 (2), 265–278.

*Gimpel, J. G., & Wolpert, R. M. (1995). Rationalizing support and opposition to supreme court-nominations—the role of credentials. Polity, 28 (1), 67–82.

*Glansville, J. L. (1999). Political socialization or selection? Adolescent extracurricular participation and political activity in early adulthood. Social Science Quarterly, 80 (2), 279–290.

Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research . SAGE publications.

*Goot, M. (2006). The aboriginal franchise and its consequences. Australian Journal of Politics and History, 54 (4), 517–561.

*Górecki, M. A. (2011). Why bother lying when you know so few care? Party contact, education and over-reporting voter turnout in different types of elections. Scandinavian Political Studies, 34 (3), 250–267.

*Goren, P. (2012). Political values and political awareness. Critical Review, 24 (4), 505–525.

Grönlund, K., & Milner, H. (2006). The determinants of political knowledge in comparative perspective. Scandinavian Political Studies, 29 (4), 386–406.

*Gwiasda, G. W. (2001). Network news coverage of campaign advertisements—Media’s ability to reinforce campaign messages. American Politics Research, 29 (5), 461–482.

Haidich, A.-B. (2010). Meta-analysis in medical research. Hippokratia, 14 (1), 29–37.

*Hayes, D. (2009). Has television personalized voting behavior? Political Behavior, 31 (2), 231–260.

*Hayes, D. (2010). Trait voting in U.S. senate elections. American Politics Research, 38 (6), 1102–1129.

*Hayes, D., & Guardino, M. (2011). The influence of foreign voices on U.S. public opinion. American Journal of Political Science, 55 (4), 830–850.

*Hayes, D., & Lawless, J. L. (2015). As local news goes, so goes citizen engagement: Media, knowledge, and participation in US House elections. The Journal of Politics, 77 (2), 447–462.

*Hewitt, W. E. (2000). The political dimensions of women’s participation in Brazil’s base Christian communities (CEBs). Women and Politics, 21 (3), 1–25.

*Hibbing, J. R., & Patterson, S. C. (1994). Public trust in the new parliaments of central and Eastern Europe. Political Studies, 42 (4), 570–592.

*Highton, B. (2009). Revisiting the relationship between educational attainment and political sophistication. The Journal of Politics, 71 (4), 1564–1576.

*Huo, J. (2005). Party dominance in 18 countries: The role of party dominance in the transmission of political ideology. Canadian Journal of Political Science, 38 (3), 745–765.

*Jackson, R. A. (1995). Clarifying the relationship between education and turnout. American Politics Quarterly, 23 (5), 279–299.

*Jones, P., & Dawson, P. (2008). How much do voters know? An analysis of political awareness and motivation. Scottish Journal of Political Economy, 55 (2), 123–142.

*Jordan, J. (2018). Political awareness and support for redistribution. European Political Science Review, 10 (1), 119–137.

*Kam, C. D. (2005). Who toes the party line? Cues, values and individual differences. Political Behavior, 27 (2), 163–182.

*Klašnja, M. (2017). Uninformed voters and corrupt politicians. American Politics Research, 45 (2), 256–279.

*Koch, J. W. (1998). Political rhetoric and political persuasion—the changing structure of citizens’ preferences on health insurance during policy debate. Public Opinion Quarterly, 62 (2), 209–229.

*Koch, J. W. (2001). When parties and candidates collide: Citizen perception of House candidates’ positions on abortion. Public Opinion Quarterly, 65 (1), 1–21.

*Koch, J. W. (2002). Gender stereotypes and citizens’ Impression of House candidates’ ideological orientations. American Journal of Political Science, 46 (2), 453–462.

*Ladd, J. M. (2007). Predispositions and public support for the president during the war on terrorism. Public Opinion Quarterly, 71 (4), 511–538.

*Lall, M. (2012). Citizenship in Pakistan: State, nation and contemporary faultlines. Contemporary Politics, 18 (1), 71–86.

*Lia, B., & Hegghammer, T. (2004). Jihadi strategic studies: The alleged Al Qaida policy study preceding the Madrid bombings. Studies in Conflict & Terrorism, 27 (5), 355–375.

*Mader, M. (2017). Citizens’ perception of policy objectives and support for military action: Looking for prudence in Germany. Journal of Conflict Resolution, 61 (6), 1290–1314.

*Marshall, B., & Peress, M. (2018). Dynamic estimation of ideal points of the US congress. Public Choice, 179 (1–2), 153–174.

*Mondak, J. J. (1995). Newspapers and political awareness. American Journal of Political Science, 39 (2), 513–527.

Munck, G. L., & Snyder, R. (2007). “ebating the direction of comparative politics: an analysis of leading journals. Comparative Political Studies, 40 (5), 5–31.

*Nicholson, S. P. (2003). The political environment and ballot proposition awareness. American Journal of Political Science, 47 (3), 403–410.

*Nisbet, M. C., & Markowitz, E. M. (2015). Expertise in an era of polarization: Scientists’ political awareness and communication behavior. The ANNALS of the American Academy of Political and Social Science, 658 (1), 136–154.

*Parker, S. L., Parker, R. G., & McCann, J. A. (2008). Opinion taking within friendship networks. American Journal of Political Science, 52 (2), 412–420.

*Parrott, E. (2017). Building political participation: The role of family policy and political science courses. Journal of Political Science Education, 13 (4), 404–425.

*Perez-Linan, A. (2002). Television news and political partisanship in Latin America. Political Research Quarterly, 55 (3), 571–588.

Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide . Blackwell.

Rapeli, L. (2013). The conception of citizen knowledge in democratic theory . Palgrave Macmillan UK.

*Reuter, O. J., & Szakonyi, P. (2015). Online social media and political awareness in authoritarian regimes. British Journal of Political Science, 45 (1), 29–51.

*Ringquist, E. J., & Dasse, C. (2004). Lies, damned lies, and campaign promises? Environmental legislation in the 105th congress. Social Science Quarterly, 85 (2), 400–419.

*Roy, J., & Alcantara, C. (2015). The candidate effect: Does the local candidate matter? Journal of Elections, Public Opinion and Parties, 25 (2), 195–214.

*Schipper, B. C., & Woo, H. Y. (2019). Political Awareness, microtargeting of voters, and negative electoral campaigning. Quarterly Journal of Political Science, 14 (1), 41–88.

*Sciarini, P., & Kriesi, H. (2003). Opinion stability and change during an electoral campaign – results from 1999 Swiss Election study. International Journal of Public Opinion Research, 15 (4), 431–453.

*Seabrook, N. R., Dyck, J. J., & Lascher, E. L. (2015). Do ballot initiatives increase general political knowledge? Political Behavior, 37 (2), 279–307.

*Seo, M. (2011). Beyond coethnic boundaries: Coethnic residential context, communication, and Asian political participation. International Journal of Public Opinion Research, 23 (3), 338–360.

*Stein, E. A. (2013). The unraveling of support for authoritarianism: The dynamic relationship of media, elites, and public opinion in Brazil, 1972–82. The International Journal of Press/politics, 18 (1), 85–107.

*Stucky, D. T., Miller, G. M., & Murphy, L. M. (2008). Gender, guns, and legislating: An analysis of state legislative policy preference. Journal of Women, Politics & Policy, 29 (4), 477–495.

Talebi, M. (2013). Study of publication-bias in meta-analysis using trim and fill method. International Research Journal of Applied and Basic Sciences, 4 (1), 31–36.

*Tolbert, J. C., & Smith, D. A. (2005). The educative effects of ballot initiatives on voter turnout. American Politics Research, 33 (2), 283–309.

*Welch, S., & Hibbing, J. R. (1992). Financial condition, gender, and voting in American national elections. The Journal of Politics, 54 (1), 197–213.

*Wozniak, K. H., Calfano, B. R., & Drakulich, K. M. (2019). A “Ferguson effect” on 2016 presidential vote preference? Findings from a framing experiment examining “shy voters” and cues related to policing and social unrest. Social Science Quarterly, 100 (4), 1024–1038.

Zaller, J. R. (1992). The nature and origins of mass opinion . Cambridge University Press.

*Zinni, F. P., Mattei, F., & Rhodebeck, L. A. (1997). The structure of political attitudes toward groups: a comparison of experts and novices. Political Research Quarterly, 50 (3), 597–626.

Download references

Author information

Authors and affiliations.

Department of Political Science, University of Örebro, Örebro, Sweden

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Carl Görtz .

Editor information

Editors and affiliations.

Department of Politics & Society, Aalborg University, Aalborg, Denmark

Niels Nørgaard Kristensen

Department of Political Science, Örebro University, Örebro, Sweden

Thomas Denk

Humanities and Social Sciences Education, Stockholm University, Stockholm, Sweden

Maria Olson

Social and Educational Sciences, Norwegian University of Science and Tech, Trondheim, Norway

Trond Solhaug

Overview of the 78 included articles.

Rights and permissions

Reprints and permissions

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Görtz, C. (2022). The Relevance of Political Awareness: A Literature Review with Meta-Analysis. In: Kristensen, N.N., Denk, T., Olson, M., Solhaug, T. (eds) Perspectives on Political Awareness. Springer, Cham. https://doi.org/10.1007/978-3-030-90394-7_2

Download citation

DOI : https://doi.org/10.1007/978-3-030-90394-7_2

Published : 10 December 2021

Publisher Name : Springer, Cham

Print ISBN : 978-3-030-90393-0

Online ISBN : 978-3-030-90394-7

eBook Packages : Political Science and International Studies Political Science and International Studies (R0)

Share this chapter

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research

Library homepage

  • school Campus Bookshelves
  • menu_book Bookshelves
  • perm_media Learning Objects
  • login Login
  • how_to_reg Request Instructor Account
  • hub Instructor Commons
  • Download Page (PDF)
  • Download Full Book (PDF)
  • Periodic Table
  • Physics Constants
  • Scientific Calculator
  • Reference & Cite
  • Tools expand_more
  • Readability

selected template will load here

This action is not available.

Social Sci LibreTexts

1.4: Chapter 4- Political Science as a Social Science

  • Last updated
  • Save as PDF
  • Page ID 73445

  • David Hubert
  • Salt Lake Community College
“Science is extraordinarily effective at rooting out rubbish.” –David J. Helfand (1)

The discipline called “political science” is a branch of the social sciences, which includes sociology, psychology, anthropology, and economics. Social scientists study individual and social behavior. They explore questions that often come from established  theoretical perspectives  consisting of concepts, definitions, and a body of scholarly literature developed over time. As you engage in this political science class, make sure you pay attention to the various theoretical perspectives that exist in the discipline. Recall that this text approaches political science from a modified version of  elite theory , which takes the perspective that a struggle exists between elites who use their money, access, and influence over political institutions and processes to consistently push government to serve their interests and ordinary people who use their votes to inconsistently push government to serve their interests. We will use this lens through which to better understand  how  the political system in the United States works and  for whom  it works.

Political scientists describe and explain political behavior. In doing so, they often look for patterns and relationships in what may appear to be a blizzard of random events. They know that while the political world is not as predictable as the physical world studied by chemists and physicists, they can study it systematically if they know where and how to look. Political scientists attempt to make empirical or  verifiable  statements about how the world of politics works. They carefully observe phenomena such as voting, political opinions, legislative decisions, campaign finance disclosures, presidential vetoes, Supreme Court decisions, and so forth.

The Scientific Method

Many social scientists employ the scientific method in the same ways that natural scientists do—although studying people instead of natural phenomena adds layers of complexity to the task. Other social scientists eschew using the formal scientific method in favor of rigorous interpretations, analyses, or in-depth case studies. They may do so because historical events and contemporary social phenomena are too complex for simple causal models to address or because people are too self-aware to be measured and studied without distorting results. Nevertheless, all social scientists adhere to empirical, formalized methodologies. The difficulty social scientists have is detaching themselves from their ideological or normative understandings of how they want the political world to work versus how it actually works. Science historian and philosopher Lee McIntyre argues that “the challenge in social science is to find a way to preserve our values without letting them interfere with empirical investigation. We need to understand the world before we can change it.” (2)

The phrase scientific method is a bit misleading in that it is an idealized process with a clear order of steps used to describe the often messy work that scientists actually do. You might have learned these steps in elementary school:

  • Ask a question
  • Research what others have learned about the question
  • Formulate a hypothesis
  • Conduct an experiment
  • Collect and analyze data
  • Communicate results

Regarding political science, we might be better off if we think of the  scientific method  as a systematic, logically driven process to gather information and make conclusions about natural and social phenomena. And rather than focusing on an artificial step-by-step approach to understanding the scientific method, we can go into more detail on the features that distinguish science from other ways of knowing.

We have already referenced  empiricism  above. The words  empiricism , noun, and  empirical , adjective, mean that scientists base their conclusions on careful verifiable observation and experience, rather than on intuition, revelation, prejudice, superstition, or anecdote. Empiricism in the West is a cherished gift of the Renaissance and the Age of Enlightenment. For example, through his telescope, Galileo patiently observed four “stars” dancing around Jupiter, which led him to make the empirical statement that they were in fact moons orbiting around the planet. In addition, English physician Edward Jenner observed that farm hands who contracted cowpox earlier in life did not get smallpox, which led him to make the empirical statement that inoculating individuals with cowpox could protect them against smallpox. He tested this proposition on an 8-year-old boy named James Phipps. Phipps did not get smallpox. The result was the insight that inoculation made a person immune from the disease. These and many other examples illustrate empiricism’s power over other forms of knowing such as tradition or revelation.

Hypotheses, Concepts, and Variables

Aside from making careful and patient observations, the scientific method requires that we formulate hypotheses, conceptualize complex phenomena, and analyze constantly changing variables. Political scientists generate a  hypothesis  by asking a research question—an inquiry that asks how the political world operates or why it works the way it does. The hypothesis posits an answer to the research question that you then test by conducting studies or experiments. The kinds of  why  or  how  questions that make good hypotheses are distinct from questions that elicit factual answers. For example, questions such as “What interests or organizations contribute the most money to political campaigns?” or “How many Supreme Court justices have been women?” are important—indeed, they are foundational to political science, so we will concern ourselves with many of them in this course. But they are the kinds of questions that typically elicit straightforward answers. Rather, here are some examples of large research questions in political science that make good hypotheses:

  • Why does the United States—uniquely among advanced democracies—not have universal health coverage? My hypothesis might be that entrenched interests have been able to use the political system to block broad health coverage.
  • Why do congressional incumbents have high reelection rates? My hypothesis might be that their financial advantage contributes greatly to their high reelection rate.
  • How does the constitutional structure benefit some interests over others? My hypothesis might be that the constitutional structure privileges certain interests over others, particularly those who want to stop new policy over those who want to start it.
  • How did conservatives go from spectacular defeat in 1964 to preeminence in all three government branches by 2000? My hypothesis might be that the conservative movement simply expanded to reflect real shifts in popular support on key issues that were favorable to the conservative point of view. In other words, shifts in public opinion caused the success of conservative politicians.

These kinds of questions are complex and require that scholars gather evidence from a variety of sources. Hypotheses must be supported systematically through a process of argumentation with political scientists who might disagree.

Not all hypotheses are the same. Here are the major categories of hypotheses:

Null hypothesis:  This essentially asserts that there is  n o relationship between two variables. Often political scientists will refute the null hypothesis to make sure there is something interesting going on before they undertake more sophisticated analysis. On the question of money and incumbent reelection rates, for example, the null hypothesis would be that there is no relationship between campaign budgets and chances of succeeding in an election.

Correlative  or correlational  hypothesis:  This simply suggests that two variables vary together. For example, I might hypothesize that there is a relationship between religious fundamentalism and acts of terrorism. In doing so, I’m not speculating which variable is causing movement in the other.

Directional hypotheses:  Correlative hypotheses are not especially powerful, so we tend to construct particular kinds of correlative hypotheses. As you might guess, directional hypotheses posit a direction to the relationship in question. For example, I could say that as religious fundamentalism increases, acts of terrorism increase as well. This is called a  positive relationship —the value of one variable increasing along with the value of another variable. A  negative relationship  involves the value of one variable decreasing as the value of the other variable increases. For example, we might hypothesize that as personal income increases, willingness to support public transit decreases.

Causal hypothesis:  This goes one step further by positing that at least some of the variance in one variable is being caused by the variance in the other variable. In all the other hypotheses, the two variables do not need to connect, but in a causal hypothesis, they do. Causation is extremely difficult to establish. For example, let’s say that we could somehow measure the rise and fall of religious fundamentalism in the world and also that we have an accurate count of terrorist incidents over time. To establish causation, we would have to show a statistical relationship between the changing values for each variable  and  convince our readers of a valid link between the two variables—a link that cannot be explained in a better way. On top of this difficulty is the problem of social complexity. Rarely can a complex phenomenon such as terrorism be explained by one variable, which brings to mind the admonition, beware of mono-causal explanations. Political scientists are much more likely to say that a certain percentage of the variance in terrorism can be explained by the variance in religious fundamentalism than they are to say that fundamentalism causes terrorism.

Hypotheses require the political scientist to conceptualize certain terms. Earlier, we posed a research question about the conservative movement’s growth from 1964 to 2000. What exactly do we mean by “the conservative movement”? A  concept  is a word or phrase that stands for something more complex or abstract. Political science is often concerned with big concepts such as liberty, democracy, power, justice, equality, war and peace, and representation. But there are many mid-level concepts in the discipline such as political development, political legitimacy, electoral realignment, or globalization. In addition, terms related to political ideologies—liberal, conservative, socialist, fascist, feminist, libertarian, and so forth—are also key concepts. We must be clear about our key concept definitions. If I mean one thing by the concept “conservative movement” and you mean another, then it becomes difficult for us to have a productive academic dialogue about that topic.

In turn, researchers need to define or operationalize fuzzy concepts into measurable concrete  variables . For example, earlier we hypothesized that as people’s income increases, their tendency to support public transit programs would decrease. How are we going to operationalize “income” as a variable that we can use in our analysis? We could ask a sample of people to tell us their income and then ask them questions about public transit. But, let’s say we wanted to rely on more concrete income records, assuming we could get them. We’ll still have questions to consider: gross income before taxes? Only wage income? Family income or individual income? As you can see, operationalizing concepts into measurable variables is not always easy.

A final comment about variables and testing hypotheses: the political scientist must control other relevant variables in the research design or methodology, so they are seeing the impact of the key variable on its own. For example, we might hypothesize that higher income causes people to tend to turn out to vote more, and indeed that’s what the data show. However, income correlates well with higher formal education. How do we know whether we’re seeing the impact of income or education on voting turnout? We need to control for education. One way to do that would be to sample only people with similar formal education levels and then break down the voting data by income within that educational stratum. Thus, we could look at only people with a bachelor’s degree but no graduate degree and see whether tendency to vote  within that group  increases as personal income rises. Statisticians have developed mathematical techniques to control the effects of unwanted variables, but those techniques are beyond this textbook’s scope.

Experiments

Scientists often employ experiments to test their hypotheses, and political scientists do as well. Experiments come in two flavors: controlled and natural. A  controlled experiment  is one that is carefully set up by the scientist to control the variables that might affect the outcome, thereby isolating and evaluating the variable in which they are most interested. For example, let’s imagine we are interested in how conservatives and liberals respond to new information about health policy. We could gather two groups of 100 people, one conservative and one liberal, and bring them into our office for the experiment. We would need to be sure that the conservatives were conservative to the same degree as the liberals were liberal. We would also want two groups that matched each other in important demographic variables such as race, income, and sex. Once we have assured ourselves that the two groups differed only in their political ideology, we could then provide individuals in each group with the same new information about health policy. Then, we would need to develop an instrument to gauge the responses of conservatives and liberals. That instrument might be a knowledge questionnaire, a survey, or a behavior observation, depending on our hypothesis. Note that we have controlled the variables to such an extent that we can be confident that any difference we see between the groups is related to their different ideologies.

A  natural experiment  is an observational study in the real world where the scientist does not control the variables, but where natural processes or social events provide an opportunity for them to see the effect of a variable in action. Natural experiments are messier than controlled experiments, and therefore the conclusions that can be drawn from them are necessarily more tentative. Nevertheless, natural experiments are often compelling because they happen in the world around us rather than in a laboratory setting. For example, the Affordable Care Act—ACA or Obamacare—unintentionally created a natural experiment. The ACA required states to expand Medicaid to a larger percentage of poor people and funded them to do so. However, the Supreme Court struck down the mandate in 2012, thereby allowing states to choose whether or not to expand Medicaid. As it happens, states controlled by Republicans generally chose not to expand Medicaid, while states controlled by Democrats or that had a Democratic-Republic balance tended to expand Medicaid. Over a four-year period, researchers found that states that had expanded Medicaid reduced their mean annual mortality rate by 9.3 percent. Effectively, what this meant was that the 14 states that did  not  take advantage of the ACA to expand Medicaid had 15,600 people die who would not have died had the states expanded Medicaid. (3) Aside from the obvious conclusion that the decisions of the Supreme Court, state governors and legislatures caused the premature deaths of nearly 16,000 Americans, this natural experiment allowed us to see the variable’s impact at the state level—was Medicaid expansion a net positive or negative on people’s health?

Falsifiability and Professional Responsibilities

Demonstrator with a Sign Saying He Wants Evidence-Based Science

Science’s emphasis on empiricism, conceptual clarity, variables, hypotheses, and experiments underscores another characteristic that we want to highlight here: falsifiability.  Falsifiability —also known as testability—refers to the fact that scientific knowledge claims are subject to being proven wrong. Science philosopher Karl Popper argued that falsifiability is central to differentiating science from nonscience. “A system,” he wrote, “is to be considered as scientific only if it makes assertions which may clash with observations: and a system is, in fact, tested by attempts to produce such clashes; that is to say, by attempts to refute it.” (4) Scientists make claims about the natural or social worlds and how they work. Those claims are so carefully documented that another scientist can either replicate the original study or marshal another set of observations with the explicit goal of testing whether or not the first scientist’s claim was correct. Systematically falsifying incorrect claims makes science progress toward greater understanding. If someone claims that providing welfare causes people to avoid work, we should be able to gather data to shed light on the claim. How would we do that? Could we compare unemployment figures from countries with more and less generous welfare systems? Could we do a pre- and post-study centered around a state or country instituting a new welfare system? Whatever we do, we are empirically testing a claim that can either be refuted or confirmed.

What does an  untestable claim  look like? Namely, it is a theory that cannot be refuted. The paleontologist Donald Prothero provided a great example by citing the case of Philip Henry Gosse, a nineteenth-century English naturalist and member of the puritanical Plymouth Brethren. A couple of years before Charles Darwin published  On the Origin of Species by Natural Selection  in 1859, Gosse published a book called  Omphalos: An Attempt to Untie the Geological Knot . Like Darwin, Gosse was trying to explain the increasing evidence that life had evolved over time. But Darwin used careful observations to explicate his theory of natural selection—a theory that was eminently falsifiable. Gosse, on the other hand, put forward a theory that God had created the currently existing plants and animals as well as fossils to  look  like evolution had taken place over a long period, but that in fact, God had created all life relatively recently, just as Gosse’s  Bible  told him. He reconciled his religious beliefs with empirical observations by developing a theory that could not be refuted. When Darwin came along and wrote—in one of his many examples—that finches on the Galapagos Islands had, through natural selection over time, modified their morphology to suit the kinds of things they ate on the various island ecosystems, Gosse’s adherents could simply say, “God just made the finches look that way.” Gosse’s claim is not falsifiable through any observation or experiment, whereas the theory of natural selection has passed literally thousands of tests for over 160 years. (5)

Scientists of all stripes engage in common behaviors that support their work and to better understand each discipline’s study. Two particularly noteworthy behaviors are  attending   professional conferences  and  publishing in peer-reviewed journals . At  professional conferences,   scientists present their findings to their peers . There, they challenge each other, share new ideas and data sets, and develop common research interests around which they can collaborate. While professional conferences are not particularly exciting for someone who is not a member of that disciplinary community, its members greatly enjoy the give and take around poster sessions, panel discussions, and workshops.  Scientists  also  publish their findings in peer-reviewed journals . A peer-reviewed journal is a magazine that publishes only peer-reviewed articles. Peer-review is an extremely important and often overlooked feature of science. If a political scientist sends a manuscript to  International Studies Quarterly  or any of dozens of political science journals, that manuscript will be farmed out to at least two other political scientists who have published in that field. They will review the manuscript and make comments on the methodology, the data, and the conclusions it offers. They will tell the editors of  International Studies Quarterly  whether the manuscript should be published, rejected, or sent back to the author for revisions. This is a blind process—the author of the manuscript does not know who is reviewing it, and the reviewers do not know who wrote the manuscript. The peer-review process is not foolproof, but it is a very robust way of ensuring credibility.

Political science is a member of the social sciences. While not all political scientists use the formal scientific method, they all adhere to empirical, falsifiable methods that are peer-reviewed. Political scientists at universities focus primarily on research and secondarily on teaching. Political scientists at community colleges focus primarily on teaching and secondarily on research.

What if . . . ?

What if we did a better job of developing scientific literacy among the American population? What impact would that have on our conversations about political issues that have scientific dimensions to them? How might those conversations be different? How would you go about promoting scientific literacy in America?

  • David J. Helfand,  A Survival Guide to the Misinformation Age. Scientific Habits of Mind . New York: Columbia University Press, 2016. Page 22.
  • Lee McIntyre,  The Scientific Attitude. Defending Science from Denial, Fraud, and Pseudoscience . Cambridge, MA: The MIT Press, 2019. Pages 193-194.
  • Sarah Miller, Sean Altekruse, Norman Johnson, Laura R. Wherry, “Medicaid and Mortality: New Evidence from Linked Survey and Administrative Data,” Working Paper No. 26081. The National Bureau of Economic Research. July 2019.
  • Karl Popper,  Conjectures and Refutations: The Growth of Scientific Knowledge. London: Routledge, 2002. Page 345.
  • Donald R. Prothero,  Evolution. What the Fossils Say and Why It Matters . New York: Columbia University Press, 2007, Page 9.

Media Attributions

  • Peer Review Science  © Thomas Cizauskas is licensed under a  CC BY-NC-ND (Attribution NonCommercial NoDerivatives)  license

IMAGES

  1. How to Write a Strong Hypothesis in 6 Simple Steps

    hypothesis in political analysis

  2. (PDF) The Political Analysis Framework

    hypothesis in political analysis

  3. Hypotheses in Political Science Research

    hypothesis in political analysis

  4. How to Write a Hypothesis: The Ultimate Guide with Examples

    hypothesis in political analysis

  5. Hypothesis political group on airesis

    hypothesis in political analysis

  6. How to Write a Hypothesis

    hypothesis in political analysis

VIDEO

  1. Concept of Hypothesis

  2. Leo Strauss

  3. Political theory: An introduction ।+2 1st year political science Chapter 1 in Odia medium ।

  4. HYPOTHESIS TESTING CONCEPT AND EXAMPLE #shorts #statistics #data #datanalysis #analysis #hypothesis

  5. HYPOTHESIS in 3 minutes for UPSC ,UGC NET and others

  6. Custom Hypothesis Tests in the Completely Randomized Design

COMMENTS

  1. 5 Hypothesis Testing

    Hypothesis testing is a basic tool in contemporary political science studies, especially in quantitative political science. In the following chapters, we will introduce specific methods that explore the relations between different variables in our society. Hypothesis testing is the basic idea behind most of these methods. ... Political Analysis ...

  2. Formulating/Extracting Hypotheses

    A good hypothesis should be both correlative and directional and most hypotheses in political science research will also be causal, asserting the impact of an independent variable on a dependent variable. There are a number of additional considerations that must be taken into account in order to make a hypothesis as strong as possible:

  3. 8.3: Introduction to Statistical Inference and Hypothesis Testing

    Conduct a hypothesis testing (differences of means test) Differentiate between Type-I and Type-II errors. Statistical inference is defined as the process of analyzing data generated by a sample, but then used to determine some characteristic of the larger population. Remember, surveys analyses are the bread and butter of quantitative political ...

  4. 3.3: Applying the Scientific Method to Political Phenomena

    The first article we will map is titled "Do Inheritance Customs Affect Political and Social Inequality?" (Hager and Hilbig 2019) by Anselm Hager and Hanno Hilbig in the American Journal of Political Science. Remember that our third model of the scientific method includes six stages: Observation, Theory, Hypothesis, Data, Analysis, and Update.

  5. PSCI 3300: Introduction to Political Research

    Hypothesis in Political Science "A generalization predicting that a relationship exists between variables. Many generalizations about politics are a sort of folklore. ... Exploratory analysis provides insight into the underlying structure of the data. The existence of missing cases, outliers, data entry errors, unexpected or interesting ...

  6. PDF Introduction to the Theory and Practice of Politics Political Analysis

    3 Grant, Ruth, John Locke's Liberalism: A Study of Political Thought in its Intellectual Setting, (1987) * Hirschmann, Nancy J., and Kirstie M. McClure, eds. Feminist Interpretations of John Locke (2010). * Simmons, A. J, The Lockean Theory of Rights, (1992) * Tully, James, A Discourse on Property, John Locke and his adversaries, (1980) Tully, James, An Approach to Political Philosophy: Locke ...

  7. Counterfactuals and Hypothesis Testing in Political Science

    Abstract. Scholars in comparative politics and international relations routinely evaluate causal hypotheses by referring to counterfactual cases where a hypothesized causal factor is supposed to have been absent. The methodological status and the viability of this very common procedure are unclear and are worth examining.

  8. Multiple Hypothesis Testing in Conjoint Analysis

    1 Introduction. Conjoint analysis has been one of the most widely used survey experimental designs in political science, since Hainmueller, Hopkins, and Yamamoto (Reference Hainmueller, Hopkins and Yamamoto 2014) defined the average marginal component effect (AMCE) as an estimand in conjoint designs and developed a simple estimator.In a typical conjoint experiment, respondents are asked to ...

  9. Counterfactuals and Hypothesis Testing in Political Science

    the easier it will be to draw and support causal inferences, and the more defensible they will be.72. CONCLUSION. Counterfactuals and the counterfactual strategy of hypothesis testing play an important but often unacknowledged and underdeveloped role in the efforts of political scientists to assess causal hypotheses.

  10. Research and working hypotheses

    This chapter constructs hypotheses that underlie all of the processes involved in policy analysis, irrespective of whether its purpose is explana ... (for example, by the target groups) (hypothesis II) (b) Working hypotheses on the perception of stakes and ... of the political-administrative authorities) (hypothesis III) (c) Working hypotheses ...

  11. Context and Causal Mechanisms in Political Analysis

    Political scientists largely agree that causal mechanisms are crucial to understanding causation. Recent advances in qualitative and quantitative methodology suggest that causal explanations must be contextually bounded. Yet the relationship between context and mechanisms and this relationship's importance for causation are not well understood.

  12. 1. Introduction: The Nature of Politics and Political Analysis

    This chapter discusses the nature of politics and political analysis. It first defines the nature of politics and explains what constitutes 'the political' before asking whether politics is an inevitable feature of all human societies. It then considers the boundary problems inherent in analysing the political and whether politics should be ...

  13. 2.1: The Scientific Method and Comparative Politics

    Testing: A political scientist, at this stage, will test the hypothesis, or hypotheses, through observation of the relationship between the designated variables. Analysis: When the testing is complete, political scientists will need to review their results and draw conclusions about the findings.

  14. PDF Modern Political Analysis

    The Essentials of Political Analysis, 3rd Edition & An SPSS Companion to Political Analysis, 3rd Edition + SPSS Student Version Software package. October 2008, Philip Pollock III ISBN 978-1-60426-502-6. (Note, the above text can be purchased as a package or separately, but it is typically less expensive if purchased as a package, with the ...

  15. Chapter 9 Hypothesis Testing

    For a two-tailed test, you need to cut the alpha area in half: #Calculate t-score for .025 at one tail, wit df=99 qt(.025, 99) [1] -1.984. Here, R reports a critical value of ±1.984 ± 1.984 for a two-tailed test from a sample with df=99. Again, this is slightly larger than the critical value for a z-score (1.96).

  16. Guide for Writing in Political Science

    Political Science. Political science writing asks analyze various kinds of political problems, questions, and puzzles, and to advance informed, well-researched, and substantive arguments. topics. They do not all employ the same kinds of evidence. "Data" means different things in the different subfields of political science, and your essays ...

  17. Hypothesis Tests under Separation

    The literature offers three common methods to assess the null hypothesis—the "holy trinity" of hypothesis tests: the Wald test, the likelihood ratio test, and the score test (also known as the Lagrange multiplier test). For practical reasons, most regression tables in political science report Wald p -values.

  18. 1. Introduction: The Nature of Politics and Political Analysis

    This chapter discusses the nature of politics and political analysis. It first defines the nature of politics and explains what constitutes 'the political' before asking whether politics is an inevitable feature of all human societies. It then considers the boundary problems inherent in analysing the political and whether politics should be defined in narrow terms, in the context of the ...

  19. How to Write a Strong Hypothesis

    4. Refine your hypothesis. You need to make sure your hypothesis is specific and testable. There are various ways of phrasing a hypothesis, but all the terms you use should have clear definitions, and the hypothesis should contain: The relevant variables; The specific group being studied; The predicted outcome of the experiment or analysis; 5.

  20. The Relevance of Political Awareness: A Literature Review with Meta

    Table 3 The relevance of political awareness. Full size table. The meta-analysis shows that an overwhelming majority of the scholars of political awareness have reported positive and statistically significant results, both at the study level and test level. Consequently, a rather conclusive picture.

  21. 1.4: Chapter 4- Political Science as a Social Science

    Often political scientists will refute the null hypothesis to make sure there is something interesting going on before they undertake more sophisticated analysis. On the question of money and incumbent reelection rates, for example, the null hypothesis would be that there is no relationship between campaign budgets and chances of succeeding in ...

  22. Party Politics Reassessing the gap-hypothesis: Tough The Author(s) 2019

    The "gap-hypothesis" expects political parties to deliver "tough talk" and "weak action" on the issue of migration. This article tests this idea empirically by asking whether political parties keep their electoral promises on migration policy. The analysis of governments across 18 West European countries between 1980 and 2014 makes ...