
Survey Research: Definition, Examples and Methods


Survey research is a quantitative research method used to collect data from a set of respondents. It has long been one of the most widely used methodologies in the industry because of the benefits it offers for collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey research is defined as the process of conducting research using surveys that researchers send to survey respondents. The collected data is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what its customers think about its products or services so it can make better business decisions. Researchers can conduct research in multiple ways, but surveys have proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals; it consists of structured survey questions that motivate participants to respond. Credible survey research can give businesses access to a vast bank of information. Media organizations, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals and the collection and analysis of data. It's useful for researchers who aim to communicate new features or trends to their respondents.

Generally, survey research is the first step toward obtaining quick information about mainstream topics; more rigorous and detailed quantitative methods such as polls, or qualitative methods such as focus groups and on-call interviews, can follow. In many situations, researchers conduct research using a blend of both qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be classified based on two critical factors: the research tool used and the time involved in conducting the research. There are three main survey research methods, divided by the medium used to conduct them:

  • Online/Email:  Online survey research is one of the most popular survey research methods today. The cost per respondent is minimal, and responses can be gathered quickly and accurately.
  • Phone:  Survey research conducted over the telephone (CATI survey) can be useful for collecting data from a more extensive section of the target population. However, phone surveys tend to cost more than other mediums, and they often take more time.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting surveys over a continuum of time, spread across years or even decades. The data collected with this method from one time period to another can be qualitative or quantitative. Respondent behavior, preferences, and attitudes are observed continuously over time to analyze the reasons behind changes. For example, if a researcher intends to learn about the eating habits of teenagers, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, a longitudinal study follows up on an initial cross-sectional survey.
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular point in time. This method is used across sectors such as retail, education, healthcare, and SME businesses. Cross-sectional studies can be either descriptive or analytical. The method is quick and helps researchers collect information in a brief period, so they rely on it in situations where a descriptive analysis of a subject is required.

Survey research is also classified according to the sampling method used to form samples: probability and non-probability sampling. Ideally, every individual in a population should have an equal chance of being part of the survey research sample. Probability sampling is a sampling method in which the researcher chooses elements based on probability theory; there are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, and stratified random sampling. Non-probability sampling is a sampling method in which the researcher uses his/her own knowledge and experience to form samples.


The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
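The difference between probability and non-probability sampling can be sketched in a few lines of Python. This is a minimal illustration with a made-up respondent list, not a production sampling procedure:

```python
import random

def simple_random_sample(population, n, seed=None):
    """Probability sampling: every member has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def systematic_sample(population, n):
    """Probability sampling: select every k-th member from a fixed start."""
    k = len(population) // n
    return [population[i] for i in range(0, k * n, k)]

respondents = [f"respondent_{i}" for i in range(100)]
print(simple_random_sample(respondents, 5, seed=42))
print(systematic_sample(respondents, 5))  # every 20th respondent
```

Convenience sampling, by contrast, would be something like taking the first `n` people who happen to respond, which is exactly why its results may not generalize to the population.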

Process of implementing survey research methods:

  • Decide survey questions:  Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. In many surveys, the details of responses matter less than learning which of the provided options customers prefer; in such situations, a researcher can use multiple-choice or closed-ended questions. If researchers need details about specific issues, they can include open-ended questions in the questionnaire. Ideally, a survey should strike a smart balance of open-ended and closed-ended questions. Use question types like the Likert Scale, Semantic Scale, and Net Promoter Score question to avoid fence-sitting.


  • Finalize a target audience:  Send out relevant surveys to the target audience, and filter out irrelevant questions as per the requirement. Survey research is most instrumental when the sample is drawn from a well-defined target population; that way, results reflect the desired market and can be generalized to the entire population.


  • Send out surveys via decided mediums:  Distribute the surveys to the target audience and patiently wait for feedback and comments; this is the most crucial step of the survey research. Schedule the survey keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results:  Analyze the feedback in real-time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP, TURF Analysis , Conjoint analysis, Cross tabulation, and many such survey feedback analysis methods can be used to spot and shed light on respondent behavior. Researchers can use the results to implement corrective measures to improve customer/employee satisfaction.
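Cross tabulation, one of the analysis methods listed above, simply counts responses for each combination of two variables. A minimal Python sketch with hypothetical response data:

```python
from collections import Counter

# Hypothetical responses: (age_group, satisfaction) pairs
responses = [
    ("18-24", "satisfied"), ("18-24", "unsatisfied"),
    ("25-34", "satisfied"), ("25-34", "satisfied"),
    ("25-34", "unsatisfied"), ("18-24", "satisfied"),
]

def cross_tabulate(pairs):
    """Count responses for each (row, column) combination."""
    return Counter(pairs)

table = cross_tabulate(responses)
print(table[("25-34", "satisfied")])  # 2
```

A real cross tab would lay these counts out as a grid of age groups against satisfaction levels; the counting logic is the same.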

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers to specific, essential questions. You can ask these questions in multiple survey formats depending on the target audience and the intent of the survey. Before designing a study, every organization must figure out its objective so that the study can be structured, planned, and executed well.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries:  If you've carefully curated a survey, respondents will provide insights into what they like about your organization as well as suggestions for improvement. To motivate them to respond, be very clear about how securely their responses will be stored and how you will use their answers. This encourages them to be completely honest in their feedback, opinions, and comments. Online and mobile surveys have a strong track record on privacy, so more and more respondents feel free to give candid feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics like product quality or quality of customer service etc., can be put on the table for discussion. A way you can do it is by including open-ended questions where the respondents can write their thoughts. This will make it easy for you to correlate your survey to what you intend to do with your product or service.
  • Strategy for never-ending improvements:  An organization can establish the target audience's attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product or service. Once the company makes those improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. This way, the organization can track what was effectively improved and what still needs work.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale, which has variables that are labeled in order and have a calculated difference between variables. In addition to what interval scale orders, this scale has a fixed starting point, i.e., the actual zero value is present.
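The practical consequence of these four scales is which summary statistics are valid for a variable. The sketch below (the example values are illustrative) picks a common appropriate statistic for each level of measurement: the mode for nominal data, the median for ordinal data, and the mean for interval or ratio data.

```python
import statistics

def summarize(values, scale):
    """Pick a summary statistic appropriate to the measurement scale."""
    if scale == "nominal":              # labels only: report the mode
        return statistics.mode(values)
    if scale == "ordinal":              # ranked labels: report the median
        return statistics.median(values)
    if scale in ("interval", "ratio"):  # meaningful differences: mean is valid
        return statistics.mean(values)
    raise ValueError(f"unknown scale: {scale}")

print(summarize(["red", "blue", "red"], "nominal"))  # red
print(summarize([1, 2, 2, 5], "ordinal"))            # 2.0
print(summarize([10.0, 20.0, 30.0], "interval"))     # 20.0
```

Computing a mean of nominal codes (e.g. averaging "1 = red, 2 = blue") is a classic analysis mistake that this distinction guards against.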

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers benefit from useful, trustworthy data that they can use to improve the organization's ROI.

Other benefits of survey research are:

  • Minimum investment:  Mobile and online surveys require minimal financial investment per respondent. Even with gifts and other incentives for participants, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection:  You can conduct surveys via various mediums, such as online and mobile surveys. These can be further classified into qualitative mediums, like focus groups and interviews, and quantitative mediums, like customer-centric surveys. With an offline response collection option, researchers can also conduct surveys in remote areas with limited internet connectivity, making data collection more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure as the respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking to receive explicit responses for its survey research must mention that it will be confidential.

Survey research design

Researchers implement a survey research design when costs are limited and details need to be gathered easily. This method is often used by small and large organizations alike to understand and analyze new trends, market demands, and opinions. Collecting information through a tactfully designed survey can be much more effective and productive than a casually conducted one.

There are five stages of survey research design:

  • Decide the aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide on a purpose for the research. This is the primary stage of survey research, as it shapes the entire path of the survey and impacts its results.
  • Filter the sample from the target population:  Who to target? is an essential question a researcher must answer and keep in mind while conducting research. The precision of the results depends on who the members of the sample are and how useful their opinions are. The quality of respondents in a sample matters more for the results than the quantity. For example, if a researcher wants to understand whether a product feature will work well in their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides on which questions to include in a study, they can send it across to the selected sample . Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART Goals. What is it that you want to achieve with the survey? How will you measure it, and what results are you expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions helps you get the answers you are looking for and eases the task of analysis. So, always choose specific questions that are relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the best, most relevant, 15-20 questions. Frame each question as a different question type based on the kind of answer you would like to gather from each. Create a survey using different types of questions such as multiple-choice, rating scale, open-ended, etc. Look at more survey examples and four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey:  Once your survey is ready, it is time to share it with the right audience. You can distribute handouts or share the survey via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report:  Now is the time to share your analysis. At this stage, present all the responses gathered from the survey in a fixed format, and make sure the reader gets clarity about the goal you were trying to achieve with the study. Address questions such as: has the product or service been used/preferred or not? Do respondents prefer one product over another? Are there any recommendations?
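The segregation step above, splitting responses by demographics, psychographics, or behavior before analysis, can be sketched in Python. The records and field names here are hypothetical:

```python
from collections import defaultdict

# Hypothetical survey records: each response tagged with a demographic segment
responses = [
    {"segment": "18-24", "score": 8},
    {"segment": "25-34", "score": 6},
    {"segment": "18-24", "score": 9},
]

def segregate(records, key):
    """Group response scores by a demographic field before analysis."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["score"])
    return dict(groups)

print(segregate(responses, "segment"))  # {'18-24': [8, 9], '25-34': [6]}
```

Once responses are grouped this way, per-segment averages or distributions can be compared to see where your responses are coming from.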

Having a tool that helps you carry out all the necessary steps to carry out this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world to carry out data collection in a simple and effective way, in addition to offering a wide range of solutions to take advantage of this data in the best possible way.

From dashboards and advanced analysis tools to automation and dedicated functions, QuestionPro has everything you need to execute your research projects effectively. Uncover insights that matter the most!


What is survey research?

Find out everything you need to know about survey research, from what it is and how it works to the different methods and tools you can use to ensure you're successful.

Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall .

As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions. But survey research needs careful planning and execution to get the results you want.

So if you’re thinking about using surveys to carry out research, read on.


Types of survey research

Calling these methods ‘survey research’ slightly underplays the complexity of this type of information gathering. From the expertise required to carry out each activity to the analysis of the data and its eventual application, a considerable amount of effort is required.

As for how you can carry out your research, there are several options to choose from — face-to-face interviews, telephone surveys, focus groups (though more interviews than surveys), online surveys , and panel surveys.

Typically, the survey method you choose will largely be guided by who you want to survey, the size of your sample , your budget, and the type of information you’re hoping to gather.

Here are a few of the most-used survey types:

Face-to-face interviews

Before technology made it possible to conduct research using online surveys, telephone and mail were the most popular methods for survey research. However, face-to-face interviews were considered the gold standard; the only reason they weren't as popular was their highly prohibitive cost.

When it came to face-to-face interviews, organizations would use highly trained researchers who knew when to probe or follow up on vague or problematic answers. They also knew when to offer assistance to respondents when they seemed to be struggling. The result was that these interviewers could get sample members to participate and engage in surveys in the most effective way possible, leading to higher response rates and better quality data.

Telephone surveys

While phone surveys have been popular in the past, particularly for measuring general consumer behavior or beliefs, response rates have been declining since the 1990s.

Phone surveys are usually conducted using a random dialing system and software that a researcher can use to record responses.

This method is beneficial when you want to survey a large population but don’t have the resources to conduct face-to-face research surveys or run focus groups, or want to ask multiple-choice and open-ended questions .

The downsides: phone surveys can take a long time to complete depending on the response rate, and you may have to do a lot of cold-calling to get the information you need.

You also run the risk of respondents not being completely honest; instead, they'll answer your survey questions quickly just to get off the phone.

Focus groups (interviews — not surveys)

Focus groups are a separate qualitative methodology rather than surveys — even though they’re often bunched together. They’re normally used for survey pretesting and designing , but they’re also a great way to generate opinions and data from a diverse range of people.

Focus groups involve putting a cohort of demographically or socially diverse people in a room with a moderator and engaging them in a discussion on a particular topic, such as your product, brand, or service.

They remain a highly popular method for market research , but they’re expensive and require a lot of administration to conduct and analyze the data properly.

You also run the risk of more dominant members of the group taking over the discussion and swaying the opinions of other people — potentially providing you with unreliable data.

Online surveys

Online surveys have become one of the most popular survey methods due to being cost-effective, enabling researchers to accurately survey a large population quickly.

Online surveys can essentially be used by anyone for any research purpose – we’ve all seen the increasing popularity of polls on social media (although these are not scientific).

Using an online survey allows you to ask a series of different question types and collect data instantly that’s easy to analyze with the right software.

There are also several methods for running and distributing online surveys that allow you to get your questionnaire in front of a large population at a fraction of the cost of face-to-face interviews or focus groups.

This is particularly true when it comes to mobile surveys as most people with a smartphone can access them online.

However, you have to be aware of the potential dangers of using online surveys, particularly when it comes to the survey respondents. The biggest risk is because online surveys require access to a computer or mobile device to complete, they could exclude elderly members of the population who don’t have access to the technology — or don’t know how to use it.

It could also exclude those from poorer socio-economic backgrounds who can’t afford a computer or consistent internet access. This could mean the data collected is more biased towards a certain group and can lead to less accurate data when you’re looking for a representative population sample.


Panel surveys

A panel survey involves recruiting respondents who have specifically signed up to answer questionnaires and who are put on a list by a research company. This could be a workforce of a small company or a major subset of a national population. Usually, these groups are carefully selected so that they represent a sample of your target population — giving you balance across criteria such as age, gender, background, and so on.

Panel surveys give you access to the respondents you need and are usually provided by the research company in question. As a result, it’s much easier to get access to the right audiences as you just need to tell the research company your criteria. They’ll then determine the right panels to use to answer your questionnaire.

However, there are downsides. The main one is that if the research company offers its panels incentives (e.g. discounts, coupons, money), respondents may answer a lot of questionnaires just for the benefits.

This might mean they rush through your survey without providing considered and truthful answers. As a consequence, this can damage the credibility of your data and potentially ruin your analyses.

What are the benefits of using survey research?

Depending on the research method you use, there are lots of benefits to conducting survey research for data collection. Here, we cover a few:

1.   They’re relatively easy to do

Most research surveys are easy to set up, administer and analyze. As long as the planning and survey design is thorough and you target the right audience , the data collection is usually straightforward regardless of which survey type you use.

2.   They can be cost effective

Survey research can be relatively cheap depending on the type of survey you use.

Generally, qualitative research methods that require access to people in person or over the phone are more expensive and require more administration.

Online surveys or mobile surveys are often more cost-effective for market research and can give you access to the global population for a fraction of the cost.

3.   You can collect data from a large sample

Again, depending on the type of survey, you can obtain survey results from an entire population at a relatively low price. You can also administer a large variety of survey types to fit the project you’re running.

4.   You can use survey software to analyze results immediately

Using survey software, you can use advanced statistical analysis techniques to gain insights into your responses immediately.

Analysis can be conducted using a variety of parameters to determine the validity and reliability of your survey data at scale.

5.   Surveys can collect any type of data

While most people view surveys as a quantitative research method, they can just as easily be adapted to gain qualitative information by simply including open-ended questions or conducting interviews face to face.

How to measure concepts with survey questions

While surveys are a great way to obtain data, that data on its own is useless unless it can be analyzed and developed into actionable insights.

The easiest, and most effective way to measure survey results, is to use a dedicated research tool that puts all of your survey results into one place.

When it comes to survey measurement, there are four measurement types to be aware of that will determine how you treat your different survey results:

Nominal scale

With a nominal scale, you can only keep track of how many respondents chose each option in a question, and which response generated the most selections.

An example of this would be simply asking a respondent to choose a product or brand from a list.

You could find out which brand was chosen the most but have no insight as to why.
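Counting which option was chosen most, which is all a nominal scale supports, is a one-liner in Python; the brand names below are made up:

```python
from collections import Counter

# Hypothetical nominal responses: each respondent picked one brand from a list
choices = ["BrandA", "BrandB", "BrandA", "BrandC", "BrandA"]
counts = Counter(choices)
print(counts.most_common(1))  # [('BrandA', 3)]
```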

Ordinal scale

Ordinal scales are used to judge an order of preference. They do provide some level of quantitative value because you're asking respondents to choose a preference of one option over another.

Ratio scale

Ratio scales can be used to judge both the order of and the exact difference between responses, and they have a true zero point. An example is asking respondents how much they spend on their weekly shopping on average.

Interval scale

In an interval scale, values are lined up in order with a meaningful, consistent difference between them, but there is no true zero point. Examples include temperature in degrees Celsius and credit scores.
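These four scale types determine which summary statistics are meaningful. As a rough illustration, here is a short Python sketch with made-up data showing the kind of analysis each scale supports:

```python
from collections import Counter
from statistics import mean

# Nominal: only counts and the mode are meaningful
brands = ["Acme", "Zen", "Acme", "Acme", "Zen"]
top_brand, top_count = Counter(brands).most_common(1)[0]

# Ordinal: rank order is meaningful, arithmetic on the codes is not
RANKS = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}
ratings = ["good", "fair", "excellent"]
sorted_ratings = sorted(ratings, key=RANKS.get)

# Interval: differences are meaningful, but there is no true zero
credit_scores = [640, 700, 760]
score_spread = max(credit_scores) - min(credit_scores)

# Ratio: a true zero makes means and ratios meaningful
weekly_spend = [42.50, 61.00, 38.75, 55.25]
average_spend = mean(weekly_spend)
```

A common mistake this distinction guards against is averaging ordinal codes as if they were interval values.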

Step by step: How to conduct surveys and collect data

Conducting a survey and collecting data is relatively straightforward, but it does require some careful planning and design to ensure it results in reliable data.

Step 1 – Define your objectives

What do you want to learn from the survey? How is the data going to help you? Having a hypothesis or series of assumptions about survey responses will allow you to create the right questions to test them.

Step 2 – Create your survey questions

Once you've got your hypotheses or assumptions, write out the questions you need answered to test your theories or beliefs. Be wary of framing questions that could lead respondents or inadvertently create biased responses.

Step 3 – Choose your question types

Your survey should include a variety of question types and should aim to obtain quantitative data with some qualitative responses from open-ended questions. Using a mix of question types (simple yes/no, multiple-choice, rank order, etc.) not only increases the reliability of your data but also reduces survey fatigue and the risk of respondents answering quickly without thinking.


Step 4 – Test your questions

Before sending your questionnaire out, you should test it (e.g. have a random internal group do the survey) and carry out A/B tests to ensure you’ll gain accurate responses.

Step 5 – Choose your target and send out the survey

Depending on your objectives, you might want to target the general population with your survey or a specific segment of the population. Once you’ve narrowed down who you want to target, it’s time to send out the survey.

After you’ve deployed the survey, keep an eye on the response rate to ensure you’re getting the number you expected. If your response rate is low, you might need to send the survey out to a second group to obtain a large enough sample — or do some troubleshooting to work out why your response rates are so low. This could be down to your questions, delivery method, selected sample, or otherwise.
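Monitoring the response rate is simple arithmetic. A quick sketch with hypothetical numbers:

```python
import math

invited = 1_200     # surveys sent out
completed = 168     # completed responses received

response_rate = completed / invited        # 0.14, i.e. 14%
target_rate = 0.20                         # the rate you planned for

# How many more completes you'd need to hit the target
shortfall = max(0, math.ceil(target_rate * invited) - completed)
```

If the shortfall is large, that's the signal to send the survey to a second group or troubleshoot the questionnaire itself.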

Step 6 – Analyze results and draw conclusions

Once you’ve got your results back, it’s time for the fun part.

Break down your survey responses using the parameters you’ve set in your objectives and analyze the data to compare to your original assumptions. At this stage, a research tool or software can make the analysis a lot easier — and that’s somewhere Qualtrics can help.
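For example, breaking satisfaction scores down by an age-segment parameter takes only a few lines; the segments and scores below are made up for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses: (age segment, satisfaction score out of 10)
responses = [("18-24", 8), ("18-24", 7), ("25-34", 5),
             ("25-34", 6), ("35-44", 9)]

# Group scores by segment
by_segment = defaultdict(list)
for segment, score in responses:
    by_segment[segment].append(score)

# Mean satisfaction per segment, to compare against your assumptions
segment_means = {seg: mean(scores) for seg, scores in by_segment.items()}
```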

Get reliable insights with survey software from Qualtrics

Gaining feedback from customers and leads is critical for any business. Data gathered from surveys can prove invaluable for understanding your products and your market position, and with survey software from Qualtrics, it couldn't be easier.

Used by more than 13,000 brands and supporting more than 1 billion surveys a year, Qualtrics empowers everyone in your organization to gather insights and take action. No coding required — and your data is housed in one system.

Get feedback from more than 125 sources on a single platform and view and measure your data in one place to create actionable insights and gain a deeper understanding of your target customers.

Automatically run complex text and statistical analysis to uncover exactly what your survey data is telling you, so you can react in real-time and make smarter decisions.

We can help you with survey management, too. From designing your survey and finding your target respondents to getting your survey in the field and reporting back on the results, we can help you every step of the way.

And for expert market researchers and survey designers, Qualtrics features custom programming to give you total flexibility over question types, survey design, embedded data, and other variables.

No matter what type of survey you want to run, what target audience you want to reach, or what assumptions you want to test or answers you want to uncover, we’ll help you design, deploy and analyze your survey with our team of experts.

Ready to find out more about Qualtrics CoreXM?

Get started with our free survey maker tool today



Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes . Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research .

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.


Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The required sample size depends on the size of the population, as well as the confidence level and margin of error you're willing to accept. You can use an online sample size calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
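Many online calculators implement Cochran's formula for this. A minimal sketch, assuming a 95% confidence level and the most conservative 50% proportion (the function name and defaults here are illustrative):

```python
import math

def cochran_sample_size(confidence_z=1.96, margin_of_error=0.05,
                        proportion=0.5, population=None):
    """Cochran's formula, optionally with the finite population correction."""
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    if population is not None:
        # Finite population correction shrinks the required sample
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# 95% confidence, ±5% margin of error, very large population
needed = cochran_sample_size()                        # 385 responses
# Same settings, but for a known population of 10,000
needed_fpc = cochran_sample_size(population=10_000)   # 370 responses
```

Note that this only sizes the sample; it says nothing about representativeness, which depends on the sampling method.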

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data : the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree)
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you can't make the list of options exhaustive, add an 'other' field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
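A minimal Python sketch of the cleansing and coding steps, using hypothetical responses and an illustrative keyword-to-theme map (real coding schemes are usually developed from the data itself):

```python
# Hypothetical raw responses: None marks an unanswered question
raw = [
    {"id": 1, "q1": "Yes", "open": "Checkout was slow on mobile"},
    {"id": 2, "q1": None,  "open": "Great prices"},   # incomplete response
    {"id": 3, "q1": "No",  "open": "Slow delivery, but great prices"},
]

# Step 1: cleanse the data by dropping responses with missing answers
clean = [r for r in raw if all(v is not None for v in r.values())]

# Step 2: code open-ended answers by assigning theme labels
THEMES = {"slow": "speed", "price": "pricing", "delivery": "fulfilment"}

def code_response(text):
    text = text.lower()
    return sorted({label for keyword, label in THEMES.items() if keyword in text})

coded = {r["id"]: code_response(r["open"]) for r in clean}
```

In practice, keyword matching like this is only a first pass; human review is still needed to catch responses the scheme misses.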

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order but don't have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
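In code, the distinction looks like this: each item response maps to an ordinal code, and it's the composite of four or more items that is sometimes treated as interval data (the scale wording and scores below are illustrative):

```python
# 5-point agreement scale, coded 1-5
SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

# One respondent's answers to four Likert-type items measuring one trait
answers = ["agree", "strongly agree", "neutral", "agree"]

item_scores = [SCALE[a] for a in answers]  # each item alone: ordinal data
composite = sum(item_scores)               # composite score: often treated as interval
```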

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.


McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 15 April 2024, from https://www.scribbr.co.uk/research-methods/surveys/



Questionnaire Design | Methods, Question Types & Examples

Published on July 15, 2021 by Pritha Bhandari . Revised on June 22, 2023.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs. surveys
  • Questionnaire methods
  • Open-ended vs. closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you're interested in, choosing an appropriate sampling method, administering questionnaires, data cleansing and analysis, and interpretation.

Sampling is important in survey research because you'll often aim to generalize your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimizing these will help you avoid several types of research bias, including sampling bias, ascertainment bias, and undercoverage bias.


Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • cost-effective
  • easy to administer for small and large groups
  • anonymous and suitable for sensitive topics

But they may also be:

  • unsuitable for people with limited literacy or verbal skills
  • susceptible to a nonresponse bias (most people invited may not complete the questionnaire)
  • biased towards people who volunteer because impersonal survey requests often go ignored.

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • help you ensure the respondents are representative of your target audience
  • allow clarifications of ambiguous or unclear questions and answers
  • have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • costly and time-consuming to perform
  • more difficult to analyze if you have qualitative responses
  • likely to contain experimenter bias or demand characteristics
  • likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalizable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert scale questions collect ordinal data using rating scales with 5 or 7 points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale. Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.

With interval or ratio scales, you can apply strong statistical hypothesis tests to address your research aims.

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer “multiracial” for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle for productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarizing responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorize answers, and you may also need to involve other researchers in data analysis for high reliability .

Question wording can influence your respondents' answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way (reliable) and measure exactly what you're interested in (valid).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

Use a mix of both positive and negative frames to avoid research bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It's best practice to provide a counterargument within the question as well.

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favor flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barreled questions. Double-barreled questions ask about more than one item at a time, which can confuse respondents.

For example, asking whether the government should be responsible for providing clean drinking water and high-speed internet to everyone combines two ideas in one question. It could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might answer only about the topic they feel passionate about or provide a neutral answer instead, but neither of these options captures their true views.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

  • Strongly agree
  • Agree
  • Undecided
  • Disagree
  • Strongly disagree

You can organize the questions logically, with a clear progression from simple to complex. Alternatively, you can randomize the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioral or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect the responses by priming them in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimize order effects because they can be a source of systematic error or bias in your study.

Randomization

Randomization involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomization, order effects will be minimized in your dataset. But a randomized order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
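One common implementation seeds a random number generator with a respondent identifier, so each respondent sees a different but reproducible order. A sketch with placeholder question labels:

```python
import random

QUESTIONS = ["Q1: age group", "Q2: brand awareness",
             "Q3: purchase intent", "Q4: satisfaction"]

def questionnaire_for(respondent_id):
    """Return the questions in a randomized order.

    Seeding the RNG with the respondent ID keeps one person's order
    stable across sessions while varying it between respondents.
    """
    rng = random.Random(respondent_id)
    order = QUESTIONS.copy()
    rng.shuffle(order)
    return order
```

Storing the presented order alongside each response also lets you check for order effects during analysis.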

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalize your variables of interest into questionnaire items. Operationalizing concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they’ll become disengaged or inattentive to the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivized or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomize questions. Randomizing questions helps you avoid bias, but it can take more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection, and analysis. You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can't test a hypothesis with this type of study because it's usually statistically underpowered.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
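Combining item responses into a single scale score can be illustrated with a short Python sketch. The item names, the 5-point coding, and which items are reverse-coded are assumptions invented for the example:

```python
# Hypothetical 4-item self-esteem scale on a 5-point Likert continuum
# (1 = strongly disagree ... 5 = strongly agree).
reverse_coded = {"worthless", "failure"}  # negatively worded items

def scale_score(responses):
    """responses: dict mapping item name -> rating 1..5; returns the composite score."""
    total = 0
    for item, value in responses.items():
        if item in reverse_coded:
            value = 6 - value  # flip so a higher score always means higher self-esteem
        total += value
    return total

score = scale_score({"confident": 4, "worthless": 2, "capable": 5, "failure": 1})
# 4 + (6-2) + 5 + (6-1) = 18 out of a possible 20
```

Reverse-coding the negatively worded items before summing is what lets the combined score measure a single trait, as described above.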

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire easier and quicker, but it may lead to bias. Randomization can minimize the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.


Bhandari, P. (2023, June 22). Questionnaire Design | Methods, Question Types & Examples. Scribbr. Retrieved April 17, 2024, from https://www.scribbr.com/methodology/questionnaire/


6 Survey Design

Survey research is one of the most ubiquitous forms of research; seemingly, we see survey questions being asked everywhere. However, let’s think specifically about surveys for research purposes, rather than about forms for collecting information for administrative or other purposes (although what we know about surveys can help us make better forms).

Fundamentally, surveys are a structured form of data collection that relies on self-reports. That is, the information comes from the respondent, not from the interviewer, from observation, or from unstructured processes such as ethnography or unstructured or semi-structured open-ended interviews. This has to be kept in mind at all times when working with survey data: the answers always reflect the participant’s perspective. People’s perceptions matter and are important to understand, but understanding their perceptions is not the same thing as understanding what is actually happening.

Of course, sometimes we break this pattern and add open-ended questions, or even ask an interviewer to make some observations as part of the study (for example, what was their perception of the participant’s attitude toward the survey?).

There are many different formats for questions, and we will take a look at a few of them. But let’s dive into the overview first.

Readings from

This book (Fink) is the one we use in Sociology 303 (Advanced Research Methods). It takes you through the entire survey process. For this unit, Chapters 1-3 are key; the section on sampling we will cover in more depth later.

Research Methods for Lehman EdD Copyright © by elinwaring. All Rights Reserved.



12. Survey design

Chapter outline.

  • What is a survey, and when should you use one? (14 minute read)
  • Collecting data using surveys (29 minute read)
  • Writing effective questions and questionnaires (38 minute read)
  • Bias and cultural considerations (22 minute read)

Content warning: examples in this chapter contain references to drug use, racism in politics, COVID-19, undocumented immigration, basic needs insecurity in higher education, school discipline, drunk driving, poverty, child sexual abuse, colonization and Global North/West hegemony, and ethnocentrism in science.

12.1 What is a survey, and when should you use one?

Learning objectives.

Learners will be able to…

  • Distinguish between survey as a research design and questionnaires used to measure concepts
  • Identify the strengths and weaknesses of surveys
  • Evaluate whether survey design fits with their research question

Students in my research methods classes often feel that surveys are self-explanatory. This feeling is understandable. Surveys are part of our everyday lives. Every time you call customer service, purchase a meal, or participate in a program, someone is handing you a survey to complete. Survey results are often discussed in the news, and perhaps you’ve even carried out a survey yourself. What could be so hard? Ask people a few quick questions about your research question and you’re done, right?

Students quickly learn that there is more to constructing a good survey than meets the eye. Survey design takes a great deal of thoughtful planning and often many rounds of revision, but it is worth the effort. As we’ll learn in this section, there are many benefits to choosing survey research as your data collection method, particularly for student projects. We’ll discuss what a survey is, its potential benefits and drawbacks, and what research projects are the best fit for survey design.

Is survey research right for your project?

To answer this question, the first thing we need to do is distinguish between a survey and a questionnaire. They might seem like they are the same thing, and in normal non-research contexts, they are used interchangeably. In this textbook, we define a survey as a research design in which a researcher poses a set of predetermined questions to an entire group, or sample, of individuals. That set of questions is the questionnaire, a research instrument consisting of a set of questions (items) intended to capture responses from participants in a standardized manner. Basically, researchers use questionnaires as part of survey research. Questionnaires are the tool. Surveys are one research design for using that tool.

Let’s contrast how survey research uses questionnaires with the other quantitative design we will discuss in this book—experimental design. Questionnaires in experiments are called pretests and posttests, and they measure how participants change over time as a result of an intervention (e.g., a group therapy session) or a stimulus (e.g., watching a video of a political speech) introduced by the researcher. We will discuss experiments in greater detail in Chapter 13, but if testing an intervention or measuring how people react to something you do sounds like what you want to do with your project, experiments might be the best fit for you.


Surveys, on the other hand, do not measure the impact of an intervention or stimulus introduced by the researcher. Instead, surveys look for patterns that already exist in the world based on how people self-report on a questionnaire. Self-report simply means that the participants in your research study are answering questions about themselves, regardless of whether they are presented on paper, electronically, or read aloud by the researcher. Questionnaires structure self-report data into a standardized format—with everyone receiving the exact same questions and answer choices in the same order [1] —which makes comparing data across participants much easier. Researchers using surveys try to influence their participants as little as possible because they want honest answers.

Questionnaires are completed by individual people, so the unit of observation is almost always individuals, rather than groups or organizations. Generally speaking, individuals provide the most informed data about their own lives and experiences, so surveys often also use individuals as the unit of analysis . Surveys are also helpful in analyzing dyads, families, groups, organizations, and communities, but regardless of the unit of analysis, the unit of observation for surveys is usually individuals. Keep this in mind as you think about sampling for your project.

In some cases, getting the most-informed person to complete your questionnaire may not be feasible . As we discussed in Chapter 2 and Chapter 6 , ethical duties to protect clients and vulnerable community members mean student research projects often study practitioners and other less-vulnerable populations rather than clients and community members. The ethical supervision needed via the IRB to complete projects that pose significant risks to participants takes time and effort, and as a result, student projects often rely on key informants like clinicians, teachers, and administrators who are less likely to be harmed by the survey. Key informants are people who are especially knowledgeable about your topic. If your study is about nursing, you should probably survey nurses. These considerations are more thoroughly addressed in Chapter 10 . Sometimes, participants complete surveys on behalf of people in your target population who are infeasible to survey for some reason. Some examples of key informants include a head of household completing a survey about family finances or an administrator completing a survey about staff morale on behalf of their employees. In this case, the survey respondent is a proxy , providing their best informed guess about the responses other people might have chosen if they were able to complete the survey independently. You are relying on an individual unit of observation (one person filling out a self-report questionnaire) and group or organization unit of analysis (the family or organization the researcher wants to make conclusions about). Proxies are commonly used when the target population is not capable of providing consent or appropriate answers, as in young children and people with disabilities.

Proxies are relying on their best judgment of another person’s experiences, and while that is valuable information, it may introduce bias and error into the research process. Student research projects, due to time and resource constraints, often include sampling people with second-hand knowledge, and this is simply one of many common limitations of their findings. Remember, every project has limitations. Social work researchers look for the most favorable choices in design and methodology, as there are no perfect projects. If you are planning to conduct a survey of people with second-hand knowledge of your topic, consider reworking your research question to be about something they have more direct knowledge about and can answer easily. One common missed opportunity I see is student researchers who want to understand client outcomes (unit of analysis) by surveying practitioners (unit of observation). If a practitioner has a caseload of 30 clients, it’s not really possible to answer a question like “how much progress have your clients made?” on a survey. Would they just average all 30 clients together? Instead, design a survey that asks them about their education, professional experience, and other things they know about first-hand. By making your unit of analysis and unit of observation the same, you can ensure the people completing your survey are able to provide informed answers.

Researchers may introduce measurement error if the person completing the questionnaire does not have adequate knowledge or has a biased opinion about the phenomenon of interest. For instance, many schools of social work market themselves based on the rankings of social work programs published by US News and World Report. Last updated in 2019, the methodology for these rankings is simply to send out a survey to deans, directors, and administrators at schools of social work. No graduation rates, teacher evaluations, licensure pass rates, accreditation data, or other considerations are a part of these rankings. It’s literally a popularity contest in which each school is asked to rate the others on a scale of 1-5, and schools are ranked by highest average score. What if an informant is unfamiliar with a school or has a personal bias against a school? [2] This could significantly skew results. One might also question the validity of such a questionnaire in assessing something as important and economically impactful as the quality of social work education. We might envision how students might demand and create more authentic measures of school quality.

In summary, survey design best fits with research projects that have the following attributes: 

  • Researchers plan to collect their own raw data, rather than secondary analysis of existing data.
  • Researchers have access to the most knowledgeable people (that you can feasibly and ethically sample) to complete the questionnaire.
  • Research question is best answered with quantitative methods.
  • Individuals are the unit of observation, and in many cases, the unit of analysis.
  • Researchers will try to observe things objectively and try not to influence participants to respond differently.
  • Research question asks about indirect observables—things participants can self-report on a questionnaire.
  • There are valid, reliable, and commonly used scales (or other self-report measures) for the variables in the research question.


Strengths of survey methods

Researchers employing survey research as a research design enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people. In a study by Blackstone (2013) [3] on older people’s experiences in the workplace , researchers were able to mail a written questionnaire to around 500 people who lived throughout the state of Maine at a cost of just over $1,000. This cost included printing copies of a seven-page survey, printing a cover letter, addressing and stuffing envelopes, mailing the survey, and buying return postage for the survey. We realize that $1,000 is nothing to sneeze at, but just imagine what it might have cost to visit each of those people individually to interview them in person. You would have to dedicate a few weeks of your life at least, drive around the state, and pay for meals and lodging to interview each person individually. Researchers can double, triple, or even quadruple their costs pretty quickly by opting for an in-person method of data collection over a mailed survey. Thus, surveys are relatively cost-effective.

Related to the benefit of cost-effectiveness is a survey’s potential for generalizability. Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 10 . When used with probability sampling approaches, survey research is the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group. Unfortunately, student projects are quite often not able to take advantage of the generalizability of surveys because they use availability sampling rather than the more costly and time-intensive random sampling approaches that are more likely to elicit a representative sample. While the conclusions drawn from availability samples have far less generalizability, surveys are still a great choice for student projects and they provide data that can be followed up on by well-funded researchers to generate generalizable research.

Survey research is particularly adept at investigating indirect observables . Indirect observables are things we have to ask someone to self-report because we cannot observe them directly, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviors (e.g., smoking or drinking), or factual information (e.g., income). Unlike qualitative studies in which these beliefs and attitudes would be detailed in unstructured conversations, surveys seek to systematize answers so researchers can make apples-to-apples comparisons across participants. Surveys are so flexible because you can ask about anything, and the variety of questions allows you to expand social science knowledge beyond what is naturally observable.

Survey research also tends to be a reliable method of inquiry. This is because surveys are standardized: the same questions, phrased in exactly the same way, are posed to all participants. Other methods, such as qualitative interviewing, which we’ll learn about in Chapter 18, do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question’s reliability. Assuming well-constructed questions and survey design, one strength of this methodology is its potential to produce reliable results.

The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. They can measure anything that people can self-report. Surveys are also appropriate for exploratory, descriptive, and explanatory research questions (though exploratory projects may benefit more from qualitative methods). Moreover, they can be delivered in a number of flexible ways, including via email, mail, text, and phone. We will describe the many ways to implement a survey later on in this chapter. 

In sum, the following are benefits of survey research:

  • Cost-effectiveness
  • Generalizability
  • Reliability
  • Versatility


Weaknesses of survey methods

As with all methods of data collection, survey research also comes with a few drawbacks. First, while one might argue that surveys are flexible in the sense that you can ask any kind of question about any topic you want, once the survey is given to the first participant, there is nothing you can do to change the survey without biasing your results. Because surveys want to minimize the amount of influence that a researcher has on the participants, everyone gets the same questionnaire. Let’s say you mail a questionnaire out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it’s too late for a do-over or to change the question for the respondents who haven’t yet returned their questionnaires. When conducting qualitative interviews or focus groups, on the other hand, a researcher can provide respondents further explanation if they’re confused by a question and can tweak their questions as they learn more about how respondents seem to understand them. Survey researchers often ask colleagues, students, and others to pilot test their questionnaire and catch any errors prior to sending it to participants; however, once researchers distribute the survey to participants, there is little they can do to change anything.

Depth can also be a problem with surveys. Survey questions are standardized; thus, it can be difficult to ask anything other than very general questions that a broad range of people will understand. Because of this, survey results may not provide as detailed of an understanding as results obtained using methods of data collection that allow a researcher to more comprehensively examine whatever topic is being studied. Let’s say, for example, that you want to learn something about voters’ willingness to elect an African American president. General Social Survey respondents were asked, “If your party nominated an African American for president, would you vote for him if he were qualified for the job?” (Smith, 2009). [4] Respondents were then asked to respond either yes or no to the question. But what if someone’s opinion was more complex than could be answered with a simple yes or no? What if, for example, a person was willing to vote for an African American man, but only if that candidate was conservative, moderate, anti-abortion, antiwar, and so on? Then we would miss out on that additional detail when the participant responded “yes” to our question. Of course, you could add a question to your survey about moderate vs. radical candidates, but could you do that for all of the relevant attributes of candidates for all people? Moreover, how do you know that moderate or antiwar means the same thing to everyone who participates in your survey? Without having a conversation with someone and asking them follow-up questions, survey research can lack enough detail to understand how people truly think.

In sum, potential drawbacks to survey research include the following:

  • Inflexibility
  • Lack of depth
  • Problems specific to cross-sectional surveys, which we will address in the next section.

Secondary analysis of survey data

This chapter is designed to help you conduct your own survey, but that is not the only option for social work researchers. Look back to Chapter 2 and recall our discussion of secondary data analysis. As we talked about previously, using data collected by another researcher can have a number of benefits. Well-funded researchers have the resources to recruit a large representative sample and ensure their measures are valid and reliable prior to sending them to participants. Before you get too far into designing your own data collection, make sure there are no existing data sets out there that you can use to answer your question. We refer you to Chapter 2 for a full discussion of the strengths and challenges of using secondary analysis of survey data.

Key Takeaways

  • Strengths of survey research include its cost effectiveness, generalizability, variety, reliability, and versatility.
  • Weaknesses of survey research include inflexibility and lack of potential depth. There are also weaknesses specific to cross-sectional surveys, the most common type of survey.

If you are using quantitative methods in a student project, it is very likely that you are going to use survey design to collect your data.

  • Check to make sure that your research question and study fit best with survey design using the criteria in this section
  • Remind yourself of any limitations to generalizability based on your sampling frame.
  • Refresh your memory on the operational definitions you will use for your dependent and independent variables.

12.2 Collecting data using surveys

  • Distinguish between cross-sectional and longitudinal surveys
  • Identify the strengths and limitations of each approach to collecting survey data, including the timing of data collection and how the questionnaire is delivered to participants

As we discussed in the previous chapter, surveys are versatile and can be shaped and suited to most topics of inquiry. While that makes surveys a great research tool, it also means there are many options to consider when designing your survey. The two main considerations for designing surveys are how many times researchers will collect data from participants and how researchers contact participants and record responses to the questionnaire.


Cross-sectional surveys: A snapshot in time

Think back to the last survey you took. Did you respond to the questionnaire once or did you respond to it multiple times over a long period? Cross-sectional surveys are administered only one time. Chances are the last survey you took was a cross-sectional survey—a one-shot measure of a sample using a questionnaire. And chances are if you are conducting a survey to collect data for your project, it will be cross-sectional simply because it is more feasible to collect data once than multiple times.

Let’s take a very recent example, the COVID-19 pandemic. Enriquez and colleagues (2021) [5] wanted to understand the impact of the pandemic on undocumented college students’ academic performance, attention to academics, financial stability, mental and physical health, and other factors. In cooperation with offices of undocumented student support at eighteen campuses in California, the researchers emailed undocumented students a few times from March through June of 2020 and asked them to participate in their survey via an online questionnaire. Their survey presents a compelling look at how COVID-19 worsened existing economic inequities in this population.

Strengths and weaknesses of cross-sectional surveys

Cross-sectional surveys are great. They take advantage of many of the strengths of survey design. They are easy to administer since you only need to measure your participants once, which makes them highly suitable for student projects. Keeping track of participants for multiple measures takes time and energy, two resources always under constraint in student projects. Conducting a cross-sectional survey simply requires collecting a sample of people and getting them to fill out your questionnaire—nothing more.

That convenience comes with a tradeoff. When you only measure people at one point in time, you can miss a lot. The events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain the same over time. Because nomothetic causal explanations seek a general, universal truth, surveys conducted a decade ago do not represent what people think and feel today or twenty years ago. In student research projects, this weakness is often compounded by the use of availability sampling, which further limits the generalizability of the results in student research projects to other places and times beyond the sample collected by the researcher. Imagine generalizing results on the use of telehealth in social work prior to the COVID-19 pandemic or managers’ willingness to allow employees to telecommute. Both as a result of shocks to the system—like COVID-19—and the linear progression of cultural, economic and social change—like human rights movements—cross-sectional surveys can never truly give us a timeless causal explanation. In our example about undocumented students during COVID-19, you can say something about the way things were in the moment that you administered your survey, but it is difficult to know whether things remained that way for long after you administered your survey or describe patterns that go back far in time.

Of course, just as society changes over time, so do people. Because cross-sectional surveys only measure people at one point in time, they have difficulty establishing cause-and-effect relationships for individuals because they cannot clearly establish whether the cause came before the effect. If your research question were about how school discipline (our independent variable) impacts substance use (our dependent variable), you would want to make sure that any changes in our dependent variable, substance use, came after changes in school discipline. That is, if your hypothesis says school discipline causes increases in substance use, you must establish that school discipline came first and increases in substance use came afterward. However, it is perhaps just as likely that increased substance use might cause increases in school discipline. If you sent a cross-sectional survey to students asking them about their substance use and disciplinary record, you would get back something like “tried drugs or alcohol 6 times” and “has been suspended 5 times.” You could see whether similar patterns existed in other students, but you wouldn’t be able to tell which was the cause or the effect.

Because of these limitations, cross-sectional surveys are limited in how well they can establish whether a nomothetic causal relationship is true or not. Surveys are still a key part of establishing causality. But they need additional help and support to make causal arguments. That might come from combining data across surveys in meta-analyses and systematic reviews, integrating survey findings with theories that explain causal relationships among variables in the study, as well as corroboration from research using other designs, theories, and paradigms. Scientists can establish causal explanations, in part, based on survey research. However, in keeping with the assumptions of postpositivism, the picture of reality that emerges from survey research is only our best approximation of what is objectively true about human beings in the social world. Science requires a multi-disciplinary conversation among scholars to continually improve our understanding.


Longitudinal surveys: Measuring change over time

One way to overcome this sometimes-problematic aspect of cross-sectional surveys is to administer a longitudinal survey . Longitudinal surveys enable a researcher to make observations over some extended period of time. There are several types of longitudinal surveys, including trend, panel, and cohort surveys. We’ll discuss all three types here, along with retrospective surveys, which fall somewhere in between cross-sectional and longitudinal surveys.

The first type of longitudinal survey is called a trend survey . The main focus of a trend survey is, perhaps not surprisingly, trends. Researchers conducting trend surveys are interested in how people in a specific group change over time. Each time researchers gather data, they survey different people from the identified group because they are interested in the trends of the whole group, rather than changes in specific individuals. Let’s look at an example.

The Monitoring the Future Study is a trend study that described the substance use of high school children in the United States. It’s conducted annually by the National Institute on Drug Abuse (NIDA). Each year, the NIDA distributes surveys to children in high schools around the country to understand how substance use and abuse in that population changes over time. Perhaps surprisingly, fewer high school children reported using alcohol in the past month than at any point over the last 20 years—a fact that often surprises people because it cuts against the stereotype of adolescents engaging in ever-riskier behaviors. Nevertheless, recent data also reflected an increased use of e-cigarettes and the popularity of e-cigarettes with no nicotine over those with nicotine. By tracking these data points over time, we can better target substance abuse prevention programs towards the current issues facing the high school population.

Unlike trend surveys, panel surveys require that the same people participate in the survey each time it is administered. As you might imagine, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year, for 5 years in a row. Keeping track of where respondents live, when they move, and when they change phone numbers takes resources that researchers often don’t have. However, when the researchers do have the resources to carry out a panel survey, the results can be quite powerful. The Youth Development Study (YDS), administered from the University of Minnesota, offers an excellent example of a panel study.

Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and they are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS. One of the major lessons learned from this panel study is that work has a largely positive impact on young people (Mortimer, 2003). [6] Contrary to popular beliefs about the impact of work on adolescents’ school performance and transition to adulthood, work increases confidence, enhances academic success, and prepares students for success in their future careers. Without this panel study, we may not be aware of the positive impact that working can have on young people.

Another type of longitudinal survey is a cohort survey. In a cohort survey, the participants have a defining characteristic that the researcher is interested in studying. The same people don't necessarily participate from year to year, but all participants must meet whatever categorical criteria fulfill the researcher's primary interest. Common cohorts that researchers study include people of particular generations or people born around the same time period, graduating classes, people who began work in a given industry at the same time, or perhaps people who have some specific historical experience in common. An example of this sort of research can be seen in Lindert and colleagues' (2020) [7] work on healthy aging in men. Their article is a secondary analysis of longitudinal data collected as part of the Veterans Affairs Normative Aging Study conducted in 1985, 1988, and 1991.

Strengths and weaknesses of longitudinal surveys

All three types of longitudinal surveys share the strength that they permit a researcher to make observations over time. Whether a major world event takes place or participants mature, researchers can effectively capture the subsequent potential changes in the phenomenon or behavior of interest. This is the key strength of longitudinal surveys—their ability to establish temporality needed for nomothetic causal explanations. Whether your project investigates changes in society, communities, or individuals, longitudinal designs improve on cross-sectional designs by providing data at multiple points in time that better establish causality.

Of course, all of that extra data comes at a high cost. If a panel survey takes place over ten years, the research team must keep track of every individual in the study for those ten years, ensuring they have current contact information for their sample the whole time. Consider this study which followed people convicted of driving under the influence of drugs or alcohol (Kleschinsky et al., 2009). [8] It took an average of 8.6 contacts for participants to complete follow-up surveys, and while this was a difficult-to-reach population, researchers engaging in longitudinal research must prepare for considerable time and expense in tracking participants. Keeping in touch with a participant for a prolonged period of time likely requires building participant motivation to stay in the study, maintaining contact at regular intervals, and providing monetary compensation. Panel studies are not the only costly longitudinal design. Trend studies need to recruit a new sample every time they collect a new wave of data at additional cost and time.

In my years as a research methods instructor, I have never seen a longitudinal survey design used in a student research project because students do not have enough time to complete them. Cross-sectional surveys are simply the most convenient and feasible option. Nevertheless, social work researchers with more time to complete their studies use longitudinal surveys to understand causal relationships that they cannot manipulate themselves. A researcher could not ethically experiment on participants by assigning a jail sentence or relapse, but longitudinal surveys allow us to systematically investigate such sensitive phenomena ethically. Indeed, because longitudinal surveys observe people in everyday life, outside of the artificial environment of the laboratory (as in experiments), the generalizability of longitudinal survey results to real-world situations may make them superior to experiments, in some cases.

Table 12.1 summarizes these three types of longitudinal surveys.

Retrospective surveys: Good, but not the best of both worlds

Retrospective surveys try to strike a middle ground between cross-sectional and longitudinal surveys. They are similar to other longitudinal studies in that they deal with changes over time, but like a cross-sectional study, data are collected only once. In a retrospective survey, participants are asked to report events from the past. By having respondents report past behaviors, beliefs, or experiences, researchers are able to gather longitudinal-like data without actually incurring the time or expense of a longitudinal survey. Of course, this benefit must be weighed against the possibility that people's recollections of their pasts may be faulty. Imagine that you are participating in a survey that asks you to respond to questions about your feelings on Valentine's Day. As last Valentine's Day can't be more than 12 months ago, there is a good chance that you are able to provide a pretty accurate response of how you felt. Now let's imagine that the researcher wants to know how last Valentine's Day compares to previous Valentine's Days, so the survey asks you to report on the preceding six Valentine's Days. How likely is it that you will remember how you felt at each one? Will your responses be as accurate as they might have been if your data were collected via survey once a year rather than reported retrospectively today? The main limitation of retrospective surveys is that they are not as reliable as cross-sectional or longitudinal surveys. That said, retrospective surveys are a feasible way to collect longitudinal data when the researcher only has access to the population once, and for this reason, they may be worth the drawback of greater risk of bias and error in the measurement process.

Because quantitative research seeks to build nomothetic causal explanations, it is important to determine the order in which things happen. When using survey design to investigate causal relationships between variables in a research question, longitudinal surveys are certainly preferable because they can track changes over time and therefore provide stronger evidence for cause-and-effect relationships. As we discussed, the time and cost required to administer a longitudinal survey can be prohibitive, and most survey research in the scholarly literature is cross-sectional because it is more feasible to collect data once. Well-designed cross-sectional surveys can provide important evidence for a causal relationship, even if it is imperfect. Once you decide how many times you will collect data from your participants, the next step is to figure out how to get your questionnaire in front of participants.

Self-administered questionnaires

If you are planning to conduct a survey for your research project, chances are you have thought about how you might deliver your survey to participants. If you don't have a clear picture yet, look back at your work from Chapter 11 on the sampling approach for your project. How are you planning to recruit participants from your sampling frame? If you are considering contacting potential participants via phone or email, perhaps you want to collect your data using a phone or email survey attached to your recruitment materials. If you are planning to collect data from students, colleagues, or other people you most commonly interact with in-person, maybe you want to consider a pen-and-paper survey to collect your data conveniently. As you review the different approaches to administering surveys below, consider how each one matches with your sampling approach and the contact information you have for study participants. Ensure that your sampling approach is feasible to conduct before building your survey design from it. For example, if you are planning to administer an online survey, make sure you have email addresses to send your questionnaire to or permission to post your survey to an online forum.

Surveys are a versatile research approach. Survey designs vary not only in terms of when they are administered but also in terms of how they are administered. One common way to collect data is in the form of self-administered questionnaires . Self-administered means that the research participant completes the questions independently, usually in writing. Paper questionnaires can be delivered to participants via mail or in person whenever you see your participants. Generally, student projects use in-person collection of paper questionnaires, as mail surveys require physical addresses, spending money, and waiting for the mail. It is common for academic researchers to administer surveys in large social science classes, so perhaps you have taken a survey that was given to you in-person during undergraduate classes. These professors were taking advantage of the same convenience sampling approach that student projects often do. If everyone in your sampling frame is in one room, going into that room and giving them a quick paper survey to fill out is a feasible and convenient way to collect data. Availability sampling may involve asking your sampling frame to complete your study when they naturally meet—colleagues at a staff meeting, students in the student lounge, professors in a faculty meeting—and self-administered questionnaires are one way to take advantage of this natural grouping of your target population. Try to pick a time and situation when people have the downtime needed to complete your questionnaire, and you can maximize the likelihood that people will participate in your in-person survey. Of course, this convenience may come at the cost of privacy and confidentiality. If your survey addresses sensitive topics, participants may alter their responses because they are in close proximity to other participants while they complete the survey. Regardless of whether participants feel self-conscious or talk about their answers with one another, by potentially altering participants' honest responses you may have introduced bias or error into your measurement of the variables in your research question.

Because student research projects often rely on availability sampling, collecting data using paper surveys from whoever in your sampling frame is convenient makes sense because the results will be of limited generalizability. But for researchers who aim to generalize (and students who want to publish their study!), self-administered surveys may be better distributed via the mail or electronically. While it is very unusual for a student project to send a questionnaire via the mail, this method is used quite often in the scholarly literature and for good reason. Survey researchers who deliver their surveys via postal mail often provide some advance notice to respondents about the survey to get people thinking and preparing to complete it. They may also follow up with their sample a few weeks after their survey has been sent out. This can be done not only to remind those who have not yet completed the survey to please do so but also to thank those who have already returned the survey. Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys' return rates (Babbie, 2010). [6] Other helpful tools to increase response rate are to create an attractive and professional survey, offer monetary incentives, and provide a pre-addressed, stamped return envelope. These are also effective for other types of surveys.

While snail mail may not be feasible for a student project, it is increasingly common for student projects and social science projects to use email and other modes of online delivery like social media to collect responses to a questionnaire. Researchers like online delivery for many reasons. It's quicker than knocking on doors in a neighborhood for an in-person survey or waiting for mailed surveys to be returned. It's cheap, too. There are many free tools like Google Forms and Survey Monkey (which includes a premium option). While you are affiliated with a university, you may have access to commercial research software like Redcap or Qualtrics which provide much more advanced tools for collecting survey data than free options. Online surveys can take advantage of computer-mediated data collection by playing a video before asking a question, tracking how long participants take to answer each question, and making sure participants don't fill out the survey more than once (to name a few examples). Moreover, survey data collected via online forms can be exported for analysis in spreadsheet software like Google Sheets or Microsoft Excel or statistics software like SPSS or JASP , a free and open-source alternative to SPSS. While the exported data still need to be checked before analysis, online distribution saves you the trouble of manually inputting every response a participant writes down on a paper survey into a computer to analyze.
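The checks mentioned above can often be automated. Below is a minimal sketch of screening exported survey data before analysis; the column names and values are hypothetical, and the CSV is simulated in memory to keep the example self-contained.

```python
# Sketch: flag duplicate respondent IDs and incomplete responses in
# exported online-survey data before analysis (hypothetical columns).
import csv
import io

# Simulated export from an online survey tool.
exported = io.StringIO(
    "respondent_id,q1,q2\n"
    "101,4,5\n"
    "102,3,\n"      # missing answer to q2
    "101,4,5\n"     # duplicate submission from respondent 101
)

rows = list(csv.DictReader(exported))
seen, duplicates, incomplete = set(), [], []
for row in rows:
    rid = row["respondent_id"]
    if rid in seen:
        duplicates.append(rid)  # same respondent submitted twice
    seen.add(rid)
    if any(value == "" for value in row.values()):
        incomplete.append(rid)  # at least one unanswered question

print(duplicates)   # ['101']
print(incomplete)   # ['102']
```

A researcher would then decide whether to drop duplicates, follow up on incomplete responses, or exclude them, documenting that decision in the methods section.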

The process of collecting data online depends on your sampling frame and approach to recruitment. If your project plans to reach out to people via email to ask them to participate in your study, you should attach your survey to your recruitment email. You already have their attention, and you may not get it again (even if you remind them). Think pragmatically. You will need access to the email addresses of people in your sampling frame. You may be able to piece together a list of email addresses based on public information (e.g., faculty email addresses are on their university webpage, practitioner emails are in marketing materials). In other cases, you may know of a pre-existing list of email addresses to which your target population subscribes (e.g., all undergraduate students in a social work program, all therapists at an agency), and you will need to gain the permission of the list's administrator to recruit using the email platform. Other projects will identify an online forum in which their target population congregates and recruit participants there. For example, your project might identify a Facebook group used by students in your social work program or practitioners in your local area to distribute your survey. Of course, you can post a survey to your personal social media account (or one you create for the survey), but depending on your question, you will need a detailed plan on how to reach participants with enough relevant knowledge about your topic to provide informed answers to your questionnaire.

Many of the suggestions that were provided earlier to improve the response rate of hard copy questionnaires also apply to online questionnaires, including the development of an attractive survey and sending reminder emails. One challenge not present in mail surveys is the spam filter or junk mail box. While people will at least glance at recruitment materials sent via mail, email programs may automatically filter out recruitment emails so participants never see them at all. While the financial incentives that can be provided online differ from those that can be given in person or by mail, online survey researchers can still offer completion incentives to their respondents. Over the years, I've taken numerous online surveys. Often, they did not come with any incentive other than the joy of knowing that I'd helped a fellow social scientist do their job. However, some surveys have their perks. One survey offered a coupon code to use for $30 off any order at a major online retailer and another allowed the opportunity to be entered into a lottery with other study participants to win a larger gift, such as a $50 gift card or a tablet computer. Student projects should not pay participants unless they have grant funding to cover that cost, and there should be no expectations of any out-of-pocket costs for students to complete their research project.

One area in which online surveys are less suitable than mail or in-person surveys is when your target population includes individuals with limited, unreliable, or no access to the internet or individuals with limited computer skills. For these groups, an online survey is inaccessible. At the same time, online surveys offer the most feasible way to collect data anonymously. By posting recruitment materials to a Facebook group or list of practitioners at an agency, you can avoid collecting identifying information from people who participated in your study. For studies that address sensitive topics, online surveys also offer the opportunity to complete the survey privately (again, assuming participants have access to a phone or personal computer). If you have a person's email address or physical address, or you met them in person, your participants are not anonymous, but if you need to collect data anonymously, online tools offer a feasible way to do so.

The best way to collect data using self-administered questionnaires depends on numerous factors. The strengths and weaknesses of in-person, mail, and electronic self-administered surveys are reviewed in Table 12.2. Ultimately, you must make the best decision based on its congruence with your sampling approach and what you can feasibly do. Decisions about survey design should be done with a deep appreciation for your study’s target population and how your design choices may impact their responses to your survey.

Quantitative interviews: Researcher-administered questionnaires

There are some cases in which it is not feasible to provide a written questionnaire to participants, either on paper or digitally. In this case, the questionnaire can be administered verbally by the researcher to respondents. Rather than the participant reading questions independently on paper or digital screen, the researcher reads questions and answer choices aloud to participants and records their responses for analysis. Another word for this kind of questionnaire is an interview schedule . It’s called a schedule because each question and answer is posed in the exact same way each time.

Consistency is key in quantitative interviews . By presenting each question and answer option in exactly the same manner to each interviewee, the researcher minimizes the potential for the interviewer effect , which encompasses any possible changes in interviewee responses based on how or when the researcher presents question-and-answer options. Additionally, in-person surveys may be video recorded, and because survey questions are closed-ended, you can typically take notes without distracting the interviewee, which is helpful for identifying how participants respond to the survey or which questions might be confusing.

Quantitative interviews can take place over the phone or in-person. Phone surveys are often conducted by political polling firms to understand how the electorate feels about certain candidates or policies. In both cases, researchers verbally pose questions to participants. For many years, live-caller polls (a live human being calling participants in a phone survey) were the gold-standard in political polling. Indeed, phone surveys were excellent for drawing representative samples prior to mobile phones. Unlike landlines, cell phone numbers are portable across carriers, associated with individuals as opposed to households, and do not change their first three numbers when people move to a new geographical area. For this reason, many political pollsters have moved away from random-digit phone dialing and toward a mix of data collection strategies like texting-based surveys or online panels to recruit a representative sample and generalizable results for the target population (Silver, 2021). [9]

I guess I should admit that I often decline to participate in phone studies when I am called. In my defense, it's usually just a customer service survey! My point is that it is easy and even socially acceptable to abruptly hang up on an unwanted caller asking you to participate in a survey, and given the high incidence of spam calls, many people do not pick up the phone for numbers they do not know. We will discuss response rates in greater detail at the end of the chapter. One of the benefits of phone surveys is that a person can complete them in their home or a safe place. At the same time, a distracted participant who is cooking dinner, tending to children, or driving may not provide accurate answers to your questions. Phone surveys make it difficult to control the environment in which a person answers your survey. When administering a phone survey, the researcher can record responses on a paper questionnaire or directly into a computer program. For large projects in which many interviews must be conducted by research staff, computer-assisted telephone interviewing (CATI) ensures that each question and answer option are presented the same way and input into the computer for analysis. For student projects, you can read from a digital or paper copy of your questionnaire and record participants' responses into a spreadsheet program like Excel or Google Sheets.

Interview schedules must be administered in such a way that the researcher asks the same question the same way each time. While questions on self-administered questionnaires may create an impression based on the way they are presented, having a researcher pose the questions verbally introduces additional variables that might influence a respondent. Controlling one's wording, tone of voice, and pacing can be difficult over the phone, but it is even more challenging in-person because the researcher must also control their non-verbal expressions and behaviors that may bias survey respondents. Even a slight shift in emphasis or wording may bias the respondent to answer differently. As we've mentioned earlier, consistency is key with quantitative data collection—and human beings are not necessarily known for their consistency. But what happens if a participant asks a question of the researcher? Unlike self-administered questionnaires, quantitative interviews allow the participant to speak directly with the researcher if they need more information about a question. While this can help participants respond accurately, it can also introduce inconsistencies in how the survey is administered to each participant. Ideally, the researcher should draft sample responses to provide to participants who are confused by certain survey items. The strengths and weaknesses of phone and in-person quantitative interviews are summarized in Table 12.3 below.

Students using survey design should settle on a delivery method that presents the most favorable tradeoff between strengths and challenges for their unique context. One key consideration is your sampling approach. If you already have the participant on the phone and they agree to be a part of your sample, you may as well ask them your survey questions right then if the participant can do so. These feasibility concerns make in-person quantitative interviews a poor fit for student projects. It is far easier and quicker to distribute paper surveys to a group of people than it is to administer the survey verbally to each participant individually. Ultimately, you are the one who has to carry out your research design. Make sure you can actually follow your plan!

  • Time is a factor in determining what type of survey a researcher administers; cross-sectional surveys are administered at one time, and longitudinal surveys are administered at multiple points in time.
  • Retrospective surveys offer some of the benefits of longitudinal research while only collecting data once but may be less reliable.
  • Self-administered questionnaires may be delivered in-person, online, or via mail.
  • Interview schedules are used with in-person or phone surveys (a.k.a. quantitative interviews).
  • Each way to administer surveys comes with benefits and drawbacks.

In this section, we assume that you are using a cross-sectional survey design. But how will you deliver your survey? Recall the sampling approach you developed in Chapter 10 . Consider the following questions when evaluating delivery methods for surveys.

  • Can you attach your survey to your recruitment emails, calls, or other contacts with potential participants?
  • What contact information (e.g., phone number, email address) do you need to deliver your survey?
  • Do you need to maintain participant anonymity?
  • Is there anything unique about your target population or sampling frame that may impact survey research?

Imagine you are a participant in your survey.

  • Beginning with the first contact for recruitment into your study and ending with a completed survey, describe each step of the data collection process from the perspective of a person responding to your survey. You should be able to provide a pretty clear timeline of how your survey will proceed at this point, even if some of the details eventually change.

12.3 Writing effective questions and questionnaires

  • Describe some of the ways that survey questions might confuse respondents and how to word questions and responses clearly
  • Create mutually exclusive, exhaustive, and balanced response options
  • Define fence-sitting and floating
  • Describe the considerations involved in constructing a well-designed questionnaire
  • Discuss why pilot testing is important

In the previous section, we reviewed how researchers collect data using surveys. Guided by their sampling approach and research context, researchers should choose the survey approach that provides the most favorable tradeoffs in strengths and challenges. With this information in hand, researchers need to write their questionnaire and revise it before beginning data collection. Each method of delivery requires a questionnaire, but they vary a bit based on how they will be used by the researcher. Since phone surveys are read aloud, researchers will pay more attention to how the questionnaire sounds than how it looks. Online surveys can use advanced tools to require the completion of certain questions, present interactive questions and answers, and otherwise afford greater flexibility in how questionnaires are designed. As you read this section, consider how your method of delivery impacts the type of questionnaire you will design. Because most student projects use paper or online surveys, this section will detail how to construct self-administered questionnaires to minimize the potential for bias and error.

Start with operationalization

The first thing you need to do to write effective survey questions is identify what exactly you wish to know. As silly as it sounds to state what seems so completely obvious, we can’t stress enough how easy it is to forget to include important questions when designing a survey. Begin by looking at your research question and refreshing your memory of the operational definitions you developed for those variables from Chapter 11 . You should have a pretty firm grasp of your operational definitions before starting the process of questionnaire design. You may have taken those operational definitions from other researchers’ methods, found established scales and indices for your measures, or created your own questions and answer options.

STOP! Make sure you have a complete operational definition for the dependent and independent variables in your research question. A complete operational definition contains the variable being measured, the measure used, and how the researcher interprets the measure. Let’s make sure you have what you need from Chapter 11 to begin writing your questionnaire.

List all of the dependent and independent variables in your research question.

  • It’s normal to have one dependent or independent variable. It’s also normal to have more than one of either.
  • Make sure that your research question (and this list) contain all of the variables in your hypothesis. Your hypothesis should only include variables from your research question.

For each variable in your list:

  • If you don’t have questions and answers finalized yet, write a first draft and revise it based on what you read in this section.
  • If you are using a measure from another researcher, you should be able to write out all of the questions and answers associated with that measure. If you only have the name of a scale or a few questions, you need access to the full text and some documentation on how to administer and interpret it before you can finish your questionnaire.
  • For example, an interpretation might be “there are five 7-point Likert scale questions…point values are added across all five items for each participant…and scores below 10 indicate the participant has low self-esteem”
  • Don’t introduce other variables into the mix here. All we are concerned with is how you will measure each variable by itself. The connection between variables is done using statistical tests, not operational definitions.
  • Detail any validity or reliability issues uncovered by previous researchers using the same measures. If you have concerns about validity and reliability, note them, as well.
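An interpretation rule like the self-esteem example above can be expressed as a small scoring function. This is a sketch of a hypothetical scale, not a real instrument: the five 7-point items, the summing rule, and the cutoff of 10 all come from the illustrative example in the exercise.

```python
# Sketch of the hypothetical scoring rule above: five 7-point Likert
# items are summed, and totals below 10 indicate low self-esteem.

def score_self_esteem(responses):
    """Sum five 1-7 Likert responses and flag low self-esteem."""
    if len(responses) != 5 or not all(1 <= r <= 7 for r in responses):
        raise ValueError("expected five responses, each from 1 to 7")
    total = sum(responses)
    return total, total < 10  # (score, low self-esteem flag)

print(score_self_esteem([1, 2, 1, 2, 2]))  # (8, True): flagged as low
print(score_self_esteem([4, 4, 4, 4, 4]))  # (20, False)
```

Writing the interpretation out this explicitly, even on paper, forces you to confirm that your operational definition really does specify the measure, the scoring procedure, and the cutoff before you administer the questionnaire.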

If you completed the exercise above and listed out all of the questions and answer choices you will use to measure the variables in your research question, you have already produced a pretty solid first draft of your questionnaire! Congrats! In essence, questionnaires are all of the self-report measures in your operational definitions for the independent, dependent, and control variables in your study arranged into one document and administered to participants. There are a few questions on a questionnaire (like name or ID#) that are not associated with the measurement of variables. These are the exception, and it’s useful to think of a questionnaire as a list of measures for variables. Of course, researchers often use more than one measure of a variable (i.e., triangulation ) so they can more confidently assert that their findings are true. A questionnaire should contain all of the measures researchers plan to collect about their variables by asking participants to self-report. As we will discuss in the final section of this chapter, triangulating across data sources (e.g., measuring variables using client files or student records) can avoid some of the common sources of bias in survey research.

Sticking close to your operational definitions is important because it helps you avoid an everything-but-the-kitchen-sink approach that includes every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your participants to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you actually plan to use in your analysis. For each question in your questionnaire, ask yourself how this question measures a variable in your study. An operational definition should contain the questions, response options, and how the researcher will draw conclusions about the variable based on participants’ responses.
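One way to guard against the kitchen-sink problem is a quick audit that maps every questionnaire item back to a study variable. The item and variable names below are hypothetical, chosen only to illustrate the check.

```python
# Toy audit (hypothetical names): every questionnaire item should map
# back to a variable in the study's operational definitions.

questionnaire_items = ["hours_studied", "exam_anxiety", "favorite_color"]
study_variables = {"hours_studied", "exam_anxiety", "self_esteem"}

# Items with no corresponding variable are candidates to cut or justify.
unmapped = [item for item in questionnaire_items
            if item not in study_variables]
print(unmapped)  # ['favorite_color']
```

Any item that ends up in the unmapped list either needs an explicit place in your analysis plan or should be removed to respect your respondents' time.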

Writing questions

So, almost all of the questions on a questionnaire are measuring some variable. For many variables, researchers will create their own questions rather than using one from another researcher. This section will provide some tips on how to create good questions to accurately measure variables in your study. First, questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and concise as possible. As I’ve mentioned earlier, your survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and concise will go a long way toward showing your respondents the gratitude they deserve. Pilot testing the questionnaire with friends or colleagues can help identify these issues. This process is commonly called pretesting, but to avoid any confusion with pretesting in experimental design, we refer to it as pilot testing.

Related to the point about not wasting respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experienced the events, behaviors, or feelings you are asking them to report. If you are asking participants for second-hand knowledge—asking clinicians about clients’ feelings, asking teachers about students’ feelings, and so forth—you may want to clarify that the variable you are asking about is the key informant’s perception of what is happening in the target population. A well-planned sampling approach ensures that participants are the most knowledgeable population to complete your survey.

If you decide that you do wish to include questions about matters with which only a portion of respondents will have had experience, make sure you know why you are doing so. For example, if you are asking about MSW student study patterns, and you decide to include a question on studying for the social work licensing exam, you may only have a small subset of participants who have begun studying for the graduate exam or took the bachelor's-level exam. If you decide to include this question that speaks to a minority of participants' experiences, think about why you are including it. Are you interested in how studying for class and studying for licensure differ? Are you trying to triangulate study skills measures? Researchers should carefully consider whether questions relevant to only a subset of participants are likely to produce enough valid responses for quantitative analysis.

Many times, questions that are relevant to a subsample of participants are conditional on an answer to a previous question. A participant might select that they rent their home, and as a result, you might ask whether they carry renter’s insurance. That question is not relevant to homeowners, so it would be wise not to ask them to respond to it. In that case, the question of whether someone rents or owns their home is a filter question , designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample. Figure 12.1 presents an example of how to accomplish this on a paper survey by adding instructions to the participant that indicate what question to proceed to next based on their response to the first one. Using online survey tools, researchers can use filter questions to only present relevant questions to participants.

Figure 12.1: An example of a filter question, where a "yes" answer routes the respondent to additional questions.
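Online survey tools typically implement this branching as "skip logic." The underlying idea can be sketched in a few lines of code. Below is a minimal, hypothetical Python example; the question wording is invented for illustration, not taken from Figure 12.1:

```python
def next_question(housing_status):
    """Simple skip logic: only renters see the conditional follow-up.

    `housing_status` is the answer to a hypothetical filter question,
    "Do you rent or own your home?"
    """
    if housing_status == "rent":
        # Renters are routed to the follow-up question.
        return "Do you carry renter's insurance? (yes/no)"
    # Homeowners (and everyone else) skip straight to the next section.
    return "Section 2: Neighborhood questions"

print(next_question("rent"))
print(next_question("own"))
```

On a paper survey, the same logic has to be written out as instructions to the respondent, as Figure 12.1 illustrates.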

To minimize confusion, researchers should eliminate questions that ask about things participants don't know. Assuming the question is relevant to the participant, other sources of confusion come from how the question is worded. The use of negative wording is one such source. Taking the question from Figure 12.1 about drinking as our example, what if we had instead asked, "Did you not abstain from drinking during your first semester of college?" This is a double negative, and it's not clear how to answer the question accurately. It is a good idea to avoid negative phrasing when possible. For example, "did you not drink alcohol during your first semester of college?" is less clear than "did you drink alcohol during your first semester of college?"

You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). When I first moved to southwest Virginia, I didn’t know what a holler was. Where I grew up in New Jersey, to holler means to yell. Even then, in New Jersey, we shouted and screamed, but we didn’t holler much. In southwest Virginia, my home at the time, a holler also means a small valley in between the mountains. If I used holler in that way on my survey, people who live near me may understand, but almost everyone else would be totally confused. A similar issue arises when you use jargon, or technical language, that people do not commonly know. For example, if you asked adolescents how they experience imaginary audience , they would find it difficult to link those words to the concepts from David Elkind’s theory. The words you use in your questions must be understandable to your participants. If you find yourself using jargon or slang, break it down into terms that are more universal and easier to understand.

Asking multiple questions as though they are a single question can also confuse survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question . Figure 12.2 shows a double-barreled question. Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also more boring than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.

Figure 12.2: A double-barreled question asks more than one thing at a time.

Another thing to avoid when constructing survey questions is the problem of social desirability . We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. (You may recall we covered social desirability bias in Chapter 11 .)

Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college for our research project. We all know that cheating on exams is generally frowned upon (at least I hope we all know this). So, it may be difficult to get people to admit to cheating on a survey. But if you can guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) [10] offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.

Try to step outside your role as researcher for a second, and imagine you were one of your participants. Evaluate the following:

  •   Is the question too general? Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book and provided a response scale ranging from “not at all” to “extremely well,” what does it mean if they selected “extremely well”? Instead, ask more specific behavioral questions, such as “Will you recommend this book to others?” or “Do you plan to read other books by the same author?” 
  • Is the question too detailed? Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of details than generality.
  • Is the question presumptuous? Does your question make assumptions? For instance, if you ask, “what do you think the benefits of a tax cut would be?” you are presuming that the participant sees the tax cut as beneficial. But many people may not view tax cuts as beneficial. Some might see tax cuts as a precursor to less funding for public schools and fewer public services such as police, ambulance, and fire department. Avoid questions with built-in presumptions.
  • Does the question ask the participant to imagine something? A popular question on many television game shows is “if you won a million dollars on this show, how would you plan to spend it?” Most participants have never been faced with this large an amount of money and have never thought about this scenario. In fact, most don’t even know that after taxes, the value of the million dollars will be greatly reduced. In addition, some game shows spread the amount over a 20-year period. Without understanding this “imaginary” situation, participants may not have the background information necessary to provide a meaningful response.

Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.

In sum, in order to pose effective survey questions, researchers should do the following:

  • Identify how each question measures an independent, dependent, or control variable in their study.
  • Keep questions clear and succinct.
  • Make sure respondents have relevant lived experience to provide informed answers to your questions.
  • Use filter questions to avoid getting answers from uninformed participants.
  • Avoid questions that are likely to confuse respondents—including those that use double negatives, use culturally specific terms or jargon, and pose more than one question at a time.
  • Imagine how respondents would feel responding to questions.
  • Get feedback, especially from people who resemble those in the researcher’s sample.

Let’s complete a first draft of your questions. In the previous exercise, you listed all of the questions and answers you will use to measure the variables in your research question. 

  • In the previous exercise, you wrote out the questions and answers for each measure of your independent and dependent variables. Evaluate each question using the criteria listed above on effective survey questions.
  • Type out questions for your control variables and evaluate them, as well. Consider what response options you want to offer participants.

Now, let’s revise any questions that do not meet your standards!

  •  Use the BRUSO model in Table 12.2 for an illustration of how to address deficits in question wording. Keep in mind that you are writing a first draft in this exercise, and it will take a few drafts and revisions before your questions are ready to distribute to participants.


Writing response options

While posing clear and understandable questions in your survey is certainly important, so too is providing respondents with unambiguous response options. Response options are the answers that you provide to the people completing your questionnaire. Generally, respondents will be asked to choose a single (or best) response to each question you pose. We call questions in which the researcher provides all of the response options closed-ended questions . Keep in mind, closed-ended questions can also instruct respondents to choose multiple response options, rank response options against one another, or assign a percentage to each response option. But be cautious when experimenting with different response options! Accepting multiple responses to a single question may add complexity when it comes to quantitatively analyzing and interpreting your data.

Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher. This is particularly important for mixed-methods research. It is possible to analyze open-ended responses quantitatively using content analysis (i.e., counting how often a theme is represented in a transcript and looking for statistical patterns). However, for most researchers, qualitative data analysis will be needed to analyze open-ended questions, and researchers need to think through how they will analyze any open-ended questions as part of their data analysis plan. We will address qualitative data analysis in greater detail in Chapter 19 .
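To make the quantitative option concrete, here is a minimal sketch of keyword-based theme counting in Python. Real content analysis relies on a codebook and trained human coders; the themes, keywords, and responses below are invented purely for illustration:

```python
from collections import Counter

def count_themes(responses, theme_keywords):
    """Tally how many responses mention each theme.

    `theme_keywords` maps a theme name to a list of keywords
    (all hypothetical) that signal that theme. Each response is
    counted at most once per theme.
    """
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for theme, keywords in theme_keywords.items():
            if any(kw in lowered for kw in keywords):
                counts[theme] += 1
    return counts

responses = [
    "I joined the club to make friends.",
    "It helped my resume and career plans.",
    "Mostly for the friendships, honestly.",
]
themes = {
    "social": ["friend", "social"],
    "career": ["resume", "career", "job"],
}
print(count_themes(responses, themes))  # social: 2, career: 1
```

Even a simple tally like this requires decisions (which keywords signal which theme?) that belong in your data analysis plan.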

To keep things simple, we encourage you to use only closed-ended response options in your study. While open-ended questions are not wrong, they are often a sign in our classrooms that students have not fully thought through how to operationally define and measure their key variables. Open-ended questions cannot be operationally defined because you don’t know what responses you will get. Instead, you will need to analyze the qualitative data using one of the techniques we discuss in Chapter 19 to interpret your participants’ responses.

To write effective response options for closed-ended questions, there are a couple of guidelines worth following. First, be sure that your response options are mutually exclusive . Look back at Figure 12.1, which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but that can be easily overlooked. Response options should also be exhaustive . In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 12.1, we have covered all possibilities—those who drank, say, an average of once per month can choose the first response option (“less than one time per week”) while those who drank multiple times a day each day of the week can choose the last response option (“7+”). All the possibilities in between these two extremes are covered by the middle three response options, and every respondent fits into one of the response options we provided.
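For numeric response options, mutual exclusivity and exhaustiveness can be checked mechanically. The sketch below validates a set of half-open numeric ranges; the cutoffs are hypothetical, not the actual options from Figure 12.1:

```python
def check_ranges(ranges, minimum=0, maximum=float("inf")):
    """Check that (low, high) ranges are mutually exclusive and exhaustive.

    Ranges are half-open [low, high): adjacent options must meet exactly,
    with no overlaps (not mutually exclusive) and no gaps (not exhaustive).
    """
    ordered = sorted(ranges)
    if ordered[0][0] != minimum:
        return "gap before first option"
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if hi1 > lo2:
            return "overlap: options are not mutually exclusive"
        if hi1 < lo2:
            return "gap: options are not exhaustive"
    if ordered[-1][1] != maximum:
        return "gap after last option"
    return "ok"

# Hypothetical "drinks per week" options: 0, 1-2, 3-6, 7 or more.
print(check_ranges([(0, 1), (1, 3), (3, 7), (7, float("inf"))]))  # ok
print(check_ranges([(0, 1), (2, 7), (7, float("inf"))]))  # reports a gap
```

The same eyeball test applies to categorical options: every respondent must fit exactly one choice.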

Earlier in this section, we discussed double-barreled questions. Response options can also be double barreled, and this should be avoided. Figure 12.3 is an example of a question that uses double-barreled response options. Other tips about questions are also relevant to response options, including that participants should be knowledgeable enough to select or decline a response option as well as avoiding jargon and cultural idioms.

Figure 12.3: Double-barreled response options provide more than one answer in each option.

Even if you phrase questions and response options clearly, participants are influenced by how many response options are presented on the questionnaire. For Likert scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:

Unlikely  |  Somewhat Likely  |  Likely  |  Very Likely  |  Extremely Likely

Because we have four rankings of likely and only one ranking of unlikely, the scale is unbalanced and most responses will be biased toward “likely” rather than “unlikely.” A balanced version might look like this:

Extremely Unlikely  |  Somewhat Unlikely  |  As Likely as Not  |  Somewhat Likely  | Extremely Likely

In this example, the midpoint is halfway between likely and unlikely. Of course, a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. Fence-sitters are respondents who choose neutral response options, even if they have an opinion. Some people will be drawn to respond, “no opinion” even if they have an opinion, particularly if their true opinion is not a socially desirable one. Floaters , on the other hand, are those who choose a substantive answer to a question when really, they don’t understand the question or don’t have an opinion.

As you can see, floating is the flip side of fence-sitting. Thus, the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers specifically want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is okay to force respondents to choose one side or another (e.g., agree or disagree) without a middle option (e.g., neither agree nor disagree) or to not include an option like “don’t know enough to say” or “not applicable.” There is no always-correct solution to either problem. But in general, including a middle option in a response set provides a more exhaustive set of response options than one that excludes it.
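Returning to the balanced and unbalanced likelihood scales above, one quick check is to assign each option a numeric valence (negative for the "unlikely" side, zero for neutral, positive for "likely") and test whether the valences are symmetric. The valence assignments here are an assumption made for illustration:

```python
def is_balanced(valences):
    """A rating scale is balanced when its option valences are
    symmetric around the neutral point (valence 0)."""
    return sorted(valences) == sorted(-v for v in valences)

# Unbalanced: Unlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely
print(is_balanced([-1, 1, 2, 3, 4]))   # False

# Balanced: Extremely Unlikely ... As Likely as Not ... Extremely Likely
print(is_balanced([-2, -1, 0, 1, 2]))  # True
```

A scale without a neutral option (e.g., valences -2, -1, 1, 2) also passes this symmetry check, which matches the point above: omitting the midpoint is a legitimate design choice, so long as both sides remain balanced.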

The most important check before you finalize your response options is to align them with your operational definitions. As we’ve discussed before, your operational definitions include your measures (questions and responses options) as well as how to interpret those measures in terms of the variable being measured. In particular, you should be able to interpret all response options to a question based on your operational definition of the variable it measures. If you wanted to measure the variable “social class,” you might ask one question about a participant’s annual income and another about family size. Your operational definition would need to provide clear instructions on how to interpret response options. Your operational definition is basically like this social class calculator from Pew Research , though they include a few more questions in their definition.

To drill down a bit more, as Pew specifies in the section titled “how the income calculator works,” the interval/ratio data respondents enter is interpreted using a formula combining a participant’s four responses to the questions posed by Pew categorizing their household into three categories—upper, middle, or lower class. So, the operational definition includes the four questions comprising the measure and the formula or interpretation which converts responses into the three final categories that we are familiar with: lower, middle, and upper class.

It is interesting to note the shift in level of measurement here: the calculator’s final output (lower, middle, or upper class) is ordinal, whereas Pew asks four questions that use an interval or ratio level of measurement (depending on the question). This means that respondents provide numerical responses, rather than choosing categories like lower, middle, and upper class. It’s perfectly normal for operational definitions to change levels of measurement, and it’s also perfectly normal for the level of measurement to stay the same. The important thing is that each response option a participant can provide is accounted for by the operational definition. Throw any combination of family size, location, or income at the Pew calculator, and it will sort you into one of those three social class categories.
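As a stripped-down sketch of this kind of operational definition, the function below converts two interval/ratio inputs into one ordinal category. The household-size adjustment and the dollar cutoffs are invented for illustration; Pew's actual calculator uses more inputs and a different formula:

```python
def social_class(annual_income, household_size):
    """Interpret two interval/ratio responses as one ordinal category.

    The square-root equivalence scale and the cutoffs below are
    assumptions for illustration, not Pew's actual method.
    """
    # Adjust income for household size so that larger households
    # need more income to reach the same category.
    adjusted = annual_income / (household_size ** 0.5)
    if adjusted < 30_000:
        return "lower"
    if adjusted < 90_000:
        return "middle"
    return "upper"

print(social_class(60_000, 4))   # middle: 60,000 / 2 = 30,000
print(social_class(200_000, 1))  # upper
print(social_class(20_000, 2))   # lower
```

Notice that the function is the interpretation half of the operational definition: the questions gather the numbers, and the rules turn them into the categories your analysis will use.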

Unlike Pew’s definition, the operational definitions in your study may not need their own webpage to define and describe. For many questions and answers, interpreting response options is easy. If you were measuring “income” instead of “social class,” you could simply operationalize the term by asking people to list their total household income before taxes are taken out. Higher values indicate higher income, and lower values indicate lower income. Easy. Regardless of whether your operational definitions are simple or more complex, every response option to every question on your survey (with a few exceptions) should be interpretable using an operational definition of a variable. Just like we want to avoid an everything-but-the-kitchen-sink approach to questions on our questionnaire, you want to make sure your final questionnaire only contains response options that you will use in your study.

One note of caution on interpretation (sorry for repeating this). We want to remind you again that an operational definition should not mention more than one variable. In our example above, your operational definition could not say “a family of three making under $50,000 is lower class; therefore, they are more likely to experience food insecurity.” That last clause about food insecurity may well be true, but it’s not a part of the operational definition for social class. Each variable (food insecurity and class) should have its own operational definition. If you are talking about how to interpret the relationship between two variables, you are talking about your data analysis plan . We will discuss how to create your data analysis plan beginning in Chapter 14 . For now, one consideration is that depending on the statistical test you use to test relationships between variables, you may need nominal, ordinal, or interval/ratio data. Your questions and response options should provide the level of measurement required by the specific statistical tests in your data analysis plan. Once you finalize your data analysis plan, return to your questionnaire to make sure each item’s level of measurement matches the statistical test you’ve chosen.

In summary, to write effective response options researchers should do the following:

  • Avoid wording that is likely to confuse respondents—including double negatives, culturally specific terms or jargon, and double-barreled response options.
  • Ensure response options are relevant to participants’ knowledge and experience so they can make an informed and accurate choice.
  • Present mutually exclusive and exhaustive response options.
  • Consider fence-sitters and floaters, and the use of neutral or “not applicable” response options.
  • Define how response options are interpreted as part of an operational definition of a variable.
  • Check that the level of measurement matches the operational definitions and the statistical tests in the data analysis plan (once you develop one).

Look back at the response options you drafted in the previous exercise. Make sure you have a first draft of response options for each closed-ended question on your questionnaire.

  • Using the criteria above, evaluate the wording of the response options for each question on your questionnaire.
  • Revise your questions and response options until you have a complete first draft.
  • Do your first read-through and provide a dummy answer to each question. Make sure you can link each response option and each question to an operational definition.
  • Look ahead to Chapter 14 and consider how each item on your questionnaire will inform your data analysis plan.

From this discussion, we hope it is clear why researchers using quantitative methods spell out all of their plans ahead of time. Ultimately, there should be a straight line from operational definition through measures on your questionnaire to the data analysis plan. If your questionnaire includes response options that are not aligned with operational definitions or not included in the data analysis plan, the responses you receive back from participants won’t fit with your conceptualization of the key variables in your study. If you do not fix these errors and proceed with collecting unstructured data, you will lose out on many of the benefits of survey research and face overwhelming challenges in answering your research question.


Designing questionnaires

Based on your work in the previous section, you should have a first draft of the questions and response options for the key variables in your study. Now, you’ll also need to think about how to present your written questions and response options to survey respondents. It’s time to write a final draft of your questionnaire and make it look nice. Designing questionnaires takes some thought. First, consider the route of administration for your survey. What we cover in this section will apply equally to paper and online surveys, but if you are planning to use online survey software, you should watch tutorial videos and explore the features of the survey software you will use.

Informed consent & instructions

Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000) . [11] One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. Thus, the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.

The second function of the introduction is to establish informed consent . Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and other ethical considerations we covered in Chapter 6 . Written consent forms are not always used in survey research (when the research is of minimal risk, the IRB often accepts completion of the survey instrument as evidence of consent to participate), so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.

Organizing items to be easy and intuitive to follow

The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.

Questions are often organized thematically. If our survey were measuring social class, perhaps we’d have a few questions asking about employment, others focused on education, and still others on housing and community resources. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about parents’ income and then present a series of questions about estimated future income. Grouping by theme is one way to be deliberate about how you present your questions. Keep in mind that you are surveying people, and these people will be trying to follow the logic in your questionnaire. Jumping from topic to topic can give people a bit of whiplash and may make participants less likely to complete it.

Using a matrix is a nice way of streamlining response options for similar questions. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey, but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 12.4.

Figure 12.4: A matrix question using the same agree-to-disagree response options for a series of opinions about class.

Once you have grouped similar questions together, you’ll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). [12] In other words, don’t bore respondents, but don’t scare them away either. There’s some disagreement over where on a survey to place demographic questions, such as those about a person’s age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, if your survey deals with some very sensitive topic, such as child sexual abuse or criminal convictions, you don’t want to scare respondents away or shock them by beginning with your most intrusive questions.

Your participants are human. They will react emotionally to questionnaire items, and they will also try to uncover your research questions and hypotheses. In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research. When feasible, you should consult with key informants from your target population to determine how best to order your questions. If it is not feasible to do so, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions. None of your decisions will be perfect, and all studies have limitations.

Questionnaire length

You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a sizable number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of it will turn off respondents and may make them not want to complete your survey.

Second, and perhaps more important, how long are respondents likely to be willing to spend completing your questionnaire? If you are studying college students, asking them to use their limited free time to complete your survey may mean they won’t want to spend more than a few minutes on it. But if you ask them to complete your survey during down-time between classes, when there is little work to be done, students may be willing to give you a bit more of their time. Think about places and times where your sampling frame naturally gathers and whether you would be able to either recruit participants or distribute a survey in that context. Estimate how long your participants could reasonably take to complete a survey presented to them during this time. The more you know about your population (such as which weeks have less work and more free time), the better you can target questionnaire length.

The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some researchers advise that surveys should not take longer than about 15 minutes to complete (as cited in Babbie 2010), [13] whereas others suggest that up to 20 minutes is acceptable (Hopper, 2010). [14] As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered to determine how long to make your questionnaire. For example, if you planned to distribute your questionnaire to students in between classes, you will need to make sure it is short enough to complete before the next class begins.

When designing a questionnaire, a researcher should consider:

  • Weighing strengths and limitations of the method of delivery, including the advanced tools in online survey software or the simplicity of paper questionnaires.
  • Grouping together items that ask about the same thing.
  • Moving any questions about sensitive items to the end of the questionnaire, so as not to scare respondents off.
  • Moving any questions that engage the respondent to answer the questionnaire at the beginning, so as not to bore them.
  • Keeping the questionnaire short enough to complete within the amount of time you can reasonably ask of your participants.
  • Dedicating time to visual design and ensuring the questionnaire looks professional.

Type out a final draft of your questionnaire in a word processor or online survey tool.

  • Evaluate your questionnaire using the guidelines above, revise it, and get it ready to share with other student researchers.


Pilot testing and revising questionnaires

A good way to estimate the time it will take respondents to complete your questionnaire (and other potential challenges) is through pilot testing . Pilot testing allows you to get feedback on your questionnaire so you can improve it before you actually administer it. It can be quite expensive and time consuming if you wish to pilot test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pilot testing with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By pilot testing your questionnaire, you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are boring or offensive, and learn whether there are places where you should have included filter questions. You can also time pilot testers as they take your survey. This will give you a good estimate to share with respondents when you administer your survey, and show whether you have some wiggle room to add additional items or need to cut a few.
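Pilot-tester timings can be turned into the estimate you quote respondents with a few lines of code. This is a minimal sketch under assumed data; the completion times below are hypothetical.

```python
from statistics import mean, stdev

# Hypothetical completion times (in minutes) from five pilot testers
pilot_times = [12.5, 9.8, 14.2, 11.0, 10.6]

avg = mean(pilot_times)
spread = stdev(pilot_times)

# Quote a slightly padded figure rather than the bare average, so most
# respondents actually finish within the time you promised them
estimate = avg + spread
print(f"Average pilot time: {avg:.1f} minutes")
print(f"Estimate to quote respondents: about {estimate:.0f} minutes")
```

Padding by one standard deviation is a judgment call, not a fixed rule; the point is simply to avoid promising respondents the best-case time.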

Perhaps this goes without saying, but your questionnaire should also have an attractive design. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page. Make your font size readable (at least 12 point or larger, depending on the characteristics of your sample), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. If you are using an online survey, ensure that participants can complete it via mobile, computer, and tablet devices. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions. While online survey tools automate much of visual design, word processors are designed for writing all kinds of documents and may need more manual adjustment as part of visual design.

Realistically, your questionnaire will continue to evolve as you develop your data analysis plan over the next few chapters. By now, you should have a complete draft of your questionnaire grounded in an underlying logic that ties together each question and response option to a variable in your study. Once your questionnaire is finalized, you will need to submit it for ethical approval from your professor or the IRB. If your study requires IRB approval, it may be worthwhile to submit your proposal before your questionnaire is completely done. Revisions to IRB protocols are common and it takes less time to review a few changes to questions and answers than it does to review the entire study, so give them the whole study as soon as you can. Once the IRB approves your questionnaire, you cannot change it without their okay.

  • A questionnaire is comprised of self-report measures of variables in a research study.
  • Make sure your survey questions will be relevant to all respondents and that you use filter questions when necessary.
  • Effective survey questions and responses take careful construction by researchers, as participants may be confused or otherwise influenced by how items are phrased.
  • The questionnaire should start with informed consent and instructions, flow logically from one topic to the next, engage but not shock participants, and thank participants at the end.
  • Pilot testing can help identify any issues in a questionnaire before distributing it to participants, including language or length issues.

It’s a myth that researchers work alone! Get together with a few of your fellow students and swap questionnaires for pilot testing.

  • Use the criteria in each section above (questions, response options, questionnaires) and provide your peers with the strengths and weaknesses of their questionnaires.
  • See if you can guess their research question and hypothesis based on the questionnaire alone.

12.4 Bias and cultural considerations

  • Identify the logic behind survey design as it relates to nomothetic causal explanations and quantitative methods.
  • Discuss sources of bias and error in surveys.
  • Apply criticisms of survey design to ensure more equitable research.

The logic of survey design

As you may have noticed with survey designs, everything about them is intentional—from the delivery method, to question wording, to what response options are offered. It’s helpful to spell out the underlying logic behind survey design and how well it meets the criteria for nomothetic causal explanations. Because we are trying to isolate the causal relationship between our dependent and independent variable, we must try to control for as many possible confounding factors as possible. Researchers using survey design do this in multiple ways:

  • Using well-established, valid, and reliable measures of key variables, including triangulating variables using multiple measures
  • Measuring control variables and including them in their statistical analysis
  • Avoiding biased wording, presentation, or procedures that might influence the sample to respond differently
  • Pilot testing questionnaires, preferably with people similar to the sample

In other words, survey researchers go through a lot of trouble to make sure they are not the ones causing the changes they observe in their study. Of course, every study falls a little short of this ideal bias-free design, and some studies fall far short of it. This section is all about how bias and error can inhibit the ability of survey results to meaningfully tell us about causal relationships in the real world.

Bias in questionnaires, questions, and response options

The use of surveys is based on methodological assumptions common to research in the postpositivist paradigm. Figure 12.5 presents a model of the methodological assumptions behind survey design—what researchers assume are the cognitive processes that people engage in when responding to a survey item (Sudman, Bradburn, & Schwarz, 1996). [15] Respondents must interpret the question, retrieve relevant information from memory, form a tentative judgment, convert the tentative judgment into one of the response options provided (e.g., a rating on a 1-to-7 scale), and finally edit their response as necessary.


Consider, for example, the following questionnaire item:

  • How many alcoholic drinks do you consume in a typical day?
  • a lot more than average
  • somewhat more than average
  • somewhat fewer than average
  • a lot fewer than average

Although this item at first seems straightforward, it poses several difficulties for respondents. First, they must interpret the question. For example, they must decide whether “alcoholic drinks” include beer and wine (as opposed to just hard liquor) and whether a “typical day” is a typical weekday, a typical weekend day, or both. Chang and Krosnick (2003) [16] found that asking about “typical” behavior is more valid than asking about “past” behavior, but their study compared a “typical week” to the “past week,” and the results may differ when respondents must weigh typical weekdays against weekend days.

Once respondents have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., “I am not much of a drinker”). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this  mental calculation  might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. For example, what does “average” mean, and what would count as “somewhat more” than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink a lot more than average, they might not want to report that  for fear of looking bad in the eyes of the researcher, so instead, they may opt to select the “somewhat more than average” response option.

At first glance, the alcohol item seems clearly worded and appears to offer a set of mutually exclusive, exhaustive, and balanced response options; yet, as we have seen, it is genuinely difficult for respondents to work out what is being asked. This complexity can lead to unintended influences on respondents’ answers. Confounds like this are often referred to as context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990). [17] For example, there is an item-order effect when the order in which the items are presented affects people’s responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. For example, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, & Schwarz, 1988). [18] When the life satisfaction item came first, the correlation between the two was only −.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. Answering the dating frequency item first made that information more accessible in memory, so respondents were more likely to base their life satisfaction rating on it.

The response options provided can also have unintended effects on people’s responses (Schwarz, 1999). [19] For example, when people are asked how often they are “really irritated” and given response options ranging from “less than once a year” to “more than once a month,” they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from “less than once a day” to “several times a month,” they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options (i.e., fence-sitting). For example, people are likely to report watching more television when the response options are centered on a middle option of 4 hours than when centered on a middle option of 2 hours. To mitigate order effects, rotate questions and response items when there is no natural order. Counterbalancing or randomizing the order in which questions are presented in online surveys is good practice and can reduce response-order effects. These effects can be substantial: among undecided voters, the first candidate listed on a ballot receives a 2.5% boost simply by virtue of being listed first! [20]
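The randomization idea above can be sketched in a few lines. Assuming a set of attitude items with no natural order, each respondent sees an independent random ordering. The item names here are hypothetical, and most online survey software offers this as a built-in option rather than requiring any code.

```python
import random

def randomized_order(questions, respondent_id):
    # Seed a private generator with the respondent ID so each
    # respondent's ordering is random but reproducible
    rng = random.Random(respondent_id)
    order = list(questions)  # copy; leave the master list untouched
    rng.shuffle(order)
    return order

items = ["life_satisfaction", "dating_frequency", "relationship_status"]
print(randomized_order(items, respondent_id=42))
```

Averaged over many respondents, item-order effects like the life satisfaction/dating frequency shift described above should cancel out rather than push every response in the same direction.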

Other context effects that can confound the causal relationship under examination in a survey include social desirability bias, recall bias, and common method bias. As we discussed in Chapter 11 , social desirability bias occurs when questions lead respondents to answer in ways that don’t reflect their genuine thoughts or feelings in order to avoid being perceived negatively. With negative questions such as, “do you think that your project team is dysfunctional?”, “is there a lot of office politics in your workplace?”, or “have you ever illegally downloaded music files from the Internet?”, the researcher may not get truthful responses. This tendency among respondents to “spin the truth” in order to portray themselves in a socially desirable manner hurts the validity of responses obtained from survey research. There is practically no way of overcoming social desirability bias in a questionnaire survey outside of wording questions using nonjudgmental language. However, in a quantitative interview, a researcher may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.

As you can see, participants’ responses to survey questions often depend on their motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviors, or their memory of such events may have changed with time and no longer be retrievable. This phenomenon is known as recall bias . For instance, if a respondent is asked to describe their utilization of computer technology one year ago, their response may not be accurate due to difficulties with recall. One possible way of overcoming recall bias is by anchoring the respondent’s memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.

Cross-sectional and retrospective surveys are particularly vulnerable to recall bias as well as common method bias. Common method bias can occur when measuring both independent and dependent variables at the same time (as in a cross-sectional survey) and using the same instrument (such as a questionnaire). In such cases, the phenomenon under investigation may not be adequately separated from measurement artifacts. Standard statistical tests are available to test for common method bias, such as Harman’s single-factor test (Podsakoff et al., 2003), [21] Lindell and Whitney’s (2001) [22] marker variable technique, and so forth. This bias can be potentially avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different data sources, such as medical or student records rather than self-report questionnaires.
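Harman’s single-factor test can be roughly approximated with a principal-component check: if a single factor explains the majority of variance across all study items, common method bias may be at play. The sketch below uses simulated data and plain eigendecomposition; a proper application would run an unrotated exploratory factor analysis on your actual questionnaire items.

```python
import numpy as np

def harman_single_factor_share(responses):
    # Share of total variance explained by the first principal
    # component of the item correlation matrix; shares above ~0.5
    # are conventionally read as a warning sign for common method bias
    corr = np.corrcoef(responses, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # largest first
    return eigenvalues[0] / eigenvalues.sum()

# Simulated data: 200 respondents x 6 Likert-type items (values 1-5)
rng = np.random.default_rng(0)
data = rng.integers(1, 6, size=(200, 6)).astype(float)
share = harman_single_factor_share(data)
print(f"First factor explains {share:.1%} of total variance")
```

Because the simulated items here are independent, the first factor explains well under half the variance; heavily correlated self-report items measured with one instrument at one time point would push that share upward.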


Bias in recruitment and response to surveys

So far, we have discussed errors researchers make when they design questionnaires that accidentally influence participants to respond one way or another. However, even well-designed questionnaires can produce biased results when administered to survey respondents because of biases in who actually responds to your survey.

Survey research is notorious for its low response rates. A response rate of 15-20% is typical in a mail survey, even after two or three reminders. If the majority of the targeted respondents fail to respond to a survey, then a legitimate concern is whether non-respondents are not responding due to a systematic reason, which may raise questions about the validity and generalizability of the study’s results, especially as this relates to the representativeness of the sample. This is known as non-response bias . For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to satisfaction questionnaires. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. [23] In this instance, the results would not be generalizable beyond this one biased sample. Here are several strategies for addressing non-response bias:

  • Advance notification: A short letter sent in advance to the targeted respondents soliciting their participation in an upcoming survey can prepare them and improve the likelihood of response. The letter should state the purpose and importance of the study, the mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their cooperation. A variation of this technique may request the respondent to return a postage-paid postcard indicating whether or not they are willing to participate in the study.
  • Ensuring that content is relevant: If a survey examines issues of relevance or importance to respondents, then they are more likely to respond.
  • Creating a respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, inoffensive, and easy to respond to tend to get higher response rates.
  • Having the project endorsed: For organizational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organization. Such endorsements can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.
  • Providing follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.
  • Ensuring that interviewers are properly trained: Response rates for interviews can be improved with skilled interviewers trained on how to request interviews, use computerized dialing techniques to identify potential respondents, and schedule callbacks for respondents who could not be reached.
  • Providing incentives: Response rates, at least with certain populations, may increase with the use of incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, the promise of contribution to charity, and so forth.
  • Providing non-monetary incentives: Organizations in particular are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is sharing trainings and other resources based on the results of a project with a key stakeholder.
  • Making participants fully aware of confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.

Non-response bias impairs the ability of the researcher to generalize from the total number of respondents in the sample to the overall sampling frame. Of course, this assumes that the sampling frame is itself representative and generalizable to the larger target population. Sampling bias is present when the people in our sampling frame, or the approach we use to sample them, results in a sample that does not represent our population in some way. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers and mobile phone numbers, and will include a disproportionate number of respondents who have land-line telephone service and stay home during much of the day, such as people who are unemployed, disabled, or of advanced age. Likewise, online surveys tend to include a disproportionate number of students and younger people who are more digitally connected, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. A different kind of sampling bias relates to generalizing from key informants to a target population, such as asking teachers (or parents) about the academic learning of their students (or children) or asking CEOs about operational details in their company. These sampling frames may provide a clearer picture of what key informants think and feel than of the target population itself.


Cultural bias

The acknowledgement that most research in social work and other adjacent fields is overwhelmingly based on so-called WEIRD (Western, educated, industrialized, rich and democratic) populations—a topic we discussed in Chapter 10 —has given rise to intensified research funding, publication, and visibility of collaborative cross-cultural studies across the social sciences that expand the geographical range of study populations. Many of the so-called non-WEIRD communities who increasingly participate in research are Indigenous, from low- and middle-income countries in the global South, live in post-colonial contexts, and/or are marginalized within their political systems, revealing and reproducing power differentials between researchers and researched (Whiteford & Trotter, 2008). [24] Cross-cultural research has historically been rooted in racist, capitalist ideas and motivations (Gordon, 1991). [25] Scholars have long debated whether research aiming to standardize cross-cultural measurements and analysis is tacitly engaged and/or continues to be rooted in colonial and imperialist practices (Kline et al., 2018; Stearman, 1984). [26] Given this history, it is critical that scientists reflect upon these issues and be accountable to their participants and colleagues for their research practices. We argue that cross-cultural research be grounded in the recognition of the historical, political, sociological and cultural forces acting on the communities and individuals of focus. These perspectives are often contrasted with ‘science’; here we argue that they are necessary as a foundation for the study of human behavior.

We stress that our goal is not to review the literature on colonial or neo-colonial research practices, to provide a comprehensive primer on decolonizing approaches to field research, nor to identify or admonish past harms in these respects—harms to which many of the authors of this piece would readily admit. Furthermore, we acknowledge that we ourselves are writing from a place of privilege as researchers educated and trained in disciplines with colonial pasts. Our goal is simply to help students understand the broader issues in cross-cultural studies for appropriate consideration of diverse communities and culturally appropriate methodologies for student research projects.

Equivalence of measures across cultures

Data collection methods largely stemming from WEIRD intellectual traditions are being exported to a range of cultural contexts. This is often done with insufficient consideration of the translatability (e.g. equivalence or applicability) or implementation of such concepts and methods in different contexts, as already well documented (e.g., Hruschka et al., 2018). [27] For example, in a developmental psychology study conducted by Broesch and colleagues (2011), [28] the research team exported a task to examine the development and variability of self-recognition in children across cultures. Typically, this milestone is measured by surreptitiously placing a mark on a child’s forehead and allowing them to discover their reflective image and the mark in a mirror. While self-recognition in WEIRD contexts typically manifests in children by 18 months of age, the authors found that only 2 out of 82 children (aged 1–6 years) ‘passed’ the test by removing the mark using the reflected image. The authors’ interpretation of these results was that the test produced false negatives and instead measured implicit compliance with the local authority figure who placed the mark on the child. This raises the possibility that the mirror test may lack construct validity in cross-cultural contexts—in other words, that it may not measure the theoretical construct it was designed to measure.

As we discussed previously, survey researchers want to make sure everyone receives the same questionnaire, but how can we be sure everyone understands the questionnaire in the same way? Cultural equivalence means that a measure produces comparable data when employed in different cultural populations (Van de Vijver & Poortinga, 1992). [29] If concepts differ in meaning across cultures, cultural bias may explain what is going on with your key variables better than your hypotheses do. Cultural bias may result because of poor item translation, inappropriate content of items, and unstandardized procedures (Waltz et al., 2010). [30] Of particular importance is construct bias , or “when the construct measured is not identical across cultures or when behaviors that characterize the construct are not identical across cultures” (Meiring et al., 2005, p. 2). [31] Construct bias emerges when there is: a) disagreement about the appropriateness of content, b) inadequate sampling, c) underrepresentation of the construct, and d) incomplete overlap of the construct across cultures (Van de Vijver & Poortinga, 1992). [32]


Addressing cultural bias

To address these issues, we propose careful scrutiny of (a) study site selection, (b) community involvement, and (c) culturally appropriate research methods. Particularly for those initiating collaborative cross-cultural projects, we focus here on pragmatic and implementable steps. For student researchers, it is important to be aware of these issues and assess for them among the strengths and limitations of your own study, though the degree to which you can feasibly implement some of these measures may be limited by a lack of resources.

Study site selection

Researchers are increasingly interested in cross-cultural research applicable outside of WEIRD contexts, but this has sometimes led to an uncritical and haphazard inclusion of ‘non-WEIRD’ populations in cross-cultural research without further regard for why specific populations should be included (Barrett, 2020). [33] One particularly egregious error is grouping all non-Western populations into a single comparative sample set against the cultural West (i.e., the ‘West versus rest’ approach), an approach often unwittingly adopted by researchers performing cross-cultural research (Henrich, 2010). [34] Other researcher errors include the exoticization of particular cultures or viewing non-Western cultures as a window into the past rather than as cultures that have co-evolved over time.

Thus, some of the cultural biases in survey research emerge when researchers fail to identify a clear  theoretical justification for inclusion of any subpopulation—WEIRD or not—based on knowledge of the relevant cultural and/or environmental context (see Tucker, 2017 [35] for a good example). For example, a researcher asking about satisfaction with daycare must acquire the relevant cultural and environmental knowledge about a daycare that caters exclusively to Orthodox Jewish families. Simply including this study site without doing appropriate background research and identifying a specific aspect of this cultural group that is of theoretical interest in your study (e.g., spirituality and parenthood) indicates a lack of rigor in research. It undercuts the validity and generalizability of your findings by introducing sources of cultural bias that are unexamined in your study.

Sampling decisions are also important as they involve unique ethical and social challenges. For example, foreign researchers (as sources of power, information and resources) represent both opportunities for and threats to community members. These relationships are often complicated by power differentials due to unequal access to wealth, education and historical legacies of colonization. As such, it is important that investigators are alert to the possible bias among individuals who initially interact with researchers, to the potential negative consequences for those excluded, and to the (often unspoken) power dynamics between the researcher and their study participants (as well as among and between study participants).

We suggest that a necessary first step is to carefully consult existing resources outlining best practices for ethical principles of research before engaging in cross-cultural research. Many of these resources have been developed over years of dialogue in various academic and professional societies (e.g. American Anthropological Association, International Association for Cross Cultural Psychology, International Union of Psychological Science). Furthermore, communities themselves are developing and launching research-based codes of ethics and providing carefully curated open-access materials such as those from the Indigenous Peoples’ Health Research Centre , often written in consultation with ethicists in low- to middle-income countries (see Schroeder et al., 2019 ). [36]

Community involvement

Too often researchers engage in ‘extractive’ research, whereby a researcher selects a study community and collects the necessary data to exclusively further their own scientific and/or professional goals without benefiting the community. This reflects a long history of colonialism in social science. Extractive methods lead to methodological flaws and alienate participants from the scientific process, poisoning the well of scientific knowledge on a macro level. Many researchers are associated with institutions tainted with colonial, racist, and sexist histories and sentiments, some of which persist into the present. Much cross-cultural research is carried out in former or contemporary colonies, and in the colonial language. Explicit and implicit power differentials create ethical challenges that should be acknowledged by researchers and addressed in the design of their study (see Schuller, 2010 [37] for an example examining the power and politics of the various roles played by researchers).

An understanding of cultural norms may ensure that data collection and questionnaire design are culturally and linguistically relevant. This can be achieved by implementing several complementary strategies. A first step may be to collaborate with members of the study community to check the relevance of the instruments being used. Incorporating perspectives from the study community from the outset can reduce the likelihood of making scientific errors in measurement and inference (First Nations Information Governance Centre, 2014). [38]

An additional approach is to use mixed methods in data collection, such that each method ‘checks’ the data collected using the other methods. A recent paper by Fischer and Poortinga (2018) [39] provides suggestions for a rigorous methodological approach to conducting cross-cultural comparative psychology, underscoring the importance of using multiple methods with an eye towards a convergence of evidence. A mixed-method approach can incorporate a variety of qualitative methods alongside a quantitative survey, including open-ended questions, focus groups, and interviews.

Research design and methods

It is critical that researchers translate the language, technological references and stimuli, and examine the underlying cultural context of the original method for assumptions that rely upon WEIRD epistemologies (Hruschka, 2020). [40] This extends even to seemingly simple visual aids and scales, to ensure that they measure what the researcher intends (see Purzycki and Lang, 2019 [41] for a discussion of the use of a popular economic experiment in small-scale societies).

For more information on assessing cultural equivalence, consult this free training from RTI International, a well-regarded non-profit research firm, entitled “ The essential role of language in survey design ” and this free training from the Center for Capacity Building in Survey Methods and Statistics entitled “ Questionnaire design: For surveys in 3MC (multinational, multiregional, and multicultural) contexts .” These trainings guide researchers through the details of evaluating and writing survey questions using culturally sensitive language. Moreover, if you are planning to conduct cross-cultural research, you should consult this guide for assessing measurement equivalency and bias across cultures as well.

  • Bias can come from both how questionnaire items are presented to participants as well as how participants are recruited and respond to surveys.
  • Cultural bias emerges from the differences in how people think and behave across cultures.
  • Cross-cultural research requires a theoretically-informed sampling approach, evaluating measurement equivalency across cultures, and generalizing findings with caution.

Review your questionnaire and assess it for potential sources of bias.

  • Include the results of pilot testing from the previous exercise.
  • Make any changes to your questionnaire (or sampling approach) you think would reduce the potential for bias in your study.

Create a first draft of your limitations section by identifying sources of bias in your survey.

  • Write a bulleted list or paragraph of the potential sources of bias in your study.
  • Remember that all studies, especially student-led studies, have limitations. To the extent you can address these limitations now and feasibly make changes, do so. But keep in mind that your goal should be more to correctly describe the bias in your study than to collect bias-free results. Ultimately, your study needs to get done!
  • Unless researchers change the order of questions as part of their methodology to ensure accurate responses to questions ↵
  • Not that there are any personal vendettas I'm aware of in academia...everyone gets along great here... ↵
  • Blackstone, A. (2013). Harassment of older adults in the workplace. In P. Brownell & J. J. Kelly (eds.) Ageism and mistreatment of older workers . Springer ↵
  • Smith, T. W. (2009). Trends in willingness to vote for a Black and woman for president, 1972–2008.  GSS Social Change Report No. 55 . Chicago, IL: National Opinion Research Center ↵
  • Enriquez, L. E., Rosales, W. E., Chavarria, K., Morales Hernandez, M., & Valadez, M. (2021). COVID on campus: Assessing the impact of the pandemic on undocumented college students. AERA Open . https://doi.org/10.1177/23328584211033576 ↵
  • Mortimer, J. T. (2003).  Working and growing up in America . Cambridge, MA: Harvard University Press. ↵
  • Lindert, J., Lee, L. O., Weisskopf, M. G., McKee, M., Sehner, S., & Spiro III, A. (2020). Threats to Belonging—Stressful Life Events and Mental Health Symptoms in Aging Men—A Longitudinal Cohort Study.  Frontiers in psychiatry ,  11 , 1148. ↵
  • Kleschinsky, J. H., Bosworth, L. B., Nelson, S. E., Walsh, E. K., & Shaffer, H. J. (2009). Persistence pays off: follow-up methods for difficult-to-track longitudinal samples.  Journal of studies on alcohol and drugs ,  70 (5), 751-761. ↵
  • Silver, N. (2021, March 25). The death of polling is greatly exaggerated. FiveThirtyEight . Retrieved from: https://fivethirtyeight.com/features/the-death-of-polling-is-greatly-exaggerated/ ↵
  • Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. ↵
  • Peterson, R. A. (2000).  Constructing effective questionnaires . Thousand Oaks, CA: Sage. ↵
  • Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth; Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley; Neuman, W. L. (2003). Social research methods: Qualitative and quantitative approaches (5th ed.). Boston, MA: Pearson. ↵
  • Babbie, E. (2010). The practice of social research  (12th ed.). Belmont, CA: Wadsworth. ↵
  • Hopper, J. (2010). How long should a survey be? Retrieved from  http://www.verstaresearch.com/blog/how-long-should-a-survey-be ↵
  • Sudman, S., Bradburn, N. M., & Schwarz, N. (1996).  Thinking about answers: The application of cognitive processes to survey methodology . San Francisco, CA: Jossey-Bass. ↵
  • Chang, L., & Krosnick, J.A. (2003). Measuring the frequency of regular behaviors: Comparing the ‘typical week’ to the ‘past week’.  Sociological Methodology, 33 , 55-80. ↵
  • Schwarz, N., & Strack, F. (1990). Context effects in attitude surveys: Applying cognitive theory to social research. In W. Stroebe & M. Hewstone (Eds.),  European review of social psychology  (Vol. 2, pp. 31–50). Chichester, UK: Wiley. ↵
  • Strack, F., Martin, L. L., & Schwarz, N. (1988). Priming and communication: The social determinants of information use in judgments of life satisfaction.  European Journal of Social Psychology, 18 , 429–442. ↵
  • Schwarz, N. (1999). Self-reports: How the questions shape the answers.  American Psychologist, 54 , 93–105. ↵
  • Miller, J.M. & Krosnick, J.A. (1998). The impact of candidate name order on election outcomes.  Public Opinion Quarterly, 62 (3), 291-330. ↵
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. Journal of Applied Psychology, 88 (5), 879. ↵
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86 (1), 114. ↵
  • This is why my ratemyprofessor.com score is so low. Or that's what I tell myself. ↵
  • Whiteford, L. M., & Trotter II, R. T. (2008). Ethics for anthropological research and practice . Waveland Press. ↵
  • Gordon, E. T. (1991). Anthropology and liberation. In F V Harrison (ed.) Decolonizing anthropology: Moving further toward an anthropology for liberation (pp. 149-167). Arlington, VA: American Anthropological Association. ↵
  • Kline, M. A., Shamsudheen, R., & Broesch, T. (2018). Variation is the universal: Making cultural evolution work in developmental psychology.  Philosophical Transactions of the Royal Society B: Biological Sciences ,  373 (1743), 20170059. Stearman, A. M. (1984). The Yuquí connection: Another look at Sirionó deculturation.  American Anthropologist ,  86 (3), 630-650. ↵
  • Hruschka, D. J., Munira, S., Jesmin, K., Hackman, J., & Tiokhin, L. (2018). Learning from failures of protocol in cross-cultural research.  Proceedings of the National Academy of Sciences ,  115 (45), 11428-11434. ↵
  • Broesch, T., Callaghan, T., Henrich, J., Murphy, C., & Rochat, P. (2011). Cultural variations in children’s mirror self-recognition.  Journal of Cross-Cultural Psychology ,  42 (6), 1018-1029. ↵
  • Van de Vijver, F. J., & Poortinga, Y. H. (1992). Testing in culturally heterogeneous populations: When are cultural loadings undesirable?. European Journal of Psychological Assessment . ↵
  • Waltz, C. F., Strickland, O. L., & Lenz, E. R. (Eds.). (2010). Measurement in nursing and health research (4th ed.) . Springer. ↵
  • Meiring, D., Van de Vijver, A. J. R., Rothmann, S., & Barrick, M. R. (2005). Construct, item and method bias of cognitive and personality tests in South Africa.  SA Journal of Industrial Psychology ,  31 (1), 1-8. ↵
  • Van de Vijver, F. J., & Poortinga, Y. H. (1992). Testing in culturally heterogeneous populations: When are cultural loadings undesirable?.  European Journal of Psychological Assessment . ↵
  • Barrett, H. C. (2020). Deciding what to observe: Thoughts for a post-WEIRD generation.  Evolution and Human Behavior ,  41 (5), 445-453. ↵
  • Henrich, J., Heine, S. J., & Norenzayan, A. (2010). Beyond WEIRD: Towards a broad-based behavioral science.  Behavioral and Brain Sciences ,  33 (2-3), 111. ↵
  • Tucker, B. (2017). From risk and time preferences to cultural models of causality: on the challenges and possibilities of field experiments, with examples from rural Southwestern Madagascar.  Impulsivity , 61-114. ↵
  • Schroeder, D., Chatfield, K., Singh, M., Chennells, R., & Herissone-Kelly, P. (2019).  Equitable research partnerships: a global code of conduct to counter ethics dumping . Springer Nature. ↵
  • Schuller, M. (2010). From activist to applied anthropologist to anthropologist? On the politics of collaboration.  Practicing Anthropology ,  32 (1), 43-47. ↵
  • First Nations Information Governance Centre. (2014). Ownership, control, access and possession (OCAP): The path to First Nations information governance. ↵
  • Fischer, R., & Poortinga, Y. H. (2018). Addressing methodological challenges in culture-comparative research.  Journal of Cross-Cultural Psychology ,  49 (5), 691-712. ↵
  • Hruschka, D. J. (2020). “What we look with” is as important as “what we look at.”  Evolution and Human Behavior ,  41 (5), 458-459. ↵
  • Purzycki, B. G., & Lang, M. (2019). Identity fusion, outgroup relations, and sacrifice: a cross-cultural test.  Cognition ,  186 , 1-6. ↵

The use of questionnaires to gather data from multiple participants.

the group of people you successfully recruit from your sampling frame to participate in your study

A research instrument consisting of a set of questions (items) intended to capture responses from participants in a standardized manner

Refers to research that is designed specifically to answer the question of whether there is a causal relationship between two variables.

A measure of a participant's condition before they receive an intervention or treatment.

A measure of a participant's condition after an intervention or, if they are part of the control/comparison group, at the end of an experiment.

a participant answers questions about themselves

the entities that a researcher actually observes, measures, or collects in the course of trying to learn something about her unit of analysis (individuals, groups, or organizations)

entity that a researcher wants to say something about at the end of her study (individual, group, or organization)

whether you can practically and ethically complete the research project you propose

Someone who is especially knowledgeable about a topic being studied.

a person who completes a survey on behalf of another person

In measurement, conditions that are subtle and complex that we must use existing knowledge and intuition to define.

The ability of a measurement tool to measure a phenomenon the same way, time after time. Note: Reliability does not imply validity.

study publicly available information or data that has been collected by another person

When a researcher collects data only once from participants using a questionnaire

Researcher collects data from participants at multiple points over an extended period of time using a questionnaire.

A type of longitudinal survey where the researchers gather data at multiple times, but each time they ask different people from the group they are studying because their concern is capturing the sentiment of the group, not the individual people they survey.

A questionnaire that is distributed to participants (in person, by mail, virtually) to complete independently.

A questionnaire that is read to respondents

when a researcher administers a questionnaire verbally to participants

any possible changes in interviewee responses based on how or when the researcher presents question-and-answer options

Triangulation of data refers to the use of multiple types, measures or sources of data in a research project to increase the confidence that we have in our findings.

Testing out your research materials in advance on people who are not included as participants in your study.

items on a questionnaire designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample

a question that asks more than one thing at a time, making it difficult to respond accurately

When a participant answers in a way that they believe is socially the most acceptable answer.

the answers researchers provide to participants to choose from when completing a questionnaire

questions in which the researcher provides all of the response options

Questions for which the researcher does not include response options, allowing for respondents to answer the question in their own words

respondents to a survey who choose neutral response options, even if they have an opinion

respondents to a survey who choose a substantive answer to a question when really, they don’t understand the question or don’t have an opinion

An ordered outline that includes your research question, a description of the data you are going to use to answer it, and the exact analyses, step-by-step, that you plan to run to answer your research question.

A process through which the researcher explains the research process, procedures, risks and benefits to a potential participant, usually through a written document, which the participant then signs as evidence of their agreement to participate.

a type of survey question that lists a set of questions for which the response options are all the same in a grid layout

unintended influences on respondents’ answers because they are not related to the content of the item but to the context in which the item appears.

when the order in which the items are presented affects people’s responses

Social desirability bias occurs when we create questions that lead respondents to answer in ways that don't reflect their genuine thoughts or feelings to avoid being perceived negatively.

When respondents have difficulty providing accurate answers to questions due to the passage of time.

Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time.

If the majority of the targeted respondents fail to respond to a survey, a legitimate concern is whether non-respondents failed to respond for some systematic reason, which may raise questions about the validity of the study’s results, especially as this relates to the representativeness of the sample.

Sampling bias is present when our sampling process results in a sample that does not represent our population in some way.

the concept that scores obtained from a measure are similar when employed in different cultural populations

spurious covariance between your independent and dependent variables that is in fact caused by systematic error introduced by culturally insensitive or incompetent research practices

"when the construct measured is not identical across cultures or when behaviors that characterize the construct are not identical across cultures" (Meiring et al., 2005, p. 2)

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as  focus groups , cognitive interviews, pretesting (often using an  online, opt-in sample ), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see  question wording  and  question order  for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services ) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based off that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized so that the options are not presented in the same order to each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.
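This kind of per-respondent randomization can be sketched in a few lines of Python. The issue labels below are illustrative placeholders, not the actual items or code used in any Pew survey:

```python
import random

def randomized_options(options, rng):
    """Return a fresh random ordering of the response options for one respondent."""
    return rng.sample(options, k=len(options))

rng = random.Random()

# Illustrative issue list; every respondent sees the same items, in a different order.
issues = ["the economy", "health care", "energy policy", "terrorism", "the environment"]

for respondent_id in range(3):
    shown = randomized_options(issues, rng)
    # In practice, record `shown` alongside the answer so order effects can be audited.
    print(respondent_id, shown)
```

Because `rng.sample` draws without replacement, each respondent sees all of the options exactly once; only the presentation order varies.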

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
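The half-sample scale reversal can be sketched the same way. The even/odd assignment rule below is a stand-in for whatever random split the survey software actually uses:

```python
# Ordinal scale from the abortion question discussed above.
ABORTION_SCALE = ["legal in all cases", "legal in most cases",
                  "illegal in most cases", "illegal in all cases"]

def scale_for(respondent_index):
    """Show half the sample the scale in forward order and half reversed,
    keeping the underlying continuum intact either way."""
    if respondent_index % 2 == 0:
        return list(ABORTION_SCALE)
    return list(reversed(ABORTION_SCALE))
```

Note that the categories are only ever reversed, never shuffled, so the ordinal continuum stays readable for every respondent.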

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.

[View more Methods 101 Videos ]

An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two “forms” of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
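The two-form design can be sketched in a few lines. Everything below is invented for illustration — the pool of 1,000 respondents and the “favor” counts are hypothetical, loosely echoing the “welfare” versus “assistance to the poor” experiments described earlier:

```python
import random

# Hypothetical sketch of a two-form (split-ballot) wording experiment.
# The respondent pool and the response counts below are invented.
random.seed(42)

respondents = list(range(1000))
random.shuffle(respondents)

# Random assignment: each respondent is equally likely to get either form,
# so the two half-samples should be essentially identical in composition.
form_a_group = respondents[:500]   # asked version A of the question
form_b_group = respondents[500:]   # asked version B of the question

# After fielding, compare the share giving a particular answer on each form.
favor_a, favor_b = 325, 195        # invented counts of "favor" responses
pct_a = 100 * favor_a / len(form_a_group)
pct_b = 100 * favor_b / len(form_b_group)
print(f"Form A: {pct_a:.0f}%  Form B: {pct_b:.0f}%  difference: {pct_a - pct_b:.0f} points")
```

Because assignment to forms is random, a significant gap between the two percentages can be attributed to the wording itself rather than to differences between the groups.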

One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).

An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.

Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order in which questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see measuring change over time for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.

ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

Copyright 2024 Pew Research Center

SSRIC

Chapter 3 -- Survey Research Design and Quantitative Methods of Analysis for Cross-sectional Data

Almost everyone has had experience with surveys. Market surveys ask respondents whether they recognize products and how they feel about them. Political polls ask questions about candidates for political office or about opinions related to political and social issues. Needs assessments use surveys to identify the needs of groups, and evaluations often use surveys to assess the extent to which programs achieve their goals.

Survey research is a method of collecting information by asking questions. Sometimes interviews are done face-to-face with people at home, in school, or at work. Other times questions are sent in the mail for people to answer and mail back. Increasingly, surveys are conducted by telephone.

SAMPLE SURVEYS

Although we may want information on all people, it is usually too expensive and time consuming to question everyone, so we select only some of these individuals and question them. It is important to select these people in ways that make it likely that they represent the larger group.

The population is all the individuals in whom we are interested. (A population does not always consist of individuals. Sometimes it may be geographical areas, such as all cities with populations of 100,000 or more, or we may be interested in all households in a particular area. In the data used in the exercises of this module, the population consists of individuals who are California residents.) A sample is the subset of the population involved in a study; in other words, a sample is part of the population. The process of selecting the sample is called sampling. The idea of sampling is to select part of the population to represent the entire population.

The United States Census is a good example of sampling. The census tries to enumerate all residents every ten years with a short questionnaire. Approximately every fifth household is given a longer questionnaire, and information from this sample (i.e., every fifth household) is used to make inferences about the population.
Political polls also use samples. To find out how potential voters feel about a particular race, pollsters select a sample of potential voters. This module uses opinions from three samples of California residents age 18 and over. The data were collected during July 1985, September 1991, and February 1995 by the Field Research Corporation (The Field Institute 1985, 1991, 1995), a widely respected survey research firm used extensively by the media, politicians, and academic researchers.

Since a survey can be no better than the quality of its sample, it is essential to understand the basic principles of sampling. There are two types of sampling: probability and nonprobability. A probability sample is one in which each individual in the population has a known, nonzero chance of being selected in the sample. The most basic type is the simple random sample. In a simple random sample, every individual (and every combination of individuals) has the same chance of being selected. This is the equivalent of writing each person's name on a piece of paper, putting the names in plastic balls, mixing all the balls thoroughly in a big bowl, and selecting some predetermined number of balls from the bowl.

The simple random sample assumes that we can list all the individuals in the population, but often this is impossible. If our population were all the households or residents of California, there would be no list available, and it would be very expensive and time consuming to construct one. In this situation, a multistage cluster sample would be used. The idea is simple: if we wanted to draw a sample of all residents of California, we might start by dividing California into large geographical areas, such as counties, and selecting a sample of these counties.
Our sample of counties could then be divided into smaller geographical areas, such as blocks, and a sample of blocks would be selected. We could then construct a list of all households for only those blocks in the sample. Finally, we would go to these households and randomly select one member of each household for our sample. Once the household and the member of that household have been selected, substitution is not allowed. This often means that we must call back several times, but this is the price we must pay for a good sample.

The Field Poll used in this module is a telephone survey. It is a probability sample using a technique called random-digit dialing. With random-digit dialing, phone numbers are dialed randomly within working exchanges (i.e., the first three digits of the telephone number). Numbers are selected in such a way that all areas have the proper proportional chance of being selected in the sample. Random-digit dialing makes it possible to include numbers that are not listed in the telephone directory and households that have moved into an area so recently that they are not included in the current directory.

A nonprobability sample is one in which each individual in the population does not have a known chance of selection. There are several types of nonprobability samples. For example, magazines often include questionnaires for readers to fill out and return. This is a volunteer sample, since respondents select themselves into the sample (i.e., they volunteer to be in it). Another type of nonprobability sample is a quota sample, in which survey researchers assign quotas to interviewers. For example, interviewers might be told that half of their respondents must be female and the other half male; this is a quota on sex. We could also have quotas on several variables (e.g., sex and race) simultaneously. Probability samples are preferable to nonprobability samples.
First, they avoid the dangers of what survey researchers call "systematic selection biases," which are inherent in nonprobability samples. For example, in a volunteer sample, particular types of persons might be more likely to volunteer. Perhaps highly educated individuals are more likely to volunteer to be in the sample, and this would produce a systematic selection bias in favor of the highly educated. In a probability sample, the selection of the actual cases in the sample is left to chance. Second, in a probability sample we are able to estimate the amount of sampling error, our next concept to discuss.

We would like our sample to give us a perfectly accurate picture of the population, but this is unrealistic. Assume that the population is all employees of a large corporation, and we want to estimate the percent of employees in the population who are satisfied with their jobs. We select a simple random sample of 500 employees and ask the individuals in the sample how satisfied they are with their jobs. We discover that 75 percent of the employees in our sample are satisfied. Can we assume that 75 percent of the population is satisfied? That would be asking too much. Why would we expect one sample of 500 to give us a perfect representation of the population? We could take several different samples of 500 employees, and the percent satisfied would vary from sample to sample. There will be a certain amount of error as a result of selecting a sample from the population. We refer to this as sampling error. Sampling error can be estimated in a probability sample, but not in a nonprobability sample.

It would be wrong to assume that the only reason our sample estimate differs from the true population value is sampling error. There are many other sources of error, called nonsampling error.
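The employee example above can be simulated directly. In the sketch below, `random.sample` draws a simple random sample without replacement; the population size (10,000) and the seed are arbitrary choices for illustration, with exactly 75 percent of the hypothetical employees satisfied:

```python
import random

# Sketch of sampling error: a made-up corporation of 10,000 employees in
# which exactly 75 percent are satisfied with their jobs.
random.seed(7)
population = [True] * 7500 + [False] * 2500

# Draw 200 independent simple random samples of 500 employees each and
# record the percent satisfied in each sample.
estimates = []
for _ in range(200):
    sample = random.sample(population, 500)   # selection without replacement
    estimates.append(100 * sum(sample) / 500)

# No single sample matches the population exactly; the estimates scatter
# around 75 percent, and that scatter is the sampling error.
print(min(estimates), sum(estimates) / len(estimates), max(estimates))
```

Each run of the loop is a fresh simple random sample, so the spread of the 200 estimates around 75 percent is exactly the sample-to-sample variation the text describes.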
Nonsampling error includes such things as the effects of biased questions, the tendency of respondents to systematically underestimate such things as age, the exclusion of certain types of people from the sample (e.g., those without phones or without permanent addresses), or the tendency of some respondents to systematically agree to statements regardless of their content. In some studies, the amount of nonsampling error might be far greater than the amount of sampling error. Notice that sampling error is random in nature, while nonsampling error may be nonrandom, producing systematic biases. We can estimate the amount of sampling error (assuming probability sampling), but it is much more difficult to estimate nonsampling error. We can never eliminate sampling error entirely, and it is unrealistic to expect that we could ever eliminate nonsampling error. It is good research practice to be diligent in seeking out sources of nonsampling error and trying to minimize them.

DATA ANALYSIS

Examining Variables One at a Time (Univariate Analysis)

The rest of this chapter will deal with the analysis of survey data. Data analysis involves looking at variables or "things" that vary or change. A variable is a characteristic of the individual (assuming we are studying individuals). The answer to each question on the survey forms a variable. For example, sex is a variable: some individuals in the sample are male and some are female. Age is a variable; individuals vary in their ages.

Looking at variables one at a time is called univariate analysis. This is the usual starting point in analyzing survey data. There are several reasons to look at variables one at a time. First, we want to describe the data. How many of our sample are men and how many are women? How many are black and how many are white? What is the distribution by age? How many say they are going to vote for Candidate A and how many for Candidate B?
How many respondents agree and how many disagree with a statement describing a particular opinion?

Another reason to look at variables one at a time involves recoding. Recoding is the process of combining categories within a variable. Consider age, for example. In the data set used in this module, age varies from 18 to 89, but we would want to use fewer categories in our analysis, so we might combine ages into 18 to 29, 30 to 49, and 50 and over. We might want to combine African Americans with the other races to classify race into only two categories: white and nonwhite. Recoding is used to reduce the number of categories in a variable (e.g., age) or to combine categories so that particular types of comparisons can be made (e.g., white versus nonwhite).

The frequency distribution is one of the basic tools for looking at variables one at a time. A frequency distribution is the set of categories and the number of cases in each category. Percent distributions show the percentage in each category. Table 3.1 shows frequency and percent distributions for two hypothetical variables: one for sex and one for willingness to vote for a woman candidate.

Table 3.1 -- Frequency and Percent Distributions for Sex and Willingness to Vote for a Woman Candidate (Hypothetical Data)

  Sex
  Category    Freq.    Percent
  Male          380       40.0
  Female        570       60.0
  Total         950      100.0

  Voting Preference
  Category                           Freq.    Percent    Valid Percent
  Willing to Vote for a Woman          460       48.4             51.1
  Not Willing to Vote for a Woman      440       46.3             48.9
  Refused                               50        5.3          Missing
  Total                                950      100.0            100.0

Begin by looking at the frequency distribution for sex. There are three columns: the first specifies the categories (male and female), the second tells us how many cases there are in each category, and the third converts these frequencies into percents. In this hypothetical example, there are 380 males and 570 females, or 40 percent male and 60 percent female.
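The percent and valid-percent columns of Table 3.1 can be reproduced in a few lines; the valid percents simply drop the 50 "refused" cases from the base before recomputing:

```python
# Frequencies for voting preference, taken from Table 3.1.
counts = {
    "Willing to Vote for a Woman": 460,
    "Not Willing to Vote for a Woman": 440,
    "Refused": 50,                      # missing data: no substantive answer
}

total = sum(counts.values())            # 950 cases in all
valid = total - counts["Refused"]       # 900 cases actually answered

for category, freq in counts.items():
    percent = 100 * freq / total        # percent of all 950 cases
    if category == "Refused":
        print(f"{category:33} {percent:5.1f}   Missing")
    else:
        valid_percent = 100 * freq / valid   # percent of the 900 who answered
        print(f"{category:33} {percent:5.1f}   {valid_percent:5.1f}")
```

Running this reproduces the 48.4/46.3/5.3 percent column and the 51.1/48.9 valid-percent column shown in the table.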
There are a total of 950 cases. Since we know the sex of each case, there are no missing data (i.e., no cases where we do not know the proper category).

Now look at the frequency distribution for voting preference in Table 3.1. How many say they are willing to vote for a woman candidate and how many are unwilling? (Answer: 460 willing and 440 not willing.) How many refused to answer the question? (Answer: 50.) What percent say they are willing to vote for a woman, what percent are not, and what percent refused to answer? (Answer: 48.4 percent willing, 46.3 percent not willing, and 5.3 percent refused to tell us.)

The 50 respondents who didn't want to answer the question are called missing data because we don't know into which category to place them, so we create a new category (i.e., refused) for them. Since we don't know where they should go, we might want a percentage distribution considering only the 900 respondents who answered the question. We can determine this easily by taking the 50 cases with missing information out of the base (i.e., the denominator of the fraction) and recomputing the percentages. The fourth column in the frequency distribution (labeled "valid percent") gives us this information: approximately 51 percent of those who answered the question were willing to vote for a woman and approximately 49 percent were not.

With these data we will use frequency distributions to describe variables one at a time. There are other ways to describe single variables. The mean, median, and mode are averages that may be used to describe the central tendency of a distribution. The range and standard deviation are measures of the amount of variability or dispersion of a distribution. (We will not be using measures of central tendency or variability in this module.)

Exploring the Relationship Between Two Variables (Bivariate Analysis)

Usually we want to do more than simply describe variables one at a time.
We may want to analyze the relationship between variables. Morris Rosenberg (1968:2) suggests that there are three types of relationships: "(1) neither variable may influence one another .... (2) both variables may influence one another ... (3) one of the variables may influence the other." We will focus on the third of these types, which Rosenberg calls "asymmetrical relationships." In this type of relationship, one of the variables (the independent variable) is assumed to be the cause and the other variable (the dependent variable) is assumed to be the effect. In other words, the independent variable is the factor that influences the dependent variable.

For example, researchers think that smoking causes lung cancer. The statement that specifies the relationship between two variables is called a hypothesis (see Hoover 1992 for a more extended discussion of hypotheses). In this hypothesis, the independent variable is smoking (or, more precisely, the amount one smokes) and the dependent variable is lung cancer. Consider another example: political analysts think that income influences voting decisions, that rich people vote differently from poor people. In this hypothesis, income would be the independent variable and voting the dependent variable.

In order to demonstrate that a causal relationship exists between two variables, we must meet three criteria: (1) there must be a statistical relationship between the two variables, (2) we must be able to demonstrate which one of the variables influences the other, and (3) we must be able to show that there is no other alternative explanation for the relationship. As you can imagine, it is impossible to show that there is no alternative explanation for a relationship. For this reason, we can show that one variable does not influence another variable, but we cannot prove that it does. We can only show that it is more plausible or credible to believe that a causal relationship exists.
In this section, we will focus on the first two criteria and leave the third criterion to the next section. In the previous section we looked at the frequency distributions for sex and voting preference. All we can say from those two distributions is that the sample is 40 percent men and 60 percent women and that slightly more than half of the respondents said they would be willing to vote for a woman, while slightly less than half were not. We cannot say anything about the relationship between sex and voting preference. In order to determine whether men or women are more likely to be willing to vote for a woman candidate, we must move from univariate to bivariate analysis.

A crosstabulation (or contingency table) is the basic tool used to explore the relationship between two variables. Table 3.2 is the crosstabulation of sex and voting preference. In the lower right-hand corner is the total number of cases in the table (900). Notice that this is not the number of cases in the sample. There were originally 950 cases, but any case with missing information on either or both of the two variables has been excluded from the table. Be sure to check how many cases have been excluded from your table, to indicate this figure in your report, and to understand why these cases were excluded.

The figures in the lower margin and right-hand margin of the table are called the marginal distributions. They are simply the frequency distributions for the two variables in the whole table. Here, there are 360 males and 540 females (the marginal distribution for the column variable, sex) and 460 people who are willing to vote for a woman candidate and 440 who are not (the marginal distribution for the row variable, voting preference). The other figures in the table are the cell frequencies. Since there are two columns and two rows in this table (sometimes called a 2 x 2 table), there are four cells.
The numbers in these cells tell us how many cases fall into each combination of categories of the two variables. This sounds complicated, but it isn't. For example, 158 males are willing to vote for a woman and 302 females are willing to vote for a woman.

Table 3.2 -- Crosstabulation of Sex and Voting Preference (Frequencies)

  Voting Preference                   Male    Female    Total
  Willing to Vote for a Woman          158       302      460
  Not Willing to Vote for a Woman      202       238      440
  Total                                360       540      900

We could make comparisons rather easily if we had an equal number of women and men. Since these numbers are not equal, we must use percentages to help us make the comparisons. Since percentages convert everything to a common base of 100, the percent distribution shows us what the table would look like if there were an equal number of men and women.

Before we percentage Table 3.2, we must decide which of these two variables is the independent variable and which is the dependent variable. Remember that the independent variable is the variable we think might be the influencing factor: the independent variable is hypothesized to be the cause, and the dependent variable the effect. Another way to express this is to say that the dependent variable is the one we want to explain. Since we think that sex influences willingness to vote for a woman candidate, sex is the independent variable.

Once we have decided which is the independent variable, we are ready to percentage the table. Notice that percentages can be computed in different ways. In Table 3.3, the percentages have been computed so that they sum down to 100; these are called column percents. If they sum across to 100, they are called row percents. If the independent variable is the column variable, then we want the percents to sum down to 100 (i.e., we want the column percents). If the independent variable is the row variable, we want the percents to sum across to 100 (i.e., we want the row percents).
This is a simple but very important rule to remember; we'll call it our rule for computing percents. Although we often see the independent variable as the column variable so that the table sums down to 100 percent, it really doesn't matter whether the independent variable is the column or the row variable. In this module, we will put the independent variable as the column variable. Many others (but not everyone) use this convention, and it would be helpful if you did the same in your report.

Table 3.3 -- Voting Preference by Sex (Percents)

  Voting Preference                   Male    Female    Total
  Willing to Vote for a Woman         43.9      55.9     51.1
  Not Willing to Vote for a Woman     56.1      44.1     48.9
  Total Percent                      100.0     100.0    100.0
  (Total Frequency)                  (360)     (540)    (900)

Now we are ready to interpret this table. Interpreting a table means explaining what the table says about the relationship between the two variables. First, we look at each category of the independent variable separately to describe the data, and then we compare the categories to each other. Since the percents sum down to 100 percent, we describe down and compare across. The rule for interpreting percents is to compare in the direction opposite to the way the percents sum to 100: if the percents sum down to 100, compare across, and if they sum across to 100, compare down. If the independent variable is the column variable, the percents will always sum down to 100, so we describe down and then compare across.

In Table 3.3, row one shows the percent of males and the percent of females who are willing to vote for a woman candidate: 43.9 percent of males are willing to vote for a woman, while 55.9 percent of females are, a difference of 12 percentage points. Somewhat more females than males are willing to vote for a woman.
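The column percents in Table 3.3 come straight from the cell frequencies in Table 3.2; dividing each cell by its column total puts both sexes on a common base of 100:

```python
# Cell frequencies from Table 3.2, with sex as the column (independent) variable.
cells = {
    "Male":   {"Willing": 158, "Not willing": 202},
    "Female": {"Willing": 302, "Not willing": 238},
}

# Column percents: each cell divided by its column total, so each
# column sums to 100 and we compare across the columns.
for sex, column in cells.items():
    col_total = sum(column.values())
    for preference, freq in column.items():
        print(f"{sex:6} {preference:12} {100 * freq / col_total:5.1f}")
```

This reproduces the 43.9 versus 55.9 comparison in the first row (and its complement, 56.1 versus 44.1, in the second).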
The second row shows the percent of males and females who are not willing to vote for a woman. Since there are only two rows, the second row is the complement (or reverse) of the first row. It shows that males are somewhat more likely to be unwilling to vote for a woman candidate (a difference of 12 percentage points in the opposite direction).

When we observe a difference, we must also decide whether it is significant. There are two different meanings of significance: statistical significance and substantive significance. Statistical significance considers whether the difference is great enough that it is probably not due to chance factors. Substantive significance considers whether a difference is large enough to be important. With a very large sample, a very small difference is often statistically significant, but that difference may be so small that we decide it isn't substantively significant (i.e., it's so small that we decide it doesn't mean very much). We're going to focus on statistical significance, but remember that even if a difference is statistically significant, you must also decide whether it is substantively significant.

Let's discuss this idea of statistical significance. If our population is all men and women of voting age in California, we want to know whether there is a relationship between sex and voting preference in that population. All we have is information about a sample from the population. We use the sample information to make an inference about the population; this is called statistical inference. We know that our sample is not a perfect representation of the population because of sampling error, so we would not expect the relationship we see in our sample to be exactly the same as the relationship in the population.

Suppose we want to know whether there is a relationship between sex and voting preference in the population.
It is impossible to prove this directly, so we have to demonstrate it indirectly. We set up a hypothesis (called the null hypothesis) that says that sex and voting preference are not related to each other in the population. This basically says that any difference we see is likely to be the result of random variation. If the difference is large enough that it is not likely to be due to chance, we can reject this null hypothesis of only random differences. Then the hypothesis that they are related (called the alternative or research hypothesis) will be more credible.
In the first column of Table 3.4, we have listed the four cell frequencies from the crosstabulation of sex and voting preference. We'll call these the observed frequencies (f_o) because they are what we observe from our table. In the second column, we have listed the frequencies we would expect if, in fact, there is no relationship between sex and voting preference in the population. These are called the expected frequencies (f_e).

We'll briefly explain how these expected frequencies are obtained. Notice from Table 3.1 that 51.1 percent of the sample were willing to vote for a woman candidate, while 48.9 percent were not. If sex and voting preference are independent (i.e., not related), we should find the same percentages for males and females. In other words, 48.9 percent (or 176) of the males and 48.9 percent (or 264) of the females would be unwilling to vote for a woman candidate. (This explanation is adapted from Norusis 1997.)

Now we want to compare these two sets of frequencies to see if the observed frequencies are really like the expected frequencies. All we do is subtract the expected from the observed frequencies (column three). We are interested in the sum of these differences for all cells in the table. Since they always sum to zero, we square the differences (column four) to get positive numbers. Finally, we divide this squared difference by the expected frequency (column five). (Don't worry about why we do this. The reasons are technical and don't add to your understanding.)

The sum of column five (12.52) is called the chi square statistic. If the observed and the expected frequencies are identical (no difference), chi square will be zero. The greater the difference between the observed and expected frequencies, the larger the chi square. If we get a large chi square, we are willing to reject the null hypothesis. How large does the chi square have to be?
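The computation of expected frequencies and chi square just described can be sketched in a few lines of code. The observed counts below are reconstructed from the percentages in Table 3.3 (43.9 percent of 360 males and 55.9 percent of 540 females willing to vote for a woman); they are an illustrative assumption, not the original data file.

```python
# Chi-square test of independence for the 2x2 sex-by-voting-preference table.
# Observed counts are reconstructed from the percentages in Table 3.3.
observed = [
    [158, 302],  # willing to vote for a woman: males, females
    [202, 238],  # not willing to vote for a woman: males, females
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, f_o in enumerate(row):
        # Expected frequency under the null hypothesis of no relationship:
        # (row total * column total) / grand total
        f_e = row_totals[i] * col_totals[j] / grand_total
        chi_square += (f_o - f_e) ** 2 / f_e

print(round(chi_square, 2))  # 12.52, matching the value in the text
```

Note that the expected frequencies produced here (176 for males and 264 for females unwilling to vote for a woman) match the figures given above.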
We reject the null hypothesis of no relationship between the two variables when the probability of getting a chi square this large or larger by chance is so small that the null hypothesis is very unlikely to be true. That is, we reject it if a chi square this large would rarely occur by chance (usually less than once in a hundred or less than five times in a hundred). In this example, the probability of getting a chi square as large as 12.52 or larger by chance is less than one in a thousand. This is so unlikely that we reject the null hypothesis, and we conclude that the alternative hypothesis (i.e., there is a relationship between sex and voting preference) is credible (not that it is necessarily true, but that it is credible).

There is always a small chance that the null hypothesis is true even when we decide to reject it. In other words, we can never be sure that it is false. We can only conclude that there is little chance that it is true.

Just because we have concluded that there is a relationship between sex and voting preference does not mean that it is a strong relationship. It might be a moderate or even a weak relationship. There are many statistics that measure the strength of the relationship between two variables. Chi square is not a measure of the strength of the relationship; it just helps us decide whether there is a basis for saying a relationship exists, regardless of its strength. Measures of association estimate the strength of the relationship and are often used with chi square. (See Appendix D for a discussion of how to compute the two measures of association discussed below.)

Cramer's V is a measure of association appropriate when one or both of the variables consists of unordered categories. For example, race (white, African American, other) or religion (Protestant, Catholic, Jewish, other, none) are variables with unordered categories. Cramer's V is a measure based on chi square. It ranges from zero to one.
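As an illustration of the scale, Cramer's V for the voting-preference table can be computed directly from the chi square statistic using the standard formula V = sqrt(chi² / (n × min(rows − 1, columns − 1))). The chi square value and sample size come from the worked example in the text; the computation itself is a sketch, not the procedure given in Appendix D.

```python
import math

# Cramer's V for the 2x2 sex-by-voting-preference table,
# using the chi square statistic (12.52) and n (900) from the text.
chi_square = 12.52
n = 900
rows, cols = 2, 2

cramers_v = math.sqrt(chi_square / (n * min(rows - 1, cols - 1)))
print(round(cramers_v, 3))  # 0.118: a relationship exists, but a weak one
```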
The closer to zero, the weaker the relationship; the closer to one, the stronger the relationship.

Gamma (sometimes referred to as Goodman and Kruskal's Gamma) is a measure of association appropriate when both of the variables consist of ordered categories. For example, if respondents answer that they strongly agree, agree, disagree, or strongly disagree with a statement, their responses are ordered. Similarly, if we group age into categories such as under 30, 30 to 49, and 50 and over, these categories are ordered. Ordered categories can logically be arranged in only two ways: low to high or high to low. Gamma ranges from zero to one, but can be positive or negative. For this module, the sign of Gamma has no meaning, so ignore the sign and focus on the numerical value. Like V, the closer to zero, the weaker the relationship, and the closer to one, the stronger the relationship.

Choosing whether to use Cramer's V or Gamma depends on whether the categories of the variable are ordered or unordered. However, dichotomies (variables consisting of only two categories) may be treated as if they are ordered even if they are not. For example, sex is a dichotomy consisting of the categories male and female. There are only two possible ways to order sex: male, female and female, male. Or, race may be classified into two categories: white and nonwhite. We can treat dichotomies as if they consisted of ordered categories because they can be ordered in only two ways. In other words, when one of the variables is a dichotomy, treat it as ordinal and use Gamma. This is important when choosing an appropriate measure of association.

In this chapter we have described how surveys are done and how we analyze the relationship between two variables. In the next chapter we will explore how to introduce additional variables into the analysis.

REFERENCES AND SUGGESTED READING

Methods of Social Research

Riley, Matilda White. 1963. Sociological Research I: A Case Approach. New York: Harcourt, Brace and World.

Hoover, Kenneth R. 1992. The Elements of Social Scientific Thinking (5th Ed.). New York: St. Martin's.

Interviewing

Gorden, Raymond L. 1987. Interviewing: Strategy, Techniques and Tactics. Chicago: Dorsey.

Survey Research and Sampling

Babbie, Earl R. 1990. Survey Research Methods (2nd Ed.). Belmont, CA: Wadsworth.

Babbie, Earl R. 1997. The Practice of Social Research (8th Ed.). Belmont, CA: Wadsworth.

Statistical Analysis

Knoke, David, and George W. Bohrnstedt. 1991. Basic Social Statistics. Itasca, IL: Peacock.

Riley, Matilda White. 1963. Sociological Research II: Exercises and Manual. New York: Harcourt, Brace & World.

Norusis, Marija J. 1997. SPSS 7.5 Guide to Data Analysis. Upper Saddle River, NJ: Prentice Hall.

Data Sources

The Field Institute. 1985. California Field Poll Study, July, 1985. Machine-readable codebook.

The Field Institute. 1991. California Field Poll Study, September, 1991. Machine-readable codebook.

The Field Institute. 1995. California Field Poll Study, February, 1995. Machine-readable codebook.

Survey descriptive research: Method, design, and examples

  • November 2, 2022

What is survey descriptive research?

This article covers the following topics:

  • The observational method: monitor people while they engage with a subject
  • The case study method: gain an in-depth understanding of a subject
  • Survey descriptive research: easy and cost-effective
  • Types of descriptive research design
  • What is the descriptive survey research design definition by authors?
  • 1. Quantitativeness and qualitativeness
  • 2. Uncontrolled variables
  • 3. Natural environment
  • 4. Provides a solid basis for further research
  • Describe a group and define its characteristics
  • Measure data trends by conducting descriptive marketing research
  • Understand how customers perceive a brand
  • Descriptive survey research design: how to make the best descriptive questionnaire
  • Create descriptive surveys with SurveyPlanet

Survey descriptive research is a quantitative method that focuses on describing the characteristics of a phenomenon rather than asking why it occurs. Doing this provides a better understanding of the nature of the subject at hand and creates a good foundation for further research.

Descriptive market research is one of the most commonly used ways of examining trends and changes in the market. It is easy, low-cost, and provides valuable in-depth information on a chosen subject.

This article will examine the basic principles of the descriptive survey study and show how to make the best descriptive survey questionnaire and how to conduct effective research.

It is often said to be quantitative research that focuses more on the what, how, when, and where instead of the why. But what does that actually mean?

The answer is simple. By conducting descriptive survey research, the nature of a phenomenon is focused upon without asking about what causes it.

The main goal of survey descriptive research is to shed light on the heart of the research problem and better understand it. The technique provides in-depth knowledge of what the research problem is before investigating why it exists.

Survey descriptive research and data collection methods

Descriptive research methods can differ based on data collection. We distinguish three main data collection methods: case study, observational method, and descriptive survey method.

Of these, the descriptive survey research method is most commonly used in fields such as market research, social research, psychology, politics, etc.

The observational method, sometimes also called the observational descriptive method, simply means monitoring people while they engage with a particular subject. The aim is to examine people's real-life behavior by maintaining a natural environment that does not change the respondents' behavior, because they do not know they are being observed.

It is often used in fields such as market research, psychology, or social research. For example, customers can be monitored while dining at a restaurant or browsing through the products in a shop.

When doing case studies, researchers conduct thorough examinations of individuals or groups. The case study method is not used to collect general information on a particular subject. Instead, it provides an in-depth understanding of a particular subject and can give rise to interesting conclusions and new hypotheses.

The term case study can also refer to a sample group: a specific group of people who are examined, after which the findings are generalized to a larger group. However, this kind of generalization is risky because it is not always accurate.

Additionally, case studies cannot be used to determine cause and effect because of potential bias on the researcher’s part.

The survey descriptive research method consists of creating questionnaires or polls and distributing them to respondents, who then answer the questions (usually a mix of open-ended and closed-ended).

Surveys are the easiest and most cost-efficient way to gain feedback on a particular topic. They can be conducted online or offline, the size of the sample is highly flexible, and they can be distributed through many different channels.

When doing market research , use such surveys to understand the demographic of a certain market or population, better determine the target audience, keep track of the changes in the market, and learn about customer experience and satisfaction with products and services.

Several types of survey descriptive research are classified based on the approach used:

  • Descriptive surveys gather information about a certain subject.
  • Descriptive-normative surveys gather information just like a descriptive survey, after which results are compared with a norm.
  • Correlative surveys explore the relationship between two variables and conclude if it is positive, neutral, or negative.

A descriptive survey research design is a methodology used in social science and other fields to gather information and describe the characteristics, behaviors, or attitudes of a particular population or group of interest. While there may not be a single definition provided by specific authors, the concept is widely understood and defined similarly across the literature.

Here’s a general definition that captures the essence of the descriptive survey research design as described by various authors:

A descriptive survey research design is a systematic and structured approach to collecting data from a sample of individuals or entities within a larger population, with the primary aim of providing a detailed and accurate description of the characteristics, behaviors, opinions, or attitudes that exist within the target group. This method involves the use of surveys, questionnaires, interviews, or observations to collect data, which is then analyzed and summarized to draw conclusions about the population of interest.

It’s important to note that descriptive survey research is often used when researchers want to gain insights into a population or phenomenon, but without manipulating variables or testing hypotheses, as is common in experimental research. Instead, it focuses on providing a comprehensive overview of the subject under investigation. Researchers often use various statistical and analytical techniques to summarize and interpret the collected data in descriptive survey research.
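The summarizing step mentioned above often starts with simple frequency counts and percentages for each closed-ended item. A minimal sketch, with invented responses:

```python
# Summarize one closed-ended survey item as frequency counts and percentages.
# The response list is invented for illustration.
from collections import Counter

responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]
counts = Counter(responses)
total = len(responses)

for answer, n in counts.most_common():
    print(f"{answer}: {n} ({100 * n / total:.1f}%)")
```

From such a table of counts, a researcher can describe the distribution of answers without manipulating any variables, which is exactly the descriptive stance discussed above.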

The characteristics and advantages of a descriptive survey questionnaire

There are numerous advantages to using a descriptive survey design. First of all, it is cheap and easy to conduct. A large sample can be surveyed and extensive data gathered quickly and inexpensively.

The data collected provides both quantitative and qualitative information, giving a holistic understanding of the topic. Moreover, it can be used in further research on this or related topics.

Here are some of the most important advantages of conducting survey descriptive research:

The descriptive survey research design uses both quantitative and qualitative research methods. It is used primarily to conduct quantitative research and gather data that is statistically easy to analyze. However, it can also provide qualitative data that helps describe and understand the research subject.

Descriptive research explores more than one variable. However, unlike experimental research, a descriptive survey research design doesn’t allow the researcher to control the variables; observational methods are used instead. Even though these variables can change and have an unexpected impact on an inquiry, leaving them uncontrolled gives access to more honest responses.

Descriptive research is conducted in a natural environment. As a result, the answers gathered are more honest because the research setting does not influence them.

The data collected through descriptive research can be used to further explore the same or related subjects. Additionally, it can help develop the next line of research and the best method to use moving forward.

Descriptive survey example: When to use a descriptive research questionnaire?

Descriptive research design can be used for many purposes. It is mainly utilized to define the characteristics of a certain phenomenon, examine the correlations between variables, and generate hypotheses for further study.

Market research is one of the main fields in which descriptive methods are used to conduct studies. Here’s what can be done using this method:

Understanding the needs of customers and their desires is the key to a business’s success. By truly understanding these, it will be possible to offer exactly what customers need and prevent them from turning to competitors.

By using a descriptive survey, different customer characteristics—such as traits, opinions, or behavior patterns—can be determined. With this data, different customer types can be defined and profiles developed that focus on their interests and the behavior they exhibit. This information can be used to develop new products and services that will be successful.

Measuring data trends is extremely important. Explore the market and get valuable insights into how consumers’ interests change over time—as well as how the competition is performing in the marketplace.

Over time, the data gathered from a descriptive questionnaire can be subjected to statistical analysis. This will deliver valuable insights.

Another important aspect to consider is brand awareness. People need to know about your brand, and they need to have a positive opinion of it. The best way to discover their perception is to conduct a brand survey , which gives deeper insight into brand awareness, perception, identity, and customer loyalty .

When conducting survey descriptive research, there are a few basic steps that are needed for a survey to be successful:

  • Define the research goals.
  • Decide on the research method.
  • Define the sample population.
  • Design the questionnaire.
  • Write specific questions.
  • Distribute the questionnaire.
  • Analyze the data .
  • Make a survey report.

First of all, define the research goals. Setting up clear objectives guides every other step, resulting in an effective descriptive questionnaire that collects only valuable data.

Next, decide on the research method to use—in this case, the descriptive survey method. Then, define the sample population (that is, the target audience). After that, think about the design itself and the questions that will be asked in the survey.

If you’re not sure where to start, we’ve got you covered. As free survey software, SurveyPlanet offers pre-made themes that are clean and eye-catching, as well as pre-made questions that will save you the trouble of making new ones.

Simply scroll through our library and choose a descriptive survey questionnaire sample that best suits your needs, or use our user-friendly interface to create bespoke questions through a process that is easy and efficient.

With a survey in hand, it will then need to be delivered to the target audience. This is easy with our survey embedding feature, which allows for the linking of surveys on a website, via emails, or by sharing on social media.

When all the responses are gathered, it’s time to analyze them. Use SurveyPlanet to easily filter data and do cross-sectional analysis. Finally, just export the results and make a survey report.

Conducting descriptive survey research is the best way to gain a deeper knowledge of a topic of interest and develop a sound basis for further research. Sign up for a free SurveyPlanet account to start improving your business today!



How to Construct a Mixed Methods Research Design

Wie man ein Mixed Methods-Forschungsdesign konstruiert

Judith Schoonenboom

1 Institut für Bildungswissenschaft, Universität Wien, Sensengasse 3a, 1090 Wien, Austria

R. Burke Johnson

2 Department of Professional Studies, University of South Alabama, UCOM 3700, 36688-0002 Mobile, AL USA

This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.

Zusammenfassung

[Translated from German:] This article provides an overview of how the research design of mixed methods studies should be set up. To construct a mixed methods research design, researchers must carefully weigh all dimensions of combining methods and, from the outset, pay attention to quality and the potential problems associated with it. We explain and discuss the seven dimensions of combining methods that are relevant to research designs: research purpose, the role of theory in the research process, timing (simultaneity and dependency), the points at which integration takes place, typological versus interactive design approaches, planned versus emergent designs, and design complexity. There are also numerous secondary dimensions that must be taken into account when setting up the research design, ten of which we explain. The article concludes with two case examples showing concretely how mixed methods research designs can be constructed.

What is a mixed methods design?

This article addresses the process of selecting and constructing mixed methods research (MMR) designs. The word “design” has at least two distinct meanings in mixed methods research (Maxwell 2013 ). One meaning focuses on the process of design; in this meaning, design is often used as a verb. Someone can be engaged in designing a study (in German: “eine Studie konzipieren” or “eine Studie designen”). Another meaning is that of a product, namely the result of designing. The result of designing as a verb is a mixed methods design as a noun (in German: “das Forschungsdesign” or “Design”), as it has, for example, been described in a journal article. In mixed methods design, both meanings are relevant. To obtain a strong design as a product, one needs to carefully consider a number of rules for designing as an activity. Obeying these rules is not a guarantee of a strong design, but it does contribute to it. A mixed methods design is characterized by the combination of at least one qualitative and one quantitative research component. For the purpose of this article, we use the following definition of mixed methods research (Johnson et al. 2007 , p. 123):

Mixed methods research is the type of research in which a researcher or team of researchers combines elements of qualitative and quantitative research approaches (e. g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding and corroboration.

Mixed methods research (“Mixed Methods” or “MM”) is the sibling of multimethod research (“Methodenkombination”) in which either solely multiple qualitative approaches or solely multiple quantitative approaches are combined.

In a commonly used mixed methods notation system (Morse 1991 ), the components are indicated as qual and quan (or QUAL and QUAN to emphasize primacy), respectively, for qualitative and quantitative research. As discussed below, plus (+) signs refer to concurrent implementation of components (“gleichzeitige Durchführung der Teilstudien” or “paralleles Mixed Methods-Design”) and arrows (→) refer to sequential implementation (“Sequenzielle Durchführung der Teilstudien” or “sequenzielles Mixed Methods-Design”) of components. Note that each research tradition receives an equal number of letters (four) in its abbreviation for equity. In this article, this notation system is used in some depth.

A mixed methods design as a product has several primary characteristics that should be considered during the design process. As shown in Table  1 , the following primary design “dimensions” are emphasized in this article: purpose of mixing, theoretical drive, timing, point of integration, typological use, and degree of complexity. These characteristics are discussed below. We also provide some secondary dimensions to consider when constructing a mixed methods design (Johnson and Christensen 2017 ).

Table 1: List of Primary and Secondary Design Dimensions

On the basis of these dimensions, mixed methods designs can be classified into a mixed methods typology or taxonomy. In the mixed methods literature, various typologies of mixed methods designs have been proposed (for an overview see Creswell and Plano Clark 2011 , p. 69–72).

The overall goal of mixed methods research, of combining qualitative and quantitative research components, is to expand and strengthen a study’s conclusions and, therefore, contribute to the published literature. In all studies, the use of mixed methods should contribute to answering one’s research questions.

Ultimately, mixed methods research is about heightened knowledge and validity. The design as a product should be of sufficient quality to achieve multiple validities legitimation (Johnson and Christensen 2017 ; Onwuegbuzie and Johnson 2006 ), which refers to the mixed methods research study meeting the relevant combination or set of quantitative, qualitative, and mixed methods validities in each research study.

Given this goal of answering the research question(s) with validity, a researcher can nevertheless have various reasons or purposes for wanting to strengthen the research study and its conclusions. Following is the first design dimension for one to consider when designing a study: Given the research question(s), what is the purpose of the mixed methods study?

A popular classification of purposes of mixed methods research was first introduced in 1989 by Greene, Caracelli, and Graham, based on an analysis of published mixed methods studies. This classification is still in use (Greene 2007 ). Greene et al. ( 1989 , p. 259) distinguished the following five purposes for mixing in mixed methods research:

1. Triangulation seeks convergence, corroboration, correspondence of results from different methods;
2. Complementarity seeks elaboration, enhancement, illustration, clarification of the results from one method with the results from the other method;
3. Development seeks to use the results from one method to help develop or inform the other method, where development is broadly construed to include sampling and implementation, as well as measurement decisions;
4. Initiation seeks the discovery of paradox and contradiction, new perspectives of frameworks, the recasting of questions or results from one method with questions or results from the other method;
5. Expansion seeks to extend the breadth and range of inquiry by using different methods for different inquiry components.

In the past 28 years, this classification has been supplemented by several others. On the basis of a review of the reasons for combining qualitative and quantitative research mentioned by the authors of mixed methods studies, Bryman ( 2006 ) formulated a list of more concrete rationales for performing mixed methods research (see Appendix). Bryman’s classification breaks down Greene et al.’s ( 1989 ) categories into several aspects, and he adds a number of additional aspects, such as the following:

(a) Credibility – refers to suggestions that employing both approaches enhances the integrity of findings.
(b) Context – refers to cases in which the combination is justified in terms of qualitative research providing contextual understanding coupled with either generalizable, externally valid findings or broad relationships among variables uncovered through a survey.
(c) Illustration – refers to the use of qualitative data to illustrate quantitative findings, often referred to as putting “meat on the bones” of “dry” quantitative findings.
(d) Utility or improving the usefulness of findings – refers to a suggestion, which is more likely to be prominent among articles with an applied focus, that combining the two approaches will be more useful to practitioners and others.
(e) Confirm and discover – this entails using qualitative data to generate hypotheses and using quantitative research to test them within a single project.
(f) Diversity of views – this includes two slightly different rationales: namely, combining researchers’ and participants’ perspectives through quantitative and qualitative research respectively, and uncovering relationships between variables through quantitative research while also revealing meanings among research participants through qualitative research. (Bryman, p. 106)

Views can be diverse (f) in various ways. Some examples of mixed methods design that include a diversity of views are:

  • Iteratively/sequentially connecting local/idiographic knowledge with national/general/nomothetic knowledge;
  • Learning from different perspectives on teams and in the field and literature;
  • Achieving multiple participation, social justice, and action;
  • Determining what works for whom and the relevance/importance of context;
  • Producing interdisciplinary substantive theory, including/comparing multiple perspectives and data regarding a phenomenon;
  • Juxtaposition-dialogue/comparison-synthesis;
  • Breaking down binaries/dualisms (some of both);
  • Explaining interaction between/among natural and human systems;
  • Explaining complexity.

The number of possible purposes for mixing is very large and is increasing; hence, it is not possible to provide an exhaustive list. Greene et al.’s ( 1989 ) purposes, Bryman’s ( 2006 ) rationales, and our examples of a diversity of views were formulated as classifications on the basis of examination of many existing research studies. They indicate how the qualitative and quantitative research components of a study relate to each other. These purposes can be used post hoc to classify research or a priori in the design of a new study. When designing a mixed methods study, it is sometimes helpful to list the purpose in the title of the study design.

The key point of this section is for the researcher to begin a study with at least one research question and then carefully consider what the purposes for mixing are. One can use mixed methods to examine different aspects of a single research question, or one can use separate but related qualitative and quantitative research questions. In all cases, the mixing of methods, methodologies, and/or paradigms will help answer the research questions and make improvements over a more basic study design. Fuller and richer information will be obtained in the mixed methods study.

Theoretical drive

In addition to a mixing purpose, a mixed methods research study might have an overall “theoretical drive” (Morse and Niehaus 2009 ). When designing a mixed methods study, it is occasionally helpful to list the theoretical drive in the title of the study design. An investigation, in Morse and Niehaus’s ( 2009 ) view, is focused primarily on either exploration-and-description or on testing-and-prediction. In the first case, the theoretical drive is called “inductive” or “qualitative”; in the second case, it is called “deductive” or “quantitative”. In the case of mixed methods, the component that corresponds to the theoretical drive is referred to as the “core” component (“Kernkomponente”), and the other component is called the “supplemental” component (“ergänzende Komponente”). In Morse’s notation system, the core component is written in capitals and the supplemental component is written in lowercase letters. For example, in a QUAL → quan design, more weight is attached to the data coming from the core qualitative component. Due to the decisive character of the core component, the core component must be able to stand on its own, and should be implemented rigorously. The supplemental component does not have to stand on its own.

Although this distinction is useful in some circumstances, we do not advise applying it to every mixed methods design. First, Morse and Niehaus contend that the supplemental component can be done “less rigorously”, but they do not explain which aspects of rigor can be dropped. In addition, the idea of decreased rigor conflicts with one key theme of the present article, namely that mixed methods designs should always meet the criterion of multiple validities legitimation (Onwuegbuzie and Johnson 2006 ).

The idea of theoretical drive as explicated by Morse and Niehaus has been criticized. For example, we view a theoretical drive as a feature not of a whole study, but of a research question, or, more precisely, of an interpretation of a research question. For example, if one study includes multiple research questions, it might include several theoretical drives (Schoonenboom 2016 ).

Another criticism of Morse and Niehaus’ conceptualization of theoretical drive is that it does not allow for equal-status mixed methods research (“Mixed Methods Forschung, bei der qualitative und quantitative Methoden die gleiche Bedeutung haben” or “gleichrangige Mixed Methods-Designs”), in which the qualitative and quantitative components are of equal value and weight; the same criticism applies to Morgan’s ( 2014 ) set of designs. We agree with Greene ( 2015 ) that mixed methods research can be integrated at the levels of method, methodology, and paradigm. In this view, equal-status mixed methods research designs are possible; they result when the qualitative and quantitative components, approaches, and ways of thinking are of equal value, take control of the research process in alternation, are in constant interaction, and produce outcomes that are integrated during and at the end of the research process. Equal-status mixed methods research (which we often advocate) is therefore also called “interactive mixed methods research”.

Mixed methods research can have three different drives, as formulated by Johnson et al. ( 2007 , p. 123):

Qualitative dominant [or qualitatively driven] mixed methods research is the type of mixed research in which one relies on a qualitative, constructivist-poststructuralist-critical view of the research process, while concurrently recognizing that the addition of quantitative data and approaches are likely to benefit most research projects. Quantitative dominant [or quantitatively driven] mixed methods research is the type of mixed research in which one relies on a quantitative, postpositivist view of the research process, while concurrently recognizing that the addition of qualitative data and approaches are likely to benefit most research projects. (p. 124) The area around the center of the [qualitative-quantitative] continuum, equal status , is the home for the person that self-identifies as a mixed methods researcher. This researcher takes as his or her starting point the logic and philosophy of mixed methods research. These mixed methods researchers are likely to believe that qualitative and quantitative data and approaches will add insights as one considers most, if not all, research questions.

We leave it to the reader to decide if he or she desires to conduct a qualitatively driven study, a quantitatively driven study, or an equal-status/“interactive” study. According to the philosophies of pragmatism (Johnson and Onwuegbuzie 2004 ) and dialectical pluralism (Johnson 2017 ), interactive mixed methods research is very much a possibility. By successfully conducting an equal-status study, the pragmatist researcher shows that paradigms can be mixed or combined, and that the incompatibility thesis does not always apply to research practice. Equal status research is most easily conducted when a research team is composed of qualitative, quantitative, and mixed researchers, interacts continually, and conducts a study to address one superordinate goal.

Timing: simultaneity and dependence

Another important distinction when designing a mixed methods study relates to the timing of the two (or more) components. When designing a mixed methods study, it is usually helpful to include the word “concurrent” (“parallel”) or “sequential” (“sequenziell”) in the title of the study design; a complex design can be partially concurrent and partially sequential. Timing has two aspects: simultaneity and dependence (Guest 2013 ).

Simultaneity (“Simultanität”) forms the basis of the distinction between concurrent and sequential designs. In a  sequential design , the quantitative component precedes the qualitative component, or vice versa. In a  concurrent design , both components are executed (almost) simultaneously. In the notation of Morse ( 1991 ), concurrence is indicated by a “+” between components (e. g., QUAL + quan), while sequentiality is indicated with a “→” (QUAL → quan). Note that the use of capital letters for one component and lowercase letters for another component in the same design suggests that one component is primary and the other is secondary or supplemental.
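
To make the notation concrete, the rules above can be expressed in a short sketch. This is our own hypothetical illustration; the `Component` helper and the `morse` function are invented names, not part of Morse’s ( 1991 ) work:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One research component in Morse's notation (hypothetical helper)."""
    kind: str   # "QUAL" or "QUAN"
    core: bool  # True: core component (capitals); False: supplemental (lowercase)

    def render(self) -> str:
        return self.kind.upper() if self.core else self.kind.lower()

def morse(components, sequential: bool) -> str:
    """Join rendered components with '→' (sequential) or '+' (concurrent)."""
    sep = " → " if sequential else " + "
    return sep.join(c.render() for c in components)

# Qualitatively driven concurrent design:
print(morse([Component("QUAL", True), Component("QUAN", False)], sequential=False))  # QUAL + quan
# Quantitatively driven sequential design:
print(morse([Component("QUAN", True), Component("QUAL", False)], sequential=True))   # QUAN → qual
```

Capitalization encodes the weight of a component, while the separator encodes timing; the sketch shows that the two dimensions can be varied independently.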

Some designs are sequential by nature. For example, in a  conversion design, qualitative categories and themes might first be obtained through the collection and analysis of qualitative data and subsequently quantitized (Teddlie and Tashakkori 2009 ). Likewise, with Greene et al.’s ( 1989 ) initiation purpose, the initiation strand follows the unexpected results that it is supposed to explain. In other cases, the researcher has a choice. It is possible, e. g., to collect interview data and survey data of one inquiry simultaneously; in that case, the research activities would be concurrent. It is also possible to conduct the interviews after the survey data have been collected (or vice versa); in that case, research activities are performed sequentially. Similarly, a study with the purpose of expansion can be designed in which data on an effect and the intervention process are collected simultaneously, or they can be collected sequentially.

A second aspect of timing is dependence (“Abhängigkeit”) . We call two research components dependent if the implementation of the second component depends on the results of data analysis in the first component. Two research components are independent , if their implementation does not depend on the results of data analysis in the other component. Often, a researcher has a choice to perform data analysis independently or not. A researcher could analyze interview data and questionnaire data of one inquiry independently; in that case, the research activities would be independent. It is also possible to let the interview questions depend upon the outcomes of the analysis of the questionnaire data (or vice versa); in that case, research activities are performed dependently. Similarly, the empirical outcome/effect and process in a study with the purpose of expansion might be investigated independently, or the process study might take the effect/outcome as given (dependent).

In the mixed methods literature, the distinction between sequential and concurrent usually refers to the combination of concurrent/independent and sequential/dependent, and to the combination of data collection and data analysis. It is said that in a concurrent design, the data collection and data analysis of both components occur (almost) simultaneously and independently, while in a sequential design, the data collection and data analysis of one component take place after the data collection and data analysis of the other component and depend on the outcomes of that other component.

In our opinion, simultaneity and dependence are two separate dimensions. Simultaneity indicates whether data collection is done concurrently or sequentially. Dependence indicates whether the implementation of one component depends upon the results of data analysis of the other component. As we will see in the example case studies, a concurrent design could include dependent data analysis, and a sequential design could include independent data analysis. It is conceivable that one simultaneously conducts interviews and collects questionnaire data (concurrent), while allowing the analysis focus of the interviews to depend on what emerges from the survey data (dependence).

Dependent research activities involve a redirection of the subsequent inquiry: depending on the outcomes of the first research component, the researcher decides what to do in the second component. When this is the case, the research activities involved are said to be sequential-dependent, and any component preceded by another component should appropriately build on the previous component (see sequential validity legitimation ; Johnson and Christensen 2017 ; Onwuegbuzie and Johnson 2006 ).

It is at the purposive discretion of the researcher to determine whether a concurrent-dependent design, a concurrent-independent design, a sequential-dependent design, or a sequential-independent design is needed to answer a particular research question or set of research questions in a given situation.
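
Because simultaneity and dependence are independent dimensions, the four basic timing designs named above follow mechanically from crossing them. The sketch below is our own illustration; the `timing_label` helper is an invented name:

```python
from itertools import product

def timing_label(concurrent: bool, dependent: bool) -> str:
    """Combine the two independent timing dimensions into one design label."""
    simultaneity = "concurrent" if concurrent else "sequential"
    dependence = "dependent" if dependent else "independent"
    return f"{simultaneity}-{dependence}"

# Crossing both dimensions yields the four basic timing designs:
labels = [timing_label(c, d) for c, d in product([True, False], [True, False])]
print(labels)
# ['concurrent-dependent', 'concurrent-independent',
#  'sequential-dependent', 'sequential-independent']
```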

Point of integration

Each true mixed methods study has at least one “point of integration” – called the “point of interface” by Morse and Niehaus ( 2009 ) and Guest ( 2013 ) – at which the qualitative and quantitative components are brought together. Having one or more points of integration is the distinguishing feature of a design based on multiple components. It is at this point that the components are “mixed”, hence the label “mixed methods designs”. The term “mixing”, however, is misleading, as the components are not simply mixed, but have to be integrated very carefully.

Determining where the point of integration will be, and how the results will be integrated, is an important, if not the most important, decision in the design of mixed methods research. Morse and Niehaus ( 2009 ) identify two possible points of integration: the results point of integration and the analytical point of integration.

Most commonly, integration takes place in the results point of integration . At some point in writing down the results of the first component, the results of the second component are added and integrated. A  joint display (listing the qualitative and quantitative findings and an integrative statement) might be used to facilitate this process.
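
As an illustration of the structure of a joint display, the sketch below pairs quantitative results and qualitative quotes by theme; all themes, numbers, and quotes are invented for the example:

```python
# A joint display pairs quantitative results with qualitative findings per theme,
# together with an integrative statement. All entries are invented for illustration.
joint_display = [
    {
        "theme": "workload",
        "quantitative": "M = 4.2 on a 5-point burden scale (n = 120)",
        "qualitative": "\"There is simply no time left for reflection.\" (interview 3)",
        "integration": "High scale scores converge with reported time pressure.",
    },
    {
        "theme": "autonomy",
        "quantitative": "r = .08 with satisfaction (not significant)",
        "qualitative": "\"Freedom only matters once the basics work.\" (interview 7)",
        "integration": "The interviews help explain the absent quantitative effect.",
    },
]

for row in joint_display:
    print(f"{row['theme']:>8} | {row['quantitative']} | {row['integration']}")
```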

In the case of an analytical point of integration , a first analytical stage of a qualitative component is followed by a second analytical stage, in which the topics identified in the first stage are quantitized. Ultimately, before the results of the analytical phase as a whole are written down, the results of the qualitative component become quantitative; qualitizing, the converse strategy, is also possible.
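
Quantitizing at an analytical point of integration can be sketched as converting coded qualitative segments into counts and indicators. The cases and codes below are invented for illustration:

```python
from collections import Counter

# Quantitizing: codes assigned during qualitative analysis are converted into
# counts usable in quantitative analysis. The cases and codes are invented.
coded_segments = [
    ("interview_01", "time_pressure"),
    ("interview_01", "peer_support"),
    ("interview_02", "time_pressure"),
    ("interview_03", "time_pressure"),
    ("interview_03", "peer_support"),
]

# Overall code frequencies:
code_frequencies = Counter(code for _, code in coded_segments)

# Per-case binary indicator matrix (one common quantitizing strategy):
cases = sorted({case for case, _ in coded_segments})
matrix = {case: {code: int((case, code) in coded_segments)
                 for code in code_frequencies}
          for case in cases}

print(code_frequencies)        # Counter({'time_pressure': 3, 'peer_support': 2})
print(matrix["interview_02"])  # {'time_pressure': 1, 'peer_support': 0}
```

The resulting frequencies or indicator matrix can then enter a quantitative analysis before the results of the analytical phase are written down.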

Other authors assume more than two possible points of integration. Teddlie and Tashakkori ( 2009 ) distinguish four different stages of an investigation: the conceptualization stage, the methodological experiential stage (data collection), the analytical experiential stage (data analysis), and the inferential stage. According to these authors, mixing is possible in all four stages, and thus all four stages are potential points of integration.

However, the four possible points of integration used by Teddlie and Tashakkori ( 2009 ) are still too coarse to distinguish some types of mixing. Mixing in the experiential stage can take many different forms, for example, the use of cognitive interviews to improve a questionnaire (tool development), or selecting people for an interview on the basis of the results of a questionnaire (sampling). Extending the definition by Guest ( 2013 ), we define the point of integration as “any point in a study where two or more research components are mixed or connected in some way”. The points of integration in the two examples of this paragraph can then be defined more accurately as “instrument development” and “development of the sample”.

It is at the point of integration that qualitative and quantitative components are integrated. Some primary ways that the components can be connected to each other are as follows:

(1) merging the two data sets, (2) connecting from the analysis of one set of data to the collection of a second set of data, (3) embedding of one form of data within a larger design or procedure, and (4) using a framework (theoretical or program) to bind together the data sets (Creswell and Plano Clark 2011 , p. 76).

More generally, one can consider mixing at any or all of the following research components: purposes, research questions, theoretical drive, methods, methodology, paradigm, data, analysis, and results. One can also include mixing views of different researchers, participants, or stakeholders. The creativity of the mixed methods researcher designing a study is extensive.

Substantively, it can be useful to think of integration or mixing as comparing and bringing together two (or more) components on the basis of one or more of the purposes set out in the first section of this article. For example, it is possible to use qualitative data to illustrate a quantitative effect, or to determine whether the qualitative and the quantitative component yield convergent results ( triangulation ). An integrated result could also consist of a combination of a quantitatively established effect and a qualitative description of the underlying process . In the case of development, integration consists of the adjustment of an instrument, model, or interpretation (often a quantitative one) on the basis of qualitative assessments by members of the target group.

A special case is the integration of divergent results. The power of mixed methods research is its ability to deal with diversity and divergence. In the literature, we find two kinds of strategies for dealing with divergent results. A first set of strategies takes the detected divergence as the starting point for further analysis, with the aim to resolve the divergence. One possibility is to carry out further research (Cook 1985 ; Greene and Hall 2010 ). Further research is not always necessary. One can also look for a more comprehensive theory, which is able to account for both the results of the first component and the deviating results of the second component. This is a form of abduction (Erzberger and Prein 1997 ).

A fruitful starting point in trying to resolve divergence through abduction is to determine which component has resulted in a finding that is somehow expected, logical, and/or in line with existing research. The results of this research component, called the “sense” (“Lesart”), are subsequently compared to the results of the other component, called the “anti-sense” (“alternative Lesart”), which are considered dissonant, unexpected, and/or contrary to what had been found in the literature. The aim is to develop an overall explanation that fits both the sense and the anti-sense (Bazeley and Kemp 2012 ; Mendlinger and Cwikel 2008 ). Finally, a reanalysis of the data can sometimes lead to resolving divergence (Creswell and Plano Clark 2011 ).

Alternatively, one can question the existence of the encountered divergence. In this regard, Mathison ( 1988 ) recommends determining whether deviating results shown by the data can be explained by knowledge about the research and/or knowledge of the social world. Differences between results from different data sources could also be the result of properties of the methods involved, rather than reflect differences in reality (Yanchar and Williams 2006 ). In general, the conclusions of the individual components can be subjected to an inference quality audit (Teddlie and Tashakkori 2009 ), in which the researcher investigates the strength of each of the divergent conclusions. We recommend that researchers first determine whether the divergence is “real”, using the strategies mentioned in this paragraph. Next, an attempt can be made to resolve cases of true divergence, using one or more of the methods mentioned in the previous paragraphs.

Design typology utilization

As already mentioned in Sect. 1, mixed methods designs can be classified into a mixed methods typology or taxonomy. A typology serves several purposes, including the following: guiding practice, legitimizing the field, generating new possibilities, and serving as a useful pedagogical tool (Teddlie and Tashakkori 2009 ). Note, however, that not all types of typologies are equally suitable for all purposes. For generating new possibilities, one will need a more exhaustive typology, while a useful pedagogical tool might be better served by a non-exhaustive overview of the most common mixed methods designs. Although some of the current MM design typologies include more designs than others, none of the current typologies is fully exhaustive. When designing a mixed methods study, it is often useful to borrow its name from an existing typology, or to construct a clear and more nuanced name when your design is based on a modification of one or more existing designs.

Various typologies of mixed methods designs have been proposed. Creswell and Plano Clark’s ( 2011 ) typology of some “commonly used designs” includes six “major mixed methods designs”. Our summary of these designs runs as follows:

  • Convergent parallel design (“paralleles Design”) (the quantitative and qualitative strands of the research are performed independently, and their results are brought together in the overall interpretation),
  • Explanatory sequential design (“explanatives Design”) (a first phase of quantitative data collection and analysis is followed by the collection of qualitative data, which are used to explain the initial quantitative results),
  • Exploratory sequential design (“exploratives Design”) (a first phase of qualitative data collection and analysis is followed by the collection of quantitative data to test or generalize the initial qualitative results),
  • Embedded design (“Einbettungs-Design”) (in a traditional qualitative or quantitative design, a strand of the other type is added to enhance the overall design),
  • Transformative design (“politisch-transformatives Design”) (a transformative theoretical framework, e. g. feminism or critical race theory, shapes the interaction, priority, timing and mixing of the qualitative and quantitative strand),
  • Multiphase design (“Mehrphasen-Design”) (more than two phases or both sequential and concurrent strands are combined over a period of time within a program of study addressing an overall program objective).

Most of their designs presuppose a specific juxtaposition of the qualitative and quantitative component. Note that the last design is a complex type that is required in many mixed methods studies.

The following are our adapted definitions of Teddlie and Tashakkori’s ( 2009 ) five sets of mixed methods research designs (adapted from Teddlie and Tashakkori 2009 , p. 151):

  • Parallel mixed designs (“paralleles Mixed-Methods-Design”) – In these designs, one has two or more parallel quantitative and qualitative strands, either with some minimal time lapse or simultaneously; the strand results are integrated into meta-inferences after the separate analyses are conducted; related QUAN and QUAL research questions are answered, or aspects of the same mixed research question are addressed.
  • Sequential mixed designs (“sequenzielles Mixed-Methods-Design”) – In these designs, QUAL and QUAN strands occur across chronological phases, and the procedures/questions of a later strand emerge from, depend on, or build on the previous strand; the research questions are interrelated and sometimes evolve during the study.
  • Conversion mixed designs (“Transfer-Design” or “Konversionsdesign”) – In these parallel designs, mixing occurs when one type of data is transformed into the other type and then analyzed, and the additional findings are added to the results; this design answers related aspects of the same research question.
  • Multilevel mixed designs (“Mehrebenen-Mixed-Methods-Design”) – In these parallel or sequential designs, mixing occurs across multiple levels of analysis, as QUAN and QUAL data are analyzed and integrated to answer related aspects of the same research question or related questions.
  • Fully integrated mixed designs (“voll integriertes Mixed-Methods-Design”) – In these designs, mixing occurs in an interactive manner at all stages of the study. At each stage, one approach affects the formulation of the other, and multiple types of implementation processes can occur. For example, rather than including integration only at the findings/results stage, or only across phases in a sequential design, mixing might occur at the conceptualization stage, the methodological stage, the analysis stage, and the inferential stage.

We recommend adding to Teddlie and Tashakkori’s typology a sixth design type, specifically, a  “hybrid” design type to include complex combinations of two or more of the other design types. We expect that many published MM designs will fall into the hybrid design type.

Morse and Niehaus ( 2009 ) listed eight mixed methods designs in their book (and suggested that authors create more complex combinations when needed). Our shorthand labels and descriptions (adapted from Morse and Niehaus 2009 , p. 25) run as follows:

  • QUAL + quan (inductive-simultaneous design, where the core component is qualitative and the supplemental component is quantitative)
  • QUAL → quan (inductive-sequential design, where the core component is qualitative and the supplemental component is quantitative)
  • QUAN + qual (deductive-simultaneous design, where the core component is quantitative and the supplemental component is qualitative)
  • QUAN → qual (deductive-sequential design, where the core component is quantitative and the supplemental component is qualitative)
  • QUAL + qual (inductive-simultaneous design, where both components are qualitative; this is a multimethod design rather than a mixed methods design)
  • QUAL → qual (inductive-sequential design, where both components are qualitative; this is a multimethod design rather than a mixed methods design)
  • QUAN + quan (deductive-simultaneous design, where both components are quantitative; this is a multimethod design rather than a mixed methods design)
  • QUAN → quan (deductive-sequential design, where both components are quantitative; this is a multimethod design rather than a mixed methods design).

Notice that Morse and Niehaus ( 2009 ) included four mixed methods designs (the first four designs shown above) and four multimethod designs (the second set of four designs shown above) in their typology. The reader can, therefore, see that the design notation also works quite well for multimethod research designs. Notably absent from Morse and Niehaus’s book are equal-status or interactive designs. In addition, they assume that the core component should always be performed either concurrent with or before the supplemental component.

Johnson, Christensen, and Onwuegbuzie constructed a set of mixed methods designs without these limitations. The resulting mixed methods design matrix (see Johnson and Christensen 2017 , p. 478) contains nine designs, which we can label as follows (adapted from Johnson and Christensen 2017 , p. 478):

  • QUAL + QUAN (equal-status concurrent design),
  • QUAL + quan (qualitatively driven concurrent design),
  • QUAN + qual (quantitatively driven concurrent design),
  • QUAL → QUAN (equal-status sequential design),
  • QUAN → QUAL (equal-status sequential design),
  • QUAL → quan (qualitatively driven sequential design),
  • qual → QUAN (quantitatively driven sequential design),
  • QUAN → qual (quantitatively driven sequential design), and
  • quan → QUAL (qualitatively driven sequential design).
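
The nine designs listed above follow mechanically from the stated rules (one qualitative and one quantitative component; capitals for the driving component; “+” for concurrent, unordered pairs; “→” for sequential, ordered pairs). The following sketch of ours generates them, writing concurrent pairs with the qualitative component first:

```python
# Each design pairs one qualitative and one quantitative component; capitals mark
# the driving component. '+' joins concurrent (unordered) pairs, '→' sequential
# (ordered) ones. This is our reconstruction, not code from the source.
weightings = [
    ("QUAL", "QUAN"),  # equal status
    ("QUAL", "quan"),  # qualitatively driven
    ("qual", "QUAN"),  # quantitatively driven
]

concurrent = [f"{a} + {b}" for a, b in weightings]
sequential = ([f"{a} → {b}" for a, b in weightings]
              + [f"{b} → {a}" for a, b in weightings])

designs = concurrent + sequential
print(len(designs))  # 9
```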

The above set of nine designs assumes only one qualitative and one quantitative component. This simplifying assumption can, however, be relaxed in practice, allowing the reader to construct more complex designs. The Morse notation system is very powerful. For example, QUAL + QUAN → QUAL denotes a three-stage equal-status concurrent-sequential design, in which concurrent qualitative and quantitative components are followed by a third, qualitative, component.

The key point here is that the Morse notation provides researchers with a powerful language for depicting and communicating the design constructed for a specific research study.

When designing a mixed methods study, it is sometimes helpful to include the mixing purpose (or characteristic on one of the other dimensions shown in Table  1 ) in the title of the study design (e. g., an explanatory sequential MM design, an exploratory-confirmatory MM design, a developmental MM design). Much more important, however, than a design name is for the author to provide an accurate description of what was done in the research study, so the reader will know exactly how the study was conducted. A design classification label can never replace such a description.

The common complexity of mixed methods designs poses a problem for the above typologies of mixed methods research. The typologies were designed to classify whole mixed methods studies, and they rest on a classification of simple designs. In practice, however, many if not most designs are complex. Complex designs are sometimes labeled “complex design”, “multiphase design”, “fully integrated design”, “hybrid design”, and the like. Because complex designs occur very often in practice, the above typologies cannot classify a large part of existing mixed methods research any further than by labeling it “complex”, which in itself is not very informative about the particular design. This problem does not fully apply to Morse’s notation system, which can be used to symbolize some more complex designs.

Something similar applies to the classification of the purposes of mixed methods research. The classifications of purposes mentioned in the “Purpose”-section, again, are basically meant for the classification of whole mixed methods studies. In practice, however, one single study often serves more than one purpose (Schoonenboom et al. 2017 ). The more purposes that are included in one study, the more difficult it becomes to select a design on the basis of the purpose of the investigation, as advised by Greene ( 2007 ). Of all purposes involved, then, which one should be the primary basis for the design? Or should the design be based upon all purposes included? And if so, how? For more information on how to articulate design complexity based on multiple purposes of mixing, see Schoonenboom et al. ( 2017 ).

It should be clear to the reader that, although much progress has been made in the area of mixed methods design typologies, the problem remains in developing a single typology that is effective in comprehensively listing a set of designs for mixed methods research. This is why we emphasize in this article the importance of learning to build on simple designs and construct one’s own design for one’s research questions. This will often result in a combination or “hybrid” design that goes beyond basic designs found in typologies, and a methodology section that provides much more information than a design name.

Typological versus interactive approaches to design

In the introduction, we made a distinction between design as a product and design as a process. Related to this, two different approaches to design can be distinguished: typological/taxonomic approaches (“systematische Ansätze”), such as those in the previous section, and interactive approaches (“interaktive Ansätze”) (the latter were called “dynamic” approaches by Creswell and Plano Clark 2011 ). Whereas typological/taxonomic approaches view designs as a sort of mold, in which the inquiry can be fit, interactive approaches (Maxwell 2013 ) view design as a process, in which a certain design-as-a-product might be the outcome of the process, but not its input.

The most frequently mentioned interactive approach to mixed methods research is the approach by Maxwell and Loomis ( 2003 ). Maxwell and Loomis distinguish the following components of a design: goals, conceptual framework, research question, methods, and validity. They argue convincingly that the most important task of the researcher is to deliver as the end product of the design process a design in which these five components fit together properly. During the design process, the researcher works alternately on the individual components, and as a result, their initial fit, if it existed, tends to get lost. The researcher should therefore regularly check during the research and continuing design process whether the components still fit together, and, if not, should adapt one or the other component to restore the fit between them. In an interactive approach, unlike the typological approach, design is viewed as an interactive process in which the components are continually compared during the research study to each other and adapted to each other.

Typological and interactive approaches to mixed methods research have been presented as mutually exclusive alternatives. In our view, however, they are not mutually exclusive. The interactive approach of Maxwell is a very powerful tool for conducting research, yet this approach is not specific to mixed methods research. Maxwell’s interactive approach emphasizes that the researcher should keep and monitor a close fit between the five components of research design. However, it does not indicate how one should combine qualitative and quantitative subcomponents within one of Maxwell’s five components (e. g., how one should combine a qualitative and a quantitative method, or a qualitative and a quantitative research question). Essential elements of the design process, such as timing and the point of integration are not covered by Maxwell’s approach. This is not a shortcoming of Maxwell’s approach, but it indicates that to support the design of mixed methods research, more is needed than Maxwell’s model currently has to offer.

Some authors state that design typologies are particularly useful for beginning researchers and interactive approaches are suited for experienced researchers (Creswell and Plano Clark 2011 ). However, like an experienced researcher, a research novice needs to align the components of his or her design properly with each other, and, like a beginning researcher, an advanced researcher should indicate how qualitative and quantitative components are combined with each other. This makes an interactive approach desirable, also for beginning researchers.

We see two merits of the typological/taxonomic approach . We agree with Greene ( 2007 ), who states that the value of the typological approach mainly lies in the different dimensions of mixed methods that result from its classifications. In this article, the primary dimensions include purpose, theoretical drive, timing, point of integration, typological vs. interactive approaches, planned vs. emergent designs, and complexity (also see secondary dimensions in Table  1 ). Unfortunately, all of these dimensions are not reflected in any single design typology reviewed here. A second merit of the typological approach is the provision of common mixed methods research designs, of common ways in which qualitative and quantitative research can be combined, as is done for example in the major designs of Creswell and Plano Clark ( 2011 ). Contrary to other authors, however, we do not consider these designs as a feature of a whole study, but rather, in line with Guest ( 2013 ), as a feature of one part of a design in which one qualitative and one quantitative component are combined. Although one study could have only one purpose, one point of integration, et cetera, we believe that combining “designs” is the rule and not the exception. Therefore, complex designs need to be constructed and modified as needed, and during the writing phase the design should be described in detail and perhaps given a creative and descriptive name.

Planned versus emergent designs

A mixed methods design can be thought out in advance, but can also arise during the course of the conduct of the study; the latter is called an “emergent” design (Creswell and Plano Clark 2011 ). Emergent designs arise, for example, when the researcher discovers during the study that one of the components is inadequate (Morse and Niehaus 2009 ). Addition of a component of the other type can sometimes remedy such an inadequacy. Some designs contain an emergent component by their nature. Initiation, for example, is the further exploration of unexpected outcomes. Unexpected outcomes are by definition not foreseen, and therefore cannot be included in the design in advance.

The question arises whether researchers should plan all these decisions beforehand, or whether they can make them during, and depending on the course of, the research process. The answer to this question is twofold. On the one hand, a researcher should decide beforehand which research components to include in the design, such that the conclusion that will be drawn will be robust. On the other hand, developments during research execution will sometimes prompt the researcher to decide to add additional components. In general, the advice is to be prepared for the unexpected. When one is able to plan for emergence, one should not refrain from doing so.

Dimension of complexity

Next, mixed methods designs are characterized by their complexity. In the literature, simple and complex designs are distinguished in various ways. A common distinction is between simple investigations with a single point of integration versus complex investigations with multiple points of integration (Guest 2013 ). When designing a mixed methods study, it can be useful to mention in the title whether the design of the study is simple or complex. The primary message of this section is as follows: It is the responsibility of the researcher to create more complex designs when needed to answer his or her research question(s).

Teddlie and Tashakkori’s ( 2009 ) multilevel mixed designs and fully integrated mixed designs are both complex designs, but for different reasons. A multilevel mixed design is more complex ontologically, because it involves multiple levels of reality. For example, data might be collected both at the levels of schools and students, neighborhood and households, companies and employees, communities and inhabitants, or medical practices and patients (Yin 2013 ). Integration of these data does not only involve the integration of qualitative and quantitative data, but also the integration of data originating from different sources and existing at different levels. Little if any published research has discussed the possible ways of integrating data obtained in a multilevel mixed design (see Schoonenboom 2016 ). This is an area in need of additional research.

The fully-integrated mixed design is more complex because it contains multiple points of integration. As formulated by Teddlie and Tashakkori ( 2009 , p. 151):

In these designs, mixing occurs in an interactive manner at all stages of the study. At each stage, one approach affects the formulation of the other, and multiple types of implementation processes can occur.

Complexity, then, not only depends on the number of components, but also on the extent to which they depend on each other (e. g., “one approach affects the formulation of the other”).

Many of our design dimensions ultimately refer to different ways in which the qualitative and quantitative research components are interdependent. Different purposes of mixing ultimately differ in the way one component relates to, and depends upon, the other component. For example, these purposes include dependencies, such as “x illustrates y” and “x explains y”. Dependencies in the implementation of x and y occur to the extent that the design of y depends on the results of x (sequentiality). The theoretical drive creates dependencies, because the supplemental component y is performed and interpreted within the context and the theoretical drive of core component x. As a general rule in designing mixed methods research, one should examine and plan carefully the ways in which and the extent to which the various components depend on each other.

The dependence among components, which may or may not be present, has been summarized by Greene ( 2007 ). It is seen in the distinction between component designs (“Komponenten-Designs”), in which the components are independent of each other, and integrated designs (“integrierte Designs”), in which the components are interdependent. Of these two design categories, integrated designs are the more complex designs.

Secondary design considerations

The primary design dimensions explained above have been the focus of this article. There are a number of secondary considerations for researchers to also think about when they design their studies (Johnson and Christensen 2017 ). Now we list some secondary design issues and questions that should be thoughtfully considered during the construction of a strong mixed methods research design.

  • Phenomenon: Will the study address (a) the same part or different parts of one phenomenon, (b) different phenomena, or (c) the phenomenon/phenomena from different perspectives? Is the phenomenon (a) expected to be unique (e. g., a historical event or a particular group), (b) expected to be part of a more regular and predictable phenomenon, or (c) a complex mixture of these?
  • Social scientific theory: Will the study generate a new substantive theory, test an already constructed theory, or achieve both in a sequential arrangement? Or is the researcher not interested in substantive theory based on empirical data?
  • Ideological drive: Will the study have an explicitly articulated ideological drive (e. g., feminism, critical race paradigm, transformative paradigm)?
  • Combination of sampling methods: What specific quantitative sampling method(s) will be used? What specific qualitative sampling methods(s) will be used? How will these be combined or related?
  • Degree to which the research participants will be similar or different: For example, participants or stakeholders with known differences of perspective would provide participants that are quite different.
  • Degree to which the researchers on the research team will be similar or different: For example, an experiment conducted by one researcher would be high on similarity, but the use of a heterogeneous and participatory research team would include many differences.
  • Implementation setting: Will the phenomenon be studied naturalistically, experimentally, or through a combination of these?
  • Degree to which the methods are similar or different: For example, a structured interview and a questionnaire are fairly similar, but administration of a standardized test and participant observation in the field are quite different.
  • Validity criteria and strategies: What validity criteria and strategies will be used to address the defensibility of the study and the conclusions that will be drawn from it (see Chapter 11 in Johnson and Christensen 2017 )?
  • Full study: Will there be essentially one research study or more than one? How will the research report be structured?

Two case studies

The above design dimensions are now illustrated by examples. A nice collection of examples of mixed methods studies can be found in Hesse-Biber ( 2010 ), from which the following examples are taken. The description of the first case example is shown in Box 1.

Box 1

Summary of Roth ( 2006 ), research regarding the gender-wage gap within Wall Street securities firms. Adapted from Hesse-Biber ( 2010 , pp. 457–458)

Louise Marie Roth’s research, Selling Women Short: Gender and Money on Wall Street ( 2006 ), tackles gender inequality in the workplace. She was interested in understanding the gender-wage gap among highly performing Wall Street MBAs, who on the surface appeared to have the same “human capital” qualifications and were placed in high-ranking Wall Street securities firms as their first jobs. In addition, Roth wanted to understand the “structural factors” within the workplace setting that may contribute to the gender-wage gap and its persistence over time. […] Roth conducted semistructured interviews, nesting quantitative closed-ended questions into primarily qualitative in-depth interviews […] In analyzing the quantitative data from her sample, she statistically considered all those factors that might legitimately account for gendered differences such as number of hours worked, any human capital differences, and so on. Her analysis of the quantitative data revealed the presence of a significant gender gap in wages that remained unexplained after controlling for any legitimate factors that might otherwise make a difference. […] Quantitative findings showed the extent of the wage gap while providing numerical understanding of the disparity but did not provide her with an understanding of the specific processes within the workplace that might have contributed to the gender gap in wages. […] Her respondents’ lived experiences over time revealed the hidden inner structures of the workplace that consist of discriminatory organizational practices with regard to decision making in performance evaluations that are tightly tied to wage increases and promotion.

This example nicely illustrates the distinction we made between simultaneity and dependency. On the two aspects of the timing dimension, this study was a concurrent-dependent design answering a set of related research questions. The data collection in this example was conducted simultaneously, and was thus concurrent – the quantitative closed-ended questions were embedded into the qualitative in-depth interviews. In contrast, the analysis was dependent, as explained in the next paragraph.

One of the purposes of this study was explanation: The qualitative data were used to understand the processes underlying the quantitative outcomes. It is therefore an explanatory design, and might be labelled an “explanatory concurrent design”. Conceptually, explanatory designs are often dependent: The qualitative component is used to explain and clarify the outcomes of the quantitative component. In that sense, the qualitative analysis in the case study took the outcomes of the quantitative component (“the existence of the gender-wage gap” and “numerical understanding of the disparity”), and aimed at providing an explanation for that result of the quantitative data analysis, by relating it to the contextual circumstances in which the quantitative outcomes were produced. This purpose of mixing in the example corresponds to Bryman’s ( 2006 ) “contextual understanding”. On the other primary dimensions, (a) the design was ongoing over a three-year period but was not emergent, (b) the point of integration was results, and (c) the design was not complex with respect to the point of integration, as it had only one point of integration. Yet, it was complex in the sense of involving multiple levels; both the level of the individual and the organization were included. According to the approach of Johnson and Christensen ( 2017 ), this was a QUAL + quan design (that was qualitatively driven, explanatory, and concurrent). If we give this study design a name, perhaps it should focus on what was done in the study: “explaining an effect from the process by which it is produced”. Having said this, the name “explanatory concurrent design” could also be used.
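The quantitative step in this case study, estimating how much of a raw wage gap remains after "legitimate" factors are held constant, can be sketched as a toy stratification analysis. All records and numbers below are invented for illustration; Roth's actual analysis used her own interview sample and statistical controls.

```python
# Toy illustration of "controlling for" a legitimate factor by
# stratification. All data are synthetic; this is not Roth's analysis.
from statistics import mean

# (gender, hours band, wage in $1000s); men are overrepresented in the
# long-hours band, so hours explain part, but not all, of the raw gap.
records = [
    ("F", "40-50", 90), ("F", "40-50", 95), ("M", "40-50", 100),
    ("F", "50-60", 115), ("M", "50-60", 120), ("M", "50-60", 125),
]

def gap(rows):
    """Mean male wage minus mean female wage within `rows`."""
    males = [w for g, _, w in rows if g == "M"]
    females = [w for g, _, w in rows if g == "F"]
    return mean(males) - mean(females)

raw_gap = gap(records)  # ignores hours entirely

# "Control" for hours by computing the gap within each hours band,
# then averaging the within-band gaps.
bands = sorted({b for _, b, _ in records})
adjusted_gap = mean(gap([r for r in records if r[1] == b]) for b in bands)

print(raw_gap, adjusted_gap)  # 15 vs. 7.5: a gap remains after the control
```

In Roth's study the adjusted gap remained significant, and it was exactly this unexplained remainder that motivated the qualitative, explanatory component.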

The description of the second case example is shown in Box 2.

Box 2

Summary of McMahon’s ( 2007 ) explorative study of the meaning, role, and salience of rape myths within the subculture of college student athletes. Adapted from Hesse-Biber ( 2010 , pp. 461–462)

Sarah McMahon ( 2007 ) wanted to explore the subculture of college student athletes and specifically the meaning, role, and salience of rape myths within that culture. […] While she was looking for confirmation between the quantitative ([structured] survey) and qualitative (focus groups and individual interviews) findings, she entered this study skeptical of whether or not her quantitative and qualitative findings would mesh with one another. McMahon […] first administered a survey [instrument] to 205 sophomore and junior student athletes at one Northeast public university. […] The quantitative data revealed a very low acceptance of rape myths among this student population but revealed a higher acceptance of violence among men and individuals who did not know a survivor of sexual assault. In the second qualitative (QUAL) phase, “focus groups were conducted as semi-structured interviews” and facilitated by someone of the same gender as the participants (p. 360). […] She followed this up with a third qualitative component (QUAL), individual interviews, which were conducted to elaborate on themes discovered in the focus groups and determine any differences in students’ responses between situations (i. e., group setting vs. individual). The interview guide was designed specifically to address focus group topics that needed “more in-depth exploration” or clarification (p. 361). The qualitative findings from the focus groups and individual qualitative interviews revealed “subtle yet pervasive rape myths” that fell into four major themes: “the misunderstanding of consent, the belief in ‘accidental’ and fabricated rape, the contention that some women provoke rape, and the invulnerability of female athletes” (p. 363). She found that the survey’s finding of a “low acceptance of rape myths … was contradicted by the findings of the focus groups and individual interviews, which indicated the presence of subtle rape myths” (p. 362).

On the timing dimension, this is an example of a sequential-independent design. It is sequential, because the qualitative focus groups were conducted after the survey was administered. The analysis of the quantitative and qualitative data was independent: Both were analyzed independently, to see whether they yielded the same results (which they did not). This purpose, therefore, was triangulation. On the other primary dimensions, (a) the design was planned, (b) the point of integration was results, and (c) the design was not complex as it had only one point of integration, and involved only the level of the individual. The author called this a “sequential explanatory” design. We doubt, however, whether this is the most appropriate label, because the qualitative component did not provide an explanation for quantitative results that were taken as given. On the contrary, the qualitative results contradicted the quantitative results. Thus, a “sequential-independent” design, or a “sequential-triangulation” design or a “sequential-comparative” design would probably be a better name.

Notice further that the second case study had the same point of integration as the first case study. The two components were brought together in the results. Thus, although the case studies are very dissimilar in many respects, this does not become visible in their point of integration. It can therefore be helpful to determine whether their point of extension is different. A point of extension is the point in the research process at which the second (or later) component comes into play. In the first case study, two related, but different research questions were answered, namely the quantitative question “How large is the gender-wage gap among highly performing Wall Street MBAs after controlling for any legitimate factors that might otherwise make a difference?”, and the qualitative research question “How do structural factors within the workplace setting contribute to the gender-wage gap and its persistence over time?” This case study contains one qualitative research question and one quantitative research question. Therefore, the point of extension is the research question. In the second case study, both components answered the same research question. They differed in their data collection (and subsequently in their data analysis): qualitative focus groups and individual interviews versus a quantitative questionnaire. In this case study, the point of extension was data collection. Thus, the point of extension can be used to distinguish between the two case studies.
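The comparison of the two case studies on the primary dimensions can be made concrete with a small data structure. The field names are our own shorthand; the values are taken directly from the descriptions above.

```python
# Minimal encoding of a mixed methods design along some of the primary
# dimensions discussed in this article; field names are our shorthand.
from dataclasses import dataclass

@dataclass
class MMDesign:
    purpose: str               # e.g. explanation, triangulation
    simultaneity: str          # concurrent or sequential
    dependence: str            # dependent or independent
    point_of_integration: str  # where components are brought together
    point_of_extension: str    # where the second component comes into play

roth = MMDesign("explanation", "concurrent", "dependent",
                "results", "research question")
mcmahon = MMDesign("triangulation", "sequential", "independent",
                   "results", "data collection")

# Same point of integration, yet the designs differ on nearly every
# other dimension, including the point of extension.
assert roth.point_of_integration == mcmahon.point_of_integration
assert roth.point_of_extension != mcmahon.point_of_extension
```

The final two assertions restate the observation made above: the point of integration alone cannot distinguish these two studies, but the point of extension can.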

Summary and conclusions

The purpose of this article is to help researchers to understand how to design a mixed methods research study. Perhaps the simplest approach to design is to look at a single book and select one from the few designs included in that book. We believe that is only useful as a starting point. Here we have shown that one often needs to construct a research design to fit one’s unique research situation and questions.

First, we showed that there are many purposes for which qualitative and quantitative methods, methodologies, and paradigms can be mixed. This must be determined in interaction with the research questions. Inclusion of a purpose in the design name can sometimes provide readers with useful information about the study design, as in, e. g., an “explanatory sequential design” or an “exploratory-confirmatory design”.

The second dimension is theoretical drive in the sense that Morse and Niehaus ( 2009 ) use this term. That is, will the study have an inductive or a deductive drive, or, we added, a combination of these. Related to this idea is whether one will conduct a qualitatively driven, a quantitatively driven, or an equal-status mixed methods study. This language is sometimes included in the design name to communicate this characteristic of the study design (e. g., a “quantitatively driven sequential mixed methods design”).

The third dimension is timing, which has two aspects: simultaneity and dependence. Simultaneity refers to whether the components are to be implemented concurrently, sequentially, or a combination of these in a multiphase design. Simultaneity is commonly used in the naming of a mixed methods design because it communicates key information. The second aspect of timing, dependence, refers to whether a later component depends on the results of an earlier component, e. g., Did phase two specifically build on phase one in the research study? The fourth design dimension is the point of integration, which is where the qualitative and quantitative components are brought together and integrated. This is an essential dimension, but it usually does not need to be incorporated into the design name.

The fifth design dimension is that of typological vs. interactive design approaches. That is, will one select a design from a typology or use a more interactive approach to construct one’s own design? There are many typologies of designs currently in the literature. Our recommendation is that readers examine multiple design typologies to better understand the design process in mixed methods research and to understand what designs have been identified as popular in the field. However, when a design that would follow from one’s research questions is not available, the researcher can and should (a) combine designs into new designs or (b) simply construct a new and unique design. One can go a long way in depicting a complex design with Morse’s ( 1991 ) notation when used to its full potential. We also recommend that researchers understand the process approach to design from Maxwell and Loomis ( 2003 ), and realize that research design is a process and it needs, oftentimes, to be flexible and interactive.
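Morse's notation packs several of the dimensions above into a short string: capitalization marks the core (theoretically driving) component, "+" marks concurrent implementation, and "->" sequential implementation. As a rough sketch of how much information the notation carries (the parsing rules are a plain reading of that convention; the function itself is our own illustration), one can mechanically unpack a design string:

```python
# Rough sketch of reading Morse's (1991) mixed methods notation.
# Convention: capitalized components (QUAL, QUAN) are core; "+" means
# concurrent and "->" means sequential implementation.

def describe(design: str) -> dict:
    """Describe a design string such as 'QUAL + quan' or 'quan -> QUAL'."""
    if "+" in design:
        simultaneity = "concurrent"
        parts = [p.strip() for p in design.split("+")]
    elif "->" in design:
        simultaneity = "sequential"
        parts = [p.strip() for p in design.split("->")]
    else:
        simultaneity = "single component"
        parts = [design.strip()]
    core = [p for p in parts if p.isupper()]  # capitalized = core
    if core and len(core) < len(parts):
        drive = ("qualitatively driven" if core[0].startswith("QUAL")
                 else "quantitatively driven")
    elif core and len(core) == len(parts):
        drive = "equal status"
    else:
        drive = "undetermined"
    return {"components": parts, "simultaneity": simultaneity, "drive": drive}

print(describe("QUAL + quan"))
# {'components': ['QUAL', 'quan'], 'simultaneity': 'concurrent',
#  'drive': 'qualitatively driven'}
```

For example, Roth's study above is a QUAL + quan design (concurrent, qualitatively driven), which this sketch recovers directly from the notation.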

The sixth design dimension or consideration is whether a design will be fully specified during the planning of the research study or if the design (or part of the design) will be allowed to emerge during the research process, or a combination of these. The seventh design dimension is called complexity. One sort of complexity mentioned was multilevel designs, but there are many complexities that can enter designs. The key point is that good research often requires the use of complex designs to answer one’s research questions. This is not something to avoid. It is the responsibility of the researcher to learn how to construct and describe and name mixed methods research designs. Always remember that designs should follow from one’s research questions and purposes, rather than questions and purposes following from a few currently named designs.

In addition to the seven primary design dimensions or considerations, we provided a set of additional or secondary dimensions/considerations or questions to ask when constructing a mixed methods study design. Our purpose throughout this article has been to show what factors must be considered to design a high quality mixed methods research study. The more one knows and thinks about the primary and secondary dimensions of mixed methods design, the better equipped one will be to pursue mixed methods research.

Acknowledgments

Open access funding provided by University of Vienna.

Biographies

Judith Schoonenboom, 1965, Dr., Professor of Empirical Pedagogy at University of Vienna, Austria. Research Areas: Mixed Methods Design, Philosophy of Mixed Methods Research, Innovation in Higher Education, Design and Evaluation of Intervention Studies, Educational Technology. Publications: Mixed methods in early childhood education. In: M. Fleer & B. v. Oers (Eds.), International handbook on early childhood education (Vol. 1). Dordrecht, The Netherlands: Springer 2017; The multilevel mixed intact group analysis: A mixed method to seek, detect, describe and explain differences between intact groups. Journal of Mixed Methods Research 10, 2016; The realist survey: How respondents’ voices can be used to test and revise correlational models. Journal of Mixed Methods Research 2015. Advance online publication.

R. Burke Johnson, 1957, PhD, Professor of Professional Studies at University of South Alabama, Mobile, Alabama USA. Research Areas: Methods of Social Research, Program Evaluation, Quantitative, Qualitative and Mixed Methods, Philosophy of Social Science. Publications: Research methods, design and analysis. Boston, MA 2014 (with L. Christensen and L. Turner); Educational research: Quantitative, qualitative and mixed approaches. Los Angeles, CA 2017 (with L. Christensen); The Oxford handbook of multimethod and mixed methods research inquiry. New York, NY 2015 (with S. Hesse-Biber).

Bryman’s ( 2006 ) scheme of rationales for combining quantitative and qualitative research 1

  • Triangulation or greater validity – refers to the traditional view that quantitative and qualitative research might be combined to triangulate findings in order that they may be mutually corroborated. If the term was used as a synonym for integrating quantitative and qualitative research, it was not coded as triangulation.
  • Offset – refers to the suggestion that the research methods associated with both quantitative and qualitative research have their own strengths and weaknesses so that combining them allows the researcher to offset their weaknesses to draw on the strengths of both.
  • Completeness – refers to the notion that the researcher can bring together a more comprehensive account of the area of enquiry in which he or she is interested if both quantitative and qualitative research are employed.
  • Process – quantitative research provides an account of structures in social life but qualitative research provides sense of process.
  • Different research questions – this is the argument that quantitative and qualitative research can each answer different research questions but this item was coded only if authors explicitly stated that they were doing this.
  • Explanation – one is used to help explain findings generated by the other.
  • Unexpected results – refers to the suggestion that quantitative and qualitative research can be fruitfully combined when one generates surprising results that can be understood by employing the other.
  • Instrument development – refers to contexts in which qualitative research is employed to develop questionnaire and scale items – for example, so that better wording or more comprehensive closed answers can be generated.
  • Sampling – refers to situations in which one approach is used to facilitate the sampling of respondents or cases.
  • Credibility – refers to suggestions that employing both approaches enhances the integrity of findings.
  • Context – refers to cases in which the combination is rationalized in terms of qualitative research providing contextual understanding coupled with either generalizable, externally valid findings or broad relationships among variables uncovered through a survey.
  • Illustration – refers to the use of qualitative data to illustrate quantitative findings, often referred to as putting “meat on the bones” of “dry” quantitative findings.
  • Utility or improving the usefulness of findings – refers to a suggestion, which is more likely to be prominent among articles with an applied focus, that combining the two approaches will be more useful to practitioners and others.
  • Confirm and discover – this entails using qualitative data to generate hypotheses and using quantitative research to test them within a single project.
  • Diversity of views – this includes two slightly different rationales – namely, combining researchers’ and participants’ perspectives through quantitative and qualitative research respectively, and uncovering relationships between variables through quantitative research while also revealing meanings among research participants through qualitative research.
  • Enhancement or building upon quantitative/qualitative findings – this entails a reference to making more of or augmenting either quantitative or qualitative findings by gathering data using a qualitative or quantitative research approach.
  • Other/unclear.
  • Not stated.

1 Reprinted with permission from “Integrating quantitative and qualitative research: How is it done?” by Alan Bryman ( 2006 ), Qualitative Research, 6, pp. 105–107.

Contributor Information

Judith Schoonenboom, Email: [email protected] .

R. Burke Johnson, Email: [email protected].

  • Bazeley P, Kemp L. Mosaics, triangles, and DNA: Metaphors for integrated analysis in mixed methods research. Journal of Mixed Methods Research. 2012;6:55–72. doi:10.1177/1558689811419514.
  • Bryman A. Integrating quantitative and qualitative research: How is it done? Qualitative Research. 2006;6:97–113. doi:10.1177/1468794106058877.
  • Cook TD. Postpositivist critical multiplism. In: Shotland RL, Mark MM, editors. Social science and social policy. Beverly Hills: SAGE; 1985. pp. 21–62.
  • Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 2nd ed. Los Angeles: SAGE; 2011.
  • Erzberger C, Prein G. Triangulation: Validity and empirically-based hypothesis construction. Quality and Quantity. 1997;31:141–154. doi:10.1023/A:1004249313062.
  • Greene JC. Mixed methods in social inquiry. San Francisco: Jossey-Bass; 2007.
  • Greene JC. Preserving distinctions within the multimethod and mixed methods research merger. In: Hesse-Biber S, Johnson RB, editors. The Oxford handbook of multimethod and mixed methods research inquiry. New York: Oxford University Press; 2015.
  • Greene JC, Caracelli VJ, Graham WF. Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis. 1989;11:255–274. doi:10.3102/01623737011003255.
  • Greene JC, Hall JN. Dialectics and pragmatism. In: Tashakkori A, Teddlie C, editors. SAGE handbook of mixed methods in social & behavioral research. 2nd ed. Los Angeles: SAGE; 2010. pp. 119–167.
  • Guest G. Describing mixed methods research: An alternative to typologies. Journal of Mixed Methods Research. 2013;7:141–151. doi:10.1177/1558689812461179.
  • Hesse-Biber S. Qualitative approaches to mixed methods practice. Qualitative Inquiry. 2010;16:455–468. doi:10.1177/1077800410364611.
  • Johnson RB. Dialectical pluralism: A metaparadigm whose time has come. Journal of Mixed Methods Research. 2017;11:156–173. doi:10.1177/1558689815607692.
  • Johnson RB, Christensen LB. Educational research: Quantitative, qualitative, and mixed approaches. 6th ed. Los Angeles: SAGE; 2017.
  • Johnson RB, Onwuegbuzie AJ. Mixed methods research: A research paradigm whose time has come. Educational Researcher. 2004;33(7):14–26. doi:10.3102/0013189X033007014.
  • Johnson RB, Onwuegbuzie AJ, Turner LA. Toward a definition of mixed methods research. Journal of Mixed Methods Research. 2007;1:112–133. doi:10.1177/1558689806298224.
  • Mathison S. Why triangulate? Educational Researcher. 1988;17:13–17. doi:10.3102/0013189X017002013.
  • Maxwell JA. Qualitative research design: An interactive approach. 3rd ed. Los Angeles: SAGE; 2013.
  • Maxwell JA, Loomis DM. Mixed methods design: An alternative approach. In: Tashakkori A, Teddlie C, editors. Handbook of mixed methods in social & behavioral research. Thousand Oaks: SAGE; 2003. pp. 241–271.
  • McMahon S. Understanding community-specific rape myths: Exploring student athlete culture. Affilia. 2007;22:357–370. doi:10.1177/0886109907306331.
  • Mendlinger S, Cwikel J. Spiraling between qualitative and quantitative data on women’s health behaviors: A double helix model for mixed methods. Qualitative Health Research. 2008;18:280–293. doi:10.1177/1049732307312392.
  • Morgan DL. Integrating qualitative and quantitative methods: A pragmatic approach. Los Angeles: SAGE; 2014.
  • Morse JM. Approaches to qualitative-quantitative methodological triangulation. Nursing Research. 1991;40:120–123. doi:10.1097/00006199-199103000-00014.
  • Morse JM, Niehaus L. Mixed method design: Principles and procedures. Walnut Creek: Left Coast Press; 2009.
  • Onwuegbuzie AJ, Johnson RB. The “validity” issue in mixed research. Research in the Schools. 2006;13:48–63.
  • Roth LM. Selling women short: Gender and money on Wall Street. Princeton: Princeton University Press; 2006.
  • Schoonenboom J. The multilevel mixed intact group analysis: A mixed method to seek, detect, describe and explain differences between intact groups. Journal of Mixed Methods Research. 2016;10:129–146. doi:10.1177/1558689814536283.
  • Schoonenboom J, Johnson RB, Froehlich DE. Combining multiple purposes of mixing within a mixed methods research design. International Journal of Multiple Research Approaches. 2017, in press.
  • Teddlie C, Tashakkori A. Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Los Angeles: SAGE; 2009.
  • Yanchar SC, Williams DD. Reconsidering the compatibility thesis and eclecticism: Five proposed guidelines for method use. Educational Researcher. 2006;35(9):3–12. doi:10.3102/0013189X035009003.
  • Yin RK. Case study research: Design and methods. 5th ed. Los Angeles: SAGE; 2013.
Connecting With Users: Applying Principles Of Communication To UX Research

Victor Yocco, Apr 9, 2024

About The Author

Victor Yocco, PhD, has over a decade of experience as a UX researcher and research director. He is currently affiliated with Allelo Design and is taking on … More about Victor ↬


Communication is in everything we do. We communicate with users through our research, our design, and, ultimately, the products and services we offer. UX practitioners and those working on digital product teams benefit from understanding principles of communication and their application to our craft. Treating our UX processes as a mode of communication between users and the digital environment can help unveil in-depth, actionable insights.

In this article, I’ll focus on UX research. Communication is a core component of UX research , as it serves to bridge the gap between research insights, design strategy, and business outcomes. UX researchers, designers, and those working with UX researchers can apply key aspects of communication theory to help gather valuable insights, enhance user experiences, and create more successful products.

Fundamentals of Communication Theory

Communications as an academic field encompasses various models and principles that highlight the dynamics of communication between individuals and groups. Communication theory examines the transfer of information from one person or group to another. It explores how messages are transmitted, encoded, and decoded, acknowledges the potential for interference (or ‘noise’), and accounts for feedback mechanisms in enhancing the communication process.

In this article, I will focus on the Transactional Model of Communication . There are many other models and theories in the academic literature on communication. I have included references at the end of the article for those interested in learning more.

The Transactional Model of Communication (Figure 1) is a two-way process that emphasizes the simultaneous sending and receiving of messages and feedback. Importantly, it recognizes that communication is shaped by context and is an ongoing, evolving process. I'll use this model when applying principles of communication to UX research. You'll find that much of what the Transactional Model covers also falls under general best practices for UX research, suggesting that even if we aren't communications experts, much of what we should be doing is supported by research in this field.

Understanding the Transactional Model

Let’s take a deeper dive into the six key factors and their applications within the realm of UX research:

  • Sender: In UX research, the sender is typically the researcher who conducts interviews, facilitates usability tests, or designs surveys. For example, if you’re administering a user interview, you are the sender who initiates the communication process by asking questions.
  • Receiver: The receiver is the individual who decodes and interprets the messages sent by the sender. In our context, this could be the user you interview or the person taking a survey you have created. They receive and process your questions, providing responses based on their understanding and experiences.
  • Message: This is the content being communicated from the sender to the receiver. In UX research, the message can take various forms, like a set of survey questions, interview prompts, or tasks in a usability test.
  • Channel: This is the medium through which the communication flows. For instance, face-to-face interviews, phone interviews, email surveys administered online, and usability tests conducted via screen sharing are all different communication channels. You might use multiple channels simultaneously, for example, communicating over voice while also using a screen share to show design concepts.
  • Noise: Any factor that may interfere with the communication is regarded as ‘noise.’ In UX research, this could be complex jargon that confuses respondents in a survey, technical issues during a remote usability test, or environmental distractions during an in-person interview.
  • Feedback: The receiver’s response to the message. Examples include the answers a user gives during an interview, the data collected from a completed survey, or the physical reactions of a usability-testing participant while completing a task.
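The six factors above can be made concrete in code. As a minimal, hypothetical sketch (the `Exchange` class and its example values are invented for illustration; they are not part of the Transactional Model itself), a researcher could audit a planned session for factors that still lack a plan:

```python
from dataclasses import dataclass, field

# Hypothetical illustration: the Exchange class and its example values are
# invented for this sketch; they are not part of the Transactional Model itself.
@dataclass
class Exchange:
    sender: str                # e.g., the UX researcher
    receiver: str              # e.g., the research participant
    message: str               # e.g., the interview guide
    channel: str               # e.g., in-person, video call, online survey
    noise: list = field(default_factory=list)     # anticipated interference
    feedback: list = field(default_factory=list)  # planned feedback mechanisms

    def gaps(self):
        """Return factors that still lack a documented plan."""
        missing = []
        if not self.noise:
            missing.append("noise: no anticipated interference documented")
        if not self.feedback:
            missing.append("feedback: no feedback mechanism planned")
        return missing

interview = Exchange(
    sender="UX researcher",
    receiver="HVAC field technician",
    message="contextual inquiry question guide",
    channel="in-person, on site",
    noise=["jobsite distractions", "domain jargon mismatch"],
)
print(interview.gaps())  # → ['feedback: no feedback mechanism planned']
```

An empty `gaps()` result simply means every factor has something written down for it, not that the plan is good — but it catches sessions planned with no thought given to noise or feedback.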

Applying the Transactional Model of Communication to Preparing for UX Research

We can become complacent or feel rushed when creating our research protocols. I think this is natural given the pace of many workplaces and our need to deliver results quickly. You can apply the lens of the Transactional Model of Communication to your research preparation without adding much time. Doing so should:

  • Improve clarity: The model provides a clear representation of communication, empowering the researcher to plan and conduct studies more effectively.
  • Minimize misunderstanding: By highlighting potential noise sources, user confusion or misunderstandings can be better anticipated and mitigated.
  • Enhance participant engagement: With your attentive eye on feedback, participants are likely to feel valued, increasing active involvement and the quality of their input.

You can address the specific elements of the Transactional Model through the following steps while preparing for research:

Defining the Sender and Receiver

In UX research, the sender can often be the UX researcher conducting the study, while the receiver is usually the research participant. Understanding this dynamic can help researchers craft questions or tasks more empathetically and efficiently. You should try to collect some information on your participant in advance to prepare yourself for building a rapport.

For example, if you are conducting contextual inquiry with the field technicians of an HVAC company, you’ll want to dress appropriately to reflect your understanding of the context in which your participants (receivers) will be conducting their work. Showing up dressed in formal attire might be off-putting and create a negative dynamic between sender and receiver.

Message Creation

The message in UX research typically is the questions asked or tasks assigned during the study. Careful consideration of tenor, terminology, and clarity can aid data accuracy and participant engagement. Whether you are interviewing or creating a survey, you need to double-check that your audience will understand your questions and provide meaningful answers. You can pilot-test your protocol or questionnaire with a few representative individuals to identify areas that might cause confusion.

Using the HVAC example again, you might find that field technicians use certain terminology differently than you expect: asking what “tools” they use to complete their tasks may yield an answer about physical tools like a pipe wrench, not the digital tools you’d find on a computer or smartphone.

Choosing the Right Channel

The channel selection depends on the method of research. For instance, face-to-face methods might use physical verbal communication, while remote methods might rely on emails, video calls, or instant messaging. The choice of the medium should consider factors like tech accessibility, ease of communication, reliability, and participant familiarity with the channel. For example, you introduce an additional challenge (noise) if you ask someone who has never used an iPhone to test an app on an iPhone.

Minimizing Noise

Noise in UX research comes in many forms, from unclear questions inducing participant confusion to technical issues in remote interviews that cause interruptions. The key is to foresee potential issues and have preemptive solutions ready.

Facilitating Feedback

You should be prepared for how you might collect and act on participant feedback during the research. Encouraging regular feedback from the user during UX research ensures their understanding and that they feel heard. This could range from asking them to ‘think aloud’ as they perform tasks or encouraging them to email queries or concerns after the session. You should document any noise that might impact your findings and account for that in your analysis and reporting.

Track Your Alignment to the Framework

You can track what you do to align your processes with the Transactional Model prior to and during research using a spreadsheet. I’ll provide an example of a spreadsheet I’ve used in the later case study section of this article. You should create your spreadsheet during the process of preparing for research, as some of what you do to prepare should align with the factors of the model.
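As a hypothetical sketch of such a tracker (the column names and rows below are illustrative only, not the author's actual template from the case study), you could generate the spreadsheet as CSV text and save it as a file:

```python
import csv
import io

# Hypothetical sketch: these column names and rows are illustrative only,
# not the author's actual template from the case study.
COLUMNS = ["Factor", "Definition in this study", "Planned action", "Session notes"]
rows = [
    ["Sender", "Researcher conducting interviews", "Dress to match field context", ""],
    ["Receiver", "Field technician participant", "Collect background info in advance", ""],
    ["Message", "Interview guide v2", "Pilot-test wording with two colleagues", ""],
    ["Channel", "In-person, on site", "Confirm location and timing", ""],
    ["Noise", "Jobsite distractions, jargon", "Prepare clarifying questions", ""],
    ["Feedback", "Answers, think-aloud, debrief", "Reserve five minutes to debrief", ""],
]

# Build CSV text that can be saved as a spreadsheet file.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(COLUMNS)
writer.writerows(rows)
csv_text = buf.getvalue()
print(csv_text.splitlines()[0])  # → Factor,Definition in this study,Planned action,Session notes
```

One row per factor keeps the audit quick: fill the first three columns while planning, and the last column during or immediately after each session.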

You can use these tips for preparation regardless of the specific research method you are undertaking. Let’s now look closer at a few common methods and get specific on how you can align your actions with the Transactional Model.

Applying the Transactional Model to Common UX Research Methods

UX research relies on interaction with users. We can easily incorporate aspects of the Transactional Model of Communication into our most common methods. Utilizing the Transactional Model in conducting interviews, surveys, and usability testing can help provide structure to your process and increase the quality of insights gathered.

Interviews

Interviews are a common method used in qualitative UX research. They provide the perfect method for applying principles from the Transactional Model. In line with the Transactional Model, the researcher (sender) sends questions (messages) in-person or over the phone/computer medium (channel) to the participant (receiver), who provides answers (feedback) while contending with potential distraction or misunderstanding (noise). Reflecting on communication as transactional can help remind us we need to respect the dynamic between ourselves and the person we are interviewing. Rather than approaching an interview as a unidirectional interrogation, researchers need to view it as a conversation.

Applying the Transactional Model to conducting interviews means we should account for a number of facts to allow for high-quality communication. Note how the following overlap with what we typically call best practices.

Asking Open-ended Questions

To truly harness a two-way flow of communication, open-ended questions, rather than closed-ended ones, are crucial. For instance, rather than asking, “Do you use our mobile application?” ask, “Can you describe your use of our mobile app?” This encourages the participant to share more expansive and descriptive insights, furthering the dialogue.

Actively Listening

As the success of an interview relies on the participant’s responses, active listening is a crucial skill for UX researchers. The researcher should encourage participants to express their thoughts and feelings freely. Reflective listening techniques , such as paraphrasing or summarizing what the participant has shared, can reinforce to the interviewee that their contributions are being acknowledged and valued. It also provides an opportunity to clarify potential noise or misunderstandings that may arise.

Being Responsive

Building on the simultaneous send-receive nature of the Transactional Model, researchers must remain responsive during interviews. Providing non-verbal cues (like nodding) and verbal affirmations (“I see,” “Interesting”) lets participants know their message is being received and understood, making them feel comfortable and more willing to share.

We should always attempt to account for noise in advance, as well as during our interview sessions. Noise, in the form of misinterpretations or distractions, can disrupt effective communication. Researchers can proactively reduce noise by conducting a dry run in advance of the scheduled interviews . This helps you become more fluent at going through the interview and also helps identify areas that might need improvement or be misunderstood by participants. You also reduce noise by creating a conducive interview environment, minimizing potential distractions, and asking clarifying questions during the interview whenever necessary.

For example, if a participant uses a term the researcher doesn’t understand, the researcher should politely ask for clarification rather than guessing its meaning and potentially misinterpreting the data.

Additional forms of noise can include participant confusion or distraction. You should let participants know to ask if they are unclear on anything you say or do. It’s a good idea to always ask participants to put their smartphones on mute. You should only provide information critical to the process when introducing the interview or tasks. For example, you don’t need to give a full background of the history of the product you are researching if that isn’t required for the participant to complete the interview. However, you should let them know the purpose of the research, gain their consent to participate, and inform them of how long you expect the session to last.

Strategizing the Flow

Researchers should build strategic thinking into their interviews to support the Transaction Model. Starting the interview with less intrusive questions can help establish rapport and make the participant more comfortable, while more challenging or sensitive questions can be left for later when the interviewee feels more at ease.

A well-planned interview encourages a fluid dialogue and exchange of ideas. This is another area where conducting a dry run can help to ensure high-quality research. You and your dry-run participants should recognize areas where questions aren’t flowing in the best order or don’t make sense in the context of the interview, allowing you to correct the flow in advance.

While much of what the Transactional Model informs for interviews already aligns with common best practices, the model suggests we should give deeper consideration to factors we can neglect when we become overly comfortable with interviewing or are unaware of their implications: context, power dynamics, and post-interview actions.

Context Considerations

You need to account for both the context of the participant, e.g., their background, demographic, and psychographic information, as well as the context of the interview itself. You should make subtle yet meaningful modifications depending on the channel through which you are conducting the interview.

For example, you should utilize video and be aware of your facial and physical responses if you are conducting an interview using an online platform, whereas if it’s a phone interview, you will need to rely on verbal affirmations that you are listening and following along, while also being mindful not to interrupt the participant while they are speaking.

Power Dynamics

You need to be aware of how your role, background, and identity might influence the power dynamics of the interview. You can attempt to address power dynamics by sharing research goals transparently and addressing any concerns about bias a participant raises.

We are responsible for creating a safe and inclusive space for our interviews. You do this through the use of inclusive language, listening actively without judgment, and being flexible to accommodate different ways of knowing and expressing experiences. You should also empower participants as collaborators whenever possible . You can offer opportunities for participants to share feedback on the interview process and analysis. Doing this validates participants’ experiences and knowledge and ensures their voices are heard and valued.

Post-Interview Actions

You have a number of options for actions that can close the loop of your interviews with participants in line with the “feedback” the model suggests is a critical part of communication. Some tactics you can consider following your interview include:

  • Debriefing Dedicate a few minutes at the end to discuss the participant’s overall experience, impressions, and suggestions for future interviews.
  • Short surveys Send a brief survey via email or an online platform to gather feedback on the interview experience.
  • Follow-up calls Consider follow-up calls with specific participants to delve deeper into their feedback and gain additional insight if you find that is warranted.
  • Thank you emails Include a “feedback” section in your thank you email, encouraging participants to share their thoughts on the interview.

You also need to do something with the feedback you receive. Researchers and product teams should make time for reflexivity and critical self-awareness.

As practitioners in a human-focused field, we are expected to continuously examine how our assumptions and biases might influence our interviews and findings.

We shouldn’t practice our craft in a silo. Instead, seeking feedback from colleagues and mentors to maintain ethical research practices should be a standard practice for interviews and all UX research methods.

By considering interviews as an ongoing transaction and exchange of ideas rather than a unidirectional Q&A, UX researchers can create a more communicative and engaging environment. You can see how models of communication have informed best practices for interviews. With a better knowledge of the Transactional Model, you can go deeper and check your work against the framework of the model.

Surveys

The Transactional Model of Communication reminds us to acknowledge the feedback loop even in seemingly one-way communication methods like surveys. Instead of merely sending out questions and collecting responses, we need to provide space for respondents to voice their thoughts and opinions freely. When we make participants feel heard, engagement with our surveys should increase, dropouts should decrease, and response quality should improve.

Like other methods, surveys involve the model’s full set of factors: the researcher(s) who create the instructions and questionnaire (sender), the survey itself, including any instructions, disclaimers, and consent forms (the message), how the survey is administered, e.g., online, in person, or pen and paper (the channel), the participant (receiver), potential misunderstandings or distractions (noise), and responses (feedback).

Designing the Survey

Understanding the Transactional Model will help researchers design more effective surveys. Researchers are encouraged to be aware of both their role as the sender and to anticipate the participant’s perspective as the receiver. Begin surveys with clear instructions, explaining why you’re conducting the survey and how long it’s estimated to take. This establishes a more communicative relationship with respondents right from the start. Test these instructions with multiple people prior to launching the survey.

Crafting Questions

The questions should be crafted to encourage feedback and not just a simple yes or no. You should consider asking scaled questions or items that have been statistically validated to measure certain attributes of users.

For example, if you were looking deeper at a mobile banking application, rather than asking, “Did you find our product easy to use?” you would want to break that out into multiple aspects of the experience and ask about each with a separate question such as “On a scale of 1–7, with 1 being extremely difficult and 7 being extremely easy, how would you rate your experience transferring money from one account to another?” .

Reducing ‘noise,’ or misunderstandings, is crucial for increasing the reliability of responses. Your first line of defense against noise is to make sure you are sampling from the appropriate population. You need a screener that filters out non-viable participants before including them in the survey. You do this by correctly identifying the characteristics of the population you want to sample from and then excluding those who fall outside those parameters.

Additionally, you should focus on prioritizing finding participants through random sampling from the population of potential participants versus using a convenience sample, as this helps to ensure you are collecting reliable data.
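A minimal sketch of this screening-then-sampling step, using an invented participant pool (the field names and pool are hypothetical):

```python
import random

# Hypothetical sketch with an invented participant pool: screen out
# non-viable participants first, then draw a simple random sample rather
# than taking the first (most convenient) volunteers.
pool = [
    {"id": 1, "uses_mobile_banking": True},
    {"id": 2, "uses_mobile_banking": False},
    {"id": 3, "uses_mobile_banking": True},
    {"id": 4, "uses_mobile_banking": True},
    {"id": 5, "uses_mobile_banking": True},
]

# Screener: only people matching the population characteristics remain.
eligible = [p for p in pool if p["uses_mobile_banking"]]

random.seed(42)  # seeded here only so the example is reproducible
sample = random.sample(eligible, k=2)
print(sorted(p["id"] for p in sample))
```

In practice the pool would come from a panel provider or recruiting tool, but the principle is the same: screen first, then sample randomly from everyone who qualifies.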

When looking at the survey itself, there are a number of recommendations to reduce noise. You should ensure questions are easily understandable, avoid technical jargon, and sequence questions logically. A question bank should be reviewed and tested before being finalized for distribution.

For example, question statements like “Do you use and like this feature?” can confuse respondents because they are actually two separate questions: do you use the feature, and do you like the feature? You should separate out questions like this into more than one question.

You should use visual aids that are relevant whenever possible to enhance the clarity of the questions. For example, if you are asking questions about an application’s “Dashboard” screen, you might want to provide a screenshot of that page so survey takers have a clear understanding of what you are referencing. You should also avoid the use of jargon if you are surveying a non-technical population and explain any terminology that might be unclear to participants taking the survey.

The Transactional Model suggests that active participation is necessary for effective communication. Participants can become distracted or take a survey without intending to provide thoughtful answers. You should consider adding a question somewhere in the middle of the survey to check that participants are paying attention and responding appropriately, particularly for longer surveys.

This is often done using a simple math problem such as “What is the answer to 1+1?” Anyone not answering “2” may not be paying adequate attention, and you’d want to look closer at their responses, eliminating them from your analysis if deemed appropriate.
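Flagging failed attention checks is straightforward to script. A minimal sketch with invented response data (the field names here are hypothetical):

```python
# Hypothetical sketch with invented response data: flag respondents who
# fail the mid-survey attention check before analysis.
responses = [
    {"respondent": "r1", "attention_check": "2", "ease_rating": 6},
    {"respondent": "r2", "attention_check": "7", "ease_rating": 1},
    {"respondent": "r3", "attention_check": " 2 ", "ease_rating": 5},
]

def passed_attention_check(r):
    # Tolerate stray whitespace, but require the correct answer of "2".
    return r["attention_check"].strip() == "2"

flagged = [r["respondent"] for r in responses if not passed_attention_check(r)]
clean = [r for r in responses if passed_attention_check(r)]
print(flagged)  # → ['r2']
```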

Encouraging Feedback

While descriptive feedback questions are one way of promoting dialogue, you can also include areas where respondents can express any additional thoughts or questions they have outside of the set question list. This is especially useful in online surveys, where researchers can’t immediately address participants’ questions or clarify doubts.

You should be mindful that too many open-ended questions can cause fatigue, so limit their number. I recommend two to three open-ended questions, depending on the length of your overall survey.

Post-Survey Actions

After collecting and analyzing the data, you can send follow-up communications to the respondents. Let them know the changes made based on their feedback, thank them for their participation, or even share a summary of the survey results. This fulfills the Transactional Model’s feedback loop and communicates to the respondent that their input was received, valued, and acted upon.

You can also meet this suggestion by providing an email address for participants to follow up if they desire more information post-survey. You are allowing them to complete the loop themselves if they desire.

Applying the Transactional Model to surveys can breathe new life into the way surveys are conducted in UX research. It encourages active participation from respondents, making the process more interactive and engaging while enhancing the quality of the data collected. You can experiment with applying some or all of the steps listed above. You will likely find you are already doing much of what’s mentioned; however, being explicit can help you make sure you are thoughtfully applying these principles from the field of communication.

Usability Testing

Usability testing is another clear example of a research method highlighting components of the Transactional Model. In the context of usability testing, the Transactional Model of Communication’s application opens a pathway for a richer understanding of the user experience by positioning both the user and the researcher as sender and receiver of communication simultaneously.

Here are some ways a researcher can use elements of the Transactional Model during usability testing:

Task Assignment as Message Sending

When a researcher assigns tasks to a user during usability testing, they act as the sender in the communication process. To ensure the user accurately receives the message, these tasks need to be clear and well-articulated. For example, a task like “Register a new account on the app” sends a clear message to the user about what they need to do.

You don’t need to tell them how to do the task, as usually, that’s what we are trying to determine from our testing, but if you are not clear on what you want them to do, your message will not resonate in the way it is intended. This is another area where a dry run in advance of the testing is an optimal solution for making sure tasks are worded clearly.

Observing and Listening as Message Receiving

As the participant interacts with the application, concept, or design, the researcher, as the receiver, picks up on verbal and nonverbal cues. For instance, if a user is clicking around aimlessly or murmuring in confusion, the researcher can take these as feedback about certain elements of the design that are unclear or hard to use. You can also ask the user to explain why they are giving these cues you note as a way to provide them with feedback on their communication.

Real-time Interaction

The transactional nature of the model recognizes the importance of real-time interaction. For example, if during testing, the user is unsure of what a task means or how to proceed, the researcher can provide clarification without offering solutions or influencing the user’s action. This interaction follows the communication flow prescribed by the transactional model. We lose the ability to do this during unmoderated testing; however, many design elements are forms of communication that can serve to direct users or clarify the purpose of an experience (to be covered more in article two).

In usability testing, noise could mean unclear tasks, users’ preconceived notions, or even issues like slow software response. Acknowledging noise can help researchers plan and conduct tests better. Again, carrying out a pilot test can help identify any noise in the main test scenarios, allowing for necessary tweaks before actual testing. Other forms of noise can be less obvious but equally intrusive. For example, if you are conducting a test using a MacBook laptop and your participant is used to a PC, there is noise you need to account for, given their unfamiliarity with the laptop you’ve provided.

The fidelity of the design artifact being tested might introduce another form of noise. I’ve always advocated testing at any level of fidelity, but you should note that if you are using “Lorem Ipsum” or black and white designs, this potentially adds noise.

One of my favorite examples of this was a time when I was testing a financial services application, and the designers had put different balances on the screen; however, the total for all balances had not been added up to the correct total. Virtually every person tested noted this discrepancy, although it had nothing to do with the tasks at hand. I had to acknowledge we’d introduced noise to the testing. As at least one participant noted, they wouldn’t trust a tool that wasn’t able to total balances correctly.

Under the Transactional Model’s guidance, feedback isn’t just final thoughts after testing; it should be facilitated at each step of the process. Encouraging ‘think aloud’ protocols , where the user verbalizes their thoughts, reactions, and feelings during testing, ensures a constant flow of useful feedback.

You are receiving feedback throughout the process of usability testing, and the model provides guidance on how you should use that feedback to create a shared meaning with the participants. You will ultimately summarize this meaning in your report. You’ll later end up uncovering if this shared meaning was correctly interpreted when you design or redesign the product based on your findings.

We’ve now covered how to apply the Transactional Model of Communication to three common UX Research methods. All research with humans involves communication. You can break down other UX methods using the Model’s factors to make sure you engage in high-quality research.

Analyzing and Reporting UX Research Data Through the Lens of the Transactional Model

The Transactional Model of Communication doesn’t only apply to the data collection phase (interviews, surveys, or usability testing) of UX research. Its principles can provide valuable insights during the data analysis process.

The Transactional Model instructs us to view any communication as an interactive, multi-layered dialogue — a concept that is particularly useful when unpacking user responses. Consider the ‘message’ components: In the context of data analysis, the messages are the users’ responses. As researchers, thinking critically about how respondents may have internally processed the survey questions, interview discussion, or usability tasks can yield richer insights into user motivations.

Understanding Context

Just as the Transactional Model emphasizes the simultaneous interchange of communication, UX researchers should consider the user’s context while interpreting data. Decoding the meaning behind a user’s words or actions involves understanding their background, experiences, and the situation when they provide responses.

Deciphering Noise

In the Transactional Model, noise presents a potential barrier to effective communication. Similarly, researchers must watch for noise during analysis: patterns of confusion, misunderstandings, or issues users consistently highlight. You need to account for this noise, as in the example I provided where participants repeatedly referred to the incorrect math on static wireframes.

Considering Sender-Receiver Dynamics

Remember that as a UX researcher, your interpretation of user responses will be influenced by your own understanding, biases, or preconceptions, just as the responses were influenced by the user’s perceptions. By acknowledging this, researchers can strive to neutralize subjective influence and ensure the analysis remains centered on the user’s perspective. You can ask other researchers to double-check your work to help account for bias.

For example, if you come up with a clear theme that users need better guidance in the application you are testing, another researcher from outside of the project should come to a similar conclusion if they view the data; if not, you should have a conversation with them to determine what different perspectives you are each bringing to the data analysis.
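One way to quantify how closely two researchers' independent codings agree is Cohen's kappa, a standard inter-rater agreement statistic. The codes below are invented for illustration; a low kappa signals the analysts should sit down and compare the perspectives they are bringing to the data:

```python
from collections import Counter

# Hypothetical sketch: two researchers independently coded the same six
# interview excerpts (codes invented). Cohen's kappa measures how much
# they agree beyond what chance alone would produce.
coder_a = ["guidance", "guidance", "navigation", "guidance", "trust", "navigation"]
coder_b = ["guidance", "navigation", "navigation", "guidance", "trust", "navigation"]

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n  # raw agreement rate
    counts_a, counts_b = Counter(a), Counter(b)
    # Chance agreement: probability both coders pick the same code at random.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(coder_a, coder_b)
print(round(kappa, 2))  # → 0.74
```

Rules of thumb vary, but values above roughly 0.6 to 0.7 are commonly treated as substantial agreement; with only a handful of excerpts, as here, treat the number as a conversation starter rather than proof.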

Reporting Results

Understanding your audience is crucial for delivering a persuasive UX research presentation. Tailoring your communication to resonate with the specific concerns and interests of your stakeholders can significantly enhance the impact of your findings. Here are some more details:

  • Identify Stakeholder Groups Identify the different groups of stakeholders who will be present in your audience. This could include designers, developers, product managers, and executives.
  • Prioritize Information Prioritize the information based on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives may prioritize business impact.
  • Adapt Communication Style Adjust your communication style to align with the communication preferences of each group. Provide technical details for developers and emphasize user experience benefits for executives.

Acknowledging Feedback

Respecting this Transactional Model’s feedback loop, remember to revisit user insights after implementing design changes. This ensures you stay user-focused, continuously validating or adjusting your interpretations based on users’ evolving feedback. You can do this in a number of ways. You can reconnect with users to show them updated designs and ask questions to see if the issues you attempted to resolve were resolved.

Another way to address this, without reconnecting with users, is to create a spreadsheet or similar document that tracks every recommendation made and reconciles it with what was actually updated in the design. You should be able to map the changes users requested to updates or additions on the product roadmap for future releases. This documents that users were heard and that an attempt to address their pain points is planned or underway.
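As a loose illustration of such a tracking document, here is a minimal sketch in Python that writes a recommendation log as a CSV file. The field names and example rows are hypothetical, invented for this sketch rather than taken from the study described here:

```python
import csv

# Hypothetical fields for reconciling user recommendations with design updates.
FIELDS = ["recommendation", "source_participant", "status", "roadmap_item"]

rows = [
    {"recommendation": "Clearer onboarding guidance",
     "source_participant": "P03",
     "status": "Implemented",
     "roadmap_item": "v2.1 onboarding redesign"},
    {"recommendation": "Larger tap targets on mobile",
     "source_participant": "P07",
     "status": "Planned",
     "roadmap_item": "v2.2 accessibility pass"},
]

# Write the log so each recommendation maps to a roadmap item and a status.
with open("recommendation_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

A shared file like this gives stakeholders a single place to see which user requests have been acted on and which are still queued.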

Crucially, the Transactional Model teaches us that communication is rarely simple or one-dimensional. It encourages UX researchers to take a more nuanced, context-aware approach to data analysis, resulting in deeper user understanding and more accurate, user-validated results.

By maintaining an ongoing feedback loop with users and continually refining interpretations, researchers can ensure that their work remains grounded in real user experiences and needs.

Tracking Your Application of the Transactional Model to Your Practice

You might find it useful to track how you align your research planning and execution to the framework of the Transactional Model. I’ve created a spreadsheet to outline key factors of the model and have used it in some of my work. Below is an example derived from a study conducted for a banking client that included interviews and usability testing. I completed this spreadsheet while planning and conducting the interviews. Anonymized data from our study is included to show how you might populate a similar spreadsheet with your own information.

You can customize the spreadsheet structure to fit your specific research topic and interview approach. By documenting your application of the transactional model, you can gain valuable insights into the dynamic nature of communication and improve your interview skills for future research.

You can use the suggested columns from this table as you see fit, adding or removing them as needed, particularly if you use a method other than interviews. I usually add the following columns for logistical purposes:

  • Date of Interview,
  • Participant ID,
  • Interview Format (e.g., in person, remote, video, phone).

By incorporating aspects of communication theory into UX research, UX researchers and those who work with UX researchers can enhance the effectiveness of their communication strategies, gather more accurate insights, and create better user experiences. Communication theory provides a framework for understanding the dynamics of communication, and its application to UX research enables researchers to tailor their approaches to specific audiences, employ effective interviewing techniques, design surveys and questionnaires, establish seamless communication channels during usability testing, and interpret data more effectively.

As the field of UX research continues to evolve, integrating communication theory into research practices will become increasingly essential for bridging the gap between users and design teams, ultimately leading to more successful products that resonate with target audiences.

As a UX professional, it is important to continually explore and integrate new theories and methodologies to enhance your practice. By leveraging communication theory principles, you can better understand user needs, improve the user experience, and drive successful outcomes for digital products and services.

Integrating communication theory into UX research is an ongoing journey of learning and implementing best practices. Embracing this approach empowers researchers to effectively communicate their findings to stakeholders and foster collaborative decision-making, ultimately driving positive user experiences and successful design outcomes.

References and Further Reading

  • The Mathematical Theory of Communication (PDF), Shannon, C. E., & Weaver, W.
  • From organizational effectiveness to relationship indicators: Antecedents of relationships, public relations strategies, and relationship outcomes, Grunig, J. E., & Huang, Y. H.
  • Communication and Persuasion: Psychological Studies of Opinion Change, Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Yale University Press
  • Communication research as an autonomous discipline, Chaffee, S. H. (1986). Communication Yearbook, 10, 243–274
  • Interpersonal Communication: Everyday Encounters (PDF), Wood, J. (2015)
  • Theories of Human Communication, Littlejohn, S. W., & Foss, K. A. (2011)
  • McQuail’s Mass Communication Theory (PDF), McQuail, D. (2010)
  • Bridges Not Walls: A Book About Interpersonal Communication, Stewart, J. (2012)

Smashing Newsletter

Tips on front-end & UX, delivered weekly in your inbox. Just the things you can actually use.

Front-End & UX Workshops, Online

With practical takeaways, live sessions, video recordings and a friendly Q&A.

TypeScript in 50 Lessons

Everything TypeScript, with code walkthroughs and examples. And other printed books.

IMAGES

  1. Survey Design: A 10-step Guide with Examples

    research methods survey design

  2. A Comprehensive Guide to Survey Research Methodologies

    research methods survey design

  3. 15 Research Methodology Examples (2023)

    research methods survey design

  4. Survey Method

    research methods survey design

  5. Survey Design

    research methods survey design

  6. Survey Research: Definition, Examples & Methods

    research methods survey design

VIDEO

  1. Design-Based Analysis of Survey Data (Sept. 2019) Part 1

  2. Lecture 7(1) Business Research Methods Survey Research Tools

  3. Psych Research Methods: Observational and Survey Research: Day 3 Part 1

  4. Design-Based Analysis of Survey Data (Sept. 2019) Part 2

  5. Why collect the data through Questionnaires || The Power of Questionnaires in Data Collection

  6. despondent design 😤😤ll #shorts #youtubeshorts

COMMENTS

  1. Survey Research

    Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout.

  2. Survey Research

    Survey Research. Definition: Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

  3. Survey Research: Definition, Examples and Methods

    Survey research methods can be derived based on two critical factors: Survey research tool and time involved in conducting research. ... Survey research design. Researchers implement a survey research design in cases where there is a limited cost involved and there is a need to access details easily. This method is often used by small and large ...

  4. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.

  5. Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" ( Check & Schutt, 2012, p. 160 ). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative ...

  6. Perspectives on Survey Research Design

    The experiment was conducted in the Detroit Metro Area Communities Study in 2021. We evaluated the adaptive design in five outcomes: 1) response rates, 2) demographic composition of respondents, 3) bias and variance of key survey estimates, 4) changes in significant results of regression models, and 5) costs.

  7. Designing Survey Research

    Survey research is common in academic research, as well as in business, political, and other kinds of projects that aim to get a sense of the preferences or attitudes of a group of people. Surveys are primarily associated with quantitative research, but can also be used in qualitative inquiries. As Andres (2012) points out, designing a study ...

  8. Research Design

    Step 1: Consider your aims and approach. Step 2: Choose a type of research design. Step 3: Identify your population and sampling method. Step 4: Choose your data collection methods. Step 5: Plan your data collection procedures. Step 6: Decide on your data analysis strategies. Frequently asked questions.

  9. Survey Research: Definition, Examples & Methods

    Here, we cover a few: 1. They're relatively easy to do. Most research surveys are easy to set up, administer and analyze. As long as the planning and survey design is thorough and you target the right audience, the data collection is usually straightforward regardless of which survey type you use. 2.

  10. Doing Survey Research

    Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout. Distribute the survey.

  11. PDF Fundamentals of Survey Research Methodology

    The survey is then constructed to test this model against observations of the phenomena. In contrast to survey research, a . survey. is simply a data collection tool for carrying out survey research. Pinsonneault and Kraemer (1993) defined a survey as a "means for gathering information about the characteristics, actions, or opinions of a ...

  12. A quick guide to survey research

    After settling on your research goal and beginning to design a questionnaire, the main considerations are the method of data collection, the survey instrument and the type of question you are going to ask. Methods of data collection include personal interviews, telephone, postal or electronic (Table 1).

  13. Survey Research

    Survey designs. Kerry Tanner, in Research Methods (Second Edition), 2018. Conclusion. Survey research designs remain pervasive in many fields. Surveys can appear deceptively simple and straightforward to implement. However valid results depend on the researcher having a clear understanding of the circumstances where their use is appropriate and the constraints on inference in interpreting and ...

  14. Survey Research: Definition, Types & Methods

    Descriptive research is the most common and conclusive form of survey research due to its quantitative nature. Unlike exploratory research methods, descriptive research utilizes pre-planned, structured surveys with closed-ended questions. It's also deductive, meaning that the survey structure and questions are determined beforehand based on existing theories or areas of inquiry.

  15. Questionnaire Design

    Questionnaires vs. surveys. A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

  16. Survey Design

    6 Survey Design . Survey research is one of the most ubiquitous forms of research, Seemingly we see questions being asked everywhere. However, let's think specifically about surveys for research purposes rather than about forms for collecting information for administrative or other purposes (although what we know about surveys can help make better forms).

  17. 12. Survey design

    Assuming well-constructed questions and survey design, one strength of this methodology is its potential to produce reliable results. The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. They can measure anything that people can self-report.

  18. Writing Survey Questions

    [View more Methods 101 Videos] . An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would "favor or oppose taking military action in Iraq to end Saddam Hussein's rule," 68% said they favored military action while 25% said they opposed military action.

  19. Chapter 3 -- Survey Research Design and Quantitative Methods of ...

    Chapter 3 -- Survey Research Design and Quantitative Methods of Analysis for Cross-sectional Data. ... Survey research is a method of collecting information by asking questions. Sometimes interviews are done face-to-face with people at home, in school, or at work. Other times questions are sent in the mail for people to answer and mail back.

  20. PDF Survey Design

    A survey is a systematic method for gathering information from (a sample of) entities for the purposes of constructing quantitative descriptors of the attributes of the larger population of which the entities are members. Surveys are conducted to gather information that reflects population's attitudes, behaviors, opinions and beliefs that ...

  21. Research Design

    This will guide your research design and help you select appropriate methods. Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.

  22. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  23. Survey Descriptive Research: Design & Examples

    The descriptive survey research design uses both quantitative and qualitative research methods. It is used primarily to conduct quantitative research and gather data that is statistically easy to analyze. However, it can also provide qualitative data that helps describe and understand the research subject. 2.

  24. How to Construct a Mixed Methods Research Design

    Quantitative dominant [or quantitatively driven] mixed methods research is the type of mixed research in which one relies on a quantitative, postpositivist view of the research process, while concurrently recognizing that the addition of qualitative data and approaches are likely to benefit most research projects. (p.

  25. Connecting With Users: Applying Principles Of Communication To UX Research

    The channel selection depends on the method of research. For instance, face-to-face methods might use physical verbal communication, while remote methods might rely on emails, video calls, or instant messaging. ... Understanding the Transactional Model will help researchers design more effective surveys. Researchers are encouraged to be aware ...