
Organizing Your Social Sciences Research Paper

Glossary of research terms.


This glossary is intended to assist you in understanding commonly used terms and concepts when reading, interpreting, and evaluating scholarly research. Also included are common words and phrases defined within the context of how they apply to research in the social and behavioral sciences.

  • Acculturation -- refers to the process of adapting to another culture, particularly in reference to blending in with the majority population [e.g., an immigrant adopting American customs]. However, acculturation also implies that both cultures add something to one another, but still remain distinct groups unto themselves.
  • Accuracy -- a term used in survey research to refer to the match between the target population and the sample.
  • Affective Measures -- procedures or devices used to obtain quantified descriptions of an individual's feelings, emotional states, or dispositions.
  • Aggregate -- a total created from smaller units. For instance, the population of a county is an aggregate of the populations of the cities, rural areas, etc. that comprise the county. As a verb, it means to combine data from smaller units into a larger unit.
  • Anonymity -- a research condition in which no one, including the researcher, knows the identities of research participants.
  • Baseline -- a control measurement carried out before an experimental treatment.
  • Behaviorism -- school of psychological thought concerned with the observable, tangible, objective facts of behavior, rather than with subjective phenomena such as thoughts, emotions, or impulses. Contemporary behaviorism also emphasizes the study of mental states such as feelings and fantasies to the extent that they can be directly observed and measured.
  • Beliefs -- ideas, doctrines, tenets, etc. that are accepted as true on grounds which are not immediately susceptible to rigorous proof.
  • Benchmarking -- systematically measuring and comparing the operations and outcomes of organizations, systems, processes, etc., against agreed upon "best-in-class" frames of reference.
  • Bias -- a loss of balance and accuracy in the use of research methods. It can appear in research via the sampling frame, random sampling, or non-response. It can also occur at other stages in research, such as while interviewing, in the design of questions, or in the way data are analyzed and presented. Bias means that the research findings will not be representative of, or generalizable to, a wider population.
  • Case Study -- the collection and presentation of detailed information about a particular participant or small group, frequently including data derived from the subjects themselves.
  • Causal Hypothesis -- a statement hypothesizing that the independent variable affects the dependent variable in some way.
  • Causal Relationship -- the relationship established that shows that an independent variable, and nothing else, causes a change in a dependent variable. It also establishes how much of a change is shown in the dependent variable.
  • Causality -- the relation between cause and effect.
  • Central Tendency -- any way of describing or characterizing typical, average, or common values in some distribution.
  • Chi-square Analysis -- a common non-parametric statistical test which compares an expected proportion or ratio to an actual proportion or ratio.
  • Claim -- a statement, similar to a hypothesis, which is made in response to the research question and that is affirmed with evidence based on research.
  • Classification -- ordering of related phenomena into categories, groups, or systems according to characteristics or attributes.
  • Cluster Analysis -- a method of statistical analysis where data that share a common trait are grouped together. The data is collected in a way that allows the data collector to group data according to certain characteristics.
  • Cohort Analysis -- group by group analytic treatment of individuals having a statistical factor in common to each group. Group members share a particular characteristic [e.g., born in a given year] or a common experience [e.g., entering a college at a given time].
  • Confidentiality -- a research condition in which no one except the researcher(s) knows the identities of the participants in a study. It refers to the treatment of information that a participant has disclosed to the researcher in a relationship of trust and with the expectation that it will not be revealed to others in ways that violate the original consent agreement, unless permission is granted by the participant.
  • Confirmability [Objectivity] -- the findings of the study could be confirmed by another person conducting the same study.
  • Construct -- refers to any of the following: something that exists theoretically but is not directly observable; a concept developed [constructed] for describing relations among phenomena or for other research purposes; or, a theoretical definition in which concepts are defined in terms of other concepts. For example, intelligence cannot be directly observed or measured; it is a construct.
  • Construct Validity -- seeks an agreement between a theoretical concept and a specific measuring device, such as observation.
  • Constructivism -- the idea that reality is socially constructed. It is the view that reality cannot be understood outside of the way humans interact and that knowledge is constructed, not discovered. Constructivists believe that learning is more active and self-directed than either behaviorism or cognitive theory would postulate.
  • Content Analysis -- the systematic, objective, and quantitative description of the manifest or latent content of print or nonprint communications.
  • Context Sensitivity -- awareness by a qualitative researcher of factors such as values and beliefs that influence cultural behaviors.
  • Control Group -- the group in an experimental design that receives either no treatment or a different treatment from the experimental group. This group can thus be compared to the experimental group.
  • Controlled Experiment -- an experimental design with two or more randomly selected groups [an experimental group and control group] in which the researcher controls or introduces the independent variable and measures the dependent variable at least two times [pre- and post-test measurements].
  • Correlation -- a common statistical analysis, usually abbreviated as r, that measures the degree of relationship between pairs of interval variables in a sample. The range of correlation is from -1.00 to zero to +1.00. Also, a non-cause and effect relationship between two variables.
  • Covariate -- a variable that is related to the dependent variable and whose influence is statistically controlled for in an analysis; in true experiments, covariates are used to adjust for pre-existing differences between treatment groups. [The product of the correlation of two related variables and their standard deviations is their covariance.]
  • Credibility -- a researcher's ability to demonstrate that the object of a study is accurately identified and described based on the way in which the study was conducted.
  • Critical Theory -- an evaluative approach to social science research, associated with Germany's neo-Marxist “Frankfurt School,” that aims to criticize as well as analyze society, opposing the political orthodoxy of modern communism. Its goal is to promote human emancipatory forces and to expose ideas and systems that impede them.
  • Data -- factual information [as measurements or statistics] used as a basis for reasoning, discussion, or calculation.
  • Data Mining -- the process of analyzing data from different perspectives and summarizing it into useful information, often to discover patterns and/or systematic relationships among variables.
  • Data Quality -- this is the degree to which the collected data [results of measurement or observation] meet the standards of quality to be considered valid [trustworthy] and reliable [dependable].
  • Deductive -- a form of reasoning in which conclusions are formulated about particulars from general or universal premises.
  • Dependability -- being able to account for changes in the design of the study and the changing conditions surrounding what was studied.
  • Dependent Variable -- a variable that varies due, at least in part, to the impact of the independent variable. In other words, its value “depends” on the value of the independent variable. For example, in the variables “gender” and “academic major,” academic major is the dependent variable, meaning that your major cannot determine whether you are male or female, but your gender might indirectly lead you to favor one major over another.
  • Deviation -- the distance between the mean and a particular data point in a given distribution.
  • Discourse Community -- a community of scholars and researchers in a given field who respond to and communicate to each other through published articles in the community's journals and presentations at conventions. All members of the discourse community adhere to certain conventions for the presentation of their theories and research.
  • Discrete Variable -- a variable that is measured solely in whole units, such as gender or number of siblings.
  • Distribution -- the range of values of a particular variable.
  • Effect Size -- the amount of change in a dependent variable that can be attributed to manipulations of the independent variable. A large effect size exists when the value of the dependent variable is strongly influenced by the independent variable. It is the mean difference on a variable between experimental and control groups divided by the standard deviation on that variable of the pooled groups or of the control group alone.
  • Emancipatory Research -- research is conducted on and with people from marginalized groups or communities. It is led by a researcher or research team who is either an indigenous or external insider; is interpreted within intellectual frameworks of that group; and, is conducted largely for the purpose of empowering members of that community and improving services for them. It also engages members of the community as co-constructors or validators of knowledge.
  • Empirical Research -- the process of developing systematized knowledge gained from observations that are formulated to support insights and generalizations about the phenomena being researched.
  • Epistemology -- concerns knowledge construction; asks what constitutes knowledge and how knowledge is validated.
  • Ethnography -- method to study groups and/or cultures over a period of time. The goal of this type of research is to comprehend the particular group/culture through immersion into the culture or group. Research is completed through various methods, but since the researcher is immersed within the group for an extended period of time, more detailed information is usually collected during the research.
  • Expectancy Effect -- any unconscious or conscious cues that convey to the participant in a study how the researcher wants them to respond. Expecting someone to behave in a particular way has been shown to promote the expected behavior. Expectancy effects can be minimized by using standardized interactions with subjects, automated data-gathering methods, and double blind protocols.
  • External Validity -- the extent to which the results of a study are generalizable or transferable.
  • Factor Analysis -- a statistical test that explores relationships among data. The test explores which variables in a data set are most related to each other. In a carefully constructed survey, for example, factor analysis can yield information on patterns of responses, not simply data on a single response. Larger tendencies may then be interpreted, indicating behavior trends rather than simply responses to specific questions.
  • Field Studies -- academic or other investigative studies undertaken in a natural setting, rather than in laboratories, classrooms, or other structured environments.
  • Focus Groups -- small, roundtable discussion groups charged with examining specific topics or problems, including possible options or solutions. Focus groups usually consist of 4-12 participants, guided by moderators to keep the discussion flowing and to collect and report the results.
  • Framework -- the structure and support that may be used as both the launching point and the on-going guidelines for investigating a research problem.
  • Generalizability -- the extent to which research findings and conclusions drawn from a study of a specific group or situation can be applied to the population at large.
  • Grey Literature -- research produced by organizations outside of commercial and academic publishing that publish materials such as working papers, research reports, and briefing papers.
  • Grounded Theory -- practice of developing other theories that emerge from observing a group. Theories are grounded in the group's observable experiences, but researchers add their own insight into why those experiences exist.
  • Group Behavior -- behaviors of a group as a whole, as well as the behavior of an individual as influenced by his or her membership in a group.
  • Hypothesis -- a tentative explanation based on theory to predict a causal relationship between variables.
  • Independent Variable -- the conditions of an experiment that are systematically manipulated by the researcher. A variable that is not impacted by the dependent variable, and that itself impacts the dependent variable. In the earlier example of "gender" and "academic major," (see Dependent Variable) gender is the independent variable.
  • Individualism -- a theory or policy having primary regard for the liberty, rights, or independent actions of individuals.
  • Inductive -- a form of reasoning in which a generalized conclusion is formulated from particular instances.
  • Inductive Analysis -- a form of analysis based on inductive reasoning; a researcher using inductive analysis starts with answers, but formulates questions throughout the research process.
  • Insiderness -- a concept in qualitative research that refers to the degree to which a researcher has access to and an understanding of persons, places, or things within a group or community based on being a member of that group or community.
  • Internal Consistency -- the extent to which all questions or items assess the same characteristic, skill, or quality.
  • Internal Validity -- the rigor with which the study was conducted [e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and was not measured]. It is also the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore. In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity.
  • Life History -- a record of an event/events in a respondent's life told [written down, but increasingly audio or video recorded] by the respondent from his/her own perspective in his/her own words. A life history is different from a "research story" in that it covers a longer time span, perhaps a complete life, or a significant period in a life.
  • Margin of Error -- the permissible or acceptable deviation from the target or a specific value; the allowance for slight error, miscalculation, or changing circumstances in a study.
  • Measurement -- process of obtaining a numerical description of the extent to which persons, organizations, or things possess specified characteristics.
  • Meta-Analysis -- an analysis combining the results of several studies that address a set of related hypotheses.
  • Methodology -- a theory or analysis of how research does and should proceed.
  • Methods -- systematic approaches to the conduct of an operation or process. It includes steps of procedure, application of techniques, systems of reasoning or analysis, and the modes of inquiry employed by a discipline.
  • Mixed-Methods -- a research approach that uses two or more methods from both the quantitative and qualitative research categories. It is also referred to as blended methods, combined methods, or methodological triangulation.
  • Modeling -- the creation of a physical or computer analogy to understand a particular phenomenon. Modeling helps in estimating the relative magnitude of various factors involved in a phenomenon. A successful model can be shown to account for unexpected behavior that has been observed, to predict certain behaviors, which can then be tested experimentally, and to demonstrate that a given theory cannot account for certain phenomena.
  • Models -- representations of objects, principles, processes, or ideas often used for imitation or emulation.
  • Naturalistic Observation -- observation of behaviors and events in natural settings without experimental manipulation or other forms of interference.
  • Norm -- the norm in statistics is the average or usual performance. For example, students usually complete their high school graduation requirements when they are 18 years old. Even though some students graduate when they are younger or older, the norm is that any given student will graduate when he or she is 18 years old.
  • Null Hypothesis -- the proposition, to be tested statistically, that the experimental intervention has "no effect," meaning that the treatment and control groups will not differ as a result of the intervention. Investigators usually hope that the data will demonstrate some effect from the intervention, thus allowing the investigator to reject the null hypothesis.
  • Ontology -- a discipline of philosophy that explores the science of what is, the kinds and structures of objects, properties, events, processes, and relations in every area of reality.
  • Panel Study -- a longitudinal study in which a group of individuals is interviewed at intervals over a period of time.
  • Participant -- individuals whose physiological and/or behavioral characteristics and responses are the object of study in a research project.
  • Peer-Review -- the process in which the author of a book, article, or other type of publication submits his or her work to experts in the field for critical evaluation, usually prior to publication. This is standard procedure in publishing scholarly research.
  • Phenomenology -- a qualitative research approach concerned with understanding certain group behaviors from that group's point of view.
  • Philosophy -- critical examination of the grounds for fundamental beliefs and analysis of the basic concepts, doctrines, or practices that express such beliefs.
  • Phonology -- the study of the ways in which speech sounds form systems and patterns in language.
  • Policy -- governing principles that serve as guidelines or rules for decision making and action in a given area.
  • Policy Analysis -- systematic study of the nature, rationale, cost, impact, effectiveness, implications, etc., of existing or alternative policies, using the theories and methodologies of relevant social science disciplines.
  • Population -- the target group under investigation. The population is the entire set under consideration. Samples are drawn from populations.
  • Position Papers -- statements of official or organizational viewpoints, often recommending a particular course of action or response to a situation.
  • Positivism -- a doctrine in the philosophy of science, positivism argues that science can only deal with observable entities known directly to experience. The positivist aims to construct general laws, or theories, which express relationships between phenomena. Observation and experiment are used to show whether the phenomena fit the theory.
  • Predictive Measurement -- use of tests, inventories, or other measures to determine or estimate future events, conditions, outcomes, or trends.
  • Principal Investigator -- the scientist or scholar with primary responsibility for the design and conduct of a research project.
  • Probability -- the chance that a phenomenon will occur randomly. As a statistical measure, it is shown as p [the "p" factor].
  • Questionnaire -- structured sets of questions on specified subjects that are used to gather information, attitudes, or opinions.
  • Random Sampling -- a process used in research to draw a sample of a population strictly by chance, yielding no discernible pattern beyond chance. Random sampling can be accomplished by first numbering the population, then selecting the sample according to a table of random numbers or using a random-number computer generator. The sample is said to be random because there is no regular or discernible pattern or order. Random sample selection is used under the assumption that sufficiently large samples assigned randomly will exhibit a distribution comparable to that of the population from which the sample is drawn. The random assignment of participants increases the probability that differences observed between participant groups are the result of the experimental intervention.
  • Reliability -- the degree to which a measure yields consistent results. If the measuring instrument [e.g., survey] is reliable, then administering it to similar groups would yield similar results. Reliability is a prerequisite for validity. An unreliable indicator cannot produce trustworthy results.
  • Representative Sample -- sample in which the participants closely match the characteristics of the population, and thus, all segments of the population are represented in the sample. A representative sample allows results to be generalized from the sample to the population.
  • Rigor -- degree to which research methods are scrupulously and meticulously carried out in order to recognize important influences occurring in an experimental study.
  • Sample -- the population researched in a particular study. Usually, attempts are made to select a "sample population" that is considered representative of groups of people to whom results will be generalized or transferred. In studies that use inferential statistics to analyze results or which are designed to be generalizable, sample size is critical; generally, the larger the sample, the higher the likelihood of a representative distribution of the population.
  • Sampling Error -- the degree to which the results from the sample deviate from those that would be obtained from the entire population, because of random error in the selection of respondents and the corresponding reduction in reliability.
  • Saturation -- a situation in which data analysis begins to reveal repetition and redundancy and when new data tend to confirm existing findings rather than expand upon them.
  • Semantics -- the relationship between symbols and meaning in a linguistic system. Also, the cuing system that connects what is written in the text to what is stored in the reader's prior knowledge.
  • Social Theories -- theories about the structure, organization, and functioning of human societies.
  • Sociolinguistics -- the study of language in society and, more specifically, the study of language varieties, their functions, and their speakers.
  • Standard Deviation -- a measure of variation that indicates the typical distance between the scores of a distribution and the mean; it is determined by taking the square root of the average of the squared deviations in a given distribution. It can be used to indicate the proportion of data within certain ranges of scale values when the distribution conforms closely to the normal curve.
  • Statistical Analysis -- application of statistical processes and theory to the compilation, presentation, discussion, and interpretation of numerical data.
  • Statistical Bias -- characteristics of an experimental or sampling design, or the mathematical treatment of data, that systematically affects the results of a study so as to produce incorrect, unjustified, or inappropriate inferences or conclusions.
  • Statistical Significance -- the probability that the difference between the outcomes of the control and experimental group are great enough that it is unlikely due solely to chance. The probability that the null hypothesis can be rejected at a predetermined significance level [0.05 or 0.01].
  • Statistical Tests -- researchers use statistical tests to make quantitative decisions about whether a study's data indicate a significant effect from the intervention and allow the researcher to reject the null hypothesis. That is, statistical tests show whether the differences between the outcomes of the control and experimental groups are great enough to be statistically significant. If differences are found to be statistically significant, it means that the probability [likelihood] that these differences occurred solely due to chance is relatively low. Most researchers agree that a significance value of .05 or less [i.e., the probability that the differences arose solely by chance is 5% or less] sufficiently determines significance.
  • Subcultures -- ethnic, regional, economic, or social groups exhibiting characteristic patterns of behavior sufficient to distinguish them from the larger society to which they belong.
  • Testing -- the act of gathering and processing information about individuals' ability, skill, understanding, or knowledge under controlled conditions.
  • Theory -- a general explanation about a specific behavior or set of events that is based on known principles and serves to organize related events in a meaningful way. A theory is not as specific as a hypothesis.
  • Treatment -- the intervention or stimulus administered to participants in an experiment; its effect is measured as change in the dependent variable.
  • Trend Samples -- method of sampling different groups of people at different points in time from the same population.
  • Triangulation -- a multi-method or pluralistic approach, using different methods in order to focus on the research topic from different viewpoints and to produce a multi-faceted set of data. Also used to check the validity of findings from any one method.
  • Unit of Analysis -- the basic observable entity or phenomenon being analyzed by a study and for which data are collected in the form of variables.
  • Validity -- the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid.
  • Variable -- any characteristic or trait that can vary from one person to another [race, gender, academic major] or for one person over time [age, political beliefs].
  • Weighted Scores -- scores in which the components are modified by different multipliers to reflect their relative importance.
  • White Paper -- an authoritative report that often states the position or philosophy about a social, political, or other subject, or a general explanation of an architecture, framework, or product technology written by a group of researchers. A white paper seeks to contain unbiased information and analysis regarding a business or policy problem that the researchers may be facing.
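The definitions of central tendency, deviation, and standard deviation above can be made concrete with a short worked example. This is an illustrative sketch, not part of the original glossary; the score values are invented, and it uses only the Python standard library.

```python
import math

scores = [4, 8, 6, 5, 7]   # invented data for illustration

# Central tendency: the mean, one way of characterizing a typical value.
mean = sum(scores) / len(scores)               # 6.0

# Deviation: the distance between each data point and the mean.
deviations = [x - mean for x in scores]        # [-2.0, 2.0, 0.0, -1.0, 1.0]

# Standard deviation: the square root of the average squared deviation
# (the population form, matching the glossary definition above).
std_dev = math.sqrt(sum(d * d for d in deviations) / len(scores))
```

Note that dividing by the number of scores gives the population standard deviation; sample statistics typically divide by one less than the sample size instead.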
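The correlation entry above describes a statistic, r, that ranges from -1.00 through zero to +1.00. A minimal sketch of how r is computed for paired interval variables follows; the helper function and the hours/scores data are hypothetical, invented for illustration.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient r; always falls between -1.0 and +1.0."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical paired interval data: hours studied and exam scores.
hours  = [1, 2, 3, 4, 5]
scores = [52, 58, 63, 70, 77]

r = pearson_r(hours, scores)   # close to +1.0: a strong positive relationship
```

As the glossary notes, a strong correlation is not by itself evidence of a cause-and-effect relationship between the two variables.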
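The chi-square analysis, null hypothesis, and statistical significance entries above fit together in practice. The sketch below, with invented coin-flip counts, compares an observed proportion to an expected one and checks the statistic against the conventional 0.05 significance threshold:

```python
def chi_square(observed, expected):
    """Chi-square statistic: the sum of (observed - expected)^2 / expected."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical data: 100 coin flips, with a fair coin expected to give 50/50.
observed = [60, 40]
expected = [50, 50]

stat = chi_square(observed, expected)   # (10**2 / 50) + (10**2 / 50) = 4.0

# With 1 degree of freedom, the critical value at the 0.05 significance
# level is about 3.841; a statistic above it allows the null hypothesis
# ("the coin is fair") to be rejected at that level.
significant = stat > 3.841              # True for these invented counts
```

In practice a statistics library (for example, scipy.stats.chisquare) would also return an exact p-value rather than relying on a tabulated critical value.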
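The random sampling entry above describes numbering the population and then drawing the sample by chance, for example with a random-number generator. A minimal sketch of that procedure, with an invented population of 100 numbered units:

```python
import random

# A numbered population of 100 units, as the glossary entry suggests.
population = list(range(1, 101))

random.seed(42)                          # fixed seed so the sketch is repeatable
sample = random.sample(population, 10)   # 10 distinct units drawn strictly by chance

# Every unit had the same 10/100 probability of being selected, so a
# sufficiently large sample should mirror the population's distribution.
```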

Elliot, Mark, Fairweather, Ian, Olsen, Wendy Kay, and Pampaka, Maria. A Dictionary of Social Research Methods. Oxford, UK: Oxford University Press, 2016; Free Social Science Dictionary. Socialsciencedictionary.com [2008]; Glossary. Institutional Review Board. Colorado College; Glossary of Key Terms. Writing@CSU. Colorado State University; Glossary A-Z. Education.com; Glossary of Research Terms. Research Mindedness Virtual Learning Resource. Centre for Human Service Technology. University of Southampton; Miller, Robert L. and Brewer, John D. The A-Z of Social Research: A Dictionary of Key Social Science Research Concepts. London: SAGE, 2003; Jupp, Victor. The SAGE Dictionary of Social and Cultural Research Methods. London: SAGE, 2006.

  • Last Updated: Apr 1, 2024 9:56 AM
  • URL: https://libguides.usc.edu/writingguide


Qualitative and Quantitative Research: Glossary of Key Terms

This glossary provides definitions of many of the terms used in the guides to conducting qualitative and quantitative research. The definitions were developed by members of the research methods seminar (E600) taught by Mike Palmquist in the 1990s and 2000s.

Members of the Research Methods Seminar (E600) taught by Mike Palmquist in the 1990s and 2000s. (1994-2022). Glossary of Key Terms. Writing@CSU. Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=90

Multidisciplinary Methods for Exploring Organizations

Bias: a lack of balance and accuracy in the use of research methods. It can appear at any phase of research, from deciding on a sampling frame, to sampling, to data collection and analysis. Bias also arises through assumptions and ideas related to the researcher's own culture that may influence data collection and analysis. Bias interferes with the extent to which results are valid and accurate, whether the research is reliable, and the potential for results to be representative of, or generalizable to, a wider population.

Case Study: the collection and presentation of in-depth information about a specific individual, group, or community. Often these data represent the subjective experiences of an individual or group.

Causality: the relation between cause and effect. Causality is the agency that links one process or event (the cause) with another process, state, or event (the effect). The first of these is normally understood to be at least partly responsible for the occurrence of the second; thus the second is dependent upon the first. Causality is an abstraction based upon experience that is used to show and explain how change happens in the world.

Cultural Relativism: the idea that cultures are value-neutral. This means that rather than various cultures being better or worse ways of organizing behavior, they are simply different. In anthropology, this idea has been used to make sense of behaviors and values that seem alien or morally wrong to an outside observer; it has also been used to raise awareness of the potential for bias by an observer. The concept has been debated in anthropology and has raised concern that it inherently leads to moral relativism. Most modern anthropologists use the idea of cultural relativism as a way to bracket off one's own cultural assumptions and biases to the extent possible.

Data: factual information, collected through systematic methods, that is used as a basis for reasoning and analysis of a phenomenon.

Deductive Reasoning:   a type of reasoning in which conclusions are formulated about particulars from general or universal premises.  Here’s Monty Python’s take on deductive reasoning.

Dependent Variable:  a variable that varies due, at least in part, to the impact of the independent variable. In other words, its value “depends” on the value of the independent variable. For example, in the variables “gender” and “academic major,” academic major is the dependent variable, meaning that your major cannot determine whether you are male or female, but your gender might indirectly lead you to favor one major over another.  Check out the video under the entry for independent variables  for more information on the difference between dependent and independent variables.

Emic : an approach to the study or description of a language or culture that focuses on its internal elements and logic and their functioning rather than in terms of any existing external scheme.  The term can also refer to the native explanation for a behavior or cultural pattern.  The video below will help you to understand the differences between emic and etic perspectives as they are understood by cultural anthropologists.

Etic : an approach to the study or description of a language or culture that is general, nonstructural, and objective in its perspective.  It typically refers to explanations for behavior from the perspective of the scientist/researcher observing a culture or language.

Epistemology:   theory of knowledge that questions how we know things, how knowledge is constructed, and what constitutes valid knowledge.  Here is a very detailed definition/discussion of   epistemology from the Stanford online dictionary of philosophy.

Ethnography:  a method for studying groups and/or cultures over an extended period of time using a variety of qualitative (and sometimes quantitative) research techniques. Ethnography employs participant observation, which is intended to allow researchers to understand a group through immersion into its lifestyles.  This allows for a detailed, in-depth understanding of human experience.  Check out the TEDx video below for a nice discussion of the use of ethnography in business.

Field Studies : research studies carried out in natural settings, rather than in laboratories, classrooms, or other structured environments.

Focus Groups :  small, roundtable discussion groups charged with examining or discussing topics or problems associated with a research project.  In some cases, these may also involve discussion of solutions to identified problems.   Focus groups usually consist of 4-12 participants and are guided by moderators to keep the discussion moving and collect data.  Here is more on focus group research from the Robert Wood Johnson Foundation.

Grounded Theory:  an approach to research in which theories emerge from observing a group rather than being brought to the context of observation. Theories are grounded in the group’s observable experiences and interpretations, but researchers add their own insight into why those experiences exist.  Click here to access the website Grounded Theory Online .

Hypothesis : a tentative explanation or educated guess based on theory or observation that is used to predict a causal relationship between variables.  Click here to review some examples of hypotheses .

Independent Variable:   the conditions or variables of an experiment that are systematically manipulated by the researcher or a variable that is not impacted by the dependent variable, but that itself impacts the dependent variable.  Check out the video below for more information on the difference between dependent and independent variables.

Inductive Reasoning:  a type of reasoning in which a generalized conclusion is formulated based on particular instances.  Below is a video on the difference between inductive and deductive approaches to reasoning.

Naturalistic Observation:   observation of behaviors and events in natural settings rather than in experimental contexts that involve manipulation of variables or other types of interference.

Ontology:   a discipline of philosophy that explores the science of what is, the kinds and structures of objects, properties, events, processes, and relations in every area of reality.  Click here for a detailed discussion of logic and ontology from the Stanford online dictionary of philosophy.

Organization :  For the purposes of MMEO, an organization is an institutionalized structure that is formed for a specific purpose.  Examples of organizations include businesses, academic institutions, religious institutions, and government institutions.

Participant Observation :  a form of qualitative research that involves participating in the activities of the people being observed as a way of developing an experience-near understanding of their behaviors and ideas.

Phenomenology:  a qualitative research approach that focuses on meaning expressed by individuals through their lived experience of a particular idea, concept, or event.  This link will take you to more information on phenomenology .

Probability :  the likelihood that a phenomenon will occur randomly. As a statistical measure, it is represented as p.

Qualitative research:  a systematic approach to creating knowledge about how people interpret their surroundings, construct meaning, and interpret the meanings they construct. Qualitative research relies upon subtle and complex techniques of observation, recording data, and writing to develop an interpretive framework for analyzing and explaining why people do what they do and think what they think.

Quantitative research:  Quantitative research focuses on identifying objective measurements of phenomena such as human behavior.  In human subjects research it makes use of statistical, mathematical, and numerical analysis of empirical data collected using instruments such as questionnaires or through analyzing and manipulating pre-existing statistical data using computational techniques. Quantitative research uses numerical data to draw general conclusions across groups of people as a way of explaining particular behaviors or phenomena.  This link to a site at USC will give you more details on quantitative research .

Questionnaire :  structured groups of questions used to gather information, attitudes, or opinions.  Questionnaires can be either quantitative, including forced-choice questions, or qualitative, including open-ended questions.

Random Sampling : a process used in research to draw a sample of a population that does not reflect any pattern or order beyond chance.
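In practice, a simple random sample can be drawn with standard library tools. The sketch below uses a hypothetical population of 1,000 numbered members; the seed is set only so the example is reproducible:

```python
import random

population = list(range(1, 1001))   # hypothetical population of 1,000 member IDs
random.seed(42)                     # seeded only to make the example reproducible

# Simple random sample: every member has an equal chance of being selected,
# and no member can be selected twice
sample = random.sample(population, k=50)
```

Because `random.sample` draws without replacement, the 50 selected IDs are all distinct, and no pattern beyond chance determines who is chosen.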

Reliability : the extent to which a research method yields consistent results.  If the observational or measurement instrument is reliable, then administering it to similar groups should yield similar results. Reliability is a prerequisite for validity: if a data collection approach is unreliable, then it cannot produce trustworthy results.

Rigor:   degree to which research methods are carefully designed and carried out.

Sample :  the subset of a population that is actually studied. Researchers often try to select a sample that is believed to be representative of the behaviors or other qualities (race, ethnicity, gender) of the people to whom results will be generalized.  This video will help you understand different types of sampling and the goals in sampling.

Sampling Error : the degree to which the results from the sample deviate from those that would be obtained from the entire population.  This can be a result of random error in the selection of participants and any corresponding reduction in reliability that arises as a result of that error.

Standard Deviation : a measure  used to quantify how much variation or dispersion there is in a set of data values.  A low standard deviation means that the data points tend to be close to the mean; a high standard deviation means the data points are spread out over a wider range of values and further from the mean.
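The calculation can be illustrated with Python's standard library (the scores below are made up for illustration):

```python
import statistics

scores = [4, 8, 6, 5, 3, 7, 6, 9]   # hypothetical test scores
mean = statistics.mean(scores)       # 6.0
sd = statistics.pstdev(scores)       # population standard deviation, ~1.87
```

Note that `statistics.stdev` would instead give the sample standard deviation, which divides by n − 1 rather than n.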

Statistical Analysis :  application of statistical methods and theory to the collection, presentation, and interpretation of numerical data.

Statistical Significance:  in any experiment or observation that involves using a sample from a population, statistical significance refers to the likelihood that a behavior or set of behaviors is due to chance.  The probability that the null hypothesis can be rejected at a predetermined significance level [0.05 or 0.01].
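One way to estimate how likely an observed difference is under chance alone is a permutation test. The sketch below, using made-up scores for two small groups, counts how often randomly reshuffling the group labels produces a difference at least as large as the observed one:

```python
import random
import statistics

group_a = [5.1, 4.9, 6.2, 5.8, 5.5]   # hypothetical scores, group A
group_b = [4.2, 4.0, 4.8, 4.5, 4.1]   # hypothetical scores, group B
observed = statistics.mean(group_a) - statistics.mean(group_b)

random.seed(0)                         # seeded for reproducibility
pooled = group_a + group_b
extreme = 0
n_shuffles = 10_000
for _ in range(n_shuffles):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:5]) - statistics.mean(pooled[5:])
    if diff >= observed:               # difference at least as large as observed
        extreme += 1

p_value = extreme / n_shuffles
# A p_value below the predetermined significance level (e.g. 0.05) would lead
# us to reject the null hypothesis that the difference is due to chance alone.
```

Here the two groups barely overlap, so very few random shuffles reproduce the observed difference and the estimated p-value falls well below 0.05.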

Theory:   a general explanation about a specific behavior or set of events that is based on known principles and serves to organize related events in a meaningful way. A theory is not as specific as a hypothesis.

Triangulation:   a multi-method or pluralistic approach to research that uses a variety of methods to collect data from different viewpoints.  This produces a complex and multi-faceted data set that helps in checking the validity of findings.

Unit of Analysis:   the thing being observed, analyzed, and for which data are collected in the form of variables.

Validity :  the degree to which a study accurately represents and assesses the specific phenomenon a researcher wants to measure.  This brief video will help you to understand the difference between validity and reliability in research.

Variable:  any characteristic or trait that can vary from person to person.  Race, gender, education level, hair color, age, political beliefs, religion are all examples of variables.  This link will take you to a website that provides more detail on variables.


Research Methods Knowledge Base


Language Of Research

Learning about research is a lot like learning about anything else. To start, you need to learn the jargon people use, the big controversies they fight over, and the different factions that define the major players. We’ll start by considering five really big multi-syllable words that researchers sometimes use to describe what they do. We’ll only do a few for now, to give you an idea of just how esoteric the discussion can get (but not enough to cause you to give up in total despair). We can then take on some of the major issues in research like the types of questions we can ask in a project, the role of time in research , and the different types of relationships we can estimate. Then we have to consider defining some basic terms like variable , hypothesis , data , and unit of analysis . If you’re like me, you hate learning vocabulary, so we’ll quickly move along to consideration of two of the major fallacies of research, just to give you an idea of how wrong even researchers can be if they’re not careful (of course, there’s always a certain probability that they’ll be wrong even if they’re extremely careful).


BMC Med Res Methodol

A tutorial on methodological studies: the what, when, how and why

Lawrence Mbuagbaw

1 Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON Canada

2 Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario L8N 4A6 Canada

3 Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Daeria O. Lawson

Livia Puljak

4 Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia

David B. Allison

5 Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN 47405 USA

Lehana Thabane

6 Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON Canada

7 Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON Canada

8 Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON Canada

Associated Data

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 – 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 – 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Fig. 1  Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 – 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been at the cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
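A stratified draw of the kind described above can be sketched as follows. The article IDs and group sizes are invented for illustration; drawing an equal-sized random sample from each stratum keeps the smaller group from being underrepresented:

```python
import random

# Hypothetical sampling frame: article IDs grouped by review type
frame = {
    "cochrane":     [f"C{i:04d}" for i in range(1, 301)],    # 300 reviews
    "non_cochrane": [f"N{i:04d}" for i in range(1, 2001)],   # 2,000 reviews
}

random.seed(7)  # seeded only so the draw is reproducible
# Stratified sampling: an equal-sized simple random sample from each stratum
samples = {group: random.sample(ids, k=100) for group, ids in frame.items()}
```

A simple random sample of the pooled frame would instead yield roughly 13 Cochrane reviews per 100 articles, which may leave too few units for a between-group comparison.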

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting, the use of post hoc methodologies to embellish results, and to help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
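As a sketch of a confidence-interval-based justification, the snippet below applies the standard normal-approximation formula for estimating a proportion within a given margin of error (this is a common approach, not necessarily the exact method El Dib et al. used):

```python
import math

def articles_needed(p_expected: float, margin: float, z: float = 1.96) -> int:
    """Number of articles required to estimate a proportion within
    +/- margin at ~95% confidence (z = 1.96), by normal approximation."""
    return math.ceil(z**2 * p_expected * (1 - p_expected) / margin**2)

# Most conservative case: expected proportion of 50%, +/- 5 percentage points
n = articles_needed(p_expected=0.5, margin=0.05)   # → 385
```

Assuming p = 0.5 maximizes p(1 − p) and therefore gives the largest, safest sample size when the true proportion is unknown.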

Q: What should I call my study?

A: Other terms which have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative, as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “ What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
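As a rough illustration of why clustering matters, a naive standard error can be inflated by the design effect DEFF = 1 + (m − 1) × ICC for equal-sized clusters. This is a simplified approximation, not the generalized estimating equations approach used by Kosa et al., and the numbers below are invented for the sketch:

```python
import math

def cluster_adjusted_se(naive_se, avg_cluster_size, icc):
    """Inflate a naive standard error by the design effect
    DEFF = 1 + (m - 1) * ICC (equal-sized clusters assumed;
    regression-based approaches such as GEE are more flexible)."""
    deff = 1 + (avg_cluster_size - 1) * icc
    return naive_se * math.sqrt(deff)

# Articles clustered within journals averaging 10 articles each,
# with modest within-journal correlation (ICC = 0.05):
print(round(cluster_adjusted_se(naive_se=0.03, avg_cluster_size=10, icc=0.05), 4))  # 0.0361
```

Even a modest intra-cluster correlation widens the standard error noticeably, which is exactly why ignoring clustering produces unduly narrow confidence intervals.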

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and should therefore be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimizing extraction errors, although, much like systematic reviews, this area will likely see rapid advances as machine learning and natural language processing technologies emerge to support researchers with screening and data extraction [ 47 , 48 ]. Either way, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

  • Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
  • Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
  • Source of funding and conflicts of interest: Some studies have found that funded studies are reported better [ 56 , 57 ], while others have found no difference [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrants assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were better reported [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ].
  • Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 – 67 ].
  • Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
  • Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
  • Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
  • Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, restricting the sample by JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. In the absence of formal guidance, however, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection bias and confounding. Investigators must ensure that the methods used to select articles do not make the sample differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing only high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
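The adjustment idea can be illustrated with a simple stratified comparison on entirely hypothetical extraction data (the variable names and values below are invented for the sketch; real analyses typically use regression models):

```python
from collections import defaultdict

# Entirely hypothetical extraction records:
# (funded, journal_endorses_guideline, adequately_reported)
records = [
    (True,  True,  True),  (True,  True,  True),  (True,  True,  False),
    (True,  False, True),  (False, True,  True),  (False, True,  False),
    (False, False, False), (False, False, False), (False, False, True),
    (False, False, False),
]

def reporting_rate(rows):
    """Proportion of records judged adequately reported."""
    return sum(r[2] for r in rows) / len(rows)

# Crude comparison (ignores the potential confounder):
funded = [r for r in records if r[0]]
unfunded = [r for r in records if not r[0]]
print(round(reporting_rate(funded), 2), round(reporting_rate(unfunded), 2))  # 0.75 0.33

# Stratified comparison (holds journal endorsement fixed):
strata = defaultdict(list)
for r in records:
    strata[r[1]].append(r)
for endorses, rows in sorted(strata.items()):
    f = [r for r in rows if r[0]]
    u = [r for r in rows if not r[0]]
    print(endorses, round(reporting_rate(f), 2), round(reporting_rate(u), 2))
```

In this toy example the crude comparison mixes the effect of funding with that of journal endorsement, while the stratum-specific comparisons hold the confounder fixed.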

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be made explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. To support generalizability, investigators must ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

  • 1. What is the aim?

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

In addition to the previously mentioned types of methodological studies, there may exist other types not captured here.

  • 2. What is the design?

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].

  • 3. What is the sampling strategy?

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.
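Both random and systematic sampling can be sketched in a few lines. The article identifiers below are hypothetical, and the fixed seed simply makes the selection reproducible and transparent:

```python
import random

# Hypothetical sampling frame of 500 eligible article identifiers:
articles = [f"PMID-{i:04d}" for i in range(1, 501)]

# Simple random sample: every article has the same selection probability.
rng = random.Random(2020)  # fixed seed => reproducible, auditable selection
random_sample = rng.sample(articles, 50)

# Systematic sample: every k-th article after a random start.
k = len(articles) // 50           # sampling interval (here, 10)
start = rng.randrange(k)          # random start in [0, k)
systematic_sample = articles[start::k]

print(len(random_sample), len(systematic_sample))  # 50 50
```

Systematic sampling selects every k-th record after a random start, which is easy to audit but can be biased if the sampling frame is ordered in a way that interacts with the interval.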

  • 4. What is the unit of analysis?

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .


A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Acknowledgements

Authors’ contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Funding

This work did not receive any dedicated funding.

Availability of data and materials

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Grad Coach

How To Write The Methodology Chapter

The what, why & how explained simply (with examples).

By: Jenna Crossley (PhD) | Reviewed By: Dr. Eunice Rautenbach | September 2021 (Updated April 2023)

So, you’ve pinned down your research topic and undertaken a review of the literature – now it’s time to write up the methodology section of your dissertation, thesis or research paper. But what exactly is the methodology chapter all about – and how do you go about writing one? In this post, we’ll unpack the topic, step by step.

Overview: The Methodology Chapter

  • The purpose of the methodology chapter
  • Why you need to craft this chapter (really) well
  • How to write and structure the chapter
  • Methodology chapter example
  • Essential takeaways

What (exactly) is the methodology chapter?

The methodology chapter is where you outline the philosophical underpinnings of your research and the specific methodological choices you’ve made. The point of the methodology chapter is to tell the reader exactly how you designed your study and, just as importantly, why you did it this way.

Importantly, this chapter should comprehensively describe and justify all the methodological choices you made in your study. For example, the approach you took to your research (i.e., qualitative, quantitative or mixed), who you collected data from (i.e., your sampling strategy), how you collected your data and, of course, how you analysed it. If that sounds a little intimidating, don’t worry – we’ll explain all these methodological choices in this post.


Why is the methodology chapter important?

The methodology chapter plays two important roles in your dissertation or thesis:

Firstly, it demonstrates your understanding of research theory, which is what earns you marks. A flawed research design or methodology would mean flawed results. So, this chapter is vital as it allows you to show the marker that you know what you’re doing and that your results are credible.

Secondly, the methodology chapter is what helps to make your study replicable. In other words, it allows other researchers to undertake your study using the same methodological approach, and compare their findings to yours. This is very important within academic research, as each study builds on previous studies.

The methodology chapter is also important in that it allows you to identify and discuss any methodological issues or problems you encountered (i.e., research limitations), and to explain how you mitigated their impacts. Every research project has its limitations, so it’s important to acknowledge these openly and highlight your study’s value despite them. Doing so demonstrates your understanding of research design, which will earn you marks. We’ll discuss limitations in a bit more detail later in this post, so stay tuned!


How to write up the methodology chapter

First off, it’s worth noting that the exact structure and contents of the methodology chapter will vary depending on the field of research (e.g., humanities, chemistry or engineering) as well as the university. So, be sure to always check the guidelines provided by your institution and, if possible, review past dissertations from your university. Here we’re going to discuss a generic structure for a methodology chapter typically found in the sciences.

Before you start writing, it’s always a good idea to draw up a rough outline to guide your writing. Don’t just start writing without knowing what you’ll discuss where. If you do, you’ll likely end up with a disjointed, ill-flowing narrative. You’ll then waste a lot of time rewriting in an attempt to stitch all the pieces together. Do yourself a favour and start with the end in mind.

Section 1 – Introduction

As with all chapters in your dissertation or thesis, the methodology chapter should have a brief introduction. In this section, you should remind your readers what the focus of your study is, especially the research aims. As we’ve discussed many times on the blog, your methodology needs to align with your research aims, objectives and research questions. Therefore, it’s useful to frontload this component to remind the reader (and yourself!) what you’re trying to achieve.

In this section, you can also briefly mention how you’ll structure the chapter. This will help orient the reader and provide a bit of a roadmap so that they know what to expect. You don’t need a lot of detail here – just a brief outline will do.

The intro provides a roadmap to your methodology chapter

Section 2 – The Methodology

The next section of your chapter is where you’ll present the actual methodology. In this section, you need to detail and justify the key methodological choices you’ve made in a logical, intuitive fashion. Importantly, this is the heart of your methodology chapter, so you need to get specific – don’t hold back on the details here. This is not one of those “less is more” situations.

Let’s take a look at the most common components you’ll likely need to cover. 

Methodological Choice #1 – Research Philosophy

Research philosophy refers to the underlying beliefs (i.e., the worldview) regarding how data about a phenomenon should be gathered, analysed and used. The research philosophy will serve as the core of your study and underpin all of the other research design choices, so it’s critically important that you understand which philosophy you’ll adopt and why you made that choice. If you’re not clear on this, take the time to get clarity before you make any further methodological choices.

While several research philosophies exist, two commonly adopted ones are positivism and interpretivism. These two sit roughly on opposite sides of the research philosophy spectrum.

Positivism states that the researcher can observe reality objectively and that there is only one reality, which exists independently of the observer. As a consequence, it is quite commonly the underlying research philosophy in quantitative studies and is oftentimes the assumed philosophy in the physical sciences.

Contrasted with this, interpretivism, which is often the underlying research philosophy in qualitative studies, assumes that the researcher plays a role in observing the world around them and that reality is unique to each observer. In other words, reality is observed subjectively.

These are just two philosophies (there are many more), but they demonstrate significantly different approaches to research and have a significant impact on all the methodological choices. Therefore, it’s vital that you clearly outline and justify your research philosophy at the beginning of your methodology chapter, as it sets the scene for everything that follows.

The research philosophy is at the core of the methodology chapter

Methodological Choice #2 – Research Type

The next thing you would typically discuss in your methodology section is the research type. The starting point for this is to indicate whether the research you conducted is inductive or deductive.

Inductive research takes a bottom-up approach, where the researcher begins with specific observations or data and then draws general conclusions or theories from those observations. Therefore, these studies tend to be exploratory in approach.

Conversely, deductive research takes a top-down approach, where the researcher starts with a theory or hypothesis and then tests it using specific observations or data. Therefore, these studies tend to be confirmatory in approach.

Related to this, you’ll need to indicate whether your study adopts a qualitative, quantitative or mixed approach. As we’ve mentioned, there’s a strong link between this choice and your research philosophy, so make sure that your choices are tightly aligned. When you write this section up, remember to clearly justify your choices, as they form the foundation of your study.

Methodological Choice #3 – Research Strategy

Next, you’ll need to discuss your research strategy (also referred to as a research design). This methodological choice refers to the broader strategy in terms of how you’ll conduct your research, based on the aims of your study.

Several research strategies exist, including experimental research, case studies, ethnography, grounded theory, action research, and phenomenology. Let’s take a look at two of these, experimental and ethnographic, to see how they contrast.

Experimental research makes use of the scientific method, where one group is the control group (in which no variables are manipulated) and another is the experimental group (in which a specific variable is manipulated). This type of research is undertaken under strict conditions in a controlled, artificial environment (e.g., a laboratory). By having firm control over the environment, experimental research typically allows the researcher to establish causation between variables. Therefore, it can be a good choice if you have research aims that involve identifying causal relationships.

Ethnographic research, on the other hand, involves observing and capturing the experiences and perceptions of participants in their natural environment (for example, at home or in the office) – in other words, in an uncontrolled environment. Naturally, this means that this research strategy would be far less suitable if your research aims involve identifying causation, but it would be very valuable if you’re looking to explore and examine a group culture, for example.

As you can see, the right research strategy will depend largely on your research aims and research questions – in other words, what you’re trying to figure out. Therefore, as with every other methodological choice, it’s essential to justify why you chose the research strategy you did.

Methodological Choice #4 – Time Horizon

The next thing you’ll need to detail in your methodology chapter is the time horizon. There are two options here: cross-sectional and longitudinal. In other words, whether the data for your study were all collected at one point in time (cross-sectional) or at multiple points in time (longitudinal).

The choice you make here depends again on your research aims, objectives and research questions. If, for example, you aim to assess how a specific group of people’s perspectives regarding a topic change over time , you’d likely adopt a longitudinal time horizon.

Another important factor to consider is simply whether you have the time necessary to adopt a longitudinal approach (which could involve collecting data over multiple months or even years). Oftentimes, the time pressures of your degree program will force your hand into adopting a cross-sectional time horizon, so keep this in mind.

Methodological Choice #5 – Sampling Strategy

Next, you’ll need to discuss your sampling strategy. There are two main categories of sampling: probability sampling and non-probability sampling.

Probability sampling involves a random (and therefore representative) selection of participants from a population, whereas non-probability sampling entails selecting participants in a non-random (and therefore non-representative) manner – for example, selecting participants based on ease of access (this is called a convenience sample).
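To make the distinction concrete, here’s a minimal sketch in Python (the participant list and sample sizes are invented for illustration): a probability sample is drawn with a random number generator, while a convenience sample simply takes whoever is easiest to reach.

```python
import random

# Hypothetical sampling frame: every member of the target population.
population = [f"participant_{i}" for i in range(1, 101)]

# Probability sampling: every member has an equal chance of selection.
random.seed(42)  # fixed seed so the sketch is reproducible
probability_sample = random.sample(population, k=10)

# Non-probability (convenience) sampling: take the first 10 who are
# easiest to reach, with no chance procedure involved.
convenience_sample = population[:10]

print(len(probability_sample))  # 10
print(convenience_sample[0])    # participant_1
```

The convenience sample systematically over-represents whoever happens to be at the front of the list – which is exactly why it is non-representative.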

The right sampling approach depends largely on what you’re trying to achieve in your study – specifically, whether you’re trying to develop findings that are generalisable to a population or not. Practicalities and resource constraints also play a large role here, as it can oftentimes be challenging to gain access to a truly random sample. In the video below, we explore some of the most common sampling strategies.

Methodological Choice #6 – Data Collection Method

Next up, you’ll need to explain how you’ll go about collecting the necessary data for your study. Your data collection method (or methods) will depend on the type of data that you plan to collect – in other words, qualitative or quantitative data.

Typically, quantitative research relies on surveys, data generated by lab equipment, analytics software or existing datasets. Qualitative research, on the other hand, often makes use of collection methods such as interviews, focus groups, participant observations, and ethnography.

So, as you can see, there is a tight link between this section and the design choices you outlined in earlier sections. Strong alignment between these sections, as well as your research aims and questions, is therefore very important.

Methodological Choice #7 – Data Analysis Methods/Techniques

The final major methodological choice that you need to address is that of analysis techniques. In other words, how you’ll go about analysing your data once you’ve collected it. Here it’s important to be very specific about your analysis methods and/or techniques – don’t leave any room for interpretation. Also, as with all choices in this chapter, you need to justify each choice you make.

What exactly you discuss here will depend largely on the type of study you’re conducting (i.e., qualitative, quantitative, or mixed methods). For qualitative studies, common analysis methods include content analysis, thematic analysis and discourse analysis. In the video below, we explain each of these in plain language.

For quantitative studies, you’ll almost always make use of descriptive statistics, and in many cases, you’ll also use inferential statistical techniques (e.g., correlation and regression analysis). In the video below, we unpack some of the core concepts involved in descriptive and inferential statistics.

In this section of your methodology chapter, it’s also important to discuss how you prepared your data for analysis, and what software you used (if any). For example, quantitative data will often require some initial preparation, such as removing duplicates or incomplete responses. Similarly, qualitative data will often require transcription and perhaps even translation. As always, remember to state both what you did and why you did it.
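As a simple illustration of that kind of preparation (the response records and field names here are entirely hypothetical), dropping duplicates and incomplete responses can be as straightforward as:

```python
# Hypothetical raw survey responses; field names are invented.
raw_responses = [
    {"id": 1, "age": 34, "answer": "agree"},
    {"id": 2, "age": None, "answer": "disagree"},  # incomplete: missing age
    {"id": 3, "age": 29, "answer": "neutral"},
    {"id": 1, "age": 34, "answer": "agree"},       # duplicate of id 1
]

seen_ids = set()
clean = []
for row in raw_responses:
    if row["id"] in seen_ids:
        continue  # drop duplicate submissions
    if any(value is None for value in row.values()):
        continue  # drop incomplete responses
    seen_ids.add(row["id"])
    clean.append(row)

print(len(clean))  # 2 rows survive the cleaning
```

Whatever tool you actually use (a spreadsheet, R, SPSS, Python), the point is the same: state what was removed, and why.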

Section 3 – The Methodological Limitations

With the key methodological choices outlined and justified, the next step is to discuss the limitations of your design. No research methodology is perfect – there will always be trade-offs between the “ideal” methodology and what’s practical and viable, given your constraints. Therefore, this section of your methodology chapter is where you’ll discuss the trade-offs you had to make, and why these were justified given the context.

Methodological limitations can vary greatly from study to study, ranging from common issues such as time and budget constraints to issues of sample or selection bias. For example, you may find that you didn’t manage to draw in enough respondents to achieve the desired sample size (and therefore, statistically significant results), or your sample may be skewed heavily towards a certain demographic, thereby negatively impacting representativeness.

In this section, it’s important to be critical of the shortcomings of your study. There’s no use trying to hide them (your marker will be aware of them regardless). By being critical, you’ll demonstrate to your marker that you have a strong understanding of research theory, so don’t be shy here. At the same time, don’t beat your study to death. State the limitations, explain why they were justified, how you mitigated their impact as far as possible, and how your study still provides value despite these limitations.

Section 4 – Concluding Summary

Finally, it’s time to wrap up the methodology chapter with a brief concluding summary. In this section, you’ll want to concisely summarise what you’ve presented in the chapter. Here, it can be a good idea to use a figure to summarise the key decisions, especially if your university recommends using a specific model (for example, Saunders’ Research Onion).

Importantly, this section needs to be brief – a paragraph or two maximum (it’s a summary, after all). Also, make sure that when you write up your concluding summary, you include only what you’ve already discussed in your chapter; don’t add any new information.


Methodology Chapter Example

In the video below, we walk you through an example of a high-quality research methodology chapter from a dissertation. We also unpack our free methodology chapter template so that you can see how best to structure your chapter.

Wrapping Up

And there you have it – the methodology chapter in a nutshell. As we’ve mentioned, the exact contents and structure of this chapter can vary between universities, so be sure to check in with your institution before you start writing. If possible, try to find dissertations or theses from former students of your specific degree program – this will give you a strong indication of the expectations and norms when it comes to the methodology chapter (and all the other chapters!).

Also, remember the golden rule of the methodology chapter – justify every choice! Make sure that you clearly explain the “why” for every “what”, and reference credible methodology textbooks or academic sources to back up your justifications.

If you need a helping hand with your research methodology (or any other component of your research), be sure to check out our private coaching service, where we hold your hand through every step of the research journey. Until next time, good luck!




Pocket Glossary for Commonly Used Research Terms

  • By: Michael J. Holosko & Bruce A. Thyer
  • Publisher: SAGE Publications, Inc.
  • Publication year: 2011
  • Online pub date: April 18, 2013
  • Discipline: Anthropology
  • Methods: Survey research , Evaluation , Measurement
  • DOI: https://doi.org/10.4135/9781452269917
  • Keywords: journals, population
  • Print ISBN: 9781412995139
  • Online ISBN: 9781452269917

This book contains over 1500 research and statistical terms, written in jargon-free, easy-to-understand terminology to help students understand difficult concepts in their research courses. This pocket guide is an ideal supplement to the many discipline-specific texts on research methods and statistics.

Front Matter

  • Acknowledgments
  • Chapter 1 | Glossary of Research Terms
  • Chapter 2 | Commonly Used Acronyms, Symbols, Abbreviations, and Terms Found in Research and Evaluation Studies
  • Chapter 3 | Commonly Used Statistical Terms
  • Chapter 4 | Some Helpful Research and Evaluation Websites
  • Chapter 5 | Core Disciplinary Journals in Selected Social and Behavioral Sciences

Back Matter

  • About the Authors

Child Care and Early Education Research Connections

Research Glossary

The research glossary defines terms used in conducting social science and policy research, for example those describing methods, measurements, statistical procedures, and other aspects of research; the child care glossary defines terms used to describe aspects of child care and early education practice and policy.

Psychology Research Jargon You Should Know

Common Terms and Concepts Used to Investigate the Mind

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Emily is a board-certified science editor who has worked with top digital publishing brands like Voices for Biodiversity, Study.com, GoodTherapy, Vox, and Verywell.


At a Glance

There are specific terms and phrases that researchers use when they do psychological studies. Knowing the definitions of these words will help you understand how psychology research is done.

Psychologists use different research methods to investigate the human mind and behavior. The words that they use to design and report their studies can be very complex. You’ll find it easier to understand research papers if you know some key terms used in psychology.

Here are some examples of common psychology jargon, terminologies, vocabulary, and concepts that every psychology student should know.

Applied Research


Applied research  focuses on solving practical problems. Rather than focusing on developing or investigating theoretical questions, researchers are interested in finding solutions to problems that impact daily life.

For example, researchers could investigate which treatments for a mental health condition led to the best outcomes. This research is directly applicable and can help people improve their day-to-day lives.

Baseline

A baseline is the starting point of a study. It gives the researchers something to compare the results of their study to.

For example, if researchers are going to be testing a new therapy, they need to establish a baseline for the participants before the study starts.

They might do this by asking participants to fill out a questionnaire about their symptoms before they do the treatment. At the end of the study, the participants can be asked about their symptoms again.

By comparing participant responses before and after the study, the researchers could get an idea of whether the treatment was effective.

Basic Research

Basic research  explores theories to expand the scientific knowledge base on a psychological subject. While this type of research contributes to our understanding of the human mind and behavior, it does not necessarily help solve immediate practical problems as applied research does.

Case Study

A case study is an in-depth narrative about a single person or group. In a case study, nearly every aspect of the subject’s life and history is analyzed to look for patterns and causes for their behavior.

Causation

Causation is when there is a clear, direct cause-and-effect relationship between two things. When one variable in a study is changed, it changes another.

To establish causation between variables, researchers have to make sure that no other variable (a confounding variable) could have been responsible for the change they observed.

Correlational Research

Correlational studies are used to look for relationships between variables. There are three possible results of a correlational study: a positive correlation, a negative correlation, and no correlation.

The correlation coefficient is a measure of correlation strength and can range from -1.00 to +1.00.
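As a minimal sketch of what that number captures (the data points are invented), Pearson’s r can be computed directly from its textbook definition; a perfectly increasing pair of variables gives +1.00 and a perfectly decreasing pair gives -1.00.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from its definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r([1, 2, 3], [2, 4, 6]), 6))  # 1.0 (perfect positive)
print(round(pearson_r([1, 2, 3], [6, 4, 2]), 6))  # -1.0 (perfect negative)
```

Unrelated variables give an r near zero, which is the “no correlation” outcome described above.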

Cross-Sectional Research

Cross-sectional research  is often used in  developmental psychology  but can also be used in social science, education, and other branches of science.

This type of study examines a "cross-section" of a population (for example, first through fifth graders) at a single point in time.

Demand Characteristic

A  demand characteristic  is a cue that makes participants aware of what the experimenter expects to find out in the study or how participants are expected to behave.

These cues can be subtle—for example, a survey that asks participants if a particular piece of music they’re listening to during the study made them feel calm could suggest to them that the researchers expect the music to be calming.

To avoid these cues, the question could have been “How did the music make you feel?” which would not have led the participants toward making a specific response.   

Dependent Variable

The  dependent variable  is the variable that is being measured in an experiment. Researchers alter one or more independent variables and then measure the dependent variable(s) to see if there are any changes.

Double-Blind Study

A  double-blind study  is a type of study in which neither the participants nor the experimenters know who is receiving a particular treatment.

This type of research design eliminates the possibility that the researchers will give subtle clues about what they expect to find or influence the behavior of the participants.

Experimental Method

The  experimental method  involves manipulating one variable to determine if changes in one variable cause changes in another.

This method helps researchers determine if cause-and-effect relationships exist between different variables.

Hawthorne Effect

The  Hawthorne effect  refers to the tendency of some people to work harder and perform better when they are taking part in a research experiment.

This can affect the study because people may change their behavior in response to the attention they are receiving from researchers rather than the manipulation of  independent variables .

Independent Variable

The  independent variable  is the variable of interest that the researcher varies systematically. The researcher will check to see if changing the independent variable changes the dependent variable(s) in any way.

This process helps determine if there is a cause-and-effect relationship between two different variables.

Longitudinal Research

Longitudinal research examines long-term effects and relationships. These studies take place over an extended time, be it several weeks, years, or even decades.

For example, researchers can start with a study population of children and continue to observe them throughout their lives. 

Naturalistic Observation

Naturalistic observation  involves observing subjects in their natural environments. This type of research can be used when doing lab research would be unrealistic, cost-prohibitive, or have an unwanted effect on the subject's behavior.

An example of naturalistic observation in a psychology study would be observing how children play at recess rather than putting them in an exam room, where they might be too uncomfortable, scared, or distracted to play as they normally do.

Placebo

A placebo is also called a “sham treatment” because it does not have any effect. For example, when researchers are testing medications, they may give some patients real pills and other patients placebo pills that have no medicine in them (often, they are just sugar pills).

In some studies, the researchers know which group got a placebo and which did not. Other times, they don’t know which group got the placebo until after the study.

Sometimes, people taking a placebo report feeling better even though there was no real medicine in the pill they took. This is called the “placebo effect.”

Punishment

A punishment is a negative consequence for a behavior that is intended to make the behavior less likely to happen again. Punishment is meant to act as a deterrent for unwanted behavior.

A classic example of a punishment is sending a child to “time out” when they misbehave.

Qualitative Research

Qualitative research does not focus on numbers and data. Instead, it focuses on the experiences, behaviors, attitudes, ways of thinking, and belief systems that people have.

An example of qualitative research in psychology could be doing a series of interviews with people who went through a traumatic event to better understand how they feel about what happened.

Quasi-Experiment

A quasi-experiment involves changing an independent variable without randomly assigning participants to groups. Often, this design is necessary because people are already in groups that can’t be changed.

For example, let’s say researchers want to compare smartphone use between people who identify as women and people who identify as men. People are already in groups according to their gender identity, so they cannot be randomly assigned to one or the other. 

Random Assignment

Random assignment  is when researchers use chance procedures to group participants in a study. This means that every participant has the same opportunity to be assigned to any group, rather than the researchers assigning patients to specific groups.

For example, researchers may randomly assign participants to groups in a study that compares two treatments. 
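A minimal sketch of such a chance procedure (the participant IDs are invented): shuffle the pool, then split it, so every participant has the same chance of landing in either group.

```python
import random

# Hypothetical participant pool; IDs are invented.
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.seed(7)                # fixed seed so the sketch is reproducible
random.shuffle(participants)  # chance procedure: every ordering equally likely

half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

print(len(treatment_group), len(control_group))  # 4 4
```

Because assignment is decided by chance alone, pre-existing differences between participants tend to balance out across the two groups.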

Regression Analysis

Regression analysis is a statistical method researchers can use to look for relationships between dependent and independent variables in their study.

For example, researchers may use regression analysis to consider the variables that can affect a person’s risk for depression, such as their socioeconomic background, relationships, and genetics.
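As a rough sketch of the simplest case (the variables and data are invented), simple linear regression fits a line y = slope * x + intercept by ordinary least squares:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Invented data: a social-support score vs. a depression-scale score.
support = [1, 2, 3, 4, 5]
symptom = [9, 7, 6, 4, 3]

slope, intercept = linear_fit(support, symptom)
print(round(slope, 2), round(intercept, 2))  # -1.5 10.3
```

The negative slope here would be read as: higher support scores are associated with lower symptom scores, holding everything else aside (a real analysis would include multiple predictors).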

Reinforcement

Reinforcement is a consequence of a desired behavior that is meant to increase the chances that the behavior will continue or happen again.

A common example of reinforcement is rewarding a child for doing their chores.

Reliability

Reliability  means the consistency of a measure. A test is considered reliable if researchers get the same result repeatedly.

For example, if a test is designed to measure a trait (such as  introversion ), then each time the test is administered to a subject, the results should be approximately the same.

It is impossible to calculate reliability exactly. Instead, researchers use several different methods to estimate reliability.

Replication

Replication  means the repetition of a research study, usually with different situations and subjects.

Replication helps researchers determine if the basic findings of the original study can also apply (be generalized) to other participants and circumstances.

Selective Attrition

In psychology experiments,  selective attrition  is the tendency of some people to be more likely to drop out of a study than others. This tendency can threaten the validity of a psychological experiment if it skews the results of the study.

Stimulus

A stimulus is something that causes a response, like an object, event, or circumstance.

One of the most well-known examples of a stimulus in psychology is the bell in Pavlov’s experiment with dogs.

The dogs began to associate the sound of the bell ringing (stimulus) with the arrival of their food. Eventually, they would salivate in anticipation of the meal every time they heard the bell.

Validity

Validity is the extent to which a test measures what it claims to measure. A test needs to be valid to ensure the results can be accurately applied and interpreted.

APA. Applied research .

APA. Baseline .

APA. Basic research .

APA. Case study .

APA. Causation .

APA. Confounding variable .

APA. Correlational research .

Heath W.  Psychology Research Methods: Connecting Research to Students’ Lives . Cambridge University Press.

APA. Cross-sectional design .

APA. Demand characteristics .

APA. Dependent variable .

APA. Double-blind .

APA. Experimental method .

APA. Hawthorne effect.

APA. Independent variable .

APA. Longitudinal design .

APA. Naturalistic observation .

APA. Placebo .

APA. Punishment .

APA. Qualitative research .

APA. Quasi-experimental design .

APA. Random assignment .

APA. Regression analysis .

APA. Reinforcement .

APA. Reliability .

APA. Replication .

APA. Attrition .

APA. Stimulus .

APA. Validity .

By Kendra Cherry, MSEd


Pocket Glossary for Commonly Used Research Terms

  • Michael J. Holosko - University of Georgia, USA
  • Bruce A. Thyer - Florida State University, USA


"The text is quite comprehensive and I am happy to see that both quantitative and qualitative terms are included. The definitions are generally easy to understand and clear."

"It is a clear and easy to read description of terms and concepts with an alphabetic order so it’s easy to find what one is looking for. It is general so it applied to many disciplines."

"Comprehensive, organized and well-written."

"It brings key terminology together all in one place. It’s simple to read, not too long, and easy to grasp."

"It is simple and clear, covers a large variety of research methodology terms and the internet addresses are terrific."

"Extensive, easy to understand, easy to use…"

My students NEED this book!

A gem - fantastic resource for students and early career academics

But may use as an adjunct in the research methods course. Concise and accessible but a little too brief to use as a main text

So easy for students to follow!

  • The Glossary of Research Terms chapter contains definitions and descriptions of over 1500 research terms. It is the heart of the book and features crisp and clear statements as to the meaning of each entry.
  • Commonly Used Statistical Terms: A dedicated chapter offers a brief synopsis of commonly used statistical terms, ranging from the alpha coefficient to the Z-test. Questions related to the meaning of a statistical term will likely be found here.
  • Commonly Used Acronyms, Symbols, Abbreviations and Terms Found in Research and Evaluation Studies are organized into categories so readers can easily find them: for example, research and evaluation studies, statistics, Internet, and U.S. federal government.
  • The Core Disciplinary Journals in Selected Social and Behavioral Sciences chapter lists the most highly ranked journals in over a dozen social and behavioral sciences and professional disciplines, as determined by their impact factors published in the Journal Citation Reports database. URLs are also provided for each journal so that the reader can locate each journal's individual webpage.

Sample Materials & Chapters

Chapter Two: Commonly Used Acronyms, Symbols, Abbreviations, and Terms Found in

Chapter Three: Commonly Used Statistical Terms



Research Methods Key Term Glossary

Last updated 22 Mar 2021


This key term glossary provides brief definitions for the core terms and concepts covered in Research Methods for A Level Psychology.

Don't forget to also make full use of our research methods study notes and revision quizzes to support your studies and exam revision.

Aim

The researcher’s area of interest – what they are looking at (e.g. to investigate helping behaviour).

Bar chart

A graph that shows the data in the form of categories (e.g. behaviours observed) that the researcher wishes to compare.

Behavioural categories

Key behaviours, or collections of behaviours, that the researcher conducting the observation will pay attention to and record.

Case study

In-depth investigation of a single person, group or event, where data are gathered from a variety of sources and by using several different methods (e.g. observations & interviews).

Closed questions

Questions where there are fixed choices of responses (e.g. yes/no). They generate quantitative data.

Co-variables

The variables investigated in a correlation

Concurrent validity

Comparing a new test with another test of the same thing to see if they produce similar results. If they do then the new test has concurrent validity

Confidentiality

Unless agreed beforehand, participants have the right to expect that all data collected during a research study will remain confidential and anonymous.

Confounding variable

An extraneous variable that varies systematically with the IV so we cannot be sure of the true source of the change to the DV

Content analysis

Technique used to analyse qualitative data which involves coding the written data into categories – converting qualitative data into quantitative data.

Control group

A group that is treated normally and gives us a measure of how people behave when they are not exposed to the experimental treatment (e.g. allowed to sleep normally).

Controlled observation

An observation study where the researchers control some variables - often takes place in laboratory setting

Correlational analysis

A mathematical technique where the researcher looks to see whether scores for two covariables are related

Counterbalancing

A way of trying to control for order effects in a repeated measures design, e.g. half the participants do condition A followed by B and the other half do B followed by A
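As a minimal sketch of the idea above (the participant IDs are invented purely for illustration), counterbalancing an A/B repeated measures design can be expressed as:

```python
# Counterbalancing sketch: participant IDs are hypothetical, for illustration only.
participants = ["p1", "p2", "p3", "p4", "p5", "p6"]

# Half the participants do condition A then B; the other half do B then A,
# so practice or boredom effects are spread evenly across both conditions.
half = len(participants) // 2
orders = {p: ("A", "B") for p in participants[:half]}
orders.update({p: ("B", "A") for p in participants[half:]})

print(orders["p1"])  # ('A', 'B')
print(orders["p6"])  # ('B', 'A')
```

Each condition is therefore experienced first by exactly half the sample, which is what cancels out order effects.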

Covert observation

Also known as an undisclosed observation as the participants do not know their behaviour is being observed

Critical value

The value that a test statistic must reach in order for the result to be judged significant and the research hypothesis to be accepted.

Debriefing

After completing the research, the true aim is revealed to the participant. The aim of debriefing is to return the person to the state they were in before they took part.

Deception

Involves misleading participants about the purpose of a study.

Demand characteristics

Occur when participants try to make sense of the research situation they are in and try to guess the purpose of the research or try to present themselves in a good way.

Dependent variable

The variable that is measured to tell you the outcome.

Descriptive statistics

Analysis of data that helps describe, show or summarize data in a meaningful way

Directional hypothesis

A one-tailed hypothesis that states the direction of the difference or relationship (e.g. boys are more helpful than girls).

Dispersion measure

A dispersion measure shows how a set of data is spread out, examples are the range and the standard deviation

Double blind control

Participants are not told the true purpose of the research and the experimenter is also blind to at least some aspects of the research design.

Ecological validity

The extent to which the findings of a research study are able to be generalized to real-life settings

Ethical guidelines

These are provided by the BPS - they are the ‘rules’ by which all psychologists should operate, including those carrying out research.

Ethical issues

There are 3 main ethical issues that occur in psychological research – deception, lack of informed consent and lack of protection of participants.

Evaluation apprehension

Participants’ behaviour is distorted as they fear being judged by observers

Event sampling

A target behaviour is identified and the observer records it every time it occurs

Experimental group

The group that received the experimental treatment (e.g. sleep deprivation)

External validity

Whether it is possible to generalise the results beyond the experimental setting.

Extraneous variable

Variables that, if not controlled, may affect the DV and give the false impression that an IV has produced changes when it hasn’t.

Face validity

Simple way of assessing whether a test measures what it claims to measure which is concerned with face value – e.g. does an IQ test look like it tests intelligence.

Field experiment

An experiment that takes place in a natural setting where the experimenter manipulates the IV and measures the DV

Histogram

A graph that is used for continuous data (e.g. test scores). There should be no space between the bars, because the data is continuous.

Hypothesis

This is a formal statement or prediction of what the researcher expects to find. It needs to be testable.

Independent groups design

An experimental design where each participant takes part in only one condition of the IV.

Independent variable

The variable that the experimenter manipulates (changes).

Inferential statistics

Inferential statistics are ways of analyzing data using statistical tests that allow the researcher to make conclusions about whether a hypothesis was supported by the results.

Informed consent

Psychologists should ensure that all participants are helped to understand fully all aspects of the research before they agree (give consent) to take part

Inter-observer reliability

The extent to which two or more observers are observing and recording behaviour in the same way

Internal validity

In relation to experiments, whether the results were due to the manipulation of the IV rather than other factors such as extraneous variables or demand characteristics.

Interval level data

Data measured in fixed units with equal distance between points on the scale

Investigator effects

These result from the effects of a researcher’s behaviour and characteristics on an investigation.

Laboratory experiment

An experiment that takes place in a controlled environment where the experimenter manipulates the IV and measures the DV

Matched pairs design

An experimental design where pairs of participants are matched on important characteristics and one member allocated to each condition of the IV

Mean

Measure of central tendency calculated by adding all the scores in a set of data together and dividing by the total number of scores.

Measures of central tendency

A measurement of data that indicates where the middle of the information lies e.g. mean, median or mode

Median

Measure of central tendency calculated by arranging scores in a set of data from lowest to highest and finding the middle score.

Meta-analysis

A technique where rather than conducting new research with participants, the researchers examine the results of several studies that have already been conducted

Mode

Measure of central tendency which is the most frequently occurring score in a set of data.
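The three measures of central tendency defined above can be checked with Python's standard statistics module; the scores here are invented purely for illustration:

```python
import statistics

scores = [4, 7, 7, 8, 10, 12, 15]  # hypothetical data, for illustration only

print(statistics.mean(scores))    # add all scores, divide by how many there are: 9
print(statistics.median(scores))  # middle score once the data is sorted: 8
print(statistics.mode(scores))    # most frequently occurring score: 7
```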

Natural experiment

An experiment where the change in the IV already exists rather than being manipulated by the experimenter

Naturalistic observation

An observation study conducted in the environment where the behaviour would normally occur

Negative correlation

A relationship exists between two covariables where as one increases, the other decreases

Nominal level data

Frequency count data that consists of the number of participants falling into categories. (e.g. 7 people passed their driving test first time, 6 didn’t).

Non-directional hypothesis

A two-tailed hypothesis that does not predict the direction of the difference or relationship (e.g. girls and boys are different in terms of helpfulness).

Normal distribution

An arrangement of data that is symmetrical and forms a bell-shaped pattern where the mean, median and mode all fall in the centre at the highest peak.

Observed value

The value that you have obtained from conducting your statistical test

Observer bias

Occurs when the observers know the aims of the study or the hypotheses and allow this knowledge to influence their observations.

Open questions

Questions where there is no fixed response and participants can give any answer they like. They generate qualitative data.

Operationalising variables

This means clearly describing the variables (IV and DV) in terms of how they will be manipulated (IV) or measured (DV).

Opportunity sample

A sampling technique where participants are chosen because they are easily available

Order effects

Order effects can occur in a repeated measures design and refer to how the positioning of tasks influences the outcome, e.g. a practice effect or a boredom effect on the second task.

Ordinal level data

Data that is capable of being put into rank order (e.g. places in a beauty contest, or ratings for attractiveness).

Overt observation

Also known as a disclosed observation, as the participants have given their permission for their behaviour to be observed.

Participant observation

Observation study where the researcher actually joins the group or takes part in the situation they are observing.

Peer review

Before going to publication, a research report is sent to other psychologists who are knowledgeable in the research topic for them to review the study and check for any problems.

Pilot study

A small scale study conducted to ensure the method will work according to plan. If it doesn’t then amendments can be made.

Positive correlation

A relationship exists between two covariables where as one increases, so does the other

Presumptive consent

Asking a group of people from the same target population as the sample whether they would agree to take part in such a study; if they say yes, it is presumed that the sampled participants would also have consented.

Primary data

Information that the researcher has collected him/herself for a specific purpose e.g. data from an experiment or observation

Prior general consent

Before participants are recruited they are asked whether they are prepared to take part in research where they might be deceived about the true purpose

Probability

How likely something is to happen – can be expressed as a number (0.5) or a percentage (50% chance of tossing a coin and getting a head).

Protection of participants

Participants should be protected from physical or mental harm, including stress – the risk of harm must be no greater than that to which they are exposed in everyday life.

Qualitative data

Descriptive information that is expressed in words

Quantitative data

Information that can be measured and written down with numbers.

Quasi experiment

An experiment often conducted in controlled conditions where the IV simply exists so there can be no random allocation to the conditions

Questionnaire

A set of written questions that participants fill in themselves

Random sampling

A sampling technique where everyone in the target population has an equal chance of being selected

Randomisation

Refers to the practice of using chance methods (e.g. flipping a coin) to allocate participants to the conditions of an investigation.

Range

The distance between the lowest and the highest value in a set of scores; a measure of dispersion calculated by subtracting the lowest score from the highest score in a set of data.

Reliability

Whether something is consistent. In the case of a study, whether it is replicable.

Repeated measures design

An experimental design where each participant takes part in both/all conditions of the IV.

Representative sample

A sample that closely matches the target population as a whole in terms of key variables and characteristics.

Retrospective consent

Once the true nature of the research has been revealed, participants should be given the right to withdraw their data if they are not happy.

Right to withdraw

Participants should be aware that they can leave the study at any time, even if they have been paid to take part.

Sample

A group of people drawn from the target population to take part in a research investigation.

Scattergram

Used to plot correlations where each pair of values is plotted against each other to see if there is a relationship between them.

Secondary data

Information that someone else has collected e.g. the work of other psychologists or government statistics

Semi-structured interview

Interview that has some pre-determined questions, but the interviewer can develop others in response to answers given by the participant

Sign test

A statistical test used to analyse the direction of differences of scores between the same or matched pairs of subjects under two experimental conditions.

Significance

If the result of a statistical test is significant it is highly unlikely to have occurred by chance

Single-blind control

Participants are not told the true purpose of the research

Skewed distribution

An arrangement of data that is not symmetrical, as data is clustered to one end of the distribution.

Social desirability bias

Participants’ behaviour is distorted as they modify this in order to be seen in a positive light.

Standard deviation

A measure of the average spread of scores around the mean. The greater the standard deviation, the more spread out the scores are.
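The definition above translates directly into a short calculation; the scores are invented for illustration, and the hand calculation agrees with Python's built-in statistics.pstdev:

```python
import math
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical data, for illustration only

mean = sum(scores) / len(scores)                  # 5.0
squared_devs = [(x - mean) ** 2 for x in scores]  # each score's deviation from the mean, squared
sd = math.sqrt(sum(squared_devs) / len(scores))   # population standard deviation: 2.0

assert sd == statistics.pstdev(scores)  # matches the standard-library result
```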

Standardised instructions

The instructions given to each participant are kept identical – to help prevent experimenter bias.

Standardised procedures

In every step of the research all the participants are treated in exactly the same way and so all have the same experience.

Stratified sample

A sampling technique where groups of participants are selected in proportion to their frequency in the target population

Structured interview

Interview where the questions are fixed and the interviewer reads them out and records the responses

Structured observation

An observation study using predetermined coding scheme to record the participants' behaviour

Systematic sample

A sampling technique where every nth person in a list of the target population is selected

Target population

The group that the researcher draws the sample from and wants to be able to generalise the findings to.

Temporal validity

Refers to how likely it is that the time period when a study was conducted has influenced the findings and whether they can be generalised to other periods in time

Test-retest reliability

Involves presenting the same participants with the same test or questionnaire on two separate occasions and seeing whether there is a positive correlation between the two

Thematic analysis

A method for analysing qualitative data which involves identifying, analysing and reporting patterns within the data

Time sampling

A way of sampling the behaviour that is being observed by recording what happens in a series of fixed time intervals.

Type 1 error

Is a false positive. It is where you accept the alternative/experimental hypothesis when it is false

Type 2 error

Is a false negative. It is where you accept the null hypothesis when it is false

Unstructured interview

Also known as a clinical interview, there are no fixed questions, just general aims, and it is more like a conversation.

Unstructured observation

Observation where there is no checklist, so every behaviour seen is written down in as much detail as possible.

Validity

Whether something is true – whether it measures what it sets out to measure.

Volunteer sample

A sampling technique where participants put themselves forward to take part in research, often by answering an advertisement

© 2002-2024 Tutor2u Limited. Company Reg no: 04489574. VAT reg no 816865400.

What is Research Methodology? Definition, Types, and Examples


Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of the research. Several aspects must be considered before selecting an appropriate research methodology, such as research limitations and ethical concerns that may affect your research.

The research methodology section in a scientific paper describes the different methodological choices made, such as the data collection and analysis methods, and why these choices were selected. The reasons should explain why the methods chosen are the most appropriate to answer the research question. A good research methodology also helps ensure the reliability and validity of the research findings. There are three types of research methodology—quantitative, qualitative, and mixed-method, which can be chosen based on the research objectives.

What is research methodology?

A research methodology describes the techniques and procedures used to identify and analyze information regarding a specific research topic. It is the process by which researchers design their study so that they can achieve their objectives using the selected research instruments. It includes all the important aspects of research, including research design, data collection methods, data analysis methods, and the overall framework within which the research is conducted. While these points can help you understand what research methodology is, you also need to know why it is important to pick the right methodology.

Why is research methodology important?

Having a good research methodology in place has the following advantages: 3

  • Helps other researchers who may want to replicate your research; the explanations will be of benefit to them.
  • You can easily answer any questions about your research if they arise at a later stage.
  • A research methodology provides a framework and guidelines for researchers to clearly define research questions, hypotheses, and objectives.
  • It helps researchers identify the most appropriate research design, sampling technique, and data collection and analysis methods.
  • A sound research methodology helps researchers ensure that their findings are valid and reliable and free from biases and errors.
  • It also helps ensure that ethical guidelines are followed while conducting research.
  • A good research methodology helps researchers in planning their research efficiently, by ensuring optimum usage of their time and resources.


Types of research methodology

There are three types of research methodology based on the type of research and the data required. 1

  • Quantitative research methodology focuses on measuring and testing numerical data. This approach is good for reaching a large number of people in a short amount of time. This type of research helps in testing the causal relationships between variables, making predictions, and generalizing results to wider populations.
  • Qualitative research methodology examines the opinions, behaviors, and experiences of people. It collects and analyzes words and textual data. This research methodology requires fewer participants but is still more time consuming because the time spent per participant is quite large. This method is used in exploratory research where the research problem being investigated is not clearly defined.
  • Mixed-method research methodology uses the characteristics of both quantitative and qualitative research methodologies in the same study. This method allows researchers to validate their findings, verify if the results observed using both methods are complementary, and explain any unexpected results obtained from one method by using the other method.

What are the types of sampling designs in research methodology?

Sampling 4 is an important part of a research methodology and involves selecting a representative sample of the population to conduct the study, making statistical inferences about them, and estimating the characteristics of the whole population based on these inferences. There are two types of sampling designs in research methodology—probability and nonprobability.

  • Probability sampling

In this type of sampling design, a sample is chosen from a larger population using some form of random selection, that is, every member of the population has an equal chance of being selected. The different types of probability sampling are:

  • Systematic —sample members are chosen at regular intervals. It requires selecting a random starting point and a fixed sampling interval that is then applied repeatedly through the list. Because the interval is predefined, it is the least time-consuming method.
  • Stratified —researchers divide the population into smaller groups that don’t overlap but represent the entire population. While sampling, these groups can be organized, and then a sample can be drawn from each group separately.
  • Cluster —the population is divided into clusters based on parameters like age, sex, location, etc., and a random selection of whole clusters is then sampled.
  • Nonprobability sampling

In this type of sampling design, members are selected using nonrandom criteria such as convenience or the researcher’s judgment, so not every member of the population has a chance of being included. The different types of nonprobability sampling are:

  • Convenience —selects participants who are most easily accessible to researchers due to geographical proximity, availability at a particular time, etc.
  • Purposive —participants are selected at the researcher’s discretion. Researchers consider the purpose of the study and the understanding of the target audience.
  • Snowball —already selected participants use their social networks to refer the researcher to other potential participants.
  • Quota —while designing the study, the researchers decide how many people with which characteristics to include as participants. The characteristics help in choosing people most likely to provide insights into the subject.
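As a concrete illustration of systematic sampling described above, the sketch below draws every nth member from a sampling frame after a random start; the frame of participant IDs and the helper function are hypothetical, invented purely for illustration:

```python
import random

# Hypothetical sampling frame of 100 participant IDs (invented for illustration).
population = [f"P{i:03d}" for i in range(1, 101)]

def systematic_sample(frame, sample_size):
    """Systematic sampling: a random starting point, then every nth member."""
    interval = len(frame) // sample_size   # the fixed sampling interval n
    start = random.randrange(interval)     # random start within the first interval
    return frame[start::interval][:sample_size]

sample = systematic_sample(population, 10)
print(len(sample))  # 10 members, evenly spaced through the frame
```

Because the interval is fixed in advance, the only random decision is the starting point, which is why the method is quick to run.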

What are data collection methods?

During research, data are collected using various methods depending on the research methodology being followed and the research methods being undertaken. Both qualitative and quantitative research have different data collection methods, as listed below.

Qualitative research 5

  • One-on-one interviews: Helps the interviewers understand a respondent’s subjective opinion and experience pertaining to a specific topic or event
  • Document study/literature review/record keeping: Researchers’ review of already existing written materials such as archives, annual reports, research articles, guidelines, policy documents, etc.
  • Focus groups: Constructive discussions that usually include a small sample of about 6-10 people and a moderator, to understand the participants’ opinion on a given topic.
  • Qualitative observation : Researchers collect data using their five senses (sight, smell, touch, taste, and hearing).

Quantitative research 6

  • Sampling: The most common type is probability sampling.
  • Interviews: Commonly telephonic or done in-person.
  • Observations: Structured observations are most commonly used in quantitative research. In this method, researchers make observations about specific behaviors of individuals in a structured setting.
  • Document review: Reviewing existing research or documents to collect evidence for supporting the research.
  • Surveys and questionnaires: Can be administered both online and offline depending on the requirement and sample size.


What are data analysis methods?

The data collected using the various methods for qualitative and quantitative research need to be analyzed to generate meaningful conclusions. These data analysis methods 7 also differ between quantitative and qualitative research.

Quantitative research involves a deductive method for data analysis where hypotheses are developed at the beginning of the research and precise measurement is required. The methods include statistical analysis applications to analyze numerical data and are grouped into two categories—descriptive and inferential.

Descriptive analysis is used to describe the basic features of different types of data to present it in a way that ensures the patterns become meaningful. The different types of descriptive analysis methods are:

  • Measures of frequency (count, percent, frequency)
  • Measures of central tendency (mean, median, mode)
  • Measures of dispersion or variation (range, variance, standard deviation)
  • Measure of position (percentile ranks, quartile ranks)

Inferential analysis is used to make predictions about a larger population based on the analysis of the data collected from a smaller population. This analysis is used to study the relationships between different variables. Some commonly used inferential data analysis methods are:

  • Correlation: To understand the relationship between two or more variables.
  • Cross-tabulation: Analyze the relationship between multiple variables.
  • Regression analysis: Study the impact of independent variables on the dependent variable.
  • Frequency tables: To understand the frequency of data.
  • Analysis of variance: To test whether the means of two or more groups differ significantly in an experiment.
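For example, the correlation between two variables listed above can be computed with the Pearson formula (covariance scaled by the spread of each variable); the paired data and the helper function here are invented purely for illustration:

```python
import math

# Hypothetical paired data for two variables (invented for illustration).
hours_revised = [2, 4, 6, 8, 10]
exam_score    = [50, 55, 65, 70, 80]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # joint variation
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))         # spread of x
    sy = math.sqrt(sum((b - my) ** 2 for b in y))         # spread of y
    return cov / (sx * sy)

r = pearson_r(hours_revised, exam_score)
print(round(r, 2))  # close to +1, i.e. a strong positive correlation
```

A value near +1 indicates a strong positive relationship, near -1 a strong negative one, and near 0 no linear relationship.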

Qualitative research involves an inductive method for data analysis where hypotheses are developed after data collection. The methods include:

  • Content analysis: For analyzing documented information from text and images by determining the presence of certain words or concepts in texts.
  • Narrative analysis: For analyzing content obtained from sources such as interviews, field observations, and surveys. The stories and opinions shared by people are used to answer research questions.
  • Discourse analysis: For analyzing interactions with people considering the social context, that is, the lifestyle and environment, under which the interaction occurs.
  • Grounded theory: Involves hypothesis creation by data collection and analysis to explain why a phenomenon occurred.
  • Thematic analysis: To identify important themes or patterns in data and use these to address an issue.

How to choose a research methodology?

Here are some important factors to consider when choosing a research methodology: 8

  • Research objectives, aims, and questions —these would help structure the research design.
  • Review existing literature to identify any gaps in knowledge.
  • Check the statistical requirements —if data-driven or statistical results are needed then quantitative research is the best. If the research questions can be answered based on people’s opinions and perceptions, then qualitative research is most suitable.
  • Sample size —sample size can often determine the feasibility of a research methodology. For a large sample, less effort- and time-intensive methods are appropriate.
  • Constraints —constraints of time, geography, and resources can help define the appropriate methodology.


How to write a research methodology?

A research methodology should include the following components: 3,9

  • Research design —should be selected based on the research question and the data required. Common research designs include experimental, quasi-experimental, correlational, descriptive, and exploratory.
  • Research method —this can be quantitative, qualitative, or mixed-method.
  • Reason for selecting a specific methodology —explain why this methodology is the most suitable to answer your research problem.
  • Research instruments —explain the research instruments you plan to use, mainly referring to the data collection methods such as interviews, surveys, etc. Here as well, a reason should be mentioned for selecting the particular instrument.
  • Sampling —this involves selecting a representative subset of the population being studied.
  • Data collection —involves gathering data using several data collection methods, such as surveys, interviews, etc.
  • Data analysis —describe the data analysis methods you will use once you’ve collected the data.
  • Research limitations —mention any limitations you foresee while conducting your research.
  • Validity and reliability —validity helps identify the accuracy and truthfulness of the findings; reliability refers to the consistency and stability of the results over time and across different conditions.
  • Ethical considerations —research should be conducted ethically. The considerations include obtaining consent from participants, maintaining confidentiality, and addressing conflicts of interest.


Frequently Asked Questions

Q1. What are the key components of research methodology?

A1. A good research methodology has the following key components:

  • Research design
  • Data collection procedures
  • Data analysis methods
  • Ethical considerations

Q2. Why is ethical consideration important in research methodology?

A2. Ethical consideration is important in research methodology to assure readers of the reliability and validity of the study. Researchers must clearly mention the ethical norms and standards followed during the conduct of the research and also mention whether the research has been cleared by an institutional review board. The following 10 points are the important principles related to ethical considerations: 10

  • Participants should not be subjected to harm.
  • Respect for the dignity of participants should be prioritized.
  • Full consent should be obtained from participants before the study.
  • Participants’ privacy should be ensured.
  • Confidentiality of the research data should be ensured.
  • Anonymity of individuals and organizations participating in the research should be maintained.
  • The aims and objectives of the research should not be exaggerated.
  • Affiliations, sources of funding, and any possible conflicts of interest should be declared.
  • Communication in relation to the research should be honest and transparent.
  • Misleading information and biased representation of primary data findings should be avoided.

Q3. What is the difference between methodology and method?

A3. Although the two terms are often confused, research methodology is different from a research method. Research methods are the specific techniques, procedures, and tools used by researchers to collect, analyze, and interpret data, for instance, surveys, questionnaires, and interviews. Research methodology, in contrast, provides a framework for how research is planned, conducted, and analyzed, and it guides researchers in choosing the most appropriate methods for their research.

Research methodology is, thus, an integral part of a research study. It helps ensure that you stay on track to meet your research objectives and answer your research questions using the most appropriate data collection and analysis tools based on your research design.


  • Research methodologies. Pfeiffer Library website. Accessed August 15, 2023. https://library.tiffin.edu/researchmethodologies/whatareresearchmethodologies
  • Types of research methodology. Eduvoice website. Accessed August 16, 2023. https://eduvoice.in/types-research-methodology/
  • The basics of research methodology: A key to quality research. Voxco. Accessed August 16, 2023. https://www.voxco.com/blog/what-is-research-methodology/
  • Sampling methods: Types with examples. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/types-of-sampling-for-social-research/
  • What is qualitative research? Methods, types, approaches, examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-qualitative-research-methods-types-examples/
  • What is quantitative research? Definition, methods, types, and examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-quantitative-research-types-and-examples/
  • Data analysis in research: Types & methods. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/data-analysis-in-research/#Data_analysis_in_qualitative_research
  • Factors to consider while choosing the right research methodology. PhD Monster website. Accessed August 17, 2023. https://www.phdmonster.com/factors-to-consider-while-choosing-the-right-research-methodology/
  • What is research methodology? Research and writing guides. Accessed August 14, 2023. https://paperpile.com/g/what-is-research-methodology/
  • Ethical considerations. Business research methodology website. Accessed August 17, 2023. https://research-methodology.net/research-methodology/ethical-considerations/




Research Methods | Definitions, Types, Examples

Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make.

First, decide how you will collect data. Your methods depend on what type of data you need to answer your research question:

  • Qualitative vs. quantitative: Will your data take the form of words or numbers?
  • Primary vs. secondary: Will you collect original data yourself, or will you use data that has already been collected by someone else?
  • Descriptive vs. experimental: Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyze the data.

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analyzing data
  • Examples of data analysis methods
  • Other interesting articles
  • Frequently asked questions about research methods

Data is the information that you collect for the purposes of answering your research question. The type of data you need depends on the aims of your research.

Qualitative vs. quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data.

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing, collect quantitative data.

You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.

Primary vs. secondary research

Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g., through surveys, observations, and experiments). Secondary research is data that has already been collected by other researchers (e.g., in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.

Descriptive vs. experimental data

In descriptive research, you collect data about your study subject without intervening. The validity of your research will depend on your sampling method.

In experimental research, you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design.

To conduct an experiment, you need to be able to vary your independent variable, precisely measure your dependent variable, and control for confounding variables. If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.
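These requirements can be illustrated in miniature (a toy simulation using Python's standard library; the group sizes, baseline scores, and +8 treatment effect are all invented for illustration). Random assignment spreads confounding variables evenly across groups, so a difference in the measured outcome can be attributed to the varied independent variable:

```python
# Toy simulation of a controlled experiment: randomly assign participants
# to control/treatment groups, vary the independent variable (treatment),
# and compare the measured dependent variable. All numbers are invented.
import random
from statistics import mean

random.seed(7)
participants = list(range(40))
random.shuffle(participants)                 # random assignment controls for confounders
control, treatment = participants[:20], participants[20:]

def measure(pid, treated):
    baseline = random.gauss(50, 5)           # individual variation (a confound-like factor)
    return baseline + (8 if treated else 0)  # treatment adds a true effect of +8

control_scores = [measure(p, treated=False) for p in control]
treatment_scores = [measure(p, treated=True) for p in treatment]

diff = mean(treatment_scores) - mean(control_scores)
print(f"difference in means: {diff:.1f}")
# with enough participants, the difference estimates the true effect (+8)
```

With only 20 participants per group the estimate is noisy, which is why real experiments also report uncertainty alongside the effect estimate.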


Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.

Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
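The quantitative half of that example, tallying the frequency of each response option, takes only a few lines (a sketch with Python's standard library; the responses are invented for illustration):

```python
# Counting how often each answer appears in a set of survey responses —
# a simple quantitative treatment of otherwise word-based data.
# The responses below are invented for illustration.
from collections import Counter

responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]
frequencies = Counter(responses)

print(frequencies.most_common())  # most frequent response option first
# A qualitative analysis would instead read each response in context
# and interpret what respondents meant by it.
```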

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:

  • From open-ended surveys and interviews, literature reviews, case studies, ethnographies, and other sources that use text rather than numbers.
  • Using non-probability sampling methods.

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias.

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that was collected either:

  • During an experiment.
  • Using probability sampling methods.

Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.


If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis
  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
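The student-survey example above amounts to a simple random sample, which can be sketched with Python's standard library (the population here is just hypothetical ID numbers, and the fixed seed keeps the draw reproducible):

```python
# Drawing a simple random sample of 100 students from a larger population.
# Each student is represented by a hypothetical ID number for illustration.
import random

population = list(range(1, 10001))  # 10,000 student IDs
random.seed(42)                     # fixed seed so the sample is reproducible
sample = random.sample(population, k=100)  # each ID drawn at most once

print(len(sample))       # 100
print(len(set(sample)))  # 100 — no student sampled twice
```

Because every student has an equal chance of selection, statistics computed on the sample can be used to test hypotheses about the whole population.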

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyze a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.

In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

Other students also liked:

  • Writing Strong Research Questions | Criteria & Examples
  • What Is a Research Design | Types, Guide & Examples
  • Data Collection | Definition, Methods & Examples

More interesting articles

  • Between-Subjects Design | Examples, Pros, & Cons
  • Cluster Sampling | A Simple Step-by-Step Guide with Examples
  • Confounding Variables | Definition, Examples & Controls
  • Construct Validity | Definition, Types, & Examples
  • Content Analysis | Guide, Methods & Examples
  • Control Groups and Treatment Groups | Uses & Examples
  • Control Variables | What Are They & Why Do They Matter?
  • Correlation vs. Causation | Difference, Designs & Examples
  • Correlational Research | When & How to Use
  • Critical Discourse Analysis | Definition, Guide & Examples
  • Cross-Sectional Study | Definition, Uses & Examples
  • Descriptive Research | Definition, Types, Methods & Examples
  • Ethical Considerations in Research | Types & Examples
  • Explanatory and Response Variables | Definitions & Examples
  • Explanatory Research | Definition, Guide, & Examples
  • Exploratory Research | Definition, Guide, & Examples
  • External Validity | Definition, Types, Threats & Examples
  • Extraneous Variables | Examples, Types & Controls
  • Guide to Experimental Design | Overview, Steps, & Examples
  • How Do You Incorporate an Interview into a Dissertation? | Tips
  • How to Do Thematic Analysis | Step-by-Step Guide & Examples
  • How to Write a Literature Review | Guide, Examples, & Templates
  • How to Write a Strong Hypothesis | Steps & Examples
  • Inclusion and Exclusion Criteria | Examples & Definition
  • Independent vs. Dependent Variables | Definition & Examples
  • Inductive Reasoning | Types, Examples, Explanation
  • Inductive vs. Deductive Research Approach | Steps & Examples
  • Internal Validity in Research | Definition, Threats, & Examples
  • Internal vs. External Validity | Understanding Differences & Threats
  • Longitudinal Study | Definition, Approaches & Examples
  • Mediator vs. Moderator Variables | Differences & Examples
  • Mixed Methods Research | Definition, Guide & Examples
  • Multistage Sampling | Introductory Guide & Examples
  • Naturalistic Observation | Definition, Guide & Examples
  • Operationalization | A Guide with Examples, Pros & Cons
  • Population vs. Sample | Definitions, Differences & Examples
  • Primary Research | Definition, Types, & Examples
  • Qualitative vs. Quantitative Research | Differences, Examples & Methods
  • Quasi-Experimental Design | Definition, Types & Examples
  • Questionnaire Design | Methods, Question Types & Examples
  • Random Assignment in Experiments | Introduction & Examples
  • Random vs. Systematic Error | Definition & Examples
  • Reliability vs. Validity in Research | Difference, Types and Examples
  • Reproducibility vs Replicability | Difference & Examples
  • Reproducibility vs. Replicability | Difference & Examples
  • Sampling Methods | Types, Techniques & Examples
  • Semi-Structured Interview | Definition, Guide & Examples
  • Simple Random Sampling | Definition, Steps & Examples
  • Single, Double, & Triple Blind Study | Definition & Examples
  • Stratified Sampling | Definition, Guide & Examples
  • Structured Interview | Definition, Guide & Examples
  • Survey Research | Definition, Examples & Methods
  • Systematic Review | Definition, Example, & Guide
  • Systematic Sampling | A Step-by-Step Guide with Examples
  • Textual Analysis | Guide, 3 Approaches & Examples
  • The 4 Types of Reliability in Research | Definitions & Examples
  • The 4 Types of Validity in Research | Definitions & Examples
  • Transcribing an Interview | 5 Steps & Transcription Software
  • Triangulation in Research | Guide, Types, Examples
  • Types of Interviews in Research | Guide & Examples
  • Types of Research Designs Compared | Guide & Examples
  • Types of Variables in Research & Statistics | Examples
  • Unstructured Interview | Definition, Guide & Examples
  • What Is a Case Study? | Definition, Examples & Methods
  • What Is a Case-Control Study? | Definition & Examples
  • What Is a Cohort Study? | Definition & Examples
  • What Is a Conceptual Framework? | Tips & Examples
  • What Is a Controlled Experiment? | Definitions & Examples
  • What Is a Double-Barreled Question?
  • What Is a Focus Group? | Step-by-Step Guide & Examples
  • What Is a Likert Scale? | Guide & Examples
  • What Is a Prospective Cohort Study? | Definition & Examples
  • What Is a Retrospective Cohort Study? | Definition & Examples
  • What Is Action Research? | Definition & Examples
  • What Is an Observational Study? | Guide & Examples
  • What Is Concurrent Validity? | Definition & Examples
  • What Is Content Validity? | Definition & Examples
  • What Is Convenience Sampling? | Definition & Examples
  • What Is Convergent Validity? | Definition & Examples
  • What Is Criterion Validity? | Definition & Examples
  • What Is Data Cleansing? | Definition, Guide & Examples
  • What Is Deductive Reasoning? | Explanation & Examples
  • What Is Discriminant Validity? | Definition & Example
  • What Is Ecological Validity? | Definition & Examples
  • What Is Ethnography? | Definition, Guide & Examples
  • What Is Face Validity? | Guide, Definition & Examples
  • What Is Non-Probability Sampling? | Types & Examples
  • What Is Participant Observation? | Definition & Examples
  • What Is Peer Review? | Types & Examples
  • What Is Predictive Validity? | Examples & Definition
  • What Is Probability Sampling? | Types & Examples
  • What Is Purposive Sampling? | Definition & Examples
  • What Is Qualitative Observation? | Definition & Examples
  • What Is Qualitative Research? | Methods & Examples
  • What Is Quantitative Observation? | Definition & Examples
  • What Is Quantitative Research? | Definition, Uses & Methods


Understanding Method vs. Methodology: A Comprehensive Guide

  • Author Survey Point Team
  • Published September 19, 2023

In the world of research and academia, the terms “method” and “methodology” are often used interchangeably. However, they represent distinct concepts with crucial differences. Understanding these differences is essential for anyone involved in research, as they can significantly impact the quality and credibility of your work. In this article, we will delve into the definitions of method and methodology, explore how to write them effectively, and highlight the key dissimilarities between the two.


What Is a Method?

Definition and Purpose

A method in research refers to a systematic and structured approach used to gather and analyze data. It is essentially the step-by-step process you follow to answer your research question. The purpose of a method is to provide a clear and replicable way of conducting your study.

Types of Methods

There are various types of methods in research, including qualitative, quantitative, and mixed methods. Each type has its strengths and weaknesses, making it crucial to choose the most suitable one for your research.

What Is a Methodology?

Definition and Significance

Methodology, on the other hand, encompasses a broader framework that outlines the theoretical underpinnings of your research. It defines the overall approach and strategy you employ to conduct your study. A robust methodology adds credibility to your research.

Components of a Methodology

A well-developed methodology includes sections on research design, data collection, and data analysis. These components work together to ensure your research is systematic and rigorous.

How to Write an Effective Method

Choosing the Right Approach

Selecting the appropriate method for your research is crucial. Consider your research question, the type of data you need, and the resources available. Justify your choice clearly in your research proposal or paper.

Detailed Description

When writing your method, provide a detailed step-by-step description of how you conducted your study. Include information about participants, materials, procedures, and data analysis techniques.


How to Develop a Robust Methodology

Research Design

Your methodology should begin with a clear research design. Describe the overall structure of your study, including its scope and objectives. Explain how your research fits into the broader academic context.

Data Collection

Detail the methods you used to collect data. Discuss any ethical considerations, sampling techniques, and data sources. Transparency is key to a credible methodology.

Differences Between Method and Methodology

Scope and Purpose

The primary difference lies in their scope and purpose. A method focuses on the specific steps to gather and analyze data, while a methodology outlines the broader research framework.

Level of Detail

Methods are more detailed and specific, providing instructions for data collection and analysis. Methodologies are more abstract and theoretical, emphasizing the overall approach.

When to Use Each

Research Context

Choose a method when you need a practical approach to collect and analyze data. Opt for a methodology when you want to establish the theoretical foundation of your research.

Research Goals

Consider your research goals. If you aim to answer a specific research question, a method is appropriate. If you want to contribute to a larger field of study, a methodology is essential.

Tips for Effective Writing

Clarity and Precision

Ensure your writing is clear and precise. Use straightforward language and avoid jargon. Your readers should be able to replicate your research based on your method or understand your research’s theoretical framework with your methodology.

Consistency and Coherence

Maintain consistency and coherence throughout your method or methodology section. Use a logical structure and connect ideas seamlessly. This enhances the readability and credibility of your work.

In summary, while method and methodology are closely related, they serve distinct purposes in research. Methods are the practical tools for data collection and analysis, while methodologies provide the theoretical underpinning of your research. Both are essential for conducting rigorous and credible research.

What is the key difference between a method and a methodology?

The primary difference lies in their scope and purpose. A method is a specific set of steps for data collection and analysis, while a methodology outlines the broader research framework.

When should I use a method in my research?

You should use a method when you need a practical approach to gather and analyze data for a specific research question.

What should be included in a methodology section?

A methodology section should include information on research design, data collection methods, and data analysis techniques, providing the theoretical foundation for your research.

Can I use both a method and a methodology in my research?

Yes, it is common to use both. Methods are employed for data collection and analysis, while methodologies establish the overall research framework.

How can I ensure the credibility of my research?

To ensure credibility, choose appropriate methods and develop a robust methodology. Additionally, provide clear and detailed descriptions in your research paper to enable replication.


I. What is Jargon?

Jargon is the specific type of language used by a particular group or profession. Jargon (pronounced jär-gən) can be used to describe correctly used technical language in a positive way. Or, it can describe language which is overly technical, obscure, and pretentious in a negative way.

II. Examples of Jargon

There is a wide variety of jargon, as each specific career or area of study has its own set of vocabulary that is shared between those who work within the profession or field. Here are a few common examples of jargon:

A common dictum in allergy practice is that the patient’s medical history is the primary diagnostic test. Laboratory studies, including skin and in vitro tests for specific immunoglobulin E (IgE) antibodies, have relevance only when correlated with the patient’s medical history. Furthermore, treatment should always be directed toward current symptomatology and not merely toward the results of specific allergy tests.

This excerpt from a PubMed research paper is a prime example of medical jargon. In plain English, a dictum is a generally accepted truth, the laboratory is the lab, and symptomatology is simply a patient’s set of symptoms.

I acknowledge receipt of your letter dated the 2nd of April. The purpose of my suggestion that my client purchases an area of land from yourself is that this can be done right up to your clearly defined boundary in which case notwithstanding that the plan is primarily for identification purposes on the ground the position of the boundary would be clearly ascertainable this in our opinion would overcome the existing problem.

This is an example of legal jargon, taken from a clause within a commercial lease schedule. In plain English, it states that a letter was received on April 2, concerning exactly which plot of land a client hopes to purchase.

This man was involuntarily un-domiciled.

Whereas the previous two examples concerned technical and acceptable jargon, this third phrase is an example of unwanted, unnecessary jargon: jargon in the negative sense. Here, “involuntarily undomiciled” is a jargon-addled term which allows someone to avoid saying the less attractive phrase “homeless.”

III. The Importance of Using Jargon

Jargon has both positive and negative connotations. On one hand, jargon is necessary and very important: various specialized fields such as medicine, technology, and law require the use of jargon to explain complicated ideas and concepts. On the other hand, jargon is sometimes used for doublespeak, or purposely obscure language used to avoid harsh truths or to manipulate those ignorant of its true meaning. An example of doublespeak is “collateral damage,” a phrase used by the military to describe people who have been unintentionally or accidentally wounded or killed, often civilian casualties. The phrase “collateral damage” sounds a lot nicer than the reality of “innocent person killed.”

IV. Examples of Jargon in Literature

Often, literary writers make use of jargon in order to create realistic situations. A well-written fictional doctor will use medical lingo, just as a medical writer will use medical jargon in a creative nonfiction piece about the profession. Below are a few examples of jargon in literature.

The poisonous molecules of benzene arrived in the bone marrow in a crescendo. The foreign chemical surged with the blood and was carried between the narrow spicules of supporting bone into the farthest reaches of the delicate tissue. It was like a frenzied horde of barbarians descending into Rome. And the result was equally as disastrous. The complicated nature of the marrow, designed to make most of the cellular content of the blood, succumbed to the invaders.

This excerpt from Robin Cook’s medical thriller called Fever makes use of medical jargon like “molecules of benzene,” “spicules of bone,” and “cellular content of blood” but writes of such topics in a literary fashion, comparing the spread of benzene to a horde of barbarians invading Rome.

The worst scenario would be for Bruiser to get indicted and arrested and put on trial. That process would take at least a year. He’d still be able to work and operate his office. I think. They can’t disbar him until he’s convicted.

In John Grisham’s legal thriller, legal jargon is used by those working in law. In plain English, “being indicted” is being formally accused of a crime and “being disbarred” is being prevented from practicing law as a failed lawyer.

As these examples show, the use of jargon creates a richer narrative landscape which realistically represents how certain professionals communicate amongst one another within their selected field of work and study.

V. Examples of Jargon in Pop Culture

Just like literature, pop culture uses jargon to accurately represent real life. Here are a few examples of jargon in pop culture:

In “Mission Statement,” Weird Al Yankovic mocks business jargon with jargon-addled lyrics which make fun of business English:

We must all efficiently Operationalize our strategies Invest in world-class technology And leverage our core competencies In order to holistically administrate Exceptional synergy

In Legally Blonde, Elle Woods’ admissions essay to Harvard Law presents the blonde beauty queen attempting to use legal jargon with “I object!” expressing disdain for cat-callers.

I feel comfortable using legal jargon in everyday life: I object!


VI. Related Terms

Just like jargon, slang is a specialized vocabulary used by a certain group. The similarities end there. Unlike jargon, slang is not used by professionals and is, in fact, avoided by them. Slang is particularly informal language typically used in everyday speech rather than writing. Because slang is based on popularity and the present, it is constantly changing and evolving with social trends and groups. Here is an example of slang versus jargon:

Whoa, that’s sick!

The slang phrase “sick” has a much different meaning than an illness when used by skaters. Rather, it means that something is cool or appealing.

The patient is ill.

In this example of medical jargon, a patient is described as ill rather than more common colloquial phrases like “sick” or “feeling under the weather.”

Lingo is often used in place of both slang and jargon. The reason is this: lingo refers to a specific type of language used by a specific group. In other words, lingo encompasses both slang and jargon. “What’s the lingo?” could be used to casually ask what the jargon is or to ask what the slang phrase is in a certain situation.

VII. Conclusion

Jargon is professional language used by a specific group of people. When used to confuse or mislead, jargon is considered a negative thing, but it is acceptable when used within a specific profession or area of study. From the toilet salesman to the gardener to the mathematician, jargon is used in a wide variety of professions.



Computer Science > Computer Vision and Pattern Recognition

Title: MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training

Abstract: In this work, we discuss building performant Multimodal Large Language Models (MLLMs). In particular, we study the importance of various architecture components and data choices. Through careful and comprehensive ablations of the image encoder, the vision language connector, and various pre-training data choices, we identified several crucial design lessons. For example, we demonstrate that for large-scale multimodal pre-training, using a careful mix of image-caption, interleaved image-text, and text-only data is crucial for achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks, compared to other published pre-training results. Further, we show that the image encoder together with image resolution and the image token count has substantial impact, while the vision-language connector design is of comparatively negligible importance. By scaling up the presented recipe, we build MM1, a family of multimodal models up to 30B parameters, including both dense models and mixture-of-experts (MoE) variants, that are SOTA in pre-training metrics and achieve competitive performance after supervised fine-tuning on a range of established multimodal benchmarks. Thanks to large-scale pre-training, MM1 enjoys appealing properties such as enhanced in-context learning, and multi-image reasoning, enabling few-shot chain-of-thought prompting.


Guidelines on the responsible use of generative AI in research developed by the European Research Area Forum

The Commission, together with the European Research Area countries and stakeholders, has put forward a set of guidelines to support the European research community in their responsible use of generative artificial intelligence (AI).

With the rapid spread of the use of this technology in all domains including in science, these recommendations address key opportunities and challenges. Building on the principles of research integrity, they offer guidance to researchers, research organisations, and research funders to ensure a coherent approach across Europe. The principles framing the new guidelines are based on existing frameworks such as the European Code of Conduct for Research Integrity and the guidelines on trustworthy AI.

AI is transforming research, making scientific work more efficient and accelerating discovery. While generative AI tools offer speed and convenience in producing text, images and code, researchers must also be mindful of the technology’s limitations, including plagiarism, revealing sensitive information, or inherent biases in the models.

Margrethe Vestager, Executive Vice-President, said:

We are committed to innovation of AI and innovation with AI. And we will do our best to build a thriving AI ecosystem in Europe. With these guidelines, we encourage the research community to use generative AI to help supercharge European science and its applications to the benefit of society and for all of us.

Iliana Ivanova, Commissioner for Innovation, Research, Culture, Education and Youth, said:

Generative AI can hugely boost research, but its use demands transparency and responsibility. These guidelines aim to uphold scientific integrity and preserve public trust in science amidst rapid technological advancements. I call on the scientific community to join us in turning these guidelines into the reference for European research.

Key takeaways from the guidelines include:

  • Researchers should refrain from using generative AI tools in sensitive activities such as peer reviews or evaluations, and should use generative AI with respect for privacy, confidentiality, and intellectual property rights.
  • Research organisations should facilitate the responsible use of generative AI and actively monitor how these tools are developed and used within their organisations.
  • Funding organisations should support applicants in using generative AI transparently.

As generative AI is constantly evolving, these guidelines will be updated with regular feedback from the scientific community and stakeholders.

Share your views

The widespread uptake of generative AI has triggered numerous institutional responses. While the EU is taking the global lead with its AI Act regulating AI products, many academic institutions and organisations across Europe have been developing guidelines on the use of generative AI. The goal of the ERA Forum representatives, including Member States, Horizon Europe associated countries, and other research and innovation stakeholders, was to develop guidance that could bring consistency across countries and research organisations.

More information

Artificial Intelligence in Science


University of South Florida


Two study leaders speak to participants in a computer lab

USF research reveals language barriers limit effectiveness of cybersecurity resources

  • April 1, 2024

Research and Innovation

By: John Dudley , University Communications & Marketing

The idea for Fawn Ngo’s latest research came from a television interview.

Ngo, a University of South Florida criminologist, had spoken with a Vietnamese language network in California about her interest in better understanding how people become victims of cybercrime.

Afterward, she began receiving phone calls from viewers recounting their own experiences of victimization.


Fawn Ngo, associate professor in the USF College of Behavioral and Community Sciences

“Some of the stories were unfortunate and heartbreaking,” said Ngo, an associate professor in the USF College of Behavioral and Community Sciences. “They made me wonder about the availability and accessibility of cybersecurity information and resources for non-English speakers. Upon investigating further, I discovered that such information and resources were either limited or nonexistent.”

The result is what’s believed to be the first study to explore the links among demographic characteristics, cyber hygiene practices and cyber victimization using a sample of limited English proficiency internet users.

Ngo is the lead author of an article, “Cyber Hygiene and Cyber Victimization Among Limited English Proficiency (LEP) Internet Users: A Mixed-Method Study,” which was just published in the journal Victims & Offenders. The article’s co-authors are Katherine Holman, a USF graduate student and former Georgia state prosecutor, and Anurag Agarwal, professor of information systems, analytics and supply chain at Florida Gulf Coast University.

Their research, which focused on Spanish and Vietnamese speakers, led to two closely connected main takeaways:

  • LEP internet users share the same concern about cyber threats and the same desire for online safety as any other individual. However, they are constrained by a lack of culturally and linguistically appropriate resources, which also hampers accurate collection of cyber victimization data among vulnerable populations.
  • Online guidance that provides the most effective educational tools and reporting forms is only available in English. The most notable example is the website for the Internet Crime Complaint Center, which serves as the FBI’s primary apparatus for combatting cybercrime.

As a result, the study showed that many well-intentioned LEP users still engage in such risky online behaviors as using unsecured networks and sharing passwords. For example, only 29 percent of the study’s focus group participants avoided using public Wi-Fi over the previous 12 months, and only 17 percent said they had antivirus software installed on their digital devices.

Previous research cited in Ngo’s paper has shown that underserved populations exhibit poorer cybersecurity knowledge and outcomes, most commonly in the form of computer viruses and hacked accounts, including social media accounts. Often, that stems from a lack of awareness and understanding rather than from disinterest, Ngo said.

“According to cybersecurity experts, humans are the weakest link in the chain of cybersecurity,” Ngo said. “If we want to secure our digital borders, we must ensure that every member in society, regardless of their language skills, is well-informed about the risks inherent in the cyber world.”

The study’s findings point to a need for providing cyber hygiene information and resources in multiple formats, including visual aids and audio guides, to accommodate diverse literacy levels within LEP communities, Ngo said. She added that further research is needed to address the current security gap and ensure equitable access to cybersecurity resources for all internet users.

In the meantime, Ngo is working to create a website that will include cybersecurity information and resources in different languages and a link to report victimization.

“It’s my hope that cybersecurity information and resources will become as readily accessible in other languages as other vital information, such as information related to health and safety,” Ngo said. “I also want LEP victims to be included in national data and statistics on cybercrime and their experiences accurately represented and addressed in cybersecurity initiatives.” 



  • Today, we’re introducing SceneScript, a novel method for reconstructing environments and representing the layout of physical spaces.
  • SceneScript was trained in simulation using the Aria Synthetic Environments dataset, which is available for academic use.

Imagine a pair of stylish, lightweight glasses that combined contextualized AI with a display that could seamlessly give you access to real-time information when you need it and proactively help you as you go about your day. In order for such a pair of augmented reality (AR) glasses to become reality, the system must be able to understand the layout of your physical environment and how the world is shaped in 3D. That understanding would let AR glasses tailor content to you and your individual context, like seamlessly blending a digital overlay with your physical space or giving you turn-by-turn directions to help you navigate unfamiliar locations.

However, building these 3D scene representations is a complex task. Current MR headsets like Meta Quest 3 create a virtual representation of physical spaces based on raw visual data from cameras or 3D sensors. This raw data is converted into a series of shapes that describe distinct features of the environment, like walls, ceilings, and doors. Typically, these systems rely on pre-defined rules to convert the raw data into shapes. Yet that heuristic approach can often lead to errors, especially in spaces with unique or irregular geometries.

Introducing SceneScript

Today, Reality Labs Research is announcing SceneScript, a novel method of generating scene layouts and representing scenes using language.


Rather than using hard-coded rules to convert raw visual data into an approximation of a room’s architectural elements, SceneScript is trained to directly infer a room’s geometry using end-to-end machine learning.

This results in a representation of physical scenes which is compact, reducing memory requirements to only a few bytes; complete, resulting in crisp geometry, similar to scalable vector graphics; and importantly, interpretable, meaning that we can easily read and edit those representations.
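To make the idea of a compact, interpretable scene language concrete, here is a small sketch that parses a handful of scene commands into structured data. The command names and parameters below (`make_wall`, `make_door`, `a_x`, and so on) are hypothetical illustrations, not SceneScript's actual syntax.

```python
# Hypothetical scene-language commands (illustrative only; not SceneScript's real syntax).
scene = [
    "make_wall, a_x=0.0, a_y=0.0, b_x=4.0, b_y=0.0, height=2.5",
    "make_wall, a_x=4.0, a_y=0.0, b_x=4.0, b_y=3.0, height=2.5",
    "make_door, wall_id=0, position_x=1.2, width=0.9",
]

def parse_command(line):
    """Split one command line into its name and a dict of numeric parameters."""
    name, *params = [part.strip() for part in line.split(",")]
    values = {}
    for param in params:
        key, value = param.split("=")
        values[key] = float(value)
    return name, values

for line in scene:
    name, params = parse_command(line)
    print(name, params)
```

Because each command is just a short line of text, the whole scene fits in a few dozen bytes, and a person can read or edit it directly, which is the sense in which such a representation is compact and interpretable.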

How is SceneScript trained?

Large language models (LLMs) like Llama operate using a technique called next token prediction, in which the AI model predicts the next word in a sentence based on the words that came before it. For example, if you typed the words, “The cat sat on the...,” the model would predict that the next word is likely to be “mat” or “floor.”
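As a toy illustration of next token prediction, the sketch below builds a bigram model from a tiny invented corpus and predicts the most frequent follower of a given word. Real LLMs use learned neural networks over huge corpora; this counting model only shows the "predict the next token from what came before" idea.

```python
# A minimal sketch of next-token prediction using a toy bigram (word-pair) model.
# The corpus and its counts are invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the floor .".split()

# Count how often each token follows each preceding token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent token observed after `token`, or None if unseen."""
    following = bigrams[token]
    return following.most_common(1)[0][0] if following else None

print(predict_next("sat"))  # "on" — the only word that ever follows "sat" here
```

SceneScript applies the same prediction loop, but its vocabulary consists of architectural tokens rather than general words.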


SceneScript leverages the same concept of next token prediction used by LLMs. However, instead of predicting a general language token, the SceneScript model predicts the next architectural token, such as ‘wall’ or ‘door.’

By giving the network a large amount of training data, the SceneScript model learns how to encode visual data into a fundamental representation of the scene, which it can then decode into language that describes the room layout. This allows SceneScript to interpret and reconstruct complex environments from visual data and create text descriptions that effectively describe the structure of the scenes that it analyzes.

However, the team required a substantial amount of data to train the network and teach it how physical spaces are typically laid out—and they needed to ensure they were preserving privacy.

This presented a unique challenge.

Training SceneScript in simulation

While LLMs rely on vast amounts of training data that typically comes from a range of publicly available text sources on the web, no such repository of information yet exists for physical spaces at the scale needed for training an end-to-end model. So the Reality Labs Research team had to find another solution.

Instead of relying on data from physical environments, the SceneScript team created a synthetic dataset of indoor environments, called Aria Synthetic Environments. This dataset comprises 100,000 completely unique interior environments, each described using the SceneScript language and paired with a simulated video walking through each scene.

The video rendered through each scene is simulated using the same sensor characteristics as Project Aria, Reality Labs Research’s glasses for accelerating AI and ML research. This approach allows the SceneScript model to be completely trained in simulation, under privacy-preserving conditions. The model can then be validated using physical-world footage from Project Aria glasses, confirming the model’s ability to generalize to actual environments.

Last year, we made the Aria Synthetic Environments dataset available to academic researchers, which we hope will help accelerate public research within this exciting area of study.

Extending SceneScript to describe objects, states, and complex geometry

Another of SceneScript’s strengths is its extensibility.

Simply by adding a few additional parameters to scene language that describes doors in the Aria Synthetic Environments dataset, the network can be trained to accurately predict the degree to which doors are open or closed in physical environments.

Additionally, by adding new features to the architectural language, it’s possible to accurately predict the location of objects and—further still—decompose those objects into their constituent parts.

For example, a sofa could be represented within the SceneScript language as a set of geometric shapes including the cushions, legs, and arms. This level of detail could eventually be used by designers to create AR content that is truly customized to a wide range of physical environments.

Accelerating AR, pushing LLMs forward, and advancing the state of the art in AI and ML research

SceneScript could unlock key use cases for both MR headsets and future AR glasses, like generating the maps needed to provide step-by-step navigation for people who are visually impaired, as demonstrated by Carnegie Mellon University in 2022.

SceneScript also gives LLMs the vocabulary necessary to reason about physical spaces. This could ultimately unlock the potential of next-generation digital assistants, providing them with the physical-world context necessary to answer complex spatial queries. For example, with the ability to reason about physical spaces, we could pose questions to a chat assistant like, “Will this desk fit in my bedroom?” or, “How many pots of paint would it take to paint this room?” Rather than having to find your tape measure, jot down measurements, and do your best to estimate the answer with some back-of-the-napkin math, a chat assistant with access to SceneScript could arrive at the answer in mere fractions of a second.

We believe SceneScript represents a significant milestone on the path to true AR glasses that will bridge the physical and digital worlds. As we dive deeper into this potential at Reality Labs Research, we’re thrilled at the prospect of how this pioneering approach will help shape the future of AI and ML research.

Learn more about SceneScript here.


