Research Methods - Surveys Pt1 - Survey Design

Survey Research: Understanding Data Collection Methods

Overview of Survey Research

  • Surveys are a common method for collecting data from large populations, often conducted online, by telephone, or in person at events.
  • The advantage of online and telephone surveys is the ability to sample a representative population through methods like simple random sampling or systematic sampling.

Self-Report Data in Surveys

  • Surveys primarily rely on self-report data, although they can also include behavioral measurements (e.g., response times).
  • They can collect both quantitative and qualitative data; examples include numerical age responses or Likert scale questions assessing feelings.

Types of Survey Questions

  • Common question formats include ratio scales (e.g., age), interval/ordinal scales (e.g., depression levels), and open-ended questions allowing detailed responses.
  • Misconceptions exist that surveys are only correlational; they can be designed experimentally by manipulating independent variables and randomly assigning participants to conditions.

Importance of Wording in Survey Design

  • The wording of survey questions significantly influences responses; poorly phrased questions can lead to biased results. A historical example illustrates how different wordings about campaign finance led to drastically different public opinions.
  • This phenomenon is known as the framing effect, where presenting options differently alters decision-making outcomes. For instance, emphasizing risks versus gains affects choices made by respondents.

Constructing Effective Survey Questions

  • Researchers must avoid double-barreled questions that ask multiple things simultaneously, which can confuse respondents and obscure interpretation of results. An example includes ambiguous questions about outlawing certain foods that could lead to unclear answers.
  • Leading questions should also be avoided as they assume certain facts that may not be true, limiting the validity of the responses collected. An example highlights how assumptions embedded in a question can skew results negatively.

Understanding Effective Survey Question Design

The Importance of Clear Questions

  • Double negative questions can confuse survey takers, leading to uninterpretable answers. It's crucial to avoid such constructions to ensure clarity.
  • Even if a double negative has one clear meaning, respondents may still misinterpret it. Simplifying language increases the likelihood of valid responses.

BRUSO Acronym for Questionnaire Construction

  • The acronym BRUSO outlines five qualities for effective survey questions: Brief, Relevant, Unambiguous, Specific, and Objective.

1. Brief

  • Questions should be concise; for example, "Do you currently have a pet?" is preferable over lengthy alternatives that complicate understanding.

2. Relevant

  • Irrelevant questions can lead to survey fatigue and affect how participants respond to important queries.

3. Unambiguous

  • Questions must be clear; asking if someone is a "cat person" can be interpreted in various ways. Instead, ask direct questions like "Do you own one or more cats?"

4. Specific

  • Avoid double-barreled questions that address multiple topics without allowing separate answers. Be specific about what you're asking.

5. Objective

  • Use neutral wording in questions to prevent bias and ensure that personal views do not influence the responses.

Designing Closed-ended Questions

  • Ensure options in closed-ended questions are mutually exclusive (no overlap between choices) and exhaustive (cover all possible answers).

Examples of Improvement

  • A poorly designed question might offer overlapping ranges, such as "20–40 hours" and "40–60 hours": a respondent who works exactly 40 hours fits both categories, creating confusion.

Addressing Non-applicable Responses

  • Include an option for those who do not fit any listed category (e.g., people who don’t work), ensuring everyone can provide an answer.
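
The mutually-exclusive/exhaustive requirement above maps neatly onto half-open numeric intervals. Here is a minimal Python sketch; the specific hour ranges are hypothetical, not from the lecture:

```python
# Half-open intervals [low, high) guarantee mutual exclusivity:
# a value of exactly 40 falls in one and only one category.
bins = [(0, 20), (20, 40), (40, 60), (60, 168)]

def categorize(hours):
    """Return the single bin containing `hours`, or None if no bin fits."""
    for low, high in bins:
        if low <= hours < high:
            return (low, high)
    return None  # None signals the options are not exhaustive

print(categorize(40))  # lands in (40, 60) only, never two categories
print(categorize(0))   # a respondent who doesn't work (0 hours) still fits
```

Whenever `categorize` can return `None` for a plausible respondent, the question needs another option (e.g., "I don't work") to be exhaustive.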

Check All That Apply Style Questions

  • In some cases, such as gender identity surveys, allow respondents to select multiple applicable options rather than forcing them into mutually exclusive categories.

Survey Construction and Reverse Scoring

Importance of Survey Design

  • When constructing surveys, especially political polls, it's crucial to ensure that respondents provide accurate answers rather than defaulting to a pattern of responses.
  • Respondents may stop reading questions carefully in long surveys, leading them to select answers based on patterns rather than their true opinions.

Issues with Response Patterns

  • A respondent who dislikes a subject (e.g., Obama) might circle low numbers consistently without considering each question, resulting in inaccurate data.
  • This autopilot response can create misleading results where all answers appear uniform, indicating a lack of engagement with the survey content.

Implementing Reverse Scored Questions

  • To combat response bias, incorporating reverse scored questions is recommended. These require respondents to think critically about their answers by reversing the scoring scale for certain items.
  • For example, in personality assessments like the Big Five Inventory, some statements are phrased positively while others negatively to ensure careful consideration from respondents.

Data Analysis Adjustments

  • After collecting responses, it’s necessary to convert scores from reverse scored questions before analysis. This involves flipping the score (e.g., 1 becomes 7 and vice versa).
  • An alternative method for conversion is using the formula: minimum score + maximum score - selected answer. This ensures consistency in interpreting high and low scores across different questions.
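
The conversion formula above is easy to express in code. A minimal Python sketch, assuming a 1–7 Likert scale:

```python
def reverse_score(answer, low=1, high=7):
    """Reverse-score a Likert item: minimum + maximum - selected answer."""
    return low + high - answer

# On a 1-7 scale, a response of 2 on a reverse-keyed item becomes 6.
converted = [reverse_score(a) for a in [1, 2, 4, 7]]
print(converted)  # → [7, 6, 4, 1]
```

The same function works for any scale range (e.g., `low=1, high=5` for a 5-point scale), which is why the formula generalizes better than memorizing "1 becomes 7."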

Real-Life Application of Reverse Scoring

  • The benevolent sexism scale exemplifies reverse scoring; certain questions are designed so that agreement indicates higher levels of sexism while others do not.
  • Accurately totaling scores requires adjusting for these reversed items to reflect true attitudes towards benevolent sexism effectively.

Avoiding Order Effects in Surveys

Understanding Order Effects

  • Order effects occur when earlier questions influence how respondents answer later ones. This can lead to biased or skewed results based on prior context provided by initial queries.

Example of Priming Effects

  • If early survey questions prime participants to think negatively about their eating habits or willpower, this could affect their responses regarding food enjoyment later on.

Understanding Order Effects in Survey Design

The Impact of Math Performance on Beliefs

  • Discusses the relationship between math performance and beliefs about math, suggesting that initial poor performance can lead to negative beliefs about the subject.
  • Highlights how respondents' mindset can be influenced by their experiences, particularly when they fail at math questions before expressing their opinions.

Real-Life Examples of Order Effects

  • Introduces a Pew Research example from the early 2000s regarding civil unions, showing that prior questions about gay marriage affected responses to subsequent questions.
  • Demonstrates how framing influences public opinion; asking about gay marriage first led to more favorable views on civil unions afterward.

Variability in Responses Based on Question Order

  • Another Pew example illustrates how satisfaction with the country varied based on whether participants were asked about President Bush first or not.
  • Emphasizes that question order can significantly sway responses, demonstrating the importance of survey design.

Counterbalancing and Randomization Techniques

  • Suggests counterbalancing question order as a method to mitigate order effects in surveys, allowing for balanced data collection.
  • Explains an example involving test anxiety and locus of control scales, where different groups answer questions in varying orders to assess potential biases.
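
The counterbalancing scheme described above can be sketched in a few lines. A hypothetical Python example that rotates participants through the possible scale orders (the scale names are illustrative):

```python
import itertools

# Two question blocks, as in the test-anxiety / locus-of-control example.
scales = ["test_anxiety", "locus_of_control"]
orders = list(itertools.permutations(scales))  # both possible orderings

def assign_orders(participant_ids):
    """Rotate through the orders so each ordering is used equally often."""
    return {pid: orders[i % len(orders)] for i, pid in enumerate(participant_ids)}

assignment = assign_orders(range(8))
# 4 participants answer test_anxiety first, 4 answer locus_of_control first.
```

With more than two blocks, full counterbalancing grows factorially; researchers then often fall back on random ordering per participant instead.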

Addressing Sensitive Questions in Surveys

  • Discusses challenges with sensitive topics like sexual orientation or illegal behavior, noting that respondents may provide socially desirable answers rather than truthful ones.
  • Highlights the concept of reactive or emotional questions and how societal pressures can influence honesty in responses.

Understanding Anonymity in Research

The Importance of Anonymity

  • Participants may alter their responses to please the experimenter or sabotage results; anonymity can mitigate this issue.
  • Online surveys often yield more honest answers due to perceived anonymity, even when conducted in person using computers.
  • The randomized response method lets participants answer sensitive questions with plausible deniability: a coin flip before each question determines whether they answer honestly or give a predetermined answer, so no individual "yes" can be attributed to the respondent.
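
The coin-flip procedure can be simulated to show how researchers recover the population rate without knowing any individual's true answer. A Python sketch, assuming the common variant where heads forces a "yes" and tails means answer honestly:

```python
import random

rng = random.Random(0)

def randomized_answer(truth):
    """Flip a coin: heads -> always answer 'yes'; tails -> answer honestly."""
    return True if rng.random() < 0.5 else truth

# Simulate 100,000 respondents with a true 30% rate of the sensitive behavior.
true_rate = 0.30
answers = [randomized_answer(rng.random() < true_rate) for _ in range(100_000)]

# P(yes) = 0.5 + 0.5 * p, so the true rate is p = 2 * (P(yes) - 0.5).
p_yes = sum(answers) / len(answers)
estimate = 2 * (p_yes - 0.5)
```

Any single "yes" is uninformative (it may just be a heads flip), yet the aggregate estimate converges on the true rate.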

Ethical Considerations

  • Researchers cannot identify individual responses, which protects participants from judgment for potentially illegal or embarrassing admissions.
  • Maintaining anonymity is crucial not only for honest responses but also for ethical reasons, especially regarding sensitive topics like health and personal trauma.

Risks of Data Breach

  • A small chance of data breach can have severe consequences, particularly in regions where certain behaviors are criminalized.
  • If anonymity can't be guaranteed, confidentiality through strong encryption should be prioritized to protect participant information.

Limitations of Self-Report Surveys

Discrepancies Between Attitudes and Behavior

  • Self-reported intentions may not align with actual behavior; individuals might act contrary to their stated beliefs under different circumstances.
  • Affective forecasting shows that people struggle to predict how future events will impact their happiness accurately.

Impact Bias and Emotional Resilience

  • Research indicates that people's happiness tends to revert to baseline levels within 6–12 months after significant life events, such as winning the lottery or experiencing trauma.
  • Individuals often underestimate their emotional resilience and overestimate the long-term effects of both positive and negative experiences.

Understanding Self-Reporting Bias in Human Behavior

The Challenge of Accurate Self-Reporting

  • Researchers often face discrepancies between what individuals say they would do or feel and the actual reality, highlighting a significant challenge in behavioral studies.
  • Historical context is provided through Kinsey's innovative methods for measuring human sexuality, which included sending postcards for private self-reporting on sensitive topics.
  • Findings revealed that self-reported measurements can be inaccurate; individuals tend to bias their answers when measuring personal attributes like physical size or psychological traits.

Implicit Bias and Measurement Techniques

  • Self-assessments regarding memory or extroversion may reflect subjective biases rather than objective truths, leading to misleading conclusions about one's abilities.
  • The implicit association test (IAT) serves as an example where individuals may deny biases (e.g., racism, sexism), yet their behaviors reveal unconscious prejudices not captured by self-reports.

Exploring the Implicit Association Test (IAT)

  • The IAT is designed to uncover unconscious biases by measuring reaction times to various word associations, revealing underlying stereotypes absorbed from culture and experience.
  • For instance, cultural stereotypes suggest girls excel in reading while boys are better at math; these beliefs can influence responses even if individuals consciously reject them.

Reaction Time Variations in IAT

  • Participants respond quickly to words associated with gender and subject matter (math vs. reading), but performance varies significantly based on how tasks are framed.
  • When tasked with associating positive terms with traditionally negative stereotypes (e.g., pairing "good" with Black faces), participants exhibit slower response times compared to more conventional pairings.
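
The congruent-vs-incongruent comparison above boils down to comparing mean reaction times across the two pairing conditions. A minimal sketch with hypothetical data (the actual IAT scoring algorithm is more involved):

```python
from statistics import mean, stdev

# Hypothetical reaction times in milliseconds, not real IAT data.
congruent = [650, 700, 620, 680, 640]    # stereotype-consistent pairings
incongruent = [780, 820, 760, 800, 790]  # stereotype-inconsistent pairings

# A simplified D-like score: mean RT difference divided by the pooled SD.
pooled_sd = stdev(congruent + incongruent)
d = (mean(incongruent) - mean(congruent)) / pooled_sd

# d > 0 indicates slower responding on the stereotype-inconsistent blocks.
```

Dividing by the pooled standard deviation puts the difference on a common scale, so scores are comparable across participants with different overall response speeds.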

Real-world Applications of IAT Findings

  • Studies involving doctors demonstrate how implicit biases affect decision-making; variations in task conditions reveal differences in processing speed related to racial associations.
  • Results indicate that doctors were slower when positive attributes were paired with Black patients versus White patients, suggesting deeper cognitive processing required due to ingrained societal biases.

Understanding Unconscious Bias and Self-Report Limitations

The Nature of Unconscious Bias

  • People often unconsciously associate negative traits with Black people and positive traits with White people, highlighting the prevalence of unconscious biases that individuals may not consciously acknowledge.
  • The Implicit Association Test (IAT) has faced criticism regarding its predictive validity for real-world behaviors, emphasizing the need to be cautious about relying solely on self-reported data.

Limitations of Self-Reporting

  • Self-reporting may fail to capture all unconscious influences on behavior, as individuals are often unaware of these factors affecting their beliefs and actions.
  • When asked about their voting choices, people tend to provide confident explanations that may not accurately reflect their true motivations, suggesting a tendency for confabulation in self-explanations.

Influences on Behavior

  • Research indicates that superficial factors like candidates' appearances can significantly influence voting behavior, even when voters lack familiarity with the candidates.
  • People's decisions are shaped by various unconscious influences including genetics, environment, and situational contexts which they might not recognize or acknowledge.

Constructing Narratives Post-Hoc

  • Individuals often create narratives about their past behaviors after the fact as a way to rationalize actions influenced by unconscious processes.
  • Researchers must recognize that human self-report is not always reliable; narratives constructed post-event may obscure true motivations behind actions.

Value of Self-Report Data

  • Despite limitations, self-report data remains crucial in psychology for gathering large representative samples and identifying patterns in human behavior.
  • Careful survey design is essential; poorly constructed surveys can lead to ambiguous results. Thoughtful question framing can reveal how different contexts affect responses.

This structured summary provides an overview of key concepts related to unconscious bias and the limitations inherent in self-report methodologies within psychological research.

Video description

This is a lecture video for a university course in Research Methods taught by Dr. Brian W. Stone. You may wish to play it at x1.25 speed. As with anything taught at the undergraduate level the information here may be simplified, and at higher levels of study there is more nuance to all of it.