Adv. Res. Met. in Social and Organizational Psych (PSMSM1)
Advanced Research Methods in Social and Organizational Psychology – Lecture
Notes (COMPLETE!)
Lecture 1 – Intro to course; validity and reliability of survey measures; and experimental designs
Refreshing vocabulary:
• 1. True experiment: A design in which participants are assigned randomly to treatments
• 2. Quasi-experiment: A design that resembles an experiment in that discrete groups are used, but pps aren't randomly assigned to treatments, nor are treatments randomly determined for groups
• 3. "Between groups" vs. "within groups" design: A treatment between conditions vs. a measurement referring to differences within the participants at different times
• 4. Construct variable: A theoretical variable that has 'reality status', such as competition, attractiveness, negative mood
• 5. Operational variable: The researchers' precise operational definition (i.e., measurement) of a theoretical construct
• 6. Independent variable: The variable presumed to cause a change in the dependent variable
• 7. Dependent variable: The variable presumed to be affected by the independent variable
• 8. Hypothesis: A statement of a proposed relation between constructs
• 9. Theory: A well-established principle that has been developed to explain some aspect of the natural world
• 10. Construct validity: The degree to which the operational definition accurately measures the construct of interest
• 11. Convergent validity: Overlap among variables presumed to measure the same construct
• 12. Discriminant validity: The extent to which it is possible to discriminate between dissimilar constructs
• 13. Random assignment: The process by which subjects receive an equal chance of being assigned to a particular condition
• 14. Manipulation check: A measured variable designed to assess whether the manipulation worked and tapped the desired construct
• 15. Demand characteristic: An aspect of the experiment encouraging the participant to respond according to situational constraints
• 16. Reliability: The extent to which a construct is measured without error or bias
• 17. Subject expectancies: A demand characteristic whereby subjects think they know the experimenters' interests and act accordingly
• 18. Double-blind procedure: A procedure in which neither the experimenter nor the pp knows to which condition the pp is assigned
• 19. Order effects: The effects on behavior of presenting two or more treatments to the same pps
• 20. Counterbalancing: A technique for controlling order effects by which each condition is presented first an equal number of times across participants: present the conditions in every possible order
• 21. Moderation (interaction): The effect of an independent variable on a dependent variable depends on the level of a third variable (i.e., the moderator)
• 22. Mediation: The effect of an independent variable on a dependent variable is explained by the change in a third variable (i.e., the mediator)
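The counterbalancing entry above (term 20) can be made concrete with a short sketch. This illustration is my own addition, not part of the lecture; the three condition labels are hypothetical.

```python
# Illustrative sketch (not from the lecture): full counterbalancing of three
# hypothetical conditions by enumerating every possible presentation order.
from itertools import permutations

conditions = ["A", "B", "C"]          # hypothetical treatment labels
orders = list(permutations(conditions))

print(len(orders))                    # 3! = 6 possible orders
for order in orders:
    print(" -> ".join(order))

# Across the 6 orders, each condition comes first in exactly 2 of them, so
# assigning participants evenly over the orders controls for order effects.
```

With k conditions there are k! orders, which is why full counterbalancing quickly becomes impractical and partial schemes (e.g., Latin squares) are sometimes used instead.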
How to measure psychological constructs: Measurement starts from a theoretical construct, which leads to observations, which lead to values/scores (measures). How people score on a measure should reflect the theoretical construct. Two important things for measurement:
1. Construct validity: Does it measure what it is supposed to measure?
2. Reliability: Is the construct measured reliably, that is, without too much error?
Self-report
A self-report measure is when participants give the information themselves (e.g., a diary of reflection). An example is the statement 'I try to restrict my consumption of meat', which participants answer on a scale. Reasons to use self-report measures: easy to administer, cheap, etc.
Important things for self-report are:
• Question wording is important in self-report measures. Questions need to be easy to understand and unambiguous (e.g., 'I identify with my groups' or 'I work hard' are not very clear), and you need to avoid double-barreled questions.
• Open versus closed questions: Open questions lead to better, more valid responses. But
it takes time to code the responses to open questions. Closed questions are easier to
use for the investigator.
• Response options: In many cases it is better to use response options that match the question. Statements with agree-disagree options are widely used. Most respondents prefer all response options to be labeled. As a rule of thumb you need 5 response options for a unipolar scale and 7 for a bipolar scale, but it also depends a lot on the specific question and on the target respondents.
• Creating scales from multiple items I: Measuring a construct with a single
item/statement is often undesirable. Reliability is assessed with Cronbach's alpha or
Omega. Reliability increases with the number of items and with correlation between
items.
o Bandwidth versus fidelity problem: A broad construct requires many diverse
items. However, diversity in item content leads to low fidelity (lower correlation
between items). It is silly to ask the same question 5 times in a slightly different
way to increase alpha. So, if you want to measure a broad construct, then the
item content should reflect this.
▪ For example, extraversion includes sociability, assertiveness, and
talkativeness. So, can you measure extraversion with five items about
how much people like parties? Of course, liking parties is relevant, but it
is only a small part of the construct. These five items lead to a scale with
high reliability but low validity because only one aspect of extraversion
has been assessed.
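As a sketch of how Cronbach's alpha behaves, the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score) can be computed directly. The 4-item data below are invented purely for illustration and are not from the lecture.

```python
# Hedged sketch: Cronbach's alpha for an invented 4-item scale
# (rows = respondents, columns = items; data are made up for illustration).
import numpy as np

scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [3, 3, 2, 3],
    [4, 4, 5, 4],
])

k = scores.shape[1]                         # number of items
item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
total_var = scores.sum(axis=1).var(ddof=1)  # variance of the sum score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 3))
```

Because the formula depends on k and on how strongly items covary, it mirrors the note above: alpha rises with more items and with higher inter-item correlations, which is exactly why five near-identical items can yield a high alpha without good validity.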
• Creating scales from multiple items II: Acquiescence means that some people tend to
agree with most statements. The solution for this is to include both forward-scored and
reverse-scored items. This causes a new problem, namely that the negation in reverse-scored items is often overlooked by respondents (so put it in CAPITALS). Another problem is that items sometimes get a different substantive meaning (e.g., 'I like my boss' vs. 'I dislike my boss').
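Before forward- and reverse-scored items are combined into one scale, the reverse-scored items have to be recoded. A minimal sketch of the usual recoding, assuming a 1-5 response scale (my illustration, not from the lecture): a raw answer r becomes (scale_min + scale_max) - r.

```python
# Hedged sketch: recoding reverse-scored items on an assumed 1-5 scale.
def reverse_score(raw, scale_min=1, scale_max=5):
    # "5 = strongly agree" on a negatively worded item becomes 1, etc.
    return scale_min + scale_max - raw

raw_answers = [5, 1, 4, 2, 3]                     # hypothetical responses
recoded = [reverse_score(r) for r in raw_answers]
print(recoded)  # [1, 5, 2, 4, 3]
```

After recoding, an acquiescent respondent who answers 'agree' to everything no longer gets a uniformly high scale score, which is the point of mixing item directions.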
• Construct validity of self-report scales
o Construct validity tests whether the items are representative of the construct (e.g., an MC exam).
o Structural validity tests whether the factor structure (dimensions) is consistent with the structure of the theoretical construct (e.g., a personality measure).
o Generalizability: Measure works in different contexts and for different
populations.
o Convergent validity tests the correlation with related constructs; discriminant validity tests the (lack of) correlation with unrelated constructs.
o Method variance: when two constructs are measured with the same method (e.g., self-report, maybe even with structurally similar items), this inflates their correlation. For example, is a relation between self-esteem and health, when both are self-reported, due to real covariation or to common method variance? So, avoiding common method variance makes for a stronger design (also in OUR final assignment).
Potential problems of self-report measures:
• Answers do not reflect the underlying attitude (social desirability bias or response bias).
E.g., acquiescence response style: responding positively ('true' or 'yes').
• Questions are misinterpreted or interpreted differently by different subjects.
• Survey fatigue
• Low introspective ability (people do not know their own mental state).
→ These are concerns about construct validity, because there are many influences other than item content. Solutions: have others report on the target individual, or have the target complete a non-self-report measure.
Other-ratings
Other-ratings/observer-ratings are frequently used in occupational psychological risk assessment. Research found moderate to high agreement between workers' self-rated and observed occupational psychosocial demands. Both methods are useful assessment strategies in the context of psychosocial risk assessment of adverse working conditions (Schneider et al., 2019). Observer-ratings can be done by occupational safety and health (OSH) committees. OSH experts are, for example, occupational health physicians, health & safety experts, and industrial and organizational psychologists.
However, other-ratings may bring other biases:
• An example is supervisor ratings, which can lead to:
o Halo: inappropriate generalization from one aspect of a worker (e.g., an outstanding trait) to other aspects.
o Contrast: Tendency to evaluate a person relative to another person.
o Liking bias: The tendency to judge in favor of people and symbols we like.
• Another example is rater motivation/emotions/intentions: These are processes that
might lead raters to provide inaccurate ratings intentionally or unintentionally (e.g., role
of power motive, trust, threat perceptions). It’s important to remember that a lot
depends on the relation between the rater and the target.
• Another example is that other-ratings of OCB (i.e., colleagues, supervisors) could lead to
underestimation, because some OCBs may be difficult to observe (e.g., keeping up with
what is going on with the organization and preventing work-related conflicts with others
(courtesy)). Only public behaviors are observable; private behaviors are not, and mental states are not always observable. In addition, people might behave differently in front of others, and few others have a complete view of who we are (e.g., being in front of supervisors or of different friends can bring out different aspects of who we are).
Multi-rater example - 360° Feedback: Feedback regarding an employee’s
behavior from a variety of points of view. It is used for personnel development,
performance appraisal, and decisions (e.g., compensation). A con is that multi-rater assessment may generate diverging feedback (e.g., self-ratings tend to be higher), which often leads to reactivity, defensiveness, and dissatisfaction in response to discrepant evaluations.
In sum – How good are other-reports?:
• Varying degrees of agreement depending on both the construct (e.g., behavior, affect,
attitude) and the source (e.g., supervisor, external committee)
• Other-ratings (i.e., colleagues, supervisors) could lead to underestimation. Some states
are difficult to observe e.g., keeping up with what is going on with the organization.
• Only public, not private, behaviors are observable for others.
• People might behave differently in front of (different) others.
• Other-ratings may also introduce other biases so be aware of those when you use them
in your study design.
• In defense of self-reports: Other-ratings fail to capture individual subjective experience and perception, and some constructs are perceptual in nature → use self-report (values, attitudes, affect).
Research design
Experimental designs test causal relationships and have high internal validity. Correlational designs observe and describe social reality (e.g., Which groups in society are not following coronavirus measures, and why?). Experimental designs test causal relationships (e.g., Do social norms affect people's willingness to get tested and self-quarantine?). The choice between the two designs depends on your research question.
Experiments test causality, i.e., they establish with high internal validity that X → Y. This is important for theory and for effective interventions/policy.
Manipulation: Construct validity of the manipulation concerns whether it manipulates (only) what you wanted to manipulate. A manipulation needs a control condition in which everything is equal to the other conditions apart from the crucial element of the manipulation.
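The random-assignment step behind such a design can be sketched in a few lines. The participant IDs, condition names, and seed below are invented details for illustration; balancing the condition list before shuffling guarantees equal group sizes while each participant still has an equal chance of either condition.

```python
# Hedged sketch: randomly assigning participants to an experimental and a
# control condition with equal group sizes (IDs and seed are invented).
import random

random.seed(42)  # fixed only so the example is reproducible

participants = [f"pp{i}" for i in range(1, 9)]
conditions = ["experimental", "control"] * (len(participants) // 2)
random.shuffle(conditions)               # the random assignment step

assignment = dict(zip(participants, conditions))
for pp, cond in assignment.items():
    print(pp, "->", cond)
```

Because assignment is random, any pre-existing differences between participants are spread across both conditions, which is what gives the design its internal validity.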