Summary KOM Research Methods: Experimental designs and integrity

Summary of the book Research Methods by Beth Morling for Kennismaking onderzoeksmethoden en Statistiek (Introduction to Research Methods and Statistics), covering the sections on experimental designs and integrity.

September 19, 2023 · 20 pages · academic year 2021/2022

Book, Chapter 10, pp. 273–286
Experiment: the researcher must have manipulated at least one variable and measured
another.
Manipulated variable: a variable the researcher controls.
Measured variable: takes the form of records of behavior or attitudes, such as self-reports,
behavioral observations, or physiological measures.
The manipulated variable is also called the independent variable. The levels of an
independent variable are called conditions.
The measured variable is also called the dependent variable, or outcome variable. How a
participant acts on the measured variable depends on the level of the independent variable.
When researchers graph their results, the independent variable is almost always on the x-
axis and the dependent variable is on the y-axis. The independent variable always comes
first in time, and the dependent variable comes second or later.
Researchers also control third variables (nuisance variables) in their studies by holding all
other factors constant between the levels of the independent variable. Any variable that an
experimenter holds constant on purpose is called a control variable. Control variables are
not really variables at all because they do not vary.

The three rules for an experiment to support a causal claim are:
- Covariance: do the results show that the causal variable is related to the effect
variable?
- Temporal precedence: does the study design ensure that the causal variable
comes before the outcome variable in time?
- Internal validity: Does the study design rule out alternative explanations for the
results?
One of the upsides of experiments is that they have comparison groups; this way you can
answer the question: compared to what?
Control group: a level of an independent variable that is intended to represent ‘no
treatment’ or a neutral condition. When a study has a control group, the other level or levels
of the independent variable are usually called the treatment group(s). When the control
group is given an inert treatment it is called a placebo group or placebo control group.
When a study uses comparison groups, the levels of the independent variable differ in some
intended and meaningful way. All experiments need a comparison group so the researchers
can compare one condition to another, but the comparison group does not need to be a
control group.
The ability to establish temporal precedence is a feature that makes experiments superior to
correlational designs. Experiments unfold over time, and the experimenter makes sure the
independent variable comes first.
For a study to be internally valid, one must ensure that the causal variable, and not other
factors, is responsible for the change in the outcome variable. You can interrogate this
validity by exploring alternative explanations.
For any given research question, there can be several possible alternative explanations,
known as confounds or potential threats to internal validity.
Internal validity is subject to a number of distinct threats:
- Design confound: an experimenter’s mistake in designing the independent variable;
it is a second variable that happens to vary systematically along with the intended
independent variable and therefore is an alternative explanation for the result.

When an experiment has a design confound, it has poor internal validity and cannot support
a causal claim.
Not every potentially problematic variable is a confound. It is only a problem if it varies
systematically with the independent variable; if it shows unsystematic variability (random
or haphazard), it is not a confound. Unsystematic variability can still cause other problems
in an experiment: it can obscure differences in the dependent variable, making them harder
to detect. Even so, it should not be called a design confound.
- Selection effect: when the kinds of participants in one level of the independent
variable are systematically different from those in the other.
A selection effect may occur if the experimenters assign one type of person to one condition,
and another type of person to another condition. Well-designed experiments often use
random assignment to avoid selection effects. Assigning participants at random to different
levels of the independent variable controls for all sorts of potential selection effects.
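The idea of random assignment can be sketched in a few lines of Python (an illustration, not from the book): participants are shuffled and then dealt into conditions, so any participant characteristic is equally likely to end up at either level of the independent variable.

```python
import random

def random_assignment(participants, conditions, seed=None):
    """Shuffle participants, then deal them round-robin into conditions."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, participant in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(participant)
    return groups

# 20 hypothetical participants split at random over two conditions
groups = random_assignment(range(20), ["treatment", "control"], seed=1)
```

Because assignment depends only on the shuffle, not on who the participants are, systematic differences between the groups (selection effects) are avoided.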
In the case that researchers wish to be absolutely sure the experimental groups are as equal
as possible before they administer the independent variable, they may choose to use
matched groups or matching. To create a matched group, the researchers would first
measure the participants on a particular variable that might matter to the dependent variable.
They would next match participants up in pairs, starting with the two with the highest scores,
and within that matched set, randomly assign one of them to each of the two conditions.
They would continue this process until they reach the participants with the lowest scores.
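The matched-groups procedure above can be sketched as follows (a minimal two-condition illustration; the function and variable names are my own):

```python
import random

def matched_assignment(scores, seed=None):
    """Pair participants by their score on the matching variable (highest
    first), then randomly assign one member of each pair to each condition.
    `scores` maps participant -> score; an unpaired last participant is dropped."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get, reverse=True)
    condition_a, condition_b = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)  # random assignment *within* the matched pair
        condition_a.append(pair[0])
        condition_b.append(pair[1])
    return condition_a, condition_b

scores = {"p1": 90, "p2": 85, "p3": 70, "p4": 60}
condition_a, condition_b = matched_assignment(scores, seed=0)
```

Each condition receives one member of every matched pair, so the groups start out roughly equal on the matching variable while assignment within pairs stays random.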

Pp. 298–306
In an experiment, researchers operationalize two constructs, the independent variable and
the dependent variable. When you interrogate the construct validity of an experiment, you
should ask about the construct validity of each of these variables.

Dependent variables: you should start by asking how well the researchers measured their
dependent variables. One aspect of good measurement is face validity.

To interrogate the construct validity of the independent variables, you would ask how well
the researchers manipulated (or operationalized) them. In some studies, researchers need
to use manipulation checks to collect empirical data on the construct validity of their
independent variables. Manipulation check: an extra dependent variable that researchers
can insert into an experiment to convince themselves that their experimental manipulation worked.
Manipulation checks are more likely to be used when the intention is to make the
participants think or feel certain ways.
The same procedure might also be used in a pilot study: a simple study, using a separate
group of participants, that is completed before (or sometimes after) conducting the study of
primary interest.
Experiments are designed to test theories. Therefore, interrogating the construct validity of an
experiment requires you to evaluate how well the measures and manipulations researchers
used in their study capture the conceptual variables in their theory.

As with an association or frequency claim, when interrogating a causal claim’s external
validity, you ask how the experimenters recruited their participants.
When asking about external validity, you ask about random sampling.
When asking about internal validity, you ask about random assignment.

In experiments, internal validity is often prioritized over external validity. To get a clean,
confound-free manipulation, researchers may have to conduct their study in an artificial
environment; these locations may not represent situations in the real world.

When interrogating statistical validity the first question to ask is whether the difference
between means obtained in the study is statistically significant. A statistically significant
result suggests covariance exists between the variables in the population from which the
sample was drawn.

Knowing a result is statistically significant tells you the result was probably not drawn by
chance from a population in which there is no difference between groups. However, if a
study used a very large sample, even tiny differences might be statistically significant.
Asking about effect size can help you evaluate the strength of the covariance.
The correlation coefficient r can help researchers evaluate the effect size of an association.
In experiments the indicator of standardized effect size is called d; it represents how far
apart two experimental groups are on the dependent variable, and it also indicates how much
the scores within the groups overlap. It takes into account both the difference between means
and the spread of scores within each group (standard deviation). When d is larger, it
usually means that the independent variable caused the dependent variable to change for
more of the participants in the study. When d is smaller, it usually means the scores of
participants in the two experimental groups overlap more.
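As a worked illustration (not taken from the book), d can be computed as the difference between the group means divided by the pooled standard deviation:

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Standardized effect size d: mean difference / pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

# Two small hypothetical groups of scores on the dependent variable
d = cohens_d([6, 7, 8, 9], [4, 5, 6, 7])
```

With the same mean difference, more spread within the groups (a larger pooled standard deviation) yields a smaller d, which matches the overlap interpretation above.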

Statistical review, pp. 479–495
Inferential statistics: a set of techniques that uses the laws of chance and probability to
help researchers make decisions about the meaning of their data and the inferences they
can make from that information. Inferential statistics are performed with the goal of
estimation.
The traditional inferential statistics technique is called null hypothesis significance testing
(NHST). It follows a set of steps to determine whether the result from a study is statistically
significant.
Null hypothesis: the assumption that nothing is going on.
The steps of null hypothesis significance testing:
1. Assume there is no effect (the null hypothesis)
2. Collect data
3. Calculate the probability of getting such data, or even more extreme data, if the null
hypothesis is true
4. Decide whether to reject or retain the null hypothesis
When we reject the null hypothesis, we are essentially saying: data like these could have
come about by chance, but data like these happen very rarely by chance; therefore we are
pretty sure the data were not the result of chance.
When we retain the null hypothesis, we are essentially saying: data like these could have
happened just by chance; in fact, data like these are likely to happen by chance …% of
the time; therefore we conclude that we are not confident enough, based on these data, to
reject the null hypothesis.
Alpha level: the point at which researchers decide whether the p value is too high (and
therefore retain the null hypothesis) or very low (and therefore reject the null
hypothesis). Usually set at 5%.
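Step 3 of NHST asks for the probability of data at least this extreme if the null hypothesis is true. One simple way to estimate that probability, shown here as an illustration rather than as the book's method, is a permutation test: repeatedly relabel the observations at random and count how often the shuffled group difference is as large as the observed one.

```python
import random
from statistics import mean

def permutation_p_value(group1, group2, n_perm=10_000, seed=0):
    """Estimate the two-sided p value: the share of random relabelings whose
    absolute mean difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(mean(group1) - mean(group2))
    pooled = list(group1) + list(group2)
    n1 = len(group1)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel under the null: group labels are arbitrary
        if abs(mean(pooled[:n1]) - mean(pooled[n1:])) >= observed:
            extreme += 1
    return extreme / n_perm

ALPHA = 0.05  # the conventional alpha level
p = permutation_p_value([6, 7, 8, 9], [4, 5, 6, 7])
reject_null = p < ALPHA
```

For these overlapping toy groups the estimated p comes out well above 0.05, so by the rule above the null hypothesis is retained.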
