Comprehensive CRC Exam Prep
Questions and Answers.
Scales of measurement -
nominal, ordinal, interval, ratio
Nominal scale -
Classifies and assigns numerals, but does not distinguish size or amount
Classifying by name based on characteristics of the person or object being measured
- Variables are not numerical - can be measured only in terms of frequencies
- No ordering of the cases is implied
- Cannot average a nominal-level variable - e.g., there is no average of male/female
- AKA categorical variables
- E.g., any categorical variable, such as ethnicity, gender, race, etc.
Ordinal scale -
Indication of ordering, but no indication of distances between objects on the scale
- Provides a measure of magnitude and, thus, often provides more info than a nominal
scale
- Instrument with an ordinal scale makes it possible to determine which scores are
smaller or larger than other scores
- Rank ordered according to degree
- Categories described as more or less
- E.g., placing 1st, 2nd, 3rd
- Ex. Education: 0=less than HS, 1=some HS, etc.
- Ex. SES (upper, middle, working, lower)
- Ex. Client satisfaction
- Ex. GPA (distance between 4 and 3 is not the same as that of 3 and 2)
- Ex. Likert scale (strongly agree, agree, etc.)
Interval scale -
- Units are equal intervals on the scale
- Thus, a difference of 5 points between 45 and 50 represents the same amount of
change as the difference of 5 points between 85 and 90
- Distance between attributes DOES have meaning, but NO true 0 point
- Standardized scales that have had norms developed based on very large samples are
about the only type of interval measures that practitioners encounter
- E.g., temperature scale
- Ratios do not make sense because distance from 30 to 40 is the same as from 70 to
80, but 80 degrees is not twice as hot as 40 degrees
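A small worked sketch of why ratios are misleading on an interval scale; the conversion to Kelvin (a true-zero scale) is an added illustration, not part of the original note:
```python
# Illustrative sketch: ratios are not meaningful on an interval scale because
# the zero point is arbitrary. Kelvin has a true zero, so it exposes the issue.

def fahrenheit_to_kelvin(f: float) -> float:
    """Convert degrees Fahrenheit to kelvins (a ratio scale with a true zero)."""
    return (f - 32.0) * 5.0 / 9.0 + 273.15

hot, mild = 80.0, 40.0
print(hot / mild)                                              # 2.0 -- misleading ratio
print(fahrenheit_to_kelvin(hot) / fahrenheit_to_kelvin(mild))  # ~1.08 -- 80 F is not "twice as hot"
```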
Ratio scale -
- Has all the properties of an interval scale (meaningful distance), but also possesses a
true, non-arbitrary zero point
E.g., measures of weight, height, income, years of schooling, blood pressure, number of
kids, driving mph
Validity -
Estimates how well a test measures what it purports to measure
The degree to which accumulated evidence and theory support specific interpretations
of test scores entailed by proposed use of a test
Reliability is a prerequisite for validity
Ex. A scale accurately measures a person's true weight
-- If the scale does not accurately measure the person's true weight, even if each time
they stepped on the scale the weight displayed was consistent, the scale would have
little value
Reliability -
The dependability, CONSISTENCY, and precision of an assessment procedure
Produces similar results when repeated
(The degree to which test scores are dependable and repeatable for an individual test
taker)
Prerequisite for validity
Ex. a person steps on a scale and the scale consistently indicates the exact same
weight (whether or not this is the person's accurate, true weight)
Test-retest reliability -
Measure of consistency over time
Indicates relationships between scores obtained by individuals within the same group on
2 administrations of the test
(A common method for estimating the reliability of an instrument is to give the identical
instrument twice to the same group of people)
- A reliability coefficient is calculated by correlating the performance on the first
administration with the performance on the second administration
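A minimal sketch of how such a coefficient could be computed; the scores below are invented for illustration only:
```python
# Test-retest reliability sketch: correlate each person's score on the first
# administration with their score on the second administration.
import numpy as np

first_admin  = np.array([22, 30, 17, 25, 28, 19, 31, 24])   # time 1 scores
second_admin = np.array([24, 29, 18, 23, 27, 21, 30, 25])   # time 2 scores, same people

# Pearson correlation between the two administrations = test-retest reliability
r = np.corrcoef(first_admin, second_admin)[0, 1]
print(f"test-retest reliability coefficient: {r:.2f}")
```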
Internal consistency -
A method of reliability in which we judge how well the items on a test that are proposed
to measure the same construct produce similar results
- Internal consistency reliability is assessed by using responses at only one point in time
Split-half reliability -
Measure of internal consistency
(Spearman-Brown formula)
Consistency of scores obtained by people within the same group on 2 different parts of
the test (comparing someone's score on one half of a measure to their score on the
other half)
E.g., odd vs. even items
- The instrument is given once and then split in half to determine the reliability
- First step in this method involves dividing the instrument into equivalent halves
- Splitting the instrument into the first half and the second half is sometimes not
appropriate because some tests become progressively more difficult
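A short sketch of the odd/even split and the Spearman-Brown correction, using invented item responses:
```python
# Split-half reliability sketch with the Spearman-Brown correction.
import numpy as np

# rows = respondents, columns = items (e.g., 1-5 Likert responses; made up)
items = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 2, 1, 1],
])

odd_half  = items[:, 0::2].sum(axis=1)   # score on odd-numbered items
even_half = items[:, 1::2].sum(axis=1)   # score on even-numbered items

r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown formula estimates the reliability of the full-length test
r_full = 2 * r_half / (1 + r_half)
print(f"half-test correlation: {r_half:.2f}, Spearman-Brown estimate: {r_full:.2f}")
```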
Interrater reliability -
The extent to which different raters agree on their observations of a behavior being
measured
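The notes do not name a specific agreement statistic; below is a simple percent-agreement sketch with invented ratings (Cohen's kappa is a common chance-corrected alternative):
```python
# Interrater reliability sketch: proportion of observations on which two raters agree.
rater_a = ["on-task", "off-task", "on-task", "on-task",  "off-task", "on-task"]
rater_b = ["on-task", "off-task", "on-task", "off-task", "off-task", "on-task"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(f"percent agreement: {percent_agreement:.0%}")   # 5 of 6 observations match
```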
Parallel forms reliability -
Consistency of scores of people within the same group on 2 alternate but equivalent
forms of the same test taken at the same time
Cronbach's alpha -
Internal consistency statistic calculated from the pairwise correlation between items
- Measures internal consistency by correlating each item on a measure with every other
item
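A minimal sketch using the item-variance form of the alpha formula (equivalent in spirit to the inter-item correlation description above), with invented responses:
```python
# Cronbach's alpha sketch: alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
import numpy as np

# rows = respondents, columns = items (made-up data)
items = np.array([
    [4, 5, 4, 4],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)        # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```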
Face validity -
Appraisal of a test's content based on the "face" of the test (looking at the content)
Content validity -
- Degree to which the evidence indicates the items, questions, or tasks adequately
represent the intended behavior domain
- Evaluation by subject matter experts of test items' representativeness of the construct
being measured
- Central focus on how the instrument's content was determined
Criterion or predictive validity -
Comparison of the test with a related outcome measure
- Prediction is used with concurrent validity in a broad sense because it is predicting behavior based on the current context; this type of criterion-related validity is used when we want to make an immediate prediction, such as with diagnosis
-- How well scores on the measure predict behavior at a time in the future
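A minimal sketch of a predictive validity coefficient, correlating scores on the measure with a criterion gathered later (all values invented for illustration):
```python
# Predictive validity sketch: correlation between test scores and a later criterion.
import numpy as np

test_scores     = np.array([55, 62, 48, 70, 66, 51, 59, 73])                # measure at intake
criterion_later = np.array([3.1, 3.4, 2.6, 3.8, 3.5, 2.9, 3.2, 3.9])        # outcome at follow-up

validity_coefficient = np.corrcoef(test_scores, criterion_later)[0, 1]
print(f"predictive validity coefficient: {validity_coefficient:.2f}")
```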
Classification consistency -
The instrument may be given twice and a statistical analysis of consistency is conducted
Concurrent validity -
Different from predictive validity - difference being the period of time between taking the
instrument and gathering the criterion info
In concurrent validation, there is no time lag between when the instrument is given and
when the criterion info is gathered
- How well scores are related to a criterion measured at the same time
Construct validity -
The extent to which the measure actually measures the theoretical construct
- How adequately the operational definition of a variable actually reflects the true
meaning of the variable
Indicators of construct validity:
- face validity - the measure appears to measure what it is supposed to measure
- criterion-oriented validity - how well scores on a measure relate to a criterion (an
indicator of the construct)
Criterion-oriented validity -
- An indicator of construct validity
- How well scores on a measure relate to a criterion (an indicator of the construct)