Perception, lecture 1, 13/11/2020, Akyürek
Why should we study perception? Perception is fundamental to our lives; it is essential to everything that we do. This is not a novel idea: ancient philosophers already thought about this issue. There is a difference between what is physically real (physical reality) and what we perceive (perceptual reality). The flower on the left of the slide has an interesting property: if you look at it with a special device, you see the picture on the right. This is not something that we can see, yet it seems to be in the flower, and there are animals (bees) that can see it. So the idea that what we perceive is simply physical reality isn't all that straightforward; there are many things that we cannot perceive.
A closely related question is the following: is our perceptual system the perfect measuring device that we think it is? Can we trust our perceptual reality? Kant argued that the senses do not make errors, because they are merely objective: the senses do not judge. There is stimulation in the outside world, it hits your senses, and then there is a percept; we are not making any judgement about what we perceive. But is this true? No, because sometimes the physical reality (the stimulus) differs from what we perceive, as in visual illusions. Many percepts are in fact guesses: what our perceptual system has learned to be probable.
The perceptual process: there is a stimulus in the world, called the distal stimulus. The distal stimulus gives rise to light entering your eyes, sound entering your ears, and so on; these physical properties arriving at the senses are the proximal stimulus. The proximal stimuli are then converted by our sense organs (eye, ear) into neural signals. These are processed in the brain and at some point you gain perceptual awareness of the stimulus. Sensation is, roughly, the early registration of the proximal stimulus; perception is further along the hierarchy (at the brain level), the later process where you have an actual percept. The dissociation between sensation and perception is not very strict. Some things are driven by the world around us; this is called bottom-up processing, where the properties of the stimulus cause something in our brain. There is also top-down processing: if we have strong expectations about something, we are more likely to perceive that kind of thing. So there are influences from both sides, not just from the stimulus but also from how we are set up.
There are three main types of questions:
- How does the proximal stimulus carry information about the thing that is perceived (i.e., the
distal stimulus)?
- How is the proximal stimulus transformed into neural signals?
- What is the relationship between perceptual experience and the distal stimulus, the thing
that is perceived?
The primary senses
- Vision (sight): the physical stimulus is light; the transducing receptors are the photoreceptors in the eye; what is sensed physically is intensity and wavelength; what is perceived is brightness, colour, shape, etc.
- Audition (hearing): involved in the processing of sound; the transducing receptors are the hair cells in the inner ear; what is sensed physically is amplitude, wavelength, etc.; what is perceived is loudness, pitch, etc.
- Tactile perception (touch); Olfaction (smell); Gustation (taste)
- But there are others, like proprioception, pain perception, thermoreception; all these things
are considered to be senses.
Evolution has played a large role in the development of our perceptual system. There is a huge variety between animals; their eyes, for example, can have slightly different functions and sensitivities.
Measuring perception (perception is not only the physical stimulus that is out there, but also what is in our heads) is a challenge. In order to measure it objectively, we need a solid measuring method; in behaviour, we make use of psychophysics. We are dealing with first-person data, something that is inside the individual's head, and that makes it difficult. You can say that you perceive a certain colour, but what 'red' is to you perceptually can differ from how others see that colour. Another person cannot see what you see, so we need a measure for this; that is the main challenge. Psychophysics is one way to measure perception methodologically for research.
Topics that psychophysics is concerned with:
- Measuring thresholds (what is the minimum intensity that we need to perceive a particular stimulus; what is the minimum difference between stimuli that we can perceive, e.g. at what point can we tell that one light is brighter than the other).
- Scaling (how does a perceptual experience change with differences in the stimulus in the environment; e.g. if we turn up the volume of our speaker system, how does our perception of that sound change in relation to the change in the actual physical sound).
Psychophysics: finding the absolute threshold (the minimum). This is also referred to as a detection task (what is the weakest stimulus that we can still detect?). How can we find this absolute threshold (detection)?
1. Method of constant stimuli: stimuli of varying intensity are presented over a number of trials in which the participant's response to each stimulus (e.g. a light) is measured. This allows the experimenter to determine at what intensity the participant indicates that they saw the light (or, e.g., heard a tone). In the example there are three trial series in which all the different intensities are presented and the responses are measured. You have to do a lot of presentations of every intensity, which makes this method laborious. It could be done more cleverly, by simplifying the method and presenting fewer stimuli so that we get the measures more quickly without losing too much information.
2. The method of limits / method of adjustment: here the threshold is sought in a slightly different way. Again there are three trial series with the same stimulus intensities, but the stimuli are not presented in random order; there is a fixed sequence. E.g. in the first trial series they start at the bottom and see at what point the participant's response changes. As soon as that happens, they stop; the remaining intensities are not shown. In the next trial series they go from the top to the bottom (the order is reversed) and again approach the threshold. So the actual threshold is approached more and more closely without showing all the stimuli.
Finding the absolute threshold: the psychometric function. You might expect that at low-intensity tones there is no percept, and that once the threshold is reached you will always see/hear the stimulus. This is not what happens; it is more gradual. What we typically see is a gradual curve (the psychometric function). The responses are graded: at some of the lower intensities there is sometimes a percept, and at the higher intensities people sometimes still miss the stimulus. The function is probabilistic. With this psychometric function we are still able to define the minimum threshold; a sketch of how this can be estimated from constant-stimuli data follows below.
3. The last psychophysical method is the staircase method, which is the most efficient. It starts at the lowest or highest intensity level, like the method of limits, and goes up until the response changes, then stops. The next series then starts at a level below the one where the change occurred; so when the participant's response changes at 9 in the first series, the second series starts from 7 and works down. In this way the tested intensities keep hovering around the threshold. A slight drawback of this method is the empty rows at the higher intensities: there are no observations there, so we do not know what the participant would do at those levels. It is only a small drawback, because the participant is very likely to perceive those stimuli, but you cannot be sure. This does not happen with the other methods. A simple simulated staircase is sketched below.
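A small simulation of a staircase of this kind, under assumed parameters (a hypothetical observer with a "true" threshold of 5 and Gaussian trial-to-trial noise); here a simple one-up/one-down rule is used, which is one common variant, so that the tested levels hover around the threshold.

    # Simulated staircase with a hypothetical observer (threshold 5, noisy responses).
    import random

    TRUE_THRESHOLD = 5.0
    NOISE_SD = 1.0

    def observer_says_yes(intensity):
        # Simulated observer: "yes" if the noisy internal response exceeds the threshold.
        return intensity + random.gauss(0, NOISE_SD) > TRUE_THRESHOLD

    intensity = 1.0  # start at the lowest level
    step = 1.0
    track = []
    for trial in range(30):
        yes = observer_says_yes(intensity)
        track.append((intensity, yes))
        # One-up/one-down rule: step down after a "yes", step up after a "no",
        # so the tested intensities hover around the threshold.
        intensity += -step if yes else step

    # Average the last tested levels as a rough threshold estimate.
    estimate = sum(i for i, _ in track[-10:]) / 10
    print(f"staircase threshold estimate: {estimate:.2f}")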
Why would we do all this in the first place? The reason is people: people are unreliable and can do strange things, which is a problem in experiments. People might not behave as perfect, honest and objective observers. We need a way to get a grip on this, and this is where signal detection theory (SDT) comes in. The question really is: how do the yes/no responses shown in the table come about? There are two factors that might contribute. The first is sensitivity, i.e. how well people are able to sense the signal in random background noise. The second is that they may have decision strategies (bias). We need a way to tease these apart.
Neural noise: the figure shows the presentation of a stimulus across time and across trials, with the spike rate of neurons observed per trial while the stimulus was on. In trial 1 there is first a period in which nothing happens. Then the stimulus comes on and the firing rate of the neuron involved in perceiving that stimulus starts to change; the firing rate increases during the stimulus interval. When the stimulus is off again, the firing rate decreases again. This is how perception might work at a neural level. The point is that we cannot simply say that whenever the firing rate is not 0, there is a percept, because there is also firing when no stimulus is presented. So we need something smarter to work out whether or not a stimulus is there. If you plot the spike rate per trial in a graph, you get a normal distribution: there is a range of firing rates associated with the presence of the stimulus, in this example mostly around 15 spikes per trial, but it is not either 15 or nothing. Sometimes there are more spikes in an interval and sometimes fewer. This neural noise is the first factor that influences the decision to say yes or no; a small simulation of the idea is sketched below.
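A small simulation of this neural noise, using made-up mean rates (these numbers are assumptions for illustration, not values from the lecture): spike counts for the same stimulus vary from trial to trial, and there is spontaneous firing even with no stimulus, so the two count distributions overlap.

    # Illustrative simulation of neural noise: spike counts per trial vary around
    # a mean rate rather than taking one fixed value, even without a stimulus.
    # The mean rates (15 spikes/trial with the stimulus, 8 without) are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 1000

    spikes_present = rng.poisson(lam=15, size=n_trials)  # stimulus on
    spikes_absent = rng.poisson(lam=8, size=n_trials)    # stimulus off (spontaneous firing)

    print(f"stimulus present: mean {spikes_present.mean():.1f}, sd {spikes_present.std():.1f}")
    print(f"stimulus absent:  mean {spikes_absent.mean():.1f}, sd {spikes_absent.std():.1f}")
    # The two spike-count distributions overlap, so no single cut-off separates
    # "stimulus" from "no stimulus" perfectly.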
What happens if we change the intensity of a stimulus? There is an important point about the distributions: a tone of a certain intensity elicits a certain spike-count distribution in response to that particular stimulus. But there are also spikes when no stimulus is presented, the signal-absent distribution. This is very important: there is neural firing even in the absence of a stimulus (shown in red in the graph in the upper left corner). Given these distributions, you might want to be able to detect the stimulus, but not be fooled into thinking there is one when the stimulus is absent. So you might set a criterion value at 15 spikes per trial; if you use this value, you have no cases in which you say 'yes' while the signal was absent and there merely happened to be some firing. You only get situations with the actual stimulus (the green distribution, not the red line). In this scenario, with a relatively low tone intensity, there are not many of those cases. What happens if we increase the tone intensity? You can see in the graph that things start to shift. The criterion of 15 spikes per trial stays the same, and excluding all the trials in which the target was absent, there are now more trials in which we can detect the stimulus without the risk of responding to no stimulus.
These distributions lead to the psychometric function. At tone intensity 3 you end up relatively close to 0 in the function, because there are very few yes responses: the two distributions overlap heavily and are largely the same, so if you only accept responses that can come from the target-present distribution, there is only a low chance of a yes response. As the separation between the two distributions increases, we do a bit better and there are more yes responses. If you increase the intensity even further (e.g. to 7), the distributions separate even more, leading to a situation in which you can detect half of the trials in which there is a stimulus. At intensity 9, even more yes responses result. In this way we get the psychometric function: the overlap between the normal distributions of signal absent and signal present is what makes the psychometric function smooth. This is mostly about sensitivity: there is a pair of distributions (signal absent and signal present), and the separation between the two determines how sensitive we are.
But we can also vary our criterion value. We took a criterion of 15 to make sure that no signal-absent cases end up among the 'yes' answers. You can shift this criterion, and this happens a lot in real life. E.g. when screening suitcases at airports, you do not want to miss any dangerous cases, so you may accept a few false alarms just to make sure that you never miss a case in which there is a valid alarm. So this criterion value can shift.
In order to separate these two things, you can distinguish the decision-making bias (the criterion; b) from your sensitivity (the separation between the distributions). These two can be separated by using catch trials: there are signal-present and signal-absent trials and two responses (yes/no). If a signal is present and the participant responds 'yes', this is a hit (true positive). If you say 'no' even though a signal was present, this is a miss (false negative). If there is no signal and you respond 'yes', this is a false alarm (false positive). If you respond 'no' in that last scenario, this is a correct rejection (true negative). This breakdown of responses gives us the means to separate sensitivity from decision-making bias; a sketch of the computation follows below.
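A minimal sketch of that computation, using made-up trial counts and the standard SDT estimates (d' from the z-transformed hit and false-alarm rates, and the bias measure usually written c; the lecture labels bias b, so these symbols are an assumption):

    # Separating sensitivity (d') from response bias using hit and false-alarm
    # rates from signal-present and catch (signal-absent) trials.
    # The trial counts below are made up for illustration.
    from scipy.stats import norm

    hits, misses = 40, 10               # signal present: "yes" vs "no"
    false_alarms, correct_rej = 10, 40  # signal absent:  "yes" vs "no"

    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rej)

    # d' = distance between the signal-absent and signal-present distributions
    # in z units; c = placement of the decision criterion (0 = unbiased).
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    print(f"hit rate = {hit_rate:.2f}, false alarm rate = {fa_rate:.2f}")
    print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")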
In the left graph, the criterion serves as the cut-off value. This means that all the trials in the signal-present distribution that fall on the right side of the criterion are hits. There are also a few trials in this distribution in which there was a stimulus but the response falls below the criterion value; those are missed. The right graph shows the signal-absent distribution. Even with this criterion there is a fair number of false alarms (all cases to the right of the criterion), but fortunately there are also a lot of correct rejections.
A very good way to visualize this sort of behaviour in these experiments is a receiver operating characteristic (ROC) plot; it is a very useful way to think about these things in terms of your criterion (bias) and your sensitivity.
You see the distributions on the left; the layout of these distributions does not change (no change in your sensitivity or in the intensity). There are just two cases: a tone/light-absent condition with a certain distribution of spike rates, and a tone-present condition with a slightly different distribution of spikes per trial. These distributions do not change across panels A, B and C; the only thing that changes is the decision criterion. In A we consider anything above 16 spikes to be 'tone present', in B anything above 19, and in C anything above 25 (here you will have very few false alarms). If you then plot this, with false alarm rates on the horizontal axis and hit rates on the vertical axis, you can visualize the effect.
Decision criterion = 16 spikes/trial: the false alarm rate of .42 is the red patch in the graph. You get a hit rate of .9, which means that a lot of the tone-present distribution is covered. If you shift the criterion upwards, more of the distribution falls below the cut-off, so the hit rate is lower, but the false alarm rate is lower as well.
We can similarly plot someone's sensitivity on the ROC plot. As the stimulus intensity increases, the separation between the distributions increases as well. The difference between the two distributions is your sensitivity, d'. In the bottom curve, the stimulus intensity is below the absolute threshold, so you cannot detect it; you get the diagonal line, where the hit rate equals the false alarm rate (e.g. 50% each). There is no performance; it is just guessing. When the stimulus intensity is low, you get a d' of 1. Now, if you are at a certain point on a curve, with a certain decision criterion (which can vary), you get a certain hit rate and a certain false alarm rate. This changes with d': as the separation between the signal-absent and signal-present cases grows, your sensitivity increases, and the curves are pushed towards the upper left corner. A sketch of how such curves arise is given below.
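A minimal sketch of how those curves come about, assuming unit-variance normal distributions for the signal-absent and signal-present cases (the standard equal-variance SDT assumption, not something stated in the lecture): hold d' fixed, sweep the criterion, and record the (false alarm rate, hit rate) pairs.

    # How an ROC curve arises: fix d', sweep the decision criterion,
    # and record the resulting (false-alarm rate, hit rate) pairs.
    import numpy as np
    from scipy.stats import norm

    def roc_points(d_prime, criteria):
        # Signal-absent distribution ~ N(0, 1); signal-present ~ N(d', 1).
        fa_rates = 1 - norm.cdf(criteria, loc=0.0)       # P("yes" | signal absent)
        hit_rates = 1 - norm.cdf(criteria, loc=d_prime)  # P("yes" | signal present)
        return fa_rates, hit_rates

    criteria = np.linspace(-2, 4, 7)
    for d in (0.0, 1.0, 2.0):
        fa, hit = roc_points(d, criteria)
        pairs = ", ".join(f"({f:.2f}, {h:.2f})" for f, h in zip(fa, hit))
        print(f"d' = {d}: {pairs}")
    # d' = 0 gives the diagonal (hit rate = false alarm rate, pure guessing);
    # larger d' pushes the curve towards the upper-left corner, independent of
    # where the criterion happens to sit.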
You can shift the criterion (b) around on such a curve, but that does not change your sensitivity, because your sensitivity is determined by d' (the separation between the distributions). The ROC plot thus captures both elements in a single figure.