NEURONAL NETWORKS AND BEHAVIOR – SUMMARY LECTURES
LECTURE 1.1: INTRODUCTION & ORGANIZATION OF THE BRAIN
-> Final grade is the exam (80%) and the practical report (20%)
Purves 5th edition: Chapters 8, 9 (p189-205), 11, 12, 13, 15, 16, 17, 20, 21, 26, 29, 31 -> look at Purves
animations https://learninglink.oup.com/access/neuroscience-sixth-edition-student-resources#tag_animations
Purves Chapter 9 (p189-205)
This course is about how our system interacts with the world -> via sensory systems. Everything we know about the outside world comes through our senses. The classic five are touch, hearing, taste, vision and smell, but we have many more. Other animals have completely different ways of using their senses to understand the world; every animal constructs a view of the world through its brain and senses, suited to its particular way of life (example shark: hearing, smell, touch, perception of electrical activity, taste and blurry vision). It has always been a question whether we can trust our senses (it is easy to miss something you are not looking for). This is because not only the sensory system itself determines what we perceive; attention and the assumptions our sensory system makes about reality also play an important role. This leads to missing things. Our senses don't show us everything (example video: 21 changes going unnoticed).
The brain is a prediction machine: it is, in a sense, playing movies in our head of what we think is out there and what will happen next. The sensory cortices and the frontal and parietal lobes make predictions of what is going to happen. Thus, our brain creates predictions, expectations and interpretations of the world around us. Our brain gets only a limited view of the reality outside us and largely constructs the image we have of the outside world. But we are not just our brain: for all senses we need our body -> our nervous system is interconnected with our body. In this way the brain is embedded in the body. Besides the five senses mentioned above we have many more:
• Sight
• Taste
• Touch
• Smell
• Hearing
• Pressure (touch sense)
• Itch (touch sense)
• Thermoception
• Proprioception (how our body (joints, limbs) is positioned relative to itself)
• Tension sensors
• Nociception (sense of pain)
• Equilibrioception (sense of balance)
• Stretch receptors
• Chemoreceptors (sense the chemical environment of our body)
• Thirst
• Hunger
• Magnetoception
• Time
Course aims (biological basis of perception)
• Exploring the biological basis of perception, motor and behavioural control, cognitive and emotional
processes. -> These three parts will be studied at different levels of organization.
1. Perception: receptor neurons, the anatomical pathways and representation in the brain
Neuronal mechanisms of the sensory system -> not only the organization of this system in the brain, but also how receptors convert stimuli into electrochemical signals and how these signals are transported along the anatomical pathway. Finally, information about the representation of perception in the brain.
2. Movement: how does the brain react to the world?
In movement we study how lower motor neurons initiate muscle contractions, how upper motor neurons plan
and initiate movement and how filtering and modulating loops (basal ganglia and cerebellum) function.
3. Cognition and emotion: what happens in between sensing and reacting?
Study what happens in between sensing (perception) and reacting (movement) -> attention, cognition,
memory and emotion.
How does the brain work?
The brain consists of many different parts, each with its own functionality. These brain parts consist of neurons that integrate information coming from other neurons. Neurons create electrical signals which are transported along axons; when a signal arrives at the pre-synapse, a neurotransmitter (NT) can be released onto the post-synapse. We have >86 billion neurons in the human brain, and one neuron can have 1,000-10,000 synapses. Taken together, this leads to the enormous number of 10^14-10^15 synapses. Neurons communicate through synapses -> this happens via so-called synaptic transmission. The axon makes contact with receptors on the post-synaptic neuron. Electrical signals are translated into chemical signals via NT release upon the opening of voltage-gated Ca2+ channels, and back into electrical signals when the NT binds to the post-synaptic receptor. The effect of this communication depends on the type of NT and the type of receptor (excitatory or inhibitory: there are excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs)). If there is enough excitation, an action potential (AP) arises (all-or-nothing).
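The summation of EPSPs and IPSPs against a threshold can be sketched in code. This is a toy illustration, not a biophysical model; the resting potential, threshold and PSP amplitudes are invented round numbers chosen only to show the all-or-nothing principle.

```python
# Toy sketch: summing EPSPs and IPSPs and firing an all-or-nothing
# action potential when the membrane potential crosses threshold.
# All numbers are illustrative, not measured values.

RESTING_MV = -70.0    # assumed resting membrane potential (mV)
THRESHOLD_MV = -55.0  # assumed spike threshold (mV)

def fires_action_potential(psps_mv):
    """psps_mv: list of postsynaptic potentials in mV
    (positive = EPSP, negative = IPSP). Returns True if the
    summed potential reaches threshold (all-or-nothing)."""
    membrane_mv = RESTING_MV + sum(psps_mv)
    return membrane_mv >= THRESHOLD_MV

# Three +6 mV EPSPs reach threshold; adding a -5 mV IPSP prevents the AP:
print(fires_action_potential([6, 6, 6]))      # -52 mV -> True
print(fires_action_potential([6, 6, 6, -5]))  # -57 mV -> False
```

The key point the sketch captures is that the output is binary: the summed potential either reaches threshold or nothing is transmitted at all.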
Summary synaptic transmission:
LECTURE 1.2: GENERAL PLAN OF SENSORY SYSTEMS
Purves Chapter 9 (p189-205)
System of touch (as an example of a sensory system)
If you wanted to build a sensing machine, what would you need? -> a sensor which receives information and translates it into the language of the machine (for example temperature into electrical signals); you also need a transporter which carries the information somewhere for processing. Finally, the machine needs to integrate and process this information (compare the measured temperature to the preferred temperature) and give an output.
The sensors are the sensory receptors/axons, which translate the energy of the stimulus into electrical signals. Different receptors translate energy into electrical signals in different ways. Touch receptors are channels in your skin which physically sense the stretch of the skin (= touch). Upon touch, ions flow in through the channels and an electrical signal is generated (a stream of positively charged ions). In the retina there are different receptors which react to light (photons); these receptors physically change their shape, after which electrical signaling can occur. The transporters are axons which carry the signal to a series of relay nuclei. The cell bodies of the sensory neurons are not in the skin but in the dorsal root ganglia; these ganglion cells have very long axons reaching into the skin and the spinal cord. This way, they can react to electrochemical changes in your skin. Integration happens via interneurons and local circuitry in nuclei which process the signals from the skin.
1. Reception – sensory receptors translate the energy of the stimulus into electrical signals (modality, location,
intensity and timing).
1.1 Modality means the type of stimulus (connected to the type of energy). A non-sensory neuron (any other type of neuron in your brain) receives information from other neurons (via their axons). Such a cell generates EPSPs and/or IPSPs and sums them to determine whether or not it generates APs. Sensory neurons are different, because their input does not come from the axons of other cells: sensory receptors are activated by the energy of the stimulus itself. Each sensory system has receptors which react only to a certain type of stimulus. For example, vision uses rods and cones in the retina, which are filled with sensory receptor proteins that react only to a certain type of light. Sensory neurons of the skin do not have their cell bodies in the skin, only axons that react to the changes that happen there (detected by receptors located in the membranes of those axon endings). Likewise, light/dark causes a change in membrane potential, after which NTs can be released.
Modality of receptors: types of sensory receptors:
- Mechanical -> react to mechanical energy (touch, proprioception, hearing and balance). Physical stretch or tension on the receptor deforms the membrane and opens ion channels. Corpuscles are axon endings wrapped around structures which can sense the stretch of the skin.
Merkel cells: responses follow the shape of the stimulus (for example braille letters) very closely -> the most accurate stimulus measurement for understanding the shape and form of the stimulus.
Meissner corpuscles: less accurate but more sensitive, thus more action potentials.
Ruffini corpuscles: not very sensitive.
Pacinian corpuscles: unlike Merkel cells they cannot resolve the shape and fine details of an object, but they do detect very small (rapid) changes in the stimulus.
Proprioceptors are receptors which sense the position of the muscles and joints of the body. They are not located in the skin but in the muscles and tendons. These are axons wrapped around certain structures: if the muscle is stretched, the axons sense this and send a signal to the brain. The same structures are found in tendons, where they sense the position of our joints. This way, our brain knows the position of our muscles and whether we are moving or not.
- Chemical (pain, itch, smell, taste) -> binding of a chemical to the receptor. Chemical receptors have 7 transmembrane domains; when they bind a chemical molecule they change conformation and start an intracellular cascade. What we taste are actually chemicals binding to these receptors.
- Photoreceptors (vision: photoreceptors in retina) -> change in conformation of a photosensitive protein.
- Thermal (only react to temperature). These receptors are not as well studied as others.
In general, these 4 types of sensory receptors react to different types of stimuli, but activation of any receptor results in a change in membrane potential.
1.2. Location of receptors. The location of the stimulus is also really important: what is the position of the stimulus relative to the body? Our body is mapped into dermatomes; the part of the body where a stimulus arrives determines which part of the spinal cord it enters -> and then to which specific part of the brain it is transmitted. This is called the topographical arrangement of neuronal receptive fields. Each part of the body has its own receptive fields -> each can only receive information from a certain location.
In the skin this takes the form of dermatomes; in the retina it is retinotopy (each part of the retina is sensitive to a certain part of the visual field), in the cochlea tonotopy (each part of the cochlea is sensitive to certain sound frequencies), and in the gustatory cortex gustotopy (each taste is represented in a different part of the cortex).
The topographical arrangement of neuronal receptive fields continues onto the cortex: information about each part of the body is processed in a certain part of the brain. Besides, the size of the receptive fields is really important: spatial resolution is determined by the size of the receptive field (= the size of, for example, the patch of skin that feeds a given neuron -> this differs along our body) and by the density of receptors. For example, your fingertips have really small receptive fields (sensitive to small changes in the shape of an object), and the fovea in the retina also has really small receptive fields (fine details of the image). In contrast, your back and stomach have bigger receptive fields which pool information from a wide area. Thus, different neurons have different receptive fields.
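The link between receptive-field size and spatial resolution can be sketched as a tiny two-point discrimination model. This is a deliberately crude assumption-laden sketch: receptive fields are modeled as non-overlapping tiles, and the field sizes (2 mm for a fingertip, 40 mm for the back) are illustrative values, not measurements.

```python
# Sketch: two touches can be told apart only if they fall in different
# receptive fields. Fields are modeled as adjacent tiles of equal size
# (a simplification; real fields overlap and vary in size).

def distinguishable(pos_a_mm, pos_b_mm, field_size_mm):
    """Return True if two point stimuli (positions along the skin, in mm)
    land in different receptive fields of the given size."""
    return int(pos_a_mm // field_size_mm) != int(pos_b_mm // field_size_mm)

# Fingertip: small fields -> two points 3 mm apart are resolved.
print(distinguishable(0.5, 3.5, field_size_mm=2))   # True
# Back: large fields -> the same two points merge into one percept.
print(distinguishable(0.5, 3.5, field_size_mm=40))  # False
```

The same pair of stimuli is resolved or not purely depending on the field size, which is the point the lecture makes about fingertips versus the back.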
1.3. The intensity of the stimulus is also important: if the strength of the stimulus increases, our sensory system needs to relay this increase. The sensory threshold is determined by the sensitivity of the receptors -> how much stimulation is needed to trigger APs? A change in the energy of the stimulus produces a change in membrane potential, which is translated into a digital code of APs. For example, if the stimulus increases, the membrane potential increases, which can ultimately result in an AP. In contrast, if the change in membrane potential stays below the threshold, the information cannot be transported to the brain -> you won't be aware of the touch.
-> Thus, all sensory systems have thresholds for their stimuli.
1.4. The timing of the stimulus. How does the stimulus change in time? There are several ways to detect this. When a stimulus increases and then persists, a neuron can start firing action potentials and continue firing (the frequency slowly diminishing over time). These neurons have slowly adapting responses: changes in the stimulus are coded in firing frequency -> they react to the stimulus and adapt quite slowly (reliable). However, most of our neurons have rapidly adapting responses, which react to the change in the stimulus -> they only detect the beginning and the end of the stimulus (the start and end of a stimulus are the most important). This leads to adaptation -> a constant stimulus fades from consciousness (during the day you don't constantly feel your clothing, but if something changes, the rapidly adapting neurons will let us know). Sensory systems are built to detect contrasts and motion -> all types of changes.
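The two response types can be contrasted in a minimal sketch, assuming a binary stimulus over discrete time steps (the real slowly adapting response would also slowly decline in rate, which is omitted here for simplicity).

```python
# Sketch: slowly vs rapidly adapting responses to a stimulus that
# switches on for a while and then off again. Purely illustrative.

def slowly_adapting(stimulus):
    """Fires whenever the stimulus is on (sustained response)."""
    return [1 if s else 0 for s in stimulus]

def rapidly_adapting(stimulus):
    """Fires only when the stimulus changes: at onset and offset."""
    padded = [0] + stimulus
    return [1 if padded[i + 1] != padded[i] else 0
            for i in range(len(stimulus))]

stim = [0, 1, 1, 1, 1, 0, 0]          # stimulus on for four time steps
print(slowly_adapting(stim))   # [0, 1, 1, 1, 1, 0, 0]
print(rapidly_adapting(stim))  # [0, 1, 0, 0, 0, 1, 0]  -> onset and offset only
```

The rapidly adapting output is exactly why a constant stimulus fades from consciousness: once the stimulus stops changing, these neurons fall silent.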
2. Transport – axons transport the signal to the series of relay nuclei.
The transport of sensory information happens via parallel processing. All sensory systems have complicated pathways. In the pictures in the book you can follow the sensory pathways from the receptors to the brain. When you study the pathways you only need to remember the parts discussed in the lecture (thus not all anatomical structures).
Remember the anatomical directions: dorsal (towards the back; upper part of the brain) and ventral (towards the stomach; lower part of the brain); rostral (front) and caudal (tail); lateral (to the side) and medial (to the middle of the brain).
The sensory pathway of touch starts at the dorsal root ganglion, enters the dorsal horn, then runs through the medulla from caudal to rostral and on to the thalamus and cortex.
There are always multiple parallel pathways, which increases the speed of processing. For touch, there are different types of cells which process different types of stimuli. The topographical representation is maintained in these pathways: dermatomes are processed in different parts of the spinal cord, and mechanoreceptor input from the upper and lower body is kept separate. Axons from the lower limbs run more medially in the dorsal column, and those from the upper limbs more laterally. Another feature is that sensory information crosses over to the opposite hemisphere (decussation). Besides, information does not only flow from the body to the brain: there are also feedback connections/descending projections that modulate the information. This has to do with the predictions we make about the outside world.
3. Processing – interneurons and local circuitry in nuclei process the signal in the brain.
All sensory information coming from our body passes through the thalamus before reaching the cortex. The thalamus is the major relay station for sensory and motor information. The thalamus has different nuclei; a Y-shaped division splits it into the anterior, lateral and medial thalamus. It projects to the middle layers of the cortex. Information about touch goes through the ventral posterior medial nucleus (VPM) and the ventral posterior lateral nucleus (VPL). The LGN (vision) and MGN (hearing) are also really important for the sensory systems. The information finally reaches the cortex, where the representation of the world is made. It first reaches the primary sensory cortices (a small proportion of the whole brain: the primary sensory and motor areas, including the visual and auditory cortices), and from there the representation of the world is built. The sensory cortices (projection areas) make up less than ¼ of the human cortex; the rest is involved in higher-order functions like language, reasoning, moral thought etc.
The cortex is organized into columns (one column processes, for example, information from one area of your skin -> it processes one feature of each stimulus), layers (each column has a stereotypical layering of 6 layers, each with a different function) and areas. Most of the information from the thalamus ends up in layer 4. The information then spreads through the cortex: layers 2 and 3 transport it to other cortical regions, and layers 5 and 6 project to the basal ganglia or back to the thalamus.
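The laminar routing described above can be written down as a plain lookup table, just for orientation; this is a simplification of the real connectivity, and the exact target labels are paraphrased from the text.

```python
# Sketch of the laminar information flow: thalamic input arrives mainly
# in layer 4, layers 2/3 send output to other cortical regions, and
# layers 5/6 project to the basal ganglia and back to the thalamus.

LAYER_TARGETS = {
    4: ["layers 2/3 (within the column)"],   # main thalamic input layer
    2: ["other cortical regions"],
    3: ["other cortical regions"],
    5: ["basal ganglia"],
    6: ["thalamus (feedback)"],
}

def targets_of(layer):
    """Return the main projection targets of a cortical layer (empty
    list for layers not covered in the lecture)."""
    return LAYER_TARGETS.get(layer, [])

print(targets_of(4))  # ['layers 2/3 (within the column)']
print(targets_of(6))  # ['thalamus (feedback)']
```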
The topographical representation of the touch system is maintained in the sensory cortex. This is true not only for the different parts of the body but also for functionally distinct columns: each digit of the monkey hand has its own representation, and for each digit (e.g. D4) two types of information are kept separate (one coming from slowly and one from rapidly adapting receptors). Different receptors send their information to different parts of the cortex -> different stimuli coming from the same finger are processed in different parts of the cortex. You can see the functionally distinct columns of the cortex in the book -> in vision, for example, information about colors is kept separate.
Summary sensory systems (with the system of touch as an example):
1. Reception - Sensory receptors translate the energy of the stimulus into electrical signals (modality, location,
intensity and timing).
2. Transport - Axons transport the signal to the series of relay nuclei (parallel processing, topographical
representation, cross-over, feedback connections).
3. Processing - Interneurons and local circuitry in nuclei process the signal (information comes from receptors
-> go to relay nuclei -> first goes through the thalamus -> then to the cortex).
LECTURE 2: VISUAL SYSTEMS 1
Purves Chapter 11
1. Reception – sensory receptors translate the energy of the stimulus into electrical signals (modality, location,
intensity and timing).
Vision is really important for humans and a well-
studied sensory system. The visual system consists
of the eye which focuses the light (photons) on the
retina. This light has to be converted to neural
impulses in the retina (which is part of the nervous
system). The optic tract processes the signals and
transports them to the lateral geniculate nucleus
(LGN) of the thalamus. Optic radiations are axons
from the thalamus which project to the primary
visual cortex (occipital lobe).
You can compare our eyes to a camera. The cornea, at the front of the eye, first collects the light and directs it through the pupil; the lens then focuses the light on the retina, where the image is inverted. Our visual system is sensitive to changes in light intensity -> moving features can be observed.
Visual perception has often been compared to the operation of a camera. Like the lens of a camera, the lens of the eye focuses an inverted
image onto the retina. This analogy breaks down rapidly, however, because it does not capture what the visual system really does, which is
to create a three-dimensional perception of the world that is different from the two-dimensional images projected onto the retina. The
analogy also fails to reflect the cognitive function of the visual system, such as our ability to perceive an object as the same under strikingly
different visual conditions, conditions that cause the image on the retina to vary widely. Light waves from an object (such as a tree) enter
the eye first through the cornea, which is the clear dome at the front of the eye. It is like a window that allows light to enter the eye. The
light then progresses through the pupil, the circular opening in the center of the colored iris. Fluctuations in the intensity of incoming light
change the size of the eye’s pupil. As the light entering the eye becomes brighter, the pupil will constrict (get smaller), due to the pupillary
light response. As the entering light becomes dimmer, the pupil will dilate (get larger). Initially, the light waves are bent or converged first
by the cornea, and then further by the crystalline lens (located immediately behind the iris and the pupil), to a nodal point (N) located
immediately behind the back surface of the lens. At that point, the image becomes reversed (turned backwards) and inverted (turned
upside-down). The light continues through the vitreous humor, the clear gel that makes up about 80% of the eye’s volume, and then, ideally,
back to a clear focus on the retina, behind the vitreous. The small central area of the retina is the macula, which provides the best vision of
any location in the retina. If the eye is considered to be a type of camera (albeit, an extremely complex one), the retina is equivalent to the
film inside of the camera, registering the tiny photons of light interacting with it.
Vision is not only focusing light on the retina and creating an image; the visual system also transforms the image and builds a representation in our brain. This is the core idea of Gestalt psychology (Max Wertheimer) -> what we see represents not just the properties of objects but, more importantly, the organization of sensations by the brain. The brain makes certain assumptions about what is to be seen in the world, expectations that seem to derive in part from experience and in part from the built-in neural wiring for vision. One of the assumptions we make about a visual image is that there are interrelationships. Vision is based on these interrelationships -> our brain groups the image based on them (similarity of color -> rows, proximity -> columns, etc.).
What we recognize in a melody is not simply the sequence of particular notes but their interrelationship. A melody played in different keys
will still be recognized as the same melody because the relationship of the notes remains the same. Likewise, we are able to recognize
different images under a variety of visual conditions, including differences in illumination, because the relationships between the
components of the image are maintained by the brain.
Besides, our brain automatically separates object from background. This figure-ground dichotomy illustrates one principle of visual perception, a winner-take-all perceptual strategy: only one part of the image can be selected as the focus of attention; the rest becomes, at least momentarily, background. In addition, when seeing patterns you start grouping them; one assumption used in this grouping is occlusion: when one object appears to cover another, we assume the occluded object is in the background and construct our visual image accordingly. The integration of distinct objects into a coherent visual scene is aided by another central fact of vision: closer structures cover those that are more distant. We "see" a pattern in otherwise unrelated shapes only when the shapes are seen as fragments of an occluded background. Without the assumption of occlusion, our brain would not have enough information to infer a relationship between the assorted shapes. Assumptions about visual objects can also lead us to perceive two lines of equal length as unequal, because the brain uses shape as an indicator of size. Thus, we also make assumptions about the size of visual objects. The spatial relationships of objects help us interpret an image -> we judge the size of an object by comparing it to its immediate surroundings, so the perceived size of an object depends on the other objects in the visual field. Thus, it is not only the absolute image that we see: our brain sees patterns. The visual image we see is already enhanced and adapted (by the neuronal circuitry in the retina, thalamus and cortex). Our visual system is constantly making assumptions about the outside world based on experience and built-in neural wiring (the modulating influence of descending projections).
The retina has a spot which processes most of the incoming information = the fovea (in the middle of the macula). The spot where the optic nerve leaves the retina, and where you cannot see anything, = the blind spot.