Summary MLE Course: Integration Module Exam (Grade: 8.5)
INTEGRATION LECTURES

I1 – Semantic Memory

How is knowledge (meaning) represented?
The idea of representing meaning like an encyclopedia is not a good model of semantic
memory in one respect: it is not the case that all knowledge is stored in the same place in the brain.
On the other hand, the analogy does capture the classification system in semantic
memory: as in a library, information is organized in a hierarchical and systematic way (e.g., all
information about topic x is found in the same place). Overall, the analogy is both a good and a bad
model and, as it turns out, semantic memory is very complex.




Until about 1960, memory psychology paid hardly any attention to semantic memory. Interest
emerged from the realization that (semantic) memory plays a role in everyday life. At the
same time, computers were advancing, leading computer engineers to wonder how to implement
human-like knowledge in computers: the field of AI was born out of this need. Around the
same period, psychologists proposed that semantic memory is organized as a
network of connections between different concepts.

AI (Artificial Intelligence)
AI has had its ups and downs: currently, it is a hot field. There are two basic – and largely
refuted – assumptions regarding AI:

1) If we know how a computer generates knowledge and uses it, then we will know how
memory works (the underlying structure might be the same).

2) If we know how human memory works, then we can make a super-powerful computer.





If we look at how AI represents things, the relationship between entities can be visualized like
this:




This is a network representation of semantic memory by means of rules, where “isa” means “is a
type of” (category membership). For instance, ‘my dog’ is a type of dog and ‘dog’ is a type of animal. Each
circle represents a concept, and the labeled links encode facts: the dog eats meat; my dog chases the Frisbee.

By feeding the AI with many of these rules, you can encode knowledge about the world. This is
one way to represent knowledge: visualizing it by means of a network of rules and
relationships. However, this would normally be in computer language (code).
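The rule network described above can be sketched in a few lines of code. This is an illustration of the idea, not the lecture's own implementation: "isa" links point to the parent category, other labeled links store facts, and properties are inherited by climbing the isa chain.

```python
# Minimal sketch (assumed, illustrative) of the "isa" rule network above:
# concepts are nodes, "isa" links point to the parent category, and other
# labeled links encode facts such as "dog eats meat".

isa = {"my dog": "dog", "dog": "animal"}          # "a type of" links
facts = {
    "dog": {"eats": "meat"},
    "my dog": {"chases": "Frisbee"},
}

def lookup(concept, relation):
    """Walk up the isa chain until a rule for `relation` is found."""
    while concept is not None:
        if relation in facts.get(concept, {}):
            return facts[concept][relation]
        concept = isa.get(concept)                # climb to the parent category
    return None

print(lookup("my dog", "eats"))    # inherited from "dog" -> meat
print(lookup("my dog", "chases"))  # stored directly -> Frisbee
```

Feeding in more rules of this kind is exactly the "encode knowledge about the world" step the text describes: each new fact is just another entry in the dictionaries.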

The roots of AI
AI was based on the idea that human thinking (knowledge) mostly relies on rules rather than
fast intuition: having many rules for many instances, just as chess players do. In fact, de Groot
(a psychology professor at the UvA) explored this hypothesis by studying how chess masters
remember chess positions. When faced with positions from real games, chess
masters were much better than students. However, when faced with impossible positions
(e.g., ones violating the rules of chess), the masters lost their advantage. This led him to conclude
that grandmasters are so good because they have a huge store of chess knowledge (e.g., from earlier
games played, watched, or studied). Thus, they built up huge chess databases over time. De
Groot’s work inspired Herbert Simon, a co-founder of AI: the idea was to feed an “expert”
system with different sets of rules and situations, so that whenever a new situation occurred, the
rule-based expert system could answer based on similar known situations in its database.
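The expert-system idea above can be made concrete in a toy sketch. Everything here is hypothetical (the situations, features, and responses are invented for illustration): known situations are stored with their responses, and a new situation is answered by the most similar stored one, measured by feature overlap.

```python
# Hypothetical toy version of the rule-based "expert system" idea: store known
# situations (as feature sets) with their responses, then answer a new
# situation by retrieving the most similar stored one.

knowledge_base = [
    ({"king exposed", "queen active"}, "attack the king"),
    ({"pawn majority", "endgame"}, "push the passed pawn"),
]

def respond(situation):
    # Pick the stored situation sharing the most features with the input.
    best = max(knowledge_base, key=lambda entry: len(entry[0] & situation))
    return best[1]

print(respond({"endgame", "pawn majority", "rooks traded"}))
# -> push the passed pawn
```

This mirrors de Groot's conclusion: the "expertise" lives entirely in the size and coverage of the stored database, not in any deep reasoning step.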





Using semantic knowledge

Sentence verification: Hierarchical Models & Network Models

• Hierarchical: Collins & Quillian Model
At the top of the hierarchy are the more general categories; each level down adds more
specific properties and characteristics.




This way, you can verify sentences such as “a penguin has wings” or answer questions
such as “can a canary sing?”. Reaction times measure how long it takes you to find the
level in the model where the property is stored: for the first sentence, you have to go from the
specific level (penguin) up to the generic level (bird), whereas for the second question the
opposite holds (the property is stored at the specific “canary” level itself). However, this
model is wrong: it makes incorrect predictions in many cases:

  • It doesn’t explain the typicality effect: “a robin is a bird” is verified faster
    than sentences about less typical birds, because the robin is a more typical bird.
  • Frequency of association matters more than distance in the hierarchy. Although
    “a cat is a mammal” is true, “a cat is an animal” is responded to faster, because
    cat–animal is a more frequent association than cat–mammal (even though “animal”
    is farther up the hierarchy).
  • It doesn’t explain how NO-answers are generated: when the answer is not found
    within the tree, the model makes no reaction-time prediction for rejecting the sentence.
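The hierarchical-distance prediction can be illustrated with a toy model (an assumed sketch, not the original implementation; the hierarchy and properties are made up): predicted verification time grows with the number of "isa" levels climbed before the property is found.

```python
# Toy illustration of the Collins & Quillian prediction: the more "isa" links
# between a concept and the level where a property is stored, the longer the
# predicted reaction time. Hierarchy and properties are invented for the example.

parent = {"canary": "bird", "penguin": "bird", "bird": "animal"}
properties = {
    "animal": {"breathes"},
    "bird": {"has wings"},
    "canary": {"can sing"},
    "penguin": {"cannot fly"},
}

def verification_steps(concept, prop):
    """Return the number of levels climbed to find `prop`, or None if absent."""
    steps = 0
    while concept is not None:
        if prop in properties.get(concept, set()):
            return steps
        concept = parent.get(concept)             # climb one level up
        steps += 1
    return None

print(verification_steps("canary", "can sing"))   # 0: stored at canary itself
print(verification_steps("penguin", "has wings")) # 1: stored one level up, at bird
print(verification_steps("canary", "breathes"))   # 2: stored two levels up, at animal
```

Note how the criticisms above show up directly in the sketch: it predicts equal times for all birds (no typicality effect), and a false sentence like “a canary cannot fly” simply returns `None` rather than any reaction time.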




 • Network: Spreading Activation Model
Items (e.g., properties, things) are represented as nodes in an interconnected network.
This model accounts for semantic priming: in a lexical decision task, you are faster at
identifying a target word when it is preceded by a semantically related word.
However, this model still doesn’t explain reaction times in many important cases.
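The mechanism behind the priming account can be sketched as follows (an assumed, simplified model; the network and the decay parameter are invented for illustration): activating one node sends decaying activation to its neighbors, so a related target is already pre-activated and needs less additional evidence, i.e., a faster lexical decision.

```python
# Sketch of spreading activation (simplified): activation starts at one node
# and spreads to neighbors with a decay factor for a fixed number of steps.
# A primed (pre-activated) target corresponds to a faster response.

network = {
    "bird": ["robin", "wings", "canary"],
    "robin": ["bird", "red"],
    "arm": ["hand", "leg"],
}

def spread(start, decay=0.5, steps=2):
    activation = {start: 1.0}
    frontier = {start}
    for _ in range(steps):
        nxt = set()
        for node in frontier:
            for neigh in network.get(node, []):
                gain = activation[node] * decay   # activation weakens as it spreads
                if gain > activation.get(neigh, 0.0):
                    activation[neigh] = gain
                    nxt.add(neigh)
        frontier = nxt
    return activation

act = spread("bird")
print(act.get("robin", 0.0))  # 0.5: related target is pre-activated by the prime
print(act.get("arm", 0.0))    # 0.0: unrelated target receives no activation
```

In this sketch, presenting the prime “bird” leaves “robin” partially activated while “arm” is untouched, which is exactly the related-vs-unrelated reaction-time difference described below.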




Priming
• Associative
“cats and dogs” may prime “weather” because of their association in the
common expression “it’s raining cats and dogs”. So, the two are found together
in the real world (although they are not necessarily related in meaning).

• Semantic
this type of priming occurs when there is an inherent bond between the words
(e.g., “dog” primes “Labrador”, which is a type of dog).

The lexical decision task is often used to test semantic priming: participants are
sequentially presented with the prime (briefly) and then the target word. They are faster to
respond to the target word (e.g., robin) if the preceding prime is semantically related
(e.g., bird). In contrast, response times are slower if the prime is unrelated to the target,
as in arm–robin.

When people expect to be primed with a semantically
related word but are given an unrelated one,
reaction times slow considerably (depending on the SOA,
the stimulus onset asynchrony between prime and target).



