This summary contains all the material for the machine learning course at UvT. All notebooks, including examples, have been added. All lectures have been attended, summarized, and elaborated.
When you go to your mail, spam is automatically put into a separate spam folder. What the system does
is, for example: if (A or B or C) and not D, then spam.
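A hand-written rule like this can be sketched as a simple function. The flags A-D are hypothetical signals (e.g. whether certain suspicious words appear in the email); the names below are invented for illustration:

```python
def is_spam(has_a, has_b, has_c, has_d):
    """Hand-written spam rule: (A or B or C) and not D.

    The flags are hypothetical signals, e.g. whether certain
    suspicious words appear in the email.
    """
    return (has_a or has_b or has_c) and not has_d

print(is_spam(True, False, False, False))  # True: one signal present, no D
print(is_spam(True, True, False, True))    # False: D overrides the rule
```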
Machine learning is the study of computer algorithms that improve automatically through experience →
involves becoming better at a task T based on some experience E, with respect to some performance
measure P. Learning in ML is learning based on a performance measure.
How does the ML process work?
1. Find examples of spam and non-spam (training set).
2. Come up with a learning algorithm.
3. This learning algorithm infers rules from examples.
4. The rules inferred from this training set can be applied to new and unseen data (emails) →
to understand how your model generalizes.
The purpose of ML is to solve the problem when new instances of it come in. The unseen data needs to be as
close as possible to the real world.
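The four steps above can be sketched with a toy dataset and a trivial threshold learner (all data, names, and the threshold rule here are made up for illustration):

```python
import random

# Step 1: examples of spam/non-spam, here as (suspicious_word_count, is_spam).
data = [(5, True), (4, True), (0, False), (1, False),
        (6, True), (2, False), (3, True), (0, False)]

random.seed(0)
random.shuffle(data)
train, test = data[:6], data[6:]  # hold out unseen data

# Steps 2-3: a trivial "learning algorithm" that infers a threshold rule
# from the training set: the midpoint between the two class averages.
spam_counts = [x for x, y in train if y]
ham_counts = [x for x, y in train if not y]
threshold = (sum(spam_counts) / len(spam_counts)
             + sum(ham_counts) / len(ham_counts)) / 2

# Step 4: apply the inferred rule to unseen data to measure generalization.
accuracy = sum((x > threshold) == y for x, y in test) / len(test)
print(f"threshold={threshold:.2f}, test accuracy={accuracy:.2f}")
```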
Machine learning examples:
• Recognize handwritten numbers and letters.
• Recognize faces in photos.
• Determine whether text expresses positive, negative, or no opinion.
• Guess a person’s age based on a sample of writing.
• Flag suspicious credit card transactions.
• Recommend books and movies to users based on their own and others’ purchase history.
• Recognize and label mentions of people’s or organizations’ names in text.
Different types of learning problems:
1. Regression:
• The response of regression is a real number.
o A person’s age.
o Predict the price of a stock.
o Predict a student’s score on an exam.
2. Binary classification:
• The response is a yes/no answer → whether a condition holds or not.
o Detect spam.
o Predict polarity of product review: positive VS negative (sentiment analysis).
3. Multiclass classification:
• Response is one of a finite set of options.
o Classify newspaper article as: politics, sports, science, technology, health, finance,
etc.
o Detect species based on a photo.
o Detect a movie genre: romance, action, thriller, etc.
4. Multilabel classification:
• Response is a finite set of yes/no answers:
o Assign songs to one or more genres: (rock, pop, metal), (hip-hop, rap), (jazz,
blues), (rock, punk).
5. Ranking:
• Most relevant when you’re searching in, for example, Google → the most interesting pages
come out on top. So ranking is about ordering objects according to relevance.
o Rank web pages in response to user query.
o Predict a student’s preferences among courses in a program.
6. Sequence labeling (relevant in speech recognition).
• The input is a sequence of elements (words). The response is a corresponding sequence of labels.
o Label words in a sentence with their syntactic category (noun, adverb, verb, preposition).
o Label frames in a speech signal with the corresponding phonemes.
7. Autonomous behavior (self-driving cars).
• Input is measurements from sensors (camera, microphone, radar, etc.). Response is
instructions for actuators (steering, accelerator, brake, etc.).
Evaluation.
One of the most important problems for ML is the generalization problem. To see how a method
generalizes, you need some metric or standard → how are you going to understand how well your
model is working?
For regression problems you can use MAE or MSE. MSE is more sensitive to outliers because of squaring.
• MAE = the average absolute difference between the true and predicted value.
• MSE = the average square of the difference between true value and predicted value.
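Both metrics can be written directly from their definitions; the toy values below are made up, and the single outlier (an error of 3) shows why MSE is more sensitive to outliers than MAE:

```python
def mae(y_true, y_pred):
    """Mean absolute error: average of |true - predicted|."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error: average of (true - predicted)^2."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Two perfect predictions and one outlier error of 3.
y_true, y_pred = [1.0, 2.0, 3.0], [1.0, 2.0, 6.0]
print(mae(y_true, y_pred))  # 1.0
print(mse(y_true, y_pred))  # 3.0 (the squared outlier dominates)
```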
Error rate is a metric that compares the number of mistakes to the whole set of examples, so it does not
distinguish the kinds of errors. Example: in the gender binary approach, if you assume that there are two
genders and you want to label people as male/female, these binary conditions are whether something is
there or not. If you want to see how closely your model’s labels match the true data → you can use the
error rate. But this is not an ideal metric, because it does not distinguish false positives from false
negatives. That is why we use different approaches, which require splitting the data into FP/FN/TP/TN.
In the email classification example, we get the following:
• False positive: flagged as spam, but is not spam.
• False negative: not flagged as spam, but is spam.
• False positives are a bigger problem! You don’t want a normal email flagged as spam →
minimize them. So for different problems you have different metrics.
Positive = the condition being present, regardless of whether “positive” carries a good or bad connotation.
Different metrics focus on one kind of mistake:
• Precision: what fraction of flagged emails were real spam?
• Recall: what fraction of real spams were flagged?
The flagged set (predicted spam) consists of true positives and false positives. And the actual spam consists
of true positives and false negatives.
Example of a confusion matrix:
The F-score is another measure, which combines precision and recall. It is the harmonic mean of
precision and recall, a kind of average.
There is also a more generalized version of the f-score → F-beta score. The parameter beta quantifies how
much more we care about recall than precision.
For example, F0.5 is the metric to use if we care half as much about recall as about precision.
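The F-beta score can be computed directly from precision and recall; beta = 1 gives the ordinary harmonic mean (F1), while beta above or below 1 shifts the weight toward recall or precision:

```python
def f_beta(p, r, beta=1.0):
    """F-beta score: weighted harmonic mean of precision p and recall r.

    beta > 1 weights recall more; beta < 1 weights precision more;
    beta = 1 is the ordinary F1 score.
    """
    b2 = beta ** 2
    return (1 + b2) * p * r / (b2 * p + r)

print(f_beta(0.5, 0.5))            # 0.5 (harmonic mean of equal values)
print(f_beta(0.8, 0.4, beta=0.5))  # F0.5 leans toward the higher precision
print(f_beta(0.8, 0.4, beta=2.0))  # F2 leans toward the lower recall
```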
What to do when it comes to multiclass classification?
There are three classes: Spam, Ok, Phish. You can build a similar matrix with the predicted values in the
columns and the actual values in the rows. This distributes your dataset in such a way that each datapoint
falls into exactly one cell. Looking at this matrix, it is easier to see what category each datapoint falls
into. In the example, a 2 counts as a false negative for Spam and a false positive for Ok, and a 4 as a
false negative for Phish and a false positive for Spam (everything on the diagonal gives us the
true positives).
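The per-class TP/FP/FN counts can be read off the label lists with a one-vs-all view of one class at a time (the labels below are invented):

```python
def per_class_counts(y_true, y_pred, cls):
    """One-vs-all counts for a single class: (TP, FP, FN)."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    return tp, fp, fn

y_true = ["spam", "ok", "phish", "spam", "ok"]
y_pred = ["spam", "spam", "phish", "ok", "ok"]
print(per_class_counts(y_true, y_pred, "spam"))  # (1, 1, 1)
```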
There are two approaches you can take when it comes to calculating precision and recall:
• Macro-average: you look at every class by itself with a one-vs-all method → check for the condition
being there or not (against all other classes), calculate the metrics for each class separately with
this one-vs-all approach, and then take the average of those values (so if you have
3 classes, you divide by 3). The problem is that it gives equal weight to each class, regardless of
its size in the dataset.
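A sketch of macro-averaged precision built from one-vs-all counts (labels invented; the guard returns 0.0 for a class that is never predicted, which is one common convention):

```python
def macro_precision(y_true, y_pred, classes):
    """Average of per-class precision, each class weighted equally."""
    total = 0.0
    for cls in classes:
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        # Per-class precision, with 0.0 when the class is never predicted.
        total += tp / (tp + fp) if (tp + fp) else 0.0
    return total / len(classes)

y_true = ["spam", "ok", "phish", "spam"]
y_pred = ["spam", "spam", "phish", "ok"]
# Per-class precisions: spam 0.5, ok 0.0, phish 1.0 → macro average 0.5.
print(macro_precision(y_true, y_pred, ["spam", "ok", "phish"]))  # 0.5
```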