What we'll learn today is the first in-depth discussion of a
learning algorithm, linear regression. The term supervised
learning meant that you were given inputs X, which was a
picture of what's in front of the car, and the algorithm had to
map that to an output Y, which was the steering direction. That
was a regression problem, because the output Y that you want is
a continuous value, as opposed to a classification problem,
where Y takes on a discrete value. We'll talk about
classification next Monday; today is supervised learning for
regression. The job of the learning algorithm is to output a
function that makes predictions about housing prices.
By convention, I'm going to call the function that the algorithm
outputs a hypothesis, and we'll go through how to do that. The
hypothesis takes as input the size of a house and tries to tell
you what it thinks the price should be. Machine learning
sometimes just calls this a linear function, but technically
it's an affine function. In this example we have just one input
feature, X. More generally, you may have multiple input
features, so you have more data, more information about these
houses, such as the number of bedrooms.
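To make this concrete, here is the hypothesis being described, written out under the standard convention that a dummy feature x_0 = 1 absorbs the intercept term (a sketch; the notation is assumed to follow the usual linear regression setup):

$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 = \sum_{j=0}^{n} \theta_j x_j, \qquad x_0 = 1$$

With n = 2, x_1 might be the size of the house and x_2 the number of bedrooms.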
Theta is called the parameters of the learning algorithm. m is
the number of training examples, that is, the number of houses
you have in your training set. X is called the input features,
or sometimes the input attributes, and Y is the output; we
sometimes call this the target variable. I'll use n to denote
the number of features you have for the supervised learning
problem; n is equal to 2 in this example. The learning
algorithm's job is to choose values for the parameters Theta so
that it can output a hypothesis. Sometimes for notational
convenience I just write this as h of x; sometimes I include the
Theta, and they mean the same thing. In linear regression, we
want to minimize the squared difference between what the
hypothesis outputs and the true price of the house. We often put
a one-half there to make the math a little bit simpler later;
we'll talk more about that when we discuss the generalization of
linear regression.
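Written out, the objective just described is the least-squares cost function, where the superscript (i) indexes the m training examples (a sketch consistent with the definitions above):

$$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_\theta\!\left(x^{(i)}\right) - y^{(i)} \right)^2$$

The one-half is there so that the 2 produced by differentiating the square cancels when we take gradients.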
Initializing Theta to the vector of all zeros would be a
reasonable default. We can use an algorithm called gradient
descent to find the value of Theta that minimizes the cost
function J of Theta.
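As a minimal sketch of what that looks like in code (my illustration, not the lecture's implementation; the learning rate alpha, the iteration count, and the example data are all assumptions):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.05, num_iters=20000):
    """Batch gradient descent for linear regression (illustrative sketch).

    X     : (m, n) matrix of input features, one row per house.
    y     : (m,) vector of target values (prices).
    alpha : learning rate -- an assumed value, not from the lecture.
    """
    m = X.shape[0]
    # Prepend a column of ones so theta_0 acts as the intercept (x_0 = 1).
    X = np.hstack([np.ones((m, 1)), X])
    theta = np.zeros(X.shape[1])  # the all-zeros default mentioned above

    for _ in range(num_iters):
        errors = X @ theta - y  # h_theta(x^(i)) - y^(i) for every example
        # Update rule: theta_j := theta_j - alpha * sum_i errors_i * x_j^(i),
        # scaled by 1/m here (a common choice); the 1/2 in J cancels the 2
        # produced by differentiating the square.
        theta -= alpha * (X.T @ errors) / m

    return theta

# Hypothetical data: [size in 1000 sq ft, bedrooms] -> price in $1000s.
X = np.array([[2.1, 3.0], [1.6, 2.0], [3.0, 4.0]])
y = np.array([400.0, 330.0, 540.0])
print(gradient_descent(X, y))  # learned [theta_0, theta_1, theta_2]
```

Starting from all zeros, each pass moves Theta a small step in the direction that decreases J of Theta, which is where the next part of the lecture picks up.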