Summary for the midterm of the course 'Business Analytics'. Includes all the reading material for weeks 1, 2, and 3.
Week 1 --> Read chapters 2.1-2.2.
Week 2 --> Read chapters 10.1 and 10.3.
Week 3 --> Read chapters 3.1-3.3 and 3.5.
Also, check out my free summary of the knowledge clips from week ...
Summary midterm Business Analytics week 1 - week 3
WEEK 1 CHAPTERS
2.1 What Is Statistical Learning?
Input variables: e.g. advertising budgets. Typically denoted using the symbol X, with a subscript to
distinguish them: X1 = TV budget, X2 = radio budget, etc.
→ also called predictors, independent variables, features, or sometimes just variables
Output variable: e.g. sales. Typically denoted using the symbol Y
→ also called the response or dependent variable
We assume that there is some relationship between Y and X = (X1, X2, ..., Xp), which can be written
in the very general form:
Y = f(X) + ϵ
Here f is some fixed but unknown function of X1, ..., Xp, and ϵ is a random error term, which is
independent of X and has mean zero. f represents the systematic information that X provides about Y.
In essence, statistical learning refers to a set of approaches for estimating f.
2.1.1 Why Estimate f?
Prediction
To predict Y when the inputs X are readily available but the output Y cannot easily be obtained.
Because the error term averages to zero, we can predict Y using Ŷ = f̂(X), where f̂ represents our
estimate for f and Ŷ represents the resulting prediction for Y. f̂ is often treated as a black box: we are
not typically concerned with its exact form, provided that it yields accurate predictions for Y.
The accuracy of Ŷ depends on two quantities:
● Reducible error: f̂ will not be a perfect estimate for f, and this inaccuracy will introduce
some error. It is reducible because we can potentially improve the accuracy of f̂ by using the
most appropriate statistical learning technique.
● Irreducible error: Y is also a function of ϵ, which cannot be predicted using X. Therefore
this error also affects the accuracy of our predictions. No matter how well we estimate f, we
cannot reduce the error introduced by ϵ.
○ Why is the irreducible error larger than zero?
■ Unmeasured variables
■ Unmeasurable variation
E(Y − Ŷ)² represents the average, or expected value, of the squared difference between the predicted
and actual value of Y, and Var(ϵ) represents the variance associated with the error term. They combine as:
E(Y − Ŷ)² = [f(X) − f̂(X)]² + Var(ϵ)
where the first term is the reducible error and Var(ϵ) is the irreducible error.
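To make the reducible/irreducible split concrete, here is a minimal simulation sketch in Python (the true f, the noise level, and the deliberately imperfect estimate f̂ below are illustrative assumptions, not taken from the book):

import numpy as np

rng = np.random.default_rng(0)

# Assumed true relationship f and noise: Y = f(X) + eps, Var(eps) = 0.25.
def f(x):
    return 2.0 + 3.0 * x

n = 100_000
x = rng.uniform(0.0, 1.0, n)
eps = rng.normal(0.0, 0.5, n)                    # irreducible error term
y = f(x) + eps

# A deliberately imperfect estimate f_hat, so there is reducible error.
def f_hat(x):
    return 2.5 + 2.5 * x

y_hat = f_hat(x)

mse = np.mean((y - y_hat) ** 2)                  # E(Y - Ŷ)²
reducible = np.mean((f(x) - f_hat(x)) ** 2)      # [f(X) - f̂(X)]² term
irreducible = 0.5 ** 2                           # Var(eps)
print(mse, reducible + irreducible)              # the two totals nearly match

Even with a perfect f̂, the printed MSE would not drop below Var(ϵ) = 0.25, which is the sense in which that part of the error is irreducible.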
Inference
To understand the relationship between X and Y: to understand how Y changes as a function of
X1, ..., Xp. Now f cannot be treated as a black box, because we need to know its exact form.
● Which predictors are associated with the response?
● What is the relationship between the response and each predictor?
● Can the relationship between Y and each predictor be adequately summarized using a linear
equation, or is the relationship more complicated?
Linear models allow for relatively simple and interpretable inferences, but may not yield as accurate
predictions as some other approaches.
2.1.2 How do we estimate f?
Training data: observations that will be used to train our method how to estimate f.
Let xij represent the value of the jth predictor, or input, for observation i, where i = 1, 2, ..., n and
j = 1, 2, ..., p. Correspondingly, let yi represent the response variable for the ith observation. Then our
training data consist of {(x1, y1), (x2, y2), ..., (xn, yn)} where xi = (xi1, xi2, ..., xip)T.
Parametric methods
Involve a two-step model-based approach.
1. Make an assumption about the functional form/shape of f. Example assumption, a linear model:
f(X) = β0 + β1X1 + β2X2 + ... + βpXp
Only the p + 1 coefficients β0, β1, ..., βp have to be estimated.
2. Now we need a procedure that uses the training data to fit/train the model. We want to find
values of the coefficients such that
Y ≈ β0 + β1X1 + β2X2 + ... + βpXp
The most common approach is (ordinary) least squares.
The potential disadvantage of a parametric approach is that the model we choose will usually not
match the true unknown form of f. We can try to address this problem by choosing flexible models
that can fit many different possible functional forms for f.
→ Flexible models can lead to overfitting the data. This means they follow the errors or noise too
closely.
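As a sketch of the two parametric steps, the snippet below assumes a linear form and fits the p + 1 coefficients by ordinary least squares on simulated training data (the data-generating numbers are made up for illustration):

import numpy as np

rng = np.random.default_rng(1)

# Simulated training data {(x1, y1), ..., (xn, yn)} with p = 2 predictors.
n, p = 200, 2
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 2.0, -0.5])           # beta0, beta1, beta2
y = beta_true[0] + X @ beta_true[1:] + rng.normal(0.0, 1.0, n)

# Step 1 assumed f(X) = beta0 + beta1*X1 + beta2*X2; step 2 fits it by
# (ordinary) least squares using a design matrix with an intercept column.
X_design = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print(beta_hat)                                  # estimates of the p + 1 coefficients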
Non-parametric methods
Do not make explicit assumptions about the functional form of f. They seek an estimate of f that gets
as close to the data points as possible without being too rough or wiggly.
Advantage: they have the potential to accurately fit a wider range of possible shapes for f, avoiding
the danger of the estimate of f being very different from the true f.
Disadvantage: they do not reduce the problem of estimating f to a small number of parameters, so a
very large number of observations is required in order to obtain an accurate estimate for f.
A thin-plate spline can be used to estimate f. It does not impose any pre-specified model on f; instead
it attempts to produce an estimate for f that is as close as possible to the observed data, subject to the
fit being smooth.
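As a hedged illustration of such a non-parametric fit, SciPy's RBFInterpolator supports a thin-plate-spline kernel; the data and the smoothing value below are invented for the sketch:

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# Noisy observations of an unknown relationship (sin is just an example).
x = rng.uniform(-3.0, 3.0, size=(80, 1))
y = np.sin(x[:, 0]) + rng.normal(0.0, 0.2, 80)

# No pre-specified model for f; `smoothing` > 0 keeps the fit from chasing
# the noise exactly (i.e. from being too rough or wiggly).
spline = RBFInterpolator(x, y, kernel="thin_plate_spline", smoothing=1.0)

x_new = np.linspace(-3.0, 3.0, 5).reshape(-1, 1)
print(spline(x_new))                             # smoothed estimates of f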
2.1.3 The trade-off between prediction accuracy and model interpretability
Why would we ever choose a more restrictive method (e.g. a linear model) instead of a very flexible
approach (e.g. a thin-plate spline)? → When we are mainly interested in inference, restrictive models
are much more interpretable.
A restrictive approach is more interpretable because the resulting model is simpler; a flexible
approach can produce estimates of f so complicated that it is hard to understand how any individual
predictor is associated with the response.
Trade-off between flexibility and interpretability of different statistical learning methods:
Lasso: relies upon the linear model but uses an alternative fitting procedure for estimating the
coefficients. More restrictive and therefore less flexible and more interpretable.
Least squares linear regression: relatively inflexible but quite interpretable.
Generalized additive models (GAMs): extend the linear model to allow for certain non-linear
relationships. More flexible than linear regression, but less interpretable.
Bagging, boosting, and support vector machines: highly flexible approaches that are harder to
interpret.
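To see the "alternative fitting procedure" idea in one place, the sketch below fits the same simulated data by least squares and by the lasso using scikit-learn (the data and the penalty alpha=0.1 are assumptions for illustration):

import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(3)

# Ten predictors, but only the first two actually drive the response.
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0.0, 1.0, 200)

ols = LinearRegression().fit(X, y)       # least squares: all coefficients nonzero
lasso = Lasso(alpha=0.1).fit(X, y)       # lasso: shrinks some exactly to zero

print(np.round(ols.coef_, 2))
print(np.round(lasso.coef_, 2))

The coefficients the lasso zeroes out are what make it the more restrictive, and hence more interpretable, of the two procedures.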
2.1.4 Supervised versus Unsupervised Learning
Supervised learning: for each observation of the predictor measurements xi there is an associated
response measurement yi.
Unsupervised learning: when we lack a response variable (Y) that can supervise our analysis.
E.g. cluster analysis: the goal is to ascertain whether the observations fall into relatively
distinct groups.
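As a minimal unsupervised sketch, the snippet below runs k-means cluster analysis on unlabeled data (two made-up groups; there is no response Y to supervise the fit):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Unlabeled observations drawn from two hypothetical groups.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

# Cluster analysis: ask whether the observations fall into distinct groups.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5], kmeans.labels_[-5:])   # cluster assignments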