SOA PA Exam
Define Forward Selection - answer-which involves starting with no variables in the model, testing the addition of each variable using a chosen model fit criterion, adding the variable (if any) whose inclusion gives the most statistically significant improvement of the fit, and repeating this process until no further variable improves the model to a statistically significant extent.

After building a GLM regression model, what are some ways (steps) to validate the
model? - answer-1. Evaluate the RMSE against an OLS model or other candidate models to
determine the best or most interpretable model
2. Interpret diagnostic plots to assess the model assumptions:
- Residuals vs Fitted: checks linearity and that residuals are centered around zero
- Normal Q-Q plot: checks whether the residuals are approximately normally distributed
- Scale-Location: checks homoscedasticity (constant variance of the residuals)
- Residuals vs Leverage: flags influential observations (high leverage plus a large residual)
Note: Be aware if the question asks you to re-run the model on the full data set
3. Understand how to interpret the predictors, which predictors are binary (indicator)
variables, and what the coefficients represent.
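Step 1 above can be sketched as follows. This is a minimal illustration on simulated data (the data, seed, and split are assumptions, not from the exam material): fit by least squares on a training set and score holdout RMSE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y depends linearly on one predictor plus noise.
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.5, size=100)

# Train/holdout split.
X_tr, X_te, y_tr, y_te = X[:80], X[80:], y[:80], y[80:]

# Fit OLS by least squares and score RMSE on the holdout set;
# competing models would be compared on the same holdout RMSE.
beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
rmse = np.sqrt(np.mean((y_te - X_te @ beta) ** 2))
```

A lower holdout RMSE favors a model, but interpretability may still justify the simpler one.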

After finding the optimal cp value for a decision tree, what are the options to move forward
with the model? - answer-1. Build a new tree from scratch using the optimal complexity
parameter value
2. Take the overly complex tree already built and prune back (remove) any splits that
do not satisfy the new impurity-reduction threshold.
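Option 2 can be sketched with a toy tree structure (the `Split` class and its fields are hypothetical, for illustration only): prune back, bottom-up, any split whose impurity reduction falls below the chosen cp.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Split:
    """One node of a fitted tree (hypothetical structure); leaves have no children."""
    impurity_reduction: float = 0.0
    left: Optional["Split"] = None
    right: Optional["Split"] = None

    def is_leaf(self):
        return self.left is None and self.right is None

def prune(node: Split, cp: float) -> Split:
    """Remove any split whose impurity reduction is below cp (bottom-up)."""
    if node.is_leaf():
        return node
    node.left = prune(node.left, cp)
    node.right = prune(node.right, cp)
    # Collapse this split into a leaf once its children are leaves
    # and the split itself is too weak to justify keeping.
    if node.left.is_leaf() and node.right.is_leaf() and node.impurity_reduction < cp:
        node.left = node.right = None
    return node

# A strong root split (0.05) over a weak child split (0.001), pruned at cp = 0.01.
tree = prune(Split(0.05, Split(0.001, Split(), Split()), Split()), cp=0.01)
```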

Are variable transformations needed when using a decision tree? - answer-No
transformation is needed when fitting a decision tree because, for a numeric
predictor, a tree makes splits based only on the relative order of the variable values.
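This order-only property can be demonstrated directly. The sketch below (toy data and a simple single-split search, all assumed for illustration) shows that a monotone transformation such as log leaves the chosen partition unchanged.

```python
import math

def best_split_partition(x, y):
    """Return the set of indices sent left by the SSE-minimizing single split."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    best, best_left = float("inf"), frozenset()
    for k in range(1, len(x)):
        left = [y[order[i]] for i in range(k)]
        right = [y[order[i]] for i in range(k, len(x))]
        sse = sum((v - sum(left) / len(left)) ** 2 for v in left) \
            + sum((v - sum(right) / len(right)) ** 2 for v in right)
        if sse < best:
            best, best_left = sse, frozenset(order[:k])
    return best_left

x = [1.0, 4.0, 9.0, 16.0, 25.0]
y = [1.0, 1.2, 5.0, 5.1, 5.2]

# log() preserves the order of x, so the best partition is identical.
same = best_split_partition(x, y) == best_split_partition([math.log(v) for v in x], y)
```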

Compare and contrast Ridge and Lasso Regression - answer-Similarities:
Both shrink coefficients toward zero, and in both cases there is a hyperparameter
controlling the extent of the shrinkage. This is normally selected through cross-validation.
Differences:
- Lasso regression not only helps reduce overfitting but can also perform feature
selection, because it can set coefficients exactly to zero
- (LASSO therefore provides an alternative to forward and backward selection for variable
selection)
- Ridge shrinks coefficients but never sets them exactly to zero

Compare PCA and LASSO regularization on both how they perform dimension
reduction and their interpretability - answer-Where PCA reduces the dimensionality
without reference to the target variable, LASSO reduces the dimensionality by coercing
some coefficients to zero when optimizing its penalized objective function, which
balances the fit to the target against the sum of the absolute values of the
coefficients (the L1 penalty). The reduction in the number of dimensions can be directly
chosen with PCA but is only applicable to numerical variables. The reduction in the
number of dimensions in LASSO can only be indirectly chosen via the lambda
hyperparameter but is also applicable to categorical variables after binarization.

The principal components from PCA can be difficult to describe in detail compared to
the original variables, which can be preserved in LASSO.
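The "directly chosen" reduction in PCA can be sketched with NumPy (simulated correlated data; the seed and dimensions are assumptions): center, take the SVD, and keep exactly k components; note that the target never appears.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four highly correlated numeric columns; PCA never references a target.
z = rng.normal(size=(200, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(200, 1)) for _ in range(4)])

Xc = X - X.mean(axis=0)                  # PCA requires centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 1                                    # dimension chosen directly
scores = Xc @ Vt[:k].T                   # first principal component scores
explained = s[:k] ** 2 / np.sum(s ** 2)  # proportion of variance explained
```

The component is a weighted mix of all four columns, which is exactly why it is harder to interpret than the original variables a LASSO fit would keep.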

Define 4 Random Forest Parameters - answer-1. Number of Trees
2. Proportion of Observations
3. Proportion of Features
4. Decision Tree Parameters
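A rough sketch of how the first three parameters act (the settings and sampling scheme below are illustrative assumptions, not a full random forest): each tree sees a bootstrap sample of the observations and a random subset of the features.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_features = 100, 10

n_trees = 5       # 1. number of trees
obs_frac = 0.8    # 2. proportion of observations per tree (sampled with replacement)
feat_frac = 0.3   # 3. proportion of features available to each tree
# 4. decision-tree parameters (max depth, min node size, cp, ...) would be
#    applied when growing each individual tree; omitted here.

samples = []
for _ in range(n_trees):
    rows = rng.choice(n_obs, size=int(obs_frac * n_obs), replace=True)
    cols = rng.choice(n_features, size=int(feat_frac * n_features), replace=False)
    samples.append((rows, cols))
```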

Define a Balanced binary tree - answer-A binary tree in which the left and right subtrees
of any node differ in depth by at most one.

Define a Loss Function - answer-A function of two variables, the prediction and a single
observed new value, that measures the error. (e.g. the squared error loss function:
[y_new - g(x_new, B_hat)]^2)
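The squared error loss above, written out directly:

```python
def squared_error_loss(y_new, y_pred):
    """Loss for one new observation and its prediction: (y_new - g(x_new))^2."""
    return (y_new - y_pred) ** 2

loss = squared_error_loss(3.0, 2.5)  # error of 0.5, squared
```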

Define accuracy - answer-(TP+TN)/N
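The accuracy formula as a function of the four confusion-matrix cells (the counts below are made-up example values):

```python
def accuracy(tp, tn, fp, fn):
    """(TP + TN) / N, where N = TP + TN + FP + FN."""
    return (tp + tn) / (tp + tn + fp + fn)

acc = accuracy(tp=40, tn=45, fp=10, fn=5)  # 85 correct out of 100
```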

Define AIC - answer-Akaike Information Criterion;
an estimate of the quality of each model, relative to each of the other models. Thus, AIC
provides a means for model selection.
- Method: an added variable must increase the log-likelihood by more than one per
parameter added for the AIC to improve.
= 2*k - 2*ln(L) [Lower = better]
- Thus, AIC rewards goodness of fit (as assessed by the likelihood function), but it also
includes a penalty that is an increasing function of the number of estimated parameters.
- The penalty discourages overfitting, which is desired because increasing the number
of parameters in the model almost always improves the goodness of the fit.
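The formula and the penalty's effect, in a two-line example (the log-likelihood values are made up): the larger model gains only 0.5 in log-likelihood for 2 extra parameters, so its AIC is worse.

```python
def aic(k, log_likelihood):
    """AIC = 2k - 2 ln(L); lower is better."""
    return 2 * k - 2 * log_likelihood

small = aic(k=3, log_likelihood=-120.0)  # 2*3 + 240 = 246
large = aic(k=5, log_likelihood=-119.5)  # 2*5 + 239 = 249, penalized more
```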

Define an Elbow Plot - answer-- The proportion of total variance explained by the
variance between the k cluster centers is calculated and plotted for successive values of k.
- Increases in k generally lead to increases in the proportion of variance explained, but
the size of each increase typically decreases with each additional cluster; the "elbow"
where the gains level off suggests a suitable k.
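The quantity plotted on the y-axis can be computed as below (tiny hand-made data and cluster assignments, assumed for illustration): between-cluster sum of squares over total sum of squares.

```python
import numpy as np

def prop_variance_explained(X, labels, centers):
    """Between-cluster sum of squares divided by total sum of squares."""
    grand = X.mean(axis=0)
    total = np.sum((X - grand) ** 2)
    between = sum(np.sum(labels == j) * np.sum((c - grand) ** 2)
                  for j, c in enumerate(centers))
    return between / total

# Two tight, well-separated groups: k = 2 explains almost all the variance.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
labels = np.array([0, 0, 1, 1])
centers = np.array([[0.05], [5.05]])
pve = prop_variance_explained(X, labels, centers)
```

Repeating this for k = 1, 2, 3, ... and plotting the values gives the elbow plot.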

Define an unbalanced binary tree - answer-A binary tree in which the left and right
subtrees of some nodes differ in depth by more than one.

Define and describe the two types of Hierarchical Clustering - answer-Agglomerative:
starts by considering each observation as its own cluster, then gradually merges
nearby clusters at each stage until only one cluster is left (bottom-up)
Divisive: starts by considering all observations as a single cluster and then
progressively splits it into subclusters recursively (top-down)
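The agglomerative (bottom-up) variant can be sketched in a few lines for 1-D points (single linkage and the toy data are assumptions made for brevity):

```python
def agglomerative(points, n_clusters):
    """Start with singleton clusters; repeatedly merge the closest pair
    (single linkage) until n_clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the two nearest clusters
    return clusters

groups = agglomerative([0.0, 0.2, 5.0, 5.3], n_clusters=2)
```

Running it all the way to one cluster would trace out the full dendrogram.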

Define and provide examples of Supervised learning - answer-A type of machine
learning task of learning a function that maps an input to an output based on example
input-output pairs.
Examples: Generalized Linear Models (GLM), Regularization (lasso & ridge), decision
trees.

Define and provide examples of Unsupervised learning - answer-A type of machine
learning algorithm used to draw inferences from datasets without labeled outputs.
Examples: Hierarchical clustering, k-Means clustering, Principal Component Analysis
(PCA)

Define AUC - answer-"Area Under the (ROC) Curve"
- equal to the probability that the classifier will rank a randomly chosen positive instance
higher than a randomly chosen negative one
- Higher values are better
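The probabilistic definition can be computed directly by comparing every positive score to every negative score (the scores below are made-up examples; ties count half):

```python
def auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) pairs where the positive is ranked higher."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

value = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])  # 8 of 9 pairs ranked correctly
```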

Define Backward Selection - answer-which involves starting with all candidate variables,
testing the deletion of each variable using a chosen model fit criterion, deleting the
variable (if any) whose loss gives the most statistically insignificant deterioration of the
model fit, and repeating this process until no further variables can be deleted without a
statistically insignificant loss of fit.
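The backward-selection loop can be sketched abstractly (the criterion function below is a made-up stand-in; in practice it would be AIC, BIC, or a p-value-based test, with lower meaning better):

```python
def backward_select(variables, criterion):
    """Drop, one at a time, the variable whose removal most improves the
    criterion (lower is better); stop when no removal helps."""
    current = set(variables)
    best = criterion(current)
    improved = True
    while improved and current:
        improved = False
        for v in sorted(current):
            trial = current - {v}
            score = criterion(trial)
            if score < best:
                best, current, improved = score, trial, True
                break
    return current

# Hypothetical criterion: 'a' and 'c' reduce the score a lot, 'b' only adds
# a complexity penalty of 1 per included variable.
useful = {"a": 10.0, "b": 0.0, "c": 5.0}
crit = lambda s: 100.0 - sum(useful[v] for v in s) + len(s)
kept = backward_select(["a", "b", "c"], crit)
```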

Define Best Subset Selection - answer-is a method that aims to find the subset of
independent variables (Xi) that best predict the outcome (Y), and it does so by
considering all possible combinations of independent variables.
- Unlike forward and backward selection, it evaluates every one of the 2^p possible
subsets, which becomes computationally expensive as the number of variables grows.
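A sketch of the exhaustive enumeration on simulated data (data, seed, and the Gaussian-AIC scoring used to compare subsets of different sizes are all assumptions): score every subset and keep the best.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 60
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=n)  # X[:,1] is irrelevant

def aic_score(cols):
    """Gaussian AIC (up to a constant) for the OLS fit on the given columns."""
    k = len(cols) + 1  # +1 for the intercept
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ beta) ** 2))
    return n * np.log(rss / n) + 2 * k

# Enumerate all 2^3 = 8 subsets and keep the one with the lowest score.
best = min((c for k in range(4) for c in itertools.combinations(range(3), k)),
           key=aic_score)
```

With p predictors this loop runs 2^p fits, which is why stepwise methods are used when p is large.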

Define Bias - answer-The Expected Loss arising from the model not being
complex/flexible enough to capture the underlying signal.
- High bias means the model won't be accurate because it doesn't have the capacity to
capture the signal in the data

Define BIC - answer-Bayesian Information Criterion; a criterion for model selection among
a finite set of models; the model with the lowest BIC is preferred.
= k * ln(n) - 2 * ln(L), where n is the number of observations
- It penalizes the complexity of the model, where complexity refers to the number of
parameters in the model; the penalty is heavier than AIC's once ln(n) > 2 (n >= 8).

Define Classification Error - answer-[Impurity Measure]
= 1 - max(p), where p is the vector of class proportions in a node
- At the model level, the classification error rate is the proportion of misclassified
observations (false negatives plus false positives, divided by N)
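The impurity form written out (the class proportions are an example): a node where the majority class holds 70% of the observations has error 0.3.

```python
def classification_error(p):
    """Impurity of a node with class proportions p: 1 - max(p)."""
    return 1.0 - max(p)

err = classification_error([0.7, 0.2, 0.1])  # majority class covers 70%
```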

Define confusion matrix - answer-- A convenient summary of the model predictions from
which several performance measures are derived

Define CP - answer-- Complexity Parameter; a decision tree control parameter
- If any split does not increase the overall R^2 of the model by at least cp (where R^2
is the usual linear-models definition), then that split is decreed to be, a priori, not worth
pursuing
- A higher value indicates a less complex tree
