Isye 6501 Final exam Questions and answers | Updated 2024/25
Seller: STUVATE · August 10, 2024 · 23 pages
1-norm - Similar to rectilinear distance; measures the rectilinear length of a vector from the origin. If z = (𝑧1, 𝑧2, ..., 𝑧𝑚) is a vector in an m-dimensional space, then its 1-norm is |𝑧1| + |𝑧2| + ⋯ + |𝑧𝑚| = Σ (i=1 to m) |𝑧𝑖|.




A/B Testing - Testing two alternatives to see which one performs better.
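One common way to compare the two alternatives is a two-proportion z-test on conversion counts. A minimal sketch (the function name and the experiment numbers are illustrative, not from the course material):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    # Pooled two-proportion z-statistic for comparing conversion rates
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical experiment: variant A converts 120/1000, variant B 150/1000
z = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 3))  # about -1.963, near the conventional 5% cutoff
```

A |z| above roughly 1.96 would suggest the difference is unlikely to be chance at the 5% level.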




2-norm - Similar to Euclidean distance; measures the straight-line length of a vector from the origin. If z = (𝑧1, 𝑧2, ..., 𝑧𝑚) is a vector in an 𝑚-dimensional space, then its 2-norm is the same as the 1-norm but with each term squared, under a square root: square root(Σ (i=1 to m) 𝑧𝑖^2).
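The 1-norm and 2-norm definitions above can be checked numerically; a minimal sketch (function names are illustrative):

```python
import math

def norm_1(z):
    # 1-norm: sum of absolute values (rectilinear length)
    return sum(abs(zi) for zi in z)

def norm_2(z):
    # 2-norm: square root of the sum of squared components (Euclidean length)
    return math.sqrt(sum(zi ** 2 for zi in z))

z = [3, -4]
print(norm_1(z))  # 7
print(norm_2(z))  # 5.0
```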




Accuracy - Fraction of data points correctly classified by a model; equal to (TP+TN) / (TP+FP+TN+FN).
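The accuracy fraction is a one-line computation; a minimal sketch with hypothetical confusion-matrix counts:

```python
def accuracy(tp, fp, tn, fn):
    # Fraction of points classified correctly: (TP + TN) / (TP + FP + TN + FN)
    return (tp + tn) / (tp + fp + tn + fn)

print(accuracy(tp=40, fp=10, tn=45, fn=5))  # 0.85
```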




Action - In ARENA, something that is done to an entity.




Additive Seasonality - Seasonal effect that is added to a baseline value (for example, "the temperature in June is 10 degrees above the annual baseline").




Adjusted R-squared - Variant of R-squared that encourages simpler models by penalizing the use of too many variables.
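The usual penalty is 1 − (1 − R²)(n − 1)/(n − p − 1), where n is the number of observations and p the number of predictors. A minimal sketch showing that the same R² scores worse as p grows (numbers are illustrative):

```python
def adjusted_r_squared(r2, n, p):
    # Penalizes R-squared for the number of predictors p given n observations
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Same raw fit quality, more predictors -> lower adjusted R-squared
print(adjusted_r_squared(0.90, n=100, p=2))
print(adjusted_r_squared(0.90, n=100, p=20))
```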




AIC - Akaike information criterion. Model selection technique that trades off model fit and model complexity. When comparing models, the model with lower AIC is preferred. Generally penalizes complexity less than BIC.
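For least-squares models with Gaussian errors, AIC reduces (up to an additive constant) to n·ln(RSS/n) + 2k, where k counts estimated parameters. A minimal sketch of comparing two hypothetical fits under that assumption (the numbers are illustrative):

```python
import math

def aic_gaussian(rss, n, k):
    # AIC for a least-squares model with Gaussian errors, up to an additive
    # constant: n * ln(RSS / n) + 2 * k
    return n * math.log(rss / n) + 2 * k

# Hypothetical comparison: a 3-parameter model barely improves the fit of a
# 2-parameter one, so the extra parameter is not worth its penalty
print(aic_gaussian(rss=10.0, n=50, k=2))
print(aic_gaussian(rss=9.9, n=50, k=3))
```

Here the simpler model has the lower AIC and would be preferred.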




Algorithm - Step-by-step procedure designed to carry out a task.




Analysis of Variance (ANOVA) - Statistical method for dividing the variation in observations among different sources.




Approximate dynamic program - Dynamic programming model where the value functions are approximated.




Arc - Connection between two nodes/vertices in a network. In a network model, there is a variable for each arc, equal to the amount of flow on the arc, and (optionally) a capacity constraint on the arc's flow. Also called an edge.




Area under the curve (AUC) - Area under the ROC curve; an estimate of the classification model's accuracy. Also called concordance index.




ARIMA - Autoregressive integrated moving average.

Arrival Rate - Expected number of arrivals of people, things, etc. per unit time -- for example, the expected number of truck deliveries per hour to a warehouse.




Assignment Problem - Network optimization model with two sets of nodes that finds the best way to assign each node in one set to a node in the other set.




Attribute - A characteristic or measurement - for example, a person's height or the color of a car. Generally interchangeable with "feature", and often with "covariate" or "predictor". In the standard tabular format, a column of data.




Autoregression - Regression technique using past values of time series data as predictors of future values.
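An order-1 autoregression fits each value against the previous one, x_t ≈ a + b·x_{t-1}, by ordinary least squares on lagged pairs. A minimal sketch on a series built to follow that relationship exactly (function name and data are illustrative):

```python
def fit_ar1(series):
    # Fit x_t = a + b * x_{t-1} by ordinary least squares on lagged pairs
    x = series[:-1]   # predictors: values at time t-1
    y = series[1:]    # responses: values at time t
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Series generated exactly by x_t = 1 + 0.5 * x_{t-1}
series = [4.0]
for _ in range(10):
    series.append(1 + 0.5 * series[-1])
a, b = fit_ar1(series)
print(round(a, 3), round(b, 3))  # 1.0 0.5 -- the generating coefficients
```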




Autoregressive integrated moving average (ARIMA) - Time series model that uses differences between observations when data is nonstationary. Also called Box-Jenkins.




Backward elimination - Variable selection process that starts with all variables and then iteratively removes the least-immediately-relevant variables from the model.




Balanced Design - Set of combinations of factor values across multiple factors that has the same number of runs for all combinations of levels of one or more factors.




Balking - An entity arrives at the queue, sees the size of the line (or some other attribute), and decides to leave the system.




Bayes' theorem/Bayes' rule - Fundamental rule of conditional probability: 𝑃(𝐴|𝐵) = 𝑃(𝐵|𝐴) · 𝑃(𝐴) / 𝑃(𝐵)
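A classic numeric illustration is a screening test: even an accurate test can yield a low probability of disease given a positive result when the disease is rare. All the numbers below are hypothetical:

```python
def bayes(p_b_given_a, p_a, p_b):
    # P(A|B) = P(B|A) * P(A) / P(B)
    return p_b_given_a * p_a / p_b

# Hypothetical test: P(positive|disease) = 0.99, P(disease) = 0.01,
# false-positive rate 0.05, so by total probability
# P(positive) = 0.99 * 0.01 + 0.05 * 0.99 = 0.0594
p_disease_given_positive = bayes(0.99, 0.01, 0.0594)
print(round(p_disease_given_positive, 4))  # 0.1667
```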




Bayesian information criterion (BIC) - Model selection technique that trades off model fit and model complexity. When comparing models, the model with lower BIC is preferred. Generally penalizes complexity more than AIC.




Bayesian Regression - Regression model that incorporates estimates of how coefficients and error are distributed.




Bellman's Equation - Equation used in dynamic programming that ensures optimality of a solution.




Bernoulli Distribution - Discrete probability distribution where the outcome is binary, either 0 or 1. Often, 1 represents success and 0 represents failure. The probability of the outcome being 1 is 𝑝 and the probability of the outcome being 0 is 𝑞 = 1−𝑝, where 𝑝 is between 0 and 1.




Bias - Systematic difference between a true parameter of a population and its estimate.

Binary Data - Data that can take only two different values (true/false, 0/1, black/white, on/off, etc.)




Binary integer program - Integer program where all variables are binary variables.




Binary Variable - Variable that can take just two values: 0 and 1.




Binomial Distribution - Discrete probability distribution for the exact number of successes, k, out of a total of n iid Bernoulli trials, each with probability p: Pr(𝑘) = (n choose k) p^k (1−p)^(n−k)
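The binomial pmf can be evaluated directly from that formula; a minimal sketch using the standard library's `math.comb`:

```python
from math import comb

def binomial_pmf(k, n, p):
    # Pr(k successes in n iid Bernoulli(p) trials) = C(n,k) * p^k * (1-p)^(n-k)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

print(binomial_pmf(2, 4, 0.5))  # 0.375
# The pmf sums to 1 over all possible outcome counts k = 0..n
print(sum(binomial_pmf(k, 4, 0.5) for k in range(5)))  # 1.0
```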




Blocking - Factor introduced to an experimental design that interacts with the effect of the factors to be studied. The effect of the factors is studied within the same level (block) of the blocking factor.




Box and whisker plot - Graphical representation of data showing the middle range of data (the "box"), reasonable ranges of variability ("whiskers"), and points (possible outliers) outside those ranges.




Box-Cox Transformation - Transformation of a non-normally-distributed response to a normal distribution.
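The one-parameter form of the transform is (y^λ − 1)/λ for λ ≠ 0, and ln(y) at λ = 0 (its limit). A minimal sketch of just the transform itself; in practice λ is chosen by maximum likelihood, which this sketch does not attempt:

```python
import math

def box_cox(y, lam):
    # One-parameter Box-Cox transform of a positive response y:
    # (y**lam - 1) / lam for lam != 0, and ln(y) in the lam == 0 limit
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1) / lam

print(box_cox(4.0, 0.5))             # 2.0  (square-root-like transform)
print(round(box_cox(math.e, 0), 4))  # 1.0  (log transform at lambda = 0)
```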




Branching - Splitting a set of data into two or more subsets, to each be analyzed separately.




CART - Classification and regression trees.




Categorical Data - Data that classifies observations without quantitative meaning (for example, colors of cars) or where quantitative amounts are categorized (for example, "0-10, 11-20, ...").




Causation - Relationship in which one thing makes another happen (i.e., one thing causes another).




Chance Constraint - A probability-based constraint. For example, a standard linear constraint might be 𝐴x ≤ 𝑏. A similar chance constraint might be Pr(𝐴x ≤ 𝑏) ≥ 0.95.




Change Detection - Identifying when a significant change has taken place in a process.




Classification - The separation of data into two or more categories, or (a point's classification) the category a data point is put into.




Classification tree - Tree-based method for classification. After branching to split the data, each subset is analyzed with its own classification model.




Classifier - A boundary that separates the data into two or more categories. Also (more generally) an algorithm that performs classification.
