Machine Learning with Python (IBM Coursera) Quiz Answers: Week 4
Practice Quiz: Linear Classification
1. Which of the following examples are sample applications of logistic regression? (Select three.)
- Estimating the blood pressure of a patient based on her symptoms and biographical data.
- The probability that a person has a heart attack within a specified time period, using the person’s age and sex.
- A customer’s propensity to purchase a product or halt a subscription in marketing applications.
- Likelihood of a homeowner defaulting on a mortgage.
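To see why these are classification scenarios, here is a minimal sketch, assuming scikit-learn and a tiny made-up dataset, of how logistic regression returns a probability for a binary outcome such as the heart-attack example:

```python
# A minimal sketch of the kind of probability estimate described above.
# The feature values and labels are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [age, sex] (sex encoded as 0/1); label: heart attack within the period (0/1)
X = np.array([[45, 0], [52, 1], [61, 1], [38, 0], [70, 1], [66, 0]])
y = np.array([0, 0, 1, 0, 1, 1])

model = LogisticRegression()
model.fit(X, y)

# predict_proba returns P(class 0) and P(class 1) for each sample
print(model.predict_proba([[55, 1]]))
```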
2. Which of the following statements comparing linear and logistic regressions is TRUE?
- Linear regression is used for a continuous target whereas logistic regression is more suitable for a categorical target.
- Independent variables in linear regression can be continuous or categorical, but can only be categorical in logistic regression.
- In this course, linear regression minimizes the mean absolute error, while logistic regression minimizes the mean squared error.
- Both linear and logistic regression can be used to predict categorical responses and attain a point’s likelihood of belonging to each class.
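To make the distinction concrete, here is a small sketch (assuming scikit-learn; the toy values are invented) of a continuous target fitted with linear regression versus a categorical target fitted with logistic regression:

```python
# LinearRegression fits a continuous target; LogisticRegression fits a
# categorical (here binary) target. Data is made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])

y_continuous = np.array([1.1, 1.9, 3.2, 3.9, 5.1, 6.0])  # continuous target
y_categorical = np.array([0, 0, 0, 1, 1, 1])              # categorical target

lin = LinearRegression().fit(X, y_continuous)
log = LogisticRegression().fit(X, y_categorical)

print(lin.predict([[3.5]]))        # a real-valued estimate
print(log.predict([[3.5]]))        # a class label
print(log.predict_proba([[3.5]]))  # likelihood of belonging to each class
```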
3. How are gradient descent and learning rate used in logistic regression?
- Gradient descent takes increasingly bigger steps towards the minimum with each iteration.
- Gradient descent will minimize learning rate to minimize the cost in fewer iterations.
- We want to minimize the cost by maximizing the learning rate value.
- Gradient descent specifies the steps to take in the current slope direction; the learning rate is the step length.
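To illustrate the relationship asked about in question 3, here is a minimal sketch of the gradient-descent update rule, minimizing a simple one-dimensional function chosen only for illustration:

```python
# The gradient gives the direction of steepest increase, so we step against it;
# the learning rate sets the step length. Here we minimize f(w) = (w - 3)^2,
# whose gradient is 2 * (w - 3). All values are arbitrary.
learning_rate = 0.1
w = 0.0  # initial parameter

for _ in range(50):
    gradient = 2 * (w - 3)             # slope at the current point
    w = w - learning_rate * gradient   # step = learning rate times the gradient

print(w)  # approaches 3, the minimizer
```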
Graded Quiz: Linear Classification
4. Which option lists the steps of training a logistic regression model in the correct order?
1. Use the cost function on the training set.
2. Update weights with new parameter values.
3. Calculate cost function gradient.
4. Initialize the parameters.
5. Repeat until specified cost or iterations reached.
- 1, 4, 3, 2, 5
- 4, 1, 3, 2, 5
- 3, 2, 5, 4, 1
- 4, 3, 2, 5, 1
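As a concrete illustration of that ordering, the sketch below (plain NumPy, with a made-up two-feature dataset) follows steps 4, 1, 3, 2, 5: initialize the parameters, evaluate the cost on the training set, compute its gradient, update the weights, and repeat for a fixed number of iterations:

```python
# A from-scratch logistic regression training loop on invented toy data.
import numpy as np

X = np.array([[0.5, 1.0], [1.5, 2.0], [3.0, 3.5], [4.0, 5.0]])  # toy features
y = np.array([0, 0, 1, 1])                                       # toy labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(X.shape[1])   # (4) initialize the parameters
b = 0.0
learning_rate = 0.1

for _ in range(1000):      # (5) repeat until the iteration budget is reached
    p = sigmoid(X @ w + b)
    cost = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # (1) cost on the training set
    grad_w = X.T @ (p - y) / len(y)                            # (3) gradient of the cost
    grad_b = np.mean(p - y)
    w -= learning_rate * grad_w                                # (2) update the weights
    b -= learning_rate * grad_b

print(cost, w, b)
```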
5. What is the objective of SVM in terms of hyperplanes?
- Find the hyperplane of the lowest dimension.
- Choose the hyperplane that represents the largest margin between the two classes.
- Minimize the distance between the hyperplane and the support vectors.
- Choose the hyperplane that’s closest to one of the two classes.
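A short sketch of the maximum-margin idea, assuming scikit-learn's SVC with a linear kernel and invented toy points:

```python
# A linear-kernel SVC chooses the separating hyperplane with the largest margin
# and exposes the points that define that margin as support_vectors_.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 1], [1, 2], [5, 5], [6, 5], [5, 6]])  # toy points
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print(clf.support_vectors_)       # points closest to the maximum-margin hyperplane
print(clf.coef_, clf.intercept_)  # the hyperplane w.x + b = 0
```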
6. Logistic regression is used to predict the probability of a:
- Categorical dependent variable
- Categorical independent variable
- Numerical independent variable
- Numerical dependent variable
7. In which cases would we want to consider using SVM?
- When we want multiple decision boundaries with varying weights.
- When we desire efficiency with large datasets.
- When mapping the data to a higher dimensional feature space can better separate classes.
- When we desire probability estimates for each class.
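To illustrate the kernel-mapping case, here is a sketch assuming scikit-learn's make_circles toy dataset, where the classes cannot be separated by a line in the original two-dimensional space, but an RBF kernel's implicit higher-dimensional mapping separates them well:

```python
# Comparing a linear kernel with an RBF kernel on non-linearly-separable data.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_clf = SVC(kernel="linear").fit(X, y)
rbf_clf = SVC(kernel="rbf").fit(X, y)

print(linear_clf.score(X, y))  # struggles: no separating line exists in 2-D
print(rbf_clf.score(X, y))     # near-perfect after the implicit kernel mapping
```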
8. What is a disadvantage of one-vs-all classification?
- There’s an ambiguous region where multiple classes are valid outputs.
- It requires more models to be created compared to one-vs-one.
- It cannot output probability estimates of classes.
- It does not handle two-class classification well.
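A brief sketch of one-vs-all, assuming scikit-learn's OneVsRestClassifier on a made-up three-class problem: one binary classifier is trained per class, and a point between the class regions can be claimed by more than one (or none) of them, which is the ambiguity mentioned above. Ties are typically broken by taking the largest decision score.

```python
# One-vs-all: one binary classifier per class; the highest score wins.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X = np.array([[0, 0], [0, 1], [5, 5], [5, 6], [0, 5], [1, 6]])  # toy points
y = np.array([0, 0, 1, 1, 2, 2])                                # three classes

ovr = OneVsRestClassifier(LinearSVC()).fit(X, y)

point = np.array([[2.5, 3.0]])       # a point between the class regions
print(ovr.decision_function(point))  # one score per binary classifier
print(ovr.predict(point))            # highest score wins
```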