# NPTEL Introduction To Machine Learning Week 5 Assignment Answer 2023


1. The perceptron learning algorithm is primarily designed for:

2. Unsupervised learning

2. The last layer of an ANN is linear for ___ and softmax for ___.

• Regression, Regression
• Classification, Classification
• Regression, Classification
• Classification, Regression

3. Consider the following statement and answer True/False with corresponding reason:
The class outputs of a classification problem with an ANN cannot be treated independently.

1. True. Due to cross-entropy loss function
2. True. Due to softmax activation
3. False. This is the case for regression with single output
4. False. This is the case for regression with multiple outputs

4. Given below is a simple ANN with 2 inputs X1, X2 ∈ {0, 1} and edge weights −3, +2, +2.

Which of the following logical functions does it compute?

1. XOR
2. NOR
3. NAND
4. AND
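A quick way to check this question is to tabulate the unit's output over all binary inputs. The sketch below assumes the −3 weight is the bias term and that the unit uses a step activation, as in a classic perceptron diagram:

```python
# Single unit with bias weight -3 and input weights +2, +2
# (step activation assumed, as in a standard perceptron diagram).
def unit(x1, x2):
    s = -3 + 2 * x1 + 2 * x2   # weighted sum including the bias
    return 1 if s > 0 else 0   # step activation

# Truth table over binary inputs:
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", unit(x1, x2))
```

The unit fires only for input (1, 1), i.e. it computes the AND function.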

5. Using the notation used in class, evaluate the output of the neural network with a 3-3-1 architecture (2-dimensional input with 1 node for the bias term in both layers). The parameters are as follows:

Using the sigmoid function as the activation function at both layers, the output of the network for an input of (0.8, 0.7) will be (up to 4 decimal places):

1. 0.7275
2. 0.0217
3. 0.2958
4. 0.8213
5. 0.7291
6. 0.8414
7. 0.1760
8. 0.7552
9. 0.9442
10. None of these
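The parameter values for this question are not reproduced above, but the forward pass itself is mechanical. The sketch below uses placeholder weights (hypothetical, not the assignment's) for a 3-3-1 network: 2 inputs plus a bias node feed 2 hidden units plus a bias node, which feed 1 output, with sigmoid activations at both layers:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, w2):
    """Forward pass for a 3-3-1 network (bias node in each of the first two layers).

    W1: one weight vector per non-bias hidden unit, ordered (bias, x1, x2).
    w2: output weights, ordered (bias, h1, h2).
    """
    h = [sigmoid(w[0] + w[1] * x[0] + w[2] * x[1]) for w in W1]
    return sigmoid(w2[0] + w2[1] * h[0] + w2[2] * h[1])

# Placeholder weights -- substitute the values given in the assignment:
W1 = [[0.1, 0.4, -0.2], [-0.3, 0.2, 0.5]]
w2 = [0.2, 0.6, -0.4]
print(round(forward((0.8, 0.7), W1, w2), 4))
```

Plugging in the assignment's actual parameter matrices in place of `W1` and `w2` reproduces the answer to 4 decimal places.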

6. If the step size in gradient descent is too large, what can happen?

1. Overfitting
2. The model will not converge
3. We can reach maxima instead of minima
4. None of the above
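The step-size effect is easy to see on a toy objective. For f(x) = x², the gradient-descent update is x ← (1 − 2η)x, so the iterates shrink toward the minimum when η is small but oscillate and grow without bound once |1 − 2η| > 1 (here, η > 1). A minimal sketch:

```python
def gradient_descent(eta, x0=1.0, steps=20):
    """Minimize f(x) = x^2 (gradient 2x) with a fixed step size eta."""
    x = x0
    for _ in range(steps):
        x = x - eta * 2 * x
    return x

print(abs(gradient_descent(0.1)))   # small step: iterates shrink toward 0
print(abs(gradient_descent(1.1)))   # large step: iterates oscillate and blow up
```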

7. On different initializations of your neural network, you get significantly different values of loss. What could be the reason for this?

1. Overfitting
2. Some problem in the architecture
3. Incorrect activation function
4. Multiple local minima

8. The likelihood L(θ|X) is given by:

1. P(θ|X)
2. P(X|θ)
3. P(X) · P(θ)
4. P(θ)P(X)
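The likelihood L(θ|X) is the probability of the observed data under the parameter, P(X|θ), read as a function of θ. A small Bernoulli (coin-flip) illustration with toy data:

```python
def bernoulli_likelihood(theta, data):
    """L(theta | X) = P(X | theta): probability of the observed flips,
    viewed as a function of the parameter theta."""
    L = 1.0
    for x in data:
        L *= theta if x == 1 else (1 - theta)
    return L

data = [1, 1, 0, 1]  # toy observations (hypothetical): 3 heads, 1 tail
# The maximum-likelihood estimate for 3 heads in 4 flips is theta = 3/4,
# so the likelihood there exceeds the likelihood at theta = 0.5:
print(bernoulli_likelihood(0.75, data) > bernoulli_likelihood(0.5, data))
```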

9. Why is proper initialization of neural network weights important?

1. To ensure faster convergence during training
2. To prevent overfitting
3. To increase the model’s capacity
4. Initialization doesn’t significantly affect network performance
5. To minimize the number of layers in the network
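One concrete reason initialization matters is symmetry breaking: if every weight starts at zero, all hidden units compute the same activation (and would receive the same gradients), so the layer behaves like a single unit. The sketch below contrasts all-zero initialization with small random weights in a one-hidden-layer setup:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def hidden_activations(W, x):
    """One hidden layer: W holds a weight vector per unit, ordered (bias, w1, w2)."""
    return [sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))) for w in W]

x = [0.3, -1.2]
zeros = [[0.0, 0.0, 0.0]] * 3   # all-zero initialization
random.seed(0)
rand = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(3)]

print(hidden_activations(zeros, x))  # every unit outputs 0.5: symmetry never breaks
print(hidden_activations(rand, x))   # distinct activations: units can specialize
```

With zero weights the units stay interchangeable throughout training; random initialization breaks the tie, which also tends to speed up convergence.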