NPTEL Deep Learning – IIT Ropar Week 12 Assignment Answers 2024

By Sanket

1. What is the primary purpose of the attention mechanism in neural networks?

  • To reduce the size of the input data
  • To focus on specific parts of the input sequence
  • To increase the complexity of the model
  • To eliminate the need for recurrent connections
Answer :-

2. If we build the vocabulary for an encoder-decoder model from the sentence below, what will be the size of our vocabulary?
Sentence: Convolutional neural networks excel at recognizing patterns and features within images, enhancing object detection accuracy significantly.

  • 13
  • 18
  • 14
  • 16
Answer :-
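As a sanity check on the word count, the distinct tokens in the given sentence can be counted directly. Note this sketch adds no special tokens such as `<start>`/`<stop>`; including them would change the total.

```python
sentence = ("Convolutional neural networks excel at recognizing patterns "
            "and features within images, enhancing object detection "
            "accuracy significantly.")

# Lowercase and strip punctuation so "images," and "significantly." count as plain words
tokens = [w.strip(".,").lower() for w in sentence.split()]
vocab = set(tokens)
print(len(vocab))  # 16 distinct words
```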

3. Which of the following is a disadvantage of using an encoder-decoder model for sequence-to-sequence tasks?

  • The model requires a large amount of training data
  • The model is slow to train and requires a lot of computational resources
  • The generated output sequences may be limited by the capacity of the model
  • The model is prone to overfitting on the training data
Answer :- 

4. Which scenarios would most benefit from hierarchical attention mechanisms?

  • Summarizing long text documents
  • Classifying images in a dataset
  • Analyzing customer reviews or feedback data
  • Real-time processing of sensor data
Answer :- 

5. Which of the following are the advantages of using attention mechanisms in encoder-decoder models?

  • Reduced computational complexity
  • Ability to handle variable-length input sequences
  • Improved gradient flow during training
  • Automatic feature selection
  • Reduced memory requirements
Answer :- 

6. Choose the correct statement with respect to the attention mechanism in the encoder-decoder model.

  • Attention mechanism can’t be used for images
  • Only important features get high weights in the attention mechanism
  • Attention mechanism is not suitable for tasks like Machine Translation
  • None of these
Answer :-

7. We are performing the task of "Image Question Answering" using the encoder-decoder model. Choose the equation representing the decoder model for this task.

  • CNN(x_i)
  • RNN(s_{t−1}, e(ŷ_{t−1}))
  • P(y|q, I) = softmax(Vs + b)
  • RNN(x_{it})
Answer :- 
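To make the notation in the options concrete, the output equation P(y|q, I) = softmax(Vs + b) can be sketched numerically. All sizes and values below (hidden_dim, vocab_size, V, b, s) are made-up illustrative choices, not from the course material.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_dim, vocab_size = 4, 6                      # illustrative sizes
s = rng.standard_normal(hidden_dim)                # decoder hidden state s_t
V = rng.standard_normal((vocab_size, hidden_dim))  # output projection matrix
b = rng.standard_normal(vocab_size)                # output bias

def softmax(z):
    e = np.exp(z - z.max())                        # shift for numerical stability
    return e / e.sum()

# P(y | q, I) = softmax(V s + b): a probability distribution over the vocabulary
p = softmax(V @ s + b)
print(p.sum())  # sums to 1, as a distribution must
```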

8. What is the purpose of the softmax function in the attention mechanism?

  • To normalize the attention weights
  • To compute the dot product between the query and key vectors
  • To compute the element-wise product between the query and key vectors
  • To apply a non-linear activation function to the attention weights
Answer :-
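The role of softmax here can be shown with a minimal scaled dot-product attention sketch: raw alignment scores can be any real numbers, and softmax normalizes them into non-negative weights that sum to 1. The dimensions and random values below are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 4                               # key/query dimension (illustrative)
q = rng.standard_normal(d)          # decoder query vector
K = rng.standard_normal((5, d))     # 5 encoder key vectors
V = rng.standard_normal((5, d))     # matching encoder value vectors

scores = K @ q / np.sqrt(d)         # raw alignment scores (unbounded reals)
weights = softmax(scores)           # normalized attention weights, sum to 1
context = weights @ V               # weighted sum of encoder values

print(weights.sum())                # 1.0 up to floating-point error
```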

9. Which of the following output functions is most commonly used in the decoder of an encoder-decoder model for translation tasks?

  • Sigmoid
  • ReLU
  • Softmax
  • Tanh
Answer :- 

10. We are performing a task where we generate a summary (caption) for an image using the encoder-decoder model. Choose the correct statements.

  • LSTM is used as the decoder.
  • CNN is used as the decoder.
  • LSTM is used as the encoder.
  • None of these
Answer :-
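For image captioning, the standard split is a CNN encoder that maps the image to a feature vector and an LSTM decoder that emits the caption one token at a time. The sketch below only illustrates that division of roles: the shapes are made up, a single matrix stands in for the CNN, and a plain tanh RNN update stands in for the LSTM cell to keep the code short.

```python
import numpy as np

rng = np.random.default_rng(2)

# --- "CNN encoder" stand-in: maps an image to a feature vector ---
image = rng.standard_normal((8, 8))        # toy 8x8 "image"
W_enc = rng.standard_normal((5, 64))
features = np.tanh(W_enc @ image.ravel())  # encoder output, shape (5,)

# --- decoder: one recurrent step (an LSTM cell in real systems) ---
hidden_dim, vocab_size = 5, 7              # illustrative sizes
W_h = rng.standard_normal((hidden_dim, hidden_dim))
W_o = rng.standard_normal((vocab_size, hidden_dim))

s = features                               # decoder state initialized from the encoder
s = np.tanh(W_h @ s)                       # one decoder step
logits = W_o @ s
probs = np.exp(logits - logits.max())
probs = probs / probs.sum()                # softmax over the next output token
print(probs.argmax())                      # index of the most likely next token
```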