NPTEL Deep Learning – IIT Ropar Week 11 Assignment Answers 2024

By Sanket


1. Select the correct statements about GRUs

  • GRUs have fewer parameters compared to LSTMs
  • GRUs use a single gate to control both input and forget mechanisms
  • GRUs are less effective than LSTMs in handling long-term dependencies
  • GRUs are a type of feedforward neural network
Answer :- GRUs have fewer parameters compared to LSTMs; GRUs use a single gate to control both input and forget mechanisms
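The first two statements can be seen directly in a GRU's equations: the update gate z interpolates between keeping the old state and admitting new content, so one gate plays both the input and forget roles, and the cell needs only three weight pairs versus an LSTM's four. A minimal NumPy sketch (biases omitted for brevity; all names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step (biases omitted for brevity)."""
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    # A single gate z both "forgets" old state and "inputs" new content:
    return (1 - z) * h_prev + z * h_tilde

rng = np.random.default_rng(0)
d, n = 4, 3
x, h = rng.standard_normal(d), np.zeros(n)
W = [rng.standard_normal((n, d)) for _ in range(3)]
U = [rng.standard_normal((n, n)) for _ in range(3)]
h_new = gru_cell(x, h, W[0], U[0], W[1], U[1], W[2], U[2])

# 3 weight pairs per GRU cell vs. 4 for an LSTM -> fewer parameters
gru_params = 3 * (n * d + n * n)
lstm_params = 4 * (n * d + n * n)
```

For the same input and state sizes, the GRU carries three quarters of the LSTM's weight count, which is exactly why the first option holds.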

2. What is the main advantage of using GRUs over traditional RNNs?

  • They are simpler to implement
  • They solve the vanishing gradient problem
  • They require less computational power
  • They can handle non-sequential data
Answer :- They solve the vanishing gradient problem

3. What is the role of the forget gate in an LSTM network?

  • To determine how much of the current input should be added to the cell state
  • To determine how much of the previous time step’s cell state should be retained
  • To determine how much of the current cell state should be output
  • To determine how much of the current input should be output
Answer :- To determine how much of the previous time step’s cell state should be retained

4. How does LSTM prevent the problem of vanishing gradients?

  • Different activation functions, such as ReLU, are used instead of sigmoid in LSTM
  • Gradients are normalized during backpropagation
  • The learning rate is increased in LSTM
  • Forget gates regulate the flow of gradients during backpropagation
Answer :- Forget gates regulate the flow of gradients during backpropagation
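The intuition can be checked numerically: along the LSTM cell-state path, ∂c_t/∂c_{t−1} = f_t (elementwise), so the gradient reaching a state T steps back is a product of forget-gate activations, which the network can learn to hold near 1. A vanilla RNN instead multiplies by a recurrent Jacobian each step, which typically contracts. A toy comparison (the per-step factors 0.99 and 0.50 are illustrative assumptions, not learned values):

```python
import numpy as np

# Product of per-step gradient factors over T time steps.
T = 50
lstm_path = np.prod(np.full(T, 0.99))  # forget gates held nearly open
rnn_path = np.prod(np.full(T, 0.50))   # typical contraction in a vanilla RNN
```

After 50 steps the nearly-open forget gates still pass a sizeable fraction of the gradient, while the contracting RNN factor has effectively vanished.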

5. We construct an RNN for sentiment classification of text, where a text can have positive or negative sentiment. Suppose the dimension of the one-hot encoded words is R^(100×1) and the dimension of the state vector s_i is R^(50×1). What is the total number of parameters in the network? (Do not include biases.)

Answer :- 7600 (input-to-hidden: 50×100 = 5000, recurrent: 50×50 = 2500, hidden-to-output for 2 classes: 2×50 = 100)
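The count above can be worked through in a few lines, assuming the standard RNN parameterization s_t = σ(U x_t + W s_{t−1}) with an output layer of 2 units for the two sentiment classes (the 2-unit output is an assumption; with a single sigmoid output the total would be 7550):

```python
d, n, k = 100, 50, 2      # one-hot input size, state size, output classes (assumed 2)
input_to_hidden = n * d   # U: 50 x 100 = 5000
recurrent = n * n         # W: 50 x 50  = 2500
hidden_to_output = k * n  # V: 2 x 50   = 100
total = input_to_hidden + recurrent + hidden_to_output
print(total)  # 7600
```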

6. Arrange the following sequence in the order they are performed by LSTM at time step t.
[Selectively read, Selectively write, Selectively forget]

  • Selectively read, Selectively write, Selectively forget
  • Selectively write, Selectively read, Selectively forget
  • Selectively read, Selectively forget, Selectively write
  • Selectively forget, Selectively write, Selectively read
Answer :- 

7. Which of the following is a limitation of traditional feedforward neural networks in handling sequential data?

  • They can only process fixed-length input sequences
  • They are highly optimizable using gradient descent methods
  • They can’t model temporal dependencies between sequential data
  • All of These
Answer :- They can only process fixed-length input sequences; They can’t model temporal dependencies between sequential data

8. Which of the following is a formula for computing the output of an LSTM cell?

  • o_t = σ(W_o[h_{t−1}, x_t] + b_o)
  • f_t = σ(W_f[h_{t−1}, x_t] + b_f)
  • c_t = f_t ∗ c_{t−1} + i_t ∗ g_t
  • h_t = o_t ∗ tanh(c_t)
Answer :- h_t = o_t ∗ tanh(c_t)
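All four equations appear in an LSTM step, but only the last one produces the cell's output h_t. A minimal NumPy sketch of one step using exactly these equations (the stacked-input convention [h_{t−1}, x_t] and the dictionary keys are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, b):
    """One LSTM step; W and b hold the per-gate weights and biases."""
    z = np.concatenate([h_prev, x])   # [h_{t-1}, x_t]
    f = sigmoid(W["f"] @ z + b["f"])  # forget gate f_t
    i = sigmoid(W["i"] @ z + b["i"])  # input gate i_t
    g = np.tanh(W["g"] @ z + b["g"])  # candidate values g_t
    o = sigmoid(W["o"] @ z + b["o"])  # output gate o_t
    c = f * c_prev + i * g            # c_t = f_t * c_{t-1} + i_t * g_t
    h = o * np.tanh(c)                # h_t = o_t * tanh(c_t): the output
    return h, c

rng = np.random.default_rng(1)
n, d = 3, 4
W = {k: rng.standard_normal((n, n + d)) for k in "figo"}
b = {k: np.zeros(n) for k in "figo"}
h, c = lstm_cell(rng.standard_normal(d), np.zeros(n), np.zeros(n), W, b)
```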

9. Which type of neural network is best suited for processing sequential data?

  • Convolutional Neural Networks (CNN)
  • Recurrent Neural Networks (RNN)
  • Fully Connected Neural Networks (FCN)
  • Deep Belief Networks (DBN)
Answer :- Recurrent Neural Networks (RNN)

10. Which of the following is true about LSTM and GRU networks?

  • LSTM networks have more gates than GRU networks
  • GRU networks have more gates than LSTM networks
  • LSTM and GRU networks have the same number of gates
  • Both LSTM and GRU networks have no gates
Answer :- LSTM networks have more gates than GRU networks