Convolutional Neural Networks: Coursera Week 2 Quiz Answers

Quiz - Deep Convolutional Models

1. Which of the following do you typically see in a ConvNet? (Check all that apply.) A typical layer ordering is sketched after the options.

  • Use of multiple POOL layers followed by a CONV layer.
  • Multiple FC layers followed by a CONV layer.
  • Use of FC layers after flattening the volume to output classes.
  • A ConvNet makes exclusive use of CONV layers.
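
A minimal sketch of the layer ordering the question is getting at (CONV and POOL layers followed by flattening and FC layers that output the classes), assuming TensorFlow/Keras; the 32x32x3 input, filter counts, and 10-class output are made-up example values, not part of the quiz.

```python
import tensorflow as tf

# Typical ConvNet ordering: CONV/POOL blocks, then Flatten, then FC layers.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),                   # made-up input size
    tf.keras.layers.Conv2D(16, 5, activation="relu"),    # CONV
    tf.keras.layers.MaxPooling2D(2),                      # POOL
    tf.keras.layers.Conv2D(32, 5, activation="relu"),     # CONV
    tf.keras.layers.MaxPooling2D(2),                      # POOL
    tf.keras.layers.Flatten(),                            # flatten the volume
    tf.keras.layers.Dense(64, activation="relu"),         # FC
    tf.keras.layers.Dense(10, activation="softmax"),      # FC output over the classes
])
model.summary()
```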

2. LeNet-5 made extensive use of padding to create valid convolutions, to avoid increasing the number of channels after every convolutional layer. True/False? (The output-size formula used to check this is written out after the options.)

  • True
  • False
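
For checking statements like this one, the course's output-size formula is handy: "valid" means no padding at all, padding affects only the spatial size, and the number of output channels is set by the number of filters. A short note with made-up sizes:

```latex
% Output size of a convolution, in the course's notation:
% n = input height/width, f = filter size, p = padding, s = stride.
\[
  n_{\text{out}} = \left\lfloor \frac{n + 2p - f}{s} \right\rfloor + 1,
  \qquad n_{C,\text{out}} = \text{number of filters}
\]
% Example with made-up sizes: n = 32, f = 5, s = 1, p = 0 ("valid") gives
% (32 - 5)/1 + 1 = 28, so valid convolutions shrink height and width, and
% padding never changes the number of channels.
```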

3. Training a deeper network (for example, adding additional layers to the network) allows the network to fit more complex functions and thus almost always results in lower training error. For this question, assume we’re referring to “plain” networks. True/False?

  • False
  • True

4. The computation of a ResNet block is expressed in an equation shown in the quiz as an image with colored boxes (the plain equation is written out after the options).

Which part corresponds to the skip connection?

  • The term in the orange box, marked as B.
  • The term in the blue box, marked as A.
  • The equation of ResNet.
  • The term in the red box, marked as C.
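
The quiz shows this equation as an image with colored boxes labeled A, B, and C; written out in the course's notation (without the box annotations) the computation is:

```latex
% ResNet block computation; the skip connection is the a^{[l]} term that is
% added back in before the final activation g is applied.
\[
  a^{[l+2]} = g\!\left( z^{[l+2]} + a^{[l]} \right)
            = g\!\left( W^{[l+2]} a^{[l+1]} + b^{[l+2]} + a^{[l]} \right)
\]
```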

5. In the best scenario, when adding a ResNet block, it will learn to approximate the identity function after a lot of training, helping to improve the overall performance of the network. True/False? (A residual block is sketched after the options.)

  • False
  • True
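
A minimal sketch of a ResNet identity block in Keras (functional API; the 16-channel width and 32x32 input are made-up values). The Add layer is the skip connection: because the block's input is passed through unchanged and added back in, the block can approximate the identity function easily, not only after a lot of training.

```python
import tensorflow as tf
from tensorflow.keras import layers

def identity_block(x, filters=16):
    shortcut = x                                                    # skip connection: a[l]
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)                # z[l+2]
    y = layers.Add()([y, shortcut])                                 # z[l+2] + a[l]
    return layers.Activation("relu")(y)                            # a[l+2] = g(z[l+2] + a[l])

inputs = tf.keras.Input(shape=(32, 32, 16))    # made-up example shape
outputs = identity_block(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()
```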

6. Suppose you have an input volume of dimension nH x nW x nC. Which of the following statements do you agree with? (Assume that the “1x1 convolutional layer” below always uses a stride of 1 and no padding.) A dimension check is sketched after the options.

  • You can use a 2D pooling layer to reduce nH, nW, and nC.
  • You can use a 1×1 convolutional layer to reduce nC but not nH and nW.
  • You can use a 1×1 convolutional layer to reduce nH, nW, and nC.
  • You can use a 2D pooling layer to reduce nH and nW, but not nC.
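
A quick dimension check for these statements (Keras, with a made-up 28x28x192 input volume): a 1×1 convolution with stride 1 and no padding changes only nC, while a 2D pooling layer changes only nH and nW.

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.keras.Input(shape=(28, 28, 192))     # nH x nW x nC = 28 x 28 x 192 (made-up)

y = layers.Conv2D(32, kernel_size=1)(x)     # 1x1 convolution, stride 1, no padding
print(y.shape)                              # (None, 28, 28, 32): only nC is reduced

z = layers.MaxPooling2D(pool_size=2)(x)     # 2D pooling
print(z.shape)                              # (None, 14, 14, 192): nH and nW shrink, nC is unchanged
```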

7. Which of the following are true about bottleneck layers? (Check all that apply) A cost comparison is sketched after the options.

  • By adding these layers we can reduce the computational cost in the inception modules.
  • Bottleneck layers help to compress the 1×1, 3×3, 5×5 convolutional layers in the inception network.
  • The bottleneck layer has a more powerful regularization effect than Dropout layers.
  • The use of bottlenecks doesn’t seem to hurt the performance of the network.
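
Rough multiply counts in the spirit of the course's Inception cost example (the 28x28x192 input, the "same" 5x5 convolution with 32 filters, and the 16-channel bottleneck are illustrative values): the 1×1 bottleneck layer cuts the cost by roughly a factor of ten.

```python
# Approximate multiplication counts, with and without a 1x1 bottleneck layer.
H, W, C_in = 28, 28, 192                       # made-up input volume

direct = (H * W * 32) * (5 * 5 * C_in)         # direct 5x5 conv with 32 filters
print(f"direct 5x5 conv:     {direct:,}")      # about 120 million multiplies

bottleneck = (H * W * 16) * (1 * 1 * C_in)     # 1x1 bottleneck down to 16 channels
conv_after = (H * W * 32) * (5 * 5 * 16)       # 5x5 conv on the reduced volume
print(f"with 1x1 bottleneck: {bottleneck + conv_after:,}")  # about 12.4 million
```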

8. When you have a small training set for constructing a classification model, which of the following transfer learning strategies would you use to build the model? (A minimal example is sketched after the options.)

  • It is always better to train a network from a random initialization to prevent bias in our model.
  • Use an open-source network trained on a larger dataset; freeze the layers and re-train the softmax layer.
  • Use an open-source network trained on a larger dataset. Use these weights as an initial point for the training of the whole network.
  • Use an open-source network trained on a larger dataset, freeze the softmax layer, and re-train the rest of the layers.
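
A minimal transfer learning sketch in Keras, using the bundled ImageNet-trained MobileNetV2 purely as a stand-in for "an open-source network trained on a larger dataset" (the 224x224 input and 5-class head are made-up values): freeze the pre-trained layers and re-train only a new softmax classifier on the small dataset.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Open-source network pre-trained on a larger dataset (ImageNet), without its head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                       # freeze the pre-trained layers

# New softmax layer for our own small classification problem (5 made-up classes).
model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation="softmax"),   # only these weights are re-trained
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```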

9. In Depthwise Separable Convolution you: (The two convolution steps are sketched after the options.)

  • For the “Depthwise” computations each filter convolves with all of the color channels of the input image.
  • Perform one step of convolution.
  • The final output is of the dimension nout x nout x nC (where nC is the number of color channels of the input image).
  • You convolve the input image with nC number of nf x nf filters (nC is the number of color channels of the input image).
  • For the “Depthwise” computations each filter convolves with only one corresponding color channel of the input image.
  • The final output is of the dimension nout x nout x nC′ (where nC′ is the number of filters used in the pointwise convolution step).
  • Perform two steps of convolution.
  • You convolve the input image with a filter of nf x nf x nC where nC acts as the depth of the filter (nC is the number of color channels of the input image).
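
A minimal Keras sketch of the two convolution steps (the 64x64x3 input, 3x3 kernel, and 8 pointwise filters are made-up values): in the depthwise step each filter convolves with only its own input channel, and the number of filters in the pointwise 1×1 step sets the depth of the final output.

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.keras.Input(shape=(64, 64, 3))        # nC = 3 color channels (made-up size)

# Step 1, "depthwise": one 3x3 filter per input channel, each convolving with
# only its own channel, so the channel count stays at nC.
y = layers.DepthwiseConv2D(kernel_size=3, padding="same")(x)
print(y.shape)                               # (None, 64, 64, 3)

# Step 2, "pointwise": a 1x1 convolution; its filter count (here 8) sets the
# depth of the final nout x nout x nC' output.
z = layers.Conv2D(filters=8, kernel_size=1)(y)
print(z.shape)                               # (None, 64, 64, 8)
```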

10. Suppose that in a MobileNet v2 Bottleneck block we have an n x n x 5 input volume, we use 30 filters for the expansion, in the depthwise convolutions we use 3 x 3 filters, and 20 filters for the projection. How many parameters are used in the complete block, assuming we don't use biases? (The count is worked out after the options.)

  • 8250
  • 1020
  • 80
  • 1101
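
A worked count for the numbers in this question, with no biases anywhere in the block: the expansion is a 1×1 convolution over the 5 input channels with 30 filters, the depthwise step uses one 3×3 filter per expanded channel, and the projection is a 1×1 convolution over the 30 expanded channels with 20 filters.

```python
# Parameter count for the described MobileNet v2 bottleneck block (no biases).
expansion  = 1 * 1 * 5 * 30    # 1x1 conv: 5 input channels, 30 filters -> 150
depthwise  = 3 * 3 * 30        # one 3x3 filter per expanded channel    -> 270
projection = 1 * 1 * 30 * 20   # 1x1 conv: 30 channels, 20 filters      -> 600

print(expansion + depthwise + projection)   # 1020
```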
