- Cost Functions
- Optimization: Learning to Minimize Cost
- Backpropagation
- Tuning Hidden-Layer Count and Neuron Count
- An Intermediate Net in Keras
- Summary
- Key Concepts
Tuning Hidden-Layer Count and Neuron Count
As with learning rate and batch size, the number of hidden layers you add to your neural network is also a hyperparameter. And as with the previous two hyperparameters, there is yet again a Goldilocks sweet spot for your network’s count of layers. Throughout this book, we’ve reiterated that the more hidden layers a deep learning network has, the more abstract the representations it can learn. That is the primary advantage of adding layers.
The disadvantage of adding layers is that backpropagation becomes less effective: As demonstrated by the plot of learning speed across the layers of a five-hidden-layer network in Figure 8.8, backprop has its greatest impact on the parameters of the hidden layer closest to the output ŷ.17 The farther a layer is from ŷ, the more diluted the effect of that layer’s parameters on the overall cost. Thus, the fifth hidden layer, which is closest to the output ŷ, learns most rapidly because its weights are associated with larger gradients. In contrast, the third hidden layer, which is several layers removed from the output layer’s cost calculation, learns about an order of magnitude more slowly than the fifth.
FIGURE 8.8 The speed of learning over epochs of training for a deep learning network with five hidden layers. The fifth hidden layer, which is closest to the output ŷ, learns about an order of magnitude more quickly than the third hidden layer.
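To observe this effect in a network of your own, you can inspect per-layer gradient magnitudes directly. The snippet below is a minimal sketch rather than the book’s code: the 784-dimensional inputs, the 64-neuron hidden layers, and the random dummy batch are all assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A five-hidden-layer network analogous to the one behind Figure 8.8
# (layer sizes and input dimensionality are illustrative assumptions):
model = Sequential([
    Dense(64, activation='relu', input_shape=(784,), name='hidden_1'),
    Dense(64, activation='relu', name='hidden_2'),
    Dense(64, activation='relu', name='hidden_3'),
    Dense(64, activation='relu', name='hidden_4'),
    Dense(64, activation='relu', name='hidden_5'),
    Dense(10, activation='softmax', name='output'),
])

# A random dummy batch stands in for real training data:
X = np.random.random((32, 784)).astype('float32')
y = tf.one_hot(np.random.randint(0, 10, 32), depth=10)

# Compute the gradient of the cost with respect to every parameter:
loss_fn = tf.keras.losses.CategoricalCrossentropy()
with tf.GradientTape() as tape:
    cost = loss_fn(y, model(X))
grads = tape.gradient(cost, model.trainable_variables)

# Each Dense layer contributes its kernel (weight matrix) and then its bias
# to trainable_variables, so the kernels sit at every other index. Layers
# nearer the output tend to show larger mean gradient magnitudes:
kernel_grads = grads[0::2]
for layer, grad in zip(model.layers, kernel_grads):
    print(f"{layer.name}: mean |gradient| = "
          f"{tf.reduce_mean(tf.abs(grad)).numpy():.2e}")
```

Early in training, you would expect hidden_5 to print a noticeably larger value than hidden_3, mirroring the order-of-magnitude gap in learning speed shown in the figure.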
Given the above, our rules of thumb for selecting the number of hidden layers in a network are:
- The more abstract the ground-truth value y you’d like to estimate with your network, the more helpful additional hidden layers may be. With that in mind, we recommend starting off with about two to four hidden layers.
- If reducing the number of layers does not increase the cost you can achieve on your validation dataset, then do it. Following the problem-solving principle called Occam’s razor, the simplest network architecture that can provide the desired result is the best; it will train more quickly and require fewer compute resources. (A sketch of this kind of depth comparison follows this list.)
- On the other hand, if increasing the number of layers decreases the validation cost, then you should pile up those layers!
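To put these rules of thumb into practice, one option is to parameterize the hidden-layer count and compare the validation cost achieved at a few different depths. The sketch below is an illustrative assumption rather than an experiment from the book: the build_model() helper, the 64-neuron layers, the plain SGD optimizer, and the MNIST data preparation are all stand-ins you would adapt to your own problem.

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

# Standard MNIST preparation: flatten the images and one-hot the labels.
(X_train, y_train), (X_valid, y_valid) = mnist.load_data()
X_train = X_train.reshape(60000, 784).astype('float32') / 255
X_valid = X_valid.reshape(10000, 784).astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_valid = to_categorical(y_valid, 10)

def build_model(n_hidden_layers, n_neurons=64):
    """Build a dense net with a configurable number of hidden layers."""
    model = Sequential()
    model.add(Dense(n_neurons, activation='relu', input_shape=(784,)))
    for _ in range(n_hidden_layers - 1):
        model.add(Dense(n_neurons, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='sgd',
                  metrics=['accuracy'])
    return model

# Start with two to four hidden layers; by Occam's razor, keep the shallowest
# architecture whose validation cost is no worse than the deeper ones:
for n in (2, 3, 4):
    history = build_model(n).fit(X_train, y_train, epochs=20, batch_size=128,
                                 validation_data=(X_valid, y_valid), verbose=0)
    print(f"{n} hidden layers -> lowest validation loss: "
          f"{min(history.history['val_loss']):.4f}")
```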
Not only is network depth a model hyperparameter, but so is the number of neurons in a given layer. If your network has many layers, then there are many layers whose neuron count you could be fine-tuning. This may seem intimidating at first, but it’s nothing to be too concerned about: A few too many neurons, and your network will have a touch more computational complexity than is necessary; a touch too few, and your network’s accuracy may be held back imperceptibly.
As you build and train more and more deep learning models for more and more problems, you’ll begin to develop a sense for how many neurons might be appropriate in a given layer. Depending on the particular data you’re modeling, there may be lots of low-level features to represent, in which case you might want to have more neurons in the network’s early layers. If there are lots of higher-level features to represent, then you may benefit from having additional neurons in its later layers. To determine this empirically, we generally experiment with the neuron count in a given layer by varying it by powers of 2. If doubling the number of neurons from 64 to 128 provides an appreciable improvement in model accuracy, then go for it. Rehashing Occam’s razor, however, consider this: If halving the number of neurons from 64 to 32 doesn’t detract from model accuracy, then that’s probably the way to go because you’re reducing your model’s computational complexity with no apparent negative effects.
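The same experimental loop can be applied to layer width. Reusing the MNIST arrays prepared in the previous sketch, the hypothetical example below sweeps the first hidden layer’s neuron count over powers of 2; the two-hidden-layer architecture and the candidate widths of 32, 64, and 128 are assumptions for illustration.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# X_train, y_train, X_valid, y_valid are assumed to be the MNIST arrays
# prepared in the previous sketch.
for n_neurons in (32, 64, 128):
    model = Sequential()
    model.add(Dense(n_neurons, activation='relu', input_shape=(784,)))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='sgd',
                  metrics=['accuracy'])
    history = model.fit(X_train, y_train, epochs=20, batch_size=128,
                        validation_data=(X_valid, y_valid), verbose=0)
    print(f"{n_neurons} neurons in the first hidden layer -> "
          f"best validation accuracy: {max(history.history['val_accuracy']):.4f}")
```

If the 32- and 64-neuron layers reach essentially the same validation accuracy, Occam’s razor says to take the 32-neuron layer and the cheaper model.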