Key Concepts
Here are the essential foundational concepts covered thus far. New terms from the current chapter are highlighted in purple.
parameters:
    weight w
    bias b
activation a
artificial neurons:
    sigmoid
    tanh
    ReLU
input layer
hidden layer
output layer
layer types:
    dense (fully connected)
    softmax
cost (loss) functions:
    quadratic (mean squared error)
    cross-entropy
forward propagation
backpropagation
optimizers:
    stochastic gradient descent
optimizer hyperparameters:
    learning rate η
    batch size
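The three artificial-neuron activation functions listed above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the text; the sample input values are invented.

```python
import numpy as np

def sigmoid(z):
    # squashes any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # squashes input into (-1, 1); zero-centered, unlike sigmoid
    return np.tanh(z)

def relu(z):
    # passes positive inputs through unchanged, zeroes out negatives
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))
print(tanh(z))
print(relu(z))  # [0. 0. 2.]
```

Note that ReLU is the only one of the three that does not saturate for large positive inputs, which is one reason it is a common default for hidden layers.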
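The layer types and cost functions above fit together as follows: a dense layer computes a weighted sum of its inputs plus a bias, a softmax output layer turns raw scores into probabilities, and a cost function scores the output against the true label. A minimal sketch, with toy dimensions and random inputs invented for illustration:

```python
import numpy as np

def dense(x, W, b):
    # fully connected (dense) layer: z = Wx + b
    return W @ x + b

def softmax(z):
    # converts raw scores into probabilities that sum to 1
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def mse(y_hat, y):
    # quadratic (mean squared error) cost
    return np.mean((y_hat - y) ** 2)

def cross_entropy(p_hat, y):
    # cross-entropy cost for a one-hot true label y
    return -np.sum(y * np.log(p_hat + 1e-12))

rng = np.random.default_rng(42)   # toy example; dimensions are arbitrary
x = rng.normal(size=4)            # input layer: 4 features
W = rng.normal(size=(3, 4))       # weights w of a dense layer with 3 neurons
b = np.zeros(3)                   # biases b
p = softmax(dense(x, W, b))       # forward propagation through one layer
y = np.array([0.0, 1.0, 0.0])     # one-hot true label
print(p.sum())                    # probabilities sum to 1
print(cross_entropy(p, y))
```

Cross-entropy is the usual pairing with a softmax output; the quadratic cost is more natural for regression outputs.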
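Finally, forward propagation, backpropagation, stochastic gradient descent, and the two optimizer hyperparameters can all be seen working together in a single sigmoid neuron trained on a toy task. This is a from-scratch sketch with invented data, not the book's code; the learning rate and batch size values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 2))              # toy inputs (invented data)
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy binary targets

w = np.zeros(2)   # weights
b = 0.0           # bias
eta = 0.5         # learning rate η (optimizer hyperparameter)
batch_size = 32   # batch size (optimizer hyperparameter)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    idx = rng.permutation(len(X))          # shuffle: "stochastic" in SGD
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch], y[batch]
        a = sigmoid(xb @ w + b)            # forward propagation
        # backpropagation of the quadratic cost C = mean((a - y)^2) / 2:
        delta = (a - yb) * a * (1 - a)     # dC/dz per example (chain rule)
        grad_w = xb.T @ delta / len(xb)    # dC/dw, averaged over the batch
        grad_b = delta.mean()              # dC/db
        w -= eta * grad_w                  # SGD parameter update
        b -= eta * grad_b

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(acc)
```

Each pass over a mini-batch runs the forward step, backpropagates the cost's gradient to every parameter via the chain rule, then nudges the parameters downhill by a step of size η.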