Backpropagation is a common algorithm for training a neural network. It works by computing the gradient of the overall error with respect to each weight, and then using stochastic gradient descent to iteratively fine-tune the weights of the network. In this episode, we compare this concept to finding a location on a map, marble maze games, and golf.
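Since the episode only describes the idea, here is a minimal sketch of what it looks like in code: a tiny two-layer network trained on the XOR problem, where the backward pass applies the chain rule by hand and stochastic gradient descent nudges each weight against its gradient. The network size, learning rate, and the XOR task are illustrative assumptions, not details from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic task that requires a hidden layer.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# A 2-4-1 network with a tanh hidden layer; sizes and the learning
# rate are illustrative choices.
W1 = rng.standard_normal((2, 4)) * 0.5
b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.1  # learning rate

for epoch in range(10_000):
    # Stochastic gradient descent: one example per update, random order.
    for i in rng.permutation(len(X)):
        x_i, y_i = X[i:i + 1], y[i:i + 1]

        # Forward pass.
        h = np.tanh(x_i @ W1 + b1)   # hidden activations
        pred = h @ W2 + b2           # network output
        err = pred - y_i             # squared-error loss is err**2

        # Backward pass (backpropagation): the chain rule pushes the
        # gradient of the error back through each layer in turn.
        d_pred = 2 * err             # d(loss)/d(pred)
        grad_W2 = h.T @ d_pred
        grad_b2 = d_pred.ravel()
        d_h = d_pred @ W2.T          # gradient flowing into the hidden layer
        d_z = d_h * (1 - h ** 2)     # through tanh: d(tanh z)/dz = 1 - tanh^2 z
        grad_W1 = x_i.T @ d_z
        grad_b1 = d_z.ravel()

        # SGD step: move every weight a small step against its gradient.
        W2 -= lr * grad_W2
        b2 -= lr * grad_b2
        W1 -= lr * grad_W1
        b1 -= lr * grad_b1

# Predictions should approach [0, 1, 1, 0].
print(np.round(np.tanh(X @ W1 + b1) @ W2 + b2, 2))
```

The map and golf analogies from the episode correspond to the last step above: the gradient tells you which direction is "downhill" in error, and the learning rate controls how big a step you take in that direction.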
