By Jim Davies

Photo by Nadi Lindsay from Pexels
Neural networks need to “dream” of weird, senseless examples to learn well. Maybe we do, too.
Maybe our brains are serving up weird dreams to, in a way, fight the tide of monotony, breaking up bland, regimented experience with novelty. This has an adaptive logic: animals that model the patterns in their environment too rigidly sacrifice the ability to generalize, to make sense of new experiences, to learn. AI researchers call this “overfitting”: fitting a model too closely to a given dataset. A face-recognition algorithm, for example, trained too long on one set of pictures might start identifying individuals by the trees and other objects in the background. Rather than learning the general rules it should be learning (the contours of a face, regardless of expression or background), it simply memorizes the examples in its training set.

Could it be that our minds are working harder, churning out stranger dreams, to stave off the overfitting that might otherwise result from the learning we do about the world every day?
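To make the idea concrete, here is a minimal sketch of overfitting in Python (an illustration, not anything from the article): fitting a high-degree polynomial to a handful of noisy points drives the training error toward zero while the error on unseen points balloons, much as the hypothetical face recognizer memorizes backgrounds instead of faces.

```python
# Minimal overfitting demo: the underlying rule is y = sin(x),
# observed at a few points with noise. A low-degree polynomial
# learns the general shape; a degree-9 polynomial "memorizes"
# the 10 noisy training points and fails on points in between.
import numpy as np

rng = np.random.default_rng(0)

x_train = np.sort(rng.uniform(0, 3, 10))
y_train = np.sin(x_train) + rng.normal(0, 0.1, 10)

x_test = np.linspace(0, 3, 50)   # unseen points
y_test = np.sin(x_test)          # the true rule, noise-free

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-3 fit tracks the underlying curve, while the degree-9 fit passes through every noisy training point almost exactly yet swings wildly between them: the same memorization-over-generalization failure described above.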
Read the full article at Nautilus