Dreams from an ML Perspective

Some fascinating research suggests that dreams could be the brain's way of avoiding overfitting as it learns from experience. The idea rests on a standard practice in machine learning: data scientists deliberately inject noise and corruption into training data to keep models such as neural networks from becoming "overfitted" — fitting their training examples so closely that they fail to generalize to new situations. The hypothesis also connects to a long-standing interest, shared by cognitive science and computer science, in analogy: humans learn readily from analogous situations, yet our mental models somehow stay flexible enough to handle less-analogous events.
