What happens next and when “next” happens: Mechanisms of spatial and temporal prediction
The physics of the environment provides a rich spatiotemporal structure for our experience. Objects move in predictable ways, and their features and identities remain stable across time and space. How does the brain leverage this structure to make predictions about, and learn from, the environment? This thesis describes research centered on a mechanistic description of sensory prediction called LeabraTI (TI: Temporal Integration) that explains precisely how predictive processing is accomplished in neocortical microcircuits. The fundamental prediction of LeabraTI is that predictions and sensations are interleaved across the same neural tissue at an overall rate of 10 Hz, corresponding to the widely studied alpha rhythm of posterior cortex. Experiments described herein tested this prediction by manipulating the spatiotemporal properties of three-dimensional object stimuli in a laboratory setting. EEG results indicated that predictions were subserved by ~10 Hz oscillations that reliably tracked the onset of stimuli and differentiated between spatially predictable and unpredictable object sequences. There was a behavioral advantage for combined spatial and temporal predictability in the discrimination of unlearned objects, but prolonged study of objects under this combined predictability context impaired discriminability relative to other learning contexts. This counterintuitive pattern of results was accounted for by a neural network model that learned three-dimensional viewpoint invariance with LeabraTI’s spatiotemporal prediction rule. Synaptic weight scaling from prolonged learning built viewpoint invariance, but led to confusion between ambiguous views of objects, producing slightly lower performance on average. Overall, this work advances a biological architecture for sensory prediction accompanied by empirical evidence that supports learning of realistic time- and space-varying inputs.
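The core computational idea — that the same neural tissue alternates between a prediction phase and a sensation phase within each ~100 ms (10 Hz) cycle, with the mismatch between the two driving learning — can be sketched in a few lines. The sketch below is illustrative only, not the actual LeabraTI implementation: the network size, the tanh activation, and the simple delta-rule weight update are all assumptions made for brevity.

```python
# Illustrative sketch of 10 Hz prediction/sensation interleaving.
# NOT the LeabraTI algorithm itself; names and the learning rule are
# simplified stand-ins for the thesis's spatiotemporal prediction rule.
import numpy as np

rng = np.random.default_rng(0)

n_units = 8
W = rng.normal(scale=0.1, size=(n_units, n_units))  # prediction weights
lr = 0.05                                           # learning rate (illustrative)

def alpha_cycle(prev_activity, sensory_input, W):
    """One ~100 ms cycle: first predict from the prior state, then sense."""
    prediction = np.tanh(W @ prev_activity)   # predict phase (first half of cycle)
    sensation = np.tanh(sensory_input)        # sense phase (second half of cycle)
    error = sensation - prediction            # prediction error
    W += lr * np.outer(error, prev_activity)  # error-driven weight update
    return sensation, error

# Drive the network with a repeating (i.e., predictable) input sequence.
sequence = [rng.normal(size=n_units) for _ in range(4)]
activity = np.zeros(n_units)
errors = []
for step in range(200):
    activity, err = alpha_cycle(activity, sequence[step % 4], W)
    errors.append(np.linalg.norm(err))

# With a predictable sequence, prediction error shrinks over cycles.
print(errors[0], errors[-1])
```

Run repeatedly over a predictable sequence, the per-cycle prediction error declines, which is the sense in which interleaved prediction and sensation support learning of time-varying input.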