High-level properties of human cortical vision have long been studied mainly in nonhuman primates. Yet, at least for transformation-invariant object recognition, rodents have now been established as a valid model for studying some characteristics of object recognition (1, 2). In this study, we aimed to (1) train mice on a texture-based shape discrimination and generalization task using Saksida-Bussey touchscreen chambers, (2) record single-unit activity with multi-channel silicon probe arrays in the primary visual cortex, and (3) use unsupervised and supervised machine-learning approaches to assess the presence of shape-related information in isolated single-unit responses (150 cells, 8 animals). We found that mice can learn to distinguish mid-level visual features through an incremental training approach, ultimately identifying texture-over-texture shapes above a 77.5%-correct threshold. We also found that, after applying a dimensionality reduction approach (multi-dimensional scaling) to the full temporal response of neurons in V1, responses cluster surprisingly well by shape, although grating orientation remains a dominant driver. Finally, we investigated the ability of classifiers to generalize when differentiating between two shapes of differing texture orientation. Taken together, our results indicate weak but detectable shape-related information in V1. We can also confirm that mice can differentiate between second-order shapes despite the strong pull of orientation selectivity.
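The two analysis steps described above (unsupervised embedding with multi-dimensional scaling, then supervised decoding of shape identity) can be illustrated with a minimal sketch. This is not the authors' pipeline: the trial counts, firing-rate statistics, and choice of logistic regression as the classifier are all hypothetical, with simulated responses standing in for the recorded 150-cell population.

```python
# Illustrative sketch only: simulated V1 population responses,
# 2-D MDS embedding, and cross-validated shape decoding.
import numpy as np
from sklearn.manifold import MDS
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_cells = 150   # population size from the abstract
n_trials = 40   # hypothetical trials per shape

# Hypothetical tuning: each shape evokes a distinct mean
# firing-rate vector across the population, plus trial noise.
shape_a = rng.normal(5.0, 1.0, n_cells)
shape_b = rng.normal(5.5, 1.0, n_cells)
X = np.vstack([
    rng.normal(shape_a, 1.0, (n_trials, n_cells)),
    rng.normal(shape_b, 1.0, (n_trials, n_cells)),
])
y = np.repeat([0, 1], n_trials)  # shape labels

# Unsupervised step: embed trial-by-trial responses in 2-D with MDS;
# well-separated classes form visible clusters in the embedding.
emb = MDS(n_components=2, random_state=0).fit_transform(X)

# Supervised step: cross-validated decoding of shape identity
# from the raw population responses.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(emb.shape)
print(round(acc, 2))
```

In a real analysis the rows of `X` would be binned spike counts (or full temporal response vectors) per trial, and decoding accuracy would be compared against a shuffled-label baseline to establish that the shape information is above chance.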
Number of pages: 1
Publication status: Published - 17 Jan 2019
Event: Belgian Brain Congress 2018, Liège, Belgium, 19 Oct 2018