How does ‘attention’ affect perceptual processing? Does it alter the way even early visual processing works? In Jabar & Anderson (2015), we hypothesized that exposure to probabilistic stimuli can result in V1 neurons being tuned differently, and that this is what drives differences in perceptual precision. But are such low-level mechanisms enough to cause the changes we see at the behavioural level?
While that question could be answered through electrophysiological investigations (see my previous post!), it can also be probed using neural models. Assuming a population of orientation-sensitive neurons, we can take the ‘neural representation’ of an orientation to be the resultant vector encoded by that population. How well the input stimulus is represented can then be measured as the difference (or ‘angular error’) between the input orientation and the decoded output.
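To make this concrete, here is a minimal sketch of that encode/decode scheme. The app itself is written in R and its code isn't shown in this post, so this is a Python illustration under two stated assumptions: von Mises-style tuning curves and a population-vector decoder (both plausible but hypothetical choices, as are the parameter values). Orientation is circular with period 180°, so the circular arithmetic works on doubled angles.

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons = 400
# Preferred orientations tile 0-180 deg evenly.
prefs = np.linspace(0.0, 180.0, n_neurons, endpoint=False)

def tuning(stim_deg, prefs_deg, kappa=4.0, gain=10.0):
    """Von Mises-style tuning: expected firing rate of each neuron to a stimulus.
    kappa controls tuning sharpness, gain the peak rate (hypothetical values)."""
    delta = np.deg2rad(2.0 * (stim_deg - prefs_deg))   # doubled-angle difference
    return gain * np.exp(kappa * (np.cos(delta) - 1.0))

def decode(rates, prefs_deg):
    """Population-vector decode: angle of the resultant vector over doubled angles."""
    theta = np.deg2rad(2.0 * prefs_deg)
    resultant = np.arctan2(np.sum(rates * np.sin(theta)),
                           np.sum(rates * np.cos(theta)))
    return np.rad2deg(resultant) / 2.0 % 180.0

stim = 37.0
rates = rng.poisson(tuning(stim, prefs))        # noisy spike counts
est = decode(rates, prefs)
err = (est - stim + 90.0) % 180.0 - 90.0        # angular error, wrapped to (-90, 90]
```

Without spiking noise the decoder recovers the input essentially exactly; the interesting quantity is the spread of `err` once noise is added.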
Based on that assumption, the effect of changing neural parameters can be modelled. Do tuning changes really affect representational precision? What about gain changes? Below is a simple online V1 model that I wrote in R Shiny. Play around and see for yourself what changing the parameters does! The ‘mean error’ is the average angular error across the specified number of trials.
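That ‘mean error’ can be approximated by Monte Carlo: run many trials of the encode/decode scheme and average the absolute angular error, sweeping the tuning sharpness and gain. Again, this is a Python sketch with made-up parameter values, not the app's actual R code:

```python
import numpy as np

rng = np.random.default_rng(0)
prefs = np.linspace(0.0, 180.0, 400, endpoint=False)  # 400 neurons, as in the app
theta = np.deg2rad(2.0 * prefs)                       # doubled preferred angles

def mean_error(kappa, gain, n_trials=500):
    """Mean absolute angular error (deg) of a population-vector decode."""
    errs = np.empty(n_trials)
    for t in range(n_trials):
        stim = rng.uniform(0.0, 180.0)
        # Expected rates under von Mises-style tuning, plus Poisson spiking noise.
        delta = np.deg2rad(2.0 * (stim - prefs))
        rates = rng.poisson(gain * np.exp(kappa * (np.cos(delta) - 1.0)))
        est = np.rad2deg(np.arctan2((rates * np.sin(theta)).sum(),
                                    (rates * np.cos(theta)).sum())) / 2.0 % 180.0
        errs[t] = abs((est - stim + 90.0) % 180.0 - 90.0)
    return errs.mean()

# For a 1-D stimulus dimension, sharper tuning (higher kappa) and higher gain
# should both shrink the mean error.
print(mean_error(kappa=2.0, gain=5.0))
print(mean_error(kappa=8.0, gain=5.0))
print(mean_error(kappa=2.0, gain=20.0))
```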
Clearly there are other neural parameters that could be modelled, such as anisotropy, the number of neurons in the population (currently 400), etc. And of course, we do want to match the models to observed behaviour (which we will do in our upcoming VSS poster, so look out for that!).
For aficionados of all things technical, the online model is made possible through the use of a Linux ‘cloud’ server (in this case, through Linode). From there it was a matter of installing Shiny Server on that machine. Shiny is an R package that allows for interactive and dynamic applets, which can be deployed on the web. Shiny also seems to be compatible with other R packages (at least, those I’ve tested), but do remember to give all users access to the installed packages, since the server runs apps under its own user account. Shiny Server listens on port 3838 by default. Because the code for the neural model is (somewhat) efficient, changes to the user input update the state of the model near-instantaneously.