Beyond Accuracy and Reaction Time

Cognitive experiments commonly use some kind of detection paradigm. These usually involve forced choices (hit button A when you see/think about X, hit button B for Y). Such tasks necessarily require decision-making, which arguably could contaminate the effects of interest.

These tasks also provide the kind of data that we are all familiar with: accuracy and reaction time. But is this really the best way to obtain data about, say, visual processing? Why not just ask participants to report or draw what they saw?

We have been working with estimation tasks that afford us exactly this kind of information. Yes, they can be harder to program and to analyse than detection tasks, but they yield much richer data. Below is an illustration of one trial of a task we have developed. It uses only a mouse and a single button for the responses.

Participants initiate the trial by clicking the fixation icon. A stimulus (a grating with a central dot) then appears somewhere on the screen for a brief period. Participants first move the mouse to where they perceived the grating and click once. This simple act gives us multiple pieces of information:

  1. The time it takes for the participant to start moving ('Initiation RT')
  2. The additional time required to estimate the location ('Spatial RT')
  3. The spatial precision of the localization itself:

Spatial precision can be broken down further into eccentricity and bearing components. In the example trial, the participant both overshot (positive eccentricity error) and localized slightly anticlockwise (negative bearing error).
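
To make the decomposition concrete, here is a minimal sketch of how one might compute the two error components from the fixation, target, and click coordinates. The function name, coordinate system, and sign conventions are illustrative assumptions, not the actual task code.

```python
import math

def spatial_errors(fix, target, click):
    """Decompose a localization click into eccentricity and bearing errors.

    fix, target, click: (x, y) in screen pixels, with y growing downward
    (typical screen convention). Positive eccentricity error = overshoot;
    negative bearing error = anticlockwise of the target. Both sign
    conventions are assumptions chosen to match the text above.
    """
    def polar(p):
        dx, dy = p[0] - fix[0], p[1] - fix[1]
        return math.hypot(dx, dy), math.atan2(dy, dx)

    r_true, a_true = polar(target)
    r_resp, a_resp = polar(click)
    ecc_err = r_resp - r_true                      # radial over/undershoot
    bear_err = math.degrees(a_resp - a_true)       # angular offset about fixation
    bear_err = (bear_err + 180.0) % 360.0 - 180.0  # wrap into (-180, 180]
    return ecc_err, bear_err
```
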
If we record the mouse position over time, we can also characterize the entire movement trajectory within a trial.
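
As a rough illustration, trajectory recording can be as simple as polling the mouse at a fixed rate and storing timestamped positions. The sketch below uses pygame purely as a stand-in for whatever stimulus library the task is built on; the sampling rate and function name are assumptions.

```python
import time
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))

def record_trajectory(duration_s=2.0, hz=100):
    """Return a list of (t, x, y) mouse samples spanning duration_s seconds."""
    clock = pygame.time.Clock()
    samples = []
    t0 = time.perf_counter()
    while time.perf_counter() - t0 < duration_s:
        pygame.event.pump()                      # keep the event queue serviced
        x, y = pygame.mouse.get_pos()
        samples.append((time.perf_counter() - t0, x, y))
        clock.tick(hz)                           # cap the sampling rate
    return samples
```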

Each coloured line represents the trajectory for a single trial. With such a visualization it becomes clear that participants move out ballistically, then make finer adjustments as they approach the target. This is all potentially useful data.
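
One way (among many) to quantify that ballistic-then-corrective pattern, and to pull the two reaction times out of the same samples, is to compute an instantaneous speed profile and threshold it. The 50 px/s threshold and the idea of defining movement onset this way are illustrative assumptions, not the analysis we actually ran.

```python
import numpy as np

def speed_profile(samples):
    """Instantaneous speed (px/s) from a list of (t, x, y) samples."""
    t, x, y = (np.asarray(v, dtype=float) for v in zip(*samples))
    return t[1:], np.hypot(np.diff(x), np.diff(y)) / np.diff(t)

def initiation_rt(samples, threshold=50.0):
    """Time of the first sample whose speed exceeds threshold (px/s):
    a crude proxy for movement onset, assuming sampling starts at
    stimulus onset. Returns None if no movement is detected."""
    t, speed = speed_profile(samples)
    moving = np.nonzero(speed > threshold)[0]
    return float(t[moving[0]]) if moving.size else None

def spatial_rt(samples, click_time, threshold=50.0):
    """Additional time from movement onset to the localization click."""
    onset = initiation_rt(samples, threshold)
    return None if onset is None else click_time - onset
```
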

But there's more! Once they have made the localization, participants can then draw the orientation that they perceived. Participants report this to be a very smooth process, and our data indicate that it yields high precision. This gives us still more information:

  1. How long it takes participants to draw the orientation ('Orientation RT')
  2. How precise their orientation estimate is

In the example trial, the participant reported an orientation more clockwise than the true one, biased towards the horizontal.
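
Since a grating looks identical after a 180° rotation, the signed orientation error should be wrapped into (-90°, 90°]. A minimal sketch, assuming orientations are reported in degrees and taking positive as clockwise (the task's actual sign convention may differ):

```python
def orientation_error(reported_deg, true_deg):
    """Signed orientation error, wrapped into (-90, 90] degrees.
    Gratings repeat every 180 deg, so errors beyond 90 deg wrap around.
    Sign convention (positive = clockwise) is an assumption."""
    err = (reported_deg - true_deg) % 180.0
    return err - 180.0 if err > 90.0 else err

# e.g. orientation_error(10, 170) == 20.0 (a 20 deg clockwise error),
# even though the raw difference is -160.
```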

The paradigm gives us all of these data on every trial. Is spatial precision coupled to orientation precision? Can they be affected independently? What do the different reaction time measures actually measure? As you can see, there is much to be gained by moving beyond simple detection tasks and the commonplace accuracy and RT measures.

For those interested in examples, this task was used to collect the data we presented at VSS; the paper is currently in review. The same data were also used to model probability effects as changes in neural tuning versus gain, presented at the same conference.

Try it for yourself! See my previous post.

That is not to say there aren't instances where detection tasks are useful. Stay tuned for Part 2, where we discuss when they are, along with a new paradigm.

Date: 2016-05-31 Tue 00:00

Author: Britt Anderson

Created: 2024-05-02 Thu 03:14
