Following up from Part 1, when are detection tasks useful?
Intuition would suggest that if you want a clean measure of the time it takes to perceive/recognize an object, a detection task might be apt. But what really goes on in making a ‘detection’? Typically, participants press one of two buttons to indicate one of two options. Do differences in RT really measure differences in the time required to detect? Not necessarily! There could be differences in response preparation too, especially if participants are given a cue prior to the target (as most studies of attention do), which could systematically bias response preparation toward one choice over the other.
What might be a way to study this? Instead of a button press (which is binary: either on or off), what if participants made their response on something more analog? Again, intuition would suggest that this requires expensive equipment. Not necessarily! What about using (cheap and consumer-grade) gaming devices? Game controllers, such as the Xbox 360 controller, have analog triggers. How about simply asking participants to hold these down to a halfway point, and to pull one in fully to make a detection?
We did just that, in a new paradigm we have been developing. We also made it so that a trial does not begin until participants are holding down the left and right triggers (the controller vibrates to indicate when they aren’t pulling enough). They then perform a detection task with each trigger representing one of the two options (in this case, participants classify a target as either green or blue).
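The trial-gating idea above can be sketched in a few lines. This is a minimal illustration, not our actual experiment code: `read_triggers()` and `vibrate()` are hypothetical stand-ins for the real gamepad interface, trigger values are assumed normalized from 0.0 (released) to 1.0 (fully pulled), and the “halfway” band is an assumed value.

```python
# Sketch of the trial-gating logic: the trial only starts once both analog
# triggers are held near a halfway point. read_triggers() and vibrate() are
# hypothetical stand-ins for the real gamepad interface.

HOLD_MIN, HOLD_MAX = 0.4, 0.6  # acceptable "halfway" band (assumed values)

def both_triggers_held(left, right):
    """True when both trigger positions sit inside the halfway band."""
    return HOLD_MIN <= left <= HOLD_MAX and HOLD_MIN <= right <= HOLD_MAX

def wait_for_hold(read_triggers, vibrate):
    """Poll the triggers until both are held; vibrate as a reminder when not."""
    while True:
        left, right = read_triggers()
        if both_triggers_held(left, right):
            return
        vibrate()  # nudge the participant to pull to the halfway point
```

In a real experiment loop, this would run before each trial's target onset.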
Well, is this method precise? Given that gaming does require a certain level of input precision, it seems a good bet that the controller is precise enough for a detection task.
And apparently, it really is quite precise! The green line indicates the trigger state for the trigger that maps onto the ‘green’ response, and likewise for blue. In trial 41, this participant sees a blue target, starts responding at about 300ms post-onset, and takes an additional 20ms or so to actually hit the threshold. In a button-based experiment, you would only get this threshold point. Here, you can clearly see that the participant was more prepped for a ‘blue’ response than a ‘green’ one. From such data, it is trivial to calculate the baseline state, the force of the response, how fast participants initiate a response, how fast they let go of a response, etc.
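To make the “trivial to calculate” part concrete, here is one way those measures could be extracted from a single trigger’s trace. This is an illustrative sketch, not our analysis code: the trace is assumed to be sampled at 1000 Hz from target onset with values normalized 0 to 1, and the threshold constants are assumptions.

```python
# Sketch: extracting response measures from one trigger's trace (a list of
# positions sampled at 1000 Hz from target onset, normalized 0..1).
# Threshold values below are illustrative assumptions.

DETECT = 0.9        # detection threshold (assumed)
INIT_DELTA = 0.05   # departure from baseline that counts as initiation (assumed)

def response_measures(trace, baseline_ms=50):
    """Return (baseline, initiation_ms, threshold_ms) for one trigger."""
    # Baseline preparation: mean trigger position before any movement.
    baseline = sum(trace[:baseline_ms]) / baseline_ms
    # Initiation: first sample that departs from baseline by INIT_DELTA.
    initiation_ms = next(
        (t for t, v in enumerate(trace) if v - baseline >= INIT_DELTA), None)
    # Threshold crossing: what a button-based experiment would call the RT.
    threshold_ms = next(
        (t for t, v in enumerate(trace) if v >= DETECT), None)
    return baseline, initiation_ms, threshold_ms
```

With one-millisecond samples, the indices double as times in ms, which keeps the bookkeeping simple.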
Do some manipulations have their effect only via changing the baseline response preparation? More prep would mean less travel distance to the detection threshold, after all. Or do they affect the actual time to initiate the movement? Is there a relation between preparation and initiation? These are the kinds of questions that such a method affords investigating.
Another issue: are ‘hits’ (as measured by the experiment) always really hits? Are ‘false alarms’ really just that? What about trials that are answered correctly, but with longer RTs? Could some of those still contain a false alarm that simply never reached threshold? Would this method have anything to say about that?
Evidently, yes! In trial 47, the participant sees a green target, but pulls the blue trigger all the way to the detection threshold. Button experiments would call this a false alarm and leave it at that. But notice how the participant also tried to press the green trigger; alas, he had already committed to the error.
In the following trial, the participant starts vacillating a bit. The blue target was shown, and the participant started to initiate the blue trigger, but, perhaps because of the previous error, began pressing the green trigger, then managed to correct himself and ended up pressing the (correct) blue trigger. Again, in a typical button-press experiment, this would just be a ‘hit’ with a long RT. Clearly, that would miss a lot of the response dynamics that actually went on in the trial.
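One could flag such trials automatically by checking whether the incorrect trigger ever left its baseline, even without reaching the detection threshold. The sketch below assumes normalized traces and an illustrative excursion cutoff; both are stand-in values, not parameters from our experiment.

```python
# Sketch: flagging "hits" that hide a partial error. A trial counts as
# containing a covert false alarm when the incorrect trigger departed from
# its baseline by more than EXCURSION, even though it never reached the
# detection threshold. Traces normalized 0..1; cutoff value is assumed.

EXCURSION = 0.1  # how far past baseline counts as a covert initiation (assumed)

def partial_error(incorrect_trace, baseline):
    """True when the incorrect trigger shows a sub-threshold excursion."""
    return max(incorrect_trace) - baseline > EXCURSION
```

Sorting ‘hits’ by whether they pass this check would separate clean hits from the vacillating trials described above.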
So, instead of looking at the actual time to cross a trigger threshold (or complete a button press), perhaps detection is more closely related to the initiation time? Using this type of paradigm raises a bunch of questions and concerns, for sure. But just because such concerns aren’t obvious (or can’t be studied) in button-based experiments does not mean they aren’t there.
Given our lab’s research interests: do probability effects occur only when response differences are possible? Given that we predict different mechanisms for spatial vs. feature probability, do they create different types of RT facilitation? Can we see them in the response preparation states or in the initiation times? Can false alarms in initiation be explained by something like a drift diffusion model, where the noise sometimes overwhelms the signal? Stay tuned as we explore these issues, and more!
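The drift-diffusion intuition mentioned above can be illustrated with a toy simulation. This is only a minimal two-boundary random walk with made-up parameter values, not a fitted model: on most trials the drift carries the accumulator to the correct boundary, but occasionally noise pushes it across the wrong one first, producing an error in initiation.

```python
import random

# Sketch: a minimal two-boundary drift diffusion walk. The drift pushes the
# accumulator toward the correct (+bound) boundary, but on some trials the
# noise overwhelms the signal and the wrong (-bound) boundary is hit first,
# i.e. a false alarm in initiation. All parameter values are illustrative.

def ddm_trial(drift=0.02, noise=0.15, bound=1.0, rng=random):
    """Return ('correct' or 'error', number of steps to reach a boundary)."""
    x, t = 0.0, 0
    while abs(x) < bound:
        x += drift + rng.gauss(0, noise)  # signal plus Gaussian noise per step
        t += 1
    return ('correct' if x >= bound else 'error'), t
```

Running many such trials yields mostly correct responses plus a minority of noise-driven errors, qualitatively like the covert false alarms in the trigger traces.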
For those thinking of implementing this analog trigger method: the experiment was coded in Python on an Arch Linux system, using the xboxdrv package. Threading was used in the Python code to run the gamepad and collect responses in a separate thread. The result was stable data collection at 1000Hz.
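The general shape of such a sampling thread might look like the following. This is a generic sketch, not our actual implementation: `read_triggers()` stands in for whatever function reads the xboxdrv-backed device, and the timing scheme is just one reasonable way to target 1000 Hz.

```python
import threading
import time
from collections import deque

# Sketch of the sampling loop: a background thread polls the gamepad at a
# target rate and appends timestamped samples to a shared deque.
# read_triggers() is a stand-in for the real device-reading function.

def sample_loop(read_triggers, out, stop, hz=1000):
    """Poll read_triggers() at ~hz and record (timestamp, left, right)."""
    period = 1.0 / hz
    next_t = time.perf_counter()
    while not stop.is_set():
        left, right = read_triggers()
        out.append((time.perf_counter(), left, right))
        next_t += period
        # Sleep only for the remainder of this sample period, if any.
        time.sleep(max(0.0, next_t - time.perf_counter()))

def start_sampling(read_triggers, hz=1000):
    """Start the sampler; returns (samples, stop_event, thread)."""
    out, stop = deque(), threading.Event()
    thread = threading.Thread(
        target=sample_loop, args=(read_triggers, out, stop, hz), daemon=True)
    thread.start()
    return out, stop, thread
```

On a lightly loaded system this kind of loop can sustain roughly millisecond sampling, though actual timing precision depends on the OS scheduler.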