Accuracy of response times in 4.0 on Macs

I am using SuperLab 4.0.1 on a Macintosh iBook G4 with OS 10.4.4. The participant responds on the iBook keyboard to text presented (rather than sound or picture files). How can I estimate the accuracy of the response times that are reported?

I am not sure that I can give you a good answer, but I’ll try.

When using a USB keyboard, the accuracy of the keyboard is going to be about 10 to 12 milliseconds. With a laptop’s built-in keyboard, it all depends on how it is wired internally. If it is wired as a USB peripheral, its accuracy will be the same 10-12 milliseconds. If not, there is no way to tell.

I’d be happy to hear from other users on this issue.

10-12 ms?

Hisham, could you explain a bit more what you mean? I think (and hope) that you mean that there is a consistent lag from the time a USB keyboard’s key is pressed until the keypress is registered by SuperLab, and that, depending on the machine, the lag tends to be between 10 and 12 ms. That is, RTs may seem to be 10-12 ms longer than they actually were, but consistently so, within a ±1 ms window.

If, on the other hand, you mean that a measured RT from a USB device is accurate only to ± 10-12 ms, then that’s terrible, an order of magnitude worse than acceptable.

Greg

The answer to this question depends on a combination of hardware and operating system. For a keyboard (regardless of type) on both Mac OS X and Windows, SuperLab looks for key-press events. SuperLab’s accuracy depends on the time difference between when the key on the keyboard is triggered and when the operating system finally delivers an event that says this occurred. I haven’t explicitly tested this, and I didn’t find anything in Apple’s documentation specifying how long it would take.

On Mac OS X, SuperLab looks for the lowest-level keyboard events, before text services is given a chance to handle a key-press. SuperLab checks for these low-level events roughly every 0.5 ms on Mac OS X. On Windows, SuperLab checks even more often, because Windows doesn’t provide the sub-millisecond granularity in thread timing that would let it sleep briefly between checks.
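If you’re curious what watching low-level key events can look like, here’s a minimal sketch using a Quartz event tap. This is just an illustration of the idea, not SuperLab’s actual code, and the timestamping shown is one way among several:

```c
#include <ApplicationServices/ApplicationServices.h>
#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

/* Illustration only: timestamp key-down events at the lowest level
 * available to a user process, via a Quartz event tap. */
static CGEventRef onEvent(CGEventTapProxy proxy, CGEventType type,
                          CGEventRef event, void *refcon) {
    if (type == kCGEventKeyDown) {
        /* mach_absolute_time() has sub-microsecond resolution, but the
         * event has already spent time in the USB/HID pipeline by now. */
        uint64_t now = mach_absolute_time();
        int64_t key = CGEventGetIntegerValueField(event, kCGKeyboardEventKeycode);
        printf("keycode %lld at %llu ticks\n",
               (long long)key, (unsigned long long)now);
    }
    return event; /* pass the event through unchanged */
}

int main(void) {
    CFMachPortRef tap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap,
                                         kCGEventTapOptionListenOnly,
                                         CGEventMaskBit(kCGEventKeyDown),
                                         onEvent, NULL);
    if (!tap) return 1; /* needs assistive-access privileges on newer systems */
    CFRunLoopSourceRef src = CFMachPortCreateRunLoopSource(kCFAllocatorDefault,
                                                           tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
    CFRunLoopRun();
    return 0;
}
```

Compile with `cc tap.c -framework ApplicationServices`. The point is that even with a fast callback, the timestamp is taken after the operating system has already delivered the event.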

I hope that helps a little.

On Drivers and Buffers

Hi Greg,

I’m afraid that the 10-12 milliseconds that I described in my earlier post is not a consistent lag.

In his reply, Hank wrote “SuperLab’s accuracy depends on the time difference between when the key on the keyboard is triggered and when the operating system finally delivers an event that says this occurred”. This time difference is due to buffering. Many people assume that since devices are now much faster, data arrives at the program a lot sooner and hence delays are all but gone.

Nothing could be further from the truth. In fact, as devices get faster (e.g. 12 Mbit/s for USB 1.1 vs. 480 Mbit/s for USB 2.0), the only safe assumption is that buffers will get larger in order to handle more data. How quickly data is moved from the buffer and delivered to the application program that needs it depends entirely on the operating system or the device driver that controls the device.
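To put a number on it, here is a toy simulation (mine, not a measurement of any actual device): if buffered input is handed to the application only every 10 milliseconds, each response is delayed until the next delivery point.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define POLL_MS 10.0 /* assumed delivery interval, ms */

int main(void) {
    const double true_rt = 500.0; /* hypothetical true reaction time, ms */
    const int n = 100000;
    double sum = 0.0, sumsq = 0.0;
    srand(42);
    for (int i = 0; i < n; i++) {
        /* the keypress lands at a random phase within the polling cycle */
        double phase = POLL_MS * (rand() / (RAND_MAX + 1.0));
        double measured = true_rt + (POLL_MS - phase); /* wait for next poll */
        sum += measured;
        sumsq += measured * measured;
    }
    double mean = sum / n;
    double sd = sqrt(sumsq / n - mean * mean);
    printf("mean %.2f ms (bias %+.2f ms), sd %.2f ms\n",
           mean, mean - true_rt, sd);
    return 0;
}
```

With a 10 ms interval, this works out to roughly a +5 ms shift in the mean and about 10/√12 ≈ 2.9 ms of added standard deviation.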

In the particular case of USB, every device must have a device driver. The designers of the USB standard realized that people are not going to like having to install a driver for every keyboard or mouse that they buy, so they devised a standard driver for such devices, known as HID (Human Interface Devices), which is built into Mac OS, Windows, and Linux. On Mac OS and Windows, the HID driver passes information such as key presses and mouse movements to the application program, e.g. SuperLab, every 10 milliseconds or so.

In other words, even if you manage to press a key every millisecond on the keyboard, SuperLab will receive all the key press information from the OS no faster than once every 10 milliseconds. This is one big reason why we developed our own XID (eXperimental Interface Device) firmware: to provide built-in timing and not worry about USB or operating system delays. Practically speaking, SuperLab can reset the RT timer in the response pad at any time; when the participant presses or releases a key, the key information is time-stamped in the response pad itself prior to being sent to the computer. This way, even if there is a delay, SuperLab can still obtain an accurate reaction time.
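To make that concrete, here is a rough sketch of decoding a time-stamped XID-style response packet. The exact layout below (a ‘k’ byte, one byte of port/key/press bits, and a 4-byte little-endian timestamp in milliseconds) is only my illustration here; the XID specs linked below are the authoritative reference.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout for illustration only; check the real XID spec. */
typedef struct {
    int      port;    /* which connector (assumed: low nibble of byte 1) */
    int      key;     /* which button (assumed: bits 5-7 of byte 1) */
    int      pressed; /* 1 = press, 0 = release (assumed: bit 4 of byte 1) */
    uint32_t rt_ms;   /* timestamp applied in the response pad itself */
} XidResponse;

static int xid_parse(const uint8_t pkt[6], XidResponse *out) {
    if (pkt[0] != 'k') return -1; /* not a keypress packet */
    out->port    = pkt[1] & 0x0F;
    out->pressed = (pkt[1] >> 4) & 0x01;
    out->key     = (pkt[1] >> 5) & 0x07;
    out->rt_ms   = (uint32_t)pkt[2] | ((uint32_t)pkt[3] << 8) |
                   ((uint32_t)pkt[4] << 16) | ((uint32_t)pkt[5] << 24);
    return 0;
}

int main(void) {
    /* made-up packet: key 2 pressed, device timestamp 1234 ms */
    const uint8_t pkt[6] = { 'k', 0x50, 0xD2, 0x04, 0x00, 0x00 };
    XidResponse r;
    if (xid_parse(pkt, &r) == 0)
        printf("key %d %s at %u ms\n",
               r.key, r.pressed ? "down" : "up", (unsigned)r.rt_ms);
    return 0;
}
```

The key point is the timestamp: it is applied inside the pad, before USB and the operating system get involved, so host-side delivery delays stop mattering.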

Since you’re handy with programming, you might enjoy browsing through the XID specs at http://www.cedrus.com/xid/.

USB timing issues

Hisham,

Well, that’s too bad about the buffering, although I do understand just what you mean. Factors like this are why I had originally planned to hook our response buttons up to the dataport of the SV-1, so that we could get more accurate RTs (using XID), and it’s why in the “old days” we used PC game port button inputs. I suppose I was hoping that there was a way in SuperLab to get around the problem.

I’ve stayed away from low-level programming on Macs. I used to do it a great deal on embedded microcontrollers, older (i.e., V6/V7) UNIX systems, and pre-Windows PCs, but the amount of learning required to start back up on a Mac would be prohibitive. But, what you wrote did send me to the Apple Developers’ pages, and I did find this interesting passage:

Using the Low Latency Isochronous Functions
In Mac OS X, the time between when an isochronous transaction completes on the USB bus and when you receive your callback can stretch to tens of milliseconds. This is because the callback happens on the USB family work loop, which runs at a lower priority than some other threads in the system. In most cases, you can work around this delay by queuing read and write requests so that the next transaction is scheduled and ready to start before you receive the callback from the current transaction. In fact, this scheme is a good way to achieve higher performance whether or not low latency is a requirement of your application.

In a few cases, however, queuing isochronous transactions to keep the pipe busy is not enough to prevent a latency problem that a user might notice. Consider an application that performs audio processing on some USB input (from a musical instrument, for example) before sending the processed data out to USB speakers. In this scenario, a user hears both the raw, unprocessed output of the instrument and the processed output of the speakers. Of course, some small delay between the time the instrument creates the raw sound waves and the time the speaker emits the processed sound waves is unavoidable. If this delay is greater than about 8 milliseconds, however, the user will notice.

In Mac OS X version 10.2.3 (version 1.9.2 of the USB family) the USB family solves this problem by taking advantage of the predictability of isochronous data transfers. By definition, isochronous mode guarantees the delivery of some amount of data every frame or microframe. In earlier versions of Mac OS X, however, it was not possible to find out the exact amount of data that was transferred by a given time. This meant that an application could not begin processing the data until it received the callback associated with the transaction, telling it the transfer status and the actual amount of data that was transferred.

Version 1.9.2 of the USB family introduced the LowLatencyReadIsochPipeAsync and LowLatencyWriteIsochPipeAsync functions. These functions update the frame list information (including the transfer status and the number of bytes actually transferred) at primary interrupt time. Using these functions, an application can request that the frame list information be updated as frequently as every millisecond. This means an application can retrieve and begin processing the number of bytes actually transferred once a millisecond, without waiting for the entire transaction to complete.

Important: Because these functions cause processing at primary interrupt time, it is essential you use them only if it is absolutely necessary. Overuse of these functions can cause degradation of system performance.

To support the low latency isochronous read and write functions, the USB family also introduced functions to create and destroy the buffers that hold the frame list information and the data. Although you can choose to create a single data buffer and a single frame list buffer or multiple buffers of each type, you must use the LowLatencyCreateBuffer function to create them. Similarly, you must use the LowLatencyDestroyBuffer function to destroy the buffers after you are finished with them. This restricts all necessary communication with kernel entities to the USB family.

For reference documentation on the low latency isochronous functions, see the IOUSBLib.h documentation in I/O Kit Framework Reference.

I have no idea if it’s relevant, but it sounds like it might (or should) be. The link is here: http://developer.apple.com/documentation/DeviceDrivers/Conceptual/USBBook/index.html#//apple_ref/doc/uid/TP40000973
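Just to see how those calls are meant to fit together, here is an untested sketch I pieced together from that passage. It assumes an already-opened IOUSBInterfaceInterface192 on an isochronous pipe; all the device discovery, async event source setup, and pipe selection are omitted, so treat it as a reading of the docs rather than working code.

```c
#include <IOKit/usb/IOUSBLib.h>

#define NUM_FRAMES      8   /* frames per transaction (arbitrary choice) */
#define BYTES_PER_FRAME 64  /* assumed max packet size for the pipe */

static void readDone(void *refcon, IOReturn result, void *arg0) {
    /* With the low latency variant, the frame list was already being
     * updated at primary interrupt time, so data may have been consumed
     * before this completion callback ever fired. */
}

static IOReturn startLowLatencyRead(IOUSBInterfaceInterface192 **intf,
                                    UInt8 pipeRef, UInt64 frameStart) {
    void *dataBuf = NULL, *frameListBuf = NULL;
    IOReturn kr;

    /* Both buffers must come from LowLatencyCreateBuffer (and later be
     * released with LowLatencyDestroyBuffer), per the passage above. */
    kr = (*intf)->LowLatencyCreateBuffer(intf, &dataBuf,
            NUM_FRAMES * BYTES_PER_FRAME, kUSBLowLatencyReadBuffer);
    if (kr != kIOReturnSuccess) return kr;

    kr = (*intf)->LowLatencyCreateBuffer(intf, &frameListBuf,
            NUM_FRAMES * sizeof(IOUSBLowLatencyIsocFrame),
            kUSBLowLatencyFrameListBuffer);
    if (kr != kIOReturnSuccess) return kr;

    IOUSBLowLatencyIsocFrame *frames = (IOUSBLowLatencyIsocFrame *)frameListBuf;
    for (int i = 0; i < NUM_FRAMES; i++) {
        frames[i].frReqCount = BYTES_PER_FRAME; /* bytes requested per frame */
        frames[i].frActCount = 0;               /* filled in as data arrives */
    }

    /* updateFrequency = 1 asks for the frame list (frActCount, frStatus,
     * frTimeStamp) to be refreshed every millisecond, so `frames` can be
     * polled without waiting for readDone. */
    return (*intf)->LowLatencyReadIsochPipeAsync(intf, pipeRef, dataBuf,
            frameStart, NUM_FRAMES, 1, frames, readDone, NULL);
}
```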

Cheers,

Greg

not quite right…

It isn’t the drivers in the OSes that cause the 10 msec polling rate. They are written to the HID spec, which allows for 1 msec update speeds from the devices. It’s just that standard keyboards and mice will only send data every 10 msec.

If you get an X-Keys USB device, it will send data much faster.

BTW, while ±10 msec is bad, why is it so particularly bad for you? It’s just a variance increase, usually overcome by adding a couple more trials per condition and/or a couple more subjects.

RT variance

Well, some important RT effects aren’t more than two or three times as great as 10 ms. But in fact, you may have a point, and of course this means that several other bugaboos of RT experiment implementation can also be disregarded: for example, why worry about synchronizing to vertical retrace? It’s just an increase in variance of about the same order as this USB input variance. As long as the screen update occurs without flickering, then plus or minus a video frame, who cares? In the end, if you run enough trials and enough subjects, the means will all converge on the same quantities anyway.

Personally, though, what I prefer to do is two things: first, try to eliminate as much implementation-induced variance as possible, and second, try to understand how much there is and what its sources are. Actually, a third thing is to report as precisely as possible just what I am measuring and how accurately.

For example, it now appears that with USB input and SuperLab, I won’t be able to make the normal unqualified assertion that RTs are measured “to the nearest ms”, but rather “to the nearest ms, ±10-12 ms”. While an understanding reader will “get” that this is simply an increase in variance that will all come out in the wash, I personally am somewhat uncomfortable writing things like that (and reading them, frankly). In the old software we’ve used in our lab up to now, vertical retrace and subject input were sampled at a 4096 Hz rate and RTs were rounded to 1 ms from that. Visual stimuli were produced reliably on the exact frame specified, and audio stimuli and user responses on the exact millisecond specified. In other words, the implementation-induced RT error was less than 1 ms for every aspect of an experiment, which is what I’ve always thought it meant to measure them “to the nearest ms”.

Anyway, I’m pondering the situation. Maybe it’s just a matter of learning to live with it. By the way, we’re using the X-Keys USB switch interface, so maybe we have less of a problem than it sounded.

Greg Shenaut

agree

I absolutely agree that it is good to know, and report, the details of how the response is recorded.

But 1 msec accuracy has rarely (really, never) been a “given” with standard PC hardware and software. The pre-ADB Macs (Plus and earlier) and straight DOS PCs with good programming could count on it. But without some sort of button box, it’s never been a given.

Also, you seem to imply that the effect would have to be larger than your potential error. This is not the case at all. The effect can be much smaller and still easily detectable.

Also, the USB limitation discussed here is an HID device limitation (X-Keys is one of those). So, even though your hardware now posts at 1 msec, it still has a buffer issue (although I believe that driver properly handles interrupts). High-speed USB devices like the button box have no such problem (not in the msec+ range, anyway).

There was an old paper called “Good News for Bad Clocks” that covered this well (Ulrich, 1989).

Yes, we don’t expect exact 1 ms precision from OS X and SuperLab, for all the reasons you said. The old system was DOS-based and completely took over the hardware to achieve its accuracy.

In our research, we have to test AD patients in their homes and other remote locations, and for years, our RAs schlepped CRTs and those old sewing machine style “luggable” PCs all over northern California. This most recent grant cycle, we finally decided it wasn’t worth it, and we made the shift to MacBooks & LCD displays, well aware that we’ll have to get used to more timing variation.

Before we committed to SuperLab, I was assured that the new version would support the SV-1’s digital I/O port for subject responses. The SV-1 measures RTs to 1 ms accuracy and then transmits a message including the timing and the response to the computer. The input port would allow us to use our big “jelly bean” style buttons, which are considerably easier for the AD patients. I figured that by ordering SuperLab & SV-1s, the only new source of timing variation with our new equipment would be the LCD display. However, the SV-1 digital I/O isn’t supported yet by SuperLab, and we decided to use the X-Keys interface as a fall-back.

We’re about two weeks into our first SuperLab/MacBook/X-Keys experiment (a lexical decision/semantic priming variation), and the results look pretty much as they always have, although now that I think about it, the RTs are longer than usual: about 670 ms for the grand mean RT for correct word trials, where I would have expected something closer to 500 ms based on previous similar studies using the old system. The same overall pattern as usual, but substantially longer RTs. There are changes in the procedure, stimuli, and so on, but that’s a pretty big difference in RTs.

Anyway, it would make me feel more comfortable to understand better what’s going on “under the hood” with the timing.

Greg Shenaut

Greg,

I thought you might like to know that I finally have the internal SuperLab timing to a point where I’m comfortable saying the following:

My Apple Extended keyboard gives me 8.01 ms precision (give or take about 0.01 ms). I have measured this difference in keypress times on numerous occasions, and the next smallest difference is 0.00 ms. I’m really hoping to impress you with 4.0.3. Note that this specific change in my code only decreases response times by about 1-2 ms on my machine, so we were in good shape here to begin with. Also note that this is a statement about precision only, not accuracy.

We’ve looked at many of the issues raised here in our research on millisecond timing accuracy. In short, you’re right to check and keep on checking! We now have an all-in-one commercial device called the Black Box ToolKit which will do this kind of check more easily and accurately. For information on our published papers, the kit, and who’s using it, see:

http://www.blackboxtoolkit.com/

Unfortunately, the issue of timing inaccuracy isn’t going away any time soon, regardless of how fast your PC or Mac is or what experiment generator you use. :(

Richard