I’m designing a paradigm for fMRI. It has 5 different stimuli, and each stimulus set is composed of 50 slides. Each slide is shown for 6 seconds, so the duration of the paradigm is 1500 seconds.
The slides’ sequence is randomized and optimized with the software Optseq. Accordingly, in SuperLab I have 1 block with 1 trial containing all 250 slides already sequenced.
When I ran this set, I verified that it took 1507 seconds. These extra 7 seconds accumulated over the course of the run.
For fMRI, this is a big problem, as it exceeds the intended duration by more than 2 TRs. In practice, this means that by the time we take the “picture”, the “car” is already gone.
I then tried switching off the antivirus, the wireless connection, the sidebar, everything superfluous that could take processing time. Again, 1507 seconds.
The OS is Vista Business and I’m running the Demo version.
How can I get over this hurdle?
Thanks in advance, Paulo.
Seven seconds after 250 trials comes out to 28 ms per trial. If you have a 70 Hz monitor, that would be pretty close to two refresh cycles. This may simply be the overhead required to load and display a new image. If it seems stable, you might try asking for 5972 ms presentations and see if that brings you to six seconds.
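The arithmetic here can be checked in a few lines (a quick sketch; the 70 Hz refresh rate is the assumption made above, not a measured value):

```python
# Back-of-the-envelope check of the numbers above (values from the post).
drift_s = 7.0                              # extra time over the whole run
n_trials = 250
per_trial_ms = drift_s * 1000 / n_trials   # 28.0 ms of overhead per trial

refresh_hz = 70                            # assumed monitor refresh rate
frame_ms = 1000 / refresh_hz               # ~14.3 ms per refresh cycle
frames = per_trial_ms / frame_ms           # ~1.96, i.e. close to two frames
print(per_trial_ms, round(frame_ms, 1), round(frames, 2))
```

If the per-trial overhead really is a fixed couple of refresh cycles, it should grow linearly with the number of trials, which matches the steady drift described above.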
The proper way to do this in SuperLab is to use the “End” tab in the Trial editor. Each of your slides will need to be in its own trial. When using this method of setting the length of a trial, errors will not accumulate.
Note that the cumulative time taken by the events within any given trial needs to be less than the amount of time specified in the Trial editor. This is because SuperLab will not end a trial early; it will only delay the onset of the next trial.
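The non-accumulating behavior described here can be sketched in a few lines of Python (an illustration of the scheduling idea, not SuperLab’s actual implementation): anchor each trial to an absolute schedule instead of sleeping a fixed amount after each one.

```python
import time

def run_fixed_schedule(n_trials, trial_s, present):
    """Start trial i at start + i * trial_s, so per-trial overhead
    delays only that trial and does not accumulate across the run."""
    start = time.perf_counter()
    for i in range(n_trials):
        present(i)                              # must finish within the slot
        target = start + (i + 1) * trial_s
        remaining = target - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)               # wait out the rest of the slot
        # if remaining <= 0, the trial overran its slot; the next trial
        # starts late, but the error does not carry into later trials
    return time.perf_counter() - start
```

With 5 trials of 50 ms each, the total comes out close to 250 ms regardless of how long `present` takes, as long as each presentation fits inside its slot.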
Thank you for the ingenious solution. Over the week I tried several time limits, partly to understand how the software works, and got some surprises. For example, I tried a set of 72 slides, whose duration should be 06:54.0. Varying the time limits, I got:
- for 5972 ms, 06:53.7 (-0.3”)
- for 5973 ms, 06:53.8 (-0.2”)
- for 5974 ms, 06:54.9 (+0.9”)
- for 5975 ms, 06:54.8 (+0.8”)
- for 5985 ms, 06:55.0 (+1.0”)
It seems that there isn’t a linear relationship between the time limit and the duration. On the contrary, the relation seems to go in steps, which makes tuning difficult. Is there any reason for this step relation?
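One plausible explanation for the steps, sketched below, is that the display can only change on a refresh boundary, so requested durations get rounded up to a whole number of frames. This is a hypothetical model (the 60 Hz rate is an assumption; the exact step positions depend on the real refresh rate):

```python
import math

def effective_duration_ms(requested_ms, refresh_hz=60):
    """Hypothetical model: the screen only updates on a refresh
    boundary, so the actual duration is the requested time rounded
    up to a whole number of frames."""
    frame_ms = 1000.0 / refresh_hz
    return math.ceil(requested_ms / frame_ms) * frame_ms

# Requests that fall within the same frame yield the same duration:
for req in (5972, 5973, 5974, 5975, 5985):
    print(req, round(effective_duration_ms(req), 1))
```

Under this model, any request between two refresh boundaries lands on the same actual duration, which would produce exactly the step-like behavior observed.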
Anyway, I arrived at a solution for the paradigm duration. For the complete slide set, the total duration should be 25:00.0. With 5973 ms for all the slides, I got 24:58.3 (-1.7”). So I had to raise some slides to 6000 ms. With this hybrid solution (most slides at 5973 ms and some at 6000 ms), I got 25:00.4 (+0.4”), which is acceptable.
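The hybrid tuning reduces to a one-line calculation (using the measured 1.7-second shortfall from the post; `slides_to_lengthen` is an illustrative helper, not part of SuperLab):

```python
import math

def slides_to_lengthen(deficit_ms, short_ms, long_ms):
    """How many slides must switch from short_ms to long_ms to make
    up a measured shortfall (illustrative helper, not SuperLab)."""
    gain_per_slide = long_ms - short_ms
    return math.ceil(deficit_ms / gain_per_slide)

# Measured shortfall of 1.7 s with all slides at 5973 ms;
# each slide raised to 6000 ms recovers 27 ms:
print(slides_to_lengthen(1700, 5973, 6000))  # -> 63
```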
Then I remembered Greg Shenaut’s point about refresh rates. I had been testing with the laptop alone, so I now ran it with the projector connected to the laptop. This last paradigm took 25:02.1 (+2.1”), which was unacceptable again. So I ran the previous version (all slides at 5973 ms, which had taken -1.7”), and this time got 25:00.3 (+0.3”). Finally solved!
So it is possible to hit the exact duration, but it takes hard work.
Is it possible to change the time limit for a whole set of Events rather than one by one?
Kind regards, Paulo
I experimented with Hank’s advice: for each Event I created one Trial, and in the Trial Editor’s End tab, for each Trial, I put a checkmark in the box and 0 ms as the delay. Unfortunately the result was worse. For the same 72 slides as in the previous post, with 6000 ms for most of the slides (some have 3000 and others 9000 ms), and no projector connected, the results were:
- for each slide as one Event and one Trial for all the Events: 06:56.1 (+2.1”)
- for each slide as one Event and one Trial for each Event: 06:57.4 (+3.4”)
I already had a lot of problems with the transitions between Trials; they take too long. I don’t know if it is a Vista problem or something else. I’m also facing severe crashes with SuperLab. For it to run to the end, I have to shut down the sidebar, the antivirus, automatic updates, the wireless connection, the screen saver, power management: everything but SuperLab.
Kind regards, Paulo
At this point, I believe you are confusing the length of the experiment from start to finish with the time difference between each trial. You haven’t specified how you are calculating the difference in run-time length, so I’m assuming it’s from the point that you press Run to the point that the window goes away. SuperLab does processing before and after the experiment, so there will always be some additional time at either end. Furthermore, if you have SuperLab configured to load all of your stimuli before starting the experiment, this is included in the processing time.
Take a look at the verbose log. It’s not particularly user-friendly, but if you are trying to troubleshoot very specific timing details, the information is there.
Next: I recommend against using Vista to run SuperLab. SuperLab 4.0.6 fixes the “crash” issue, but we had to make a sacrifice to do this. As a result, SuperLab under Vista is slightly more susceptible to timing errors introduced by the operating system itself. Note that if/when this happens, it will show up in the verbose log.
As I was running a demo version, I preferred to wait for the full version before coming back to this subject (and also for an fMRI button box, so I could test the complete scenario). This explains the delay in replying.
I installed version 4.0.6b, and the first thing I noticed is that it’s much more robust under Vista than the previous version. This is really good news.
When I say “duration”, I’m referring to the net time from beginning to end. I usually add a “dummy” slide at the beginning and another at the end to absorb eventual time discrepancies. But I followed your advice and ran with --verbose. This is also great news, because the verbose log takes control of the experiment to another level: now we know what is going on during those milliseconds.
The differences have a quite simple explanation: when I enter 6000 ms in SuperLab, I’m telling it to show the slide for exactly 6000 ms. I wasn’t accounting for all the work that must be done to display the slide and, after it is shown, to clean everything up. These few milliseconds, multiplied by 250 slides, turn into seconds! I had thought that 6000 ms was the total cycle time, from the onset of one slide to the onset of the next. With the verbose log, it is now possible to account for these small differences and make the necessary compensations.
There was one thing I had to remove: flushing the data to disk at the end of each trial. This introduces a lot of variance: sometimes 0.69 ms, other times 132.43 ms! Flushing the data to disk only after the experiment finishes (i.e., after the last dummy slide) solved the problem, because there is no more flushing during the presentation.
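The fix amounts to buffering records in memory and writing them out once at the end. A minimal sketch (illustrative Python, not SuperLab’s own logging code):

```python
import io

class BufferedLog:
    """Collect trial records in memory and write them once at the end,
    avoiding the variable flush-to-disk cost during presentation."""
    def __init__(self):
        self._records = []

    def log(self, trial, onset_ms):
        # appending to a list is fast and has low timing variance
        self._records.append(f"{trial}\t{onset_ms:.2f}\n")

    def flush(self, stream):
        # one write, after the last (dummy) slide
        stream.writelines(self._records)

log = BufferedLog()
for i in range(3):
    log.log(i, i * 6000.0)
out = io.StringIO()
log.flush(out)
print(out.getvalue())
```

The trade-off is that a crash mid-run loses the unflushed records, which is why flushing after the final dummy slide, rather than never, is the sensible compromise.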
Solved. The only remaining issue is that some hard work is needed to tune the trial durations so that the cycle is 6000 ms. But that’s OK. Nice job. Thank you so much, Paulo
Still, if you want a trial to be presented every 6000 ms, the solution is to set the time in the “End” tab of the trial editor to 6000 ms. You also need to be sure that the events in the trial add up to less than 6000 ms. There are delays that occur at various places, but this feature ensures that given the chance, each trial will start at a precise interval after the previous trial. These lengths also need not be the same from trial to trial. The time specified on a trial is interpreted as the intended length of the trial being edited.
If you set your trial to 6000 ms and you have a 6000 ms ISI inside the trial, this feature will not work. In the verbose log, this appears as the “sync trial” feature.
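The constraint can be stated as a one-line check (a hypothetical helper, just to make the rule concrete):

```python
def sync_trial_ok(event_durations_ms, trial_ms):
    """For the End-tab ('sync trial') timing to work, the events in a
    trial must add up to strictly less than the trial's set length."""
    return sum(event_durations_ms) < trial_ms

print(sync_trial_ok([2000, 3500], 6000))   # slack left over -> True
print(sync_trial_ok([6000], 6000))         # a 6000 ms ISI fills it -> False
```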