Randomization - THE BLOCK

I understand what went awry in the randomization. Apparently I needed to select randomization at an additional level (i.e., at the level of the block).

So, now my face pictures were randomly presented, and my trial codes are randomly listed in the last column, BUT the trial codes are inaccurate.

Is there something else I am doing wrong?


You might try

a trick I am using to set up some experiments. Instead of using SuperLab event or trial codes, create a fake text stimulus list consisting of the codes corresponding to all of your real stimulus lists, in order. Then set up an event parallel to your real events that uses that list, but that displays its “stimuli” in the background color (so they can’t be seen by the subjects). That is, you have one or more real stimulus lists with N items in each list, plus a fake invisible list with N arbitrary textual codes. Set up an event for each one, and use them all in your trial. I put the ID codes in the ITI, which I make the first event in each trial, but you could probably put them anywhere, especially if you make the event very brief.
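If it helps, here is a rough sketch of how the invisible code list could be generated. All the file names are hypothetical; it just derives one textual code per real stimulus (from the file name) and writes them out, one per line and in the same order, ready to import as the fake text stimulus list:

```python
import os

# Hypothetical real stimulus list (file names are made up for illustration).
stimulus_list = ["white_new_01.bmp", "asian_old_03.bmp", "white_old_02.bmp"]

# Write a parallel list of trial codes, one per line, in the same order.
# In SuperLab, the event that presents this list would be set to draw
# its text in the background color so subjects never see it.
with open("trial_codes.txt", "w") as f:
    for name in stimulus_list:
        code = os.path.splitext(name)[0]  # e.g. "white_new_01"
        f.write(code + "\n")
```

Any scheme for deriving the codes would do; using the file name stem just keeps the code and the picture trivially matched.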

Anyway, the textual codes appear in the datafile in the stimulus column of an event that identifies all of the codes for the trial in which it appears.

I don’t know whether multiple aligned stimulus lists will remain aligned when randomized; I didn’t try that. Instead, I’m setting up several “randomization sets” where I shuffle the lists externally, before importing them into SuperLab. This also allows me to do some counterbalancing as part of the randomization process.
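The external shuffle can be sketched as follows (the lists and helper name are hypothetical). The key point is to draw a single random permutation and apply it to every parallel list, so stimuli and codes stay aligned; each seed gives one “randomization set”:

```python
import random

# Hypothetical aligned lists: stimulus file names and their condition codes.
stimuli = ["white_new_01.bmp", "white_old_01.bmp", "asian_new_01.bmp", "asian_old_01.bmp"]
codes = ["WN", "WO", "AN", "AO"]

def shuffle_aligned(*lists, seed=None):
    """Apply one random permutation to several parallel lists,
    so row i of every list still refers to the same trial."""
    rng = random.Random(seed)
    order = list(range(len(lists[0])))
    rng.shuffle(order)
    return [[lst[i] for i in order] for lst in lists]

# Each seed yields one randomization set, written out as a pair of
# files for import into SuperLab.
shuffled_stimuli, shuffled_codes = shuffle_aligned(stimuli, codes, seed=1)
```

Because the same `order` indexes every list, the code in row i always belongs to the picture in row i, no matter how the rows were shuffled.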

Greg Shenaut

Greg, Thanks for the tip… but I need this to be easily understood by undergraduates in Experimental Psych… and your method would be much too difficult. Given that everything is working except for the trial codes, I might have to give in and remove that part. Also, what I need it to do seems like exactly what SuperLab should be able to do.

Sandy, randomizing events within a trial does NOT randomize events within a stim list. There is no explicit method of randomizing a stim list. SuperLab will make multiple trials based on the number of items in the list. You then want to randomize the trials within the block, as you have discovered.

I’ve also run into a bug myself where the incorrect code values may be registered at run time. If this occurs, any action depending on those code values is incorrect, and (as you have seen) that incorrect code value winds up being stored in the data file. I wasn’t aware anyone else had run into this issue. I’ll get right on top of it.

Also, when adding additional information on the same question in a forum, can you please post the additional information as a reply to yourself? It makes your train of thought a little easier to follow. Thanks.



I am new to using a forum and the concept of the thread is starting to make sense, now. I apologize for any confusion I caused.

I am surprised about the codes (which are so essential in teaching a course on Experimental Methods). It does do something sort of orderly. I have 20 faces in my pilot run and they should be divided equally between the following:
Race & Type = White New
Race & Type = White Old
Race & Type = Asian New
Race & Type = Asian Old

Instead, every time I run it, the faces are presented in a new random order and the codes are randomly unrelated, BUT instead of finding 5 of each of the four conditions, there are ALWAYS 8 Asian Old, 4 White New, 4 White Old, and 4 Asian New.
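For what it’s worth, the tally above can be checked mechanically. This is a rough sketch, assuming a tab-delimited data file with the trial code in the last column (as described earlier); the file name and function name are made up:

```python
import csv
from collections import Counter

def condition_counts(path):
    """Tally the trial codes in the last column of a tab-delimited data file."""
    with open(path, newline="") as f:
        rows = csv.reader(f, delimiter="\t")
        return Counter(row[-1] for row in rows if row)

# A balanced 20-face pilot should come back with 5 per condition,
# i.e. equal counts for White New, White Old, Asian New, Asian Old.
```

Running this on each pilot data file makes the 8/4/4/4 imbalance easy to document for the bug report.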

I am sending this just in case it is helpful in debugging this.

an alternative method

Hank, I can easily embed the names of the conditions in the file names (actually, that is how I usually develop pictorial stimuli), so if I combine that with printing out the actual response, the correct responses can be easily counted (this is a very simple design because it is the first week of classes), and that is all I truly need this week.

However, I truly need this problem to be debugged ASAP; otherwise, no one can design a study that gives feedback based on the response codes.

In keeping with that need, I have one more request of you while you are engaged in the debugging… do you know if this same error occurs if you switch to verbal stimuli, load in multiple lists in lieu of multiple folders, and then link each list to the different response (i.e., variable) codes? If it produces the same error, it would be essential that this error be tracked down and hopefully fixed at the same time.


I currently have no reason to believe that it’s related to the type of stimuli, as the stimulus type is irrelevant while the core of SuperLab is doing its work. Therefore, this issue should apply to all types of stimuli, and the fix would likewise apply to all types.

so type of stimulus won’t matter


This is good news, but only if it turns out that this bug can be easily fixed.

I await your progress. And just to show you that I believe you will find a way to fix this soon, I purchased a 5-license version of 4.0 today. I hope my confidence is warranted! :-)



I’ve managed to find and fix the bug that was causing my experiment to work incorrectly. Unfortunately, this did not fix your experiment. Since the faulty behavior is otherwise identical between the two experiments, I have reason to believe (so far) that yours is caused by a similar logical error near the one I just fixed.

I’m headed home for the day, but your bug is still at the top of my list.