I am trying to create an auditory lexical decision task in SuperLab. The stimuli are recordings of different talkers saying several different words (and nonwords, of course). To limit subject fatigue, I want to spread all of the stimuli over three distinct blocks.
In addition to the usual constraint of not presenting an item of the same type (e.g. a nonword) more than three times in a row, I also want to avoid repeating the same talker more than twice in a row. Finally, I want to ensure that no one talker is overrepresented in any one block.
I have read solutions to problems similar to mine (e.g. http://community.cedrus.com/showthread.php?t=1627 and http://community.cedrus.com/showthread.php?t=793), but I don't think they will work in my case, because my constraints are so tight (and because I do not have an equal number of tokens from each talker). More generally, I don't want to introduce unnecessary noise into my data in the form of random ordering.
For those reasons, I have had to create pseudorandom lists for blocks 1, 2, and 3, with variations for different groups of subjects. I have all of the filenames of the audio files, in order, in an Excel spreadsheet.
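For concreteness, the way I build these pseudorandom orders can be sketched as simple rejection sampling outside SuperLab. This is just an illustrative Python sketch, not anything SuperLab-specific; the `type` and `talker` fields stand in for however the spreadsheet rows are labeled:

```python
import random

def max_run(seq, key):
    # Length of the longest run of consecutive items sharing the same key value.
    longest = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if key(prev) == key(cur) else 1
        longest = max(longest, run)
    return longest

def constrained_shuffle(items, max_type_run=3, max_talker_run=2, tries=10000):
    # Rejection sampling: reshuffle until both run-length constraints hold,
    # or give up after a fixed number of attempts (constraints may be too tight).
    items = list(items)
    for _ in range(tries):
        random.shuffle(items)
        if (max_run(items, key=lambda s: s["type"]) <= max_type_run
                and max_run(items, key=lambda s: s["talker"]) <= max_talker_run):
            return items
    raise RuntimeError("no valid order found within the allotted tries")
```

Balancing talkers across the three blocks can then be handled separately, by splitting each talker's tokens as evenly as possible among the blocks before shuffling each block with the function above.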
Now my questions (finally!):
1) Is there a way to import my filenames, in order, into a stimulus list? Or do I have to do it by hand, selecting each file one by one in the correct order, for each of my lists?
2) Is there a way to create separate lists that run for different subject groups? (Ideally without having to create a separate event object.)
3) Is there a simpler or more intuitive way of solving my problem than having several different lists?