Lexical decision task with visual and auditory stimuli

One of my students in my Research Methods course wants to create an experiment in which she has participants do a lexical decision task (responding to both English words and English non-words) while simultaneously hearing spoken words (in either English or Spanish) over headphones. Can you recommend any sample experiment(s) that she could look over to get an idea of how best to approach coding her study? She has not used SuperLab in the past but wants to learn it well enough to code her study. Thanks!

Mapping between visual/auditory stimuli

The student and I can get SuperLab to present a visual stimulus (from a list of words vs. a list of non-words), and we’re pretty sure we know how to get it to present a simultaneous auditory stimulus; she still needs to make the .wav files.

Where we are really struggling is with how to control the relationship between the visual stimulus and the auditory stimulus. For example, 25% of the true words should be presented on the computer screen with the same English word spoken over headphones; 25% of true words with the same word spoken in Spanish; 25% of true words with a different word spoken in English; and 25% of true words with a different word spoken in Spanish. We are using SuperLab 5.0 to create the experiment. The same basic idea applies to the non-words, although there will not be any semantic relationship between the visual stimulus and the spoken word.

Any help you can offer would be appreciated.

If I am reading this correctly, you will need a matching number of same spoken English words, same spoken Spanish words, etc., of which SuperLab will choose only 25%.

In other words, if you have a list of 20 true words, you will also need

    A list of 20 spoken English true words
    A list of 20 spoken Spanish true words
    And so forth...
SuperLab would then choose 5 of the 20 true words (25%) and the matching spoken words. Is this correct?
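
In code-sketch form, my reading is something like the following (plain Python just to illustrate the idea, not SuperLab; the word lists and .wav file names are made up):

```python
import random

# Illustrative only: 20 true words plus matching spoken files, of which
# 25% (5) would be chosen. The lists and file names are hypothetical.
true_words = [f"word{i}" for i in range(20)]
spoken_en  = [f"word{i}_en.wav" for i in range(20)]

chosen = random.sample(range(20), k=5)                     # 25% of 20
trials = [(true_words[i], spoken_en[i]) for i in chosen]   # matched by index
print(trials)
```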

Thanks for trying to help me with this. I will try to explain it a bit better (apologies in advance for the length of this reply…)

I’m not sure of the exact number, but let’s say there will be a list of 20 English words (text) and a list of 20 non-words (text). All of these will be presented visually on the computer screen, one at a time and in random order. The participant’s task is to determine as quickly as possible whether the visual stimulus forms a true English word (i.e., they will perform a lexical decision task).

For each of the 20 English words, there will be a .wav file with the word pronounced in English as well as a .wav file with the spoken Spanish equivalent of the word. And, yes, I assume that the English spoken words should be in one list and the Spanish spoken words in another list.

The percentages refer to the proportions of the different trial types. A true English word will be presented on the screen for half of the trials; a non-word will be presented on the screen for the other half.

When a true English word is presented visually, the participant will simultaneously hear one of the following: (a) the same word as on the screen spoken in English, (b) the same word as on the screen spoken in Spanish, (c) a different word than what is on the screen spoken in English, or (d) a different word than what is on the screen spoken in Spanish.

Similarly, when a non-word is presented on the computer screen, the participant will simultaneously hear either a word spoken in English or a word spoken in Spanish.
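
To make the intended structure concrete, here is a rough sketch of the trial types we are after (plain Python, not SuperLab; the words, .wav file names, and the way the "different" words are picked are all just placeholders):

```python
import random

# Illustrative sketch only -- not SuperLab code; words and .wav names are made up.
true_words = ["dog", "house", "table", "chair", "bread", "cloud", "horse", "paper"]
random.shuffle(true_words)

conditions = ["same_en", "same_es", "diff_en", "diff_es"]
quarter = len(true_words) // 4   # each condition gets 25% of the true words

trials = []
for c, condition in enumerate(conditions):
    for word in true_words[c * quarter:(c + 1) * quarter]:
        if condition.startswith("same"):
            spoken = word                                   # hear the word seen
        else:                                               # hear a different word
            spoken = random.choice([w for w in true_words if w != word])
        lang = condition[-2:]                               # "en" or "es"
        trials.append({"visual": word,
                       "audio": f"{spoken}_{lang}.wav",
                       "condition": condition})

random.shuffle(trials)   # random presentation order
```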

We can’t figure out how to link the visual stimulus with the auditory stimulus in SuperLab to create the various conditions.

I hope that this makes sense…

I think we’ve figured it out: each trial will access two lists (one for visual items and another for auditory items), the two lists will be paired, and the list access will be randomized. It does mean that the visual-auditory stimulus pairing must be predetermined for all conditions, but that’s fine for a class project.
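
In other words, something like this sketch (illustrative Python, not SuperLab; the items and file names are made up, and the `print` stands in for a trial drawing from the two paired lists):

```python
import random

# Hypothetical paired lists: index i in one list goes with index i in the other.
visual_list = ["dog", "house", "flim", "blick"]               # words and non-words
audio_list  = ["dog_en.wav", "casa_es.wav",
               "moon_en.wav", "rio_es.wav"]                   # pre-paired .wav files

# Randomize which pair comes up on each trial, never the pairing itself.
order = list(range(len(visual_list)))
random.shuffle(order)

for i in order:
    # Printing just shows that the pairing survives the randomization.
    print(visual_list[i], "+", audio_list[i])
```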

I can see that you have been given access to a preview version of SuperLab 6, which contains the sublist feature.

By using synchronized sublists, you can avoid manually pairing individual stimuli.

This feature pairs items from the stimulus lists together before breaking them into smaller sublists.

Although the order is randomized within each sublist, the pairings remain intact.

Order your stimulus lists appropriately and create events that draw from these sublists to produce your desired test conditions.
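
If it helps to see the idea spelled out, here is a rough conceptual sketch of what synchronized sublists do (plain Python, not SuperLab's actual implementation; the stimuli are hypothetical): pair first, then split into sublists, then randomize within each sublist.

```python
import random

# Conceptual sketch of synchronized sublists; stimuli are made up.
visual = ["dog", "house", "table", "chair", "bread", "cloud", "horse", "paper"]
audio  = [f"{w}_en.wav" for w in visual]       # matching spoken files

pairs = list(zip(visual, audio))               # 1) pair the lists first

n_sublists = 4
size = len(pairs) // n_sublists
sublists = [pairs[i * size:(i + 1) * size]     # 2) then split into sublists
            for i in range(n_sublists)]

for sub in sublists:                           # 3) randomize within each sublist;
    random.shuffle(sub)                        #    the pairings stay intact

for k, sub in enumerate(sublists, start=1):
    print(f"sublist {k}:", sub)
```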