# Probabilistic Responses / Superlab Tallying Results / Event locations on screen

Hi:

I’m trying to code a categorization / JDM task. The experiment involves three learning phases and one experiment phase. In the experiment phase, people view either two or three pictures (of faces) and either make a choice or categorize the face. I’ve run into three issues that I’m hoping someone can help me out with:

First, in the categorization learning phase, we were hoping to make the correct answer probabilistic: if a particular face had a 75% likelihood of being in group A and 25% in group B, then when people categorized that face, 75% of the time the correct answer would be A and 25% of the time B. I’m aware that the pictures are my events, and I know how to assign the correct answer, but I’m not sure whether SuperLab allows the correct answer to change randomly based on a probability.

Second, again in the learning phase, I’m hoping to test whether the subjects adequately learned each condition. I’ve been unable to figure out whether SuperLab can examine the results of the test and, if a certain threshold isn’t met, force subjects to redo a particular block.

Third, as I mentioned, in the experiment condition, subjects will select one face out of a possible two or three. I’ve coded it so that each face is an event, so on each trial either two or three faces are shown. Since the faces are repeated throughout the trials, I had to select a specific position for each face; therefore, when a face is repeated, it’s in the same location on the screen. I see that the new version allows us to pick one of four random positions for each event, but I’m having a hard time figuring out how to do this. Please help.

I know this is a lot, but I really appreciate any help available.

I’ve figured out how to do your first two things, but I haven’t figured out how to make it compatible with the third.

In your situation, it might make more sense to imagine your pictures as trials, not just as events. To do a 25/75 split, you’ll need four events. Each will have the same picture, and each will have feedback, for both correct and incorrect responses, that skips the remaining events and moves on to the next trial. The trial will be set up to randomize events ONCE PER PARTICIPANT. You could do once per group as well, but it’s critical that you don’t use “Every Time it is Presented.” One of your four events will be categorized in the 25% group, configured with the necessary correct response and the relevant code value. The other three will be configured for the 75% group.
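Outside of SuperLab, the four-event trick boils down to the following. This is a minimal Python sketch of the idea, not anything SuperLab runs; the function and variable names are illustrative, and it assumes the first event in the randomized order is the one that determines the face’s correct answer:

```python
import random

# Illustrative sketch (not SuperLab itself) of the four-event trick:
# each trial contains four copies of the same face, three keyed to
# group A and one keyed to group B. Randomizing the event order and
# keeping whichever event comes first yields a 75/25 split overall.

def draw_correct_answer(rng, events=("A", "A", "A", "B")):
    """Shuffle the four events; the first one in the shuffled order
    determines the correct answer for this face."""
    order = list(events)
    rng.shuffle(order)
    return order[0]

rng = random.Random(0)  # seeded only to make the sketch repeatable
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[draw_correct_answer(rng)] += 1
# counts["A"] comes out near 7,500 and counts["B"] near 2,500
```

In SuperLab the shuffle happens once per participant, so a given face’s answer stays fixed for that participant; the 75/25 proportion shows up across the set of faces and participants.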

For the second question, you will need SuperLab 4.0.3 or later, as the solution is dependent on bug fixes that are available in 4.0.3. As I’m typing this, it’s being released. You will need to set up a code that applies to trials. It will have two values: correct and incorrect. For part 1, you’ve already created feedback that skips the remaining events when the response is either correct or incorrect. In addition, you also need to set the appropriate code value on the trial. Finally, you will set up a Macro at the block level. This Macro will run at the end of the block (in your case, anyway). What you want in the Macro’s expression editor is “Percentage of… Trials Presented in This Block…” You want to use a subset. The Range is All Trials, and you want to limit your selection to the trials marked with the “correct” code value. This subset should be “Greater Than” your desired percentage. If the expression is true, you want to continue with the next block. If it’s false, you want to repeat the block.
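In plain terms, the Macro’s expression amounts to something like this hedged Python sketch (the names are mine for illustration, not SuperLab’s):

```python
# Illustrative sketch of the block-level Macro logic: count the
# trials coded "correct", compute the percentage of trials presented
# in the block, and repeat the block unless the threshold is exceeded.

def block_passes(trial_codes, threshold=75.0):
    """trial_codes: code values ("correct"/"incorrect") for the
    trials presented in this run of the block."""
    pct = 100.0 * trial_codes.count("correct") / len(trial_codes)
    return pct > threshold  # the "Greater Than" comparison

run = ["correct", "correct", "incorrect",
       "correct", "correct", "correct"]
block_passes(run)  # 5 of 6 ≈ 83% -> True: continue to the next block
```

If the function returns False, the equivalent SuperLab action is to repeat the block rather than continue.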

Note that the event randomization setting in the trials is important for this second solution. The second phase must use the exact same trials used by the first phase; otherwise the events won’t be in the same order, and therefore the expected response won’t be the same.

Positioning multiple stimuli in random locations is easy. I touched on it here: http://community.cedrus.com/showthread.php?t=180 Unfortunately, you wouldn’t be able to do this and still keep track of how the stimulus had previously been categorized. The methods described above take advantage of event randomization to create and store the correct response. As a result, use of this value depends heavily on continuing to use the exact same trials, and I’m at a loss for how to do this and still keep it available for your third phase.

Thanks Hank…

I’m giving it a whirl, and I’m running into some issues with code values.

So I have a bunch of trials; each trial has a different picture (event). I used the “correct response” option to code the correct responses at the event level.

The thing is that half of the pictures should have one correct answer, while the other half should have the other. So, if the possible options were heads or tails, half of the pictures (trials) would have heads as the correct answer and half would have tails.

When I tried to add the code values at both the trial and event level, it put the same value in for all of the trials. Can I change the values for each one?

Ben

I tried creating a sample experiment to show you how I think this should work, but I’ve run into two separate bugs as a result, both related to randomization. One was easy to fix, but the other isn’t. I’ll have to get back to you on this once we have it fixed.

In the meantime, I’ve attached an experiment that shows how I expected this to be implemented. The bug that needs fixing is in the “Randomize the events in this trial - Once per participant” area, so that feature is currently completely useless… hopefully not for long. :o

Variable Correct Response.zip (1.98 KB)

Macro doesn’t work 2nd time through

Thanks Hank! I really appreciate all of your help. I’ve almost got it.

I’ve successfully managed to assign codes and create a macro that tallies the correct responses based on the code. If a sufficient score (75%) isn’t attained, it repeats the instructions and test blocks. The problem is that the second time through, the macro fails to work: regardless of the entries, it gives you a successful score and goes on from there.

I’ve created a sample experiment showing you this and have attached it.

The experiment starts with an instruction block and then moves into the test block.

In the test block there are 6 trials. On each trial, you view one face. You have to determine whether the face is a member of the Frank or the Jones family by pressing f or j, respectively. For our purposes, the first three faces are Franks and the last three are Joneses. If you get 75% correct, a message flashes indicating you scored well enough, and you move on to the next phase. If not, you receive a message saying you didn’t score well enough and have to repeat the instructions and test block. As mentioned, the macro works the first time through, but the second time through it gives you a sufficient score and moves you on to the next phase.

Any thoughts on how to fix this?

test example.zip (2.28 KB)

I don’t see anything explicitly incorrect about how you have your experiment configured that should cause your Macro issues. The zip didn’t include the stimulus files, so I can’t run it to see how it behaves.

Can you send me a copy with the stim files? In the file menu, select “Create an experiment package” and send me that. If you don’t want to post them on the board, I can give you my e-mail address.

Thanks again Hank.

The “Create an experiment package” option isn’t working, so I just added the jpg files to the zip file.

If the jpgs don’t carry over into the experiment, here’s how they map:

R1C1.jpg is event 1-Frank
R1C2.jpg is event 2-Frank
R1C3.jpg is event 3-Frank
R1C4.jpg is event 4-Jones
R1C5.jpg is event 5-Jones
R1C6.jpg is event 6-Jones

Try running the experiment and marking all the answers incorrectly. The test will appropriately say they’re incorrect and repeat itself. But when you answer the questions incorrectly the second time, it says they are correct.

test example.zip (8.05 KB)

Sorry. 4.0.3c fixes the issue with creating an experiment package.

Here is the file, attached using the “Create an experiment package” option.

Again, there is a macro in the block saying that if you don’t score 70% on the test, you’ll have to redo the block.

It works the first time, and sometimes the second time, but if you keep scoring incorrectly, it eventually marks your answers as right.

I think it might have something to do with the code values. Once they are assigned the first time through, they might not get erased, so something coded as correct won’t be recoded as incorrect if you answer it wrong the second time. However, that’s just my guess. Any thoughts?

test example2.zip (8.44 KB)

Here’s what’s going on:

Yes, there is a bug, but it’s easy to work around. The Macro is configured to consider all trials presented in the block, which includes previous runs of the block. This is not the bug. The bug is that the total is staying constant at 6. If you press “f” every single time, the percentage the first time through is 4 of 6, or 66%. The second time through, it’s 8 of 6, or 133%. The workaround is actually the way it should be done: set the Macro to look at a subset consisting of only the last six trials presented in the block. Then it will work as you expect, starting from scratch each time you repeat the block.
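For what it’s worth, the arithmetic of the bug and the workaround can be sketched in Python like this (the names are illustrative, not SuperLab internals):

```python
# The buggy tally counts correct trials across ALL runs of the block
# while the denominator stays fixed at 6, so identical responses on
# a second run report 8 of 6 (133%). Restricting the subset to the
# last six trials presented restarts the tally on each run.

BLOCK_SIZE = 6

def pct_all_runs(history):
    # buggy: every "correct" ever recorded, divided by a fixed 6
    return 100.0 * history.count("correct") / BLOCK_SIZE

def pct_last_run(history):
    # workaround: only the most recent six trials
    recent = history[-BLOCK_SIZE:]
    return 100.0 * recent.count("correct") / BLOCK_SIZE

one_run = ["correct"] * 4 + ["incorrect"] * 2  # pressing "f" -> 4 of 6
two_runs = one_run + one_run                   # same responses again
pct_all_runs(two_runs)  # 133.3...% -- "passes" despite no improvement
pct_last_run(two_runs)  # 66.6...% -- the true per-run percentage
```

With the subset limited to the last six trials, both functions agree on the first run, and only the fixed version stays correct on repeats.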

Thanks Hank!!! You’re a true rockstar and I appreciate your help.

Ben

btw, I don’t know if this was intentional, but “6 - jones” is set so that any response is correct, which is why my numbers were 4 of 6 instead of the 3 of 6 I intuitively expected based on the names of the events.