I hope you are doing well. I have a task paradigm for iOS that I would like to code in MWorks.
I’m envisioning a visual memory test similar to the one linked here. Ideally, I would like to control several aspects, including how long the squares are visible before they are occluded, the size of the grid, and the number of squares that light up; I would also like the task to support touch-screen interaction. Do you know if similar task infrastructure currently exists? It would greatly assist my development.
I would appreciate any suggestions on how to approach this task. Thank you for your time!
While I don’t have an example that works exactly like the test you referenced, the Simon experiment that I wrote for you last year is a good starting point.
Like the buttons in that experiment, each square in the new one would be a fixation point. To support a variable number of squares, you’ll need to create enough fixation points to handle the maximum number of squares, but queue and display only the desired number at each step in the test. You would detect button presses using a touch input device.
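To make that concrete, here's a rough, untested MWEL sketch of the main pieces: a touch_input device, two grid cells implemented as fixation points, and a trial that lights one cell for a sample period, occludes it, and then waits for a touch. All of the specifics (names, positions, sizes, durations, and the separate "highlight" rectangle used to light a cell) are placeholders I've made up for illustration, so check the parameters against the Simon experiment and the reference manual before relying on them:

```
// Touch input: these variables are updated whenever the screen is touched
var touch_x = 0
var touch_y = 0
var touch_in_progress = false

iodevice/touch_input touch (
    touch_position_x = touch_x
    touch_position_y = touch_y
    touch_in_progress = touch_in_progress
    )

// Task parameters (placeholder values)
var sample_duration = 1s   // how long the lit squares stay visible
var square_size = 3        // cell size in degrees

// One "touched" flag per grid cell, set by that cell's fixation point
var cell_1_touched = false
var cell_2_touched = false

// Each grid cell is a fixation point, like the buttons in the Simon
// experiment.  Declare one per cell up to your maximum grid size, and
// queue only the ones the current grid size requires.
stimulus/fixation_point cell_1 (
    trigger_width = square_size
    trigger_watch_x = touch_x
    trigger_watch_y = touch_y
    trigger_flag = cell_1_touched
    color = 0.25, 0.25, 0.25
    x_size = square_size
    y_size = square_size
    x_position = -5
    y_position = 0
    )

stimulus/fixation_point cell_2 (
    trigger_width = square_size
    trigger_watch_x = touch_x
    trigger_watch_y = touch_y
    trigger_flag = cell_2_touched
    color = 0.25, 0.25, 0.25
    x_size = square_size
    y_size = square_size
    x_position = 5
    y_position = 0
    )

// "Lit" highlight drawn on top of a cell during the sample phase;
// declare one per cell that can light up
stimulus/rectangle highlight_1 (
    color = 1, 1, 0
    x_size = square_size
    y_size = square_size
    x_position = -5
    y_position = 0
    )

protocol 'Visual Memory Test' {
    start_io_device (touch)

    trial {
        // Show the grid (queue only as many cells as this trial needs)
        queue_stimulus (cell_1)
        queue_stimulus (cell_2)

        // Sample phase: light the chosen cells for sample_duration
        queue_stimulus (highlight_1)
        update_display ()
        wait (sample_duration)

        // Occlusion: remove the highlights, leaving the plain grid
        dequeue_stimulus (highlight_1)
        update_display ()

        // Response phase: wait for the subject to touch a cell
        cell_1_touched = false
        cell_2_touched = false
        wait_for_condition (
            condition = touch_in_progress and (cell_1_touched or cell_2_touched)
            timeout = 10s
            )

        // Clear the display before the next trial
        dequeue_stimulus (cell_1)
        dequeue_stimulus (cell_2)
        update_display ()
    }

    stop_io_device (touch)
}
```

In this sketch, the grid cells stay queued for the whole trial so that their touch triggers remain active during the response phase; only the highlight stimuli come and go. For a full version, you'd declare one cell (and one highlight) per location up to your maximum grid size, queue only as many as the current grid size calls for, and use a task system like the one in the Simon experiment to score which cells were touched.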
Why don't you take a shot at implementing this, using the Simon experiment as a guide, and let me know if you run into trouble with any of the details?