Generating videos of stimuli

Hi Chris,

We are planning to conduct an experiment online via a platform called
Labvanced. It is a cognitive task, so we need very simple stimuli, which we
have generated with a plugin that Ralf wrote. Each stimulus has three
features: motion, color, and shape. Usually, with our monkeys, we generate
these stimuli online with MWorks and run the task like any other task, but
in Labvanced we cannot generate such stimuli, mostly because there is no
motion engine on the platform.

So our current strategy is to have a little GIF or video clip for every
combination of these features and upload those clips on labvanced as
animated / moving stimuli.

My question for you would then be: is there a way of generating a video or
a series of images from MWorks that I can then quickly put together in
python or matlab to create a gif?

My plan would be to use MWorks to generate the stimuli, show one stimulus
at a time on the screen, save each frame as an image into a specific
folder, and then cycle through the folders with Python to generate one GIF
for each stimulus.
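The final assembly step described above could look something like the following sketch, which collects a folder of numbered PNG frames into one GIF using Pillow. The folder layout, file names, and frame duration here are illustrative assumptions, not part of MWorks:

```python
# Sketch: assemble one stimulus GIF from a folder of numbered PNG frames.
# Assumes frames are named so that lexicographic sort gives playback order
# (e.g. frame_000.png, frame_001.png, ...).

import glob
import os

from PIL import Image


def frames_to_gif(frame_dir, out_path, frame_ms=17):
    """Combine all PNGs in frame_dir (sorted by name) into an animated GIF."""
    frame_paths = sorted(glob.glob(os.path.join(frame_dir, "*.png")))
    frames = [Image.open(p) for p in frame_paths]
    frames[0].save(
        out_path,
        save_all=True,
        append_images=frames[1:],
        duration=frame_ms,  # ms per frame; ~17 ms approximates a 60 Hz display
        loop=0,             # 0 means loop forever
    )
```

Note that the GIF format stores frame durations in centiseconds, so very short durations get rounded; for precise 6-second clips it may be worth checking the total duration of the output.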

I am not sure whether it is possible to automatically export frames from
MWorks as image files.

I hope this is clear; if not, I would also be available for a Skype call
to explain further.

thanks in any case
best
Antonino

Hi Antonino,

MWorks doesn’t provide any built-in support for this. Probably the simplest approach would be to record the screen while showing all the stimuli and then edit the resulting video into separate clips for each stimulus configuration. If you prefer to generate GIFs, it looks like you have some options.

Cheers,
Chris

Thanks again Chris,

so there is no way of recreating exactly what was on the screen, offline?
Say in MATLAB or Python?
I am talking about a script that reproduces, offline, every pixel of every frame?

lg
a

Hi Antonino,

Apologies for the delayed reply.

So there is no way of recreating exactly what was on the screen, offline? Say in MATLAB or Python? I am talking about a script that reproduces, offline, every pixel of every frame?

No.

That said, I have done some work on automatic capture of stimulus display frames to the event stream. In short, every time the stimulus display is updated, the contents of the display are written, as binary image data, to a new system variable (#stimDisplayCapture), which is recorded in the event file. The capture process is expensive in terms of CPU and GPU usage, although that expense is proportional to the resolution of the captured images.

This is very experimental at the moment, and it may never work well enough to become a supported feature. However, if you want to try it, I can provide you with an MWorks build that includes it. At present, if you want to capture images at a scaled-down resolution, the dimensions must be hard coded, so I’ll need to know what dimensions you want. Alternatively, I can code it so that the images are the same resolution as the display, but, as I said, that can be very demanding even for a relatively powerful system.

Let me know if you’re interested.

Cheers,
Chris

Dear Chris,

first of all do not worry at all about delays.

I would be very happy to check this out.

lg
a

Hi there Chris,

regarding #stimDisplayCapture: I am currently capturing a 400x400 pixel
portion of the mirror window. I select the portion of the window manually
with QuickTime before starting the experiment.

If frame acquisition works on the mirror window, I will size the stimuli
and the window accordingly. But I could also use the parameter that crops
out a portion of the screen. It makes little difference to me.

Given that I will generate 880 combinations, each lasting 6 seconds, I
would need rather tight control over the timing. Will each frame have a
timestamp?

thanks
LG
a

Hi Antonino,

If frame acquisition works on the mirror window, I will size the stimuli and the window accordingly.

I hadn’t thought of that approach, but it seems like a good idea. If you configure MWServer to display only the mirror window and set the mirror window’s size and aspect ratio as desired, then MWorks can just capture the mirror window at full resolution and write the result to #stimDisplayCapture. If this sounds good to you, I’ll make an MWorks build that does this.

Will each frame have a timestamp?

Yes. Each assignment to #stimDisplayCapture will have the same timestamp as the corresponding #stimDisplayUpdate event.

Cheers,
Chris

Well, that sounds optimal!

I look forward to testing this!

LG

a

Hi Antonino,

The build with display capture support is now available to download.

As we discussed, it will capture the main display window at full resolution. For your purposes, you want the “main” window to be the mirror window, so you should select “Mirror window only” in MWServer’s display preferences.

Every time MWorks renders a new frame for the stimulus display, it will capture that frame in PNG format and write the data to the variable #stimDisplayCapture. If you extract the value associated with a #stimDisplayCapture event (which will be a bytes object in Python or a uint8 array in MATLAB) and write it as binary data to a file, you should be able to open that file in an image viewer.
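The extract-and-write step Chris describes might be sketched as follows. This assumes the #stimDisplayCapture events have already been pulled out of the event file (e.g. with MWorks' Python data-file tools) into (timestamp, data) pairs; the function and variable names are illustrative, not part of any MWorks API:

```python
# Sketch: write each captured frame's PNG data to its own file, using the
# event timestamp in the file name so frames can be matched back to
# #stimDisplayUpdate events and sorted into playback order.

import os


def save_frames(events, out_dir):
    """events: iterable of (timestamp, png_data) pairs, where png_data is
    the bytes-like value of a #stimDisplayCapture event. Returns the list
    of file paths written."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for timestamp, png_data in events:
        path = os.path.join(out_dir, f"frame_{timestamp}.png")
        with open(path, "wb") as f:
            # bytes object in Python; a uint8 array would be written
            # equivalently from MATLAB with fwrite
            f.write(bytes(png_data))
        paths.append(path)
    return paths
```

Each file written this way should open directly in an image viewer, since the data is already a complete PNG.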

To quickly confirm that the display capture is working correctly, run your experiment with MWClient’s Image Viewer window open, and set its “Image data variable” field to #stimDisplayCapture. If things are working, you should see a copy of MWServer’s mirror window in the image viewer.

Since this is still experimental code, I don’t recommend using it for production experiments. If you have problems, please let me know!

Chris

Hey Chris,

I downloaded the build and installed it.

I tried to compile the custom plugin that Ralf wrote to generate the
stimuli I need for the clips, but I encountered an error. The same plugin
compiled successfully with version 0.10 on the same machine.

Attached are the plugin and the errors I encountered while trying to
compile it.

I am inexperienced when it comes to MWorks plugins and would not know
where to start. I hope it is something trivial that you can spot quickly,
so that I can focus on fixing it (or, as always, ask Ralf if I fail).

Thanks in advance
LG
a

Attachment: Build_target_adv_stimulus_2021-01-25T16-33-13.txt (2.96 KB)

Hi Antonino,

The library file libboost_system.a no longer exists. If you delete it from the “Frameworks & Libraries” section in the Xcode sidebar (see the attached image), you should be able to build the plugin.

Cheers,
Chris

Attachment: Screen_Shot_2021-01-25_at_3.57.46_PM.png (171 KB)

Hey There,

everything worked perfectly.

I have created all the stimuli I needed.

What type of feedback can I offer you?

best
Antonino

Hi Antonino,

Thanks again for testing the in-progress version of stimulus display frame capture. This feature is now fully implemented and available in the MWorks nightly build. For more info, please see this discussion. (FYI, the user in that discussion is the one who originally requested the ability to record the stimulus display, and the in-progress code I shared with you represented my efforts to date to add that capability.)

If you want to provide more feedback or suggest further improvements, please feel free to do so, either here or in the new discussion.

Cheers,
Chris