Hi Lindsey,
I’ve attached a ZIP file containing the example code that we discussed.
There are two examples that generate and display white noise images. The first, in folder “capture”, uses the approach we discussed: It generates noise images with MWorks’ white noise background stimulus, captures the images with #stimDisplayCapture, saves the images to files, and then loads and displays the files as a frame list. There are two protocols in the experiment: “Generate noise” creates the noise images, and “Present noise” displays them.
The advantage of this approach is that it isn’t limited to a particular stimulus type. You could use any stimulus (or combination of stimuli) that you like and capture and display the images in exactly the same way. The disadvantages of this approach are that it’s relatively slow (because you have to wait for each display update and image capture to complete before moving on to the next); in the case of white noise, the image files are larger than they need to be (because MWorks creates PNG files with red, green, blue, and alpha channels, when you only need a single, grayscale channel); and you need to show the images on the stimulus display as you capture them (which may be an issue if the animal is already present).
The second example, in the folder “python_gen”, uses NumPy and tifffile to generate and save the noise images. As with “capture”, the images are presented via a frame list. The advantages of this approach are that image generation is much faster (so much so that it may be feasible to generate each trial’s images at run time, as the example does); the image files are smaller (they’re saved as grayscale TIFFs with no alpha), which both saves disk space and reduces image load times; and you don’t need to display the images when you generate them. The disadvantage of this approach is that you can’t make use of any MWorks stimuli when creating the images; instead, they are created entirely in Python code. This isn’t an issue for simple white noise, but it might be for more complex stimuli.
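In case a standalone reference is useful, here’s a minimal sketch of that generation step, using NumPy to make the noise and tifffile to write grayscale TIFFs. The frame count, image size, and file names below are placeholders I made up, not the values the attached “python_gen” code actually uses:

# Sketch of per-trial noise generation, along the lines of the "python_gen"
# example.  Frame count, image size, and file names are placeholders.
import os

import numpy as np
import tifffile

def generate_noise_frames(out_dir, n_frames=30, width=512, height=512, seed=None):
    """Generate n_frames of uniform white noise and save each one as a
    single-channel, 8-bit grayscale TIFF (no alpha)."""
    rng = np.random.default_rng(seed)
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i in range(n_frames):
        frame = rng.integers(0, 256, size=(height, width), dtype=np.uint8)
        path = os.path.join(out_dir, f'frame_{i}.tif')
        tifffile.imwrite(path, frame)  # 2-D array -> grayscale, no alpha channel
        paths.append(path)
    return paths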
You also asked for an example of how to load image files in batches. The noise-generation examples demonstrate one nice way to do this: Define a frame list that has a bunch of images as attached frames, and load/unload the images by loading/unloading their parent frame list. (There are also other approaches you could take, if you aren’t using a frame list.)
Finally, you asked about extracting warnings and image hashes from event files. There are two Python files that demonstrate this. The first, print_warnings.py, just prints all the warning messages it finds in a given event file:
$ python3 print_warnings.py ~/Documents/MWorks/Data/warnings.mwk2
804949877 WARNING: Variable for ignored trials: ignore was not found.
805220401 WARNING: Eye window can't find the following variables: eye_h, eye_v, saccade, , , ,
805220422 WARNING: Variable for success trials: success was not found.
805220427 WARNING: Variable for failure trials: failure was not found.
805220430 WARNING: Variable for ignored trials: ignore was not found.
806897914 WARNING: Skipped 1 display refresh cycle
806931262 WARNING: Skipped 1 display refresh cycle
812935464 WARNING: Skipped 1 display refresh cycle
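For context, a script like that can be put together with the MWorks Python data-file tools. The sketch below is my rough reconstruction, not a copy of print_warnings.py; in particular, it assumes warnings show up as #announceMessage events whose message text starts with “WARNING”, so check the attached script for the real details:

# Sketch of printing warnings from an MWorks event file.  Assumes the
# mworks Python package is available and that warning messages arrive as
# #announceMessage events whose 'message' text starts with 'WARNING'.
# See print_warnings.py in the ZIP for the actual implementation.
import sys

from mworks.data import MWKFile

def print_warnings(filename):
    f = MWKFile(filename)
    f.open()
    try:
        for event in f.get_events(codes=['#announceMessage']):
            data = event.data
            if isinstance(data, dict):
                message = data.get('message', '')
                if message.startswith('WARNING'):
                    print(event.time, message)
    finally:
        f.close()

if __name__ == '__main__':
    print_warnings(sys.argv[1])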
The second file, get_image_hashes.py, demonstrates both how to extract image paths and hashes from #stimDisplayUpdate events, and how to compute a given image file’s hash and compare it against the extracted value:
$ python3 get_image_hashes.py ~/Documents/MWorks/Data/image_hashes.mwk2 capture/images/20240911-135431
No matching hash for image trial_2/frame_6.png
Note that this code is designed to work with the noise-generation example experiments, which is why it looks first for a frame list and then finds the displayed image inside it.
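If it helps, the comparison step itself is just hashing the file’s bytes and comparing digests. The sketch below assumes the expected value is a hex digest of the file contents and guesses SHA-1 as the algorithm; get_image_hashes.py is the reference for the algorithm and event field that MWorks actually uses:

# Sketch of the hash-comparison step only.  The expected_hash value would
# come from a #stimDisplayUpdate event's data (get_image_hashes.py shows how
# to pull it out); SHA-1 is a guess here, not something MWorks guarantees.
import hashlib

def file_hash(path, algorithm='sha1'):
    """Return the hex digest of the file's contents."""
    h = hashlib.new(algorithm)
    with open(path, 'rb') as fp:
        for chunk in iter(lambda: fp.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()

def check_image(path, expected_hash):
    if file_hash(path) == expected_hash:
        print(f'Hash matches for image {path}')
    else:
        print(f'No matching hash for image {path}')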
There’s a lot going on here, and I know I haven’t discussed any of it in great detail. If you have questions or need me to clarify anything, please let me know!
Cheers,
Chris
noise_stimulus.zip (637.9 KB)