Noise stimulus

Hi Chris,

I’m interested in trying out the white noise background stimulus. Is there available code to reconstruct the stimulus from the seed used to initiate it for offline analysis?

I’m also curious about the possibility of generating a different kind of noise stimulus plugin. I’d like to create a Gaussian spatiotemporal frequency noise stimulus where I could define the spatial and temporal frequency cutoffs. If this is something you might have time for in the near future, I can send more details from a methods section.

Thanks,
Lindsey.

Hi Lindsey,

Is there available code to reconstruct the stimulus from the seed used to initiate it for offline analysis?

I don’t recommend trying to do that. To quote myself from another discussion:

While it’s entirely possible, starting with the seed value and parameters of the stimulus and display, to re-create the entire sequence of noise images outside of MWorks, there are many details that you need to get exactly right. If you mess up the computation at any point, you can end up generating noise that’s entirely different from what was actually displayed during the experiment. If I were doing this experiment myself, and I really cared about the precise pixels that were displayed on screen, I would be very, very uncomfortable with this approach.

Instead of trying to reconstruct the noise, I’d recommend using stimulus display frame capture to save each noise image for later analysis.

I’d like to create a Gaussian spatiotemporal frequency noise stimulus where I could define the spatial and temporal frequency cutoffs. If this is something you might have time for in the near future, I can send more details from a methods section.

That actually sounds similar to something I developed for Mark Histed’s lab a few years ago. Sure, please send more details.

Cheers,
Chris

Hi Chris,

For the frame capture, do you have a sense of what the speed/memory constraints for this will be? For instance, will I be able to present stimuli at 30-60 Hz?

Here is a blurb from the methods section of Niell and Stryker, 2008:

Gaussian noise movies were created by first generating a random spatiotemporal frequency spectrum in the Fourier domain with defined spectral characteristics. To drive as many simultaneously recorded units as possible, we used a spatial frequency spectrum that dropped off as A(f) = 1/(f + fc), with fc = 0.05 cpd, and a sharp cutoff at 0.12 cpd, to approximately match the stimulus energy to the distribution of spatial frequency preferences. The temporal frequency spectrum was flat with a sharp low-pass cutoff at 4 Hz. This three-dimensional (ωx, ωy, ωt) spectrum was then inverted to generate a spatiotemporal movie. This stimulus is related to the subspace reverse correlation method (Ringach et al., 1997), in that both explicitly restrict the region of frequency space that is sampled. To provide contrast modulation, this movie was multiplied by a sinusoidally varying contrast. Movies were generated at 60 x 60 pixels and then smoothly interpolated to 480 x 480 pixels by the video card to appear at 60 x 60° on the monitor and played at 30 frames per second. Each movie was 5 min long and was repeated two to three times, for 10–15 min total presentation.
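
For concreteness, here’s a rough NumPy sketch of how I read that procedure (purely illustrative; the frame counts, pixel sizes, normalization, and contrast-modulation rate below are my guesses, not values I’m committed to):

import numpy as np

# Illustrative parameters only -- placeholders, not final values
n_pix = 60           # frame size in pixels (60 x 60, as in the paper)
n_frames = 900       # 30 s at 30 fps; a full 5-min movie may need chunking or float32
deg_per_pix = 1.0    # ~60 deg across 60 pixels
fps = 30.0
fc = 0.05            # spatial frequency rolloff constant (cpd)
sf_cutoff = 0.12     # hard spatial frequency cutoff (cpd)
tf_cutoff = 4.0      # hard temporal frequency low-pass cutoff (Hz)

# Frequency axes for the 3D spectrum (cycles per degree and Hz)
fx = np.fft.fftfreq(n_pix, d=deg_per_pix)
fy = np.fft.fftfreq(n_pix, d=deg_per_pix)
ft = np.fft.fftfreq(n_frames, d=1.0 / fps)
FX, FY, FT = np.meshgrid(fx, fy, ft, indexing='ij')
sf = np.sqrt(FX**2 + FY**2)

# Amplitude envelope: A(f) = 1/(f + fc) in space, flat in time, hard cutoffs
amp = 1.0 / (sf + fc)
amp[sf > sf_cutoff] = 0.0
amp[np.abs(FT) > tf_cutoff] = 0.0

# Random phases give a random spectrum with that amplitude envelope
rng = np.random.default_rng(seed=0)
phase = rng.uniform(0.0, 2.0 * np.pi, size=amp.shape)
spectrum = amp * np.exp(1j * phase)

# Invert to get the spatiotemporal movie and rescale to [0, 1] for display
movie = np.real(np.fft.ifftn(spectrum))
movie = (movie - movie.min()) / (movie.max() - movie.min())

# Sinusoidal contrast modulation over time (0.1 Hz here, just as an example)
t = np.arange(n_frames) / fps
contrast = 0.5 * (1.0 + np.sin(2.0 * np.pi * 0.1 * t))
movie = 0.5 + (movie - 0.5) * contrast[np.newaxis, np.newaxis, :]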

Ideally, I could control the center frequencies and cutoffs.

Do you have a description of the stimulus you coded for Mark?

Lindsey.

Hi Lindsey,

For the frame capture, do you have a sense of what the speed/memory constraints for this will be? For instance, will I be able to present stimuli at 30-60 Hz?

Capturing every frame at 30-60 Hz and full display resolution probably isn’t going to work. I was thinking you’d be presenting other stimuli on top of a static white noise background, in which case it would be sufficient to capture just one frame of the noise (e.g. per trial) for later analysis.

Were you instead imagining updating the noise every frame? That’s 100% supported by MWorks’ white noise background (via the randomize_on_draw parameter), but capturing all those frames probably isn’t feasible. An alternative would be to use pre-generated (or dynamically generated) noise images instead of the white noise background stimulus. That’s an approach other labs have used in order to have the noise available for offline analysis.

Here is a blurb from the methods section of Niell and Stryker, 2008

That still sounds kinda sorta similar to what I implemented for Mark, but the end result might be totally different. I think I’d need to read the details.

Do you have a description of the stimulus you coded for Mark?

The stimulus is described in Beaudot and Mullen, 2006. Mark also referenced Bondy and Cumming, 2017 (maybe this paper?).

Cheers,
Chris

Hi Chris,

Yes- my goal is to present a temporally modulated noise stimulus that I can use to map receptive fields through reverse correlation analysis. That means I need to rapidly update the stimulus (30-60 Hz) and know exactly what was presented on each frame to align neural responses to stimulus history.

This can be done with either white noise or something more targeted to the spatio-temporal preferences of the brain region. The noise stimulus you made for Mark is conceptually similar to this more targeted noise, but doesn’t currently have any temporal modulation.

I am ok with having pre-generated stimulus images as long as I can be sure that they will be presented reliably- if there are skipped frames that I don’t know about, that will be problematic. What did you mean by a dynamically-generated stimulus? I had imagined that this is how the white noise background works.

Also- are there any tools for making pre-generated stimulus files with MWorks? For instance, could I use the #stimulusCapture function to create .pngs at low rates offline that I could then present at high rates online? It would be nice if things were internally generated/consistent.

Let me know if it’s easier to discuss this over zoom.

Thanks,
Lindsey.

Hi Lindsey,

The noise stimulus you made for Mark is conceptually similar to this more targeted noise, but doesn’t currently have any temporal modulation.

I don’t know exactly what you have in mind, but Mark’s stimulus is dynamic and updates every frame. Mark feeds it a directory full of simple white noise image files (for example, generated with Python and numpy.random.uniform), one for each frame. When a frame is displayed, the stimulus computes the FFT of the “simple” noise, multiplies it by a frequency-space mask, computes the inverse FFT, and displays the result.
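
If it helps to picture it, the input files are just plain white noise, something like the sketch below (a minimal illustration; the frame count, image size, and low-pass mask are made up, and the real frequency-space masking happens on the GPU inside the stimulus, not in Python):

import numpy as np
from PIL import Image  # any image library works; Pillow shown here

rng = np.random.default_rng()
size = 64        # pixels per side (placeholder)
n_frames = 100   # one file per display frame (placeholder)

for i in range(n_frames):
    # "Simple" white noise, one grayscale image file per frame
    noise = rng.uniform(0.0, 255.0, size=(size, size)).astype(np.uint8)
    Image.fromarray(noise, mode='L').save(f'frame_{i}.png')

# What the stimulus then does to each frame at display time, sketched in
# NumPy for clarity: FFT, multiply by a frequency-space mask, inverse FFT
fx, fy = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing='ij')
mask = (np.sqrt(fx**2 + fy**2) < 0.1).astype(float)   # toy low-pass mask
filtered = np.fft.ifft2(np.fft.fft2(noise.astype(float)) * mask).real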

If Mark’s stimulus isn’t really what you want, I can certainly implement a new one. It might be several months before I get to it, though.

I am ok with having pre-generated stimulus images as long as I can be sure that they will be presented reliably- if there are skipped frames that I don’t know about, that will be problematic.

A frame list would be the right way to present the image files. If there are skipped frames, you’ll be notified.

What did you mean by a dynamically-generated stimulus? I had imagined that this is how the white noise background works.

I was referring to when you would generate the noise image files. By “pre-generated”, I meant creating them all before you run the experiment. By “dynamically generated”, I meant creating the image files as the experiment runs (e.g. with Python code). But in either case, I literally mean image files, saved to disk to be available later.

The white noise background stimulus does indeed generate the noise dynamically, but it exists only in CPU/GPU memory. None of the data is ever read from or written to a file.

Also- are there any tools for making pre-generated stimulus files with MWorks? For instance, could I use the #stimulusCapture function to create .pngs at low rates offline that I could then present at high rates online? It would be nice if things were internally generated/consistent.

There aren’t any existing tools, but what you described is 100% possible. You would need to create an experiment that displays and captures each stimulus you want, one by one. Then you would extract all the captured frames from the MWorks event file, store them as separate image files, and use the files for the real experiment. Actually, you could skip the event file step and store the captured frames as the stimulus-generation experiment runs using a little Python code.

I hadn’t thought of this, but it sounds like a good approach, as long as MWorks can create the stimuli you want.

Let me know if it’s easier to discuss this over zoom.

I’m happy to do that, if that’s your preference.

Cheers,
Chris

Hi Lindsey,

I’ve attached a ZIP file containing the example code that we discussed.

There are two examples that generate and display white noise images. The first, in folder “capture”, uses the approach we discussed: It generates noise images with MWorks’ white noise background stimulus, captures the images with #stimDisplayCapture, saves the images to files, and then loads and displays the files as a frame list. There are two protocols in the experiment: “Generate noise” creates the noise images, and “Present noise” displays them.

The advantage of this approach is that it isn’t limited to a particular stimulus type. You could use any stimulus (or combination of stimuli) that you like and capture and display the images in exactly the same way. The disadvantages of this approach are that it’s relatively slow (because you have to wait for each display update and image capture to complete before moving on to the next); in the case of white noise, the image files are larger than they need to be (because MWorks creates PNG files with red, green, blue, and alpha channels, when you only need a single, grayscale channel); and you need to show the images on the stimulus display as you capture them (which may be an issue if the animal is already present).

The second example, in the folder “python_gen”, uses NumPy and tifffile to generate and save the noise images. As with “capture”, the images are presented via a frame list. The advantages of this approach are that image generation is much faster (so much so that it may be feasible to generate each trial’s images at run time, as the example does); the image files are smaller (they’re saved as grayscale TIFFs with no alpha), which both saves disk space and reduces image load times; and you don’t need to display the images when you generate them. The disadvantage of this approach is that you can’t make use of any MWorks stimuli when creating the images; instead, they are created entirely in Python code. This isn’t an issue for simple white noise, but it might be for more complex stimuli.
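
For reference, the generation side of “python_gen” boils down to something like the following (a minimal sketch, not the actual code in the ZIP; the directory layout, image size, and frame count here are placeholders):

import os
import numpy as np
import tifffile

rng = np.random.default_rng()
outdir = 'images/trial_1'   # placeholder; the real example builds its own session/trial layout
os.makedirs(outdir, exist_ok=True)

size = 64        # pixels per side (placeholder)
n_frames = 180   # e.g. 3 s at 60 Hz (placeholder)

for i in range(n_frames):
    # Single-channel (grayscale) 8-bit white noise, no alpha channel
    frame = rng.integers(0, 256, size=(size, size), dtype=np.uint8)
    tifffile.imwrite(os.path.join(outdir, f'frame_{i}.tif'), frame)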

You also asked for an example of how to load image files in batches. The noise-generation examples demonstrate one nice way to do this: Define a frame list that has a bunch of images as attached frames, and load/unload the images by loading/unloading their parent frame list. (There are also other approaches you could take, if you aren’t using a frame list.)

Finally, you asked about extracting warnings and image hashes from event files. There are two Python files that demonstrate this. The first, print_warnings.py, just prints all the warning messages it finds in a given event file:

$ python3 print_warnings.py ~/Documents/MWorks/Data/warnings.mwk2
804949877 WARNING: Variable for ignored trials: ignore was not found.
805220401 WARNING: Eye window can't find the following variables: eye_h, eye_v, saccade, , , , 
805220422 WARNING: Variable for success trials: success was not found.
805220427 WARNING: Variable for failure trials: failure was not found.
805220430 WARNING: Variable for ignored trials: ignore was not found.
806897914 WARNING: Skipped 1 display refresh cycle
806931262 WARNING: Skipped 1 display refresh cycle
812935464 WARNING: Skipped 1 display refresh cycle

The second file, get_image_hashes.py, demonstrates both how to extract image paths and hashes from #stimDisplayUpdate events, and how to compute a given image file’s hash and compare it against the extracted value:

$ python3 get_image_hashes.py ~/Documents/MWorks/Data/image_hashes.mwk2 capture/images/20240911-135431 
No matching hash for image trial_2/frame_6.png

Note that this code is designed to work with the noise-generation example experiments, which is why it looks first for a frame list and then finds the displayed image inside it.
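
The file-hashing half of the comparison is just standard Python: compute a digest of the raw file bytes and compare it against the value pulled from the event file. The sketch below is a simplified illustration of that idea, not the actual contents of get_image_hashes.py; in particular, the 'sha1' default is an assumption, and the algorithm and digest format have to match whatever the event file actually records.

import hashlib
import sys

def file_hash(path, algorithm='sha1'):
    # Digest of the raw image file bytes; the algorithm must match the one
    # recorded in the event file (see get_image_hashes.py for the real logic)
    h = hashlib.new(algorithm)
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()

if __name__ == '__main__':
    path, expected = sys.argv[1], sys.argv[2]
    if file_hash(path) == expected:
        print('Hash matches for', path)
    else:
        print('No matching hash for', path)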

There’s a lot going on here, and I know I haven’t discussed any of it in great detail. If you have questions or need me to clarify anything, please let me know!

Cheers,
Chris
noise_stimulus.zip (637.9 KB)

Hi Chris,
Thanks so much!
I’m traveling for the next week or so, but will try it out when I return.
Best,
Lindsey.

Hi Chris,
I just started to check out the noise_stimulus.mwel file and I’m a bit confused how to use it.
Perhaps we could chat over zoom?
I’m available all afternoon today and most of the day on Monday.
Thanks,
Lindsey.

Hi Lindsey,

Could we chat Monday morning? Maybe around 10am?

Thanks,
Chris

Hi Chris-
Yes that works for me. Thanks.
Let me know if I should resend the zoom link.
Just a couple of quick questions so I can play with it a bit more between now and then: am I supposed to break up the noise_stimulus code into separate experiments for the “Generate noise” and “Present noise” protocols? Right now it won’t run because it can’t find the image files (since they haven’t been made yet), and I’m not sure how to tell it to run the “Generate noise” protocol specifically. Also- there is no path set for the capture in the “Generate noise” protocol- is the images folder the default?
Thanks,
Lindsey.

Hi Lindsey,

I can send a link for the meeting.

For both the “capture” and “python_gen” examples, the path to store the images is set in noise_stimulus.py (right at the beginning; variable imagepath). You should set that to a valid directory on your system before loading either experiment.

To run the “Generate noise” protocol, choose it in MWClient’s protocol-selection drop-down menu (below the experiment name, directly to the left of the green “play” button that starts the experiment). Once you’ve run it, switch back to “Present noise” and run that.

Chris

Thanks- I’ll give that a try and see you on Monday.
Lindsey.

Hi Chris,
Sorry- a question about using the movie stimulus instead of frame_list. Can I still generate/present the folder of frames as in the example code, or do I need to change the code to create a single movie file?
Thanks,
Lindsey.

It will work the same. In the stimulus definition, you should just need to replace frame_list with movie and then add the frames_per_second parameter.

Chris

Awesome- thanks!

Hi Lindsey,

I’m having issues with path names. I’ve set it up to write the images to a folder that is in the same directory as the mwel, but it seems to be looking for it elsewhere.

The Python code looks fine. But in the MWEL, the path attribute of the image_file stimulus should remain as I originally had it:

path = 'images/trial_${trial_number}/frame_${rr_index}.png'

The reason is that the images part of the path is actually a symbolic link created by the Python code:

# In order for MWorks to access the noise image directory, we need to create a
# symbolic link to it inside the current working directory
os.symlink(os.path.join(imagepath, sessiondir), 'images')

So, irrespective of the value of the Python imagepath variable, MWorks will look for the images via that symbolic link.

Chris

Hi Chris,
For some reason it’s still failing. I don’t know why it’s looking for that alternate path…
Lindsey.