Hi Chris,
I'm emailing to ask whether it's possible to have MWorks use a seed to generate a white noise image as a stimulus. If I could do this instead of saving the white noise stimuli as images, it would be ideal. Currently I generate and save each image using MATLAB and feed them into MWorks as .png files.
Thank you,
Yvonne
Hi Yvonne,
This isn’t possible at present. However, it would be pretty straightforward to add support for it, either by modifying the existing White Noise Background stimulus or by adding a new stimulus type.
As we’ve discussed before, the downside to this approach is that the noise-generation method will be baked into the stimulus implementation (although we could offer a fixed set of options, e.g. uniform, Gaussian, etc.). Have you decided that this is OK for your purposes?
Cheers,
Chris
Hey Chris,
Thanks for the prompt response.
The ability to show a very large number of unique noises is essential for our research here, so it would be very helpful to have this built into MWorks, using seeds instead of loading huge noise image files. I like your idea of having a fixed set of options (e.g. uniform, Gaussian, etc.). How difficult is it to incorporate it in MWorks?
Cheers and have a nice weekend,
Arash
Hi Arash,
How difficult is it to incorporate it in MWorks?
I don’t think it’s a big job – maybe a couple days of work?
The biggest question is how large is “a very large number”? More specifically, how many sequential frames of unique white noise do you need to generate? The answer will determine if the stimulus can pre-generate all the frames and keep them in memory, or whether it will have to generate them one-by-one, on the fly. If you need no more than a couple seconds of noise per trial, then pre-generation should be fine. If you need minutes or hours of noise, then we’ll need to generate it dynamically (unless you’re OK with playing a few seconds of noise on a loop).
I like your idea of having a fixed set of options (e.g. uniform, Gaussian, etc.).
Are there any other options you’d like to have?
Chris
Hey Chris,
Thanks a lot.
how large is “a very large number”?
Ideally we want to show 1-2 seconds of noise, refreshed at every frame. That means (assuming 60 Hz) 60-120 noises per trial. But there will be ~2,000 such trials, which takes us to ~200,000 noise patterns per session.
Are there any other options you’d like to have?
We may have to try other types of noise (e.g. pyramidal noise, phase scrambling, etc.) in the future, but for the moment we just need white noise at variable grains. So the variables for each noise image will be: size and grain. This will be enough to get us going.
Cheers,
Arash
Hi Arash,
Ideally we want to show 1-2 seconds of noise, refreshed at every frame. That means (assuming 60 Hz) 60-120 noises per trial. But there will be ~2,000 such trials, which takes us to ~200,000 noise patterns per session.
In that case, we should be able to pre-generate the noise and keep it in memory. The noise frames can be updated between trials.
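For a rough sense of scale, holding one trial’s worth of frames in memory is indeed modest. The arithmetic below assumes 8-bit grayscale frames at 1920x1080; both are my assumptions for illustration, not confirmed MWorks internals:

```python
# Back-of-the-envelope memory estimate for one trial's pre-generated
# noise frames. Assumes 8-bit grayscale at 1920x1080 (an assumption,
# not a confirmed MWorks detail).
width, height = 1920, 1080
frames_per_trial = 120                # 2 s of noise at 60 Hz
bytes_per_frame = width * height      # 1 byte per pixel
total_mib = frames_per_trial * bytes_per_frame / 1024 ** 2
print(f"{total_mib:.0f} MiB per trial")  # roughly 237 MiB
```

A couple hundred megabytes per trial is easy to regenerate between trials, while a full session’s ~200,000 frames would not fit comfortably in memory at once.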
So the variables for each noise image will be: size and grain.
I’m not 100% sure what you have in mind here.
By size, do you mean you want the option of displaying noise over only a portion of the display (instead of fullscreen)?
By grain, I assume you mean that you want control over the scale at which the image is randomized. In other words, while the current white noise background assigns a different value to every pixel, you want the option of randomizing larger groups of pixels (e.g. 4x4 or 8x8 squares). Is that correct? If so, do you want to specify grain size in units of pixels or degrees?
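To make the grain idea concrete, here is a plain-Python sketch of block-grained uniform noise (my illustration of the concept, not MWorks’ implementation; grain is given in pixels here):

```python
import random

def block_noise(width, height, grain, seed):
    """Uniform white noise where each grain x grain block of pixels
    shares one random 8-bit value."""
    rng = random.Random(seed)
    # One random value per block...
    blocks = [[rng.randrange(256) for _ in range(width // grain)]
              for _ in range(height // grain)]
    # ...expanded so every pixel inside a block gets that value.
    return [[blocks[y // grain][x // grain] for x in range(width)]
            for y in range(height)]

img = block_noise(width=8, height=8, grain=4, seed=42)
```

With grain=1 this reduces to per-pixel randomization, matching what the current white noise background does.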
Thanks,
Chris
Hey Chris,
In that case, we should be able to pre-generate the noise and keep it in memory. The noise frames can be updated between trials.
Great.
By size, do you mean you want the option of displaying noise over only a portion of the display (instead of fullscreen)?
Well, we want it full screen; size in this case refers to the screen resolution.
By grain, I assume you mean that you want control over the scale at which the image is randomized. In other words, while the current white noise background assigns a different value to every pixel, you want the option of randomizing larger groups of pixels (e.g. 4x4 or 8x8 squares). Is that correct? If so, do you want to specify grain size in units of pixels or degrees?
Yes, exactly. And ideally we want it specified in units of degrees.
Cheers,
Arash
Ok, sounds good. I’ll try to get this done in the next week or so.
Chris
Fantastic, thanks a lot.
Speaking of noise types, having an option for making the color version of each noise type (flat, Gaussian) would also be very helpful.
Cheers,
Arash
Hey Chris,
Are there any updates regarding the noises?
Cheers,
Arash
Hi Arash,
Are there any updates regarding the noises?
I’ve been working on it this week. The updates should be in the nightly build in a couple more days.
Speaking of noise types, having an option for making the color version of each noise type (flat, Gaussian) would also be very helpful.
Do you mean, e.g., pink and brown noise? Or do you mean literal color?
If the former, what would Gaussian pink noise look like?
Chris
Great thanks.
As for color, I meant it literally: basically the same kinds of noise (flat, Gaussian, pink, etc.) generated separately for the three color channels, which will give us a salt-and-pepper color noise.
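Drawing each channel independently, as described above, is straightforward; a minimal plain-Python sketch (hypothetical function name, uniform noise only):

```python
import random

def color_noise(width, height, seed):
    """Uniform ("flat") noise drawn independently for each RGB channel,
    giving the salt-and-pepper color effect described above."""
    rng = random.Random(seed)
    return [[(rng.randrange(256), rng.randrange(256), rng.randrange(256))
             for _ in range(width)]
            for _ in range(height)]

frame = color_noise(width=4, height=3, seed=7)
```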
Cheers,
Arash
Hi Arash,
The updates to MWorks’ white noise background stimulus are now in the nightly build. The only thing I haven’t done yet is implement alternative random number distributions; at the moment, the noise is always uniform. You can read about the new features in the documentation.
Also, I reworked the stimulus implementation so that the noise can be re-randomized frame by frame, on the GPU. This means that you can run fully dynamic white noise as long as you want, and you don’t have to worry about re-randomizing between trials. However, the entire noise sequence can still be reproduced by using the same seed for the random number generator.
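The reproducibility guarantee here is the standard seeded-PRNG property, illustrated below with Python’s stdlib generator rather than MWorks’ actual one:

```python
import random

def noise_sequence(seed, n_frames, n_pixels):
    """Deterministic noise: the same seed always yields the same frames."""
    rng = random.Random(seed)
    return [[rng.randrange(256) for _ in range(n_pixels)]
            for _ in range(n_frames)]

a = noise_sequence(seed=1234, n_frames=3, n_pixels=5)
b = noise_sequence(seed=1234, n_frames=3, n_pixels=5)
assert a == b  # identical seed, identical sequence
```

The same principle lets an analysis script re-create the displayed frames from the logged seed, provided every other detail of the generation matches exactly.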
I’ll send an updated example experiment to Yvonne shortly. If you run into any issues, please let me know.
Chris
Fantastic. Thanks, Chris; we will play with it and report back.
Cheers,
Arash
Hi Arash & Yvonne,
In another discussion, Yvonne wrote:
when we actually run the experiment we will need to be able to average stimuli from the same time point in different trials together in order to employ reverse correlation analysis.
Had I known this when we were deciding how to generate the white noise you need, I would have strongly recommended that you stick to using pre-generated images, rather than generating the noise dynamically inside MWorks.
Here’s the problem: While it’s entirely possible, starting with the seed value and parameters of the stimulus and display, to re-create the entire sequence of noise images outside of MWorks, there are many details that you need to get exactly right. If you mess up the computation at any point, you can end up generating noise that’s entirely different from what was actually displayed during the experiment. If I were doing this experiment myself, and I really cared about the precise pixels that were displayed on screen, I would be very, very uncomfortable with this approach.
On the other hand, while pre-rendered noise images do take time to generate and disk space to store, they also give you an exact, explicit record of what was displayed. Using the MWorks event file, you can easily determine which image was onscreen at any given time, no error-prone computation required.
If you still want to generate the noise images on the fly, I think the best approach would be to use a Python or MATLAB script to generate them between trials, based on parameters defined in the experiment. Each image would have a unique name and would be stored in an archive directory for later use. This is kind of a “best of both worlds” approach, in that
- You get full control over how the noise is generated, but
- You don’t have to pre-generate a fixed number of images before the experiment runs and can instead generate as many as you need at run time.
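A minimal between-trial generator in this spirit might look like the following. The naming scheme is hypothetical, and it writes PGM files purely so the sketch stays dependency-free; an actual script would write PNG (e.g. with Pillow or MATLAB’s imwrite):

```python
import os
import random
import tempfile

def generate_trial_images(trial, n_frames, width, height, seed, out_dir):
    """Generate one trial's noise frames between trials, each saved
    under a unique, reconstructible name."""
    rng = random.Random(seed)
    paths = []
    for frame in range(n_frames):
        pixels = bytes(rng.randrange(256) for _ in range(width * height))
        path = os.path.join(out_dir, f"trial{trial:04d}_frame{frame:03d}.pgm")
        with open(path, "wb") as f:
            # PGM "P5": trivial grayscale header followed by raw pixel bytes.
            f.write(b"P5\n%d %d\n255\n" % (width, height))
            f.write(pixels)
        paths.append(path)
    return paths

out_dir = tempfile.mkdtemp()
paths = generate_trial_images(trial=1, n_frames=3, width=64, height=48,
                              seed=20240101, out_dir=out_dir)
```

Because the filenames encode the trial and frame, the event file plus the archive directory together give an explicit record of what was shown.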
Of course, I’d be happy to provide you with an example implementation of this idea.
What do you think?
Chris
Hey Chris,
Thanks for the thoughtful message. We discussed this; the problem with using images is that we will run into memory issues, but if using seeds introduces a higher chance of human error, then it is better to go with the brute-force approach (lots of images, lots of memory usage).
If you could help us generate the images on the fly, that would help a lot with naming the files and avoiding generating extras.
Cheers,
Arash
Hi Arash,
I have an implementation of my latest proposal. It works as expected, but there are a few potential issues:
1. Generation of the white noise images is pretty slow. Using a Python script to generate and load 60 1920x1080 images (i.e. one second’s worth) takes about 15 seconds on my 2013 Mac Pro. An equivalent version using MATLAB takes about 10 seconds. Almost all of that time is spent writing the images to disk.
2. The previously-mentioned 60 images use 119 MB of disk space (in PNG format). That’s going to add up pretty fast.
If it’s not acceptable to have a 10-15 second pause between trials, then one potential solution would be to generate the images for multiple trials in batches. That would mean less frequent but longer breaks for the animal.
As for the disk usage, I guess we should ask if you really need to save all the images, or if some subset would be sufficient. For example, do you only need a record of what the animal saw during the 50ms stimulation phase? If so, we could archive just those images. Even better, we could use GPU-generated noise for the remaining 950ms of the trial, meaning issue (1) would be mitigated, too.
Let me know how you want to proceed.
Cheers,
Chris
Hey Chris,
Is there a way to produce the images as bitmaps instead of PNGs? It is definitely better to save them as bitmaps, because PNG files are substantially larger than bitmaps for white noise (I know, it’s counterintuitive!).
As for the experiment, 10+ seconds between trials is too long. Can we produce noise images for the entire day (~2,000 trials) in advance and name/save only the ones that were actually used? As for what needs to be saved, we need at least 2 seconds of noise (40 images, assuming a 50 ms duration for each noise).
Cheers,
Arash