MonkeyWorks - Movies

I’m implementing code to play movies in MW. The code is fine, but MW is getting stuck at the very beginning when loading the movie stimuli.

If I only load 3 movies, it works. As soon as I add a fourth, it stalls, and no messages are displayed in the console window. What do you think the problem might be? In general, should I be aware of any limits in MW on the number of movies or images that can be loaded?

Thanks,

Elias

Hey Elias,

I didn’t write the movie plugin, and I have no idea what its limitations are. Its “brother,” the Drifting Grating plugin (which was written around the same time), had some pretty fundamental flaws in it, so I wouldn’t be surprised if there were some wonky bits in the movie stimulus too. I don’t think either got debugged all that seriously.

At some point in the future, we’ll either pull this plugin into the main distribution, or rewrite it if it is a complete mess. In the meantime, you’ll need to go to Chris for help.

– Dave

Dave,

Thanks for the response. As far as I can tell, it’s a memory limit on how many images I can load. I wrote test code that just loads the image frames and does nothing else (no movie function).

Whether on my laptop or in the experimental setup, MW can only load 450 images at 720x480 resolution (~80 MB total). If I downsample, I can load more. Otherwise, it hangs.

I might be misdiagnosing, but I guess the workaround is to downsample until it works. Any ideas why the limit on the total amount of stimulus data that can be loaded into MW is so low (~80 MB)? Is this an internal setting that could be increased, or is it imposed by external factors (graphics card/system memory)?

Elias

Any limits on available texture memory are imposed by your graphics card and/or driver, and 80 MB could be a lot of texture memory if we’re not talking about a high-end GPU. However, by my math, 720 × 480 × 450 ≈ 155 million pixels, which is already ~155 MB assuming just one byte per pixel. Given that the textures are RGBA by default (four bytes per pixel), we’re really talking 600+ MB, which would strain even “good” GPUs. And no, GPU textures are not compressed, so it’s irrelevant how big your JPEGs are.
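
Spelled out, in case the arithmetic is useful (back-of-the-envelope only; the four bytes per texel is just the RGBA default mentioned above):

    #include <cstdio>

    int main() {
        const long width = 720, height = 480, frames = 450;
        const long pixels = width * height * frames;  // 155,520,000 pixels total

        // ~155 MB even if each pixel were a single byte (8-bit luminance)...
        std::printf("1 byte/pixel: %.1f MB\n", pixels / 1e6);

        // ...but RGBA texels are 4 bytes each, so the real cost is ~622 MB.
        std::printf("RGBA texture: %.1f MB\n", pixels * 4 / 1e6);
        return 0;
    }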

In any event, the movie stimulus class really shouldn’t be trying to load all of the frames at parse time. That would be analogous to your DVD player trying to load the entire movie into memory when you first press play; in reality, it loads only a portion and then keeps loading frames into a buffer so that they’re ready when needed. Short of implementing true buffered loads, a better design would be to at least expose a “load” action for the movie stimulus that gets a particular movie ready to go, and an “unload” action that frees up its resources. Some of this infrastructure for deferred loading of regular stimuli exists in recent versions of MW, but the emphasis has been on speeding up load time rather than on allowing huge numbers of stimuli to be loaded. A “quick” fix would be to build on this structure in the movie stimulus, while also including an “unload” action to ensure the resources get freed.
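
To sketch what I mean, something like the following. To be clear, all of the names below are made up for illustration; none of this is actual MW API:

    // Hypothetical sketch of explicit load/unload actions for a movie
    // stimulus. The point is just that resources are acquired on demand
    // rather than at experiment parse time.
    #include <cstdio>
    #include <string>
    #include <utility>
    #include <vector>

    class MovieStimulus {
    public:
        explicit MovieStimulus(std::string path) : path_(std::move(path)) {}

        // "load" action: decode the frames and upload textures now, on demand.
        void load() {
            if (loaded_) return;
            frameTextures_.assign(450, 0u);  // stand-in for per-frame texture uploads
            loaded_ = true;
            std::printf("loaded %zu frames from %s\n",
                        frameTextures_.size(), path_.c_str());
        }

        // "unload" action: release the textures so the memory can be reused.
        void unload() {
            if (!loaded_) return;
            frameTextures_.clear();          // stand-in for glDeleteTextures, etc.
            frameTextures_.shrink_to_fit();
            loaded_ = false;
            std::printf("unloaded %s\n", path_.c_str());
        }

    private:
        std::string path_;
        std::vector<unsigned int> frameTextures_;  // e.g., OpenGL texture IDs
        bool loaded_ = false;
    };

    int main() {
        MovieStimulus movie("movie1.avi");
        movie.load();    // before the trial block that needs it
        movie.unload();  // afterwards, freeing memory for the next stimulus set
    }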

– Dave

Great, that explains it. As you say, true buffering may not be easy, but I like the idea of letting the user load/unload blocks of stimuli as needed. That could be useful not only for movies but also for mwk files that contain multiple experiments (and hence multiple stimulus sets), and for blocked designs.

I should be fine downsampling for now. Thanks again for the info.

Elias

Just to fill in some details:

The memory limit that Elias is hitting is actually in Scarab. Each Scarab session has a 100 MB send buffer. For some reason, the client is trying to send the server an event that overflows this buffer. The overflow is caught in the Scarab function buffered_stream_write (in stream_tcpip_buffered.c). However, the error isn’t really handled in MonkeyWorksCore, and the client ends up waiting indefinitely for the send to complete, hence the hang that Elias experienced.

I modified the Scarab code to flush the buffer when it fills and re-ran Elias’ experiment. The server’s resident memory (rsize) rapidly grew to more than 1 GB before I killed it. At that point, the experiment looked to be no more than 20% loaded, so it seems likely that the server would eventually have run out of memory.
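
For anyone following along, here’s roughly the shape of both behaviors in illustrative C++ (this is not the actual Scarab source; the 100 MB figure is the real per-session buffer size):

    #include <cstddef>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    constexpr std::size_t kSendBufferSize = 100 * 1024 * 1024;  // 100 MB per session

    struct SendBuffer {
        std::vector<char> data = std::vector<char>(kSendBufferSize);
        std::size_t used = 0;
    };

    // Stand-in for actually writing the buffered bytes out over the socket.
    static int flush_to_socket(SendBuffer &buf) {
        std::printf("flushing %zu bytes to the server\n", buf.used);
        buf.used = 0;
        return 0;
    }

    // Original behavior: the overflow is detected (as in buffered_stream_write),
    // but the error never propagates usefully to MonkeyWorksCore, so the client
    // waits forever for a send that will never complete.
    int write_original(SendBuffer &buf, const char *event, std::size_t len) {
        if (buf.used + len > kSendBufferSize)
            return -1;  // caught here, effectively unhandled upstream -> hang
        std::memcpy(buf.data.data() + buf.used, event, len);
        buf.used += len;
        return 0;
    }

    // The modification I tried: flush when the buffer would overflow, then keep
    // buffering (assumes any single event fits in the buffer). This avoids the
    // hang but, as noted above, just moves the memory pressure to the server.
    int write_with_flush(SendBuffer &buf, const char *event, std::size_t len) {
        if (buf.used + len > kSendBufferSize && flush_to_socket(buf) != 0)
            return -1;
        std::memcpy(buf.data.data() + buf.used, event, len);
        buf.used += len;
        return 0;
    }

    int main() {
        SendBuffer buf;
        std::vector<char> event(60 * 1024 * 1024);          // a 60 MB event
        write_with_flush(buf, event.data(), event.size());
        write_with_flush(buf, event.data(), event.size());  // second one flushes
    }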

Anyway, it still seems like the right short-term solution is for Elias to scale down his movies. Long term, we should do as Dave suggests and modify the movie plugin so it doesn’t load the entire movie at once.

Chris

Ooh, that’s more serious/interesting. This is probably the first time anyone has tried to move an experiment larger than one send buffer’s worth of data.

Is the >1 GB memory footprint due to the loading of images, or could there still be something not quite right in transmitting the packaged experiment? Basically, I’m wondering whether we’re sure that flushing the buffer midway really solves the problem. Is it possible to send an experiment with, say, 101 MB worth of images (or other resources)? In 0.4.4 release candidate builds, at least, you should be able to set “deferred=1” on the stimuli to prevent them from being loaded until run time, which would separate the two issues (transmission and loading).
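
That would look something like this in the experiment XML. The deferred attribute is the relevant bit; treat the rest of the declaration as an illustrative example rather than gospel:

    <!-- deferred="1" keeps the stimulus from being loaded until run time;
         the other attributes here are just an illustrative image stimulus. -->
    <stimulus type="image_file" tag="frame_001" path="frames/frame_001.png"
              x_size="5.0" y_size="5.0" x_position="0" y_position="0"
              deferred="1"/>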

Let me know if you find anything else out,
Dave

I just re-tested Elias’ movie-loading experiments using the current nightly build, and I found that the results have changed.

I tested two experiments in /users/Common folder/forChris on dicarloserver2. The first, mwk_movie_ei_Fe09/MOVIE4.xml, uses 90x60 frames. The second, mwk_movie_ei/Movie_loader.xml, uses 720x480 frames. I ran the tests on my MacBook, which has 4GB RAM and an NVIDIA GeForce 9400M graphics card.

The experiment with 90x60 frames loaded and ran without any problems. (It may have run successfully back in February, too; I don’t remember.) The one with 720x480 frames crashed MWServer (after reporting a malloc failure) the first time I tried to load it. The second time, it loaded successfully, at which point MWServer was using 2.3 GB of resident memory.

It appears that these experiments no longer trigger the Scarab 100 MB buffer issue. A lot of code has changed since I last ran these tests (for one thing, the movie plugin has been largely gutted and now uses the new dynamic stimulus infrastructure), so I’m not sure how to account for the change. However, it does seem like the 720x480-frame experiment is now running into honest-to-god memory constraints (presumably in GPU memory, but possibly in main memory, which is shared with the GPU on my MacBook).

Elias, when you have a chance, can you download a nightly build and try out some of your movie experiments? It’d be good to know what limitations you run into with the current MWorks code.

Thanks,
Chris

There’s additional info on this problem in this discussion.