MWServer apparent memory leak: testcase attached

Hi Chris,

Running the attached testcase in MWorks nightly version 20160428 shows an apparent memory leak in MWServer. RSS grows quickly to occupy all available memory, and swapping eventually brings Mac OS to a halt, at least on the two machines we tested it on.

Thanks,
Mark

Attachment: longStimTest.xml (2.8 KB)

Hi Mark,

Thanks for the report and test case. I’ve reproduced the issue on my workstation. However, this isn’t actually a memory leak: If you stop the experiment, MWServer will reliably (if slowly) release all the memory.

Rather, the problem is that the “SetValue” state contains an unconditional transition to itself. Because of this, when the experiment reaches this state, it “spins”, constantly re-executing the state’s actions and testing its transitions without pause. In addition to causing excessive CPU usage (I see MWServer at 230% CPU in Activity Monitor), this spinning generates a deluge of #announceCurrentState events. MWorks’ event-handling machinery can’t process these events as fast as they’re created, so they pile up in the event queue. Each unhandled event consumes some memory, and as the backlog grows, so does MWServer’s memory usage.
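The arithmetic of that backlog can be sketched with a toy queue model (plain Python, not MWorks code; the per-tick production and drain rates are invented for illustration):

```python
from collections import deque

def simulate(ticks, produced_per_tick, drained_per_tick):
    """Toy model: a spinning state emits events each tick; the event
    handler can only drain a fixed number per tick. Returns the backlog."""
    queue = deque()
    for _ in range(ticks):
        for _ in range(produced_per_tick):
            queue.append("#announceCurrentState")   # producer: spinning state
        for _ in range(min(drained_per_tick, len(queue))):
            queue.popleft()                          # consumer: event handler
    return len(queue)

# Producer outpaces consumer -> backlog (and memory) grows linearly with time:
print(simulate(1_000, produced_per_tick=50, drained_per_tick=10))  # 40000
print(simulate(2_000, produced_per_tick=50, drained_per_tick=10))  # 80000
# Throttled producer -> consumer keeps up, queue stays empty:
print(simulate(2_000, produced_per_tick=5, drained_per_tick=10))   # 0
```

As long as events are created faster than they are handled, the queue (and the memory holding it) grows without bound; once the production rate drops below the drain rate, the backlog never forms.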

To resolve this problem, you need to ensure that the “SetValue” state pauses for some amount of time before restarting. The attached example experiment demonstrates this. It contains two versions of a simple “infinite loop” protocol. The first is like your test case: A state spins without pause, and it causes memory usage to grow rapidly. The second adds a 1ms pause before re-entering the state. When I run this version, MWServer’s CPU usage is reasonable (about 18%), and its memory usage is constant.
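The effect of that 1ms pause can be seen in a generic spin-versus-throttle comparison (plain Python, not MWorks code; iteration counts will vary by machine):

```python
import time

def iterations(duration_s, pause_s=0.0):
    """Count loop passes completed in duration_s, optionally pausing each pass."""
    count = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        if pause_s:
            time.sleep(pause_s)  # yields the CPU instead of burning it
        count += 1
    return count

spin = iterations(0.1)              # unthrottled: typically millions of passes
throttled = iterations(0.1, 0.001)  # 1 ms pause: at most ~100 passes
print(spin, throttled)
```

The pause caps the loop at roughly one pass per millisecond, which both frees the CPU and keeps the event rate low enough for the handler to absorb.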

If you have any questions, please let me know.

Cheers,
Chris

Attachment: infinite_loop.xml (1.77 KB)

Hi Chris,

Indeed. We use those ‘spin’ states - direct transitions back to the same state - usually when we need to monitor a variable and act on it. There may be some Turing-style proof that you can always do the same thing with a few extra states and conditional transitions, but if nothing else, spinning reduces the number of states.

I added a 1ms wait for throttling, and that fixed the problem.

When conditional logic is in transitions, how often are the conditionals checked? Every 1 or 2ms? Or can MWorks spin much faster since it doesn’t have to log the #announceState?

It might be useful to:

  • throw an error if too many #announceState events are emitted in a single trial (to avoid out-of-memory cases)
  • add a <task_system_state> parameter to allow fast spinning. Something like log_on_selfdirect=“no”. Or perhaps add a transition (e.g. “spin_direct_to_self”) that doesn’t log.

Not sure if either or both of those is a good idea.

thanks for the help,
Mark

Hi Mark,

> When conditional logic is in transitions, how often are the conditionals checked? Every 1 or 2ms? Or can MWorks spin much faster since it doesn’t have to log the #announceState?

A state’s transitions are tested every 500 microseconds. No #announceCurrentState events are generated until a transition succeeds.
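As a rough sketch of that polling behavior (plain Python with invented names, not MWorks internals), the point is that many polls can occur but only a successful transition emits an event:

```python
import itertools

POLL_PERIOD_US = 500  # per the above: transitions are tested every 500 us

def run_state(condition, max_polls=10_000):
    """Poll `condition` once per period; emit an announce event only when
    a transition succeeds. Returns (polls_performed, events_emitted)."""
    events = 0
    for polls in itertools.count(1):
        if condition(polls):   # a conditional transition is tested...
            events += 1        # ...and only a successful one is announced
            return polls, events
        if polls >= max_polls:
            return polls, events

# Condition becomes true on the 2000th poll, i.e. after ~1 s of polling,
# yet only a single event is generated:
polls, events = run_state(lambda n: n >= 2000)
print(polls * POLL_PERIOD_US / 1e6, "s of polling,", events, "event")
```

So unlike a spin state, which announces itself on every pass, conditional transitions can be checked thousands of times per second without flooding the event queue.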

> Not sure if either or both of those is a good idea.

If possible, I think the best solution would be to implement your variable checks as conditional transitions, with the target being, e.g., an error-handling state. Spinning is just a terrible waste of machine resources, and it may impact the performance of other parts of the experiment (stimulus display updates, I/O devices, etc.).
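Sketched in plain Python (the state names and threshold are made up; this is not MWorks syntax), a variable monitor built from conditional transitions rather than an unconditional self-transition looks like:

```python
def monitor(readings, threshold):
    """Return the state reached after consuming `readings` one per poll."""
    state = "Monitor"
    for value in readings:
        if state != "Monitor":
            break
        # Conditional transitions out of "Monitor": there is no unconditional
        # self-transition, so the runtime polls at its own pace, and no state
        # change (or announce event) occurs until a condition actually holds.
        if value >= threshold:
            state = "HandleError"
    return state

print(monitor([1, 2, 3], threshold=10))   # Monitor
print(monitor([1, 12, 3], threshold=10))  # HandleError
```

The state stays put while nothing fires, and only an actual transition, here to the error-handling state, produces any state-change bookkeeping.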

Chris

Sounds good, I’ll look into refactoring into multiple states.
Mark