Long saving time

Hi Chris,

If you remember, a while ago I had an issue with data not being saved. While that issue is resolved, I have a different one now. MWorks takes ages to close the data file (more than 1.5 hours at this point and counting). The file size so far is about 3.6GB, and I am using v0.9. Any idea what causes this? If it’s unavoidable, is it possible to start another instance of MWorks for my next experiment?

Cheers,
Beshoy

Hi Beshoy,

It sounds like the same root problem as before: At some point in the experiment, the rate at which events are written falls drastically below the rate at which events are generated, producing a huge backlog, and the data file can’t close until the backlog is cleared. Please see the linked comment for my suggestions on how to proceed.
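To make the failure mode concrete, here is a toy model (the numbers are illustrative only, not measurements from MWorks): whenever events are generated faster than they are written, the unwritten backlog grows linearly with session length, and draining it at file-close time can take far longer than the experiment itself.

```python
# Toy model of an event-writing backlog (illustrative numbers only,
# not measurements from MWorks).

def backlog_after(duration_s, events_per_s, writes_per_s):
    """Unwritten events queued after duration_s seconds."""
    return max(0, (events_per_s - writes_per_s) * duration_s)

def drain_time_s(backlog, writes_per_s):
    """Seconds needed to flush the backlog once the experiment stops."""
    return backlog / writes_per_s

# A 2-hour session generating 1000 events/s but writing only 100/s:
backlog = backlog_after(2 * 3600, 1000, 100)
print(backlog)                     # 6480000 queued events
print(drain_time_s(backlog, 100))  # 64800 s (18 hours) to close the file
```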

Cheers,
Chris

Hi Chris,

I checked the data read and write rates. During the experiment, the data read/sec is around 40KB and the data written/sec is about 13-25MB. Are those normal numbers?

Cheers,
Beshoy

Hi Beshoy,

During the experiment, the data read/sec is around 40KB and the data written/sec is about 13-25MB. Are those normal numbers?

Yes, those are totally reasonable I/O rates.

Can you try running the special MWorks build I provided previously? The log file it generates should provide some insight into what’s happening.

Chris

Hi Chris,

I tried the special build, but I am not sure where to find the log file to send you.

Cheers,
Beshoy

Hi Beshoy,

It should be in the same place as before: /tmp/mwserver_event_file_log.txt.

Thanks,
Chris

Hi Chris,

I attached the log file. Again, it took hours to finish saving the file.
Thanks in advance for the help.

Cheers,
Beshoy

Attachment: mwserver_event_file_log.txt.zip (4.12 MB)

Hi Beshoy,

Thanks for the log file. I’ll take a look and see if it provides any new insight into the problem.

Chris

Hi Beshoy,

The log file confirms that writing events to disk is taking an unreasonably long time (e.g. 100ms or more per 1000 events, where a reasonable duration would be <10ms). I can induce this problem on my Mac Pro by using a stress-testing tool that loads the system with disk writes.

I’ve also discovered that I can eliminate the problem, even with the stress-testing tool running, by setting SQLite’s “synchronous” flag to “OFF”. Disregarding for a moment whether this is really a good idea, it’d be interesting to know if this change resolves the issue for you, too. If you’re willing to try it, I’ve created a modified build of MWorks that you can get at

It’s identical to the current nightly build, except for the change to the synchronous flag.
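For anyone curious what the change amounts to: SQLite exposes this setting as a PRAGMA. A minimal sketch using Python’s sqlite3 module (the in-memory database here is just for illustration; MWorks sets the flag in its own code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# With synchronous=OFF, SQLite hands writes to the OS without waiting
# for them to reach stable storage -- much faster under heavy disk
# load, but a power loss or OS crash can corrupt the database.
# (FULL is the safest setting; NORMAL is the usual rollback-journal default.)
conn.execute("PRAGMA synchronous = OFF")

# Querying the pragma reports the current level as an integer:
# 0 = OFF, 1 = NORMAL, 2 = FULL, 3 = EXTRA.
(level,) = conn.execute("PRAGMA synchronous").fetchone()
print(level)  # 0
```

The trade-off is durability, not correctness: committed transactions can be lost (or the file corrupted) only if the machine loses power or the OS crashes mid-write, which is why whether this should be the default is a judgment call.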

I should note that, when I start the stress-testing tool, Activity Monitor’s data written/sec figure jumps from around 60MB to 700MB or more. Since you aren’t seeing a similarly high write volume, you may be experiencing a different issue. Still, I think this is a worthwhile test to run.

Thanks,
Chris

Hi Chris,

Thanks for the modified version. I will test it on Monday and let you know how it goes!

Cheers,
Beshoy

Hi Chris,

This worked! The file size looks appropriate, and I took a quick look at the events; things seem to be in order.

Cheers,
Beshoy

Hi Beshoy,

That’s great news! I have to think a bit more about whether this should be the default configuration going forward, although my current feeling is that it should.

Cheers,
Chris

Hi Beshoy,

I have to think a bit more about whether this should be the default configuration going forward

I thought about it and decided that this is the best default. The change is now in the nightly build and will be included in the next MWorks release.

Cheers,
Chris