Hi Beshoy,
Thanks for sending the log file. It has shed considerable light on the problem.
It appears that the data-file writing code isn’t losing track of any events: it keeps writing them to disk right up until the point that you close the data file. However, at some point in the experiment, the rate at which events are written falls drastically below the rate at which they are generated, producing an enormous backlog (e.g. 30 minutes’ worth of events or more). When you tell MWorks to close the data file, the writing code stops as soon as possible, and any events still in the backlog are never written to disk.
So there are two problems here:
- At some point, something is causing the rate at which events are written to the event file to plummet.
- MWorks isn’t writing out the backlog of unwritten events produced by problem 1 before it closes the data file.
Problem 2 is certainly fixable on my end. I’m working on that now, and I’ll get an updated build to you ASAP.
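The general shape of the fix is for the writer to keep draining the event queue until it’s empty, rather than stopping the moment it’s told to close. Here’s a rough Python sketch of that idea (illustrative only, not actual MWorks code; the class and method names are made up for the example):

```python
import queue
import threading

class EventFileWriter:
    """Toy stand-in for a data-file writer with a background writer thread."""

    def __init__(self):
        self.events = queue.Queue()    # backlog of events awaiting writing
        self.written = []              # stands in for the on-disk data file
        self.closing = threading.Event()
        self.thread = threading.Thread(target=self._run)
        self.thread.start()

    def write_event(self, event):
        self.events.put(event)

    def _run(self):
        while True:
            try:
                event = self.events.get(timeout=0.1)
            except queue.Empty:
                # Stop only once we've been asked to close AND the
                # backlog is empty -- this is the key change.
                if self.closing.is_set():
                    return
                continue
            self.written.append(event)  # the actual disk write goes here

    def close(self):
        self.closing.set()
        self.thread.join()  # wait for the backlog to finish draining

writer = EventFileWriter()
for i in range(1000):
    writer.write_event(i)
writer.close()
assert len(writer.written) == 1000  # nothing in the backlog was dropped
```

With the old behavior (stop immediately on close), any events still queued at close time would simply vanish, which matches what your data file shows.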
Problem 1 is a bit of a mystery. I can think of three explanations for why a backlog of events might be building up:
- The experiment is simply generating way more events per unit time than can be written to disk in that time.
- The CPU is overloaded, so the data-file writing code isn’t running as often as needed.
- The actual writes to disk are taking an unreasonably long time.
Looking at your data file, I see no support for explanation 1. Your experiment seems to generate a perfectly reasonable number of events per unit time.
I also find explanation 2 dubious. If the CPU were really that overloaded, you would probably be noticing it in myriad ways (e.g. long application launch times, sluggish user interfaces). Unless that has been your experience, I’m inclined to rule this out, too.
That leaves us with explanation 3. If disk I/O is taking an exceptionally long time, then I’d expect that either
- other processes are also doing a lot of disk I/O, causing a lot of contention and slowing things down for everyone, or
- there’s a physical problem with the disk.
You can get a sense of the system’s disk I/O activity by opening the Activity Monitor application and switching to the “Disk” tab. If you see huge numbers by “Data read/sec” or “Data written/sec”, then that may be the underlying issue.
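If you want a second data point beyond Activity Monitor, a small script like the one below can give a rough number for sustained write throughput. This is a generic sketch, nothing MWorks-specific, and the numbers it reports will vary with caching, hardware, and whatever else is running, so treat it as a sanity check rather than a proper benchmark:

```python
import os
import tempfile
import time

def time_disk_write(size_mb=64):
    """Write size_mb megabytes to a temp file and return the rate in MB/s."""
    data = os.urandom(1024 * 1024)  # 1 MB of random bytes
    fd, path = tempfile.mkstemp()
    try:
        start = time.monotonic()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb):
                f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the data to actually reach the disk
        elapsed = time.monotonic() - start
    finally:
        os.remove(path)
    return size_mb / elapsed

print(f"Sustained write speed: {time_disk_write():.1f} MB/s")
```

If this reports a rate far below what your drive should manage, that would point toward contention or a drive problem rather than anything in MWorks.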
On the other hand, if the issue is with the health of the disk, then I’m not sure how best to diagnose that. I suppose you could try running Disk Utility’s “First Aid” procedure on it, as that may find some problems.
As I said, I’m working on a new MWorks build for you to try. In this build, I’ll include some additional logging that will hopefully clarify why the event backlog is happening. I’ll let you know as soon as I have it ready.
Thank you for your patience and assistance in debugging this issue!
Chris