When an action that can produce an error (e.g., divide by zero) can only be reached via a conditional statement that precludes the error (e.g., one enforcing denominator > 0), the experiment still fails to load. For example, to compute behavioral performance I need to divide by the number of completed trials. At load time the number of completed trials is 0, so I have a conditional statement that only computes performance when nTrials > 20. Nevertheless, this produces an error at load time and the experiment fails to load.
This is true whether the action is reached via a conditional statement within a state or a conditional transition from a different state. (I tried to attach another file demonstrating the latter but couldn’t seem to attach more than one file.)
With some other kinds of errors under the conditional, you get an error at load time but the experiment still loads and runs. This happens if, for example, instead of dividing by zero you call disc_rand(1, counter) when counter > 20, but the load-time value of counter is 0.
This isn’t a huge deal but seems like it shouldn’t happen.
Hi Alex,
There are two issues at work here:
- At load time, all expressions are evaluated once, irrespective of control flow (sketched below). Although I wasn’t involved in that design decision, I assume the idea was to catch as many errors as possible before the experiment runs.
- Errors in expression evaluation are handled inconsistently. As you noted, some are fatal, while others are not.
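To make the first issue concrete, here’s a rough Python sketch (not MWorks source; all names are made up) of what “evaluate every expression at load time, ignoring control flow” amounts to. The guard exists in the protocol, but the load-time pass never consults it, so the guarded division still raises:

    # Rough sketch (not MWorks internals): load-time pre-evaluation that ignores control flow.

    def load_time_check(expressions, variables):
        """Evaluate every expression once using the variables' initial values."""
        for label, expr in expressions:
            # Any conditional guarding this expression in the protocol is ignored here;
            # the expression is evaluated unconditionally.
            try:
                eval(expr, {}, dict(variables))
            except ZeroDivisionError as e:
                # Under the current behavior this class of error is fatal: the load fails.
                raise RuntimeError(f"load failed while checking '{label}': {e}")

    # Initial (load-time) values: no trials have been completed yet.
    variables = {"nCorrect": 0, "nTrials": 0}

    # The protocol only runs this action when nTrials > 20,
    # but the load-time pass doesn't know that.
    expressions = [("performance", "nCorrect / nTrials")]

    load_time_check(expressions, variables)   # raises: division by zero at load time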
Eliminating load-time expression evaluation (issue 1) would be simple. We could even add a configuration file setting to turn it on or off.
Issue 2 is trickier. Certainly, it’s extremely frustrating (and potentially costly in terms of lost time) to have a running experiment abort due to a failed division or other error. I imagine this was a motivating factor in the decision to pre-evaluate expressions at load time.
However, it’s not clear that the alternative (issuing an error message but continuing execution) is really viable either. For example, when disc_rand fails, it returns 0. Is that a condition that your experiment can readily handle, or are you going to have to stop execution, correct the error in your protocol, and restart? If the latter, then the only net change vs. abort-on-failure is that the user is now responsible for doing the aborting.
My point here is not to argue for one approach or the other. I’m just laying out the issues as I see them, and I would welcome any input you have.
Cheers,
Chris
Hi Chris,
Thanks for your quick reply.
In general, I appreciate the conservative error checking of MWorks, even if that means there are some false positives. I therefore wouldn’t want to completely turn off load-time evaluation. As you said, having the error come up during a real experiment would be bad.
My concern was with failing to run sound programs, as in my example. I was initially thinking that there might be a straightforward way to anticipate the possible range of values that a variable can take during load-time evaluation. For example, in the case of “if n>10, ans = 100/n”, we know that the expression under the conditional will never be evaluated when n<=10, so there will never be a divide-by-zero error. However, this may be hard to implement in practice.
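For what it’s worth, here’s a toy Python sketch of the kind of range reasoning I had in mind (purely illustrative; I’m not suggesting this is how MWorks would actually implement it). The guard narrows the possible range of n, so a checker could see that the denominator can never be zero:

    # Toy sketch of guard-aware range checking (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Range:
        lo: float
        hi: float

    def apply_guard_greater_than(var_range, threshold):
        """Narrow a variable's range under the assumption 'var > threshold' (integer-valued)."""
        return Range(max(var_range.lo, threshold + 1), var_range.hi)

    def division_is_safe(denominator_range):
        """The division can't fail if 0 lies outside the denominator's possible range."""
        return not (denominator_range.lo <= 0 <= denominator_range.hi)

    # At load time, n could be anything from 0 upward...
    n_range = Range(0, float("inf"))

    # ...but "if n > 10: ans = 100/n" only evaluates the division when n > 10.
    guarded = apply_guard_greater_than(n_range, 10)

    print(division_is_safe(n_range))   # False: unguarded, 100/n could divide by zero
    print(division_is_safe(guarded))   # True: under the guard, n >= 11, so it's safe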
An alternative, reasonable way to deal with this kind of false-positive error would be to have load-time errors consistently behave like warnings, in that they still allow the experiment to run. That way the experimenter is alerted to the error at load time, but it doesn’t necessarily prevent sound programs, like my example, from running. This, of course, depends on how other people use MWorks error checking, so that’s just my vote.
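Concretely, I’m picturing something like the following, a variation on the sketch above (again just illustrative Python, not MWorks internals): load-time evaluation still happens and every problem is still reported, but failures are collected as warnings rather than aborting the load.

    # Sketch: report load-time evaluation failures as warnings instead of aborting.
    import warnings

    def load_time_check(expressions, variables):
        """Evaluate every expression once; turn failures into warnings, not fatal errors."""
        for label, expr in expressions:
            try:
                eval(expr, {}, dict(variables))
            except Exception as e:
                # The experimenter still sees the message at load time,
                # but a sound, guarded protocol is still allowed to run.
                warnings.warn(f"load-time check of '{label}' failed: {e}")

    variables = {"nCorrect": 0, "nTrials": 0}
    # Guarded by nTrials > 20 in the protocol, so this only fails at load time.
    expressions = [("performance", "nCorrect / nTrials")]

    load_time_check(expressions, variables)   # warns, but the "experiment" still loads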
Alex