More python bridge / plotting questions

Hi Mark,

The conduit changes are now in the MWorks nightly build. Along with the switch to ZeroMQ, they include the following:

  • Conduits can now register callbacks for all events with the aptly named register_callback_for_all_events method.
  • register_callback_for_code now actually works in all situations.
  • A new conduit type, CachingIPCClientConduit, is now available.
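To illustrate the shape of the callback API, here's a toy sketch. FakeConduit and Event are stand-ins invented purely for this example; the real conduit classes come from MWorks itself, and the exact callback signature there may differ.

```python
from collections import namedtuple

# Minimal stand-in for an MWorks event (illustration only)
Event = namedtuple('Event', ['code', 'data'])

class FakeConduit:
    """Toy model of a conduit's callback registration, not the real class."""

    def __init__(self):
        self._all_callbacks = []
        self._code_callbacks = {}

    def register_callback_for_all_events(self, callback):
        # Invoked for every incoming event, regardless of code
        self._all_callbacks.append(callback)

    def register_callback_for_code(self, code, callback):
        # Invoked only for events with the given code
        self._code_callbacks.setdefault(code, []).append(callback)

    def _deliver(self, event):
        # Simulate the arrival of one incoming event
        for cb in self._all_callbacks:
            cb(event)
        for cb in self._code_callbacks.get(event.code, []):
            cb(event)

seen_all = []
seen_code_7 = []

conduit = FakeConduit()
conduit.register_callback_for_all_events(seen_all.append)
conduit.register_callback_for_code(7, seen_code_7.append)

conduit._deliver(Event(code=7, data='spike'))
conduit._deliver(Event(code=8, data='reward'))
```

After delivery, seen_all holds both events, while seen_code_7 holds only the code-7 event.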

The caching conduit is most significant for your needs. It’s a subclass of IPCClientConduit, so it includes all the functionality of that class. In addition, it automatically receives and caches the data for all incoming events. (The caching machinery is implemented in C++ for optimal performance.) Instances of the class expose the cached event data via a mapping interface. You can retrieve values by name or code, e.g.:

my_var = None
if 'my_var' in conduit:          # membership testing works by variable name
    my_var = conduit['my_var']   # ... as does retrieval by name
my_other_var = conduit[conduit.reverse_codec['my_other_var']]  # or by code

The mapping interface is read-only. To update variable values, use send_data as before.
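Conceptually, the cached view behaves like a read-only dict keyed by either variable name or code. Here's a toy Python model of that behavior (the real machinery is in C++, and this sketch assumes the cache holds the latest value per variable):

```python
from collections.abc import Mapping

class CachedEvents(Mapping):
    """Toy model of the caching conduit's mapping interface (illustration only)."""

    def __init__(self, reverse_codec):
        self._reverse_codec = dict(reverse_codec)  # variable name -> code
        self._data = {}                            # code -> latest cached value

    def _cache(self, code, value):
        # The real conduit updates the cache internally as events arrive
        self._data[code] = value

    def __getitem__(self, key):
        # Accept either a variable name or a numeric code
        code = self._reverse_codec.get(key, key)
        return self._data[code]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

# Because CachedEvents subclasses Mapping but not MutableMapping, there is
# no __setitem__: reads work, writes don't, matching the read-only interface.
cache = CachedEvents({'my_var': 3})
cache._cache(3, 1.5)
```

With this model, 'my_var' in cache is True, and cache['my_var'] and cache[3] both return 1.5.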

Regarding performance of the ZeroMQ-based IPC mechanism: I implemented a test that defines 1000 variables and performs 1000 assignments to each as rapidly as possible (i.e. without waiting between assignments). Both the base IPCClientConduit and the new CachingIPCClientConduit successfully receive all 1,000,000 events. While they can’t process events as fast as MWServer sends them, they don’t drop any events, run out of memory, or otherwise fail in any way. The shared-memory IPC can’t even send the codec in this case, much less keep up with the onslaught of events. With the new IPC mechanism, you should no longer need any of the workarounds meant to avoid the limits of the shared-memory scheme. In particular, you shouldn’t need to pause between registrations when registering a large number of callbacks.
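The shape of that stress test can be mimicked with a pure-Python stand-in. This toy loop (no MWorks involved) also shows why memory stays bounded if, as the mapping interface suggests, the cache retains only the most recent value per variable:

```python
# Toy flood: 1000 variables x 1000 assignments each, mirroring the
# structure of the stress test described above (illustration only).
NUM_VARS = 1000
NUM_ASSIGNMENTS = 1000

cache = {}            # code -> latest value (assumed caching policy)
events_received = 0

for value in range(NUM_ASSIGNMENTS):
    for code in range(NUM_VARS):
        cache[code] = value     # newer assignments overwrite older ones
        events_received += 1
```

All 1,000,000 events are handled, yet the cache never grows beyond 1000 entries: memory scales with the variable count, not the event count.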

Please test these changes when you have a chance. If you run into any issues or have any questions or suggestions for further improvements, please let me know!

Cheers,
Chris