Hi Chris,
I’m setting up plotting in Python via the client bridge, as I alluded to in my last question.
I currently have a little Python infrastructure code for client bridge scripts to register a callback on every variable and then keep track of their current values. This seems to work ok.
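For reference, the gist of that infrastructure is something like this (a simplified sketch, not my exact code; it assumes the conduit’s reverse_codec has been populated and that callbacks receive event objects with a data attribute):

from mworks.conduit import IPCClientConduit  # import path may differ in your setup

# Sketch only: the resource name below is just a placeholder.
conduit = IPCClientConduit('python_bridge_conduit')
conduit.initialize()

current_values = {}  # variable name -> most recent value

def make_callback(name):
    def callback(event):
        current_values[name] = event.data
    return callback

# Register one callback per variable, mapping names to event codes via the
# reverse codec.  (In practice, the codec has to have arrived before this
# loop runs.)
for name, code in conduit.reverse_codec.items():
    conduit.register_callback_for_code(code, make_callback(name))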
I vaguely remember you and I discussed having a client-bridge-side function to just return the current value of each variable.
I assume that would have been implemented via C-level processing of the event stream to do what I do in Python.
But I checked through the Python IPCConduit code and didn’t see anything like this.
Question: should I just use my existing code, or is there a faster way to do it now? If such a function doesn’t exist already, I’m happy to add plotting on top of what I’ve got and hope that processing time won’t be limiting.
Mark
Hi Mark,
> I vaguely remember you and I discussed having a client-bridge-side function to just return the current value of each variable.
That’s been on my to-do list for a long time, but I haven’t implemented it yet. Until I do, you’ll need to stick to your current approach.
Cheers,
Chris
Hi Chris,
Not a big deal. My Python version should be fine. I’ll probably do some profiling in the next few weeks and I’ll share what I find, too.
Mark
Hi Mark,
I just wanted to let you know that I’ve been working on adding variable caching to the conduits and hope to finish up this week.
It turned into a slightly larger project than I initially planned, but the changes will hopefully make for better and more reliable conduits. Notably, I changed the interprocess communication mechanism from a shared-memory scheme to one using ZeroMQ with Unix domain socket endpoints. The shared-memory business has been a steady source of reliability and performance problems, and I suspect it’s also to blame for this issue that you reported recently:
> What I do: load the experiment, load the Python client bridge. Then, start streaming. What happens: a bunch of overflow errors on the console and the Python bridge becomes unresponsive. I found that if I comment out all the callback registrations that I use to track the values of all the variables, the error doesn’t happen.
In my testing, the ZeroMQ-based IPC has been very robust and reliable, and I’m hopeful that you won’t see this kind of problem once the new mechanism is available.
I’ll let you know as soon as this stuff is done and ready for you to test. Thank you for your patience!
Cheers,
Chris
Sounds great! (I’m cc’ing some people from the lab who are hitting this issue.)
This is good news. One comment: we’re probably the users pushing the bandwidth of the conduit event streams the hardest, so it’s probably worth verifying that the ZeroMQ transport can handle the same event flow as the old shared-memory scheme. But I assume you’re already on this.
thanks,
Mark
Hi Mark,
The conduit changes are now in the MWorks nightly build. Along with the switch to ZeroMQ, they include the following:
- Conduits can now register callbacks for all events with the aptly named register_callback_for_all_events method (quick example below).
- register_callback_for_code now actually works in all situations.
- A new conduit type, CachingIPCClientConduit, is now available.
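For example, register_callback_for_all_events can be used like this (a quick sketch; it assumes an existing, initialized conduit and that the callback receives the same event objects as the per-code callbacks):

# Sketch: log every incoming event via the new all-events callback.
def log_event(event):
    print(event.code, event.data)

conduit.register_callback_for_all_events(log_event)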
The caching conduit is the most significant of these for your needs. It’s a subclass of IPCClientConduit, so it includes all the functionality of that class. In addition, it automatically receives and caches the data for all incoming events. (The caching machinery is implemented in C++ for optimal performance.) Instances of the class expose the cached event data via a mapping interface. You can retrieve values by name or by code, e.g.
my_var = None
if 'my_var' in conduit:
    my_var = conduit['my_var']

my_other_var = conduit[conduit.reverse_codec['my_other_var']]
The mapping interface is read-only. To update variable values, use send_data as before.
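For example (sketch; this assumes send_data takes an event code and a value, with the code looked up via the conduit’s reverse codec):

# Sketch: update a variable by sending its new value with its event code.
conduit.send_data(conduit.reverse_codec['my_var'], 42)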
Regarding performance of the ZeroMQ-based IPC mechanism: I implemented a test that defines 1000 variables and performs 1000 assignments to each as rapidly as possible (i.e. without waiting between assignments). Both the base IPCClientConduit and the new CachingIPCClientConduit successfully receive all 1,000,000 events without issue. While they don’t keep pace with MWServer, they don’t drop any events, run out of memory, or otherwise fail in any way. The shared-memory IPC can’t even send the codec in this case, much less keep up with the onslaught of events. With the new IPC mechanism, you should no longer need any of the workarounds meant to avoid the limits of the shared-memory scheme. In particular, you shouldn’t need to pause between registrations when registering a large number of callbacks.
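If you want to reproduce the receiving side of that check yourself, something along these lines should do it (sketch only; it assumes an initialized conduit and simply counts events against the 1000 × 1000 total described above):

import time

# Sketch: count every event the conduit delivers and compare against the
# expected total from the stress test (1000 variables x 1000 assignments,
# plus a handful of codec/system events).
received_count = [0]

def count_event(event):
    received_count[0] += 1

conduit.register_callback_for_all_events(count_event)

# ... run the test experiment, then give the event stream time to drain ...
time.sleep(10)
print('received %d of %d events' % (received_count[0], 1000 * 1000))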
Please test these changes when you have a chance. If you run into any issues or have any questions or suggestions for further improvements, please let me know!
Cheers,
Chris
Thank you for this! It will help us a great deal and lifts a barrier that’s prevented us from doing serious plotting in Python. I will test in the next few weeks.