Attachment: python_image.zip (1.79 KB)
Hi Setayesh,
I guess you just sent it to me.
It just looks that way. Everyone on the discussion (currently six people, including me) gets a copy of each post emailed to them individually.
Cheers,
Chris
Hi Chris,
Thank you for that new image-passing approach! I just tried it, hooked it into our Python environment, and benchmarked the time: it’s very fast (<1ms), so that’s great.
However, there’s one issue: it does not seem to handle the alpha channel. It renders everything as fully opaque (alpha=255), even when I double-check that the PIL renderer is actually writing the alpha channel. Is that an issue on the MWorks side?
Thanks,
Nick
Hi Nick,
However, there’s one issue: it does not seem to handle the alpha channel. It renders everything as fully opaque (alpha=255), even when I double-check that the PIL renderer is actually writing the alpha channel. Is that an issue on the MWorks side?
That’s not what I observe. If I change my example by replacing the line
draw.ellipse([x0, y0, x1, y1], fill='red')
with
draw.ellipse([x0, y0, x1, y1], fill=(255,0,0,50))
then I get a mostly-transparent red circle.
How are you specifying values for the alpha channel?
Chris
Hi Chris,
Hmm, I am not observing a mostly-transparent red circle with your example. I’m instead observing a lighter red circle that is still fully opaque: if you put an object behind it, that object is fully occluded. For example, with this code the red circle appears fully opaque:
draw.ellipse([x0 - size/2, y0, x1 - size/2, y1], fill=(0,255,0,255))
draw.ellipse([x0, y0, x1, y1], fill=(255,0,0,50))
After a little poking around, this seems to originate in PIL, not MWorks as I originally suspected (sorry about that!). For example, this code:
import numpy as np
from PIL import Image, ImageDraw
import matplotlib.pyplot as plt

# 128 x 128 pixels x 4 bytes per pixel (RGBA)
scene_buffer = np.zeros((128 * 4, 128), np.uint8)
img = Image.frombuffer('RGBA', (128, 128), scene_buffer, 'raw', 'RGBA', 0, 1)
draw = ImageDraw.Draw(img, 'RGBA')
draw.ellipse([16, 16, 80, 80], fill=(0,255,0,255))
draw.ellipse([48, 48, 112, 112], fill=(255,0,0,50))
plt.imshow(np.array(img)); plt.show()
produces this undesirable image, without transparency:
[inline image: image001.png]
Whereas this code:
scene_buffer = np.zeros((128 * 4, 128), np.uint8)
img = Image.frombuffer('RGB', (128, 128), scene_buffer, 'raw', 'RGB', 0, 1) # Note RGB here, not RGBA
draw = ImageDraw.Draw(img, 'RGBA')
draw.ellipse([16, 16, 80, 80], fill=(0,255,0,255))
draw.ellipse([48, 48, 112, 112], fill=(255,0,0,50))
plt.imshow(np.array(img)); plt.show()
produces this desirable image, with transparency:
[inline image: image002.png]
The only difference is that the image is defined as RGBA in the first and just RGB in the second. This phenomenon is described in the top answer here: https://stackoverflow.com/questions/359706/how-do-you-draw-transparent-polygons-with-python.
I don’t know why PIL does this (it’s very non-intuitive behavior), but in an effort to get it working with MWorks, I changed the buffer to RGB and got the error ‘ERROR: Pixel buffer object contains invalid data’, so I’m guessing MWorks expects an alpha channel.
What do you think is a good way to resolve this? Would it be possible to make MWorks accept a 3-channel image buffer, since that seems to be the only way to get PIL to render with transparency?
Thanks,
Nick
Attachments:
- image001.png (9.3 KB)
- image002.png (9.36 KB)
Hi Nick,
The only difference is that the image is defined as RGBA in the first and just RGB in the second. This phenomenon is described in the top answer here
That’s a pretty strange way to handle alpha compositing. Unfortunately, it doesn’t look like there’s a way to get more sensible behavior out of PIL.
Would it be possible to make MWorks accept a 3-channel image buffer, since that seems to be the only way to get PIL to render with transparency?
Sure. Like I mentioned, I can add a parameter that lets you select the pixel format, and RGB can be one of the options. The only potential issue with that is that you won’t be able to blend the image you generate with other stimuli beneath it (since its alpha will be one everywhere), but maybe that isn’t something you need to do.
I’ll try to make this change in the next day or two.
Chris
Hi Chris,
Yeah, it’s strange behavior from PIL.
Adding a parameter that lets us select RGB pixel format would be great, thanks! And yes, blending in MWorks is not something we’ll need — we’re going to be generating the final display in python, so no need to worry about that.
Thank you,
Nick
Hi Chris,
I was setting up at the rig computer today and ran into the following import error:
ERROR: Python execution failed: Traceback (most recent call last):
  File "/var/folders/9z/vbsgw8712cjgzsk922s64zxh0000gn/T/MWorks/Experiment Cache/_Users_psychophysics1_rc_mworks_files_rendered_images.mwel/tmp/render_scene.py", line 13, in <module>
    import matplotlib
  File "/Users/psychophysics1/rc/venv_min/lib/python3.7/site-packages/matplotlib/__init__.py", line 174, in <module>
    _check_versions()
  File "/Users/psychophysics1/rc/venv_min/lib/python3.7/site-packages/matplotlib/__init__.py", line 159, in _check_versions
    from . import ft2font
ImportError: cannot import name 'ft2font' from partially initialized module 'matplotlib' (most likely due to a circular import) (/Users/psychophysics1/rc/venv_min/lib/python3.7/site-packages/matplotlib/__init__.py)
It was strange because if I didn’t add the path ‘/Users/psychophysics1/rc/venv_min/lib/python3.7/site-packages’ to sys.path, I would just get ‘matplotlib’ not found.
The only other item in sys.path is “/Library/Frameworks/MWorksPython.framework/Resources/python.zip”.
Do you have suggestions on how to fix/avoid this error?
Thanks,
Ruidong
Hi Chris,
Please ignore the previous message, as I just had to upgrade to Python 3.8.
I have a new question: does queue_stimulus() require a newer macOS version? On the rig computer, which runs macOS 10.13.4, I got the following error:
ERROR: Failed to create object.
Extended information:
reason: Metal is not supported on this system
location: line 30, column 5
object_type: action/queue_stimulus
parent_scope: Rendered Images Demo
ref_id: idm34633076219600
parser_context: mw_anonymous_create
Adding a parameter that lets us select RGB pixel format would be great, thanks!
This is done. You’ll need this new MWorks build. See the attached example.
Unfortunately, using a persistent NumPy array as a buffer doesn’t work for RGB-format images, so you’ll have to use tobytes instead. This isn’t as efficient, but hopefully it’s good enough.
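Roughly, the idea looks like this (just a sketch to show the pattern; the sizes and shapes here are placeholders, and the attached example has the real code):
from PIL import Image, ImageDraw

def render():
    # Draw into an RGB image; as discussed earlier, PIL only
    # alpha-blends draw operations when the target image itself
    # has no alpha channel
    img = Image.new('RGB', (128, 128))
    draw = ImageDraw.Draw(img, 'RGBA')
    draw.ellipse([16, 16, 80, 80], fill=(0,255,0,255))
    draw.ellipse([48, 48, 112, 112], fill=(255,0,0,50))
    # tobytes copies the pixel data out on every call, which is
    # the extra cost relative to a persistent buffer
    return img.tobytes()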
Chris
Attachment: python_image_2.zip (1.72 KB)
Hi Chris,
Thank you! It works and is very fast (<1ms per step). The tobytes() method takes almost no time at all, so I’m guessing it’s not allocating new memory. I can also pre-allocate the image and draw objects outside of render(), which saves a bit of time as well.
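The preallocation looks roughly like this (a sketch, not our actual task code):
from PIL import Image, ImageDraw

# Created once, at module level, instead of once per render() call
img = Image.new('RGB', (128, 128))
draw = ImageDraw.Draw(img, 'RGBA')

def render():
    # Overwrite the previous frame in place, then draw the new one
    draw.rectangle([0, 0, 128, 128], fill=(0,0,0,255))
    draw.ellipse([48, 48, 112, 112], fill=(255,0,0,50))
    return img.tobytes()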
Anyway, we’re well within the speed we need to be, so all set on the rendering front. I’ve hooked it up to our python environment and it’s working on our most complex tasks without dropping any frames.
Thanks,
Nick
Hi Nick,
I’ve attached a new example that integrates your dummy task code into an MWorks experiment. To simulate joystick movement, you use the keyboard’s arrow keys. To simulate eye movements, you move the mouse cursor around the stimulus display window (the cursor position corresponds to the gaze location).
I’ve included a 60-second timeout. If the subject doesn’t successfully complete the task in that period, the trial ends, and the protocol advances to the next trial. I’ve also included a one-second interval between trials. Obviously, you can change any of this to suit your needs.
In addition to managing the dummy-task code, the Python code also handles recording of eye position and joystick state. These values are recorded as soon as MWorks acquires them, independent of when the image-rendering code is called, using event callbacks. For more info on these, see the docs.
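As a rough illustration, the callback pattern is something like this (a sketch: eye_h is a stand-in variable name, and register_event_callback and the event object’s attributes are as described in the docs):
# register_event_callback is provided by MWorks at run time
eye_h_samples = []

def record_eye_h(event):
    # Called by MWorks as soon as the variable is assigned,
    # independent of when the image-rendering code runs
    eye_h_samples.append((event.time, event.data))

register_event_callback('eye_h', record_eye_h)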
Hopefully the example is pretty straightforward. If you have any questions, please let me know.
Cheers,
Chris
Attachment: game_demo.zip (181 KB)
Hi Chris,
Thank you very much for this! I’ll test it out when I have a chance (likely in the second half of next week), and will get back to you if I have questions.
Best,
Nick
Hi Nick,
FYI, the Python image stimulus is now included in the MWorks nightly build, so you no longer need to use the custom builds I provided previously.
Cheers,
Chris
Hi Chris,
Thanks so much for the example! I was able to run it on our rig computer, and I have a question related to it:
I am trying to set up a multiplayer game where separate keyboards control two avatars on the same screen. When I first tried to start the demo you sent, I got the ‘multiple matching HID devices’ error pasted below, and got past it by specifying preferred_location_id as prompted.
My question is: if I want to allow input from two keyboards, is there a way to read key presses from both?
ERROR: Found multiple matching HID devices for “joystick”:
Device #1
Product: Apple Internal Keyboard / Trackpad
Manufacturer: Apple Inc.
Location ID: 2152726528 (0x80500000)
Device #2
Product: Magic Keyboard with Numeric Keypad
Manufacturer: Apple
Location ID: 343706896 (0x147c8d10)
Device #3
Please set the “preferred_location_id” attribute to the Location ID of the desired device.
Thanks,
Ruidong
Hi Chris,
Firstly, thanks for the demo! The dynamic event buffer for joystick/eye position is great, and latency is well within the margin we need to avoid dropping frames.
I’m now going to hook it into my real tasks instead of the dummy task, but that should be smooth sailing since they have the same interface.
I do have a couple of general workflow-related questions:
- Currently, print statements from the Python code are not shown in the MWorks console. This, combined with the inability to set breakpoints in the Python code, can make debugging a bit difficult. Is there any way to pipe Python stdout to the MWorks console? That would make debugging much easier than logging to a separate text file, which is what I’m currently doing.
- Changes to the Python code don’t propagate to the MWorks run unless I restart the MWorks server application. Is there a way to reload the Python code when an experiment is loaded, so I don’t have to restart the MWServer application?
Thanks,
Nick
Hi Nick,
Currently, print statements from the Python code are not shown in the MWorks console. This, combined with the inability to set breakpoints in the Python code, can make debugging a bit difficult. Is there any way to pipe Python stdout to the MWorks console?
The functions available to your Python code include message, which converts its value to a string and then writes it to the MWorks console and event stream (just like report does in an experiment). You could use this along with redirect_stdout to send your debugging messages to the console.
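For example (a sketch; ConsoleWriter is just an illustrative name, and message is the MWorks-provided function mentioned above):
import contextlib
import io

class ConsoleWriter(io.TextIOBase):
    # Minimal file-like object that forwards printed lines to the
    # MWorks console via message
    def write(self, text):
        stripped = text.rstrip('\n')
        if stripped:
            message(stripped)
        return len(text)

def render():
    with contextlib.redirect_stdout(ConsoleWriter()):
        print('debug output')  # now appears in the MWorks console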
Changes to the Python code don’t propagate to the MWorks run unless I restart the MWorks server application. Is there a way to reload the Python code when an experiment is loaded, so I don’t have to restart the MWServer application?
The top-level Python code (e.g. the code loaded via python_file) is reloaded whenever the experiment is loaded. The issue is that the Python library is not re-initialized when the experiment is reloaded, so any modules imported by the top-level code are not reloaded.
You can work around this by explicitly reloading imported modules in your top-level code with importlib.reload. For example, in game_demo.py, you could import the Task class like this:
import importlib
try:
    importlib.reload(dummy_task.task)
except NameError:
    import dummy_task.task
from dummy_task.task import Task
This can get awkward, especially if there are many modules that need to be reloaded, but it’s probably the best workaround available to you.
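If there end up being many such modules, the same idea can be written as a loop over sys.modules (an untested sketch; note that reload order can matter when the modules import names from each other):
import importlib
import sys

# Reload every previously imported module in the dummy_task package
for name in list(sys.modules):
    if name.startswith('dummy_task'):
        importlib.reload(sys.modules[name])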
Cheers,
Chris