I have a psychopy script for a psychophysical experiment in which participants see different shapes and have to react in a specific way. I would like to take physiological measurements (ECG) on a different computer and add event markers to those measurements. So whenever the participant is shown a stimulus, I would like this to show up on the ECG.
Basically, I would like to add commands for parallel port i/o.
I don't even know where to start, to be honest. I have read quite a bit about this, but I'm still lost.
Here is probably the shortest fully working code to show a shape and send a trigger:
from psychopy import visual, parallel, core

parallel.setPortAddress(0x0378)  # Address of the parallel port; 0x0378 is common, but check your system

win = visual.Window()
shape = visual.ShapeStim(win)

# Show it
shape.draw()
win.flip()

# Immediately send trigger
parallel.setData(10)  # Trigger code, here 10
core.wait(0.020)  # Pulse duration in seconds (enough for the hardware to send and the receiver to register it)
parallel.setData(0)  # Stop sending trigger
You could, of course, extend this by putting the stimulus presentation and trigger in a loop (running trials), and do various other things alongside it, e.g. collecting responses and saving data. This is just a minimal example of stimulus presentation and sending a trigger.
It is very much on purpose that the trigger code is located immediately after flipping the window. On most hardware, sending a trigger is very fast (the voltage on the port changes within 1 ms of the line being run in the script), whereas the monitor only updates its image about every 16.7 ms (at 60 Hz), and win.flip() waits for the latter. I've made some examples of good and bad practices for timing triggers here.
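Building on the snippet above, a loop version could look like this (the trigger codes and inter-trial interval are arbitrary examples):

# A sketch of running several trials, each with its own trigger code
for trigger_code in [10, 11, 12]:
    shape.draw()
    win.flip()                      # wait for the actual screen update...
    parallel.setData(trigger_code)  # ...then send the trigger immediately
    core.wait(0.020)
    parallel.setData(0)
    core.wait(1.0)                  # inter-trial interval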
For parallel port I/O, you should use pyparallel. Make continuous ECG measurements, store those, and store the timestamp and whatever metadata you want per stimulus. Pseudocode to get you started:
import time

while True:
    store_ecg()          # placeholder: read and store the latest ECG sample
    if time_to_show():
        stimulus()       # placeholder: present the stimulus and log its timestamp
    time.sleep(0.1)      # 10 Hz
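For the trigger itself, pyparallel's interface is minimal; a sketch, assuming the default port (/dev/parport0 on Linux):

import parallel  # pyparallel

port = parallel.Parallel()  # opens the default port, /dev/parport0 on Linux
port.setData(10)            # set the trigger code on the data pins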
Actual problem:
I have a controller node that subscribes to 2 topics and publishes to 1 topic. Although everything seems to work as expected in simulation, the performance degrades on the actual hardware. I suspect the problem is that one of the two input topics lags behind the other by a significant amount of time.
Question:
I want to re-create this behavior in simulation in order to test the robustness of the controller. Therefore, I need to delay one of the topics by a certain amount of time - ideally this should be a configurable parameter. I could write a node that has a FIFO memory buffer and adjusts the delay time by monitoring the frequency of the topic. Before I do that, is there a command-line tool or any other quick-to-implement method that I can use?
P.s. I'm using Ubuntu 16.04 and ROS-Kinetic.
I do not know of any out-of-the-box solution that does exactly what you describe here.
For a quick hack, if your topic does not have a timestamp and the node just takes in the messages as they arrive, the easiest thing would be to record a bag and play the two topics back from two different instances of rosbag play. Something like this:
First terminal:
rosbag play mybag.bag --clock --topics /my/topic
Second terminal, started some amount of time later:
rosbag play mybag.bag --topics /my/other_topic
Whether you need the --clock flag depends mostly on what you mean by simulation. If you want to control the time difference more precisely than by pressing Enter in two different terminals, you could write a small bash script to launch them.
Another option that would still involve bags, but would give you more control over the exact time the message is delayed by, would be to edit the bag so that the messages are already stored with the correct delay. This could be done relatively easily by modifying the first example in the rosbag cookbook:
import rosbag

with rosbag.Bag('output.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('input.bag').read_messages():
        # This also replaces tf timestamps under the assumption
        # that all transforms in the message share the same timestamp
        if topic == "/tf" and msg.transforms:
            outbag.write(topic, msg, msg.transforms[0].header.stamp)
        else:
            outbag.write(topic, msg, msg.header.stamp if msg._has_header else t)
Replacing the if else with:
...
import rospy
...
        if topic == "/my/other_topic":
            outbag.write(topic, msg, t + rospy.Duration.from_sec(0.5))
        else:
            outbag.write(topic, msg, t)
Should get you most of the way there.
Other than that, if you think the node would be useful in the future, or you want it to work on live data as well, then you would need to implement the node you described with some queue. One thing you could look at for inspiration is the topic tools (see the topic tools source on git).
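For reference, a minimal sketch of such a delay node in rospy; the topic names, message type, and default delay are placeholders for your actual setup:

#!/usr/bin/env python
# Delay-relay node: buffers incoming messages and republishes them
# after a configurable delay (~delay parameter, in seconds).
import collections

import rospy
from std_msgs.msg import String  # replace with your actual message type


class DelayRelay(object):
    def __init__(self):
        self.delay = rospy.Duration(rospy.get_param('~delay', 0.5))
        self.buffer = collections.deque()
        self.pub = rospy.Publisher('/my/other_topic_delayed', String, queue_size=10)
        rospy.Subscriber('/my/other_topic', String, self.callback)
        # Check the buffer frequently; publish anything older than the delay
        rospy.Timer(rospy.Duration(0.01), self.flush)

    def callback(self, msg):
        self.buffer.append((rospy.Time.now(), msg))

    def flush(self, event):
        now = rospy.Time.now()
        while self.buffer and now - self.buffer[0][0] >= self.delay:
            _, msg = self.buffer.popleft()
            self.pub.publish(msg)


if __name__ == '__main__':
    rospy.init_node('delay_relay')
    DelayRelay()
    rospy.spin()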
I'm trying to get a looping call to run every 2 seconds. Sometimes I get the desired functionality, but other times I have to wait up to ~30 seconds, which is unacceptable for my application's purposes.
I reviewed this SO post and found that looping call might not be reliable for this by default. Is there a way to fix this?
My usage/reason for needing a consistent ~2 seconds:
The function I am calling scans an image (using CV2) for a dollar value, and if it finds that amount it sends a websocket message to my point-of-sale client. I can't have customers waiting 30 seconds for the POS terminal to ask them to pay.
My source code is very long and not well commented as of yet, so here is a short example of what I'm doing:
from twisted.internet import reactor
from twisted.internet.task import LoopingCall

# Scan the image for sales every 2 seconds
def scanForSale():
    print("Now Scanning for sale requests")

# Retrieve a new image every 2 seconds
def getImagePreview():
    print("Loading Image From Capture Card")

lc = LoopingCall(scanForSale)
lc.start(2)

lc2 = LoopingCall(getImagePreview)
lc2.start(2)

reactor.run()
I'm using a Raspberry Pi 3 for this application, which is why I suspect it hangs for so long. Can I utilize multithreading to fix this issue?
Raspberry Pi is not a real time computing platform. Python is not a real time computing language. Twisted is not a real time computing library.
Any one of these by itself is enough to eliminate the possibility of a guarantee that you can run anything once every two seconds. You can probably get close but just how close depends on many things.
The program you included in your question doesn't actually do much. If this program can't reliably print each of the two messages once every two seconds then presumably you've overloaded your Raspberry Pi - a Linux-based system with multitasking capabilities. You need to scale back your usage of its resources until there are enough available to satisfy the needs of this (or whatever) program.
It's not clear whether multithreading will help, though I doubt it. It's not clear because you've only included an over-simplified version of your program; I would have to make a lot of wild guesses about what your real program does in order to suggest any improvements.
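That said, if the stall comes from blocking work (like the CV2 scan) hogging the reactor thread, the usual Twisted pattern is to push it into a thread pool with deferToThread; a sketch, with the scan body as a stand-in:

from twisted.internet import reactor
from twisted.internet.task import LoopingCall
from twisted.internet.threads import deferToThread

def scanForSale():
    # Blocking CV2 work would go here; it now runs in a worker
    # thread instead of blocking the reactor.
    print("Now Scanning for sale requests")

def scanInThread():
    # LoopingCall waits for the returned Deferred before scheduling
    # the next call, so scans never overlap.
    return deferToThread(scanForSale)

LoopingCall(scanInThread).start(2)
reactor.run()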
I am working on a project using a Raspberry Pi 3 B where I get data from an IR sensor (Sharp GP2Y0A21YK0F) through an MCP3008 ADC and display it in real time using the pyqtgraph library.
However, it seems that I am getting very few samples, and the graph is not as "smooth" as I expect.
I am using the Adafruit Python MCP3008 Library and the function mcp.read_adc(0) to get the data.
Is there a way to measure the sample rate in Python?
I would suggest setting up some next-level buffering, ideally via multiprocessing (see multiprocessing and GUI updating - Qprocess or multiprocessing?), to get a better handle on how fast you can access the data. Currently you're using a QTimer to poll, which only gets 3 raw reads every 50 ms, so you're REALLY limiting yourself artificially via the timer.

I haven't used the MCP3008, but a quick look at their code suggests you'll have to set up some sample testing to try things out, or investigate further for better documentation. The key question is the behavior of the mcp.read_adc(0) method: is it blocking or non-blocking? And if non-blocking, does it return stale data when there is no new data, etc.? It would be ideal if it were blocking: from a timing sense, you could just call it in a loop and time the delta between successive returns to determine how fast you're able to get new samples. If it's non-blocking, you would want it to return null for no new samples, and only return actual samples that are new. You'll have to play around with it and see how it behaves.

At any rate, once you get the secondary thread set up to just poll mcp.read_adc(0), you can use the update() timer to collect the latest buffer and plot it. I also don't know the implications of multithreading / multiprocessing on the RaspPI (see the general discussion here: Multiprocessing vs Threading Python), but anything should be better than the QTimer polling.
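To answer the sample-rate question directly: a rough sketch that times successive read_adc calls. The software-SPI pin numbers are placeholders borrowed from the library's examples; match them to your wiring, or use the hardware-SPI constructor instead:

import time
import Adafruit_MCP3008

# Software SPI pins are placeholders; match them to your wiring.
mcp = Adafruit_MCP3008.MCP3008(clk=18, cs=25, miso=23, mosi=24)

N = 1000
start = time.time()
for _ in range(N):
    mcp.read_adc(0)
elapsed = time.time() - start
print("%d reads in %.2f s -> %.1f samples/sec" % (N, elapsed, N / elapsed))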
I apologize in advance for this being a bit vague, but I'm trying to figure out what the best way is to write my program from a high-level perspective. Here's an overview of what I'm trying to accomplish:
RasPi takes input from an altitude sensor on the serial port at 115200 baud.
Does some hex -> dec math and updates state variables (pitch, roll, heading, etc)
Uses pygame library to do some image manipulation based on the state variables on a simulated heads up display
Outputs the image to a projector at 30 fps.
Note that there's no user input (for now).
The issue I'm running into is the framerate. The framerate MUST be constant. I'd rather skip a data packet than drop a frame.
There are two ways I could see structuring this:
1. Write one function that, when called, grabs data from the serial bus and returns the state variables. Then write a pygame loop that calls this function from inside it. My concern is that if the serial port starts being read at the end of an attitude message, the function will have to pause and wait for the next message to start (fractions of a second, but that could result in a dropped frame).
2. Write two separate modules, both running simultaneously. One continuously reads data from the serial port and updates the state variables as fast as possible. The other just does the image manipulation, grabbing the latest state variables when it needs them. However, I'm not actually sure how to write a multithreaded program like this, and I don't know how well the RasPi will handle such a program.
I don't think the RasPi would do that well running multithreaded programs. Try the first method, though it would be interesting to see the results of a multithreaded version.
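For what it's worth, a minimal sketch of the second approach: a reader thread keeps the state fresh while the pygame loop renders at a fixed rate. The port name, parse(), and draw_hud() are hypothetical placeholders, not working code for your protocol:

import threading

import pygame
import serial  # pyserial

state = {'pitch': 0.0, 'roll': 0.0, 'heading': 0.0}
lock = threading.Lock()

def reader():
    # Port name, baud rate, and parse() are placeholders for the real protocol.
    port = serial.Serial('/dev/ttyAMA0', 115200)
    while True:
        line = port.readline()
        pitch, roll, heading = parse(line)  # hypothetical hex -> dec parser
        with lock:
            state.update(pitch=pitch, roll=roll, heading=heading)

t = threading.Thread(target=reader)
t.daemon = True  # don't block interpreter exit
t.start()

clock = pygame.time.Clock()
while True:
    with lock:
        snapshot = dict(state)  # grab the latest values, never block on serial
    draw_hud(snapshot)          # hypothetical pygame rendering
    clock.tick(30)              # hold a constant 30 fps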
I have a long-running python process running headless on a raspberrypi (controlling a garden) like so:
from time import sleep

def run_garden():
    while True:
        # ... do work ...
        sleep(60)

if __name__ == "__main__":
    run_garden()
The 60-second sleep period is plenty of time for any changes happening in my garden (humidity, air temp, turn on pump, turn off fan, etc.), BUT what if I want to manually override these things?
Currently, in my "do work" loop, I first call out to another server where I keep config variables, and I can update those config variables via a web console, but it lacks any sort of real-time feel, because it relies on the 60-second loop (e.g. you might update the web console and then wait 45 seconds for the change to take effect).
The raspberryPi running run_garden() is dedicated to the garden, and it is basically the only thing taking up resources. So I know I have room to do something, I just don't know what.
Once the loop picks up the fact that a config var has been updated, it could do exponential backoff to keep checking for interaction rather than waiting 60 seconds, but that doesn't feel like a whole lot better.
Is there a better way to basically jump into this long running process?
Listen on a socket in your main loop. Use a timeout (e.g. 60 seconds, the time until the next garden update should be performed) on your socket read calls so you get back to your normal functionality at least every minute when there are no commands coming in.
If you need garden-tending updates to happen no faster than every minute, you also need to check the time since the last update, since read calls will complete significantly faster when commands are coming in.
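A minimal sketch of that pattern with a UDP socket; the port number, do_garden_work(), and handle_command() are placeholders for your existing logic:

import socket
import time

INTERVAL = 60  # seconds between scheduled garden updates

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 9999))  # arbitrary port for manual-override commands

last_update = time.time()
while True:
    timeout = INTERVAL - (time.time() - last_update)
    if timeout <= 0:
        do_garden_work()   # hypothetical: the normal 60-second work
        last_update = time.time()
        continue
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(1024)
        handle_command(data)  # hypothetical: apply the manual override now
    except socket.timeout:
        pass  # no command arrived; loop around and do the scheduled work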
Python's select module sounds like it might be helpful.
If you've ever used the unix analog (for example in socket programming maybe?), then it'll be familiar.
If not, here is the select section of a C sockets reference I often recommend. And here is what looks like a nice writeup of the module.
Warning: the first reference is specifically about C, not Python, but the concept of the select system call is the same, so the discussion might be helpful.
Basically, it allows you to tell it what events you're interested in (for example, socket data arrival, keyboard event), and it'll block either forever, or until a timeout you specify elapses.
If you're using sockets, then adding the socket and stdin to the list of events you're interested in is easy. If you're just looking for a way to "conditionally sleep" for 60 seconds unless/until a keypress is detected, this would work just as well.
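For the "conditionally sleep" case, a minimal sketch watching stdin; a socket could be added to the read list in exactly the same way:

import select
import sys

TIMEOUT = 60  # conditionally sleep for up to a minute

while True:
    # Block until stdin has data or the timeout elapses.
    readable, _, _ = select.select([sys.stdin], [], [], TIMEOUT)
    if readable:
        command = sys.stdin.readline().strip()
        print("manual override: %s" % command)  # hypothetical handling
    else:
        print("timeout: running the scheduled garden update")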
EDIT:
Another way to solve this would be to have your raspberry-pi "register" with the server running the web console. This could involve a little extra work, but it would give you the real-time effect you're looking for.
Basically, the raspberry-pi "registers" itself by alerting the server about itself, and the server stores the address of the device. If using TCP, you could keep a connection open (which might be important if you have firewalls to deal with). If using UDP, you could bind the port on the device before registering, allowing the server to respond to the source address of the "announcement".
Once announced, when config options change on the server, one of two things usually happens:
A) You send a tiny "ping" (in the general sense, not the ICMP host-detection protocol) to the device, alerting it that config options have changed. The device then immediately requests the full config set, acquiring the update with it.
B) You send the updated config option (or maybe the entire config set) back to the device. This decreases the number of messages between the device and the server, but would probably take more work, as it seems like more of a deviation from your current setup.
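A rough sketch of the device side of the UDP variant; the server address, ports, message contents, and fetch_full_config() are all made up for illustration:

import socket

SERVER = ('consoleserver.example', 9000)  # placeholder address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 9001))              # bind first so the server can reply here
sock.sendto(b'register', SERVER)   # announce ourselves

while True:
    data, addr = sock.recvfrom(1024)
    if data == b'ping':            # server says config options changed
        fetch_full_config()        # hypothetical: pull the full config set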
Why not use an event-based loop instead of sleeping for a fixed amount of time?
That way your loop will only run when a change is detected, and it will always run when a change is detected (which is the point of your question, if I understand correctly).
You can do such a thing by using:
python event objects
Just wait for one or all of your event objects to be triggered, then run the loop. You can also wait for X events to be done, etc., depending on whether you expect one variable to be updated a lot.
Or even a system like:
broadcasting events
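For the first option, a minimal sketch using threading.Event; do_garden_work() and whatever thread calls override.set() are hypothetical stand-ins:

import threading

override = threading.Event()

def run_garden():
    while True:
        do_garden_work()              # hypothetical: the normal work
        # Sleep for up to 60 seconds, but wake immediately if the
        # event is set by another thread (e.g. a socket listener).
        if override.wait(timeout=60):
            override.clear()          # a manual override arrived

# Elsewhere, whatever receives the user's change calls:
#     override.set()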