I'm trying to share a class instance's variables with the other processes that I start from within it, since I need to run multiple functions at the same time to record macros from the keyboard and mouse and replay them later with the same timing.
I see that it's possible to use multiprocessing.Manager, but I'm using concurrent.futures.ThreadPoolExecutor. Is there a similar facility there?
I wrote the code below to clarify. The actual code has a setState function for setting the recording state and so on, and the pressed key doesn't get passed around like this. The actual code also has listeners for key presses and mouse moves; the getKey and getMove functions should be the ones appending to the lists. The problem in this case is that the recording variable can't be accessed from the second worker, which should start recording moves once the "Insert" key is pressed. A function in concurrent.futures similar to Manager in multiprocessing would solve it, but I'm not sure what it's called or how to use it.
from concurrent.futures import ThreadPoolExecutor as Executor
import time

class recMacros(object):
    def __init__(self):
        self.recording = False
        self.lastKey = None
        self.lastMove = None
        self.mouseMoves = []
        self.keyPresses = []
        self.runMacros()

    def getTime(self):
        return time.time()

    def getKey(self):
        # return keyboard listener's last key pressed
        return "W"

    def getMove(self):
        # return mouse listener's last move
        return "W"

    def recMoves(self):
        while True:
            while self.recording:
                mouseMove = self.getMove()
                if mouseMove != self.lastMove:
                    self.mouseMoves.append((mouseMove, self.getTime()))
                    self.lastMove = mouseMove

    def recPresses(self):
        while True:
            keyPress = self.getKey()
            if keyPress == "Insert":
                self.recording = True
            elif keyPress == "End":
                self.recording = False
            elif self.recording and keyPress != self.lastKey:
                self.keyPresses.append((keyPress, self.getTime()))
                self.lastKey = keyPress
            else:
                print("Error")

    def runMacros(self):
        with Executor(max_workers=2) as e:
            e.submit(self.recPresses)
            e.submit(self.recMoves)

if __name__ == "__main__":
    recMacros()
I'd appreciate some quick direction since I'm in a rush. Thanks in advance.
@user2357112 supports Monica
Here's the code I used to test the timing, to verify that ThreadPoolExecutor behaves like a process when it comes to running the functions in parallel:
from concurrent.futures import ThreadPoolExecutor
import time

def printTime():
    print(f"Time: {time.time()}\n")

def runPro():
    with ThreadPoolExecutor(max_workers=3) as e:
        for i in range(3):
            e.submit(printTime)

runPro()
If you want to store something in a variable that every worker thread can use, you can use a queue.
Import queue:
import queue
Create a shared variable:
shared_var = queue.Queue(maxsize=0)
where maxsize is the maximum number of items the queue can hold (0 means unbounded).
Put something into the shared variable from any thread:
shared_var.put(item)
Get an item back out of the variable:
variable = shared_var.get()
There is a lot more you can do with queue; see the documentation.
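For context, here is a minimal self-contained sketch of that pattern (the names producer, consumer, and the item strings are made up for illustration): one thread puts items on a shared queue.Queue and another thread blocks on get() until something arrives.
import queue
import threading

shared_var = queue.Queue(maxsize=0)  # maxsize=0 means the queue is unbounded

def producer():
    # Any thread can put items on the shared queue.
    for i in range(3):
        shared_var.put("event %d" % i)

def consumer():
    # get() blocks until an item is available, so no busy-waiting is needed.
    for _ in range(3):
        item = shared_var.get()
        print("got:", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()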
Related
I'm currently creating separate instances of my class, Example, then creating a thread for each instance and using the class's execute_example_thread function as the thread's target. The thread function keeps running as long as the member variable m_exit_signal is not updated to True. However, once Control, Shift, and 2 are pressed on the keyboard, the member variable isn't updated from within the thread instance.
The problem is that the thread function isn't recognizing any change to the member variable. Why isn't it detecting the change? Is the while loop preventing it from doing so?
import keyboard
import multiprocessing
import time

class Example:
    m_exit_signal = False

    def __init__(self):
        keyboard.add_hotkey('control, shift, 2', lambda: self.exit_signaled())

    def execute_example_thread(self):
        exit_status = self.m_exit_signal
        # THREAD continues till exit is called! -
        while exit_status == False:
            time.sleep(5)
            exit_status = self.m_exit_signal
            print(exit_status)

    def exit_signaled(self):
        self.m_exit_signal = True
        print("Status {0}".format(self.m_exit_signal))

example_objects = []
example_objects.append(Example())
example_objects.append(Example())
example_threads = []
for value in example_objects:
    example_threads.append(multiprocessing.Process(target=value.execute_example_thread, args=()))
    example_threads[-1].start()
Multiprocessing forks your code so that it runs in a separate process. In the code above, the keyboard callback calls the method on the instances living in the parent process, while the loop (and a copy of each class instance) actually runs in a forked child process. To signal the child, you need a variable that is shared between the two processes and use it to pass data back and forth. Try the code below.
import keyboard
import multiprocessing as mp
import time

class Example(object):
    def __init__(self, hot_key):
        self.run = mp.Value('I', 1)
        keyboard.add_hotkey('control, shift, %d' % hot_key, self.exit_signaled)
        print("Initialized {}".format(mp.current_process().name))

    def execute(self):
        while self.run.value:
            time.sleep(1)
            print("Running {}".format(mp.current_process().name))
        print("{} stopping".format(mp.current_process().name))

    def exit_signaled(self):
        print("exit signaled from {}".format(mp.current_process().name))
        self.run.value = 0

p1 = mp.Process(target=Example(1).execute)
p1.start()
time.sleep(0.1)
p2 = mp.Process(target=Example(2).execute)
p2.start()
Here the parent and the child of each instance share self.run = mp.Value('I', 1). To share data across processes you need to use one of these shared objects, not just any Python variable.
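As a side note, here is a minimal sketch of the same idea in isolation (the names worker and flag are made up for illustration): a multiprocessing.Value created in the parent lives in shared memory, so the child sees the parent's updates, whereas a plain attribute would only be copied when the child starts.
import multiprocessing as mp
import time

def worker(flag):
    # The child sees the parent's update because mp.Value lives in shared memory.
    while flag.value:
        time.sleep(0.1)
    print("child saw the flag drop to 0")

if __name__ == "__main__":
    flag = mp.Value('I', 1)   # unsigned int, initially 1
    p = mp.Process(target=worker, args=(flag,))
    p.start()
    time.sleep(0.5)
    flag.value = 0            # parent flips the flag; the child's loop exits
    p.join()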
I have the following Python script:
#! /usr/bin/python

import os
from gps import *
from time import *
import time
import threading
import sys

gpsd = None # setting the global variable

class GpsPoller(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        global gpsd # bring it in scope
        gpsd = gps(mode=WATCH_ENABLE) # starting the stream of info
        self.current_value = None
        self.running = True # setting the thread running to true

    def run(self):
        global gpsd
        while gpsp.running:
            gpsd.next() # this will continue to loop and grab EACH set of gpsd info to clear the buffer

if __name__ == '__main__':
    gpsp = GpsPoller() # create the thread
    try:
        gpsp.start() # start it up
        while True:
            print gpsd.fix.speed
            time.sleep(1) ## <<<< THIS LINE HERE
    except (KeyboardInterrupt, SystemExit): # when you press ctrl+c
        print "\nKilling Thread..."
        gpsp.running = False
        gpsp.join() # wait for the thread to finish what it's doing
    print "Done.\nExiting."
I'm not very good with Python, unfortunately. The script is supposed to be multi-threaded somehow (but that probably doesn't matter in the scope of this question).
What baffles me is the gpsd.next() line. If I understand it correctly, it's supposed to tell the script that new GPS data has been acquired and is ready to be read.
However, I read the data using the infinite while True loop with a one-second pause via time.sleep(1).
This, however, sometimes echoes the same data twice (when the sensor hasn't updated it in the last second), and I figure it also skips some sensor readings.
Can I somehow change the script to print the current speed not every second, but every time the sensor reports new data? According to the data sheet that should be once per second (it's a 1 Hz sensor), but obviously it isn't exactly one second; it varies by milliseconds.
As a general design rule, you should have one thread for each input channel or, more generally, for each "loop over a blocking call". Blocking means that execution stops at that call until data arrives; gpsd.next() is such a call.
To synchronize multiple input channels, use a Queue and one extra thread. Each input thread should put its "events" on the (same) queue. The extra thread loops over queue.get() and reacts appropriately.
From this point of view, your script need not be multithreaded, since there is only one input channel, namely the gpsd.next() loop.
Example code:
from gps import *

class GpsPoller(object):
    def __init__(self, action):
        self.gpsd = gps(mode=WATCH_ENABLE) # starting the stream of info
        self.action = action

    def run(self):
        while True:
            self.gpsd.next()
            self.action(self.gpsd)

def myaction(gpsd):
    print gpsd.fix.speed

if __name__ == '__main__':
    gpsp = GpsPoller(myaction)
    gpsp.run() # runs until killed by Ctrl-C
Note how the use of the action callback separates the plumbing from the data evaluation.
To embed the poller into a script doing other stuff (i.e. handling other threads as well), use the queue approach. Example code, building on the GpsPoller class:
from threading import Thread
from Queue import Queue

class GpsThread(object):
    def __init__(self, valuefunc, queue):
        self.valuefunc = valuefunc
        self.queue = queue
        self.poller = GpsPoller(self.on_value)

    def start(self):
        self.t = Thread(target=self.poller.run)
        self.t.daemon = True # kill thread when main thread exits
        self.t.start()

    def on_value(self, gpsd):
        # note that we extract the value right here.
        # Otherwise it could change while the event is in the queue.
        self.queue.put(('gps', self.valuefunc(gpsd)))

def main():
    q = Queue()
    gt = GpsThread(
        valuefunc=lambda gpsd: gpsd.fix.speed,
        queue=q
    )
    print 'press Ctrl-C to stop.'
    gt.start()
    while True:
        # blocks while q is empty.
        source, data = q.get()
        if source == 'gps':
            print data
The "action" we give to the GpsPoller says "calculate a value by valuefunc and put it in the queue". The mainloop sits there until a value pops out, then prints it and continues.
It is also straightforward to put other threads' events on the same queue and add the appropriate handling code.
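As a rough sketch of that idea (written in Python 3 here; the timer source is made up for illustration), a second thread can post its own tagged events on the same queue and the main loop dispatches on the tag. GPS events from the GpsThread above would arrive on the queue in exactly the same way.
import queue
import threading
import time

q = queue.Queue()

def timer_source():
    # A second input channel: posts a ('timer', timestamp) event once per second.
    while True:
        time.sleep(1)
        q.put(('timer', time.time()))

threading.Thread(target=timer_source, daemon=True).start()

while True:
    source, data = q.get()      # blocks until any source produces an event
    if source == 'gps':
        print('speed:', data)
    elif source == 'timer':
        print('tick at', data)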
I see two options here:
1) GpsPoller will check if the data changed and raise a flag.
2) GpsPoller will check if the data changed and put the new data in a queue.
Option #1:
is_speed_changed = False

def run(self):
    global gpsd, is_speed_changed
    while gpsp.running:
        prev_speed = gpsd.fix.speed
        gpsd.next()
        if prev_speed != gpsd.fix.speed:
            is_speed_changed = True # raising flag

while True:
    if is_speed_changed:
        print gpsd.fix.speed
        is_speed_changed = False
Option #2 (I prefer this one since it protects us from race conditions):
import Queue

gpsd_queue = Queue.Queue()

def run(self):
    global gpsd
    while gpsp.running:
        prev_speed = gpsd.fix.speed
        gpsd.next()
        curr_speed = gpsd.fix.speed
        if prev_speed != curr_speed:
            gpsd_queue.put(curr_speed) # putting new speed to queue

while True:
    # get will block if queue is empty
    print gpsd_queue.get()
I need to check whether the Escape key has been pressed during the execution of some non-GUI code. (The code is in Python, but can easily call into C if necessary.) The code receives a function from the GUI that it occasionally calls to check whether it has been interrupted. The question is how to implement this check.
By looking at the documentation, gdk_event_peek seems like an excellent choice for this:
def _check_esc(self):
    event = gtk.gdk.event_peek()
    if event is None or event.type not in (gtk.gdk.KEY_PRESS, gtk.gdk.KEY_RELEASE):
        return False
    return gtk.gdk.keyval_name(event.keyval) == 'Escape'
This doesn't work, however: the event returned from gtk.gdk.event_peek() is always None when the main loop is not running. Changing it to gtk.gdk.display_get_default().peek_event() doesn't help either. I assume the events are in the X event queue and are not yet moved to the GDK event queue. The documentation says:
Note that this function will not get more events from the windowing
system. It only checks the events that have already been moved to the
GDK event queue.
So, how does one transfer the events to the GDK event queue? In other words, when does gtk.gdk.event_peek() ever return an event? Calling gtk.events_pending() doesn't have any effect.
Here is a minimal program to test it:
import gtk, gobject
import time

def code(check):
    while 1:
        time.sleep(.1)
        if check():
            print 'interrupted'
            return

def _check_esc():
    event = gtk.gdk.event_peek()
    print 'event:', event
    if event is None or event.type not in (gtk.gdk.KEY_PRESS, gtk.gdk.KEY_RELEASE):
        return False
    return gtk.gdk.keyval_name(event.keyval) == 'Escape'

def runner():
    code(_check_esc)
    gtk.main_quit()

w = gtk.Window()
w.show()
gobject.idle_add(runner)
gtk.main()
When running the code, the event printed is always None, even if you press Escape or move the mouse.
I also considered installing a handler for Escape and having the checker process events with the while gtk.events_pending(): gtk.main_iteration() idiom. However, this unqueues and dispatches all pending events, including keyboard and mouse events. The effect is that the GUI remains responsive while the code runs, which doesn't look right and can severely interfere with the execution of the code. The only event that should be processed during execution is the Escape key that interrupts it.
I came up with a runner implementation that satisfies the criteria put forward in the question:
import itertools

def runner():
    # _check_esc searches for Escape in our queue
    def _check_esc():
        oldpos = len(queue)
        while gtk.events_pending():
            gtk.main_iteration()
        new = itertools.islice(queue, oldpos, None)
        return any(event.type == gtk.gdk.KEY_PRESS
                       and gtk.gdk.keyval_name(event.keyval) == 'Escape'
                   for event in new)

    queue = []
    # temporarily set the global event handler to queue
    # the events
    gtk.gdk.event_handler_set(queue.append)
    try:
        code(_check_esc)
    finally:
        # restore the handler and replay the events
        handler = gtk.main_do_event
        gtk.gdk.event_handler_set(gtk.main_do_event)
        for event in queue:
            handler(event)
    gtk.main_quit()
Compared to a peek-based solution, its advantage is that it handles the case when another event arrives after the keypress. The disadvantage is that it requires fiddling with the global event handler.
I'm using python-zookeeper for locking, and I'm trying to figure out a way of getting the execution to wait for notification when it's watching a file, because zookeeper.exists() returns immediately, rather than blocking.
Basically, I have the code listed below, but I'm unsure of the best way to implement the notify() and wait_for_notification() functions. It could be done with os.kill() and signal.pause(), but I'm sure that's likely to cause problems if I later have multiple locks in one program - is there a specific Python library that is good for this sort of thing?
def get_lock(zh):
    lockfile = zookeeper.create(zh, lockdir + '/guid-lock-', 'lock',
                                [ZOO_OPEN_ACL_UNSAFE],
                                zookeeper.EPHEMERAL | zookeeper.SEQUENCE)
    while True:
        # this won't work for more than one waiting process, fix later
        children = zookeeper.get_children(zh, lockdir)
        if len(children) == 1 and children[0] == basename(lockfile):
            return lockfile
        # yeah, there's a problem here, I'll fix it later
        for child in children:
            if child < basename(lockfile):
                break
        # exists will call notify when the watched file changes
        if zookeeper.exists(zh, lockdir + '/' + child, notify):
            # Process should wait here until notify() wakes it
            wait_for_notification()

def drop_lock(zh, lockfile):
    zookeeper.delete(zh, lockfile)

def notify(zh, unknown1, unknown2, lockfile):
    pass

def wait_for_notification():
    pass
The Condition variables from Python's threading module are probably a very good fit for what you're trying to do:
http://docs.python.org/library/threading.html#condition-objects
I've extended the example to make it a little more obvious how you would adapt it for your purposes:
#!/usr/bin/env python

from collections import deque
from threading import Thread, Condition

QUEUE = deque()

def an_item_is_available():
    return bool(QUEUE)

def get_an_available_item():
    return QUEUE.popleft()

def make_an_item_available(item):
    QUEUE.append(item)

def consume(cv):
    cv.acquire()
    while not an_item_is_available():
        cv.wait()
    print 'We got an available item', get_an_available_item()
    cv.release()

def produce(cv):
    cv.acquire()
    make_an_item_available('an item to be processed')
    cv.notify()
    cv.release()

def main():
    cv = Condition()
    Thread(target=consume, args=(cv,)).start()
    Thread(target=produce, args=(cv,)).start()

if __name__ == '__main__':
    main()
My answer may not be relevant to your question, but it is relevant to the question title.
from threading import Thread, Event

locker = Event()

def MyJob(locker):
    while True:
        #
        # do some logic here
        #
        locker.clear() # Set event state to 'False'
        locker.wait()  # suspend the thread until event state is 'True'

worker_thread = Thread(target=MyJob, args=(locker,))
worker_thread.start()

#
# some main thread logic here
#
locker.set() # This sets the event state to 'True' and thus resumes the worker_thread
More information here: https://docs.python.org/3/library/threading.html#event-objects
I'm writing an application that listens for sound events (using messages passed in with Open Sound Control), and then based on those events pauses or resumes program execution. My structure works most of the time but always bombs out in the main loop, so I'm guessing it's a thread issue. Here's a generic, simplified version of what I'm talking about:
import time, threading

class Loop():
    aborted = False

    def __init__(self):
        message = threading.Thread(target=self.message, args=((0),))
        message.start()
        loop = threading.Thread(target=self.loop)
        loop.start()

    def message(self, val):
        if val > 1:
            if not self.aborted:
                self.aborted = True
                # do some socket communication
            else:
                self.aborted = False
                # do some socket communication

    def loop(self):
        cnt = 0
        while True:
            print cnt
            if self.aborted:
                while self.aborted:
                    print "waiting"
                    time.sleep(.1)
            cnt += 1

class FakeListener():
    def __init__(self, loop):
        self.loop = loop
        listener = threading.Thread(target=self.listener)
        listener.start()

    def listener(self):
        while True:
            loop.message(2)
            time.sleep(1)

if __name__ == '__main__':
    loop = Loop()
    # fake listener standing in for the real OSC event listener
    listener = FakeListener(loop)
Of course, this simple code seems to work great, so it clearly isn't fully illustrating my real code, but you get the idea. What isn't included here is the fact that each loop pause and resume (by setting aborted=True/False) results in some socket communication, which also involves threads.
What always happens in my code is that the main loop doesn't always pick up where it left off after a sound event. It will work for a number of events, but eventually it just stops responding.
Any suggestions for how to structure this kind of communication amongst threads?
UPDATE:
OK, I think I've got it. Here's a modification that seems to work. There's a listener thread that periodically puts a value into a Queue object, and a checker thread that keeps checking the queue for that value; once it sees it, it flips a boolean to its opposite state. That boolean value controls whether the loop thread continues or waits.
I'm not entirely sure what the q.task_done() function is doing here, though.
import time, threading
import Queue

q = Queue.Queue(maxsize=0)

class Loop():
    aborted = False

    def __init__(self):
        checker = threading.Thread(target=self.checker)
        checker.setDaemon(True)
        checker.start()
        loop = threading.Thread(target=self.loop)
        loop.start()

    def checker(self):
        while True:
            if q.get() == 2:
                q.task_done()
                if not self.aborted:
                    self.aborted = True
                else:
                    self.aborted = False

    def loop(self):
        cnt = 0
        while cnt < 40:
            if self.aborted:
                while self.aborted:
                    print "waiting"
                    time.sleep(.1)
            print cnt
            cnt += 1
            time.sleep(.1)

class fakeListener():
    def __init__(self):
        listener = threading.Thread(target=self.listener)
        listener.setDaemon(True)
        listener.start()

    def listener(self):
        while True:
            q.put(2)
            time.sleep(1)

if __name__ == '__main__':
    # fake listener standing in for the real OSC event listener
    listener = fakeListener()
    loop = Loop()
Umm... I don't completely understand your question, but I'll do my best to explain what I think you need in order to fix your problems.
1) The thread running your Loop.loop function should be set as a daemon thread so that it exits with your main thread (so you don't have to kill the Python process every time you want to shut down your program). To do this, just call loop.setDaemon(True) before you call the thread's start function.
2) The simplest and most fail-proof way to communicate between threads is with a Queue. One thread will put an item in the Queue and another thread will take an item out, do something with it, and then terminate (or get another job).
In Python a queue can be anything from a global list to Python's built-in Queue object. I recommend the Python Queue because it is thread-safe and easy to use.
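For illustration, here is a minimal sketch of both points in Python 3 (the names jobs and worker are made up): a daemon worker thread consumes items from a Queue; task_done() simply tells the queue that an item fetched with get() has been fully processed, which is what lets queue.join() return once every queued item has been accounted for.
import queue
import threading

jobs = queue.Queue()

def worker():
    while True:
        item = jobs.get()       # blocks until a job is available
        print("processing", item)
        jobs.task_done()        # mark this job as finished so jobs.join() can return

# Daemon thread: it won't keep the process alive once the main thread exits.
t = threading.Thread(target=worker, daemon=True)
t.start()

for i in range(5):
    jobs.put(i)

jobs.join()                     # wait until every queued job has been marked done
print("all jobs processed")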