Python global variable shared between threads (using python-osc) - python

I'm stuck with a threading problem here. I need threads to access a global variable.
I've read a previous answer to a similar question and I understood the "power" of the global keyword in order for functions and threads to access global variables.
I'm able to make the following code work and it is pretty straightforward to me:
# WORKING CODE !!!
from threading import Thread
import sys, time

a = ""  # global variable

def thread1(threadname):
    global a
    while True:
        a *= 2
        time.sleep(2)

def thread2(threadname):
    global a
    while True:
        a += 1
        time.sleep(1)

if __name__ == "__main__":
    thread1 = Thread(target=thread1, args=("Thread-1",))
    thread2 = Thread(target=thread2, args=("Thread-2",))

    a = 23

    thread1.start()
    thread2.start()

    while True:
        print(a)
Now I would like to have an OSC driven function to modify the global variable a.
I'm using the python-osc module and I'm running the OSC server on its own thread.
As before I have declared a as a global variable inside the mapped function associated with the "/learn" OSC method.
Strangely, to my comprehension, the following code does not behave the same way as the previous one.
Edited 2018-10-18, 16:14: "a" is not increasing at all and what I'm seeing printed is
a: 1
printed continuously. It is as if there were two different "a" values: one that is increasing inside the OSC thread, which is different from the global "a" of the main one.
What am I doing wrong?
import threading
from time import sleep
from pythonosc import osc_server, dispatcher

OSCaddress = "192.168.1.68"
OSCport = 13000

a = ""

# OSC functions
def menageLearnButton(unused_addr, args, value):
    global a
    if value == 1:
        a += 1
    else:
        a += 3

if __name__ == "__main__":
    # OSC dispatcher to respond to incoming OSC messages
    dispatcher = dispatcher.Dispatcher()
    dispatcher.map("/learn", menageLearnButton, "learning")

    a = 1

    # better to run the OSC server on its own thread
    # in order not to block the program here
    OSCserver = osc_server.ForkingOSCUDPServer((OSCaddress, OSCport), dispatcher)
    OSCserver_thread = threading.Thread(target=OSCserver.serve_forever)
    OSCserver_thread.start()

    while True:
        print("a: {}".format(a))
        sleep(1)
Thank you very much for your support.

I think what is going on is that 'ForkingOSCUDPServer' creates a new process for each OSC request, so 'a' gets reinitialized each time. If I switch your code to use 'ThreadingOSCUDPServer', it seems to have the desired behavior.
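For reference, a minimal sketch of the change (only the server class differs; the rest of your code can stay as it is):

# Threads share the parent's memory, so the handler and the main loop
# see the same global "a"; a forked child process gets its own copy.
OSCserver = osc_server.ThreadingOSCUDPServer((OSCaddress, OSCport), dispatcher)
OSCserver_thread = threading.Thread(target=OSCserver.serve_forever)
OSCserver_thread.start()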

Related

How do I make the input ask for the password while the timer is still running?

I am trying to make an attendance system and right now, I want to create a random password and put it on a countdown so that once it runs out, the student can't use the code anymore. However, when I try to run it, it only displays the password and the countdown, and only asks for input after the timer runs out.
I have attempted to use a for loop as well as the multiprocessing module to no avail. I suspect that the error is located somewhere around my use of the threads.
import threading
#create code and timer
Thread1 = threading.Thread(target=generateCodeandTimer(600))
# make input
Thread2 = threading.Thread(target=attend)
# Start the thread
Thread1.start()
# Start the thread
Thread2.start()
But for reference, this is my full code:
import string
import random
import time
import sys
import threading

code = ""

def generateCodeandTimer(s):
    global code
    code = ''.join((random.choice(string.ascii_lowercase + string.digits) for x in range(6)))
    print("Attendance code:", code)
    while s != -1:
        mins = s // 60
        secs = s % 60
        countdown = '{:02d}:{:02d}'.format(mins, secs)
        sys.stdout.write('\r' + countdown)
        time.sleep(1)
        s -= 1
        if s == -1:
            print()
            print("Code expired")

def attend():
    print()
    studentinput = input("Please enter the code")
    if studentinput == code:
        print()
        print("Your attendance has been taken")
    else:
        print()
        print("Wrong code!")

#create code and timer
Thread1 = threading.Thread(target=generateCodeandTimer(600))
# make input
Thread2 = threading.Thread(target=attend)
# Start the thread
Thread1.start()
# Start the thread
Thread2.start()
In this line:
Thread1 = threading.Thread(target=generateCodeandTimer(600))
you are actually calling the function generateCodeandTimer. The target keyword requires a function object, but this code calls the function and then passes the result as the target of the thread.
The second time you started a thread, you got it right:
Thread2 = threading.Thread(target=attend)
Note the difference: target=attend passes the function object attend because you do not CALL the function. If you had written target=attend(), you would have called the function and passed its result as the target.
The solution is found in the documentation for the Thread constructor. Change the first thread creation to this:
Thread1 = threading.Thread(target=generateCodeandTimer, args=(600,))
The comma after 600 is necessary because the args= keyword requires a tuple.
Your program will now run as you intend. You will discover some other problems - for example, the program won't exit immediately when the user types in the password. But I will let you figure those out, or ask more questions if you run into trouble.
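As a quick illustration of the difference (greet is a hypothetical function, not from your code):

import threading

def greet(name):
    print("hello", name)

# Wrong: greet("world") runs immediately, in the current thread,
# and its return value (None) becomes the thread's target.
t_wrong = threading.Thread(target=greet("world"))

# Right: the function object and its arguments are passed separately,
# so greet runs later, inside the new thread.
t_right = threading.Thread(target=greet, args=("world",))
t_right.start()
t_right.join()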

Can I somehow avoid using time.sleep() in this script?

I have the following python script:
#! /usr/bin/python

import os
from gps import *
from time import *
import time
import threading
import sys

gpsd = None  # setting the global variable

class GpsPoller(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        global gpsd  # bring it in scope
        gpsd = gps(mode=WATCH_ENABLE)  # starting the stream of info
        self.current_value = None
        self.running = True  # setting the thread running to true

    def run(self):
        global gpsd
        while gpsp.running:
            gpsd.next()  # this will continue to loop and grab EACH set of gpsd info to clear the buffer

if __name__ == '__main__':
    gpsp = GpsPoller()  # create the thread
    try:
        gpsp.start()  # start it up
        while True:
            print gpsd.fix.speed
            time.sleep(1)  ## <<<< THIS LINE HERE
    except (KeyboardInterrupt, SystemExit):  # when you press ctrl+c
        print "\nKilling Thread..."
        gpsp.running = False
        gpsp.join()  # wait for the thread to finish what it's doing
    print "Done.\nExiting."
I'm not very good with python, unfortunately. The script should be multi-threaded somehow (but that probably doesn't matter in the scope of this question).
What baffles me is the gpsd.next() line. If I get it right, it was supposed to tell the script that new gps data have been acquired and are ready to be read.
However, I read the data using the infinite while True loop with a 1 second pause with time.sleep(1).
What this does, however, is sometimes echo the same data twice (when the sensor hasn't updated the data in the last second), and I figure it also skips some sensor data.
Can I somehow change the script to print the current speed not every second, but every time the sensor reports new data? According to the data sheet it should be every second (a 1 Hz sensor), but obviously it isn't exactly 1 second, but varies by milliseconds.
As a general design rule, you should have one thread for each input channel or, more generally, for each "loop over a blocking call". Blocking means that the execution stops at that call until data arrives; gpsd.next() is such a call.
To synchronize multiple input channels, use a Queue and one extra thread. Each input thread should put its "events" on the (same) queue. The extra thread loops over queue.get() and reacts appropriately.
From this point of view, your script need not be multithreaded, since there is only one input channel, namely the gpsd.next() loop.
Example code:
from gps import *

class GpsPoller(object):
    def __init__(self, action):
        self.gpsd = gps(mode=WATCH_ENABLE)  # starting the stream of info
        self.action = action

    def run(self):
        while True:
            self.gpsd.next()
            self.action(self.gpsd)

def myaction(gpsd):
    print gpsd.fix.speed

if __name__ == '__main__':
    gpsp = GpsPoller(myaction)
    gpsp.run()  # runs until killed by Ctrl-C
Note how the use of the action callback separates the plumbing from the data evaluation.
To embed the poller into a script doing other stuff (i.e. handling other threads as well), use the queue approach. Example code, building on the GpsPoller class:
from threading import Thread
from Queue import Queue

class GpsThread(object):
    def __init__(self, valuefunc, queue):
        self.valuefunc = valuefunc
        self.queue = queue
        self.poller = GpsPoller(self.on_value)

    def start(self):
        self.t = Thread(target=self.poller.run)
        self.t.daemon = True  # kill thread when main thread exits
        self.t.start()

    def on_value(self, gpsd):
        # note that we extract the value right here.
        # Otherwise it could change while the event is in the queue.
        self.queue.put(('gps', self.valuefunc(gpsd)))

def main():
    q = Queue()
    gt = GpsThread(
        valuefunc=lambda gpsd: gpsd.fix.speed,
        queue=q
    )
    print 'press Ctrl-C to stop.'
    gt.start()
    while True:
        # blocks while q is empty.
        source, data = q.get()
        if source == 'gps':
            print data
The "action" we give to the GpsPoller says "calculate a value by valuefunc and put it in the queue". The mainloop sits there until a value pops out, then prints it and continues.
It is also straightforward to put other Thread's events on the queue and add the appropriate handling code.
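For example, a hypothetical second input thread (not part of the code above) could put its own tagged events on the same queue and be handled in the same loop:

from threading import Thread
from Queue import Queue
import time

def clock_poller(queue):
    # a second "input channel": emits a tick event every 5 seconds
    while True:
        time.sleep(5)
        queue.put(('clock', time.time()))

def main():
    q = Queue()
    clock_thread = Thread(target=clock_poller, args=(q,))
    clock_thread.daemon = True
    clock_thread.start()
    # ... start the GpsThread from above on the same queue ...
    while True:
        source, data = q.get()
        if source == 'gps':
            print data
        elif source == 'clock':
            print 'tick at', data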
I see two options here:
GpsPoller will check if data changed and raise a flag
GpsPoller will check if data changed and put the new data in a queue.
Option #1:
is_speed_changed = False  # module-level flag

def run(self):
    global gpsd, is_speed_changed
    while gpsp.running:
        prev_speed = gpsd.fix.speed
        gpsd.next()
        if prev_speed != gpsd.fix.speed:
            is_speed_changed = True  # raising flag

while True:
    if is_speed_changed:
        print gpsd.fix.speed
        is_speed_changed = False
Option #2 (I prefer this one since it protects us from race conditions):
import Queue

gpsd_queue = Queue.Queue()

def run(self):
    global gpsd
    while gpsp.running:
        prev_speed = gpsd.fix.speed
        gpsd.next()
        curr_speed = gpsd.fix.speed
        if prev_speed != curr_speed:
            gpsd_queue.put(curr_speed)  # putting the new speed on the queue

while True:
    # get will block if the queue is empty
    print gpsd_queue.get()

Control running Python Process (multiprocessing)

I have yet another question about Python multiprocessing.
I have a module that creates a Process and just runs in a while True loop.
This module is meant to be enabled/disabled from another Python module.
That other module will import the first one once and is also run as a process.
How would I better implement this?
So, for reference:
#foo.py
def foo():
    while True:
        if enabled:
            pass  # do something

p = Process(target=foo)
p.start()
and imagine the second module to be something like this:
#bar.py
import foo, time

def bar():
    while True:
        foo.enable()
        time.sleep(10)
        foo.disable()

Process(target=bar).start()
Constantly running a process that checks a condition inside a loop seems like a waste, but I would gladly accept a solution that just lets me set the enabled value from outside.
Ideally I would prefer to be able to terminate and restart the process, again from outside of this module.
From my understanding, I would use a Queue to pass commands to the Process. If it is indeed just that, can someone show me how to set it up in a way that lets me add something to the queue from a different module?
Can this even be easily done with Python, or is it time to abandon hope and switch to something like C or Java?
I proposed in a comment two different approaches:
using a shared variable from multiprocessing.Value
pausing/resuming the process with signals
Control by sharing a variable
from multiprocessing import Process, Value
import time

def target_process_1(run_statement):
    while True:
        if run_statement.value:
            print "I'm running !"
        time.sleep(1)

def target_process_2(run_statement):
    time.sleep(3)
    print "Stopping"
    run_statement.value = False
    time.sleep(3)
    print "Resuming"
    run_statement.value = True

if __name__ == "__main__":
    run_statement = Value("i", 1)

    process_1 = Process(target=target_process_1, args=(run_statement,))
    process_2 = Process(target=target_process_2, args=(run_statement,))

    process_1.start()
    process_2.start()

    time.sleep(8)

    process_1.terminate()
    process_2.terminate()
Control by sending a signal
from multiprocessing import Process
import time
import os, signal

def target_process_1():
    while True:
        print "Running !"
        time.sleep(1)

def target_process_2(target_pid):
    time.sleep(3)
    os.kill(target_pid, signal.SIGSTOP)
    time.sleep(3)
    os.kill(target_pid, signal.SIGCONT)

if __name__ == "__main__":
    process_1 = Process(target=target_process_1)
    process_1.start()

    process_2 = Process(target=target_process_2, args=(process_1.pid,))
    process_2.start()

    time.sleep(8)

    process_1.terminate()
    process_2.terminate()
Side note: if possible, do not run a bare while True loop.
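One way to follow that side note is to make the loop condition an exit check instead of a bare while True, for example with a multiprocessing.Event (a sketch, not part of the code above); the process can then be asked to stop cleanly instead of being terminated:

from multiprocessing import Process, Event
import time

def target_process(stop_event):
    while not stop_event.is_set():  # loop until asked to stop
        print "I'm running !"
        time.sleep(1)

if __name__ == "__main__":
    stop_event = Event()
    process = Process(target=target_process, args=(stop_event,))
    process.start()
    time.sleep(5)
    stop_event.set()   # ask the loop to exit after its current iteration
    process.join()     # no terminate() needed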
EDIT: if you want to manage your process from two different files, and supposing you want to use control by sharing a variable, this is one way to do it.
# file foo.py
from multiprocessing import Value, Process
import time

__all__ = ['start', 'stop', 'enable', 'disable']

_statement = None
_process = None

def _target(run_statement):
    """ Target of foo's process """
    while True:
        if run_statement.value:
            print "I'm running !"
        time.sleep(1)

def start():
    global _process, _statement
    _statement = Value("i", 1)
    _process = Process(target=_target, args=(_statement,))
    _process.start()

def stop():
    global _process, _statement
    _process.terminate()
    _statement, _process = None, None

def enable():
    _statement.value = True

def disable():
    _statement.value = False
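A minimal sketch of how the bar.py from the question could drive this module (the extra sleep after disable() is added here so the pause is actually observable; it is not in the original bar.py):

# file bar.py
import time
import foo

def bar():
    foo.start()          # launch foo's process once
    while True:
        foo.enable()     # let foo's loop print again
        time.sleep(10)
        foo.disable()    # pause foo's loop
        time.sleep(10)

if __name__ == "__main__":
    bar()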

ideal thread structure question (involves multiple thread communication)

I'm writing an application that listens for sound events (using messages passed in with Open Sound Control), and then based on those events pauses or resumes program execution. My structure works most of the time but always bombs out in the main loop, so I'm guessing it's a thread issue. Here's a generic, simplified version of what I'm talking about:
import time, threading

class Loop():
    aborted = False

    def __init__(self):
        message = threading.Thread(target=self.message, args=((0),))
        message.start()
        loop = threading.Thread(target=self.loop)
        loop.start()

    def message(self, val):
        if val > 1:
            if not self.aborted:
                self.aborted = True
                # do some socket communication
            else:
                self.aborted = False
                # do some socket communication

    def loop(self):
        cnt = 0
        while True:
            print cnt
            if self.aborted:
                while self.aborted:
                    print "waiting"
                    time.sleep(.1)
            cnt += 1

class FakeListener():
    def __init__(self, loop):
        self.loop = loop
        listener = threading.Thread(target=self.listener)
        listener.start()

    def listener(self):
        while True:
            loop.message(2)
            time.sleep(1)

if __name__ == '__main__':
    loop = Loop()
    #fake listener standing in for the real OSC event listener
    listener = FakeListener(loop)
Of course, this simple code seems to work great, so it's clearly not fully illustrating my real code, but you get the idea. What isn't included here is the fact that each loop pause and resume (by setting aborted=True/False) results in some socket communication, which also involves threads.
What always happens in my code is that the main loop doesn't always pick up where it left off after a sound event. It will work for a number of events but then eventually it just doesn't answer.
Any suggestions for how to structure this kind of communication amongst threads?
UPDATE:
OK, I think I've got it. Here's a modification that seems to work. There's a listener thread that periodically puts a value into a Queue object. There's a checker thread that keeps checking the queue looking for the value, and once it sees it, it sets a boolean to its opposite state. That boolean value controls whether the loop thread continues or waits.
I'm not entirely sure what the q.task_done() function is doing here, though.
import time, threading
import Queue

q = Queue.Queue(maxsize=0)

class Loop():
    aborted = False

    def __init__(self):
        checker = threading.Thread(target=self.checker)
        checker.setDaemon(True)
        checker.start()
        loop = threading.Thread(target=self.loop)
        loop.start()

    def checker(self):
        while True:
            if q.get() == 2:
                q.task_done()
                if not self.aborted:
                    self.aborted = True
                else:
                    self.aborted = False

    def loop(self):
        cnt = 0
        while cnt < 40:
            if self.aborted:
                while self.aborted:
                    print "waiting"
                    time.sleep(.1)
            print cnt
            cnt += 1
            time.sleep(.1)

class fakeListener():
    def __init__(self):
        listener = threading.Thread(target=self.listener)
        listener.setDaemon(True)
        listener.start()

    def listener(self):
        while True:
            q.put(2)
            time.sleep(1)

if __name__ == '__main__':
    #fake listener standing in for the real OSC event listener
    listener = fakeListener()
    loop = Loop()
Umm... I don't completely understand your question, but I'll do my best to explain what I think you need to fix your problems.
1) The thread of your Loop.loop function should be set as a daemon thread so that it exits with your main thread (so you don't have to kill the Python process every time you want to shut down your program). To do this, just put loop.setDaemon(True) before you call the thread's "start" function.
2) The simplest and most fail-proof way to communicate between threads is with a Queue. One thread will put an item in that Queue and another thread will take an item out, do something with the item and then terminate (or get another job).
In Python a Queue can be anything from a global list to Python's built-in Queue object. I recommend the Python Queue because it is thread safe and easy to use.
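A minimal sketch of the pattern from point 2 (the names are illustrative, not from your code). It also shows what q.task_done() is for: it pairs with q.join(), which blocks until every item that was put on the queue has been marked done:

import threading, time
import Queue

q = Queue.Queue()

def producer():
    # puts one item per second on the queue
    for i in range(5):
        q.put(i)
        time.sleep(1)

def consumer():
    # q.get() blocks until an item is available
    while True:
        item = q.get()
        print "got", item
        q.task_done()   # tell the queue this item is fully processed

worker = threading.Thread(target=consumer)
worker.setDaemon(True)   # dies together with the main thread
worker.start()

producer()
q.join()   # wait until every item that was put has been processed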

How can I run 2 servers at once in Python?

I need to run 2 servers at once in Python using the threading module, but when I call the function run(), the first server runs and the second server does not run until the end of the first server.
This is the source code:
import os
import sys
import threading

n_server = 0
n_server_lock = threading.Lock()

class ServersThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.start()
        self.join()

    def run(self):
        global n_server, n_server_lock
        if n_server == 0:
            n_server_lock.acquire()
            n_server += 1
            n_server_lock.release()
            print(['MainServer'])
            # This is the first server class
            main_server = MainServer()
        elif n_server == 1:
            n_server_lock.acquire()
            n_server += 1
            n_server_lock.release()
            print(['DownloadServer'])
            # This is the second server class
            download_server = DownloadServer()

if __name__ == "__main__":
    servers = []
    for i in range(2):
        servers += [ServersThread()]
When I call the server class, it automatically runs an infinite while loop.
So how can I run 2 servers at once?
Thank you very much for your help, Fragsworth. I just tested the new structure and it works perfectly. The MainServer and DownloadServer classes inherit from threading.Thread and run the infinite loop inside run(). Finally, I call the servers as you said.
You don't want to join() in your __init__ function. This is causing the system to block until each thread finishes.
I would recommend you restructure your program so your main function looks more like the following:
if __name__ == "__main__":
    servers = [MainServer(), DownloadServer()]
    for s in servers:
        s.start()
    for s in servers:
        s.join()
That is, create a separate thread class for your MainServer and DownloadServer, then have them start asynchronously from the main process, and join afterwards.
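A minimal sketch of what those two thread classes might look like (the loop bodies are placeholders, not from the question):

import threading
import time

class MainServer(threading.Thread):
    def run(self):
        while True:
            # placeholder for the real main-server loop
            time.sleep(1)

class DownloadServer(threading.Thread):
    def run(self):
        while True:
            # placeholder for the real download-server loop
            time.sleep(1)

if __name__ == "__main__":
    servers = [MainServer(), DownloadServer()]
    for s in servers:
        s.start()
    for s in servers:
        s.join()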
