Python Exception in thread Thread-1 (most likely raised during interpreter shutdown)?

My friend and I have been working on a large project in Python and Pygame, to learn and for fun. Basically it is an AI simulation of a small village. We wanted a day/night cycle, so I found a neat way to change the color of an entire surface using numpy (specifically the cross-fade technique from the surfarray tutorial) - http://www.pygame.org/docs/tut/surfarray/SurfarrayIntro.html
I implemented it into the code and it works, but it is extremely slow, like < 1 fps slow. So I looked into threading (because I wanted to add it eventually) and found this page on queues - Learning about Queue module in python (how to run it)
I spent about 15 minutes making a basic system, but as soon as I run it, the window closes and it says
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
EDIT: This is literally all it says, no Traceback error
I don't know what I am doing wrong, but I assume I am missing something simple. I added the necessary parts of the code below.
q_in = Queue.Queue(maxsize=0)
q_out = Queue.Queue(maxsize=0)

def run(): # Here is where the main stuff happens
    # There is more here; I am just showing the essential parts
    while True:
        a = abs(abs(world.degree - 180) - 180) / 400.

        # Process world
        world.process(time_passed_seconds)

        blank_surface = pygame.Surface(SCREEN_SIZE)
        world.render(blank_surface) # The world class renders everything onto a blank surface
        q_in.put((blank_surface, a))
        screen.blit(q_out.get(), (0, 0))

def DayNight():
    while True:
        blank_surface, a = q_in.get()
        imgarray = surfarray.array3d(blank_surface) # Here is where the new numpy stuff starts (AKA Day/Night cycle)
        src = N.array(imgarray)
        dest = N.zeros(imgarray.shape)
        dest[:] = 20, 30, 120
        diff = (dest - src) * a
        xfade = src + diff.astype(N.int)
        surfarray.blit_array(blank_surface, xfade)
        q_out.put(blank_surface)
        q_in.task_done()

def main():
    MainT = threading.Thread(target=run)
    MainT.daemon = True
    MainT.start()

    DN = threading.Thread(target=DayNight)
    DN.daemon = True
    DN.start()

    q_in.join()
    q_out.join()
If anyone could help it would be greatly appreciated. Thank you.

This is pretty common when using daemon threads. Why are you setting .daemon = True on your threads? Think about it. While there are legitimate uses for daemon threads, most times a programmer does it because they're confused, as in "I don't know how to shut my threads down cleanly, and the program will freeze on exit if I don't, so I know! I'll say they're daemon threads. Then the interpreter won't wait for them to terminate when it exits. Problem solved."
But it isn't solved - it usually just creates other problems. In particular, the daemon threads keep on running while the interpreter is - on exit - destroying itself. Modules are destroyed, stdin and stdout and stderr are destroyed, etc etc. All sorts of things can go wrong in daemon threads then, as the stuff they try to access is annihilated.
The specific message you're seeing is produced when an exception is raised in some thread, but interpreter destruction has gotten so far that even the sys module no longer contains anything usable. The threading implementation retains a reference to sys.stderr internally so that it can tell you something then (specifically, the exact message you're seeing), but too much of the interpreter has been destroyed to tell you anything else about what went wrong.
So find a way to shut down your threads cleanly instead (and remove .daemon = True). Don't know enough about your problem to suggest a specific way, but you'll think of something ;-)
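For illustration (this sketch is mine, not part of the original answer), one common clean-shutdown pattern is to push a sentinel value into the queue so the worker knows when to exit, then join the thread before quitting. The names here are made up and it's simplified to a single queue:

import threading
import Queue  # spelled "queue" in Python 3

q = Queue.Queue()
SENTINEL = None

def worker():
    while True:
        item = q.get()
        if item is SENTINEL:   # sentinel seen: time to shut down
            q.task_done()
            break
        # ... process item here ...
        q.task_done()

t = threading.Thread(target=worker)  # note: not a daemon thread
t.start()

for item in range(10):
    q.put(item)

q.put(SENTINEL)  # tell the worker to stop
t.join()         # wait for it to finish cleanly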
BTW, I'd suggest removing the maxsize=0 arguments on your Queue() constructors. The default is "unbounded", and "everyone knows that", while few people know that maxsize=0 also means "unbounded". That's gotten worse as other datatypes have taken maxsize=0 to mean "maximum size really is 0" (the best example of that is collections.deque); but "no argument means unbounded" is still universally true.
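To make that concrete (my example, using Python 3 module names):

import queue
import collections

q1 = queue.Queue()           # unbounded - the idiomatic spelling
q2 = queue.Queue(maxsize=0)  # also unbounded, but easy to misread
d = collections.deque(maxlen=0)
d.append(1)
print(len(d))                # 0 - for deque, a maximum size of 0 really means 0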

(1) This works for me.
Source: https://realpython.com/intro-to-python-threading/#starting-a-thread
(2) I use daemon threads:
import logging
import threading
import time

def thread_function(name):
    logging.info("Thread %s: starting", name)
    time.sleep(2)
    logging.info("Thread %s: finishing", name)

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    threads = list()
    for index in range(3):
        logging.info("Main : create and start thread %d.", index)
        x = threading.Thread(target=thread_function, args=(index,), daemon=True)
        threads.append(x)
        x.start()

    for index, thread in enumerate(threads):
        logging.info("Main : before joining thread %d.", index)
        thread.join()
        logging.info("Main : thread %d done", index)

Related

Set function timeout without having to use contextlib [duplicate]

I looked online and found some SO discussions and ActiveState recipes for running some code with a timeout. It looks like there are some common approaches:
Use a thread that runs the code, and join it with a timeout. If the timeout elapses, kill the thread. This is not directly supported in Python (it uses the private _Thread__stop function), so it is bad practice.
Use signal.SIGALRM - but this approach does not work on Windows!
Use a subprocess with a timeout - but this is too heavy; if I want to start an interruptible task often, I don't want to fire up a process for each one!
So, what is the right way? I'm not asking about workarounds (e.g. use Twisted and async IO), but the actual way to solve the actual problem: I have some function and I want to run it only up to some timeout. If the timeout elapses, I want control back. And I want it to work on Linux and Windows.
A completely general solution to this really, honestly does not exist. You have to use the right solution for a given domain.
If you want timeouts for code you fully control, you have to write it to cooperate. Such code has to be able to break up into little chunks in some way, as in an event-driven system. You can also do this by threading if you can ensure nothing will hold a lock too long, but handling locks right is actually pretty hard.
If you want timeouts because you're afraid code is out of control (for example, if you're afraid the user will ask your calculator to compute 9**(9**9)), you need to run it in another process. This is the only easy way to sufficiently isolate it. Running it in your event system or even a different thread will not be enough. It is also possible to break things up into little chunks similar to the other solution, but requires very careful handling and usually isn't worth it; in any event, that doesn't allow you to do the same exact thing as just running the Python code.
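A tiny sketch of what that first, cooperative style can look like (my example, with a made-up chunked task that checks its own deadline):

import time

def cooperative_task(deadline):
    # hypothetical work split into small chunks; each chunk checks the deadline
    done = 0
    for chunk in range(1000000):
        if time.monotonic() > deadline:
            return 'timed out after %d chunks' % done
        # ... do one small piece of work here ...
        done += 1
    return 'finished all %d chunks' % done

print(cooperative_task(time.monotonic() + 2.0))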
What you might be looking for is the multiprocessing module. If subprocess is too heavy, then this may not suit your needs either.
import time
import multiprocessing

def do_this_other_thing_that_may_take_too_long(duration):
    time.sleep(duration)
    return 'done after sleeping {0} seconds.'.format(duration)

pool = multiprocessing.Pool(1)
print('starting....')
res = pool.apply_async(do_this_other_thing_that_may_take_too_long, [8])
for timeout in range(1, 10):
    try:
        print('{0}: {1}'.format(timeout, res.get(timeout)))
    except multiprocessing.TimeoutError:
        print('{0}: timed out'.format(timeout))
print('end')
If it's network related you could try:
import socket
socket.setdefaulttimeout(number)
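For example (my sketch, using an unroutable test address so the connect attempt stalls):

import socket

socket.setdefaulttimeout(2)          # seconds; applies to newly created sockets
s = socket.socket()
try:
    s.connect(("10.255.255.1", 80))  # blackhole address, so it times out
except socket.timeout:
    print("connect timed out")
finally:
    s.close()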
I found this in the eventlet library:
http://eventlet.net/doc/modules/timeout.html

from eventlet.timeout import Timeout

timeout = Timeout(seconds, exception)
try:
    ...  # execution here is limited by timeout
finally:
    timeout.cancel()
For "normal" Python code, that doesn't linger prolongued times in C extensions or I/O waits, you can achieve your goal by setting a trace function with sys.settrace() that aborts the running code when the timeout is reached.
Whether that is sufficient or not depends on how co-operating or malicious the code you run is. If it's well-behaved, a tracing function is sufficient.
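A rough sketch of that idea (mine, not from the answer above, and only effective for pure-Python code, since the trace function never fires while a single C call is running):

import sys
import time

class TimeoutAbort(Exception):
    pass

def run_with_timeout(func, seconds):
    deadline = time.monotonic() + seconds

    def tracer(frame, event, arg):
        # called on call/line events of the traced code;
        # raising here aborts the code being traced
        if time.monotonic() > deadline:
            raise TimeoutAbort()
        return tracer

    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(None)

def busy():
    while True:
        pass  # pure-Python busy loop, so the tracer gets a chance to run

try:
    run_with_timeout(busy, 2.0)
except TimeoutAbort:
    print('aborted by the trace function')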
Another way is to use faulthandler:

import time
import faulthandler

faulthandler.enable()
try:
    faulthandler.dump_traceback_later(3)
    time.sleep(10)
finally:
    faulthandler.cancel_dump_traceback_later()

N.B.: The faulthandler module has been part of the stdlib since Python 3.3, where these functions are spelled dump_traceback_later and cancel_dump_traceback_later, as used above.
If you're running code that you expect to die after a set time, then you should write it properly so that there aren't any negative effects on shutdown, whether it's a thread or a subprocess. A command pattern with undo would be useful here.
So, it really depends on what the thread is doing when you kill it. If it's just crunching numbers, who cares if you kill it. If it's interacting with the filesystem when you kill it, then maybe you should really rethink your strategy.
What is supported in Python when it comes to threads? Daemon threads and joins. Why does Python let the main thread exit if you've joined a daemon while it's still active? Because it's understood that someone using daemon threads will (hopefully) write the code in a way that it won't matter when that thread dies. Giving a timeout to a join and then letting main die, and thus taking any daemon threads with it, is perfectly acceptable in this context.
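For example, a minimal sketch of that last point (my code, with a made-up task whose abandonment at exit is harmless):

import threading
import time

def crunch():
    # hypothetical long-running work that can safely be abandoned
    time.sleep(60)

t = threading.Thread(target=crunch, daemon=True)
t.start()
t.join(timeout=5)     # wait at most 5 seconds
if t.is_alive():
    print('gave up waiting; the daemon thread dies when the main thread exits')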
I've solved it this way:
For me it worked great (on Windows, and not heavy at all). I hope it's useful for someone.
import threading
import time

class LongFunctionInside(object):
    lock_state = threading.Lock()
    working = False

    def long_function(self, timeout):
        self.working = True
        timeout_work = threading.Thread(name="thread_name", target=self.work_time, args=(timeout,))
        timeout_work.daemon = True
        timeout_work.start()

        while True:                 # endless/long work
            time.sleep(0.1)         # at this rate the CPU is almost not used
            if not self.working:    # if state is working == true, still working
                break
        self.set_state(True)

    def work_time(self, sleep_time):
        # thread function that just sleeps for the specified time;
        # on wake-up it checks whether the function is still working
        # and, if so, sets the secured variable working to False
        time.sleep(sleep_time)
        if self.working:
            self.set_state(False)

    def set_state(self, state):     # secured state change
        while True:
            self.lock_state.acquire()
            try:
                self.working = state
                break
            finally:
                self.lock_state.release()

lw = LongFunctionInside()
lw.long_function(10)
The main idea is to create a thread that just sleeps in parallel to the "long work" and, on wake-up (after the timeout), changes the secured state variable; the long function checks that variable during its work.
I'm pretty new to Python programming, so if this solution has fundamental errors (resources, timing, or deadlock problems), please respond.
Solving it with the 'with' construct, and merging in the solution from
Timeout function if it takes too long to finish
(this thread), which works better:
import threading, time

class Exception_TIMEOUT(Exception):
    pass

class linwintimeout:

    def __init__(self, f, seconds=1.0, error_message='Timeout'):
        self.seconds = seconds
        self.thread = threading.Thread(target=f)
        self.thread.daemon = True
        self.error_message = error_message

    def handle_timeout(self):
        raise Exception_TIMEOUT(self.error_message)

    def __enter__(self):
        try:
            self.thread.start()
            self.thread.join(self.seconds)
        except Exception as te:
            raise te

    def __exit__(self, type, value, traceback):
        if self.thread.is_alive():
            return self.handle_timeout()

def function():
    while True:
        print("keep printing ...")
        time.sleep(1)

try:
    with linwintimeout(function, seconds=5.0, error_message='exceeded timeout of %s seconds' % 5.0):
        pass
except Exception_TIMEOUT as e:
    print(" attention !! exceeded timeout, giving up ... %s " % e)

Is there a problem with Pipes in Python multiprocessing on macOS?

I'm encountering some strange behavior with Pipe in Python multiprocessing on my Mac (Intel, Monterey). I've tried the following code in 3.7 and 3.11 and in both cases, not all the tasks are executed.
def _mp_job(nth, child):
    print("Nth is", nth)

if __name__ == "__main__":
    from multiprocessing import Pool, Pipe, set_start_method, log_to_stderr
    import logging, time

    set_start_method("spawn")
    logger = log_to_stderr()
    logger.setLevel(logging.DEBUG)

    with Pool(processes=10) as mp_pool:
        jobs = []
        for i in range(20):
            parent, child = Pipe()
            # child = None
            r = mp_pool.apply_async(_mp_job, args=(i, child))
            jobs.append(r)

        while jobs:
            new_jobs = []
            for job in jobs:
                if not job.ready():
                    new_jobs.append(job)
            jobs = new_jobs
            print("%d jobs remaining" % len(jobs))
            time.sleep(1)
I know exactly what's going on, but I don't know why.
[EDITED: my explanation for what was happening was quite unclear on my first pass, as reflected in the comments, so I've cleaned it up. Thanks for your patience.]
If I run this code on my macOS Monterey machine, it will loop forever, reporting that some number of jobs are remaining. The logging information reveals that the child processes are failing; you'll see a number of lines like this:
[DEBUG/SpawnPoolWorker-10] worker got EOFError or OSError -- exiting
What's happening is that when the child worker dequeues a job and tries to unpickle the argument list, it encounters ConnectionRefusedError when unpickling the child connection side of the Pipe in the arguments (I know these details not because of the output of the function above, but because I inserted a traceback printout at the point in the Python multiprocessing library where the worker reports encountering the OSError). At that point the worker fails, having removed the job from the work queue but not having completed it. That's why I have # child = None in there; if I uncomment that, everything works fine.
My first suspicion is that this is a bug in Python on macOS (I haven't tested this on other platforms, but it makes no sense to me that something this basic would have been missed unless it's a platform-specific error). I don't understand why the child process would get ConnectionRefusedError, since the Pipe establishes a socket pair and you shouldn't be able to get ConnectionRefusedError in that case, as far as I understand.
This seems more likely to happen the more processes I have in the pool. If I have 2, it seems to work reliably. But 4 or more seem to cause a problem; I have a six-core computer, so I don't think that's part of what's happening.
Does anyone have any insight into this? Am I doing something obviously wrong?

Threading in 'while True' loop

First of all, I've only started working with Python a month ago, so I don't have deep knowledge about anything.
In a project I'm trying to collect the results of multiple (simultaneous) functions in a database, infinitely until I tell it to stop.
In an earlier attempt, I successfully used multiprocessing to do what I need, but since I now need to collect all the results of those functions for a database within the main, I switched to threading instead.
Basically, what I'm trying to do is:
collect1 = Thread(target=collect_data1)
collect2 = Thread(target=collect_data2)
send1 = Thread(target=send_data1)
send2 = Thread(target=send_data2)

collect = (collect1, collect2)
send = (send1, send2)

while True:
    try:
        for thread in collect:
            thread.start()
        for thread in collect:
            thread.join()
        for thread in send:
            thread.start()
        for thread in send:
            thread.join()
    except KeyboardInterrupt:
        break
Now, obviously I can't just restart the threads. Nor explicitly kill them. The functions within the threads can theoretically be stopped at any point, so terminate() from multiprocessing was fine.
I was wondering whether something like the following could work (at least PyCharm is fine with it, so it seems to run), or whether it creates a memory leak (which I assume it does), because the threads are never properly closed or deleted, at least as far as I can tell from my research.
Again, I'm new to Python, so I don't know anything about this aspect.
Code:
while True:
    try:
        collect1 = Thread(target=collect_data1)
        collect2 = Thread(target=collect_data2)
        send1 = Thread(target=send_data1)
        send2 = Thread(target=send_data2)

        collect = (collect1, collect2)
        send = (send1, send2)

        for thread in collect:
            thread.start()
        for thread in collect:
            thread.join()
        for thread in send:
            thread.start()
        for thread in send:
            thread.join()
    except KeyboardInterrupt:
        break
I feel like this approach seems too good to be true, especially since I've never come across a similar solution during my research.
Anyways, any input is appreciated.
Have a good day.
You explicitly await the threads' termination in the loops containing thread.join(), so no memory leak takes place, and your code is fine. If you're worried that you don't dispose of the thread objects in any way after they terminate, that is done automatically as soon as they aren't referenced anymore, so this shouldn't be a concern either.
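To see that concretely (my illustration, not the answerer's): once a thread has finished and you drop your last reference to it, the Thread object is reclaimed like any other object.

import gc
import threading
import weakref

t = threading.Thread(target=lambda: None)
t.start()
t.join()                 # thread has finished; the module no longer tracks it

r = weakref.ref(t)
del t                    # drop our last reference
gc.collect()
print(r())               # None - the finished Thread object has been reclaimed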

Python script is hanging AFTER multithreading

I know there are a few questions and answers related to hanging threads in Python, but my situation is slightly different as the script is hanging AFTER all the threads have been completed. The threading script is below, but obviously the first 2 functions are simplified massively.
When I run the script shown, it works. When I use my real functions, the script hangs AFTER THE LAST LINE. So, all the scenarios are processed (and a message printed to confirm), logStudyData() then collates all the results and writes to a csv. "Script Complete" is printed. And THEN it hangs.
The script with threading functionality removed runs fine.
I have tried enclosing the main script in try...except but no exception gets logged. If I use a debugger with a breakpoint on the final print and then step it forward, it hangs.
I know there is not much to go on here, but short of including the whole 1500-line script, I don't know what else to do. Any suggestions welcome!
def runScenario(scenario):
    # Do a bunch of stuff
    with lock:
        # access global variables
        pass
    pass

def logStudyData():
    # Combine results from all scenarios into a df and write to csv
    pass

def worker():
    global q
    while True:
        next_scenario = q.get()
        if next_scenario is None:
            break
        runScenario(next_scenario)
        print(next_scenario, " is complete")
        q.task_done()

import threading
from queue import Queue

global q, lock
q = Queue()
threads = []
scenario_list = ['s1','s2','s3','s4','s5','s6','s7','s8','s9','s10','s11','s12']

num_worker_threads = 6
lock = threading.Lock()

for i in range(num_worker_threads):
    print("Thread number ", i)
    this_thread = threading.Thread(target=worker)
    this_thread.start()
    threads.append(this_thread)

for scenario_name in scenario_list:
    q.put(scenario_name)

q.join()
print("q.join completed")
logStudyData()
print("script complete")
As the docs for Queue.get say:
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).
In other words, there is no way get can ever return None, except by you calling q.put(None) on the main thread, which you don't do.
Notice that the example directly below those docs does this:
for i in range(num_worker_threads):
    q.put(None)
for t in threads:
    t.join()
The second loop isn't technically necessary; you usually get away with not doing it.
But the first one is absolutely necessary. You need to either do this, or come up with some other mechanism to tell your workers to quit. Without that, your main thread just tries to exit, which means it tries to join every worker, but those workers are all blocked forever on a get that will never happen, so your program hangs forever.
Building a thread pool may not be rocket science (if only because rocket scientists tend to need their calculations to be deterministic and hard real-time…), but it's not trivial, either, and there are plenty of things you can get wrong. You may want to consider using one of the two already-built threadpools in the Python standard library, concurrent.futures.ThreadPoolExecutor or multiprocessing.dummy.Pool. This would reduce your entire program to:
import concurrent.futures

def work(scenario):
    runScenario(scenario)
    print(scenario, " is complete")

scenario_list = ['s1','s2','s3','s4','s5','s6','s7','s8','s9','s10','s11','s12']

with concurrent.futures.ThreadPoolExecutor(max_workers=6) as x:
    results = list(x.map(work, scenario_list))
print("q.join completed")
logStudyData()
print("script complete")
Obviously you'll still need a lock around any mutable variables you change inside runScenario—although if you're only using a mutable variable there because you couldn't figure out how to return values to the main thread, that's trivial with an Executor: just return the values from work, and then you can use them like this:
for result in x.map(work, scenario_list):
    do_something(result)

separate threads in pygtk application

I'm having some problems threading my pyGTK application. I give the thread some time to complete its task; if there is a problem, I just continue anyway but warn the user. However, once I continue, this thread stops until gtk.main_quit is called. This is confusing me.
The relevant code:
class MTP_Connection(threading.Thread):
    def __init__(self, HOME_DIR, username):
        self.filename = HOME_DIR + "mtp-dump_" + username
        threading.Thread.__init__(self)

    def run(self):
        # test run
        for i in range(1, 10):
            time.sleep(1)
            print i

..........................

start_time = time.time()
conn = MTP_Connection(self.HOME_DIR, self.username)
conn.start()
progress_bar = ProgressBar(self.tree.get_widget("progressbar"),
                           update_speed=100, pulse_mode=True)
while conn.isAlive():
    while gtk.events_pending():
        gtk.main_iteration()
    if time.time() - start_time > 5:
        self.write_info("problems closing connection.")
        break
# after this the program continues normally, but my conn thread stops
Firstly, don't subclass threading.Thread, use Thread(target=callable).start().
Secondly, the probable cause of your apparent block is that gtk.main_iteration takes a parameter block, which defaults to True, so your call to gtk.main_iteration will actually block when there are no events to iterate on. This can be solved with:
gtk.main_iteration(block=False)
However, there is no real explanation why you would use this hacked up loop rather than the actual gtk main loop. If you are already running this inside a main loop, then I would suggest that you are doing the wrong thing. I can expand on your options if you give us a bit more detail and/or the complete example.
Thirdly, and this only came up later: Always always always always make sure you have called gtk.gdk.threads_init in any pygtk application with threads. GTK+ has different code paths when running threaded, and it needs to know to use these.
I wrote a small article about pygtk and threads that offers you a small abstraction so you never have to worry about these things. That post also includes a progress bar example.
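For what it's worth, here is a minimal sketch of the usual pattern (my code, assuming PyGTK 2 with gobject available, not taken from that article): call gtk.gdk.threads_init() up front, run the real gtk.main() loop, do the slow work in a plain Thread, and hand any widget updates back to the main loop with gobject.idle_add:

import threading
import time

import gobject
import gtk

gtk.gdk.threads_init()  # must be called before any threads touch GTK

def background_work(label):
    time.sleep(5)  # stand-in for the slow MTP work
    # hand the widget update back to the GTK main loop
    gobject.idle_add(label.set_text, "done")

window = gtk.Window()
label = gtk.Label("working...")
window.add(label)
window.connect("destroy", gtk.main_quit)
window.show_all()

threading.Thread(target=background_work, args=(label,)).start()

gtk.gdk.threads_enter()
gtk.main()
gtk.gdk.threads_leave()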
