I have a multi-threaded Bottle/Python web application that only occasionally hangs. The source can be downloaded here: https://github.com/whbrewer/spc
I've been trying to figure out what is causing the hang, but since it happens only occasionally and I have not yet found a way to reproduce it, I spend some time investigating whenever it does hang. I have tried attaching a debugger to the running process using:
gdb /usr/bin/python -p 32489
but this only gives me a libc-level backtrace, such as:
0x00007fd2adbdb0fc in __libc_waitpid (pid=32490, stat_loc=stat_loc@entry=0x7ffd8aac545c,
    options=0) at ../sysdeps/unix/sysv/linux/waitpid.c:31
31        return INLINE_SYSCALL (wait4, 4, pid, stat_loc, options, NULL);
From this output, I found a similarly asked question that provided some useful advice, but ultimately did not help solve the problem. However, from reading another related, informative post, I discovered that the Python debugger pdb can be used for this. So, my question is: how would I go about hooking pdb into a Python/Bottle application so that when it occasionally hangs, I can get a Python traceback?
One thing I have noted is that when the hang happens, the server thread seems to turn into a zombie, so ps shows a defunct python process while the other threads continue to run. I have tried different web servers with Bottle, such as CherryPy and Rocket, but both gave the same result.
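Based on that post, one approach I'm considering is installing a signal handler at startup that dumps every thread's Python stack, so that when the process hangs I can run kill -USR1 32489 against it. A minimal sketch (the handler name is mine):

import signal
import sys
import traceback

def dump_stacks(signum, frame):
    # sys._current_frames() maps each thread id to its topmost frame
    for thread_id, stack in sys._current_frames().items():
        print("--- thread %s ---" % thread_id)
        traceback.print_stack(stack)

signal.signal(signal.SIGUSR1, dump_stacks)

On Python 3 the built-in faulthandler module can do the same thing with faulthandler.register(signal.SIGUSR1).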
Related
No, I'm not talking about runserver listening for requests.
Whenever I run makemigrations, migrate, or scripts I wrote using django-extensions' runscript command, I need to stop the execution of the program before I can type in another. This was not the case before I restarted my PC this morning.
I'm building a small QR code ticketing app, and it was working up until this morning. I have since fixed a bug regarding OpenCV and the app is functional again, but this command line issue is bothering me. I have to make the /admin and the scripts available to a few of my colleagues tomorrow, and I'm afraid we won't know when a running script is actually done, since it doesn't return to the prompt or allow another command. Having a script's execution terminated early because of this would be catastrophic.
Whenever I run a command, a blank line that does not accept input appears.
There are way too many possible reasons why this could have happened, and it's impossible for anyone else to reproduce your scenario based only on what you described. This isn't a widespread issue, so it must be something particular to your setup.
Run a debugger, like pudb or pdb, to find out where the script is stuck.
Or add a try-except block to your manage.py that catches KeyboardInterrupt and uses the traceback library to narrow down where it's stuck.
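For example, something along these lines; a rough sketch assuming the default manage.py layout, where main() is the entry point:

import sys
import traceback

if __name__ == "__main__":
    try:
        main()  # the usual manage.py entry point
    except KeyboardInterrupt:
        # Ctrl+C now prints where execution was stuck before exiting
        traceback.print_exc()
        sys.exit(1)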
I have a Python app that uses websockets and gevent. It's quite a big application, at least in my experience.
I've encountered a problem with it: when I run it on Windows (with 'pipenv run python myapp'), it can suddenly, but very rarely, freeze and stop accepting messages. If I then press CTRL+C in cmd, it starts reacting to all the messages that were issued while it was hanging.
I understand that it might be blocking somewhere, but I don't know how to debug these types of errors, because I don't see anything in the code that could cause it. And it happens very rarely, at completely different stages of the application's runtime.
What is the best way to debug it and actually see what goes on behind the scenes? My logs show no indication of a problem.
Could it be an error with cmd and not my app?
The answer may be as simple as adding timeouts to some of your spawns or gevent calls. Gevent is still single-threaded, so if an IO-bound resource hangs, it can't context-switch until the data has been received. Setting a timeout might help bypass these issues and move your app forward.
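For example (a sketch; fetch and url are stand-ins for whatever IO call might be hanging):

import gevent

try:
    # give the call 10 seconds; gevent.Timeout is raised if it hangs
    result = gevent.with_timeout(10, fetch, url)
except gevent.Timeout:
    result = None  # log it and keep the loop moving instead of freezing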
I have an interactive Windows console application that I communicate with through the subprocess module in my program. So far, the console commands (of the application) that I called from my script have not caused any problems, because they gave instant results (pipe outputs).
However, there are other functionalities I need to use that keep printing output to the console until they receive a "stop" command. I need to read those outputs, but my application is a GUI and I cannot let it hang, and I don't want to get all the output only after the stop command is issued; I need it in real time, as it is produced. I did some research, and it appears this is a difficult problem to overcome (especially on Windows), but there are no recent questions about this topic.
I did find this:
Non-blocking read on a subprocess.PIPE in python, which looks like what I am looking for.
But the answer seems really low-level for a Python solution, and it is old. Are there any better, newer solutions to this problem?
Thanks in advance.
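For reference, the pattern in that answer boils down to a background reader thread plus a queue, which still works on Windows; a minimal sketch (app.exe is a placeholder for my application):

import subprocess
import threading
from queue import Queue, Empty

proc = subprocess.Popen(["app.exe"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
lines = Queue()

def pump():
    # the blocking reads happen here, off the GUI thread
    for line in proc.stdout:
        lines.put(line)

threading.Thread(target=pump, daemon=True).start()

# in the GUI's periodic timer/idle callback:
try:
    print(lines.get_nowait(), end="")
except Empty:
    pass  # no new output yet; poll again on the next tick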
I am working on a basic crawler which crawls 5 websites concurrently using threads.
For each site it creates a new thread. When I run the program from the shell, the output log indicates that all 5 threads run as expected.
But when I run this program as a supervisord program, the log indicates that only 2 threads are run every time! The log indicates that all 5 threads have started, but only the same two of them are executed and the rest get stuck.
I cannot understand why this inconsistency happens between running it from a shell and running it under supervisor. Is there something I am not taking into account?
Here is the code which creates the threads:
for sid in entries:
    url = entries[sid]
    threading.Thread(target=self.crawl_loop, args=(sid, url)).start()
UPDATES:
As suggested by tdelaney in the comments, I changed the working directory in the supervisord configuration and now all the threads run as expected. Though I still don't understand why setting the working directory to the crawler file's directory rectifies the issue. Perhaps someone who knows how supervisor manages processes can explain?
AFAIK Python can't do threads properly because the interpreter is not thread-safe: the Global Interpreter Lock lets only one thread execute Python bytecode at a time, so threading only gives you a facility to simulate a simultaneous run of the code. Your code will still use 1 core only.
https://wiki.python.org/moin/GlobalInterpreterLock
https://en.wikibooks.org/wiki/Python_Programming/Threading
Therefore it is possible that it does not spawn more processes/threads.
You should use multiprocessing, I think?
https://docs.python.org/2/library/multiprocessing.html
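A sketch of what that swap could look like for the loop above (note that on Windows the target and its arguments must be picklable):

from multiprocessing import Process

workers = []
for sid in entries:
    # one OS process per site, each with its own interpreter and GIL
    p = Process(target=self.crawl_loop, args=(sid, entries[sid]))
    p.start()
    workers.append(p)
for p in workers:
    p.join()  # wait for all crawlers to finish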
I was having the same silent problem, but then realised that I was setting daemon to True, which was causing problems under supervisor.
https://docs.python.org/2/library/threading.html#threading.Thread.daemon
So the answer is: daemon = True works when running the script yourself; use False when running under supervisor.
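Applied to the loop above, that means something like this sketch:

t = threading.Thread(target=self.crawl_loop, args=(sid, url))
t.daemon = False  # True worked from a shell but silently stalled under supervisor
t.start()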
Just to say, I was experiencing a very similar problem.
In my case, I was working on a low-powered machine (a Raspberry Pi), with threads dedicated to listening to a serial device (an Arduino Nano on /dev/ttyUSB0). The code worked perfectly from the command line, but the serial-reading thread stalled under supervisor.
After a bit of hacking around (and trying all of the options here), I tried running python in unbuffered mode and managed to solve the issue! I got the idea from https://stackoverflow.com/a/17961520/741316.
In essence, I simply invoked python with the -u flag.
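In supervisord terms that means a config fragment along these lines (the program name and script path here are made up); setting PYTHONUNBUFFERED=1 in the environment achieves the same thing:

[program:serialreader]
command=python -u /home/pi/serialreader.py  ; -u disables stdout/stderr buffering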
I'm running a local Django development server together with virtualenv, and for a couple of days it has been behaving in a weird way: sometimes I see logs in the console, sometimes I don't.
A couple of times I've tried to quit the process and restart it, and I've gotten the "port already in use" error, so I inspected the running processes and there was still an instance of Django running.
Other SO answers said that this is due to the autoreload feature, but then why do I sometimes have no problem at all and sometimes I do?
Anyway, out of curiosity I ran ps aux | grep python, and the result is always TWO running processes, one from python and one from my activated virtualenv's python:
/Users/me/.virtualenvs/myvirtualenv/bin/python manage.py runserver
python manage.py runserver
Is this supposed to be normal?
I've solved the mystery: Django was trying to send emails but could not because of improper configuration, so it was hanging there forever trying to send them.
Most probably (I'm not sure here) Django calls an OS function or a subprocess to do so. The point is that the main process was forking itself and handing the job to a subprocess or thread, or whatever; I'm no expert in this.
It turns out that when your Python process is forked and you kill the parent, the children can apparently keep on living after it dies.
Correct me if I'm wrong.
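For anyone hitting the same thing, here are two real Django settings that would have avoided the hang in my development setup (the values are examples):

# settings.py
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"  # print mail instead of sending
EMAIL_TIMEOUT = 10  # seconds; fail fast instead of blocking forever on SMTP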