I am running two Python processes (clock and web) on Heroku and locally with Foreman.
When I run locally with Foreman:
1. Both processes log to terminal
2. Then the clock process stops outputting (even though it's still running). The output doesn't halt at a consistent place in the code, but usually somewhere between 3 and 5 iterations.
3. The web process continues to output correctly.
Oddly enough, when I run the same code on Heroku, the logs output just fine.
We have PYTHONUNBUFFERED set to true locally (with .env) and on Heroku. Has anybody come across this issue? Is there a solution to it? Thanks.
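To make the setup concrete, here is a hypothetical, simplified sketch of the kind of clock process involved (names are placeholders, not our actual code); it flushes stdout explicitly on every print so the output does not depend on PYTHONUNBUFFERED or the parent's pipe buffering:

# clock.py -- hypothetical sketch of a clock process that flushes every line
import time

def tick(i):
    # placeholder for the real scheduled work
    print("clock iteration %d" % i, flush=True)

if __name__ == "__main__":
    i = 0
    while True:
        tick(i)
        i += 1
        time.sleep(1)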
I couldn't fix this problem with Foreman, but I did come up with a solution: there is a Python port of Foreman called honcho. I've switched to honcho and it fixed my logging/freezing issue.
I have a python3 application that I want to run continuously on an Ubuntu server. I'm managing it with pm2, but I'm running into a very strange error.
I am starting the pm2 process using:
pm2 start --name python_app --watch --interpreter /usr/bin/python3.8 python_app.py
When I first run this, it doesn't start properly: pm2 continually stops and restarts the process, multiple times per second, and keeps doing so until stopped. The pm2 error logs make very little sense: they contain a lot of very long tracebacks through Python libraries (almost always Flask), but without any actual errors attached to them, other than KeyboardInterrupts, which I am not triggering.
After manually stopping and starting the app (using the commands below), everything runs as expected (and then continues to work fine for subsequent restarts).
pm2 stop python_app
pm2 start python_app
I have repeated this process (deleting and remaking the pm2 process to reproduce the error, then stopping and restarting to make it work) multiple times, with the same result every time. I wonder whether this is a symptom of something else being wrong. Alternatively, is there an equivalent pm2 command to 'set up' a new process without launching it, so it can be started separately?
I tried increasing the startup memory that pm2 could use, but it just uses 100% of whatever I give it, and still experiences the same restarting issue (just much faster, haha).
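One detail that may explain the KeyboardInterrupts (as far as I understand pm2's behaviour): pm2 stops a process by sending SIGINT first, and Python surfaces SIGINT as a KeyboardInterrupt, so those tracebacks are probably just pm2 killing the app mid-request during its restart loop rather than Flask itself failing. A minimal, hypothetical sketch of catching the signal for a cleaner shutdown (not the actual app):

# python_app.py -- hypothetical sketch: handle the SIGINT that pm2 sends on
# stop/restart so shutdown is logged cleanly instead of as a raw KeyboardInterrupt
import signal
import sys

def handle_sigint(signum, frame):
    print("received SIGINT, shutting down cleanly", flush=True)
    sys.exit(0)

signal.signal(signal.SIGINT, handle_sigint)

# ... start the Flask app (or other long-running work) here ...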
ah, I think I know what the issue was: I noticed that watching appeared as 'disabled' when I listed the processes, even though I had started it with the --watch flag.
When I removed --watch from the start command, it worked fine: perhaps it's not meant to be used with Python?
Would love to hear from anyone who knows more, but problem solved.
I have a situation where a test website works if I run it under Apache with wsgi (now uninstalled), but running the same project with runserver 0.0.0.0:8080 gives ERR_CONNECTION_REFUSED from both local and remote clients (even with the apache2 service stopped).
Edit: I don't think it's Apache; I've reproduced the problem on a clean server with no Apache installed, so unless Apache somehow modified something under source control, it's not that.
My knowledge of web internals is hazy and I don't even know where to start troubleshooting this: the dev server runs (runserver prints as expected and gives no errors) but never receives a request, and I have nothing in iptables.
Sorry to anyone who read this; it would probably have been impossible to solve given the information I supplied.
What had actually happened was that I'd been modifying my wsgi.py script to make it happy inside the Apache server, and I'd added a line saying "os.system('/bin/bash --rcfile )" to try to make sure that, when running inside Apache, the virtualenv got activated.
This line must have been causing some strange problem. Another symptom was that, when I ran runserver, it wasn't crashing; the Python process was backgrounding itself, whereas normally it runs inside that console window.
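For anyone with the same goal: if the virtualenv needs to be active inside Apache, a more conventional approach than shelling out to bash is to run the virtualenv's activate_this.py from wsgi.py (this assumes the env was created with the virtualenv tool, which ships that file; the path and module name below are placeholders). Pointing mod_wsgi's WSGIDaemonProcess python-home at the virtualenv is another option.

# wsgi.py -- hypothetical sketch: activate the virtualenv without spawning a shell
activate_this = "/path/to/venv/bin/activate_this.py"   # placeholder path
with open(activate_this) as f:
    exec(f.read(), {"__file__": activate_this})

import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # placeholder module

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()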
Thanks everyone who asked questions helping me debug!
I am working on a basic crawler which crawls 5 websites concurrently using threads.
For each site it creates a new thread. When I run the program from the shell, the output log indicates that all 5 threads run as expected.
But when I run the program under supervisord, the log indicates that only 2 threads are run every time! It shows that all 5 threads have started, but only the same two of them keep executing and the rest get stuck.
I cannot understand why this inconsistency exists between running it from a shell and running it from supervisor. Is there something I am not taking into account?
Here is the code which creates the threads:
for sid in entries:
    url = entries[sid]
    threading.Thread(target=self.crawl_loop, args=(sid, url)).start()
UPDATES:
As suggested by tdelaney in the comments, I changed the working directory in the supervisord configuration and now all the threads run as expected. I still don't understand why setting the working directory to the crawler file's directory rectifies the issue, though. Perhaps someone who knows how supervisor manages processes can explain?
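A plausible explanation (not certain): supervisord's default working directory is not the crawler's directory, so any relative paths the program opens (entry lists, config, log files) resolve somewhere unexpected and the affected threads block or die quietly. A hypothetical sketch of making the code independent of the working directory, with a placeholder data file name:

# hypothetical sketch: resolve data files relative to the script itself,
# so behaviour is the same under a shell and under supervisord
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def load_entries(filename="entries.txt"):   # placeholder file of "sid url" lines
    path = os.path.join(BASE_DIR, filename)
    with open(path) as f:
        return dict(line.strip().split(None, 1) for line in f if line.strip())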
AFAIK Python threads can't run Python code in parallel because of the Global Interpreter Lock: only one thread executes Python bytecode at a time, so threading mostly simulates simultaneous execution and your code will still use only 1 core.
https://wiki.python.org/moin/GlobalInterpreterLock
https://en.wikibooks.org/wiki/Python_Programming/Threading
Therefore it is possible that the extra threads never really get to run. You should use multiprocessing, I think?
https://docs.python.org/2/library/multiprocessing.html
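A minimal, self-contained sketch of that suggestion (the crawl function and entries here are placeholders standing in for the poster's crawl_loop and site list):

# hypothetical sketch: one process per site instead of one thread per site
from multiprocessing import Process

def crawl_loop(sid, url):
    # placeholder for the real crawling logic
    print("crawling", sid, url)

if __name__ == "__main__":
    entries = {"site1": "http://example.com"}   # placeholder entries
    processes = [Process(target=crawl_loop, args=(sid, url)) for sid, url in entries.items()]
    for p in processes:
        p.start()
    for p in processes:
        p.join()

(For a crawler that mostly waits on the network, though, threads may be enough even with the GIL.)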
I was having the same silent problem, but then realised that I was setting daemon to True on my threads, which was causing the problems under supervisor.
https://docs.python.org/2/library/threading.html#threading.Thread.daemon
So the answer is: daemon = True when running the script yourself, False when running under supervisor.
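For context: daemon threads are killed abruptly as soon as the last non-daemon thread (usually the main thread) exits, which can look like threads silently getting stuck. Setting the flag explicitly, mirroring the loop from the question above:

# non-daemon threads keep the process alive until every crawl_loop returns
for sid in entries:
    t = threading.Thread(target=self.crawl_loop, args=(sid, entries[sid]))
    t.daemon = False   # the default; True ties the threads' lifetime to the main thread
    t.start()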
Just to say, I was experiencing a very similar problem.
In my case, I was working on a low powered machine (RaspberryPi), with threads that were dedicated to listening to a serial device (an Arduino nano on /dev/ttyUSB0). Code worked perfectly on the command line - but the serial reading thread stalled under supervisor.
After a bit of hacking around (and trying all of the options here), I tried running python in unbuffered mode and managed to solve the issue! I got the idea from https://stackoverflow.com/a/17961520/741316.
In essence, I simply invoked python with the -u flag.
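A possible in-code alternative on Python 3.7+ (if changing how the interpreter is invoked is awkward in the supervisor config) is to reconfigure stdout to flush on every line:

# in-code equivalent of running with -u, for line-based output (Python 3.7+)
import sys

sys.stdout.reconfigure(line_buffering=True)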
I'm running a local Django development server inside a virtualenv, and for a couple of days it has been behaving in a weird way: sometimes I don't see any logs in the console, and sometimes I do.
A couple of times I've tried to quit the process and restart it and got the 'port already in use' error, so I inspected the running processes and there was still an instance of Django running.
Other SO answers said that this is due to the autoreload feature, but then why do I sometimes have no problem at all and sometimes do?
Anyway, out of curiosity I ran ps aux | grep python, and the result is always TWO running processes, one from the plain python and one from my activated virtualenv's python:
/Users/me/.virtualenvs/myvirtualenv/bin/python manage.py runserver
python manage.py runserver
Is this supposed to be normal?
I've solved the mystery: Django was trying to send emails but couldn't because of improper configuration, so it was hanging there forever trying to send them.
Most probably (I'm not sure here) Django calls an OS function or spawns a subprocess to do so. The point is that the main process was forking and handing the job off to a subprocess or thread, or something like that; I'm no expert on this.
It turns out that when your Python process has forked and you kill the parent, the children can apparently keep on living after it.
Correct me if I'm wrong.
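In case it helps anyone hitting the same hang: during development you can side-step the misconfigured SMTP connection entirely by pointing Django at its console email backend (a sketch for the project's settings.py; the backend path is Django's real one):

# settings.py (development) -- print emails to the console instead of opening
# an SMTP connection that can hang when the mail server is misconfigured
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"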
I'm using GitLab CI to automatically build a C++ project and run unit tests written in Python (the test suite starts the daemon and then communicates with it via a network/socket-based interface).
The problem I'm finding is that when the tests are run by the GitLab CI runner, they fail for various reasons (one test stalls indefinitely on a particular network operation; another never receives a packet that should have been sent).
BUT: when I open up SSH and run the tests manually, they all pass (they also pass on all of our developers' machines [Linux/Windows/OSX]).
At this point I've been trying to replicate enough of the build/test conditions that gitlab-ci is using but I don't really know any exact details, and none of my experiments have reproduced the problem.
I'd really appreciate help with either of the following:
Guidance on running the tests manually outside of gitlab-ci, but replicating its environment so I can get the same errors/failures and debug the daemon and/or tests, OR
Insight into why the tests would fail when run by the GitLab CI runner
Sidetrack 1:
For some reason, not all the (mostly debugging) output that would normally be sent to the shell shows up in the gitlab-ci output.
Sidetrack 2:
I also played around with setting this up in Jenkins, but there one of the tests fails even to connect to the daemon, while the rest work fine.
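A possible debugging aid, sketched below with placeholder host/port rather than the project's real client: give the test sockets an explicit timeout, so the 'stalls indefinitely' case fails fast with a socket.timeout traceback in the CI log instead of hanging the job.

# hypothetical sketch: fail fast instead of hanging forever if the daemon
# never answers under the CI runner; host/port/payload are placeholders
import socket

def query_daemon(host="127.0.0.1", port=9000, payload=b"ping\n", timeout=10.0):
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)    # also bound recv/send, not just the connect
        sock.sendall(payload)
        return sock.recv(4096)      # raises socket.timeout instead of stalling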
- I usually replicate the problem by using a Docker container just for the runner and running the tests inside it; I don't know if you have it set up like that. =(
- Normally the test doesn't actually fail: if you log into the container you will see that it actually does everything, but it doesn't report back to GitLab CI. Don't freak out; it does its job, it simply doesn't say so.
PS: you can see whether it's actually running by checking the processes on the machine.
Example:
I'm running GitLab CI with Java and Docker:
GitLab CI starts doing its thing, then hangs at a download; meanwhile I log into the container and check that it is actually working and manages to upload my compiled Docker image.