It used to work fine both on Heroku and locally. Now it only works locally and fails after deploying to Heroku. This is the log from Heroku:
2021-04-22T17:41:34.000000+00:00 app[api]: Build succeeded
2021-04-22T17:41:37.853181+00:00 heroku[worker.1]: Starting process with command `python3.9.4 dbot.py`
2021-04-22T17:41:38.485331+00:00 heroku[worker.1]: State changed from starting to up
2021-04-22T17:41:39.356992+00:00 app[worker.1]: bash: python3.9.4: command not found
2021-04-22T17:41:39.416346+00:00 heroku[worker.1]: Process exited with status 127
2021-04-22T17:41:39.485205+00:00 heroku[worker.1]: State changed from up to crashed
I have my requirements.txt file with these dependencies:
git+https://github.com/Rapptz/discord.py
youtube_dl==2020.03.08
pynacl==1.3.0
colorlog==4.1.0
And I have this in my Procfile:
worker: python3.9.4 dbot.py
To my knowledge, it has something to do with the Python version, which I recently updated to 3.9.4.
Instead of putting the Python version into the Procfile, simply put:
worker: python dbot.py
If you want to specify a Python version for your bot, create a runtime.txt file with the following format:
python-3.9.4
My discord.py-rewrite bot is currently hosted on Heroku. It had been fine for the past few weeks until Heroku started going haywire just a few hours ago. Why is this happening?
This is the full error
2020-08-26T22:31:28.999683+00:00 heroku[worker.1]: Relocating dyno to a new server
2020-08-26T22:31:29.006086+00:00 heroku[worker.1]: State changed from up to down
2020-08-26T22:31:29.008558+00:00 heroku[worker.1]: State changed from down to starting
2020-08-26T22:31:42.288485+00:00 heroku[worker.1]: Starting process with command `python skybot.py.py`
2020-08-26T22:31:42.917236+00:00 heroku[worker.1]: State changed from starting to up
2020-08-26T22:31:51.626938+00:00 app[worker.1]: Signed In
2020-08-26T22:32:16.991403+00:00 heroku[worker.1]: Stopping all processes with SIGTERM
2020-08-26T22:32:20.126043+00:00 heroku[worker.1]: Process exited with status 0
2020-08-27T03:16:41.956314+00:00 heroku[worker.1]: Stopping all processes with SIGTERM
2020-08-27T03:16:42.134827+00:00 heroku[worker.1]: Process exited with status 0
Have I run out of hours already? Or is this just some common quirk of the Heroku deployment system? I never made any changes to the script, by the way; this happened out of the blue.
I decided to use Python's subprocess module to invoke a particular command-line program. It was working before, but after starting my EC2 instance (from being shut down), this is showing up on the command line:
[INFO] Handling signal: ttou
Here are the programs I'm using: nginx and gunicorn for my web server, Flask for my API, and Python 2.7.
Here's the "now problematic" line of code that used to work:
query = subprocess.check_output(couch_cmd, shell=True)
This is the value of couch_cmd:
couch_cmd = "/opt/couchbase/bin/cbq --script=\"select * from `user` where email='" + email + "'\""
This used to work, but after I stopped and started my EC2 instance, this keeps appearing in the logs whenever I call my API:
[2017-10-18 02:13:39 +0000] [3324] [INFO] Handling signal: ttou
Note: I've also executed the command above in the Python shell, and it works! I've already changed the appropriate nginx config to point to my DNS and used proxy_pass. I think my nginx config is fine because the request gets routed to my web server. I suspect something in gunicorn messes up the invocation of subprocess.check_output(). Please help!
I've read this article and realized my mistake:
https://www.computerhope.com/unix/signals.htm
The TTIN and TTOU signals are sent to a process when it attempts to
read or write respectively from the tty while in the background.
Typically, this signal can be received only by processes under job
control; daemons do not have controlling terminals and should never
receive this signal.
I realized that I was starting gunicorn in the background. This must have caused gunicorn to receive TTOU when my code tried to log the result of check_output() to the terminal, which made my request time out. I was using the output in my response payload.
How I started gunicorn:
gunicorn app:app -b 127.0.0.1:8000 &
Solution:
gunicorn app:app -b 127.0.0.1:8000
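Separately from the TTOU fix, the check_output call in the question builds a shell string by concatenating the email into the query, which is fragile around quoting and an injection risk. A minimal sketch that passes an argument list instead, so shell=True isn't needed (the cbq path is taken from the question; the helper name is mine, and the email is still interpolated into the N1QL text, so query-level escaping remains the caller's job):

```python
import subprocess

def build_cbq_cmd(email):
    # Build the argv list directly: no shell parsing, no manual
    # escaping of the outer command string.
    script = "select * from `user` where email='{}'".format(email)
    return ["/opt/couchbase/bin/cbq", "--script=" + script]

# Usage (unchanged apart from dropping shell=True):
# query = subprocess.check_output(build_cbq_cmd(email))
```

Because the command is a list, the shell never re-interprets the quotes inside the query, which is one less layer that can break after an environment change.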
I copied the code from here to run my Python script as a daemon.
For extra uptime, I thought it would be a better idea to use supervisor to keep this daemon running. I did this:
python_deamon.conf
[program:python_deamon]
directory=/usr/local/python_deamon/
command=/usr/local/python_venv/bin/python daemon_runnner.py start
stderr_logfile=/var/log/gunicorn.log
stdout_logfile=/var/log/gunicorn.log
autostart=true
autorestart=true
The problem is that although supervisor successfully starts the python_daemon, it keeps retrying:
2015-09-23 16:10:45,592 CRIT Supervisor running as root (no user in config file)
2015-09-23 16:10:45,592 WARN Included extra file "/etc/supervisor/conf.d/python_daemon.conf" during parsing
2015-09-23 16:10:45,592 INFO RPC interface 'supervisor' initialized
2015-09-23 16:10:45,592 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2015-09-23 16:10:45,592 INFO supervisord started with pid 13880
2015-09-23 16:10:46,595 INFO spawned: 'python_deamon' with pid 17884
2015-09-23 16:10:46,611 INFO exited: python_deamon (exit status 1; not expected)
2015-09-23 16:10:47,614 INFO spawned: 'python_deamon' with pid 17885
2015-09-23 16:10:47,630 INFO exited: python_deamon (exit status 1; not expected)
2015-09-23 16:10:49,635 INFO spawned: 'python_deamon' with pid 17888
2015-09-23 16:10:49,656 INFO exited: python_deamon (exit status 1; not expected)
2015-09-23 16:10:52,662 INFO spawned: 'python_deamon' with pid 17891
2015-09-23 16:10:52,680 INFO exited: python_deamon (exit status 1; not expected)
2015-09-23 16:10:53,681 INFO gave up: python_deamon entered FATAL state, too many start retries too quickly
Just for the record: after overriding the run() method, I never return anything.
Is it possible to do what I'm trying to do, or am I being dumb?
P.S.: I know the root cause of the whole problem is that since run() never returns, supervisor keeps trying to start it, thinks the process failed, and reports the status FATAL "Exited too quickly (process log may have details)". My actual question is: am I doing it right? Can this be done this way?
P.P.S.: Standalone (without supervisor), daemon_runnner.py runs fine both with and without sudo.
Try setting startsecs = 0:
[program:foo]
command = ls
startsecs = 0
autorestart = false
http://supervisord.org/configuration.html
startsecs
The total number of seconds which the program needs to stay running after a startup to consider the start successful. If the program does not stay up for this many seconds after it has started, even if it exits with an “expected” exit code (see exitcodes), the startup will be considered a failure. Set to 0 to indicate that the program needn’t stay running for any particular amount of time.
This is what I normally do:
Create a service.conf file that describes the new service. It references a shell script, which is what actually launches the Python script. This .conf file lives inside /etc/supervisor/conf.d/.
Create the shell script that launches the Python script and make it executable: chmod 755 service.sh.
Configure log_stderr and stderr_logfile so you can verify any issues.
Update supervisor using reload and then check status:
supervisor> status
alexad RUNNING pid 32657, uptime 0:21:05
service.conf
[program:alexad]
; Set full path to celery program if using virtualenv
command=sh /usr/local/src/gonzo/supervisorctl/alexad.sh
directory=/usr/local/src/gonzo/services/alexa
log_stdout=true ; if true, log program stdout (default true)
log_stderr=true ; if true, log program stderr (default false)
stderr_logfile=/usr/local/src/gonzo/log/alexad.err
logfile=/usr/local/src/gonzo/log/alexad.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; Set Celery priority higher than default (999)
priority=500
service.sh
#!/bin/bash
cd /usr/local/src/gonzo/services/alexa
exec python reader.py
You should check your supervisor program logs, which normally live in /var/log/supervisor/.
In my case, I had a ModuleNotFoundError: No module named 'somemodule'.
I found this odd, because when I ran the script directly it worked fine.
After attempting to run the script with sudo I realized what was happening.
Installed Python packages are specific to the user, not to the folder. So if you pip install as your currently logged-in user and then try running the same script as sudo or some other user, it will probably fail with ModuleNotFoundError: No module named 'somemodule'.
Supervisor runs as root by default.
I solved this by setting the user in the supervisor config file to the current user, which in my case was ubuntu:
[program:some_program]
directory=/home/ubuntu/scripts/
command=/usr/bin/python3 myscript.py
autostart=true
autorestart=true
user=ubuntu
stderr_logfile=/var/log/supervisor/myscriptlogs.err.log
stdout_logfile=/var/log/supervisor/myscriptlogs.out.log
As a side note, it's also important to make sure your supervisor command= calls the script with the version of Python you intend: cd to the folder where your script lives, run which python, and use that path.
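Another quick way to see which interpreter a given user actually runs, and therefore what absolute path to put in command=, is to ask Python itself; sys.executable holds the path of the interpreter executing the script:

```python
import sys

# Prints the absolute path of the interpreter running this script,
# e.g. /usr/bin/python3 or a virtualenv's bin/python.
print(sys.executable)
```

Running this once as your login user and once as the user supervisor is configured with makes a mismatch obvious.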
Your script is exiting with a failure status; supervisor is simply trying to restart it.
Supervisor is started with root permissions; perhaps it is passing those permissions on to your script and this is causing it to fail (a change in the source directory or something). Check what happens when you run your daemon as root without supervisor.
We really need more information to know why it is failing.
Not sure if the issue is the same with a daemon runner, but if you use the daemon context directly under supervisord, you need to set context.detach_process to False.
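A minimal sketch of that idea, assuming a DaemonContext-like object from the python-daemon package (the helper and its under_supervisor flag are my own names, not part of the library):

```python
def configure_context(context, under_supervisor):
    # supervisord expects its child to stay in the foreground; a
    # detached (double-forked) daemon looks like an immediate exit
    # to it, so it keeps respawning the program until it gives up.
    context.detach_process = not under_supervisor
    return context

# Usage sketch with python-daemon (not imported here):
#   ctx = daemon.DaemonContext()
#   with configure_context(ctx, under_supervisor=True):
#       main_loop()
```

With detach_process left True, supervisor sees exactly the "exited (exit status 1; not expected)" loop from the question's log.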
I deployed a Python worker to Heroku along with an app. My Procfile is as follows:
worker: python worker.py
The worker reads a text file line by line, makes some changes, and writes the result to another text file. Both text files are in the app root directory. This should take around 6 hours in total, but within a minute or so of deploying, it crashes. The error is logged as:
State changed from up to crashed
Process exited with status 0
I have no clue what is causing this.