I have a Python script called post.py that handles HTTP POST requests to my server. It all runs on an AWS EC2 instance. I want the script to run constantly as a service, so that I don't have to open a command prompt and run: python post.py
How do you set up a Python script like this?
You should be using supervisord to daemonize your script. Your config file should look something like this:
[program:post]
command=/usr/bin/python -m post
; assumes post.py lives in a folder called post under /home/ubuntu
directory=/home/ubuntu/post
autostart=true
I found out how to daemonize my script very easily:
I went to /etc/init/ and added a file called post.conf (Upstart only picks up files ending in .conf).
I put in this:
start on runlevel [2345]
stop on runlevel [!2345]
env AN_ENVIRONMENTAL_VARIABLE=i-want-to-set
respawn
exec /home/ubuntu/Files/mysite/post.py
And now it is working perfectly!
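One prerequisite worth noting for the exec line above: Upstart runs the file directly, so it must be executable and start with a shebang line. A self-contained sketch using a stand-in file (the filename here is illustrative, not from the question):

```shell
# Stand-in for /home/ubuntu/Files/mysite/post.py so this sketch is runnable
printf '#!/usr/bin/env python3\nprint("ok")\n' > post_demo.py
chmod +x post_demo.py   # exec in an Upstart job runs the file directly,
                        # so it needs the executable bit and a shebang
./post_demo.py          # prints: ok
```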
If you meant to detach the execution from the terminal, you can use nohup (http://www.cyberciti.biz/tips/nohup-execute-commands-after-you-exit-from-a-shell-prompt.html). If instead you want to execute post.py repeatedly on a schedule, you can use a cron job, the standard Linux utility. If you want to do the scheduling in Python itself, check out https://docs.python.org/2/library/sched.html
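A minimal sketch of the nohup approach (the stand-in script and file names are illustrative, so the example is self-contained):

```shell
# Hypothetical stand-in for post.py so the example is runnable as-is
printf 'import time\ntime.sleep(0.2)\nprint("done")\n' > post_demo2.py
# nohup detaches the process from the terminal: it survives logout,
# and stdout/stderr are captured in post.log
nohup python3 post_demo2.py > post.log 2>&1 &
echo $! > post.pid        # remember the PID so the process can be stopped later
wait "$(cat post.pid)"    # only for this demo; normally you would just log out
cat post.log
```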
I am trying to start a Python script in pm2 with a command-line argument.
Without a variable, I would run:
pm2 start --name myapp /home/user/myapp/start.py --interpreter ~/myapp_venv/bin/python3
The python command I would like to run is:
python3 /home/user/myapp/start.py -cf configs/myapp2.ini
If I activate the virtual environment, I can start the app just fine.
I am looking for the PM2 start command to run this in PM2.
Also, I would like to stop the pm2 logs from being generated and written, since I log in my own app, so they are useless writes for me.
I thought something like the below would work when added to the PM2 start command:
-o "/dev/null" -e "/dev/null"
If anyone would be able to advise on the PM2 start command to run this app with the argument in PM2, I would be very grateful.
I have since found the answer to my own question.
Run the below command from the myapp directory:
pm2 start "~/myapp_venv/bin/python3 start.py -cf configs/myapp2.ini" --name myapp2
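For the second part of the question, silencing PM2's own log files: pm2 start accepts `-o`/`--output` and `-e`/`--error` to set the stdout and stderr log paths, so pointing both at /dev/null should stop the extra writes. A sketch, not tested here:

```shell
pm2 start "~/myapp_venv/bin/python3 start.py -cf configs/myapp2.ini" \
    --name myapp2 -o /dev/null -e /dev/null
```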
I am using django_q for some scheduling and automations in my django project.
I successfully configured everything needed, but to get django_q running I have to type 'python manage.py qcluster' on the server command line, and after I close the shell session it doesn't work anymore.
The official django_q documentation says there is no need for a supervisor, but the cluster does not keep running.
Any ideas?
There are a few approaches you can use.
You could install the screen program to create a terminal session which stays around after logout. See also: https://superuser.com/questions/451057/keep-processes-alive-after-ssh-logout
You could use systemd to automatically start your qcluster. This has the advantage that it will start qcluster again if your server is rebooted. You'll want to write a service unit file with Type=simple.
Here's an example unit file. (You may need to adapt this somewhat.)
[Unit]
Description=qcluster daemon
[Service]
User=<django user>
Group=<django group>
WorkingDirectory=<your working dir>
Environment=PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
# ExecStart needs an absolute path to the interpreter;
# point it at your virtualenv's python if you use one
ExecStart=/usr/bin/python manage.py qcluster
Restart=always
[Install]
WantedBy=multi-user.target
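Assuming the unit file is saved as /etc/systemd/system/qcluster.service (the filename is an assumption), it can be registered and started with the usual systemd commands; a sketch, requiring root and a systemd host:

```shell
sudo systemctl daemon-reload                  # make systemd re-read unit files
sudo systemctl enable --now qcluster.service  # start now and on every boot
sudo systemctl status qcluster.service        # check that it is running
```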
I have a simple script that executes a flask command called sendemail (located in the "main" blueprint).
The "task" script, located in /home/ubuntu/tasks:
cd /home/ubuntu/app
source venv/bin/activate
flask main sendemail
deactivate
When I run (from anywhere, including the home directory)
bash /home/ubuntu/tasks/task
The function runs exactly as intended. However, when I add this same script to crontab, it produces an error, emailing me this message:
/home/ubuntu/tasks/task: line 4: flask: command not found
I've made sure that I have the latest Flask installed and assume this might have something to do with the PATH variable - how can I fix/debug this?
The activation doesn't work under cron because cron doesn't have the same environment variables as your shell. You can add set > /path/to/your.log to the script to dump the environment and diagnose…
You can simplify your script by calling Flask directly:
/home/ubuntu/app/venv/bin/flask main sendemail
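With the full path to the venv's flask binary, the wrapper script is no longer needed in cron at all. A sketch of a crontab entry (the daily 06:00 schedule and log path are illustrative, not from the question):

```crontab
0 6 * * * cd /home/ubuntu/app && /home/ubuntu/app/venv/bin/flask main sendemail >> /tmp/sendemail.log 2>&1
```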
I want to create a service that starts with Ubuntu and is able to use Django models etc.
This service will create a util.WorkerThread thread and wait for data in main.py:
if __name__ == '__main__':
bot.polling(none_stop=True)
How can I do this? I just don't know what I need to look for.
If you can also tell me how to create an Ubuntu autostart service with a script like that, please do :)
P.S. The whole Django project runs via uWSGI in emperor mode.
The easiest way, in my opinion, is to create a script and run it from crontab.
First of all create a script to start your django app.
#!/bin/bash
cd /path/to/your/virtual/environment  # path to your virtual environment
. bin/activate                        # activate your virtual environment
cd /path/to/your/project/directory    # after that, go to your project directory
python manage.py runserver            # run the django server
Save the script and open crontab with the command:
crontab -e
Now edit the crontab file and write on the last line:
@reboot /path/to/your/script.sh
This way is not the best but the easiest, if you are not comfortable with Linux startup service creation.
I hope this helps you :)
Take a look at supervisord. It is much easier than daemonizing the Python script yourself.
Configure it with something like this:
[program:watcher]
command = /usr/bin/python /path/to/main.py
stdout_logfile = /var/log/main-stdout.log
stdout_logfile_maxbytes = 10MB
stdout_logfile_backups = 5
stderr_logfile = /var/log/main-stderr.log
stderr_logfile_maxbytes = 10MB
stderr_logfile_backups = 5
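After saving that section to supervisord's configuration (e.g. a file under /etc/supervisor/conf.d/ on Debian/Ubuntu; the path is distribution-dependent), supervisor has to pick it up. A typical sequence, assuming a running supervisord:

```shell
sudo supervisorctl reread          # detect the new [program:watcher] section
sudo supervisorctl update          # start any newly added programs
sudo supervisorctl status watcher  # verify it is RUNNING
```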
OK, here is the answer: https://www.raspberrypi-spy.co.uk/2015/10/how-to-autorun-a-python-script-on-boot-using-systemd/
In newer Ubuntu versions, .conf services in /etc/init fail with the error: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
But the same services work using systemd.
I have written a logging application in Python that is meant to start at boot, but I've been unable to start the app with Ubuntu's Upstart init daemon. When run from the terminal with sudo /usr/local/greeenlog/main.pyw, the application works perfectly. Here is what I've tried for the Upstart job:
/etc/init/greeenlog.conf
# greeenlog
description "I log stuff."
start on startup
stop on shutdown
script
exec /usr/local/greeenlog/main.pyw
end script
My application starts one child thread, in case that is important. I've tried the job with the expect fork stanza without any change in the results. I've also tried this with sudo and without the script statements (just a lone exec statement). In all cases, after boot, running status greeenlog returns greeenlog stop/waiting and running start greeenlog returns:
start: Rejected send message, 1 matched rules; type="method_call", sender=":1.61" (uid=1000 pid=2496 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
Can anyone see what I'm doing wrong? I appreciate any help you can give. Thanks.
Thanks to unutbu's help, I have been able to correct my job. Apparently, these are the only environment variables that Upstart sets (retrieved in Python with os.environ):
{'TERM': 'linux', 'PWD': '/', 'UPSTART_INSTANCE': '', 'UPSTART_JOB': 'greeenlog', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin'}
My program relies on a couple of these variables being set, so here is the revised job with the right environment variables:
# greeenlog
description "I log stuff."
start on startup
stop on shutdown
env DISPLAY=:0.0
env GTK_RC_FILES=/etc/gtk/gtkrc:/home/greeenguru/.gtkrc-1.2-gnome2
script
exec /usr/local/greeenlog/main.pyw > /tmp/greeenlog.out 2>&1
end script
Thank you!