Start django app as service - python

I want to create a service that will start with Ubuntu and will have the ability to use Django models etc.
This service will create a util.WorkerThread thread and wait for data in main.py:
if __name__ == '__main__':
    bot.polling(none_stop=True)
How can I do this? I just don't know what I need to look for.
If you can also tell me how to create an Ubuntu autostart service for a script like that, please do :)
P.S. The whole Django project runs via uWSGI in emperor mode.

The easiest way, in my opinion, is to create a script and run it from crontab.
First of all, create a script to start your Django app:
#!/bin/bash
cd /path/to/your/virtualenv       # path to your virtual environment
. bin/activate                    # activate your virtual environment
cd /path/to/your/project          # then go to your project directory
python manage.py runserver        # run the Django server
Save the script and open crontab with the command:
crontab -e
Now edit the crontab file and write on the last line:
@reboot /path/to/your/script.sh
This way is not the best, but it is the easiest if you are not comfortable with creating Linux startup services.
I hope this helps you :)

Take a look at supervisord. It is much easier than daemonizing a Python script yourself.
Configure it with something like this:
[program:watcher]
command = /usr/bin/python /path/to/main.py
stdout_logfile = /var/log/main-stdout.log
stdout_logfile_maxbytes = 10MB
stdout_logfile_backups = 5
stderr_logfile = /var/log/main-stderr.log
stderr_logfile_maxbytes = 10MB
stderr_logfile_backups = 5
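Assuming the config is saved somewhere supervisord reads it (for example /etc/supervisor/conf.d/watcher.conf on Ubuntu; the exact path depends on your install), you can load and start it with:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start watcher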

OK, here is the answer: https://www.raspberrypi-spy.co.uk/2015/10/how-to-autorun-a-python-script-on-boot-using-systemd/
In newer Ubuntu versions, service .conf files in /etc/init fail with the error Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused.
But services work using systemd.
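For reference, a minimal systemd unit for a script like main.py might look like the sketch below. The user, paths, virtualenv location, and settings module are assumptions you will need to adjust; DJANGO_SETTINGS_MODULE is only needed if the script calls django.setup() to use the models.
[Unit]
Description=Telegram bot worker (main.py)
After=network.target
[Service]
User=www-data
WorkingDirectory=/path/to/your/project
# assumed settings module so the worker can use Django models
Environment=DJANGO_SETTINGS_MODULE=yourproject.settings
# use the python from the project's virtualenv
ExecStart=/path/to/your/virtualenv/bin/python /path/to/your/project/main.py
Restart=always
[Install]
WantedBy=multi-user.target
Save it as /etc/systemd/system/bot.service, then run sudo systemctl daemon-reload and sudo systemctl enable --now bot.service.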


How to keep django_q cluster running in linux server?

I am using django_q for some scheduling and automation in my Django project.
I successfully configured everything needed, but to get django_q running I have to type python manage.py qcluster on the server command line, and after I close the shell session it doesn't work anymore.
The official django_q documentation says that there is no need for a supervisor, but it just isn't running.
Any ideas?
There are a few approaches you can use.
You could install the screen program to create a terminal session which stays around after logout. See also: https://superuser.com/questions/451057/keep-processes-alive-after-ssh-logout
You could use systemd to automatically start your qcluster. This has the advantage that it will start qcluster again if your server is rebooted. You'll want to write a service unit file with Type=simple; there are plenty of resources on how to write unit files.
Here's an example unit file. (You may need to adapt this somewhat.)
[Unit]
Description=qcluster daemon
[Service]
Type=simple
User=<django user>
Group=<django group>
WorkingDirectory=<your working dir>
Environment=PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
# use the full path of the python you run the project with (e.g. your virtualenv's python)
ExecStart=/usr/bin/python manage.py qcluster
Restart=always
[Install]
WantedBy=multi-user.target
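Assuming you save this as /etc/systemd/system/qcluster.service, a typical way to enable, start, and inspect it is:
sudo systemctl daemon-reload
sudo systemctl enable --now qcluster.service
sudo systemctl status qcluster.service
journalctl -u qcluster.service -f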

Shutdown script not executing on Google Cloud VM

I have the following:
#! /usr/bin/python3.7
f=open("python_out.txt",'w',encoding='utf-8')
f.write("OK1")
import socket
import telegram
f.write("OK2")
BOT_TOKEN = "telegram BOT_TOKEN"
CHAT_ID = "chat_id"
bot = telegram.Bot(token=BOT_TOKEN)
host_name = socket.gethostname()
content = 'Machine name: %s is shutting down!' % host_name
bot.send_message(chat_id=CHAT_ID, text=content)
f.write("OK3")
I have checked my environment: I can make this script work by running python3 script.py on the instance. It sends the notification and writes python_out.txt.
I set this script as the shutdown-script.
But when I manually clicked the "stop" button, it did not work as expected. The same happens with the startup-script.
I have read many posts:
Shutdown script not executing on a Google Cloud VM
Reliably executing shutdown scripts in Google Compute Engine
Pro Tip: Use Shutdown Script Detect Preemption on GCP
Of course I also read the official documentation:
https://cloud.google.com/compute/docs/shutdownscript
I want to try setting up powerbtn.sh, but I can't find /etc/acpi/ on GCP Ubuntu 16.04 LTS.
I can't find anything else to try, any ideas?
When you use a startup script or a shutdown script, the user that executes it is the root user, and the default directory is /root/. This directory isn't writable, and that's why nothing happens with your code.
Simply write your files to a writable directory and that's all.
Don't forget that the files you create are written by the root user, and other users can't read and/or write files created by root. Use chmod or chown to change this.
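As a quick illustration (a minimal sketch; /tmp is just an assumed writable location), the marker file from the question could be written with an absolute path:
import socket

# Use an absolute path: the script runs as root with a working directory
# you don't control, so a relative "python_out.txt" is easy to lose.
with open("/tmp/python_out.txt", "w", encoding="utf-8") as f:
    f.write("shutdown script ran on %s\n" % socket.gethostname())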

Run python GUI application as service on ubuntu 18.04

I've successfully managed to run my GUI Python application as a service on a Raspberry Pi. The unit file used:
[Unit]
Description=Example systemd service.
After=graphical.target
[Service]
Type=simple
Environment="Display=:0"
Environment=XAUTHORITY=/home/pi/.Xauthority
WorkingDirectory=/home/pi/tf/
ExecStart=/home/pi/tf/myApp.py
Restart=always
RestartSec=10s
KillMode=process
TimeoutSec=infinity
[Install]
WantedBy=graphical.target
At the beginning of my Python application I added the shebang pointing to python3, like this:
#! /address/where/is/python3
The problem is that I can't do the same in Ubuntu.
I think it is because the .Xauthority file does not exist.
In Ubuntu I ran
echo $XAUTHORITY
and I got:
/run/user/1000/Xauthority
Then I changed these lines:
Environment=XAUTHORITY=/run/user/1000/Xauthority
WorkingDirectory=/home/sergio/tf/
ExecStart=/home/sergio/tf/myApp.py
With "journalctl -u myApp -f" I get the following error:
cannot connect to X server
Any idea what it can be?
I solved this problem by following the steps by htorque in this post: https://askubuntu.com/questions/21923/how-do-i-create-the-xauthority-file
Follow these steps:
1. Open System > Preferences > Startup Applications.
2. Click on Add.
3. Name: Xauthority
4. Command: /bin/bash -c 'ln -s -f "$XAUTHORITY" ~/.Xauthority'
5. Comment: Creates a symbolic link from ~/.Xauthority to $XAUTHORITY. Then add the entry by clicking on Add.
Now every time you log in, it should create the link to the current authority file.
So now we have the .Xauthority file in ~/.
Finally, the service file myApp.service was updated like this:
Environment="DISPLAY=:0"
Environment=XAUTHORITY=/home/"yourUsername"/.Xauthority
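After editing the unit file, reload systemd and restart the service (assuming the unit is named myApp.service, as the journalctl command above suggests):
sudo systemctl daemon-reload
sudo systemctl restart myApp.service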

How can I run a django management command by cron job

I am working with a Django app called django-mailbox. Its purpose is to import email messages via POP3 and other protocols and store them in a DB. I want to do this at regular intervals via cron. The documentation at http://django-mailbox.readthedocs.org/en/latest/topics/polling.html states:
Using a cron job
You can easily consume incoming mail by running the management command named getmail (optionally with an argument of the name of the mailbox you’d like to get the mail for).:
python manage.py getmail
Now I can run this at the command line locally and it works, but if this was deployed to an outside server which was only accessible by a URL, how would this command be given?
If you are using a virtualenv, use the Python binary from the virtualenv:
* * * * * /path/to/virtualenv/bin/python /path/to/project/manage.py management_command
On the server machine:
$ sudo crontab -l
no crontab for root
$ sudo crontab -e
no crontab for root - using an empty one
Select an editor. To change later, run 'select-editor'.
1. /bin/ed
2. /bin/nano <---- easiest
3. /usr/bin/vim.basic
4. /usr/bin/vim.tiny
Choose 1-4 [2]:
Choose your preferred editor,
then see http://en.wikipedia.org/wiki/Cron for how to schedule when the command will run. Point the crontab entry at some .sh file on your machine, and make sure you give the full path, as this is going to run in the root user context.
The script the cron will run may look something like:
#!/bin/bash
cd /absolute/path/to/django/project
/usr/bin/python ./manage.py getmail
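For example, assuming the script above is saved as /home/youruser/getmail.sh and made executable (both the path and the log file below are placeholders), a crontab entry that runs it every five minutes could be:
*/5 * * * * /home/youruser/getmail.sh >> /home/youruser/getmail.log 2>&1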

Deploying CherryPy (daemon)

I've followed the basic CherryPy tutorial (http://www.cherrypy.org/wiki/CherryPyTutorial). One thing not discussed is deployment.
How can I launch a CherryPy app as a daemon and "forget about it"? What happens if the server reboots?
Is there a standard recipe? Maybe something that will create a service script (/etc/init.d/cherrypy...)
Thanks!
Daemonizer can be pretty simple to use:
# this works for cherrypy 3.1.2 on Ubuntu 10.04
import cherrypy
from cherrypy.process.plugins import Daemonizer
# before mounting anything
Daemonizer(cherrypy.engine).subscribe()
cherrypy.tree.mount(MyDaemonApp, "/")
cherrypy.engine.start()
cherrypy.engine.block()
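If you also want a pidfile (handy for an init script to stop the daemon later), CherryPy ships a PIDFile plugin that can be subscribed alongside Daemonizer; the path below is just an example:
from cherrypy.process.plugins import PIDFile

PIDFile(cherrypy.engine, '/var/run/myapp.pid').subscribe()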
There is a decent HOWTO for SysV style here.
To summarize:
Create a file named for your application in /etc/init.d that calls /bin/sh
sudo vim /etc/init.d/MyDaemonApp
#!/bin/sh
echo "Invoking MyDaemonApp";
/path/to/MyDaemonApp
echo "Started MyDaemonApp. Tremble, Ye Mighty."
Make it executable
sudo chmod +x /etc/init.d/MyDaemonApp
Run update-rc.d to create the proper links in the proper runtime directories.
sudo update-rc.d MyDaemonApp defaults 80
Then start it:
sudo /etc/init.d/MyDaemonApp
There is a Daemonizer plugin for CherryPy included by default, which is useful for getting it to start, but by far the easiest way for simple cases is to use the cherryd script:
> cherryd -h
Usage: cherryd [options]

Options:
  -h, --help            show this help message and exit
  -c CONFIG, --config=CONFIG
                        specify config file(s)
  -d                    run the server as a daemon
  -e ENVIRONMENT, --environment=ENVIRONMENT
                        apply the given config environment
  -f                    start a fastcgi server instead of the default HTTP
                        server
  -s                    start a scgi server instead of the default HTTP server
  -i IMPORTS, --import=IMPORTS
                        specify modules to import
  -p PIDFILE, --pidfile=PIDFILE
                        store the process id in the given file
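For example, a daemonized start with a config file, an imported application module, and a pidfile (all of the file names here are placeholders) could look like:
cherryd -d -i myapp -c /path/to/prod.conf -p /var/run/myapp.pid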
As far as an init.d script goes I think there are examples that can be Googled.
And cherryd is found in your virtualenv at:
virtualenv/lib/python2.7/site-packages/cherrypy/cherryd
or in: https://bitbucket.org/cherrypy/cherrypy/src/default/cherrypy/cherryd
I wrote a tutorial/project skeleton, cherrypy-webapp-skeleton, whose goal was to fill the gaps in deploying a real-world CherryPy application on Debian* for a web developer. It features an extended cherryd for daemon privilege drop. There is also a number of important script and config files for init.d, nginx, monit, and logrotate. The tutorial part describes how to put things together and eventually forget about it. The skeleton part proposes a possible arrangement of the CherryPy webapp project's assets.
* It was written for Squeeze but practically it should be same for Wheezy.
Info on Daemonizer options
When using Daemonizer, the docs don't state the options, e.g. how to redirect stdout or stderr. From the source of the Daemonizer class you can find the options. As a reference, take this example from my project:
import cherrypy
from cherrypy.process.plugins import Daemonizer

# run server as a daemon
d = Daemonizer(cherrypy.engine,
               stdout='/home/pi/Gate/log/gate_access.log',
               stderr='/home/pi/Gate/log/gate_error.log')
d.subscribe()
