I am trying to deploy a Django app using Gunicorn + Python 3 + Nginx. Everything worked before I updated to Python 3.6, but since the update Gunicorn no longer creates its socket file. I use the script below to run Gunicorn.
#!/bin/bash

NAME=yogavidya # Name of the application (*)
DJANGODIR=/home/ytsejam/public_html/abctasarim/ # Django project directory (*)
SOCKFILE=/home/ytsejam/public_html/abctasarim/run/gunicorn.sock # we will communicate using this unix socket (*)
USER=ytsejam # the user to run as (*)
GROUP=webdata # the group to run as (*)
NUM_WORKERS=1 # how many worker processes should Gunicorn spawn (*)
DJANGO_SETTINGS_MODULE=yogavidya.settings.base # which settings file should Django use (*)
DJANGO_WSGI_MODULE=yogavidya.wsgi # WSGI module name (*)

echo "Starting $NAME as $(whoami)"

# Activate the virtual environment
cd "$DJANGODIR"
source /usr/bin/virtualenvwrapper.sh
source /home/ytsejam/.virtualenvs/yv_dev/bin/activate
workon yv_dev
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH="$DJANGODIR:$PYTHONPATH"

# Create the run directory if it doesn't exist
RUNDIR=$(dirname "$SOCKFILE")
test -d "$RUNDIR" || mkdir -p "$RUNDIR"

# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/ytsejam/public_html/abctasarim/gunicorn \
  --name="$NAME" \
  --workers="$NUM_WORKERS" \
  --env=DJANGO_SETTINGS_MODULE="$DJANGO_SETTINGS_MODULE" \
  --pythonpath="$DJANGODIR" \
  --user="$USER" \
  --bind="unix:$SOCKFILE" \
  "$DJANGO_WSGI_MODULE:application"
When I check the status of the service that runs this script, the behaviour varies from time to time. When it fails, the output is:
● yogavidya.service - Yogavidya gunicorn daemon
Loaded: loaded (/usr/lib/systemd/system/yogavidya.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2017-03-27 13:13:36 BST; 10min ago
Process: 14393 ExecStart=/home/ytsejam/public_html/abctasarim/gunicorn_start.sh (code=exited, status=1/FAILURE)
Main PID: 14393 (code=exited, status=1/FAILURE)
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: File "/home/ytsejam/.virtualenvs/yv_dev/lib/python3.6/site-packages/gunicorn/sock.py", line 117, in __init__
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: super(UnixSocket, self).__init__(addr, conf, log, fd=fd)
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: File "/home/ytsejam/.virtualenvs/yv_dev/lib/python3.6/site-packages/gunicorn/sock.py", line 32, in __init__
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: self.sock = self.set_options(sock, bound=bound)
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: File "/home/ytsejam/.virtualenvs/yv_dev/lib/python3.6/site-packages/gunicorn/sock.py", line 46, in set_options
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: if err[0] not in (errno.ENOPROTOOPT, errno.EINVAL):
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: TypeError: 'OSError' object is not subscriptable
Mar 27 13:13:36 ytsejam systemd[1]: yogavidya.service: Main process exited, code=exited, status=1/FAILURE
Mar 27 13:13:36 ytsejam systemd[1]: yogavidya.service: Unit entered failed state.
Mar 27 13:13:36 ytsejam systemd[1]: yogavidya.service: Failed with result 'exit-code'.
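The traceback points into gunicorn itself rather than the project code: sock.py indexes the caught exception (err[0]), a Python 2 idiom that no longer works because Python 3's OSError is not subscriptable. A minimal reproduction of just that language change (later gunicorn releases read the error code from the exception's attributes instead, so upgrading gunicorn inside the virtualenv is the usual fix here; treat that as a hint to verify, not as a confirmed diagnosis):

```python
# Python 2 allowed indexing exceptions (err[0]); Python 3's OSError does not.
err = OSError(22, "Invalid argument")

print(err.errno)    # the Python 3 way to read the error code: 22
print(err.args[0])  # works on both Python 2 and 3: 22

try:
    err[0]          # the old idiom still present in that gunicorn version
except TypeError as exc:
    print(exc)      # 'OSError' object is not subscriptable
```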
When I try to restart it, I get a success message and the output becomes:
yogavidya.service - Yogavidya gunicorn daemon
Loaded: loaded (/usr/lib/systemd/system/yogavidya.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2017-03-27 13:25:55 BST; 1s ago
Main PID: 14590 (gunicorn)
CGroup: /system.slice/yogavidya.service
└─14590 /home/ytsejam/.virtualenvs/yv_dev/bin/python3 /home/ytsejam/public_html/abctasarim/gunicorn --name=yogavidya --workers=1 --env=DJANGO_SETTINGS_MODULE=yogavidya.settings.base --pythonpath /home
Mar 27 13:25:55 ytsejam systemd[1]: Started Yogavidya gunicorn daemon.
Mar 27 13:25:55 ytsejam gunicorn_start.sh[14590]: Starting yogavidya as ytsejam
But I still can't see the sock file inside the "run" folder. How can I fix the Gunicorn script so that it creates the socket file?
Thanks
Related
I want a Python script to start automatically after boot on a Linux computer. To achieve this I set up a systemd service:
[Unit]
Description=My Script Service
Wants=network-online.target
After=network-online.target
After=multi-user.target
StartLimitIntervalSec=3600
StartLimitBurst=60
[Service]
Type=idle
User=masterofpuppets
Restart=on-failure
RestartSec=60s
WorkingDirectory=/home/masterofpuppets
ExecStart=/home/masterofpuppets/mypythonscript.py
[Install]
WantedBy=multi-user.target
But I get an error:
sudo systemctl status mysystemd.service
● transfer_DB_remote_to_local.service - My Script Service
Loaded: loaded (/etc/systemd/system/mysystemd.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Mon 2022-11-21 11:49:46 CET; 55s ago
Process: 19283 ExecStart=/home/masterofpuppets/mypythonscript.py (code=exited, status=1/FAILURE)
Main PID: 19283 (code=exited, status=1/FAILURE)
The Python script is:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import subprocess
import keyring
import time
DB_backup_from_server = f"mysqldump --single-transaction --quick -v -h 192.168.0.97 -u {keyring.get_password('serverDB', 'user')} -p'{keyring.get_password('serverDB', 'pw')}' testDB > ~/testDB_backup.sql"
restore_backup_to_local_DB = f"mysql -v -u {keyring.get_password('mysqlDB', 'user')} -p'{keyring.get_password('mysqlDB', 'pw')}' testDB < ~/testDB_backup.sql"
commands = [DB_backup_from_server, restore_backup_to_local_DB]
execution_interval = 60*60
t0 = time.time() - execution_interval
while True:
    if time.time() - t0 > execution_interval:
        t0 = time.time()
        for cmd in commands:
            subprocess.run(cmd,
                           stdout=subprocess.PIPE,
                           universal_newlines=True,
                           shell=True)
    time.sleep(60)
There are no errors if I start it manually.
This is a similar issue, but the suggested solution doesn't help in my case.
Edit:
journalctl -u mysystemd.service
Nov 21 14:36:37 masterofpuppets-pc systemd[1]: Started My Script Service.
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]: Traceback (most recent call last):
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]: File "/home/masterofpuppets/mypythonscript.py
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]: DB_backup_from_server = f"mysqldump --single-transaction --quick -v -h 192.168.0.38 -u {keyring.get_>
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]: File "/usr/lib/python3/dist-packages/keyring/core.py", line 57, in get_password
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]: return _keyring_backend.get_password(service_name, username)
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]: File "/usr/lib/python3/dist-packages/keyring/backends/fail.py", line 25, in get_password
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]: raise NoKeyringError(msg)
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]: keyring.errors.NoKeyringError: No recommended backend was available. Install a recommended 3rd party bac>
Nov 21 14:36:38 masterofpuppets-pc systemd[1]: mysystemd.service: Main process exited, code=exited, status=1/FAILURE
Nov 21 14:36:38 masterofpuppets-pc systemd[1]: mysystemd.service: Failed with result 'exit-code'.
Nov 21 14:37:38 masterofpuppets-pc systemd[1]: mysystemd.service: Scheduled restart job, restart counter is at 143.
Nov 21 14:37:38 masterofpuppets-pc systemd[1]: Stopped My Script Service.
Thanks to the hints, I now got it working via a systemd user unit.
The file is located at ~/.config/systemd/user/myuserunit.service
[Unit]
StartLimitIntervalSec=3600
StartLimitBurst=60
[Service]
Type=simple
Restart=on-failure
RestartSec=60s
ExecStart=/home/masterofpuppets/mypythonscript.py
[Install]
WantedBy=default.target
Enabling the service:
systemctl --user daemon-reload
systemctl --user enable myuserunit.service
Now it automatically starts after reboot/login.
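One caveat, taken from systemd's documented behaviour rather than from the post above: user units normally run only while the user has a session, so they start at login rather than strictly at boot. To have them start at boot with no one logged in, lingering can be enabled for that user:

```
loginctl enable-linger masterofpuppets
```

With lingering enabled, the user's systemd instance (and its enabled user units) is started at boot instead of at first login.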
I'm trying to serve Flask with Gunicorn and Nginx.
Here is my systemd unit configuration file
[Unit]
Description=Gunicorn instance to serve odooErp
After=network.target
[Service]
User=tito
Group=www-data
WorkingDirectory=/home/tito/peg/odoo_api/peg_api
Environment="PATH=/home/tito/peg/odoo_api/peg_api/env/bin"
ExecStart=/home/tito/peg/odoo_api/peg_api/env/bin/gunicorn --workers 3 --bind unix:odooErp.sock -m 007 wsgi:app
[Install]
WantedBy=multi-user.target
When I start the service, I run into the following error, despite having installed Gunicorn using pip:
● odooErp.service - Gunicorn instance to serve productionOdoo
Loaded: loaded (/etc/systemd/system/odooErp.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2020-08-18 05:33:23 UTC; 1min 0s ago
Main PID: 18305 (code=exited, status=1/FAILURE)
CPU: 43ms
Aug 18 05:33:23 peg-test-01 systemd[1]: Started Gunicorn instance to serve odooErp.
Aug 18 05:33:23 peg-test-01 gunicorn[18305]: Traceback (most recent call last):
Aug 18 05:33:23 peg-test-01 gunicorn[18305]: File "/home/tito/peg/odoo_api/peg_api/env/bin/gunicorn", line 7, in <module>
Aug 18 05:33:23 peg-test-01 gunicorn[18305]: from gunicorn.app.wsgiapp import run
Aug 18 05:33:23 peg-test-01 gunicorn[18305]: ImportError: No module named 'gunicorn'
Aug 18 05:33:23 peg-test-01 systemd[1]: odooErp.service: Main process exited, code=exited, status=1/FAILURE
Aug 18 05:33:23 peg-test-01 systemd[1]: odooErp.service: Unit entered failed state.
Aug 18 05:33:23 peg-test-01 systemd[1]: odooErp.service: Failed with result 'exit-code'.
This is probably because the module is not on the correct path.
If you are on Windows, you can set the PATH in your environment as follows:
Search "This PC" > Properties > Advanced system settings > Environment Variables > under the user variables click PATH, then New, and set the path.
Sometimes there can be some confusion if you are running multiple versions of Python; simply running pip will install a module for one version and not another. I usually prefer running 'python -m pip install gunicorn' with the interpreter I actually intend to use.
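One way to confirm a multiple-Pythons mix-up (a generic diagnostic, not specific to this unit) is to ask the interpreter where it lives and where it searches for modules, once from your shell and once from the service, and compare:

```python
import sys

# The interpreter actually running, and every directory it searches for imports.
# If the service's output differs from your shell's, pip installed the package
# somewhere the service's interpreter never looks.
print(sys.executable)
for path in sys.path:
    print(path)
```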
I have an EC2 instance associated with Elastic Beanstalk on which my Flask app is deployed. I have implemented some REST APIs using Flask and Python.
I created another EC2 instance on Amazon Linux 2, on which I installed MongoDB Community Edition. I noticed that it has a local IP mapped in the /etc/mongod.conf file:
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1
As per my understanding, I need to map the private IP of the EC2 instance running the Flask app to it:
# network interfaces
net:
  port: 27017
  bindIp: <private ip of EC2 with flask app>
So that I can access the MongoDB installed at this instance from the flask app.
# configuring mongo
application.config["MONGO_DBNAME"] = "my_db"
application.config["MONGO_URI"] = "mongodb://public_ip_of_mongodb:27017/my_app"
For some reason, as soon as I edit the /etc/mongod.conf file, the mongod service starts failing:
mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-08-04 17:17:28 UTC; 1min 11s ago
Docs: https://docs.mongodb.org/manual
Process: 2019 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=48)
Process: 2015 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 2012 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 2010 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Main PID: 1651 (code=exited, status=0/SUCCESS)
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: Starting MongoDB Database Server...
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal mongod[2019]: about to fork child process, waiting until server is ready for connections.
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal mongod[2019]: forked process: 2023
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: mongod.service: control process exited, code=exited status=48
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: Failed to start MongoDB Database Server.
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: Unit mongod.service entered failed state.
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: mongod.service failed.
Even if I revert the bindIp to 127.0.0.1, it still fails.
Am I missing anything over here?
I have written a script called coinview.py and it runs fine on Linux. When I try to run it as a systemd service, it raises an error:
ImportError: No module named 'schedule'
I used pip3 show schedule; the package already exists. So I have no idea what's wrong with my script.
I printed sys.executable and sys.path from within systemd (output below).
[Unit]
Description=coinview daemon
After=rc-local.service
[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/home/ubuntu/source/quotation_api
ExecStart=/usr/bin/python3 coinview.py
Restart=always
[Install]
WantedBy=multi-user.target
ubuntu@ip-100-00-40-02:/etc/systemd/system$ pip3 show schedule
Name: schedule
Version: 0.6.0
Summary: Job scheduling for humans.
Home-page: https://github.com/dbader/schedule
Author: Daniel Bader
Author-email: mail@dbader.org
License: MIT
Location: /home/ubuntu/.local/lib/python3.5/site-packages
Requires:
Required-by:
Mar 27 08:40:10 ip-100-00-40-02 python3[8634]: Traceback (most recent call last):
Mar 27 08:40:10 ip-100-00-40-02 python3[8634]: File "coinview.py", line 3, in <module>
Mar 27 08:40:10 ip-100-00-40-02 python3[8634]: import requests,threading,time,schedule,json
Mar 27 08:40:10 ip-100-00-40-02 python3[8634]: ImportError: No module named 'schedule'
Mar 27 08:40:10 ip-100-00-40-02 systemd[1]: coinview.service: Main process exited, code=exited, status=1/FAILURE
Mar 27 08:40:10 ip-100-00-40-02 systemd[1]: coinview.service: Unit entered failed state.
Mar 27 08:40:10 ip-100-00-40-02 systemd[1]: coinview.service: Failed with result 'exit-code'.
Mar 27 08:40:10 ip-100-00-40-02 systemd[1]: coinview.service: Service hold-off time over, scheduling restart.
Mar 27 08:40:10 ip-100-00-40-02 systemd[1]: Stopped coinview daemon.
Apr 09 07:59:03 ip-100-00-40-02 python[12095]: /usr/bin/python3
Apr 09 07:59:03 ip-100-00-40-02 python[12095]: ['/home/ubuntu/source/quotation_api', '/usr/lib/python35.zip', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-x8
According to these logs, I found that sys.path differs between a manual shell and systemd. I tried adding "/home/ubuntu/.local/lib/python3.5/site-packages" to /etc/profile, but the systemd logs show that it still can't find the path.
So I did a stupid thing and added
sys.path.append("/home/ubuntu/.local/lib/python3.5/site-packages")
to my code, and it works...
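A cleaner alternative to patching sys.path in the code (my suggestion, not from the original post): systemd services don't read /etc/profile, but the unit file can pass the path to the process directly:

```
[Service]
Environment=PYTHONPATH=/home/ubuntu/.local/lib/python3.5/site-packages
```

After editing the unit, run systemctl daemon-reload and restart the service so the new environment takes effect.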
Install the package for root with
sudo pip3 install schedule
Or instead of running it as root, try running it as another specific user.
Modify your .service to something like:
[Unit]
Description=coinview daemon
After=rc-local.service
[Service]
Type=simple
User=user
WorkingDirectory=/home/ubuntu/source/quotation_api
ExecStart=/usr/bin/python3 coinview.py
Restart=always
[Install]
WantedBy=multi-user.target
Hope that helps!
I've read all over, and I still can't get my Python script to run under systemd.
Here is the shell script I use:
#! /bin/sh
cd /home/albert/speedcomplainer
/usr/bin/python speedcomplainer.py
I can execute the script (/usr/bin/speedcomplainer) and it runs just fine from the command line. The Python script loops forever, checking my internet speeds. As I said, it runs fine from the command line, either directly (python ...) or from the shell script I created in /usr/bin.
But when I put it into this unit file:
# speedcomplianer - checks and tweets comcast speeds.
#
#
[Unit]
Description=Ethernet Speed Complainer
After=syslog.target network.target
[Service]
Type=simple
WorkingDirectory=/home/albert/speedcomplainer
ExecStart=/usr/bin/speedcomplainer
Restart=always
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
It fails to start up (sudo systemctl start speedcomplainer.service) with this error:
speedcomplainer.service - Ethernet Speed Complainer
Loaded: loaded (/lib/systemd/system/speedcomplainer.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit) since Wed 2016-02-24 20:21:02 CST; 7s ago
Process: 25325 ExecStart=/usr/bin/speedcomplainer (code=exited, status=1/FAILURE)
Main PID: 25325 (code=exited, status=1/FAILURE)
I look at the log with journalctl -u speedcomplainer and see:
Feb 24 20:21:02 haven systemd[1]: Started Ethernet Speed Complainer.
Feb 24 20:21:02 haven speedcomplainer[25325]: Traceback (most recent call last):
Feb 24 20:21:02 haven speedcomplainer[25325]: File "speedcomplainer.py", line 9, in <module>
Feb 24 20:21:02 haven speedcomplainer[25325]: import twitter
Feb 24 20:21:02 haven speedcomplainer[25325]: ImportError: No module named twitter
Feb 24 20:21:02 haven systemd[1]: speedcomplainer.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:21:02 haven systemd[1]: speedcomplainer.service: Unit entered failed state.
Feb 24 20:21:02 haven systemd[1]: speedcomplainer.service: Failed with result 'exit-code'.
Feb 24 20:21:02 haven systemd[1]: speedcomplainer.service: Service hold-off time over, scheduling restart.
Feb 24 20:21:02 haven systemd[1]: Stopped Ethernet Speed Complainer
AHAHA!! An import error in the python script.
But wait - it works from everywhere else. Why am I getting an ImportError only when it runs from systemd? (Answer: the module is installed locally. Next question:)
OK. After following the path that @jcomeau_ictx led me down, it seems that pip installed to my local user directory. How do I install modules for root use?
OK. Thanks to @jcomeau_ictx, I figured out the problem. pip installs locally by default. This post discusses in detail how to install system-wide (TL;DR: apt-get). This installed for the root user. I didn't want to mess with a virtualenv, and it's only one module with few dependencies.
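For reference, the two system-wide install routes mentioned above look roughly like this (the Debian/Ubuntu package name python-twitter is an assumption on my part; check which distribution package actually provides the module you import):

```shell
# install via the distribution package manager (what the linked post recommends)
sudo apt-get install python-twitter

# or install with pip for all users rather than just your own account
sudo python -m pip install twitter
```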