How to start my telegram bot on Google Cloud - python

I want to upload my bot to the server. I am using Google Cloud. My bot.service file looks like this:
[Unit]
Description=Telegram bot 'ConverterBot'
After=syslog.target
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/home/misha_markov1015#converterbot/ConverterBot
ExecStart=/usr/bin/python3 /home/misha_markov1015#converterbot/ConverterBot/main.py
RestartSec=10
Restart=always
[Install]
WantedBy=multi-user.target
When I enter the following commands:
sudo systemctl daemon-reload
sudo systemctl enable bot
sudo systemctl start bot
sudo systemctl status bot
I get the following error:
Nov 01 19:08:53 converterbot systemd[1]: Started Telegram bot 'ConverterBot'.
Nov 01 19:08:53 converterbot systemd[8978]: bot.service: Changing to the requested working directory failed: No such file or directory
Nov 01 19:08:53 converterbot systemd[8978]: bot.service: Failed at step CHDIR spawning /usr/bin/python3: No such file or directory
Nov 01 19:08:53 converterbot systemd[1]: bot.service: Main process exited, code=exited, status=200/CHDIR
Nov 01 19:08:53 converterbot systemd[1]: bot.service: Failed with result 'exit-code'.
If I start the bot manually (python3 main.py from the project directory), everything works correctly. I have checked several times that the path to this file exists, so why does the error say that it does not?

In case anyone else runs into this problem, the solution was as follows: the path /home/misha_markov1015#converterbot/ConverterBot really does not exist. It had to be written as /home/misha_markov1015/ConverterBot instead.
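For reference, here is the corrected unit file; only the two paths change from the version in the question:
[Unit]
Description=Telegram bot 'ConverterBot'
After=syslog.target
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/home/misha_markov1015/ConverterBot
ExecStart=/usr/bin/python3 /home/misha_markov1015/ConverterBot/main.py
RestartSec=10
Restart=always
[Install]
WantedBy=multi-user.target
After editing, re-run sudo systemctl daemon-reload and sudo systemctl restart bot so systemd picks up the change.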

Related

Botocore fails to read credentials when run as daemon.service

I had my script running smoothly from the command line; however, when I start it as a systemd service, I get the following error:
iot_local.service - My iot_local Service
Loaded: loaded (/lib/systemd/system/iot_local.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-04-01 23:06:45 UTC; 5s ago
Process: 2436 ExecStart=/usr/bin/python /home/ubuntu/myTemp/iot_local.py (code=exited, status=1/FAILURE)
Main PID: 2436 (code=exited, status=1/FAILURE)
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 358, in resolve
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: service_name, region_name)
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: File "/usr/local/lib/python2.7/dist-packages/botocore/regions.py", line 122, in construct_endpoint
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: partition, service_name, region_name)
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: File "/usr/local/lib/python2.7/dist-packages/botocore/regions.py", line 135, in _endpoint_for_partition
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: raise NoRegionError()
Apr 01 23:06:45 ip-172-31-29-45 python[2436]: botocore.exceptions.NoRegionError: You must specify a region.
Apr 01 23:06:45 ip-172-31-29-45 systemd[1]: iot_local.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 23:06:45 ip-172-31-29-45 systemd[1]: iot_local.service: Unit entered failed state.
Apr 01 23:06:45 ip-172-31-29-45 systemd[1]: iot_local.service: Failed with result 'exit-code'.
it seems to fail on this line:
DB = boto3.resource('dynamodb')
If I add the region as an argument, the script still fails later because it cannot find credentials. So when I provide the region, an access key ID, and a secret key as arguments, everything works:
boto3.resource('dynamodb', region_name='us-west-2', aws_access_key_id=ACCESS_ID, aws_secret_access_key=ACCESS_KEY)
The obvious problem is that when this script is run as a service, it fails to obtain the info from ~/.aws/config and ~/.aws/credentials, which I made sure contain all the necessary information by running aws configure:
[default]
aws_access_key_id=XXXXXXXXXXXXXX
aws_secret_access_key=YYYYYYYYYYYYYYYYYYYYYYYYYYY
I also tried this:
export AWS_CONFIG_FILE="/home/ubuntu/.aws/config"
and this
sudo chown root:root ~/.aws
but it did not help. Any ideas why .service does not "see" the credentials files?
The answer is much simpler - the environment variables aren't getting loaded; in your case, the AWS credentials.
Create the file /lib/systemd/system/iot_local.service (e.g. with sudo vi) with the content given below.
[Unit]
Description=Spideren MQ RPC service
[Service]
WorkingDirectory=/opt/cocoon_predev/
Environment="AWS_DEFAULT_REGION=us-xxxx"
Environment="AWS_ACCESS_KEY_ID=xxxxxxxxx"
Environment="AWS_SECRET_ACCESS_KEY=xxxxxxxxxx"
Environment="LANG=en_US.UTF-8"
Environment="PYTHONIOENCODING=utf-8"
ExecStart=/usr/bin/python2 /home/myproject/iotservice.py
User=myusername
Restart=on-failure
RestartSec=90s
And then finally, run the commands below to activate the service.
sudo systemctl daemon-reload
sudo systemctl start iot_local.service
sudo systemctl status iot_local.service
To check for errors in daemon startup, run the following command:
journalctl -u iot_local.service
I did this on Ubuntu, so check the systemd specifics for your OS.
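As an alternative to hard-coding secrets in the unit file, systemd also supports EnvironmentFile=. A minimal sketch, assuming you keep the variables in a root-readable file such as /etc/iot_local.env (that path is an assumption):
# /etc/iot_local.env (assumed path; chmod 600, owned by root)
AWS_DEFAULT_REGION=us-xxxx
AWS_ACCESS_KEY_ID=xxxxxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxxxxxx
and in the [Service] section:
EnvironmentFile=/etc/iot_local.env
This keeps the credentials out of the unit file itself, which is often world-readable.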
I had a similar issue with the Fluent Bit AWS Firehose plugin failing to read the AWS credentials when run as a systemd service.
When invoking Fluent Bit manually from the command line, the credentials were loaded.
The credentials were indeed stored under the root user, and the service was running as root by default.
To fix the issue, I had to explicitly specify the user as root in the systemd unit:
[Service]
User=root
and run systemctl daemon-reload
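That is (the unit name here is an assumption; substitute whatever your Fluent Bit service is called):
sudo systemctl daemon-reload
sudo systemctl restart fluent-bit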
When systemd runs your script as a service, the script is no longer being run by the ubuntu user so the home directory is no longer /home/ubuntu. That means that ~/.aws/credentials no longer refers to /home/ubuntu/.aws/credentials and your script is therefore trying to load credentials from the wrong place (probably /root/.aws/credentials).
You can configure systemd to run your script as a specific user. Add User=ubuntu in the [Service] section.
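A minimal sketch of that change, reusing the unit from the question:
[Service]
User=ubuntu
ExecStart=/usr/bin/python /home/ubuntu/myTemp/iot_local.py
With User=ubuntu, ~ resolves to /home/ubuntu again, so boto3 finds /home/ubuntu/.aws/credentials where aws configure wrote it.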

gunicorn status is active (exited) but showing not running in monit

I am trying to set up a new Django project on a server from scratch (Django + Gunicorn + nginx) and I have everything correct except the init scripts for Gunicorn.
If I run the gunicorn command manually it works and I can view my site at the IP address, but when I try to do service gunicorn start it gives me this output and it doesn't work:
gunicorn-project.service
Loaded: loaded (/etc/init.d/gunicorn-project; bad; vendor preset: enabled)
Active: active (exited) since Thu 2016-11-17 04:23:56 UTC; 17min ago
Docs: man:systemd-sysv-generator(8)
Process: 1656 ExecStart=/etc/init.d/gunicorn-project start (code=exited, status=0/SUCCESS)
Tasks: 0
Memory: 0B
CPU: 0
Nov 17 04:23:56 project gunicorn-project[1656]: from multiprocessing import cpu_count
Nov 17 04:23:56 project gunicorn-project[1656]: /etc/gunicorn.d/gunicorn-project2.py:3: RuntimeWarning: Parent module '/
Nov 17 04:23:56 project gunicorn-project[1656]: from os import environ
Nov 17 04:23:56 project gunicorn-project[1656]: /etc/gunicorn.d/gunicorn-project3.py:2: RuntimeWarning: Parent module '
Nov 17 04:23:56 project gunicorn-project[1656]: from multiprocessing import cpu_count
Nov 17 04:23:56 project gunicorn-project[1656]: /etc/gunicorn.d/gunicorn-project3.py:3: RuntimeWarning: Parent module '
Nov 17 04:23:56 project gunicorn-project[1656]: from os import environ
Nov 17 04:23:56 project gunicorn-project[1656]: *
Nov 17 04:23:56 project systemd[1]: Started gunicorn-project.service.
Nov 17 04:25:01 project systemd[1]: Started gunicorn-project.service.
I cannot figure out why this is happening. This is the file referenced in the output:
"""gunicorn WSGI server configuration."""
from multiprocessing import cpu_count
from os import environ
def max_workers():
return cpu_count() * 2 + 1
max_requests = 1000
worker_class = 'gevent'
workers = max_workers()
errorlog = '/home/gunicorn-project/log/gunicorn/error.log'
accesslog = '/home/gunicorn-project/log/gunicorn/access.log'
I just had the very same error message thrown at me (were you also following a Digital Ocean tutorial?) and spent quite a few hours trying to figure it out. I don't know if you still need it, but perhaps I can spare someone else from wasting as much time as I did.
I managed to fix it (after several different attempts) by:
Stopping gunicorn:
sudo systemctl stop gunicorn
Changing permissions on gunicorn-project.service in both locations:
sudo chmod u+x /etc/systemd/system/multi-user.target.wants/gunicorn-project.service
sudo chmod u+x /etc/systemd/system/gunicorn-project.service
Manually deleting and rebuilding the gunicorn-project.service symlink:
unlink /etc/systemd/system/multi-user.target.wants/gunicorn-project.service
rm /etc/systemd/system/multi-user.target.wants/gunicorn-project.service
ln -s /etc/systemd/system/gunicorn-project.service /etc/systemd/system/multi-user.target.wants/gunicorn-project.service
Reloading gunicorn daemon:
sudo systemctl daemon-reload
Restarting gunicorn:
sudo systemctl start gunicorn
sudo systemctl enable gunicorn
And the status changed to active (running). Basically, at some point, I installed gunicorn without setting proper permissions. Once I redid those steps with the right permissions, gunicorn was able to execute as intended.
Note: my file was named gunicorn.service, which I believe is the default. I changed it to gunicorn-project.service to match the OP's terminology.
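To verify the fix, something like this (using the OP's unit name) should show a valid symlink and a running service:
ls -l /etc/systemd/system/multi-user.target.wants/gunicorn-project.service
sudo systemctl is-enabled gunicorn-project.service
sudo systemctl status gunicorn-project.service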

django gunicorn sock file not created by wsgi

I have a basic Django REST application on my DigitalOcean server (Ubuntu 16.04) with a local virtual environment.
The basic wsgi.py is:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "workout_rest.settings")
# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
I have followed step by step this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04
When I test Gunicorn's ability to serve the project with this command:
gunicorn --bind 0.0.0.0:8000 myproject.wsgi:application
All works well.
So I've tried to setup Gunicorn to use systemd service file.
My /etc/systemd/system/gunicorn.service file is:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ben
Group=www-data
WorkingDirectory=/home/ben/myproject
ExecStart=/home/ben/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/ben/myproject/myproject.sock myproject.wsgi:application
[Install]
WantedBy=multi-user.target
My Nginx configuration is:
server {
    listen 8000;
    server_name server_domain_or_IP;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/ben/myproject;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ben/myproject/myproject.sock;
    }
}
I've changed the listen port from 80 to 8000 because 80 gave me an err_connection_refused error.
After starting the server with this command:
sudo systemctl restart nginx
When I try to load my website, I get a 502 Bad Gateway error.
I've tried these commands (found in the tutorial comments):
sudo systemctl daemon-reload
sudo systemctl start gunicorn
sudo systemctl enable gunicorn
sudo systemctl restart nginx
but nothing changes.
When I take a look at the nginx logs with this command:
sudo tail -f /var/log/nginx/error.log
I can see that the sock file doesn't exist:
2016/10/07 09:00:18 [crit] 24974#24974: *1 connect() to unix:/home/ben/myproject/myproject.sock failed (2: No such file or directory) while connecting to upstream, client: 86.197.20.27, server: 139.59.150.116, request: "GET / HTTP/1.1", upstream: "http://unix:/home/ben/myproject/myproject.sock:/", host: "server_ip_adress:8000"
Why isn't this sock file created? How can I configure Django/Gunicorn to create it?
I have added gunicorn to INSTALLED_APPS in my Django project, but it doesn't change anything.
EDIT:
When I test the nginx config file with nginx -t, I get an error: open() "/run/nginx.pid" failed (13: Permission denied).
But if I run the command with sudo (sudo nginx -t), the test is successful. Does that mean I have to allow the 'ben' user to run nginx?
As for the gunicorn log files, I cannot find a way to read them. Where are they stored?
When I check whether gunicorn is running by using ps aux | grep gunicorn:
ben 26543 0.0 0.2 14512 1016 pts/0 S+ 14:52 0:00 grep --color=auto gunicorn
Here is what happens when I run the systemctl enable and start commands for gunicorn:
sudo systemctl enable gunicorn
Synchronizing state of gunicorn.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable gunicorn
sudo systemctl start gunicorn
I get no output with this command
sudo systemctl is-active gunicorn
active
sudo systemctl status gunicorn
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2016-10-06 15:40:29 UTC; 23h ago
Oct 06 15:40:29 DevUsine systemd[1]: Started gunicorn.service.
Oct 06 18:52:56 DevUsine systemd[1]: Started gunicorn.service.
Oct 06 20:55:05 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 20:55:17 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:07:36 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:16:42 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:21:38 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:25:28 DevUsine systemd[1]: Started gunicorn daemon.
Oct 07 08:58:43 DevUsine systemd[1]: Started gunicorn daemon.
Oct 07 15:01:22 DevUsine systemd[1]: Started gunicorn daemon.
I had to change the permissions of my sock folder:
sudo chown ben:www-data /home/ben/myproject/
Another thing is that I have changed the sock location after reading in many posts that it's not good practice to keep the sock file in the Django project.
My new location is:
/home/ben/run/
Don't forget to change permissions:
sudo chown ben:www-data /home/ben/run/
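With the new location, the matching changes would look roughly like this (a sketch; only the socket path differs from the files in the question):
# gunicorn.service, [Service] section
ExecStart=/home/ben/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/ben/run/myproject.sock myproject.wsgi:application
# nginx, inside location /
proxy_pass http://unix:/home/ben/run/myproject.sock;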
To be sure that gunicorn is refreshed, run these commands:
pkill gunicorn
sudo systemctl daemon-reload
sudo systemctl start gunicorn
That will kill the gunicorn processes and start new ones.
You can run this command to make the process start at server boot:
sudo systemctl enable gunicorn
All works well now.
While the accepted answer works, there is one (imo major) issue with it, which is that the gunicorn web server is (probably) running as root, which is not recommended. The reason you end up needing to chown the socket is because it is owned by root:root, because that is the user/group your init job assumes by default. There are multiple ways to get your job to assume another role. As of this time (with gunicorn 19.9.0), in my opinion, the simplest solution to this is to use the --user and --group flags provided as part of the gunicorn command. This means your server can start with the user/group you specify. In your case:
exec gunicorn --user ben --group www-data --bind unix:/home/ben/myproject/myproject.sock -m 007 wsgi
will start gunicorn under the ben:www-data user and create a socket owned by ben:www-data with permissions 770, i.e. read/write/execute for the user ben and the group www-data on the socket, which is exactly what you need in this case.
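If you would rather keep this in the unit file than on the gunicorn command line, the same effect can be had with the User=, Group=, and UMask= directives; a sketch of the [Service] section:
[Service]
User=ben
Group=www-data
UMask=0007
ExecStart=/home/ben/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/ben/myproject/myproject.sock myproject.wsgi:application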
I gave the sock file a path outside my project. I just needed to create the directory so that gunicorn could create the file inside it, since I had mentioned that path in the .service file. Basically, I made sure that every directory in the path given in the .service file existed. No need to change permissions or ownership.
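A minimal sketch of that, assuming the socket directory from the earlier answer:
mkdir -p /home/ben/run
# gunicorn creates myproject.sock inside this directory at startup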
Try running:
sudo systemctl daemon-reload
sudo systemctl start gunicorn
sudo systemctl status gunicorn.service
The last command helped me get the .sock file re-created.

Run Python program as service on Ubuntu 16.04 inside a virtual environment

I'm trying to get a Flask + SocketIO app running as a service on Ubuntu 16.04, inside a virtual environment. My server is restarted every day at 3 am (outside of my control), so I need it to automatically launch on startup.
Running the script by itself works fine:
$ python main.py
(29539) wsgi starting up on http://127.0.0.1:8081
I can tell that it's working because it's serving pages (through an nginx server set up by following this Stack Overflow answer, though I don't think that's relevant.)
Here's my /etc/systemd/system/opendc.service:
[Unit]
Description=OpenDC flask + socketio service
[Service]
Environment=PYTHON_HOME=/var/www/opendc.ewi.tudelft.nl/web-server/venv
Environment=PATH=$VIRTUAL_ENV/bin:$PATH
ExecStart=/var/www/opendc.ewi.tudelft.nl/web-server main.py
Restart=always
[Install]
WantedBy=multi-user.target
So when I try to get that going using:
$ sudo systemctl daemon-reload
$ sudo systemctl restart opendc
It doesn't serve pages anymore. The status shows:
$ sudo systemctl status opendc
* opendc.service - OpenDC flask + socketio service
Loaded: loaded (/etc/systemd/system/opendc.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Fri 2016-08-19 10:48:31 CEST; 15min ago
Process: 29533 ExecStart=/var/www/opendc.ewi.tudelft.nl/web-server main.py (code=exited, status=203/EXEC)
Main PID: 29533 (code=exited, status=203/EXEC)
Aug 19 10:48:31 opendc.ewi.tudelft.nl systemd[1]: opendc.service: Service hold-off time over, scheduling restart.
Aug 19 10:48:31 opendc.ewi.tudelft.nl systemd[1]: Stopped OpenDC flask + socketio service.
Aug 19 10:48:31 opendc.ewi.tudelft.nl systemd[1]: opendc.service: Start request repeated too quickly.
Aug 19 10:48:31 opendc.ewi.tudelft.nl systemd[1]: Failed to start OpenDC flask + socketio service.
I've looked up (code=exited, status=203/EXEC) and done some troubleshooting with what I found:
I checked that main.py is executable:
$ ls -l main.py
-rwxr-xr-x 1 leon leon 2007 Aug 19 10:46 main.py
And that main.py has this first line to point to Python in the virtual environment:
#!/var/www/opendc.ewi.tudelft.nl/web-server/venv/bin/python
So what's the problem here?
A tried and tested way of making a Python file run inside a virtual environment as a service:
[Unit]
Description=Your own description
After=network.target
[Service]
Type=simple
WorkingDirectory=/home/path/to/WorkingDirectory/
Environment=VIRTUAL_ENV=/home/path/to/WorkingDirectory/venv
Environment=PATH=$VIRTUAL_ENV/bin:$PATH
ExecStart=/home/path/to/WorkingDirectory/venv/bin/python app.py
Restart=on-failure
[Install]
WantedBy=multi-user.target
I am putting this one here so I can always come back to it.
I believe you made a typo: you set PYTHON_HOME but then use PATH=$VIRTUAL_ENV/bin:$PATH.
You should use PATH=$PYTHON_HOME/bin:$PATH.
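Applied to the unit in the question, the corrected lines would look something like this (a sketch; systemd does not reliably perform shell-style $VAR expansion inside Environment= lines, so the paths are spelled out, and ExecStart points at the venv's python binary rather than at a directory - the WorkingDirectory line is an assumption about the project layout):
[Service]
Environment=PATH=/var/www/opendc.ewi.tudelft.nl/web-server/venv/bin:/usr/local/bin:/usr/bin:/bin
WorkingDirectory=/var/www/opendc.ewi.tudelft.nl/web-server
ExecStart=/var/www/opendc.ewi.tudelft.nl/web-server/venv/bin/python main.py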

Install python package for root user use

I've read all over, and I still can't get my python script to run in systemd.
Here is the shell script I use:
#! /bin/sh
cd /home/albert/speedcomplainer
/usr/bin/python speedcomplainer.py
I can execute the script (/usr/bin/speedcomplainer); it runs just fine from the command line. The Python script loops forever, checking my internet speeds. As I said, it runs fine, either directly from the command line (python ...) or from the shell script I created in /usr/bin.
But when I put it into this unit file:
# speedcomplainer - checks and tweets comcast speeds.
#
#
[Unit]
Description=Ethernet Speed Complainer
After=syslog.target network.target
[Service]
Type=simple
WorkingDirectory=/home/albert/speedcomplainer
ExecStart=/usr/bin/speedcomplainer
Restart=always
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
It fails to start up (sudo systemctl start speedcomplainer.service) with this error:
speedcomplainer.service - Ethernet Speed Complainer
Loaded: loaded (/lib/systemd/system/speedcomplainer.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit) since Wed 2016-02-24 20:21:02 CST; 7s ago
Process: 25325 ExecStart=/usr/bin/speedcomplainer (code=exited, status=1/FAILURE)
Main PID: 25325 (code=exited, status=1/FAILURE)
I look at the log with journalctl -u speedcomplainer and see:
Feb 24 20:21:02 haven systemd[1]: Started Ethernet Speed Complainer.
Feb 24 20:21:02 haven speedcomplainer[25325]: Traceback (most recent call last):
Feb 24 20:21:02 haven speedcomplainer[25325]: File "speedcomplainer.py", line 9, in <module>
Feb 24 20:21:02 haven speedcomplainer[25325]: import twitter
Feb 24 20:21:02 haven speedcomplainer[25325]: ImportError: No module named twitter
Feb 24 20:21:02 haven systemd[1]: speedcomplainer.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:21:02 haven systemd[1]: speedcomplainer.service: Unit entered failed state.
Feb 24 20:21:02 haven systemd[1]: speedcomplainer.service: Failed with result 'exit-code'.
Feb 24 20:21:02 haven systemd[1]: speedcomplainer.service: Service hold-off time over, scheduling restart.
Feb 24 20:21:02 haven systemd[1]: Stopped Ethernet Speed Complainer
AHAHA!! An import error in the Python script.
But wait - it works from everywhere else. Why am I getting an import error only when it runs from systemd? (Answer: the module is installed locally. Next question:)
OK. After following the path that @jcomeau_ictx led me down, it seems that pip installed to my local user directory. How do I install modules for root to use?
OK. Thanks to jcomeau_ictx, I figured out the problem: pip installs locally by default. This post discusses in detail how to install system-wide (TL;DR: apt-get). That installed it for the root user. I didn't want to mess with a virtualenv, and it's only one module with few dependencies.
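For the record, a couple of ways to make a module importable by root (sketches; the apt package name is an assumption for this particular module):
# system-wide via apt, if your distro packages it:
sudo apt-get install python-twitter
# or system-wide via pip run as root:
sudo -H pip install twitter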
