Shutdown script not executing on Google Cloud VM - python

I have the following:
#! /usr/bin/python3.7
f=open("python_out.txt",'w',encoding='utf-8')
f.write("OK1")
import socket
import telegram
f.write("OK2")
BOT_TOKEN = "telegram BOT_TOKEN"
CHAT_ID = "chat_id"
bot = telegram.Bot(token=BOT_TOKEN)
host_name = socket.gethostname()
content = 'Machine name: %s is shutting down!' % host_name
bot.send_message(chat_id=CHAT_ID, text=content)
f.write("OK3")
I have checked my environment: I can make this script work by running python3 script.py on the instance. It sends the notification and writes python_out.txt.
I set this script in shutdown-script
But when I manually clicked the "stop" button, it did not work as expected. The same happens with the startup-script.
I have read many posts:
Shutdown script not executing on a Google Cloud VM
Reliably executing shutdown scripts in Google Compute Engine
Pro Tip: Use Shutdown Script Detect Preemption on GCP
Of course it also includes official documents:
https://cloud.google.com/compute/docs/shutdownscript
I want to try setting up powerbtn.sh, but I can't find /etc/acpi/ on GCP Ubuntu 16.04 LTS.
I can't find anything else to try. Any ideas?

When you use a startup script or shutdown script, it is executed by the root user, and the default working directory is /root/. This directory isn't writable, which is why nothing happens with your code.
Simply write your files to a writable directory and that's all.
Don't forget that the files you create are owned by the root user, and other users can't read and/or write files owned by root. Use chmod or chown to change this.
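For example, a minimal adjustment of the script above (the /tmp/ location and the explicit close are illustrative choices, not GCP requirements) writes to an absolute, writable path so the output no longer depends on the working directory:
#! /usr/bin/python3.7
import socket
import telegram

# write to an absolute path in a directory that root can write to
f = open("/tmp/python_out.txt", 'w', encoding='utf-8')
f.write("OK1\n")

BOT_TOKEN = "telegram BOT_TOKEN"
CHAT_ID = "chat_id"

bot = telegram.Bot(token=BOT_TOKEN)
host_name = socket.gethostname()
content = 'Machine name: %s is shutting down!' % host_name
bot.send_message(chat_id=CHAT_ID, text=content)

f.write("OK3\n")
f.close()  # flush the buffer before the VM stops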

Related

Automated Backups from EC2 to NAS using Rsync

I am trying to automate backups using a Raspberry Pi with a Python script that will rsync everything on the EC2 instances CAMSTEST1 and CAMSProd in their respective backup directories to the on-premises NAS.
Here is the script:
#!/usr/bin/python3
import subprocess
from os import path
# private key for AWS Servers
AWS_PRIVATE_KEY = "ARCS-Key-Pair-01.pem"
AWS_USER ="ubuntu"
# NAS backup directory
NAS_BACKUP_DIR = "192.168.1.128:/test"
NAS_MNT = "/mnt/nasper01/"
# CAMSTest1 Config
CAMSTEST1_USER = "ubuntu"
CAMSTEST1_IP = "52.62.119.203"
CAMSTEST1_DIR = "/mnt/backups/*"
CAMSTEST1_MNT = NAS_MNT + "camstest"
#CAMSProd Config
CAMSPROD_USER = "ubuntu"
CAMSPROD_IP = "54.206.28.116"
CAMSPROD_DIR = "/mnt/backups/*"
CAMSPROD_MNT = NAS_MNT + "camsprod"
# mount NAS
print("Mounting NAS")
subprocess.call(["mount","-t", "nfs", NAS_BACKUP_DIR, NAS_MNT])
print("NAS Mounted Successfully")
# backup CAMSTEST1
print("Backing Up CAMSTest1")
hostnamefs = "{user}@{ip}:{dir}".format(user=CAMSTEST1_USER,ip=CAMSTEST1_IP,dir=CAMSTEST1_DIR)
sshaccess = 'ssh -i {private_key}'.format(private_key=AWS_PRIVATE_KEY)
subprocess.call(["rsync","-P","-v","--rsync-path","sudo rsync","--remove-source-files","--recursive","-z","-e",sshaccess,"--exclude","/backup-script",hostnamefs, CAMSTEST1_MNT ])
print("Backed Up CAMSTest1 Successfully")
#backup CAMSPROD
print("Backing Up CAMSProd")
hostnamefs = "{user}@{ip}:{dir}".format(user=CAMSPROD_USER,ip=CAMSPROD_IP,dir=CAMSPROD_DIR)
sshaccess = 'ssh -i {private_key}'.format(private_key=AWS_PRIVATE_KEY)
subprocess.call(["rsync","-P","-v","--rsync-path", "sudo rsync","--remove-source-files","--recursive","-z","-e",sshaccess,"--exclude","/backup-script","--exclude","/influxdb-backup", "--exclude", "/db-backup-only",hostnamefs, CAMSPROD_MNT ])
print("Backed Up CAMSProd Successfully")
Here is the cronjob:
0 0 * * 0 sudo python3 /home/pi/backup/backupscript.py >> /home/pi/backup/backuplog
The script works perfectly when run manually from the terminal. However, it does not work from the cronjob. It runs without errors, but nothing is copied from the EC2 instances to the NAS. Could anyone explain why it's not working with a cronjob but is working in the terminal?
EDIT
Here is the output of the backup script log, with no errors:
Last RunTime:
2021-10-31 00:00:02.191447
Mounting NAS
NAS Mounted Successfully
Backing Up ARCSWeb02
Backed up ARCWeb02 Successfully
Backing Up CAMSTest1
Backed Up CAMSTest1 Successfully
Backing Up CAMSProd
Backed Up CAMSProd Successfully
Fetching origin
Last RunTime:
2021-11-07 00:00:02.264703
Mounting NAS
NAS Mounted Successfully
Backing Up ARCSWeb02
Backed up ARCWeb02 Successfully
Backing Up CAMSTest1
Backed Up CAMSTest1 Successfully
Backing Up CAMSProd
Backed Up CAMSProd Successfully
Rather than a problem with the script itself, which seems to work fine, I would suggest running the cron job directly as root by editing root's crontab with crontab -e, or by adding a file to /etc/cron.d/ that names root (or another user that can execute the script) in the job definition, without using the sudo keyword.
Here are some references on that matter:
https://askubuntu.com/questions/173924/how-to-run-a-cron-job-using-the-sudo-command
https://serverfault.com/questions/352835/crontab-running-as-a-specific-user
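For example, a file dropped into /etc/cron.d/ could look like this (the file name backupscript is just an illustration; note the extra user field after the schedule, which /etc/cron.d/ entries require):
# /etc/cron.d/backupscript
0 0 * * 0 root python3 /home/pi/backup/backupscript.py >> /home/pi/backup/backuplog 2>&1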
When you run rsync in a subprocess, it picks up a different $PATH from the non-interactive shell; therefore, rsync might not be in its path. Try using the full path to rsync in your subprocess calls.
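A sketch of that change, using the variable names from the script above and assuming rsync lives at /usr/bin/rsync on the Pi (check with which rsync):
# use the absolute path so cron's limited $PATH doesn't matter
RSYNC = "/usr/bin/rsync"
sshaccess = 'ssh -i {private_key}'.format(private_key=AWS_PRIVATE_KEY)
subprocess.call([RSYNC, "-P", "-v", "--rsync-path", "sudo rsync",
                 "--remove-source-files", "--recursive", "-z",
                 "-e", sshaccess,
                 "--exclude", "/backup-script",
                 hostnamefs, CAMSTEST1_MNT])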
I have a few suggestions:
As suggested before, run the cronjob as the root user (no sudo).
Remove python3 from the cronjob; instead make your script executable with chmod +x /home/pi/backup/backupscript.py and run it as:
0 0 * * 0 /home/pi/backup/backupscript.py >> /home/pi/backup/backuplog
Just a suggestion: I think a shell script would be more appropriate here than Python.

Supervisor service doesn't have access to audio device ("Error while setting open_pcm: No such file or directory")

I'm using Supervisor to daemonize a Python / Liquidsoap application. When I start the application from the command line, things work fine.
When I run the same application using supervisorctl, the Liquidsoap implementation fails when trying to access the audio device:
[lineout:3] Using ALSA 1.1.8.
[clock.wallclock_alsa:2] Error when starting output lineout: Failure("Error while setting open_pcm: No such file or directory")!
The USB audio interface is accessed via ALSA. The Supervisor configuration has the correct user set, and the service is started as this very user:
[program:aura-engine]
user = engineuser
directory = /opt/aura/engine
command = /opt/aura/engine/run.sh engine
priority = 666
autostart = true
autorestart = true
stopsignal = TERM
redirect_stderr = true
stdout_logfile = /var/log/aura/engine-core-stdout.log
stderr_logfile = /var/log/aura/engine-core-error.log
Any ideas if there are any additional hardware permission issues involved when using Supervisord?
It turned out that starting the application as root (root user in the Supervisor config, but also starting supervisord as root, plus starting the service with sudo supervisorctl start ...) successfully grants access to the audio hardware. But running the app as root is not an option, and Supervisor also issues a warning about it.
Then I reverted the configuration to the desired engineuser and reloaded the configuration with sudo:
sudo supervisorctl reload
Now, suddenly I'm able to start the app without root/sudo and have full access to the audio hardware:
supervisorctl start aura-engine
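To confirm the program really is running under the non-root user after the reload, a quick check (program name taken from the config above):
sudo supervisorctl status aura-engine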

Start django app as service

I want to create a service that starts with Ubuntu and is able to use Django models etc.
This service will create a util.WorkerThread and wait for data in main.py:
if __name__ == '__main__':
bot.polling(none_stop=True)
How can I do this? I just don't know what I need to look for.
If you can also tell me how to create an Ubuntu autostart service for a script like that, please do :)
P.S. The whole Django project runs via uWSGI in emperor mode.
The easiest way, in my opinion, is to create a script and run it from crontab.
First of all, create a script to start your Django app:
#!/bin/bash
cd /path/to your/virtual environment #path to your virtual environment
. bin/activate #Activate your virtual environment
cd /path/to your/project directory #After that go to your project directory
python manage.py runserver #run django server
Save the script and open crontab with the command:
crontab -e
Now edit the crontab file and write on the last line:
@reboot path/to/your/script.sh
This is not the best way, but it is the easiest if you are not comfortable with creating Linux startup services.
I hope this helps you :)
Take a look at supervisord. It is much easier than daemonizing a Python script yourself.
Configure it with something like this:
[program:watcher]
command = /usr/bin/python /path/to/main.py
stdout_logfile = /var/log/main-stdout.log
stdout_logfile_maxbytes = 10MB
stdout_logfile_backups = 5
stderr_logfile = /var/log/main-stderr.log
stderr_logfile_maxbytes = 10MB
stderr_logfile_backups = 5
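After saving that config (for example to /etc/supervisor/conf.d/watcher.conf; the exact path depends on your installation), tell supervisord to pick it up and start the program:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start watcher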
OK, here is the answer: https://www.raspberrypi-spy.co.uk/2015/10/how-to-autorun-a-python-script-on-boot-using-systemd/
In newer Ubuntu versions, service .conf files in /etc/init fail with the error Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused.
But services work using systemd.
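A minimal systemd unit along the lines of that article might look like this; the unit name, user and all paths are placeholders for your own project and virtualenv:
# /etc/systemd/system/watcher.service
[Unit]
Description=Bot polling worker
After=network.target

[Service]
User=your-user
WorkingDirectory=/path/to/your/project
ExecStart=/path/to/your/virtualenv/bin/python /path/to/your/project/main.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
Then load and enable it with sudo systemctl daemon-reload followed by sudo systemctl enable --now watcher.service.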

Jenkins on Windows gets stuck on Fabric remote command when deploying Python app

I have this Jenkins build configuration for my Django application in "Execute Windows batch command" field:
:: Code is downloaded using the Git plugin
virtualenv data/.venv
call data/.venv/Scripts/activate.bat
pip install -r requirements/local.txt
cd src/
python .\manage.py test
cd ..
fabric dev deploy // The build job gets stuck here
All steps work OK except the last one. Jenkins gets stuck on Fabric's first attempt to connect to the remote server. In the "Console output" the spinner keeps spinning and I have to kill the build manually.
When I run the Fabric task manually from the CLI, it works. I read about some problems with Jenkins and known_hosts, so I tried env.reject_unknown_hosts = True in the fabfile to see if there is an "Add to authorized keys" prompt.
The fabfile is pretty standard, nothing special:
@task
def dev():
    env.user = "..."
    env.hosts = "..."
    env.key_filename = "..."
    env.reject_unknown_hosts = True

@task
def deploy():
    local("python src/manage.py check") # <---- OK, output is in Jenkins
    run('git reset --hard') # <---- Jenkins will freeze
    run('git pull --no-edit origin master')
    # etc ....
    print("Done.")
These require a password; the process is probably stuck asking for the user's password.
Add --no-pty to the command to make sure it doesn't block, so the error actually gets reported.
It can then be solved based on your specific remote/SSH/TTY setup.
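A sketch of that in the fabfile, assuming Fabric 1.x, where run() accepts a pty keyword (the fab CLI also has a global --no-pty flag that does the same thing):
# disable the pseudo-terminal for the calls that hang under Jenkins
run('git reset --hard', pty=False)
run('git pull --no-edit origin master', pty=False)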

Print to file Beanstalk Worker Tier (Python)

I asked something similar to this question and I haven't gotten any responses that help. So, I have decided to simplify things as much as I can with the following:
I have developed a Python Flask application and deployed it to a Beanstalk worker tier Python environment. The issue is that I can't figure out how to print, log, or write anything anywhere. I need to debug this application, and the only way I know how to do that is by printing to the console or a log file to see exactly what is going on. When I run the application locally I can print to the console, write to files, and log with zero problems; it is only when I deploy it to the Beanstalk environment that nothing happens. I have SSHed into the EC2 instance where the application is deployed and searched practically every file, and I find that nothing was written by my Python script anywhere.
This question probably seems absolutely stupid, but can someone please provide me with an example of a Python Flask application that will run on a Beanstalk worker environment and just prints "Hello World" to some file that I can find on the EC2 instance? Please include what should be written in the requirements.txt file and any *.config files in the .ebextensions folder.
Thank You
Here is another simple Python app that you can try. The one in the blog post will work as well, but this shows a minimal example of an app that prints messages received from SQS to a file on the EC2 instance.
Your app source folder should have the following files:
application.py
import os
import time
import flask
import json

application = flask.Flask(__name__)
start_time = time.time()
counter_file = '/tmp/worker_role.tmp'

@application.route('/', methods=['GET', 'POST'])
def hello_world():
    if flask.request.method == 'POST':
        with open(counter_file, 'a') as f:
            f.write(flask.request.data + "\n")
    return flask.Response(status=200)

if __name__ == '__main__':
    application.run(host='0.0.0.0', debug=True)
requirements.txt
Flask==0.9
Werkzeug==0.8.3
.ebextensions/01-login.config
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: EC2KeyName
    value: your-key-name
Launch a worker tier 1.1 environment with a Python 2.7 solution stack. I tested with 64bit Amazon Linux 2014.03 v1.0.4 running Python 2.7.
Wait for the environment to go green. After it goes green, click on the queue URL shown in the console. This will take you to the SQS console page. Right-click on the queue and click on "Send a message". Then type the following message: {"hello" : "world"}.
SSH to the EC2 instance and open the file /tmp/worker_role.tmp. You should be able to see your message in this file.
Make sure you have IAM policies correctly configured for using Worker Role environments.
For more information on IAM policies refer this answer: https://stackoverflow.com/a/23942498/161628
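Before deploying, a quick local smoke test of the handler above can confirm the file actually gets written; a sketch assuming application.py from above is on the import path:
# smoke_test.py - exercise the POST handler locally, no SQS involved
from application import application, counter_file

client = application.test_client()
client.post('/', data='{"hello": "world"}')

with open(counter_file) as f:
    print(f.read())  # should contain the posted message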
There is a python+flask on beanstalk example on AWS Application Management Blog:
http://blogs.aws.amazon.com/application-management/post/Tx1Y8QSQRL1KQZC/Elastic-Beanstalk-Video-Tutorial-Worker-Tier
http://blogs.aws.amazon.com/application-management/post/Tx36JL4GPZR4G98/A-Sample-App-For-Startups
For the logging issues, I'd suggest the following:
Check your /var/log/eb-cfn-init.log (and the other log files in that directory); if a .config command is failing, you will see which one and why there.
In your .config commands, output messages to a different log file so you can see exactly where your bootstrap failed, in your own file.
Add your application log file to EB Log Snapshots (/opt/elasticbeanstalk/tasks/taillogs.d/) and EB S3 log rotation (/opt/elasticbeanstalk/tasks/publishlogs.d/). See the other files in these directories for examples.
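For example, a .ebextensions config along these lines registers a custom log file with the tail-logs task (the file name worker-app.conf and the log path are illustrative; the taillogs.d entries are plain files that simply list log paths):
files:
  "/opt/elasticbeanstalk/tasks/taillogs.d/worker-app.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      /tmp/worker_role.tmp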
