I am trying to automate backups using a Raspberry Pi with a Python script that will rsync everything in the backup directories on the EC2 instances CAMSTEST1 and CAMSProd to the respective directories on the on-premise NAS.
Here is the script:
#!/usr/bin/python3
import subprocess
from os import path
# private key for AWS Servers
AWS_PRIVATE_KEY = "ARCS-Key-Pair-01.pem"
AWS_USER ="ubuntu"
# NAS backup directory
NAS_BACKUP_DIR = "192.168.1.128:/test"
NAS_MNT = "/mnt/nasper01/"
# CAMSTest1 Config
CAMSTEST1_USER = "ubuntu"
CAMSTEST1_IP = "52.62.119.203"
CAMSTEST1_DIR = "/mnt/backups/*"
CAMSTEST1_MNT = NAS_MNT + "camstest"
#CAMSProd Config
CAMSPROD_USER = "ubuntu"
CAMSPROD_IP = "54.206.28.116"
CAMSPROD_DIR = "/mnt/backups/*"
CAMSPROD_MNT = NAS_MNT + "camsprod"
# mount NAS
print("Mounting NAS")
subprocess.call(["mount","-t", "nfs", NAS_BACKUP_DIR, NAS_MNT])
print("NAS Mounted Successfully")
# backup CAMSTEST1
print("Backing Up CAMSTest1")
hostnamefs = "{user}#{ip}:{dir}".format(user=CAMSTEST1_USER,ip=CAMSTEST1_IP,dir=CAMSTEST1_DIR)
sshaccess = 'ssh -i {private_key}'.format(private_key=AWS_PRIVATE_KEY)
subprocess.call(["rsync","-P","-v","--rsync-path","sudo rsync","--remove-source-files","--recursive","-z","-e",sshaccess,"--exclude","/backup-script",hostnamefs, CAMSTEST1_MNT ])
print("Backed Up CAMSTest1 Successfully")
#backup CAMSPROD
print("Backing Up CAMSProd")
hostnamefs = "{user}#{ip}:{dir}".format(user=CAMSPROD_USER,ip=CAMSPROD_IP,dir=CAMSPROD_DIR)
sshaccess = 'ssh -i {private_key}'.format(private_key=AWS_PRIVATE_KEY)
subprocess.call(["rsync","-P","-v","--rsync-path", "sudo rsync","--remove-source-files","--recursive","-z","-e",sshaccess,"--exclude","/backup-script","--exclude","/influxdb-backup", "--exclude", "/db-backup-only",hostnamefs, CAMSPROD_MNT ])
print("Backed Up CAMSProd Successfully")
Here is the cron job:
0 0 * * 0 sudo python3 /home/pi/backup/backupscript.py >> /home/pi/backup/backuplog
The script works perfectly when run manually from the terminal. However, it does not work from the cron job: it runs without errors, but nothing is copied from the EC2 instances to the NAS. Could anyone explain why it works in the terminal but not from cron?
EDIT
Here is the output of the backup script's log, showing no errors:
Last RunTime:
2021-10-31 00:00:02.191447
Mounting NAS
NAS Mounted Successfully
Backing Up ARCSWeb02
Backed up ARCWeb02 Successfully
Backing Up CAMSTest1
Backed Up CAMSTest1 Successfully
Backing Up CAMSProd
Backed Up CAMSProd Successfully
Fetching origin
Last RunTime:
2021-11-07 00:00:02.264703
Mounting NAS
NAS Mounted Successfully
Backing Up ARCSWeb02
Backed up ARCWeb02 Successfully
Backing Up CAMSTest1
Backed Up CAMSTest1 Successfully
Backing Up CAMSProd
Backed Up CAMSProd Successfully
Rather than a problem with the script itself, which seems to work fine, I would suggest running the cron job directly as root by editing root's crontab with crontab -e, or by adding a file to /etc/cron.d/ that names root (or another user that can execute the script directly) in the job definition, without the sudo keyword.
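For example, a drop-in file under /etc/cron.d/ carries an extra user field, so sudo is not needed. A sketch based on the cron line from the question (the filename and the 2>&1 redirection are just illustrations):
# /etc/cron.d/backupscript -- hypothetical drop-in file; note the extra user field ("root")
0 0 * * 0 root /usr/bin/python3 /home/pi/backup/backupscript.py >> /home/pi/backup/backuplog 2>&1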
Here are some references on that matter:
https://askubuntu.com/questions/173924/how-to-run-a-cron-job-using-the-sudo-command
https://serverfault.com/questions/352835/crontab-running-as-a-specific-user
When the script runs from cron, rsync is invoked in a subprocess that picks up the non-interactive shell's $PATH, so rsync might not be on it. Try using the full path to rsync in your subprocess calls.
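A minimal sketch of that change, reusing the variables from the script above and assuming rsync lives at /usr/bin/rsync on the Pi (confirm with which rsync):
RSYNC = "/usr/bin/rsync"  # assumed location; confirm with `which rsync`
subprocess.call([RSYNC, "-P", "-v", "--rsync-path", "sudo rsync",
                 "--remove-source-files", "--recursive", "-z",
                 "-e", sshaccess, "--exclude", "/backup-script",
                 hostnamefs, CAMSTEST1_MNT])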
I have a few suggestions:
As suggested before, run the cron job as the root user (no sudo).
Remove python3 from the cron job; instead, make your script executable with chmod +x /home/pi/backup/backupscript.py and run it as:
0 0 * * 0 /home/pi/backup/backupscript.py >> /home/pi/backup/backuplog
Just a suggestion, but I think a shell script would be more appropriate here instead of Python.
I have the following:
#! /usr/bin/python3.7
f=open("python_out.txt",'w',encoding='utf-8')
f.write("OK1")
import socket
import telegram
f.write("OK2")
BOT_TOKEN = "telegram BOT_TOKEN"
CHAT_ID = "chat_id"
bot = telegram.Bot(token=BOT_TOKEN)
host_name = socket.gethostname()
content = 'Machine name: %s is shutting down!' % host_name
bot.send_message(chat_id=CHAT_ID, text=content)
f.write("OK3")
I have checked my environment: the script works when I run it on the instance with python3 script.py, sending the notification and writing python_out.txt.
I set this script as the instance's shutdown-script, but when I manually clicked the "Stop" button, it did not run as expected. The same happens with the startup-script.
I have read many posts:
Shutdown script not executing on a Google Cloud VM
Reliably executing shutdown scripts in Google Compute Engine
Pro Tip: Use Shutdown Script Detect Preemption on GCP
Of course, I also read the official documentation:
https://cloud.google.com/compute/docs/shutdownscript
I want to try setting up powerbtn.sh, but I can't find /etc/acpi/ on GCP Ubuntu 16.04 LTS.
I can't find anything else to try; any ideas?
When you use a startup script or shutdown script, the user that executes it is root, and the default working directory is /root/. That directory isn't writable, which is why nothing happens with your code.
Simply write your files to a writable directory and that's all.
Don't forget that the files you create are owned by root, and other users can't read and/or write files written by root. Use chmod or chown to change this.
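A minimal sketch of that fix for the script above, assuming /tmp as the writable location (any absolute path the script can write to works):
import os
import socket

OUT_PATH = "/tmp/python_out.txt"  # assumed writable location; replace with any directory root can write to
with open(OUT_PATH, "w", encoding="utf-8") as f:
    f.write("Machine name: %s is shutting down!\n" % socket.gethostname())
os.chmod(OUT_PATH, 0o644)  # optional: let non-root users read the root-owned file afterwards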
I've got a program that needs to automatically install & manage some Docker containers on Windows with minimal user input.
It needs to automatically setup Docker to mount arbitrary Windows folders. It needs to do this from a clean install, where the Docker VM cannot be assumed to have been created.
Docker by default will allow almost any folder in C:\Users to mount through to its Boot2Docker image, which in turn makes them available for mounting into Docker images themselves.
I'd like a way to automatically modify the default mount script from outside the VM so that I can use other folders, but "VBoxManage.exe run", copyto, etc. commands don't work on Boot2Docker in any way, unlike other Linux VMs I have.
So, in my quest for a solution, I stumbled upon py-vbox, which lets you easily send keyboard events to the console using the VirtualBox API. It also allows for direct console sessions, but they fail just like VBoxManage.exe does. So, this ended with me sending lots of
echo command >> /c/script.sh
commands over the keyboard in order to setup a script that will mount the extra volumes. Is there a better way?
For anyone who might need it, here's a very simplified version of what goes on. The first two bits are the old .bat files, so that they apply to anyone. First, to create our docker VM:
set PATH=%PATH%;"c:\Program Files (x86)\Git\bin"
docker-machine create --driver virtualbox my-docker-vm
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" sharedfolder add "my-docker-vm" --name "c/myfolder" --hostpath "c:\myfolder" --automount
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" setextradata "my-docker-vm" VBoxInternal2/SharedFoldersEnableSymlinksCreate/c/myfolder 1
Then, the docker VM must be started...
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm --type=headless my-docker-vm
set PATH=%PATH%;"c:\Program Files (x86)\Git\bin"
docker-machine env --shell cmd my-docker-vm > temp.cmd
call temp.cmd
del temp.cmd
Now, a simplified version of the Python script to write a simplified mount script into the VM via the keyboard using py-vbox:
import virtualbox
script = """\n\
echo if [ ! -d /c/myfolder ] > /c/script.sh\n\
echo then >> /c/script.sh\n\
echo mkdir -p /c/myfolder >> /c/script.sh\n\
echo mount -t vboxsf c/myfolder /c/myfolder >> /c/script.sh\n\
echo fi >> /c/script.sh\n\
chmod +x /c/script.sh\n\
/bin/sh /c/script.sh\n\
rm /c/script.sh\n\
"""
my_vm_name = 'my-docker-vm'
def mount_folder():
    vbox = virtualbox.VirtualBox()
    is_there = False
    for vmname in vbox.machines:
        if str(vmname) == my_vm_name:
            is_there = True
            break
    if is_there is False:
        raise RuntimeError("VM %s not found" % my_vm_name)
    vm = vbox.find_machine(my_vm_name)
    session = vm.create_session()
    session.console.keyboard.put_keys(script)
As discussed in the comments:
The C:\Users folder is shared with the VM using the shared folders feature of VirtualBox. Just add another shared folder and you are done. This is possible from the command line via VBoxManage sharedfolder add <uuid|vmname> --name <name> --hostpath <path> [--transient] [--readonly] [--automount]. You probably need to restart the VM afterwards.
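Since the rest of the setup in the question is already scripted, the same command can be driven from Python as well; a sketch, assuming the VM name and VBoxManage path used above, with a hypothetical extra folder:
import subprocess

VBOXMANAGE = r"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe"  # path as used in the question
VM_NAME = "my-docker-vm"

# Add another shared folder; do this while the VM is powered off (or pass --transient for a running VM),
# then restart the VM so the guest sees it.
subprocess.check_call([VBOXMANAGE, "sharedfolder", "add", VM_NAME,
                       "--name", "c/otherfolder",        # hypothetical share name
                       "--hostpath", r"c:\otherfolder",  # hypothetical host folder
                       "--automount"])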
Another option in newer Windows versions is to just mount whatever folder you want somewhere inside the C:\Users folder, e.g. C:\Users\myuser\dockerdata.
The entire script runs fine. I will also note that if I copy and paste the cron job into the shell and run it manually it works with no issues.
import os
import subprocess

Base = '/home/user/git/'
GIT_out = Base + ("git_file.txt")
FILE_NAME = Base + 'rules/file.xml'
CD_file = open(Base + "rules/reports/CD.txt", 'r')
os.chdir(Base + 'rules')
gitFetchPull = "git fetch --all ;sleep 3 ; git pull --all"
git1 = subprocess.Popen(gitFetchPull, shell=True, stdout=subprocess.PIPE)
gitOut = git1.stdout.read()
print(gitOut)
When I read the output from cron, it appears it is not able to authenticate:
Received disconnect from 172.17.3.18: 2: Too many authentication failures for tyoffe4
fatal: The remote end hung up unexpectedly
error: Could not fetch origin
Here is the cron job:
* * * /usr/bin/python /home/tyoffe4/git/rules/reports/cd_release.py >/home/tyoffe4/git/rules/reports/cd_release.out 2>&1
This is likely an issue of the cron environment not having the environment variables set up by your ssh agent. Therefore when git makes an ssh connection, it can't authenticate, because it can't contact your ssh agent and get keys.
This answer probably has what you're looking for:
ssh-agent and crontab -- is there a good way to get these to meet?
If for some reason it's not ssh-agent related, try printing os.environ at the top of your script to dump the value of all environment variables.
Compare the output from cron and running env in your bash shell. There are likely some differences, and one of them is the source of your error.
If you set up the same environment variables in your shell as you have in cron, the behavior should reproduce.
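A minimal sketch of that check, added at the top of cd_release.py (SSH_AUTH_SOCK is the variable the ssh-agent suggestion above revolves around):
import os

# Dump the environment cron actually provides; compare it with the output of `env` in your shell.
for key in sorted(os.environ):
    print("%s=%s" % (key, os.environ[key]))

# SSH_AUTH_SOCK is usually missing under cron, so git's ssh cannot reach your agent.
print("SSH_AUTH_SOCK:", os.environ.get("SSH_AUTH_SOCK"))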
I am working with a Django app called django-mailbox. Its purpose is to import email messages via POP3 and other protocols and store them in a database. I want to do this at regular intervals via a cron job. The documentation http://django-mailbox.readthedocs.org/en/latest/topics/polling.html states:
Using a cron job
You can easily consume incoming mail by running the management command named getmail (optionally with an argument of the name of the mailbox you'd like to get the mail for):
python manage.py getmail
Now I can run this at the command line locally and it works, but if this were deployed to an outside server that was only accessible by a URL, how would this command be given?
If you are using a virtualenv, use the Python binary from the virtualenv:
* * * * * /path/to/virtualenv/bin/python /path/to/project/manage.py management_command
on the server machine:
$ sudo crontab -l
no crontab for root
$ sudo crontab -e
no crontab for root - using an empty one
Select an editor. To change later, run 'select-editor'.
1. /bin/ed
2. /bin/nano <---- easiest
3. /usr/bin/vim.basic
4. /usr/bin/vim.tiny
Choose 1-4 [2]:
Choose your preferred editor, then see http://en.wikipedia.org/wiki/Cron for how to schedule when the command will run. Point the job at some .sh file on your machine, and make sure you give full paths, as it is going to run in the root user's context.
The script that cron runs may look something like this:
#!/bin/bash
cd /absolute/path/to/django/project
/usr/bin/python ./manage.py getmail
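The matching crontab entry could then point at that script; a sketch with hypothetical paths and a ten-minute interval:
*/10 * * * * /home/you/getmail.sh >> /var/log/getmail.log 2>&1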
I'm working on a python script that monitors a directory and uploads files that have been created or modified using scp. That's fine, except I want this to be done recursively, and I'm having a problem if a user creates a directory in the watch directory, and then modifies a file inside that new directory.
I can detect the directory creation and the nested file creation/modification just fine. But if I try to upload that file to the remote server, it won't work, since the directory on the remote side won't exist. Is there a simple way to do this WITHOUT recursively copying the created directory? I want to avoid that because I don't want to delete the remote folder if it already exists.
Also, please don't suggest rsync. It has to only use ssh and scp.
Since you have ssh, can't you just create the directory first? For example, given a file with absolute path /some/path/file.txt, issue a mkdir -p /some/path before uploading file.txt.
UPDATE: If you're looking to lower the number of transactions, a better method might be to make a tar file locally, transfer that, and untar it.
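A minimal sketch of the mkdir-then-scp idea using subprocess; the host and paths are placeholders:
import subprocess

def upload(local_path, remote_dir, host="user@example.com"):
    # mkdir -p is harmless if the directory already exists, so nothing on the remote side is touched.
    subprocess.check_call(["ssh", host, "mkdir", "-p", remote_dir])
    subprocess.check_call(["scp", local_path, "%s:%s/" % (host, remote_dir)])

# e.g. upload("/watch/newdir/file.txt", "/backup/newdir")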
While I imagine your specific application will have its own quirks (as does mine), this may put you on the right path. Below is a short snippet from a script I use to put files onto a remote EC2 instance using Fabric, which is built on paramiko. Also note where I put the sudo commands, as Fabric has its own "sudo" class; this is one of those quirks I was referring to. Hope this helps someone.
from fabric.api import env, run, put, settings, cd
from fabric.contrib.files import exists
'''
sudo apt-get install fabric
Initially setup for interaction with an AWS EC2 instance
At the terminal prompt run:
fab ec2 makeRemoteDirectory changePermissions putScript
'''
TARGETPATH = '/your/path/here'
def ec2():
    env.hosts = ['your EC2 Instance or remote address']
    env.user = 'user_name'
    env.key_filename = '/path/to/your/private_key.pem'

def makeRemoteDirectory():
    if not exists('%s'%TARGETPATH):
        run('sudo mkdir %s'%TARGETPATH)

def changePermissions():
    run('sudo chown -R %(user)s:%(user)s %(path)s'%{'user': env.user, 'path': TARGETPATH})

def putScript():
    fileName = '/path/to/local/file'
    dirName = TARGETPATH
    put(fileName, dirName)
It's not exactly scp, but sftp can take the -b parameter with a batch file. You can send a mkdir and a put.
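A sketch of that approach, driving sftp in batch mode from Python; the host and paths are placeholders, and the leading '-' makes sftp ignore the mkdir error if the directory already exists:
import os
import subprocess
import tempfile

def sftp_upload(local_path, remote_dir, host="user@example.com"):
    # Create the remote directory and upload one file in a single sftp session.
    batch = "-mkdir %s\nput %s %s/\n" % (remote_dir, local_path, remote_dir)
    with tempfile.NamedTemporaryFile("w", suffix=".sftp", delete=False) as f:
        f.write(batch)
        batch_file = f.name
    try:
        subprocess.check_call(["sftp", "-b", batch_file, host])
    finally:
        os.unlink(batch_file)

# e.g. sftp_upload("/watch/newdir/file.txt", "/backup/newdir")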