Description of what I do / what I want:
Remotely run a script on our servers (from Ubuntu 14.04 with Python 2.7.6;
I would like to execute scripts on Ubuntu 12.04 & Ubuntu 14.04). This script will mount encrypted volumes, set the correct hostname, and confirm that the servers are in the staging environment.
What I did:
1. I created an EXPECT script (scripts/auto_mount.sh). Thanks to this I am able to run an interactive script that automatically types the encrypted partition's password, sets the hostname, and confirms that the servers are in the staging environment (a hedged sketch of what such a script automates appears after this list).
2. I created a FABRIC function
a. In this function I set env.user & env.password
b. This function runs update & upgrade of the OS, and copies and runs the EXPECT script (to mount the encrypted volume):
import os
import sys
import time
import boto
import deployment
import getpass
from fabric.api import env, sudo, put, run

env.user = 'my_user'
env.password = 'my_sudo_pass'

def staging_auto_mount():
    # install expect
    sudo('apt-get update', pty=False)
    sudo('apt-get -y dist-upgrade', pty=False)
    sudo('apt-get -y autoclean', pty=False)
    sudo('apt-get -y install expect', pty=False)
    # auto mount EBS volume
    sudo('mkdir -p /scripts/', pty=False)
    put('scripts/auto_mount.sh', '/scripts/', use_sudo=True, mode=0755)
    run('/scripts/auto_mount.sh')
    sudo('rm -rf /scripts/')
3. I created a BASH script (/home/scripts/staging_server01.sh) that just contains the command to run the Fabric function:
cd /scripts/ && /usr/local/bin/fab staging_auto_mount -H XXX.xxx.XXX.xxx
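For reference, here is a minimal pexpect sketch of the kind of interaction such an auto-mount script automates. The command, device path, prompt text, and passphrase are placeholder guesses, not the contents of the real scripts/auto_mount.sh:

import pexpect

# spawn the interactive mount command and answer its passphrase prompt
child = pexpect.spawn('cryptsetup luksOpen /dev/xvdf secure_vol')  # placeholder device/name
child.expect('Enter passphrase')
child.sendline('volume_password')  # placeholder passphrase
child.expect(pexpect.EOF)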
My issue is the following:
When I run it manually, everything works fine. I can run just the BASH script (see 3. above):
/home/scripts/staging_server01.sh
Or I can run the Fabric function directly:
cd /scripts/ && /usr/local/bin/fab staging_auto_mount -H XXX.xxx.XXX.xxx
But when I put it into crontab, I have a problem.
Here is how I set the crontab job; I tried several things:
08 07 * * * cd /scripts/ && /usr/local/bin/fab staging_auto_mount -H XXX.xxx.XXX.xxx --password='my_SUDO_PASS'
08 07 * * * cd /scripts/ && /usr/local/bin/fab staging_auto_mount -H XXX.xxx.XXX.xxx
08 07 * * * /home/scripts/staging_server01.sh
(On my local environment I use the same user with the same sudo permissions as on the staging servers. Both users have the same password for my testing. I also tried running the crontab job as a specific user.)
In the log I see this:
/usr/lib/python2.7/getpass.py:83: GetPassWarning: Can not control echo on the terminal.
passwd = fallback_getpass(prompt, stream)
Warning: Password input may be echoed.
I was thinking of using SSH keys and running the remote commands over SSH, but we recover servers often, so I would have to generate new keys very often. So this is not the way.
I did some investigation and rewrote my Fabric function, and I am stuck on this too. Honestly, I have a much bigger problem with the code below [I am not able to create the directory & copy the file (permission issue, or the log tells me the directory already exists), neither when I run it manually nor via crontab]:
import os
import sys
import time
import boto
import deployment
import getpass
import shutil
from fabric.api import env

env.user = 'my_user'
env.password = 'my_sudo_pass'
path = "/home/scripts/"

def staging_auto_mount():
    # install expect
    # (note: sudo -p only sets the prompt text; quoting "env.password" as a
    # string literal passed the variable's name, not its value)
    os.system("sudo -p " + env.password + " apt-get update")
    os.system("sudo -p " + env.password + " apt-get -y dist-upgrade")
    os.system("sudo -p " + env.password + " apt-get -y autoclean")
    os.system("sudo -p " + env.password + " apt-get -y install expect")
Via crontab I get this error:
Message-Id: <20151215212901.3404CA0B02@vm01-ubuntu>
Date: Tue, 15 Dec 2015 15:29:01 -0600 (CST)
sudo: no tty present and no askpass program specified
sudo: no tty present and no askpass program specified
sudo: no tty present and no askpass program specified
sudo: no tty present and no askpass program specified
[XXX.xxx.XXX.xxx] Executing task 'staging_auto_mount'
Regarding the error above, I tried to set a specific NOPASSWD entry in visudo (e.g. a line like my_user ALL=(ALL) NOPASSWD: ALL), but the issue persisted.
I will be very grateful for any help.
Here the -S option is suggested:
echo <password> | sudo -S apt-get smth
sudo -S reads the password from stdin.
Here is advice about removing Defaults requiretty from the /etc/sudoers file.
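For instance, a minimal Python sketch of the -S approach; the package name and password are placeholders, and in practice you would read the password from a protected file rather than hard-coding it:

import subprocess

password = 'my_sudo_pass'  # placeholder

# -S makes sudo read the password from stdin instead of a terminal
proc = subprocess.Popen(['sudo', '-S', 'apt-get', '-y', 'install', 'expect'],
                        stdin=subprocess.PIPE)
proc.communicate(password + '\n')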
Related
I am attempting to execute sudo from a python script:
import os
command = 'id'
os.popen("sudo -S %s"%(command), 'w').write('password')
My server is configured to not allow non-TTY shells to execute sudo for security reasons. Is there a workaround to execute the sudo command using Python (without using su)?
The above code outputs:
sudo: sorry, you must have a tty to run sudo
Based on this: https://stackoverflow.com/a/21444480/1216776
you should be able to do:
import os
import pty

command = 'id'
scmd = "sudo -S %s" % command

def reader(fd):
    return os.read(fd, 1024)

sent = [False]
def writer(fd):
    # supply the password once, then signal EOF
    if sent[0]:
        return ''
    sent[0] = True
    return 'password\n'

# argv must be a list; a bare string would be treated as a single program name
pty.spawn(scmd.split(), reader, writer)
I am using fabric to automate some deployment stuff. Below is a sample of the code I used:
run(f"sudo -H -u www-data bash -c 'rm -r project_name' ")
run(f"sudo -H -u www-data bash -c '/opt/www-data/project-name/bin/pip install -r requirements.txt' ")
run("sudo systemctl stop gunicorn")
run("sudo systemctl start gunicorn")
Every time each line of code ran, the terminal asked for my user password. Is there a way I can enter the password just once?
Edit:
I am using python3, and the essence of the script is to run the commands as a different user, rather than my own.
Update:
I achieved this by running fabric with "-I" param.
fabric -I deploy
Using run is not the ideal way to achieve this.
fabric.operations.sudo(*args, **kwargs) is something that can be used to achieve what you are attempting.
Please be careful with sudo :)
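A minimal Fabric 1.x-style sketch of the sudo() operation (the task body and names are illustrative); Fabric caches env.password, so you are prompted at most once per session:

from fabric.api import env, sudo

env.user = 'deploy_user'  # placeholder
# either set env.password here, or pass -I / --initial-password-prompt to fab

def deploy():
    sudo('rm -r project_name', user='www-data')
    sudo('systemctl stop gunicorn')
    sudo('systemctl start gunicorn')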
Every run() invocation is a separate shell, as would be a sudo() invocation. The sudo credentials are per shell, so they are gone every time.
A quick and dirty way would be to lump all commands into one sudo invocation.
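For illustration, the lumped version might look like the following sketch, reusing the command strings from the question; chaining them in a single shell means a single credential check per run() call:

from fabric.api import run

def deploy():
    # one shell and one sudo credential prompt instead of one per command
    run("sudo -H -u www-data bash -c '"
        "rm -r project_name && "
        "/opt/www-data/project-name/bin/pip install -r requirements.txt'")
    run("sudo bash -c 'systemctl stop gunicorn && systemctl start gunicorn'")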
A nicer way would be to have a sudoers file on the target host(s) and give each user the required privileges to run particular commands without entering a password.
You can create a fab script like the one below and then iterate over the list of hosts you want to run the commands on. You can pass the username and password in the script itself to avoid the password prompt:
#!/usr/bin/python2.7
# testCheck.py
import sys
from fabric.api import *

env.skip_bad_hosts = True
env.command_timeout = 160
env.user = 'user_name'
env.shell = "/bin/sh -c"
env.warn_only = True
env.password = 'user_password'

def readhost():
    env.hosts = [line.strip() for line in sys.stdin.readlines()]

def hosts():
    with settings(warn_only=True):
        output = sudo("ls -l /myfolder", shell=False)

# cat hostfile.txt | /usr/local/bin/fab readhost -f testCheck.py hosts -P -z 5
OR supplying password at command line
# cat hostfile.txt | /usr/local/bin/fab readhost -f testCheck.py --password=your_pass hosts -P -z 5
--> argument "-P" refers to parallel execution method
--> argument "-z" refres to the number of concurrent processes to use in parallel mode
Example hostfile.txt:
server1
server2
server3
server4
Hope this will help.
If you are using ssh keys, then set the fabric environment variable key_filename:
env.key_filename = '/path/to/key.pem'
# set the following as well
env.user = 'username'
env.hosts = ['hostaddr']
It will ask you for the password only one time.
Have a look at this question about avoiding the sudo password prompt when using fabric.
I created an SSH agent to provide my key to the ssh/scp commands when connecting to my server.
I also scripted ssh-add with the 'expect' command to type my passphrase when it's needed.
This works perfectly with my user "user".
But I'm executing a python script that uses /dev/mem and needs to be run as root through sudo. This python script calls another bash script with ssh and scp commands inside.
Therefore all these commands are executed as root, and my agent/ssh-add no longer works: it keeps asking for the passphrase for each file.
How could I fix that? I don't want to log in as root and run an agent as root.
I tried sudo -u user ssh but it doesn't work (i.e. it still asks for my passphrase).
Any ideas?
Thanks in advance,
Mat
EDIT: my code:
The Python script needing sudo:
#!/usr/bin/env python2.7
import RPi.GPIO as GPIO
import time
import subprocess
from subprocess import call
from datetime import datetime
import picamera
import os
import sys

GPIO.setmode(GPIO.BCM)
# GPIO 23 set up as input. It is pulled up to stop false signals
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# set path and time to create the folder where the images will be saved
pathtoscript = "/home/pi/python-scripts"
current_time = time.localtime()[0:6]
dirfmt = "%4d-%02d-%02d-%02d-%02d-%02d"
dirpath = os.path.join(pathtoscript, dirfmt)
localdirname = dirpath % current_time[0:6]    # dirname created with date and time
remotedirname = dirfmt % current_time[0:6]    # remote dirname created with date and time
os.mkdir(localdirname)                        # mkdir
pictureName = localdirname + "/image%02d.jpg" # path + name of pictures

var = 1
while var == 1:
    try:
        GPIO.wait_for_edge(23, GPIO.FALLING)
        with picamera.PiCamera() as camera:
            # camera.capture_sequence(["/home/pi/python-scripts/'dirname'/image%02d.jpg" % i for i in range(2)])
            camera.capture_sequence([pictureName % i for i in range(19)])
            camera.close()
        cmd = '/home/pi/python-scripts/picturesToServer {0} &'.format(remotedirname)
        call([cmd], shell=True)
    except KeyboardInterrupt:
        GPIO.cleanup()  # clean up GPIO on CTRL+C exit
GPIO.cleanup()  # clean up GPIO on normal exit
The bash script:
#!/bin/bash
cd $1
ssh user@server mkdir /home/repulsion/picsToAnimate/"$1" >/dev/null 2>&1
ssh user@server cp "$1"/* /home/repulsion/picsToAnimate/"$1"/ >/dev/null 2>&1
for i in $( ls ); do
    scp $i user@server:/home/repulsion/picsToAnimate/"$1"/ >/dev/null 2>&1
done
You will need the SSH agent environment variables to be passed in through the sudo.
To do so, you can run sudo -E to pass all environment variables in through sudo; but this can be dangerous, so it's probably better to pass just the ones you need. The easiest way to do this is for sudo to invoke env to invoke the given program with the appropriate environment variables set:
$ sudo env SSH_AGENT_PID=$SSH_AGENT_PID SSH_AUTH_SOCK=$SSH_AUTH_SOCK my-script
The environment variables needed for ssh-agent are removed by sudo. See here for how to keep them (typically a Defaults env_keep += "SSH_AUTH_SOCK" line in sudoers).
But why do you have ssh-add type the passphrase for you instead of just having an SSH key with no passphrase? You can remove it with:
ssh-keygen -p [-P old_passphrase] [-N new_passphrase] [-f keyfile]
I would like to run a python cron job inside of a docker container in detached mode. My set-up is below:
My python script is test.py
#!/usr/bin/env python
import datetime
print "Cron job has run at %s" %datetime.datetime.now()
My cron file is my-crontab
* * * * * /test.py > /dev/console
and my Dockerfile is
FROM ubuntu:latest
RUN apt-get update && apt-get install -y software-properties-common python-software-properties && apt-get update
RUN apt-get install -y python cron
ADD my-crontab /
ADD test.py /
RUN chmod a+x test.py
RUN crontab /my-crontab
ENTRYPOINT cron -f
What are the potential problems with this approach? Are there other approaches and what are their pros and cons?
Several issues that I faced while trying to get a cron job running in a docker container were:
time in the docker container is in UTC not local time;
the docker environment is not passed to cron;
as Thomas noted, cron logging leaves a lot to be desired and accessing it through docker requires a docker-based solution.
There are both cron-specific issues and docker-specific issues in the list, but in any case they have to be addressed to get cron working.
To that end, my current working solution to the problem posed in the question is as follows:
Create a docker volume to which all scripts running under cron will write:
# Dockerfile for test-logs
# BUILD-USING: docker build -t test-logs .
# RUN-USING: docker run -d -v /t-logs --name t-logs test-logs
# INSPECT-USING: docker run -t -i --volumes-from t-logs ubuntu:latest /bin/bash
FROM stackbrew/busybox:latest
# Create logs volume
VOLUME /var/log
CMD ["true"]
The script that will run under cron is test.py:
#!/usr/bin/env python
# python script which needs an environment variable and runs as a cron job
import datetime
import os
test_environ = os.environ["TEST_ENV"]
print "Cron job has run at %s with environment variable '%s'" %(datetime.datetime.now(), test_environ)
In order to pass the environment variable to the script that I want to run under cron, follow Thomas' suggestion and put a crontab fragment for each script (or group of scripts) that needs a docker environment variable into /etc/cron.d, with a placeholder XXXXXXX which must be set.
# placed in /etc/cron.d
# TEST_ENV is a docker environment variable that the script test.py needs
TEST_ENV=XXXXXXX
#
* * * * * root python /test.py >> /var/log/test.log
Instead of calling cron directly, wrap cron in a python script that does two things: it reads the environment variable from the docker environment and sets it in the crontab fragment, then it runs cron.
#!/usr/bin/env python
# run-cron.py
# sets environment variables in crontab fragments and runs cron
import os
from subprocess import call
import fileinput

# read docker environment variables and set them in the appropriate crontab fragment
environment_variable = os.environ["TEST_ENV"]
for line in fileinput.input("/etc/cron.d/cron-python", inplace=1):
    print line.replace("XXXXXXX", environment_variable),  # trailing comma: the line already ends with '\n'

args = ["cron", "-f", "-L 15"]
call(args)
The Dockerfile for the container in which the cron jobs run is as follows:
# BUILD-USING: docker build -t test-cron .
# RUN-USING docker run --detach=true --volumes-from t-logs --name t-cron test-cron
FROM debian:wheezy
#
# Set correct environment variables.
ENV HOME /root
ENV TEST_ENV test-value
RUN apt-get update && apt-get install -y software-properties-common python-software-properties && apt-get update
# Install Python Setuptools
RUN apt-get install -y python cron
RUN apt-get purge -y python-software-properties software-properties-common && apt-get clean -y && apt-get autoclean -y && apt-get autoremove -y && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ADD cron-python /etc/cron.d/
ADD test.py /
ADD run-cron.py /
RUN chmod a+x test.py run-cron.py
# Set the time zone to the local time zone
RUN echo "America/New_York" > /etc/timezone && dpkg-reconfigure --frontend noninteractive tzdata
CMD ["/run-cron.py"]
Finally, create the containers and run them:
Create the log volume (test-logs) container: docker build -t test-logs .
Run log volume: docker run -d -v /t-logs --name t-logs test-logs
Create the cron container: docker build -t test-cron .
Run the cron container: docker run --detach=true --volumes-from t-logs --name t-cron test-cron
To inspect the log files of the scripts running under cron: docker run -t -i --volumes-from t-logs ubuntu:latest /bin/bash. The log files are in /var/log.
Here is a complement to rosksw's answer.
There is no need to do some string replacement in the crontab file in order to pass environment variables to the cron jobs.
It is simpler to store the environment variables in a file when running the container, then load them from this file at each cron execution. I found the tip here.
In the dockerfile:
CMD mkdir -p /data/log && env > /root/env.txt && crond -n
In the crontab file:
* * * * * root env - `cat /root/env.txt` my-script.sh
Adding crontab fragments in /etc/cron.d/ instead of using root's crontab might be preferable.
This would:
Let you add additional cron jobs by adding them to that folder.
Save you a few layers.
Emulate how Debian distros do it for their own packages.
Observe that the format of those files is a bit different from a crontab entry. Here's a sample from the Debian php package:
# /etc/cron.d/php5: crontab fragment for php5
# This purges session files older than X, where X is defined in seconds
# as the largest value of session.gc_maxlifetime from all your php.ini
# files, or 24 minutes if not defined. See /usr/lib/php5/maxlifetime
# Look for and purge old sessions every 30 minutes
09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] && [ -x /usr/lib/php5/sessionclean ] && [ -d /var/lib/php5 ] && /usr/lib/php5/sessionclean /var/lib/php5 $(/usr/lib/php5/maxlifetime)
Overall, from experience, running cron in a container does work very well (besides cron logging leaving a lot to be desired).
Here's an alternative solution.
in Dockerfile
ADD docker/cron/my-cron /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron
ADD docker/cron/entrypoint.sh /etc/entrypoint.sh
ENTRYPOINT ["/bin/sh", "/etc/entrypoint.sh"]
in entrypoint.sh
#!/usr/bin/env bash
printenv | cat - /etc/cron.d/my-cron > ~/my-cron.tmp \
&& mv ~/my-cron.tmp /etc/cron.d/my-cron
cron -f
We are using the solution below. It supports both the docker logs functionality and keeping the cron process as the container's main process, so the restart policy still applies (with the tail -f workarounds provided above, if cron crashes, docker will not follow the restart policy):
cron.sh:
#!/usr/bin/env bash
printenv | cat - /etc/cron.d/cron-jobs > ~/crontab.tmp \
&& mv ~/crontab.tmp /etc/cron.d/cron-jobs
chmod 644 /etc/cron.d/cron-jobs
tail -f /var/log/cron.log &
cron -f
Dockerfile:
RUN apt-get install --no-install-recommends -y -q cron
ADD cron.sh /usr/bin/cron.sh
RUN chmod +x /usr/bin/cron.sh
ADD ./crontab /etc/cron.d/cron-jobs
RUN chmod 0644 /etc/cron.d/cron-jobs
RUN touch /var/log/cron.log
ENTRYPOINT ["/bin/sh", "/usr/bin/cron.sh"]
crontab:
* * * * * root <cmd> >> /var/log/cron.log 2>&1
And please don't forget the trailing newline in your crontab file; without it, cron silently ignores the last line.
Here is my checklist for debugging cron python scripts in docker:
Make sure you run the cron command somewhere. Cron doesn't start automatically. You can run it from a Dockerfile using RUN or CMD, or add it to a startup script for the container. In case you use CMD, you may consider using the cron -f flag, which keeps cron in the foreground and won't let the container die. However, I prefer using tail -f on log files.
Store environment variables in /etc/environment. Run this from a bash start script: printenv > /etc/environment. This is an absolute must if you use environment variables inside of python scripts. Cron doesn't know anything about the environment variables by default, but it can read them from /etc/environment.
Test Cron by using the following config:
* * * * * echo "Cron works" >>/home/code/test.log
* * * * * bash -c "/usr/local/bin/python3 /home/code/test.py >>/home/code/test.log 2>/home/code/test.log"
The python test file should contain some print statements or something else that shows the script is running. 2>/home/code/test.log will also log errors. Otherwise, you won't see errors at all and will just keep guessing.
Once done, go to the container, using docker exec -it <container_name> bash and check:
That crontab config is in place using crontab -l
Monitor logs using tail -f /home/code/test.log
I have spent hours and days figuring out all of those problems. I hope this helps someone to avoid this.
Don't mix crond into your base image. Prefer a native solution for your language (schedule or crython, as said by Anton), or decouple it. By decoupling I mean keeping things separated, so you don't have to maintain an image whose only purpose is to be the fusion of python and crond.
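For the native-solution route, here is a minimal sketch with the schedule library (assumes pip install schedule; the job body is illustrative):

import time
import schedule

def job():
    print("Hello world from python")

# run the job once a minute, entirely inside the python process
schedule.every().minute.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)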
You can use Tasker, a task runner that has cron (a scheduler) support, to solve it, if you want keep things decoupled.
Here is a docker-compose.yml file that will run some tasks for you:
version: "2"
services:
tasker:
image: strm/tasker
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
environment:
configuration: |
logging:
level:
ROOT: WARN
org.springframework.web: WARN
sh.strm: DEBUG
schedule:
- every: minute
task: helloFromPython
tasks:
docker:
- name: helloFromPython
image: python:3-slim
script:
- python -c 'print("Hello world from python")'
Just run docker-compose up, and see it working. Here is the Tasker repo with the full documentation:
http://github.com/opsxcq/tasker
Single Container Method
You may run crond within the same container that is doing something closely related, using a base image that handles PID 1 well, like phusion/baseimage.
Specialized Container Method
It may be cleaner to have another container linked to it that just runs crond. For example:
Dockerfile
FROM busybox
ADD crontab /var/spool/cron/crontabs/www-data
CMD crond -f
crontab
* * * * * echo $USER
Then run:
$ docker build -t cron .
$ docker run --rm --link something cron
Note: in this case it'll run the job as www-data. You cannot just mount the crontab file as a volume, because it needs to be owned by root with write access for root only, else crond will run nothing. Also, you'll have to run crond as root.
Another possibility is to use Crython. Crython allows you to regularly schedule a python function from within a single python script / process. It even understands cron syntax:
@crython.job(expr='0 0 0 * * 0 *')
def job():
    print "Hello world"
Using crython avoids the various headaches of running crond inside a docker container - your job is now a single process that wakes up when it needs to, which fits better into the docker execution model. But it has the downside of putting the scheduling inside your program, which isn't always desirable. Still, it might be handy in some use cases.
Currently I'm working on a Python script to run embedded shell scripts; the OS contains root and a normal_user.
The problem is when I try to switch from normal_user to root using a single command in one line.
I may only modify the code, nothing like visudo, to achieve this, and must use su only, without giving normal_user any rights outside the code.
How can I achieve this?
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('172.16.x.x', username='my_name', password='my_password')
stdin, stdout, stderr = ssh.exec_command('switch to root user && execute commands as root')
stdin.flush()
print stdout.readlines()
ssh.close()
Assuming the OS environment has sudo command installed and normal_user is allowed to sudo to root without a password:
stdin, stdout, stderr = ssh.exec_command('sudo -i -H -- echo $USER ; echo $USER')
Option -i simulates a root login (prepares .profile, etc. environment variables); option -H sets the $HOME directory to that of root. These may or may not be necessary depending on your use case.
The echo $USER example shows that you are executing that command as root (it should return $USER = root).
As a security measure, you can set normal_user to have sudo rights only if the user logs in with a correct SSH key.
http://www.evans.io/posts/ssh-agent-for-sudo-authentication/ (complex)
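For the su-only constraint in the question, a hedged sketch: request a pseudo-terminal and answer su's password prompt on stdin. The host, user names, and passwords are placeholders:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('172.16.x.x', username='my_name', password='my_password')

# su needs a tty to ask for the password, hence get_pty=True
stdin, stdout, stderr = ssh.exec_command("su - root -c 'whoami'", get_pty=True)
stdin.write('root_password\n')  # placeholder root password
stdin.flush()
print stdout.read()
ssh.close()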