Called bash script doesn't start up GNU screen session - python

I have a problem with a backup script that is supposed to call a bash start/stop script, in which a "daemon" (via GNU screen) is managed. For the moment my python backup script is called via cron. The launch.sh script checks the given parameter: if "stop" is given, the script echoes "Stopping..." and runs the GNU screen command to shut down the session; the same goes for "start". If the script is called via subprocess.call(..., shell=True) in Python, the string is shown but the screen session remains untouched. If it gets called directly in bash, everything works fine.
#!/usr/bin/env python
'''
Created on 27.07.2013
BackUp Script v0.2
@author: Nerade
'''
import time
import os
from datetime import date
from subprocess import check_output
import subprocess

script_dir = '/home/minecraft/automated_backup'
#folders = ['/home/minecraft/staff']
folders = ['/home/minecraft/bspack2', '/home/minecraft/staff']
# log = 0
backup_date = date.today()
backup_dir = '/home/minecraft/automated_backup/' + backup_date.isoformat()

def main():
    global log
    init_log()
    init_dirs()
    for folder in folders:
        token = folder.split("/")
        stopCmd = folder + '/launch.sh stop'
        log.write("Stopping server %s...\n" % (token[3]))
        subprocess.call(stopCmd, shell=True)
        #print stopCmd
        while screen_present(token[3]):
            time.sleep(0.5)
        log.write("Server %s successfully stopped!\n" % (token[3]))
        specificPath = backup_dir + '/' + token[3]
        os.makedirs(specificPath)
        os.system("cp /home/minecraft/%s/server.log %s/server.log" % (token[3], specificPath))
        backup(folder, specificPath + '/' + backup_date.isoformat() + '.tar.gz')
    dumpDatabase(backup_dir)
    for folder in folders:
        token = folder.split("/")
        startCmd = folder + '/launch.sh start'
        log.write("Starting server %s...\n" % (token[3]))
        subprocess.call(startCmd, shell=True)
        time.sleep(1)
        log.write("%s\n" % screen_present(token[3]))  # log the boolean as text
        #print startCmd

def dumpDatabase(target):
    global log
    log.write("Dumping Database...\n")
    cmd = "mysqldump -uroot -p<password> -A --quick --result-file=%s/%s.sql" % (backup_dir, backup_date.isoformat())
    os.system(cmd)
    #print cmd

def backup(source, target):
    global log
    log.write("Starting backup of folder %s to %s\n" % (source, target))
    cmd = 'tar cfvz %s --exclude-from=%s/backup.conf %s' % (target, source, source)
    os.system(cmd)
    #print cmd

def screen_present(name):
    # screen -ls exits non-zero, so append "true" to keep check_output happy
    var = check_output(["screen -ls; true"], shell=True)
    if "." + name + "\t(" in var:
        return True
    else:
        return False

def init_log():
    global log
    log = open("%s/backup.log" % script_dir, 'a')
    log.write(
        "Starting script at %s\n" % time.strftime("%m/%d/%Y %H:%M:%S")
    )

def init_dirs():
    global backup_dir, log
    log.write("Checking and creating directories...\n")
    if not os.path.isdir(backup_dir):
        os.makedirs(backup_dir)

if __name__ == '__main__':
    main()
And the launch.sh:
#!/bin/sh
if [ $# -eq 0 ] || [ "$1" = "start" ]; then
    echo "Starting Server bspack2"
    screen -S bspack2 -m -d java -Xmx5G -Xms4G -jar mcpc-plus-legacy-1.4.7-R1.1.jar nogui
fi
if [ "$1" = "stop" ]; then
    screen -S bspack2 -X stuff 'stop\015'
    echo "Stopping Server bspack2"
fi
What's my problem here?

I'm sure by now you've solved this problem, but looking through your question I'd bet the answer is remarkably simple -- mcpc-plus-legacy-1.4.7-R1.1.jar isn't found by java, which fails, and subsequently screen terminates.
In launch.sh, screen will execute in the same directory as the calling script. In this case, your python script, when run by cron, will have an active directory of the running user's home directory (so root crontabs will run in /root/, for instance, and a user crontab in /home/username/).
The simple solution is just to add the following:
cd /home/minecraft/bspack2
as the second line of your launch.sh script, just after #!/bin/sh.
In the future, when interacting with screen, I'd recommend leveraging the -L parameter. This turns on autologging: by default, a file named "screenlog.0" is written in the current directory, giving you a log of activity during the screen session. This will allow you to debug screen problems with ease, and it helps encourage keeping track of the "current directory" while working with shell scripts, which makes finding the screen log output simple.
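Putting both suggestions together, the start branch of launch.sh might look like the following sketch (the cd path comes from the question; -L is optional but handy):

#!/bin/sh
# Make the working directory independent of whoever invokes the script
cd /home/minecraft/bspack2
if [ $# -eq 0 ] || [ "$1" = "start" ]; then
    echo "Starting Server bspack2"
    # -L enables autologging to screenlog.0 in the current directory
    screen -L -S bspack2 -m -d java -Xmx5G -Xms4G -jar mcpc-plus-legacy-1.4.7-R1.1.jar nogui
fi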

Related

Logging last Bash command to file from script

I write lots of small scripts to manipulate files on a Bash-based server. I would like to have a mechanism by which to log which commands created which files in a given directory. However, I don't just want to capture every input command, all the time.
Approach 1: a wrapper script that uses a Bash builtin (a la history or fc -ln -1) to grab the last command and write it to a log file. I have not been able to figure out any way to do this, as the shell builtin commands do not appear to be recognized outside of the interactive shell.
Approach 2: a wrapper script that pulls from ~/.bash_history to get the last command. This, however, requires setting up the Bash shell to flush every command to history immediately (as per this comment) and seems also to require that the history be allowed to grow inexorably. If this is the only way, so be it, but it would be great to avoid having to edit the ~/.bashrc file on every system where this might be implemented.
Approach 3: use script. My problem with this is that it requires multiple commands to start and stop the logging, and because it launches its own shell it is not callable from within another script (or at least, doing so complicates things significantly).
I am trying to figure out an implementation that's of the form log_this.script other_script other_arg1 other_arg2 > file, where everything after the first argument is logged. The emphasis here is on efficiency and minimizing syntax overhead.
EDIT: iLoveTux and I both came up with similar solutions. For those interested, my own implementation follows. It is somewhat more constrained in its functionality than the accepted answer, but it also auto-updates any existing logfile entries with changes (though not deletions).
Sample usage:
$ cmdlog.py "python3 test_script.py > test_file.txt"
creates a log file in the parent directory of the output file with the following:
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
Additional file changes are added to the log:
$ cmdlog.py "python3 test_script.py > test_file_2.txt"
the log now contains
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
Running on the original file name again changes the file order in the log, based on modification time of the files:
$ cmdlog.py "python3 test_script.py > test_file.txt"
produces
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
2015-10-12#10:48:01 test_file.txt "python3 test_script.py > test_file.txt"
Full script:
#!/usr/bin/env python3
'''
A wrapper script that will write the command-line
args associated with any files generated to a log
file in the directory where the files were made.
'''
import sys
import os
from os import listdir
from os.path import isfile, join
import subprocess
import time
from datetime import datetime

def listFiles(mypath):
    """
    Return the paths of all files in mypath
    """
    return [join(mypath, f) for f in listdir(mypath) if
            isfile(join(mypath, f))]

def read_log(log_file):
    """
    Reads a file history log and returns a dictionary
    of {filename: command} entries.
    Expects tab-separated lines of [time, filename, command]
    """
    entries = {}
    with open(log_file) as log:
        for l in log:
            l = l.strip()
            mod, name, cmd = l.split("\t")
            # cmd = cmd.lstrip("\"").rstrip("\"")
            entries[name] = [cmd, mod]
    return entries

def time_sort(t, fmt):
    """
    Turn a strftime-formatted string into a datetime object
    """
    parsed = datetime.strptime(t, fmt)
    return parsed

ARGS = sys.argv[1]
ARG_LIST = ARGS.split()

# Guess where logfile should be put
if ">" in ARG_LIST or ">>" in ARG_LIST:
    # Get position after redirect in arg list
    redirect_index = max(ARG_LIST.index(e) for e in ARG_LIST if e in (">", ">>"))
    output = ARG_LIST[redirect_index + 1]
    output = os.path.abspath(output)
    out_dir = os.path.dirname(output)
elif "cp" in ARG_LIST or "mv" in ARG_LIST:
    output = ARG_LIST[-1]
    out_dir = os.path.dirname(output)
else:
    out_dir = os.getcwd()

# Set logfile location within the inferred output directory
LOGFILE = out_dir + "/cmdlog_history.log"

# Get file list state prior to running
all_files = listFiles(out_dir)
pre_stats = [os.path.getmtime(f) for f in all_files]

# Run the desired external commands
subprocess.call(ARGS, shell=True)

# Get done time of external commands
TIME_FMT = "%Y-%m-%d#%H:%M:%S"
log_time = time.strftime(TIME_FMT)

# Get existing entries from logfile, if present
if LOGFILE in all_files:
    logged = read_log(LOGFILE)
else:
    logged = {}

# Get file list state after run is complete
post_stats = [os.path.getmtime(f) for f in all_files]
post_files = listFiles(out_dir)

# Find files whose states have changed since the external command
changed = [e[0] for e in zip(all_files, pre_stats, post_stats) if e[1] != e[2]]
new = [e for e in post_files if e not in all_files]
all_modded = list(set(changed + new))

if not all_modded:  # exit early, no need to log
    sys.exit(0)

# Replace files that have changed, add those that are new
for f in all_modded:
    name = os.path.basename(f)
    logged[name] = [ARGS, log_time]

# Write changed files to logfile
with open(LOGFILE, 'w') as log:
    for name, info in sorted(logged.items(), key=lambda x: time_sort(x[1][1], TIME_FMT)):
        cmd, mod_time = info
        if not cmd.startswith("\""):
            cmd = "\"{}\"".format(cmd)
        log.write("\t".join([mod_time, name, cmd]) + "\n")
sys.exit(0)
You can use the tee command, which copies its standard input to a file and also writes it to standard output. Pipe the command line into tee, and pipe tee's output into a new invocation of your shell:
echo '<command line to be logged and executed>' | \
tee --append /path/to/your/logfile | \
$SHELL
i.e., for your example of other_script other_arg1 other_arg2 > file,
echo 'other_script other_arg1 other_arg2 > file' | \
tee --append /tmp/mylog.log | \
$SHELL
If your command line needs single quotes, they need to be escaped properly.
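For example, each embedded single quote is written as '\'' (close the quoted string, add an escaped quote, reopen the string); the file names here are just for illustration:
echo 'grep '\''hello world'\'' input.txt > matches.txt' | \
tee --append /tmp/mylog.log | \
$SHELL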
OK, so you don't mention Python in your question, but it is tagged Python, so I figured I would see what I could do. I came up with this script:
import sys
from os.path import expanduser, join
from subprocess import Popen, PIPE

def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()

home = expanduser("~")
log_file = join(home, "command_log")
command = sys.argv[1:]
with open(log_file, "a") as fout:
    fout.write("{}\n".format(" ".join(command)))
# join the argument list into one string so the shell sees a single command
out, err = issue_command(" ".join(command))
which you can call like (if you name it log_this and make it executable):
$ log_this echo hello world
and it will put "echo hello world" in the file ~/command_log. Note, though, that if you want to use pipes or redirection you have to quote your command (this may or may not be a real downfall for your use case, but I haven't figured out how to avoid the quotes just yet) like this:
$ log_this "echo hello world | grep h >> /tmp/hello_world"
but since it's not perfect, I thought I would add a little something extra.
The following script allows you to specify a different file to log your commands to as well as record the execution time of the command:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import argparse
from os.path import expanduser, join
from time import time

def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()

home = expanduser("~")
default_file = join(home, "command_log")

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", type=argparse.FileType("a"), default=default_file)
parser.add_argument("-p", "--profile", action="store_true")
parser.add_argument("command", nargs=argparse.REMAINDER)
args = parser.parse_args()

# join the remainder args into one string so the shell sees a single command
command = " ".join(args.command)
if args.profile:
    start = time()
    out, err = issue_command(command)
    runtime = time() - start
    entry = "{}\t{}\n".format(command, runtime)
    args.file.write(entry)
else:
    out, err = issue_command(command)
    entry = "{}\n".format(command)
    args.file.write(entry)
args.file.close()
You would use this the same way as the other script. If you want to specify a different file to log to, just pass -f <FILENAME> before your actual command and your log will go there; if you want to record the execution time, just provide -p (for profile) before your actual command, like so:
$ log_this -p -f ~/new_log "echo hello world | grep h >> /tmp/hello_world"
I will try to make this better. If you can think of anything else this could do for you, I am making a GitHub project for it where you can submit bug reports and feature requests.

permanently change directory python scripting/what environment do python scripts run in?

I have a small git_cloner script that clones my company's projects correctly. In all my scripts, I use a func that hasn't given me problems yet:
def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()
At the end of this individual script, I use:
call_sp('cd {}'.format(branch_path))
This line does not change the directory of the terminal I ran my script in to branch_path. In fact, even worse, it annoyingly asks me for my password! When I remove the cd yadayada line above, my script no longer demands a password before completing. I wonder:
1. How are these python scripts actually running, given that the cd command had no permanent effect? I assume the script spawns its own private subprocess, separate from what the terminal is doing, and then kills itself when the script finishes?
2. Based on how #1 works, how do I force my scripts to change the terminal's directory permanently, to save me time?
3. Why would merely running a change of directory ask me for my password?
The full script is below, thank you,
Cody
#!/usr/bin/env python
import subprocess
import sys
import time
from os.path import expanduser

home_path = expanduser('~')
project_path = home_path + '/projects'
d = {'cwd': ''}

#calling from script:
# ./git_cloner.py projectname branchname
# to make a new branch say ./git_cloner.py project branchname
#interactive:
# just run ./git_cloner.py

if len(sys.argv) == 3:
    project = sys.argv[1]
    branch = sys.argv[2]
if len(sys.argv) < 3:
    while True:
        project = raw_input('Enter a project name (i.e., mainworkproject):\n')
        if not project:
            continue
        break
    while True:
        branch = raw_input('Enter a branch name (i.e., dev):\n')
        if not branch:
            continue
        break

def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()

print "making new branch \"%s\" in project \"%s\"" % (branch, project)
this_project_path = '%s/%s' % (project_path, project)
branch_path = '%s/%s' % (this_project_path, branch)
d['cwd'] = project_path
call_sp('mkdir %s' % branch, **d)
d['cwd'] = branch_path
git_string = 'git clone ssh://git@git/home/git/repos/{}.git {}'.format(project, d['cwd'])
#see what you're doing to maybe need to cancel
print '\n'
print "{}\n\n".format(git_string)
call_sp(git_string)
time.sleep(30)
call_sp('git checkout dev', **d)
time.sleep(2)
call_sp('git checkout -b {}'.format(branch), **d)
time.sleep(5)
#...then I make some symlinks, which work
call_sp('cp {}/dev/settings.py {}/settings.py'.format(project_path, branch_path))
print 'dont forget "git push -u origin {}"'.format(branch)
call_sp('cd {}'.format(branch_path))
You cannot use Popen to change the current directory of the running script. Popen will create a new process with its own environment. If you do a cd within that, it will change directory for that running process, which will then immediately exit.
If you want to change the directory for the script you could use os.chdir(path), then all subsequent commands in the script will be run from that new path.
Child processes cannot alter the environment of their parents though, so you can't have a process you create change the environment of the caller.
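For instance, a minimal sketch of the os.chdir approach (the path is hypothetical):

import os
import subprocess

os.chdir('/home/user/projects/mybranch')  # changes cwd for this Python process only
subprocess.call('git status', shell=True)  # runs in the new working directory
# when the script exits, the invoking terminal's own cwd is unchanged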

pass python var to bash

I'm making a script to take pictures and write them to a folder created/named with the date and time.
I made this part to create the directory and take the pictures
import os
import time
# camera is assumed to be an already-initialised picamera camera object

pathtoscript = "/home/pi/python-scripts"
current_time = time.localtime()[0:6]
dirfmt = "%4d-%02d-%02d-%02d-%02d-%02d"
dirpath = os.path.join(pathtoscript, dirfmt)
dirname = dirpath % current_time[0:6]  # dirname created with date and time
os.mkdir(dirname)
pictureName = dirname + "/image%02d.jpg"  # path + name of pictures
camera.capture_sequence([pictureName % i for i in range(9)])
Then I would like to pass the dirname to a bash script (picturesToServer) which uploads the pictures to a server.
How can I do it?
cmd = '/home/pi/python-scripts/picturesToServer >/dev/null 2>&1 &'
call([cmd], shell=True)
Maybe I could stay in the python script and scp the pictures to the server? I have an ssh-agent with the passphrase set (ssh-add mykey).
Place the variable in the environment (it'll be available as a regular bash variable in the bash script, e.g. as VAR_NAME in the example below) by replacing your call with:
import subprocess
p = subprocess.Popen(cmd, shell=True, env={"VAR_NAME": dirname})
Or pass it as a positional argument (it'll be available in $1 in the script) by replacing your cmd with:
cmd = '/home/pi/python-scripts/picturesToServer >/dev/null 2>&1 "{0}" &'.format(dirname)
As a side note, consider not using shell = True when you call a subprocess. Using shell = True is a bad idea for a lot of reasons that are documented in the Python docs
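For instance, the positional-argument variant can be done without a shell at all. A sketch, still assuming picturesToServer reads the directory from $1:

import os
import subprocess

# Popen returns immediately, so the shell's trailing '&' is unnecessary
with open(os.devnull, 'w') as devnull:
    subprocess.Popen(['/home/pi/python-scripts/picturesToServer', dirname],
                     stdout=devnull, stderr=devnull)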

Python running synchronously? Running one executable at a time

Trying to use python to control numerous compiled executables, but running into timing issues! I need to be able to run two executables simultaneously, and also be able to 'wait' until an executable has finished prior to starting another one. Also, some of them require superuser. Here is what I have so far:
import os
sudoPassword = "PASS"
executable1 = "EXEC1"
executable2 = "EXEC2"
executable3 = "EXEC3"
filename = "~/Desktop/folder/"
commandA = filename+executable1
commandB = filename+executable2
commandC = filename+executable3
os.system('echo %s | sudo %s; %s' % (sudoPassword, commandA, commandB))
os.system('echo %s | sudo %s' % (sudoPassword, commandC))
print ('DONESIES')
Assuming that os.system() waits for the executable to finish prior to moving to the next line, this should run EXEC1 and EXEC2 simultaneously, and after they finish run EXEC3...
But it doesn't. Actually, it even prints 'DONESIES' in the shell before commandB even finishes...
Please help!
Your script will still execute all 3 commands sequentially. In shell scripts, the semicolon is just a way to put more than one command on one line. It doesn't do anything special, it just runs them one after the other.
If you want to run external programs in parallel from a Python program, use the subprocess module: https://docs.python.org/2/library/subprocess.html
Use subprocess.Popen to run multiple commands in the background. If you just want the program's stdout/err to go to the screen (or get dumped completely), it's pretty straightforward. If you want to process the output of the commands, that gets more complicated; you'd likely start a thread per command.
But here is the case that matches your example:
import os
import subprocess as subp

sudoPassword = "PASS"
executable1 = "EXEC1"
executable2 = "EXEC2"
executable3 = "EXEC3"
filename = os.path.expanduser("~/Desktop/folder/")
commandA = os.path.join(filename, executable1)
commandB = os.path.join(filename, executable2)
commandC = os.path.join(filename, executable3)

def sudo_cmd(cmd, password):
    p = subp.Popen(['sudo', '-S'] + cmd, stdin=subp.PIPE)
    p.stdin.write(password + '\n')
    p.stdin.close()
    return p

# run A and B in parallel
exec_A = sudo_cmd([commandA], sudoPassword)
exec_B = sudo_cmd([commandB], sudoPassword)
# wait for A before starting C
exec_A.wait()
exec_C = sudo_cmd([commandC], sudoPassword)
# wait for the stragglers
exec_B.wait()
exec_C.wait()
print('DONESIES')

Python SimpleXMLRPC fails when starting from Crontab but not in local shell

I am really puzzled by this issue. I am using Python's SimpleXMLRPC to provide services to a web application.
The problem is that when I start my xmlrpc server from the command line everything runs smoothly, but when it is started through crontab it doesn't work.
I have tried to delay the start-up via sleep and by checking /sys/class/net/eth0/operstate, but had no luck.
Please find attached the source for the script:
#!/usr/local/bin/python2.5
# -*- coding: utf-8 -*-
# License: GNU
# startxmlrpc.py: startup script for xmlrpc server to deal with processing

## {{{ http://code.activestate.com/recipes/439094/ (r1)
import socket
import fcntl
import struct

def get_ip_address(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return socket.inet_ntoa(fcntl.ioctl(
        s.fileno(),
        0x8915,  # SIOCGIFADDR
        struct.pack('256s', ifname[:15])
    )[20:24])
## end of http://code.activestate.com/recipes/439094/ }}}

import xmlrpclib
import urllib2
import os
from SimpleXMLRPCServer import SimpleXMLRPCServer
from time import sleep

def send(img1, img2, lib, filters):
    global HOST_IP
    path = '/var/www/%s/' % MD5Cypher(HOST_IP)
    makedirs(path)
    print "Path: %s" % path
    if lib == 'devel':
        os.system("""python ~/devel_funcs.py %s %s "%s" &""" % (img1_path, img2_path, filters))
    if lib == 'milena':
        import milena_funcs
        milena_funcs.mln_process(img1_path, filters)
    return HOST_IP + '/' + path.split('/var/www/')[1] + 'out.pgm'

while open('/sys/class/net/eth0/operstate').read().strip() != 'up':
    sleep(5)

HOST_IP = get_ip_address('eth0')
server = SimpleXMLRPCServer((HOST_IP, 7070))
server.register_function(send)
server.serve_forever()
This is the error I get if I try to launch my process just after a clean boot:
<class 'xmlrpclib.Fault'>: <Fault 1: "<class 'xmlrpclib.ProtocolError'>:<ProtocolError for 192.168.0.5:7070/RPC2: -1 >">
args = ()
faultCode = 1
faultString = "<class 'xmlrpclib.ProtocolError'>:<ProtocolError for 192.168.0.5:7070/RPC2: -1 >"
message = ''
If I kill it and run it again, it works.
This is the crontab:
usrmln#Slave1:~$ crontab -l
# m h dom mon dow command
* * * * * python ~/master_register.py > /dev/null 2>&1
* * * * * python ~/startxmlrpc.py > /dev/null 2>&1
0 5 * * * find /var/www/ -type d -mtime +3 -exec rm -rf {} \; > /dev/null 2>&1
You don't show what the error is, but it is possible that PYTHONPATH is not being set when run from cron. You could set it before running the script.
Or, of course, you are running it as a different user and file permissions are not correctly set. Also ~/devel_funcs.py will not refer to your home directory if cron runs your script as a different user.
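For instance, both points can be addressed directly in the crontab. A sketch with assumed paths, to be adapted:

PYTHONPATH=/home/usrmln/lib
* * * * * cd /home/usrmln && python startxmlrpc.py > /dev/null 2>&1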
I finally got it: I was using python 2.5 locally, so I had to invoke the interpreter explicitly in the crontab, like this:
/usr/local/bin/python2.5 /home/username/startxmlrpc.py > /dev/null 2>&1
