I have an application that runs multiple Python scripts in order. I can run them in docker-compose as follows:
command: >
  bash -c "python -m module_a &&
           python -m module_b &&
           python -m module_c"
Now I'm scheduling the job in Nomad, and I added the command below under the Docker driver configuration:
command = "/bin/bash"
args = ["-c", "python -m module_a", "&&","
"python -m module_b", "&&",
"python -m module_c"]
But Nomad seems to escape the &&: it just runs the first module and exits with code 0. Is there any way to run a multi-line command similar to the docker-compose one?
The following is guaranteed to work with the exec driver:
command = "/bin/bash"
args = [
"-c", ## next argument is a shell script
"for module; do python -m \"$module\" || exit; done", ## this is that script.
"_", ## passed as $0 to the script
"module_a", "module_b", "module_c" ## passed as $1, $2, and $3
]
Note that only a single argument is passed as a script -- the one immediately following -c. Subsequent arguments are arguments to that script, not additional scripts or script fragments.
Even simpler, you could run:
command = "/bin/bash"
args = ["-c", "python -m module_a && python -m module_b && python -m module_c" ]
Related
I have a Python script which I run on localhost and on development from the command line with an argument, something like python script.py development on development and python script.py localhost on localhost.
Now I want to run this script from a /bin/bash shell script.
I added #!/usr/bin/env python to the header of the sh script.
How can I achieve this?
if [ $1 == "local" ]; then
    python script.py $1
elif [ $1 == "development" ]; then
    python script.py $1
fi
What can I do to improve this script?
Since $1 already contains what you want, the conditional is unnecessary.
If your script is a Bash script, you should put #!/bin/bash (or your local equivalent) in the shebang line. However, this particular script uses no Bash features, and so might usefully be coded to run POSIX sh instead.
#!/bin/sh
case $1 in
    local|development) ;;
    *) echo "Syntax: $0 local|development" >&2; exit 2;;
esac
exec python script.py "$1"
A more useful approach is to configure your local system to run the script directly with ./script.py or similar, and let the script itself take care of parsing its command-line arguments. How exactly to do that depends on your precise environment, but on most U*x-like systems, you would put #!/usr/bin/env python as the first line of script.py itself, and chmod +x the file.
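For illustration, here is a minimal sketch of what that could look like inside script.py itself (the argument names follow the question; the actual handling logic is only a placeholder):
#!/usr/bin/env python
import sys

def main():
    # Accept exactly one argument: "local" or "development".
    if len(sys.argv) != 2 or sys.argv[1] not in ("local", "development"):
        sys.stderr.write("Usage: %s local|development\n" % sys.argv[0])
        sys.exit(2)
    env = sys.argv[1]
    # Placeholder: choose settings based on `env` and do the real work here.
    print("running in %s mode" % env)

if __name__ == "__main__":
    main()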
I assume this is what you wanted...
#!/bin/bash
if [ ! "$#" ]; then
echo "Usage: $1 (local|development) "
exit
fi
if [ "$1" == "local" ]; then
python script.py "$1"
echo "$1"
elif
[ "$1" == "development" ]; then
python script.py "$1"
echo "$1"
fi
Save the bash code above into a file named, let's say, script.sh. Then make it executable: chmod +x script.sh. Then run it:
./script.sh
If no argument is specified, the script will just print information about how to use it.
./script.sh local - executes python script.py local
./script.sh development - executes python script.py development
You can comment out the lines with echo; they were left there just for debugging purposes (add a # in front of the echo lines to comment them out).
How can I run a sourced bash script, then change directories, and then run a command, all within the same shell (using Python)? Is this even possible?
My Attempt:
subprocess.check_call(["env -i bash -c 'source ./init-build ARG'", "cd ../myDir", "bitbake myBoard"], shell =True)
I would write this out for you, but I need to see the absolute paths. Here is an example:
subprocess.check_call(["""/usr/bin/env bash -c "cd /home/x/y/tools && source /home/x/y/venv/bin/activate && python asdf.py" >> /tmp/asdf.txt 2>&1"""], shell=True)
I have this simple code for running shell scripts, and it sometimes works, sometimes not. When it doesn't work, the console log is:
Please edit the vars script to reflect your configuration, then
source it with "source ./vars". Next, to start with a fresh PKI
configuration and to delete any previous certificates and keys, run
"./clean-all". Finally, you can run this tool (pkitool) to build
certificates/keys.
It is strange to me, because when I run the commands in a console they work as they should.
def cmds(*args):
    cd1 = "cd /etc/openvpn/easy-rsa && source ./vars"
    cd2 = "cd /etc/openvpn/easy-rsa && ./clean-all"
    cd3 = "cd /etc/openvpn/easy-rsa && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
    runcd1 = subprocess.Popen(cd1, shell=True)
    runcd2 = subprocess.Popen(cd2, shell=True)
    runcd3 = subprocess.Popen(cd3, shell=True)
    return (runcd1, runcd2, runcd3)
I've changed it like this:
def pass3Cmds(*args):
    commands = "cd /etc/openvpn/easy-rsa && source ./vars && ./clean-all && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
    runCommands = subprocess.Popen(commands, shell=True, stdout=PIPE)
    return runCommands
but the console prints:
source: not found
You need to combine the three commands into one.
The "source ./vars" only affects the shell from which it's run. When you use three separate Popen commands, you're getting three separate shells.
Run all the commands in one Popen with &&s between them.
The reason this works "sometimes" as written is that you're sometimes running python in a shell where you already sourced the vars script.
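Here is a sketch of that combined call. One caveat: with shell=True the command is run by /bin/sh, which on many systems (e.g. where /bin/sh is dash) does not have the bash-only source builtin; using the POSIX spelling . ./vars avoids the "source: not found" error shown above.
import subprocess

# One shell, one Popen: `.` is the POSIX form of `source`, so the variables
# set by ./vars are visible to ./clean-all and ./build-ca in the same shell.
commands = ("cd /etc/openvpn/easy-rsa && . ./vars && ./clean-all && "
            "printf '\\n\\n\\n\\n\\n\\n\\n\\n\\n' | ./build-ca")
proc = subprocess.Popen(commands, shell=True, stdout=subprocess.PIPE)
output, _ = proc.communicate()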
I am trying sudo python get_gps.py -c, expecting it to load the script and then present an interactive shell so I can debug the script live, as opposed to typing it in manually.
From the docs:
$ python --help
usage: /usr/bin/python2.7 [option] ... [-c cmd | -m mod | file | -] [arg] ...
Options and arguments (and corresponding environment variables):
-B : don't write .py[co] files on import; also PYTHONDONTWRITEBYTECODE=x
-c cmd : program passed in as string (terminates option list)
-d : debug output from parser; also PYTHONDEBUG=x
-E : ignore PYTHON* environment variables (such as PYTHONPATH)
-h : print this help message and exit (also --help)
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
Use the -i option, e.g. sudo python -i get_gps.py, to get an interactive prompt after the script runs.
I have a Python script that starts a process which I want to monitor using Nagios. When I run that script and perform ps -ef on my Ubuntu EC2 instance, it shows the process as python <filename>.py --arguments. For Nagios to monitor that process using check_procs, we need to supply the process name, and here the process name becomes 'python'.
/usr/lib/nagios/plugins/check_procs -C python
It returns output saying that one python process is running. This is fine when I'm running a single Python process. But if I'm running multiple Python scripts and want to monitor only a few of them, I have to give each particular process name, and if I give the Python script name in the above command, it throws an error. So I want to mask the whole python <filename>.py --arguments as some other name, so that when performing check_procs I can give that new name.
If anyone has any idea, please let me know. I have checked other Stack Overflow questions which suggest changing the Python process name using setproctitle, but I want to do it from the shell.
Regards,
Sanket
You can use the check_procs command to look at the process arguments, which include the module name. The following command will let you know if the python module 'module.py' is running.
/usr/lib/nagios/plugins/check_procs -c 1:1 -a module.py -C python
The -c argument lets you set the critical range: 1:1 will trigger a critical status if fewer or more than 1 matching process is running.
The -a argument filters for processes whose argument list contains 'module.py' (change it to the name of the module you want to monitor).
The -C argument makes sure that the process is a python process.
If you need help figuring out how to create the service definition, I had to figure that out too. Just let me know.
REFERENCE:
check_procs plugin manpage
http://nagiosplugins.org/man/check_procs
You can't change the process name from pure Python, although you can use a wrapper (for example, written in C) to do so.
However, what you should do instead is to make your program a daemon and use a pidfile. Have a look at the Python daemon API and its implementation, python-daemon.
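A rough sketch of that approach, assuming the python-daemon package (the pidfile helper and the path below are illustrative; exact API details vary between versions):
import daemon
from daemon import pidfile

def run():
    # Placeholder for the long-running work this process performs.
    pass

# Daemonize and record the pid; a monitoring check can then look the process
# up via /var/run/myscript.pid instead of matching on the name "python".
with daemon.DaemonContext(
        pidfile=pidfile.TimeoutPIDLockFile("/var/run/myscript.pid")):
    run()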
check_procs already handles this situation.
check_procs can tell the difference between scripts launched as an argument to the interpreter and jobs run directly via a hashbang interpreter, even though both of these look the same in the ps output! The latter case will not be listed by check_procs -C python.
If you run your scripts explicitly via python, as in python <filename.py>, then you can monitor them with check_procs -C python -a filename.py.
If you put #!/usr/bin/python in your scripts and run them as ./filename.py, then you can monitor with check_procs -C filename.py.
Example command line session showing this behavior:
#make test.py directly executable. See code below
$ chmod a+x test.py
#launch via python explicitly:
$ /usr/bin/python ./test.py &
[1] 27094
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#launch via python implicitly
$ ./test.py &
[2] 27134
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 2 processes with args 'test.py'
#PS 'COMMAND' output looks the same
$ ps 27094 27134
PID TTY STAT TIME COMMAND
27094 pts/6 S 0:00 /usr/bin/python ./test.py
27134 pts/6 S 0:00 /usr/bin/python ./test.py
#kill the explicit test
$ kill 27094
[1] - terminated /usr/bin/python ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#kill the implicit test
$ kill 27134
[2] + terminated ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 0 processes with args 'test.py'
test.py is a python script that sleeps for 2 minutes. It is chmod +x and has a hashbang #! line invoking /usr/bin/python.
#!/usr/bin/python
import time
time.sleep(120)
Create a pid file and use that file for the process lookup with Nagios.
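For example, a minimal sketch of the idea (the pidfile path and name are hypothetical): the script writes its own pid at startup, and the Nagios check then uses that file for the lookup instead of matching on the generic name python.
import os

# Hypothetical pidfile location: record our own pid at startup so a monitoring
# check can find this specific process by pid rather than by command name.
with open("/var/run/myscript.pid", "w") as f:
    f.write(str(os.getpid()))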
I'm not saying this is the best solution (it wouldn't scale well at all), but you can create a symbolic link to the python command and execute your script using this link, e.g.:
ln -s `which python` ~/mypython
~/mypython myscript.py
Scripts launched using the link should show up as mypython in ps.
You can use subprocess.Popen to change the executable name, but you'd have to use a wrapper script (or some weird fork magic). The following code causes ps to list the executable as kwyjibo /tmp/test.py instead of /usr/bin/python /tmp/test.py:
import subprocess
p = subprocess.Popen(['kwyjibo', '/tmp/test.py'], executable='/usr/bin/python')