I am trying sudo python get_gps.py -c, expecting it to load the script and then present an interactive shell so I can debug the script live, as opposed to typing it in manually.
From the docs:
$ python --help
usage: /usr/bin/python2.7 [option] ... [-c cmd | -m mod | file | -] [arg] ...
Options and arguments (and corresponding environment variables):
-B : don't write .py[co] files on import; also PYTHONDONTWRITEBYTECODE=x
-c cmd : program passed in as string (terminates option list)
-d : debug output from parser; also PYTHONDEBUG=x
-E : ignore PYTHON* environment variables (such as PYTHONPATH)
-h : print this help message and exit (also --help)
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
Use the -i option. -c expects a program passed in as a string, not a filename; -i runs the script and then drops into the interactive interpreter.
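For example, with the script from the question:

$ sudo python -i get_gps.py
>>>

The >>> prompt appears once the script finishes (or raises), with all of its globals in scope for live inspection.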
I would like to execute a run configuration using one parameter from my main .py module and other parameters given by a run configuration created in Eclipse.
The configuration runs another .py file, called, let's say, "database.py", with parameters taken from the configuration like so:
-q
-u
-p
-c
-f
-s oracle
-d
-w yes
-r parameter taken from console of my main run.py
-o
I can run it from the Eclipse IDE by going to Run Configurations, but I would like this to be executed when I type "yes" into the console.
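A minimal sketch of one way to wire this up (the prompt texts are assumptions; the flags mirror the list above, and database.py is the file from the question):

import subprocess

# Read the -r value from the console of run.py, then launch database.py
# with the run configuration's parameters when the user types "yes".
param = raw_input("parameter for -r: ")  # Python 2 style input
if raw_input("run the configuration? ").strip().lower() == "yes":
    subprocess.call(["python", "database.py",
                     "-q", "-u", "-p", "-c", "-f",
                     "-s", "oracle", "-d", "-w", "yes",
                     "-r", param, "-o"])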
I have an application that runs multiple Python scripts in order. I can run them in docker-compose as follows:
command: >
bash -c "python -m module_a &&
python -m module_b &&
python -m module_c"
Now I'm scheduling the job in Nomad, and I added the below command under the configuration for the Docker driver:
command = "/bin/bash"
args = ["-c", "python -m module_a", "&&","
"python -m module_b", "&&",
"python -m module_c"]
But Nomad seems to escape &&: it just runs the first module and issues exit code 0. Is there any way to run a multiline command similar to docker-compose?
The following is guaranteed to work with the exec driver:
command = "/bin/bash"
args = [
"-c", ## next argument is a shell script
"for module; do python -m \"$module\" || exit; done", ## this is that script.
"_", ## passed as $0 to the script
"module_a", "module_b", "module_c" ## passed as $1, $2, and $3
]
Note that only a single argument is passed as a script -- the one immediately following -c. Subsequent arguments are arguments to that script, not additional scripts or script fragments.
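You can see this argument handling with a trivial script (an illustrative command, not from the question):

$ bash -c 'echo "script sees: $0 $1 $2"' zero one two
script sees: zero one two

Everything after the single script string is handed to it as $0, $1, $2, and so on.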
Even simpler, you could run:
command = "/bin/bash"
args = ["-c", "python -m module_a && python -m module_b && python -m module_c" ]
I am trying to run a pssh command inside a shell script, but the script freezes and no connections are made, as verified with a ps -ef command (and also because there is only one host in the hosts file I am using).
At this point, Control-C fails to kill the script, and it will not time out. Only a kill command works.
If I run the same command on the command-line, there is no issue. Also, a pscp command in the same script causes no issues, so it seems that the required libraries are being loaded.
$ cat /home/myusername/tmp/hosts
mysinglehostname
Here is the script being run:
$ cat /home/myusername/bin/testpssh
#!/bin/bash
source ~/.bashrc
$HOME/path/to/python-virtualenv/bin/pscp -h "/home/myusername/tmp/hosts" "/tmp/garbage" "/tmp/garbage"
$HOME/path/to/python-virtualenv/bin/pssh -h "/home/myusername/tmp/hosts" -l myusername -p 512 -t 3 -o "out" -O GSSAPIAuthentication=no -i "whoami"
Here is what happens when I run the script:
$ /home/myusername/bin/testpssh &
[1] 18553
$ [1] 14:51:12 [SUCCESS] mysinglehostname 22
$ ps -ef | grep pssh
myusername 18580 18553 0 14:33 pts/16 00:00:00 /home/myusername/path/to/python-virtualenv/bin/python /home/myusername/path/to/python-virtualenv/bin/pssh -h /home/myusername/tmp/hosts -l myusername -p 512 -t 3 -o out -O GSSAPIAuthentication=no -i whoami
$ ## The script above is hanging after completing the pscp, before pssh completes.\
> But if I copy and paste the process line, it works fine as shown here:
$ /home/myusername/path/to/python-virtualenv/bin/python \
> /home/myusername/path/to/python-virtualenv/bin/pssh \
> -h /home/myusername/tmp/hosts -l myusername \
> -p 512 -t 3 -o out -O GSSAPIAuthentication=no -i whoami
[1] 14:59:03 [SUCCESS] mysinglehostname 22
myusername
$
The first [SUCCESS] above is for the pscp action, and no subsequent [SUCCESS] comes from the pssh command, unless it is performed explicitly on the command-line.
Why will the pssh command not work inside the bash shell script?
The script works fine if I use ksh instead of bash in the shebang line (and remove the line that sources ~/.bashrc).
I am on RedHat 6.4, using Python 2.6.6.
We have a python script that accepts the following arguments, for example:
mypie.py --size=20 --pielist='{"apple":[5],"orange":[7]}' --quantity=10
We tried in our test.sh bash shell script:
test.sh 20 5 7 10
test.sh
export SIZE=$1
export APPLE=$2
export ORANGE=$3
export QUANTITY=$4
echo "--size=$SIZE" --pipelist='{\"apple\":[$2],\"orange\":[$3]}\"' --quantity=$4" | tee output.txt
cat output.txt | sudo python mypie.py
We got error
ERROR: size is not specified.
but when we cat output.txt, we can see that it is there with size value.
--size=20 --pielist='{"apple":[5],"orange":[7]}' --quantity=10
What are we doing wrong?
This would do it:
cat output.txt | sudo xargs python mypie.py
Using the pipe redirects the content to stdin of the program - not the command line.
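To see why, here is a minimal sketch of how mypie.py might parse its options (argparse is an assumption; the real script is not shown in the question):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--size", required=True)
parser.add_argument("--pielist")
parser.add_argument("--quantity")
# parse_args() reads sys.argv, which is populated from the command line;
# anything piped to the process just sits unread on sys.stdin.
args = parser.parse_args()
print "size=%s pielist=%s quantity=%s" % (args.size, args.pielist, args.quantity)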
You could consider building up an args variable, then passing that both to output.txt and to your python script.
For example:
export SIZE=$1
export APPLE=$2
export ORANGE=$3
export QUANTITY=$4
export ARGS="--size=$SIZE --pielist='{\"apple\":[$APPLE],\"orange\":[$ORANGE]}' --quantity=$QUANTITY"
echo $ARGS | tee output.txt
sudo python mypie.py $ARGS
You can use command substitution to place the output of a command directly as the arguments to your python program.
python mypie.py `cat output.txt`
or
python mypie.py $(cat output.txt)
The simple and obvious solution is to change test.sh so that it passes the parameters correctly.
#!/bin/sh
set -x # if you want to see the script's parameters
sudo python mypie.py --size="$1" \
    --pielist="{\"apple\":[$2],\"orange\":[$3]}" --quantity="$4"
Passing the arguments as standard input (which is what a pipe does) is simply not how it is usually done, nor is it a particularly suitable model for this type of interaction.
I have a python script that starts a process which I want to monitor using Nagios. When I run that script and perform ps -ef on my ubuntu EC2 instance, it shows the process as python <filename>.py --arguments. For Nagios to monitor that process using check_procs, we need to supply the process name, and here the process name becomes 'python'.
/usr/lib/nagios/plugins/check_procs -C python
It returns output saying that one python process is running. That is fine while I'm running a single python process, but if I'm running multiple python scripts and want to monitor only a few of them, I have to supply each particular process name. If I give the python script name in the above command, it throws an error. So I want to mask the whole python <filename>.py --arguments as some other name, so that I can give that new name when performing check_procs.
If anyone has any idea, please let me know. I have checked other Stack Overflow questions, which suggest changing the python process name using setproctitle, but I want to do it from the shell.
You can use the check_procs command to look at the arguments, which include the module name. The following command will let you know if the python module 'module.py' is running.
/usr/lib/nagios/plugins/check_procs -c 1:1 -a module.py -C python
The -c argument lets you set the critical range. 1:1 will trigger a critical status if the number of matching processes is anything other than 1.
The -a argument will filter based on processes that contain the args 'module.py' (change it to the name of the module you want to monitor).
The -C argument will make sure that the process is a python process
If you need help figuring out how to create the service definition, I had to figure that out too. Just let me know.
REFERENCE:
check_procs plugin manpage
http://nagiosplugins.org/man/check_procs
You can't change the process name from pure Python, although you can use a wrapper (for example, written in C) to do so.
However, what you should do instead is make your program a daemon and use a pidfile. Have a look at the Python daemon API and its implementation, python-daemon.
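A minimal sketch using the python-daemon package (the pidfile path and the body of main() are placeholders, not from the question):

import time
import daemon
from daemon import pidfile

def main():
    while True:          # stand-in for the real long-running work
        time.sleep(60)

if __name__ == "__main__":
    # The lockfile records the daemon's PID, which Nagios can then check.
    with daemon.DaemonContext(pidfile=pidfile.TimeoutPIDLockFile("/tmp/myscript.pid")):
        main()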
check_procs already handles this situation.
check_procs can tell the difference between scripts launched as an argument to the interpreter and jobs run directly via a hashbang interpreter, even though both of these look the same in the ps output! The latter case will not be listed by check_procs -C python.
If you run your scripts explicitly via the interpreter, as python <filename>.py, then you can monitor them with check_procs -C python -a filename.py.
If you put #!/usr/bin/python in your scripts and run them as ./filename.py, then you can monitor with check_procs -C filename.py.
Example command line session showing this behavior:
#make test.py directly executable. See code below
$ chmod a+x test.py
#launch via python explicitly:
$ /usr/bin/python ./test.py &
[1] 27094
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#launch via python implicitly
$ ./test.py &
[2] 27134
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 2 processes with args 'test.py'
#PS 'COMMAND' output looks the same
$ ps 27094 27134
PID TTY STAT TIME COMMAND
27094 pts/6 S 0:00 /usr/bin/python ./test.py
27134 pts/6 S 0:00 /usr/bin/python ./test.py
#kill the explicit test
$ kill 27094
[1] - terminated /usr/bin/python ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#kill the implicit test
$ kill 27134
[2] + terminated ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 0 processes with args 'test.py'
test.py is a python script that sleeps for 2 minutes. It is chmod +x and has a hashbang #! line invoking /usr/bin/python.
#!/usr/bin/python
import time
time.sleep(120)
Create a pid file and use that file for the process lookup with Nagios.
I'm not saying this is the best solution (it wouldn't scale well at all), but you can create a symbolic link to the python command and execute your script using this link. e.g.
ln -s `which python` ~/mypython
~/mypython myscript.py
Scripts launched using the link should show up as mypython in ps.
You can use subprocess.Popen to change the executable name, but you'd have to use a wrapper script (or some weird fork magic). The following code causes ps to list the executable as kwyjibo /tmp/test.py instead of /usr/bin/python /tmp/test.py:
import subprocess

# argv[0] is reported as "kwyjibo", but the program actually executed
# is /usr/bin/python, which runs /tmp/test.py as its script argument.
p = subprocess.Popen(['kwyjibo', '/tmp/test.py'], executable='/usr/bin/python')