I am writing a Python script which uses os.system to call a shell script. I need help understanding how I can pass arguments to the shell script. Below is what I am trying, but it doesn't work:
os.system('./script.sh arg1 arg2 arg3')
I do not want to use subprocess for calling the shell script. Any help is appreciated.
Place your script and its args into a string; see the example below.
HTH
#!/usr/bin/env python
import os
arg3 = 'arg3'
cmd = '/bin/echo arg1 arg2 %s' % arg3
print('running "%s"' % cmd)
os.system(cmd)
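If the arguments come from variables, a small sketch like this (assuming Python 3, with placeholder values) quotes them so spaces or shell metacharacters don't break the command:
import os
import shlex

# Placeholder argument values; in practice these come from your program.
args = ['arg1', 'arg with spaces', 'arg3']

# shlex.quote() escapes each argument for safe use on a shell command line.
cmd = './script.sh ' + ' '.join(shlex.quote(a) for a in args)
print('running "%s"' % cmd)
os.system(cmd)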
If you insert the following line before os.system(...), you will likely see your problem.
print('./script.sh arg1 arg2 arg3')
When developing this type of thing, it is usually useful to make sure the command really is what you expect before you actually run it.
example:
def Cmd():
    return "something"

print(Cmd())
When you are satisfied, comment out the print(Cmd()) line and use os.system(Cmd()) or the subprocess version.
Related
I want to log the parent command which ultimately executed my Python script. I want to capture it from within the Python script (via subprocess) so I may include it in existing logs.
I have a shell script run.sh containing:
#!/bin/bash
python runme.py "$@"
And runme.py contains:
import sys
import subprocess
print("Doing stuff...")
# What command called me?
psoutput = subprocess.check_output(['ps', '--no-headers','-o','cmd'], text=True)
cmds = psoutput.strip().split('\n')
print(cmds[0])
I execute this from my company's shell (on RHEL) like this, and it outputs...
$ ./run.sh arg1 arg2 arg3
Doing stuff...
run.sh arg1 arg2 arg3
Looks good! But if another process is running first, it shows up first in ps and this doesn't work:
$ sleep 10s &
$ ./run.sh arg1 arg2 arg3
Doing stuff...
sleep 10s
I'm really trying to select my parent process from ps. This solution uses -C <process-name> but I don't know what my process name is. Following this solution I've tried this with several similar ideas for process name:
subprocess.check_output(['ps', '-o', 'ppid=', '-C', 'python runme.py'], text=True)
But that outputs a lot of numbers that look like PIDs, none of which seem related to my bash call.
How can I reliably get my parent process from within Python?
A pure Python solution that works under any shell. The parent command line comes back as a list containing the command and its arguments.
import os
import psutil

# The parent of this Python process is the shell that is running run.sh.
parent = psutil.Process(os.getppid())
print(parent.cmdline())
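If the direct parent is itself only a wrapper, a short sketch like this (still assuming psutil) walks one level further up the process tree:
import os
import psutil

parent = psutil.Process(os.getppid())
grandparent = parent.parent()  # may be None at the top of the tree
if grandparent is not None:
    print(grandparent.cmdline())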
One (okay) solution I realized while writing my question:
run.sh contains
command_run=$(echo "$BASH_SOURCE $@")
python runme.py --command-run "$command_run" "$@"
runme.py contains
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    '-c', '--command-run', required=False,
    default=None, dest='command_run',
    help='Command text passed from bash'
)
args = parser.parse_args()
print("Doing stuff...")
print(args.command_run)
Additional arguments can be passed after adding more parsing (see the sketch at the end of this answer). As above, the following works now:
$ sleep 5s &
$ ./run.sh
Doing stuff...
run.sh
I'll mark this answer as correct unless someone has a nicer solution than passing $BASH_SOURCE this way.
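For the additional arguments mentioned above, a hypothetical extension of runme.py could collect everything that is not --command-run as plain positional arguments:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-c', '--command-run', default=None, dest='command_run',
                    help='Command text passed from bash')
parser.add_argument('rest', nargs='*', help='remaining arguments from bash')
args = parser.parse_args()

print("Doing stuff...")
print(args.command_run)
print(args.rest)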
I know that I can run a python script from my bash script using the following:
python python_script.py
But what if I wanted to pass a variable / argument to my Python script from my bash script? How can I do that?
Basically bash will work out a filename and then python will upload it, but I need to send the filename from bash to python when I call it.
To execute a Python script from a bash script, you call the same command that you would within a terminal. For instance:
> python python_script.py var1 var2
To access these variables within python you will need
import sys
print(sys.argv[0]) # prints python_script.py
print(sys.argv[1]) # prints var1
print(sys.argv[2]) # prints var2
Besides sys.argv, also take a look at the argparse module, which helps define options and arguments for scripts.
The argparse module makes it easy to write user-friendly command-line interfaces.
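For example, a minimal argparse sketch for the upload case above might look like this (the option names are only illustrative):
import argparse

parser = argparse.ArgumentParser(description='Upload the file chosen by the bash script.')
parser.add_argument('filename', help='file name worked out by the calling bash script')
parser.add_argument('-v', '--verbose', action='store_true', help='print extra output')
args = parser.parse_args()

print(args.filename)
if args.verbose:
    print('verbose mode enabled')
A bash script could then call it as, for example, python python_script.py "$filename" -v, where $filename is whatever the script worked out.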
Use
python python_script.py filename
and in your Python script
import sys
print(sys.argv[1])
Embedded option:
Wrap python code in a bash function.
#!/bin/bash
function current_datetime {
python - <<END
import datetime
print(datetime.datetime.now())
END
}
# Call it
current_datetime
# Call it and capture the output
DT=$(current_datetime)
echo "Current date and time: $DT"
Use environment variables to pass data into your embedded Python script.
#!/bin/bash
function line {
PYTHON_ARG="$1" python - <<END
import os
line_len = int(os.environ['PYTHON_ARG'])
print('-' * line_len)
END
}
# Do it one way
line 80
# Do it another way
echo $(line 80)
http://bhfsteve.blogspot.se/2014/07/embedding-python-in-bash-scripts.html
Use in the script:
echo $(python python_script.py arg1 arg2) > /dev/null
or
python python_script.py "string arg" > /dev/null
The script will be executed without output.
I have a bash script that calls a small python routine to display a message window. As I need to use killall to stop the python script, I can't use the above method, since that would mean running killall python, which could take out other python programs. So I use:
./pythonprog.py "$argument" &    # the & returns control straight to the bash script
As long as the python script will run from the CLI by name rather than as python pythonprog.py, this works within the script. If you need more than one argument, just use a space between each one within the quotes.
Also take a look at the getopt module.
It works quite well for me!
Print all args without the filename:
import sys

for i in range(1, len(sys.argv)):
    print(sys.argv[i])
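Equivalently, a slice avoids the explicit index handling:
import sys

for arg in sys.argv[1:]:  # skip sys.argv[0], the script name
    print(arg)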
Is it possible to run a python script with parameters in command line like this:
./hello(var=True)
or is it mandatory to do like this:
python -c "from hello import *;hello(var=True)"
The first way is shorter and simpler.
Most shells use parentheses for grouping or sub-shells, so you can't call a command like command(arg) from a normal shell. But you can write a Python script (./hello.py) that takes an argument.
import optparse
parser = optparse.OptionParser()
parser.add_option('-f', dest="f", action="store_true", default=False)
options, remainder = parser.parse_args()
print ("Flag={}".format(options.f))
And the call it with python hello.py -f
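Note that optparse is deprecated in current Python versions; a roughly equivalent sketch with argparse (same flag assumed) would be:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-f', dest='f', action='store_true', default=False)
args = parser.parse_args()
print("Flag={}".format(args.f))
It is called the same way: python hello.py -f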
./hello(var=True) would be impossible from a normal shell. In some cases it can be useful to have Python functions available in your current shell session. Here is a workaround to make your Python functions available in your shell environment.
#!/usr/bin/env bash
# python-tools.sh
set -a  # automatically export all variables
function hello(){
    cd "/app/python/commands"
    python "test.py" "$@"
}
Content of the python script
#! /usr/bin/env python
# /app/python/commands/test.py script
import sys
def test(*args):
    print(args)

if __name__ == '__main__':
    if sys.argv[1] in globals().keys():
        print(sys.argv[1])
        globals()[sys.argv[1]](sys.argv[2:])
    else:
        print("%s is not a known function" % sys.argv[1])
Then source python-tools.sh
source python-tools.sh
Afterwards, the hello function is available:
$ hello test arg2 arg2
test
(['arg2', 'arg2'],)
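If you want something closer to the original hello(var=True) call, a hypothetical variant of test.py could turn key=value tokens into keyword arguments (note that the values stay plain strings, so 'True' is not converted to a Python bool):
import sys

def parse_kwargs(tokens):
    # Split each "key=value" token; values remain strings.
    kwargs = {}
    for token in tokens:
        key, _, value = token.partition('=')
        kwargs[key] = value
    return kwargs

def test(**kwargs):
    print(kwargs)

if __name__ == '__main__':
    name = sys.argv[1]
    if name in globals():
        globals()[name](**parse_kwargs(sys.argv[2:]))
    else:
        print("%s is not a known function" % name)
With this version, hello test var=True would print {'var': 'True'}.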
I am new to Python and would like to pass arguments to a script on the command line, after the script name.
For example: given the script
mypython_script.py
I would like to launch it from terminal in the following way:
python3.4 mypython_script.py argument_1=5.0, argument_2=12.3, nome_file='out_put.dat'
I am able to read/print the values interactively in the terminal with the following code:
print('Input Argument_1')
string_input = input()
Argument_1 = float(string_input)
print('Argument_1 %f ' % Argument_1)

print('Input Argument_2')
string_input = input()
Argument_2 = float(string_input)
print('Argument_2 %f ' % Argument_2)

print('Input nome_file')
nome_file = input()
print('nome_file %s ' % nome_file)
But to do this I have to interact with the program. My question is: how can I give the script all its arguments on the same line in which I execute it from the shell?
Check out the argparse tutorial for information on how to use the argparse module. It's an easy-to-use yet powerful module for interacting with command-line arguments, setting up help messages, formatting printouts, and more.
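A minimal sketch for the example above, assuming the three values are passed as plain positional arguments rather than key=value pairs (python3 mypython_script.py 5.0 12.3 out_put.dat):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('argument_1', type=float)
parser.add_argument('argument_2', type=float)
parser.add_argument('nome_file')
args = parser.parse_args()

print('Argument_1 %f ' % args.argument_1)
print('Argument_2 %f ' % args.argument_2)
print('nome_file %s ' % args.nome_file)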
You have to use sys.argv to retrieve the parameters passed after the script name.
sys.argv[0] is the script name itself; sys.argv[1] gives you the first argument.
Pass the arguments separated by spaces, as in python script_name.py arg1 arg2 arg3 arg4.
If you need key=value style arguments like python script.py arg1=val1 arg2=val2, use argparse or parse sys.argv yourself (a sketch follows).
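A rough sketch for the key=value style using plain sys.argv (no third-party parsing; the values stay strings):
import sys

params = {}
for item in sys.argv[1:]:
    key, _, value = item.partition('=')
    params[key] = value

print(params)  # e.g. {'arg1': 'val1', 'arg2': 'val2'}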
I want to replace the current process with a new process using the same Python interpreter, but with a new script. I have tried using os.execl, which seemed like the most intuitive approach:
print(sys.executable, script_path, *args)
os.execl(sys.executable, script_path, *args)
The result is that this is printed to the screen (from the print function):
/home/tomas/.pyenv/versions/3.4.1/bin/python script.py arg1 arg2 arg3
And the Python interactive interpreter is launched. Entering this into the interpreter:
>>> import sys
>>> print(sys.argv)
['']
Shows that Python received no arguments.
If I copy the output of the print function and enter it into my terminal, it works as expected. I have also tried using execv and execlp with identical results.
Why doesn't the execl call pass the arguments to the Python executable?
The arg0, arg1, arg2, ... (the arguments after sys.executable) are passed to the new program as its argv. If you pass script_path as the first of them, the new program interprets script_path as argv[0], i.e. as its own name, instead of sys.executable.
Replacing the execl line as follows will solve your problem:
os.execl(sys.executable, sys.executable, script_path, *args)
The second sys.executable becomes argv[0] of the new process, so script_path and the remaining arguments arrive in argv[1:] as expected.
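For completeness, a minimal self-contained sketch of the corrected call; script_path and args are placeholders for the values from the question, and every element passed to execl must be a string:
import os
import sys

script_path = 'script.py'            # placeholder for the real script path
args = ['arg1', 'arg2', 'arg3']      # placeholder arguments (all strings)

# First sys.executable: the program to execute.
# Second sys.executable: becomes argv[0] of the new process.
os.execl(sys.executable, sys.executable, script_path, *args)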