Bash: wrap a piped command with a Python script

Is there a way to create a Python script which wraps an entire bash command, including the pipes?
For example, if I have the following simple script:
import sys
print sys.argv
and call it like so (from bash or ipython), I get the expected outcome:
[pkerp@pendari trell]$ python test.py ls
['test.py', 'ls']
If I add a pipe, however, the output of the script gets redirected to the pipe sink:
[pkerp@pendari trell]$ python test.py ls > out.txt
And the > out.txt portion is not in sys.argv. I understand that the shell processes the redirection itself, but I'm curious whether there's a way to force the shell to ignore it and pass it to the process being called.
The point of this is to create something like a wrapper for the shell. I'd like to run the commands as usual, but keep track of the strace output for each command (including the pipes). Ideally I'd like to keep all of the bash features, such as tab completion, up/down-arrow history, and history search, and just pass the completed command through a Python script which invokes a subprocess to handle it.
Is this possible, or would I have to write my own shell to do this?
Edit
It appears I'm asking the exact same thing as this question.

The only thing you can do is pass the entire shell command as a string, then let Python pass it back to a shell for execution.
$ python test.py "ls > out.txt"
Inside test.py, use something like
import subprocess
subprocess.call("strace " + sys.argv[1], shell=True, executable="/bin/bash")
to ensure the entire string is passed to the shell (and bash, specifically).
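Putting the pieces together, a minimal sketch of test.py under this approach (one possible variation: running the command through bash -c under strace -f so processes on both sides of a pipe get traced; the log path is made up):

import subprocess
import sys

# sys.argv[1] holds the entire quoted command, e.g. "ls > out.txt"
cmd = sys.argv[1]

# Run the command under strace; -f follows forks so every process in the
# pipeline is traced, and -o keeps the trace out of the command's own output.
subprocess.call(["strace", "-f", "-o", "/tmp/strace.log", "bash", "-c", cmd])

Invoked as python test.py "ls | wc -l", both ls and wc end up in the trace, while the pipeline's own output goes wherever the inner redirections send it.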

Well, I don't quite see what you are trying to do. The general approach would be to give the desired output destination to the script using command-line options: python test.py ls --output=out.txt. Incidentally, strace writes to stderr, so you could capture everything using strace python test.py > out 2> err.
Edit: If your script writes to stderr as well you could use strace -o strace_out python test.py > script_out 2> script_err
Edit2: Okay, I understand better what you want. My suggestion is this: Write a bash helper:
function process_and_evaluate()
{
strace -o /tmp/output/strace_output "$@"
/path/to/script.py /tmp/output/strace_output
}
Put this in a file like ~/helper.sh. Then open a bash shell and source it with . ~/helper.sh.
Now you can run it like this: process_and_evaluate ls -lA.
Edit3:
To capture output / error you could extend the macro like this:
function process_and_evaluate()
{
out=$1
err=$2
shift 2
strace -o /tmp/output/strace_output "$@" > "$out" 2> "$err"
/path/to/script.py /tmp/output/strace_output
}
You would have to use the (less obvious) process_and_evaluate out.txt err.txt ls -lA.
This is the best that I can come up with...

At least in your simple example, you could just run the python script as an argument to echo, e.g.
$ echo $(python test.py ls) > test.txt
$ more test.txt
['test.py', 'ls']
Enclosing a command in $(...) executes it first, then passes its output as arguments to echo.

Related

How to properly redirect Python script output to file?

I am trying to run a script remotely on a server, and I intend to use something along the following lines:
nohup ./script.py > runtime.out 2> runtime.err & and monitor the script's progress with tail -f runtime.out. The problem I am having is that the redirect doesn't seem to work as expected. My problem can be reproduced as described below:
script.py:
#!/usr/bin/env python3
import time

if __name__ == '__main__':
    for i in range(1000):
        print("hi")
        time.sleep(1)
Then in the shell run ./script.py > a.out &. This prints the PID of the process and exits as expected. However, a.out is empty despite the program running. Also, if I do ./script.py > a.out without the &, a.out remains empty until I Ctrl-C the command; only then does it contain all the expected output up to the script's termination.
I thought the > redirected stdout and stderr continuously, not only at command completion.
The simplest way to fix this is to use the -u flag of the python command. It should look like this:
nohup python3 -u script.py > runtime.out 2> runtime.err &
According to the python3 --help:
-u : force the stdout and stderr streams to be unbuffered;
this option has no effect on stdin; also PYTHONUNBUFFERED=x
Using print("hi", flush=True) will keep forcing the stream to flush contents, so it will continuously update the output file. I don't have enough information about your program to suggest alternatives, but I would look for a better method if possible.

Python Script input via Bash Shell

I have a Python script which I am calling from a bash script, and this bash script gets called from cron:
#!/bin/bash
set -o errexit
set -o xtrace
echo "Verify/Update Firmware"
/usr/bin/python -u /usr/bin/Update.py
Now when this Python script runs it asks for some input (from the keyboard), but I am not able to provide it. How can my Python script get its input in this scenario?
The Python script looks like this:
import telnetlib

ip = raw_input('Enter IP for Switch')
tn = telnetlib.Telnet(ip, 23, 600)
For giving command line arguments to a bash script you can use $1, $2, $3 etc. The tutorial here talks about this: http://linuxcommand.org/lc3_wss0120.php
For the Python part you can use something like argparse to do this pretty nicely. This also has loads of tutorials out there.
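A hedged sketch of the argparse route, which would replace the interactive prompt entirely (the --ip option name is an assumption, not from the original script):

import argparse
import telnetlib

parser = argparse.ArgumentParser(description='Verify/Update Firmware')
parser.add_argument('--ip', required=True, help='IP for switch')
args = parser.parse_args()

# args.ip now replaces the raw_input() prompt
tn = telnetlib.Telnet(args.ip, 23, 600)

The bash script would then call it as /usr/bin/python -u /usr/bin/Update.py --ip 10.0.0.5 (IP made up).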
For a single line of input use this:
echo "input" | command arg1 arg2
For multiple lines write the expected input to a file, then redirect the input:
command arg1 arg2 < inputfile
This is not guaranteed to work; it depends on many details.
Please consider the risk of blindly giving input without reading what the program wants.
For a more sophisticated solution check the expect utility.
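Applied to the script above, a minimal sketch (the IP address is made up):

echo "10.0.0.5" | /usr/bin/python -u /usr/bin/Update.py

raw_input() then reads the piped line instead of waiting on the keyboard.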

Python script returning empty string in Bash script only when arguments supplied

If I have a Python script that simply prints the first argument it is given
#!/usr/bin/env python
import sys

try:
    print sys.argv[1]
except IndexError:
    print "No args"
I can run it fine and get the expected output. Now suppose I write a Bash script that simply runs the Python script and echoes its output:
#!/bin/bash
test="Test"
echo `python test.py`
Then the bash script will successfully echo "No args". However, if I change line 3 to
echo `python test.py $test`
Then I will simply get an empty string as the output of the Bash script and I'm not sure why. I even get the empty string when I change it to
echo `python test.py "test"`
Changing the backticks to $() seems to have worked.
You should try this in your bash script file:
python test.py $test
It will print the contents of your $test variable.
I call the Python script without backticks and echo; I work a lot with Bash/Python script combinations, and this works well for me.
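For completeness, a minimal sketch of the working command-substitution form mentioned in the edit (the quoting is a defensive habit, not from the original answers):

#!/bin/bash
test="Test"
echo "$(python test.py "$test")"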

How to call a shell script function/variable from python?

Is there any way to call a shell script and use the functions/variables defined in the script from Python?
The script is unix_shell.sh
#!/bin/bash
function foo
{
...
}
Is it possible to call this function foo from python?
Solution:
For functions: convert the shell functions to Python functions.
For shell local variables (non-exported), run this command in the shell just before calling the Python script:
export $(set | tr '\n' ' ')
For shell global variables (exported from the shell), in Python you can use:
import os
print os.environ["VAR1"]
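A hedged round-trip sketch of the exported-variable case (the variable name and value are made up, and a Python 2 interpreter is assumed to match the snippet above):

export VAR1="hello"                                # in the shell
python -c 'import os; print os.environ["VAR1"]'    # prints: hello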
Yes, in a similar way to how you would call it from another bash script:
import subprocess
subprocess.check_output(['bash', '-c', 'source unix_shell.sh && foo'])
This can be done with subprocess. (At least this was what I was trying to do when I searched for this)
Like so:
output = subprocess.check_output(['bash', '-c', 'source utility_functions.sh; get_new_value 5'])
where utility_functions.sh looks like this:
#!/bin/bash
function get_new_value
{
let "new_value=$1 * $1"
echo $new_value
}
Here's how it looks in action...
>>> import subprocess
>>> output = subprocess.check_output(['bash', '-c', 'source utility_functions.sh; get_new_value 5'])
>>> print(output)
b'25\n'
No, that's not possible directly. You can execute a shell script, pass parameters on the command line, and it can print data out, which you can then parse from Python.
But that's not really calling the function; that's still executing bash with options and getting a string back on stdio.
That might do what you want, but it's probably not the right way to do it. There is not much Bash can do that Python cannot. Implement the function in Python instead.
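For instance, the get_new_value helper from the answer above is a one-liner in Python (a hypothetical translation, not from the original answer):

def get_new_value(x):
    # same squaring logic as the bash function
    return x * x

print(get_new_value(5))  # 25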
With the help of the above answer and this answer, I came up with this:
import subprocess
command = 'bash -c "source ~/.fileContainingTheFunction && theFunction"'
stdout = subprocess.getoutput(command)
print(stdout)
I'm using Python 3.6.5 in Ubuntu 18.04 LTS.
I do not know too much about Python, but if you use export -f foo after the shell script function definition, then when you start a sub-bash, the function can be called. Without export, you need to run the shell script with . script.sh inside the sub-bash started from Python, but that will run everything in it and define all the functions and all the variables.
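A minimal sketch of the export -f route (the file and function names follow the unix_shell.sh example above):

# in the shell, before starting Python:
. ./unix_shell.sh      # defines foo
export -f foo          # makes foo visible to child bash processes

# then, from Python:
import subprocess
subprocess.check_output(['bash', '-c', 'foo'])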
You could separate each function into its own bash file. Then use Python to pass the right parameters to each separate bash file.
This may be easier than just re-writing the bash functions in Python yourself.
You can then call these functions using
import subprocess
subprocess.call(['bash', 'function1.sh'])
subprocess.call(['bash', 'function2.sh'])
# etc. etc.
You can use subprocess to pass parameters too.
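For example, a short sketch of parameter passing (the script name and argument are made up):

import subprocess

# extra list items after the script name become $1, $2, ... inside the script
subprocess.call(['bash', 'function1.sh', 'some_parameter'])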

Shell Script: Execute a python program from within a shell script

I've tried googling the answer but with no luck.
I need to use my work's supercomputer server, but for my Python script to run, it must be executed via a shell script.
For example I want job.sh to execute python_script.py
How can this be accomplished?
Just make sure the python executable is in your PATH environment variable, then add this to your script:
python path/to/the/python_script.py
Details:
In the file job.sh, put this
#!/bin/sh
python python_script.py
Execute this command to make the script runnable for you : chmod u+x job.sh
Run it : ./job.sh
Method 1 - Create a shell script:
Suppose you have a python file hello.py
Create a file called job.sh that contains
#!/bin/bash
python hello.py
mark it executable using
$ chmod +x job.sh
then run it
$ ./job.sh
Method 2 (BETTER) - Make the Python script itself runnable from the shell:
Modify your script hello.py and add this as the first line
#!/usr/bin/env python
mark it executable using
$ chmod +x hello.py
then run it
$ ./hello.py
Save the following program as print.py:
#!/usr/bin/python3
print('Hello World')
Then in the terminal type:
chmod +x print.py
./print.py
You should be able to invoke it as python scriptname.py, e.g.
#!/bin/bash
python /home/user/scriptname.py
Also make sure the script has permissions to run.
You can make it executable by using chmod u+x scriptname.py.
Imho, writing
python /path/to/script.py
is quite wrong, especially these days. Which python? python2.6? 2.7? 3.0? 3.1? Most of the time you need to specify the python version in the shebang tag of the python file. I encourage using #!/usr/bin/env python2 (or python2.6, python3, or even python3.1) for compatibility.
In such a case, it is much better to make the script executable and invoke it directly:
#!/bin/bash
/path/to/script.py
This way the version of python you need is written in only one file. Most systems these days have both python2 and python3, and it can happen that the symlink python points to python3 while most people expect it to point to python2.
This works for me:
Create a new shell file, say job.sh:
touch job.sh, then add the commands to run your Python scripts (you can even add command-line arguments; I usually predefine mine).
chmod +x job.sh
Inside job.sh, add one line per py file, say:
python python_file.py argument1 argument2 argument3 >> testpy-output.txt && echo "Done with python_file.py"
python python_file1.py argument1 argument2 argument3 >> testpy-output.txt && echo "Done with python_file1.py"
Output of job.sh should look like this:
Done with python_file.py
Done with python_file1.py
I use this usually when I have to run multiple python files with different arguments, pre defined.
Note: just a quick heads-up on what's going on here:
python python_file.py argument1 argument2 argument3 >> testpy-output.txt && echo "Done with python_file.py"
Here the shell script runs the file python_file.py and passes it multiple command-line arguments at run time.
This does not necessarily mean you have to pass command-line arguments; you can just use it as python python_file.py, plain and simple.
Next, the >> appends the output of the .py file to the testpy-output.txt file.
&& is a logical operator that runs the next command only if the previous one executed successfully, so the optional echo "Done with python_file.py" is echoed to your CLI/terminal at run time.
This works best for me:
Add this at the top of the script:
#!c:/Python27/python.exe
(C:\Python27\python.exe is the path to the python.exe on my machine)
Then run the script via:
chmod +x script-name.py && ./script-name.py
I use this and it works fine:
#!/bin/bash
/usr/bin/python python_script.py
Since the other posts say everything (and I stumbled upon this post while looking for the following), here is a way to execute a python script from another python script:
Python 2:
execfile("somefile.py", global_vars, local_vars)
Python 3:
with open("somefile.py") as f:
code = compile(f.read(), "somefile.py", 'exec')
exec(code, global_vars, local_vars)
and you can supply args by substituting your own sys.argv.
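A hedged sketch of the sys.argv substitution (the script name and arguments are made up):

import sys

# pretend somefile.py was invoked as: python somefile.py --verbose input.txt
sys.argv = ["somefile.py", "--verbose", "input.txt"]

with open("somefile.py") as f:
    code = compile(f.read(), "somefile.py", 'exec')
    exec(code, globals())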
Here I have demonstrated an example of running a python script within a shell script. For different purposes you may need to read the output from a shell command, or execute both a python script and a shell command within the same file.
To execute a shell command from python, use the os.system() method. To read output from a shell command, use os.popen().
Following is an example which greps all processes containing the text sample_program.py. Then, after collecting the process IDs (using python), it kills them all.
#!/usr/bin/python3
import os
# listing all matched processes and taking the output into a variable s
s = os.popen("ps aux | grep 'sample_program.py'").read()
s = '\n'.join([l for l in s.split('\n') if "grep" not in l]) # avoiding killing the grep itself
print("To be killed:")
print(s)
# now manipulating this string s and finding the process IDs and killing them
os.system("kill -9 " + ' '.join([x.split()[1] for x in s.split('\n') if x]))
References:
Execute a python program from within a shell script
Assign output of os.system to a variable and prevent it from being displayed on the screen
If you have a bash script and you need to run a python3 script (with external modules) inside it, I recommend pointing your bash script at the full python path, like this:
#!/usr/bin/env bash
-- bash code --
/usr/bin/python3 your_python.py
-- bash code --
