Redirect bash output to python script

I am using zbarimg to scan bar codes, and I want to redirect its output to a Python script. How can I redirect the output of the following command:
zbarimg code.png
to a python script, and what should be the script like?
I tried the following script:
#!/usr/local/bin/python
s = raw_input()
print s
I made it an executable by issuing the following:
chmod +x in.py
Then I ran the following:
zbarimg code.png | in.py
I know it's wrong but I can't figure out anything else!

Use sys.stdin to read from stdin in your python script. For example:
import sys
data = sys.stdin.readlines()
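A complete version of in.py along those lines might look like this (a sketch; it simply echoes each line that zbarimg writes to the pipe):
#!/usr/local/bin/python
import sys

# each barcode zbarimg finds arrives as one line on stdin
for line in sys.stdin:
    print line.rstrip('\n')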

Using the pipe operator | from the command line is correct, actually. Did it not work?
You might need to explicitly specify the path to the python script, as in
zbarimg code.png | ./in.py
and as @dogbane says, reading from stdin with sys.stdin.readlines() is better than using raw_input

I had to invoke the command as
somecommand | python mypythonscript.py instead of somecommand | ./mypythonscript.py. This worked for me; the latter produced errors.
My purpose: Sum up the durations of all mp3 files by piping output of soxi -D *mp3
into python: soxi -D *mp3 | python sum_durations.py
Details:
soxi -D *mp3 produces:
122.473016
139.533016
128.456009
307.802993
...
sum_durations.py script:
import sys
import math
data = sys.stdin.readlines()
#print(data)
sum = 0.0
for line in data:
    #print(line)
    sum += float(line)
mins = math.floor(sum / 60)
secs = math.floor(sum) % 60
print("total duration: " + str(mins) + ":" + str(secs))

Related

Problem getting file info with curl in Python

I'm using Python 2.7 and I need a script like the following. The problem is that it waits for the file to download every time the curl time condition is true. I want to get the result without waiting for the file to download, even when the condition is true. The same curl command runs without problems in the shell. How should I correct this, or is there a different method?
import os, subprocess
URL = 'http://www.xxxxx/file.zip'
curl="curl -siz yesterday " + URL + " | grep HTTP/ | awk {'print $2'}"
check = subprocess.check_output(curl, shell=True)
print(check)
if "200" in check:
....
....
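One way to avoid waiting for the download (a sketch, using curl's -I flag, which sends a HEAD request so only the headers are fetched; the URL is the same placeholder as above):
import subprocess

URL = 'http://www.xxxxx/file.zip'
# -I asks for a HEAD response: headers only, no body is downloaded.
# -s silences the progress meter, -z adds the If-Modified-Since condition.
headers = subprocess.check_output(['curl', '-sIz', 'yesterday', URL])
status_line = headers.splitlines()[0]  # e.g. 'HTTP/1.1 200 OK'
if "200" in status_line:
    print("file was modified since yesterday")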

Python's sh module - is it at all possible for a script to request input?

Using Python's sh module, I am running a 3rd-party shell script that requests my input (not that it matters much, but to be precise, I'm running an Ansible 2 playbook with the --step option).
As an oversimplification of what is happening, I built a simple bash script that requests an input. I believe that if I make this simple example work, I can make the original case work too.
So please consider this bash script hello.sh:
#!/bin/bash
echo "Please input your name and press Enter:"
read name
echo "Hello $name"
I can run it from python using sh module, but it fails to receive my input...
import errno
import sh
cmd = sh.Command('./hello.sh')
for line in cmd(_iter=True, _iter_noblock=True):
    if line == errno.EWOULDBLOCK:
        pass
    else:
        print(line)
How could I make this work?
After following this tutorial, this works for my use case:
#!/usr/bin/env python3
import errno
import sh
import sys
def sh_interact(char, stdin):
    global aggregated
    sys.stdout.write(char)
    sys.stdout.flush()
    aggregated += char
    if aggregated.endswith(":"):
        val = input()
        stdin.put(val + "\n")
cmd = sh.Command('./hello.sh')
aggregated = ""
cmd(_out=sh_interact, _out_bufsize=0)
For example, the output is:
$ ./testinput.py
Please input your name and press Enter:arod
Hello arod
There are two ways to solve this:
Using _in:
Using _in, we can pass a list whose items are fed to the script's standard input:
cmd = sh.Command('./read.sh')
stdin = ['hello']
for line in cmd(_iter=True, _iter_noblock=True, _in=stdin):
    if line == errno.EWOULDBLOCK:
        pass
    else:
        print(line)
Using command-line args, if you are willing to modify the script; see the sketch below.
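For the second option, a minimal sketch (assuming hello.sh is changed to read its first argument instead of stdin, i.e. name="$1"):
import sh

# positional arguments to the sh.Command object are passed straight
# through to the script as $1, $2, ...
cmd = sh.Command('./hello.sh')
print(cmd('arod'))  # prints: Hello arod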

Redirect python output to file

I have the following code in bash script
python -u - << EOF
from my.package import script
script.run()
EOF
>> /path/to/log/file
In script.py there are a number of print statements and I would like to redirect that output from the console to a file. The file gets created successfully, but it's empty.
How can I go about this? What am I missing here?
The idea is right, but the way you redirect the output to a file from the here-document is wrong. Heredocs redirect just like any other command; just do:
python -u - << EOF >> /path/to/log/file
from my.package import script
script.run()
EOF
Try this:
import sys
f = open("log_file.txt", 'w')
sys.stdout = f
print "My print statements"
f.close()
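On Python 3.4+ there is also contextlib.redirect_stdout, which does the same thing and restores sys.stdout automatically when the block exits:
import contextlib

with open("log_file.txt", 'w') as f, contextlib.redirect_stdout(f):
    print("My print statements")
# sys.stdout points back at the terminal here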
The right syntax is:
python -u - >> /path/to/log/file << EOF
from my.package import script
script.run()
EOF
The reason is that if you write something like this in bash:
util_1 | util_2 | util_3 >> some_file < another_file
or
util_1 | util_2 | util_3 >> some_file << EOF
...
EOF
another_file or the here-document goes to the standard input of the first utility in the pipeline (in this case, util_1).

Stream stdout from subprocess to python function, and back to subprocess

I am trying to use python in a unix style pipe.
For example, in unix I can use a pipe such as:
$ samtools view -h somefile.bam | python modifyStdout.py | samtools view -bh - > processed.bam
I can do this by using a for line in sys.stdin: loop in the python script and that appears to work without problems.
However I would like to internalise this unix command into a python script. The files involved will be large so I would like to avoid blocking behaviour, and basically stream between processes.
At the moment I am trying to use Popen to manage each command, and pass the stdout of the first process to the stdin of the next process, and so on.
In a separate python script I have (sep_process.py):
import sys
f = open("sentlines.txt", 'wr')
f.write("hi")
for line in sys.stdin:
print line
f.write(line)
f.close()
And in my main python script I have this:
import sys
from subprocess import Popen, PIPE
# Generate an example file to use
f = open('sees.txt', 'w')
f.write('somewhere over the\nrainbow')
f.close()
if __name__ == "__main__":
    # Use grep as an example command
    p1 = Popen("grep over sees.txt".split(), stdout=PIPE)
    # Send to sep_process.py
    p2 = Popen("python ~/Documents/Pythonstuff/Bam_count_tags/sep_process.py".split(), stdin=p1.stdout, stdout=PIPE)
    # Send to final command
    p3 = Popen("wc", stdin=p2.stdout, stdout=PIPE)
    # Read output from wc
    result = p3.stdout.read()
    print result
The p2 process, however, fails with [Errno 2] No such file or directory, even though the file exists.
Do I need to implement a Queue of some kind and/or open the python function using the multiprocessing module?
The tilde ~ is a shell expansion. You are not using a shell, so Popen is looking for a directory literally named ~.
You could read the environment variable HOME and insert that. Use
os.environ['HOME']
Alternatively you could use shell=True if you can't be bothered to do your own expansion.
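Another option is os.path.expanduser, which performs the same expansion in pure Python (a sketch using the path from the question):
import os.path

# expanduser replaces the leading ~ with the user's home directory,
# which Popen needs because it does not go through a shell by default
script = os.path.expanduser("~/Documents/Pythonstuff/Bam_count_tags/sep_process.py")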
Thanks @cdarke, that solved the problem for simple commands like grep, wc, etc. However, I could not get subprocess.Popen to work when using an executable such as samtools to provide the data stream.
To fix the issue, I created a string containing the pipe exactly as I would write it in the command line, for example:
sam = '/Users/me/Documents/Tools/samtools-1.2/samtools'
home = os.environ['HOME']
inpath = "{}/Documents/Pythonstuff/Bam_count_tags".format(home)
stream_in = "{s} view -h {ip}/test.bam".format(s=sam, ip=inpath)
pyscript = "python {ip}/bam_tags.py".format(ip=inpath)
stream_out = "{s} view -bh - > {ip}/small.bam".format(s=sam, ip=inpath)
# Absolute paths, written as a pipe
fullPipe = "{inS} | {py} | {outS}".format(inS=stream_in,
                                          py=pyscript,
                                          outS=stream_out)
print fullPipe
# Translates to >>>
# samtools view -h test.bam | python ./bam_tags.py | samtools view -bh - > small.bam
I then used popen from the os module instead and this worked as expected:
os.popen(fullPipe)
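For reference, os.popen hands the string to a shell, which is what makes the | and > operators work; the subprocess equivalent of the same idea (a sketch) is:
import subprocess

# shell=True runs the pipeline through /bin/sh, exactly as os.popen does,
# and call() waits for it to finish
subprocess.call(fullPipe, shell=True)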

Python - how can I capture all the print and stdout output from the terminal so that I can make a log?

Why does it not give output when redirected from bash with >>, so that it can be saved to a file?
$ cat > /var/tmp/runme.sh << \EOF
#!/bin/bash
export DISPLAY=:0.0
python /var/tmp/t.py >> /var/tmp/log.log &
sleep 3
ps aux | grep "t.py" | grep -v "grep" | awk '{print $2}' | xargs kill -9;
EOF
$ cat > /var/tmp/t.py << \EOF
import sys
print "[RAN]: OK"
sys.stdout.write("[RAN]: OK")
sys.stdout.flush()
EOF
$ chmod +x /var/tmp/runme.sh ; /var/tmp/runme.sh &
$ cat /var/tmp/log.log
$ tail -f /var/tmp/log.log
Showing nothing.
How can I get the output into log.log using this Bash and Python combination?
You could have Python do:
var = "hello world"
f = open('log.log', 'a+')  # 'a+' creates the file if needed; 'r+' would fail if it does not exist
print(var)
f.write(var)
f.close()
Within /var/tmp/t.py, do something similar to the following:
var = "[RAN]: OK"
f = open("log.log", "a+")
f.write(var)
print var
f.close()
Opening "log.log" with the "a+" argument will allow you to append the output of t.py to the file so that you can keep track of multiple runs of t.py. If you just want the output for the most recent run you could just use "w+" which will create a new file if one does not exist and rewrite the file if it does.
You could also put a time stamp on each log using datetime like so:
import datetime
now = datetime.datetime.now()
var = "Date: " + now.strftime("%Y-%m-%d %H:%M") + "\n" + "[RAN]: OK"
f = open("log.log", "a+")
f.write(var)
print var
f.close()
I don't know whether this script is something you will run periodically, in which case knowing when each log entry was created would be useful, or a one-and-done script, but I thought you might find date logging useful.
