PowerShell wrapper to direct piped input to Python script

I'm trying to write a little tool that will let me pipe command output to the clipboard. I've read through multiple answers on Stack Overflow, but they didn't work for me, because they didn't include piping, or because they didn't use a function, or they just threw errors (or maybe I just messed up). I threw up my hands with PowerShell and decided to go with Python.
I created a Python script called copyToClipboard.py:
import sys
from Tkinter import Tk

if sys.stdin.isatty() and len(sys.argv) == 1:
    # No input on stdin and no first argument: nothing to copy
    sys.exit()

tk = Tk()
tk.withdraw()
tk.clipboard_clear()

if not sys.stdin.isatty():
    # We have data in stdin
    while 1:
        try:
            line = sys.stdin.readline()
        except KeyboardInterrupt:
            break
        if not line:
            break
        tk.clipboard_append(line)
elif len(sys.argv) > 1:
    for line in sys.argv[1]:
        tk.clipboard_append(line)

tk.destroy()
(I haven't fully tested the argv[1] part, so that might be shaky. I'm mainly interested in reading from stdin, so the important part is sys.stdin.)
This works great! When I'm in the directory that contains the script, I can execute something like:
ls | python copyToClipboard.py
And the contents of ls magically appear on my clipboard. That's exactly what I want.
The challenge is wrapping this in a PowerShell function that will take a piped input and simply pass the input to the Python script. My goal is to be able to do ls | Out-Clipboard, so I created something like:
function Out-ClipBoard() {
    Param(
        [Parameter(ValueFromPipeline=$true)]
        [string] $text
    )
    pushd
    cd \My\Profile\PythonScripts
    $text | python copyToClipboard.py
    popd
}
But that doesn't work. Only one line of $text makes its way to the Python script.
How can I structure the wrapper for my PowerShell script such that whatever it receives as stdin simply gets passed to the Python script as stdin?

First, in PowerShell, multi-line text arrives through the pipeline as an array of strings, so you need a [String[]] parameter. To solve your problem, try using the Process block:
function Out-ClipBoard() {
    Param(
        [Parameter(ValueFromPipeline=$true)]
        [String[]] $Text
    )
    Begin
    {
        # Runs once to initialize the function
        pushd
        cd \My\Profile\PythonScripts
        $output = @()
    }
    Process
    {
        # Saves input from the pipeline.
        # Runs once per pipeline item, or once if the value was passed as a parameter
        $output += $Text
    }
    End
    {
        # Joins the array into a single string and pipes it. Only runs once
        $output -join "`r`n" | python copyToClipboard.py
        popd
    }
}
I don't have Python here myself, so I can't test it. When you need to pass multiple items (an array) through the pipeline, you need the Process block for PowerShell to handle them. More about the Process block and advanced functions is available on TechNet.
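If you want to check what actually reaches the Python side, a throwaway stand-in for copyToClipboard.py can help. Here is a minimal sketch (the dump_stdin.py name is just illustrative) that echoes back every stdin line it receives:

# dump_stdin.py - hypothetical debugging stand-in for copyToClipboard.py
import sys

for i, line in enumerate(sys.stdin):
    # repr() makes carriage returns and blank lines visible
    print("%d: %r" % (i, line))

With this in place of copyToClipboard.py, ls | Out-ClipBoard should print one numbered line per pipeline item; if only one line appears, the Process block isn't accumulating the input.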

Related

Python subprocess: Giving stdin, reading stdout, then giving more stdin

I'm working with a piece of scientific software called Chimera. Some of the code downstream of this question requires that I use Python 2.7.
I want to call a process, give that process some input, read its output, give it more input based on that, etc.
I've used Popen to open the process and process.stdin.write to pass standard input, but then I've gotten stuck trying to get output while the process is still running. process.communicate() stops the process, and process.stdout.readline() seems to leave me in an infinite loop.
Here's a simplified example of what I'd like to do:
Let's say I have a bash script called exampleInput.sh.
#!/bin/bash
# exampleInput.sh

# Read a number from the input
read -p 'Enter a number: ' num

# Multiply the number by 5
ans1=$( expr $num \* 5 )

# Give the user the multiplied number
echo $ans1

# Ask the user whether they want to keep going
read -p 'Based on the previous output, would you like to continue? ' doContinue

if [ $doContinue == "yes" ]
then
    echo "Okay, moving on..."
    # [...] more code here [...]
else
    exit 0
fi
Interacting with this through the command line, I'd run the script, type in "5" and then, if it returned "25", I'd type "yes" and, if not, I would type "no".
I want to run a python script where I pass exampleInput.sh "5" and, if it gives me "25" back, then I pass "yes".
So far, this is as close as I can get:
#!/home/user/miniconda3/bin/python2
# talk_with_example_input.py
import subprocess

process = subprocess.Popen(["./exampleInput.sh"],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)
process.stdin.write("5")
answer = process.communicate()[0]
if answer == "25":
    process.stdin.write("yes")
    ## I'd like to print the STDOUT here, but the process is already terminated
But that fails, of course, because after `process.communicate()`, my process isn't running anymore.
(Just in case/FYI): Actual problem
Chimera is usually a gui-based application to examine protein structure. If you run chimera --nogui, it'll open up a prompt and take input.
I often need to know what chimera outputs before I run my next command. For example, I will often try to generate a protein surface and, if Chimera can't generate a surface, it doesn't break--it just says so through STDOUT. So, in my python script, while I'm looping through many proteins to analyze, I need to check STDOUT to know whether to continue analysis on that protein.
In other use cases, I'll run lots of commands through Chimera to clean up a protein first, and then I'll want to run lots of separate commands to get different pieces of data, and use that data to decide whether to run other commands. I could get the data, close the subprocess, and then run another process, but that would require re-running all of those cleaning up commands each time.
Anyways, those are some of the real-world reasons why I want to be able to push STDIN to a subprocess, read the STDOUT, and still be able to push more STDIN.
Thanks for your time!
You don't need to use process.communicate in your example.
Simply read and write using process.stdin.write and process.stdout.read. Also make sure to send a newline, otherwise read won't return. And when you read from stdout, you also have to handle the newlines coming from echo.
Note: process.stdout.read will block until EOF.
# talk_with_example_input.py
import subprocess

process = subprocess.Popen(["./exampleInput.sh"],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)
process.stdin.write("5\n")
stdout = process.stdout.readline()
print(stdout)

if stdout == "25\n":
    process.stdin.write("yes\n")
    print(process.stdout.readline())
$ python2 test.py
25
Okay, moving on...
Update
When communicating with a program in that way, you have to pay special attention to what the application is actually writing. It is best to analyze the output in a hex editor:
$ chimera --nogui 2>&1 | hexdump -C
Please note that readline [1] only reads to the next newline (\n). In your case you have to call readline at least four times to get that first block of output.
If you just want to read everything up until the subprocess stops printing, you have to read byte by byte and implement a timeout. Sadly, neither read nor readline provides such a timeout mechanism. This is probably because the underlying read syscall [2] (Linux) does not provide one either.
On Linux we can write a single-threaded read_with_timeout() using poll / select. For an example see [3].
from select import epoll, EPOLLIN

def read_with_timeout(fd, timeout__s):
    """Reads from fd until there is no new data for at least timeout__s seconds.

    This only works on linux > 2.5.44.
    """
    buf = []
    e = epoll()
    e.register(fd, EPOLLIN)
    while True:
        ret = e.poll(timeout__s)
        if not ret or ret[0][1] != EPOLLIN:
            break
        buf.append(fd.read(1))
    return ''.join(buf)
In case you need a reliable way to read non-blocking under both Windows and Linux, this answer might be helpful.
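For reference, one portable pattern (a sketch only, not necessarily what the linked answer does) is a background thread that reads lines from the child's stdout and feeds them into a queue, which the main thread drains with a timeout:

# Sketch: non-blocking reads via a reader thread and a queue
import subprocess
from threading import Thread
from Queue import Queue, Empty  # Python 3: from queue import Queue, Empty

def enqueue_output(stream, queue):
    # Runs in the reader thread: forward each line until EOF
    for line in iter(stream.readline, ''):
        queue.put(line)
    stream.close()

process = subprocess.Popen(["./prog.sh"], stdout=subprocess.PIPE)
q = Queue()
t = Thread(target=enqueue_output, args=(process.stdout, q))
t.daemon = True  # don't let the reader thread keep the program alive
t.start()

# Collect output until the child has been silent for 1.5 seconds
try:
    while True:
        print q.get(timeout=1.5)
except Empty:
    pass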
[1] from the python 2 docs:
readline(limit=-1)
Read and return one line from the stream. If limit is specified, at most limit bytes will be read.
The line terminator is always b'\n' for binary files; for text files, the newline argument to open() can be used to select the line terminator(s) recognized.
[2] from man 2 read:
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
[3] example
$ tree
.
├── prog.py
└── prog.sh
prog.sh
#!/usr/bin/env bash
for i in $(seq 3); do
    echo "${RANDOM}"
    sleep 1
done
sleep 3
echo "${RANDOM}"
prog.py
# talk_with_example_input.py
import subprocess
from select import epoll, EPOLLIN

def read_with_timeout(fd, timeout__s):
    """Reads from fd until there is no new data for at least timeout__s seconds.

    This only works on linux > 2.5.44.
    """
    buf = []
    e = epoll()
    e.register(fd, EPOLLIN)
    while True:
        ret = e.poll(timeout__s)
        if not ret or ret[0][1] != EPOLLIN:
            break
        buf.append(fd.read(1))
    return ''.join(buf)

process = subprocess.Popen(
    ["./prog.sh"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE
)

print(read_with_timeout(process.stdout, 1.5))
print('-----')
print(read_with_timeout(process.stdout, 3))
$ python2 prog.py
6194
14508
11293
-----
10506

use python as a repl (read-eval-print-loop), reading commands from stdin

I'm implementing a tool (a Java program) that embeds some command interpreters, such as bc, sage, .... The tool executes the interpreter in a thread, redirects its stdin and stdout, and writes commands to the redirected stdin of the embedded interpreter over time. Everything is OK with bc and sage after solving the usual problems with buffering.
In the case of python, it seems that the interpreter is not reading its commands from stdin (I've verified that with a simple echo 1+2 | python).
I can't find a way to instruct the python interpreter to read its commands from stdin. I've read lots of similar questions, but all of them try to read data from stdin, not commands.
Note that the commands sent to the python interpreter can be multi-line (a python "for" loop, ...), and the commands are not all available when python starts; they are generated and sent to the interpreter over time.
In short, I want use python as a repl (read-eval-print-loop), reading from stdin.
The python REPL mode is intended to be interactive and only works when python detects a terminal. You can pass commands into stdin, but their results aren't automatically printed; you have to print them explicitly.
Try running
echo print 1+2 | python
That should get your expected result. You can also write your code to a file like this:
echo print 1+2 > myfile.py
python myfile.py
It is also possible to use a buffer like a FIFO node to pass data to python's stdin, but python exits after every line, and you get no advantage over simply running echo print 1+2 | python.
mkfifo -m a=rw MYFIFO
echo "print 1+2" > MYFIFO &
cat MYFIFO | python
Using python the expected way, by executing each set of code as you need it, should work better than keeping a repl open, and will produce more consistent results.
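As a minimal sketch of that approach driven from another program (the kind of embedding described in the question), you can hand a complete multi-line chunk of code to a fresh python process over stdin; python reads a whole program from stdin when invoked as python -, and nothing comes back unless the code prints it:

import subprocess

# A complete program, sent in one shot; the result is printed explicitly
program = """
total = 0
for i in range(5):
    total += i
print(total)
"""

p = subprocess.Popen(["python", "-"], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, universal_newlines=True)
out, _ = p.communicate(program)
print(out)  # -> 10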

Unix: why I need to close the input FIFO to my program before I can read from its output FIFO

I've got a problem: I have a program running in a shell that does some calculations based on user input. I can launch this program in an interactive way so it keeps asking for input and outputs its calculations after the user presses enter, and it remains open inside the shell until the user types the exit word.
What I want to do is create an interface, such that the user types input somewhere outside the shell and, using pipes, FIFOs and so on, the input is carried to that program and its output comes back to this interface.
In a few words: I have a long-running process and I need to attach, when needed, my interface to its stdin and stdout.
For this kind of problem I was thinking of using a FIFO file made with the mkfifo command (we are on Unix, specifically on a Mac) and redirecting the program's stdin and stdout to this file:
my_program < fifofile > fifofile
But I found some difficulties reading from and writing to this FIFO file, so I decided to use two FIFO files, one for input and one for output. So:
exec my_program < fifofile_in > fifofile_out
(don't know why I use exec for redirection, but it works... and I'm okay with exec ;) )
If I launch this command in a shell, and in another one I write:
echo -n "date()" > fifofile_in
The echoing process is successful, and if I do:
cat fifofile_out
I'm able to see my_program's output. OK! But I don't want to deal with the shell; instead, I want to deal with a program written by me, like this python script:
import os, time

text = ""
OUT = os.open("sample_out", os.O_RDONLY | os.O_NONBLOCK)
out = os.fdopen(OUT)

while 1:
    #IN = open("sample_in", 'w')
    IN = os.open("sample_in", os.O_WRONLY)
    #OUT = os.fdopen(os.open("sample_out", os.O_RDONLY | os.O_NONBLOCK | os.O_APPEND))
    #OUT = open("sample_out", "r")
    print "Write your mess:"
    text = raw_input()
    if text == "exit":
        break
    os.write(IN, text)
    os.close(IN)
    #os.fsync(IN)
    time.sleep(0.05)
    try:
        while True:
            #c = os.read(OUT, 1)
            c = out.readline()
            print "Read: ", c  #, " -- ", ord(c)
            if not c:
                print "End of file"
                quit()
                #break
    except OSError as e:
        continue
        #print "OSError"
    except IOError as e:
        continue
        #print "IOError"
Where:
sample_in and sample_out are, respectively, the FIFO files used for redirecting my_program's stdin and stdout (so I write to sample_in to give input to my_program, and read from sample_out to get my_program's output)
out is my os.fdopen file object, used for getting lines with out.readline() instead of reading char by char with os.read(OUT, 1)
time.sleep(0.05) delays a little before reading my_program's output (needed for the calculations, otherwise I'd have nothing to read)
With this script and my_program running in the background from the shell, I'm able to write to stdin and read from stdout correctly, but the journey to this code wasn't easy: after reading all the posts about FIFOs and reading/writing from/to FIFO files, I came to this solution of closing the IN fd before reading from OUT, even though the FIFO files are different! From what I read around the internet and in Stack Overflow articles, I thought that this procedure was only needed when handling a single FIFO file, but here I'm dealing with two (different!) files. I think it is something related to how I write into sample_in: I tried to flush so it would behave like the echo -n command, but it seems useless.
So I would like to ask whether this behaviour is normal, and how I can achieve the same thing as echo -n "...." > sample_in in one shell and cat sample_out in another. In particular, cat outputs data continuously as soon as I echo input into sample_in, whereas my way of reading delivers the data in blocks.
Thanks so much, I hope everything is clear enough!

running TCL through python by subprocess, but not giving any output

I am trying to run my tcl script through python subprocess as follows:
>>> import subprocess
>>> subprocess.Popen(["tclsh", "tcltest.tcl"])
<subprocess.Popen object at 0x0000000001DD4DD8>
>>> subprocess.Popen(["tclsh", "tcltest.tcl"], shell=True )
<subprocess.Popen object at 0x0000000002B34550>
I don't know if it is working or not, since I don't see anything from it!
My tcl script also uses some packages from my company that cause errors when I use Tkinter, Tk, and eval:
import Tkinter
import socket

def TCLRun():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('127.0.0.1', 5006))
    root = Tkinter.Tk()
    ## root.tk.eval('package require Azimuth-Sdk')
    tcl_script = """
    ##package require Company-Sdk
    ## set players [ace_azplayer_list_players]
    set players 2
    puts $players
    ## if { $players != "" } {
    ##     foreach player $players {
    ##         set cmd ace_azplayer_remove_player
    ##         if { [catch { [ $cmd $player ] } err] } {
    ##             puts " $cmd $player - $err"
    ##             sleep 1
    ##         }
    ##     }
    ## } """
    # call the Tkinter tcl interpreter
    root.tk.call('eval', tcl_script)
    root.mainloop()
gives me this error
>>> import TCLCall
>>> TCLCall.TCLRun()

Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    TCLCall.TCLRun()
  File "C:\Users\XXX\Desktop\PKT\TCLCall.py", line 24, in TCLRun
    root.tk.call('eval', tcl_script)
TclError: can not find channel named "stdout"
That's why I switched to subprocess. At least it doesn't give me an error!
Any idea how to run my tcl script, with its internally required packages, through python?
Thanks
To get the output from using subprocess.Popen, you can try the following:
import subprocess

p = subprocess.Popen(
    "tclsh tcltest.tcl",
    shell=True,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
print stdout
print stderr
It's entirely possible that the script you're running with subprocess.Popen is also generating an error, but it isn't displayed since you aren't explicitly looking for it.
Edit:
To prevent some information from being lost in the comments below:
You probably have several potential errors here, or things you can try.
Either your tcl script by itself isn't able to import teapot properly, or some sort of interaction between the tcl script and the python script isn't working properly, or subprocess.Popen isn't correctly finding the teapot package from your path.
I would try debugging your programs in that order. First confirm that your tcl script works without python or subprocess.Popen by running it directly from the command line (for example, C:\Users\blah> tclsh tcltest.tcl).
Then, after you've made sure your script works, bring in Python. From what I'm seeing, you don't have any problem with the python stuff, but with either your tcl script or some issue with your path.
The whole point of subprocess.Popen is redirection of standard channels, so you can handle output programmatically instead of seeing it on your own standard output. Did you try handling it? How?
Maybe you don't need redirection at all: then os.system("tclsh tcltest.tcl") should be enough. Or maybe subprocess.Popen has other advantages for you -- then figure out how to disable redirection, or how to redirect child's stdout to your own stdout.
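As a sketch of that simplest case (no redirection at all): leaving out the stdout/stderr arguments lets tclsh inherit the parent's console streams, so its output and errors appear directly on your terminal.

import subprocess

# No stdout/stderr pipes: tclsh writes straight to the console
status = subprocess.call(["tclsh", "tcltest.tcl"])
print("exit status: %d" % status)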
I think I might have a solution for you, or at least a different method to try. This example opens the TCL shell as a process; then you pump commands to it as though you were on the command line. Since you said your command line worked, I would expect this to work too. It works for me on Windows with Python 3.7.6 and TCL 8.5.
There needs to be a little trickery. I've found solutions that require threads and all sorts of other overhead to get this job done, but they just fell flat. What I came up with is simple and synchronous.
stdout.readline() will block. So if your TCL command doesn't throw anything back, you are dead in the water.
SO, you force something to come back by appending an innocuous TCL puts command that communicates back to the Python script that the job is done.
The other trick is that you need to "puts []" the command to force the output back into stdout.
If you need to source more TCL files, then add those in before you make your final call to what you are trying to run. Whatever you need to do on the cli, you do through this process.
My code has this all wrapped in a class with methods. I brought them all together here in-line for the example. Leave the errorInfo code in while debugging and remove as you see fit.
Note: You can also use TCL's "info body" to read a script into a string variable and then run each line through the process; in essence, stepping through TCL while in a Python debugging session. Sorry, this does not work too well: the workaround for comments does not work for opening curly braces.
Hope this helps someone out there.
EDIT: Using a multi-line string handles comment lines.
import subprocess

print("------")
shell_path = r"<YOUR PATH TO THE TCL INTERPRETER SHELL>"
tcl_shell = subprocess.Popen(shell_path,
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE,
                             universal_newlines=True,
                             bufsize=0)

#
# Use ONE of the "command"s below. I have them here in series for examples.
#

# Simple
command = r"set temp 5"

# Multiline with error --> This will give an error, of course, to show the error extraction in action
command = r"""
set temp 5
set temp2 [expr temp + 5]
"""

# Multiline working
command = r"""
set temp 5
set temp2 [expr $temp + 5]
"""

# More output
command = r"""
puts "Starting process"
set temp 5
puts $temp
set temp2 [expr $temp + 5]
"""

# Comments handled
command = r"# comment!"

# Be sure to leave the newline to handle comments
tcl_shell.stdin.write(f"""puts [{command}
] \nputs \"tcl_shell_cmd_complete\"\n""")
# NOTE: tcl_shell.stdin.flush() does not seem to be needed. Consider if needed.

result = ""
line = tcl_shell.stdout.readline()
while line != "tcl_shell_cmd_complete\n":
    result += line
    line = tcl_shell.stdout.readline()
print(f"tcl_shell sent:\n{command}")
print(f"tcl_shell result:\n{result}".rstrip())

command_error_check = "puts $errorInfo"
tcl_shell.stdin.write(f"{command_error_check} \nputs \"tcl_shell_cmd_complete\"\n")
resultErr = ""
line = tcl_shell.stdout.readline()
while line != "tcl_shell_cmd_complete\n":
    resultErr += line
    line = tcl_shell.stdout.readline()
print(f"tcl_shell error info:\n{resultErr}")

tcl_shell.stdin.close()
tcl_shell.terminate()
tcl_shell.wait(timeout=0.5)
print("------")

How to make a Python program handle a here document?

I've written a Python wrapper (pyprog) to run a program (someprogram), something like this:
...do some setup stuff in Python...

print("run [y=yes]")
CHOICE = input()

...do some setup stuff in Python...

if CHOICE == "y":
    status = subprocess.call(["someprogram"])
    sys.exit(status)
A user wants to use a shell script to run the program and feed it input using a here document like this:
#!/bin/sh
pyprog > pyprog.log << EOF
y
file1
file2
EOF
Is there a way to spawn the subprocess so that the here document will work (the "y" gets consumed by the Python input(), and the "file1" and "file2" continue along as stdin to someprogram)? Right now, the Python input() takes the "y", but the rest of it disappears.
You need to connect sys.stdin to the stdin of the call:
import sys

status = subprocess.call(["someprogram"], stdin=sys.stdin)
I've used something like this a few times before: https://gist.github.com/887225
Basically it's a python script that accepts a number of command line parameters, performs some transformation based on what was input, then uses os.system() to invoke a shell command.
In this example I'm calling Java, passing in a class path then running the ProgramName.jar program.
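In outline, that kind of wrapper looks something like this sketch (the argument handling is illustrative, not the gist's exact code):

import os
import sys

# Pass this script's own arguments straight through to the Java program
args = " ".join(sys.argv[1:])
os.system("java -jar ProgramName.jar " + args)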
