How can I echo Python loop processing info to the command line?

I am trying to set up an (iron)python script that shows some processing information to the command line. Something like:
list = ["a", "b", "c"]
for i in list:
    cmd /k echo str(list.index(i))  # just echo index and leave window open so user can see progression
The challenge is to send multiple echo commands to the same instance of the command line.
Note: I can't just use print(i) in this case, because this ironpython script will be running inside a visual programming application that does not allow printing to the command line. The goal of the script is that users can see progression during heavy operations.
I have come across multiple threads that suggest subprocess for this. I've seen and tested a lot of examples but I can get none of them to work. I could really use some help with that.
For example, when I do this...
import subprocess
proc=subprocess.Popen("echo hello world", shell=True, stdout=subprocess.PIPE)
.... then I can see that 'hello world' is being returned, so the echo was probably successful. But I never see a shell window pop up.
Edit 1
Some more test results: method 1 gives me a good result (the command is executed and the window is left open for reading); method 2 (adding stdout) opens the shell and leaves the window open, but without a visible result - there's nothing to read; method 3 (adding stdin) flashes the window (closes it too soon).
method 1:
process = Popen("cmd /k dir")
method2:
process = Popen("cmd /k dir",stdout=subprocess.PIPE)
method 3:
process = Popen('cmd /k dir',stdin=subprocess.PIPE)
Edit 2
Method 4 - creates two separate cmd windows, one showing a, one showing b. Not it.
process = Popen("cmd /k echo a")
process = Popen("cmd /k echo b")
Method 5 - adding process.stdin.write after stdin - just flashes
process = Popen('cmd /k echo a',stdin=subprocess.PIPE)
process.stdin.write("cmd /k echo b")
What I'm looking for seems so simple, but it's giving me a headache...

What you are trying to do is not as simple as you think. Because the CMD window is a different process from your IronPython script, to do anything interactive (like showing the progress of a script), you need to use some form of inter-process communication.
For example, you could write a server script to run in the CMD window and then send messages to it from your IronPython script. But then you have to worry about how to start the server, how to stop it, and how to synchronise with the IronPython client script. You can see an example of this kind of inter-process communication in this StackOverflow answer.
Before trying something like that I would make sure that there's no way to display progress in your visual programming application, as staying inside the same process is likely to be easier.
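To make that idea concrete, here is a minimal sketch; the file name progress_viewer.py, the port number 50007, and the message format are all assumptions made for illustration, not part of the original answer. The viewer script is started by hand in its own console window and simply prints whatever it receives:
# progress_viewer.py - run this in its own cmd window, e.g. with: python progress_viewer.py
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 50007))  # arbitrary local port chosen for this sketch
server.listen(1)
conn, _ = server.accept()
while True:
    data = conn.recv(1024)
    if not data:
        break
    print(data.decode("utf-8"))
The IronPython side then connects and sends one line per processed item:
# inside the IronPython script
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 50007))
items = ["a", "b", "c"]
for index, item in enumerate(items):
    # ... heavy processing of item goes here ...
    message = "processed %d of %d\n" % (index + 1, len(items))
    client.sendall(message.encode("utf-8"))
client.close()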

Related

How to insert os.system output into a text file? [duplicate]

I want to get the stdout in a variable after running the os.system call.
Let's take this line as an example:
batcmd="dir"
result = os.system(batcmd)
result will contain the return code (0 under Windows, or 1 under some Linux systems, for the above example), not the command's output.
How can I get the stdout for the above command without using redirection in the executed command?
If all you need is the stdout output, then take a look at subprocess.check_output():
import subprocess
batcmd="dir"
result = subprocess.check_output(batcmd, shell=True)
Because you were using os.system(), you'd have to set shell=True to get the same behaviour. You do want to heed the security concerns about passing untrusted arguments to your shell.
If you need to capture stderr as well, simply add stderr=subprocess.STDOUT to the call:
result = subprocess.check_output(batcmd, shell=True, stderr=subprocess.STDOUT)
to redirect the error output to the default output stream.
If you know that the output is text, add text=True to decode the returned bytes value with the platform default encoding; use encoding="..." instead if that codec is not correct for the data you receive.
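For instance, a small sketch under the same assumptions as above (Windows-only, since dir is a cmd built-in; text= requires Python 3.7 or newer):
import subprocess

# Capture stdout and stderr together and decode them with the platform default encoding.
result = subprocess.check_output("dir", shell=True, stderr=subprocess.STDOUT, text=True)
print(result)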
These answers didn't work for me. I had to use the following:
import subprocess
p = subprocess.Popen(["pwd"], stdout=subprocess.PIPE)
out = p.stdout.read()
print out
Or as a function (using shell=True was required for me on Python 2.6.7 and check_output was not added until 2.7, making it unusable here):
def system_call(command):
    p = subprocess.Popen([command], stdout=subprocess.PIPE, shell=True)
    return p.stdout.read()
import subprocess
string="echo Hello world"
result=subprocess.getoutput(string)
print("result::: ",result)
I had to use os.system, since subprocess was giving me a memory error for larger tasks. Reference for this problem here. So, in order to get the output of the os.system command I used this workaround:
import os
batcmd = 'dir'
result_code = os.system(batcmd + ' > output.txt')
if os.path.exists('output.txt'):
    fp = open('output.txt', "r")
    output = fp.read()
    fp.close()
    os.remove('output.txt')
    print(output)
I would like to expand on the Windows solution. Using IDLE with Python 2.7.5, when I run this code from the file Expts.py:
import subprocess
r = subprocess.check_output('cmd.exe dir',shell=False)
print r
...in the Python Shell, I ONLY get the output corresponding to "cmd.exe"; the "dir" part is ignored. HOWEVER, when I add a switch such as /K or /C ...
import subprocess
r = subprocess.check_output('cmd.exe /K dir',shell=False)
print r
...then in the Python Shell, I get all that I expect including the directory listing. Woohoo !
Now, if I try any of those same things in a DOS Python command window, without the switch, or with the /K switch, it appears to make the window hang because it is running a cmd.exe subprocess and awaiting further input - type 'exit' then hit [enter] to release. But with the /C switch it works perfectly and returns you to the python prompt. Allrightee then.
Went a step further...I thought this was cool...When I instead do this in Expts.py:
import subprocess
r = subprocess.call("cmd.exe dir",shell=False)
print r
...a new DOS window pops open and remains there displaying only the results of "cmd.exe" not of "dir". When I add the /C switch, the DOS window opens and closes very fast before I can see anything (as expected, because /C terminates when done). When I instead add the /K switch, the DOS window pops open and remains, AND I get all the output I expect including the directory listing.
If I try the same thing (subprocess.call instead of subprocess.check_output) from a DOS Python command window, all output stays within the same window; there are no popup windows. Without the switch, again the "dir" part is ignored, AND the prompt changes from the python prompt to the DOS prompt (since a cmd.exe subprocess is running in python; again, type 'exit' and you will revert to the python prompt). Adding the /K switch prints out the directory listing and changes the prompt from python to DOS, since /K does not terminate the subprocess. Changing the switch to /C gives us all the output expected AND returns to the python prompt, since the subprocess terminates in accordance with /C.
Sorry for the long-winded response, but I am frustrated on this board with the many terse 'answers' which at best don't work (seemingly because they are not tested - like Eduard F's response above mine, which is missing the switch) or, worse, are so terse that they don't help much at all (e.g., 'try subprocess instead of os.system' ... yeah, OK, now what ??). In contrast, I have provided solutions which I tested, and showed how there are subtle differences between them. Took a lot of time but...
Hope this helps.
The commands module also works (Python 2 only; it was removed in Python 3).
import commands
batcmd = "dir"
result = commands.getoutput(batcmd)
print result
It works on Linux with Python 2.7.

Python script using subprocess to get a PID and kill it acts weird when launched from outside the directory it sits in

Thank you in advance for the time you'll give to reading this question. I am learning Python and I looked up a lot before asking here; please forgive me the newbie question.
So I created this script in Python 3 using the subprocess module to search for another Python script's PID, while only knowing the beginning of the script's name, and terminate it nicely.
Basically I run Python clocks on my LCD screen through a Raspberry Pi and I2C, and I terminate the script, clear the LCD and turn it off. This "off" script code is provided below.
The issue is that when I run it from the directory it sits in with a:
python3 off.py
It works perfectly, getting, parsing and terminating the PID, then turning off the LCD display.
Ideally I want to trigger it through telegram-cli, because I did it in bash and it worked nicely; I find it to be a nice feature. In Python it fails.
So I tested and it appears that when I try to launch it from another directory like this:
python3 ~/code/off.py
The grep subprocess returns more than the one PID it returns normally when launched from the directory the script resides in. For instance (with python3 -v):
kill: failed to parse argument: '25977
26044'
The second PID number is from a subprocess created by the script; I can't seem to find what it is, as it terminates when the script ends, but it makes the script fail its initial purpose.
Any help in understanding what is happening here would be really appreciated.
I came this far, as shown below, from two ugly lines of bash mixed with a call to a dummy four-line Python script, so I really feel I am getting close to a proper way of achieving my first real Python script.
I tried to decompose the script line by line in the interpreter and could not reproduce the error; everything behaves as expected. I only get this double PID result when running the script from an outside location.
Thank you in advance for any helpful insight on how to understand what is happening!
#!/usr/bin/env python3
import subprocess
import I2C_LCD_driver
import string
# Defining variables for searched strings and string encoding
searched_process_name = 'lcd_'
cut_grep_out_of_results = 'grep'
result_string_encoding = 'utf-8'
mylcd = I2C_LCD_driver.lcd()
LCD_NOBACKLIGHT = 0x00
run = True
def kill_script():
    # Listing processes and getting the searched process
    ps_process = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    grep_process = subprocess.Popen(["grep", "-i", searched_process_name], stdin=ps_process.stdout, stdout=subprocess.PIPE)
    # The .stdout.close() lines below allow the previous process to receive a SIGPIPE if the next process exits.
    ps_process.stdout.close()
    # Cleaning the result until only the PID number is returned in a string
    grep_cutout = subprocess.Popen(["grep", "-v", cut_grep_out_of_results], stdin=grep_process.stdout, stdout=subprocess.PIPE)
    grep_process.stdout.close()
    awk = subprocess.Popen(["cut", "-c", "10-14"], stdin=grep_cutout.stdout, stdout=subprocess.PIPE)
    grep_cutout.stdout.close()
    output = awk.communicate()[0]
    clean_output = output.decode(result_string_encoding)
    clean_output_no_new_line = clean_output.rstrip()
    clean_output_no_quote = clean_output_no_new_line.replace("'", '')
    PID = clean_output_no_quote
    # Terminating the LCD script process
    subprocess.Popen(["kill", "-9", PID])

while run:
    kill_script()
    # Cleaning and shutting off LCD screen
    mylcd.lcd_clear()
    mylcd.lcd_device.write_cmd(LCD_NOBACKLIGHT)
    break
I found out the reason for this weird behaviour. An error on my end:
I had forgotten that I had named some directories with names containing the character string I was running grep -i against, which caused the double result when running the script from outside its directory using its full path.
Turns out the script runs pretty well using subprocess.
So in the end, I renamed the scripts I wanted to terminate with disp_ rather than lcd_, and added shell=False to my subprocesses to make sure there was no risk of unwantedly sending the output to bash while running the script.
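For what it's worth, here is a hypothetical alternative sketch that avoids the ps | grep | cut pipeline altogether by letting pgrep match the command line; the disp_ prefix and the use of a plain SIGTERM are assumptions here, not part of the answer above:
import subprocess

try:
    # -f matches against the full command line of each process.
    pids = subprocess.check_output(["pgrep", "-f", "disp_"]).decode("utf-8").split()
except subprocess.CalledProcessError:
    pids = []  # pgrep exits with a non-zero status when nothing matches
for pid in pids:
    subprocess.call(["kill", pid])  # plain SIGTERM; keep -9 as a last resort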

Call to several batch files through CMD doesn't block

I'm trying to call several install.bat files one after another with Python through CMD.
It is necessary that each bat file be displayed in an interactive console window, because it asks for some user instructions, and that the Python program only resumes after each CMD process is resolved.
Each install.bat file can take a pretty long time to finish its process.
My code is the following:
for game in games:
    print("----------- Starting conversion for %s -----------" % game)
    subprocess.call("start cmd /C " + "Install.bat", cwd=os.path.join(gamesDosDir, game), shell=True)
print("end")
But the console windows are all launched at once, and the "end" message appears even before any of them is finished, whereas I would like them to appear one by one, not moving on to window n+1 until window n is finished and its console window is closed (either by the user or automatically with /K or /C).
I understand this is a problem with the way CMD is used, as call should be blocking. How can I resolve that? Additionally, if possible, how can I keep it exactly the same and add 'Y' and 'Y' as default user input?
The most common way to start a batch file (or more generally a CLI command) is to pass it as an argument to cmd /c. After your comment I can assume that you need to use start to force the creation of a (new) command window.
In that case the correct way is to add the /wait option to the start command: it will force the start command to wait for the end of its subprocess:
subprocess.call("start /W cmd /C " + "Install.bat", cwd=os.path.join(gamesDosDir,game),
shell=True)
But eryksun proposed a far cleaner way. On Windows, .bat files can be executed without shell=True, and creationflags=CREATE_NEW_CONSOLE is enough to ensure a new console is created. So the above line could simply become:
subprocess.call("Install.bat", cwd=os.path.join(gamesDosDir,game),
creationflags = subprocess.CREATE_NEW_CONSOLE)

subprocess windows disappear quickly

In Python, I use subprocess.Popen() to launch several processes. I want to debug those processes, but their windows disappear quickly and I get no chance to see the error messages. I would like to know whether there is any way to stop the windows from disappearing, or to write the contents of the windows to a file so that I can see the error messages later.
Thanks in advance!
You can use the stdout and stderr arguments to write the outputs to a file.
Example:
with open("log.txt", 'a') as log:
    proc = subprocess.Popen(['cmd', 'args'], stdout=log, stderr=log)
On Windows, the common way of keeping a cmd window open after the end of a console process is to use cmd /k.
Example: in a cmd window, typing start cmd /k echo foo
opens a new window (per start)
displays the output foo
leaves the command window open
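The same trick can be used from Python; a minimal Windows-only sketch, with the echoed text chosen arbitrarily:
import subprocess

# start is a cmd built-in, so shell=True is needed; /k keeps the new window open.
subprocess.Popen('start cmd /k echo hello from the subprocess', shell=True)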

Python and subprocess input piping

I have a small script that launches and, every half hour, feeds a command to a java program (game server manager) as if the user was typing it. However, after reading documentation and experimenting, I can't figure out how I can get two things:
1) A version which allows the user to type commands into the terminal window and have them sent to the server manager's input, just as the "save-all" command is.
2) A version which remains running, but sends any new input to the system itself, removing the need for a second terminal window. This one is actually half-happening right now as when something is typed, there is no visual feedback, but once the program is ended, it's clear the terminal has received the input. For example, a list of directory contents will be there if "dir" was typed while the program was running. This one is more for understanding than practicality.
Thanks for the help. Here's the script:
from time import sleep
import sys,os
import subprocess
# Launches the server with specified parameters, waits however
# long is specified in saveInterval, then saves the map.
# Edit the value after "saveInterval =" to desired number of minutes.
# Default is 30
saveInterval = 30
# Start the server. Substitute the launch command with whatever you please.
p = subprocess.Popen('java -Xmx1024M -Xms1024M -jar minecraft_server.jar',
                     shell=False,
                     stdin=subprocess.PIPE)
while True:
    sleep(saveInterval*60)
    # Comment out these two lines if you want the save to happen silently.
    p.stdin.write("say Backing up map...\n")
    p.stdin.flush()
    # Stop all other saves to prevent corruption.
    p.stdin.write("save-off\n")
    p.stdin.flush()
    sleep(1)
    # Perform save
    p.stdin.write("save-all\n")
    p.stdin.flush()
    sleep(10)
    # Allow other saves again.
    p.stdin.write("save-on\n")
    p.stdin.flush()
Replace your sleep() with a call to select((sys.stdin, ), (), (), saveInterval*60) -- that will have the same timeout but listens on stdin for user commands. When select says you have input, read a line from sys.stdin and feed it to your process. When select indicates a timeout, perform the "save" command that you're doing now.
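A rough sketch of that loop, assuming p and saveInterval come from the script above (POSIX only: select() on sys.stdin does not work on Windows):
import select
import sys

while True:
    readable, _, _ = select.select((sys.stdin,), (), (), saveInterval * 60)
    if readable:
        # The user typed something: forward it to the server process.
        line = sys.stdin.readline()
        if not line:
            break
        p.stdin.write(line)
        p.stdin.flush()
    else:
        # Timeout expired: run the periodic save command.
        p.stdin.write("save-all\n")
        p.stdin.flush()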
It won't completely solve your problem, but you might find python's cmd module useful. It's a way of easily implementing an extensible command line loop (often called a REPL).
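A minimal sketch of what that could look like, assuming p is the Popen object from the script above; the class name and the quit command are made up for illustration, and this only covers the interactive half (cmdloop() blocks, so the timed saves would still need separate handling):
import cmd

class ServerShell(cmd.Cmd):
    prompt = "(server) "

    def default(self, line):
        # Forward anything typed straight to the server's stdin.
        p.stdin.write(line + "\n")
        p.stdin.flush()

    def do_quit(self, line):
        """Leave the command loop."""
        return True

ServerShell().cmdloop()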
You can run the program using screen, then send the input to the specific screen session instead of to the program directly (if you are on Windows, just install Cygwin).
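For example, if the server were started inside a screen session named minecraft (the session name is an assumption), a command could be injected into it from Python like this:
import subprocess

# screen's "stuff" command pastes the given text into the session's input.
subprocess.call(["screen", "-S", "minecraft", "-X", "stuff", "save-all\n"])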
