I have a Linux command that I am running from Python:
roc = subprocess.Popen(['sshpass', '-p', password, 'rsync', '-avz', '--info=progress2', hostname, '/home/zurelsoft/test'],
                       stderr=subprocess.PIPE, stdout=subprocess.PIPE).communicate()[0]
print roc
This prints the command's output only after it has finished executing. But I want the output of the command as it happens, stopping when the command has fully executed. How can this be done?
You can check out Select and the Select Example.
Python’s select() function is a direct interface to the underlying operating system implementation. It monitors sockets, open files, and pipes (anything with a fileno() method that returns a valid file descriptor) until they become readable or writable, or a communication error occurs.
select() makes it easier to monitor multiple connections at the same time, and is more efficient than writing a polling loop in Python using socket timeouts, because the monitoring happens in the operating system network layer, instead of the interpreter.
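For the rsync command above, a minimal sketch of that approach might look like this (password and hostname are assumed to be defined as in the question; this is an illustration, not a drop-in implementation):

import os
import select
import subprocess
import sys

proc = subprocess.Popen(
    ['sshpass', '-p', password, 'rsync', '-avz', '--info=progress2',
     hostname, '/home/zurelsoft/test'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Watch both pipes; print chunks as soon as they become readable.
pipes = [proc.stdout, proc.stderr]
while pipes:
    readable, _, _ = select.select(pipes, [], [])
    for p in readable:
        chunk = os.read(p.fileno(), 1024)
        if chunk:
            sys.stdout.write(chunk.decode(errors='replace'))
            sys.stdout.flush()
        else:
            pipes.remove(p)   # EOF: stop watching this pipe
proc.wait()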
If this does not help you can also look at
Persistent python subprocess
How can I read all availably data from subprocess.Popen.stdout (non blocking)?
This should show the output as it happens if you call it via subprocess.Popen as you do, but pass the parent's own stdout instead of a pipe by changing:
stdout=sys.stdout
Passing an open, writable file object instead saves the output to that file:
out = open("output.tmp", "w")
proc = subprocess.Popen(["ls", "-R"], stdout=out)
proc.wait()   # let the command finish before reading the file back
out.close()
output = open("output.tmp", "r").readlines()
I have a program called my_program that operates a system. The program runs on Linux, and I'm trying to automate it using Python.
my_program constantly generates output and is supposed to receive input and respond to it.
When I run my_program in bash it works as it should: I receive constant output from the program, and when I type a certain sequence (for instance /3 to change the mode of the system), the program responds with output.
To start the process I am using:
self.process = Popen(my_program,stdin=PIPE,stdout=PIPE,text=True)
And in order to write input to the system I am using:
self.process.stdin.write('/3')
But the writing does not seem to work. I also tried:
self.process.communicate('/3')
But since my system constantly generates output, this deadlocks the process and the whole program gets stuck.
Any solution for writing to a process that is constantly generating output?
Edit:
I don't think I can provide code that reproduces the problem, because I'm using unique software that belongs to my company, but it goes something like this:
self.process = Popen(my_program, stdin=PIPE, stdout=PIPE, text=True)
self.process.stdin.write('/3')
# try to find a specific string that indicates that the input was received
string_received = False
while not string_received:
    response = self.process.stdout.readline().strip()
    if response == expected_string:
        break
The operating system implements buffered I/O between processes unless you specifically request otherwise.
In very brief, the output buffer will be flushed and written when it fills up, or (with default options) when you write a newline.
You can request line buffering when you create the Popen object (with bufsize=1, the parent's writes are flushed on each newline):
self.process = Popen(my_program, stdin=PIPE, stdout=PIPE, text=True, bufsize=1)
... or you can explicitly flush() the file handle when you want to force writing.
self.process.stdin.flush()
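In practice both fixes are often combined; a minimal sketch, assuming my_program reads line-oriented commands as the question suggests:

# End the command with a newline so line buffering kicks in,
# then flush to force the write through the pipe immediately.
self.process.stdin.write('/3\n')
self.process.stdin.flush()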
However, as the documentation warns you, if you can't predict when the subprocess can read and when it can write, you can easily end up in deadlock. A more maintainable solution might be to run the subprocess via pexpect or similar.
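For illustration, a hedged sketch of the pexpect route (my_program and expected_string are placeholders taken from the question):

import pexpect

child = pexpect.spawn('my_program')
child.sendline('/3')              # writes '/3' plus a newline
child.expect(expected_string)     # blocks until the string appears
print(child.before.decode())      # output received before the match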
I am trying to create a program to easily handle IT requests, and I have created a program to test if a PC on my network is active from a list.
To do this, I wrote the following code:
self.btn_Ping.clicked.connect(self.ping)

def ping(self):
    hostname = self.listWidget.currentItem().text()
    if hostname:
        os.system("ping " + hostname + " -t")
When I run it my main program freezes and I can't do anything until I close the ping command window. What can I do about this? Is there any other command I can use to try to ping a machine without making my main program freeze?
The docs state that os.system() returns the value returned by the command you called, therefore blocking your program until it exits.
They also state that you should use the subprocess module instead.
From the ping documentation:
ping /?
Options:
    -t    Ping the specified host until stopped.
          To see statistics and continue - type Control-Break;
          To stop - type Control-C.
So, by using -t you are waiting until that machine stops, and if the machine never stops, your Python script will run forever.
As mentioned by HyperTrashPanda, use another parameter for launching ping, so that it stops after one or some attempts.
As mentioned in Tim Pietzcker's answer, the use of subprocess is highly recommended over os.system (and others).
To separate the new process from your script, use subprocess.Popen. You should get the output printed normally into sys.stdout. If you want something more complex (e.g. for only printing something if something changes), you can set the stdout (and stderr and stdin) arguments:
Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object, and None. PIPE indicates that a new pipe to the child should be created. DEVNULL indicates that the special file os.devnull will be used. With the default settings of None, no redirection will occur; the child’s file handles will be inherited from the parent.
-- docs on subprocess.Popen, if you scroll down
If you want to get the exit code, use myPopenProcess.poll().
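Putting that together, a minimal sketch inside the same class as ping() above (the count flag is -n on Windows ping and -c on Linux; hostname as in the question):

import subprocess

# Start ping with a fixed count so it terminates on its own,
# without blocking the GUI; discard its output.
self.ping_proc = subprocess.Popen(["ping", hostname, "-n", "4"],
                                  stdout=subprocess.DEVNULL)

# Later (e.g. from a timer), check whether it has finished:
if self.ping_proc.poll() is not None:
    print("ping exited with code", self.ping_proc.returncode)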
I am executing a shell script using Popen. I am also using stdout=PIPE to capture the output. The code is:
pipe = Popen('acbd.sh', shell=True, stdout=PIPE)
while pipe.poll() is None:
    time.sleep(0.5)
text = pipe.communicate()[0]
if pipe.returncode == 0:
    print "File executed"
According to the documentation, using poll() with stdout=PIPE can lead to deadlock, and communicate() can be used to avoid this. I have used communicate() here.
Will my code lead to deadlock with communicate() too, or am I using communicate() wrong?
I also have an alternative in subprocess.check_output, but I would prefer to use Popen and capture the output with it.
Yes, you can deadlock, because of these two lines:
while pipe.poll() is None:
    time.sleep(0.5)
Take them out; there's no need for them here. communicate() already waits for the subprocess to close its FDs (as happens on exit). When you add a polling loop yourself and don't read until after that loop completes, the child can get stuck indefinitely trying to write output that cannot be written until communicate() causes your side of the pipeline to start reading.
As background: The POSIX specification for the write() call does not make any guarantees about the amount of data that can be written to a FIFO before it will block, or that this amount of data will be consistent even within a given system -- thus, the safe thing is to assume that any write to a FIFO is always allowed to block unless there's a reader actively consuming that data.
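A corrected version of the snippet above, with the polling loop removed (communicate() both drains the pipe and waits for exit):

from subprocess import Popen, PIPE

pipe = Popen('acbd.sh', shell=True, stdout=PIPE)
text = pipe.communicate()[0]   # reads all output, then waits for exit
if pipe.returncode == 0:
    print("File executed")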
I have this little script that puts your wireless device into monitor mode. It does an airodump scan and, after terminating the scan, dumps the output to file.txt or a variable, so that I can then scrape the BSSID and whatever other info I may need.
I feel I haven't grasped the concept or difference between subprocess.call() and subprocess.Popen().
This is what I currently have:
def setup_device():
    try:
        output = open("file.txt", "w")
        put_device_down = subprocess.call(["ifconfig", "wlan0", "down"])
        put_device_mon = subprocess.call(["iwconfig", "wlan0", "mode", "monitor"])
        put_device_up = subprocess.call(["ifconfig", "wlan0", "up"])
        start_device = subprocess.call(["airmon-ng", "start", "wlan0"])
        scanned_networks = subprocess.Popen(["airodump-ng", "wlan0"], stdout=output)
        time.sleep(10)
        scanned_networks.terminate()
    except Exception, e:
        print "Error:", e
I am still unclear about where, when, and how to use subprocess.call() and subprocess.Popen().
The thing that confuses me most is the stdout and stderr arguments. What is PIPE?
Another thing that I could possibly fix myself once I get a better grasp is this:
When running subprocess.Popen() and running airodump, the console window pops up showing the scan. Is there a way to hide this from the user to sort of clean things up?
You don't have to use Popen() if you don't want to; the other functions in the module, such as .call(), use Popen() under the hood and give you a simpler API to do what you want.
All console applications have 3 'file' streams: stdin for input, and stdout and stderr for output. The application decides what to write where; usually error and diagnostic information goes to stderr, the rest to stdout. If you want to capture either of these outputs in your Python program, you specify the subprocess.PIPE argument so that the 'stream' is redirected into your program. Hence the name.
If you want to capture the output of the airodump-ng wlan0 command, it's easiest to use the subprocess.check_output() function; it takes care of the PIPE argument for you:
scanned_networks = subprocess.check_output(["airodump-ng", "wlan0"])
Now scanned_networks contains whatever airodump-ng wrote to its stdout stream.
If you need to have more control over the process, then you do need to use the Popen() class:
proc = subprocess.Popen(["airodump-ng", "wlan0"], stdout=subprocess.PIPE)
for line in proc.stdout:
    # do something with each line as it arrives
    print(line)
proc.terminate()
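As for hiding the scan output from the user entirely, the DEVNULL value quoted in the docs above does exactly that. A minimal sketch (subprocess.DEVNULL needs Python 3.3+; on older versions pass an open os.devnull file object instead):

import subprocess

# Discard everything airodump-ng prints so nothing reaches the console.
proc = subprocess.Popen(["airodump-ng", "wlan0"],
                        stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL)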
Most of the examples I've seen with os.fork and the subprocess/multiprocessing modules show how to fork a new instance of the calling Python script or a chunk of Python code. What would be the best way to spawn a set of arbitrary shell commands concurrently?
I suppose I could just use subprocess.call or one of the Popen commands and pipe the output to a file, which I believe will return immediately, at least to the caller. I know this is not that hard to do; I'm just trying to figure out the simplest, most Pythonic way to do it.
Thanks in advance
All calls to subprocess.Popen return immediately to the caller. It's the calls to wait and communicate which block. So all you need to do is spin up a number of processes using subprocess.Popen (set stdin to /dev/null for safety), and then one by one call communicate until they're all complete.
Naturally I'm assuming you're just trying to start a bunch of unrelated (i.e. not piped together) commands.
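A minimal sketch of that pattern (the command list is made up for illustration; requires Python 3.3+ for subprocess.DEVNULL and Popen.args):

import subprocess

cmds = [["sleep", "2"], ["uname", "-a"], ["ls", "-l"]]

# Start everything first; Popen returns immediately.
procs = [subprocess.Popen(c, stdin=subprocess.DEVNULL,
                          stdout=subprocess.PIPE)
         for c in cmds]

# Then collect the results one by one; communicate() blocks per process.
for p in procs:
    out, _ = p.communicate()
    print(p.args, "->", p.returncode)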
I like to use PTYs instead of pipes. For a bunch of processes where I only want to capture error messages I did this.
RNULL = open('/dev/null', 'r')
WNULL = open('/dev/null', 'w')
logfile = open("myprocess.log", "a", 1)
REALSTDERR = sys.stderr
sys.stderr = logfile
This next part was in a loop spawning about 30 processes.
sys.stderr = REALSTDERR
master, slave = pty.openpty()
self.subp = Popen(self.parsed, shell=False, stdin=RNULL, stdout=WNULL, stderr=slave)
sys.stderr = logfile
After this I had a select loop which collected any error messages and sent them to the single log file. Using PTYs meant that I never had to worry about partial lines getting mixed up because the line discipline provides simple framing.
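The collection loop itself isn't shown above; a hedged sketch of what it might look like, assuming masters holds the master FDs from all the pty.openpty() calls:

import os
import select

while masters:
    readable, _, _ = select.select(masters, [], [], 1.0)
    for fd in readable:
        try:
            data = os.read(fd, 1024)
        except OSError:          # raised on some platforms at EOF
            data = b''
        if data:
            logfile.write(data.decode(errors='replace'))
        else:
            os.close(fd)
            masters.remove(fd)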
There is no best for all possible circumstances. The best depends on the problem at hand.
Here's how to spawn a process and save its output to a file combining stdout/stderr:
import os
import subprocess
import sys

def spawn(cmd, output_file):
    on_posix = 'posix' in sys.builtin_module_names
    return subprocess.Popen(cmd, close_fds=on_posix, bufsize=-1,
                            stdin=open(os.devnull, 'rb'),
                            stdout=output_file,
                            stderr=subprocess.STDOUT)
To spawn multiple processes that can run in parallel with your script and each other:
processes, files = [], []
try:
    for i, cmd in enumerate(commands):
        files.append(open('out%d' % i, 'wb'))
        processes.append(spawn(cmd, files[-1]))
finally:
    for p in processes:
        p.wait()
    for f in files:
        f.close()
Note: cmd is a list everywhere.
I suppose I could just use subprocess.call or one of the Popen commands and pipe the output to a file, which I believe will return immediately, at least to the caller.
That's not a good way to do it if you want to process the data.
In this case, it is better to do
sp = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
and then call sp.communicate(), or read directly with sp.stdout.read().
If the data shall be processed in the calling program at a later time, there are two ways to go:
You can retrieve the data as soon as possible, maybe via a separate thread, reading it and storing it somewhere the consumer can get it (see the sketch below).
You can let the producing subprocess block and retrieve the data from it when you need it. The subprocess produces as much data as fits into the pipe buffer (usually 64 KiB) and then blocks on further writes. As soon as you need the data, you read() from the subprocess object's stdout (maybe stderr as well) and use it - or, again, you use sp.communicate() at that later time.
Way 1 would be the way to go if producing the data takes a lot of time, so that your program would otherwise have to wait.
Way 2 would be preferred if the amount of data is quite large and/or the data is produced so fast that buffering would make no sense.
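A hedged sketch of way 1, using a reader thread and a queue (the command and the consumer loop are just illustrative):

import queue
import subprocess
import threading

q = queue.Queue()
sp = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)

def drain(pipe, q):
    # Read lines as soon as the child produces them and hand them over.
    for line in iter(pipe.readline, b''):
        q.put(line)
    pipe.close()

threading.Thread(target=drain, args=(sp.stdout, q), daemon=True).start()

# The consumer picks up whatever has arrived so far:
while not q.empty():
    print(q.get())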
See an older answer of mine, including code snippets, that does the following:
Uses processes, not threads, for blocking I/O, because they can be terminated more reliably via p.terminate()
Implements a retriggerable timeout watchdog that restarts counting whenever some output happens
Implements a long-term timeout watchdog to limit overall runtime
Can feed in stdin (although I only need to feed in one-time short strings)
Can capture stdout/stderr in the usual Popen manner (only stdout is coded, and stderr is redirected to stdout, but they can easily be separated)
It's almost realtime because it only checks every 0.2 seconds for output; but you could decrease this or remove the waiting interval easily
Lots of debugging printouts are still enabled, to see what's happening when.
For spawning multiple concurrent commands, you would need to alter the class RunCmd to instantiate multiple read-output/write-input queues and to spawn multiple Popen subprocesses.