No data with QProcess.waitForReadyRead(): inconsistent behavior - python

I experienced this bug in a more complex application I am developing. I execute a Python server program, want to read the first data it makes available, and then close it; I do that to validate some settings from the server.
My problem boils down to this:
QProcess.waitForReadyRead() is supposed to return True very quickly, but instead it blocks until it times out.
It used to work. I rolled back to an older revision to try to find what broke it, but the problem is always there now. I have really tried everything I could think of, so I want to know whether this is a known problem or something specific to me, caused by my environment.
This is the test I wrote to show the problem. When I execute it, the first 3 checks return the data immediately, but the last one times out and I get no data.
This is certainly not logical. In the test I used a sleep; in my server it's just a select-like function, implemented with base modules in Python.
from PyQt4 import QtCore

# FILE: 1.py
#   print 'TEST'
#
# FILE: 2.py
#   import time
#   print 'TEST'
#   time.sleep(100)
#
# FILE: 1.sh
#   echo 'TEST'
#
# FILE: 2.sh
#   echo 'TEST'
#   sleep 100
proc0 = QtCore.QProcess()
proc0.start('sh', ['./1.sh'])
proc0.waitForStarted()
proc0.waitForReadyRead(10000)
output0 = proc0.readAll()

proc1 = QtCore.QProcess()
proc1.start('sh', ['./2.sh'])
proc1.waitForStarted()
proc1.waitForReadyRead(10000)
output1 = proc1.readAll()

proc2 = QtCore.QProcess()
proc2.start('python', ['./1.py'])
proc2.waitForStarted()
proc2.waitForReadyRead(10000)
output2 = proc2.readAll()

proc3 = QtCore.QProcess()
proc3.start('python', ['./2.py'])
proc3.waitForStarted()
proc3.waitForReadyRead(10000)
output3 = proc3.readAll()

print "0"
print output0.size()
print repr(output0.data())
print "1"
print output1.size()
print repr(output1.data())
print "2"
print output2.size()
print repr(output2.data())
print "3"
print output3.size()
print repr(output3.data())

proc0.close()
proc1.close()
proc2.close()
proc3.close()
Is the last test (proc3) supposed to behave like I described? Is there a workaround or a fix that would let me read the data from stdout in my Python server...? What is it?

This started as a comment, but...
I found the solution: when its stdout is a pipe, Python's print doesn't flush; the output is buffered and only pushed through once enough data has accumulated. Adding sys.stdout.flush() after the print fixed it.
Hope it helps.
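For reference, a minimal sketch of 2.py with the fix applied (flushing pushes the buffered line into the pipe immediately; starting the child with python -u, i.e. unbuffered mode, is another common workaround):

import sys
import time

print 'TEST'
sys.stdout.flush()  # force the buffered output through the pipe right away
time.sleep(100)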

Related

python: subprocess returns nothing when running scripts which take a longer time

Beginner here with a subprocess problem:
The line below works fine when I run both scripts with less data, which takes around 10-20 minutes. However, with bigger data to be processed, the subprocess returns nothing once both scripts have completed (let's say in an hour).
Also: often even with less data it behaves abnormally, i.e. not returning the status_code / not going through. I then just have to run the snippet again and it works. Any solid reasoning would help a lot!
status_code = subprocess.run(
    f"python3 scriptA.py {param_1} & python3 scriptB.py {param_2}",
    shell=True,
).returncode
if status_code == 0:
    print("subprocess ended ..")  # the control does not come here in case of huge files
    some_other_file.main(some_param1, some_param2)
I fail to understand why that happens (or should I use a different approach?). Any help would be very much appreciated. Thank you!
Sample of the scripts (both scriptA and scriptB):

def main(param_2):
    some_func_with_csv_operations()  # not returning anything
    more_funcs()

if __name__ == "__main__":
    param_2 = sys.argv[1]
    main(param_2)
Also, neither scriptA nor scriptB calls sys.exit() anywhere.
EDIT:
I tried printing the status_code before the if condition; nothing was printed, and in the terminal I just see a blinking cursor (screenshot omitted).
Also (looking for any Python processes): running ps -ax | grep python shows no relevant information (screenshot omitted).
Try running your processes separately to get a better idea of where the failure is occurring:
running_processes = []
running_processes.append(subprocess.Popen(["python3", "scriptA.py", f"{param_1}"]))
running_processes.append(subprocess.Popen(["python3", "scriptB.py", f"{param_2}"]))

both_succeeded = True
for p in running_processes:
    ret = p.wait()
    if ret:  # anything other than zero will evaluate to True here
        both_succeeded = False
        cmd = " ".join(p.args)
        print(f'command "{cmd}" returned code {ret}')
if both_succeeded:
    do_more_things()
The above uses a list to hold the running processes so you can process them in a loop and avoid repetitive code. If you will definitely only have two subprocesses you could choose not to use the loop.
process1 = subprocess.Popen(shlex.split(f"python3 scriptA.py {param_1}"))
process2 = subprocess.Popen(shlex.split(f"python3 scriptB.py {param_2}"))
ret1 = process1.wait()
ret2 = process2.wait()
if ret1 == 0 and ret2 == 0:  # or "if not ret1 and not ret2:" would also work
    do_more_things()
Notice that without shell=True, Popen requires a list of arguments rather than a single command string; I've used shlex.split in this example to allow writing the command as a single string while achieving the same result.
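For illustration, this is what shlex.split does to one of those command strings (the parameter value here is a stand-in):

import shlex

print(shlex.split("python3 scriptA.py some_param"))
# ['python3', 'scriptA.py', 'some_param']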

python simple threading ends without doing anything (maybe)

When I run the following code (using "sudo python servers.py"), the process seems to just finish immediately after printing "test".
Why don't the proxy_server functions run? Or maybe they do and I just don't realize it (because the first line of the proxy function doesn't print anything).
This is a stripped-down example; I didn't want to include unnecessary content, but it still demonstrates my problem:
import os, sys, thread, socket, select, struct, time

HTTP_PORT = 80
FTP_PORT = 21
FTP_DATA_PORT = 20
IP_IN = '10.0.1.3'
IP_OUT = '10.0.3.3'
sys_http = 'http_proxy'
sys_ftp = 'ftp_proxy'
sys_ftp_data = 'ftp_data_proxy'

def main():
    try:
        thread.start_new_thread(proxy_server, (HTTP_PORT, IP_IN, sys_http, http_handler))
        thread.start_new_thread(proxy_server, (FTP_PORT, IP_IN, sys_ftp, http_handler))
        thread.start_new_thread(proxy_server, (FTP_DATA_PORT, IP_OUT, sys_ftp_data, http_handler))
        print "test"
    except e:
        print 'Error!'
        sys.exit(1)

def proxy_server(host, port, fileName, handler):
    print "Proxy Server Running on ", host, ":", port

def http_handler(src, sock):
    return ''

if __name__ == '__main__':
    main()
What am I missing or doing wrong?
First, you have indentation problems related to using mixed tabs and spaces for indentation. While they didn't cause your code to misbehave in this particular case, they will cause you problems later if you don't stick to consistently using one or the other.
Second, instead of the low-level thread module, you should be using threading. Your problem is occurring because, as documented in the thread module documentation,
When the main thread exits, it is system defined whether the other threads survive. On SGI IRIX using the native thread implementation, they survive. On most other systems, they are killed without executing try ... finally clauses or executing object destructors.
threading threads let you explicitly define whether other threads should survive the death of the main thread, and default to surviving. In general, threading is much easier to use correctly.
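A minimal sketch of the same startup using threading (the constants and functions are the ones from the question; threading threads are non-daemon by default, so the process stays alive until they finish):

import threading

def main():
    thread_args = [
        (HTTP_PORT, IP_IN, sys_http, http_handler),
        (FTP_PORT, IP_IN, sys_ftp, http_handler),
        (FTP_DATA_PORT, IP_OUT, sys_ftp_data, http_handler),
    ]
    for args in thread_args:
        t = threading.Thread(target=proxy_server, args=args)
        t.start()  # non-daemon by default: the interpreter waits for it
    print "test"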

Maya GUI freezes during subprocess call

I need to conform some Maya scenes we receive from a client to make them compatible with our pipeline. I'd like to batch that action, obviously, and I'm asked to launch the process from within Maya.
I've tried two methods already (quite similar to each other), and both work, but the problem is that the Maya GUI freezes until the process is complete. I'd like the process to be completely transparent to the user, so that they can keep working and only see a message when it's done.
Here's what I've tried and found so far:
This tutorial, http://www.toadstorm.com/blog/?p=136, led me to write the following and save it:
# Run with mayapy. The imports are not shown in the original; maya.standalone (std)
# and maya.cmds (mc) are assumed here, following the linked tutorial.
import sys
import maya.standalone as std
import maya.cmds as mc

filename = sys.argv[1]

def createSphere(filename):
    std.initialize(name='python')
    try:
        mc.file(filename, open=True, pmt=False, force=True)
        sphere = mc.polySphere()[0]
        mc.file(save=True, force=True)
        sys.stdout.write(sphere)
    except Exception, e:
        sys.stderr.write(str(e))
        sys.exit(-1)
    if float(mc.about(v=True)) >= 2016.0:
        std.uninitialize()

createSphere(filename)
Then I call it from within Maya this way:
import subprocess

mayapyPath = 'C:/Program Files/Autodesk/Maya2016/bin/mayapy.exe'
scriptPath = 'P:/WG_MAYA_Users/lbouet/scripts/createSphere.py'
filenames = ['file1', 'file2', 'file3', 'file4']

def massCreateSphere(filenames):
    for filename in filenames:
        maya = subprocess.Popen(mayapyPath + ' ' + scriptPath + ' ' + filename,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = maya.communicate()
        exitcode = maya.returncode
        if str(exitcode) != '0':
            print(err)
            print 'error opening file: %s' % (filename)
        else:
            print 'added sphere %s to %s' % (out, filename)

massCreateSphere(filenames)
It works fine, but like I said, it freezes the Maya GUI until the process is over. And this is just for creating a sphere, so it is nowhere near all the actions I'll actually have to perform on the scenes.
I've also tried running the first script via a .bat file calling mayabatch; same issue.
I found this post (Running list of cmd.exe commands from maya in Python), which seems to be exactly what I'm looking for, but I can't see how to adapt it to my situation.
From what I understand, the issue might come from calling Popen in a loop (i.e. multiple times), but I really can't see how to do otherwise... I'm thinking of maybe saving the second script somewhere on disk too and calling that one from Maya?
In this case subprocess.communicate() will block until the child process is done, so it is not going to fix your problem on its own.
If you just want to kick off the processes and not wait for them to complete -- 'fire and forget' style -- you can just use threads, starting off a new thread for each process. However you'll have to be very careful about reporting back to the user -- if you try to touch the Maya scene or GUI from an outside thread you'll get mysterious, undebuggable errors. print() is usually ok but maya.cmds() is not. If you're only printing messages you can probably get away with maya.utils.executeDeferred() which is discussed in this question and in the docs.
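A minimal fire-and-forget sketch along those lines, reworking the massCreateSphere from the question (the reporting callback is illustrative; maya.utils.executeDeferred hands it back to Maya's main thread):

import sys
import threading
import subprocess
import maya.utils

def batch_in_background(filenames):
    def worker():
        for filename in filenames:
            proc = subprocess.Popen([mayapyPath, scriptPath, filename],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            out, err = proc.communicate()  # blocks this worker thread, not the GUI
            # report back on the main thread; don't call maya.cmds from here
            maya.utils.executeDeferred(
                sys.stdout.write,
                'finished %s (exit code %s)\n' % (filename, proc.returncode))
    threading.Thread(target=worker).start()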

Problems refreshing stdout line using print with python

I've been trying to print out the progress of a for loop in python2.7 using the following code:
for i in range(100):
    if float(i) % 10.0 == 0:
        print i, "\r",
The behaviour I'm after is the refreshing of the same line on stdout rather than writing to a new line every time.
EDIT 1:
Testing in my console (Xfce Terminal 0.4.8), I actually don't get any output, regardless of whether I include the if statement or not.
Why is there no output?
I originally said the behaviour of stdout changed depending on whether the if statement was there, because I had simplified the code that produced the problem down to its most basic form (which only produced the above-mentioned effect). My apologies.
EDIT 2:
Thanks to senderle, this is solved. If you leave out the sleep() command, the prints and carriage returns happen so quickly that you can't see them.
EDIT 3:
One last thing: if you don't handle the final number in range(100), i.e. 99, separately, it is cleared off the screen.
EDIT 4:
Note the comma after print i in senderle's answer.
I have found that using sys.stdout is a more system-independent way of doing this, for various reasons having to do with the way print works. But you have to flush the buffer explicitly, so I put it in a function.
def carriage_return():
    sys.stdout.write('\r')
    sys.stdout.flush()
This is kind of a WAG. Let me know if it helps.
I tried this and it works for me. The time.sleep is just for dramatization.
import sys, time

def carriage_return():
    sys.stdout.write('\r')
    sys.stdout.flush()

for i in range(100):
    if i % 10 == 0:
        print i,
        carriage_return()
        time.sleep(1)
Finally, I have seen people do this as well. Using terminal control codes like this seems right in some ways, but it also seems more brittle to me. This works for me with the above code as well (on OS X).
def carriage_return():
    if sys.platform.lower().startswith('win'):
        print '\r'
    else:
        print chr(27) + '[A'
Testing your code as is, with just a colon added at the end of the first line, works just fine with Python 2.7 32-bit on Windows 7 64-bit.
Do you have any other writes to stdout in your if or for block that could be causing the newlines to be written out?

UNIX named PIPE end of file

I'm trying to use a UNIX named pipe to output statistics of a running service. I intend to provide an interface similar to /proc, where one can see live stats by catting a file.
I'm using code similar to this in my Python program:
while True:
    f = open('/tmp/readstatshere', 'w')
    f.write('some interesting stats\n')
    f.close()
/tmp/readstatshere is a named pipe created by mknod.
I then cat it to see the stats:
$ cat /tmp/readstatshere
some interesting stats
It works fine most of the time. However, if I cat the entry several times in quick succession, I sometimes get multiple lines of some interesting stats instead of one. Once or twice it has even gone into an infinite loop, printing that line forever until I killed it. The only fix I've found so far is to put a delay of, say, 500 ms after f.close() to prevent the issue.
I'd like to know why exactly this happens and whether there is a better way of dealing with it.
Thanks in advance
A pipe is simply the wrong solution here. If you want to present a consistent snapshot of the internal state of your process, write it to a temporary file and then rename that file to the "public" name. This prevents all the issues that can arise from other processes reading the state while you're updating it. Also, do NOT do this in a busy loop; ideally do it in a thread that sleeps for at least one second between updates.
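A minimal sketch of that write-then-rename approach (the paths are illustrative; os.rename atomically replaces the destination when both paths are on the same filesystem, so a reader sees either the old or the new contents, never a partial write):

import os
import time

STATS_PATH = '/tmp/readstatshere'  # the "public" name

while True:
    tmp_path = STATS_PATH + '.tmp'
    with open(tmp_path, 'w') as f:
        f.write('some interesting stats\n')
    os.rename(tmp_path, STATS_PATH)  # atomic replace
    time.sleep(1)  # at least a second between updates; no busy loop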
What about a UNIX socket instead of a pipe?
In that case, you can react to each connect by providing fresh data just in time.
The only downside is that you cannot cat the data; you'll have to create a new socket handle and connect() to the socket file.
MYSOCKETFILE = '/tmp/mysocket'

import socket
import os

try:
    os.unlink(MYSOCKETFILE)
except OSError:
    pass

s = socket.socket(socket.AF_UNIX)
s.bind(MYSOCKETFILE)
s.listen(10)

while True:
    s2, peeraddr = s.accept()
    s2.send('These are my actual data')
    s2.close()
Program querying this socket:
MYSOCKETFILE = '/tmp/mysocket'

import socket
import os

s = socket.socket(socket.AF_UNIX)
s.connect(MYSOCKETFILE)

while True:
    d = s.recv(100)
    if not d:
        break
    print d

s.close()
I think you should use FUSE.
It has Python bindings; see http://pypi.python.org/pypi/fuse-python/
This allows you to compose answers to queries formulated as POSIX filesystem system calls.
Don't write to an actual file. That's not what /proc does. Procfs presents a virtual (non-disk-backed) filesystem which produces the information you want on demand. You can do the same thing, but it'll be easier if it's not tied to the filesystem. Instead, just run a web service inside your Python program, and keep your statistics in memory. When a request comes in for the stats, formulate them into a nice string and return them. Most of the time you won't need to waste cycles updating a file which may not even be read before the next update.
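A minimal sketch of that idea using only the Python 2 standard library (the port and the stats string are placeholders):

import threading
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class StatsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = 'some interesting stats\n'  # formatted on demand from in-memory state
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# serve stats from a background thread so the service itself keeps running
server = HTTPServer(('127.0.0.1', 8000), StatsHandler)
t = threading.Thread(target=server.serve_forever)
t.daemon = True  # die together with the main service
t.start()

Then curl http://127.0.0.1:8000/ takes the place of catting a file.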
You need to unlink the pipe after you issue the close. I think this is because there is a race condition where the pipe can be opened for reading again before cat finishes; the reader thus sees more data and reads it out, leading to multiple copies of "some interesting stats".
Basically you want something like:
import os

the_pipe = '/tmp/readstatshere'  # the pipe path from the question

while True:
    os.mkfifo(the_pipe)
    f = open(the_pipe, 'w')
    f.write('some interesting stats')
    f.close()
    os.unlink(the_pipe)
Update 1: added the call to mkfifo.
Update 2: as noted in the comments, this code has a race condition as well when there are multiple consumers.
