Maya GUI freezes during subprocess call - python

I need to conform some maya scenes we receive from a client to make them compatible to our pipeline. I'd like to batch that action, obviously, and I'm asked to launch the process from within Maya.
I've tried two methods already (quite similar to each other), which both work, but the problem is that the Maya GUI freezes until the process is complete. I'd like the process to be completely transparent to the user, so that they can keep working and only get a message when it's done.
Here's what I tried and found until now:
This tutorial here (http://www.toadstorm.com/blog/?p=136) led me to write this and save it:
import sys
import maya.standalone as std
import maya.cmds as mc

filename = sys.argv[1]

def createSphere(filename):
    std.initialize(name='python')
    try:
        mc.file(filename, open=True, pmt=False, force=True)
        sphere = mc.polySphere()[0]
        mc.file(save=True, force=True)
        sys.stdout.write(sphere)
    except Exception, e:
        sys.stderr.write(str(e))
        sys.exit(-1)
    if float(mc.about(v=True)) >= 2016.0:
        std.uninitialize()

createSphere(filename)
Then to call it from within maya that way:
import subprocess

mayapyPath = 'C:/Program Files/Autodesk/Maya2016/bin/mayapy.exe'
scriptPath = 'P:/WG_MAYA_Users/lbouet/scripts/createSphere.py'
filenames = ['file1', 'file2', 'file3', 'file4']

def massCreateSphere(filenames):
    for filename in filenames:
        # pass the command as a list so the space in "Program Files" is handled correctly
        maya = subprocess.Popen([mayapyPath, scriptPath, filename],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = maya.communicate()
        exitcode = maya.returncode
        if exitcode != 0:
            print(err)
            print('error opening file: %s' % filename)
        else:
            print('added sphere %s to %s' % (out, filename))

massCreateSphere(filenames)
It works fine, but like I said, it freezes the Maya GUI until the process is over. And that's just for creating a sphere, nowhere near all the actions I'll actually have to perform on the scenes.
I've also tried to run the first script via a .bat file calling mayabatch and running the script, same issue.
I found this post (Running list of cmd.exe commands from maya in Python), which seems to be exactly what I'm looking for, but I can't see how to adapt it to my situation.
From what I understand, the issue might come from calling Popen in a loop (i.e. multiple times), but I really can't see how to do otherwise... I'm thinking maybe I should save the second script somewhere on disk too and call that one from Maya?

In this case subprocess.communicate() will block until the child process is done, so it is not going to fix your problem on its own.
If you just want to kick off the processes and not wait for them to complete -- 'fire and forget' style -- you can use threads, starting a new thread for each process. However, you'll have to be very careful about reporting back to the user: if you try to touch the Maya scene or GUI from an outside thread you'll get mysterious, undebuggable errors. print() is usually ok but maya.cmds is not. If you're only printing messages you can probably get away with maya.utils.executeDeferred(), which is discussed in this question and in the docs.
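Here is a minimal sketch of that fire-and-forget pattern, reusing mayapyPath, scriptPath and filenames from the question; treat it as a starting point under the assumption that a printed message is the only feedback you need:

import subprocess
import sys
import threading
import maya.utils

def run_conform(filename):
    # communicate() still blocks, but only inside this worker thread,
    # so the Maya GUI stays responsive
    proc = subprocess.Popen([mayapyPath, scriptPath, filename],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        message = 'error opening file: %s\n%s' % (filename, err)
    else:
        message = 'added sphere %s to %s' % (out, filename)
    # never call maya.cmds from this thread; hand the report back to the main thread
    maya.utils.executeDeferred(sys.stdout.write, message + '\n')

def massCreateSphere(filenames):
    for filename in filenames:
        threading.Thread(target=run_conform, args=(filename,)).start()

massCreateSphere(filenames)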

Related

is it possible to pass data from one python program to another python program? [duplicate]

Is it possible -- other than by using something like a .txt/dummy file -- to pass a value from one program to another?
I have a program that uses a .txt file to pass a starting value to another program. I update the value in the file in between starting the program each time I run it (ten times, essentially simultaneously). Doing this is fine, but I would like to have the 'child' program report back to the 'mother' program when it is finished, and also report back what files it found to download.
Is it possible to do this without using eleven files to do it (that's one for each instance of the 'child' to 'mother' reporting, and one file for the 'mother' to 'child')? I am talking about completely separate programs, not classes or functions or anything like that.
To operate efficiently, and not be waiting around for hours for everything to complete, I need the 'child' program to run ten times and get things done MUCH faster. Thus I run the child program ten times and give each instance a separate range to check through.
Both programs run fine, but I would like to get them to run/report back and forth with each other and hopefully not use file 'transmission' to accomplish the task, especially on the child-to-mother side of the data transfer.
'Mother' program...currently
import os
import sys
import subprocess
import time

os.chdir('/media/')

# find highest downloaded video
Hival = open("Highest.txt", "r")
Histr = Hival.read()
Hival.close()
HiNext = str(int(Histr) + 1)

# setup download #1
NextVal = open("NextVal.txt", "w")
NextVal.write(HiNext)
NextVal.close()

# call download #1
procs = []
proc = subprocess.Popen(['python', 'test.py'])
procs.append(proc)
time.sleep(2)

# setup download #2-11
Histr2 = int(Histr) / 10000
Histr2 = Histr2 + 1
for i in range(10):
    Hiint = str(Histr2) + "0000"
    NextVal = open("NextVal.txt", "w")
    NextVal.write(Hiint)
    NextVal.close()
    proc = subprocess.Popen(['python', 'test.py'])
    procs.append(proc)
    time.sleep(2)
    Histr2 = Histr2 + 1

for proc in procs:
    proc.wait()
'Child' program
import urllib
import os
from Tkinter import *
import time

root = Tk()
root.title("Audiodownloader")
root.geometry("200x200")
app = Frame(root)
app.grid()

os.chdir('/media/')
Fileval = open('NextVal.txt', 'r')
Fileupdate = Fileval.read()
Fileval.close()

Fileupdate = int(Fileupdate)
Filect = Fileupdate / 10000
Filect2 = str(Filect) + "0009"
Filecount = int(Filect2)

while Fileupdate <= Filecount:
    root.title(Fileupdate)
    url = 'http://www.yourfavoritewebsite.com/audio/encoded/' + str(Fileupdate) + '.mp3'
    urllib.urlretrieve(url, str(Fileupdate) + '.mp3')
    statinfo = os.stat(str(Fileupdate) + '.mp3')
    if statinfo.st_size < 10000L:
        os.remove(str(Fileupdate) + '.mp3')
    time.sleep(.01)
    Fileupdate = Fileupdate + 1
    root.update_idletasks()
I'm trying to convert the original VB6 program over to Linux and make it much easier to use at the same time, hence the missing .mainloop(). This was my first real attempt at anything in Python at all, hence the lack of def or classes. I'm trying to come back and finish this up after 1.5 months of doing nothing with it, mostly due to not knowing how to. In researching it a little while ago I found this is WAY over my head. I haven't ever done anything with threads/sockets/client/server interaction, so I'm purely an idiot in this case. Google anything on it and I just get brought right back here to Stack Overflow.
Yes, I want 10 running copies of the program at the same time, to save time. I could do without the GUI interface if it were possible for the program to report back to the 'mother', so the mother could print on the screen the current value being searched, and so the child could report back when it's finished and which files it downloaded successfully (versus downloaded and then erased for being too small). I would use the successful-download information to update Highest.txt for the next time the program is run.
I think this may clarify things MUCH better... that, or I don't understand the nature of using server/client interaction :) The only reason time.sleep is in the program is to try to make sure the files get written before the next instance of the child program runs. I didn't know for sure what kind of timing issue I might run into, so I included those lines for safety.
This can be implemented using a simple client/server topology using the multiprocessing library. Using your mother/child terminology:
server.py
from multiprocessing.connection import Listener

# client handler
def child(conn):
    while True:
        msg = conn.recv()
        # this just echoes the value back, replace with your custom logic
        conn.send(msg)

# server
def mother(address):
    serv = Listener(address)
    while True:
        client = serv.accept()
        child(client)

mother(('', 5000))
client.py
from multiprocessing.connection import Client
c = Client(('localhost', 5000))
c.send('hello')
print('Got:', c.recv())
c.send({'a': 123})
print('Got:', c.recv())
Run with
$ python server.py
$ python client.py
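To map this onto the question's setup, each child could push its results to the mother when it finishes instead of everyone sharing NextVal.txt/Highest.txt. A rough sketch, with the message layout made up for illustration:

# child side: call this once at the end of test.py
from multiprocessing.connection import Client

def report_results(start_value, downloaded_files):
    conn = Client(('localhost', 5000))
    conn.send({'start': start_value, 'files': downloaded_files})
    conn.close()

# mother side: collect one report per child instead of only waiting on proc.wait()
from multiprocessing.connection import Listener

def collect_reports(expected_children=10):
    results = []
    serv = Listener(('', 5000))
    for _ in range(expected_children):
        conn = serv.accept()
        results.append(conn.recv())   # e.g. {'start': 120000, 'files': ['120003.mp3']}
        conn.close()
    serv.close()
    return results

The mother can then use the collected 'files' lists to update Highest.txt once, after all children have reported in.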
When you talk about using txt files to pass information between programs, we first need to know what language you're using.
In my experience with Java and Python this is viable, though it gets laborious depending on the amount of information you want to pass.
In Python, you can use the built-in file handling to read and write the txt files, and to schedule execution you can use APScheduler.

Python - good way to determine that my PyInstaller-based Python script is the only copy running, and possibly terminate other copies?

I'm looking for a way to determine that a script I wrote, packed by PyInstaller, is the only copy of itself running - so that it can quit if it finds itself open already.
I'd also like to implement an argument to kill all currently running versions of the .exe. Killing them one by one from a simple list of PIDs associated with the .exe isn't an option, since I could accidentally kill my own process before finishing.
It would be best if I could use only win32 APIs, as this script is sometimes called by services and is therefore unfriendly to many subprocess.Popen calls; I don't want to have to go through UAC spoofing. However, sometimes the .exe is invoked by the Windows Scheduler or by user-land programs.
My current version of finding processes uses win32pdh. I'm not exactly sure where to attribute this, though it's very close to the first example from here: http://www.programcreek.com/python/example/51184/win32pdh.OpenQuery
import win32pdh

def get_win_processes():
    win32pdh.EnumObjects(None, None, win32pdh.PERF_DETAIL_WIZARD)
    junk, instances = win32pdh.EnumObjectItems(None, None, 'Process', win32pdh.PERF_DETAIL_WIZARD)

    proc_dict = {}
    for instance in instances:
        if proc_dict.has_key(instance):
            proc_dict[instance] = proc_dict[instance] + 1
        else:
            proc_dict[instance] = 0

    proc_ids = []
    for instance, max_instances in proc_dict.items():
        for inum in xrange(max_instances + 1):
            hq = win32pdh.OpenQuery()  # initializes the query handle
            try:
                path = win32pdh.MakeCounterPath((None, 'Process', instance, None, inum, 'ID Process'))
                counter_handle = win32pdh.AddCounter(hq, path)  # convert counter path to counter handle
                try:
                    win32pdh.CollectQueryData(hq)  # collects data for the counter
                    type, val = win32pdh.GetFormattedCounterValue(counter_handle, win32pdh.PDH_FMT_LONG)
                    proc_ids.append((instance, val))
                except win32pdh.error, e:
                    pass
                win32pdh.RemoveCounter(counter_handle)
            except win32pdh.error, e:
                pass
            win32pdh.CloseQuery(hq)

    return proc_ids
However, this returns two processes per copy: one is the guardian process for PyInstaller, the other is the actual instance of the program. Furthermore, it doesn't indicate which one is the currently-running guardian or child.
Example output when exe is 'wcdo.exe' and there are two copies running:
(u'wcdo', 11700)
(u'wcdo', 8748)
(u'wcdo', 4152)
(u'wcdo', 9308)
Thanks!
You could query wmic and check which copies are running and how they are related to each other ...
C:\>wmic process where name="webserver2.exe" get processid,parentprocessid,commandline
CommandLine ParentProcessId ProcessId
webserver2.exe --scheduled 3136 2212
webserver2.exe --scheduled 2212 6004
Here:
3136 is cmd.exe (it appears only as a parent)
2212 is the 'PyInstaller wrapper' (because it appears both as a parent and as a process)
6004 is the application itself
Using PDH seems like overkill; it is slow and quite inflexible for identifying processes on Windows.
Calling wmic through subprocess and parsing the output is done in a few lines, as sketched below.
Additionally there is a format flag that controls how the wmic output is presented (csv, xml, ...).
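A sketch of that few-lines approach, assuming wmic is on PATH and Python 2.7's subprocess.check_output; with 'get parentprocessid,processid' the columns come back in alphabetical order, parent first, and the quoting of the where clause may need adjusting:

import os
import subprocess

def find_other_instances(exe_name='wcdo.exe'):
    # returns (parent_pid, pid) pairs for every running copy of exe_name except this process
    cmd = ['wmic', 'process', 'where', 'name="%s"' % exe_name,
           'get', 'parentprocessid,processid']
    output = subprocess.check_output(cmd)
    instances = []
    for line in output.splitlines()[1:]:        # first line is the header row
        parts = line.split()
        if len(parts) == 2 and parts[1].isdigit():
            ppid, pid = int(parts[0]), int(parts[1])
            if pid != os.getpid():
                instances.append((ppid, pid))
    return instances

A pid that also shows up as someone else's parent is the PyInstaller wrapper; the remaining one is the application process.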
Btw. you could try to create your exe with py2exe, which does not use a wrapper application.
Not sure if it is relevant to identify how the application was started, but you could add a special command line argument to your Windows Scheduler task so it runs wcdo.exe --scheduled.

Performing an action upon unexpected exit python

I was wondering if there was a way to perform an action before the program closes. I am running a program over a long time and I do want to be able to close it and have the data saved to a text file or something, but there is no way for me to interfere with the while True loop I have running, and simply saving the data on every loop iteration would be highly inefficient.
So is there a way that I can save data, say a list, when I hit the X or destroy the program? I have been looking at the atexit module but have had no luck, except when I let the program finish at a certain point.
import atexit

def saveFile(list):
    print "Saving List"
    with open("file.txt", "a") as test_file:
        test_file.write(str(list[-1]))

atexit.register(saveFile(list))
That is my whole atexit part of the code and, like I said, it runs fine when I set the program to close through the while loop.
Is this possible, to save something when the application is terminated?
Your atexit usage is wrong. It expects a function and its arguments, but you're just calling your function right away and passing the result to atexit.register(). Try:
atexit.register(saveFile, list)
Be aware that this uses the list reference as it exists at the time you call atexit.register(), so if you assign to list afterwards, those changes will not be picked up. Modifying the list itself without reassigning should be fine, though.
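Putting that together, a small sketch of the corrected registration using the names from the question (the argument is renamed here, since shadowing the built-in list is best avoided):

import atexit

def saveFile(values):
    print "Saving List"
    with open("file.txt", "a") as test_file:
        test_file.write(str(values[-1]))

data = ['first value']
atexit.register(saveFile, data)   # function and argument passed separately, not saveFile(data)

data.append('latest value')       # in-place changes are seen; 'latest value' is written at normal exit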
You could use the handle_exit context manager from this ActiveState recipe:
http://code.activestate.com/recipes/577997-handle-exit-context-manager/
It handles SystemExit, KeyboardInterrupt, SIGINT, and SIGTERM, with a simple interface:
def cleanup():
    print 'do some cleanup here'

def main():
    print 'do something'

if __name__ == '__main__':
    with handle_exit(cleanup):
        main()
There's nothing you can do in reaction to a SIGKILL. It kills your process immediately, without any allowed cleanup.
Catch the SystemExit exception at the top of your application, then rethrow it.
There are a couple of approaches to this. As some have commented, you could use signal handling ... your [Ctrl]+[C] from the terminal where this is running in the foreground dispatches a SIGINT signal to your process (from the terminal's drivers).
Another approach would be to use a non-blocking os.read() on sys.stdin.fileno() so that you poll your keyboard once during every loop to see if an "exit" keystroke or sequence has been entered.
A similar non-blocking polling approach can be implemented using the select module's functionality. I've seen that used with the termios and tty modules. (It seems inelegant that it needs all of those to save, set changes to, and restore the terminal settings, and I've also seen some examples using os and fcntl; I'm not sure when or why one would prefer one over the other if os.isatty(sys.stdin.fileno())).
Yet another approach would be to use the curses module with window.nodelay() or window.timeout() to set your desired input behavior and then either window.getch() or window.getkey() to poll for any input.
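A minimal sketch of the signal-handling variant for the save-on-exit case; saveFile here is a stand-in for whatever actually persists your list, and SIGKILL still cannot be intercepted:

import signal
import sys
import time

data = [0.0]   # seed so there is always something to save

def saveFile(values):
    with open("file.txt", "a") as test_file:
        test_file.write(str(values[-1]))

def save_and_exit(signum, frame):
    saveFile(data)      # flush the latest value before the process dies
    sys.exit(0)

for sig in (signal.SIGINT, signal.SIGTERM):   # Ctrl+C and a plain kill
    signal.signal(sig, save_and_exit)

while True:             # stand-in for the real long-running loop
    data.append(time.time())
    time.sleep(1)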

pygtk textview getbuffer and write at the same time

I am trying to make a program (in Python) that writes what I type to a file as I write it, and opens it in a certain window that I have already created. I have looked all around for a viable solution, but it would seem that multi-threading may be the only option.
I was hoping that when option autorun is "activated" it will:
while 1:
    wbuffer = textview.get_buffer()
    text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
    openfile = open(filename, "w")
    openfile.write(text)
    openfile.close()
I am using pygtk and have a textview window, but when I get the buffer it sits forever.
I am thinking that I need to multi-thread it and queue it so one thread will be writing the buffer while it is being queued.
my source is here. (I think the statement is at line 177.)
any help is much appreciated. :)
and here is the function:
def autorun(save):
    filename = None
    chooser = gtk.FileChooserDialog("Save File...", None,
                                    gtk.FILE_CHOOSER_ACTION_SAVE,
                                    (gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL,
                                     gtk.STOCK_SAVE, gtk.RESPONSE_OK))
    response = chooser.run()
    if response == gtk.RESPONSE_OK:
        filename = chooser.get_filename()
    filen = filename
    addr = (filename)
    addressbar.set_text("file://" + filename)
    web.open(addr)
    chooser.destroy()

    wbuffer = textview.get_buffer()
    while 1:
        text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
        time.sleep(1)
        openfile = open(filename, "w")
        openfile.write(text)
        openfile.close()
Though it's not too easy to see exactly what your GTK stuff (not included here) is doing, the main problem is that control needs to be returned to the GTK main loop, otherwise the program will hang.
So if you have a long process (like this eternal one here), then you need to thread it. The problem is that you need the thread to exit nicely when the main program quits, so you'll have to redesign a bit around that. Also, threading with gtk needs to be initialized correctly (look here).
However, I don't think you need threading. Instead you could connect the changed signal of your TextBuffer to a function that writes the buffer to the target file (if the user has put the program in autorun mode). A problem with this is if the buffer gets large or the program gets slow, in which case you should consider threading the callback of the changed signal. So this solution requires making sure you don't get into the situation where save requests pile up on top of each other because the user types faster than the computer saves. Takes some design thought.
So, finally, the easier solution: you may not want to save the buffer on every key press. In that case, you could run the save function (which could look like your first code block without the loop) on a timeout instead. Just don't make the timeout too short.
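A rough sketch of that timeout idea, assuming PyGTK's gobject module and reusing the textview/filename names from the question (start_autosave is a made-up helper name):

import gobject

def start_autosave(textview, filename, interval_ms=2000):
    # runs inside the GTK main loop, so no extra thread is needed
    def autosave():
        wbuffer = textview.get_buffer()
        text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
        openfile = open(filename, "w")
        openfile.write(text)
        openfile.close()
        return True        # returning True keeps the timeout firing; return False to stop
    gobject.timeout_add(interval_ms, autosave)

You would call start_autosave(textview, filename) once, after the file chooser has produced a filename, instead of entering the while 1: loop.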

Overriding basic signals (SIGINT, SIGQUIT, SIGKILL??) in Python

I'm writing a program that adds normal UNIX accounts (i.e. modifying /etc/passwd, /etc/group, and /etc/shadow) according to our corp's policy. It also does some slightly fancy stuff like sending an email to the user.
I've got all the code working, but there are three pieces of code that are very critical, which update the three files above. The code is already fairly robust because it locks those files (e.g. /etc/passwd.lock), writes to a temporary file (e.g. /etc/passwd.tmp), and then overwrites the original file with the temporary one. I'm fairly pleased that it won't interfere with other running copies of my program or the system useradd, usermod, passwd, etc. programs.
The thing that I'm most worried about is a stray ctrl+c, ctrl+d, or kill command in the middle of these sections. This has led me to the signal module, which seems to do precisely what I want: ignore certain signals during the "critical" region.
I'm using an older version of Python, which doesn't have signal.SIG_IGN, so I have an awesome "pass" function:
def passer(*a):
    pass
The problem that I'm seeing is that signal handlers don't work the way that I expect.
Given the following test code:
def passer(a=None, b=None):
    pass

def signalhander(enable):
    signallist = (signal.SIGINT, signal.SIGQUIT, signal.SIGABRT, signal.SIGPIPE,
                  signal.SIGALRM, signal.SIGTERM, signal.SIGKILL)
    if enable:
        for i in signallist:
            signal.signal(i, passer)
    else:
        for i in signallist:
            signal.signal(i, abort)
    return

def abort(a=None, b=None):
    sys.exit('\nAccount was not created.\n')
    return

signalhander(True)
print('Enabled')
time.sleep(10)  # ^C during this sleep
The problem with this code is that a ^C (SIGINT) during the time.sleep(10) call causes that function to stop, and then my signal handler takes over as desired. However, that doesn't solve my "critical" region problem above, because I can't tolerate having whatever statement encounters the signal fail.
I need some sort of signal handler that will just completely ignore SIGINT and SIGQUIT.
The Fedora/RH command "yum" is written is Python and does basically exactly what I want. If you do a ^C while it's installing anything, it will print a message like "Press ^C within two seconds to force kill." Otherwise, the ^C is ignored. I don't really care about the two second warning since my program completes in a fraction of a second.
Could someone help me implement a signal handler for CPython 2.3 that doesn't cause the current statement/function to cancel before the signal is ignored?
As always, thanks in advance.
Edit: After S.Lott's answer, I've decided to abandon the signal module.
I'm just going to go back to try: except: blocks. Looking at my code, there are two things that happen for each critical region that cannot be aborted: overwriting the file with file.tmp, and removing the lock once finished (or other tools will be unable to modify the file until the lock is manually removed). I've put each of those in its own function inside a try: block, and the except: simply calls the function again. That way the function will just re-call itself in the event of KeyboardInterrupt or EOFError, until the critical code is completed.
I don't think that I can get into too much trouble, since I'm only catching user-provided exit commands, and even then, only for two to three lines of code. Theoretically, if those exceptions could be raised fast enough, I suppose I could get the "maximum recursion depth exceeded" error, but that seems far-fetched.
Any other concerns?
Pseudo-code:
import os
import shutil

def criticalRemoveLock(file):
    try:
        if os.path.isfile(file):
            os.remove(file)
        else:
            return True
    except (KeyboardInterrupt, EOFError):
        return criticalRemoveLock(file)

def criticalOverwrite(tmp, file):
    try:
        if os.path.isfile(tmp):
            shutil.copy2(tmp, file)
            os.remove(tmp)
        else:
            return True
    except (KeyboardInterrupt, EOFError):
        return criticalOverwrite(tmp, file)
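If the recursion worries you, the same retry idea can be written as a plain loop, which sidesteps the recursion-depth question entirely; a sketch for the lock-removal case:

import os

def criticalRemoveLock(lockfile):
    while True:
        try:
            if os.path.isfile(lockfile):
                os.remove(lockfile)
            return True
        except (KeyboardInterrupt, EOFError):
            pass   # swallow the interrupt and retry until the removal succeeds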
There is no real way to make your script really safe. Of course you can ignore signals and catch a keyboard interrupt using try: except:, but it is up to your application to be idempotent against such interrupts, and it must be able to resume operations after dealing with an interrupt from some kind of savepoint.
The only thing that you can really do is to work on temporary files (not the original files) and move them into the final destination after doing the work. I think such file operations are supposed to be "atomic" from the filesystem perspective. Otherwise, in case of an interrupt, restart your processing from the start with clean data.
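For the temp-file-and-move part, a minimal sketch (tempfile.mkstemp and os.rename both exist back to Python 2.3, which the question targets; the rename is atomic on POSIX as long as both paths are on the same filesystem):

import os
import tempfile

def atomic_overwrite(path, contents):
    # write to a temp file in the same directory, so the final rename stays on one filesystem
    dir_name = os.path.dirname(path) or '.'
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    os.write(fd, contents)
    os.close(fd)
    # mkstemp creates the file with restrictive permissions; adjust mode/ownership before the rename if needed
    os.rename(tmp_path, path)   # an interrupt leaves either the old file or the new one, never a half-written file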
