pygtk textview getbuffer and write at the same time - python

I am trying to make a program (in Python) that, as I type, writes to a file and opens in a certain window that I have already created. I have looked all around for a viable solution, but it would seem that multi-threading may be the only option.
I was hoping that when the autorun option is "activated" it will:
while 1:
    wbuffer = textview.get_buffer()
    text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
    openfile = open(filename, "w")
    openfile.write(text)
    openfile.close()
I am using PyGTK and have a textview window, but when I get the buffer in this loop the program hangs forever.
I am thinking that I need to multi-thread it and use a queue, so one thread writes the buffer to the file while the other queues up fresh contents.
my source is here. (I think the statement is at line 177.)
any help is much appreciated. :)
and here is the function:
def autorun(save):
    filename = None
    chooser = gtk.FileChooserDialog("Save File...", None,
                                    gtk.FILE_CHOOSER_ACTION_SAVE,
                                    (gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL,
                                     gtk.STOCK_SAVE, gtk.RESPONSE_OK))
    response = chooser.run()
    if response == gtk.RESPONSE_OK:
        filename = chooser.get_filename()
    filen = filename
    addr = (filename)
    addressbar.set_text("file://" + filename)
    web.open(addr)
    chooser.destroy()
    wbuffer = textview.get_buffer()
    while 1:
        text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
        time.sleep(1)
        openfile = open(filename, "w")
        openfile.write(text)
        openfile.close()

Though it is not easy to see exactly what your GTK code not included here is doing, the main problem is that control needs to be returned to the GTK main loop; otherwise the program will hang.
So if you have a long-running task (like this eternal loop), you need to thread it. The problem is that you need the thread to exit nicely when the main program quits, so you'll have to redesign a bit around that. Also, threading with GTK needs to be initialized correctly (look here).
However, I don't think you need threading. Instead, you could connect the changed signal of your TextBuffer to a function that writes the buffer to the target file (if the user has put the program in autorun mode). A problem arises if the buffer gets large or the program slow, in which case you should consider threading the callback of the changed signal. This solution requires making sure save requests don't get stacked on top of each other because the user types faster than the computer saves; that takes some design thought. A minimal sketch of the signal approach is shown below.
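The sketch assumes the textview and filename from the question; the autorun_enabled flag is hypothetical, standing in for however you track autorun mode:

def on_buffer_changed(wbuffer):
    # skip saving unless the user has turned autorun mode on
    if not autorun_enabled:
        return
    text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
    # write the whole buffer out on every edit
    with open(filename, "w") as f:
        f.write(text)

textview.get_buffer().connect("changed", on_buffer_changed)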
So, finally, the easier solution: you may not want the buffer saved on every key press, in which case you could run the save function (which could look like your first code block without the loop) on a timeout instead. Just don't make the timeout too short.
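A sketch of that timeout variant using PyGTK's gobject.timeout_add, again assuming the textview and filename from the question:

import gobject

def save_buffer():
    wbuffer = textview.get_buffer()
    text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
    with open(filename, "w") as f:
        f.write(text)
    return True  # returning True keeps the timeout scheduled

# save once per second instead of on every keystroke
gobject.timeout_add(1000, save_buffer)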

Related

is it possible to pass data from one python program to another python program? [duplicate]

Is it possible -- other than by using something like a .txt/dummy file -- to pass a value from one program to another?
I have a program that uses a .txt file to pass a starting value to another program. I update the value in the file between starting each instance (I run it ten times, essentially simultaneously). Doing this is fine, but I would like to have the 'child' program report back to the 'mother' program when it is finished, and also report back which files it found to download.
Is it possible to do this without using eleven files (one for each instance of the 'child' to 'mother' reporting, and one for the 'mother' to 'child')? I am talking about completely separate programs, not classes or functions or anything like that.
To operate efficiently, and not be waiting around for hours for everything to complete, I need the 'child' program to run ten times and get things done MUCH faster. Thus I run the child program ten times and give each instance a separate range to check through.
Both programs run fine, but I would like to get them to run/report back and forth with each other, and hopefully not use file 'transmission' to accomplish the task, especially on the child-to-mother side of the data transfer.
'Mother' program...currently
import os
import sys
import subprocess
import time

os.chdir('/media/')
# find highest download video
Hival = open("Highest.txt", "r")
Histr = Hival.read()
Hival.close()
HiNext = str(int(Histr) + 1)
# setup download #1
NextVal = open("NextVal.txt", "w")
NextVal.write(HiNext)
NextVal.close()
# call download #1
procs = []
proc = subprocess.Popen(['python', 'test.py'])
procs.append(proc)
time.sleep(2)
# setup downloads #2-11
Histr2 = int(Histr) / 10000
Histr2 = Histr2 + 1
for i in range(10):
    Hiint = str(Histr2) + "0000"
    NextVal = open("NextVal.txt", "w")
    NextVal.write(Hiint)
    NextVal.close()
    proc = subprocess.Popen(['python', 'test.py'])
    procs.append(proc)
    time.sleep(2)
    Histr2 = Histr2 + 1
for proc in procs:
    proc.wait()
'Child' program
import urllib
import os
from Tkinter import *
import time

root = Tk()
root.title("Audiodownloader")
root.geometry("200x200")
app = Frame(root)
app.grid()
os.chdir('/media/')
Fileval = open('NextVal.txt', 'r')
Fileupdate = Fileval.read()
Fileval.close()
Fileupdate = int(Fileupdate)
Filect = Fileupdate / 10000
Filect2 = str(Filect) + "0009"
Filecount = int(Filect2)
while Fileupdate <= Filecount:
    root.title(Fileupdate)
    url = 'http://www.yourfavoritewebsite.com/audio/encoded/' + str(Fileupdate) + '.mp3'
    urllib.urlretrieve(url, str(Fileupdate) + '.mp3')
    statinfo = os.stat(str(Fileupdate) + '.mp3')
    if statinfo.st_size < 10000L:
        os.remove(str(Fileupdate) + '.mp3')
    time.sleep(.01)
    Fileupdate = Fileupdate + 1
    root.update_idletasks()
I'm trying to convert the original VB6 program over to Linux and make it much easier to use at the same time, hence the missing .mainloop call. This was my first real attempt at anything in Python at all, hence the lack of defs or classes. I'm trying to come back and finish this up after 1.5 months of doing nothing with it, mostly due to not knowing how. From a little research a while ago I found that this is WAY over my head. I haven't ever done anything with threads/sockets/client/server interaction, so I'm purely an idiot in this case. Google anything on it and I just get brought right back here to Stack Overflow.
Yes, I want 10 copies of the program running at the same time, to save time. I could do without the GUI interface if it were possible for the program to report back to 'mother', so the mother could print on the screen the current value being searched, and so the child could report back when it's finished and whether it downloaded any files successfully (versus downloaded and then erased for being too small). I would use the successful-download information to update Highest.txt for the next time the program is run.
I think this may clarify things MUCH better... that, or I don't understand the nature of using server/client interaction :) The only reason time.sleep is in the program was to try to make sure the files could get written before the next instance of the child program ran. I didn't know for sure what kind of timing issues I might run into, so I included those lines for safety.
This can be implemented with a simple client/server topology built on the multiprocessing library. Using your mother/child terminology:
server.py
from multiprocessing.connection import Listener

# client
def child(conn):
    while True:
        msg = conn.recv()
        # this just echoes the value back; replace with your custom logic
        conn.send(msg)

# server
def mother(address):
    serv = Listener(address)
    while True:
        client = serv.accept()
        child(client)

mother(('', 5000))
client.py
from multiprocessing.connection import Client
c = Client(('localhost', 5000))
c.send('hello')
print('Got:', c.recv())
c.send({'a': 123})
print('Got:', c.recv())
Run with
$ python server.py
$ python client.py
When you talk about using a txt file to pass information between programs, we first need to know what language you're using.
Within my knowledge of Java and Python, I think this is viable, though laborious depending on the amount of information you want to work with.
In Python, you can use the standard library for reading and writing txt files, and to schedule execution you can use apscheduler.
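For example, a minimal sketch of the scheduling idea with apscheduler (an assumption on my part: this uses the 3.x API, and the exact import path depends on the installed version; the file name and interval are illustrative):

from apscheduler.schedulers.blocking import BlockingScheduler

def check_file():
    # re-read the shared value every few seconds
    with open('NextVal.txt') as f:
        print(f.read().strip())

sched = BlockingScheduler()
sched.add_job(check_file, 'interval', seconds=5)
sched.start()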

Maya GUI freezes during subprocess call

I need to conform some Maya scenes we receive from a client to make them compatible with our pipeline. I'd like to batch that action, obviously, and I'm asked to launch the process from within Maya.
I've tried two methods already (quite similar to each other), which both work, but the problem is that the Maya GUI freezes until the process is complete. I'd like the process to be completely transparent to the user, so that they can keep working and only see a message when it's done.
Here's what I tried and found until now:
This tutorial here: http://www.toadstorm.com/blog/?p=136 led me to write this and save it:
# imports implied by the calls below (std = maya.standalone, mc = maya.cmds)
import sys
import maya.standalone as std
import maya.cmds as mc

filename = sys.argv[1]

def createSphere(filename):
    std.initialize(name='python')
    try:
        mc.file(filename, open=True, pmt=False, force=True)
        sphere = mc.polySphere()[0]
        mc.file(save=True, force=True)
        sys.stdout.write(sphere)
    except Exception, e:
        sys.stderr.write(str(e))
        sys.exit(-1)
    if float(mc.about(v=True)) >= 2016.0:
        std.uninitialize()

createSphere(filename)
Then I call it from within Maya this way:
import subprocess

mayapyPath = 'C:/Program Files/Autodesk/Maya2016/bin/mayapy.exe'
scriptPath = 'P:/WG_MAYA_Users/lbouet/scripts/createSphere.py'
filenames = ['file1', 'file2', 'file3', 'file4']

def massCreateSphere(filenames):
    for filename in filenames:
        maya = subprocess.Popen(mayapyPath + ' ' + scriptPath + ' ' + filename,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = maya.communicate()
        exitcode = maya.returncode
        if str(exitcode) != '0':
            print(err)
            print 'error opening file: %s' % (filename)
        else:
            print 'added sphere %s to %s' % (out, filename)

massCreateSphere(filenames)
It works fine but, like I said, freezes the Maya GUI until the process is over. And that's just for creating a sphere, so it's nowhere near all the actions I'll actually have to perform on the scenes.
I've also tried to run the first script via a .bat file calling mayabatch and running the script; same issue.
I found this post (Running list of cmd.exe commands from maya in Python), which seems to be exactly what I'm looking for, but I can't see how to adapt it to my situation.
From what I understand, the issue might come from calling Popen in a loop (i.e. multiple times), but I really can't see how to do it otherwise... I'm thinking maybe of saving the second script somewhere on disk too and calling that one from Maya?
In this case subprocess.communicate() will block until the child process is done, so it is not going to fix your problem on its own.
If you just want to kick off the processes and not wait for them to complete -- 'fire and forget' style -- you can just use threads, starting off a new thread for each process. However you'll have to be very careful about reporting back to the user -- if you try to touch the Maya scene or GUI from an outside thread you'll get mysterious, undebuggable errors. print() is usually ok but maya.cmds() is not. If you're only printing messages you can probably get away with maya.utils.executeDeferred() which is discussed in this question and in the docs.
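To illustrate, here is a rough sketch of the fire-and-forget approach under the question's assumptions (mayapyPath, scriptPath and filenames as defined above); one background thread per file, with the report marshalled back to the main thread via maya.utils.executeDeferred:

import sys
import threading
import subprocess
import maya.utils

def process_file(filename):
    # runs in a background thread, so the Maya GUI stays responsive
    proc = subprocess.Popen([mayapyPath, scriptPath, filename],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        msg = 'error opening file: %s' % filename
    else:
        msg = 'added sphere %s to %s' % (out, filename)
    # only print from the main thread; touching the scene here is unsafe
    maya.utils.executeDeferred(sys.stdout.write, msg + '\n')

for filename in filenames:
    threading.Thread(target=process_file, args=(filename,)).start()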

How do I restructure this code so Pickle 'Buffer Region' errors do not occur

I was wondering if anyone had any good solutions to the pickling error I am having at the moment. I am trying to set my code up to open several different processes in parallel, each with a fitting process to be display on a matplotlib canvas in real time. Within my main application, I have a button which activates this function:
def process_data(self):
    process_list = []
    for tab in self.tab_list:
        process_list.append(mp.Process(target=process_and_fit, args=(tab,)))
        process_list[-1].start()
        process_list[-1].join()
    return
As you may notice, a 'tab' (a PyQt4.QtGui.QTabWidget object) is passed to the function process_and_fit, and I have noticed that it cannot readily be pickled (link here).
However, I am not certain how to change the code to stop the widget being passed, since it needs to be called in the process_and_fit function indirectly. By indirectly I mean something like this (pseudocode again):
def process_and_fit(tab):  # this just sets up and starts the fitting process
    result = lmfit.Minimizer(residual, parameters, fcn_args=(tab,))
    result.prepare_fit()
    result.leastsq()

def residual(params, tab):
    residual_array = Y - model
    tab.refreshFigure()
    return residual_array

class tab(QtGui.QTabWidget):
    def __init__(self, parent, spectra):
        # stuff to initialize the tab widget and hold all of the
        # matplotlib lines and canvases

    # This just refreshes the GUI every time the parameters are fit
    # in the least-squares method
    def refreshFigure(self):
        self.line.set_data(self.spectra.X, self.spectra.model)
        self.plot.draw_artist(self.line)
        self.plot.figure.canvas.blit(self.plot.bbox)
Does anyone know how to get around this pickling error, given that the tab associated with a process should have only one set of data associated with it? I looked at Steven Bethard's approach, but I really didn't understand where to put the code or how to utilize it. (I am a chemical engineer, not a computer scientist, so there's a lot that I don't understand.)
Any help is greatly appreciated.
EDIT: I added the links in that I forgot about, as requested.
The main issue is that you can't make UI changes from a separate process from the main UI thread (the one that all of your Qt calls are in). You need to use a mp.Pipe or mp.Queue to communicate back to the main process.
def process_data(self):
    for tab in self.tab_list:
        consumer, producer = mp.Pipe()
        process_list.append(mp.Process(target=process_and_fit, args=(producer,)))
        process_list[-1].start()
        while True:
            message = consumer.recv()  # blocks
            if message == 'done':
                break
            # tab.spectra.X, tab.spectra.model = message
            tab.refreshFigure()
        process_list[-1].join()
    return

def process_and_fit(pipe_conn):
    ...
    pipe_conn.send('done')

def residual(params, pipe_conn):
    residual_array = Y - model
    pipe_conn.send('refresh')  # or replace 'refresh' with (X, model)
    return residual_array
One more thing to note: blocking on consumer.recv() will probably hang the GUI thread. There are plenty of resources to mitigate this; the question "subprocess Popen blocking PyQt GUI" will help, since you should probably switch to QThreads. (QThread: PySide, PyQt)
The advantage of using QThreads instead of Python threads is that with QThreads, since you're already in Qt's main event loop, you can have asynchronous (non-blocking) callbacks to update the UI.
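As a rough sketch of that QThread variant (assuming PyQt4 as in the question; the class and signal names here are hypothetical):

from PyQt4 import QtCore

class FitListener(QtCore.QThread):
    # emitted for every message the worker process sends back
    refresh = QtCore.pyqtSignal(object)

    def __init__(self, conn, parent=None):
        super(FitListener, self).__init__(parent)
        self.conn = conn

    def run(self):
        # this loop blocks in the worker thread, not the GUI thread
        while True:
            message = self.conn.recv()
            if message == 'done':
                break
            self.refresh.emit(message)

# usage, per tab: the connected slot runs in the GUI thread,
# so calling tab.refreshFigure() from it is safe
# listener = FitListener(consumer)
# listener.refresh.connect(lambda msg: tab.refreshFigure())
# listener.start()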

UNIX named PIPE end of file

I'm trying to use a UNIX named pipe to output statistics of a running service. I intend to provide an interface similar to /proc, where one can see live stats by catting a file.
I'm using code similar to this in my Python program:
while True:
    f = open('/tmp/readstatshere', 'w')
    f.write('some interesting stats\n')
    f.close()
/tmp/readstatshere is a named pipe created by mknod.
I then cat it to see the stats:
$ cat /tmp/readstatshere
some interesting stats
It works fine most of the time. However, if I cat the entry several times in quick succession, sometimes I get multiple lines of some interesting stats instead of one. Once or twice it has even gone into an infinite loop, printing that line forever until I killed it. The only fix I've got so far is to put a delay of, say, 500 ms after f.close() to prevent this issue.
I'd like to know why exactly this happens and if there is a better way of dealing with it.
Thanks in advance
A pipe is simply the wrong solution here. If you want to present a consistent snapshot of the internal state of your process, write that to a temporary file and then rename it to the "public" name. This will prevent all issues that can arise from other processes reading the state while you're updating it. Also, do NOT do that in a busy loop, but ideally in a thread that sleeps for at least one second between updates.
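A minimal sketch of that write-then-rename idea (path names are illustrative); os.rename is atomic on POSIX as long as source and destination are on the same filesystem, so readers always see a complete snapshot:

import os
import tempfile
import time

def publish_stats(text, public_path='/tmp/stats'):
    # write to a temp file in the same directory, then atomically
    # swap it into place under the public name
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(public_path))
    with os.fdopen(fd, 'w') as f:
        f.write(text)
    os.rename(tmp_path, public_path)

while True:
    publish_stats('some interesting stats\n')
    time.sleep(1)  # update once a second, not in a busy loop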
What about a UNIX socket instead of a pipe?
Then you can react to each connect by providing fresh data just in time.
The only downside is that you cannot cat the data; you'll have to create a new socket handle and connect() to the socket file.
MYSOCKETFILE = '/tmp/mysocket'

import socket
import os

try:
    os.unlink(MYSOCKETFILE)
except OSError:
    pass

s = socket.socket(socket.AF_UNIX)
s.bind(MYSOCKETFILE)
s.listen(10)

while True:
    s2, peeraddr = s.accept()
    s2.send('These are my actual data')
    s2.close()
Program querying this socket:
MYSOCKETFILE = '/tmp/mysocket'

import socket
import os

s = socket.socket(socket.AF_UNIX)
s.connect(MYSOCKETFILE)
while True:
    d = s.recv(100)
    if not d:
        break
    print d
s.close()
I think you should use FUSE.
It has Python bindings; see http://pypi.python.org/pypi/fuse-python/
This allows you to compose answers to questions formulated as POSIX filesystem system calls.
Don't write to an actual file. That's not what /proc does. Procfs presents a virtual (non-disk-backed) filesystem which produces the information you want on demand. You can do the same thing, but it'll be easier if it's not tied to the filesystem. Instead, just run a web service inside your Python program, and keep your statistics in memory. When a request comes in for the stats, formulate them into a nice string and return them. Most of the time you won't need to waste cycles updating a file which may not even be read before the next update.
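To sketch that idea with Python 2's standard library (the era of this thread; the port and the stats themselves are illustrative):

import threading
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

stats = {'requests_served': 0}  # hypothetical in-memory statistics

class StatsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # format the stats on demand; nothing is ever written to disk
        body = '\n'.join('%s: %s' % item for item in stats.items())
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(body + '\n')

server = HTTPServer(('127.0.0.1', 8000), StatsHandler)
t = threading.Thread(target=server.serve_forever)
t.daemon = True  # don't keep the process alive just for the stats server
t.start()
# read the stats with: curl http://127.0.0.1:8000/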
You need to unlink the pipe after you issue the close. I think this is because there is a race condition where the pipe can be opened for reading again before cat finishes and it thus sees more data and reads it out, leading to multiples of "some interesting stats."
Basically you want something like:
import os

while True:
    os.mkfifo(the_pipe)
    f = open(the_pipe, 'w')
    f.write('some interesting stats')
    f.close()
    os.unlink(the_pipe)
Update 1: call to mkfifo
Update 2: as noted in the comments, there is a race condition in this code as well with multiple consumers.

How to poll a file in /sys

I am stuck reading a file in /sys/ which contains the light intensity in Lux of the ambient light sensor on my Nokia N900 phone.
See thread on talk.maemo.org here
I tried to use pyinotify to poll the file, but this seems somehow wrong to me, since the file only ever triggers process_IN_OPEN, process_IN_ACCESS and process_IN_CLOSE_NOWRITE.
I basically want to get the changes ASAP, and if something changed, trigger an event, execute a class...
Here's the code I tried, which works, but not as I expected (I was hoping for process_IN_MODIFY to be triggered):
#!/usr/bin/env python
import os, time, pyinotify

ambient_sensor = '/sys/class/i2c-adapter/i2c-2/2-0029/lux'

wm = pyinotify.WatchManager()  # Watch Manager
mask = pyinotify.ALL_EVENTS

def action(self, the_event):
    value = open(the_event.pathname, 'r').read().strip()
    return value

class EventHandler(pyinotify.ProcessEvent):
    ...
    def process_IN_MODIFY(self, event):
        print "MODIFY event:", action(self, event)
    ...

#log.setLevel(10)
notifier = pyinotify.ThreadedNotifier(wm, EventHandler())
notifier.start()

wdd = wm.add_watch(ambient_sensor, mask)

time.sleep(5)
notifier.stop()
Update 1:
Hmm, all I came up with, not having a clue whether there is a special mechanism, is the following:

f = open('/sys/class/i2c-adapter/i2c-2/2-0029/lux')
while True:
    value = f.read()
    print value
    f.seek(0)

This, wrapped in its own thread, could do the trick, but does anyone have a smarter, less CPU-hogging and faster way to get the latest value?
Since the /sys file is a pseudo-file that just presents a view on an underlying, volatile operating-system value, it makes sense that a modify event would never be raised. Since the file is "modified" from below, it doesn't follow regular file-system semantics.
If a modify event is never raised, using a package like pyinotify isn't going to get you anywhere. 'Twould be better to look for a platform-specific mechanism.
Response to Update 1:
Since the N900 maemo runtime supports GFileMonitor, you'd do well to check if it can provide the asynchronous event that you desire.
Busy waiting - as I gather you know - is wasteful. On a phone it can really drain a battery. You should at least sleep in your busy loop.
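For instance, a sketch with the gio bindings (an assumption: that the gio module is available on the device, and that gio reports change events for this particular /sys file at all; that is worth verifying on the N900 itself):

import gio
import gtk

def on_changed(monitor, gfile, other_file, event_type):
    # fires from the GLib main loop, not from a busy loop
    if event_type == gio.FILE_MONITOR_EVENT_CHANGED:
        print open(gfile.get_path()).read().strip()

lux = gio.File('/sys/class/i2c-adapter/i2c-2/2-0029/lux')
monitor = lux.monitor_file()
monitor.connect('changed', on_changed)
gtk.main()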
