Yes, there are many other questions here on this topic. I have looked at the responses, but I have not seen any that give a useful solution.
I have the problem in its simplest form:
import os, time
hfile = "Positions.htm"
hf = open(hfile, "w")
hf.write(str(buf))
hf.close
time.sleep(2) # give it time to catch up
os.system(hfile) # run the html file in the default browser
I get this error message: "The process cannot access the file because it is being used by another process". The file is referenced nowhere else in the program.
No other process is using it, since I can access it without error from any other program, even if I run os.system(file) from the python console.
There's no point in using unlocker, because as soon as I leave the program, I can open the html file in the browser with no complaints from the system.
It looks like 'close' is not properly releasing the file.
I run programs this way from Perl all the time, with no problem other than needing the 1 or 2 second delay.
I'm using Python 3.4 on Win7.
Any suggestions?
You're not actually calling close(); hf.close just references the method without invoking it. It needs to be:
import os, time
hfile = "Positions.htm"
hf = open(hfile, "w")
hf.write(str(buf))
hf.close() # note the parens
time.sleep(2) # give it time to catch up
os.system(hfile) # run the html file in the default browser
However to avoid problems like this you should use a context manager:
import os, time
hfile = "Positions.htm"
with open(hfile, 'w') as hf:
    hf.write(str(buf))
os.system(hfile) # run the html file in the default browser
The context manager will handle the closing of the file automatically.
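As a side note, a more portable way to open the page than os.system (which goes through the shell) is the standard library's webbrowser module; a minimal sketch, reusing hfile from above:

import os, webbrowser
webbrowser.open('file://' + os.path.realpath(hfile)) # open the page in the default browser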
Is it possible -- other than by using something like a .txt/dummy file -- to pass a value from one program to another?
I have a program that uses a .txt file to pass a starting value to another program. I update the value in the file between each start of the program (which I run ten times, essentially simultaneously). Doing this is fine, but I would like to have the 'child' program report back to the 'mother' program when it is finished, and also report back what files it found to download.
Is it possible to do this without using eleven files to do it (that's one for each instance of the 'child' to 'mother' reporting, and one file for the 'mother' to 'child')? I am talking about completely separate programs, not classes or functions or anything like that.
To operate efficiently, and not be waiting around for hours for everything to complete, I need the 'child' program to run ten times and get things done MUCH faster. Thus I run the child program ten times, giving each instance a separate range to check through.
Both programs run fine, but I would like to get them to run and report back and forth with each other, and hopefully not use file 'transmission' to accomplish the task, especially on the child-to-mother side of the data transfer.
'Mother' program...currently
import os
import sys
import subprocess
import time
os.chdir('/media/')
#find highest download video
Hival = open("Highest.txt", "r")
Histr = Hival.read()
Hival.close()
HiNext = str(int(Histr)+1)
#setup download #1
NextVal = open("NextVal.txt","w")
NextVal.write(HiNext)
NextVal.close()
#call download #1
procs=[]
proc=subprocess.Popen(['python','test.py'])
procs.append(proc)
time.sleep(2)
#setup download #2-11
Histr2 = int(Histr)/10000
Histr2 = Histr2 + 1
for i in range(10):
    Hiint = str(Histr2)+"0000"
    NextVal = open("NextVal.txt","w")
    NextVal.write(Hiint)
    NextVal.close()
    proc=subprocess.Popen(['python','test.py'])
    procs.append(proc)
    time.sleep(2)
    Histr2 = Histr2 + 1
for proc in procs:
    proc.wait()
'Child' program
import urllib
import os
from Tkinter import *
import time
root = Tk()
root.title("Audiodownloader")
root.geometry("200x200")
app = Frame(root)
app.grid()
os.chdir('/media/')
Fileval = open('NextVal.txt','r')
Fileupdate = Fileval.read()
Fileval.close()
Fileupdate = int(Fileupdate)
Filect = Fileupdate/10000
Filect2 = str(Filect)+"0009"
Filecount = int(Filect2)
while Fileupdate <= Filecount:
    root.title(Fileupdate)
    url = 'http://www.yourfavoritewebsite.com/audio/encoded/'+str(Fileupdate)+'.mp3'
    urllib.urlretrieve(url,str(Fileupdate)+'.mp3')
    statinfo = os.stat(str(Fileupdate)+'.mp3')
    if statinfo.st_size < 10000L:
        os.remove(str(Fileupdate)+'.mp3')
    time.sleep(.01)
    Fileupdate = Fileupdate+1
    root.update_idletasks()
I'm trying to convert the original VB6 program over to Linux and make it much easier to use at the same time; hence .mainloop() is missing. This was my first real attempt at anything in Python at all, hence the lack of defs or classes. I'm trying to come back and finish this up after 1.5 months of doing nothing with it, mostly due to not knowing how. In researching a little while ago I found this is WAY over my head. I haven't ever done anything with threads/sockets/client/server interaction, so I'm purely an idiot in this case. Google anything on it and I just get brought right back here to Stack Overflow.
Yes, I want 10 copies of the program running at the same time, to save time. I could do without the GUI interface if it were possible for the program to report back to 'mother', so the mother could print on the screen the current value being searched, as well as have the child report back when it's finished and whether it downloaded any files successfully (versus downloaded and then erased for being too small). I would use the successful-download information to update Highest.txt for the next run.
I think this may clarify things MUCH better... that, or I don't understand the nature of using server/client interaction :) The only reason time.sleep is in the program is to try to make sure the files get written before the next instance of the child program runs. I didn't know what kind of timing issues I might run into, so I included those lines for safety.
This can be implemented with a simple client/server topology using the multiprocessing library. Using your mother/child terminology:
server.py
from multiprocessing.connection import Listener

# handles one client connection
def child(conn):
    while True:
        msg = conn.recv()
        # this just echoes the value back; replace with your custom logic
        conn.send(msg)

# server
def mother(address):
    serv = Listener(address)
    while True:
        client = serv.accept()
        child(client)

mother(('', 5000))
client.py
from multiprocessing.connection import Client
c = Client(('localhost', 5000))
c.send('hello')
print('Got:', c.recv())
c.send({'a': 123})
print('Got:', c.recv())
Run the server in one terminal, then the client in another:
$ python server.py
$ python client.py
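To map this onto the mother/child programs in the question, here is a hedged sketch of one way the reporting could work: each child connects when it finishes and sends a dict of its results, and the mother collects one report per child. NUM_CHILDREN, the dict keys, and successful_files are illustrative names, not from the original code, and port 6000 is chosen so it does not clash with the echo example above.

Collector added to the mother:

from multiprocessing.connection import Listener

NUM_CHILDREN = 10
serv = Listener(('localhost', 6000))
reports = []
for _ in range(NUM_CHILDREN):
    conn = serv.accept()
    reports.append(conn.recv()) # e.g. {'start': 20000, 'downloaded': ['20003.mp3']}
    conn.close()
print('all children finished:', reports)

And at the end of the child (test.py):

from multiprocessing.connection import Client

conn = Client(('localhost', 6000))
conn.send({'start': 20000, 'downloaded': successful_files}) # successful_files: a list the child keeps as it downloads
conn.close()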
When you talk about using a txt file to pass information between programs, we first need to know what language you're using.
In my experience, this is viable in both Java and Python, though laborious depending on the amount of information you want to pass.
In Python, you can use the standard library for reading and writing the txt files, and to schedule execution you can use APScheduler.
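For what it's worth, a small sketch of that suggestion, assuming APScheduler 3.x is installed (pip install apscheduler); the five-second interval is arbitrary, and NextVal.txt is the file from the question above:

from apscheduler.schedulers.blocking import BlockingScheduler

def check_file():
    # re-read the shared txt file and report its current value
    with open('NextVal.txt') as f:
        print('current value:', f.read().strip())

sched = BlockingScheduler()
sched.add_job(check_file, 'interval', seconds=5) # run check_file every 5 seconds
sched.start()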
As an extension to a previous post that unfortunately seems to have died a death:
select.select issue for sockets and pipes. Since this post I have been trying various things, to no avail, and I wanted to see if anyone has any idea where I am going wrong. I'm using select() to identify when data is present on either a pipe or a socket. The socket seems to be working fine, but the pipe is proving problematic.
I have set up the pipe as follows:
pipe_name = 'testpipe'
if not os.path.exists(pipe_name):
    os.mkfifo(pipe_name)
and the pipe read is:
pipein = open(pipe_name, 'r')
line = pipein.readline()[:-1]
pipein.close()
It works perfectly as a stand-alone piece of code, but when I try to link it to the select.select function it fails:
inputdata,outputdata,exceptions = select.select([tcpCliSock,xxxx],[],[])
I have tried entering 'pipe_name', 'testpipe' and 'pipein' in the inputdata argument but I always get a 'not defined' error. Looking at various other posts I thought it might be because the pipe does not have an object identifier so I tried:
pipein = os.open(pipe_name, 'r')
fo = pipein.fileno()
and put 'fo' in the select.select arguments, but got 'TypeError: an integer is required'. I have also had an 'Error 9: Bad file descriptor' when using this configuration of 'fo'. Any ideas what I have done wrong would be appreciated.
EDITED CODE:
I have managed to find a way to resolve it, although I'm not sure it is particularly neat; I would be interested in any comments.
Revised pipe setup:
pipe_name = 'testpipe'
if not os.path.exists(pipe_name):
    os.mkfifo(pipe_name) # the FIFO must exist before it can be opened
pipein = os.open(pipe_name, os.O_RDONLY)
Pipe Read:
def readPipe():
    line = os.read(pipein, 1094)
    if not line:
        return
    else:
        print line
Main loop to monitor events:
inputdata, outputdata, exceptions = select.select([tcpCliSock, pipein], [], [])
if tcpCliSock in inputdata:
    readTCP() # function and declarations not shown
if pipein in inputdata:
    readPipe()
It all works well; my only problem now is getting the code to read from the socket before the select event monitoring gets underway. As soon as the connection is made to the TCP server, a command is sent via the socket, and I seem to have to wait until the pipe has been read for the first time before this command comes through.
According to the docs, select needs integer file descriptors (such as those returned by os.open) or objects with a fileno() method; a bare string like pipe_name will not work. So, with pipein from os.open, you should use select.select([pipein], [], []) as your call.
Alternatively, you can use epoll if you are on a Linux system:

import select

poller = select.epoll()
poller.register(pipein, select.EPOLLIN) # pipein is the integer fd from os.open
events = poller.poll()
for fileno, event in events:
    if event & select.EPOLLIN:
        print "We can read from", fileno
I have a Python HTTP server; on a certain GET request, a file is created and then returned as the response. Creating the file, or modifying (updating) an existing one, might take a second.
Hence, I cannot return the file as the response immediately. How do I approach such a problem? Currently I have a solution like this:
while not os.path.isfile('myfile'):
    time.sleep(0.1)
return myfile
This seems very inconvenient; is there a better way?
A simple notification would do, but I don't have control over the process which creates/updates the files.
You could use Watchdog for a nicer way to watch the file system?
Something like this will remove the os call:
while updating:
    time.sleep(0.1)
return myfile

...

def updateFile():
    global updating # without this, the assignment below would just create a local variable
    # ... update the file here ...
    updating = False
Implementing blocking I/O operations in synchronous HTTP requests is a bad approach. If many people run the same procedure simultaneously, you may soon run out of threads (if there is a limited thread pool). I'd do the following:
A client requests the file-creation URI. A file-generating procedure is started in a background process (some asynchronous task system), and the user gets a file id/name in the HTTP response. Next, the client makes AJAX calls every once in a while (polling) to check whether the file has been created/modified (a separate serve-file/check-if-exists URI). When the file is finally created, the user is redirected (js window.location) to the file-serving URI.
This approach will require a bit more work, but eventually it will pay off.
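For concreteness, here is a minimal Python 3 sketch of that flow using only the standard library; the URI names (/create, /status/..., /file/...) and the sleep standing in for the slow generation step are assumptions for illustration, not a prescribed API:

import threading
import time
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

tasks = {} # file_id -> 'pending' or 'done'

def generate_file(file_id):
    time.sleep(2) # stand-in for the slow creation/update step
    with open(file_id + '.txt', 'w') as f:
        f.write('generated content\n')
    tasks[file_id] = 'done'

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/create':
            file_id = uuid.uuid4().hex
            tasks[file_id] = 'pending'
            threading.Thread(target=generate_file, args=(file_id,)).start()
            self.reply(202, file_id) # the client polls /status/<id> with this
        elif self.path.startswith('/status/'):
            self.reply(200, tasks.get(self.path.rsplit('/', 1)[-1], 'unknown'))
        elif self.path.startswith('/file/'):
            file_id = self.path.rsplit('/', 1)[-1]
            if tasks.get(file_id) == 'done':
                with open(file_id + '.txt') as f:
                    self.reply(200, f.read())
            else:
                self.reply(404, 'not ready')

    def reply(self, code, text):
        self.send_response(code)
        self.end_headers()
        self.wfile.write(text.encode())

HTTPServer(('', 8000), Handler).serve_forever()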
You can try using os.path.getmtime: check the modification time of the file and return it only if it was modified less than 1 second ago. I also suggest you make only a limited number of tries, or you will be stuck in an infinite loop if the file never gets created/modified. And, as @Krzysztof Rosiński pointed out, you should probably think about doing it in a non-blocking way.
import os
from datetime import datetime
import time
for i in range(10):
    try:
        dif = datetime.now() - datetime.fromtimestamp(os.path.getmtime(file_path))
        if dif.total_seconds() < 1:
            return file
    except OSError:
        time.sleep(0.1)
I am trying to make a program (in Python) that, as I type, writes what I write to a file and opens it in a certain window that I have already created. I have looked all around for a viable solution, but it would seem that multi-threading may be the only option.
I was hoping that when the autorun option is "activated" it would run:
while 1:
    wbuffer = textview.get_buffer()
    text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
    openfile = open(filename,"w")
    openfile.write(text)
    openfile.close()
I am using pygtk and have a textview window, but when I get the buffer it sits forever.
I am thinking that I need to multi-thread it and queue it so one thread will be writing the buffer while it is being queued.
my source is here. (I think the statement is at line 177.)
any help is much appreciated. :)
and here is the function:
def autorun(save):
    filename = None
    chooser = gtk.FileChooserDialog("Save File...", None,
                                    gtk.FILE_CHOOSER_ACTION_SAVE,
                                    (gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL,
                                     gtk.STOCK_SAVE, gtk.RESPONSE_OK))
    response = chooser.run()
    if response == gtk.RESPONSE_OK: filename = chooser.get_filename()
    filen = filename
    addr = (filename)
    addressbar.set_text("file://" + filename)
    web.open(addr)
    chooser.destroy()
    wbuffer = textview.get_buffer()
    while 1:
        text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
        time.sleep(1)
        openfile = open(filename,"w")
        openfile.write(text)
        openfile.close()
Though it's not easy to see exactly what your GTK code not included here is doing, the main problem is that control needs to be returned to the GTK main loop; otherwise the program will hang.
So if you have a long-running process (like this eternal one here), you need to thread it. The problem is that you need the thread to exit nicely when the main program quits, so you'll have to redesign a bit around that. Also, threading with GTK needs to be initialized correctly (look here).
However, I don't think you need threading. Instead, you could connect the changed signal of your TextBuffer to a function that writes the buffer to the target file (if the user has put the program in autorun mode). A problem with this arises if the buffer gets large or the program slow, in which case you should consider threading the callback of the changed signal. This solution also requires making sure save requests don't pile up on top of each other because the user types faster than the computer saves. Takes some design thought.
So, finally, the easier solution: you may not want the buffer saved on every button press. In that case, you could run the save function (which could look like your first code block without the loop) on a timeout instead; both variants are sketched below. Just don't make the timeout too short.
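For concreteness, a minimal sketch of both variants, assuming the textview and filename names from the question and PyGTK's gobject module; save_on_change and autosave are made-up names:

import gobject

# variant 1: save whenever the buffer changes
def save_on_change(buf):
    text = buf.get_text(buf.get_start_iter(), buf.get_end_iter())
    openfile = open(filename, "w")
    openfile.write(text)
    openfile.close()

wbuffer = textview.get_buffer()
wbuffer.connect("changed", save_on_change)

# variant 2: save on a timeout instead, every 2 seconds
def autosave():
    save_on_change(wbuffer)
    return True # returning True keeps the timeout alive

gobject.timeout_add(2000, autosave)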
I'm trying to use a Unix named pipe to output statistics from a running service. I intend to provide an interface similar to /proc, where one can see live stats by catting a file.
I'm using a code similar to this in my python code:
while True:
    f = open('/tmp/readstatshere', 'w')
    f.write('some interesting stats\n')
    f.close()
/tmp/readstatshere is a named pipe created by mknod.
I then cat it to see the stats:
$ cat /tmp/readstatshere
some interesting stats
It works fine most of the time. However, if I cat the entry several times in quick succession, sometimes I get multiple lines of some interesting stats instead of one. Once or twice it has even gone into an infinite loop, printing that line forever until I killed it. The only fix I have so far is to put a delay of, say, 500 ms after f.close() to prevent the issue.
I'd like to know why exactly this happens and if there is a better way of dealing with it.
Thanks in advance
A pipe is simply the wrong solution here. If you want to present a consistent snapshot of the internal state of your process, write that to a temporary file and then rename it to the "public" name. This will prevent all issues that can arise from other processes reading the state while you're updating it. Also, do NOT do that in a busy loop, but ideally in a thread that sleeps for at least one second between updates.
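A minimal sketch of that write-then-rename approach; the file names and the one-second interval are illustrative:

import os
import time

while True:
    with open('/tmp/stats.tmp', 'w') as f:
        f.write('some interesting stats\n')
    os.rename('/tmp/stats.tmp', '/tmp/stats') # atomic on POSIX when both paths are on the same filesystem
    time.sleep(1)

Readers who cat /tmp/stats always see a complete, consistent snapshot, because the rename replaces the file in one step.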
What about a UNIX socket instead of a pipe?
In this case, you can react on each connect by providing fresh data just in time.
The only downside is that you cannot cat the data; you'll have to create a new socket handle and connect() to the socket file.
MYSOCKETFILE = '/tmp/mysocket'
import socket
import os
try:
    os.unlink(MYSOCKETFILE)
except OSError:
    pass

s = socket.socket(socket.AF_UNIX)
s.bind(MYSOCKETFILE)
s.listen(10)
while True:
    s2, peeraddr = s.accept()
    s2.send('These are my actual data')
    s2.close()
Program querying this socket:
MYSOCKETFILE = '/tmp/mysocket'
import socket
import os
s = socket.socket(socket.AF_UNIX)
s.connect(MYSOCKETFILE)
while True:
    d = s.recv(100)
    if not d: break
    print d
s.close()
I think you should use FUSE.
It has Python bindings; see http://pypi.python.org/pypi/fuse-python/.
This allows you to compose answers to requests formulated as POSIX filesystem system calls.
Don't write to an actual file. That's not what /proc does. Procfs presents a virtual (non-disk-backed) filesystem which produces the information you want on demand. You can do the same thing, but it'll be easier if it's not tied to the filesystem. Instead, just run a web service inside your Python program, and keep your statistics in memory. When a request comes in for the stats, formulate them into a nice string and return them. Most of the time you won't need to waste cycles updating a file which may not even be read before the next update.
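A minimal Python 3 sketch of that idea using only the standard library; the port and the contents of the stats dict are illustrative:

from http.server import BaseHTTPRequestHandler, HTTPServer

stats = {'requests_served': 0} # kept up to date in memory by the rest of your service

class StatsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = '\n'.join('%s: %s' % kv for kv in stats.items()).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('', 8000), StatsHandler).serve_forever()

Instead of cat /tmp/readstatshere, you would then run curl http://localhost:8000/.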
You need to unlink the pipe after you issue the close. I think this is because there is a race condition where the pipe can be opened for reading again before cat finishes and it thus sees more data and reads it out, leading to multiples of "some interesting stats."
Basically you want something like:
import os

while True:
    os.mkfifo(the_pipe)
    f = open(the_pipe, 'w')
    f.write('some interesting stats')
    f.close()
    os.unlink(the_pipe)
Update 1: call to mkfifo
Update 2: as noted in the comments, there is a race condition in this code as well with multiple consumers.