I want to pick up changes to my project's library file, but I don't want the main process to stop running.
For example, I have b.py:
import a
import time

def main():
    for i in range(1000):
        time.sleep(5)
        print i
        a.abc()

main()
a.py is:

def abc():
    print 'abc'
I want to modify the abc function in a.py to:

def abc():
    print '123'
When I finish modifying the abc function in a.py, I want the change to take effect at once in the main process running b.py. I removed the a.pyc file, but it still prints abc, not 123. How can I make it print 123 without stopping the main process?
I can't change the main process, because it is always running.
You might want to save your module's hash and, after every execution, check whether it has changed, reloading the module if so:
import hashlib
import time

import a

def main():
    with open('a.py', 'rb') as f:
        module_hashsum = hashlib.md5(f.read()).hexdigest()
    for i in range(1000):
        time.sleep(5)
        print i
        a.abc()
        with open('a.py', 'rb') as f:
            hashsum_temp = hashlib.md5(f.read()).hexdigest()
        if module_hashsum != hashsum_temp:
            module_hashsum = hashsum_temp
            reload(a)

main()
Or just reload it after every execution.
You might as well do some fancier checks on the file's mtime, like Django does. But that would trigger a reload even if you didn't change anything in the file and only saved it (e.g. :wq in Vim).
I don't think that's necessary, though, as computing the hash is fast.
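For completeness, here is a minimal sketch of that mtime-based variant (illustrative only; as noted, it reloads on any save, even when nothing changed):

import os
import time

import a

def main():
    last_mtime = os.path.getmtime('a.py')
    for i in range(1000):
        time.sleep(5)
        print i
        a.abc()
        mtime = os.path.getmtime('a.py')
        if mtime != last_mtime:  # the file was saved since the last check
            last_mtime = mtime
            reload(a)

main()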
I want to remote control a python application which uses urwid for the user interface.
My idea was to create a file, pass its name as a command line argument to the application, and have the application read from that file whenever I write to it.
Urwid's event loop has a method watch_file(fd, callback).
This method is described as "Call callback() when fd has some data to read."
This sounds exactly like what I want to have, but it causes an infinite loop.
callback is executed as often as possible, despite the fact that the file is empty.
Even if I delete the file, callback is still called.
#!/usr/bin/env python3
import urwid
import atexit

def onkeypress(key, size=None):
    if key == 'q':
        raise urwid.ExitMainLoop()
    text.set_text(key)

def onfilechange():
    text.set_text(cmdfile.read())
    # clear file so that I don't read already executed commands again
    # and don't run into an infinite loop - but I am doing that anyway
    with open(cmdfile.name, 'w') as f:
        pass

cmdfile = open('/tmp/cmd', 'rt')
atexit.register(cmdfile.close)

text = urwid.Text("hello world")
filler = urwid.Filler(text)
loop = urwid.MainLoop(filler, unhandled_input=onkeypress)
loop.watch_file(cmdfile, onfilechange)

if __name__ == '__main__':
    loop.run()
(My initial idea was to open the file only for reading instead of keeping it open all the time, but fd has to be a file object, not a path.)
Urwid offers several different event loops.
By default, SelectEventLoop is used.
GLibEventLoop shows the same behaviour: it runs into an infinite loop.
AsyncioEventLoop instead throws an "operation not permitted" exception.
TwistedEventLoop and TornadoEventLoop would need additional software to be installed.
I have considered using the independent watchdog library, but it seems that accessing the user interface from another thread would require writing a new event loop; see this stack overflow question.
The answer to that question recommends polling instead which I would prefer to avoid.
Since urwid specifically provides a method to watch a file, I can't believe that it doesn't work in any implementation.
So what am I doing wrong?
How do I react to a file change in a python/urwid application?
EDIT:
I have tried using named pipes (and removed the code to clear the file), but visually it behaves the same: the app does not start.
Audibly, however, there is a great difference: It does not go into the infinite loop until I write to the file.
Before I write to the file callback is not called but the app is not started either, it just does nothing.
After I write to the file, it behaves as described above for regular files.
I have found the following workaround: read a named pipe in another thread, save each line in a queue, and poll from the UI thread to see whether something is in the queue. (The underlying problem with watch_file on a regular file is that select() always reports regular files as ready to read, even at EOF, so the callback fires continuously.)
Create the named pipe with mkfifo /tmp/mypipe.
Then write to it with echo >>/tmp/mypipe "some text".
#!/usr/bin/env python3
import os
import threading
import queue
import urwid

class App:
    POLL_TIME_S = .5

    def __init__(self):
        self.text = urwid.Text("hello world")
        self.filler = urwid.Filler(self.text)
        self.loop = urwid.MainLoop(self.filler, unhandled_input=self.onkeypress)

    def watch_pipe(self, path):
        self._cmd_pipe = path
        self.queue = queue.Queue()
        threading.Thread(target=self._read_pipe_thread, args=(path,)).start()
        self.loop.set_alarm_in(0, self._poll_queue)

    def _read_pipe_thread(self, path):
        while self._cmd_pipe:
            with open(path, 'rt') as pipe:
                for ln in pipe:
                    self.queue.put(ln)
        self.queue.put("!! EOF !!")

    def _poll_queue(self, loop, args):
        while not self.queue.empty():
            ln = self.queue.get()
            self.text.set_text(ln)
        self.loop.set_alarm_in(self.POLL_TIME_S, self._poll_queue)

    def close(self):
        path = self._cmd_pipe
        # stop reading
        self._cmd_pipe = None
        with open(path, 'wt') as pipe:
            pipe.write("")
        os.remove(path)

    def run(self):
        self.loop.run()

    def onkeypress(self, key, size=None):
        if key == 'q':
            raise urwid.ExitMainLoop()
        self.text.set_text(key)

if __name__ == '__main__':
    a = App()
    a.watch_pipe('/tmp/mypipe')
    a.run()
    a.close()
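As an aside, urwid's MainLoop also has a watch_pipe(callback) method, which creates an internal pipe and runs callback(data) inside the event loop whenever data is written to the returned file descriptor. If it is available in your version, it could replace the polling; a rough, untested sketch:

#!/usr/bin/env python3
import os
import threading
import urwid

text = urwid.Text("hello world")
loop = urwid.MainLoop(urwid.Filler(text))

def on_pipe_data(data):
    # data is the bytes that were written to the watch_pipe fd
    text.set_text(data.decode())

# watch_pipe returns a fd; writes to it invoke the callback in the UI thread
write_fd = loop.watch_pipe(on_pipe_data)

def forward_named_pipe(path):
    # Read the named pipe in a background thread and push its lines
    # into the event loop through the watch_pipe file descriptor.
    while True:
        with open(path, 'rb') as pipe:
            for line in pipe:
                os.write(write_fd, line)

threading.Thread(target=forward_named_pipe, args=('/tmp/mypipe',),
                 daemon=True).start()
loop.run()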
I'm relatively new to Python, so please forgive my early level of understanding!
I am working to create a kind of flag file. Its job is to monitor a Python executable: the flag file is constantly running and prints "Start" when the executable starts, "Running" while it runs, and "Stop" when it stops or crashes. If a crash occurs, I want it to be able to restart the script. So far I have this down for the restart:
from subprocess import run
from time import sleep

# Path and name to the script you are trying to start
file_path = "py"

restart_timer = 2

def start_script():
    try:
        # Make sure 'python' command is available
        run("python "+file_path, check=True)
    except:
        # Script crashed, lets restart it!
        handle_crash()

def handle_crash():
    sleep(restart_timer)  # Restarts the script after 2 seconds
    start_script()

start_script()
How can I implement this along with a flag file?
Not sure what you mean by "flag", but this minimally achieves what you want.
Main file main.py:
import subprocess
import sys
from time import sleep

restart_timer = 2
file_path = 'sub.py'  # file name of the other process

def start():
    try:
        # sys.executable -> same python executable
        subprocess.run([sys.executable, file_path], check=True)
    except subprocess.CalledProcessError:
        sleep(restart_timer)
        return True
    else:
        return False

def main():
    print("starting...")
    monitor = True
    while monitor:
        monitor = start()

if __name__ == '__main__':
    main()
Then the process that gets spawned, called sub.py:
from time import sleep

sleep(1)
print("doing stuff...")

# comment out to see change
raise ValueError("sub.py is throwing error...")
Put both files in the same directory and run python main.py.
You can comment out the throwing of the random error to see the main script terminate normally.
On a larger note, this example is not to say that this is a good way to achieve the robustness you need...
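If by "flag file" you mean a status file that other processes can read, a minimal sketch could wrap the monitor loop like this (the path and status strings are just illustrative):

import subprocess
import sys
from time import sleep

restart_timer = 2
file_path = 'sub.py'
flag_path = 'status.flag'  # hypothetical status file

def set_flag(status):
    # Overwrite the flag file with the current status
    with open(flag_path, 'w') as f:
        f.write(status)

def main():
    set_flag("Start")
    while True:
        set_flag("Running")
        try:
            subprocess.run([sys.executable, file_path], check=True)
        except subprocess.CalledProcessError:
            set_flag("Stop")  # crashed: pause, then restart
            sleep(restart_timer)
        else:
            set_flag("Stop")  # exited normally: stop monitoring
            break

if __name__ == '__main__':
    main()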
I'm creating a virtual OS with different files, like boot.py and Run.py.
It worked for a while until I typed this in:
if Run.run == 0:
    error()
else:
    start()
There are no errors flagged in the editor, like red underlines. Now I get this error:
AttributeError("module 'boot' has no attribute 'bootup'",)
Here's the code:
boot.py:
#Imports
import Run

#Varibles
name = "PyOS"
version = 1
creator = "© Dragon Softwares"

def bootup(run):
    if Run.run == 0:
        error()
    else:
        start()

def start():
    print("Starting PyOS")
    print("Version ", version)

def error():
    print("Please Run 'Run.py' To bootup the system")

bootup()
Run.py:
#Imports
import time
import boot

#Varibles
run = 0

def main():
    run = 1
    time.sleep(0.1)
    boot.bootup

main()
You have a circular import. Don't have boot import Run and vice-versa.
What happens now is:

1. Run.py is executed as the __main__ module.
2. The line import boot is reached and executed. boot has not been imported before, so boot.py is found and loaded to be the boot module.
3. In boot.py, the line import Run is reached and executed. Run has not been imported before, so Run.py is found and loaded to be the Run module. (That Run.py is also used for __main__ does not matter here.)
4. In the Run module, the line import boot is reached and executed. boot is already being imported, so the partially initialised module is used to set the name boot in the Run module.
5. Various other names are set in the Run module namespace (run and main).
6. The line main() is reached and executed.
7. The main() function references boot.bootup, which doesn't exist yet, because the boot module's top-level code never reached the def bootup(run): line.
Instead of importing Run in boot.py, pass the run value as a parameter to the bootup function:

def main():
    run = 1
    time.sleep(0.1)
    boot.bootup(run)
Also, don't call bootup() at the end of the boot module. That function call will be executed when you import boot, and it makes no sense there. If you want to present an error message, use an if __name__ == '__main__': guard:

if __name__ == '__main__':
    import sys
    error()
    sys.exit(1)
The if test will only ever be true if you ran boot.py as the script.
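Putting the pieces together, a fixed boot.py could look roughly like this (a sketch; bootup now receives run as a parameter instead of importing Run):

#Imports removed: boot no longer needs to import Run

name = "PyOS"
version = 1
creator = "© Dragon Softwares"

def bootup(run):
    if run == 0:
        error()
    else:
        start()

def start():
    print("Starting PyOS")
    print("Version ", version)

def error():
    print("Please Run 'Run.py' To bootup the system")

if __name__ == '__main__':
    import sys
    error()
    sys.exit(1)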
I have a Python file B containing all my functions and a main loop that runs for 0.25 seconds, and I want to call that file repeatedly from my file A. Can you follow my weird thinking?
What I did, but it only runs the loop from file B once:
#FileA
while 1:
    from FileB import *
And my file B:

#FileB
while t<0.25:
    #my stuff
Thanks.
PS: I forgot to mention that I can't modify file B.
The import statement only reads the target module one time.
If you have control of both files, I'd suggest that you make your loop a function in file B:
def main():
    while t<0.25:
        #my stuff

if __name__ == '__main__':
    main()
Then you can call it repeatedly from file A:
from fileB import main as Bmain

while 1:
    Bmain()
If you don't have control of the source code for the files (meaning: if the code comes from someone else), there are a few options. Probably the easiest and fastest to code would be to use the os.system(command) function to run the contents of fileB in a separate process.
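A minimal sketch of that fallback (assuming file B is saved as FileB.py next to file A):

#FileA
import os

while 1:
    # Run FileB.py as a fresh process each time, so its top-level
    # loop executes from the start on every iteration.
    os.system("python FileB.py")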
You should use functions; you don't have any functions in your question. Here is an example:
# fileA.py
import time
from fileB import myFunction

def main():
    while True:
        ret = myFunction()
        time.sleep(3)

main()
# fileB.py
def myFunction():
    print "running myFunction"
    return "result"
I have the following problem: whenever a child thread wants to perform some IO operation (writing to a file, downloading a file), the program hangs. In the following example the program hangs on opener.retrieve. If I execute python main.py, the program blocks in the retrieve function. If I execute python ./src/tmp.py, everything is fine. I don't understand why. Can anybody explain what is happening?
I am using Python 2.7 on a Linux system (kernel 3.5.0-27).
File layout:

main.py
./src
    __init__.py
    tmp.py
main.py
import src.tmp
tmp.py
import threading
import urllib

class DownloaderThread(threading.Thread):
    def __init__(self, pool_sema, i):
        threading.Thread.__init__(self)
        self.pool_sema = pool_sema
        self.daemon = True
        self.i = i

    def run(self):
        try:
            opener = urllib.FancyURLopener({})
            opener.retrieve("http://www.greenteapress.com/thinkpython/thinkCSpy.pdf", "/tmp/" + str(self.i) + ".pdf")
        finally:
            self.pool_sema.release()

class Downloader(object):
    def __init__(self):
        maxthreads = 1
        self.pool_sema = threading.BoundedSemaphore(value=maxthreads)

    def download_folder(self):
        for i in xrange(20):
            self.pool_sema.acquire()
            print "Downloading", i
            t = DownloaderThread(self.pool_sema, i)
            t.start()

d = Downloader()
d.download_folder()
I managed to get it to work by hacking urllib.py - if you inspect it, you will see many import statements dispersed within the code - i.e. it imports things 'on the fly', not just when the module loads.
So, the real reason is still unproven - but it is probably a deadlock in Python's import system: in Python 2, a global import lock is held while a module is being imported, so a thread spawned during an import can deadlock as soon as it triggers an import of its own (which urllib's lazy imports do). You just shouldn't run nontrivial code during an import - that's just asking for trouble.
If you insist, you can get it to work by moving all those scattered import statements to the beginning of urllib.py.
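The cleaner fix on the caller's side is not to start threads at import time at all, e.g. by guarding the module-level code in tmp.py (a sketch):

# tmp.py - only define things at import time; start the work explicitly
def main():
    d = Downloader()
    d.download_folder()

if __name__ == '__main__':
    main()

# main.py - trigger the work after the import has completed
import src.tmp

src.tmp.main()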