Is there a Python equivalent of the VB6 DoEvents statement, where control temporarily passes back to the OS? I found similar code in http://code.activestate.com/recipes/496767-set-process-priority-in-windows/
def setpriority(pid=None, priority=1):
    """ Set The Priority of a Windows Process. Priority is a value between 0-5 where
        2 is normal priority. Default sets the priority of the current
        python process but can take any valid process ID. """
    import win32api, win32process, win32con

    priorityclasses = [win32process.IDLE_PRIORITY_CLASS,
                       win32process.BELOW_NORMAL_PRIORITY_CLASS,
                       win32process.NORMAL_PRIORITY_CLASS,
                       win32process.ABOVE_NORMAL_PRIORITY_CLASS,
                       win32process.HIGH_PRIORITY_CLASS,
                       win32process.REALTIME_PRIORITY_CLASS]
    if pid is None:
        pid = win32api.GetCurrentProcessId()
    handle = win32api.OpenProcess(win32con.PROCESS_ALL_ACCESS, True, pid)
    win32process.SetPriorityClass(handle, priorityclasses[priority])
Is there a better, more Pythonic way?
Your code doesn't do what you say it does: it sets the priority of a process.
DoEvents is an internal VB thing that lets VB suspend your code and run other code in your program. That makes it dangerous, because event handlers can become reentrant.
The switch to the OS happens at the end of DoEvents, which calls the Windows API function Sleep(0): sleep for zero seconds, but surrender the rest of your ~20 ms time slice to other processes.
See documentation:
https://learn.microsoft.com/en-us/windows/desktop/api/synchapi/nf-synchapi-sleep
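If all you want is that final yield to the scheduler, a minimal sketch would be the following (note this only gives up the rest of the current time slice; it does not pump a message loop the way DoEvents does):

import time

# sleep(0) returns immediately, but lets the scheduler run other ready threads/processes
time.sleep(0)

# on Windows, with pywin32 installed, you can also call the API directly:
# import win32api
# win32api.Sleep(0)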
More on DoEvents.
In VBA/VB6, all functions/subs/properties/methods run from Sub to End Sub. No other sub/function can run while one is running. DoEvents changes this.
In the distant past of WordBasic etc., one used SendKeys to send keystrokes to the application you were running in (Word, for WordBasic). But because your macro was running, Word couldn't, so it didn't update its state. One used DoEvents with SendKeys to automate things in Word that weren't programmable and have it update its UI. WordBasic didn't have events.
You can't change global variables, can't use global variables that may change, can't change input parameters, and you have to make sure the sub/function with DoEvents isn't reentrant (or only a limited number of times). Otherwise bad things happen, and they may not be predictable. E.g. calling DoEvents in a SelectionChange event handler that changes the selection will eventually crash with out of stack space.
VB6 DoEvents has nothing to do with the priority of threads. It was a way to let the system handle the message loop so it could update the UI. VB6 programs were single-threaded, so if you had a time-consuming task, say some work running in a do-while loop, then in order to keep your UI responsive you had to call DoEvents in that loop. So DoEvents is a UI thing, and has nothing to do with a programming language per se.
But...
If you use Python + Tkinter, and for some reason you don't want to use multithreading (which would be preferred), let's say you want a quick solution for testing purposes, then you can use the real equivalent of VB6 DoEvents:
from tkinter import Tk

root = Tk()
root.update()
# or
root.update_idletasks()
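For example, a rough sketch of the DoEvents-style pattern inside a long-running loop (the work chunk and widget here are placeholders, not anything from your program):

from tkinter import Tk, Label

root = Tk()
status = Label(root, text="working...")
status.pack()

for i in range(100000):
    # ... one chunk of the time-consuming work goes here ...
    if i % 1000 == 0:
        status.config(text="step %d" % i)
        root.update()   # like DoEvents: process pending events so the UI stays responsive

root.mainloop()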
Related
After reading A LOT of material on the subject I still couldn't find an actual solution to my problem (there might not be one).
My problem is as follows:
In my project I have multiple drivers working with various hardware (IO managers, programmable loads, power supplies and more).
Initializing the connection to this hardware is costly (in time), and I can't open and then close the connection for every communication iteration between us.
Meaning I can't do this (assuming programmable_load implements __enter__/__exit__):

start of code...

with programmable_load(args) as programmable_load_instance:
    programmable_load_instance.do_something()

rest of code...
So I went for a different solution:

class programmable_load():
    def __init__(self):
        self.handler = handler_creator()

    def close_connection(self):
        self.handler.close_connection()
        self.handler = None

    def __del__(self):
        if self.handler is not None:
            self.close_connection()
For obvious reasons I don't 'trust' the destructor to actually get called, so I explicitly call close_connection() when I want to end my program (for all drivers).
The problem happens when I abruptly terminate the process, for example when I run in debug mode and quit debugging.
In these cases the process terminates without running through any destructors.
I understand that the OS will clear all the unused memory at this point, but is there any way to clean up in an organized manner?
And if not, is there a way to make the quit-debugging action pass through a certain set of functions? Does the Python process know it got a quit-debugging event, or does it treat it as a normal termination?
Operating system: Windows
According to this documentation:
If a process is terminated by TerminateProcess, all threads of the process are terminated immediately with no chance to run additional code.
(Emphasis mine.) This implies that there is nothing you can do in this case.
As detailed here, signals don't work very well on ms-windows.
As was mentioned in a comment, you could use atexit to do the cleanup. But that only works if the process is asked to close (e.g. a QUIT signal on Linux) and not just killed (as is likely the case when stopping the debugging session). Similarly, if you force your computer to turn off (e.g. long-press the power button or remove power), then it won't be called either. There is no 'solution' to that, for obvious reasons: your program can't expect to be called when the power suddenly goes off or when it is forcefully killed. The point of forcefully killing is to definitely kill the process now; if it first called your clean-up code, then you could delay that, which defeats the purpose. That is why there are signals that ask your process to stop. This is not Python-specific; the same concept applies across operating systems.
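A minimal sketch of the atexit approach, for the cases where the interpreter does get to shut down normally (the driver list here is a placeholder for your own instances):

import atexit

open_drivers = []   # e.g. programmable_load instances, appended as they are created

def cleanup_all():
    # runs on normal interpreter shutdown, not on TerminateProcess or power loss
    for driver in open_drivers:
        try:
            driver.close_connection()
        except Exception:
            pass

atexit.register(cleanup_all)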
Bonus (design suggestion, not a solution): I would argue that you can still make use of the context manager (using with). Your problem is not unique; database connections are usually kept alive for longer as well. It is a question of scope. Move the context further up to the application level. Then it is clear what the boundary is and you don't need any magic (you are probably also aware of @contextlib.contextmanager, which makes that a breeze).
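A rough sketch of what that could look like, assuming your drivers keep the __init__/close_connection API from the question:

from contextlib import contextmanager

@contextmanager
def application_resources():
    # open every long-lived connection once, at application level
    load = programmable_load()
    try:
        yield load
    finally:
        # runs on normal exit and on unhandled exceptions,
        # but not if the process is killed outright
        load.close_connection()

# usage:
# with application_resources() as load:
#     load.do_something()
#     ...rest of the application...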
I haven't tested this properly, as I don't have Wing IDE installed over here, so I can't guarantee this will work, but what about using SetConsoleCtrlHandler? For instance, try something like this:
import os
import sys
import win32api

if __name__ == "__main__":

    def callback(sig, func=None):
        # invoked by Windows when the console gets a control event (CTRL+C, CTRL+BREAK, ...)
        print("Exit handler called!")

    try:
        win32api.SetConsoleCtrlHandler(callback, True)
    except Exception as e:
        print("Captured exception", e)
        sys.exit(1)

    print("Press <enter> to quit")
    input()
    print("Bye!")
It'll be able to handle CTRL+C and CTRL+BREAK signals.
MyButton1 = Button(master, text='Quit', bg="grey", width=20,
                   command=master.quit)
MyButton1.place(x=200, y=100)

MyButton2 = Button(master, text='Propagate', bg="grey", width=20,
                   command=mainmethod)
MyButton2.place(x=1000, y=100)

master.geometry("1500x1500")
master.mainloop()
In the above code, after pressing the Propagate button, mainmethod is invoked.
I wrote my logic in mainmethod; this method alone takes 2 minutes to execute, and in the meantime the GUI goes into an unresponsive state for a few minutes, later displaying all my required output in the text box I inserted.
Is there any way to avoid the unresponsiveness, apart from using multithreading?
I am looking for a way such that, after pressing the Propagate button, the button is disabled, the window does not go unresponsive, and the text.insert statements I added in mainmethod display their output continuously.
To prevent hanging, you need to separate the calculations in mainmethod from Tkinter's main loop by executing them in different threads. However, the threading system in Python is not as well-developed as in other languages (AFAIK) because of the GIL (Global Interpreter Lock), but there is an alternative: using processes just like threads. This is possible with the multiprocessing library.
In order to just prevent hanging, you could create another function:

from multiprocessing import Process

def mainmethodLaunch():
    global mainmethodProcess
    mainmethodProcess = Process(target=mainmethod)
    mainmethodProcess.start()
And bind this function to MyButton2 instead of mainmethod itself.
Docs: https://docs.python.org/2/library/multiprocessing.html#the-process-class
You can see p.join in the example. The join method will cause your main process to wait for the other one to complete, which you don't want.
So when you press the button, the mainmethodLaunch function will be invoked, and it will create another process executing mainmethod. mainmethodLaunch's own run duration should be insignificant. Thanks to the use of another process, the Tkinter window will not hang. However, if you do just this, you will not be able to interact with the mainmethod process in any way while it is working.
In order to let these processes communicate with each other, you could use pipes (https://docs.python.org/2/library/multiprocessing.html#exchanging-objects-between-processes)
I guess the example is quite clear.
In order to receive some data from the mainmethod process over time, you will have to poll parent_conn every so often, let's say once a second. This can be achieved with Tkinter's after method
(tkinter: how to use after method)
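A rough sketch of combining the two; the names master, text and mainmethod come from your question, and mainmethod would have to be changed to accept the connection and send progress messages through it:

from multiprocessing import Process, Pipe

def mainmethodLaunch():
    global mainmethodProcess, parent_conn
    parent_conn, child_conn = Pipe()
    # mainmethod receives child_conn and calls child_conn.send(...) as it progresses
    mainmethodProcess = Process(target=mainmethod, args=(child_conn,))
    mainmethodProcess.start()
    master.after(1000, poll_pipe)

def poll_pipe():
    while parent_conn.poll():              # drain everything sent so far
        text.insert("end", parent_conn.recv())
    if mainmethodProcess.is_alive():
        master.after(1000, poll_pipe)      # check again in a second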
IMPORTANT NOTE: when using multiprocessing, you MUST initialize the program inside an if __name__ == '__main__': block. I mean, there should be no doing-something code outside functions and this block, i.e. no doing-something code at zero indent.
This is because multiprocessing is going to start the same Python file again, and it has to be able to distinguish the main process from the child one, and not do the initializing stuff in the child.
Check twice that you have done this, because if you make such a mistake, it can cost you the hanging of not just the Tkinter window, but the whole system :)
That's because the process will keep launching copies of itself endlessly, consuming all the RAM you have, regardless of how much you have.
I have an event-based application where I select() on various things and, depending on what gets selected, the application handles it. To add to this, I want to add an interactive menu implemented in Python. In a typical update() (called from within a while(!done) loop):
while (!done) {
    select() with timeout
    give control to python menu with timeout
}
Also, I don't want to run the Python menu on a separate thread, since that brings along the issue of race conditions, and due to the nature of the code/application/customers, I cannot have locks in the part of the code where the menu interacts. So it has to be done in a single-threaded environment.
I have been reading online but I couldn't find a way to do this in a single-threaded environment with a timeout. Can anyone please point me in the right direction, or give any pointers on how to do this?
I'm new to Python as well as event-driven/GUI programming in general. As far as I can tell, all the event choices are things like mouse clicks and key presses.
I've written a set of functions in a separate library that read from an I2C device (on Raspberry Pi). The functions return -1 if nothing is read. So basically, I want to loop, calling the read function each time, until something besides -1 is returned.
My first instinct was to write something like:
readResult = -1
while readResult == -1:
    readResult = IO.read()
changeGUI()
This doesn't seem to work though in the tkinter structure. I get how to make a function get called on a button press, but I don't know how to do a custom event.
There are a few ways to go with this -- you could give up using Tkinter's mainloop(), and build your own event loop that polled for both types of events. Or, you could spawn a separate thread to monitor IO. Or, you could use the after() method from Tkinter.
For the first two cases, if IO.read() returns immediately, whether or not there's a result, then you probably want to throw a time.sleep() call in the loop, to avoid hogging the CPU.
If your call to IO.read() doesn't block, and doesn't take very long, it's very easy to set up a loop to poll the device every few milliseconds. All you need to do is something like this:
def read_one_result():
    readResult = IO.read()
    if readResult != -1:
        changeGUI()
    root.after(100, read_one_result)
This will read one result, update the GUI if anything was read, and then schedule itself to run again in 100 ms.
I'm starting out in Python and have done a lot of programming in the past in VB. Python seems much easier to work with and far more powerful. I'm in the process of ditching Windows altogether and have quite a few VB programs I've written that I want to use on Linux without having to touch anything that involves Windows.
I'm trying to take one of my VB programs and convert it to Python. I think I pretty much have.
One thing I never could find a way to do in VB was to use Program 1, a calling program, to call Program 2 and run it multiple times. I had a program that would search a website looking for newly updated material; everything was numbered (1234567890_1.mp3, for example). Not every value was used, and I would have to search to find which files existed and which didn't. Typically the site would run through around 100,000 possible files a day, with only 2-3 files actually being used each day.

I had the program set up to search 10,000 files and, if it found a file that existed, it downloaded it and then moved on to test the next possible file. I would run this program simultaneously 10 times, with each instance set up to search a separate 10,000-file block. I always wanted to set it up so I could have a calling program that would have the user set the Main Block (1234) and the Secondary Block (5), with the Secondary Block possibly being a range of values. The calling program would then start up 10 separate programs (err, 0-9 in reality) and would use the Main Block and Secondary Block values to set up the call for each of the 10 copies of Program 2.

When each of the 10 programs got called, all running at the same time, they would be called with the appropriate search locations, so they would be searching the website to find what new files had been added throughout the previous day. It would only take 35-45 minutes to complete each day, versus multiple hours if I ran through everything in one long continuous program.
I think I could do this in Python using a .txt file that Program 1 writes (set) and Program 2 reads. I'm not sure if I would run into problems with changing the set value before Program 2 could read it and start using it. I think I would have to add a pause into the program to play it safe... I'm not really sure.
Is there another way I could pass a value from Program 1 to Program 2 to accomplish the task I'm looking to accomplish?
It seems simple enough to have a master class (a block, as you would call it) that would contain all of the different threads/classes in a list. Communication-wise, there could simply be a communication structure like so:
Thread <--> Master <--> Thread
or
Thread <---> Thread <---> Thread
The latter would look more like a web structure.
The first option would probably be a bit more complicated, since you have a "middle man". However, it does allow for mass communication. Also, all that would need to be passed to the other classes is the Master class, or a function that provides communication.
The second option allows for direct communication between threads. So if there are two threads that need to work together a lot, and you don't want to have to deal with other threads possibly interfering with the commands, then the second option is best. However, the list of classes would have to be sent to the function.
Either way, you are going to need to have a Master thread at the centre of communication (for simplicity's sake). Otherwise you could get into socket and file stuff, but the former is faster, more efficient, and less likely to cause headaches.
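For instance, a minimal sketch of the first option (workers reporting back to the Master through a shared queue); the worker body and block ranges are placeholders, not your actual search code:

import threading
import queue

results = queue.Queue()          # Master <--> worker communication

def worker(block_start, block_end):
    # placeholder for the real search of one 10,000-file block
    for n in range(block_start, block_end):
        pass                     # test/download file n here
    results.put((block_start, block_end, "done"))

threads = [threading.Thread(target=worker, args=(i * 10000, (i + 1) * 10000))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not results.empty():
    print(results.get())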
Hopefully you understood what I was saying. It is all theoretical, but I have used similar systems in my programs.
There are a bunch of ways to do this; probably the simplest is to use os.spawnv to run multiple instances of Program 2 and pass the appropriate values to each, i.e.
import os
import sys

def main(urlfmt, start, end, threads):
    start, end, threads = int(start), int(end), int(threads)
    per_thread = (end - start + 1) // threads
    for i in range(threads):
        s = start + i*per_thread
        e = min(s + per_thread - 1, end)
        outfile = 'output{}.txt'.format(i+1)
        # the first item in the argument list is passed to the new process as its own name
        os.spawnv(os.P_NOWAIT, 'program2.exe',
                  ['program2.exe', urlfmt, str(s), str(e), outfile])

if __name__ == "__main__":
    if len(sys.argv) != 5:
        print('Usage: web_search.py urlformat start end threads')
    else:
        main(*sys.argv[1:])