Is this a valid use case for a context manager? - python

I'm making a progress indicator for some long-running console process with intent to use it like this:
pi = ProgressIndicator()
for x in somelongstuff:
    # do stuff
    pi.update()
pi.print_totals()
Basically, it should output some kind of progress bar with dots and dashes, and something like "234234 bytes processed" at the end.
I thought it would be nice to use it as a context manager:
with ProgressIndicator() as pi:
    for x in somelongstuff:
        # do stuff
        pi.update()
However, there are a few things that concern me about this solution:
- the extra indentation makes the indicator feature appear more important than it actually is
- I don't want ProgressIndicator to handle any exceptions that might occur in the loop
Is this a valid use case for a context manager? What other solutions can you suggest?

It definitely seems a valid use case. The context manager doesn't have to handle exceptions if you don't want it to, although you would want to end the line the progress bar is printed on so it can't be confused with a traceback, and skip printing the totals when the block is exited through an exception.
With regard to indentation, I'd argue that letting the user see progress is actually a very important feature so it's fine for it to take up an indentation level.
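As a rough illustration, here is a minimal sketch of such a class (the dotted output and the byte-counting behavior are assumptions; only the usage in the question is given):
import sys

class ProgressIndicator(object):
    def __init__(self):
        self.count = 0

    def update(self, nbytes=1):
        self.count += nbytes
        sys.stdout.write(".")
        sys.stdout.flush()

    def print_totals(self):
        sys.stdout.write("\n%d bytes processed\n" % self.count)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:
            self.print_totals()
        else:
            # end the dotted line so the traceback starts on a fresh line
            sys.stdout.write("\n")
        return False  # falsy: exceptions from the loop propagate untouched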

There's a GUI application which has a very similar ProgressTask API, which you use like this:
def slow_func():
    t = nuke.ProgressTask()
    t.setMessage("Doing something")
    for x in range(100):
        do_something()
        t.setProgress(x + 1)
When ProgressTask.__del__ is called, the progress bar UI disappears. This works nicely for the most part; however, if an exception is raised (e.g. by do_something()), the traceback object keeps a reference to the ProgressTask object, so the progress bar gets stuck until another traceback occurs.
If ProgressTask implemented the context-manager protocol, it could use the __exit__ method to ensure the progress bar is hidden.
For a command-line UI (which it sounds like you are writing) this may not be an issue, but you could do similar cleanup tasks, e.g. display a ######### 100% (error) style bar and ensure the traceback output isn't messed up.
There's no reason your progress-bar class couldn't be usable in both manners - most context managers are perfectly usable as both regular objects and context managers, e.g.:
import threading

lock = threading.Lock()
lock.acquire()
lock.release()

# or:
with lock:
    pass
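For instance, reusing the hypothetical ProgressIndicator sketched in the first answer, both styles work:
# manual style: you are responsible for printing the totals
somelongstuff = range(10)  # stand-in for the real data source
pi = ProgressIndicator()
for x in somelongstuff:
    pi.update()
pi.print_totals()

# context-manager style: totals are printed automatically on a clean exit
with ProgressIndicator() as pi:
    for x in somelongstuff:
        pi.update()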

Related

Segmentation fault when initializing array

I am getting a segmentation fault when initializing an array. I have a callback function that runs when an RFID tag gets read:
IDS = []

def readTag(e):
    epc = str(e.epc, 'utf-8')
    if not epc in IDS:
        now = datetime.datetime.now().strftime('%m/%d/%Y %H:%M:%S')
        IDS.append([epc, now, "name.instrument"])
and a main function from which it's called:
def main():
    for x in vals:
        IDS.append([vals[0], vals[1], vals[2]])
    for x in IDS:
        print(x[0])
    r = mercury.Reader("tmr:///dev/ttyUSB0", baudrate=9600)
    r.set_region("NA")
    r.start_reading(readTag, on_time=1500)
    input("press any key to stop reading: ")
    r.stop_reading()
The error occurs because of the line IDS.append([epc, now, "name.instrument"]). I know because when I replace it with a print call instead, the program runs just fine. I've tried using different types for the array objects (integers), creating an array of the same objects outside of the append function, etc. For some reason, just creating an array inside the readTag function causes the segmentation fault, e.g. row = [1, 2, 3].
Does anyone know what causes this error and how I can fix it? To be a little more specific: the readTag function works fine for the first two calls (only ever two), but then it crashes. The Reader object that has the start_reading() function is from the mercury-api.
This looks like a scoping issue to me: the mercury library doesn't have permission to access your list's memory address, so when it invokes your callback function readTag(e), a segfault occurs. I don't think the behavior you want is supported by that library.
To extend Michael's answer, this appears to be an issue with scoping and the API you're using. In general, pure Python doesn't seg-fault. Or at least, it shouldn't seg-fault unless there's a bug in the interpreter or in some extension that you're using. That's not to say pure Python won't break; it's just that a genuine seg-fault indicates the problem is probably the result of something messy outside of your code.
I'm assuming you're using this Python API.
In that case, the README.md mentions that the Reader.start_reading() method you're using is "asynchronous", meaning it invokes a new thread or process and returns immediately; the background thread then continues to call your callback each time something is scanned.
I don't really know enough about the nitty-gritty of CPython to say exactly what's going on, but you've declared IDS = [] as a global variable, and it seems like the background thread is running the callback with a different context to the main program. So when it attempts to access IDS, it's reading memory it doesn't own, hence the seg-fault.
Because of how restrictive the callback is and the apparent lack of a buffer, this might be an oversight on the part of the developer. If you really need asynchronous reads, it's worth sending them an issue report.
Otherwise, considering you're just waiting for input, you probably don't need the asynchronous reads, and you could use the synchronous Reader.read() method inside your own busy loop instead, with something like:
try:
    while True:
        readTags(r.read(timeout=10))
except KeyboardInterrupt:  # break loop on SIGINT (Ctrl-C)
    pass
Note that r.read() returns a list of tags rather than just one, so you'd need to modify your callback slightly (see the sketch below). And if you're writing more than just a quick script, you probably want to use threads to interrupt the loop properly, as SIGINT is pretty hacky.
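For example, a rough, untested sketch of the reworked callback, assuming each tag object carries the same .epc attribute used in the question (the duplicate check against the first column mirrors the intent of the original):
import datetime

IDS = []

def readTags(tags):
    # r.read() returns a list of tag objects, so loop over all of them
    for e in tags:
        epc = str(e.epc, 'utf-8')
        if epc not in (row[0] for row in IDS):
            now = datetime.datetime.now().strftime('%m/%d/%Y %H:%M:%S')
            IDS.append([epc, now, "name.instrument"])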

Masking exceptions in Python?

It is typical to use the with statement to open a file so that the file handle cannot be leaked:
with open("myfile") as f:
…
But what if an exception occurs somewhere within the open call? The open function is very likely not an atomic instruction in the Python interpreter, so it's entirely possible that an asynchronous exception such as KeyboardInterrupt would be thrown* at some moment before the open call has finished, but after the system call has already completed.
The conventional way of handling this (with, for example, POSIX signals) is to use a masking mechanism: while masked, the delivery of exceptions is suspended until they are later unmasked. This allows operations such as open to be implemented atomically. Does such a primitive exist in Python?
[*] One might say it doesn't matter for KeyboardInterrupt since the program is about to die anyway, but that is not true of all programs. It's conceivable that a program might choose to catch KeyboardInterrupt at the top level and continue execution, in which case the leaked file handles can add up over time.
I do not think it's possible to mask exceptions; you can mask signals, but not exceptions. In your case, KeyboardInterrupt is the exception that is raised when signal.SIGINT is raised (which is Ctrl+C).
It is not possible to mask exceptions because that would not make sense. Say you are doing open('file', 'r') but file does not exist; this causes the open function to throw an IOError exception. We should not be able to mask these kinds of exceptions, because open would never be able to complete in that case.
exceptions – anomalous or exceptional conditions requiring special processing
For the KeyboardInterrupt exception it's different because, like I said, it's actually a signal that causes the KeyboardInterrupt exception to be raised.
You can only mask signals on Unix, starting from Python 3.3, using the function signal.pthread_sigmask [Reference].
For that you will have to move the context expression to a different block, so that we can mask the signal, run the context expression to get the context manager, and then unmask the signal. A sample would look like this (please note I have not personally tested this code):
import signal

signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGINT])
with <context expression> as variable:  # in your case, open('filename', 'r')
    signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGINT])
    ...
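Packaged as a context manager of its own, the same idea might look like this (a sketch for Unix and Python 3.3+; the masked helper is a made-up name):
import signal
from contextlib import contextmanager

@contextmanager
def masked(signums=(signal.SIGINT,)):
    # block delivery of the given signals for the duration of the block
    old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, signums)
    try:
        yield
    finally:
        # restore the previous mask; any pending signals are delivered now
        signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)

with masked():
    f = open("myfile")  # cannot be interrupted by Ctrl-C here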
Some clarification: it seems that asynchronous exceptions are not commonly used in Python. The standard library only documents KeyboardInterrupt AFAIK. Other libraries can implement their own via signal handlers, but I don't think (or hope?) this is a common practice, as asynchronous exceptions are notoriously tricky to work with.
Here is a naive solution that won't work:
try:
    handle = acquire_resource(…)
except BaseException as e:
    handle.release()
    raise e
else:
    with handle:
        …
The exception-handling part is still vulnerable to exceptions: a KeyboardInterrupt could occur a second time after the exception is caught but before release completes.
There is also a "gap" between the end of the try statement and the beginning of the with statement where it is vulnerable to exceptions.
I don't think there's any way to make it work this way.
Thinking from a different perspective, it seems that the only way asynchronous exceptions can arise is from signals. If this is true, one could mask them as @AnandSKumar suggested. However, masking is not portable, as it requires pthreads.
Nonetheless, we can fake masking with a little trickery:
import signal

def masking_handler(queue, prev_handler):
    def handler(*args):
        queue.append(lambda: prev_handler[0](*args))
    return handler

mask_stack = []

def mask():
    queue = []
    prev_handler = []
    new_handler = masking_handler(queue, prev_handler)
    # swap out the signal handler with our own
    prev_handler[0] = signal.signal(signal.SIGINT, new_handler)
    mask_stack.append((prev_handler[0], queue))

def unmask():
    prev_handler, queue = mask_stack.pop()
    # restore the original signal handler
    signal.signal(signal.SIGINT, prev_handler)
    # the remaining part may never run if a signal arrives right now,
    # but that's fine
    for event in queue:
        event()

mask()
with acquire_resource(…) as handle:
    unmask()
    …
This will work if SIGINT is the only source that we care about. Unfortunately it breaks down for multiple signals, not just because we don't know which ones are being handled, but also because we can't swap out multiple signals atomically!

Performing an action upon unexpected exit python

I was wondering if there is a way to perform an action before the program closes. I am running a program over a long time and I want to be able to close it and have the data saved to a text file or something, but there is no way for me to interfere with the while True loop I have running, and simply saving the data on each loop would be highly inefficient.
So is there a way that I can save data, say a list, when I hit the X or destroy the program? I have been looking at the atexit module but have had no luck, except when I set the program to finish at a certain point.
import atexit

def saveFile(list):
    print "Saving List"
    with open("file.txt", "a") as test_file:
        test_file.write(str(list[-1]))

atexit.register(saveFile(list))
That is my whole atexit part of the code and, like I said, it runs fine when I set it to close through the while loop.
Is this possible, to save something when the application is terminated?
Your atexit usage is wrong. It expects a function and its arguments, but you're just calling your function right away and passing the result to atexit.register(). Try:
atexit.register(saveFile, list)
Be aware that this uses the list reference as it exists at the time you call atexit.register(), so if you assign to list afterwards, those changes will not be picked up. Modifying the list itself without reassigning should be fine, though.
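Putting it together, a minimal corrected sketch (Python 2 style to match the question; myList is a placeholder name):
import atexit

myList = []

def saveFile(lst):
    print "Saving List"
    with open("file.txt", "a") as test_file:
        if lst:
            test_file.write(str(lst[-1]))

# pass the function and its argument separately; atexit calls it at exit
atexit.register(saveFile, myList)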
You could use the handle_exit context manager from this ActiveState recipe:
http://code.activestate.com/recipes/577997-handle-exit-context-manager/
It handles SystemExit, KeyboardInterrupt, SIGINT, and SIGTERM, with a simple interface:
def cleanup():
    print 'do some cleanup here'

def main():
    print 'do something'

if __name__ == '__main__':
    with handle_exit(cleanup):
        main()
There's nothing you can do in reaction to a SIGKILL. It kills your process immediately, without any cleanup allowed.
Catch the SystemExit exception at the top of your application, then rethrow it.
There are a couple of approaches to this. As some have commented, you could use signal handling: your [Ctrl]+[C] from the terminal where this is running in the foreground dispatches a SIGINT signal to your process (from the terminal's driver).
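A minimal sketch of that signal-handling approach (the file name and data list are placeholders; install the handlers before entering your while True loop):
import signal
import sys

data = []  # stand-in for whatever your loop accumulates

def save_and_exit(signum, frame):
    # runs when the process receives SIGINT or SIGTERM
    with open("file.txt", "a") as f:
        f.write(str(data))
    sys.exit(0)

signal.signal(signal.SIGINT, save_and_exit)
signal.signal(signal.SIGTERM, save_and_exit)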
Another approach would be to use a non-blocking os.read() on sys.stdin.fileno() such that you're polling your keyboard once during every loop to see if an "exit" keystroke or sequence has been entered.
A similarly non-blocking polling approach can be implemented using the select module's functionality. I've seen that used with the termios and tty modules. (It seems inelegant that it needs all of those to save, change, and restore the terminal settings, and I've also seen some examples using os and fcntl; I'm not sure when or why one would prefer one over the other if os.isatty(sys.stdin.fileno()).)
Yet another approach would be to use the curses module with window.nodelay() or window.timeout() to set your desired input behavior and then either window.getch() or window.getkey() to poll for any input.
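For instance, a small sketch of the curses approach, assuming 'q' is your exit key:
import curses

def main(stdscr):
    stdscr.nodelay(True)  # make getch() non-blocking
    while True:
        ch = stdscr.getch()  # returns -1 when no key is waiting
        if ch == ord('q'):
            break  # save your data here before leaving
        # ... one iteration of the long-running work goes here ...

curses.wrapper(main)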

pygtk's strange problem about set button's sensitive property

In one of my methods, I have the following code:
def fun():
    self.button1.set_sensitive(False)
    self.get_time()
However, self.button1 only becomes insensitive after get_time() returns!! Replacing get_time() with time.sleep(n) gives the same result.
Any clue why?
I think programmatic changes to widgets are applied on the next iteration of the event loop (gtk.main()), which is probably after the fun function finishes. Is that a problem for you? How much time does self.get_time() take? If it takes a noticeable amount of time, you can update the widgets before calling it:
def fun():
    self.button1.set_sensitive(False)
    while gtk.events_pending():
        gtk.main_iteration_do(False)
    self.get_time()
Uhh, are you sure you want to do that?
All GUI programming is event-driven message passing, so you really shouldn't block the main thread for long enough that you'd ever need a workaround like this. And if you do, you'll soon have other problems, like the window manager killing your window because it's not responding to pings, or reentrancy problems when you run the iteration. If you have some complicated task, like burning a CD or whatever takes that long, put the actual burning into its own executable and launch it with glib.spawn_async (or similar). Use gobject.child_watch_add to be notified about termination, as sketched below.
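A rough PyGTK-era sketch of that pattern (the helper executable path is hypothetical; check the exact flag and callback names against your PyGTK version):
import glib
import gobject

def on_task_done(pid, condition):
    # runs on the main loop when the child process exits
    print "slow task finished, status:", condition

# launch the slow work as a separate process; DO_NOT_REAP_CHILD is
# needed so child_watch_add can collect the exit status
pid, _, _, _ = glib.spawn_async(
    ["/usr/local/bin/do_slow_task"],  # hypothetical helper executable
    flags=glib.SPAWN_DO_NOT_REAP_CHILD)
gobject.child_watch_add(pid, on_task_done)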

wxPython: Using EVT_IDLE

I defined a handler for EVT_IDLE that does a certain background task for me. (That task is to take completed work from a few processes and integrate it into some object, making a visible change in the GUI.)
The problem is that when the user is not moving the mouse or doing anything, EVT_IDLE doesn't get called more than once. I would like this handler to be working all the time. So I tried calling event.RequestMore() at the end of the handler. Works, but now it takes a whole lot of CPU. (I'm guessing it's just looping excessively on that task.)
I'm willing to limit the number of times the task will be carried out per second; How do I do that?
Or do you have another solution in mind?
Something like this (executes at most once every second):
...
def On_Idle(self, event):
    if not self.queued_batch:
        wx.CallLater(1000, self.Do_Batch)
        self.queued_batch = True

def Do_Batch(self):
    # <- insert your stuff here
    self.queued_batch = False
...
Oh, and don't forget to set self.queued_batch to False in the constructor and maybe call event.RequestMore() in some way in On_Idle.
This sounds like a use case for a timer (wxTimer) instead of the idle event. When there is processing to do, call Start() on the timer; when there isn't any, call Stop(), and do your processing from the EVT_TIMER handler.
(Note: I come from wxWidgets for C++ and am not familiar with wxPython, but I assume the API is similar.)
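For reference, a minimal wxPython sketch of the timer approach (the frame class and handler names are made up):
import wx

class MainFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="worker")
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_timer, self.timer)
        self.timer.Start(1000)  # fire EVT_TIMER once per second

    def on_timer(self, event):
        pass  # integrate completed work into the GUI here

app = wx.App(False)
MainFrame().Show()
app.MainLoop()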
