I have class that opens a socket connection on initialization, and can transmit and receive certain messages back and forth with the counterparty. I create an instance of the object using a with statement. In my class, if I receive certain messages back on the socket, I want to explicitly close the connection, and exit the with statement.
I attempt to do so by explicitly calling self.__exit__(None, None, None):
import sys

def __exit__(self, type, value, traceback):
    print 'Closing Connection'
    self.logout()
    self.conn.close()
    sys.exit(1)
However, I am finding that I get the Closing Connection message twice, and I run into problems because on the second call there is no longer a connection to close. Examining the code, I have ruled out all other instances of my explicit call to self.__exit__(None, None, None). What's going on? Is the sys.exit(1) insufficient to prevent the with statement from running __exit__ again on its own (although from what I've read, this seems to be the most "approved" way to do this)? How do I prevent the with statement from calling self.__exit__(None, None, None) a second time? Any help, or a point in the right direction, would be greatly appreciated!
Once you are in a with statement, the only way to leave it without running __exit__ is to use os._exit; that's bad. Instead, explicitly begin by calling __enter__ if you want this behavior. Or change your class to make sure it doesn't do the cleanup twice if called twice, like @Kay suggests in his answer. Or, do as @IsmailBadawi suggests in his comment, and refactor your code so you don't need to explicitly call __exit__.
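For the first option, here is a minimal runnable sketch of driving the protocol by hand; the Connection class below is a hypothetical stand-in for the asker's class, not code from the question:

import sys

class Connection(object):
    """Hypothetical stand-in for a class that owns a socket."""
    def __enter__(self):
        print('opening connection')
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print('closing connection')

conn = Connection().__enter__()  # no with statement, so no automatic __exit__
try:
    # ... talk on the socket, and decide for yourself when to stop ...
    conn.__exit__(None, None, None)  # runs exactly once, on your terms
except Exception:
    conn.__exit__(*sys.exc_info())   # still clean up on unexpected errors
    raise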
Just remember whether you have already closed the connection / log / something:
class MyContext(object):
    def __init__(self):
        self.__already_closed = False
        # ...

    def close(self):
        if not self.__already_closed:
            self.__already_closed = True
            self.logout()
            self.conn.close()

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()
Maybe even add a "please don't cleanup" method:
def do_not_cleanup(self):
    self.__already_closed = True
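Usage would then look like this. Here is a runnable mini-demo of the same idea, with the logout/connection details from the sketch above replaced by a print:

class MyContext(object):
    def __init__(self):
        self.__already_closed = False

    def close(self):
        if not self.__already_closed:
            self.__already_closed = True
            print('Closing Connection')  # stands in for logout() + conn.close()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()

with MyContext() as ctx:
    ctx.close()   # explicit close prints once...
# ...and the automatic __exit__ at the end of the block is now a harmless no-op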
I want to realize some sort of client-server connection using Python and am rather new to multiprocessing. Basically, I have a class 'Manager' that inherits from multiprocessing.Process and manages the connection from a client to different data sources. This process has some functions like 'get_value(key)' that should return the value of the key's data source. Now, as I want this to run asynchronously, I cannot simply call this function from my client process.
My idea so far would be that I connect the Client- and Manager-Processes using a Pipe and then send a message from the Client to the Manager to execute this function. I would realize this by sending a list through the pipe where the first element is the name of the function the remaining elements are the arguments of the actual function, e.g. ['get_value', 'datasource1']. The process then would receive this and send the return value through the pipe to the client. This would look something like this:
from multiprocessing import Process, Pipe

class Manager(Process):
    def __init__(self, connection):
        super(Manager, self).__init__()
        self.connection = connection

    def run(self):
        while True:
            if self.connection.poll():
                msg = self.connection.recv()
                self.call_function(msg[0], *msg[1:])

    def call_function(self, name, *args):
        print('Function Called with %s' % name)
        return_val = getattr(self, name)(*args)
        self.connection.send(return_val)

    def get_value(self, key):
        return 1.0
While I guess that this would work, I am not very happy with this solution. In particular, the call-a-function-by-string approach seems very error-prone. Is there a more elegant way of requesting that a function be executed in Python?
I think that your approach, all in all, is a good one (there are other ways to do the same thing, of course, but there is nothing wrong with your general approach).
That said, I would change the design slightly to add a "routing" component: some logic that limits which "commands" clients can send, and that hooks commands up to "handlers", that is, the functions that handle them. Basically, think web-framework routing (if you are familiar with the concept).
This is a good idea in terms of design flexibility, in terms of error detection, and in terms of security (you don't want clients to be able to call, for example, __del__ on your Manager).
In its most basic form, a router can be a dictionary mapping commands to class methods:
class Manager(Process):
    def __init__(self, connection):
        super(Manager, self).__init__()
        self.connection = connection
        self.routes = {'do_action': self._do_action,
                       'do_other_action': some_callable,
                       'ping': lambda args: args}  # <- as long as it's callable and has the right signature...

    def call_function(self, name, *args):
        try:
            handler = self.routes[name]
        except KeyError:
            return self._error_reply('{} is not a valid command'.format(name))
        try:
            return_val = handler(*args)  # handler functions will need to throw something if arguments are wrong...
        except ValueError as e:
            return self._error_reply('Invalid command arguments: {}'.format(str(e)))
        except Exception as e:
            # This is your catch-all "internal server error" handler
            return self._error_reply(str(e))
        self.connection.send(return_val)
This is of course just an example of an approach. You will need to implement _error_reply() in whatever way works for you.
You can expand on it by creating a Router class and passing it as a dependency to Manager, making it even more flexible. You might also want to think about making your Manager a separate thing and not a subclass of Process (because you might want to run it regardless of whether it is in a subprocess - for example in testing).
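For example, here is a hypothetical sketch of such a Router class; the names and the dispatch rules are illustrative, not part of the answer above:

class Router(object):
    def __init__(self):
        self._routes = {}

    def add(self, name, handler):
        self._routes[name] = handler

    def dispatch(self, name, *args):
        try:
            handler = self._routes[name]
        except KeyError:
            raise ValueError('%s is not a valid command' % name)
        return handler(*args)

router = Router()
router.add('ping', lambda *args: args)
print(router.dispatch('ping', 1, 2))  # prints (1, 2)

Manager would then take the router as a constructor argument, which makes the routing table easy to swap out or to test in isolation.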
BTW, there are frameworks for implementing such things with various degrees of complexity and flexibility (Thrift, ZeroMQ, ...), but if you want to do something simple and learn, doing it yourself is in my opinion a great choice.
Suppose I have a pool with a few processes inside of a class that I use to do some processing, like this:
from multiprocessing import Pool

# NUM_PROCESSES and Item are defined elsewhere
class MyClass:
    def __init__(self):
        self.pool = Pool(processes=NUM_PROCESSES)
        self.pop = []
        self.finished = []

    def gen_pop(self):
        self.pop = [self.pool.apply_async(Item.test, (Item(),)) for _ in range(NUM_PROCESSES)]
        while not self.check():
            continue
        # Do some other stuff

    def check(self):
        self.finished = filter(lambda t: self.pop[t].ready(), range(NUM_PROCESSES))
        new_pop = []
        for f in self.finished:
            new_pop.append(self.pop[f].get(timeout=1))
            self.pop[f] = None
        # Do some other stuff
When I run this code I get a cPickle.PicklingError which states that a <type 'function'> can't be pickled. What this tells me is that one of the apply_async functions has not returned yet so I am attempting to append a running function to another list. But this shouldn't be happening because all running calls should have been filtered out using the ready() function.
On a related note, the actual nature of the Item class is unimportant, but what is important is that at the top of my Item.test function I have a print statement which is supposed to fire for debugging purposes. However, that does not occur. This tells me that the function has been initiated but has not actually started execution.
So then, it appears that ready() does not actually tell me whether or not a call has finished execution or not. What exactly does ready() do and how should I edit my code so that I can filter out the processes that are still running?
Multiprocessing uses the pickle module internally to pass data between processes, so your data must be picklable. See the list of what is considered picklable; an object's method is not in that list.
To solve this quickly, just use a wrapper function around the method:
def wrap_item_test(item):
    return item.test()

class MyClass:
    def gen_pop(self):
        self.pop = [self.pool.apply_async(wrap_item_test, (Item(),)) for _ in range(NUM_PROCESSES)]
        while not self.check():
            continue
To answer the question you asked, .ready() is really telling you whether .get() may block: if .ready() returns True, .get() will not block, but if .ready() returns False, .get() may block (or it may not: quite possible the async call will complete before you get around to calling .get()).
So, e.g., the timeout = 1 in your .get() serves no purpose: since you only call .get() if .ready() returned True, you already know for a fact that .get() won't block.
But .get() not blocking does not imply the async call was successful, or even that a worker process started working on it: as the docs say,
If the remote call raised an exception then that exception will be reraised by get().
That is, e.g., if the async call couldn't be performed at all, .ready() will return True and .get() will (re)raise the exception that prevented the attempt from working.
That appears to be what's happening in your case, although we have to guess because you didn't post runnable code, and didn't include the traceback.
Note that if what you really want to know is whether the async call completed normally, after already getting True back from .ready(), then .successful() is the method to call.
It's pretty clear that, whatever Item.test may be, it's flatly impossible to pass it as a callable to .apply_async(), due to pickle restrictions. That explains why Item.test never prints anything (it's never actually called!), why .ready() returns True (the .apply_async() call failed), and why .get() raises an exception (because .apply_async() encountered an exception while trying to pickle one of its arguments - probably Item.test).
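Here is a small runnable demo of that failure mode (Python 3 here, using a lambda, which is similarly unpicklable): the call never reaches a worker, yet .ready() returns True, .successful() returns False, and .get() re-raises the pickling error.

from multiprocessing import Pool

if __name__ == '__main__':
    pool = Pool(2)
    res = pool.apply_async(lambda x: x, (1,))  # lambdas can't be pickled
    res.wait()                # returns as soon as the result is marked ready
    print(res.ready())        # True: get() will not block...
    print(res.successful())   # False: ...but the call failed
    try:
        res.get(timeout=1)
    except Exception as e:
        print('get() raised:', e)
    pool.close()
    pool.join()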
In Python, is there a way to not return to the calling function if a certain event happens in the called function? For example:
def acquire_image(sdkobject):
    ret = sdkobject.PrepareAcquisition()
    error_check(ret)
    ret = sdkobject.StartAcquisition()
    error_check(ret)
error_check is a function that checks the return code to see if the SDK call had an error. If it is an error, then I would like to not go back to acquire_image but instead go to another function that reinitialises the SDK and starts from the beginning again. Is there a Pythonic way of doing this?
Have your error_check function raise an exception (like SDKError) if there is a problem, then run all the commands in a while loop.
class SDKError(Exception):
    pass

# Perhaps define a separate exception for each possible
# error code, and make a dict that maps error codes to the
# appropriate exception.
class SDKType1Error(SDKError):
    pass

class SDKType5Error(SDKError):
    pass

sdk_errors = {
    1: SDKType1Error,
    5: SDKType5Error,
}

# Either return, if there was no error, or raise
# the appropriate exception
def error_check(return_code):
    if return_code == 0:
        return  # No error
    else:
        raise sdk_errors[return_code]

# Example of how to deal with specific SDKError subclasses, or a generic
# catch-all SDKError
def acquire_image(sdkobject):
    while True:
        try:
            # initialize sdk here
            error_check(sdkobject.PrepareAcquisition())
            error_check(sdkobject.StartAcquisition())
        except SDKType1Error:
            pass  # Special handling for this error goes here
        except SDKError:
            pass
        else:
            break
Alternatively, return the error and use an if condition to check whether the returned value indicates an error; if it does, call the reinitialisation code from the calling function. A sketch of this style follows.
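A minimal runnable sketch of that approach; the FakeSDK class is a made-up stand-in for the real SDK object, just for illustration:

def acquire_image(sdkobject):
    ret = sdkobject.PrepareAcquisition()
    if ret != 0:
        return ret  # hand the error code back instead of raising
    return sdkobject.StartAcquisition()

class FakeSDK(object):
    """Stand-in for the real SDK object: fails once, then succeeds."""
    def __init__(self):
        self.calls = 0

    def PrepareAcquisition(self):
        self.calls += 1
        return 0 if self.calls > 1 else 1

    def StartAcquisition(self):
        return 0

sdk = FakeSDK()
while acquire_image(sdk) != 0:
    print('error; reinitialising and retrying')  # reinitialise the SDK here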
Use return for the happy scenario
Returning to the calling function is done by a simple return or return response.
Use it for the typical run of your code, when all goes well.
Throw an exception when something goes wrong
If something goes wrong, call raise Exception(). In many situations, your code does not have to do it explicitly; it throws the exception on its own.
You may even define your own Exception subclasses and use them to pass more information to the caller about what went wrong (a short sketch follows below).
It took me a while to learn this approach, and it made my coding much simpler and shorter.
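For instance, a minimal runnable sketch of a custom exception that carries extra context; the class name and fields are illustrative, not from the question:

class AcquisitionError(Exception):
    """Carries the SDK return code alongside the message."""
    def __init__(self, return_code, message):
        super(AcquisitionError, self).__init__(message)
        self.return_code = return_code

try:
    raise AcquisitionError(5, 'StartAcquisition failed')
except AcquisitionError as e:
    print('SDK call failed with code %s' % e.return_code)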
Do not care about what your calling code will do with it
Let your function do the task or fail if there are problems.
Trying to anticipate the client's responsibilities inside your function will mess up your code and will not be a complete solution anyway.
Ignore who is calling you
In OOP, this is the principle of client anonymity: just serve the request and do not care who is calling.
Things to avoid
Do not attempt to use exceptions as a replacement for returning a value
Sometimes people exploit the fact that an exception can pass information to the caller. But this is rather an antipattern (though there are always exceptions).
I've encountered a situation where I'm working on a piece of code in which I command changes on a remote object (that is, one I can't duplicate in order to work on a clone), then ask the remote object for some operation in the new state, and finally revert all the changes I made by a sequence of opposite commands.
The problem is that if in the middle of all these changes I encounter an error, I want to be able to roll-back all the changes I made so far.
The best-fitting solution that came to my mind is Python's try-finally workflow, but it's rather problematic when the sequence of commands is long:
try:
    # perform action
    try:
        # perform action
        try:
            # ...
        finally:
            # unroll
    finally:
        # unroll
finally:
    # unroll
This way, the more commands I need the deeper my indentation and nesting goes and the less my code is readable.
I've considered some other solutions such as maintaining a stack where for every command I push a rollback action, but this could get rather complicated, and I dislike pushing bound methods into stacks.
I've also considered incrementing a counter for every action I perform then in a single finally decide on the kind of rollback I want depending on the counter, but again, the maintainability of such code becomes a pain.
Most hits I got on searches for "transactions" and "rollback" were DB related and didn't fit very well to a more generic kind of code...
Does anyone have a good idea as to how to systematically flatten this atrocity?
Wouldn't context manager objects and the with statement improve the situation? Especially if you can use a version of Python where the with statement supports multiple context expressions, such as 2.7 or 3.x. Here's an example:
class Action(object):
    def __init__(self, count):
        self.count = count

    def perform(self):
        print "perform " + str(self.count)
        if self.count == 2:
            raise Exception("self.count is " + str(self.count))

    def commit(self):
        print "commit " + str(self.count)

    def rollback(self):
        print "rollback " + str(self.count)

    def __enter__(self):
        self.perform()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_value is None:
            self.commit()
        else:
            self.rollback()

with Action(1), Action(2), Action(3):
    pass
You'd have to move your code to a set of "transactional" classes, such as Action above, where the action to be performed is executed in the __enter__() method and, if this terminates normally, you would be guaranteed that the corresponding __exit__() method would be called.
Note that my example doesn't correspond exactly to yours; you'd have to tune what to execute in the __enter__() methods and what to execute in the with statement's body. (Run as-is, the example prints "perform 1" and "perform 2", then "rollback 1", and propagates the exception: Action(2)'s __enter__() raised, so Action(1) is rolled back, while Action(2) and Action(3) were never entered successfully and are therefore neither committed nor rolled back.) In that case you might want to use the following syntax:
with Action(1) as a1, Action(2) as a2:
    pass
To be able to access the Action objects from within the body of the with statement.
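On Python 3.3+, contextlib.ExitStack offers the same commit/rollback behavior when the number of actions isn't known in advance. This goes beyond the answer above; a sketch, assuming an Action class like the one shown earlier (with its prints converted to Python 3 print() calls):

from contextlib import ExitStack

with ExitStack() as stack:
    for i in (1, 2, 3):
        stack.enter_context(Action(i))  # performs the action; if it raises,
                                        # the already-entered ones roll back
    # leaving the block normally commits every action, in reverse order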
I have to open a file-like object in python (it's a serial connection through /dev/) and then close it. This is done several times in several methods of my class. How I WAS doing it was opening the file in the constructor, and then closing it in the destructor. I'm getting weird errors though and I think it has to do with the garbage collector and such, I'm still not used to not knowing exactly when my objects are being deleted =\
The reason I was doing this is that I have to use tcsetattr with a bunch of parameters each time I open it, and it gets annoying doing all that all over the place. So I want to implement an inner class to handle all that, so I can use it like this:
with Meter('/dev/ttyS2') as m:
I was looking online and I couldn't find a really good answer on how the with syntax is implemented. I saw that it uses the __enter__(self) and __exit__(self) methods. But is implementing those methods all I have to do to use the with syntax? Or is there more to it?
Is there either an example on how to do this or some documentation on how it's implemented on file objects already that I can look at?
Those methods are pretty much all you need to make the object work with the with statement.
In __enter__ you open and set up the file object, and return whatever you want bound to the with statement's as target (here, self).
In __exit__ you close the file object. The code for writing to it will be in the with statement body.
class Meter(object):
    def __init__(self, dev):
        self.dev = dev

    def __enter__(self):
        # tcsetattr etc. goes here, before opening and returning the file object
        self.fd = open(self.dev, MODE)  # MODE: whatever mode you need
        return self

    def __exit__(self, type, value, traceback):
        # Exception handling here
        self.fd.close()

meter = Meter('/dev/tty0')
with meter as m:
    # here you work with the file object
    m.fd.read()
Easiest may be to use the standard Python library module contextlib:
import contextlib

@contextlib.contextmanager
def themeter(name):
    theobj = Meter(name)
    try:
        yield theobj
    finally:
        theobj.close()  # or whatever you need to do at exit

# usage
with themeter('/dev/ttyS2') as m:
    # do what you need with m
    m.read()
This doesn't make Meter itself a context manager (and therefore is non-invasive to that class), but rather "decorates" it (not in the sense of Python's "decorator syntax", but rather almost, but not quite, in the sense of the decorator design pattern;-) with a factory function themeter which is a context manager (which the contextlib.contextmanager decorator builds from the "single-yield" generator function you write) -- this makes it so much easier to separate the entering and exiting condition, avoids nesting, &c.
The first Google hit (for me) explains it simply enough:
http://effbot.org/zone/python-with-statement.htm
and the PEP explains it more precisely (but also more verbosely):
http://www.python.org/dev/peps/pep-0343/