I'm developing a Python program to monitor and control a game-server. The game-server has many game-cores, and those cores handle the clients.
I have a Python class called Server that holds instances of the class Core, and those instances are used to manage the actual game-cores. The Core class needs to connect to the game-core via a TCP socket in order to send commands to that specific game-core. To close those sockets properly, the Core class has a __del__ method which closes the socket.
An example:
class Server(object):
    Cores = []  # list which will be filled with the Core objects

    def __init__(self):
        # detect the game-cores, create the Core objects and append them to self.Cores
        ...

class Core(object):
    CoreSocket = None  # when the socket gets created, the socket object will be bound to this variable

    def __init__(self, coreID):
        # initiate the socket connection between the running game-core and this Python object
        ...

    def __del__(self):
        # properly close the socket connection
        ...
Now, when I use the Core class itself, the destructor always gets called properly. But when I use the Server class, the Core objects inside Server.Cores never get destructed. I have read that the gc has a problem with circular references and classes with destructors, but the Core objects never reference the Server object (only the socket-object, in Core.CoreSocket), so no circular references are created.
I usually prefer using the with-statement for resource cleanup, but in this case I need to send commands from many different methods in the Server class, so using with won't help ... I also tried to create and close the socket on each command, but that really kills the performance when I need to send many commands. Weak references created with the weakref module won't help either, because the destructors then get called immediately after I create the Server object.
Why don't the Core objects get destructed properly when the Server object gets cleaned up by the gc? I guess I'm just forgetting something simple, but I just can't find out what it is.
Or maybe there is a better approach for closing those sockets when the object gets cleaned up?
You've mixed up class and instance members. Unlike in some other languages, defining a variable at class scope creates a class variable, not an instance variable. When a Server instance dies, the Server class is still around and still holds references to the cores. Define self.cores in the __init__ method instead:
class Server(object):
    def __init__(self):
        self.cores = []
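For illustration, here's a minimal sketch of the corrected layout (the address, the port scheme, and detect_cores are placeholders invented for the example, not from the question): once cores is an instance attribute, the Core objects die together with their Server instance and their __del__ methods run.

import socket

class Core(object):
    def __init__(self, coreID):
        # instance attribute: each Core owns exactly one socket
        self.core_socket = socket.create_connection(('localhost', 9000 + coreID))

    def __del__(self):
        # runs when this Core instance is collected
        self.core_socket.close()

class Server(object):
    def __init__(self):
        # instance attribute: this list is released with the Server instance
        self.cores = [Core(core_id) for core_id in self.detect_cores()]

    def detect_cores(self):
        # placeholder for the real game-core detection
        return range(4)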
Related
If I instantiate an object in the main thread and then send one of its member methods to a ThreadPoolExecutor, does Python somehow create a copy-by-value of the object and send it to the subthread, so that the object's member method will have access to its own copy of self?
Or is it indeed accessing self from the object in the main thread, meaning that every method call in a subthread is modifying / overwriting the same properties (living in the main thread)?
Threads share a memory space. There is no magic going on behind the scenes, so code in different threads accesses the same objects. Thread switches can occur at any time, although most simple Python operations are atomic. It is up to you to avoid race conditions. Normal Python scoping rules apply.
You might want to read about ThreadLocal variables if you want to find out about workarounds to the default behavior.
Processes are quite different. Each process has its own memory space and its own copy of all the objects it references.
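To make the sharing concrete, here's a small sketch (Counter is an invented example class): the method submitted to the executor mutates the one and only instance, which the main thread then observes.

from concurrent.futures import ThreadPoolExecutor

class Counter(object):
    def __init__(self):
        self.value = 0

    def increment(self):
        # read-modify-write on the shared instance; += is not atomic
        self.value += 1

counter = Counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(100):
        pool.submit(counter.increment)

# No copy was ever made: the main thread sees the workers' updates.
# (Usually 100 here, but the non-atomic += means that is not guaranteed.)
print(counter.value)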
I'm running a REST server using Flask, and I have a method that updates some variables that other methods only read. I'd like to be able to safely update these variables, but I'm not sure how to approach this:
Is there some built-in Flask feature to suspend other requests while a specific one is being handled? If that method isn't running, other methods are free to run concurrently.
Perhaps I need to use some thread lock? I reviewed the locks Python's threading library has to offer, and couldn't find a lock that offers two kinds of locking: for writing and for reading. Do I need to implement such a thing myself?
I think a lock probably is what you want; an example of how to use one is as follows:
from threading import RLock

class App(object):
    def __init__(self):
        self._lock = RLock()
        self._thing = 0

    def read_thing(self):
        with self._lock:
            print(self._thing)

    def write_thing(self):
        with self._lock:
            self._thing += 1
So, let's imagine this object of ours (App) is created and then accessed from two different threads (e.g. two different requests); the lock object is used in a context-management fashion (the "with" keyword) to ensure that all operations that could be thread-unsafe are done within the lock.
While one thread holds the lock, any other thread that tries to acquire it simply blocks until the lock is released, so the protected operations never interleave.
This means we can hammer read_thing and write_thing to our hearts' content from as many threads as we like, and we shouldn't break anything.
So, for your Flask app, declare a lock and then whenever you access those variables you're worried about, do so inside the lock.
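A quick usage sketch (the thread count is arbitrary):

import threading

app = App()
threads = [threading.Thread(target=app.write_thing) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
app.read_thing()  # prints 10: every increment ran under the lock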
NOTE: If you're working with dictionaries, be sure to take copies of the dictionary ("copy.deepcopy" is one way), because otherwise you'll pass a reference to the actual dictionary and you'll be back to being thread-unsafe.
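A sketch of that dictionary pattern (ConfigStore and its method names are invented for the example): reads hand back a snapshot taken under the lock, so callers never touch the live dict.

import copy
from threading import RLock

class ConfigStore(object):
    def __init__(self):
        self._lock = RLock()
        self._config = {'mode': 'idle'}

    def read_config(self):
        with self._lock:
            # return a snapshot, not a reference to the live dict
            return copy.deepcopy(self._config)

    def write_config(self, key, value):
        with self._lock:
            self._config[key] = value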
I have a number of threads in my software that all do the same thing, but each thread operates from a different "perspective." I have a "StateModel" object that is used throughout the thread and the objects within the thread, but StateModel needs to be calculated differently for each thread.
I don't like the idea of passing the StateModel object around to all of the functions that need it. Normally, I would create a module variable and all of the objects throughout the program could reference the same data from the module variable. But, is there a way to have this concept of a static module variable that is different and independent for each thread? A kind of static Thread variable?
Thanks.
This is implemented in threading.local.
I tend to dislike mostly-quoting-the-docs answers, but... well, time and place for everything.
A class that represents thread-local data. Thread-local data are data whose values are thread specific. To manage thread-local data, just create an instance of local (or a subclass) and store attributes on it:

mydata = threading.local()
mydata.x = 1

The instance's values will be different for separate threads.

For more details and extensive examples, see the documentation string of the _threading_local module.
Notably you can just have your class extend threading.local, and suddenly your class has thread-local behavior.
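A minimal sketch of that (PerThread and worker are invented names): every thread that touches the instance gets its own, freshly initialized attributes.

import threading

class PerThread(threading.local):
    def __init__(self):
        # called once per thread that first touches the instance
        self.state = 0

shared = PerThread()

def worker(n):
    shared.state = n
    print(shared.state)  # always this thread's own value, never another's

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()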
I'm building an application that uses Redis as a datastore. Accordingly, I have many functions that interact with Redis, usually as wrappers for a group of Redis commands.
As the application grows past my initial .py file, I'm at a loss for how to handle the Redis connection across multiple modules. Currently, my pointer to the Redis connection is declared at the top of the file and every function assumes it's present rather than passing it to every function. If I spread these functions into multiple files, then each module creates its own Redis pointer to use and each instance of the application opens up multiple connections to Redis.
I would like one instance to just make use of the same connection.
I don't want to do this:
import redis

class MyApp(object):
    def __init__(self):
        self.r = redis.Redis()

    # ... all my app functions that touch redis go here ...
I also don't want to pass the Redis pointer as an argument into every function.
Is there some other way I can get functions from different modules to share a single Redis() instance?
Something like this:
Module redis_manager:
class RedisManager(object):
    def __init__(self):
        # connect to Redis here, e.g. self.redis = redis.Redis()
        self.redis = 12345  # placeholder for the real connection object

redis_manager = RedisManager()
Then in your other modules, you can do:
from redis_manager import redis_manager
redis_manager.redis.stuff
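This works because Python caches imported modules in sys.modules: the redis_manager module body runs only once, so every importer receives the very same RedisManager instance.

import redis_manager as a
import redis_manager as b

# the module executed only once, so the singleton is shared
assert a.redis_manager is b.redis_manager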
In the Python multiprocessing module, in order to obtain an object from a remote Manager, most recipes tell us that we need to build a getter for each object we want to recover:
import multiprocessing.managers
from queue import Queue
from threading import Thread

class QueueMgr(multiprocessing.managers.SyncManager): pass

datos = Queue()
resultados = Queue()
topList = list(top)  # `top` comes from the surrounding program

QueueMgr.register('get_datos', callable=lambda: datos)
QueueMgr.register('get_resultados', callable=lambda: resultados)
QueueMgr.register('get_top', callable=lambda: topList)

def Cola_run():
    queueMgr = QueueMgr(address=('172.2.0.1', 25555), authkey=b"foo")
    queueMgr.get_server().serve_forever()

Cola = Thread(target=Cola_run)
Cola.daemon = True
Cola.start()
and then the same getters must be declared in the client program:
import multiprocessing.managers

class QueueMgr(multiprocessing.managers.SyncManager): pass

QueueMgr.register('get_datos')
QueueMgr.register('get_resultados')
QueueMgr.register('get_top')

queueMgr = QueueMgr(address=('172.22.0.4', 25555), authkey=b"foo")
queueMgr.connect()

datos = queueMgr.get_datos()
resultados = queueMgr.get_resultados()
top = queueMgr.get_top()._getvalue()
Ok, it covers most usage cases. But I find the code ugly. Perhaps I am not using the right recipe. But if this really is the way, then at least I could write nicer code in the client, perhaps declaring the getters automagically, if I were able to know in advance which objects the Manager is sharing. Is there a way to do it?
It is particularly troubling when you consider that the SyncManager instances provided by multiprocessing.Manager() allow you to create sophisticated proxy objects, yet any client connecting to such a SyncManager apparently has to obtain the references to those proxies from somewhere else.
There's nothing stopping you from introspecting into the class and, for each shared attribute, generating the getter and calling register.
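For instance, a sketch under the assumption that you keep the shared objects in one dict (the SHARED name and the 'get_' prefix are conventions invented here, not part of multiprocessing):

import multiprocessing.managers
from queue import Queue

# single source of truth for everything the manager shares
SHARED = {
    'datos': Queue(),
    'resultados': Queue(),
    'top': [],
}

class QueueMgr(multiprocessing.managers.SyncManager):
    pass

# Server side: one getter per entry. The obj=obj default freezes the
# loop variable; a plain `lambda: obj` would return the last item for
# every getter.
for name, obj in SHARED.items():
    QueueMgr.register('get_' + name, callable=lambda obj=obj: obj)

A client that imports (or is otherwise told) the same set of names can then run the matching loop instead of spelling each getter out by hand:

# Client side: only the names are needed, not the objects.
for name in ('datos', 'resultados', 'top'):
    QueueMgr.register('get_' + name)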