I am searching for a way to run a module while replacing imports. This would be the missing magic to implement run_patched in the following pseudocode.
    from argparse import ArgumentParser

    class ArgumentCounter(ArgumentParser):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.arg_counter = 0

        def add_argument(self, *args, **kwargs):
            super().add_argument(*args, **kwargs)
            self.arg_counter += 1

        def parse_args(self, *args, **kwargs):
            result = super().parse_args(*args, **kwargs)
            print(self.arg_counter)
            return result

    run_patched('test.test_argparse', ArgumentParser=ArgumentCounter)
I know that single methods can be replaced by assignment, e.g. ArgumentParser.parse_args = print, so I was tempted to mess with globals such as sys.modules and then execute the module via runpy.run_module.
Unfortunately, the whole strategy has to work in a multithreaded scenario: the change should affect only the module being executed, while other parts of the program continue to use the unpatched module(s) as if they were never touched.
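The sys.modules idea can be sketched as below; run_patched and its exact behavior are my assumptions, and because sys.modules is interpreter-global this only illustrates the single-threaded case — it does not meet the multithreading requirement stated above:

```python
import runpy
import sys
import types

def run_patched(module_name, **replacements):
    # Hypothetical sketch: build a shallow copy of argparse with the given
    # attributes replaced, install it under the name 'argparse', run the
    # target module, then restore the original module.
    # CAUTION: sys.modules is interpreter-global, so this is NOT safe while
    # other threads import or use argparse concurrently.
    import argparse
    patched = types.ModuleType('argparse')
    patched.__dict__.update(argparse.__dict__)
    for name, replacement in replacements.items():
        setattr(patched, name, replacement)
    original = sys.modules['argparse']
    sys.modules['argparse'] = patched
    try:
        # run_module returns the executed module's globals dictionary
        return runpy.run_module(module_name)
    finally:
        sys.modules['argparse'] = original
```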
I would like to improve the socketio.event decorator to make it print the event fired and its parameters.
I have a Manager class which has a self.sio: socketio.Server attribute. I try to define a new decorator as a Manager method such that it returns a function decorated by self.sio.event and which also prints its data. I have tried this solution, but it does not work:
    def event(self, func):
        @self.sio.event
        def wrapper(*args, **kwargs):
            print(f'[{func.__name__}] : {args} {kwargs}')
            func(*args, **kwargs)
        return wrapper
Any recommendation?
I think something like this should work for you:
    def event(self, func):
        def wrapper(*args, **kwargs):
            print(f'[{func.__name__}] : {args} {kwargs}')
            return func(*args, **kwargs)
        return self.sio.on(func.__name__, wrapper)
You can't really use @sio.event on the wrapper, because then the event that gets registered would be named wrapper. My solution uses the @sio.on decorator, which accepts the event name explicitly.
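Here is a self-contained sketch of that answer; FakeServer is a hypothetical stand-in for socketio.Server (whose real on(event, handler) registers a handler under an explicit event name), returning the handler here for convenience:

```python
class FakeServer:
    # Hypothetical minimal stand-in for socketio.Server: .on(event, handler)
    # registers a handler under the given event name.
    def __init__(self):
        self.handlers = {}

    def on(self, event, handler=None):
        self.handlers[event] = handler
        return handler

class Manager:
    def __init__(self, sio):
        self.sio = sio

    def event(self, func):
        # Log the event name and payload, then delegate to the real handler.
        def wrapper(*args, **kwargs):
            print(f'[{func.__name__}] : {args} {kwargs}')
            return func(*args, **kwargs)
        return self.sio.on(func.__name__, wrapper)

manager = Manager(FakeServer())

@manager.event
def connect(sid):
    return sid

# The handler is registered under the original function's name:
manager.sio.handlers['connect']('abc123')  # prints [connect] : ('abc123',) {}
```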
I am making a text-based RPG game where I split the game into many files containing scripts for creature classes and weapons, for example classes/mobs/goblin.py. I have a global list of entities, so when a mob attacks something it finds that thing's location in the global list, but an instance of the class from classes/mobs/default.py can't access the list in the main.py file.
example codes:
main.py:
    import classes.mobs.default

    entity_list = ['test']

    x = classes.mobs.default.goblin()
    x.attack()
goblin.py:
    class goblin():
        def __init__(self):
            pass

        def attack(self):
            print(entity_list)  # NameError: entity_list lives in main.py
Is there a workaround? Thanks in advance.
This is a design flaw, and while you can push imports down into functions, it's better to solve the underlying problem.
I think the best option here is to create a Context class to hold the global state, context.py:
    _CONTEXT = None

    def context():
        global _CONTEXT
        if _CONTEXT is None:
            _CONTEXT = Context()
        return _CONTEXT

    class Context(object):
        def __init__(self):
            self.entity_list = ['test']
you can then use it freely in goblin.py:
    from path.to.context import context

    class Goblin(object):
        def attack(self):
            print(context().entity_list)
and main.py:
    x = classes.mobs.default.Goblin()
    x.attack()
if you at some point want to implement save/replay/network play, you can extend the context:
    class Context(object):
        def __init__(self):
            self.__events = []
            self.entity_list = ['test']

        def _addevent(self, name, *options):
            self.__events.append((name, options))
            # and/or send the event to other processes/users/etc.

        def create(self, cls, *args, **kwargs):
            self._addevent('create', cls, args, kwargs)
            return cls(*args, **kwargs)

        def call(self, obj, method, *args, **kwargs):
            self._addevent('call', obj, method, args, kwargs)
            return getattr(obj, method)(*args, **kwargs)
your main.py would then look like:
    from path.to.context import context

    x = context().create(classes.mobs.default.goblin)
    context().call(x, 'attack')
The awkwardness in syntax can be hidden with metaclasses and decorators if need be.
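As a sketch of that last point, a decorator (the `recorded` helper below is my invention, built on a minimal copy of the Context above) can hide the context().call(...) awkwardness so call sites stay as plain x.attack():

```python
import functools

_CONTEXT = None

def context():
    # Lazily create the single shared Context, as in the answer above.
    global _CONTEXT
    if _CONTEXT is None:
        _CONTEXT = Context()
    return _CONTEXT

class Context(object):
    def __init__(self):
        self.entity_list = ['test']
        self.events = []

    def _addevent(self, name, *options):
        self.events.append((name, options))

def recorded(method):
    # Record the call on the shared context, then execute it as usual,
    # so callers never have to write context().call(...) themselves.
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        context()._addevent('call', type(self).__name__,
                            method.__name__, args, kwargs)
        return method(self, *args, **kwargs)
    return wrapper

class Goblin(object):
    @recorded
    def attack(self):
        return context().entity_list
```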
    def attack(self):
        from .main import entity_list
        print(entity_list)
Importing inside methods or functions can be used to cause delayed imports.
This way, the fact that main.py may have to import goblin.py won't throw you into a circular-import situation, since the import of main.py in the goblin file is delayed until it's needed.
Nor does it cost any performance: the main and goblin modules are already loaded in memory, so the import statement just binds a name to an object that is already there.
I've written some code to wrap shutil.copyfile like so (this is a largely simplified example):
    from functools import wraps
    from shutil import copyfile

    def my_wrapper(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            return f(*args, **kwargs)
        return wrapper

    @my_wrapper
    def mycopyfile(*args, **kwargs):
        """Wrap :func:`shutil.copyfile`"""
        return copyfile(*args, **kwargs)
In PyCharm, if I type mycopyfile. it suggests *args and **kwargs as the params. How can I make PyCharm and other IDEs suggest the params of shutil.copyfile?
In addition, the quick docs in PyCharm show the documentation for mycopyfile rather than the docs for shutil.copyfile, even though mycopyfile.__doc__ returns the docs correctly (as determined by the @wraps decorator).
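Part of the issue is that @wraps(f) copies metadata from mycopyfile, and mycopyfile itself is declared with *args/**kwargs, so there is no richer signature to copy. Wrapping shutil.copyfile directly lets inspect.signature, which follows the __wrapped__ attribute that functools.wraps sets, recover the real parameters; IDEs use similar introspection, though I can't promise any particular PyCharm version's suggestions. A minimal demonstration:

```python
import functools
import inspect
import shutil

def my_wrapper(f):
    @functools.wraps(f)  # copies __doc__/__name__ and sets __wrapped__
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

# Wrap the target function itself, not an *args/**kwargs shim around it:
wrapped_copyfile = my_wrapper(shutil.copyfile)

# inspect.signature follows __wrapped__, so the original parameters survive:
print(inspect.signature(wrapped_copyfile))  # e.g. (src, dst, *, follow_symlinks=True)
```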
I have a memoizer decorator class in a library, as such:
    class memoizer(object):
        def __init__(self, f):
            "some code here"

        def __call__(self, *args, **kwargs):
            "some code here"
When I use it for functions in the library, I use @memoizer. However, I'd like the client (i.e. the programmer using the library) to initialize the memoization class from outside the library with some arguments, so that they hold for all uses of the decorator in the client program. In particular, this memoization class saves its results to a file, and I want the client to be able to specify how much space the files can take. Is this possible?
You can achieve this using a decorator factory:
    class DecoratorFactory(object):
        def __init__(self, value):
            self._value = value

        def decorator(self, function):
            def wrapper(*args, **kwargs):
                print(self._value)
                return function(*args, **kwargs)
            return wrapper

    factory = DecoratorFactory("shared between all decorators")

    @factory.decorator
    def dummy1():
        print("dummy1")

    @factory.decorator
    def dummy2():
        print("dummy2")

    # prints:
    # shared between all decorators
    # dummy1
    dummy1()

    # prints:
    # shared between all decorators
    # dummy2
    dummy2()
If you don't like factories, you can create global variables within some module and set them before using the decorators (not a nice solution; IMO the factory is cleaner).
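Applied to the memoization case in the question, the same factory pattern might look like this sketch; the in-memory cache and the `max_entries` knob are my stand-ins for the question's file-backed cache and size limit:

```python
class MemoizerFactory:
    """Client-configurable factory; one instance's settings are shared by
    every function decorated through it."""

    def __init__(self, max_entries):
        self._max_entries = max_entries

    def memoize(self, func):
        cache = {}

        def wrapper(*args):
            if args not in cache:
                if len(cache) >= self._max_entries:
                    # Evict the oldest entry once the client's limit is hit.
                    cache.pop(next(iter(cache)))
                cache[args] = func(*args)
            return cache[args]
        return wrapper

# The client configures the limit once, then uses the decorator everywhere:
factory = MemoizerFactory(max_entries=128)

@factory.memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```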
I'm trying to implement a Singleton and I am running into difficulty when I import the module. My set up is the following. I am using Python 2.7.
MODULE 1
    class SingletonClass(object):
        _instance = None  # missing in the original; __new__ reads it below

        def __new__(self, *args, **kwargs):
            if not self._instance:
                self._instance = super(SingletonClass, self).__new__(
                    self, *args, **kwargs)
            return self._instance

    print SingletonClass() #OUTPUT: 0x00000000030F1630
    print SingletonClass() #OUTPUT: 0x00000000030F1630 (Good, what I want)
MODULE 2
    import SingletonClass

    class AnotherClass:
        print SingletonClass.SingletonClass() #OUTPUT: 0x0000000003292208
Within the module the singleton is working, but in another module the Singleton is not returning the same object as it did in the first. Any idea why?
Edit
For now I will put the only thing that I have found that works. I'm sure there is a better solution to this, but I think this may better convey what the underlying problem is.
MODULE 1
    class SingletonParent(object):
        _instance = None

        def __new__(self, *args, **kwargs):
            if not self._instance:
                self._instance = super(SingletonParent, self).__new__(
                    self, *args, **kwargs)
            return self._instance
MODULE 2
    import SingletonParent

    class SingletonClass(object):
        def __new__(self, *args, **kwargs):
            if not SingletonParent.SingletonParent._instance:
                SingletonParent.SingletonParent._instance = super(
                    SingletonClass, self).__new__(self, *args, **kwargs)
            return SingletonParent.SingletonParent._instance

    print SingletonClass() #OUTPUT: 0x00000000030F1630
    print SingletonClass() #OUTPUT: 0x00000000030F1630
MODULE 3
    import SingletonClass

    class AnotherClass:
        print SingletonClass.SingletonClass() #OUTPUT: 0x00000000030F1630
Solution (Edit 3)
Lesson: Don't have your main function in the same module as your Singleton!
Your problem is most likely that the module is being imported twice under two different names.
To test for this, add something like the following to module1.py:

    print "Being imported..."
If this message is printed twice, then the module is being imported twice, and that's your problem. To fix it, make sure that you're using the same name to import the module everywhere[0], and that you're not doing hackery with sys.path.
[0]: Technically this shouldn't be necessary, but it's a simple fix.
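A sketch of the lesson in Python 3 syntax: keep the singleton in its own module, import that module under one consistent name (running it directly as a script would create a second copy under the name __main__), and guard with `cls._instance is None` so a missing attribute or a falsy instance can't defeat the check:

```python
# singleton.py -- import this module under one consistent name everywhere.
class SingletonClass:
    _instance = None

    def __new__(cls, *args, **kwargs):
        # `is None` (rather than `not cls._instance`) avoids re-creating
        # the instance when the existing one happens to be falsy.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = SingletonClass()
b = SingletonClass()
print(a is b)  # True
```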