Why does the following code work while the code after it breaks?
I'm not sure how to articulate my question in English, so I attached the smallest code I could come up with to highlight my problem.
(Context: I'm trying to create a terminal environment for Python, but for some reason the namespaces seem to be messed up, and the code below seems to be the essence of my problem.)
No errors:
d={}
exec('def a():b',d)
exec('b=None',d)
exec('a()',d)
Errors:
d={}
exec('def a():b',d)
d=d.copy()
exec('b=None',d)
d=d.copy()
exec('a()',d)
It is because the function a does not use the globals provided to the later exec calls; it uses the mapping it stored a reference to in the first exec. While you set 'b' in the new dictionary, you never set b in the globals of that function.
>>> d={}
>>> exec('def a():b',d)
>>> exec('b=None',d)
>>> d['a'].__globals__ is d
True
>>> 'b' in d['a'].__globals__
True
vs
>>> d={}
>>> exec('def a():b',d)
>>> d = d.copy()
>>> exec('b=None',d)
>>> d['a'].__globals__ is d
False
>>> 'b' in d['a'].__globals__
False
If exec didn't work this way, then this too would fail:
mod.py
b = None
def d():
    b
main.py
from mod import d
d()
A function will remember the environment where it was first created.
It is not possible to change the dictionary that an existing function points to. You can either modify its globals explicitly, or you can make another function object altogether:
from types import FunctionType
def rebind_globals(func, new_globals):
    f = FunctionType(
        code=func.__code__,
        globals=new_globals,
        name=func.__name__,
        argdefs=func.__defaults__,
        closure=func.__closure__,
    )
    f.__kwdefaults__ = func.__kwdefaults__
    return f
def foo(a, b=1, *, c=2):
    print(a, b, c, d)
# add __builtins__ so that `print` is found...
new_globals = {'d': 3, '__builtins__': __builtins__}
new_foo = rebind_globals(foo, new_globals)
new_foo(a=0)
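For the first option, modifying the existing function's globals explicitly, a minimal sketch: __globals__ cannot be reassigned, but the dictionary it refers to can be mutated in place.
def foo():
    print(d)  # 'd' is looked up in foo.__globals__ at call time

# __globals__ is a read-only attribute, but the dict behind it is mutable, so
# injecting a name into it makes the lookup succeed (at module level it also
# becomes a module global, because foo.__globals__ is this module's namespace)
foo.__globals__['d'] = 3
foo()  # prints 3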
How can I get a list of all the names that point to a Python object?
from example import my_function
a = my_function
b = my_function
get_names(my_function)
# desired: ['a', 'b']
Edit: The goal is to figure out how to monkey-patch an object that was loaded in an unknown way.
Search the global namespace for objects that match by identity, and report the keys (names).
def my_func():
    pass

a = my_func
b = my_func

def get_names(x):
    # scan the module-level names and yield every one bound to the same object
    for k, v in globals().items():
        if v is x:
            yield k

print(list(get_names(my_func)))  # prints ['my_func', 'a', 'b']
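If the object lives in some other module (the monkey-patching case from the edit), the same identity scan works against that module's namespace via vars(); get_names_in below is a hypothetical helper, not part of the question's code:
def get_names_in(namespace, target):
    # yield every key in the given mapping bound to target (compared by identity)
    for name, value in namespace.items():
        if value is target:
            yield name

# e.g. scan an imported module's namespace instead of globals():
# import example
# print(list(get_names_in(vars(example), example.my_function)))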
See below (use globals and make sure you do not return the function itself)
from example import my_function
def get_names(func):
    result = []
    for k, v in globals().items():
        # str(v) looks like "<function my_function at 0x...>", so this check
        # filters out the name the function was originally defined under
        if v == func and k not in str(v).split():
            result.append(k)
    return result

def foo():
    pass
a = my_function
b = my_function
c = foo
print(get_names(my_function))
example.py
def my_function():
    pass
output
['a','b']
While running Python 3.6.3, I am trying to edit a dictionary initialized in a module. Reducing complexity, I have the module Foo.py
d_1 = {}
def edit(a, b):
    global d_1
    d_1[a] = b

def remove():
    global d_1
    d_1 = {}
And Main.py
from Foo import d_1, edit, remove
import Foo
remove()
edit("Test", 1)
print(d_1)
Running Main.py prints {}, but if I comment out the remove(), it prints {"Test": 1}. In both cases, printing Foo.d_1 prints {"Test": 1}.
Why is it different, and is there a way to make edit work when remove has been called beforehand?
The reason is that you are creating a new dict inside remove.
If you use the id function to check the id of d_1, you will see the difference.
It's better to use d_1.clear() instead of d_1 = {}.
The reason Foo.d_1 shows {"Test": 1} is that remove rebound the module-level name d_1 to a new dict, and edit then wrote into that new dict; the d_1 you imported into Main still refers to the original, untouched dict.
(They are operating on different dict objects.)
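A minimal sketch of the clear()-based Foo.py; because the same dict object is kept, the d_1 imported into Main stays in sync:
# Foo.py
d_1 = {}

def edit(a, b):
    d_1[a] = b     # mutates the shared dict; no global statement needed

def remove():
    d_1.clear()    # empties the same object instead of rebinding the name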
The easiest way would be to create a class to encapsulate all your methods.
foo.py
class Dictionnary:
    def __init__(self):
        self.d_1 = {}

    def edit(self, a, b):
        self.d_1[a] = b

    def remove(self):
        self.d_1 = {}

    def __str__(self):
        return self.d_1.__str__()
main.py
from foo import Dictionnary
d = Dictionnary()
print("Initialization", d)
d.remove()
print("After remove", d)
d.edit("hello", 1)
print("Edited one time", d)
d.edit("world", ":)")
print("Edited 2 times", d)
d.remove()
print("Removed", d)
Output:
Initialization {}
After remove {}
Edited one time {'hello': 1}
Edited 2 times {'hello': 1, 'world': ':)'}
Removed {}
You could also put everything at class level (class attributes and classmethods), so you won't have to instantiate the class and store the object somewhere.
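A minimal sketch of that class-level variant (the method names mirror the instance version above):
# foo.py -- same idea with class-level state, no instance needed
class Dictionnary:
    d_1 = {}

    @classmethod
    def edit(cls, a, b):
        cls.d_1[a] = b

    @classmethod
    def remove(cls):
        cls.d_1.clear()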
Given three or more variables, I want to find the name of the variable with the min value.
I can get the min value from the list, and I can get the index within the list of the min value. But I want the variable name.
I feel like there's another way to go about this that I'm just not thinking of.
a = 12
b = 9
c = 42
cab = [c,a,b]
# yields 9 (the min value)
min(cab)
# yields 2 (the index of the min value)
cab.index(min(cab))
What code would yield 'b'?
The magic of vars saves you from having to build a dictionary up front, if you're willing to keep the values in instance variables:
class Foo():
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    def min_name(self, names=None):
        d = vars(self)
        if not names:
            names = d.keys()
        key_min = min(names, key=(lambda k: d[k]))
        return key_min
In action
>>> x = Foo(1,2,3)
>>> x.min_name()
'a'
>>> x.min_name(['b','c'])
'b'
>>> x = Foo(5,1,10)
>>> x.min_name()
'b'
Right now it'll crash if you pass an invalid variable name in the parameter list for min_name, but that's resolvable.
You can also update the dictionary, and the change is reflected on the instance:
def increment_min(self):
    key = self.min_name()
    vars(self)[key] += 1
Example:
>>> x = Foo(2,3,4)
>>> x.increment_min()
>>> x.a
3
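That works because vars(self) is just the instance's __dict__, so writing through it is the same as setting the attribute:
>>> x = Foo(2, 3, 4)
>>> vars(x) is x.__dict__
True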
You cannot get the name of the variable with the minimum/maximum value like this, since, as #jasonharper commented: cab is nothing more than a list containing three integers; there is absolutely no connection to the variables that those integers originally came from.
A simple workaround is to use (name, value) pairs and take the minimum by the value (a plain min(pairs) would compare the names first and return ('a', 12)):
>>> pairs = [("a", 12), ("b", 9), ("c", 42)]
>>> min(pairs, key=lambda p: p[1])
('b', 9)
>>> min(pairs, key=lambda p: p[1])[0]
'b'
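If the values already live in a dict keyed by name, min with a key function gives the name directly:
>>> values = {'a': 12, 'b': 9, 'c': 42}
>>> min(values, key=values.get)
'b'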
See Green Cloak Guy's answer, but if you want to go for readability, I suggest following a similar approach to mine.
You'd have to get very creative for this to work, and the only solution I can think of is rather inefficient.
You can get the memory address of the data b refers to fairly easily:
>>> hex(id(b))
'0xaadd60'
>>> hex(id(cab[2]))
'0xaadd60'
To actually correspond that with a variable name, though, the only way to do that would be to look through the variables and find the one that points to the right place.
You can do this by using the globals() function:
# get a list of all the variable names in the current namespace that reference your desired value
referent_vars = [k for k,v in globals().items() if id(v) == id(cab[2])]
var_name = referent_vars[0]
There are two big problems with this solution:
Namespaces - you can't just put this code in a helper function, because globals() called inside that function sees the module-level names of the module where the helper is defined, not the caller's local variables.
Time - this requires searching through the entire global namespace.
The first problem could be alleviated by additionally passing the current namespace in as a variable:
def get_referent_vars(val, namespace):
    return [k for k, v in namespace.items() if id(v) == id(val)]

def main():
    a = 12
    b = 9
    c = 42
    cab = [a, b, c]
    var_name = get_referent_vars(
        cab[cab.index(min(cab))],
        locals()  # a, b and c are locals of main, so pass locals() rather than globals()
    )[0]
    print(var_name)
    # should print 'b'
I have a script where I have to change some functions and reset the changes I made to them. I currently do it like this:
def a():
    pass

def b():
    pass

def c():
    pass

def d():
    pass
previous_a = a
previous_b = b
previous_c = c
a = d
b = d
c = d
# I want to make the following code block shorter.
a = previous_a
b = previous_b
c = previous_c
Instead of enumerating all the functions to reset, I would like a loop that iterates over a data structure (a dictionary, perhaps) and resets the function variables to their previous values. In the example above, the current approach is fine for 3 functions, but doing the same for 15+ functions produces a big chunk of code that I would like to reduce.
Unfortunately, I have been unable to find a viable solution. I thought of weakrefs, but my experiments with them failed.
Just store the old functions in a dictionary:
old = {'a': a, 'b': b, 'c': c}
then use the globals() dictionary to restore them:
globals().update(old)
This only works if a, b and c were globals to begin with.
You can use the same trick to assign d to all those names:
globals().update(dict.fromkeys(old.keys(), d))
This sets the keys a, b and c to the same value d.
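Putting both pieces together, a minimal sketch of the save / patch / restore cycle (assuming a, b, c and d are module-level functions):
def a(): print('a')
def b(): print('b')
def c(): print('c')
def d(): print('patched')

old = {name: globals()[name] for name in ('a', 'b', 'c')}  # save the originals
globals().update(dict.fromkeys(old, d))                     # point a, b and c at d
a(); b(); c()                                               # each prints 'patched'
globals().update(old)                                       # restore the originals
a()                                                         # prints 'a' again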
Function definitions are stored in the "global" scope of the module where they are defined. That global scope is a dictionary, so you can access and modify its values by key.
See this example:
>>> def a():
...     print("a")
...
>>> def b():
...     print("b")
...
>>> def x():
...     print("x")
...
>>> for i in ('a', 'b'):
...     globals()[i] = x
...
>>> a()
x
I'm having problems using copy.copy() and copy.deepcopy() with Python's scoping rules. I call a function and pass a dictionary as an argument. Inside the function, a deep copy of a local dictionary is assigned to that parameter, but the dictionary I passed in does not retain the copied values.
import copy

def foo(A, B):
    localDict = {}
    localDict['name'] = "Simon"
    localDict['age'] = 55
    localDict['timestamp'] = "2011-05-13 15:13:22"
    localDict['phone'] = {'work': '555-123-1234', 'home': '555-771-2190', 'mobile': '213-601-9100'}
    A = copy.deepcopy(localDict)
    B['me'] = 'John Doe'
    return

def qua(A, B):
    print("qua(A): ", A)
    print("qua(B): ", B)
    return
# *** MAIN ***
#
# Test
#
A = {}
B = {}
print "initial A: ", A
print "initial B: ", B
foo (A, B)
print "after foo(A): ", A
print "after foo(B): ", B
qua (A, B)
The copy.deepcopy works and within function "foo", dict A has the contents of localDict. But outside the scope of "foo", dict A is empty. Meanwhile, after being assigned a key and value, dict B retains the value after coming out of function 'foo'.
How do I maintain the values that copy.deepcopy() copies outside of function "foo"?
Ponder this:
>>> def foo(d):
...     d = {1: 2}
...
>>> d = {3: 4}
>>> d
{3: 4}
>>> foo(d)
>>> d
{3: 4}
>>>
Inside foo, d = {1: 2} binds a new object to the name d. That name is local to foo; rebinding it does not modify the object the caller's d points to. On the other hand:
>>> def bar(d):
...     d[1] = 2
...
>>> bar(d)
>>> d
{1: 2, 3: 4}
>>>
So this has nothing to do with your use of (deep)copy, it's just the way "variables" in Python work.
What's happening is that inside foo() you create a deep copy of localDict and assign it to A, shadowing the empty dict you passed in by rebinding the name to a new object. Now inside the function you have a new dict called A, completely unrelated to the A outside in the global scope, and it gets garbage collected when the function ends. So effectively nothing happens to the caller's A; only the 'me' key is added to B.
If instead of:
A = copy.deepcopy(localDict)
You do something like this, it would work as you expect:
C = copy.deepcopy(localDict)
A.update(C)
But it seems like what you really want has nothing to do with the copy module and would be something like this:
def foo(A, B):
    A['name'] = "Simon"
    A['age'] = 55
    A['timestamp'] = "2011-05-13 15:13:22"
    A['phone'] = {'work': '555-123-1234', 'home': '555-771-2190', 'mobile': '213-601-9100'}
    B['me'] = 'John Doe'
The behavior you are seeing isn't related to deepcopy(): you are rebinding the name A to a new object inside the function, and that rebinding is never visible to the caller. The changes to B persist because you are mutating the dict object itself. Here are two options for how you could get the behavior you want:
Instead of using localDict, just modify A:
def foo(A, B):
    A['name'] = "Simon"
    A['age'] = 55
    A['timestamp'] = "2011-05-13 15:13:22"
    A['phone'] = {'work': '555-123-1234', 'home': '555-771-2190', 'mobile': '213-601-9100'}
    B['me'] = 'John Doe'
    return
Use A.update(copy.deepcopy(localDict)) instead of A = copy.deepcopy(localDict):
import copy

def foo(A, B):
    localDict = {}
    localDict['name'] = "Simon"
    localDict['age'] = 55
    localDict['timestamp'] = "2011-05-13 15:13:22"
    localDict['phone'] = {'work': '555-123-1234', 'home': '555-771-2190', 'mobile': '213-601-9100'}
    A.update(copy.deepcopy(localDict))
    B['me'] = 'John Doe'
    return
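With either version, the caller's A is mutated in place, so after foo(A, B) the A in the main section holds the copied values and B holds the 'me' key.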