eval not reading a variable inside an inner function - Python

When using an inner function, it can read a variable defined in the outer function. But somehow this fails when using eval(). It seems to be related to how locals() works... but I'm not sure how or why...
def main():
    aaa = 'print this'
    def somethingelse():
        print(locals())
        #print(aaa)
        print(eval('aaa'))
        print(locals())
    somethingelse()

main()
The above code doesn't work, giving this error message:
  File "<string>", line 1, in <module>
NameError: name 'aaa' is not defined
But if I uncomment the print(aaa) line so that both print lines exist, then both of them work.
I tried printing locals() before and after this print(aaa) line. It turns out that if the print(aaa) line is commented out, both locals() calls print an empty {}. But if it is uncommented, then both print {'aaa': 'print this'}.
This is puzzling to me...

When your Python code is compiled, the compiler has to do special things to allow a local variable to be accessible from inside a nested function. This makes all access to the variable slower, so it only does it for variables that it knows are used in the inner function. Other local variables from the outer function simply don't exist in the inner function's namespace.
It can't analyse inside the string you pass to eval, so it doesn't know that the code is attempting to access a variable that otherwise wouldn't exist in the inner function. You need to access the variable directly from inside the inner function for the compiler to add it to the local variables of that function.
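A small sketch of this (the function names here are just for illustration): a direct reference to aaa forces the compiler to create a closure cell, which you can confirm via __code__.co_freevars, and only then does eval('aaa') find the name in the calling frame's locals.
def main():
    aaa = 'print this'

    def uses_eval_only():
        return eval('aaa')      # the compiler sees no direct use of aaa here

    def uses_direct_ref():
        aaa                     # a direct reference makes aaa a free (closure) variable
        return eval('aaa')      # eval's default locals() now contains aaa

    print(uses_eval_only.__code__.co_freevars)   # () - no closure cell was created
    print(uses_direct_ref.__code__.co_freevars)  # ('aaa',)
    print(uses_direct_ref())                     # print this

main()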
You probably don't want to be using eval anyway; there are extremely few cases where it is a good idea to use it.

Related

How does the Python interpreter recognize an undefined global variable in a function?

How does the Python interpreter recognize the undefined global variable (a) in the function in the following code?
def show():
    print(a)

a = 1
show()
Python is an interactive language, so it processes each line of code one by one.
Given this, it should throw an error at the line with the undefined variable (print(a)).
However, it works without error.
How does the Python interpreter recognize the undefined variable (a)?
Or is it just treated as letters until the show function is called?
I converted the above code to bytecode, but I didn't understand it well...
When you define your function inside a Python interpreter, Python treats it as a sort of black box. Names that are not assigned anywhere inside the function are treated as free (in this case, global) names. Then, it stores a reference to the function inside the global table (you can access it using globals()). This global table holds the values of global variables and references to global functions.
When you define the variable a, Python stores it inside the global dictionary as well, just like the function before it.
After that, when you call your function, Python sees the variable a. It knows that the variable is free, and therefore must have been stored in the global table by now. It then looks it up in the global table and uses the value stored there.
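For instance (a tiny sketch, run at module level), you can watch both the function and the variable appear in the globals table:
def show():
    print(a)

print('show' in globals())   # True - the function reference is stored by name
a = 1
print('a' in globals())      # True - the variable is stored there too
show()                       # prints 1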
Python is run line by line, but the body of a function is only executed when the function is called. So even though the module is processed line by line, the lookup of a happens later, after a has already been assigned.
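Since you mentioned looking at the bytecode: disassembling the function makes this deferred lookup visible (the exact output differs slightly between Python versions):
import dis

def show():
    print(a)

dis.dis(show)
# Both `print` and `a` are fetched with LOAD_GLOBAL instructions: nothing is
# resolved at definition time, the lookup only happens when show() is called.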
Use of the global keyword:
To access a global variable inside a function, there is no need to use the global keyword. As a is not assigned inside the function, Python will look it up in the global scope.
We use the global keyword to assign a new value to a global variable.
This example throws an error -
def show():
    a = a + 5
    print(a)

a = 1
show()
Error:
UnboundLocalError: local variable 'a' referenced before assignment
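A minimal fix of the failing example, using the global keyword as described above:
def show():
    global a     # assign to the module-level a instead of creating a local
    a = a + 5
    print(a)

a = 1
show()   # prints 6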

Limit Python function scope to local variables only

Is there a way to limit a function so that it would only have access to its local variables and passed arguments?
For example, consider this code
a = 1

def my_fun(x):
    print(x)
    print(a)

my_fun(2)
Normally the output will be
2
1
However, I want to limit my_fun to its local scope so that print(x) would work but print(a) would throw an error. Is that possible?
I feel like I should preface this with: Do not actually do this.
You (sort of) can with functions, but you will also disable calls to all other global methods and variables, which I do not imagine you would like to do.
You can use the following decorator to have the function act like there are no variables in the global namespace:
import types
noglobal = lambda f: types.FunctionType(f.__code__, {})
And then call your function:
a = 1

@noglobal
def my_fun(x):
    print(x)
    print(a)

my_fun(2)
However, this actually results in a different error than the one you want; it results in:
NameError: name 'print' is not defined
By not allowing globals to be used, you cannot use print() either.
Now, you could pass in the functions that you want to use as parameters, which would allow you to use them inside the function, but this is not a good approach and it is much better to just keep your globals clean.
a = 1

@noglobal
def my_fun(x, p):
    p(x)
    p(a)

my_fun(2, print)
Output:
2
NameError: name 'a' is not defined
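If you want print() (and the other builtins) to keep working while still hiding the module's own globals, one variation - not from the original answer, just a sketch - is to seed the replacement globals dictionary with the builtins module:
import builtins
import types

# Hide module-level globals, but keep the builtin namespace reachable.
noglobal = lambda f: types.FunctionType(f.__code__, {'__builtins__': builtins})

a = 1

@noglobal
def my_fun(x):
    print(x)   # works: print is resolved through __builtins__
    print(a)   # still raises NameError: name 'a' is not defined

my_fun(2)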
Nope. The scoping rules are part of a language's basic definition. To change this, you'd have to alter the compiler to exclude items higher on the context stack, but still within the user space. You obviously don't want to limit all symbols outside the function's context, as you've used one in your example: the external function print. :-)

Modify Python global variable inside eval

I have code like this:
globals_defined = {'add': my_add_fn, 'divide': my_divide_fn}
eval_result = eval(<some code>, {'data': {'name_1': 'NAME1', 'name_2': 'NAME2'}}, globals_defined)
I would like to set a global variable inside the eval and then be able to access it afterwards. So, like:
globals_defined = {'add': my_add_fn, 'divide': my_divide_fn, 'count_iterations': 0}
eval_result = eval(<some code>, {'data': {'name_1': 'NAME1', 'name_2': 'NAME2'}}, globals_defined)
print 'iterations: ' + str(globals_defined['count_iterations'])
And ideally that would print a modified value of count_iterations. Inside the eval, the my_add_fn would do something like the below to increment it:
def my_add_fn():
    global count_iterations
    count_iterations += 1
    return 'blah!'
Edit: I should have added this at first. Yes, I need to use eval. This eval is from user input originally, but it has been parsed into an Abstract Syntax Tree that rejects all but a few mathematical operations. Then that AST is what is being eval'd, with some custom functions defined.
Sounds like I can't do it this way though.
You should not be using eval to start with.
But since you don't show the code you are running in there, there is no way to suggest alternatives.
The fact is that Python's eval only executes expressions - and assignments in Python are statements - so you can't, ordinarily, have an assignment inside an eval.
The way to fix that is just to use exec instead of eval. By default, exec will use the global variables of the module where it is run, so, any assignment in the executed code will affect the global namespace.
If you pass a dictionary as a second argument to exec it will use that dictionary as its globals dictionary - and then you can have it modify the values inside that dict with assignments.
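For example, a minimal sketch of that behaviour (the variable names are just for illustration):
ns = {}                 # this dict becomes the executed code's global namespace
exec("x = 1 + 2", ns)
print(ns["x"])          # 3 - the assignment landed in ns, not in this module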
That is a general statement about eval and exec. Your specific problem is that you are (likely) trying to call a function inside the eval - add() - which is exposed to it in the globals_defined dictionary itself. The problem is that your add function is defined in your module - outside the eval (or exec) scope. The global variables for the my_add_fn function are not the globals passed to eval, but the module globals themselves. So the variable count_iterations it accesses is in the module scope, not inside the globals_defined dictionary.
Of course there are ways to fix that. The best is: not to use eval (or exec).
Your print idiom on the last line suggests you are new to Python - otherwise you would know about the powerful ways of string formatting in existence.
That said: you can call functions stored in a dictionary just as you can access any element on a dictionary - that is:
globals_defined["add"]() will call your my_add_fn function - this is a hint of how you can get rid of the need to use "eval".
And finally, back to your problem as is - one way to solve it is to have your function not rely on its global namespace, but rather receive and return parameters - and you then make the assignments of the values returned by the function outside it, in your exec string scope - thus:
def my_add_fn(count):
    return 'blah!', count + 1

global_dict = {"add": my_add_fn, "count_iterations": 0}
exec("result, count_iterations = add(count_iterations)", global_dict)
print "Count = {}".format(global_dict["count_iterations"])
Why try to use a global namespace? Why not construct a specific local namespace to pass the functions to the eval? The problem with passing globals() as the global namespace to eval or exec is that you are giving away control of your namespace to the code you execute, so (e.g.) if it binds to a name you are already using, your value will be overwritten.
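A sketch of that idea, with a stand-in add function (the names here are illustrative, not the question's real helpers):
def my_add_fn(a, b):
    return a + b

allowed = {'add': my_add_fn, 'data': {'name_1': 'NAME1', 'name_2': 'NAME2'}}

# An empty __builtins__ dict keeps the expression from reaching anything
# you did not put in `allowed` (a convenience, not a real sandbox).
result = eval("add(2, 3)", {'__builtins__': {}}, allowed)
print(result)   # 5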

Update locals from inside a function

I would like to write a function which receives a local namespace dictionary and updates it. Something like this:
def UpdateLocals(local_dict):
    d = {'a': 10, 'b': 20, 'c': 30}
    local_dict.update(d)
When I call this function from the interactive python shell it works all right, like this:
a = 1
UpdateLocals(locals())

# prints 20
print a
However, when I call UpdateLocals from inside a function, it doesn't do what I expect:
def TestUpdateLocals():
    a = 1
    UpdateLocals(locals())
    print a

# prints 1
TestUpdateLocals()
How can I make the second case work like the first?
UPDATE:
Aswin's explanation makes sense and is very helpful to me. However I still want a mechanism to update the local variables. Before I figure out a less ugly approach, I'm going to do the following:
def LoadDictionary():
    return {'a': 10, 'b': 20, 'c': 30}

def TestUpdateLocals():
    a = 1
    for name, value in LoadDictionary().iteritems():
        exec('%s = value' % name)
Of course the construction of the string statements can be automated, and the details can be hidden from the user.
You have asked a very good question. In fact, the ability to update local variables is very important and crucial for saving and loading datasets in machine learning or in games. However, most developers of the Python language have not come to a realization of its importance. They focus too much on conformity and optimization, which is nevertheless important too.
Imagine you are developing a game or running a deep neural network (DNN): if all local variables are serializable, saving the entire game or DNN can be put into one line, print(locals()), and loading the entire game or DNN can be put into one line, locals().update(eval(sys.stdin.read())).
Currently, globals().update(...) takes immediate effect but locals().update(...) does not work because Python documentation says:
The default locals act as described for function locals() below:
modifications to the default locals dictionary should not be
attempted. Pass an explicit locals dictionary if you need to see
effects of the code on locals after function exec() returns.
The reason Python is designed this way is optimization and turning the exec statement into a function:
To modify the locals of a function on the fly is not possible without
several consequences: normally, function locals are not stored in a
dictionary, but an array, whose indices are determined at compile time
from the known locals. This collides at least with new locals added
by exec. The old exec statement circumvented this, because the
compiler knew that if an exec without globals/locals args occurred in
a function, that namespace would be "unoptimized", i.e. not using the
locals array. Since exec() is now a normal function, the compiler does
not know what "exec" may be bound to, and therefore cannot treat it
specially.
Since globals().update(...) works, the following piece of code will work in the root namespace (i.e., outside any function), because locals() is the same as globals() in the root namespace:
locals().update({'a':3, 'b':4})
print(a, b)
But this will not work inside a function.
However, as hacker-level Python programmers, we can use sys._getframe(1).f_locals instead of locals(). From what I have tested so far, on Python 3, the following piece of code always works:
def f1():
sys._getframe(1).f_locals.update({'a':3, 'b':4})
print(a, b)
f1()
However, sys._getframe(1).f_locals does not work in root namespace.
The locals are not updated here because, in the first case, the variable declared has global scope. But when it is declared inside a function, the variable loses that scope outside it.
Thus, the original value of locals() is not changed in the UpdateLocals function.
PS: This might not be related to your question, but using camel case is not good practice in Python. Try snake_case instead:
update_locals() instead of UpdateLocals()
Edit: To answer the question in your comment:
There is something called the system stack. The main job of this stack during the execution of code is to manage local variables, make sure control returns to the correct statement after the called function finishes executing, and so on.
So, every time a function call is made, a new entry is created on that stack,
which contains the line number (or instruction number) to which control has to return after the return statement, and a set of fresh local variables.
The local variables, while control is inside the function, are taken from that stack entry. Thus, the sets of locals in the two functions are not the same. The entry on the stack is popped when control exits the function. Thus, the changes you made inside the function are erased, unless those variables have global scope.
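A small sketch of that point (the names are just for illustration): inside a function, locals() hands back a snapshot dictionary, and writing to it does not touch the real local variables:
def demo():
    x = 1
    snapshot = locals()    # a copy of the function's locals at this moment
    snapshot['x'] = 99     # only the dictionary changes
    print(x)               # still prints 1

demo()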

Globals as function input instead of arguments

I'm just learning about how Python works, and after reading for a while I'm still confused about globals and proper function arguments. Consider the case where globals are not modified inside functions, only referenced.
Can globals be used instead of function arguments?
I've heard that using globals is considered bad practice. Would it be so in this case?
Calling function without arguments:
def myfunc():
    print myvalue

myvalue = 1
myfunc()
Calling function with arguments
def myfunc(arg):
    print arg

myvalue = 1
myfunc(myvalue)
I've heard that using globals is considered bad practice. Would it be so in this case?
It depends on what you're trying to achieve. If myfunc() is supposed to print any value, then...
def myfunc(arg):
    print arg

myfunc(1)
...is better, but if myfunc() should always print the same value, then...
myvalue = 1

def myfunc():
    print myvalue

myfunc()
...is better, although with an example so simple, you may as well factor out the global, and just use...
def myfunc():
    print 1

myfunc()
Yes. Making a variable global works in these cases instead of passing it in as a function argument. But the problem is that as soon as you start writing bigger functions, you quickly run out of names, and it is also hard to maintain variables that are defined globally. If you don't need to edit your variable and only want to read it, there is no need to declare it as global in the function.
Read about the cons of global variables here: Are global variables bad?
There are several reasons why using function arguments is better than using globals:
It eliminates possible confusion: once your program gets large, it will become really hard to keep track of which global is used where. Passing function arguments lets you be much more clear about which values the function uses.
There's a particular mistake you WILL make eventually if you use globals, which will look very strange until you understand what's going on. It has to do with both modifying and reading a global variable in the same function. More on this later.
Global variables all live in the same namespace, so you will quickly run into the problem of overlapping names. What if you want two different variables named "index"? Calling them index1 and index2 is going to get real confusing, real fast. Using local variables, or function parameters, means that they all live in different namespaces, and the potential for confusion is greatly reduced.
Now, I mentioned modifying and reading a global variable in the same function, and a confusing error that can result. Here's what it looks like:
record_count = 0  # Global variable

def func():
    print "Record count:", record_count
    # Do something, maybe read a record from a database
    record_count = record_count + 1  # Would normally use += 1 here, but it's easier to see what's happening with the "n = n + 1" syntax
This will FAIL: UnboundLocalError: local variable 'record_count' referenced before assignment
Wait, what? Why is record_count being treated as a local variable, when it's clearly global? Well, if you never assigned to record_count in your function, then Python would use the global variable. But when you assign a value to record_count, Python has to guess what you mean: whether you want to modify the global variable, or whether you want to create a new local variable that shadows (hides) the global variable, and deal only with the local variable. And Python will default to assume that you're being smart with globals (i.e., not modifying them without knowing exactly what you're doing and why), and assume that you meant to create a new local variable named record_count.
But if you're accessing a local variable named record_count inside your function, Python won't let you access the global variable with the same name inside the function. This is to spare you some really nasty, hard-to-track-down bugs. Which means that if this function has a local variable named record_count -- and it does, because of the assignment statement -- then all access to record_count is considered to be accessing the local variable. Including the access in the print statement, before the local variable's value is defined. Thus, the UnboundLocalError exception.
Now, an exercise for the reader. Remove the print statement and notice that the UnboundLocalError exception is still thrown. Can you figure out why? (Hint: before assigning to a variable, the value on the right-hand side of the assignment has to be calculated.)
Now: if you really want to use the global record_count variable in your function, the way to do it is with Python's global statement, which says "Hey, this variable name I'm about to specify? Don't ever make it a local variable, even if I assign to it. Assign to the global variable instead." The way it works is just global record_count (or any other variable name), at the start of your function. Thus:
record_count = 0  # Global variable

def func():
    global record_count
    print "Record count:", record_count
    # Do something, maybe read a record from a database
    record_count = record_count + 1  # Again, you would normally use += 1 here
This will do what you expected in the first place. But hopefully now you understand why it will work, and the other version won't.
It depends on what you want to do.
If you need to change the value of a variable that is declared outside of the function, then you can't pass it as an argument, since that would create a "copy" of that variable inside the function's scope.
However if you only want to work with the value of a variable you should pass it as an argument. The advantage of this is that you can't mess up the global variable by accident.
Also, you should define global variables before they are used.
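A short sketch of the difference (names are illustrative): rebinding an argument inside a function leaves the caller's variable alone, while the global keyword actually changes it:
counter = 0

def bump_copy(n):
    n = n + 1              # rebinds only the local parameter
    return n

def bump_global():
    global counter
    counter = counter + 1  # changes the module-level variable

bump_copy(counter)
print(counter)   # still 0

bump_global()
print(counter)   # 1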
