I've run into a bit of a wall importing modules in a Python script. I'll do my best to describe the error, why I run into it, and why I'm trying this particular approach to solve my problem (which I will describe in a second):
Let's suppose I have a module in which I've defined some utility functions/classes, which refer to entities defined in the namespace into which this auxiliary module will be imported (let "a" be such an entity):
module1:
def f():
    print a
And then I have the main program, where "a" is defined, into which I want to import those utilities:
import module1
a=3
module1.f()
Executing the program will trigger the following error:
Traceback (most recent call last):
File "Z:\Python\main.py", line 10, in <module>
module1.f()
File "Z:\Python\module1.py", line 3, in f
print a
NameError: global name 'a' is not defined
Similar questions have been asked in the past (two days ago, d'uh) and several solutions have been suggested; however, I don't really think those fit my requirements. Here's my particular context:
I'm trying to make a Python program which connects to a MySQL database server and displays/modifies data with a GUI. For cleanliness' sake, I've defined a bunch of auxiliary/utility MySQL-related functions in a separate file. However, they all share a common variable, which I had originally defined inside the utilities module: the cursor object from the MySQLdb module.
I later realised that the cursor object (which is used to communicate with the db server) should be defined in the main module, so that both the main module and anything that is imported into it can access that object.
End result would be something like this:
utilities_module.py:
def utility_1(args):
    # code which references a variable named "cur"
    ...

def utility_n(args):
    # etcetera
    ...
And my main module:
program.py:
import MySQLdb, Tkinter
db = MySQLdb.connect(...)  # blahblah
cur = db.cursor()  # cur is defined!
from utilities_module import *
And then, as soon as I try to call any of the utilities functions, it triggers the aforementioned "global name not defined" error.
A particular suggestion was to have a "from program import cur" statement in the utilities file, such as this:
utilities_module.py:
from program import cur
#rest of function definitions
program.py:
import Tkinter, MySQLdb
db = MySQLdb.connect(...)  # blahblah
cur = db.cursor()  # cur is defined!
from utilities_module import *
But that's cyclic import or something like that and, bottom line, it crashes too. So my question is:
How in hell can I make the "cur" object, defined in the main module, visible to those auxiliary functions which are imported into it?
Thanks for your time and my deepest apologies if the solution has been posted elsewhere. I just can't find the answer myself and I've got no more tricks in my book.
Globals in Python are global to a module, not across all modules. (Many people are confused by this, because in, say, C, a global is the same across all implementation files unless you explicitly make it static.)
There are different ways to solve this, depending on your actual use case.
Before even going down this path, ask yourself whether this really needs to be global. Maybe you really want a class, with f as an instance method, rather than just a free function? Then you could do something like this:
import module1
thingy1 = module1.Thingy(a=3)
thingy1.f()
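For completeness, module1.py could then look something like this (a minimal sketch of the Thingy class the snippet above assumes):
class Thingy(object):
    def __init__(self, a):
        self.a = a          # the value lives on the instance instead of in a module global
    def f(self):
        print self.a        # Python 2 print, to match the rest of the thread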
If you really do want a global, but it's just there to be used by module1, set it in that module.
import module1
module1.a=3
module1.f()
On the other hand, if a is shared by a whole lot of modules, put it somewhere else, and have everyone import it:
import shared_stuff
import module1
shared_stuff.a = 3
module1.f()
… and, in module1.py:
import shared_stuff
def f():
    print shared_stuff.a
Don't use a from import unless the variable is intended to be a constant. from shared_stuff import a would create a new a variable initialized to whatever shared_stuff.a referred to at the time of the import, and this new a variable would not be affected by assignments to shared_stuff.a.
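To illustrate the pitfall (a minimal sketch, assuming shared_stuff.py just predefines a default such as a = None):
import shared_stuff
from shared_stuff import a   # copies the current binding of shared_stuff.a into a new local name

shared_stuff.a = 3
print shared_stuff.a         # 3
print a                      # still the old default; the copied name is unaffected by the assignment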
Or, in the rare case that you really do need it to be truly global everywhere, like a builtin, add it to the builtin module. The exact details differ between Python 2.x and 3.x. In 3.x, it works like this:
import builtins
import module1
builtins.a = 3
module1.f()
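In 2.x the module is called __builtin__ (no trailing "s") instead, so the equivalent would be:
import __builtin__
import module1
__builtin__.a = 3
module1.f()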
As a workaround, you could consider setting environment variables in the outer layer, like this.
main.py:
import os
os.environ['MYVAL'] = str(myintvariable)
mymodule.py:
import os
myval = None
if 'MYVAL' in os.environ:
    myval = os.environ['MYVAL']
As an extra precaution, handle the case when MYVAL is not defined inside the module.
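Also keep in mind that environment variables are always strings, so if the original value was, say, an int, the receiving module has to convert it back, e.g. myval = int(os.environ['MYVAL']).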
This post is just an observation about Python behaviour I encountered. Maybe the advice you read above doesn't work for you if you made the same mistake I did below.
Namely, I have a module which contains global/shared variables (as suggested above):
#sharedstuff.py
globaltimes_randomnode=[]
globalist_randomnode=[]
Then I had the main module which imports the shared stuff with:
import sharedstuff as shared
and some other modules that actually populated these arrays. These are called by the main module. When exiting those other modules I could clearly see that the arrays were populated, but when reading them back in the main module they were empty. This was rather strange to me (well, I am new to Python). However, when I changed the way I import sharedstuff.py in the main module to:
from sharedstuff import *
it worked (the arrays were populated).
Just sayin'
A function uses the globals of the module it's defined in. Instead of setting a = 3, for example, you should be setting module1.a = 3. So, if you want cur available as a global in utilities_module, set utilities_module.cur.
A better solution: don't use globals. Pass the variables you need into the functions that need it, or create a class to bundle all the data together, and pass it when initializing the instance.
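For the cursor case, passing it in might look like this (a rough sketch; the function name utility_1 and the query are only for illustration):
# utilities_module.py
def utility_1(cur):
    cur.execute("SELECT 1")   # the cursor arrives as an explicit argument
    return cur.fetchall()

# program.py
import MySQLdb
from utilities_module import utility_1
db = MySQLdb.connect(...)     # connection params elided, as in the question
rows = utility_1(db.cursor())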
The easiest solution to this particular problem would have been to add another function within the module that would have stored the cursor in a variable global to the module. Then all the other functions could use it as well.
module1:
cursor = None

def setCursor(cur):
    global cursor
    cursor = cur

def method(some, args):
    global cursor
    do_stuff(cursor, some, args)
main program:
import module1
cursor = get_a_cursor()
module1.setCursor(cursor)
module1.method(some, args)
Since globals are module-specific, you can add the following function to all imported modules, and then use it to:
- add individual variables (in dictionary format) as globals for those modules
- transfer your main module's globals to them

def addglobals(x):
    globals().update(x)
Then all you need to pass on current globals is:
import module
module.addglobals(globals())
Since I haven't seen it in the answers above, I thought I would add my simple workaround, which is just to add a global_dict argument to the function that needs the calling module's globals, and then pass that dict in when calling it, e.g.:
# external_module
def imported_function(global_dict=None):
    print(global_dict["a"])
# calling_module
a = 12
from external_module import imported_function
imported_function(global_dict=globals())
>>> 12
The OOP way of doing this would be to make your module a class instead of a set of unbound methods. Then you could use __init__ or a setter method to set the variables from the caller for use in the module methods.
Update
To test the theory, I created a module and put it on pypi. It all worked perfectly.
pip install superglobals
Short answer
This works fine in Python 2 or 3:
import inspect
def superglobals():
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals
save as superglobals.py and employ in another module thusly:
from superglobals import *
superglobals()['var'] = value
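Because the helper reads inspect.stack()[len(inspect.stack()) - 1], i.e. the outermost frame on the call stack, superglobals() returns the globals of the top-level script (normally __main__) no matter which module calls it, which is what makes the assignment visible "everywhere".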
Extended Answer
You can add some extra functions to make things more attractive.
def superglobals():
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals

def getglobal(key, default=None):
    """
    getglobal(key[, default]) -> value

    Return the value for key if key is in the global dictionary, else default.
    """
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals.get(key, default)

def setglobal(key, value):
    _globals = superglobals()
    _globals[key] = value

def defaultglobal(key, value):
    """
    defaultglobal(key, value)

    Set the value of global variable `key` if it is not otherwise set.
    """
    _globals = superglobals()
    if key not in _globals:
        _globals[key] = value
Then use thusly:
from superglobals import *
setglobal('test', 123)
defaultglobal('test', 456)
assert(getglobal('test') == 123)
Justification
The "python purity league" answers that litter this question are perfectly correct, but in some environments (such as IDAPython) which is basically single threaded with a large globally instantiated API, it just doesn't matter as much.
It's still bad form and a bad practice to encourage, but sometimes it's just easier. Especially when the code you are writing isn't going to have a very long life.
This question already has answers here:
Is there an easy way to pickle a python function (or otherwise serialize its code)?
I am trying to run the following code:
import pickle

def foo():
    print("i am foo")

pickle_foo = pickle.dumps(foo)

def foo():
    print("i am the new foo")
    fkt = pickle.loads(pickle_foo)
    return fkt()

foo()
The expected behavior would be:
the newly defined function "foo" is called
in the new function, the old function gets unpickled and then called
output:
i am the new foo
i am foo
What actually happens is:
the new function foo gets called, and then recursively calls itself until a RecursionError is thrown:
RecursionError: maximum recursion depth exceeded while calling a Python object
The error does not occur when the two functions are named differently, but that would be very impractical for my project.
Could anyone explain why this behavior occurs and how to avoid it (without changing the function names)?
The pickle module pickles functions based on their fully-qualified name reference. This means that if your function is redefined somewhere in code, and then you unpickle a pickled reference to it, calling it will result in a call to the new definition.
From the Python docs on pickle:
Note that functions (built-in and user-defined) are pickled by “fully qualified” name reference, not by value. This means that only the function name is pickled, along with the name of the module the function is defined in. Neither the function’s code, nor any of its function attributes are pickled. Thus the defining module must be importable in the unpickling environment, and the module must contain the named object, otherwise an exception will be raised.
What you can do, however, is use inspect.getsource() to retrieve the source code for your function, and pickle that. This requires that your code be available as source somewhere on the file system, so imported compiled C code or other sources without a source file (interpreter input, dynamically loaded modules) will not work.
When you unpickle it, you can use exec to convert it into a function and execute it.
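A minimal sketch of that idea applied to the example above (it assumes the script lives in a real file, so inspect.getsource() can read its source):
import inspect
import pickle

def foo():
    print("i am foo")

# pickle the source text of foo rather than a reference to its name
pickled_src = pickle.dumps(inspect.getsource(foo))

def foo():
    print("i am the new foo")
    namespace = {}
    exec(pickle.loads(pickled_src), namespace)   # re-create the original foo in a scratch namespace
    return namespace["foo"]()

foo()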
Note: this will redefine foo every time, so calls to foo cannot be guaranteed to have the same effect.
Note 2: exec is unsafe and usually unsuitable for code that will be interacting with external sources. Make sure you protect calls to exec from potential external attacks that attempt to execute arbitrary code.
This question already has answers here:
Callable modules
I'm writing a module called foo that has many internal functions, but ultimately boils down to a single external function foo(). When using this module/function, I'd like to do
import foo
foo(whatever)
instead of
from foo import foo
foo(whatever)
Is this possible?
You could monkey patch the sys.modules dictionary to make the name of your module point to the function instead of your module.
foo.py (the file defining your module foo) would look like this
import sys
def foo(x):
    return x + x

sys.modules[__name__] = foo
then you can use this module from a different file like this
import foo
print(foo(3))
6
There are probably reasons why you shouldn't do this. sys.modules isn't supposed to point to functions: when you do from some_module import some_function, it is the module some_module that gets added to sys.modules, not the function some_function.
It is not strictly possible. Python module names are meant to help the programmer distinguish between modules. So even if you had only one function, bar, in your module foo, using import foo would still require foo.bar(). You're probably better off just using from foo import *.
However, there may be a way around this. Python also has built-in functions, and you may be able to add your own functions to that set. Doing so might require modifying the interpreter itself, though.
So conclusion: writing from foo import * isn't all that ugly and is a lot easier and prettier than the long way around.
This question already has answers here:
Injecting variables into an import namespace
Basically, I'd like to force a variable, let's call it jim, into a plugin I load, as a global, before the plugin loads. For instance:
load_plugin('blah', variables={'jim':1}) #For instance
And then inside blah.py:
print jim #prints 1
Is there any easy way to do this? Not a big deal if its not in the standard library.
No - there is no way to do that before the plug-in is imported in the first place - so, if your variable is used in the module body itself, you are out of luck.
If the variable is used as a global variable inside the module's functions or methods (but not class bodies), you can change it after the module is imported simply doing:
import module
module.jim = 5
as you probably know. (And I am aware this is not what you are asking for).
So, the only way to achieve that would be to parse the module's source code, change the variable assignment there, save the source code, and import it. OK, there are ways to emulate importing with the source code in memory, but this approach is so impractical that we should not detail it.
If you have control over the source of the module you want to monkey-patch this way, my suggestion would be to use a configuration file from which the module would pick the variable names.
Then you generate the configuration file, perform the importing (taking care that it is not already imported into sys.modules) and you are done.
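A rough sketch of that configuration-file idea (the file name plugin_config.py is just a placeholder I made up):
# loader code: write the configuration before importing the plugin
with open("plugin_config.py", "w") as f:
    f.write("jim = 1\n")

import blah   # blah.py would start with: from plugin_config import jim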
You could use the __import__ function. It lets you override the globals.
for instance:
__import__('blah', dict(jim=1, **globals()))
This question already has answers here:
Possible Duplicate:
from . import x using __import__?
How does one do the equivalent of import * from module with Python's __import__ function?
How would I do the from ... import * with the __import__ function?
The reason being that I only know the name of the file at runtime and it only has one class inside that file.
In case someone reading this wants an actual answer to the question in the title, you can do it by manipulating the vars() dictionary. Yes, it is dumb to do this in most scenarios, but I can think of use cases where it would actually be really useful/cool (e.g. maybe you want a static module name, but want the contents of the module to come from somewhere else that's defined at runtime. Similar to and, IMO, better than the behavior of django.conf.settings if you're familiar with it)
module_name = "foo"
for key, val in vars(__import__(module_name)).iteritems():  # use .items() on Python 3
    if key.startswith('__') and key.endswith('__'):
        continue
    vars()[key] = val
This imports every name in the module foo.py into the current namespace, except dunder names such as __name__ and __doc__.
Use sparingly :)
Don't. Just don't. Do I need to explain just how horrible that is? Dynamically importing (though sometimes inevitable) and importing into the global namespace (always avoidable, but sometimes the easier solution, and fine with trusted modules) are bad enough by themselves. Dynamically importing into the global namespace is... a nightmare (and avoidable).
Just do _temp = __import__(name, globals(), locals(), [name_of_class]); your_class = _temp.name_of_class. According to the docs, this is about what from name import name_of_class would do.
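Spelled out as runnable code (a hypothetical sketch; the module and class names are placeholders for whatever you only discover at runtime):
module_name = "foo"      # e.g. the file foo.py
class_name = "Foo"       # the single class defined inside it

_temp = __import__(module_name, globals(), locals(), [class_name])
your_class = getattr(_temp, class_name)   # look the attribute up by string, since the name is dynamic
instance = your_class()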