Consider the following code:
#main.py
from toolsmodule import *
database = "foo"
#toolsmodule
database = "mydatabase"
As written, this creates a separate variable in each module, each with different content. How can I modify the variable inside toolsmodule from main? The following does not work:
toolsmodule.database = "foo"
Sounds like yet another of the multitude of good reasons not to use from toolsmodule import *.
If you just do import toolsmodule, then you can do toolsmodule.database = 'foo', and everything is wonderful.
Python's variable names are just labels bound to objects. When you import *, all those labels become local names in the importing module, and when you then assign to database, you rebind the local name, not the one in toolsmodule. Hence, do this:
toolsmodule.py:
database = "original"
def printdatabase():
    print "Database is", database
And then run:
import toolsmodule
toolsmodule.database = "newdatabase"
toolsmodule.printdatabase()
The output is
Database is newdatabase
Note that if ANOTHER module had ALSO done an import * before the change, that module's copy of the name would not reflect it.
In short: NEVER use from x import *. I don't know why all newbies persist in doing this even though all the documentation I know of says it's a bad idea.
Why don't you do it like this:
import toolsmodule
toolsmodule.database = "foo"
from toolsmodule import * #bad idea, but let's say you have to..
print database #outputs foo
I've run into a bit of a wall importing modules in a Python script. I'll do my best to describe the error, why I run into it, and why I'm trying this particular approach to solve my problem (which I will describe in a second):
Let's suppose I have a module in which I've defined some utility functions/classes, which refer to entities defined in the namespace into which this auxiliary module will be imported (let "a" be such an entity):
module1:
def f():
    print a
And then I have the main program, where "a" is defined, into which I want to import those utilities:
import module1
a=3
module1.f()
Executing the program will trigger the following error:
Traceback (most recent call last):
  File "Z:\Python\main.py", line 10, in <module>
    module1.f()
  File "Z:\Python\module1.py", line 3, in f
    print a
NameError: global name 'a' is not defined
Similar questions have been asked in the past (two days ago, d'uh) and several solutions have been suggested; however, I don't really think those fit my requirements. Here's my particular context:
I'm trying to make a Python program which connects to a MySQL database server and displays/modifies data with a GUI. For cleanliness' sake, I've defined the bunch of auxiliary/utility MySQL-related functions in a separate file. However, they all share a common variable, which I had originally defined inside the utilities module: the cursor object from the MySQLdb module.
I later realised that the cursor object (which is used to communicate with the db server) should be defined in the main module, so that both the main module and anything that is imported into it can access that object.
End result would be something like this:
utilities_module.py:
def utility_1(args):
    # code which references a variable named "cur"

def utility_n(args):
    # etcetera
And my main module:
program.py:
import MySQLdb, Tkinter
db=MySQLdb.connect(#blahblah) ; cur=db.cursor() #cur is defined!
from utilities_module import *
And then, as soon as I try to call any of the utilities functions, it triggers the aforementioned "global name not defined" error.
A particular suggestion was to have a "from program import cur" statement in the utilities file, such as this:
utilities_module.py:
from program import cur
#rest of function definitions
program.py:
import Tkinter, MySQLdb
db=MySQLdb.connect(#blahblah) ; cur=db.cursor() #cur is defined!
from utilities_module import *
But that's a cyclic import, or something like that, and, bottom line, it crashes too. So my question is:
How in hell can I make the "cur" object, defined in the main module, visible to those auxiliary functions which are imported into it?
Thanks for your time and my deepest apologies if the solution has been posted elsewhere. I just can't find the answer myself and I've got no more tricks in my book.
Globals in Python are global to a module, not across all modules. (Many people are confused by this, because in, say, C, a global is the same across all implementation files unless you explicitly make it static.)
There are different ways to solve this, depending on your actual use case.
Before even going down this path, ask yourself whether this really needs to be global. Maybe you really want a class, with f as an instance method, rather than just a free function? Then you could do something like this:
import module1
thingy1 = module1.Thingy(a=3)
thingy1.f()
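Here, Thingy is a hypothetical class; a minimal sketch of what module1.py might look like for this to work (Python 2 style, matching the print statements in this answer):

# module1.py - hypothetical sketch, not from the original question
class Thingy(object):
    def __init__(self, a):
        self.a = a      # the would-be global becomes per-instance state

    def f(self):
        print self.a    # each instance prints its own value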
If you really do want a global, but it's just there to be used by module1, set it in that module.
import module1
module1.a=3
module1.f()
On the other hand, if a is shared by a whole lot of modules, put it somewhere else, and have everyone import it:
import shared_stuff
import module1
shared_stuff.a = 3
module1.f()
… and, in module1.py:
import shared_stuff
def f():
    print shared_stuff.a
Don't use a from import unless the variable is intended to be a constant. from shared_stuff import a would create a new a variable initialized to whatever shared_stuff.a referred to at the time of the import, and this new a variable would not be affected by assignments to shared_stuff.a.
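A quick interpreter session makes this concrete (assuming, for illustration, that shared_stuff.py contains a = 1):

>>> from shared_stuff import a
>>> import shared_stuff
>>> shared_stuff.a = 3
>>> a   # the from-imported name still holds the value bound at import time
1
>>> shared_stuff.a
3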
Or, in the rare case that you really do need it to be truly global everywhere, like a builtin, add it to the builtin module. The exact details differ between Python 2.x and 3.x. In 3.x, it works like this:
import builtins
import module1
builtins.a = 3
module1.f()
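For reference, in 2.x the module is named __builtin__ (no trailing "s"), so the equivalent would be:

import __builtin__
import module1
__builtin__.a = 3
module1.f()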
As a workaround, you could consider setting environment variables in the outer layer, like this.
main.py:
import os
os.environ['MYVAL'] = str(myintvariable)
mymodule.py:
import os
myval = None
if 'MYVAL' in os.environ:
    myval = os.environ['MYVAL']
As an extra precaution, handle the case when MYVAL is not defined inside the module.
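Also keep in mind that os.environ values are always strings, so the module has to convert the value back to the type it expects; for example, if the value is known to be an integer:

import os

myval = None
if 'MYVAL' in os.environ:
    # environment values are strings; convert back as needed
    myval = int(os.environ['MYVAL'])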
This post is just an observation about Python behaviour I encountered. Maybe the advice you read above doesn't work for you if you did the same thing I did below.
Namely, I have a module which contains global/shared variables (as suggested above):
#sharedstuff.py
globaltimes_randomnode=[]
globalist_randomnode=[]
Then I had the main module which imports the shared stuff with:
import sharedstuff as shared
and some other modules, called by the main module, that actually populate these arrays. When exiting those other modules I can clearly see that the arrays are populated, but when reading them back in the main module they are empty. This was rather strange to me (well, I am new to Python). However, when I changed the way I import sharedstuff.py in the main module to:
from sharedstuff import *
it worked (the arrays were populated).
Just sayin'
A function uses the globals of the module it's defined in. Instead of setting a = 3, for example, you should be setting module1.a = 3. So, if you want cur available as a global in utilities_module, set utilities_module.cur.
A better solution: don't use globals. Pass the variables you need into the functions that need it, or create a class to bundle all the data together, and pass it when initializing the instance.
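For instance, the no-globals version of the utilities module could simply take the cursor as a parameter (a sketch; the utility names are illustrative):

# utilities_module.py
def utility_1(cur, args):
    cur.execute(args)   # 'cur' is supplied by the caller, no global needed

# program.py
# import utilities_module
# utilities_module.utility_1(cur, some_args)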
The easiest solution to this particular problem would have been to add another function within the module that would have stored the cursor in a variable global to the module. Then all the other functions could use it as well.
module1:
cursor = None

def setCursor(cur):
    global cursor
    cursor = cur

def method(some, args):
    global cursor
    do_stuff(cursor, some, args)
main program:
import module1
cursor = get_a_cursor()
module1.setCursor(cursor)
module1.method()
Since globals are module-specific, you can add the following function to all imported modules, and then use it to:
- add singular variables (in dictionary format) as globals for those modules, or
- transfer your main module's globals to them.

addglobals = lambda x: globals().update(x)
Then all you need to pass on current globals is:
import module
module.addglobals(globals())
Since I haven't seen it in the answers above, I thought I would add my simple workaround, which is just to add a global_dict argument to the function requiring the calling module's globals, and then pass the dict into the function when calling; e.g.:
# external_module
def imported_function(global_dict=None):
    print(global_dict["a"])
# calling_module
a = 12
from external_module import imported_function
imported_function(global_dict=globals())
# output: 12
The OOP way of doing this would be to make your module a class instead of a set of unbound methods. Then you could use __init__ or a setter method to set the variables from the caller for use in the module methods.
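A minimal sketch of that idea (the class and method names here are made up for illustration):

# utilities_module.py
class Utilities(object):
    def __init__(self, cur=None):
        self.cur = cur              # set the state up front...

    def set_cursor(self, cur):
        self.cur = cur              # ...or later, via a setter

    def utility_1(self, args):
        self.cur.execute(args)      # every method shares the instance state

# program.py
# util = Utilities()
# util.set_cursor(db.cursor())
# util.utility_1(some_args)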
Update
To test the theory, I created a module and put it on pypi. It all worked perfectly.
pip install superglobals
Short answer
This works fine in Python 2 or 3:
import inspect

def superglobals():
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals
Save as superglobals.py and employ in another module thusly:
from superglobals import *
superglobals()['var'] = value
Extended Answer
You can add some extra functions to make things more attractive.
def superglobals():
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals

def getglobal(key, default=None):
    """
    getglobal(key[, default]) -> value

    Return the value for key if key is in the global dictionary, else default.
    """
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals.get(key, default)

def setglobal(key, value):
    _globals = superglobals()
    _globals[key] = value

def defaultglobal(key, value):
    """
    defaultglobal(key, value)

    Set the value of global variable `key` if it is not otherwise set.
    """
    _globals = superglobals()
    if key not in _globals:
        _globals[key] = value
Then use thusly:
from superglobals import *
setglobal('test', 123)
defaultglobal('test', 456)
assert(getglobal('test') == 123)
Justification
The "python purity league" answers that litter this question are perfectly correct, but in some environments (such as IDAPython) which is basically single threaded with a large globally instantiated API, it just doesn't matter as much.
It's still bad form and a bad practice to encourage, but sometimes it's just easier. Especially when the code you are writing isn't going to have a very long life.
How Do I Change the Value of a Variable or Function in a Foreign Module?
I have a C++ header file that I need to parse. I'm using CppHeaderParser. Sadly, the header generates a lot of warnings that I'd like to suppress. The header in question is maintained by someone else, so I can't just fix it and be done.
CppHeaderParser doesn't include a configurable way to suppress warnings, but it is controlled by a variable and a function in the module:
# Controls warning_print
print_warnings = 1
...
def warning_print(arg):
    if print_warnings: print(("[%4d] %s" % (inspect.currentframe().f_back.f_lineno, arg)))
In my script, I tried changing the value of print_warnings:
import CppHeaderParser
CppHeaderParser.print_warnings = 0
cpp_info = CppHeaderParser.CppHeader(my_h_file)
But this had no effect.
How do I set a variable in a different module such that a class defined in that module will see it?
In my case, I might also like to redefine warning_print to examine the warnings and skip only the specific ones I wish to ignore. I encountered the same problem as with setting print_warnings: the assignment "worked" but had no effect, as if the code in CppHeaderParser wasn't looking at the values I set.
Note: I have worked around the problem by making a temp copy of the header file, correcting the problems, but I consider this a fragile solution.
Update: I was able to completely and unintelligently suppress all the warnings with:
CppHeaderParser.CppHeaderParser.print_warnings = 0
I've looked at the source. The problem with your method is that the CppHeaderParser package's __init__.py does an import with *:
from .CppHeaderParser import *
So you need to change the way you import the inner CppHeaderParser module:
from CppHeaderParser import CppHeaderParser
It should work.
Finally, just try this:
from CppHeaderParser import CppHeaderParser
CppHeaderParser.print_warnings = 0
cpp_info = CppHeaderParser.CppHeader(my_h_file)
The reason for this behaviour is that the from statement creates a new binding to the value the name had at import time, not an alias for the variable itself. I will try to explain it with a simple example. Let's suppose we have a module named import_test with the following contents:
foo = "Init value"
def f():
    print(foo)
Then execute the following code:
>>> from import_test import *
>>> f()
Init value
>>> foo = "Updated value"
>>> f()
Init value
The reason is that you rebind your local copy of the name foo, so the actual value of the import_test.foo variable is not changed.
But we have a different behaviour when we import the module itself:
>>> import import_test
>>> import_test.f()
Init value
>>> import_test.foo = "Updated value"
>>> import_test.f()
Updated value
So in the case of the CppHeaderParser package, when you do import CppHeaderParser, the code inside CppHeaderParser/__init__.py is executed, and the Python interpreter creates a copy of the print_warnings variable in the CppHeaderParser package namespace. But to change the behaviour of the warning_print function, you have to change the value of CppHeaderParser.CppHeaderParser.print_warnings.
Try monkey patching:
import CppHeaderParser

def my_silent_warning_print(arg):
    pass

# patch the inner module, where warning_print is actually looked up
CppHeaderParser.CppHeaderParser.warning_print = my_silent_warning_print
Is it possible to import a module with some parameter in Python?
All I mean by parameter is a variable that is used in the module but not initialized there. In short, I want behavior similar to a function's, but, unlike with a function, I want the module's variables to be exposed in the calling code.
eg a.py:
#lists like data, count, prob_distribution are constructed from training_pool (not initialized in this file)
x = pymc.Uniform('x', lower = 0, upper = 1)
rv = [ Multinomial("rv"+str(i), count[i], prob_distribution[i], value = data[i], observed=True) for i in xrange(0, len(count)) ]
b.py:
import a # I want some way to pass the value of training_pool
m = pymc.MCMC(a)
I want all random variables in a.py to be exposed to MCMC. I am open to a better approach for my problem at hand, but I would also like to know whether passing arguments to modules is possible in python or not.
There are various approaches to doing so; here is just a silly and simple one:
main.py
"""A silly example - main supplies a parameter
"""
import sys,os
print os.path.basename(__file__)+":Push it by: --myModuleParam "+str(123)
sys.argv.append('--myModuleParam')
sys.argv.append(123)
import module
print os.path.basename(__file__)+":Pushed my param:"+str(module.displayMyParam)
module.py
"""A silly example - module consumes parameter
"""
import sys,os
displayMyParam = 'NotYetInitialized'
for px in sys.argv:
    if px == '--myModuleParam':
        idx = sys.argv.index(px)
        sys.argv.pop(idx) # remove option
        displayMyParam = sys.argv[idx]
        sys.argv.pop(idx) # remove value
print os.path.basename(__file__)+":Got my param:"+str(displayMyParam)
#
# That's it...
#
As @otus already answered, there is no way to pass parameters to modules.
I think you are following some of the introductory examples for PyMC2, which use a pattern where a module wraps all the code for the nodes in a Bayesian model. This approach is good for getting started but, as you have found, can be limiting when you want to run your model with a range of variations.
Fortunately, PyMC2 can create an MCMC object from a list or a dictionary as well as a module. What I recommend in this case is just what @oleg-s suggested in the comments: use a function. You can end the function with return locals() to get a dictionary of everything that would have been in the module, and this is suitable input to the pymc.MCMC constructor. Here is an example:
# a.py
from pymc import *
count = [10, 10] # perhaps good to put this stuff in data.py
prob_distribution = [[.5, .5], [.1, .2, .7]]
data = [[2, 8], [2, 3, 5]]
def model(training_pool):
    x = Uniform('x', lower = 0, upper = 1)
    rv = [ Multinomial("rv"+str(i), count[i], prob_distribution[i], value = data[i], observed=True) for i in training_pool ]
    return locals()
# b.py
import pymc, a
training_pool = [0]
m = pymc.MCMC(a.model(training_pool))
I found it helpful to define global variables, and allow these to be set by an init function.
def init(config_filename=CONFIG_FILENAME):
    config = configparser.ConfigParser(interpolation=configparser.ExtendedInterpolation())
    config.read(config_filename)
    global YEARS
    YEARS = config['DEFAULT']['YEARS']
    global FEATURES
    FEATURES = config['DEFAULT']['FEATURES']
Then all the user has to do is remember to initialize the module before using these methods:
import module
module.init('config.ini')
Note, I would NOT use this on a module that I expect to spread publicly. This is more for single-file modules for my own personal use.
There is no way to pass parameters to modules. However, you could use a global in a third module for this:
# a.py
parameter = None
# b.py
import a
a.parameter = 4
import c
# c.py
import a
# use a.parameter
Of course, this only works if nothing else imports c, because modules only get imported once.
Module-wide globals should indeed be enough for most uses, but what if:
- the parameter needs to be evaluated during module initialization, or
- you need multiple versions of the module with different parameters?
In recent versions of Python, it is possible to load a module in two steps: first the spec, then exec. In between, you can set up extra variables.
import importlib.util
abstractModuleSpec=importlib.util.find_spec('myModule')
module4=importlib.util.module_from_spec(abstractModuleSpec)
module2=importlib.util.module_from_spec(abstractModuleSpec)
module2.parameter="you are version 2"
module4.parameter="you are version 4"
module4.__spec__.loader.exec_module(module4)
module2.__spec__.loader.exec_module(module2)
Inside the module, you can check dir() or use a try/except NameError guard to see whether the variable is already defined.
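For example, a guard inside myModule.py could look like this (a sketch; it relies on the importer having injected parameter between module_from_spec() and exec_module(), as above):

# myModule.py
try:
    parameter                  # was it injected by the importer?
except NameError:
    parameter = "default"      # loaded normally; fall back to a default

print("parameter = %r" % parameter)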
I really wonder why nobody has mentioned environment variables. That's the cleanest way I found:
a.py
import os
param = os.getenv('MY_PACKAGE_PARAM', None)
print(param)
b.py
import os

# must be set before the import below
os.environ['MY_PACKAGE_PARAM'] = 'Hello world!'
import a
There is no way to pass parameters to a module as such; however, you can revamp your code a bit and import the parameters from another module as globals.
I'd like to load a module dynamically, given its string name (from an environment variable). I'm using Python 2.7. I know I can do something like:
import os, importlib
my_module = importlib.import_module(os.environ.get('SETTINGS_MODULE'))
This is roughly equivalent to
import my_settings
(where SETTINGS_MODULE = 'my_settings'). The problem is, I need something equivalent to
from my_settings import *
since I'd like to be able to access all methods and variables in the module. I've tried
import os, importlib
my_module = importlib.import_module(os.environ.get('SETTINGS_MODULE'))
from my_module import *
but I get a bunch of errors doing that. Is there a way to import all methods and attributes of a module dynamically in Python 2.7?
If you have your module object, you can mimic the logic import * uses as follows:
module_dict = my_module.__dict__
try:
    to_import = my_module.__all__
except AttributeError:
    to_import = [name for name in module_dict if not name.startswith('_')]

globals().update({name: module_dict[name] for name in to_import})
However, this is almost certainly a really bad idea. You will unceremoniously stomp on any existing variables with the same names. This is bad enough when you do from blah import * normally, but when you do it dynamically there is even more uncertainty about what names might collide. You are better off just importing my_module and then accessing what you need from it using regular attribute access (e.g., my_module.someAttr), or getattr if you need to access its attributes dynamically.
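For instance, the getattr route for a dynamically chosen attribute name:

attr_name = 'someAttr'                        # hypothetical attribute name
value = getattr(my_module, attr_name)         # raises AttributeError if missing
value = getattr(my_module, attr_name, None)   # or fall back to a default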
Not answering precisely the question as worded, but if you wish to have a file as a proxy to a dynamic module, you can use the ability to define __getattr__ at the module level (available since Python 3.7, see PEP 562).
import importlib
import os
module_name = os.environ.get('CONFIG_MODULE', 'configs.config_local')
mod = importlib.import_module(module_name)
def __getattr__(name):
    return getattr(mod, name)
My case was a bit different - I wanted to dynamically import the constants.py names in each gameX.__init__.py module (see below), because statically importing those would leave them in sys.modules forever (see: this excerpt from Beazley I picked from this related question).
Here is my folder structure:
game/
    __init__.py
    game1/
        __init__.py
        constants.py
        ...
    game2/
        __init__.py
        constants.py
        ...
Each gameX.__init__.py exports an init() method - so initially I had a from .constants import * in all those gameX.__init__.py files, which I tried to move inside the init() method.
My first attempt in the lines of:
@@ -275,2 +274,6 @@ def init():
     # called instead of 'reload'
+    yak = {}
+    yak.update(locals())
+    from .constants import * # fails here
+    yak = {x: y for x, y in locals().items() if x not in yak}
+    globals().update(yak)
     brec.ModReader.recHeader = RecordHeader
Failed with the rather cryptic:
SyntaxError: import * is not allowed in function 'init' because it contains a nested function with free variables
I can assure you there are no nested functions in there. Anyway I hacked and slashed and ended up with:
def init():
    # ...
    from .. import dynamic_import_hack
    dynamic_import_hack(__name__)
Where in game.__init__.py:
def dynamic_import_hack(package_name):
    print __name__      # game.__init__
    print package_name  # game.gameX.__init__
    import importlib
    constants = importlib.import_module('.constants', package=package_name)
    import sys
    for k in dir(constants):
        if k.startswith('_'): continue
        setattr(sys.modules[package_name], k, getattr(constants, k))
(for setattr see How can I add attributes to a module at run time? while for getattr How can I import a python module function dynamically? - I prefer to use those rather than accessing the __dict__ directly)
This works and it's more general than the approach in the accepted answer cause it allows you to have the hack in one place and use it from whatever module. However I am not really sure it's the best way to implement it - was going to ask a question but as it would be a duplicate of this one I am posting it as an answer and hope to get some feedback. My questions would be:
why this "SyntaxError: import * is not allowed in function 'init'" while there are no nested functions ?
dir has a lot of warnings in its doc - in particular it attempts to produce the most relevant, rather than complete, information - this complete worries me a bit
is there no builtin way to do an import * ? even in python 3 ?
I have a Python module that I want to dynamically import given only a string of the module name. Normally I use importlib or __import__ and this works quite well given that I know which objects I want to import from the module, but is there a way to do the equivalent of import * dynamically. Or is there a better approach?
I know in general it's bad practice to use import *, but the modules I'm trying to import are automatically generated on the fly, and I have no way of knowing the exact module which contains the class I'm addressing.
Thanks.
Use update for dicts:
globals().update(importlib.import_module('some.package').__dict__)
Note that using a_module.__dict__ is not the same as from a_module import *, because all names get "imported", not only those listed in __all__ or those not starting with _.
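If you want the update to behave more like a genuine import *, filter the names first; a sketch that honors __all__ when present and otherwise skips underscore-prefixed names:

import importlib

mod = importlib.import_module('some.package')
names = getattr(mod, '__all__',
                [n for n in mod.__dict__ if not n.startswith('_')])
globals().update({n: getattr(mod, n) for n in names})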
I came up with some ugly, hacky code; it works in Python 2.6. I'm not sure if this is the smartest thing to do, though; perhaps some other people here have some insight:
test = __import__('os', globals(), locals())
for k in dir(test):
    globals()[k] = test.__dict__[k]
You probably want to put a check here to make sure you aren't overwriting anything in the global namespace. You could probably avoid the globals part and just look through each dynamically imported module for your class of interest. This would probably be much better than polluting the global namespace with everything you are importing.
For example, say your class is named Request from urllib2
test = __import__('urllib2', globals(), locals())
cls = None
if 'Request' in dir(test):
    cls = test.__dict__['Request']
    # you found the class, now you can use it!
    cls('http://test.com')
The following is highly sinful and will condemn you to purgatory or worse
# module_a.py
myvar = "hello"
# module_b.py
import inspect
def dyn_import_all(modpath):
    """Incredibly hackish way to load into caller's global namespace"""
    exec('from ' + modpath + ' import *', inspect.stack()[1][0].f_globals)
# module_c.py
from module_b import dyn_import_all
def print_from(modpath):
    dyn_import_all(modpath)
    print(myvar)
Demo:
>>> import module_c
>>> module_c.print_from("module_a")
hello