My Python script needs to read files from a directory passed on the command line. I have defined a readable_dir type, shown below, to be used with argparse for validating that the directory passed on the command line exists and is readable.
A default value (/tmp/non_existent_dir in the example below) has also been specified for the directory argument.
The problem here is that argparse invokes readable_dir() on the default value even when a directory argument is explicitly passed on the command line. This causes the script to fail, because the default path /tmp/non_existent_dir does not exist, even in cases where a directory is explicitly supplied.
I could get around this by not specifying a default value and making this argument mandatory, or by deferring the validation until later in the script, but is there a more elegant solution that anyone is aware of?
#!/usr/bin/python
import argparse
import os

def readable_dir(prospective_dir):
    if not os.path.isdir(prospective_dir):
        raise Exception("readable_dir:{0} is not a valid path".format(prospective_dir))
    if os.access(prospective_dir, os.R_OK):
        return prospective_dir
    else:
        raise Exception("readable_dir:{0} is not a readable dir".format(prospective_dir))

parser = argparse.ArgumentParser(description='test', fromfile_prefix_chars="#")
parser.add_argument('-l', '--launch_directory', type=readable_dir, default='/tmp/non_existent_dir')
args = parser.parse_args()
I submitted a patch for "path arguments" to the Python standard library mailing list a few months ago.
With this PathType class, you can simply specify the following argument type to match only an existing directory--anything else will give an error message:
type = PathType(exists=True, type='dir')
Here's the code, which could be easily modified to require specific file/directory permissions as well:
from argparse import ArgumentTypeError as err
import os

class PathType(object):
    def __init__(self, exists=True, type='file', dash_ok=True):
        '''exists:
                True: a path that does exist
                False: a path that does not exist, in a valid parent directory
                None: don't care
           type: file, dir, symlink, None, or a function returning True for valid paths
                None: don't care
           dash_ok: whether to allow "-" as stdin/stdout'''

        assert exists in (True, False, None)
        assert type in ('file', 'dir', 'symlink', None) or hasattr(type, '__call__')

        self._exists = exists
        self._type = type
        self._dash_ok = dash_ok

    def __call__(self, string):
        if string == '-':
            # the special argument "-" means sys.std{in,out}
            if self._type == 'dir':
                raise err('standard input/output (-) not allowed as directory path')
            elif self._type == 'symlink':
                raise err('standard input/output (-) not allowed as symlink path')
            elif not self._dash_ok:
                raise err('standard input/output (-) not allowed')
        else:
            e = os.path.exists(string)
            if self._exists == True:
                if not e:
                    raise err("path does not exist: '%s'" % string)

                if self._type is None:
                    pass
                elif self._type == 'file':
                    if not os.path.isfile(string):
                        raise err("path is not a file: '%s'" % string)
                elif self._type == 'symlink':
                    if not os.path.islink(string):
                        raise err("path is not a symlink: '%s'" % string)
                elif self._type == 'dir':
                    if not os.path.isdir(string):
                        raise err("path is not a directory: '%s'" % string)
                elif not self._type(string):
                    raise err("path not valid: '%s'" % string)
            else:
                if self._exists == False and e:
                    raise err("path exists: '%s'" % string)

                p = os.path.dirname(os.path.normpath(string)) or '.'
                if not os.path.isdir(p):
                    raise err("parent path is not a directory: '%s'" % p)
                elif not os.path.exists(p):
                    raise err("parent directory does not exist: '%s'" % p)

        return string
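For the original question's option, a minimal usage sketch could look like the following (the default of "." is just a stand-in for a directory that actually exists):

import argparse

# Sketch: wire PathType into a parser like the one in the question.
parser = argparse.ArgumentParser(description='test')
parser.add_argument('-l', '--launch_directory',
                    type=PathType(exists=True, type='dir'),
                    default='.')
args = parser.parse_args()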
You can create a custom action instead of a type:
import argparse
import os
import tempfile
import shutil
import atexit

class readable_dir(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        prospective_dir = values
        if not os.path.isdir(prospective_dir):
            raise argparse.ArgumentTypeError("readable_dir:{0} is not a valid path".format(prospective_dir))
        if os.access(prospective_dir, os.R_OK):
            setattr(namespace, self.dest, prospective_dir)
        else:
            raise argparse.ArgumentTypeError("readable_dir:{0} is not a readable dir".format(prospective_dir))

ldir = tempfile.mkdtemp()
atexit.register(lambda dir=ldir: shutil.rmtree(dir))

parser = argparse.ArgumentParser(description='test', fromfile_prefix_chars="#")
parser.add_argument('-l', '--launch_directory', action=readable_dir, default=ldir)
args = parser.parse_args()
print(args)
But this seems a little fishy to me -- if no directory is given, it passes a non-readable directory which seems to defeat the purpose of checking if the directory is accessible in the first place.
Note that as pointed out in the comments, it might be nicer to raise argparse.ArgumentError(self, ...) rather than argparse.ArgumentTypeError.
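For example, a sketch of the same check using ArgumentError (so argparse attributes the message to the offending option) might look like this, reusing the imports from the code above:

class readable_dir(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        # Sketch only: same validation, but ArgumentError lets argparse name the option.
        prospective_dir = values
        if not os.path.isdir(prospective_dir):
            raise argparse.ArgumentError(self, "{0} is not a valid path".format(prospective_dir))
        if not os.access(prospective_dir, os.R_OK):
            raise argparse.ArgumentError(self, "{0} is not a readable dir".format(prospective_dir))
        setattr(namespace, self.dest, prospective_dir)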
EDIT
As far as I'm aware, there is no way to validate the default argument. I suppose the argparse developers just assumed that if you're providing a default, then it should be valid. The quickest and easiest thing to do here is to simply validate the arguments immediately after you parse them. It looks like you're just trying to get a temporary directory to do some work. If that's the case, you can use the tempfile module to get a new directory to work in. I updated my answer above to reflect this. I create a temporary directory, use that as the default argument (tempfile already guarantees the directory it creates will be writeable) and then I register it to be deleted when your program exits.
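For instance, a rough sketch of post-parse validation, reusing the ldir temporary directory from the code above:

# Sketch: skip validation at parse time and check the final value yourself,
# right after parse_args(), so the default is only checked when it is used.
parser = argparse.ArgumentParser(description='test')
parser.add_argument('-l', '--launch_directory', default=ldir)  # ldir from tempfile.mkdtemp()
args = parser.parse_args()

if not os.path.isdir(args.launch_directory) or not os.access(args.launch_directory, os.R_OK):
    parser.error("{0} is not a readable directory".format(args.launch_directory))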
If your script can't work without a valid launch_directory then it should be made a mandatory argument:
parser.add_argument('launch_directory', type=readable_dir)
btw, you should use argparse.ArgumentTypeError instead of Exception in readable_dir().
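A sketch of the question's readable_dir() with only that change applied:

def readable_dir(prospective_dir):
    # Raising ArgumentTypeError lets argparse print a clean usage error
    # instead of an unhandled traceback.
    if not os.path.isdir(prospective_dir):
        raise argparse.ArgumentTypeError("readable_dir:{0} is not a valid path".format(prospective_dir))
    if os.access(prospective_dir, os.R_OK):
        return prospective_dir
    else:
        raise argparse.ArgumentTypeError("readable_dir:{0} is not a readable dir".format(prospective_dir))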
Related
I'm writing a test for this function:
def create_folder_if_not_exists(
        sdk: looker_sdk,
        folder_name: str,
        parent_folder_name: str) -> dict:
    folder = sdk.search_folders(name=folder_name)[0]
    try:
        parent_id = sdk.search_folders(name=parent_folder_name)[0].id
        logger.info(f'Creating folder "{folder_name}"')
        folder = sdk.create_folder(
            body=models.CreateFolder(
                name=folder_name,
                parent_id=parent_id
            )
        )
        return folder
    except looker_sdk.error.SDKError as err:
        logger.error(err.args[0])
        return folder
This is my current test, using the Python pytest library, but I keep getting this failure: Failed: DID NOT RAISE <class 'looker_sdk.error.SDKError'>
def test_create_folder_if_not_exists_parent1(mocker):
    # Tests if a folder has parent id of 1 we raise an exception
    sdk = fake_methods_data.MockSDK()
    sf_data = fake_methods_data.MockSearchFolder(
        name='goog', parent_id=1, id=3)
    mocker.patch.object(sdk, "search_folders")
    mocker.patch.object(sdk, "create_folder",
                        side_effect=[looker_sdk.error.SDKError])
    sdk.search_folders.return_value = [sf_data]
    with pytest.raises(looker_sdk.error.SDKError) as err:
        test = fc.create_folder_if_not_exists(
            sdk=sdk, folder_name='googn', parent_folder_name='1')
    assert str(err.value) == 'test'
    assert test.parent_id == 1
    assert test.name == 'googn'
Does anyone know how to force a function to raise a specific exception class using pytest? I've been looking at this Stack Overflow question (Mocking a function to raise an Exception to test an except block) but am struggling to get it to work. Hoping for some other thoughts.
This sounds like something I have done for work (open-source software dev stuff). In my case, I needed to test an except block raised when an executable file could not be run on a particular OS version. In our testing framework we use pytest and monkeypatch to test things. I've included the relevant bits of code below, along with some explanation about what is happening. I think this is probably what you mean by 'patch the sdk error', and I believe that is probably what you need to do. If anything is unclear, or you have more questions, let me know.
In conftest.py I define pytest fixtures that get used for tests in more than one test file. Here, I mock the scenario I want to test, using monkeypatch to fake the results I want from the parts of the get_version() function I'm not trying to test.
# conftest.py
import subprocess
import shutil
import os
import re
import platform
from pathlib import Path

import pytest


@pytest.fixture
def executable_incompatible_with_os(monkeypatch):
    """
    Mocks an executable file that is incompatible with the OS.
    (This situation likely only applies to blastall.)
    """
    def mock_which(*args, **kwargs):
        """Mock an absolute file path."""
        return args[0]

    def mock_isfile(*args, **kwargs):
        """Mock a call to `os.path.isfile()`."""
        return True

    def mock_access(*args, **kwargs):
        """Mock a call to `os.access()`."""
        return True

    def mock_subprocess(*args, **kwargs):
        """Mock a call to `subprocess.run()` with an incompatible program."""
        raise OSError

    # Replace calls to existing methods with my mocked versions
    monkeypatch.setattr(shutil, "which", mock_which)
    monkeypatch.setattr(Path, "is_file", mock_isfile)
    monkeypatch.setattr(os.path, "isfile", mock_isfile)
    monkeypatch.setattr(os, "access", mock_access)
    monkeypatch.setattr(subprocess, "run", mock_subprocess)
In test_aniblastall.py I test parts of aniblastall.py. In this case, I'm testing the behaviour when an OSError is raised; the code that raises the error in the test is in conftest.py. The entire pytest fixture I defined there is passed as a parameter to the test.
# test_aniblastall.py
from pathlib import Path
import unittest

import aniblastall  # module under test (import assumed; only the relevant bits are shown)


# Test case 4: there is an executable file, but it will not run on the OS
def test_get_version_os_incompatible(executable_incompatible_with_os):
    """Test behaviour when the program can't run on the operating system.

    This will happen with newer versions of MacOS."""
    test_file_4 = Path("/os/incompatible/blastall")
    assert (
        aniblastall.get_version(test_file_4)
        == f"blastall exists at {test_file_4} but could not be executed"
    )
aniblastall.py contains the function the error should be raised from.
# aniblastall.py
import logging
import os
import platform
import re
import shutil
import subprocess
from pathlib import Path


def get_version(blast_exe: Path = pyani_config.BLASTALL_DEFAULT) -> str:
    """
    The following circumstances are explicitly reported as strings:

    - no executable at passed path
    - non-executable file at passed path (this includes cases where the user
      doesn't have execute permissions on the file)
    - no version info returned
    - executable cannot be run on this OS
    """
    logger = logging.getLogger(__name__)

    try:
        blastall_path = Path(shutil.which(blast_exe))  # type:ignore
    except TypeError:
        return f"{blast_exe} is not found in $PATH"

    if not blastall_path.is_file():  # no executable
        return f"No blastall at {blastall_path}"

    # This should catch cases when the file can't be executed by the user
    if not os.access(blastall_path, os.X_OK):  # file exists but not executable
        return f"blastall exists at {blastall_path} but not executable"

    if platform.system() == "Darwin":
        cmdline = [blast_exe, "-version"]
    else:
        cmdline = [blast_exe]

    try:
        result = subprocess.run(
            cmdline,  # type: ignore
            shell=False,
            stdout=subprocess.PIPE,  # type: ignore
            stderr=subprocess.PIPE,
            check=False,  # blastall doesn't return 0
        )
    except OSError:
        logger.warning("blastall executable will not run", exc_info=True)
        return f"blastall exists at {blastall_path} but could not be executed"

    version = re.search(  # type: ignore
        r"(?<=blastall\s)[0-9\.]*", str(result.stderr, "utf-8")
    ).group()

    if 0 == len(version.strip()):
        return f"blastall exists at {blastall_path} but could not retrieve version"

    return f"{platform.system()}_{version} ({blastall_path})"
This is super valuable @baileythegreen, however my problem was far simpler. I had an if/else, and the else had the try/except error-handling piece. I was so focused on that, I didn't check the simple part of whether it was even getting to the else. :(
I am trying to implement a hostname-like module, and my target machine is an amazon-ec2 instance. But when I run the script it gives me the error below:
[ansible-user#ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *

try:
    import json
except ImportError:
    import simplejson as json


def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            try:
                f.write("%s\n" % hostname)
            finally:
                f.close()
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")


def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type=str)))
    name = module.params['name']
    write_to_file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)


if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. It also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # change is if this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True

    # in the event of a successful module execution, you will want to
    # simply call AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module as an ad-hoc command like so:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation as proposed in @Simon's answer.
So I'm building a simple script that backs up certain documents to my second hard-drive (you never know what could happen!). So, I used the shutil.copytree function to replicate my data on the second drive. It works beautifully, and that is not the problem.
I use the shutil.rmtree function to remove the tree if the destination already exists. I'll show you my code:
import shutil
import os


def overwrite(src, dest):
    if not os.path.exists(src):
        print(src, "does not exist, so nothing may be copied.")
        return

    if os.path.exists(dest):
        shutil.rmtree(dest)

    shutil.copytree(src, dest)
    print(dest, "overwritten with data from", src)
    print("")


overwrite(r"C:\Users\Centurion\Dropbox\Documents", r"D:\Backup\Dropbox Documents")
overwrite(r"C:\Users\Centurion\Pictures", r"D:\Backup\All Pictures")

print("Press ENTER to continue...")
input()
As you can see, a simple script. Now, when I run the script for the first time, everything is fine. Pictures and Documents copy over to my D: drive just fine. However, when I run for the second time, this is my output:
C:\Users\Centurion\Programming\Python>python cpdocsnpics.py
D:\Backup\Dropbox Documents overwritten with data from C:\Users\Centurion\Dropbox\Documents

Traceback (most recent call last):
  File "cpdocsnpics.py", line 17, in <module>
    overwrite(r"C:\Users\Centurion\Pictures", r"D:\Backup\All Pictures")
  File "cpdocsnpics.py", line 10, in overwrite
    shutil.rmtree(dest)
  File "C:\Python34\lib\shutil.py", line 477, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "C:\Python34\lib\shutil.py", line 376, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File "C:\Python34\lib\shutil.py", line 374, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 5] Access is denied: 'D:\\Backup\\All Pictures'
The error only happens when I copy Pictures after the first time; I'm assuming it has something to do with being a Library.
What should I do?
That's a cross-platform consistency issue.
You've copied files/dirs that carry the read-only attribute. On the first run, "dest" does not exist, so rmtree is not called. However, when you run the "overwrite" function again, the "dest" location (and its subtree) exists, but it was copied with read-only access, and that is where the problem appears.
In order to fix the issue, you must provide a handler for the onerror parameter of shutil.rmtree. Since your problem concerns read-only files, the workaround looks like this:
def readonly_handler(func, path, execinfo):
    os.chmod(path, 128)  # or os.chmod(path, stat.S_IWRITE) from the "stat" module
    func(path)
As you can see in the Python docs, onerror must be a callable that accepts three parameters: function, path, and excinfo. For further info, read the docs.
def overwrite(src, dest):
    if not os.path.exists(src):
        print(src, "does not exist, so nothing may be copied.")
        return

    if os.path.exists(dest):
        shutil.rmtree(dest, onerror=readonly_handler)

    shutil.copytree(src, dest)
    print(dest, "overwritten with data from", src)
    print("")
Of course, this handler is simple and specific but if other errors occur, new exceptions will be raised and this handler may not be able to fix them!
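One way to make that explicit, as a sketch that assumes the only error you intend to swallow is an EACCES caused by the read-only attribute, is to re-raise everything else:

import errno
import os
import stat

def readonly_handler(func, path, execinfo):
    # Sketch: clear the read-only flag and retry only for "access denied";
    # anything else is unexpected, so re-raise it rather than hide it.
    exc = execinfo[1]
    if isinstance(exc, OSError) and exc.errno == errno.EACCES:
        os.chmod(path, stat.S_IWRITE)
        func(path)
    else:
        raise exc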
Note:
Tim Golden (Python for windows contributor) has been patching the shutil.rmtree issue and it seems it will be resolved in Python 3.5 (see issue 19643).
I found a problem other than the read-only file issue when using shutil.rmtree on Windows (testing on Windows 7). I was using a combination of shutil.rmtree and shutil.copytree to create a test fixture in a test suite, so the sequence was being called repeatedly in a short period of time (<1 sec intervals), and I was seeing unpredictable failures part way through the test suite, with both EACCES and ENOTEMPTY errors reported. The symptoms suggested to me that the shutil.rmtree function had not completed on return to the calling program, and that it was only after some time that the deleted filenames became available for re-use.
TL;DR: the solution isn't pretty. Broadly, it renames the directory before deleting it, but there are a number of wrinkles that need to be handled because the Windows file system seems to take some time to catch up with the operations performed. The actual code catches a variety of failure conditions and retries a variant of the failed operation after a short delay.
A longer discussion follows, with my final code at the end.
My first thought was to try renaming the directory tree before removing it, so that the original directory name is immediately available for re-use. This does appear to help. To this end, I created a replacement for rmtree whose essence is this:
def removetree(tgt):
    def error_handler(func, path, execinfo):
        e = execinfo[1]
        if e.errno == errno.ENOENT or not os.path.exists(path):
            return  # path does not exist - treat as success
        if func in (os.rmdir, os.remove) and e.errno == errno.EACCES:
            os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)  # 0777
            func(path)  # read-only file; make writable and retry
            return
        raise e

    tmp = os.path.join(os.path.dirname(tgt), "_removetree_tmp")
    os.rename(tgt, tmp)
    shutil.rmtree(tmp, onerror=error_handler)
    return
I found this logic was an improvement, but it was subject to unpredictable failure of the os.rename operation, with one of several possible errors. So I also added some retry logic around os.rename, thus:
def removetree(tgt):
    def error_handler(func, path, execinfo):
        # figure out recovery based on error...
        e = execinfo[1]
        if e.errno == errno.ENOENT or not os.path.exists(path):
            return  # path does not exist
        if func in (os.rmdir, os.remove) and e.errno == errno.EACCES:
            os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)  # 0777
            func(path)  # read-only file; make writable and retry
            return
        raise e

    # Rename target directory to temporary value, then remove it
    count = 0
    while count < 10:  # prevents indefinite loop
        count += 1
        tmp = os.path.join(os.path.dirname(tgt), "_removetree_tmp_%d" % (count))
        try:
            os.rename(tgt, tmp)
            shutil.rmtree(tmp, onerror=error_handler)
            break
        except OSError as e:
            time.sleep(1)  # Give file system some time to catch up
            if e.errno in [errno.EACCES, errno.ENOTEMPTY]:
                continue  # Try another temp name
            if e.errno == errno.EEXIST:
                shutil.rmtree(tmp, ignore_errors=True)  # Try to clean up old files
                continue  # Try another temp name
            if e.errno == errno.ENOENT:
                break  # 'src' does not exist(?)
            raise  # Other error - propagate
    return
The above code is not tested, but the general idea here does seem to work. The full code I actually use is below, and uses two functions. It probably contains some unnecessary logic, but does seem to be working more reliably for me (in that my test suite now passes repeatedly on Windows where previously it failed unpredictably on a majority of runs):
# Assumed imports/setup for the functions below (not shown in the original excerpt):
import errno
import logging
import os
import shutil
import stat
import time

log = logging.getLogger(__name__)


def renametree_temp(src):
    """
    Rename tree to temporary name, and return that name, or
    None if the source directory does not exist.
    """
    count = 0
    while count < 10:  # prevents indefinite loop
        count += 1
        tmp = os.path.join(os.path.dirname(src), "_removetree_tmp_%d" % (count))
        try:
            os.rename(src, tmp)
            return tmp  # Success!
        except OSError as e:
            time.sleep(1)
            if e.errno == errno.EACCES:
                log.warning("util.renametree_temp: %s EACCES, retrying" % tmp)
                continue  # Try another temp name
            if e.errno == errno.ENOTEMPTY:
                log.warning("util.renametree_temp: %s ENOTEMPTY, retrying" % tmp)
                continue  # Try another temp name
            if e.errno == errno.EEXIST:
                log.warning("util.renametree_temp: %s EEXIST, retrying" % tmp)
                shutil.rmtree(tmp, ignore_errors=True)  # Try to clean up old files
                continue  # Try another temp name
            if e.errno == errno.ENOENT:
                log.warning("util.renametree_temp: %s ENOENT, skipping" % tmp)
                break  # 'src' does not exist(?)
            raise  # Other error: propagate
    return None
def removetree(tgt):
    """
    Work-around for Python problem with the shutil tree-removal functions on Windows.
    See:
        https://stackoverflow.com/questions/23924223/
        https://stackoverflow.com/questions/1213706/
        https://stackoverflow.com/questions/1889597/
        http://bugs.python.org/issue19643
    """
    # shutil.rmtree error handler that attempts recovery from attempts
    # on Windows to remove a read-only file or directory (see links above).
    def error_handler(func, path, execinfo):
        e = execinfo[1]
        if e.errno == errno.ENOENT or not os.path.exists(path):
            return  # path does not exist: nothing to do
        if func in (os.rmdir, os.remove) and e.errno == errno.EACCES:
            try:
                os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)  # 0777
            except Exception as che:
                log.warning("util.removetree: chmod failed: %s" % che)
            try:
                func(path)
            except Exception as rfe:
                log.warning("util.removetree: 'func' retry failed: %s" % rfe)
                if not os.path.exists(path):
                    return  # Gone, assume all is well
                raise
            return  # retry succeeded
        if e.errno == errno.ENOTEMPTY:
            log.warning("util.removetree: Not empty: %s, %s" % (path, tgt))
            time.sleep(1)
            removetree(path)  # Retry complete removal
            return
        log.warning("util.removetree: rmtree path: %s, error: %s" % (path, repr(execinfo)))
        raise e

    # Try renaming to a new directory first, so that the tgt is immediately
    # available for re-use.
    tmp = renametree_temp(tgt)
    if tmp:
        shutil.rmtree(tmp, onerror=error_handler)
    return
(The above code incorporates a solution to the read-only file problem from What user do python scripts run as in windows?, which according to Deleting directory in Python is tested. I don't think I encounter the read-only file problem, so assume it is not tested in my test suite.)
I am adding validation using schema to a CLI that uses docopt, but I cannot seem to get Optional to work. I want to validate that:
the input file exists
valid options are used
if the PATH is added that the directory exists.
Here is the app so far:
"""DVget
Usage:
DVget [-s] FILE [PATH]
Process a file, return data based on selection
and write results to PATH/output-file
Arguments:
FILE specify input file
PATH specify output directory (default: ./)
Options:
-s returns sections
-p returns name-sets
-m returns modules
"""
import os
from docopt import docopt
from schema import Schema, And, Use, Optional, SchemaError
# START OF SCRIPT
if __name__ == "__main__":
arguments = docopt(__doc__, version="0.1")
#print(arguments)
schema = Schema({
'FILE': [Use(open, error='FILE should be readable')],
Optional('PATH'): And(os.path.exists, error='PATH should exist'),
'-': And(str, lambda s: s in ('s', 'p', 'm'))})
try:
arguments = schema.validate(arguments)
# process(arguments)
except SchemaError as e:
exit(e)
Running DVget -s "c:\test.txt" gives me the error message 'PATH should exist' even when using Optional in schema and docopt. Any suggestions?
In python 2.7, I want to run:
$ ./script.py initparms.py
This is a trick to supply a parameter file to script.py, since initparms.py contains several Python variables, e.g.:
Ldir = '/home/marzipan/jelly'
LMaps = True
# etc.
script.py contains:
X = __import__(sys.argv[1])
Ldir = X.Ldir
LMaps = X.LMaps
# etc.
I want to do a bulk promotion of the variables in X so they are available to script.py, without spelling out each one in the code by hand.
Things like
import __import__(sys.argv[1])
or
from sys.argv[1] import *
don't work. Almost there perhaps... Any ideas? Thanks!
Here's a one-liner:
globals().update(__import__(sys.argv[1]).__dict__)
You can use execfile:
execfile(sys.argv[1])
Of course, the usual warnings with exec or eval apply (Your script has no way of knowing whether it is running trusted or untrusted code).
My suggestion would be not to do what you're doing, but to use configparser instead and handle the configuration through that.
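For instance, a minimal sketch of that approach; the file name params.ini and its section/option names are invented for illustration (on Python 2.7 the module is ConfigParser, on Python 3 it is configparser):

# Sketch: keep parameters in an INI file instead of a Python module.
import sys
import ConfigParser

cfg = ConfigParser.ConfigParser()
cfg.read(sys.argv[1])                      # e.g. ./script.py params.ini

Ldir = cfg.get('paths', 'ldir')            # '/home/marzipan/jelly'
LMaps = cfg.getboolean('flags', 'lmaps')   # True

# params.ini would look something like:
#   [paths]
#   ldir = /home/marzipan/jelly
#   [flags]
#   lmaps = true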
You could do something like this:
import os
import imp
import sys

try:
    module_name = sys.argv[1]
    module_info = imp.find_module(module_name, [os.path.abspath(os.path.dirname(__file__))] + sys.path)
    module_properties = imp.load_module(module_name, *module_info)
except ImportError:
    pass
else:
    try:
        attrlist = module_properties.__all__
    except AttributeError:
        attrlist = dir(module_properties)
    for attr in attrlist:
        if attr.startswith('__'):
            continue
        globals()[attr] = getattr(module_properties, attr)
A little complicated, but it gets the job done.