Python - bulk promote variables to parent scope

In Python 2.7, I want to run:
$ ./script.py initparms.py
This is a trick to supply a parameter file to script.py, since initparms.py contains several Python variables, e.g.
Ldir = '/home/marzipan/jelly'
LMaps = True
# etc.
script.py contains:
X = __import__(sys.argv[1])
Ldir = X.Ldir
LMaps = X.LMaps
# etc.
I want to do a bulk promotion of the variables in X so they are available to script.py, without spelling out each one in the code by hand.
Things like
import __import__(sys.argv[1])
or
from sys.argv[1] import *
don't work. Almost there perhaps... Any ideas? Thanks!

Here's a one-liner:
globals().update(__import__(sys.argv[1]).__dict__)
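Note that this also copies the module's dunder attributes (such as __name__ and __doc__) into your globals. A variant of the same one-liner that skips those names (an untested sketch, not part of the original answer):
import sys

mod = __import__(sys.argv[1])
# copy everything except dunder names such as __name__ and __builtins__
globals().update({k: v for k, v in mod.__dict__.items()
                  if not k.startswith('__')})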

You can use execfile:
execfile(sys.argv[1])
Of course, the usual warnings with exec or eval apply (your script has no way of knowing whether it is running trusted or untrusted code).
My suggestion would be to not do what you're doing and instead use ConfigParser and handle the configuration through that.
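A minimal sketch of that approach, assuming the parameter file is rewritten in INI format (the file name, section and option names here are made up for illustration; on Python 2.7 the module is called ConfigParser):
# initparms.cfg (hypothetical):
#   [main]
#   ldir = /home/marzipan/jelly
#   lmaps = true
import sys
import ConfigParser

config = ConfigParser.SafeConfigParser()
config.read(sys.argv[1])

Ldir = config.get('main', 'ldir')
LMaps = config.getboolean('main', 'lmaps')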

You could do something like this:
import os
import imp
import sys
try:
    module_name = sys.argv[1]
    module_info = imp.find_module(module_name, [os.path.abspath(os.path.dirname(__file__))] + sys.path)
    module_properties = imp.load_module(module_name, *module_info)
except ImportError:
    pass
else:
    try:
        attrlist = module_properties.__all__
    except AttributeError:
        attrlist = dir(module_properties)
    for attr in attrlist:
        if attr.startswith('__'):
            continue
        globals()[attr] = getattr(module_properties, attr)
A little complicated, but it gets the job done.

Custom Ansible module is giving param extra params error

I am trying to implement a hostname-like module, and my target machine is an amazon-ec2 instance. But when I run the script it gives me the error below:
[ansible-user#ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *

try:
    import json
except ImportError:
    import simplejson as json

def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            try:
                f.write("%s\n" %hostname)
            finally:
                f.close()
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")

def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type=str)))
    name = module.params['name']
    write_to_file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)

if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. It also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # changed is whether this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True

    # in the event of a successful module execution, you will want to
    # call AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module like so as an ad-hoc command:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation as proposed in @Simon's answer.

Loop through list of lists to generate modules to import [duplicate]

I'm working on personal documentation for the nested matplotlib (MPL) library, which differs from MPL's own documentation in that it covers the submodule packages I'm interested in. I'm writing a Python script which I hope will automate document generation from future MPL releases.
I selected the submodules/packages I'm interested in and want to list their main classes, from which I'll generate a list and process it with pydoc.
The problem is that I can't find a way to instruct Python to load a submodule from a string. Here is an example of what I tried:
import matplotlib.text as text
x = dir(text)
.
i = __import__('matplotlib.text')
y = dir(i)
.
j = __import__('matplotlib')
z = dir(j)
And here is a 3-way comparison of the above lists through pprint:
I don't understand what's loaded in the y object: it's the base matplotlib plus something else, but it lacks the information that I wanted, namely the main classes from the matplotlib.text package. That's the top blue-coloured part of the screenshot (the x list).
Please don't suggest Sphinx as a different approach.
The __import__ function can be a bit hard to understand.
If you change
i = __import__('matplotlib.text')
to
i = __import__('matplotlib.text', fromlist=[''])
then i will refer to matplotlib.text.
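In other words, without fromlist the call returns the top-level package, and the submodule is only reachable as an attribute of it; a quick illustration:
i = __import__('matplotlib.text')                   # returns the top-level matplotlib package
text_via_attr = i.text                               # the submodule hangs off it as an attribute
j = __import__('matplotlib.text', fromlist=[''])    # returns matplotlib.text directly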
In Python 2.7 and 3.1 or later, you can use importlib:
import importlib
i = importlib.import_module("matplotlib.text")
Some notes
If you're trying to import something from a sub-folder, e.g. ./feature/email.py, the code will look like importlib.import_module("feature.email") (see the sketch after these notes).
Before Python 3.3 you could not import anything if there was no __init__.py in the folder with the file you were trying to import (see caveats before deciding if you want to keep the file for backward compatibility, e.g. with pytest).
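A small sketch of the sub-folder case from the first note (the feature/email.py layout and its send() function are made-up examples):
# layout:
#   feature/
#       __init__.py   # needed before Python 3.3
#       email.py      # defines, say, send()
import importlib

email_mod = importlib.import_module("feature.email")
# email_mod.send is now available, just as after a normal import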
importlib.import_module is what you are looking for. It returns the imported module.
import importlib
# equiv. of your `import matplotlib.text as text`
text = importlib.import_module('matplotlib.text')
You can thereafter access anything in the module as text.myclass, text.myfunction, etc.
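For example (Text is an actual class in matplotlib.text; the getattr call just shows that attribute names can be resolved from strings as well):
import importlib

text = importlib.import_module('matplotlib.text')
Text = getattr(text, 'Text')   # same as text.Text, but the name can come from a string
print(Text.__doc__)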
I spent some time trying to import modules from a list, and this is the thread that got me most of the way there, but I didn't grasp the use of __import__,
so here's how to import a module from a string and get the same behavior as a plain import. And try/except the error case, too. :)
import sys

pipmodules = ['pycurl', 'ansible', 'bad_module_no_beer']
for module in pipmodules:
    try:
        # because we want to import using a variable, do it this way
        module_obj = __import__(module)
        # create a global object containing our module
        globals()[module] = module_obj
    except ImportError:
        sys.stderr.write("ERROR: missing python module: " + module + "\n")
        sys.exit(1)
And yes, for Python 2.7 and later you have other options, but for 2.6 and earlier, this works.
Apart from using importlib, one can also use the exec function to import a module from a string variable.
Here I am showing an example of importing the combinations function from the itertools package using exec:
MODULES = [
    ['itertools', 'combinations'],
]

for ITEM in MODULES:
    import_str = "from {0} import {1}".format(ITEM[0], ', '.join(str(i) for i in ITEM[1:]))
    exec(import_str)

ar = list(combinations([1, 2, 3, 4], 2))
for elements in ar:
    print(elements)
Output:
(1, 2)
(1, 3)
(1, 4)
(2, 3)
(2, 4)
(3, 4)
Module auto-install & import from list
The script below works fine with both submodules and pseudo-submodules.
# PyPI imports
import pkg_resources, subprocess, sys
modules = {'lxml.etree', 'pandas', 'screeninfo'}
required = {m.split('.')[0] for m in modules}
installed = {pkg.key for pkg in pkg_resources.working_set}
missing = required - installed
if missing:
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--upgrade', 'pip'])
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', *missing])
for module in set.union(required, modules):
    globals()[module] = __import__(module)
Tests:
print(pandas.__version__)
print(lxml.etree.LXML_VERSION)
I developed these 3 useful functions:
def loadModule(moduleName):
    module = None
    try:
        import sys
        del sys.modules[moduleName]
    except BaseException as err:
        pass
    try:
        import importlib
        module = importlib.import_module(moduleName)
    except BaseException as err:
        serr = str(err)
        print("Error to load the module '" + moduleName + "': " + serr)
    return module

def reloadModule(moduleName):
    module = loadModule(moduleName)
    moduleName, modulePath = str(module).replace("' from '", "||").replace("<module '", '').replace("'>", '').split("||")
    if (modulePath.endswith(".pyc")):
        import os
        os.remove(modulePath)
        module = loadModule(moduleName)
    return module

def getInstance(moduleName, param1, param2, param3):
    module = reloadModule(moduleName)
    instance = eval("module." + moduleName + "(param1, param2, param3)")
    return instance
And every time I want to reload and get a new instance, I just have to call getInstance() like this:
myInstance = getInstance("MyModule", myParam1, myParam2, myParam3)
Finally I can call all the functions inside the new instance:
myInstance.aFunction()
The only thing specific here is the parameter list (param1, param2, param3) of your instance; customize it as needed.
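If you would rather avoid eval, a variant of getInstance() that looks the class up with getattr and forwards arbitrary arguments could look like this (a sketch, assuming, as above, that the module defines a class named after the module):
def getInstanceSafe(moduleName, *args, **kwargs):
    # reload the module as before, then look the class up by name
    module = reloadModule(moduleName)
    cls = getattr(module, moduleName)
    return cls(*args, **kwargs)
# myInstance = getInstanceSafe("MyModule", myParam1, myParam2, myParam3)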
You can also use the exec built-in function, which executes any string as Python code.
In [1]: module = 'pandas'
...: function = 'DataFrame'
...: alias = 'DF'
In [2]: exec(f"from {module} import {function} as {alias}")
In [3]: DF
Out[3]: pandas.core.frame.DataFrame
For me this was the most readable way to solve my problem.

Start multiprocessing.process in namespace

I'm trying to start a new process from within an already created namespace (named 'test').
I've looked into a few methods including nsenter:
import subprocess
from nsenter import Namespace

with Namespace(mypid, 'net'):
    # output network interfaces as seen from within the mypid's net NS:
    subprocess.check_output(['ip', 'a'])
But I can't seem to find a reference for where to get the var mypid...!
Ideally I'd like to keep dependencies like nsenter to a minimum (for portability), so I'd probably like to go down the ctypes route, something like this (although there is no syscall for netns...):
import ctypes

nsname = 'test'
netnspath = '%s%s' % ('/run/netns/', nsname)
netnspath = netnspath.encode('ascii')

libc = ctypes.CDLL('libc.so.6')
print dir(libc)
fd = open(netnspath)
print libc.syscall(???, fd.fileno())
OR (taken from http://tech.zalando.com/posts/entering-kernel-namespaces-with-python.html)
import ctypes
libc = ctypes.CDLL('libc.so.6')
# replace MYPID with the container's PID
fd = open('/proc/<MYPID>/ns/net')
libc.setns(fd.fileno(), 0)
# we are now inside MYPID's network namespace
However, I still have to know the PID, plus my libc does not have setns!
Any thoughts on how I could obtain the PID would be great!
TIA!
The problem with the nsenter module is that you need to provide it with the PID of a process that is already running inside your target namespace. This means that you can't actually use this module to make use of a network namespace that you have created using something like ip netns add.
The kernel's setns() system call takes a file descriptor rather than a PID. If you're willing to solve it with ctypes, you can do something like this:
from ctypes import cdll

libc = cdll.LoadLibrary('libc.so.6')
_setns = libc.setns

CLONE_NEWIPC = 0x08000000
CLONE_NEWNET = 0x40000000
CLONE_NEWUTS = 0x04000000

def setns(fd, nstype):
    if hasattr(fd, 'fileno'):
        fd = fd.fileno()
    _setns(fd, nstype)

def get_netns_path(nspath=None, nsname=None, nspid=None):
    '''Generate a filesystem path from a namespace name or pid,
    and return a filesystem path to the appropriate file. Returns
    the nspath argument if both nsname and nspid are None.'''
    if nsname:
        nspath = '/var/run/netns/%s' % nsname
    elif nspid:
        nspath = '/proc/%d/ns/net' % nspid
    return nspath
If your libc doesn't have the setns() call, you may be out of luck (although where are you running that you have a kernel recent enough to support network namespaces but a libc that doesn't?).
Assuming you have a namespace named "blue" available (ip netns add blue), you can run:
with open(get_netns_path(nsname="blue")) as fd:
    setns(fd, CLONE_NEWNET)
    subprocess.check_call(['ip', 'a'])
Note that you must run this code as root.
This works; however, I'm unsure what the 0 does as part of the syscall. So if someone could enlighten me, that would be great!
import ctypes
nsname = 'test'
netnspath = '%s%s' % ('/run/netns/', nsname)
netnspath = netnspath.encode('ascii')
libc = ctypes.CDLL('libc.so.6')
fd = open(netnspath)
print libc.syscall(308, fd.fileno(), 0)
After finding this question we've updated python-nsenter so it is now able to enter namespaces via an arbitrary path in addition to providing the pid.
For example if you wanted to enter a namespace created by ip netns add you can now do something like:
with Namespace('/var/run/netns/foo', 'net'):
    # do something in the namespace
    pass
Version 0.2 is now available via PyPI with this update.

Importing modules globally

I would like to provide users with a Python script which would import some modules for them and then drop to the Python interpreter with the imported modules available at that level. I have some code which I thought might work, but it does not seem to:
module_list = ['module_a','module_b']
# Import our common modules
for module in module_list:
    try:
        print "Importing: {0}".format(module)
        exec("import {0}".format(module))
    except:
        print "FYI we failed importing {0}. It will not be available for you to use".format(module)
So when the script is done it will drop to the Python interpreter, where the user can do:
>>> module_a.run()
You can use code.InteractiveConsole() and pass a locals dictionary containing the local context in which the console will run. By storing the modules in there you can easily make them available to the interactive shell.
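A minimal sketch of that idea, reusing the module_list from the question (the banner text is arbitrary):
import code
import importlib

module_list = ['module_a', 'module_b']
namespace = {}
for name in module_list:
    try:
        namespace[name] = importlib.import_module(name)
    except ImportError:
        print "FYI we failed importing {0}. It will not be available for you to use".format(name)

# start an interactive shell whose local names include the imported modules
code.InteractiveConsole(locals=namespace).interact("Modules loaded: " + ", ".join(namespace))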
I wouldn't do this if I were you... but:
import imp

module_list = ['module_a', 'module_b']
for name in module_list:
    try:
        module_info = imp.find_module(name)
        globals()[name] = imp.load_module(name, *module_info)
    except:
        print "Import error: %s" % name
Save the above code in a file, for example imptest.py, and load it with the -i switch, as
python -i imptest.py
Against the advice of many comments I have read about touching __main__, I did it.
So here is the code that ended up working for me:
import sys
import __main__

module_list = ['module_a', 'module_b']

# Import our modules
for name in module_list:
    try:
        __import__(name)
        if name in sys.modules:
            setattr(__main__, name, sys.modules[name])
    except:
        print "FYI we failed importing {0}. It will not be available for you to use".format(name)

I got this error: "ImportError: cannot import name python". How do I fix it?

File "G:\Python25\Lib\site-packages\PyAMF-0.6b2-py2.5-win32.egg\pyamf\util\__init__.py", line 15, in <module>
ImportError: cannot import name python
How do I fix it?
If you need any info to know how to fix this problem, I can explain, just ask.
Thanks
Code:
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext import webapp
from TottysGateway import TottysGateway
import logging
def main():
    services_root = 'services'
    #services = ['users.login']
    #gateway = TottysGateway(services, services_root, logger=logging, debug=True)
    #app = webapp.WSGIApplication([('/', gateway)], debug=True)
    #run_wsgi_app(app)

if __name__ == "__main__":
    main()
Code:
from pyamf.remoting.gateway.google import WebAppGateway
import logging
class TottysGateway(WebAppGateway):
    def __init__(self, services_available, root_path, not_found_service, logger, debug):
        # override the constructor and then call the super
        self.services_available = services_available
        self.root_path = root_path
        self.not_found_service = not_found_service
        WebAppGateway.__init__(self, {}, logger=logging, debug=True)

    def getServiceRequest(self, request, target):
        # override the original getServiceRequest method
        try:
            # try looking for the service in the services list
            return WebAppGateway.getServiceRequest(self, request, target)
        except:
            pass
        try:
            # don't know what it does but it is an error for now
            service_func = self.router(target)
        except:
            if(target in self.services_available):
                # only if it is an available service import its module
                # so it doesn't access services that should be hidden
                try:
                    module_path = self.root_path + '.' + target
                    paths = target.rsplit('.')
                    func_name = paths[len(paths) - 1]
                    import_as = '_'.join(paths) + '_' + func_name
                    import_string = "from "+module_path+" import "+func_name+' as service_func'
                    exec import_string
                except:
                    service_func = False
            if(not service_func):
                # if it is not found load the default not found service
                module_path = self.rootPath + '.' + self.not_found_service
                import_string = "from "+module_path+" import "+func_name+' as service_func'
            # add the service loaded above
            assign_string = "self.addService(service_func, target)"
            exec assign_string
        return WebAppGateway.getServiceRequest(self, request, target)
You need to post your full traceback. What you show here isn't all that useful. I ended up digging up line 15 of pyamf/util/__init__.py. The code you should have posted is
from pyamf import python
This should not fail unless your local environment is messed up.
Can you 'import pyamf.util' and 'import pyamf.python' in an interactive Python shell? What about if you start Python while in /tmp (on the assumption that you might have a file named 'pyamf.py' in the current directory, which is a bad thing)?
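A quick way to check for that kind of shadowing is to look at where the module was actually loaded from, for example:
import pyamf
print pyamf.__file__   # should point into site-packages, not your project directory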
= (older comment below) =
Fix your question. I can't even tell where line 15 of util/__init__.py is supposed to be. Since I can't figure that out, I can't answer your question. Instead, I'll point out ways to improve your question and code.
First, use the markup language correctly, so that all the code is in a code block. Make sure you've titled the code, so we know it's from util/__init__.py and not some random file.
In your error message, include the full traceback, and not the last two lines.
Stop using parens in things like "if(not service_func):" and use a space instead, so it's "if not service_func:". This is discussed in PEP 8.
Read the Python documentation and learn how to use the language. Something like "func_name = paths[len(paths) - 1]" should be "func_name = paths[-1]"
Learn about the __import__ function and don't use "exec" for this case. Nor do you need the "exec assign_string" -- just do the "self.addService(service_func, target)".
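For example, the dynamic import built up as import_string above could be done without exec, along these lines (a sketch reusing the module_path and func_name variables from the question's code):
def load_service_func(module_path, func_name):
    # equivalent of: from <module_path> import <func_name> as service_func
    module = __import__(module_path, fromlist=[func_name])
    return getattr(module, func_name)
# inside getServiceRequest this replaces the exec'd import string:
#   service_func = load_service_func(module_path, func_name)
#   self.addService(service_func, target)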
