How to run the same Python script with different module imports? - python

I'm working on a project where I would like to run the same script with two different software APIs.
What I have:
- One module for each software package, with the same class and method names.
- One construction script where I need to call these classes and methods.
I would like to avoid duplicating the construction code, and instead run the same bit of code while changing only the imported module.
Example:
first_software_module.py
import first_software_api

class Box:
    def __init__(self, x, y, z):
        first_software_api.makeBox()
second_software_module.py
import second_software_api

class Box:
    def __init__(self, x, y, z):
        second_software_api.makeBox()
construction.py
first_box = Box(1, 2, 3)
And I would like to run construction.py with the first module, then with the second module.
I tried imports and execfile, but none of these solutions seems to work.
What I would like to do:
import first_software_module
run construction.py
import second_software_module
run construction.py

You could try passing a command-line argument to construction.py.
construction.py
import sys

if len(sys.argv) != 2:
    sys.stderr.write('Usage: python3 construction.py <module>\n')
    exit(1)

if sys.argv[1] == 'first_software_module':
    from first_software_module import Box
elif sys.argv[1] == 'second_software_module':
    from second_software_module import Box

box = Box(1, 2, 3)
You could then call construction.py with each import type from a shell script, say main.sh.
main.sh
#! /bin/bash
python3 construction.py first_software_module
python3 construction.py second_software_module
Make the shell script executable using chmod +x main.sh. Run it as ./main.sh.
Alternatively, if you do not want to use a shell script, and want to do it in pure Python, you could do the following:
main.py
import subprocess
subprocess.run(['python3', 'construction.py', 'first_software_module'])
subprocess.run(['python3', 'construction.py', 'second_software_module'])
and run main.py as you normally would using python3 main.py.

You can pass a command-line argument that will tell your script which module to import. There are many ways to do this, but I'm going to demonstrate with the argparse module.
import argparse
parser = argparse.ArgumentParser(description='Run the construction')
parser.add_argument('--module', nargs=1, type=str, required=True, help='The module to use for the construction', choices=['module1', 'module2'])
args = parser.parse_args()
Now, args.module will contain the argument you passed (as a one-element list, because of nargs=1). Use this string with an if-elif ladder (or the match-case syntax in 3.10+) to import the correct module, and alias it as (let's say) driver.
if args.module[0] == "module1":
    import first_software_api as driver
    print("Using first_software_api")
elif args.module[0] == "module2":
    import second_software_api as driver
    print("Using second_software_api")
Then, use driver in your Box class:
class Box:
    def __init__(self, x, y, z):
        driver.makeBox()
Say we had this in a file called construct.py. Running python3 construct.py --help gives:
usage: construct.py [-h] --module {module1,module2}

Run the construction

optional arguments:
  -h, --help            show this help message and exit
  --module {module1,module2}
                        The module to use for the construction
Running python3 construct.py --module module1 gives:
Using first_software_api

Related

Custom Ansible module is giving param extra params error

I am trying to implement a hostname-like module, and my target machine is an amazon-ec2 instance. But when I run the script, it gives me the error below:
[ansible-user@ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *

try:
    import json
except ImportError:
    import simplejson as json

def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            try:
                f.write("%s\n" % hostname)
            finally:
                f.close()
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")

def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type=str)))
    name = module.params['name']
    write_to_file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)

if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. It also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # changed is whether this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True

    # in the event of a successful module execution, you will want to
    # simply call AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module as an ad-hoc command like so:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation, as proposed in @Simon's answer.

Python3 - output __main__ file prints when running unittests (from actual program, not unittests)

How can I make the prints from my __main__ file get outputted when I run tests? I mean prints from that file, not from the unittest files.
I have this sample structure (all files are in the same directory):
main.py:
import argparse

print('print me?')  # no output

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('name')
    args = parser.parse_args()
    print(args.name)  # no output
other.py:
def print_me():
    print('ran print_me')
test.py:
import unittest
import sh

import other

class TestMain(unittest.TestCase):
    def test_main(self):
        print('test_main')  # prints it.
        sh.python3('main.py', 'test123')

    def test_other(self):
        print('test_other')  # prints it.
        other.print_me()
And I run it with python3 -m nose -s or python3 -m unittest, but it makes no difference: the prints from main.py are not outputted, only the ones defined directly in the test file. Here is what I do get:
user#user:~/python-programs/test_main$ python3 -m nose -s
test_main
.test_other
ran print_me
.
----------------------------------------------------------------------
Ran 2 tests in 0.040s
OK
P.S. Of course, if I run main.py without the tests, it prints normally (for example, using the Python interpreter and calling main.py with sh, just like in the unit tests).
sh.python3 starts a new process, and its output is not captured by nose. You can surface that output by printing the result:
print(sh.python3('main.py', 'test123'))
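If you'd rather stay in the standard library, subprocess gives the same effect. This sketch uses an inline -c stand-in for main.py so it is self-contained; with the real file you would pass [sys.executable, 'main.py', 'test123'] instead:

```python
import subprocess
import sys

# Inline stand-in for main.py (hypothetical; the question's file would
# be invoked as [sys.executable, 'main.py', 'test123']).
child_code = "print('print me?')"

proc = subprocess.run(
    [sys.executable, '-c', child_code],
    capture_output=True,  # capture the child's stdout/stderr
    text=True,            # decode bytes to str
)
# Re-print the child's output so the test runner's capture sees it:
print(proc.stdout, end='')  # print me?
```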

"boto required for this module" error Ansible

I am running the following script inside AWS Lambda:
#!/usr/bin/python
from __future__ import print_function
import json
import os
import ansible.inventory
import ansible.playbook
import ansible.runner
import ansible.constants
from ansible import utils
from ansible import callbacks

print('Loading function')

def run_playbook(**kwargs):
    stats = callbacks.AggregateStats()
    playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
    runner_cb = callbacks.PlaybookRunnerCallbacks(
        stats, verbose=utils.VERBOSITY)

    # use /tmp instead of $HOME
    ansible.constants.DEFAULT_REMOTE_TMP = '/tmp/ansible'

    out = ansible.playbook.PlayBook(
        callbacks=playbook_cb,
        runner_callbacks=runner_cb,
        stats=stats,
        **kwargs
    ).run()
    return out

def lambda_handler(event, context):
    return main()

def main():
    out = run_playbook(
        playbook='little.yml',
        inventory=ansible.inventory.Inventory(['localhost'])
    )
    return(out)

if __name__ == '__main__':
    main()
However, I get the following error: failed=True msg='boto required for this module'
However, according to this comment (https://github.com/ansible/ansible/issues/5734#issuecomment-33135727), it works.
But I don't understand how to specify that in my script. Or can I have a separate hosts file and include it in the script, the way I call my playbook?
If so, then how?
[EDIT - 1]
I have added the line inventory=ansible.inventory.Inventory('hosts')
with hosts file as:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
But, I get this error: /bin/sh: /usr/local/bin/python: No such file or directory
So, where is python located inside AWS Lambda?
I installed boto just like I installed other packages in the Lambda's deployment package: pip install boto -t <folder-name>
The bash command which python will usually give the location of the Python binary. There's an example of how to call a bash script from AWS Lambda here.
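From inside Python you can also skip the shell entirely: sys.executable is the path of the interpreter currently running your code (inside Lambda, that is the interpreter running your handler), and shutil.which is a portable stand-in for the which command. A minimal sketch:

```python
import shutil
import sys

# Path of the interpreter executing this very script:
print(sys.executable)

# Rough equivalent of the shell's `which python3` (may print None if
# python3 is not on PATH):
print(shutil.which('python3'))
```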

nose2 with such DSL does not find tests

This might be really stupid, but I can't get it to work...
I want to use the such DSL in nose2 with Python 2.7 on Linux.
I'm trying out the beginning of the example from the documentation (http://nose2.readthedocs.org/en/latest/such_dsl.html, see code below), but it doesn't run the tests, no matter how I launch it from the command line.
My file is called test_something.py, it's the only file in the directory.
I've tried running from the command line with >> nose2 and >> nose2 --plugin nose2.plugins.layers, but I always get Ran 0 tests in 0.000s. With >> nose2 --plugin layers I get ImportError: No module named layers.
How am I supposed to run this test from the command line??
Thanks!
Code below:
import unittest
from nose2.tools import such

with such.A("system with complex setup") as it:
    @it.has_setup
    def setup():
        print "Setup"
        it.things = [1]

    @it.has_teardown
    def teardown():
        print "Teardown"
        it.things = []

    @it.should("do something")
    def test():
        print "Test"
        assert it.things
        it.assertEqual(len(it.things), 1)
DOH!
I forgot to add it.createTests(globals()) at the end of the file!

How to split python 3 unittests into separate file with script to control which one to run?

I want to split my Python 3.4 unit tests in separate modules and still be able to control which tests to run or skip from the command line, as if all tests were located in the same file. I'm having trouble doing so.
According to the docs, command line arguments can be used to select which tests to run. For example:
TestSeqFunc.py:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import random
import unittest

class TestSequenceFunctions(unittest.TestCase):

    def setUp(self):
        self.seq = list(range(10))

    def test_shuffle(self):
        # make sure the shuffled sequence does not lose any elements
        random.shuffle(self.seq)
        self.seq.sort()
        self.assertEqual(self.seq, list(range(10)))
        # should raise an exception for an immutable sequence
        self.assertRaises(TypeError, random.shuffle, (1, 2, 3))

    def test_choice(self):
        element = random.choice(self.seq)
        self.assertTrue(element in self.seq)

    def test_sample(self):
        with self.assertRaises(ValueError):
            random.sample(self.seq, 20)
        for element in random.sample(self.seq, 5):
            self.assertTrue(element in self.seq)

if __name__ == '__main__':
    unittest.main()
can be controlled with:
./TestSeqFunc.py
to run all tests in the file,
./TestSeqFunc.py TestSequenceFunctions
to run all tests defined in the TestSequenceFunctions class, and finally:
./TestSeqFunc.py TestSequenceFunctions.test_sample
to run the specific test_sample() method.
The problem I have is that I cannot find an organization of files that will allow me to:
Have multiple modules containing multiple classes and methods in separate files
Use a kind of wrapper script that will give the same kind of control over which tests (module/file, class, method) to run.
Specifically, I cannot find a way to emulate the python3 -m unittest behaviour using a run_tests.py script. For example, I want to be able to:
Run all the tests in the current directory:
./run_tests.py -v should do the same as python3 -m unittest -v
Run one module (file):
./run_tests.py -v TestSeqFunc being equivalent to python3 -m unittest -v TestSeqFunc
Run one class:
./run_tests.py -v TestSeqFunc.TestSequenceFunctions being equivalent to python3 -m unittest -v TestSeqFunc.TestSequenceFunctions
Run specific methods from a class:
./run_tests.py -v TestSeqFunc.TestSequenceFunctions.test_sample being equivalent to python3 -m unittest -v TestSeqFunc.TestSequenceFunctions.test_sample
Note that I want to:
be able to pass arguments to unittests, for example the verbose flag used previously;
allow running specific modules, classes and even methods.
As of now, I use a suite() function in my run_all.py script, which manually loads the modules and adds their classes to a suite using addTest(unittest.makeSuite(obj)). Then, my main() is simple:
if __name__ == '__main__':
    unittest.main(defaultTest='suite')
But using this I cannot run specific tests. In the end, I might just execute python3 -m unittest <sys.argv> from inside the run_all.py script, but that would be inelegant...
Any suggestions?!
Thanks!
Here's my final run_all.py:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import glob
import os
import unittest

test_pattern = 'validate_*.py'

if __name__ == '__main__':
    # Find all files matching pattern
    module_files = sorted(glob.glob(test_pattern))
    module_names = [os.path.splitext(os.path.basename(module_file))[0]
                    for module_file in module_files]

    # Iterate over the found files
    print('Importing:')
    for module in module_names:
        print(' ', module)
        exec('import %s' % module)
    print('Done!')
    print()

    unittest.main(defaultTest=module_names)
Notes:
I use exec() to simulate import modulename. The issue is that importlib (explained here, for example) will import the module but will not create a name for it in this file's namespace. When I type import os, an os name is created and I can then access os.path. Using importlib, I couldn't figure out a way to create that name. Such a name is required by unittest; otherwise you get this kind of error:
Traceback (most recent call last):
  File "./run_all.py", line 89, in <module>
    unittest.main(argv=sys.argv)
  File "~/usr/lib/python3.4/unittest/main.py", line 92, in __init__
    self.parseArgs(argv)
  File "~/usr/lib/python3.4/unittest/main.py", line 139, in parseArgs
    self.createTests()
  File "~/usr/lib/python3.4/unittest/main.py", line 146, in createTests
    self.module)
  File "~/usr/lib/python3.4/unittest/loader.py", line 146, in loadTestsFromNames
    suites = [self.loadTestsFromName(name, module) for name in names]
  File "~/usr/lib/python3.4/unittest/loader.py", line 146, in <listcomp>
    suites = [self.loadTestsFromName(name, module) for name in names]
  File "~/usr/lib/python3.4/unittest/loader.py", line 114, in loadTestsFromName
    parent, obj = obj, getattr(obj, part)
AttributeError: 'module' object has no attribute 'validate_module1'
Hence the use of exec().
I have to add defaultTest=module_names, or else main() defaults to the test classes inside the current file. Since there is no test class in run_all.py, nothing gets executed. So defaultTest must point to a list of all the module names.
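That said, importlib can produce the missing name: importlib.import_module returns the module object, and binding it into globals() under its own name gives exactly what exec('import %s' % module) creates. A sketch, using a standard-library module as a stand-in for the validate_* files:

```python
import importlib

module_names = ['json']  # stand-in for the discovered validate_* modules

for name in module_names:
    # Equivalent of exec('import %s' % name): import the module and
    # bind it under its own name in this file's global namespace.
    globals()[name] = importlib.import_module(name)

# `json` is now usable by name, just as after a normal import:
print(json.dumps({'ok': True}))  # {"ok": true}
```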
You can pass command-line arguments to unittest.main using the argv parameter:
The argv argument can be a list of options passed to the program, with
the first element being the program name. If not specified or None,
the values of sys.argv are used. (my emphasis)
So you should be able to use
if __name__ == '__main__':
    unittest.main(defaultTest='suite')
without any change and be able to call your script with command-line arguments as desired.
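In fact, unittest.main can take over the command-line handling entirely: with module=None it resolves dotted names (module, module.Class, module.Class.test_method) and flags like -v itself, just as python3 -m unittest does. A sketch demonstrating this with an in-memory stand-in module (in a real run_tests.py you would simply call unittest.main(module=None) and let it read sys.argv):

```python
import sys
import types
import unittest

# In-memory stand-in for one of the validate_* files (hypothetical name):
fake = types.ModuleType('validate_fake')

class DemoTest(unittest.TestCase):
    def test_ok(self):
        self.assertEqual(1 + 1, 2)

DemoTest.__module__ = 'validate_fake'
fake.DemoTest = DemoTest
sys.modules['validate_fake'] = fake

# module=None makes unittest resolve the dotted name itself, exactly
# like `python3 -m unittest validate_fake.DemoTest.test_ok`:
prog = unittest.main(
    module=None,
    argv=['run_tests.py', 'validate_fake.DemoTest.test_ok'],
    exit=False,  # return instead of calling sys.exit()
)
print(prog.result.wasSuccessful())  # True
```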