I am running the following script inside AWS Lambda:
#!/usr/bin/python
from __future__ import print_function
import json
import os
import ansible.inventory
import ansible.playbook
import ansible.runner
import ansible.constants
from ansible import utils
from ansible import callbacks

print('Loading function')

def run_playbook(**kwargs):
    stats = callbacks.AggregateStats()
    playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
    runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY)

    # use /tmp instead of $HOME
    ansible.constants.DEFAULT_REMOTE_TMP = '/tmp/ansible'

    out = ansible.playbook.PlayBook(
        callbacks=playbook_cb,
        runner_callbacks=runner_cb,
        stats=stats,
        **kwargs
    ).run()
    return out

def lambda_handler(event, context):
    return main()

def main():
    out = run_playbook(
        playbook='little.yml',
        inventory=ansible.inventory.Inventory(['localhost'])
    )
    return out

if __name__ == '__main__':
    main()
However, I get the following error: failed=True msg='boto required for this module'
However, according to this comment (https://github.com/ansible/ansible/issues/5734#issuecomment-33135727), it works. But I don't understand how to specify that in my script. Or can I have a separate hosts file and include it in the script, the same way I call my playbook?
If so, how?
[EDIT - 1]
I have added the line inventory=ansible.inventory.Inventory('hosts'), with the hosts file as:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
But I get this error: /bin/sh: /usr/local/bin/python: No such file or directory
So, where is python located inside AWS Lambda?
I installed boto just like I installed the other packages in the Lambda deployment package: pip install boto -t <folder-name>
The bash command which python will usually give the location of the Python binary. There's an example of how to call a bash script from AWS Lambda here.
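If you just want to see where Python lives inside the Lambda runtime, a minimal sketch (my addition, not from the linked example) is to print sys.executable from a handler:

from __future__ import print_function
import sys

def lambda_handler(event, context):
    # sys.executable is the full path of the interpreter running this handler
    print('Python lives at:', sys.executable)
    return sys.executable

You could then point ansible_python_interpreter in your hosts file at whatever path this prints.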
I'm trying to run this Python task in an Azure pipeline:
- task: PythonScript@0
  inputs:
    scriptSource: 'inline'
    script: |
      import json
      import os
      import requests
      from requests.auth import HTTPBasicAuth

      url = "https://dev.azure.com/{organization}/{project}/_apis/build/builds?definitionId={id}&api-version=6.0"
But it gives me ##[error]Parameter 'toolPath' cannot be null or empty
The toolPath it is asking for, as @msanford said, is the Python interpreter.
# Run a Python file or inline script
- task: PythonScript@0
  inputs:
    #scriptSource: 'filePath' # Options: filePath, inline
    #scriptPath: # Required when scriptSource == filePath
    #script: # Required when scriptSource == inline
    #arguments: # Optional
    #pythonInterpreter: # Optional
    #workingDirectory: # Optional
    #failOnStderr: false # Optional
You can follow the above syntax and provide pythonInterpreter: /usr/bin/python3; however, the path might be different.
To get the exact path, run your task as Bash@3 and execute any python3 command in the script, for example python3 -m import sys; the resulting error will show the complete interpreter path, and you can use that one.
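Putting that together, a sketch of the task with the interpreter pinned explicitly (the /usr/bin/python3 path is an assumption; confirm it with the Bash@3 trick above):

- task: PythonScript@0
  inputs:
    scriptSource: 'inline'
    pythonInterpreter: '/usr/bin/python3' # assumed path; verify on your agent
    script: |
      import sys
      print(sys.executable)  # prints the interpreter actually used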
I'm working on a project where I would like to run the same script with two different software APIs.
What I have:
- One module for each software, with the same class and method names.
- One construction script where I need to call these classes and methods.
I would like to avoid duplicating the construction code, and instead run the same bit of code just by changing the imported module.
Example:
first_software_module.py
import first_software_api

class Box:
    def __init__(self, x, y, z):
        first_software_api.makeBox()

second_software_module.py
import second_software_api

class Box:
    def __init__(self, x, y, z):
        second_software_api.makeBox()
construction.py
first_box = Box(1,2,3)
And I would like to run construction.py with the first module, then with the second module.
I tried imports and execfile, but none of these solutions seems to work.
What I would like to do:
import first_software_module
run construction.py
import second_software_module
run construction.py
You could try passing a command-line argument to construction.py.
construction.py
import sys

if len(sys.argv) != 2:
    sys.stderr.write('Usage: python3 construction.py <module>\n')
    exit(1)

if sys.argv[1] == 'first_software_module':
    from first_software_module import Box
elif sys.argv[1] == 'second_software_module':
    from second_software_module import Box

box = Box(1, 2, 3)
You could then call construction.py with each import type from a shell script, say main.sh.
main.sh
#! /bin/bash
python3 construction.py first_software_module
python3 construction.py second_software_module
Make the shell script executable using chmod +x main.sh. Run it as ./main.sh.
Alternatively, if you do not want to use a shell script, and want to do it in pure Python, you could do the following:
main.py
import subprocess
subprocess.run(['python3', 'construction.py', 'first_software_module'])
subprocess.run(['python3', 'construction.py', 'second_software_module'])
and run main.py as you normally would using python3 main.py.
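Another option (a sketch of an alternative, not part of the answers above) is to resolve the module dynamically with importlib.import_module, assuming both modules expose an identical Box class:

import importlib
import sys

# e.g. python3 construction.py first_software_module
backend = importlib.import_module(sys.argv[1])

# both backends are assumed to expose the same Box class
box = backend.Box(1, 2, 3)

This avoids growing an if-elif ladder every time you add a new backend module.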
You can pass a command-line argument that will tell your script which module to import. There are many ways to do this, but I'm going to demonstrate with the argparse module:
import argparse
parser = argparse.ArgumentParser(description='Run the construction')
parser.add_argument('--module', nargs=1, type=str, required=True, help='The module to use for the construction', choices=['module1', 'module2'])
args = parser.parse_args()
Now, args.module will be a one-element list containing the argument you passed. Use this string with an if-elif ladder (or the match-case syntax in 3.10+) to import the correct module, aliasing it as (let's say) driver:
if args.module[0] == "module1":
    import first_software_api as driver
    print("Using first_software_api")
elif args.module[0] == "module2":
    import second_software_api as driver
    print("Using second_software_api")
Then, use driver in your Box class:
class Box:
    def __init__(self, x, y, z):
        driver.makeBox()
Say we had this in a file called construct.py. Running python3 construct.py --help gives:
usage: construct.py [-h] --module {module1,module2}

Run the construction

optional arguments:
  -h, --help            show this help message and exit
  --module {module1,module2}
                        The module to use for the construction
Running python3 construct.py --module module1 gives:
Using first_software_api
I have created a simple HTTP-trigger-based Azure Function in Python which calls another Python script to create a sample file in Azure Data Lake Gen1. My solution structure is given below:
requirements.txt contains the following packages:
azure-functions
azure-mgmt-resource
azure-mgmt-datalake-store
azure-datalake-store
__init__.py
import logging, os, sys
import azure.functions as func
import json

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')
    if name:
        full_path_to_script = os.path.join(os.path.dirname(__file__) + '/Test.py')
        logging.info(f"Path: - {full_path_to_script}")
        os.system(f"python {full_path_to_script}")
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
Test.py
import json
from azure.datalake.store import core, lib, multithread

directoryId = ''
applicationKey = ''
applicationId = ''

adlsCredentials = lib.auth(tenant_id=directoryId, client_secret=applicationKey, client_id=applicationId)
adlsClient = core.AzureDLFileSystem(adlsCredentials, store_name='')

with adlsClient.open('stage1/largeFiles/TestFile.json', 'rb') as input_file:
    data = json.load(input_file)

with adlsClient.open('stage1/largeFiles/Result.json', 'wb') as responseFile:
    responseFile.write(data)
Test.py fails with an error: no module named azure.datalake.store.
Why are the other required modules not working for Test.py, since it is in the same directory?
pip freeze output:
adal==1.2.2
azure-common==1.1.23
azure-datalake-store==0.0.48
azure-functions==1.0.4
azure-mgmt-datalake-nspkg==3.0.1
azure-mgmt-datalake-store==0.5.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-resource==6.0.0
azure-nspkg==3.0.2
certifi==2019.9.11
cffi==1.13.2
chardet==3.0.4
cryptography==2.8
idna==2.8
isodate==0.6.0
msrest==0.6.10
msrestazure==0.6.2
oauthlib==3.1.0
pycparser==2.19
PyJWT==1.7.1
python-dateutil==2.8.1
requests==2.22.0
requests-oauthlib==1.3.0
six==1.13.0
urllib3==1.25.6
Problem
os.system(f"python {full_path_to_script}") from your functions project is causing the issue.
Azure Functions Runtime sets up the environment, modifying process-level state such as sys.path so that your function can load its dependencies. When you spawn a sub-process like that, not all of that information flows through. Additionally, you will face issues with logging: logs from Test.py would not show up properly unless explicitly handled.
Importing works locally because you have all your requirements.txt modules installed and available to Test.py. This is not the case in Azure. After the remote build that happens as part of publish, your modules are included in the code package that gets published; they are not "installed" globally in the Azure environment per se.
Solution
You shouldn't have to run your script like that. In the example above, you could import your Test.py from your __init__.py file, and that should behave as if it were run with python test.py (at least in the case above). Is there a reason you'd want to run python test.py in a sub-process instead of importing it?
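For example, a minimal sketch of the import-based approach (it assumes you wrap Test.py's body in a run() function, which your current code doesn't have):

# __init__.py
import azure.functions as func
from . import Test  # same-folder import, per the folder-structure guide linked below

def main(req: func.HttpRequest) -> func.HttpResponse:
    Test.run()  # hypothetical entry point wrapping Test.py's current body
    return func.HttpResponse("Done")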
Here's the official guide on how you'd want to structure your app to import shared code -- https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#folder-structure
Side-Note
I think once you get through the import issue, you may also face problems with adlsClient.open('stage1/largeFiles/TestFile.json', 'rb'). We recommend following the developer guide above to structure your project and using __file__ to get the absolute path (reference).
For example --
import pathlib

with open(pathlib.Path(__file__).parent / 'stage1' / 'largeFiles' / 'TestFile.json'):
    ...
Now, if you really want to make os.system(f"python {full_path_to_script}") work, there are workarounds to the import issue. But I'd rather not recommend such an approach unless you have a really compelling need for it. :)
I have a simple Python script that uses the Elasticsearch module "curator" to make snapshots.
I've tested my code locally and it works.
Now I want to run it in AWS Lambda, but I get this error:
Unable to import module 'lambda_function': No module named 'error'
Here is how I proceeded:
I manually created a Lambda and gave it the "AISA-BasicLambdaExecutionRole" role. Then I created my package with my function and the dependencies, which I installed with the command:
pip install elasticsearch-curator -t /<path>/myRepository
I zipped the content (not the folder) and uploaded it to my Lambda.
I changed the handler name to "lambda_function.lambda_handler" (my function's file is "lambda_function.py").
Did I miss something? This is my first time working with Lambda and Python.
I've seen the other questions about this error :
"errorMessage": "Unable to import module 'lambda_function'"
But nothing works for me.
EDIT:
Here is my lambda_function :
from __future__ import print_function
import curator
import time
from curator.exceptions import NoIndices
from elasticsearch import Elasticsearch

def lambda_handler(event, context):
    es = Elasticsearch()
    index_list = curator.IndexList(es)
    index_list.filter_by_regex(kind='prefix', value="logstash-")
    number = 1
    try:
        while number <= 3:
            name = "snapshotLmbd_n_" + str(number)
            curator.Snapshot(index_list, repository="s3-backup", name=name, wait_for_completion=True).do_action()
            number += 1
            print('Just taking a nap! Will be back soon.')
            time.sleep(30)
    except KeyboardInterrupt:
        print('My bad! I interrupted this.')
        return
Thank you for your time.
OK, since you have everything else correct, check the permissions of the Python script.
It must have executable permissions (755).
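For example (a sketch; run it inside the folder whose contents you zip, with the file name taken from your description):

chmod 755 lambda_function.py
zip -r lambda_package.zip .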
I'm new to Python. This is my first Ansible module, written to delete a SimpleDB domain as part of ChaosMonkey cleanup.
When I test it in my local venv on Mac OS X, it keeps saying:
Module unable to decode valid JSON on stdin. Unable to figure out
what parameters were passed.
Here is the code:
#!/usr/bin/python
# Delete SimpleDB Domain
from ansible.module_utils.basic import *
import boto3

def delete_sdb_domain():
    fields = dict(
        sdb_domain_name=dict(required=True, type='str')
    )
    module = AnsibleModule(argument_spec=fields)
    client = boto3.client('sdb')
    response = client.delete_domain(DomainName=module.params['sdb_domain_name'])
    module.exit_json(changed=False, meta=response)

def main():
    delete_sdb_domain()

if __name__ == '__main__':
    main()
I'm trying to pass in parameters from the file /tmp/args.json, running the following command for the local test:
$ python ./delete_sdb_domain.py /tmp/args.json
Please note I'm using a venv test environment on my Mac.
If you find any syntax error in my module, please also point it out.
This is not how you should test your modules.
AnsibleModule expects specific JSON on stdin.
So the closest thing you can try is:
python ./delete_sdb_domain.py < /tmp/args.json
But I bet your JSON file is in the wrong format (no ANSIBLE_MODULE_ARGS key, etc.).
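For reference, the stdin payload AnsibleModule expects looks roughly like this (a sketch; exact contents vary by Ansible version):

{
    "ANSIBLE_MODULE_ARGS": {
        "sdb_domain_name": "zzz"
    }
}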
To debug your modules you can use the test-module script from the Ansible hacking pack:
./hacking/test-module -m delete_sdb_domain.py -a "sdb_domain_name=zzz"