Python task in azure - python

I'm trying to run this Python task in an Azure pipeline:
- task: PythonScript@0
  inputs:
    scriptSource: 'inline'
    script: |
      import json
      import os
      import requests
      from requests.auth import HTTPBasicAuth

      url = "https://dev.azure.com/{organization}/{project}/_apis/build/builds?definitionId={id}&api-version=6.0"
But it gives me ##[error]Parameter 'toolPath' cannot be null or empty

The toolPath it is asking for, as @msanford said, is the Python interpreter.
# Run a Python file or inline script
- task: PythonScript@0
  inputs:
    #scriptSource: 'filePath' # Options: filePath, inline
    #scriptPath: # Required when scriptSource == filePath
    #script: # Required when scriptSource == inline
    #arguments: # Optional
    #pythonInterpreter: # Optional
    #workingDirectory: # Optional
    #failOnStderr: false # Optional
You can follow the above syntax and provide pythonInterpreter: /usr/bin/python3; however, the path might be different on your agent.
To get the exact path, run your task as Bash@3 and execute any python3 command in its script, for example python3 -m import sys; the resulting error message includes the complete interpreter path, and you can use that one.
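Putting it together, a minimal sketch of the task with an explicit interpreter (the path below is only an example; substitute whatever path your agent reports):
- task: PythonScript@0
  inputs:
    scriptSource: 'inline'
    pythonInterpreter: '/usr/bin/python3'  # example path; use the one your agent reports
    script: |
      import sys
      print(sys.executable)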

Related

How to run the same python script with different module import?

I'm working on a project where I would like to run the same script with two different software APIs.
What I have:
- One module for each software, with the same class and method names.
- One construction script where I need to call these classes and methods.
I would like not to duplicate the construction code, but rather run the same bit of code just by changing the imported module.
Example:
first_software_module.py

import first_software_api

class Box:
    def __init__(self, x, y, z):
        first_software_api.makeBox()

second_software_module.py

import second_software_api

class Box:
    def __init__(self, x, y, z):
        second_software_api.makeBox()

construction.py

first_box = Box(1, 2, 3)
And I would like to run construction.py with the first module, then with the second module.
I tried with imports and execfile, but none of these solutions seems to work.
What I would like to do:
import first_software_module
run construction.py
import second_software_module
run construction.py
You could try passing a command-line argument to construction.py.
construction.py

import sys

if len(sys.argv) != 2:
    sys.stderr.write('Usage: python3 construction.py <module>\n')
    exit(1)

# Import Box from the requested module so the construction code itself stays unchanged.
if sys.argv[1] == 'first_software_module':
    from first_software_module import Box
elif sys.argv[1] == 'second_software_module':
    from second_software_module import Box

box = Box(1, 2, 3)
You could then call construction.py with each import type from a shell script, say main.sh.
main.sh
#! /bin/bash
python3 construction.py first_software_module
python3 construction.py second_software_module
Make the shell script executable using chmod +x main.sh. Run it as ./main.sh.
Alternatively, if you do not want to use a shell script, and want to do it in pure Python, you could do the following:
main.py
import subprocess
subprocess.run(['python3', 'construction.py', 'first_software_module'])
subprocess.run(['python3', 'construction.py', 'second_software_module'])
and run main.py as you normally would using python3 main.py.
You can pass a command-line argument that will tell your script which module to import. There are many ways to do this, but I'm going to demonstrate with the argparse module.
import argparse
parser = argparse.ArgumentParser(description='Run the construction')
parser.add_argument('--module', nargs=1, type=str, required=True, help='The module to use for the construction', choices=['module1', 'module2'])
args = parser.parse_args()
Now, args.module will contain the argument you passed (as a one-element list, because of nargs=1). Use this string with an if-elif ladder (or the match-case syntax in 3.10+) to import the correct module, and alias it as (let's say) driver.
if args.module[0] == "module1":
    import first_software_api as driver
    print("Using first_software_api")
elif args.module[0] == "module2":
    import second_software_api as driver
    print("Using second_software_api")
Then, use driver in your Box class:
class Box:
    def __init__(self, x, y, z):
        driver.makeBox()
Say we had this in a file called construct.py. Running python3 construct.py --help gives:
usage: construct.py [-h] --module {module1,module2}

Run the construction

optional arguments:
  -h, --help            show this help message and exit
  --module {module1,module2}
                        The module to use for the construction
Running python3 construct.py --module module1 gives:
Using first_software_api

how to log hydra's multi-run in mlflow

I am trying to manage the results of machine learning experiments with mlflow and hydra, so I tried running things using the multi-run feature of hydra. I used the following code as a test:
import mlflow
import hydra
from hydra import utils
from pathlib import Path
import time

@hydra.main('config.yaml')
def main(cfg):
    print(cfg)
    mlflow.set_tracking_uri('file://' + utils.get_original_cwd() + '/mlruns')
    mlflow.set_experiment(cfg.experiment_name)
    mlflow.log_param('param1', 5)
    # mlflow.log_param('param1', 5)
    # mlflow.log_param('param1', 5)
    with mlflow.start_run():
        mlflow.log_artifact(Path.cwd() / '.hydra/config.yaml')

if __name__ == '__main__':
    main()
This code does not work. I got the following error:
Exception: Run with UUID [RUNID] is already active. To start a new run, first end the current run with mlflow.end_run(). To start a nested run, call start_run with nested=True
So I modified the code as follows:
import mlflow
import hydra
from hydra import utils
from pathlib import Path
import time

@hydra.main('config.yaml')
def main(cfg):
    print(cfg)
    mlflow.set_tracking_uri('file://' + utils.get_original_cwd() + '/mlruns')
    mlflow.set_experiment(cfg.experiment_name)
    mlflow.log_param('param1', 5)
    # mlflow.log_param('param1', 5)
    # mlflow.log_param('param1', 5)
    with mlflow.start_run(nested=True):
        mlflow.log_artifact(Path.cwd() / '.hydra/config.yaml')

if __name__ == '__main__':
    main()
This code works, but the artifact is not saved. I made the following change so that the artifacts are saved:
import mlflow
import hydra
from hydra import utils
from pathlib import Path
import time

@hydra.main('config.yaml')
def main(cfg):
    print(cfg)
    mlflow.set_tracking_uri('file://' + utils.get_original_cwd() + '/mlruns')
    mlflow.set_experiment(cfg.experiment_name)
    mlflow.log_param('param1', 5)
    # mlflow.log_param('param1', 5)
    # mlflow.log_param('param1', 5)
    mlflow.log_artifact(Path.cwd() / '.hydra/config.yaml')

if __name__ == '__main__':
    main()
As a result, the artifacts are now saved.
However, when I run the following command:
python test.py model=A,B hidden=12,212,31 -m
only the artifact of the last execution condition is saved.
How can I modify mlflow to manage the parameters of the experiment by taking advantage of the multirun feature of hydra?
MLFlow is not officially supported by Hydra. At some point there will be a plugin that will make this smoother.
Looking at the errors you are reporting (and without running your code):
One thing that you can try is to use the Joblib launcher plugin to get job isolation through processes (this requires Hydra 1.0.0rc1 or newer).
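A minimal sketch of that, assuming the plugin is published as hydra-joblib-launcher and enabled via the hydra/launcher=joblib override (check the Hydra plugins documentation for your version):

pip install hydra-joblib-launcher
python test.py model=A,B hidden=12,212,31 hydra/launcher=joblib -m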
What you are observing is due to the interaction between MLFlow and Hydra. As far as MLflow can tell, all of your Hydra multiruns are the same MLflow run!
Since both frameworks use the term "run", I will need to be verbose in the following text. Please bear with me.
If you didn't explicitly start an MLflow run, MLflow will start one for you when you call mlflow.log_params or mlflow.log_artifacts. Within a Hydra multirun context, it appears that instead of a new MLflow run being created for each Hydra run, the MLflow run from the first Hydra run is inherited by the later ones. This is why you get an error where MLflow thinks you are trying to update parameter values in logging: mlflow.exceptions.MlflowException: Changing param values is not allowed.
You can fix this by wrapping your MLFlow logging code within a with mlflow.start_run() context manager:
import mlflow
import hydra
from hydra import utils
from pathlib import Path

@hydra.main(config_path="", config_name='config.yaml')
def main(cfg):
    print(cfg)
    mlflow.set_tracking_uri('file://' + utils.get_original_cwd() + '/mlruns')
    mlflow.set_experiment(cfg.experiment_name)
    with mlflow.start_run() as run:
        mlflow.log_params(cfg)
        mlflow.log_artifact(Path.cwd() / '.hydra/config.yaml')
        print(run.info.run_id)  # just to show each run is different

if __name__ == '__main__':
    main()
The context manager will start and end MLflow runs properly, preventing the issue from occurring.
Alternatively, you can also start and end an MLFlow run manually:
activerun = mlflow.start_run()
mlflow.log_params(cfg)
mlflow.log_artifact(Path.cwd() / '.hydra/config.yaml')
print(activerun.info.run_id) # just to show each run is different
mlflow.end_run()
This is related to the way you defined your MLflow run. You call log_params and then start_run, so you have two concurrent mlflow runs, which explains the error. You could try getting rid of the following line in your first code sample and see what happens:
mlflow.log_param('param1',5)

Nagios check giving error and not the output I expect

I have a Python script which gives correct output when run locally, but when I run it as a Nagios check it gives errors.
Code:
#!/usr/bin/env python
import pandas as pd
df = pd.read_csv("...")
print(df)
Nagios configuration:
inside localhost.cfg

define service {
    use                   local-service
    host_name             localhost
    service_description   active edges
    check_command         check_edges
}
inside commands.cfg

define command {
    command_name    check_edges
    command_line    $USER1$/check_edges.py $HOSTADDRESS$
}
Error:

(No output on stdout) stderr:
Traceback (most recent call last):
  File "/usr/local/nagios/libexec/check_edges.py", line 3, in <module>
    import pandas as pd
ImportError: No module named pandas
Please give as many details as possible to solve this problem.

pip show python gives:
Location: /usr/lib/python2.7/lib-dynload

pip show pandas gives:
Location: /home/nwvepops01/.local/lib/python2.7/site-packages
As the user, from the shell, check which Python instance is being used, for example with this command:
env python
Modify the script and, on the first line, replace this:
#!/usr/bin/env python
with the absolute path of the Python executable.
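For example, if env python resolves to /usr/bin/python2.7 (an assumption; your path will likely differ), the check script would start like this:

#!/usr/bin/python2.7
# the interpreter path above is only an example taken from `env python`; use the one from your system
import pandas as pd
df = pd.read_csv("...")
print(df)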

ansible: local test new module with Error:Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed

I'm new to Python. This is my first Ansible module; it is meant to delete a SimpleDB domain as part of Chaos Monkey cleanup.
When tested in my local venv on Mac OS X, it keeps saying:
Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed.
Here is the code:
#!/usr/bin/python
# Delete SimpleDB Domain
from ansible.module_utils.basic import *
import boto3

def delete_sdb_domain():
    fields = dict(
        sdb_domain_name=dict(required=True, type='str')
    )
    module = AnsibleModule(argument_spec=fields)
    client = boto3.client('sdb')
    response = client.delete_domain(DomainName=module.params['sdb_domain_name'])
    module.exit_json(changed=False, meta=response)

def main():
    delete_sdb_domain()

if __name__ == '__main__':
    main()
I'm trying to pass in parameters from this file: /tmp/args.json,
and I run the following command to make the local test:
$ python ./delete_sdb_domain.py /tmp/args.json
Please note I'm using a venv test environment on my Mac.
If you find any syntax errors in my module, please also point them out.
This is not how you should test your modules.
AnsibleModule expects to have specific JSON as stdin data.
So the closest thing you can try is:
python ./delete_sdb_domain.py < /tmp/args.json
But I bet you have your JSON file in the wrong format (no ANSIBLE_MODULE_ARGS key, etc.).
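For reference, a minimal /tmp/args.json in the format AnsibleModule expects would look roughly like this (the domain name is just a placeholder):

{
    "ANSIBLE_MODULE_ARGS": {
        "sdb_domain_name": "example-domain"
    }
}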
To debug your modules you can use test-module script from Ansible hacking pack:
./hacking/test-module -m delete_sdb_domain.py -a "sdb_domain_name=zzz"

"boto required for this module" error Ansible

I am running the following script inside AWS Lambda:
#!/usr/bin/python
from __future__ import print_function
import json
import os
import ansible.inventory
import ansible.playbook
import ansible.runner
import ansible.constants
from ansible import utils
from ansible import callbacks

print('Loading function')

def run_playbook(**kwargs):
    stats = callbacks.AggregateStats()
    playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
    runner_cb = callbacks.PlaybookRunnerCallbacks(
        stats, verbose=utils.VERBOSITY)

    # use /tmp instead of $HOME
    ansible.constants.DEFAULT_REMOTE_TMP = '/tmp/ansible'

    out = ansible.playbook.PlayBook(
        callbacks=playbook_cb,
        runner_callbacks=runner_cb,
        stats=stats,
        **kwargs
    ).run()
    return out

def lambda_handler(event, context):
    return main()

def main():
    out = run_playbook(
        playbook='little.yml',
        inventory=ansible.inventory.Inventory(['localhost'])
    )
    return out

if __name__ == '__main__':
    main()
However, I get the following error: failed=True msg='boto required for this module'
However, according to this comment (https://github.com/ansible/ansible/issues/5734#issuecomment-33135727), it works.
But I don't understand how to specify that in my script. Or can I have a separate hosts file and include it in the script, the way I call my playbook?
If so, then how?
[EDIT - 1]
I have added the line inventory=ansible.inventory.Inventory('hosts')
with hosts file as:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
But, I get this error: /bin/sh: /usr/local/bin/python: No such file or directory
So, where is python located inside AWS Lambda?
I installed boto just like I installed other packages in the Lambda's deployment package: pip install boto -t <folder-name>
The bash command which python will usually give the location of the Python binary. There's an example of how to call a bash script from AWS Lambda here.
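If you would rather check from inside the function itself, a small sketch using only the standard library (the printed path appears in the CloudWatch logs and is the value you would use for ansible_python_interpreter):

import sys

def lambda_handler(event, context):
    # Log the interpreter the Lambda runtime is actually running this code with.
    print(sys.executable)
    return sys.executable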
