I have installed a self-hosted agent on my local VM; it is connected to Azure DevOps, no issues there.
I have Python code in an Azure DevOps repository.
I have installed everything from requirements.txt manually from the command line on the local VM, so that the self-hosted agent doesn't have to install it on every run (to minimize build and deployment time).
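Concretely, that was something along the lines of the following, run directly on the VM:

pip install -r requirements.txt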
But when I add the code below to the YAML file to run the pytest cases, the pipeline fails with the error shown further down.
This is my YAML file:
trigger:
- master

variables:
  python.version: 3.8.6

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      name: 'MaitQA'
    # pool:
    #   vmImage: 'windows-latest' # windows-latest or windows-2019 / vs2017-win2016; see https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#software
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: '$(python.version)'
      displayName: 'Use Python $(python.version)'
    - script: 'pip install pytest pytest-azurepipelines ; pytest unit_test/'
This is the error:
Starting: Use Python 3.8.6
==============================================================================
Task         : Use Python version
Description  : Use the specified version of Python from the tool cache, optionally adding it to the PATH
Version      : 0.151.4
Author       : Microsoft Corporation
Help         : https://learn.microsoft.com/azure/devops/pipelines/tasks/tool/use-python-version
==============================================================================
##[error]Version spec 3.8.6 for architecture x64 did not match any version in Agent.ToolsDirectory.
Versions in C:\CodeVersions_tool:
If this is a Microsoft-hosted agent, check that this image supports side-by-side versions of Python at https://aka.ms/hosted-agent-software. If this is a self-hosted agent, see how to configure side-by-side Python versions at https://go.microsoft.com/fwlink/?linkid=871498.
Finishing: Use Python 3.8.6
This error refers to Python not being in the agent tools directory, and therefore unavailable to the agent.
Here are (incomplete) details for setting up the tools directory with Python:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/use-python-version?view=azure-devops#how-can-i-configure-a-self-hosted-agent-to-use-this-task
The mystery in the above documentation is: what are the 'tool_files' it refers to?
Thankfully, jrm346 on GitHub went through the source code to work it out; for Linux you need to compile Python from source and reconfigure the target directory:
https://github.com/microsoft/azure-pipelines-tasks/issues/10721
For Python 3.8:
Create the needed directory structure under the agent's tools directory:
Python
└── 3.8.0
    ├── x64
    └── x64.complete
Then compile Python 3.8.6 following the instructions below, with one small addition: just after the './configure --enable-optimizations' of step 4, run './configure --prefix=/home/azure/_work/_tool/Python/3.8.0/x64', replacing '/home/azure/_work/_tool' with your agent's tools directory location:
https://linuxize.com/post/how-to-install-python-3-8-on-ubuntu-18-04/
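Putting the linked guide and the issue together, a minimal sketch of the whole procedure looks like this (an outline under assumptions, not verbatim from the docs; adjust the tools-directory path and versions to your agent, and note that the task matches your versionSpec against the version folder's name, so with versionSpec '3.8.6' the folder arguably needs to be named 3.8.6 rather than 3.8.0):

# Sketch: build Python into the agent's tool cache on Linux.
AGENT_TOOLSDIRECTORY=/home/azure/_work/_tool      # your agent's tools directory
mkdir -p "$AGENT_TOOLSDIRECTORY/Python/3.8.0"
wget https://www.python.org/ftp/python/3.8.6/Python-3.8.6.tgz
tar -xf Python-3.8.6.tgz && cd Python-3.8.6
# Step 4 of the guide, with the --prefix addition folded into the same configure call.
./configure --enable-optimizations --prefix="$AGENT_TOOLSDIRECTORY/Python/3.8.0/x64"
make -j "$(nproc)"
make altinstall
# The empty marker file tells the task that this version's install is complete.
touch "$AGENT_TOOLSDIRECTORY/Python/3.8.0/x64.complete"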
Did you follow "How can I configure a self-hosted agent to use this task?" from the task documentation?
The desired Python version has to be added to the tool cache on the self-hosted agent in order for the task to use it. Normally the tool cache is located under the _work/_tool directory of the agent; the path can also be overridden by the environment variable AGENT_TOOLSDIRECTORY. Under that directory, create a directory structure based on your Python version, like the one shown above: a version folder containing an x64 installation directory plus an empty x64.complete marker file.
Adding to @Krzysztof Madej's suggestion, you can also try restarting the self-hosted agent service.
Related
I apologize in advance. I have a task to create a CI pipeline in GitLab for Python-language projects, with the results in SonarQube. I found this gitlab-ci.yml file:
image: image-registry/gitlab/python

before_script:
  - cd ..
  - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab/python-education/junior.git

stages:
  - PyLint

pylint:
  stage: PyLint
  only:
    - merge_requests
  script:
    - cp -R ${CI_PROJECT_NAME}/* junior/project
    - cd junior && python3 run.py --monorepo
Is it possible to add something to the script so that the output goes to SonarQube?
Yes, third-party issues are supported in SonarQube. For PyLint, you can set sonar.python.pylint.reportPath in your sonar-project.properties file to the path of the pylint report(s). You must pass the --output-format=parseable argument to pylint.
When you run the sonar scanner, it will collect the report(s) and send them to SonarQube.
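For example, a minimal setup might look like this (file and directory names here are illustrative):

# Produce a parseable pylint report; pylint exits non-zero when it finds
# issues, so keep that from failing the job.
pylint --output-format=parseable src/ > pylint-report.txt || true

# sonar-project.properties then points at the report, e.g.:
#   sonar.python.pylint.reportPath=pylint-report.txt

# The scanner collects the report and sends it to SonarQube.
sonar-scanner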
In my organization we maintain an internal mirrored Anaconda repository containing packages that our users requested. The purpose is to exclude certain packages that may pose a security risk, and all users in our organization connect to this internal Anaconda repository to download and install packages instead of the official Anaconda repo site. We have a script that runs regularly to update the repository using the conda-mirror command:
conda-mirror --config [config.yml file] --num-threads 1 --platform [platform] --temp-directory [directory] --upstream-channel [channel] --target-directory [directory]
The config.yml file is set up like this:
blacklist:
  - name: '*'

channel_alias: https://repo.continuum.io/pkgs/

channels:
  - https://conda.anaconda.org/conda-forge
  - free
  - main
  - msys2
  - r

repo-build:
  dependencies: true

platforms:
  - noarch
  - win-64
  - linux-64

root-dir: \\root-drive.net\repo

whitelist:
  - name: package1
So the logic of this config file is to blacklist all packages except the ones listed under whitelist. However, the problem I'm having is this: if a user requests package x to be added to the repository and I add package x under the whitelist, it only downloads package x into the repository and not its dependencies. I've checked the documentation on conda-mirror and its configuration file and can't find anything about automatically mirroring a package together with all of its dependencies. Is there a way to do this automatically?
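For illustration, the only approach that seems compatible with this config format is to whitelist each dependency by hand (package names below are hypothetical):

whitelist:
  - name: package1
  - name: package1-dependency-a   # hypothetical: every dependency listed explicitly
  - name: package1-dependency-b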
When attempting to initialise a new cdk project in a Windows WSL environment, I was confronted with the following error:
cdk init test-app --language python
Usage:
cdk [-vbo] [--toc] [--notransition] [--logo=<logo>] [--theme=<theme>] [--custom-css=<cssfile>] FILE
cdk --install-theme=<theme>
cdk --default-theme=<theme>
cdk --generate=<name>
I first wanted to check that the install was still correct, but the version number is not displayed:
cdk --version
cdk
All advice online and on Stack Overflow suggests re-installing as the root user. I attempted a global install as root, followed by a restart:
sudo npm install -g aws-cdk
Checking the globally installed versions lists the following, showing the update has taken effect globally:
npm list -g --depth=0 | grep cdk
├── aws-cdk@2.15.0
├── cdk-assume-role-credential-plugin@1.4.0
but the error remains the same. Running the which command confirms it is following the correct user path:
which cdk
/home/user/.local/bin/cdk
This is a new error and I am unable to pinpoint any particular change that could have caused this. I have been able to initialise cdk projects in empty directories before without issue.
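One diagnostic worth running (a sketch using the paths from the output above) is to check what the cdk on PATH actually is, since the usage text shown does not look like aws-cdk's:

which -a cdk                          # every cdk on PATH, in resolution order
head -n 1 /home/user/.local/bin/cdk   # a Python shebang here would mean this is a different 'cdk' entirely
ls "$(npm root -g)" | grep cdk        # where npm put the global aws-cdk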
I'm very new to DevOps, so this may be a very silly question. I'm trying to deploy a Python web-scraping script to an Azure web app using GitHub Actions. The script is meant to run for a long time, analyzing websites word by word for hours, and it logs the results to .log files.
I know a bit about how GitHub Actions works; I know that I can trigger jobs when I push to the repo, for instance. However, I'm a bit confused about how one runs an app or a script on an Azure resource (like a VM or web app). Does this process involve SSH-ing into the resource and then automatically running the CLI command "python main.py" or "docker-compose up", or is there something more sophisticated involved?
For better context, this is my workflow file inside of my workflows folder:
on: [push]

env:
  AZURE_WEBAPP_NAME: emotional-news-service        # set this to your application's name
  WORKING_DIRECTORY: '.'                           # path of the working directory inside the GitHub repository; defaults to the repository root
  PYTHON_VERSION: '3.9'
  STARTUP_COMMAND: 'docker-compose up --build -d'  # set this to the startup command required to start the server; by default it is empty

name: Build and deploy Python app

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    environment: dev
    steps:
      # checkout the repo
      - uses: actions/checkout@master
      # setup python
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: ${{ env.PYTHON_VERSION }}
      # setup docker compose
      - uses: KengoTODA/actions-setup-docker-compose@main
        with:
          version: '1.26.2'
      # install dependencies
      - name: python install
        working-directory: ${{ env.WORKING_DIRECTORY }}
        run: |
          sudo apt install python${{ env.PYTHON_VERSION }}-venv
          python -m venv --copies antenv
          source antenv/bin/activate
          pip install setuptools
          pip install -r requirements.txt
          python -m spacy download en_core_web_md
      # Azure login
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/appservice-settings@v1
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          mask-inputs: false
          general-settings-json: '{"linuxFxVersion": "PYTHON|${{ env.PYTHON_VERSION }}"}'  # general configuration settings as key-value pairs
      # deploy web app
      - uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          package: ${{ env.WORKING_DIRECTORY }}
          startup-command: ${{ env.STARTUP_COMMAND }}
      # Azure logout
      - name: logout
        run: |
          az logout
Most of the script above was taken from https://github.com/Azure/actions-workflow-samples/blob/master/AppService/python-webapp-on-azure.yml.
Is env.STARTUP_COMMAND the "SSH in and then run the command" part that I was thinking of, or is it something else entirely?
I also have another question: is there a better way to view the logs from that Python script while it runs on the Azure resource? The only way I can think of is to SSH into it and then run "cat whatever.log".
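As an aside on the logs question: assuming the app runs in Azure App Service, its log stream can usually be tailed without SSH-ing in, for example with the Azure CLI (resource names below are placeholders):

az webapp log tail --name <app-name> --resource-group <resource-group>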
Thanks in advance!
Instead of using STARTUP_COMMAND: 'docker-compose up --build -d', you can use the startup file name.
startUpCommand: 'gunicorn --bind=0.0.0.0 --workers=4 startup:app'
or
StartupCommand: 'startup.txt'
The StartupCommand parameter above points gunicorn at the app object defined in the startup.py file. By default, Azure App Service looks for the Flask app object in a file named app.py or application.py; if your code doesn't follow this pattern, you need to customize the startup command. Django apps may not need any customization at all. For more information, see "How to configure Python on Azure App Service - Customize startup command".
Also, because the python-vscode-flask-tutorial repository contains the same startup command in a file named startup.txt, you can specify that file in the StartupCommand parameter rather than the command itself, using StartupCommand: 'startup.txt'.
Refer to the documentation above for more info.
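If you would rather set the startup command outside the workflow, one option (a sketch; resource names are placeholders) is the Azure CLI:

az webapp config set --resource-group <resource-group> --name <app-name> --startup-file "startup.txt"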
I installed ansible and apache-libcloud with pip. Also, I can use the gcloud CLI, and ansible works for any non-GCE-related playbooks.
When using the gce module as a task to create instances in an ansible playbook, the following error occurs:
TASK: [Launch instances] ******************************************************
<127.0.0.1> REMOTE_MODULE gce instance_names=mm2 machine_type=f1-micro image=ubuntu-1204-precise-v20150625 zone=europe-west1-d service_account_email= pem_file=../pkey.pem project_id=fancystuff-11
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && echo $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889']
<127.0.0.1> PUT /var/folders/v4/ll0_f8lj7yl7yghb645h95q9ckfc19/T/tmpyDoPt9 TO /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce
<127.0.0.1> EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce; rm -rf /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/ >/dev/null 2>&1']
failed: [localhost -> 127.0.0.1] => {"failed": true, "parsed": false}
failed=True msg='libcloud with GCE support (0.13.3+) required for this module'
FATAL: all hosts have already failed -- aborting
And the site.yml of the playbook I wrote:
- name: Create a sandbox instance
  hosts: localhost
  vars:
    names: mm2
    machine_type: f1-micro
    image: ubuntu-1204-precise-v20150625
    zone: europe-west1-d
    service_account_email: xxx@developer.gserviceaccount.com
    pem_file: ../pkey.pem
    project_id: fancystuff-11
  tasks:
    - name: Launch instances
      local_action: gce instance_names={{ names }} machine_type={{ machine_type }}
                    image={{ image }} zone={{ zone }} service_account_email={{ service_account_email }}
                    pem_file={{ pem_file }} project_id={{ project_id }}
      register: gce
The gce cloud module fails with the error message "libcloud with GCE support (0.13.3+) required for this module".
However, running gce.py from the ansible GitHub repo works. The Python script finds the apache-libcloud library and prints JSON with all running instances. Besides, pip install apache-libcloud states it is installed properly.
Is there anything I am missing like an environment variable that points to the python libraries (PYTHONPATH)?
UPDATE 1:
I included the following task before the gce task:
- name: install libcloud
  pip: name=apache-libcloud

This did not change the behavior or prevent the error message.
UPDATE 2:
I added the following task to inspect the available PYTHONPATH:
- name: Getting PYTHONPATH
  local_action: shell python -c 'import sys; print(":".join(sys.path))'
  register: pythonpath

- debug:
    msg: "PYTHONPATH: {{ pythonpath.stdout }}"
The following is returned:
PYTHONPATH: :/usr/local/lib/python2.7/site-packages/setuptools-17.1.1-py2.7.egg:/usr/local/lib/python2.7/site-packages/pip-7.0.3-py2.7.egg:/usr/local/lib/python2.7/site-packages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/site-packages:/Library/Python/2.7/site-packages
UPDATE 3:
I introduced my own test.py script as a task, executing the same apache-libcloud imports as the gce ansible module. The script imports them just fine!
Setting the PYTHONPATH fixes the issue. For example:
$ export PYTHONPATH=/usr/local/lib/python2.7/site-packages/
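If you don't know the right site-packages path offhand, one way to derive it (a sketch; assumes libcloud is importable by the plain python on your PATH) is:

# Find the site-packages directory that actually contains libcloud and expose it to ansible.
export PYTHONPATH="$(python -c 'import os, libcloud; print(os.path.dirname(os.path.dirname(libcloud.__file__)))')"
ansible-playbook site.yml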
I'm using OS X and I solved this for myself. Short answer: install ansible with pip (rather than e.g. brew).
I inspected the PYTHONPATH that ansible sets at runtime, and it looked like it had nothing to do with my normal system PYTHONPATH. For me, the system PYTHONPATH was empty, and setting it as e.g. mlazarov suggested didn't make any difference. I made ansible print the PYTHONPATH it uses at runtime, and it looked like this:
ok: [localhost] => {
"msg": "PYTHONPATH: :/usr/local/Cellar/ansible/1.9.4/libexec/lib/python2.7/site-packages:/usr/local/Cellar/ansible/1.9.4/libexec/vendor/lib/python2.7/site-packages:/Library/Frameworks/Python.framework/Versions/3.4/lib/python34.zip:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/plat-darwin:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/lib-dynload:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages"
}
So there were only ansible's own site-packages and some strange Python 3 installations (I'm using python 2.7).
Something in this discussion made me think it might be a problem with the ansible installation; mine was installed with brew. I reinstalled it globally with pip (simply running sudo pip install ansible), and that fixed the problem. Now the PYTHONPATH ansible prints looks much better, with my virtualenv python installation at the beginning, and no more "libcloud with GCE support (0.13.3+) required for this module".
I was able to resolve the issue by setting the PYTHONPATH environment variable (export PYTHONPATH=/path/to/site-packages) to the current site-packages folder. Apparently, ansible establishes its own environment during module execution and ignores any paths available to Python except those given in the PYTHONPATH environment variable.
I find this peculiar behavior, which is not documented on the ansible website.
I have a similar environment setup. I found some information at the bottom of this section: https://github.com/jlund/streisand#prerequisites
Essentially, there are some magic files you can update so that the brew-installed ansible will add a folder to its package search path:
mkdir -p ~/Library/Python/2.7/lib/python/site-packages
echo '/usr/local/lib/python2.7/site-packages' > ~/Library/Python/2.7/lib/python/site-packages/homebrew.pth
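The .pth mechanism is standard Python behavior: at startup, Python adds each path listed in a .pth file found in one of its site directories to sys.path, which is why this makes packages installed under /usr/local/lib/python2.7/site-packages (such as apache-libcloud) importable.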
Hope that fixes it for you!
In my case, the fix was simply:
pip install apache-libcloud