Automatically download all dependencies when mirroring a package - python

In my organization we maintain an internal mirrored Anaconda repository containing packages that our users requested. The purpose is to exclude certain packages that may pose a security risk, and all users in our organization connect to this internal Anaconda repository to download and install packages instead of the official Anaconda repo site. We have a script that runs regularly to update the repository using the conda-mirror command:
conda-mirror --config [config.yml file] --num-threads 1 --platform [platform] --temp-directory [directory] --upstream-channel [channel] --target-directory [directory]
The config.yml file is set up like this:
blacklist:
- name: '*'
channel_alias: https://repo.continuum.io/pkgs/
channels:
- https://conda.anaconda.org/conda-forge
- free
- main
- msys2
- r
repo-build:
  dependencies: true
platforms:
- noarch
- win-64
- linux-64
root-dir: \\root-drive.net\repo
whitelist:
- name: package1
So the logic of this config file is to blacklist all packages except the ones listed under whitelist. However, the problem I'm having is that if a user requests package x to be added to the repository and I add package x under the whitelist, only package x gets downloaded to the repository and not its dependencies. I've checked the documentation on conda-mirror and the configuration file and can't find anything related to automatically mirroring a package together with all its dependencies. Is there a way to do this automatically?
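As far as I can tell, conda-mirror itself has no dependency-resolution option, so one possible workaround is to generate the whitelist entries yourself from the channel's repodata.json, walking the depends fields recursively from the requested package. Below is a rough sketch of that idea (untested against your setup; the channel URL, platform and package name are placeholders, and it ignores version constraints and the noarch subdir):

# Sketch: expand a conda-mirror whitelist with the transitive dependencies of
# a requested package by walking the "depends" entries in repodata.json.
# The channel, platform and package name below are placeholders.
import json
import urllib.request

CHANNEL = "https://conda.anaconda.org/conda-forge"  # placeholder channel
PLATFORM = "linux-64"                               # placeholder platform
ROOT_PACKAGE = "package1"                           # the requested package


def load_repodata(channel, platform):
    url = f"{channel}/{platform}/repodata.json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def collect_dependencies(repodata, root_package):
    # Map package name -> set of dependency names (version specs stripped).
    depends_by_name = {}
    for key in ("packages", "packages.conda"):
        for meta in repodata.get(key, {}).values():
            deps = {spec.split()[0] for spec in meta.get("depends", [])}
            depends_by_name.setdefault(meta["name"], set()).update(deps)

    # Breadth-first walk from the requested package over the name graph.
    seen, stack = set(), [root_package]
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        stack.extend(depends_by_name.get(name, ()))
    return sorted(seen)


if __name__ == "__main__":
    repodata = load_repodata(CHANNEL, PLATFORM)
    print("whitelist:")
    for name in collect_dependencies(repodata, ROOT_PACKAGE):
        print(f"- name: {name}")

The output could then be pasted (or templated) into the whitelist section of config.yml before the mirror script runs.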

Related

Python Lambda missing dependencies when set up through Amplify

I've been trying to configure an Amplify project with a Python based Lambda backend API.
I have followed the tutorials by creating an API through the AWS CLI and installing all the dependencies through pipenv.
When I cd into the function's directory, my Pipfile looks like this:
name = "pypi"
url = "https://pypi.python.org/simple"
verify_ssl = true
[dev-packages]
[packages]
src = {editable = true, path = "./src"}
flask = "*"
flask-cors = "*"
aws-wsgi = "*"
boto3 = "*"
[requires]
python_version = "3.8"
And when I run amplify push everything works and the Lambda Function gets created successfully.
Also, when I run the deploy pipeline from the Amplify Console, I see in the build logs that my virtual env is created and my dependencies are downloaded.
Something else I did based on GitHub issues (otherwise the build would definitely fail) was to add the following to amplify.yml:
backend:
  phases:
    build:
      commands:
        - ln -fs /usr/local/bin/pip3.8 /usr/bin/pip3
        - ln -fs /usr/local/bin/python3.8 /usr/bin/python3
        - pip3 install --user pipenv
        - amplifyPush --simple
Unfortunately, from the Lambda's logs (both dev and prod), I see that it fails to import every dependency that was installed through pipenv. I added the following to index.py:
import os
os.system('pip list')
And saw that NONE of my dependencies were listed, so I was wondering whether the Lambda was running inside the virtualenv that was created or just using the default Python.
How can I make sure that my Lambda is running the virtualenv as defined in the Pipfile?
Lambda functions do not run in a virtualenv. Amplify uses pipenv to create a virtualenv and download the dependencies. Then Amplify packages those dependencies, along with the lambda code, into a zip file which it uploads to AWS Lambda.
Your problem is either that the dependencies are not packaged with your function or that they are packaged with a bad directory structure. You can download the function code to see exactly how the packaging went.
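For example, a quick way to do that check programmatically is to fetch the deployed package with boto3 and list what ended up inside the zip; this is just a sketch, and the function name is a placeholder:

# Sketch: download the deployed Lambda package and list its contents to see
# whether the pipenv dependencies were bundled next to index.py.
# FUNCTION_NAME is a placeholder for the Amplify-generated function name.
import io
import urllib.request
import zipfile

import boto3

FUNCTION_NAME = "my-amplify-function"  # placeholder

client = boto3.client("lambda")
code_url = client.get_function(FunctionName=FUNCTION_NAME)["Code"]["Location"]

with urllib.request.urlopen(code_url) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

# Dependencies should show up as top-level directories (e.g. flask/) alongside
# index.py; if they are missing, the packaging step dropped them.
for name in sorted(archive.namelist()):
    print(name)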

Azure Self hosted agent to run pytest

I have installed a self-hosted agent on my local VM; it's connected to Azure, no issues there.
I have Python code on Azure DevOps.
I have installed all the requirements.txt requirements manually from the command line of the local VM, so that the self-hosted agent installed on it doesn't have to install them (to minimize the build and deployment time).
But when I have the below code in the YAML file to run the pytest cases, the pipeline fails with the error below.
This is my Yaml file
trigger:
- master
variables:
  python.version: 3.8.6
stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      name: 'MaitQA'
    #pool:
    #  vmImage: 'windows-latest' # windows-latest Or windows-2019 ; vs2017-win2016 # https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#software # vs2017-win2016
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: '$(python.version)'
      displayName: 'Use Python $(python.version)'
    - script: 'pip install pytest pytest-azurepipelines ; pytest unit_test/'
This is the error:
Starting: Use Python 3.8.6
==============================================================================
Task         : Use Python version
Description  : Use the specified version of Python from the tool cache, optionally adding it to the PATH
Version      : 0.151.4
Author       : Microsoft Corporation
Help         : https://learn.microsoft.com/azure/devops/pipelines/tasks/tool/use-python-version
==============================================================================
##[error]Version spec 3.8.6 for architecture x64 did not match any version in Agent.ToolsDirectory.
Versions in C:\CodeVersions_tool:
If this is a Microsoft-hosted agent, check that this image supports side-by-side versions of Python at https://aka.ms/hosted-agent-software.
If this is a self-hosted agent, see how to configure side-by-side Python versions at https://go.microsoft.com/fwlink/?linkid=871498.
Finishing: Use Python 3.8.6
This error refers to Python not being in the agent tools directory, and therefore unavailable to the agent.
Here are (incomplete) details for setting up the tools directory with Python:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/use-python-version?view=azure-devops#how-can-i-configure-a-self-hosted-agent-to-use-this-task
The mystery in the above documentation is, what are those 'tool_files' they refer to?
Thankfully, jrm346 on GitHub went through the source code to work it out; for Linux you need to compile Python from source and reconfigure the target directory:
https://github.com/microsoft/azure-pipelines-tasks/issues/10721
For Python 3.8:
Create the needed file structure under the agent's tools directory:
Python
└── 3.8.0
    ├── x64
    └── x64.complete
Then compile Python 3.8.6 following the instructions below, with one small addition: just after './configure --enable-optimizations' in step 4, run the command './configure --prefix=/home/azure/_work/_tool/Python/3.8.0/x64', replacing '/home/azure/_work/_tool' with your agent's tools directory location:
https://linuxize.com/post/how-to-install-python-3-8-on-ubuntu-18-04/
Did you follow the "How can I configure a self-hosted agent to use this task?" section of the documentation?
The desired Python version has to be added to the tool cache on the self-hosted agent in order for the task to use it. Normally the tool cache is located under the _work/_tool directory of the agent, or the path can be overridden by the environment variable AGENT_TOOLSDIRECTORY. Under that directory, create the directory structure shown above, based on your Python version.
In addition to @Krzysztof Madej's suggestion, you can also try restarting the self-hosted agent service.
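If it helps, the skeleton of that tool-cache entry can also be scripted. The sketch below assumes AGENT_TOOLSDIRECTORY points at the agent's tools directory (falling back to the C:\CodeVersions_tool path from the error above) and uses 3.8.6 to match the pipeline's versionSpec; you still have to put an actual Python 3.8.6 build inside the x64 folder:

# Sketch: create the directory skeleton the UsePythonVersion task looks for.
# It does NOT install Python; copy or compile a 3.8.6 build into x64/ yourself.
import os
from pathlib import Path

# Agent tools directory; C:\CodeVersions_tool is taken from the error message.
tools_dir = Path(os.environ.get("AGENT_TOOLSDIRECTORY", r"C:\CodeVersions_tool"))
version_dir = tools_dir / "Python" / "3.8.6" / "x64"

version_dir.mkdir(parents=True, exist_ok=True)
# The empty x64.complete marker file tells the task this version is usable.
(version_dir.parent / "x64.complete").touch()
print(f"Created {version_dir} (now place the Python build inside it)")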

Upload to pypi from Gitlab Pipelines

I'm trying to upload a package to PyPI using a GitLab CI job, but I cannot make it work :/ Does anyone have a working example?
What I have tried so far in my .gitlab-ci.yaml (all of them work from my local machine):
Twine with a .pypirc file
- echo "[distutils]" >> ~/.pypirc
- echo "index-servers =" >> ~/.pypirc
- echo " pypi" >> ~/.pypirc
- echo "" >> ~/.pypirc
- echo "[pypi]" >> ~/.pypirc
- 'echo "repository: https://upload.pypi.org/legacy/" >> ~/.pypirc'
- 'echo "username: ${PYPI_USER}" >> ~/.pypirc'
- 'echo "password: ${PYPI_PASSWORD}" >> ~/.pypirc'
- python3 setup.py check sdist bdist # This will fail if your creds are bad.
- cat ~/.pypirc
- twine upload dist/* --config-file ~/.pypirc
Same as before but with $VARIABLE
[...]
- 'echo "username: $PYPI_USER" >> ~/.pypirc'
- 'echo "password: $PYPI_PASSWORD" >> ~/.pypirc'
[...]
The two options before, but using python setup.py ... upload
twine upload dist/* -u $PYPI_USER -p $PYPI_PASSWORD
twine upload dist/* with the TWINE_USERNAME and TWINE_PASSWORD environment variables.
... and always get a 403 Client Error: Invalid or non-existent authentication information. I'm running out of options...
I am simply using the TWINE_USERNAME and TWINE_PASSWORD variables; it worked out of the box.
This is the relevant part in my gitlab-ci.yml (replace the image with your desired one and of course change all the other stuff like stage, cache etc. to your needs):
pypi:
  image: docker.km3net.de/base/python:3
  stage: deploy
  cache: {}
  script:
    - pip install -U twine
    - python setup.py sdist
    - twine upload dist/*
  only:
    - tags
And add the environment variables in GitLab under Settings -> CI/CD -> Variables (https://your-gitlab-instance.org/GIT_NAMESPACE/GIT_PROJECT/settings/ci_cd).
I got this working, using a modified version of your code:
pypi:
  stage: upload
  script:
    - pip install twine
    - rm -rf dist
    - echo "[distutils]" >> ~/.pypirc
    - echo "index-servers =" >> ~/.pypirc
    - echo " nexus" >> ~/.pypirc
    - echo "" >> ~/.pypirc
    - echo "[nexus]" >> ~/.pypirc
    - echo "${PYPI_REPO}" >> ~/.pypirc
    - echo "${PYPI_USER}" >> ~/.pypirc
    - echo "${PYPI_PASSWORD}" >> ~/.pypirc
    - python3 setup.py check sdist bdist # This will fail if your creds are bad.
    - python setup.py sdist bdist_wheel
    - twine upload -r nexus dist/*.tar.gz
The difference is that I didn't use the single quotes and got rid of the colons in the YAML; instead I put the full lines into the secret variables, e.g. PYPI_USER is set to username: myuser.
If problems with EOF appear, make sure to change Settings/Repository/Tags so that the tags are protected, and it will work again. I've posted a more complete description here.
Note that GitLab 12.10 (April 2020) offers, in its Premium edition or higher, a simpler way using CI_JOB_TOKEN (see the second part of this answer below, about GitLab 13.4, September 2020).
Build, publish, and share Python packages to the GitLab PyPI Repository
Python developers need a mechanism to create, share, and consume packages that contain compiled code and other content in projects that use these packages. PyPI, an open source project maintained by the Python Packaging Authority, is the standard for how to define, create, host, and consume Python packages.
In GitLab 12.10, we are proud to offer PyPI repositories built directly into GitLab! Developers now have an easier way to publish their projects’ Python packages. By integrating with PyPI, GitLab will provide a centralized location to store and view those packages in the same place as their source code and pipelines.
In March, we announced that the GitLab PyPI Repository and support for other package manager formats will be moved to open source.
You can follow along as we work to make these features more broadly available in the epic.
See Documentation and Issue.
And with GitLab 13.4 (September 2020)
Use CI_JOB_TOKEN to publish PyPI packages
You can use the GitLab PyPI Repository to build, publish, and share python packages, right alongside your source code and CI/CD Pipelines.
However, previously you couldn’t authenticate with the repository by using the pre-defined environment variable CI_JOB_TOKEN.
As a result, you were forced to use your personal credentials for making updates to the PyPI Repository, or you may have decided not to use the repository at all.
Now it is easier than ever to use GitLab CI/CD to publish and install PyPI packages by using the predefined CI_JOB_TOKEN environment variable.
See Documentation and Issue.
You can also upload a Python package to a private PyPI server in one line (I am using it with GitLab CI):
Set the environment variables PYPI_SERVER, PYPI_USER and PYPI_PASSWORD through the GitLab CI settings.
Call
twine upload --repository-url ${PYPI_SERVER} --username $PYPI_USER --password $PYPI_PASSWORD $artifact
Note: I had to use twine from pip (pip3 install twine) and not from my Ubuntu package, as version 10 of twine seems to have a bug (zipfile.BadZipFile: File is not a zip file).
You can also look into using dpl. Here's how I'm doing it:
pip:
  stage: upload
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - python setup.py sdist
    - dpl --provider=pypi --user=$PIP_USERNAME --password=$PIP_PASSWORD --skip_existing=true
  only:
    - master
You can set $PIP_USERNAME and $PIP_PASSWORD in the variables section for your project: settings -> CI/CD -> Variables
I know this is an old question, but if you're using poetry (I'm testing with version 1.1.11) you can do it quite easily, like this:
poetry config repositories.my_private_repo [URL_TO_YOUR_PYPI_REPO]
poetry config http-basic.my_private_repo [USERNAME] [PASSWORD]
poetry build
poetry publish --repository my_private_repo
On develop branches, you can add the --dry-run argument to poetry publish so it won't actually get uploaded

Travis CI PyPI deploy interpreter

My setup.py contains the scripts=["package-name"] parameter, and my .travis.yml is:
deploy:
  provider: pypi
  distributions: "sdist bdist bdist_wheel"
  docs_dir: docs
  user: ...
  password:
    secure: ...
But when the package is deployed to PyPI, the shebang lines in the scripts are converted into something like /home/travis/virtualenv/python2.7.9/bin/python, which, of course, is a bad interpreter path on the target machine.
If I deploy by hand, I can use python setup.py -e '/usr/bin/env python' sdist upload, but how can I pass this parameter in Travis?
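One thing that might work (not from the original thread, so treat it as a sketch): distutils' build_scripts command has an executable option that controls the shebang written into built scripts, and it can be pinned in setup.py via the options argument so it no longer depends on which interpreter Travis uses to run the deploy, e.g.:

# Sketch: pin the script shebang in setup.py so builds made inside Travis's
# virtualenv do not bake in /home/travis/virtualenv/.../bin/python.
from setuptools import setup

setup(
    name="package-name",            # matches the scripts entry in the question
    version="0.1.0",                # placeholder version
    scripts=["package-name"],
    options={
        "build_scripts": {
            "executable": "/usr/bin/env python",  # shebang used for built scripts
        },
    },
)

The same setting can also be kept in setup.cfg under a [build_scripts] section with executable = /usr/bin/env python.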

How do I configure pip.conf in AWS Elastic Beanstalk?

I need to deploy a Python application to AWS Elastic Beanstalk; however, this module requires dependencies from our private PyPI index. How can I configure pip (like what you do with ~/.pip/pip.conf) so that AWS can connect to our private index while deploying the application?
My last resort is to prefix the dependency in requirements.txt with -i URL before deployment, but there must be a cleaner way to achieve this goal.
In .ebextensions/files.config add something like this:
files:
  "/opt/python/run/venv/pip.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [global]
      find-links = <URL>
      trusted-host = <HOST>
      index-url = <URL>
Or whatever other configuration you'd like to set in your pip.conf. This will place the pip.conf file in the virtual environment of your application, which is activated before pip install -r requirements.txt is executed. Hopefully this helps!
