Travis CI PyPI deploy interpreter

My setup.py contains the scripts=["package-name"] parameter, and my .travis.yml is:
deploy:
  provider: pypi
  distributions: "sdist bdist bdist_wheel"
  docs_dir: docs
  user: ...
  password:
    secure: ...
But, when the package is deployed to PyPI, the shebang lines in the scripts are converted into something like: /home/travis/virtualenv/python2.7.9/bin/python, which, of course, is a bad interpreter on the target machine.
If I deploy by hand, I can use python setup.py -e '/usr/bin/env python' sdist upload, but how can I pass this parameter in Travis?
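One thing that may work (an untested sketch; it assumes distutils reads the build_scripts options from setup.cfg, which is the config-file equivalent of that -e/--executable flag) is committing the setting so every Travis build picks it up:
# setup.cfg, committed next to setup.py
# assumption: build_scripts' "executable" option controls the shebang written into scripts
[build_scripts]
executable = /usr/bin/env python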

Related

Python Lambda missing dependencies when set up through Amplify

I've been trying to configure an Amplify project with a Python-based Lambda backend API.
I have followed the tutorials by creating an API through the AWS CLI and installing all the dependencies through pipenv.
When I cd into the function's directory, my Pipfile looks like this:
name = "pypi"
url = "https://pypi.python.org/simple"
verify_ssl = true
[dev-packages]
[packages]
src = {editable = true, path = "./src"}
flask = "*"
flask-cors = "*"
aws-wsgi = "*"
boto3 = "*"
[requires]
python_version = "3.8"
And when I run amplify push everything works and the Lambda Function gets created successfully.
Also, when I run the deploy pipeline from the Amplify Console, I see in the build logs that my virtual env is created and my dependencies are downloaded.
Something else that was done based on GitHub issues (otherwise the build would definitely fail) was adding the following to amplify.yml:
backend:
  phases:
    build:
      commands:
        - ln -fs /usr/local/bin/pip3.8 /usr/bin/pip3
        - ln -fs /usr/local/bin/python3.8 /usr/bin/python3
        - pip3 install --user pipenv
        - amplifyPush --simple
Unfortunately, from the Lambda's logs (both dev and prod), I see that it fails to import every dependency that was installed through pipenv. I added the following in index.py:
import os
os.system('pip list')
And saw that NONE of my dependencies were listed, so I was wondering whether the Lambda was running in the virtualenv that was created or just using the default Python.
How can I make sure that my Lambda is running the virtualenv as defined in the Pipfile?
Lambda functions do not run in a virtualenv. Amplify uses pipenv to create a virtualenv and download the dependencies. Then Amplify packages those dependencies, along with the lambda code, into a zip file which it uploads to AWS Lambda.
Your problem is either that the dependencies are not packaged with your function or that they are packaged with a bad directory structure. You can download the function code to see exactly how the packaging went.
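For example, one way to pull the deployed package down for inspection with the AWS CLI (the function name here is a placeholder):
# print a pre-signed URL for the code package of the deployed function
aws lambda get-function --function-name my-amplify-function --query 'Code.Location' --output text
# download and list the archive; the pipenv dependencies should sit at the zip root
curl -o function.zip "<url printed above>"
unzip -l function.zip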

Azure Self hosted agent to run pytest

I have installed a self-hosted agent on my local VM; it's connected to Azure, no issues there.
I have Python code on Azure DevOps.
I have installed all of the requirements.txt dependencies manually from the command line of the local VM, so that the self-hosted agent installed on it doesn't have to install them (to minimize the build & deployment time).
But when I have the below code in the YAML file to run the pytest cases, the pipeline fails with the error below.
This is my YAML file:
trigger:
- master

variables:
  python.version: 3.8.6

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      name: 'MaitQA'
    #pool:
    #  vmImage: 'windows-latest' # windows-latest Or windows-2019 ; vs2017-win2016 # https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#software # vs2017-win2016
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: '$(python.version)'
      displayName: 'Use Python $(python.version)'
    - script: 'pip install pytest pytest-azurepipelines ; pytest unit_test/'
This is the error:
Starting: Use Python 3.8.6
------------------------------
Task        : Use Python version
Description : Use the specified version of Python from the tool cache, optionally adding it to the PATH
Version     : 0.151.4
Author      : Microsoft Corporation
Help        : https://learn.microsoft.com/azure/devops/pipelines/tasks/tool/use-python-version
------------------------------
[error]Version spec 3.8.6 for architecture x64 did not match any version in Agent.ToolsDirectory. Versions in C:\CodeVersions_tool:
If this is a Microsoft-hosted agent, check that this image supports side-by-side versions of Python at https://aka.ms/hosted-agent-software.
If this is a self-hosted agent, see how to configure side-by-side Python versions at https://go.microsoft.com/fwlink/?linkid=871498.
Finishing: Use Python 3.8.6
This error refers to Python not being in the agent tools directory, and therefore unavailable to the agent.
Here are (incomplete) details for setting up the tools directory with Python:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/use-python-version?view=azure-devops#how-can-i-configure-a-self-hosted-agent-to-use-this-task
The mystery in the above documentation is, what are those 'tool_files' they refer to?
Thankfully, jrm346 on GitHub went through the source code to work it out; for Linux you need to compile Python from source and reconfigure the target directory:
https://github.com/microsoft/azure-pipelines-tasks/issues/10721
For Python 3.8:
Create the needed file structure under the agent's tools directory:
Python
└── 3.8.0
    ├── x64
    └── x64.complete
Then compile Python 3.8.6 following the instructions below, with one small addition: just after './configure --enable-optimizations' in step 4, run the command './configure --prefix=/home/azure/_work/_tool/Python/3.8.0/x64', replacing '/home/azure/_work/_tool' with your agent's tools directory location:
https://linuxize.com/post/how-to-install-python-3-8-on-ubuntu-18-04/
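Roughly, the tail end of that build then looks something like this (a sketch that folds both configure flags into a single call and reuses the example tools path from above):
cd Python-3.8.6
./configure --enable-optimizations --prefix=/home/azure/_work/_tool/Python/3.8.0/x64
make -j "$(nproc)"
make altinstall   # installs into the prefix without touching the system python
touch /home/azure/_work/_tool/Python/3.8.0/x64.complete   # marker file the task looks for (if not already created above)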
Did you follow How can I configure a self-hosted agent to use this task?
The desired Python version will have to be added to the tool cache on the self-hosted agent in order for the task to use it. Normally the tool cache is located under the _work/_tool directory of the agent, or the path can be overridden by the environment variable AGENT_TOOLSDIRECTORY. Under that directory, create the directory structure shown above, based on your Python version.
In addition to @Krzysztof Madej's suggestion, you can also try restarting the self-hosted agent service.

Upload to pypi from Gitlab Pipelines

I'm trying to upload a package to PyPI using a GitLab CI job, but I cannot make it work :/ Does anyone have a working example?
What I have tried so far in my .gitlab-ci.yaml (all of them work from my local machine):
Twine with a .pypirc file
- echo "[distutils]" >> ~/.pypirc
- echo "index-servers =" >> ~/.pypirc
- echo " pypi" >> ~/.pypirc
- echo "" >> ~/.pypirc
- echo "[pypi]" >> ~/.pypirc
- 'echo "repository: https://upload.pypi.org/legacy/" >> ~/.pypirc'
- 'echo "username: ${PYPI_USER}" >> ~/.pypirc'
- 'echo "password: ${PYPI_PASSWORD}" >> ~/.pypirc'
- python3 setup.py check sdist bdist # This will fail if your creds are bad.
- cat ~/.pypirc
- twine upload dist/* --config-file ~/.pypirc
Same as before but with $VARIABLE
[...]
- 'echo "username: $PYPI_USER" >> ~/.pypirc'
- 'echo "password: $PYPI_PASSWORD" >> ~/.pypirc'
[...]
The two options before, but using python setup.py ... upload
twine upload dist/* -u $PYPI_USER -p $PYPI_PASSWORD
twine upload dist/* with TWINE_USERNAME and TWINE_PASSWORD environment variables.
... and always get a 403 Client Error: Invalid or non-existent authentication information. I'm running out of options...
I am simply using the TWINE_USERNAME and TWINE_PASSWORD variables; it worked out of the box.
This is the relevant part in my gitlab-ci.yml (replace the image with your desired one and of course change all the other stuff like stage, cache etc. to your needs):
pypi:
  image: docker.km3net.de/base/python:3
  stage: deploy
  cache: {}
  script:
    - pip install -U twine
    - python setup.py sdist
    - twine upload dist/*
  only:
    - tags
And add the environment variables in GitLab under Settings -> CI/CD -> Variables (https://your-gitlab-instance.org/GIT_NAMESPACE/GIT_PROJECT/settings/ci_cd). The pipeline then runs and publishes the package successfully.
I got this working, using a modified version of your code:
pypi:
  stage: upload
  script:
    - pip install twine
    - rm -rf dist
    - echo "[distutils]" >> ~/.pypirc
    - echo "index-servers =" >> ~/.pypirc
    - echo " nexus" >> ~/.pypirc
    - echo "" >> ~/.pypirc
    - echo "[nexus]" >> ~/.pypirc
    - echo "${PYPI_REPO}" >> ~/.pypirc
    - echo "${PYPI_USER}" >> ~/.pypirc
    - echo "${PYPI_PASSWORD}" >> ~/.pypirc
    - python3 setup.py check sdist bdist # This will fail if your creds are bad.
    - python setup.py sdist bdist_wheel
    - twine upload -r nexus dist/*.tar.gz
The difference is that I didn't use the single quotes and got rid of the colons in the YAML; instead I set the values of the secrets to contain the key as well, e.g. username: myuser.
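With the secrets defined that way, the generated ~/.pypirc ends up looking roughly like this (the repository URL and credentials are illustrative):
[distutils]
index-servers =
 nexus

[nexus]
repository: https://nexus.example.com/repository/pypi-internal/
username: myuser
password: mypassword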
If problems with EOF appear, make sure to mark your tags as protected under Settings/Repository/Tags so that the protected variables work again. I've posted a more complete description here.
Note that GitLab 12.10 (April 2020) will offer, in its Premium edition and above, a simpler way using CI_JOB_TOKEN (see the second part of this answer below, about GitLab 13.4, Sept. 2020).
Build, publish, and share Python packages to the GitLab PyPI Repository
Python developers need a mechanism to create, share, and consume packages that contain compiled code and other content in projects that use these packages. PyPI, an open source project maintained by the Python Packaging Authority, is the standard for how to define, create, host, and consume Python packages.
In GitLab 12.10, we are proud to offer PyPI repositories built directly into GitLab! Developers now have an easier way to publish their projects’ Python packages. By integrating with PyPI, GitLab will provide a centralized location to store and view those packages in the same place as their source code and pipelines.
In March, we announced that the GitLab PyPI Repository and support for other package manager formats will be moved to open source.
You can follow along as we work to make these features more broadly available in the epic.
See Documentation and Issue.
And with GitLab 13.4 (September 2020)
Use CI_JOB_TOKEN to publish PyPI packages
You can use the GitLab PyPI Repository to build, publish, and share python packages, right alongside your source code and CI/CD Pipelines.
However, previously you couldn’t authenticate with the repository by using the pre-defined environment variable CI_JOB_TOKEN.
As a result, you were forced to use your personal credentials for making updates to the PyPI Repository, or you may have decided not to use the repository at all.
Now it is easier than ever to use GitLab CI/CD to publish and install PyPI packages by using the predefined CI_JOB_TOKEN environment variable.
See Documentation and Issue.
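A minimal job using that token might look like the following sketch (the stage, image, and repository URL format are assumptions; check the GitLab package registry documentation for your instance):
publish:
  stage: deploy
  image: python:3
  script:
    - pip install -U twine
    - python setup.py sdist bdist_wheel
    # gitlab-ci-token plus CI_JOB_TOKEN authenticate against the project's own PyPI registry
    - TWINE_USERNAME=gitlab-ci-token TWINE_PASSWORD="$CI_JOB_TOKEN" twine upload --repository-url "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/pypi" dist/*
  only:
    - tags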
You can also upload a Python package to a private PyPI server in one line (I am using it with gitlab-ci):
Set environment variables PYPI_SERVER, PYPI_USER and PYPI_PASSWORD through Gitlab CI settings
Call
twine upload --repository-url ${PYPI_SERVER} --username $PYPI_USER --password $PYPI_PASSWORD $artifact
Note: I had to use twine from pip (pip3 install twine) and not from my Ubuntu package, as version 10 of twine seems to have a bug (zipfile.BadZipFile: File is not a zip file).
You can also look into using dpl. Here's how I'm doing it:
pip:
  stage: upload
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - python setup.py sdist
    - dpl --provider=pypi --user=$PIP_USERNAME --password=$PIP_PASSWORD --skip_existing=true
  only:
    - master
You can set $PIP_USERNAME and $PIP_PASSWORD in the variables section for your project: settings -> CI/CD -> Variables
I know this is an old question, but if you're using poetry (I'm testing with version 1.1.11) you can do it quite easily, like this:
poetry config repositories.my_private_repo [URL_TO_YOUR_PYPI_REPO]
poetry config http-basic.my_private_repo [USERNAME] [PASSWORD]
poetry build
poetry publish --repository my_private_repo
On develop branches, you can add the --dry-run argument to poetry publish so the package won't actually get uploaded.
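Wired into a .gitlab-ci.yml job, that could look roughly like this (a sketch; the image and the PYPI_* variable names are placeholders set under Settings -> CI/CD -> Variables):
publish:
  stage: deploy
  image: python:3.9
  script:
    - pip install "poetry==1.1.11"
    - poetry config repositories.my_private_repo "$PYPI_REPO_URL"
    - poetry config http-basic.my_private_repo "$PYPI_USER" "$PYPI_PASSWORD"
    - poetry build
    - poetry publish --repository my_private_repo
  only:
    - tags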

Running multiple uwsgi python versions

I'm trying to deploy django with uwsgi, and I think I lack understanding of how it all works. I have uwsgi running in emperor mode, and I'm trying to get the vassals to run in their own virtualenvs, with a different python version.
The emperor configuration:
[uwsgi]
socket = /run/uwsgi/uwsgi.socket
pidfile = /run/uwsgi/uwsgi.pid
emperor = /etc/uwsgi.d
emperor-tyrant = true
master = true
autoload = true
log-date = true
logto = /var/log/uwsgi/uwsgi-emperor.log
And the vassal:
uid=django
gid=django
virtualenv=/home/django/sites/mysite/venv/bin
chdir=/home/django/sites/mysite/site
module=mysite.uwsgi:application
socket=/tmp/uwsgi_mysite.sock
master=True
I'm seeing the following error in the emperor log:
Traceback (most recent call last):
  File "./mysite/uwsgi.py", line 11, in <module>
    import site
ImportError: No module named site
The virtualenv for my site is created as a python 3.4 pyvenv. The uwsgi is the system uwsgi (python2.6). I was under the impression that the emperor could be any python version, as the vassal would be launched with its own python and environment, launched by the master process. I now think this is wrong.
What I'd like to be doing is running the uwsgi master process with the system python, but the various vassals (applications) with their own python and their own libraries. Is this possible? Or am I going to have to run multiple emperors if I want to run multiple pythons? Kinda defeats the purpose of having virtual environments.
The "elegant" way is building the uWSGI python support as a plugin, and having a plugin for each python version:
(from uWSGI sources)
make PROFILE=nolang
(will build a uWSGI binary without language support)
PYTHON=python2.7 ./uwsgi --build-plugin "plugins/python python27"
will build the python27_plugin.so that you can load in vassals
PYTHON=python3 ./uwsgi --build-plugin "plugins/python python3"
will build the plugin for python3 and so on.
There are various ways to build uWSGI plugins; the one I am reporting is the safest one (it ensures the #ifdefs are honoured).
Having said that, having a uWSGI Emperor for each python version is viable too. Remember that Emperors are stackable, so you can have a generic emperor spawning one emperor (as its vassal) for each python version.
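For example, a vassal meant to run under the 2.7 build might load the matching plugin like this (paths are illustrative; plugin-dir points at wherever the built .so files live):
[uwsgi]
# directory containing python27_plugin.so
plugin-dir = /path/to/uwsgi-sources
plugin = python27
# the venv root (not its bin/ directory) for that interpreter
virtualenv = /path/to/python2.7/venv
module = mysite.wsgi:application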
Pip install uWSGI
One option would be to simply install uWSGI with pip in your virtualenvs and start your services separately:
pip install uwsgi
~/.virtualenvs/venv-name/bin/uwsgi --ini path/to/ini-file
Install uWSGI from source and build python plugins
If you want a system-wide uWSGI build, you can build it from source and install plugins for multiple python versions. You'll need root privileges for this.
First you may want to install multiple system-wide python versions.
Make sure you have any dependencies installed. For pcre, on a Debian-based distribution use:
apt install libpcre3 libpcre3-dev
Download and build the latest uWSGI source into /usr/local/src, replacing X.X.X.X below with the package version (e.g. 2.0.19.1):
wget http://projects.unbit.it/downloads/uwsgi-latest.tar.gz
tar vzxf uwsgi-latest.tar.gz
cd uwsgi-X.X.X.X/
make PROFILE=nolang
Symlink the versioned folder uwsgi-X.X.X.X to give it the generic name, uwsgi:
ln -s /usr/local/src/uwsgi-X.X.X.X /usr/local/src/uwsgi
Create a symlink to the build so it's on your PATH:
ln -s /usr/local/src/uwsgi/uwsgi /usr/local/bin
Build python plugins for the versions you need:
PYTHON=pythonX.X ./uwsgi --build-plugin "plugins/python pythonXX"
For example, for python3.8:
PYTHON=python3.8 ./uwsgi --build-plugin "plugins/python python38"
Create a plugin directory in an appropriate location:
mkdir -p /usr/local/lib/uwsgi/plugins/
Symlink the created plugins to this directory. For example, for python3.8:
ln -s /usr/local/src/uwsgi/python38_plugin.so /usr/local/lib/uwsgi/plugins
Then in your uWSGI configuration (project.ini) files, specify the plugin directory and the plugin:
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python38
Make sure to create your virtualenvs with the same python version that you created the plugin with. For example if you created python38_plugin.so with python3.8 and you have plugin = python38 in your project.ini file, then an easy way to create a virtualenv with python3.8 is with:
python3.8 -m virtualenv path/to/project/virtualenv
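Putting the pieces together, a project.ini for the python3.8 case might look roughly like this (a sketch reusing the names from above; the module and socket values are placeholders):
[uwsgi]
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python38
# the virtualenv created with python3.8 -m virtualenv
virtualenv = path/to/project/virtualenv
chdir = path/to/project
module = mysite.wsgi:application
socket = /tmp/uwsgi_mysite.sock
master = true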

How to run included tests on deployed pylons application

I have installed a pylons-based application from an egg, so it sits somewhere under /usr/lib/python2.5/site-packages. I see that the tests are packaged too, and I would like to run them (to catch a problem that shows up on the deployed application but not on the development version).
So how do I run them? Doing "nosetests" from a directory containing only test.ini and development.ini gives an error about a nonexistent test.ini under site-packages.
Straight from the horse's mouth:
Install nose: easy_install -U nose.
Run nose: nosetests --with-pylons=test.ini OR python setup.py nosetests
To run "python setup.py nosetests" you need to have a [nosetests] block in your setup.cfg looking like this:
[nosetests]
verbose=True
verbosity=2
with-pylons=test.ini
detailed-errors=1
with-doctest=True
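If you want to point nose directly at the tests that were installed into site-packages, something along these lines may also work from the directory holding test.ini (the package path is hypothetical):
nosetests --with-pylons=test.ini /usr/lib/python2.5/site-packages/myapp/tests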
