I'm trying to upload a package to PyPI from a GitLab CI job, but I cannot make it work :/ Does anyone have a working example?
Here is what I have tried so far in my .gitlab-ci.yml (all of these work from my local machine):
Twine with a .pypirc file
- echo "[distutils]" >> ~/.pypirc
- echo "index-servers =" >> ~/.pypirc
- echo " pypi" >> ~/.pypirc
- echo "" >> ~/.pypirc
- echo "[pypi]" >> ~/.pypirc
- 'echo "repository: https://upload.pypi.org/legacy/" >> ~/.pypirc'
- 'echo "username: ${PYPI_USER}" >> ~/.pypirc'
- 'echo "password: ${PYPI_PASSWORD}" >> ~/.pypirc'
- python3 setup.py check sdist bdist # This will fail if your creds are bad.
- cat ~/.pypirc
- twine upload dist/* --config-file ~/.pypirc
Same as before, but with $VARIABLE instead of ${VARIABLE}
[...]
- 'echo "username: $PYPI_USER" >> ~/.pypirc'
- 'echo "password: $PYPI_PASSWORD" >> ~/.pypirc'
[...]
The two options above, but using python setup.py ... upload
twine upload dist/* -u $PYPI_USER -p $PYPI_PASSWORD
twine upload dist/* with the TWINE_USERNAME and TWINE_PASSWORD environment variables.
... and I always get 403 Client Error: Invalid or non-existent authentication information. I'm running out of options...
I am simply using the TWINE_USERNAME and TWINE_PASSWORD variables, and it worked out of the box.
This is the relevant part of my .gitlab-ci.yml (replace the image with your desired one and, of course, adjust the other settings like stage, cache, etc. to your needs):
pypi:
  image: docker.km3net.de/base/python:3
  stage: deploy
  cache: {}
  script:
    - pip install -U twine
    - python setup.py sdist
    - twine upload dist/*
  only:
    - tags
Then add the environment variables in GitLab under Settings -> CI/CD -> Variables (https://your-gitlab-instance.org/GIT_NAMESPACE/GIT_PROJECT/settings/ci_cd).
I got this working, using a modified version of your code:
pypi:
  stage: upload
  script:
    - pip install twine
    - rm -rf dist
    - echo "[distutils]" >> ~/.pypirc
    - echo "index-servers =" >> ~/.pypirc
    - echo " nexus" >> ~/.pypirc
    - echo "" >> ~/.pypirc
    - echo "[nexus]" >> ~/.pypirc
    - echo "${PYPI_REPO}" >> ~/.pypirc
    - echo "${PYPI_USER}" >> ~/.pypirc
    - echo "${PYPI_PASSWORD}" >> ~/.pypirc
    - python3 setup.py check sdist bdist # This will fail if your creds are bad.
    - python setup.py sdist bdist_wheel
    - twine upload -r nexus dist/*.tar.gz
The difference is that I didn't use the single quotes and removed the colons from the YAML; instead I put the key into the value of each secret variable, e.g. PYPI_USER is set to username: myuser.
If problems with EOF appear, make sure your tags are protected under Settings/Repository/Tags, so the protected variables are available again. I've posted a more complete description here.
Note that GitLab 12.10 (April 2020) offers, in its Premium edition and above, a simpler way using CI_JOB_TOKEN (see the second part of this answer below, about GitLab 13.4, Sept. 2020):
Build, publish, and share Python packages to the GitLab PyPI Repository
Python developers need a mechanism to create, share, and consume packages that contain compiled code and other content in projects that use these packages. PyPI, an open source project maintained by the Python Packaging Authority, is the standard for how to define, create, host, and consume Python packages.
In GitLab 12.10, we are proud to offer PyPI repositories built directly into GitLab! Developers now have an easier way to publish their projects’ Python packages. By integrating with PyPI, GitLab will provide a centralized location to store and view those packages in the same place as their source code and pipelines.
In March, we announced that the GitLab PyPI Repository and support for other package manager formats will be moved to open source.
You can follow along as we work to make these features more broadly available in the epic.
See Documentation and Issue.
And with GitLab 13.4 (September 2020)
Use CI_JOB_TOKEN to publish PyPI packages
You can use the GitLab PyPI Repository to build, publish, and share python packages, right alongside your source code and CI/CD Pipelines.
However, previously you couldn’t authenticate with the repository by using the pre-defined environment variable CI_JOB_TOKEN.
As a result, you were forced to use your personal credentials for making updates to the PyPI Repository, or you may have decided not to use the repository at all.
Now it is easier than ever to use GitLab CI/CD to publish and install PyPI packages by using the predefined CI_JOB_TOKEN environment variable.
See Documentation and Issue.
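For example, a minimal sketch (not an official snippet; the image, stage, and build commands are assumptions you would adapt) of a job publishing to the project's own GitLab PyPI registry with CI_JOB_TOKEN:
publish-pypi:
  stage: deploy
  image: python:3.9
  script:
    - pip install build twine
    - python -m build
    # gitlab-ci-token / CI_JOB_TOKEN authenticate against this project's package registry
    - TWINE_USERNAME=gitlab-ci-token TWINE_PASSWORD=${CI_JOB_TOKEN} python -m twine upload --repository-url "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/pypi" dist/*
  only:
    - tags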
You can also upload a Python package to a private PyPI server in one line (I am using it with gitlab-ci):
Set the environment variables PYPI_SERVER, PYPI_USER and PYPI_PASSWORD through the GitLab CI settings.
Then call:
twine upload --repository-url ${PYPI_SERVER} --username $PYPI_USER --password $PYPI_PASSWORD $artifact
Note: I had to use twine from pip (pip3 install twine) and not from my Ubuntu package, as version 10 of twine seems to have a bug (zipfile.BadZipFile: File is not a zip file).
You can also look into using dpl. Here's how I'm doing it:
pip:
  stage: upload
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - python setup.py sdist
    - dpl --provider=pypi --user=$PIP_USERNAME --password=$PIP_PASSWORD --skip_existing=true
  only:
    - master
You can set $PIP_USERNAME and $PIP_PASSWORD in the variables section for your project: settings -> CI/CD -> Variables
I know this is an old question, but if you're using poetry (I'm testing with version 1.1.11) you can do it quite easily, like this:
poetry config repositories.my_private_repo [URL_TO_YOUR_PYPI_REPO]
poetry config http-basic.my_private_repo [USERNAME] [PASSWORD]
poetry build
poetry publish --repository my_private_repo
On develop branches, you can add the --dry-run argument to poetry publish so it won't actually get uploaded.
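If you want to run this from GitLab CI, a rough sketch could look like the job below; the variable names PYPI_REPO_URL, PYPI_USER and PYPI_PASSWORD are assumptions you would define under Settings -> CI/CD -> Variables:
publish:
  stage: deploy
  image: python:3.10
  script:
    - pip install "poetry==1.1.11"
    # repository URL and credentials come from CI/CD variables (assumed names)
    - poetry config repositories.my_private_repo "$PYPI_REPO_URL"
    - poetry config http-basic.my_private_repo "$PYPI_USER" "$PYPI_PASSWORD"
    - poetry build
    - poetry publish --repository my_private_repo
  only:
    - tags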
Related
I have a working GitLab CI/CD setup which takes my Conan recipe and my library repo, clones whichever git tag you hard-code, builds the package, and pushes it to the GitLab package registry. GREAT!
What I am wondering is: how should I automate this so it looks at the git repo and builds ALL git tags, so that I can roll back and forth more easily between Conan packages?
For reference, here is my conan.py:
from conans import ConanFile, CMake, tools

class TwsApiConan(ConanFile):
    name = "twsapi"
    version = "10.17.01"
    license = "IBKR"
    author = "someemail"
    url = "https://github.com/ibkr/tws-api/"
    description = "Built from a mirror of the actual TWS API files in Github"
    topics = ("tws", "interactive brokers")
    settings = "os", "compiler", "build_type", "arch"
    options = {"shared": [True, False]}
    default_options = {"shared": False}
    generators = "cmake"

    def source(self):
        self.run("git clone --depth 1 --branch 10.17.01 git@github.com:ibkr/tws-api.git")
        tools.replace_in_file("tws-api/CMakeLists.txt", " LANGUAGES CXX )",
                              ''' LANGUAGES CXX )
add_compile_options(-std=c++17)''')

    def build(self):
        cmake = CMake(self)
        cmake.configure(source_folder="tws-api")
        cmake.build()

    def package(self):
        self.copy("*.h", dst="include", src="tws-api/source/cppclient/client")
        self.copy("*hello.lib", dst="lib", keep_path=False)
        self.copy("*.dll", dst="bin", keep_path=False)
        self.copy("*.so", dst="lib", keep_path=False)
        self.copy("*.dylib", dst="lib", keep_path=False)
        self.copy("*.a", dst="lib", keep_path=False)

    def package_info(self):
        self.cpp_info.libs = ["twsapi"]
Here is the GitLab CI/CD routine so far:
variables:
  GITHUB_DEPLOY_KEY_BASE64: $GITHUB_DEPLOY_KEY_BASE64

stages: # List of stages for jobs, and their order of execution
  - build

build-job: # This job runs in the build stage, which runs first.
  stage: build
  image: registry.gitlab.com/jrgemcp-public/gitlab-cicd-docker/build-conan-docker:latest
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - MY_SECRET_DECODED="$(echo $GITHUB_DEPLOY_KEY_BASE64 | base64 -d)"
    - echo "$MY_SECRET_DECODED" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan github.com >> ~/.ssh/known_hosts 2>/dev/null
    - chmod 644 ~/.ssh/known_hosts
  script:
    - conan profile new default --detect
    - conan profile update settings.compiler.libcxx=libstdc++11 default
    - conan remote add gitlab https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/packages/conan
    - conan user myusername -r gitlab -p ${CI_JOB_TOKEN}
    - conan create . mypackagename/prod
    - conan upload "*" --remote=gitlab --all --confirm
You could generate your config dynamically in a script. That is to say, you might script getting all the tags/refs you want to build and create a YAML file containing a job for each ref, where each job checks out the correct ref and builds it.
Basic idea in bash:
# iterate over every tag returned by the (user-implemented) get-all-tags-to-build script
for tag in $(get-all-tags-to-build); do
    job_yaml="job ${tag}: {\"script\": \"make build ${tag}\"}"
    echo "$job_yaml" >> generated-config.yml
done
The idea being that make build is configured to check out the tag provided as the argument and run the build.
Using that generated config as the artifact of a generator job, the created child pipeline will contain a job for every ref returned by the get-all-tags-to-build script (which you implement).
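A minimal sketch of the parent pipeline wiring (the generate-jobs.sh script and the stage names are assumptions; the script would contain the loop above):
stages:
  - generate
  - build

generate-config:
  stage: generate
  script:
    # hypothetical helper that writes one job per tag into generated-config.yml
    - ./generate-jobs.sh
  artifacts:
    paths:
      - generated-config.yml

build-all-tags:
  stage: build
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate-config
    strategy: depend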
Whenever I open my Gitpod workspace I have to re-install my requirements.txt file. I was reading about the .gitpod.yml file and saw that I have to add it there so the dependencies get installed during the prebuild.
I can't find any examples of this, so I just want to see if I understand it correctly.
Right now my .gitpod.yml file looks like this...
image:
  file: .gitpod.Dockerfile

# List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/
tasks:
  - init: echo 'init script' # runs during prebuild
    command: echo 'start script'

# List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/
ports:
  - port: 3000
    onOpen: open-preview

vscode:
  extensions:
    - ms-python.python
    - ms-azuretools.vscode-docker
    - eamodio.gitlens
    - batisteo.vscode-django
    - formulahendry.auto-close-tag
    - esbenp.prettier-vscode
Do I just add these two new 'init' and 'command' lines under tasks?
image:
  file: .gitpod.Dockerfile

# List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/
tasks:
  - init: echo 'init script' # runs during prebuild
    command: echo 'start script'
  - init: pip3 install -r requirements.txt
    command: python3 manage.py

# List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/
ports:
  - port: 3000
    onOpen: open-preview

vscode:
  extensions:
    - ms-python.python
    - ms-azuretools.vscode-docker
    - eamodio.gitlens
    - batisteo.vscode-django
    - formulahendry.auto-close-tag
    - esbenp.prettier-vscode
Thanks so much for your help. I'm still semi-new to all this and trying to figure my way around.
To install requirements in the prebuild, you have to install them in the Dockerfile. The exception is editable installs (pip install -e .).
For example, to install a package named <package-name>, add this line to .gitpod.Dockerfile:
RUN python3 -m pip install <package-name>
Installing from a requirements file is slightly trickier because the Dockerfile can't "see" the file while the image is building. One workaround is to give the Dockerfile the URL of the requirements file in the repo:
RUN python3 -m pip install -r https://gitlab.com/<gitlab-username>/<repo-name>/-/raw/master/requirements.txt
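Putting the pieces together, a rough .gitpod.yml sketch (the start command is just the placeholder from the question) keeps pinned requirements in the Dockerfile and leaves only the editable install to the prebuild init task:
image:
  file: .gitpod.Dockerfile  # the RUN pip install lines above live here

tasks:
  - init: pip3 install -e .    # editable install; runs during the prebuild
    command: python3 manage.py # placeholder start command from the question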
Edit: Witness my embarrassing struggle with the same issue today: https://github.com/gitpod-io/gitpod/issues/7306
I've been trying to understand how to go about deploying my Python Function App to Azure using Bitbucket pipelines.
I've read some answers on the web, and it seems pretty simple once I have my python app zipped.
It can easily be done using this answer: Azure Function and BitBucket build pipelines
script:
  - pipe: microsoft/azure-functions-deploy:1.0.2
    variables:
      AZURE_APP_ID: $AZURE_APP_ID
      AZURE_PASSWORD: $AZURE_PASSWORD
      AZURE_TENANT_ID: $AZURE_TENANT_ID
      FUNCTION_APP_NAME: '<string>'
      ZIP_FILE: '<string>'
However, I can't, for the life of me, find the format Azure Functions expects the zip file to be in.
Where do the requirements go? Even better: what pipeline step comes before this one to create the sought-after ZIP_FILE?
Thanks!
I tried this solution and it works for me too, but there is a deprecation problem. The pipe microsoft/azure-functions-deploy uses a deprecated image for the Azure CLI, microsoft/azure-cli; you can read about this here.
So you can use the Atlassian version of this pipe, but for Python it didn't work for me, because in the command:
az functionapp deployment source config-zip...
--build-remote is not specified.
So my solution is not to use the pipe, but to write your own step with Azure CLI commands:
- step:
    name: Deploy on Azure
    image: mcr.microsoft.com/azure-cli:latest
    script:
      - az login --service-principal --username ${AZURE_APP_ID} --password ${AZURE_PASSWORD} --tenant ${AZURE_TENANT_ID}
      - az functionapp deployment source config-zip -g ${RESOURCE_GROUP_NAME} -n 'functionAppName' --src 'function.zip' --build-remote
This is a step that works in my case; another solution can be to write a step that uses this.
Ended up finding the answer scattered in different places:
image: python:3.8

pipelines:
  branches:
    master:
      - step:
          name: Build function zip
          caches:
            - pip
          script:
            - apt-get update
            - apt-get install -y zip
            - pip install --target .python_packages/lib/site-packages -r requirements.txt
            - zip -r function.zip .
          artifacts:
            - function.zip
      - step:
          name: Deploy zip to Azure
          deployment: Production
          script:
            - pipe: microsoft/azure-functions-deploy:1.0.0
              variables:
                AZURE_APP_ID: $AZURE_APP_ID
                AZURE_PASSWORD: $AZURE_PASSWORD
                AZURE_TENANT_ID: $AZURE_TENANT_ID
                ZIP_FILE: 'function.zip'
                FUNCTION_APP_NAME: $FUNCTION_NAME
I have configured a Nexus OSS 3.14 private Python artifact server on AWS. I want to maintain all of my project-related Python packages on this private repository server.
I downloaded all the Python packages to my local Linux box, and I want to upload them to the private Python artifact server.
I tried a curl PUT request, but the upload didn't work, and I need help to complete this:
curl -v -u admin:admin --upload-file boto3-1.9.76-py2.py3-none-any.whl https://artifact.example.com/repository/ASAP-Python-2.7-Hosted/
When I use that command, I get a 404 response.
I think the recommended approach is to use twine; something like this should work:
pip install twine
twine upload --repository-url https://artifact.example.com/repository/ASAP-Python-2.7-Hosted/ boto3-1.9.76-py2.py3-none-any.whl
It should ask for your username and password. To make life a bit easier, you can create a $HOME/.pypirc file with the URL, username and password:
[nexus]
repository: https://artifact.example.com/repository/ASAP-Python-2.7-Hosted/
username: admin
password: admin
Then when you call twine, do so like this:
twine upload --repository nexus boto3-1.9.76-py2.py3-none-any.whl
It's not a hard requirement, but if you're on a multi-user system and you've put a password in the file, you should probably do
chmod 600 $HOME/.pypirc
Pip (analogous to yarn) for download; twine for upload.
Configuration:
Be careful with trailing slashes!
Download with pip
Use pip config edit [--editor [nano|code|...]] [--global|--user] to edit the config:
[global]
index = https://nexus.your.domain/repository/pypi/pypi
index-url = https://nexus.your.domain/repository/pypi/simple
Or set environment variables. Dockerfile for example:
ENV \
PIP_INDEX=https://nexus.your.domain/repository/pypi/pypi \
PIP_INDEX_URL=https://nexus.your.domain/repository/pypi/simple
Or use command-line arguments: pip install --index-url ...
Upload with twine
Edit .pypirc:
[distutils]
index-servers =
pypi
[pypi]
repository: https://nexus.your.domain/repository/pypi-hosted/
username: nexususername
password: nexuspassword
Or environment
ENV \
TWINE_REPOSITORY_URL=https://nexus.your.domain/repository/pypi-hosted/ \
TWINE_USERNAME=nexususername \
TWINE_PASSWORD=nexuspassword
Or on the command line:
twine upload --repository-url https://nexus.your.domain/repository/pypi-hosted/ dist/*
I am currently trying to use GitLab to run a CI/CD job that runs a Python file which makes changes to a particular repository and then commits and pushes those changes to master. I also have the Master role in the repository. It appears that all git functions run fine except for the push: a plain git push leads to fatal: You are not currently on a branch., and using git push origin HEAD:master --force leads to fatal: unable to access 'https://gitlab-ci-token:xxx@xxx/project.git/': The requested URL returned error: 403. I've been looking over solutions online, one being this one, and another being unprotecting the branch, and couldn't quite find what I was looking for just yet. This is also a sub-project within the GitLab repository.
Right now, this is pretty much what my .gitlab-ci.yml looks like:
before_script:
  - apt-get update -y
  - apt-get install git -y
  - apt-get install python -y
  - apt-get install python-pip -y

main:
  script:
    - git config --global user.email "xxx@xxx"
    - git config --global user.name "xxx xxx"
    - git config --global push.default simple
    - python main.py
My main.py file essentially has a function that creates a new file within an internal directory, provided that it doesn't already exist. It looks similar to the following:
import os
import json

def createFile(strings):
    print ">>> Pushing to repo...";
    if not os.path.exists('files'):
        os.system('mkdir files');
    for s in strings:
        title = ("files/"+str(s['title'])+".json").encode('utf-8').strip();
        with open(title, 'w') as filedata:
            json.dump(s, filedata, indent=4);
    os.system('git add files/');
    os.system('git commit -m "Added a directory with a JSON file in it..."');
    os.system('git push origin HEAD:master --force');

createFile([{"title":"A"}, {"title":"B"}]);
I'm not entirely sure why this keeps happening. I have even tried to modify the repository settings to change protected pull and push access, but when I hit Save, it doesn't actually save. Nonetheless, this is my overall output. I would really appreciate any guidance anyone can offer.
Running with gitlab-runner 10.4.0 (00000000)
on cicd-shared-gitlab-runner (00000000)
Using Kubernetes namespace: cicd-shared-gitlab-runner
Using Kubernetes executor with image ubuntu:16.04 ...
Waiting for pod cicd-shared-gitlab-runner/runner-00000000-project-00000-concurrent-000000 to be running, status is Pending
Waiting for pod cicd-shared-gitlab-runner/runner-00000000-project-00000-concurrent-000000 to be running, status is Pending
Running on runner-00000000-project-00000-concurrent-000000 via cicd-shared-gitlab-runner-0000000000-00000...
Cloning repository...
Cloning into 'project'...
Checking out 00000000 as master...
Skipping Git submodules setup
$ apt-get update -y >& /dev/null
$ apt-get install git -y >& /dev/null
$ apt-get install python -y >& /dev/null
$ apt-get install python-pip -y >& /dev/null
$ git config --global user.email "xxx@xxx" >& /dev/null
$ git config --global user.name "xxx xxx" >& /dev/null
$ git config --global push.default simple >& /dev/null
$ python main.py
[detached HEAD 0000000] Added a directory with a JSON file in it...
2 files changed, 76 insertions(+)
create mode 100644 files/A.json
create mode 100644 files/B.json
remote: You are not allowed to upload code.
fatal: unable to access 'https://gitlab-ci-token:xxx@xxx/project.git/': The requested URL returned error: 403
HEAD detached from 000000
Changes not staged for commit:
modified: otherfiles/otherstuff.txt
no changes added to commit
remote: You are not allowed to upload code.
fatal: unable to access 'https://gitlab-ci-token:xxx@xxx/project.git/': The requested URL returned error: 403
>>> Pushing to repo...
Job succeeded
Here is a resource from Gitlab that describes how to make commits to the repository within the CI pipeline: https://gitlab.com/guided-explorations/gitlab-ci-yml-tips-tricks-and-hacks/commit-to-repos-during-ci/commit-to-repos-during-ci
Try configuring your .gitlab-ci.yml file to push the changes rather than trying to do it from the Python file.
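A minimal sketch of that idea, assuming a CI/CD variable named PROJECT_ACCESS_TOKEN that holds a token with the write_repository scope (keep your existing before_script that installs git and Python; main.py would then only generate the files and skip the git calls):
main:
  script:
    - git config --global user.email "ci@example.com"  # placeholder identity
    - git config --global user.name "CI Bot"
    - python main.py
    - git add files/
    - git commit -m "Added a directory with a JSON file in it..." || echo "nothing to commit"
    # PROJECT_ACCESS_TOKEN is an assumed variable; oauth2 works as the username for token auth
    - git push "https://oauth2:${PROJECT_ACCESS_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" HEAD:master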
I managed to do this via SSH on a runner by making sure the SSH key is added and then using the full git URL:
task_name:
  stage: some_stage
  script:
    - ssh-add -K ~/.ssh/[ssh key]
    - git push -o ci-skip git@gitlab.com:[path to repo].git HEAD:[branch name]
If it is the same repo that triggered the job, the URL could also be written as:
git@$CI_SERVER_HOST:$CI_PROJECT_PATH.git
This method can be used to commit tags or files. You may also wish to consider using the CI/CD variables API to store cross-build persistent data if it does not have to be committed to the repo:
https://docs.gitlab.com/ee/api/project_level_variables.html
https://docs.gitlab.com/ee/api/group_level_variables.html
ACCESS_TOKEN below is a variable, at the repo or an upbound group level, that contains a token which can write to the target repos. Since maintainers can see these, it is best practice to create tokens on special API users that are least privileged for just what they need to do.
write_to_another_repo:
  before_script:
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git config --global user.email "${GITLAB_USER_EMAIL}"
  script:
    - |
      echo "This CI job demonstrates writing files and tags back to a different repository than this .gitlab-ci.yml is stored in."
      OTHERREPOPATH="guided-explorations/gitlab-ci-yml-tips-tricks-and-hacks/commit-to-repos-during-ci/pushed-to-from-another-repo-ci.git"
      git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@$CI_SERVER_HOST/$OTHERREPOPATH
      cd pushed-to-from-another-repo-ci
      CURRENTDATE="$(date)"
      echo "$CURRENTDATE added a line" | tee -a timelog.log
      git status
      git add timelog.log
      # "[ci skip]" and "-o ci-skip" prevent a CI trigger loop
      git commit -m "[ci skip] updated timelog.log at $CURRENTDATE"
      git push -o ci-skip http://root:$ACCESS_TOKEN@$CI_SERVER_HOST/$OTHERREPOPATH HEAD:master
      # Tag commit (can be used without committing files)
      git tag "v$(date +%s)"
      git tag
      git push --tags http://root:$ACCESS_TOKEN@$CI_SERVER_HOST/$OTHERREPOPATH HEAD:master
The requested URL returned error: 403
The HTTP 403 Forbidden client error status response code indicates that the server understood the request but refuses to authorize it.
The problem is that we cannot provide valid authentication to git, and hence our request is forbidden.
Try this: Control Panel => User Accounts => Manage your credentials => Windows Credentials
It worked for me. However, I'm not quite sure if it will work for you.
You may also need to generate an access token with the 'read_repository' or 'write_repository' scope in your profile:
profile => edit profile => access tokens