I have been able to set up an Azure pipeline that publishes a Python package to our internal Azure feed. Now I am trying to have it publish directly to PyPI. This is what I have done already:
I have set up a PyPI "Service Connection" in the Azure project with the following configuration:
Authentication Method = Username and Password
Python Repository Url for upload = https://upload.pypi.org/legacy
EndpointName: I wasn't too sure about this, but I set it as the package name on PyPI
And I named this Service Connection PyPi.
In the pipeline I will run the following authentication task:
- task: TwineAuthenticate@1
  inputs:
    pythonUploadServiceConnection: 'PyPi'
Then I build the wheel for publishing.
Publishing to the internal Azure feed works, but when I try to upload that same package to PyPI it gets stuck on this:
Uploading distributions to https://upload.pypi.org/legacy/
Are there any obvious issues that could make it get stuck uploading to PyPI?
TwineAuthenticate probably isn't actually providing credentials to the twine upload command, so twine hangs waiting for user input. Try adding --non-interactive to your twine command, like this: twine upload --non-interactive dist/*. Instead of hanging, it will then likely fail with an error message that tells you what is wrong.
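For reference, here is a minimal sketch of the complete publish steps. It assumes the service connection is named 'PyPi' as in the question, and that the repository name passed to -r matches the connection's EndpointName; $(PYPIRC_PATH) is the variable the TwineAuthenticate task sets to point at the .pypirc file it generates.
- task: TwineAuthenticate@1
  inputs:
    pythonUploadServiceConnection: 'PyPi'
- script: |
    python -m pip install --upgrade build twine
    python -m build
    # -r must match the repository name in the generated .pypirc (the EndpointName)
    twine upload --non-interactive -r PyPi --config-file $(PYPIRC_PATH) dist/*
  displayName: Build and publish to PyPI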
I created a new artifact feed in a public open source project that is already being used to run public Azure Pipelines CI on a repo hosted on GitHub.
I successfully uploaded a bunch of test Python wheel files to that feed using twine and the appropriate credentials.
The generated pip feed URL looks as follows:
https://pkgs.dev.azure.com/orgname/publicprojectname/_packaging/feedname/pypi/simple/
If I type the "pypi"/"pip" URL of the feed in the browser where I am logged in, I get the following message:
This functionality is currently not available.
If I type the same URL in a new "private browsing" window on firefox, I get redirected to the Azure login page.
My end goal would be to share nightly builds and I do not want to use the main pypi.org server for that.
Note: this official tutorial mentions a specific button named "+ New public feed (preview)" to create the feed but I could not find it on my project. I used the regular "+ New feed" button instead. Maybe this is the cause of the problem?
Even if the URL cannot be browsed from a regular browser, pointing pip to use it with the following command works as expected:
pip install -i https://pkgs.dev.azure.com/orgname/publicprojectname/_packaging/feedname/pypi/simple/ packagename
To list the contents of the feed, anonymous users can use a different URL:
https://dev.azure.com/orgname/publicprojectname/_packaging?_a=feed&feed=feedname
Side note: make sure you do not have the artifacts-keyring package installed in the venv you use to test anonymous pip access:
pip uninstall -y artifacts-keyring
Otherwise, trying to access your feed from pip will trigger the following credential prompt:
$ pip install -i https://pkgs.dev.azure.com/orgname/publicprojectname/_packaging/feedname/pypi/simple/ packagename
Looking in indexes: https://pkgs.dev.azure.com/orgname/publicprojectname/_packaging/feedname/pypi/simple/
[Minimal] [CredentialProvider]DeviceFlow: https://pkgs.dev.azure.com/orgname/publicprojectname/_packaging/feedname/pypi/simple/
[Minimal] [CredentialProvider]ATTENTION: User interaction required.
**********************************************************************
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXXXXXXX to authenticate.
**********************************************************************
[Error] [CredentialProvider]Device flow authentication failed. User was presented with device flow, but didn't react within 90 seconds.
I am using Google Cloud App Engine and deploying with gcloud app deploy and a standard app.yaml file. My requirements.txt file has one private package that is fetched from GitHub (git+ssh://git@github.com/...git). The install works locally, but when I run the deploy I get:
Host key verification failed.
fatal: Could not read from remote repository.
This suggests there is no SSH key available when installing. Reading the docs (https://cloud.google.com/appengine/docs/standard/python3/specifying-dependencies), it appears this just isn't an option:
Dependencies are installed in a Cloud Build environment that does not provide access to SSH keys. Packages hosted on repositories that require SSH-based authentication must be copied into your project directory and uploaded alongside your project's code using the pip package manager.
To me this seems severely suboptimal: the whole point of factoring code out into a package was to avoid duplication across repos. Now, if I want to use App Engine, you're telling me this is not possible?
Is there really no workaround?
See:
https://cloud.google.com/appengine/docs/standard/python3/specifying-dependencies#private_dependencies
The App Engine service does not (and should not) have access to your private repo.
One alternative (that you don't want) is to provide the App Engine service with your SSH key.
The other -- as documented -- is that you must provide the content of your private repo to the service as part of your upload.
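For example, a minimal sketch of that documented route, with repo and directory names purely illustrative: clone the private package into the project tree and point requirements.txt at the local copy.
git clone git@github.com:org/private-pkg.git vendor/private-pkg
# in requirements.txt, replace the git+ssh line with the local path:
#   ./vendor/private-pkg
gcloud app deploy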
I'm going through the same issue, deploying to gcloud a Python project that has some private repositories in its requirements.txt. As @DazWilkin already wrote, there's no way to deploy it the way you normally would.
One option would be to create a Docker image of the whole project and its dependencies, push it to the gcloud Docker registry, and then pull it into the App Engine instance.
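A rough sketch of that route, assuming the App Engine flexible environment and that the image is built on a machine where your SSH key works (pip can only resolve the git+ssh dependency during the build if a key is available there, e.g. via Docker BuildKit's --ssh option); the project and image names are illustrative.
docker build -t gcr.io/my-project/my-app .     # build where the private repo is reachable
docker push gcr.io/my-project/my-app           # push to the Google container registry
gcloud app deploy --image-url=gcr.io/my-project/my-app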
Has anyone successfully implemented flask-saml on Windows as a dev environment, with Python 3.6 and Flask 1.0.2?
I was given the link to the SAML METADATA XML file by our organisation and had it configured on my flask app.
from flask import Flask
import flask_saml

app = Flask(__name__)
app.config.update({
    'SECRET_KEY': 'changethiskeylaterthisisoursecretkey',
    'SAML_METADATA_URL': 'https://<url>/FederationMetadata.xml',
})
flask_saml.FlaskSAML(app)
According to the documentation, this extension will set up the following routes:
/saml/logout/: Log out from the application. This is where users go if they click on a “Logout” button.
/saml/sso/: Log in through SAML.
/saml/acs/: After /saml/sso/ has sent you to your IdP, it sends you back to this path. Your IdP might also provide direct login without needing the /saml/sso/ route.
When I go to one of the routes http://localhost:5000/saml/sso/ I get the error below
saml2.sigver.SigverError: Cannot find ['xmlsec.exe', 'xmlsec1.exe']
I then went to this site https://github.com/mehcode/python-xmlsec/releases/tag/1.3.5 to get xmlsec and install it. However, I'm still getting the same issue.
Here is a screenshot of how I installed xmlsec
The where command does not seem to find xmlsec.exe.
The documentation asks for xmlsec1 to be pre-installed. What you installed is a Python binding to xmlsec1.
Get a Windows build of xmlsec1 from here or build it from source, and make it available on the PATH.
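For example, a quick sketch for a cmd session, assuming you unpacked an xmlsec1 build to C:\tools\xmlsec1 (the path is illustrative):
set PATH=C:\tools\xmlsec1\bin;%PATH%
where xmlsec1.exe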
xmlsec doesn't work properly on Windows; it's better to use a Linux environment.
Run the command below before running pip install xmlsec:
sudo apt-get install xmlsec1
I apologize in advance since this seems like a basic question...
I am trying to learn to use mujoco (link here), and inside its Python bindings' Makefile it has:
upload:
	rm -rf dist
	python setup.py sdist
	twine upload dist/*
What does the twine upload dist/* command do?
In addition, this asks me for a username and password like this:
Uploading distributions to https://pypi.python.org/pypi
Enter your username: guest
Enter your password:
Uploading mujoco-py-0.5.7.tar.gz
HTTPError: 401 Client Error: You must be identified to edit package information for url: https://pypi.python.org/pypi
Makefile:2: recipe for target 'upload' failed
Is this asking for my computer username and password?
Twine is a commonly used system for uploading project builds to PyPI (the Python Package Index).
It takes care of securely transferring your project's build artifacts, whether wheels, sdists, or other formats, to PyPI or some other user-defined index server.
When you specify twine upload <files>, twine will attempt to upload said files to PyPI, but in order to do so, it will require you to authenticate yourself. This is because PyPI wants to protect a project from having their advertised packages "hijacked" by a ne'er-do-well. In order for this step to proceed, you would have to give credentials that are marked as authoritative for the project that your uploaded project artifacts belong to.
It looks like the mujoco project's Makefile includes a target that makes it easier to upload updates of the project to PyPI using Twine. This target is only meant to be used by the package maintainer(s).
Oh, and in case you were wondering, the python setup.py sdist command is what builds a source artifact that can be uploaded to PyPI. It places this artifact in the ./dist/ directory as project-name-version.tar.gz.
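Putting it together, the Makefile's upload target expands to roughly the following; it is only useful if you hold the maintainer's PyPI credentials:
rm -rf dist               # remove artifacts from previous builds
python setup.py sdist     # write a fresh dist/mujoco-py-<version>.tar.gz
twine upload dist/*       # asks for your PyPI account, not your computer login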
I've been working on an update to a server that requires a private GitHub repository. I can download the repo on my machine, because I'm able to enter a password when prompted. When I try to do this on my server, which runs on an Amazon EC2 instance, there are no such prompts, so the module from the GitHub repository is not installed. Is there a way for me to provide the username and password in the requirements file I'm using with pip, so the private repo module installs successfully?
I'm using the -e git+<url>#egg=<name> in my requirements.txt
You can use SSH links instead of HTTPS links, like git@github.com:username/projectname.git instead of https://github.com/username/projectname.git, and authenticate with SSH keys instead of a password.
Step by step, you have to:
Change the URL in requirements.txt to the git@... SSH form.
Create a key pair for your deployment machine and store it in the ~/.ssh/ directory.
Add the key to your GitHub account.
Read the GitHub help pages for more detailed instructions.
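A sketch of those steps run on the EC2 instance; the key type, file names, and egg name are illustrative:
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""   # create the key pair
cat ~/.ssh/id_ed25519.pub                          # add this public key to GitHub
ssh -T git@github.com                              # verify GitHub accepts the key
# requirements.txt entry, switched to the SSH form:
#   -e git+ssh://git@github.com/username/projectname.git#egg=projectname
pip install -r requirements.txt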