Python - Build and release an Artifact with Azure DevOps

I'm trying to create an Azure DevOps Pipeline in order to build and release a Python package under the Azure DevOps Artifacts section.
I started by creating a feed called "utils", then I created my package and structured it like this:
.
├── src
│   ├── __init__.py
│   └── class.py
├── test
│   ├── __init__.py
│   └── test_class.py
├── .pypirc
├── azure-pipelines.yml
├── pyproject.toml
├── requirements.txt
└── setup.cfg
And this is the content of files:
.pypirc
[distutils]
Index-servers =
    prelios-utils

[utils]
Repository = https://pkgs.dev.azure.com/OMIT/_packaging/utils/pypi/upload/
pyproject.toml
[build-system]
requires = [
    "setuptools>=42",
    "wheel"
]
build-backend = "setuptools.build_meta"
setup.cfg
[metadata]
name = my_utils
version = 0.1
author = Walter Tranchina
author_email = walter.tranchina@OMIT.com
description = A package containing [...]
long_description = file: README.md
long_description_content_type = text/markdown
url = OMIT.com
project_urls =
classifiers =
    Programming Language :: Python :: 3
    License :: OSI Approved :: MIT License
    Operating System :: OS Independent

[options]
package_dir =
    = src
packages = find:
python_requires = >=3.7
install_requires =

[options.packages.find]
where = src
azure-pipelines.yml
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'
strategy:
  matrix:
    Python38:
      python.version: '3.8'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- script: |
    python -m pip install --upgrade pip
  displayName: 'Install dependencies'

- script: |
    pip install twine wheel
  displayName: 'Install buildtools'

- script: |
    pip install pytest pytest-azurepipelines
    pytest
  displayName: 'pytest'

- script: |
    python -m build
  displayName: 'Artifact creation'

- script: |
    twine upload -r utils --config-file ./.pypirc dist/*
  displayName: 'Artifact Upload'
The problem I'm facing is that the pipeline gets stuck in the Artifact Upload stage for hours without completing.
Can someone please help me understand what is wrong?
Thanks!
[UPDATE]
I've updated my yml file as suggested in the answers:
- task: TwineAuthenticate@1
  displayName: 'Twine Authenticate'
  inputs:
    artifactFeed: 'utils'
And now I have this error:
2022-05-19T09:20:50.6726960Z ##[section]Starting: Artifact Upload
2022-05-19T09:20:50.6735745Z ==============================================================================
2022-05-19T09:20:50.6736081Z Task : Command line
2022-05-19T09:20:50.6736434Z Description : Run a command line script using Bash on Linux and macOS and cmd.exe on Windows
2022-05-19T09:20:50.6736788Z Version : 2.201.1
2022-05-19T09:20:50.6737008Z Author : Microsoft Corporation
2022-05-19T09:20:50.6737375Z Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/command-line
2022-05-19T09:20:50.6737859Z ==============================================================================
2022-05-19T09:20:50.8090380Z Generating script.
2022-05-19T09:20:50.8100662Z Script contents:
2022-05-19T09:20:50.8102321Z twine upload -r utils --config-file ./.pypirc dist/*
2022-05-19T09:20:50.8102824Z ========================== Starting Command Output ===========================
2022-05-19T09:20:50.8129029Z [command]/usr/bin/bash --noprofile --norc /home/vsts/work/_temp/706c12ef-da25-44b0-b1fc-5ab83e7e0bf9.sh
2022-05-19T09:20:51.1178721Z Uploading distributions to
2022-05-19T09:20:51.1180490Z https://pkgs.dev.azure.com/OMIT/_packaging/utils/pypi/upload/
2022-05-19T09:20:27.0860014Z Traceback (most recent call last):
2022-05-19T09:20:27.0861203Z File "/opt/hostedtoolcache/Python/3.8.12/x64/bin/twine", line 8, in <module>
2022-05-19T09:20:27.0862081Z sys.exit(main())
2022-05-19T09:20:27.0863965Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/__main__.py", line 33, in main
2022-05-19T09:20:27.0865080Z error = cli.dispatch(sys.argv[1:])
2022-05-19T09:20:27.0866638Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/cli.py", line 124, in dispatch
2022-05-19T09:20:27.0867670Z return main(args.args)
2022-05-19T09:20:27.0869183Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/commands/upload.py", line 198, in main
2022-05-19T09:20:27.0870362Z return upload(upload_settings, parsed_args.dists)
2022-05-19T09:20:27.0871990Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/commands/upload.py", line 127, in upload
2022-05-19T09:20:27.0873239Z repository = upload_settings.create_repository()
2022-05-19T09:20:27.0875392Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/settings.py", line 329, in create_repository
2022-05-19T09:20:27.0876447Z self.username,
2022-05-19T09:20:27.0877911Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/settings.py", line 131, in username
2022-05-19T09:20:27.0879043Z return cast(Optional[str], self.auth.username)
2022-05-19T09:20:27.0880583Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/auth.py", line 34, in username
2022-05-19T09:20:27.0881640Z return utils.get_userpass_value(
2022-05-19T09:20:27.0883208Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/utils.py", line 248, in get_userpass_value
2022-05-19T09:20:27.0884302Z value = prompt_strategy()
2022-05-19T09:20:27.0886234Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/auth.py", line 85, in username_from_keyring_or_prompt
2022-05-19T09:20:27.0887440Z return self.prompt("username", input)
2022-05-19T09:20:27.0888964Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/auth.py", line 96, in prompt
2022-05-19T09:20:27.0890017Z return how(f"Enter your {what}: ")
2022-05-19T09:20:27.0890786Z EOFError: EOF when reading a line
2022-05-19T09:20:27.1372189Z ##[error]Bash exited with code 'null'.
2022-05-19T09:20:27.1745024Z ##[error]The operation was canceled.
2022-05-19T09:20:27.1749049Z ##[section]Finishing: Artifact Upload
Seems like twine is waiting for something... :/

I guess this is because you are missing a Python Twine Upload Authenticate task.
- task: TwineAuthenticate@1
  inputs:
    artifactFeed: 'MyTestFeed'
If you are using a project level feed, the value of artifactFeed should be {project name}/{feed name}.
If you are using an organization level feed, the value of artifactFeed should be {feed name}.
A simpler way is to click the gray "setting" button under the task and select your feed from the drop-down list.

I've found the solution after many attempts...
First I created a Service Connection for Python in Azure DevOps, containing a previously generated API key.
Then I edited the YAML file:
- task: TwineAuthenticate@1
  displayName: 'Twine Authenticate'
  inputs:
    pythonUploadServiceConnection: 'PythonUpload'

- script: |
    python -m twine upload --skip-existing --verbose -r utils --config-file $(PYPIRC_PATH) dist/*
  displayName: 'Artifact Upload'
The key was using the variable $(PYPIRC_PATH), which is automatically set by the previous task. The .pypirc file is ignored by the process, so it can be deleted!
Hope it will help!

Related

How to create a CSV file inside a Azure repo using python

I'm using Azure DevOps for the first time, just trying to create a CSV file using a Python script.
Python script main.py:
# importing pandas as pd
import pandas as pd
# list of name, degree, score
nme = ["aparna", "pankaj", "sudhir", "Geeku"]
deg = ["MBA", "BCA", "M.Tech", "MBA"]
scr = [90, 40, 80, 98]
# dictionary of lists
dict = {'name': nme, 'degree': deg, 'score': scr}
df = pd.DataFrame(dict)
print(df)
# saving the dataframe
df.to_csv('file.csv', header=False, index=False)
print("CSV file created")
Output: the script prints the dataframe and creates a CSV file in that folder.
What I did was go to Repos, create a new repo called myTest, and upload the Python file there.
Then I went to Pipelines, selected "Azure Repos Git" -> "myTest" -> "Python Package", edited the YAML file, and clicked "Save and run".
azure-pipelines.yml file content:
trigger:
- main

pool:
  vmImage: ubuntu-latest
strategy:
  matrix:
    Python37:
      python.version: '3.7'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- script: |
    python -m pip install --upgrade pip
    pip install pandas
  displayName: 'Install dependencies'

- script: |
    pip install pytest pytest-azurepipelines
    pytest
    python main.py
  displayName: 'pytest'
The pipeline ran successfully, but I didn't see any CSV file created in the repo.
Can somebody help me solve this issue? Is it possible to create a CSV file inside an Azure repo?

Structure Folder Path with Repository path Cloud Composer DAG

I need to run the DAG from the deployed repository folder, and I need to call modules that live in another directory, under another deployed repository path.
I have a cloudbuild.yaml that deploys the scripts into the DAG folder and the Plugins folder, but I still don't know how to import the other modules from the other path in the Cloud Composer bucket storage.
This is my Bucket Storage path
cloud-composer-bucket/
    dags/
        github_my_repository_deployed-testing/
            test_dag.py
    plugins/
        github_my_repository_deployed-testing/
            planning/
                modules_1.py
I need to call modules_1.py from my test_dag.py; I used this statement to import the module:
from planning.modules_1 import get_data
But with this method, I got the error shown below:
Broken DAG: [/home/airflow/gcs/dags/github_my_repository_deployed-testing/test_dag.py] Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/gcs/dags/github_my_repository_deployed-testing/test_dag.py", line 7, in <module>
from planning.modules_1 import get_date
ModuleNotFoundError: No module named 'planning'
This is my cloudbuild.yaml
steps:
- id: 'Push into Composer DAG'
name: 'google/cloud-sdk'
entrypoint: 'sh'
args: [ '-c', 'gsutil -m rsync -d -r ./dags ${_COMPOSER_BUCKET}/dags/$REPO_NAME']
- id: 'Push into Composer Plugins'
name: 'google/cloud-sdk'
entrypoint: 'sh'
args: [ '-c', 'gsutil -m rsync -d -r ./plugins ${_COMPOSER_BUCKET}/plugins/$REPO_NAME']
- id: 'Code Scanning'
name: 'python:3.7-slim'
entrypoint: 'sh'
args: [ '-c', 'pip install bandit && bandit --exit-zero -r ./']
substitutions:
_CONTAINER_VERSION: v0.0.1
_COMPOSER_BUCKET: gs://asia-southeast1-testing-cloud-composer-025c0511-bucket
My question is: what is the best way to call the other modules from my DAG?
You can put all your modules in the Cloud Composer DAG folder, for example:
cloud-composer-bucket/
    dags/
        github_my_repository_deployed-testing/
            test_dag.py
            planning/
                modules_1.py
        setup.py
In the DAG Python code, you can import your module the following way:
from planning.modules_1 import get_data
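For context, here is a minimal test_dag.py sketch assuming the layout above (the DAG id, dates, and task are placeholders, not taken from the question):
# test_dag.py -- minimal sketch; dag_id, dates and the task are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Works once planning/ sits next to this file, as described above.
from planning.modules_1 import get_data

with DAG(
    dag_id="test_dag",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    run_get_data = PythonOperator(task_id="get_data", python_callable=get_data)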
As I remember, the setup.py is created by Cloud Composer in the DAG root folder; if that's not the case, you can copy a setup.py into the DAG folder:
bucket/dags/setup.py
Example of setup.py file :
from setuptools import find_packages, setup

setup(
    name="composer_env_python_lib",
    version="0.0.1",
    install_requires=[],
    data_files=[],
    packages=find_packages(),
)
Other possible solution:
You can also use internal Python packages from GCP Artifact Registry if you want (for example, with your planning package).
Then you can install your internal Python packages in Cloud Composer via PyPI packages; I share with you a link about this:
private repo Composer Artifact registry

Testing with pytest: import that works on GitLab doesn't work in VS Code (and vice versa)

TL;DR: How can I set up my GitLab test pipeline so that the tests also run locally on VS Code?
I'm very new to GitLab pipelines, so please forgive me if the question is amateurish. I have a GitLab repo set up online, and I'm using VS Code to develop locally. I've created a new pipeline, I want to make sure all my unit tests (written with PyTest) run anytime I make a commit.
The issue is that even though I use the same setup.py file in both places (obviously), I can't get both VS Code testing and the GitLab pipeline tests to work at the same time. In my tests I'm doing an import, and if I import like
...
from external_workforce import misc_tools
# I want to test functions in this misc_tools module
...
Then it works on GitLab, but not in VS Code: VS Code gives an error during test discovery, namely ModuleNotFoundError: No module named 'external_workforce'. But if I import (in my test_tools.py file, see location below) like this:
...
from hr_datapool.external_workforce import misc_tools
...
It works in VS Code, but now GitLab goes crazy on me, saying ModuleNotFoundError: No module named 'hr_datapool'.
I think the relevant info might be the following, please ask for more if more info is needed!
My file structure is:
.
|__ requirements.txt
|__ setup.py
|__ hr_datapool
    |__ external_workforce
    |   |__ __init__.py
    |   |__ misc_tools.py
    |   |__ tests
    |       |__ test_tools.py
    |__ other_module
        ...
In my pipeline editor (the .gitlab-ci.yml file) I have:
image: python:3.9.7

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip
    - venv/

before_script:
  - python --version  # For debugging
  - pip install virtualenv
  - virtualenv venv
  - source venv/bin/activate
  - pip install -r requirements.txt

test:
  script:
    - pytest --pyargs hr_datapool #- python setup.py test

run:
  script:
    - python setup.py bdist_wheel
  artifacts:
    paths:
      - dist/*.whl
And finally, my setup.py is:
import re
from unittest import removeResult
from setuptools import setup, find_packages

with open('requirements.txt') as f:
    requirements = f.read().splitlines()

for req in ['wheel', 'bar']:
    requirements.append(req)

setup(
    name='hr-datapool',
    version='0.1',
    ...
    packages=find_packages(),
    install_requires=requirements,
)
Basically, the question is: How can I set up my GitLab test pipeline so that the tests also run locally on VS Code? Thank you!
UPDATE:
Adding the full trace coming from VS Code:
> conda run -n base --no-capture-output --live-stream python ~/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/get_output_via_markers.py ~/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir "." -s --cache-clear hr_datapool
cwd: .
[ERROR 2022-2-23 9:2:4.500]: Error discovering pytest tests:
[r [Error]: ============================= test session starts ==============================
platform darwin -- Python 3.9.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /Users/myuser/Documents/myfolder
plugins: anyio-2.2.0
collected 0 items / 1 error
==================================== ERRORS ====================================
_____ ERROR collecting hr_datapool/external_workforce/tests/test_tools.py ______
ImportError while importing test module '/Users/myuser/Documents/myfolder/hr_datapool/external_workforce/tests/test_tools.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../opt/anaconda3/lib/python3.9/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
hr_datapool/external_workforce/tests/test_tools.py:2: in <module>
from external_workforce import misc_tools
E ModuleNotFoundError: No module named 'external_workforce'
=========================== short test summary info ============================
ERROR hr_datapool/external_workforce/tests/test_tools.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
===================== no tests collected, 1 error in 0.08s =====================
Traceback (most recent call last):
File "/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/get_output_via_markers.py", line 26, in <module>
runpy.run_path(module, run_name="__main__")
File "/Users/myuser/opt/anaconda3/lib/python3.9/runpy.py", line 268, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/myuser/opt/anaconda3/lib/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/myuser/opt/anaconda3/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/testing_tools/run_adapter.py", line 22, in <module>
main(tool, cmd, subargs, toolargs)
File "/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/testing_tools/adapter/__main__.py", line 100, in main
parents, result = run(toolargs, **subargs)
File "/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/testing_tools/adapter/pytest/_discovery.py", line 44, in discover
raise Exception("pytest discovery failed (exit code {})".format(ec))
Exception: pytest discovery failed (exit code 2)
ERROR conda.cli.main_run:execute(33): Subprocess for 'conda run ['python', '/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/get_output_via_markers.py', '/Users/A111086670/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/testing_tools/run_adapter.py', 'discover', 'pytest', '--', '--rootdir', '/Users/myuser/Documents/myfolder', '-s', '--cache-clear', 'hr_datapool']' command failed. (See above for error)
at ChildProcess.<anonymous> (/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/out/client/extension.js:32:39235)
at Object.onceWrapper (events.js:422:26)
at ChildProcess.emit (events.js:315:20)
at maybeClose (internal/child_process.js:1048:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:288:5)]
The PYTHONPATH caused the problem.
The parent folder of external_workforce (that is, the path of hr_datapool) is on the PYTHONPATH when you are using GitLab, while the parent folder of hr_datapool is on the PYTHONPATH when you are using VS Code.
Are you running the tests in the terminal in VS Code? And have you added this to the settings.json file?
"terminal.integrated.env.windows": {
"PYTHONPATH": "${workspaceFolder};"
},
Then you can execute pytest in the terminal in VS Code. But you have not configured this in GitLab; there you only point at hr_datapool (via - pytest --pyargs hr_datapool and setup(name='hr-datapool', ...)), so you get the error message.
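A different way to make both environments agree (not from this answer, just a sketch assuming the layout shown in the question) is a conftest.py at the repository root that puts both folders on sys.path, so both import styles resolve during pytest collection:
# conftest.py at the repository root -- a sketch only, assuming the layout above.
import os
import sys

ROOT = os.path.dirname(os.path.abspath(__file__))
# Repo root: makes "from hr_datapool.external_workforce import misc_tools" work.
sys.path.insert(0, ROOT)
# Package folder: makes "from external_workforce import misc_tools" work.
sys.path.insert(0, os.path.join(ROOT, "hr_datapool"))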

Can't use Python's sh module in Bazel genrule

When I run a Python script that uses the 'sh' module from a Bazel genrule, it fails with this:
INFO: Analysed target //src:foo_gen (8 packages loaded).
INFO: Found 1 target...
ERROR: /home/libin11/workspace/test/test/src/BUILD:1:1: Executing genrule //src:foo_gen failed (Exit 1)
Traceback (most recent call last):
File "src/test.py", line 2, in <module>
sh.touch("foo.bar")
File "/usr/local/lib/python2.7/dist-packages/sh.py", line 1427, in __call__
return RunningCommand(cmd, call_args, stdin, stdout, stderr)
File "/usr/local/lib/python2.7/dist-packages/sh.py", line 767, in __init__
self.call_args, pipe, process_assign_lock)
File "/usr/local/lib/python2.7/dist-packages/sh.py", line 1784, in __init__
self._stdout_read_fd, self._stdout_write_fd = pty.openpty()
File "/usr/lib/python2.7/pty.py", line 29, in openpty
master_fd, slave_name = _open_terminal()
File "/usr/lib/python2.7/pty.py", line 70, in _open_terminal
raise os.error, 'out of pty devices'
OSError: out of pty devices
Target //src:foo_gen failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 2.143s, Critical Path: 0.12s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
I want to integrate a third-party project into my own. The third-party project is built with a Python script, so I would like to build it with a Bazel genrule.
Here is an example file list:
.
├── src
│ ├── BUILD
│ └── test.py
└── WORKSPACE
WORKSPACE is empty, BUILD is:
genrule(
    name = "foo_gen",
    srcs = glob(["**/*"]),
    outs = ["foo.bar"],
    cmd = "python $(location test.py)",
)
test.py is:
import sh
sh.touch("foo.bar")
And run:
bazel build //src:foo_gen
OS: Ubuntu 16.04
bazel: release 0.14.1
It looks like if you change the call to sh.touch("foo.bar", _tty_in=False, _tty_out=False) it works (see the sketch below), but you'll still need to modify the genrule a bit, otherwise it won't produce output.
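Applied to the example, test.py would look something like this (a sketch of the change described above):
# test.py -- same script with pty allocation disabled, since the genrule's
# execution environment has no pty devices available.
import sh

sh.touch("foo.bar", _tty_in=False, _tty_out=False)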
I prefer to import pip dependencies using the bazel python rules, so I can create the tool for my genrule. This way, bazel handles the pip requirement install and you don't have to chmod the test.py file.
load("#my_deps//:requirements.bzl", "requirement")
py_binary(
name = "foo_tool",
srcs = [
"test.py",
],
main = "test.py",
deps = [
requirement("sh"),
],
)
genrule(
name = "foo_gen",
outs = ["foo.bar"],
cmd = """
python3 $(location //src:foo_tool)
cp foo.bar $#
""",
tools = [":foo_tool"],
)
Note the required copy in the genrule command. It's a bit cleaner if your Python script can write to stdout; then you can just redirect the output to the file instead of adding a copy command. See this for more info.
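For example, a hypothetical stdout-based variant of the tool (not from the answer; the genrule command would then end with python3 $(location //src:foo_tool) > $@ instead of the copy):
# foo_tool.py -- hypothetical variant that writes the generated content to
# stdout, so the genrule can simply redirect it into $@.
import sys

sys.stdout.write("contents of foo.bar\n")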
My output with these changes:
INFO: Analysed target //src:foo_gen (0 packages loaded).
INFO: Found 1 target...
Target //src:foo_gen up-to-date:
bazel-genfiles/src/foo.bar
INFO: Elapsed time: 0.302s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action

How to fix the issue "PyPI-test not found in .pypirc" when submitting a package to PyPI?

I followed the guide How to submit a package to PyPI to submit one package.
It threw the error below:
Traceback (most recent call last):
File "setup.py", line 27, in
'Programming Language :: Python',
File "/usr/lib64/python2.6/distutils/core.py", line 152, in setup
dist.run_commands()
File "/usr/lib64/python2.6/distutils/dist.py", line 975, in run_commands
self.run_command(cmd)
File "/usr/lib64/python2.6/distutils/dist.py", line 995, in run_command
cmd_obj.run()
File "/usr/lib/python2.6/site-packages/setuptools/command/register.py", line 9, in run
_register.run(self)
File "/usr/lib64/python2.6/distutils/command/register.py", line 33, in run
self._set_config()
File "/usr/lib64/python2.6/distutils/command/register.py", line 84, in _set_config
raise ValueError('%s not found in .pypirc' % self.repository)
ValueError: PyPI-test not found in .pypirc
My .pypirc file content is:
[distutils] # this tells distutils what package indexes you can push to
index-servers =
    PyPI # the live PyPI
    PyPI-test # test PyPI

[PyPI] # authentication details for live PyPI
repository: https://PyPI.python.org/PyPI
username: {{username}}
password: {{password}}

[PyPI-test] # authentication details for test PyPI
repository: https://testPyPI.python.org/PyPI
username: {{username}}
My OS is CentOS release 6.2 (Final) and my Python env is Python 2.6.6.
What's the reason, and how do I fix it?
Some pitfalls to avoid in order to make this work:
The .pypirc file is expected inside the HOME directory. This is true for Windows and Unix.
If it's not working, it's because the .pypirc file is not found at the path indicated by the HOME variable.
On Windows, to know what your path is:
With PowerShell (if you are using pew to manage virtualenv for instance), echo $HOME.
With default Windows console, echo %HOMEPATH% (yes, talk about "portability")
Then place the .pypirc file right at that path.
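If you're unsure where that is, here is a quick check (distutils resolves the file as ~/.pypirc via os.path.expanduser):
# Prints the path where distutils looks for the .pypirc file.
import os

print(os.path.join(os.path.expanduser("~"), ".pypirc"))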
As for the file, don't forget the [distutils] part, otherwise it won't work.
Your file should be EXACTLY like that:
[distutils]
index-servers =
    pypi
    pypitest

[pypitest]
repository = https://testpypi.python.org/pypi
username = <your user name goes here>
password = <your password goes here>

[pypi]
repository = https://pypi.python.org/pypi
username = <your user name goes here>
password = <your password goes here>
My intuition tells me to not customize the pypi repository name, not sure it works otherwise.
Then, when you run the command, simply provide the -r (repository) flag with pypitest:
python setup.py register -r pypitest
And that should do the trick.
Make sure your .pypirc file is in your home directory.
When I got this error, I changed my .pypirc file to:
[distutils]
index-servers =
    pypi
    test

[pypi]
repository: https://pypi.python.org/pypi
username: {{username}}
password: {{password}}

[test]
repository: https://testpypi.python.org/pypi
username: {{username}}
password: {{password}}
and then I ran:
python setup.py register
instead of:
python setup.py register -r pypitest
This prompted me for my username and password which I entered and it successfully registered. Note I was following Peter Downs' Guide
I realized this doesn't upload to pypitest, but I still managed to register my module to pypi using this method.
I replaced "PyPI"/"PyPItest" both to lowercase letters: "pypi"/"pypi-test". The error disappeared, but prompt another error:
Server response (403): You are not allowed to store 'mypackage' package information.
You should remove the comments here since distutils doesn't parse them properly:
index-servers =
    PyPI # the live PyPI
    PyPI-test # test PyPI
So just:
index-servers =
    PyPI
    PyPI-test
Or maybe even better don't use mixed case and dashes for the repository names, as Junchen suggests. With the current version it should work, though.
I used pypitest, rather than pypi-test. Works like a charm.
I followed the instructions by Peter Downs.
