pip fails with HTTP error 503 while getting https://pypi.python - python

I'm trying to set up a build machine using Jenkins on an Amazon EC2 instance. It builds a Python project and uses the shiningpanda plugin to set up a virtualenv for the build.
Every build runs:
pip install --use-mirrors --force-reinstall -r requirements.txt
I've been making builds all day, trying to get my coverage and pylint settings right.
Now, at the end of the day, I'm getting errors like this for a few of the projects:
HTTP error 503 while getting
https://pypi.python.org/packages/source/c/coverage/coverage-3.6.tar.gz#md5=67d4e393f4c6a5ffc18605409d2aa1ac
(from https://pypi.python.org/simple/coverage/)
Could not install requirement coverage==3.6 (from -r requirements.txt
(line 11)) because of error HTTP Error 503: Service Unavailable
If I visit the link in the browser it loads fine.
Why is this happening? Is there a rate limit on the PyPI API that I'm exceeding? This has been working all day.
One more note: each time I run pip, it fails on a different package. In the build after the error message above, coverage downloaded successfully, but I got a 503 error three packages later.

503 usually means a temporary error -- the webserver is not able to service the request due to, for example, temporary overloading.
The fact that it's a different package each time would indicate this kind of transient error. The overloading is probably just a result of lots of other calls coming in at the same time as you.
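If the 503s really are transient overload, plain retries usually get the build through eventually. A minimal sketch, assuming a POSIX shell build step (the retry counts and sleep are arbitrary, and the --retries/--timeout flags require a reasonably recent pip):
# Let pip retry each download a few extra times before giving up
pip install --retries 10 --timeout 30 -r requirements.txt
# Or wrap the whole install in a coarse retry loop
for i in 1 2 3; do
    pip install --use-mirrors --force-reinstall -r requirements.txt && break
    sleep 15
done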

Related

Error when installing torch through requirements.txt for azure web service deployment

Generating a requirements.txt file returns this for torch:
torch==1.6.0+cpu
torchvision==0.7.0+cpu
However, with +cpu, I get an error that pip is not able to find what it is supposed to install.
I navigated to https://pypi.org/project/torch/#history, and since I couldn't find any version saying "+cpu", I removed the +cpu suffix from my requirements.txt file and ran the deployment again.
Now this is where it gets stuck:
Collecting torch==1.6.0
9:41:06 PM cv-web-app: [16:41:06+0000] Downloading torch-1.6.0-cp37-cp37m-manylinux1_x86_64.whl (748.8 MB)
It takes forever to install, and in the end I keep getting this error:
An unknown error has occurred. Check the diagnostic log for details.
I checked the diagnostic logs through the Azure portal, but I don't see anything logged beyond the installation of torch, so I'm unable to figure out what the error is. Maybe I'm looking in the wrong place.
How do I figure out what is wrong? What does +cpu indicate?
For context: I'm making a computer vision app using Flask, my system is Windows, and I'm deploying to Azure through VS Code's "Create new web app" option.
As suggested by @ryanchill, when we deploy a Python app, Azure App Service creates a virtual environment and runs pip install -r requirements.txt.
If that's not happening, make sure SCM_DO_BUILD_DURING_DEPLOYMENT is set to 1.
See Configure a Linux Python app for Azure App Service for more details.
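For instance, one way to set that flag via the Azure CLI (the app and resource-group names here are placeholders):
az webapp config appsettings set --name <app-name> --resource-group <resource-group> --settings SCM_DO_BUILD_DURING_DEPLOYMENT=1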
NOTE: As also mentioned in the MS doc above, you can install PyTorch by putting its wheel index in requirements.txt.
For example:
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.9.0+cpu
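The same wheel index also works for a one-off install from the command line; this should be an equivalent of the requirements.txt entries above:
pip install --find-links https://download.pytorch.org/whl/torch_stable.html torch==1.9.0+cpu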
References:
Python Webapp on Azure - PyTorch | MS Q&A
MS Tutorial: Deploy Python apps to Azure App Service
Blog: PyTorch Web Service deployment using Azure Machine Learning Service and Azure Web Apps from VS Code

Python build on AWS CodeBuild failing with deps error - dependencies were not changed

Morning,
I have an app using AWS CodeBuild that is rarely deployed. I went to deploy a tiny change, with no change to dependencies, and the build now fails. I suspect something in AWS or Python has moved on since I last deployed.
The error is
Build Failed
Error: PythonPipBuilder:ResolveDependencies - {simplejson==3.17.3(wheel)}
simplejson has never been in requirements.txt, so I am not sure why it is suddenly being called out.
Have you seen this before?
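One way to check whether another pinned package pulls in simplejson transitively is to inspect the dependency tree; a sketch using the third-party pipdeptree tool (not part of pip itself):
pip install pipdeptree
pipdeptree --reverse --packages simplejson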

Log metrics in PythonScriptStep

In my Azure ML pipeline I've got a PythonScriptStep that is crunching some data. I need to access the Azure ML Logger to track metrics in the step, so I'm trying to import get_azureml_logger but that's bombing out. I'm not sure what dependency I need to install via pip.
from azureml.logging import get_azureml_logger
ModuleNotFoundError: No module named 'azureml.logging'
I came across a similar post, but it deals with Azure Notebooks. Anyway, I tried adding that wheel's blob URL to my pip dependencies, but it fails with an auth error.
Collecting azureml.logging==1.0.79
ERROR: HTTP error 403 while getting
https://azuremldownloads.blob.core.windows.net/wheels/latest/azureml.logging-1.0.79-py3-none-any.whl?sv=2016-05-31&si=ro-2017&sr=c&sig=xnUdTm0B%2F%2FfknhTaRInBXyu2QTTt8wA3OsXwGVgU%2BJk%3D
ERROR: Could not install requirement azureml.logging==1.0.79 from
https://azuremldownloads.blob.core.windows.net/wheels/latest/azureml.logging-1.0.79-py3-none-any.whl?sv=2016-05-31&si=ro-2017&sr=c&sig=xnUdTm0B%2F%2FfknhTaRInBXyu2QTTt8wA3OsXwGVgU%2BJk%3D
(from -r /azureml-environment-setup/condaenv.g4q7suee.requirements.txt
(line 3)) because of error 403 Client Error:
Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. for url:
https://azuremldownloads.blob.core.windows.net/wheels/latest/azureml.logging-1.0.79-py3-none-any.whl?sv=2016-05-31&si=ro-2017&sr=c&sig=xnUdTm0B%2F%2FfknhTaRInBXyu2QTTt8wA3OsXwGVgU%2BJk%3D
I'm not sure how to move on this, all I need to do is to log metrics in the step.
Check out the ScriptRunConfig section of Monitor Azure ML experiment runs and metrics. ScriptRunConfig works effectively the same as a PythonScriptStep.
The idiom is generally to have the following in the script of your PythonScriptStep:
from azureml.core.run import Run
run = Run.get_context()
run.log('foo_score', "bar")
Side note: You don't need to change your environment dependencies to use this because PythonScriptSteps have azureml-defaults installed automatically as a dependency.

Is it a security issue to pin the version of the "certifi" package in requirements.txt?

I have a web service that uses the requests library to make https requests to a different, external service.
As part of my deployment process, whenever there's a change to the list of dependencies, I use pip freeze to regenerate the requirements.txt file, which is stored in my code repository and processed by my PaaS provider to set up the application environment.
Today, I noticed this line in my requirements.txt file:
certifi==14.05.14
That is, the certifi package is pinned down to a version that is no longer the latest.
Is this a security issue (does it mean that my trusted root certificates are not up-to-date)?
If so - what would be the best way to change my deployment process (which is, I think, fairly standard) to solve this issue?
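For illustration only: if the pin does turn out to be stale, the regeneration step described above could explicitly upgrade certifi before freezing. A sketch, assuming the usual virtualenv-based flow:
pip install --upgrade certifi
pip freeze > requirements.txt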

dcos cassandra subcommand error

I can't seem to install the Cassandra package: Marathon gets stuck in deployment at phase 1/2, and the dcos cassandra subcommand issues the following stack trace. Any help appreciated.
Traceback (most recent call last):
File "/home/azureuser/.dcos/subcommands/cassandra/env/bin/dcos-cassandra", line 5, in <module>
from pkg_resources import load_entry_point
File "/opt/mesosphere/lib/python3.4/site-packages/pkg_resources.py", line 2701, in <module>
parse_requirements(__requires__), Environment()
File "/opt/mesosphere/lib/python3.4/site-packages/pkg_resources.py", line 572, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: requests
Python version: Python 3.4.2
requests version: 1.8.1
I'm on the team that's building the Cassandra service. Thanks for trying it out!
We've just updated the Cassandra CLI package to better define its pip dependencies. In your case, it looks like it was trying to reuse an old version of the requests library. To kick your CLI's Cassandra module to the latest version, try running:
dcos package uninstall --cli cassandra
dcos package install --cli cassandra
Note that the --cli flag is important; omitting it can result in uninstalling the Cassandra service itself, while all we want is to reinstall the local CLI module.
Keep in mind that you should also be able to access the Cassandra service directly over HTTP. The CLI module is effectively a thin interface around the service's HTTP API. For example:
curl -H "Authorization:token=$(dcos config show core.dcos_acs_token)" http://<your-dcos-host>/service/cassandra/v1/plan | jq '.'
See the curl examples in the Cassandra 1.7 docs for other endpoints.
Once you've gotten the CLI up and running, that should give more insight into the state of the service, but logs may give more thorough information, particularly if the service is failing to start. You can access the service logs directly by visiting the dashboard at http://<your-dcos-host>/:
Click Services on the left, then select marathon from the list. The Cassandra service manager is run as a Marathon task.
A panel will come up showing a list of all tasks being managed by Marathon. Click cassandra on this list to show its working directory, including the available log files.
When hovering over files, a magnifying glass will appear. Click a magnifying glass to display the corresponding file in-line.
Unfortunately we're still having the same problem, though we've managed to find a workaround. It seems there is more than one distinct issue with DC/OS on Azure; anyway, I'll provide further feedback. When using the Marketplace version of DC/OS 1.7.0, Cassandra doesn't deploy: it gets stuck in Marathon at phase 1/2, and inspection of the logs suggests it has a problem accessing the default ports.
Pastebin to log file
On the other hand, that problem doesn't appear on ACS DC/OS: Cassandra deploys correctly, appearing in the DC/OS Services tab as well as in Marathon. The DC/OS Cassandra CLI doesn't work on either. Upon a not very thorough inspection, it seems that when we installed the DC/OS CLI using the method above, there are some issues with the dependencies, especially taking into account the $PYTHONPATH variable:
/opt/mesosphere/lib/python3.4/site-packages
We were able to solve the dependency issues by taking two actions.
The first dependency issue was with the requests module, which was solved with the following steps after installing the CLI for the Cassandra subcommand:
cd ~/.dcos/subcommands/cassandra
source env/bin/activate
pip install -Iv requests
We used -Iv (--ignore-installed, verbose) since the usual upgrade procedure fails because of the external dependency on $PYTHONPATH; that solved the requests dependency.
The second dependency the cassandra subcommand required was docopt. Using the same method we were able to solve that issue as well, and now the subcommand works as per the documentation:
pip install -Iv docopt
This does seem a bit hackish; I'm wondering if there's anything more appropriate to be done.
Output of dcos cassandra connection after taking the above steps:
{
"address": [
"10.32.0.9:9042",
"10.32.0.6:9042",
"10.32.0.8:9042"
],
"dns": [
"node-0.cassandra.mesos:9042",
"node-1.cassandra.mesos:9042",
"node-2.cassandra.mesos:9042"
]
}
The same happens for other DC/OS subcommands like for example the Kafka one.
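If so, the same per-subcommand workaround presumably applies; a sketch for the Kafka CLI, assuming it uses the same ~/.dcos/subcommands layout:
cd ~/.dcos/subcommands/kafka
source env/bin/activate
pip install -Iv requests docopt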
