Complete list of causes of MaxRetryError - python

I am currently using from requests.packages.urllib3.util.retry import Retry to retry some API calls, but I keep encountering different underlying errors as the cause of ConnectionError / MaxRetryError. Since I currently catch these and then generate custom errors, I'd like to go through a complete list of causes that lead to MaxRetryError. I thought this would be easy to find, but I can't seem to find it anywhere.
Does anyone have any reference to a complete list of possible causes that can lead to ConnectionError / MaxRetryError? The only reference I seem to be able to find is this. Seems like this is an issue others are facing too.
An example of what this error looks like is this:
ConnectionError(MaxRetryError("HTTPSConnectionPool(host='localhost', port=8080): Max retries exceeded with url: ..... (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x0000026D26242688>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))"))
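For reference, the retry-and-catch pattern in question looks roughly like this (the URL, retry settings, and handling below are placeholders rather than the exact code):

import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from requests.packages.urllib3.exceptions import MaxRetryError

session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

try:
    session.get("https://localhost:8080/some/endpoint", timeout=5)
except requests.exceptions.ConnectionError as exc:
    # requests wraps the urllib3 MaxRetryError; its .reason attribute holds
    # the underlying cause (NewConnectionError, SSLError, ...).
    cause = exc.args[0] if exc.args else None
    if isinstance(cause, MaxRetryError):
        print(type(cause.reason).__name__, cause.reason)
    raise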

Related

Unable to Import File using h2o in Python

I'm trying to import a file using h2o in Python.
h2o.init() is successful, but when I do the following:
df = h2o.import_file(path = "Combined Database - Final.csv")
I get a number of errors that I can't find any help on. Here is the last one that shows up:
H2OConnectionError: Unexpected HTTP error: HTTPConnectionPool(host='127.0.0.1', port=54321): Max retries exceeded with url: /3/Jobs/$03017f00000132d4ffffffff$_a6edaa906ba7a556a417c13149c940db (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted',))
Above it, there are also “OSError”, “NewConnectionError”, and “MaxRetryError”.
This is my first time using h2o, and I can't even import my data. Any help would be much appreciated!
Please see the user guide: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/starting-h2o.html
Please also run the following tests (reposted from here) to debug your issue; a short sketch of the port/shutdown steps follows the list.
Does running h2o.jar from the command line work?
And if so, does h2o.init() then connect to it?
What do the logs say?
Disable your firewall and see if it makes a difference. (Remember to re-enable it afterwards.)
Try a different port number (the default is 54321).
Shut down h2o (h2o.shutdown()), then try running h2o.init() again and see if it works.
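As a minimal sketch of the port/shutdown steps above (the alternative port number is only an example; the file name is the one from the question):

import h2o

# If the default port 54321 is already in use (WinError 10048), start the
# cluster on a different, free port.
h2o.init(ip="127.0.0.1", port=54323)

df = h2o.import_file(path="Combined Database - Final.csv")
print(df.head())

# Shut the cluster down cleanly before retrying h2o.init().
h2o.shutdown(prompt=False)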

Anaconda does not allow the creation of a new environment

I am just starting out with Anaconda and trying to create a new environment. However, when I try to do this using the navigator, it gives the following error:
CondaHTTPError: HTTP None None for url <https://conda.anaconda.org/conda-forge/win-64/repodata.json>
Elapsed: None

An HTTP error occurred when trying to retrieve this URL. HTTP errors are often intermittent, and a simple retry will get you on your way.
ConnectionError(MaxRetryError("HTTPSConnectionPool(host='conda.anaconda.org', port=443): Max retries exceeded with url: /t/<TOKEN>/conda-forge/win-64/repodata.json (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x0000000004C39080>: Failed to establish a new connection: [Errno 11004] getaddrinfo failed',))",),)
I have been reading a lot online and I think it has something to do with the .condarc file. However, I can't figure out how to solve this issue. Does anyone know how to fix this?
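From what I've read, [Errno 11004] getaddrinfo failed points to a name-resolution failure, so one way to narrow it down independently of conda would be a quick DNS check along these lines (just a diagnostic sketch):

import socket

# If this also fails, the problem is DNS or proxy configuration on the
# machine (often set via .condarc or environment variables), not conda itself.
try:
    infos = socket.getaddrinfo("conda.anaconda.org", 443)
    print("resolved:", sorted({info[4][0] for info in infos}))
except socket.gaierror as exc:
    print("DNS lookup failed:", exc)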
Many thanks in advance :)

ebaysdk-python Connection Error

I have a django project, currently hosted on pythonanywhere, that uses the Finding API of the open source project ebaysdk-python. On my local machine the site worked perfectly; however, when I execute the API call I get this error message: HTTPConnectionPool(host='svcs.ebay.com', port=80): Max retries exceeded with url: /services/search/FindingService/v1 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f6560105150>: Failed to establish a new connection: [Errno 111] Connection refused',)).
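For reference, the call follows the pattern from the ebaysdk-python docs, roughly like this (the app ID and keywords are placeholders):

from ebaysdk.finding import Connection as Finding

# Placeholder credentials; config_file=None means they are passed inline.
api = Finding(appid="MY-APP-ID", config_file=None, siteid="EBAY-US")
response = api.execute("findItemsByKeywords", {"keywords": "legos"})
print(response.reply)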
Now I have scoured the docs and other related questions, but could not figure out the issue. I have verified that my API keys are correct, and my code to execute the api call is straight from the docs. So that I may be pointed in the correct direction: What is the most likely cause for this error to be raised under these circumstances?
Thank you.

Socrata SODA API is rejecting with Max Retries Exceeded

I am using the REST API Modular Input within Splunk to GET data.SFGov.org data via the SODA API. I have an APP TOKEN. I am repeatedly getting MAX RETRIES EXCEEDED.
Background:
I am building a prototype Splunk-based stream cursor for SF opendata. I have been testing a GET using the REST API MODULAR INPUT from the configuration screen itself; I have not written any Python code yet. Here is the ERROR.
11-30-2016 16:24:57.432 -0800 ERROR ExecProcessor - message from "python /Applications/Splunk/etc/apps/rest_ta/bin/rest.py" Exception performing request: HTTPSConnectionPool(host='data.sfgov.org', port=443): Max retries exceeded with url: [REDACTED] (Caused by : [Errno 8] nodename nor servname provided, or not known)
I found out that, by mistake, the REST API module's polling interval was set to 60 seconds, which might have caused a problem (I changed it to ONE DAY to avoid future issues). I then got a new APP TOKEN and tried a GET. I see the GET going out in the log, but the same MAX RETRIES EXCEEDED error keeps coming. I am using the same IP address.
I will be testing for the next few weeks. How do I fix this and gracefully avoid this again?
#chrismetcalf - just flagging you.
Max Retries Exceeded is not an error message that I'd expect to see out of our API, especially if you were only making a call every 60 seconds. I think that may actually be Splunk giving up after trying and failing to make your HTTP call too many times.
The error message Caused by : [Errno 8] nodename nor servname provided, or not known makes me think that there's actually a DNS error on Splunk's side. That's the error message you usually see when a domain name can't be resolved.
Perhaps there's some DNS whitelisting you need to configure in your Splunk environment?
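One quick way to check that from the Splunk host, independent of the REST API Modular Input, is to make the same GET directly with Python's requests library (the dataset path and token below are placeholders):

import requests

url = "https://data.sfgov.org/resource/<dataset-id>.json"
headers = {"X-App-Token": "<YOUR_APP_TOKEN>"}

try:
    resp = requests.get(url, headers=headers, params={"$limit": 10}, timeout=10)
    resp.raise_for_status()
    print(resp.json()[:2])
except requests.exceptions.ConnectionError as exc:
    # Seeing "[Errno 8] nodename nor servname provided, or not known" here
    # too would confirm a DNS problem on the host rather than a Splunk issue.
    print("connection failed (likely DNS):", exc)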

Tweepy Python suddenly becoming very slow, ProxyError

I've been using Tweepy for the past two months to crawl tweets in a certain area, with geocode as a parameter. Everything was fine until last week, when Tweepy suddenly became very slow and started terminating itself.
Error message :
tweepy.error.TweepError:Failed to send request: HTTPSConnectionPool(host='api.twitter.com',port=443): Max retries exceeded with url: /1.1/search/tweet.json?count=200&geocode=-6.1750%2C106.8283%2C438.37km&since=2016-04-21&until=2016-04-22 (Caused by ProxyError('Cannot connect to proxy.', error(10054,'An existing connection was forcibly closed by the remote host')))
I've already set the proxy (as usual) using:
set http_proxy=http://152.118.24.10:8080
set https_proxy=https://152.118.24.10:8080
Does anyone have the same problem or know the solution?
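To rule out the environment variables, tweepy can also be given the proxy explicitly; here is a rough sketch with placeholder credentials and query (the proxy address is the one above):

import tweepy

# Placeholder app credentials.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

# Pass the proxy directly to tweepy instead of relying on the
# http_proxy/https_proxy environment variables, and back off on rate limits.
api = tweepy.API(auth,
                 proxy="http://152.118.24.10:8080",
                 wait_on_rate_limit=True,
                 retry_count=3,
                 retry_delay=5)

# q="*" is a placeholder query; the geocode is the one from the error above.
results = api.search(q="*", geocode="-6.1750,106.8283,438.37km", count=100)
for tweet in results:
    print(tweet.id, tweet.created_at)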
