I am using the gspread module to read data from a Google Sheet. However, some sheets are quite large, and whenever I try to read (get) the values from the sheet I get a timeout error like the following:
ReadTimeout: HTTPSConnectionPool(host='sheets.googleapis.com', port=443): Read timed out. (read timeout=120)
One solution that comes to mind is to extend the timeout value, but I don't know exactly how to do that.
If you know how, or have any kind of solution to this issue, I would really appreciate your help.
Hi, if you look at the gspread repository, it recently merged a new PR that introduces timeouts in the client. Once that is released, just update gspread to the latest version and you'll be able to set a timeout on your requests.
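Until that release is out, one workaround is to hand the client a session that applies a default timeout to every request. A minimal sketch, assuming a pre-6.0 gspread whose Client still accepts a session argument, service-account credentials, and a placeholder sheet key:

import gspread
from google.oauth2.service_account import Credentials
from google.auth.transport.requests import AuthorizedSession

# Authorized session that adds a default read timeout to every request it sends
class TimeoutAuthorizedSession(AuthorizedSession):
    def request(self, *args, **kwargs):
        kwargs.setdefault("timeout", 300)  # seconds; raise as needed for large sheets
        return super().request(*args, **kwargs)

creds = Credentials.from_service_account_file(
    "service_account.json",  # placeholder credentials file
    scopes=["https://www.googleapis.com/auth/spreadsheets.readonly"],
)
client = gspread.Client(auth=creds, session=TimeoutAuthorizedSession(creds))
worksheet = client.open_by_key("YOUR_SHEET_KEY").sheet1
values = worksheet.get_all_values()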
I use nba_api requests to get play-by-play data for some games.
The request works on my personal computer in my Python environment:
leaguegamefinder.LeagueGameFinder(player_or_team_abbreviation='T', date_from_nullable=Today, date_to_nullable=Today, league_id_nullable='00', outcome_nullable="W")
But when I run my Python code with GitHub's workflow tool (GitHub Actions), it gives me the following error:
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='stats.nba.com', port=443): Read timed out. (read timeout=30)
I have seen that the NBA API blacklists requests from some cloud hosting providers, but I have not found a way to get past this block (using different proxies or new headers does not seem to solve the problem), and I don't know if there is one.
Does anybody have an idea about this kind of problem?
Thanks a lot!
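For reference, the nba_api endpoint classes accept timeout, headers and proxy keyword arguments, so you can at least rule out the 30-second default and route the call through a proxy of your choosing. A rough sketch; the proxy URL is a placeholder, and I can't promise any of this actually gets past the stats.nba.com block on cloud-provider IPs:

from datetime import date
from nba_api.stats.endpoints import leaguegamefinder

Today = date.today().strftime("%m/%d/%Y")

finder = leaguegamefinder.LeagueGameFinder(
    player_or_team_abbreviation='T',
    date_from_nullable=Today,
    date_to_nullable=Today,
    league_id_nullable='00',
    outcome_nullable="W",
    timeout=120,  # raise the 30 s default
    proxy="http://user:pass@myproxy:3128",  # placeholder; a proxy outside the blocked ranges
)
games = finder.get_data_frames()[0]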
I am working on saving data from a zip file to an Elasticsearch database in a Python application. The zip file contains HTML pages and domain names. I need to push the data from that file into an array and then save it in Elasticsearch.
The issue is that sometimes, when the data is large (the HTML can be of any size), I get the error:
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='localhost', port=9200): Read timed out. (read timeout=300)
ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host='localhost', port=9200): Read timed out. (read timeout=300))
I have tried increasing the timeout value, but I don't know how large the data might become in the future, so I'm not sure what value to put there.
Can anyone please help me understand whether this is the only way, or whether there is a better way to fix it?
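One alternative to blindly raising the timeout is to index in bounded chunks with the bulk helper, so each request stays small no matter how large the HTML grows. A rough sketch, assuming the standard elasticsearch-py client and a hypothetical "pages" index (on older 7.x clients the connection timeout kwarg is named timeout rather than request_timeout):

from elasticsearch import Elasticsearch
from elasticsearch.helpers import streaming_bulk

es = Elasticsearch("http://localhost:9200", request_timeout=300)

# Stand-in for the (domain, html) pairs extracted from the zip file
pages = [("example.com", "<html>...</html>")]

def actions():
    for domain, html in pages:
        yield {"_index": "pages", "_source": {"domain": domain, "html": html}}

for ok, info in streaming_bulk(
    es,
    actions(),
    chunk_size=100,                    # documents per request
    max_chunk_bytes=10 * 1024 * 1024,  # cap each request body at ~10 MB
    max_retries=3,                     # back off and retry instead of failing outright
    raise_on_error=False,              # report failures instead of raising
):
    if not ok:
        print("failed:", info)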
I am following the guide for using Datastore in Python found here: https://cloud.google.com/datastore/docs/reference/libraries
When I go to use the put() method, there is a long pause, and a google.cloud.exceptions.GatewayTimeout: 504 Deadline Exceeded error is returned.
I'm attempting to follow the guide in the python shell. I'm not sure what to do from here or how to resolve this error. Any help would be greatly appreciated.
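One thing worth knowing while you debug: recent versions of google-cloud-datastore let you pass an explicit timeout and retry policy to put(), so the deadline is at least under your control. A minimal sketch along the lines of that guide; the kind and key name are placeholders:

from google.cloud import datastore
from google.api_core.retry import Retry

client = datastore.Client()

key = client.key("Task", "sample-task")  # placeholder kind and name
entity = datastore.Entity(key=key)
entity.update({"description": "Buy milk"})

# Give the RPC a longer deadline and retry transient failures
client.put(entity, retry=Retry(deadline=60), timeout=30)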
I'm using overpy to query the Overpass API, and the nature of the data is such that I have a lot of queries to execute. I've run into the 429 OverpassTooManyRequests exception and I'm trying to play by the rules. I've tried introducing time.sleep calls to space out the requests, but I have no basis for how long the program should wait before continuing.
I found this link which mentions a "Retry-after" header:
How to avoid HTTP error 429 (Too Many Requests) python
Is there a way to access that header in an overpy response? I've been through the docs and the source code, but nothing stood out that would allow me to access that header so I can pause querying until it's acceptable to do so again.
I'm using Python 3.6 and overpy 0.4.
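I don't think overpy exposes the raw response headers, but as a stopgap you can ask the server directly how long to wait via its status endpoint and parse the reply yourself. A rough sketch with plain requests; the parsing is naive and assumes the English text format the endpoint currently returns:

import re
import time
import requests

def seconds_until_slot(status_url="https://overpass-api.de/api/status"):
    text = requests.get(status_url, timeout=30).text
    if "slots available now" in text:
        return 0
    # Lines look like: "Slot available after: ..., in 42 seconds."
    waits = [int(m) for m in re.findall(r"in (\d+) seconds", text)]
    return min(waits) if waits else 30  # fall back to a modest pause

time.sleep(seconds_until_slot())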
Maybe this isn't quite the answer you're seeking, but I ran into the same issue and fixed it by simply hosting my own OSM database server using Docker. Just clone the repo and follow the instructions:
https://github.com/mediasuitenz/docker-overpass-api
From http://overpass-api.de/command_line.html: do check that you do not have a single 'runaway' request that is taking up all the resources.
After verifying that I don't have runaway queries, I have taken Peter's advice and added a catch for the TooManyRequests exception that waits 30s and tries again. This seems to be working as an immediate solution.
I will also raise an issue with the originators of OverPy to suggest an enhancement to allow evaluating the /api/status, as per mmd's advice.
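For reference, the immediate workaround looks roughly like this (the query string is just an example, and the 30-second pause is arbitrary):

import time
import overpy
from overpy.exception import OverpassTooManyRequests

api = overpy.Overpass()
query = 'node["amenity"="cafe"](50.6,7.0,50.8,7.3);out;'  # example query

result = None
while result is None:
    try:
        result = api.query(query)
    except OverpassTooManyRequests:
        time.sleep(30)  # back off, then try again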
I'm struggling with what should be a very simple problem. I'm failing to set the session timeout on a suds-jurko connection. My WSDL is good, and everything works when pulling a smaller dataset. I've attempted several means of setting the timeout. While the following doesn't complain, it is also ineffective:
from suds.client import Client
client = Client(authUrl, timeout=600)
My connection appears to fail after the default 90 seconds. Unfortunately, that just isn't long enough to get the data I need. The error I receive is:
ssl.SSLError: ('The read operation timed out',)
Help! My Google-fu is weak, I guess. I've tried many things and, finally, I have to ask for help, which will be greatly appreciated.
While this will not help the OP, I think it is worth mentioning that under Python 3.9 the call to Client(..., timeout=300) seems to work with sudz version 1.0.3 from https://github.com/Skylude/suds, so I guess this issue has been resolved.
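For anyone stuck on an older fork where the constructor timeout never seems to take effect on SSL reads, one blunt workaround is to raise Python's default socket timeout before building the client. This is process-wide, so use it with care; the WSDL URL below is a placeholder:

import socket
from suds.client import Client

socket.setdefaulttimeout(600)  # seconds; default for every new socket in the process

authUrl = "https://example.com/service?wsdl"  # placeholder WSDL endpoint
client = Client(authUrl, timeout=600)
# then call client.service.<YourMethod>() as usual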