The following error occurred while trying to run code:
Traceback (most recent call last):
response = session.post(base_url, params={'query': filename_query})
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 578, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 805, in urlopen
if retries.is_retry(method, response.status, has_retry_after):
File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 343, in is_retry
if not self._is_method_retryable(method):
File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 331, in _is_method_retryable
if self.method_whitelist and method.upper() not in self.method_whitelist:
AttributeError: 'Retry' object has no attribute 'method_whitelist'
Could someone help me with this?
I don't know your specific case since there isn't much info, but I had the same error while using the requests package in an Apache Beam pipeline.
The thing is that method_whitelist was deprecated in urllib3==1.26.0 and removed later on, as stated in the release changelog.
The solution in my case was to pin urllib3 to an earlier version by adding urllib3==1.25.11 to my requirements.txt.
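If you just want the same pin applied directly with pip (assuming nothing else in your environment needs urllib3 >= 1.26), the downgrade is:
pip install urllib3==1.25.11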
I had a similar problem. In my case, the cause was that the pyfcm library (used for sending push notifications) was out of date; upgrading it solved the error:
pip install pyfcm --upgrade
I solved a similar problem by installing requests==2.26.0 and urllib3==1.26.2.
Use allowed_methods instead of method_whitelist. The latter was deprecated and later removed, and the former is its replacement.
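For reference, here is a minimal sketch of a Retry configured with the new argument name (requires urllib3 >= 1.26) and mounted on a requests session; the retry counts and status codes are just illustrative, and base_url / filename_query are the names from the traceback above:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# allowed_methods replaces the removed method_whitelist argument
retry = Retry(total=3, allowed_methods=["GET", "POST"], status_forcelist=[500, 502, 503])
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))
response = session.post(base_url, params={'query': filename_query})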
I have the following script that grabs a repository from GitHub using PyGithub:
import logging
import getpass
import os
from github import Github, Repository as Repository, UnknownObjectException
GITHUB_URL = 'https://github.firstrepublic.com/api/v3'
if __name__ == '__main__':
    logging.getLogger().setLevel(logging.DEBUG)
    logging.debug('validating GH token')
    simpleuser = getpass.getuser().replace('adm_','')
    os.path.exists(os.path.join(os.path.expanduser('~' + getpass.getuser()) + '/.ssh/github-' + simpleuser + '.token'))
    with open(os.path.join(os.path.expanduser('~' + getpass.getuser()) + '/.ssh/github-' + simpleuser + '.token'), 'r') as token_file:
        github_token = token_file.read()
    logging.debug(f'Token after file processing: {github_token}')
    logging.debug('initializing github')
    g = Github(base_url=GITHUB_URL, login_or_token=github_token)
    logging.debug("attempting to get repository")
    source_repo = g.get_repo('CLOUD/iam')
Works just fine in Python 3.9.1 on my Mac.
In production, we have RHEL7, Python 3.6.8 (can't upgrade it, don't suggest it). This is where it blows up:
(virt) user#lmachine: directory$ python3 test3.py -r ORG/repo_name -d
DEBUG:root:validating GH token
DEBUG:root:Token after file processing: <properly_formed_token>
DEBUG:root:initializing github
DEBUG:root:attempting to get repository
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): <domain>:443
Traceback (most recent call last):
File "test3.py", line 68, in <module>
source_repo = g.get_repo(args.repo)
File "/home/adm_gciesla/virt/lib/python3.6/site-packages/github/MainClass.py", line 348, in get_repo
"GET", "%s%s" % (url_base, full_name_or_id)
File "/home/user/virt/lib/python3.6/site-packages/github/Requester.py", line 319, in requestJsonAndCheck
verb, url, parameters, headers, input, self.__customConnection(url)
File "/home/user/virt/lib/python3.6/site-packages/github/Requester.py", line 410, in requestJson
return self.__requestEncode(cnx, verb, url, parameters, headers, input, encode)
File "/home/user/virt/lib/python3.6/site-packages/github/Requester.py", line 487, in __requestEncode
cnx, verb, url, requestHeaders, encoded_input
File "/home/user/virt/lib/python3.6/site-packages/github/Requester.py", line 513, in __requestRaw
response = cnx.getresponse()
File "/home/user/virt/lib/python3.6/site-packages/github/Requester.py", line 116, in getresponse
allow_redirects=False,
File "/home/user/virt/lib/python3.6/site-packages/requests/sessions.py", line 543, in get
return self.request('GET', url, **kwargs)
File "/home/user/virt/lib/python3.6/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/home/user/virt/lib/python3.6/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/home/user/virt/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/user/virt/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/home/user/virt/lib/python3.6/site-packages/urllib3/connectionpool.py", line 392, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib64/python3.6/http/client.py", line 1254, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib64/python3.6/http/client.py", line 1295, in _send_request
self.putheader(hdr, value)
File "/usr/lib64/python3.6/http/client.py", line 1232, in putheader
raise ValueError('Invalid header value %r' % (values[i],))
ValueError: Invalid header value b'token <properly_formed_token>\n'
The script is a stripped-down version of a larger application. I've tried rolling back to earlier versions of PyGithub, since that's really all I have control over in prod. Same error regardless. PyGithub's latest release claims Python >= 3.6 should work.
I've really run the gamut of debugging. Seems like reading from environment variables can work sometimes, but the script needs to be able to use whatever credentials are available. Passing in the token as an argument is only for running locally.
Hopefully someone out there has seen something similar.
We just figured it out. Apparently, even though there's no visible newline in the .token file, the string returned by file.read() ends with one.
Changing github_token = token_file.read() to github_token = token_file.read().strip() fixes the problem.
I'm trying to download a video file using an API. The equivalent curl command works without a problem, and the Python code below works without error for small videos:
with requests.get("http://username:password@url/Download/", data=data, stream=True) as r:
    r.raise_for_status()
    with open("deliverables/video_output34.mp4", "wb") as f:
        for chunk in r.iter_content(chunk_size=1024):
            f.write(chunk)
It fails for large videos (it failed for a ~34 MB video), while the equivalent curl command works even for that one:
Traceback (most recent call last):
File "/home/nabil/.local/lib/python3.7/site-packages/requests/adapters.py", line 479, in send
r = low_conn.getresponse(buffering=True)
TypeError: getresponse() got an unexpected keyword argument 'buffering'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nabil/.local/lib/python3.7/site-packages/requests/adapters.py", line 482, in send
r = low_conn.getresponse()
File "/usr/local/lib/python3.7/http/client.py", line 1321, in getresponse
response.begin()
File "/usr/local/lib/python3.7/http/client.py", line 296, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.7/http/client.py", line 265, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nabil/.local/lib/python3.7/site-packages/requests/api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "/home/nabil/.local/lib/python3.7/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/home/nabil/.local/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/home/nabil/.local/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/home/nabil/.local/lib/python3.7/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: Remote end closed connection without response
I've checked links like the following without success
Thanks to SilentGhost on the #python IRC channel, who pointed me to this and suggested I upgrade requests, which solved it (from 2.22.0 to 2.24.0).
Upgrading the package is done like this:
pip install requests --upgrade
Another option that may help someone looking at this question is to use pycurl; here is a good starting point: https://github.com/rajatkhanduja/PyCurl-Downloader
You can also pass --libcurl to your curl command to get a good indication of how to use pycurl.
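For example, here is a rough pycurl sketch of the same download (the URL and credentials are the placeholders from the question, and curl --libcurl will generate the exact options for your real command):
import pycurl

with open("deliverables/video_output34.mp4", "wb") as f:
    c = pycurl.Curl()
    c.setopt(pycurl.URL, "http://url/Download/")   # placeholder URL from the question
    c.setopt(pycurl.USERPWD, "username:password")  # placeholder credentials
    c.setopt(pycurl.WRITEDATA, f)                  # stream the response body straight to the file
    c.perform()
    c.close()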
I just wrote a simple Python demo and ran into a confusing problem.
import requests
print(requests.get('http://www.sina.com.cn/'))
I know the correct result is Response [200], but on my Windows 10 x64 machine it raises the following error. I guess something may be wrong on my computer.
Traceback (most recent call last):
File "C:\Users\CJY\Desktop\Python_Demo\web.py", line 2, in <module>
print(requests.get('http://www.sina.com.cn/'))
File "D:\python3.6.1\lib\site-packages\requests\api.py", line 72, in get
return request('get', url, params=params, **kwargs)
File "D:\python3.6.1\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "D:\python3.6.1\lib\site-packages\requests\sessions.py", line 518, in request
resp = self.send(prep, **send_kwargs)
File "D:\python3.6.1\lib\site-packages\requests\sessions.py", line 639, in send
r = adapter.send(request, **kwargs)
File "D:\python3.6.1\lib\site-packages\requests\adapters.py", line 403, in send
conn = self.get_connection(request.url, proxies)
File "D:\python3.6.1\lib\site-packages\requests\adapters.py", line 302, in get_connection
conn = proxy_manager.connection_from_url(url)
File "D:\python3.6.1\lib\site-packages\requests\packages\urllib3\poolmanager.py", line 279, in connection_from_url
pool_kwargs=pool_kwargs)
File "D:\python3.6.1\lib\site-packages\requests\packages\urllib3\poolmanager.py", line 408, in connection_from_host
self.proxy.host, self.proxy.port, self.proxy.scheme, pool_kwargs=pool_kwargs)
File "D:\python3.6.1\lib\site-packages\requests\packages\urllib3\poolmanager.py", line 218, in connection_from_host
raise LocationValueError("No host specified.")
requests.packages.urllib3.exceptions.LocationValueError: No host specified.
[Finished in 0.2s]
Please help me!
That works for me. Please ensure you have internet connectivity and that you can ping www.sina.com.cn.
I just tested this with the same Python version on Windows 10 64-bit and it worked for me.
When using requests on Windows, I have come across the same error when the local DNS cache is pointing to an incorrect value.
If you are still having no luck, try flushing the local DNS cache on that machine by entering the following command in a command prompt:
ipconfig /flushdns
Error location:
Lib\urllib\request.py:
proxyEnable = winreg.QueryValueEx(internetSettings, 'ProxyEnable')[0]
If proxyEnable comes back as a string, you'll see this error. The reason is that in your registry ProxyEnable is stored as REG_SZ rather than REG_DWORD, so change its type and everything works.
Open the registry at:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyEnable
(you can also search for ProxyEnable directly)
Delete ProxyEnable, then create a new ProxyEnable value, changing it from (REG_SZ, 0) to (REG_DWORD, 0x00000000 (0)).
(Screenshots omitted: one showing the newly created ProxyEnable value, one showing the correct REG_DWORD value. My PC's language is Chinese, but the location of ProxyEnable is the same.)
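If you want to check what your machine currently has before touching the registry editor, here is a small sketch using the standard winreg module (key path as above):
import winreg

# Open the Internet Settings key and read ProxyEnable together with its registry type
key = winreg.OpenKey(
    winreg.HKEY_CURRENT_USER,
    r"Software\Microsoft\Windows\CurrentVersion\Internet Settings",
)
value, value_type = winreg.QueryValueEx(key, "ProxyEnable")
# A REG_DWORD here is what urllib expects; a REG_SZ value triggers the "No host specified." error
print(value, "OK (REG_DWORD)" if value_type == winreg.REG_DWORD else "wrong type, should be REG_DWORD")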
I'm trying to do a simple POST request. I'm using a list because I want to send all my POST requests at the same time using threads. Here is an example of a URL:
s = "https://emoncms.org/input/post.json?node="+str(test)+"&json={test_stack_overflow:0}&apikey="+str(apikey)
list.append(threading.Thread(target=requests.post, args=([s, ])))
I was using this code maybe 3 months ago and it worked perfectly.
I wanted to get back to this project this week and realized I was getting some errors, this one in particular:
Exception in thread Thread-14:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 94, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 49, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 457, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 569, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 420, in send
raise SSLError(e, request=request)
SSLError: <unprintable SSLError object>
I also get another error, ConnectionError, but I think it's due to the network, or because the website can't keep up with the activity or is down. I'll leave you the traceback in case you want it:
ConnectionError: ('Connection aborted.', error(101, 'Network is unreachable'))
This code is only a part of my project; it runs every minute, and I don't know why, but this issue (SSLError) only comes up maybe 10 times a day. I have this script running on several Raspberry Pis: some have the same problem at different frequencies, and others don't have it at all.
Any ideas on what is going on?
Thanks in advance!
Use verify=False in the requests call, like this:
import requests
url="https://emoncms.org/input/post.json?node="+str(test)+"&json={test_stack_overflow:0}&apikey="+str(apikey)
requests.post(url,verify=False)
If you are using it with threads, then it will look like this:
list.append(threading.Thread(target=requests.post, args=(url,), kwargs={"verify": False}))  # kwargs must be passed separately
You are getting this error because requests tries to verify the certificate for HTTPS connections, so you have to override that by passing verify=False, or you can provide a certificate via verify, like this: requests.get(url, verify="/path/to/certificate.ext")
Also, I suspect this should really be a GET request, since as far as I know query parameters are not how a POST request normally carries its data. If you use the GET method, the same verify argument applies there too.
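For example, a sketch of the same call sent as a GET with explicit query parameters and verification left on (test and apikey are the variables from the question):
import requests

params = {"node": str(test), "json": "{test_stack_overflow:0}", "apikey": str(apikey)}
response = requests.get("https://emoncms.org/input/post.json", params=params)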
I am trying to geocode addresses entered by the user in my Django app.
The app was working fine with pygeocoder. Suddenly, it has started giving problems.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ishaan/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pygeocoder.py", line 129, in geocode
return GeocoderResult(Geocoder.get_data(params=params))
File "/home/ishaan/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pygeocoder.py", line 204, in get_data
response = session.send(request.prepare())
File "/home/ishaan/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/home/ishaan/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/requests/adapters.py", line 370, in send
timeout=timeout
File "/home/ishaan/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "/home/ishaan/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 344, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
File "/home/ishaan/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 314, in _raise_timeout
if 'timed out' in str(err) or 'did not complete (read)' in str(err): # Python 2.6
TypeError: __str__ returned non-string (type Error)
I am unable to understand this problem, and I have close to no knowledge about SSL and its errors.
I don't know how relevant it is, but the app was working until I installed Redis Server for another app in the same project. I have uninstalled it, but geocoding still isn't working.
Lastly, no other geocoder is working; every geocoder gives the same error. I tried GeoPy and geocoder 1.6.4.
Thanks in advance