Proxy using Twython - Python

I keep getting this error every time I try running my code through a proxy. I have gone through every link available on how to get code running behind a proxy, and I simply cannot get this to work.
import twython
import requests

TWITTER_APP_KEY = 'key'  # supply the appropriate value
TWITTER_APP_KEY_SECRET = 'key-secret'
TWITTER_ACCESS_TOKEN = 'token'
TWITTER_ACCESS_TOKEN_SECRET = 'secret'

t = twython.Twython(app_key=TWITTER_APP_KEY,
                    app_secret=TWITTER_APP_KEY_SECRET,
                    oauth_token=TWITTER_ACCESS_TOKEN,
                    oauth_token_secret=TWITTER_ACCESS_TOKEN_SECRET,
                    client_args={'proxies': {'http': 'proxy.company.com:10080'}})
Now if I do

t = twython.Twython(app_key=TWITTER_APP_KEY,
                    app_secret=TWITTER_APP_KEY_SECRET,
                    oauth_token=TWITTER_ACCESS_TOKEN,
                    oauth_token_secret=TWITTER_ACCESS_TOKEN_SECRET,
                    client_args=client_args)
print t.client_args

I get only a {}
and when I try running
t.update_status(status='See how easy this was?')
I get this traceback:
Traceback (most recent call last):
File "<pyshell#40>", line 1, in <module>
t.update_status(status='See how easy this was?')
File "build\bdist.win32\egg\twython\endpoints.py", line 86, in update_status
return self.post('statuses/update', params=params)
File "build\bdist.win32\egg\twython\api.py", line 223, in post
return self.request(endpoint, 'POST', params=params, version=version)
File "build\bdist.win32\egg\twython\api.py", line 213, in request
content = self._request(url, method=method, params=params, api_call=url)
File "build\bdist.win32\egg\twython\api.py", line 134, in _request
response = func(url, **requests_args)
File "C:\Python27\lib\site-packages\requests-1.2.3-py2.7.egg\requests\sessions.py", line 377, in post
return self.request('POST', url, data=data, **kwargs)
File "C:\Python27\lib\site-packages\requests-1.2.3-py2.7.egg\requests\sessions.py", line 335, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python27\lib\site-packages\requests-1.2.3-py2.7.egg\requests\sessions.py", line 438, in send
r = adapter.send(request, **kwargs)
File "C:\Python27\lib\site-packages\requests-1.2.3-py2.7.egg\requests\adapters.py", line 327, in send
raise ConnectionError(e)
ConnectionError: HTTPSConnectionPool(host='api.twitter.com', port=443): Max retries exceeded with url: /1.1/statuses/update.json (Caused by <class 'socket.gaierror'>: [Errno 11004] getaddrinfo failed)
I have searched everywhere and tried everything I possibly could. The only resources I found were:
https://twython.readthedocs.org/en/latest/usage/advanced_usage.html#manipulate-the-request-headers-proxies-etc
https://groups.google.com/forum/#!topic/twython-talk/GLjjVRHqHng
https://github.com/fumieval/twython/commit/7caa68814631203cb63231918e42e54eee4d2273
https://groups.google.com/forum/#!topic/twython-talk/mXVL7XU4jWw
There were no topics I could find here (on Stack Overflow) either.
Please help. If you have already done this, please share a code example.

Your code isn't using your proxy for HTTPS. Your example specifies a proxy for plain HTTP only, but your stack trace shows an HTTPSConnectionPool, so the HTTPS request bypasses the proxy, and your local machine apparently can't resolve external domains directly (hence the getaddrinfo failure).
Try setting your proxy like this:
client_args = {'proxies': {'https': 'http://proxy.company.com:10080'}}
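For reference, the same client_args dictionary can carry proxy entries for both schemes; a minimal sketch, reusing the placeholder host from the question:

```python
# Route both plain-HTTP and HTTPS requests through the HTTP proxy.
# The host and port are placeholders; substitute your own proxy.
client_args = {
    'proxies': {
        'http': 'http://proxy.company.com:10080',
        'https': 'http://proxy.company.com:10080',
    }
}
```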

In combination with t-8ch's answer (which is that you must specify the proxy as he has shown), you should also realize that, as of this moment, requests (the underlying library of Twython) does not support proxying over HTTPS. This is a problem with requests' underlying library, urllib3, and it's a long-running issue as far as I'm aware.
On top of that, reading a bit of Twython's source explains why t.client_args returns an empty dictionary. In short, if you were to instead print t.client.proxies, you'd see that indeed your proxies are being processed as they very well should be.
Finally, complaining about your workplace on Stack Overflow, and linking to GitHub commits that have your GitHub username (and real name) in the comments, is not the best idea. Stack Overflow is indexed quite thoroughly by Google, and someone else could find this and associate it with you as easily as I have. On top of that, that commit has absolutely no effect on Twython's current behaviour; you're running down a rabbit hole with no end by chasing the author of that commit.

It looks like a domain name lookup failed. Assuming your configured DNS server can resolve Twitter's domain name (and surely it can), I would presume your DNS lookup for proxy.company.com failed. Try using a proxy by IP address instead of by hostname.


Python HTTPS request SSLError CERTIFICATE_VERIFY_FAILED

PYTHON

import requests

url = "https://REDACTED/pb/s/api/auth/login"
r = requests.post(
    url,
    data={
        'username': 'username',
        'password': 'password'
    }
)
NIM

import httpclient, json

let client = newHttpClient()
client.headers = newHttpHeaders({ "Content-Type": "application/json" })
let body = %*{
  "username": "username",
  "password": "password"
}
let resp = client.request("https://REDACTED.com/pb/s/api/auth/login", httpMethod = httpPOST, body = $body)
echo resp.body
I'm calling an API to get some data. Running the Python code I get the traceback below. However, the Nim code works perfectly, so there must be something wrong with the Python code or setup.
I'm running Python version 2.7.15.
requests lib version 2.19.1
Traceback (most recent call last):
File "C:/Python27/testht.py", line 21, in <module>
"Referer": "https://REDACTED.com/pb/a/"
File "C:\Python27\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Python27\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 512, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 622, in send
r = adapter.send(request, **kwargs)
File "C:\Python27\lib\site-packages\requests\adapters.py", line 511, in send
raise SSLError(e, request=request)
SSLError: HTTPSConnectionPool(host='REDACTED.com', port=443): Max retries exceeded with url: /pb/s/api/auth/login (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),))
The requests module verifies the certificate it gets from the server, much like a browser would. Rather than letting you click through and say "add exception" like you would in your browser, requests raises that exception.
There's a way around it, though: try adding verify=False to your post call. Note that this disables certificate verification entirely, so only do it when you understand the risk.
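As a sketch, verification can also be disabled for a whole session rather than per call (insecure; for testing only):

```python
import requests

# Disabling verification removes protection against man-in-the-middle
# attacks -- acceptable only for testing against hosts you control.
s = requests.Session()
s.verify = False  # applies to every request made through this session
```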
However, the nim code works perfectly so there must be something wrong with the python code or setup.
Actually, your Python code and setup are less to blame; it is rather the Nim code, or better, the defaults of its httpclient library. The Nim documentation shows that httpclient.request by default uses an SSL context returned by getDefaultSSL, which according to this code creates a context that does not verify the certificate:
proc getDefaultSSL(): SSLContext =
  result = defaultSslContext
  when defined(ssl):
    if result == nil:
      defaultSSLContext = newContext(verifyMode = CVerifyNone)
Your Python code instead attempts to properly verify the certificate since the requests library does this by default. And it fails to verify the certificate because something is wrong - either with your setup or the server.
It is unclear who has issued the certificate for your site but if it is not in your default CA store you can use the verify argument of requests to specify the issuer CA. See this documentation for details.
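As a sketch, pointing requests at a specific issuing CA looks like this (the bundle path below is hypothetical):

```python
import requests

# `verify` also accepts a path to a PEM bundle containing the issuing CA.
# The path is a placeholder; point it at wherever your CA cert actually lives.
s = requests.Session()
s.verify = "/etc/ssl/certs/company-ca.pem"
```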
If the site you are trying to access works with the browser but fails with your program it might be that it uses a special CA which was added as trusted to the browser (like a company certificate). Browsers and Python use different trust stores so this added certificate needs to be added to Python or at least to your program as trusted too. It might also be that the setup of the server has problems. Browsers can sometimes work around problems like a missing intermediate certificate but Python doesn't. In case of a public accessible site you could use SSLLabs to check what's wrong.

Python Requests post times out despite timeout setting

I am using the Python Requests module (v. 2.19.1) with Python 3.4.3, calling a function on a remote server that generates a .csv file for download. In general, it works perfectly. There is one particular file that takes >6 minutes to complete, and no matter what I set the timeout parameter to, I get an error after exactly 5 minutes trying to generate that file.
import requests
s = requests.Session()
authPayload = {'UserName': 'myloginname','Password': 'password'}
loginURL = 'https://myremoteserver.com/login/authenticate'
login = s.post(loginURL, data=authPayload)
backupURL = 'https://myremoteserver.com/directory/jsp/Backup.jsp'
payload = {'command': fileCommand}
headers = {'Connection': 'keep-alive'}
post = s.post(backupURL, data=payload, headers=headers, timeout=None)
This times out after exactly 5 minutes with the error:
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 330, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 612, in urlopen
raise MaxRetryError(self, url, e)
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='myremoteserver.com', port=443): Max retries exceeded with url: /directory/jsp/Backup.jsp (Caused by <class 'http.client.BadStatusLine'>: '')
If I set timeout to something much smaller, say, 5 seconds, I get a error that makes perfect sense:
urllib3.exceptions.ReadTimeoutError:
HTTPSConnectionPool(host='myremoteserver.com', port=443): Read
timed out. (read timeout=5)
If I run the process from a browser, it works fine, so it doesn't seem like it's the remote server closing the connection, or a firewall or something in-between closing the connection.
Posted at the request of the OP -- my comments on the original question pointed to a related SO problem
The clue to the problem lies in the http.client.BadStatusLine error.
Take a look at the following related SO Q & A that discusses the impact of proxy servers on HTTP requests and responses.
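For background on the timeout parameter itself: requests accepts either a single value or a (connect, read) tuple, and timeout=None only removes the client-side limit -- an intermediary such as a proxy or load balancer can still drop an idle connection after its own limit, which would match the 5-minute cutoff seen here. A sketch, with illustrative values:

```python
# A (connect, read) tuple bounds the two phases separately.
# The values below are illustrative, not a recommendation.
connect_timeout = 5   # seconds to establish the TCP connection
read_timeout = 600    # seconds to wait between bytes of the response
timeout = (connect_timeout, read_timeout)
# s.post(backupURL, data=payload, timeout=timeout)  # as in the question
```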

Python - requests.exceptions.SSLError - dh key too small

I'm scraping some internal pages using Python and requests. I've turned off SSL verifications and warnings.
requests.packages.urllib3.disable_warnings()
page = requests.get(url, verify=False)
On certain servers I receive an SSL error I can't get past.
Traceback (most recent call last):
File "scraper.py", line 6, in <module>
page = requests.get(url, verify=False)
File "/cygdrive/c/Users/jfeocco/VirtualEnv/scraping/lib/python3.4/site-packages/requests/api.py", line 71, in get
return request('get', url, params=params, **kwargs)
File "/cygdrive/c/Users/jfeocco/VirtualEnv/scraping/lib/python3.4/site-packages/requests/api.py", line 57, in request
return session.request(method=method, url=url, **kwargs)
File "/cygdrive/c/Users/jfeocco/VirtualEnv/scraping/lib/python3.4/site-packages/requests/sessions.py", line 475, in request
resp = self.send(prep, **send_kwargs)
File "/cygdrive/c/Users/jfeocco/VirtualEnv/scraping/lib/python3.4/site-packages/requests/sessions.py", line 585, in send
r = adapter.send(request, **kwargs)
File "/cygdrive/c/Users/jfeocco/VirtualEnv/scraping/lib/python3.4/site-packages/requests/adapters.py", line 477, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: [SSL: SSL_NEGATIVE_LENGTH] dh key too small (_ssl.c:600)
This happens both in/out of Cygwin, in Windows and OSX. My research hinted at outdated OpenSSL on the server. I'm looking for a fix client side ideally.
Edit:
I was able to resolve this by using a cipher set
import requests

# Note the leading ':' so the additions are separated from the defaults.
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS += ':HIGH:!DH:!aNULL'
try:
    requests.packages.urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST += ':HIGH:!DH:!aNULL'
except AttributeError:
    # no pyopenssl support used / needed / available
    pass

page = requests.get(url, verify=False)
This is not an extra answer, just an attempt to combine the solution code from the question with extra information, so others can copy it directly without further trial and error.
It is not only a DH key issue on the server side; many mismatched libraries in the Python modules can cause it too.
The code segment below ignores those security issues, because they may not be solvable on the server side. For example, if it is an internal legacy server, no one wants to update it.
Besides the hacked cipher string ':HIGH:!DH:!aNULL', the urllib3 module can be imported to disable the warning, if it is available:
import requests
import urllib3

requests.packages.urllib3.disable_warnings()
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS += ':HIGH:!DH:!aNULL'
try:
    requests.packages.urllib3.contrib.pyopenssl.util.ssl_.DEFAULT_CIPHERS += ':HIGH:!DH:!aNULL'
except AttributeError:
    # no pyopenssl support used / needed / available
    pass

page = requests.get(url, verify=False)
This also worked for me:
import requests
import urllib3
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = 'ALL:@SECLEVEL=1'
openssl SECLEVELs documentation:
https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_security_level.html
SECLEVEL=2 is the OpenSSL default nowadays (at least on my setup: Ubuntu 20.04, OpenSSL 1.1.1f); SECLEVEL=1 lowers the bar.
Security levels are intended to avoid the complexity of tinkering with individual ciphers.
I believe most of us mere mortals don't have in-depth knowledge of the security strengths and weaknesses of individual ciphers; I surely don't.
Security levels seem a nice method to keep some control over how far you are opening the security door.
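The same security-level override can also be scoped to a single ssl.SSLContext instead of being patched globally; a sketch (the @SECLEVEL syntax requires OpenSSL 1.1.0+):

```python
import ssl

# Lower the security level for this one context only, leaving the
# process-wide defaults untouched.
ctx = ssl.create_default_context()
ctx.set_ciphers("DEFAULT:@SECLEVEL=1")
```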
Note: I got a different SSL error, WRONG_SIGNATURE_TYPE instead of SSL_NEGATIVE_LENGTH, but the underlying issue is the same.
Error:
Traceback (most recent call last):
[...]
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 581, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='somehost.com', port=443): Max retries exceeded with url: myurl (Caused by SSLError(SSLError(1, '[SSL: WRONG_SIGNATURE_TYPE] wrong signature type (_ssl.c:1108)')))
Disabling warnings or certificate validation will not help. The underlying problem is a weak DH key used by the server which can be misused in the Logjam Attack.
To work around this you need to choose a cipher which does not make any use of Diffie-Hellman key exchange and thus is not affected by the weak DH key. And this cipher must be supported by the server. It is unknown what the server supports, but you might try the cipher AES128-SHA or a cipher set of HIGH:!DH:!aNULL.
Using requests with your own cipher set is tricky. See Why does Python requests ignore the verify parameter? for an example.
I had the same issue. It was fixed by commenting out the
CipherString = DEFAULT@SECLEVEL=2
line in /etc/ssl/openssl.cnf.
Someone from the requests Python library's core development team has documented a recipe to keep the changes limited to one or a few servers:
https://lukasa.co.uk/2017/02/Configuring_TLS_With_Requests/
If your code interacts with multiple servers, it makes sense not to lower the security requirements of all connections because one server has a problematic configuration.
The code worked for me out of the box.
That is, using my own value for CIPHERS, 'ALL:@SECLEVEL=1'.
I will package my solution here. I had to modify the python SSL library, which was possible since I was running my code within a docker container, but it's something that probably you don't want to do.
Get the supported cipher of your server. In my case was a third party e-mail server, and I used script described list SSL/TLS cipher suite
check_supported_ciphers.sh
#!/usr/bin/env bash
# OpenSSL requires the port number.
SERVER=$1
DELAY=1
ciphers=$(openssl ciphers 'ALL:eNULL' | sed -e 's/:/ /g')

echo Obtaining cipher list from $(openssl version).
for cipher in ${ciphers[@]}
do
    echo -n Testing $cipher...
    result=$(echo -n | openssl s_client -cipher "$cipher" -connect $SERVER 2>&1)
    if [[ "$result" =~ ":error:" ]] ; then
        error=$(echo -n $result | cut -d':' -f6)
        echo NO \($error\)
    else
        if [[ "$result" =~ "Cipher is ${cipher}" || "$result" =~ "Cipher :" ]] ; then
            echo YES
        else
            echo UNKNOWN RESPONSE
            echo $result
        fi
    fi
    sleep $DELAY
done
Give it permissions:
chmod +x check_supported_ciphers.sh
And execute it:
./check_supported_ciphers.sh myremoteserver.example.com | grep YES
After some seconds you will see an output similar to:
Testing AES128-SHA...YES (AES128-SHA_set_cipher_list)
So we will use "AES128-SHA" as the SSL cipher.
Force the error in your code:
Traceback (most recent call last):
File "my_custom_script.py", line 52, in <module>
imap = IMAP4_SSL(imap_host)
File "/usr/lib/python2.7/imaplib.py", line 1169, in __init__
IMAP4.__init__(self, host, port)
File "/usr/lib/python2.7/imaplib.py", line 174, in __init__
self.open(host, port)
File "/usr/lib/python2.7/imaplib.py", line 1181, in open
self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile)
File "/usr/lib/python2.7/ssl.py", line 931, in wrap_socket
ciphers=ciphers)
File "/usr/lib/python2.7/ssl.py", line 599, in __init__
self.do_handshake()
File "/usr/lib/python2.7/ssl.py", line 828, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:727)
Get the python SSL library path used, in this case:
/usr/lib/python2.7/ssl.py
Edit it:
cp /usr/lib/python2.7/ssl.py /usr/lib/python2.7/ssl.py.bak
vim /usr/lib/python2.7/ssl.py
And replace:
_DEFAULT_CIPHERS = (
    'ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:DH+CHACHA20:ECDH+AES256:DH+AES256:'
    'ECDH+AES128:DH+AES:ECDH+HIGH:DH+HIGH:RSA+AESGCM:RSA+AES:RSA+HIGH:'
    '!aNULL:!eNULL:!MD5:!3DES'
)

By:

_DEFAULT_CIPHERS = (
    'AES128-SHA'
)
I encountered this problem after upgrading to Ubuntu 20.04 from 18.04; the following command worked for me.
pip install --ignore-installed pyOpenSSL --upgrade
It may be safer not to override the default global ciphers, but instead to create a custom HTTPAdapter with the required ciphers for a specific session:
import ssl
from typing import Any

import requests

class ContextAdapter(requests.adapters.HTTPAdapter):
    """Allows overriding the default SSL context."""

    def __init__(self, *args: Any, **kwargs: Any) -> None:
        self.ssl_context: ssl.SSLContext | None = kwargs.pop("ssl_context", None)
        super().__init__(*args, **kwargs)

    def init_poolmanager(self, *args: Any, **kwargs: Any) -> Any:
        # See available keys in urllib3.poolmanager.SSL_KEYWORDS
        kwargs.setdefault("ssl_context", self.ssl_context)
        return super().init_poolmanager(*args, **kwargs)
then you need to create custom context, for example:
import ssl

def create_context(
    ciphers: str,
    minimum_version: ssl.TLSVersion = ssl.TLSVersion.TLSv1_2,
    verify: bool = True,
) -> ssl.SSLContext:
    """See https://peps.python.org/pep-0543/."""
    ctx = ssl.create_default_context()
    # Allow use of untrusted certificates.
    if not verify:
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    # Just for example.
    if minimum_version == ssl.TLSVersion.TLSv1:
        ctx.options &= (
            ~getattr(ssl, "OP_NO_TLSv1_3", 0)
            & ~ssl.OP_NO_TLSv1_2
            & ~ssl.OP_NO_TLSv1_1
        )
    ctx.minimum_version = minimum_version
    ctx.set_ciphers(ciphers)
    return ctx
and then you need to configure each website with custom context rules:
session = requests.Session()
session.mount(
    "https://dh.affected-website.com",
    ContextAdapter(
        ssl_context=create_context(ciphers="HIGH:!DH:!aNULL"),
    ),
)
session.mount(
    "https://only-elliptic.modern-website.com",
    ContextAdapter(
        ssl_context=create_context(ciphers="ECDHE+AESGCM"),
    ),
)
session.mount(
    "https://only-tls-v1.old-website.com",
    ContextAdapter(
        ssl_context=create_context(
            ciphers="DEFAULT:@SECLEVEL=1",
            minimum_version=ssl.TLSVersion.TLSv1,
        ),
    ),
)
result = session.get("https://only-tls-v1.old-website.com/object")
After reading all the answers, I can say that @bgoeman's answer is close to mine; you can follow their link to learn more.
On CentOS 7, search for the following content in /etc/pki/tls/openssl.cnf:
[ crypto_policy ]
.include /etc/crypto-policies/back-ends/opensslcnf.config
[ new_oids ]
Set 'ALL:@SECLEVEL=1' in /etc/crypto-policies/back-ends/opensslcnf.config.
In docker image you can add the following command in your Dockerfile to get rid of this issue:
RUN sed -i '/CipherString = DEFAULT/s/^#\?/#/' /etc/ssl/openssl.cnf
This automatically comments out the problematic CipherString line.
If you are using httpx library, with this you skip the warning:
import httpx
httpx._config.DEFAULT_CIPHERS += ":HIGH:!DH:!aNULL"
I had the following error:
SSLError: [SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:727)
I solved it (Fedora) by reinstalling in this order:
python2.7 -m pip uninstall requests
python2.7 -m pip uninstall pyopenssl
python2.7 -m pip install pyopenssl==yourversion
python2.7 -m pip install requests==yourversion
The module install order caused accessing
requests.packages.urllib3.contrib.pyopenssl.util.ssl_.DEFAULT_CIPHERS
to raise AttributeError: "pyopenssl" in "requests.packages.urllib3.contrib", even though the module did exist.
Based on the answer given by the user bgoeman, the following code, which keeps the default ciphers and only adds the security level, works:
import requests
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS += ':@SECLEVEL=1'

Implementing a bittorent client and I get 403 from the tracker

I was trying my hand at implementing a bittorrent client in python (I know there are libs out there that can do this for me easily, but I'm just trying to learn new things).
I downloaded a torrent file and managed to successfully decode it; however, when I do the GET request to the tracker, I get back a 403 response and I have no idea why. This is what I tried (this is code copied from the Python shell):
>>> f = open("torrents/test.torrent")
>>> torrentData = bencoder.decode(f.read())
>>> torrentData["announce"]
'http://reactor.flro.org:8080/announce.php?passkey=d59fc5b5b9e2664895ad1c68a3621caf'
>>> params = {}
>>> params["info_hash"] = sha1(bencoder.encode(torrentData["info"])).digest()
>>> params["peer_id"] = '-AZ-1234-12345678901'
>>> params["left"] = sum(f["length"] for f in torrentData["info"]["files"])
>>> params["port"] = 6890
>>> params["uploaded"] = 0
>>> params["downloaded"] = 0
>>> params["compact"] = 1
>>> params["event"] = "started"
>>> params
{'uploaded': 0, 'compact': 1, 'info_hash': '\xab}\x19\x0e\xac"\x9d\xcf\xe5g\xd4R\xae\xee\x1e\xd7\
>>> final_url = torrentData["announce"] + "&" + urllib.urlencode(params)
>>> final_url
'http://reactor.flro.org:8080/announce.php?passkey=d59fc5b5b9e2664895ad1c68a3621caf&uploaded=0&co
>>> urllib2.urlopen(final_url)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
return opener.open(url, data, timeout)
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
response = meth(req, response)
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
'http', request, response, code, msg, hdrs)
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
return self._call_chain(*args)
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
result = func(*args)
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
Am I missing something from the params dict? I also tried this torrent in my uTorrent client and it worked, so the tracker is working fine. I even tried the bare announce URL (without the params) and got the same thing. From what I read of the BitTorrent specification, there is no mention of a 403 response from the tracker.
I'd be very happy if you guys could help me out with this.
To reduce the amount of variables it is better to test against a tracker you're running locally, e.g. opentracker is a good choice since it imposes few requirements.
Errors you only get on specific trackers but not on others are likely due to additional requirements imposed by the tracker administrators and not by the bittorrent protocol itself.
The major exceptions are that many public trackers may not allow non-compact announces or require UDP announces instead of HTTP ones.
OK, I managed to figure out the issue. It's kind of silly: the request to the tracker didn't have any headers, and this tracker requires a User-Agent header, otherwise it rejects the request. All I had to do was add a user agent to the request.
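As a sketch of that fix, here it is in Python 3's urllib.request (the successor of the urllib2 module used above); the user-agent string is illustrative:

```python
from urllib.request import Request

# The tracker rejected header-less requests; supplying a User-Agent fixed it.
# The agent string below is a placeholder for whatever your client identifies as.
req = Request(
    "http://reactor.flro.org:8080/announce.php",
    headers={"User-Agent": "MyBittorrentClient/0.1"},
)
# response = urllib.request.urlopen(req)  # performs the announce
```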

Python httplib ResponseNotReady

I'm writing a REST client for Elgg using Python, and even when the request succeeds, I get this in response:
Traceback (most recent call last):
File "testclient.py", line 94, in <module>
result = sendMessage(token, h1)
File "testclient.py", line 46, in sendMessage
res = h1.getresponse().read()
File "C:\Python25\lib\httplib.py", line 918, in getresponse
raise ResponseNotReady()
httplib.ResponseNotReady
Looking at the header, I see ('content-length', '5749'), so I know there is a page there, but I can't use .read() to see it because the exception comes up. What does ResponseNotReady mean and why can't I see the content that was returned?
Previous answers are correct, but there's another case where you could get that exception:
Making multiple requests without reading any intermediate responses completely.
For instance:
conn.request('PUT',...)
conn.request('GET',...)
# will not work: raises ResponseNotReady
conn.request('PUT',...)
r = conn.getresponse()
r.read() # <-- that's the important call!
conn.request('GET',...)
r = conn.getresponse()
r.read() # <-- same thing
and so on.
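A self-contained illustration of the rule above, using Python 3's http.client (the modern name for httplib) against a throwaway local server, so no external host is involved:

```python
import http.client
import http.server
import threading

# Minimal local server so the example is self-contained; any HTTP server
# would behave the same way.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)

conn.request("GET", "/")
first = conn.getresponse().read()   # fully read before the next request

conn.request("GET", "/")            # safe now; issuing this before the
second = conn.getresponse().read()  # read above would raise ResponseNotReady

server.shutdown()
```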
Make sure you don't reuse the same object from a previous connection. You will hit this once the server keep-alive ends and the socket closes.
I was running into this same exception today, using this code:
conn = httplib.HTTPConnection(self._host, self._port)
conn.putrequest('GET',
                '/retrieve?id={0}'.format(parsed_store_response['id']))
retr_response = conn.getresponse()
I didn't notice that I was using putrequest rather than request; I was mixing my interfaces. ResponseNotReady is raised because I haven't actually sent the request yet.
Additionally, errors like this can occur when the server sends a response without a Content-Length header, which will cripple the state of the HTTP client if Keep-Alive is used and another request is sent over the same socket.
This can also occur if a firewall blocks the connection.
I'm unable to add a comment to @Bokeh's answer, as I do not have the requisite reputation yet on this platform, so I'm adding this as an answer: Bokeh's answer worked for me.
I was trying to pipeline multiple requests sequentially over the same connection object. For a few of the responses I wanted to process the response later, and hence missed reading them.
From my experience, I second Bokeh's answer: response.read() is a must after each request, whether you wish to process the response or not.
From my standpoint this question would have been incomplete without Bokeh's answer. Thanks, @Bokeh.
