I am sending multiple requests with aiohttp using Tor's HTTP proxy (with aiohttp_socks).
After some requests have completed, I get the following error:
Traceback (most recent call last):
File "main.py", line 171, in <module>
loop.run_until_complete(future)
File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "main.py", line 95, in get_market_pages
async with session.get(active_link, headers=headers) as response:
File "/home/mrlalatg/.local/lib/python3.8/site-packages/aiohttp/client.py", line 1012, in __aenter__
self._resp = await self._coro
File "/home/mrlalatg/.local/lib/python3.8/site-packages/aiohttp/client.py", line 504, in _request
await resp.start(conn)
File "/home/mrlalatg/.local/lib/python3.8/site-packages/aiohttp/client_reqrep.py", line 847, in start
message, payload = await self._protocol.read() # type: ignore # noqa
File "/home/mrlalatg/.local/lib/python3.8/site-packages/aiohttp/streams.py", line 591, in read
await self._waiter
aiohttp.client_exceptions.ClientOSError: [Errno 1] [SSL: APPLICATION_DATA_AFTER_CLOSE_NOTIFY] application data after close notify (_ssl.c:2745)
I can't find any information about this error, except a GitHub discussion about a similar one - github.
There I found a workaround (link) that I could modify to ignore this error, but it didn't work; the error is still there.
The modified version of the workaround:
import asyncio
import ssl

SSL_PROTOCOLS = (asyncio.sslproto.SSLProtocol,)
try:
    import uvloop.loop
except ImportError:
    pass
else:
    SSL_PROTOCOLS = (*SSL_PROTOCOLS, uvloop.loop.SSLProtocol)

def ignore_aiohttp_ssl_error(loop):
    """Ignore aiohttp #3535 / cpython #13548 issue with SSL data after close.

    There is an issue in Python 3.7 up to 3.7.3 that over-reports a
    ssl.SSLError fatal error (ssl.SSLError: [SSL: KRB5_S_INIT] application data
    after close notify (_ssl.c:2609)) after we are already done with the
    connection. See GitHub issues aio-libs/aiohttp#3535 and
    python/cpython#13548.

    Given a loop, this sets up an exception handler that ignores this specific
    exception, but passes everything else on to the previous exception handler
    this one replaces.

    Checks for fixed Python versions, disabling itself when running on 3.7.4+
    or 3.8.
    """
    orig_handler = loop.get_exception_handler()

    def ignore_ssl_error(loop, context):
        if context.get("message") in {
            "SSL error in data received",
            "Fatal error on transport",
        }:
            # validate we have the right exception, transport and protocol
            exception = context.get('exception')
            protocol = context.get('protocol')
            if (
                isinstance(exception, ssl.SSLError)
                and exception.reason == 'APPLICATION_DATA_AFTER_CLOSE_NOTIFY'
                and isinstance(protocol, SSL_PROTOCOLS)
            ):
                if loop.get_debug():
                    asyncio.log.logger.debug('Ignoring asyncio SSL KRB5_S_INIT error')
                return
        if orig_handler is not None:
            orig_handler(loop, context)
        else:
            loop.default_exception_handler(context)

    loop.set_exception_handler(ignore_ssl_error)
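For completeness, this is roughly how I wire the handler in before running the loop (the get_market_pages() call below is only a placeholder for my actual coroutine and arguments):

loop = asyncio.get_event_loop()
ignore_aiohttp_ssl_error(loop)  # install the handler before any requests run
future = asyncio.ensure_future(get_market_pages())  # placeholder for the real entry point
loop.run_until_complete(future)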
My NATS client raises the following error when I use the JetStream context:
nats.errors.TimeoutError: nats: timeout
if self.nc.is_connected:
    log.info(f"NATS client successfully connected to {SERVERS}")

    # Create JetStream context and check if stream exists
    self.nc = self.nc.jetstream()
    log.info("Created contextual JetStream object!")

    try:
        # Check if the JetStream stream exists
        # [Client has already created JetStream and subject associations]
        acc_info = await self.nc.account_info()
        info = await self.nc.stream_info(STREAM_NAME)
The code errors out when I use the JetStream context before adding a stream. However, if I delete a stream and then call stream_info(), a NotFoundError is returned and handled. Why does the returned exception change so suddenly? Any help is greatly appreciated.
The traceback:
File "/usr/local/lib/python3.9/site-packages/nats/js/manager.py", line 65, in stream_info
resp = await self._api_request(
File "/usr/local/lib/python3.9/site-packages/nats/js/manager.py", line 158, in _api_request
msg = await self._nc.request(req_subject, req, timeout=timeout)
File "/usr/local/lib/python3.9/site-packages/nats/aio/client.py", line 899, in request
msg = await self._request_new_style(
File "/usr/local/lib/python3.9/site-packages/nats/aio/client.py", line 939, in _request_new_style
raise errors.TimeoutError
nats.errors.TimeoutError: nats: timeout
Is your nats-server configured to enable JetStream functionality (i.e. either in its config file or by passing the -js command-line argument)?
Is your account allowed to use JetStream? nats account info will tell you.
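If both of those check out, a minimal nats-py sketch along these lines (assuming nats-py 2.x, a local server at nats://127.0.0.1:4222, and placeholder stream/subject names) should give you a NotFoundError for a missing stream rather than a timeout:

import asyncio

import nats
import nats.errors
from nats.js.errors import NotFoundError

async def main():
    nc = await nats.connect("nats://127.0.0.1:4222")
    js = nc.jetstream()
    try:
        info = await js.stream_info("MY_STREAM")
        print("stream already exists:", info.config.name)
    except NotFoundError:
        # JetStream is reachable, but this stream has not been created yet
        await js.add_stream(name="MY_STREAM", subjects=["my.subject"])
    except nats.errors.TimeoutError:
        # The $JS.API request got no reply: JetStream is likely disabled
        # on the server or not permitted for this account
        print("JetStream API timed out; check the server/account configuration")
    finally:
        await nc.close()

asyncio.run(main())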
I am trying to configure AsyncHTTPClient with a proxy that requires authentication, in order to access https websites. Is this possible with an authenticated proxy?
from tornado import httpclient, ioloop

config = {
    'proxy_host': proxy_host,
    'proxy_port': proxy_port,
    'proxy_username': proxy_username,
    'proxy_password': proxy_password
}

httpclient.AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")

def handle_request(response):
    if response.error:
        print("Error:", response.error)
    else:
        print(response.body)
    ioloop.IOLoop.instance().stop()

http_client = httpclient.AsyncHTTPClient()
http_client.fetch("https://twitter.com/",
                  handle_request, **config)
ioloop.IOLoop.instance().start()
I get these errors after running the code above
Traceback (most recent call last):
File "C:\Users\Adam\Anaconda3\envs\sizeer\lib\site-packages\tornado\curl_httpclient.py", line 130, in _handle_socket
self.io_loop.add_handler(fd, self._handle_events, ioloop_event)
File "C:\Users\Adam\Anaconda3\envs\sizeer\lib\site-packages\tornado\platform\asyncio.py", line 103, in add_handler
self.asyncio_loop.add_writer(fd, self._handle_events, fd, IOLoop.WRITE)
File "C:\Users\Adam\Anaconda3\envs\sizeer\lib\asyncio\events.py", line 507, in add_writer
raise NotImplementedError
NotImplementedError
Traceback (most recent call last):
File "C:\Users\Adam\Anaconda3\envs\sizeer\lib\site-packages\tornado\curl_httpclient.py", line 130, in _handle_socket
self.io_loop.add_handler(fd, self._handle_events, ioloop_event)
File "C:\Users\Adam\Anaconda3\envs\sizeer\lib\site-packages\tornado\platform\asyncio.py", line 97, in add_handler
raise ValueError("fd %s added twice" % fd)
ValueError: fd 700 added twice
ERROR:asyncio:Future exception was never retrieved
future: <Future finished exception=HTTP 599: SSL certificate problem: unable to get local issuer certificate>
tornado.curl_httpclient.CurlError: HTTP 599: SSL certificate problem: unable to get local issuer certificate
Process finished with exit code -1
I'm not sure if this is the only problem here, but the NotImplementedError is because Python 3.8 on Windows uses a different event loop implementation that is incompatible with Tornado. You need to add asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) to the beginning of your main file/function.
I suspect you may also need to use the ca_certs argument to tell libcurl where to find the trusted root certificates for your proxy.
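Something along these lines (just a sketch: it reuses handle_request and config from your code, and the certificate bundle path is a placeholder):

import sys
import asyncio

# Must run before Tornado creates its IOLoop: the default proactor loop on
# Windows with Python 3.8+ does not implement add_reader/add_writer.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

from tornado import httpclient, ioloop

httpclient.AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")

http_client = httpclient.AsyncHTTPClient()
http_client.fetch("https://twitter.com/",
                  handle_request,
                  ca_certs="C:/path/to/cacert.pem",  # placeholder: trusted root bundle for libcurl
                  **config)
ioloop.IOLoop.instance().start()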
I'm trying to use the python package proxybroker.
I tried to use one of the examples mentioned here. I just copied the following example to run locally:
import asyncio
from proxybroker import Broker


async def save(proxies, filename):
    """Save proxies to a file."""
    with open(filename, 'w') as f:
        while True:
            proxy = await proxies.get()
            if proxy is None:
                break
            proto = 'https' if 'HTTPS' in proxy.types else 'http'
            row = '%s://%s:%d\n' % (proto, proxy.host, proxy.port)
            f.write(row)


def main():
    proxies = asyncio.Queue()
    broker = Broker(proxies)
    tasks = asyncio.gather(
        broker.find(types=['HTTP', 'HTTPS'], limit=10),
        save(proxies, filename='proxies.txt'))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(tasks)


if __name__ == '__main__':
    main()
When I try to run the code the following error is thrown together with some deprecation warnings:
/home/sebastian/PycharmProjects/STW/venv/lib/python3.5/site-packages/aiohttp/client.py:494: DeprecationWarning: Use async with instead
  warnings.warn("Use async with instead", DeprecationWarning)
(the same warning is printed several more times)

https://getproxy.net/en/ is failed. Error: ClientOSError(101, 'Cannot connect to host getproxy.net:443 ssl:True [Can not connect to getproxy.net:443 [Network is unreachable]]');
https://getproxy.net/en/ is failed. Error: ClientOSError(101, 'Cannot connect to host getproxy.net:443 ssl:True [Can not connect to getproxy.net:443 [Network is unreachable]]');
https://getproxy.net/en/ is failed. Error: ClientOSError(101, 'Cannot connect to host getproxy.net:443 ssl:True [Can not connect to getproxy.net:443 [Network is unreachable]]');
Traceback (most recent call last):
  File "/home/sebastian/PycharmProjects/testing/test/test_prox.py", line 27, in <module>
    main()
  File "/home/sebastian/PycharmProjects/testing/test/test_prox.py", line 23, in main
    loop.run_until_complete(tasks)
  File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
    return future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
    result = coro.throw(exc)
  File "/home/sebastian/PycharmProjects/STW/venv/lib/python3.5/site-packages/proxybroker/api.py", line 108, in find
    await self._run(self._checker.check_judges(), action)
  File "/home/sebastian/PycharmProjects/STW/venv/lib/python3.5/site-packages/proxybroker/api.py", line 114, in _run
    await tasks
  File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
    future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
    result = coro.throw(exc)
  File "/home/sebastian/PycharmProjects/STW/venv/lib/python3.5/site-packages/proxybroker/checker.py", line 26, in check_judges
    await asyncio.gather(*[j.check() for j in self._judges])
  File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
    future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/home/sebastian/PycharmProjects/STW/venv/lib/python3.5/site-packages/proxybroker/judge.py", line 82, in check
    j=self, code=resp.status, page=page[0],
IndexError: string index out of range
I use Python 3.5.2 and up-to-date versions of proxybroker (0.1.4), aiohttp (1.0.2), and asyncio (3.4.3).
I'm not sure what causes the error, since I did not change the code example and, as far as I know, I have installed all dependencies. Can anyone tell me what I am doing wrong and, even better, how to do it right?
EDIT
A quick workaround for the issue is to change the line where the error occurs. That line only logs an error, so the change should not do any harm.
For this workaround - not a solution - I added an additional check in judge.py at line 79, where the exception was raised before.
Locally I changed it to:
if isinstance(page, (list, dict)):
    log.error(('{j} is failed. HTTP status code: {code}; '
               'Real IP on page: {ip}; Version: {word}; '
               'Response: {page}').format(
                   j=self, code=resp.status, page=page[0],
                   ip=(get_my_ip() in page), word=(rv in page)))
else:
    log.error(('{j} is failed. HTTP status code: {code}; '
               'Real IP on page: {ip}; Version: {word}; '
               'Response: {page}').format(
                   j=self, code=resp.status, page=page,
                   ip=(get_my_ip() in page), word=(rv in page)))
That way I can use proxybroker again. The issue is filed on GitHub with proxybroker.
Deprecation warnings are harmless (at least until I remove this kind of backward compatibility).
The error just says that getproxy.net is not available -- this is your main problem.
Currently there is also a problem with this judge:
<Judge [HTTP] www.ingosander.net> is failed. HTTP status code: 302; Real IP on page: False; Version: False; Response:
The response content is empty because it's redirected (status code: 302).
Possible solutions:
1. Change the HTTP client. The requests package automatically follows redirects (see the short example after this list).
2. Change the error logging for redirected URLs by adding the following in judge.py:
   elif resp.status == 302:
       log.error(('{j} is failed. HTTP status code: {code}; '
                  'Real IP on page: {ip}; Version: {word}; '
                  'Response: {page}').format(
                      j=self, code=resp.status, page=None,
                      ip=(get_my_ip() in page), word=(rv in page)))
3. Comment out this judge in the judgeList at the bottom of judge.py.
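A quick illustration of option 1 with requests (not part of ProxyBroker; the judge URL is just an example):

import requests

# requests follows redirects by default, so a judge that answers with a 302
# is transparently resolved to the final page instead of an empty body
resp = requests.get("http://www.ingosander.net/")  # example judge URL
print(resp.status_code)  # status of the final response after redirects
print(resp.history)      # the intermediate 3xx responses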
So my Twisted mail receiver is working nicely, right up until we try to handle a case where the config is fubarred and a mismatched cert/key pair is passed to the certificate options object for the factory.
I have a module, custom_esmtp.py, which includes an override of ext_STARTTLS(self, rest) that I have modified as follows to include a try/except:
elif self.ctx and self.canStartTLS:
    try:
        self.sendCode(220, 'Begin TLS negotiation now')
        self.transport.startTLS(self.ctx)
        self.startedTLS = True
    except:
        log.err()
        self.sendCode(550, "Internal server error")
    return
When I run the code, having passed a cert and key that do not match, I get the following call stack:
Unhandled Error
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/twisted/internet/tcp.py", line 220, in _dataReceived
rval = self.protocol.dataReceived(data)
File "/usr/local/lib/python2.7/site-packages/twisted/protocols/basic.py", line 454, in dataReceived
self.lineReceived(line)
File "/usr/local/lib/python2.7/site-packages/twisted/mail/smtp.py", line 568, in lineReceived
return getattr(self, 'state_' + self.mode)(line)
File "/usr/local/lib/python2.7/site-packages/twisted/mail/smtp.py", line 582, in state_COMMAND
method('')
--- <exception caught here> ---
File "custom_esmtp.py", line 286, in ext_STARTTLS
self.transport.startTLS(self.ctx)
File "/usr/local/lib/python2.7/site-packages/twisted/internet/_newtls.py", line 179, in startTLS
startTLS(self, ctx, normal, FileDescriptor)
File "/usr/local/lib/python2.7/site-packages/twisted/internet/_newtls.py", line 139, in startTLS
tlsFactory = TLSMemoryBIOFactory(contextFactory, client, None)
File "/usr/local/lib/python2.7/site-packages/twisted/protocols/tls.py", line 769, in __init__
contextFactory = _ContextFactoryToConnectionFactory(contextFactory)
File "/usr/local/lib/python2.7/site-packages/twisted/protocols/tls.py", line 648, in __init__
oldStyleContextFactory.getContext()
File "/usr/local/lib/python2.7/site-packages/twisted/internet/_sslverify.py", line 1429, in getContext
self._context = self._makeContext()
File "/usr/local/lib/python2.7/site-packages/twisted/internet/_sslverify.py", line 1439, in _makeContext
ctx.use_privatekey(self.privateKey)
OpenSSL.SSL.Error: [('x509 certificate routines', 'X509_check_private_key', 'key values mismatch')]
Line 286 of custom_esmtp.py is the self.transport.startTLS(self.ctx) call. I've looked through all the Twisted modules listed in the stack, at the quoted lines, and there are no other try/except blocks, so my understanding is that the error should be passed back up the stack, unhandled, until it reaches my handler in custom_esmtp.py. So why is it not getting handled - especially since the only except I have is a catch-all?
Thanks in advance!
If you want this error to be caught, you can do:
from OpenSSL import SSL
# ...
try:
    # ...
except SSL.Error:
    # ...
Perhaps the syntax changes a bit. I can't check because I don't use this precise package, but the idea is that you have to import, from its defining package, the specific exception you want to catch.
I am seeing the python-requests library crash with the following traceback:
Traceback (most recent call last):
File "/usr/lib/python3.2/http/client.py", line 529, in _read_chunked
chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./app.py", line 507, in getUrlContents
response = requests.get(url, headers=headers, auth=authCredentials, timeout=http_timeout_seconds)
File "/home/dotancohen/code/lib/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/home/dotancohen/code/lib/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/home/dotancohen/code/lib/requests/sessions.py", line 338, in request
resp = self.send(prep, **send_kwargs)
File "/home/dotancohen/code/lib/requests/sessions.py", line 441, in send
r = adapter.send(request, **kwargs)
File "/home/dotancohen/code/lib/requests/adapters.py", line 340, in send
r.content
File "/home/dotancohen/code/lib/requests/models.py", line 601, in content
self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
File "/home/dotancohen/code/lib/requests/models.py", line 542, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "/home/dotancohen/code/lib/requests/packages/urllib3/response.py", line 222, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/home/dotancohen/code/lib/requests/packages/urllib3/response.py", line 173, in read
data = self._fp.read(amt)
File "/usr/lib/python3.2/http/client.py", line 489, in read
return self._read_chunked(amt)
File "/usr/lib/python3.2/http/client.py", line 534, in _read_chunked
raise IncompleteRead(b''.join(value))
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.2/threading.py", line 740, in _bootstrap_inner
self.run()
File "./app.py", line 298, in run
self.target(*self.args)
File "./app.py", line 400, in provider_query
url_contents = getUrlContents(str(providerUrl), '', authCredentials)
File "./app.py", line 523, in getUrlContents
except http.client.IncompleteRead as error:
NameError: global name 'http' is not defined
As can be seen, I've tried to catch the http.client.IncompleteRead error that requests is throwing, with the line except http.client.IncompleteRead as error:. However, that line itself throws a NameError because http is not defined. So how can I catch that exception?
This is the code throwing the exception:
import requests
from requests_oauthlib import OAuth1
authCredentials = OAuth1('x', 'x', 'x', 'x')
response = requests.get(url, auth=authCredentials, timeout=20)
Note that I am not importing the http library myself, though requests imports it. The error is very intermittent (it happens perhaps once every few hours, even if I run the requests.get() command every ten seconds), so I'm not sure whether adding the http library to my imports has helped or not.
In any case, in the general sense: if an imported library A in turn imports library B, is it impossible to catch exceptions from B without importing B myself?
To answer your question
In any case, in the general sense, if included library A in turn includes library B, is it impossible to catch exceptions from B without including B myself?
Not at all; you can reach B through A. For example:
a.py:
import b
# do some stuff with b
c.py:
import a
# but you want to use b
a.b # gives you full access to module b which was imported by a
Although this does the job, it doesn't look so pretty, especially with long package/module/class/function names in the real world.
So in your case, to handle the http exception, either figure out which package/module within requests imports http (so that you could write except requests.XX.http.WhateverError), or simply import http yourself, since it is a standard library.
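The second option is the simpler one here; roughly (the URL and credentials are placeholders):

import http.client  # standard library module that raises IncompleteRead
import requests
from requests_oauthlib import OAuth1

url = "https://api.example.com/endpoint"  # placeholder
authCredentials = OAuth1('x', 'x', 'x', 'x')

try:
    response = requests.get(url, auth=authCredentials, timeout=20)
    data = response.content
except http.client.IncompleteRead as error:
    data = error.partial  # whatever was read before the connection broke off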
It's hard to analyze the problem if you only give the output and not the source, but check out this link: http://docs.python-requests.org/en/latest/user/quickstart/#errors-and-exceptions
Basically, try to catch the exception wherever the error is raised in your code.
Exceptions:
In the event of a network problem (e.g. DNS failure, refused connection, etc.), Requests will raise a ConnectionError exception.
In the event of the rare invalid HTTP response, Requests will raise an HTTPError exception.
If a request times out, a Timeout exception is raised.
If a request exceeds the configured number of maximum redirections, a TooManyRedirects exception is raised.
All exceptions that Requests explicitly raises inherit from requests.exceptions.RequestException.
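For example (a minimal sketch; the URL is a placeholder):

import requests
from requests import exceptions

url = "https://api.example.com/endpoint"  # placeholder

try:
    response = requests.get(url, timeout=20)
    response.raise_for_status()
except exceptions.Timeout:
    print("request timed out")
except exceptions.ConnectionError as error:
    print("network problem:", error)
except exceptions.RequestException as error:
    # catch-all for anything Requests raises explicitly
    print("request failed:", error)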
Hope that helped.