Timeout problems with the Riak Python Client with protocol buffers

I'm seeing strange but consistent behaviour from the Python Riak client when connecting to my Riak AWS cluster using protocol buffers. This short Python snippet produces the error:
import time
import riak
client = riak.RiakClient(
    host='address_to_my_cluster_goes_here',
    http_port=8098,
    pb_port=8087,
    protocol='pbc'
)
result = client.ping()
# Do something else for a while, > 60 seconds
time.sleep(61)
result = client.ping()
The last ping always throws an exception, with the following traceback:
Traceback (most recent call last):
  File "main_causing_exception.py", line 16, in <module>
    result = client.ping()
  File "C:\temp\venv\lib\site-packages\riak\client\transport.py", line 127, in wrapper
    return self._with_retries(pool, thunk)
  File "C:\temp\venv\lib\site-packages\riak\client\transport.py", line 69, in _with_retries
    return fn(transport)
  File "C:\temp\venv\lib\site-packages\riak\client\transport.py", line 125, in thunk
    return fn(self, transport, *args, **kwargs)
  File "C:\temp\venv\lib\site-packages\riak\client\operations.py", line 92, in ping
    return transport.ping()
  File "C:\temp\venv\lib\site-packages\riak\transports\pbc\transport.py", line 95, in ping
    msg_code, msg = self._request(MSG_CODE_PING_REQ)
  File "C:\temp\venv\lib\site-packages\riak\transports\pbc\connection.py", line 43, in _request
    return self._recv_msg(expect)
  File "C:\temp\venv\lib\site-packages\riak\transports\pbc\connection.py", line 50, in _recv_msg
    self._recv_pkt()
  File "C:\temp\venv\lib\site-packages\riak\transports\pbc\connection.py", line 71, in _recv_pkt
    % len(nmsglen))
riak.RiakError: 'Socket returned short packet length 0 - expected 4'
If I do the client.ping() every 30 seconds or so, the error doesn't happen, which suggests it's some kind of socket keep-alive problem, but that doesn't seem like a solution robust enough for a production environment.
The error only occurs with the pbc protocol setting; I've never seen it with an http-configured Riak Python Client.
I'm using Python 2.7.5 on a Win7-64 platform (though the error also occurs on our Ubuntu development server) in a virtual environment with the following packages and versions:
protobuf (2.4.1)
riak (2.0.1)
riak-pb (1.4.1.1)
Any thoughts on what's going on and how to resolve it? Am I using the Python Riak Client in the wrong way?

You should use a newer version of the Riak library.
There was a bug in the send function of the PBC transport that was resolved here: https://github.com/basho/riak-python-client/issues/381
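If upgrading isn't immediately possible, one stopgap (a minimal sketch, not the library's official fix; the helper name is made up) is to retry the call once so the connection pool can replace the socket that went stale while the client sat idle:
import riak

def safe_ping(client, retries=1):
    # Retry once on RiakError so the pool gets a chance to swap out
    # a socket that died during the idle period.
    for attempt in range(retries + 1):
        try:
            return client.ping()
        except riak.RiakError:
            if attempt == retries:
                raise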

Related

werkzeug exception: ClientDisconnected: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand

I am getting this error:
werkzeug.exceptions.ClientDisconnected: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand.
Full stack below. Happening with Python 3.10.5 on an Ubuntu 18.04 VM. Using Werkzeug 2.1.2.
Any idea of the cause?
No problem on Windows 10 or an Ubuntu 18.04 NUC. Works fine on Ubuntu 18.04 VM with Python 3.6.5 and Werkzeug 1.0.1.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/werkzeug/wsgi.py", line 921, in read
    read = self._read(to_read)
ValueError: read of closed file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/werkzeug/formparser.py", line 140, in wrapper
    return f(self, stream, *args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/werkzeug/formparser.py", line 290, in _parse_multipart
    form, files = parser.parse(stream, boundary, content_length)
  File "/usr/local/lib/python3.10/site-packages/werkzeug/formparser.py", line 418, in parse
    for data in iterator:
  File "/usr/local/lib/python3.10/site-packages/werkzeug/wsgi.py", line 653, in _make_chunk_iter
    item = _read(buffer_size)
  File "/usr/local/lib/python3.10/site-packages/werkzeug/wsgi.py", line 923, in read
    return self.on_disconnect()
  File "/usr/local/lib/python3.10/site-packages/werkzeug/wsgi.py", line 893, in on_disconnect
    raise ClientDisconnected()
werkzeug.exceptions.ClientDisconnected: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand.
It's just as it states: the client disconnected. The reason could be any of several - they may have had a network problem, clicked a different link, or navigated away. Either way, the server could no longer talk to your client and finish the transfer of data.
It's the most annoying error in the world; you can't do anything about it from the server side.
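If you'd rather handle it explicitly than let it bubble up, you can catch it around form parsing. A minimal sketch, assuming a Flask view (route and field names are made up; ClientDisconnected is a subclass of BadRequest, which is why Werkzeug reports it as a 400):
from flask import Flask, request
from werkzeug.exceptions import ClientDisconnected

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    try:
        # Accessing request.files forces the multipart body to be parsed,
        # which is where ClientDisconnected is raised.
        uploaded = request.files.get("file")
    except ClientDisconnected:
        # The client went away mid-upload; log it and answer quietly.
        app.logger.info("client disconnected during upload")
        return "client disconnected", 400
    return "ok"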

Python opc-ua communication using self signed certificate and basic128rsa15 encryption

I want to communicate via the Python opcua library with an OPC UA server that uses Basic128Rsa15 encryption.
client.set_security_string("Basic128Rsa15,"
"SignAndEncrypt,"
"cert.pem,"
"key.pem")
I did the same communication with a Prosys server using Basic256Sha256 encryption and all was OK. With Basic128Rsa15 (using KEPServer) I get the following error:
In [19]: runfile('opcuaclient.py', wdir='/home/di29394/fue4bfi/python/fuere4bfi')
DEPRECATED! Do not use SecurityPolicyBasic128Rsa15 anymore!
Received an error: MessageAbort(error:StatusCode(BadSecurityChecksFailed), reason:An error occurred verifying security.)
Received an error: MessageAbort(error:StatusCode(BadSecurityChecksFailed), reason:An error occurred verifying security.)
Protocol Error
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 101, in _run
    self._receive()
  File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 121, in _receive
    self._call_callback(0, ua.UaStatusCodeError(msg.Error.value))
  File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 131, in _call_callback
    .format(request_id, self._callbackmap.keys())
opcua.ua.uaerrors._base.UaError: No future object found for request: 0, callbacks in list are
Traceback (most recent call last):
  File "<ipython-input-18-4187edd51b2b>", line 1, in <module>
    runfile('opcuaclient.py', wdir='/home/opcuauser')
  File "/usr/lib/python3/dist-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
    execfile(filename, namespace)
  File "/usr/lib/python3/dist-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "opcuaclient.py", line 57, in <module>
    connected = client.connect()
  File "/usr/local/lib/python3.6/dist-packages/opcua/client/client.py", line 259, in connect
    self.open_secure_channel()
  File "/usr/local/lib/python3.6/dist-packages/opcua/client/client.py", line 309, in open_secure_channel
    result = self.uaclient.open_secure_channel(params)
  File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 265, in open_secure_channel
    return self._uasocket.open_secure_channel(params)
  File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 199, in open_secure_channel
    response = struct_from_binary(ua.OpenSecureChannelResponse, future.result(self.timeout))
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 430, in result
    raise CancelledError()
CancelledError
The certificate was self-signed using the cryptography library (snippet):
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(1000)
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=10*365))  # could also be made dynamic
    .add_extension(basic_contraints, False)
    .add_extension(san, False)
    .sign(key, hashes.SHA256(), default_backend())
)
Do I have to change the certificate generation for Basic128Rsa15, or is something else wrong?
Thanks in advance.
I didn't feel good about using Basic128Rsa15, but obviously that was not the problem. The problem was that I had connected to the KEPServer at least twice with different certificates but the same (valid) URI. The server had problems with this and rejected all incoming connections (the error message is not very helpful here). After deleting all requests on the server and connecting again, everything was fine (even with Basic128Rsa15).
The error message is actually quite clear!
DEPRECATED! Do not use SecurityPolicyBasic128Rsa15 anymore!
Basic128Rsa15 is no longer considered secure by the OPC Foundation, which recommends deprecating it.
Source: http://opcfoundation-onlineapplications.org/ProfileReporting/index.htm?ModifyProfile.aspx?ProfileID=a84d5b70-47b2-45ca-a0cc-e98fe8528f3d
There might be an option to keep using it with KEPServerEX, but I would not recommend it for anything other than testing.
Note: Basic256 is also considered obsolete by the OPC Foundation; the minimum recommended OPC UA security policy is therefore Basic256Sha256.
Some OPC UA clients and servers already support the newer, more secure security policies:
Aes128Sha256RsaOaep
Aes256Sha256RsaPss
I used the following line:
client.set_security_string("Basic256Sha256,SignAndEncrypt,xxxxx.der,xxxxx.pem")
Please try this.
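For reference, a minimal connection sketch with the python-opcua client using Basic256Sha256 (the endpoint URL and certificate paths are placeholders):
from opcua import Client

client = Client("opc.tcp://localhost:4840")  # placeholder endpoint
client.set_security_string(
    "Basic256Sha256,SignAndEncrypt,certificate.der,private_key.pem")
client.connect()
try:
    # Read something simple to confirm the secure channel works.
    print(client.get_root_node())
finally:
    client.disconnect()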

Handling Non-SSL Traffic in Python/Tornado

I have a webservice running in python 2.7.10 / Tornado that uses SSL. This service throws an error when a non-SSL call comes through (http://...).
I don't want my service to be accessible when SSL is not used, but I'd like to handle it in a cleaner fashion.
Here is my main code that works great over SSL:
if __name__ == "__main__":
tornado.options.parse_command_line()
#does not work on 2.7.6
ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_ctx.load_cert_chain("...crt.pem","...key.pem")
ssl_ctx.load_verify_locations("...CA.crt.pem")
http_server = tornado.httpserver.HTTPServer(application, ssl_options=ssl_ctx, decompress_request=True)
http_server.listen(options.port)
mainloop = tornado.ioloop.IOLoop.instance()
print("Main Server started on port XXXX")
mainloop.start()
and here is the error when I hit that server with http://... instead of https://...:
[E 151027 20:45:57 http1connection:700] Uncaught exception
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado/http1connection.py", line 691, in _server_request_loop
    ret = yield conn.read_response(request_delegate)
  File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 807, in run
    value = future.result()
  File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 209, in result
    raise_exc_info(self._exc_info)
  File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 810, in run
    yielded = self.gen.throw(*sys.exc_info())
  File "/usr/local/lib/python2.7/dist-packages/tornado/http1connection.py", line 166, in _read_message
    quiet_exceptions=iostream.StreamClosedError)
  File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 807, in run
    value = future.result()
  File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 209, in result
    raise_exc_info(self._exc_info)
  File "<string>", line 3, in raise_exc_info
SSLError: [SSL: HTTP_REQUEST] http request (_ssl.c:590)
Any ideas how I should handle that exception?
And what the standard-conform return value would be when I catch a non-SSL call to an SSL-only API?
UPDATE
This API runs on a specific port, e.g. https://example.com:1234/. I want to inform a user who tries to connect without SSL, e.g. http://example.com:1234/, that what they are doing is incorrect, by returning an error message or status code. As it is, the uncaught exception returns a 500, which they could interpret as a programming error on my part. Any ideas?
There's an excellent discussion in this Tornado issue about that, where the Tornado maintainer says:
If you have both HTTP and HTTPS in the same tornado process, you must be running two separate HTTPServers (of course such a feature should not be tied to whether SSL is handled at the tornado level, since you could be terminating SSL in a proxy, but since your question stipulated that SSL was enabled in tornado let's focus on this case first). You could simply give the HTTP server a different Application, one that just does this redirect.
So the best solution is to run a second HTTPServer that listens on port 80 and doesn't have the ssl_options parameter set.
UPDATE
A request to https://example.com/some/path will go to port 443, where you must have an HTTPServer configured to handle HTTPS traffic, while a request to http://example.com/some/path will go to port 80, where you must have another HTTPServer instance without SSL options; this is where you return the custom response code you want. That shouldn't raise any error.
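A minimal sketch of that two-server setup (handler names, ports, and certificate paths are placeholders):
import ssl
import tornado.httpserver
import tornado.ioloop
import tornado.web

class ApiHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello over https")

class HttpsRequiredHandler(tornado.web.RequestHandler):
    def get(self):
        # Tell non-SSL callers explicitly what went wrong instead of a 500.
        self.set_status(400)
        self.finish("This API is only reachable over https://")

if __name__ == "__main__":
    ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ssl_ctx.load_cert_chain("server.crt.pem", "server.key.pem")  # placeholders

    https_app = tornado.web.Application([(r"/.*", ApiHandler)])
    tornado.httpserver.HTTPServer(https_app, ssl_options=ssl_ctx).listen(443)

    http_app = tornado.web.Application([(r"/.*", HttpsRequiredHandler)])
    tornado.httpserver.HTTPServer(http_app).listen(80)

    tornado.ioloop.IOLoop.instance().start()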

Pika blocking_connection.py random timeout connecting to RabbitMQ

I have a RabbitMQ server running on a machine.
Both the client and RabbitMQ are running on the same network.
RabbitMQ has many clients.
I can ping the client from the RabbitMQ machine and back.
The longest latency measured between the machines is 12.1 ms.
Network details: standard switch network (a network of virtual machines running on a single physical machine, using VMware VC).
I'm getting random timeouts when initializing the RPC connection in
/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/adapters/blocking_connection.py
The problem is that the timeout isn't consistent; it only happens from time to time.
When I test this manually and run blocking_connection.py 1000 times from the same machine where it fails, no timeout occurs.
This is the error I get when it fails:
2013-04-23 08:24:23,396 runtest-trigger.24397 24397 DEBUG producer_rabbit initiate_rpc_connection Connecting to RabbitMQ RPC queue rpcqueue_java on host: auto-db1
2013-04-23 08:24:25,350 runtest-trigger.24397 24397 ERROR testrunner go Run 1354: cought exception: timed out
Traceback (most recent call last):
  File "/testrunner.py", line 193, in go
    self.set_runparams(jobid)
  File "/testrunner.py", line 483, in set_runparams
    self.runparams.producers_testrun = self.initialize_producers_testrun(self.runparams)
  File "/basehandler.py", line 114, in initialize_producers_testrun
    producer.set_testcase_checkout()
  File "/baseproducer.py", line 73, in set_testcase_checkout
    self.checkout_handler = pm_checkout.get_producer(self.testcasecheckout)
  File "/producer_manager.py", line 101, in get_producer
    producer = self.load_producer(plugin_dir, producer_name)
  File "/producer_manager.py", line 20, in load_producer
    producer = getattr(producer_module, 'Producer')(producer_name, self.runparams)
  File "/producer_rabbit.py", line 13, in __init__
    self.initiate_rpc_connection()
  File "/producer_rabbit.py", line 67, in initiate_rpc_connection
    self.connection = pika.BlockingConnection(pika.ConnectionParameters(host=self.conf.rpc_proxy))
  File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/adapters/blocking_connection.py", line 32, in __init__
    BaseConnection.__init__(self, parameters, None, reconnection_strategy)
  File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/adapters/base_connection.py", line 50, in __init__
    reconnection_strategy)
  File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/connection.py", line 170, in __init__
    self._connect()
  File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/connection.py", line 228, in _connect
    self.parameters.port or spec.PORT)
  File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/adapters/blocking_connection.py", line 44, in _adapter_connect
    self._handle_read()
  File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/adapters/base_connection.py", line 151, in _handle_read
    data = self.socket.recv(self._suggested_buffer_size)
timeout: timed out
Please assist.
I had a similar issue. If everything looks fine otherwise, then you most likely have some sort of misconfiguration, e.g. a bad binding. If misconfigured, you'll get a timeout because the script can't reach where it thinks it needs to go, so the error can be misleading in this case.
For my problem, I specifically had issues with both my rabbitmq.config file and my bindings, and had to use the Python solution shown in "RabbitMQ creating queues and bindings from a command line" rather than the command-line example I showed. Once updated and configured properly, everything worked fine. Hopefully this gets you in the right direction.
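For example, declaring the queue and binding explicitly from Python rules out a missing binding as the cause. A minimal sketch against a modern pika API (the exchange and routing key are made-up names; the queue name is taken from the log above):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
# Declare everything the RPC client expects to exist, so a missing
# binding can't leave the consumer waiting until the socket times out.
channel.exchange_declare(exchange="rpc_exchange", exchange_type="direct")
channel.queue_declare(queue="rpcqueue_java")
channel.queue_bind(queue="rpcqueue_java", exchange="rpc_exchange", routing_key="rpc")
connection.close()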
Pika has some timeout issues when connecting to different hosts. The solution is to pass a socket_timeout argument in the connection parameters. Pika should be upgraded to >= 0.9.14:
import pika

credentials = pika.PlainCredentials(RABBITMQ_USER, RABBITMQ_PASS)
connection = pika.BlockingConnection(pika.ConnectionParameters(
    credentials=credentials,
    host=RABBITMQ_HOST,
    socket_timeout=300))
channel = connection.channel()

gdata internal error

Since yesterday a working Python gdata program has stopped working after I changed the IP address used.
I receive the following stack trace:
Traceback (most recent call last):
  File "C:\prod\googleSite\googleSite2.py", line 23, in <module>
    feed = client.GetContentFeed()
  File "C:\Python27\lib\site-packages\gdata\sites\client.py", line 155, in get_content_feed
    auth_token=auth_token, **kwargs)
  File "C:\Python27\lib\site-packages\gdata\client.py", line 635, in get_feed
    **kwargs)
  File "C:\Python27\lib\site-packages\gdata\client.py", line 320, in request
    RequestError)
gdata.client.RequestError: Server responded with: 500, Internal Error
The code is as follows:
import gdata.sites.client
import gdata.sites.data

client = gdata.sites.client.SitesClient(source='xxx', site='yyy')
client.ssl = True  # Force API requests through HTTPS
client.ClientLogin('user@googlemail.com', 'password', client.source)
feed = client.GetContentFeed()
Update:
The issue fixes itself after an hour - is there any kind of commit or logout to avoid this?
Since you're not passing anything in GetContentFeed, it's using CONTENT_FEED_TEMPLATE % (self.domain, self.site) as the URI. I'm not sure if the IP change had an impact on what the self.domain/self.site values should be, but it might be worth checking those out.
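A quick sanity check (these are the attributes the answer above says get interpolated into the template):
# Print what the client will interpolate into CONTENT_FEED_TEMPLATE;
# if either value is wrong after the IP change, the request URI is too.
print(client.domain, client.site)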
