I am running the official Thrift py.tornado demo, and an exception is raised after the client closes the transport.
example: https://github.com/apache/thrift/tree/master/tutorial/py.tornado
Starting the server...
ping()
add(1, 1)
zip()
zip()
calculate(1, Work(comment=None, num1=1, num2=0, op=4))
calculate(1, Work(comment=None, num1=15, num2=10, op=2))
getStruct(1)
ERROR:thrift.TTornado:thrift exception in handle_stream
Traceback (most recent call last):
File "/Users/user/venv/py27/lib/python2.7/site-packages/thrift/TTornado.py", line 174, in handle_stream
frame = yield trans.readFrame()
File "/Users/user/venv/py27/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
value = future.result()
File "/Users/user/venv/py27/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
raise_exc_info(self._exc_info)
File "/Users/user/venv/py27/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
yielded = self.gen.throw(*exc_info)
File "/Users/user/venv/py27/lib/python2.7/site-packages/thrift/TTornado.py", line 141, in readFrame
raise gen.Return(frame)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/user/venv/py27/lib/python2.7/site-packages/thrift/TTornado.py", line 125, in io_exception_context
message=str(e))
TTransportException: Stream is closed
Is there any way to avoid this error message, or how can I catch it?
Many things can cause a StreamClosedError.
Check the Thrift server side; an exception may be raised there.
After testing, I found that Thrift's THttpServer cannot serve a Tornado client stream.
Also, when I upgraded my Tornado version from 4.4.3 to 4.5, the StreamClosedError disappeared.
client side Thrift version: 0.10.0
Tornado version: 4.5
client side Python version: 3.5.2
System version: Ubuntu 16.04
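If the goal is just to silence the message, one option (a sketch, not taken from the Thrift docs) is to filter or raise the level of the thrift.TTornado logger that emits the "thrift exception in handle_stream" record; IgnoreClosedStreamFilter below is a made-up name:
import logging

class IgnoreClosedStreamFilter(logging.Filter):
    def filter(self, record):
        # Drop only the records TTornado logs when a client disconnects;
        # keep every other message from this logger.
        return 'handle_stream' not in record.getMessage()

logging.getLogger('thrift.TTornado').addFilter(IgnoreClosedStreamFilter())
# Alternatively: logging.getLogger('thrift.TTornado').setLevel(logging.CRITICAL)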
I want to communicate via the python opcua library with an OPC UA server that uses Basic128Rsa15 encryption.
client.set_security_string("Basic128Rsa15,"
"SignAndEncrypt,"
"cert.pem,"
"key.pem")
I did the same communication with a Prosys server using Basic256Sha256 encryption and everything was fine. With Basic128Rsa15 (using KEPServer) I get the following error:
In [19]: runfile('opcuaclient.py', wdir='/home/di29394/fue4bfi/python/fuere4bfi')
DEPRECATED! Do not use SecurityPolicyBasic128Rsa15 anymore!
Received an error: MessageAbort(error:StatusCode(BadSecurityChecksFailed), reason:An error occurred verifying security.)
Received an error: MessageAbort(error:StatusCode(BadSecurityChecksFailed), reason:An error occurred verifying security.)
Protocol Error
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 101, in _run
self._receive()
File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 121, in _receive
self._call_callback(0, ua.UaStatusCodeError(msg.Error.value))
File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 131, in _call_callback
.format(request_id, self._callbackmap.keys())
opcua.ua.uaerrors._base.UaError: No future object found for request: 0, callbacks in list are
Traceback (most recent call last):
File "<ipython-input-18-4187edd51b2b>", line 1, in <module>
runfile('opcuaclient.py', wdir='/home/opcuauser')
File "/usr/lib/python3/dist-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "/usr/lib/python3/dist-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "opcuaclient.py", line 57, in <module>
connected = client.connect()
File "/usr/local/lib/python3.6/dist-packages/opcua/client/client.py", line 259, in connect
self.open_secure_channel()
File "/usr/local/lib/python3.6/dist-packages/opcua/client/client.py", line 309, in open_secure_channel
result = self.uaclient.open_secure_channel(params)
File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 265, in open_secure_channel
return self._uasocket.open_secure_channel(params)
File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 199, in open_secure_channel
response = struct_from_binary(ua.OpenSecureChannelResponse, future.result(self.timeout))
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 430, in result
raise CancelledError()
CancelledError
The certificate was self-signed using the cryptography library (snippet):
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(1000)
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=10*365))  # could also be made dynamic
    .add_extension(basic_contraints, False)
    .add_extension(san, False)
    .sign(key, hashes.SHA256(), default_backend())
)
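For completeness, here is a sketch (not part of the original post) of how the cert and key objects from the snippet above might be written out to the cert.pem and key.pem files referenced in set_security_string():
from cryptography.hazmat.primitives import serialization

# Serialize the certificate and private key to the PEM files used by the client.
with open("cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))

with open("key.pem", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption()))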
Do I have to change the certificate generation to match Basic128Rsa15, or is something else wrong?
Thanks in advance.
I didn't feel good about using Basic128Rsa15, but obviously that was not the problem. The problem was that I had connected to the KEPServer at least twice with different certificates but the same (valid) URI. The server had trouble with this and rejected all incoming connections (the error message is not very helpful here). After deleting all of the requests on the server and connecting again, everything worked (even with Basic128Rsa15).
The error message is actually quite clear!
DEPRECATED! Do not use SecurityPolicyBasic128Rsa15 anymore!
Basic128Rsa15 is no longer considered secure by the OPC Foundation, which recommends deprecating it.
Source: http://opcfoundation-onlineapplications.org/ProfileReporting/index.htm?ModifyProfile.aspx?ProfileID=a84d5b70-47b2-45ca-a0cc-e98fe8528f3d
There might be an option to keep using it with KEPServerEX, but I would not recommend it for anything other than testing.
Note: Basic256 is also considered obsolete by the OPC Foundation; the minimum recommended OPC UA Security Policy is therefore Basic256Sha256.
Some OPC UA clients and servers already support the latest, more secure Security Policies:
Aes128Sha256RsaOaep
Aes256Sha256RsaPss
I used the following line:
client.set_security_string("Basic256Sha256,SignAndEncrypt,xxxxx.der,xxxxx.pem")
Please try this.
Need some help! While running a Python script that uses RabbitMQ RPC, I am getting a "Socket 104, Socket closed when connection was open" error. Below is the Python traceback and some code:
Traceback (most recent call last):
File "./server.py", line 34, in <module>
channel.start_consuming()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1681, in start_consuming
self.connection.process_data_events(time_limit=None)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 656, in process_data_events
self._dispatch_channel_events()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 469, in _dispatch_channel_events
impl_channel._get_cookie()._dispatch_events()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1310, in _dispatch_events
evt.body)
File "./server.py", line 30, in on_request
body=json.dumps(DEVICE_INFO))
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1978, in basic_publish
mandatory, immediate)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 2065, in publish
self._flush_output()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1174, in _flush_output
*waiters)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
Apologies, as I am unable to comment due to low reputation. Could you provide a little more information on how you are opening your connection? Is it really open?
It might be a loss of connection to the RabbitMQ server: pika doesn't handle disconnects on its own, and that often results in a stack trace like this one.
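A minimal sketch of a reconnect loop around a BlockingConnection, assuming the server setup from server.py (the host, queue name, and on_request callback are placeholders), might look like this:
import pika

def run_rpc_server():
    while True:
        try:
            connection = pika.BlockingConnection(
                pika.ConnectionParameters(host='localhost'))
            channel = connection.channel()
            channel.queue_declare(queue='rpc_queue')
            # register the on_request consumer here, as in the original server.py
            channel.start_consuming()
        except pika.exceptions.ConnectionClosed:
            # The broker closed the connection (restart, missed heartbeats, ...);
            # loop around and reconnect instead of crashing.
            continue
Depending on the pika version, tuning the heartbeat in ConnectionParameters can also help keep long-idle connections alive.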
I also had a similar problem. In my case it was because my pika connection was dropping after some time, and my colleague was able to deal with this by adding a wait for mq:port_number.
We were using a Docker container, so we added the following line to our invoke.sh to wait for mq:
filename.py --wait-secs 30 --port-wait mq:5672
I hope you are able to resolve this after doing that.
Otherwise, it would be better to check whether the connection is being dropped by pika before your Python script runs, or to provide more information on how you are invoking it.
I'm trying to connect to two MySQL databases (one local, one remote) at the same time using Python 3.4 but I'm really struggling. Splitting the problem into three:
Step 1: connect to the local DB. This is working fine using PyMySQL. (MySQLdb isn't compatible with Python 3.4, of course.)
Step 2: connect to the remote DB (which needs to use SSH). I can get it to work from the Linux command prompt but not from Python... see below.
Step 3: connect to both at the same time. I think I'm supposed to use a different port for the remote database so that I can have both connections at the same time, but I'm out of my depth here! If it's relevant then the two DBs will have different names. And if this question isn't directly related, please tell me and I'll post it separately.
Unfortunately I'm not really starting in the right place for a newbie... once I can get this working I can happily go back to basic Python and SQL but hopefully someone will take pity on me and give me a hand to get started!
For Step 2, my code is below. It seems to be quite close to the sshtunnel example which answers this question Python - SSH Tunnel Setup and MySQL DB Access - though that uses MySQLdb. For the moment I'm embedding the connection parameters – I'll move them to the config file once it's working properly.
import dropbox, pymysql, shlex, shutil, subprocess
from sshtunnel import SSHTunnelForwarder
import iot_config as cfg

def CloseLocalDB():
    localcur.close()
    localdb.close()

def CloseRemoteDB():
    # Disconnect from the database
    # remotecur.close()
    # remotedb.close()
    # Close the SSH tunnel
    # ssh.close()
    print("end of CloseRemoteDB function")

def OpenLocalDB():
    global localcur, localdb
    localdb = pymysql.connect(host=cfg.localdbconn['host'], user=cfg.localdbconn['user'], passwd=cfg.localdbconn['passwd'], db=cfg.localdbconn['db'])
    localcur = localdb.cursor()

def OpenRemoteDB():
    global remotecur, remotedb
    with SSHTunnelForwarder(
            ('my_remote_site', 22),
            ssh_username = "my_ssh_username",
            ssh_private_key = "/etc/ssh/my_private_key.ppk",
            ssh_private_key_password = "my_private_key_password",
            remote_bind_address = ('127.0.0.1', 3308)) as server:
        remotedb = None
        # Following line gives an error if uncommented
        # remotedb = pymysql.connect(host='127.0.0.1', user='remote_db_user', passwd='remote_db_password', db='remote_db_name', port=server.local_bind_port)
        # remotecur = remotedb.cursor()

# Main program starts here
OpenLocalDB()
CloseLocalDB()
OpenRemoteDB()
CloseRemoteDB()
This is the error I'm getting:
2016-04-21 19:13:33,487 | ERROR | Secsh channel 0 open FAILED: Connection refused: Connect failed
2016-04-21 19:13:33,553 | ERROR | In #1 <-- ('127.0.0.1', 60591) to ('127.0.0.1', 3308) failed: ChannelException(2, 'Connect failed')
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 60591)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/sshtunnel.py", line 286, in handle
src_address)
File "/usr/local/lib/python3.4/dist-packages/paramiko/transport.py", line 834, in open_channel
raise e
paramiko.ssh_exception.ChannelException: (2, 'Connect failed')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.4/socketserver.py", line 613, in process_request_thread
self.finish_request(request, client_address)
File "/usr/lib/python3.4/socketserver.py", line 344, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python3.4/socketserver.py", line 669, in __init__
self.handle()
File "/usr/local/lib/python3.4/dist-packages/sshtunnel.py", line 296, in handle
raise HandlerSSHTunnelForwarderError(msg)
sshtunnel.HandlerSSHTunnelForwarderError: In #1 <-- ('127.0.0.1', 60591) to ('127.0.0.1', 3308) failed: ChannelException(2, 'Connect failed')
----------------------------------------
Traceback (most recent call last):
File "/home/pi/Documents/iot_pm2/iot_ssh_example_for_help.py", line 38, in <module>
OpenRemoteDB()
File "/home/pi/Documents/iot_pm2/iot_ssh_example_for_help.py", line 32, in OpenRemoteDB
remotedb = pymysql.connect(host='127.0.0.1', user='remote_db_user', passwd='remote_db_password', db='remote_db_name', port=server.local_bind_port)
File "/usr/local/lib/python3.4/dist-packages/pymysql/__init__.py", line 88, in Connect
return Connection(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 678, in __init__
self.connect()
File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 889, in connect
self._get_server_information()
File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 1190, in _get_server_information
packet = self._read_packet()
File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 945, in _read_packet
packet_header = self._read_bytes(4)
File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 981, in _read_bytes
2013, "Lost connection to MySQL server during query")
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
Thanks in advance.
Answering my own question because, with a lot of help from J.M. Fernández on GitHub, I have a solution: the example that I copied at the beginning uses port 3308, but port 3306 is the standard. Once I'd changed this, it started working.
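For reference, here is a minimal sketch of the corrected remote connection, assuming the remote MySQL server listens on the default port 3306 and using the same placeholder SSH credentials as the question:
import pymysql
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
        ('my_remote_site', 22),
        ssh_username='my_ssh_username',
        ssh_private_key='/etc/ssh/my_private_key.ppk',
        ssh_private_key_password='my_private_key_password',
        remote_bind_address=('127.0.0.1', 3306)) as server:   # 3306, not 3308
    remotedb = pymysql.connect(
        host='127.0.0.1',
        port=server.local_bind_port,   # forwarded local port chosen by sshtunnel
        user='remote_db_user',
        passwd='remote_db_password',
        db='remote_db_name')
    with remotedb.cursor() as cur:
        cur.execute('SELECT VERSION()')
        print(cur.fetchone())
    remotedb.close()
The key points are binding the tunnel to the remote server's port 3306 and pointing PyMySQL at server.local_bind_port rather than at 3306 directly.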
I am attempting to run an autobahn WAMP application using Twisted on a Linux machine (Ubuntu Server, 64-bit).
I did note that when developing/testing I needed to install pywin32 after switching to wamp.Application, but that is of course not available/useful in a Linux environment.
Previously I have been running autobahn WebSocket programs on this machine fine, but this error now occurs with the switch to autobahn.twisted.wamp.Application.
Any help in combating this problem so I can get the application running would be appreciated.
my imports are:
from twisted.internet.defer import returnValue
from autobahn.twisted.wamp import Application
I get the following stack trace:
2014-08-23 09:54:15+1200 [WampWebSocketServerProtocol,0,127.0.0.1] RX WAMP HELLO Message (realm = realm1, roles = [<autobahn.wamp.role.RoleSubscriberFeatures instance at 0x9bd79ec>, <autobahn.wamp.role.RolePublisherFeatures instance at 0x9bd7aec>, <autobahn.wamp.role.RoleCallerFeatures instance at 0x9bd7b6c>, <autobahn.wamp.role.RoleCalleeFeatures instance at 0x9bd7fac>], authmethods = None, authid = None)
2014-08-23 09:54:15+1200 [WampWebSocketServerProtocol,0,127.0.0.1] Unhandled error in Deferred:
2014-08-23 09:54:15+1200 [WampWebSocketServerProtocol,0,127.0.0.1] Unhandled Error
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/autobahn/wamp/websocket.py", line 90, in onMessage
self._session.onMessage(msg)
File "/usr/local/lib/python2.7/dist-packages/autobahn/wamp/protocol.py", line 1267, in onMessage
self._add_future_callbacks(d, success, failed)
File "/usr/local/lib/python2.7/dist-packages/autobahn/twisted/wamp.py", line 72, in _add_future_callbacks
return future.addCallbacks(callback, errback)
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 295, in addCallbacks
self._runCallbacks()
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/usr/local/lib/python2.7/dist-packages/autobahn/wamp/protocol.py", line 1250, in success
welcome(self._realm, res.authid, res.authrole, res.authmethod, res.authprovider)
File "/usr/local/lib/python2.7/dist-packages/autobahn/wamp/protocol.py", line 1221, in welcome
self._router = self._router_factory.get(realm)
File "/usr/local/lib/python2.7/dist-packages/autobahn/wamp/router.py", line 173, in get
self._routers[realm] = self.router(self, realm, self._options)
File "/usr/local/lib/python2.7/dist-packages/autobahn/wamp/router.py", line 52, in __init__
self._broker = self.broker(self, self._options)
exceptions.AttributeError: Router instance has no attribute 'broker'
It seems that there was likely a bug in autobahn, because:
pip install autobahn --upgrade
seems to have fixed this, with autobahn now at version 0.8.14.
Sorry for not checking for updates before asking :)
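For context, a minimal Application-based component of the kind the question describes might look roughly like this. This is a sketch only: the procedure URI, endpoint URL, and realm are made-up placeholders, and the exact run() signature can differ between autobahn versions:
from autobahn.twisted.wamp import Application

app = Application()

@app.register('com.example.add')   # hypothetical procedure URI
def add(a, b):
    return a + b

if __name__ == '__main__':
    # Runs the component against the given router endpoint and realm.
    app.run('ws://127.0.0.1:8080/ws', 'realm1')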
I use:
MongoDB 1.6.5
Pymongo 1.9
Python 2.6.6
I have 3 types of daemons. The 1st loads data from the web, the 2nd analyzes it and saves the result, and the 3rd groups the results. All of them work with MongoDB.
At some point the 3rd daemon throws many exceptions like this (mostly when there is a large amount of data in the DB):
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/gevent-0.13.1-py2.6-linux-x86_64.egg/gevent/greenlet.py", line 405, in run
result = self._run(*self.args, **self.kwargs)
File "/data/www/spider/daemon/scripts/mainconverter.py", line 72, in work
for item in res:
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/cursor.py", line 601, in next
if len(self.__data) or self._refresh():
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/cursor.py", line 564, in _refresh
self.__query_spec(), self.__fields))
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/cursor.py", line 521, in __send_message
**kwargs)
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/connection.py", line 743, in _send_message_with_response
return self.__send_and_receive(message, sock)
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/connection.py", line 724, in __send_and_receive
return self.__receive_message_on_socket(1, request_id, sock)
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.9_-py2.6-linux-x86_64.egg/pymongo/connection.py", line 714, in __receive_message_on_socket
struct.unpack("<i", header[8:12])[0])
AssertionError: ids don't match -561338340 0
<Greenlet at 0x2baa628: <bound method Worker.work of <scripts.mainconverter.Worker object at 0x2ba8450>>> failed with AssertionError
Can anyone tell me what causes this exception and how to fix it?
Thanks.
This is likely a threading problem related to how you are using worker threads with gevent coroutines. It seems like the pymongo connection object is reading a response for a request it didn't make.
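One common way to avoid this (a sketch only; it assumes each worker can afford its own connection, and the host, database, and collection names are placeholders) is to give every greenlet its own pymongo connection instead of sharing one. Connection is the pymongo 1.x API; later versions renamed it MongoClient:
import gevent
from pymongo import Connection

def work(worker_id):
    # One connection per greenlet, so responses can't be read by the wrong worker.
    conn = Connection('localhost', 27017)
    db = conn.spider
    for item in db.results.find({'state': 'new'}):
        # ... process item ...
        pass
    conn.disconnect()

jobs = [gevent.spawn(work, i) for i in range(3)]
gevent.joinall(jobs)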