I am trying to connect to a gRPC server from a Celery task. I have the following piece of code:
import sys

import grpc

timeout = 1
host = '0.tcp.ngrok.io'
port = '7145'

channel = grpc.insecure_channel('{0}:{1}'.format(host, port))
try:
    # Block until the channel is ready, or give up after `timeout` seconds.
    grpc.channel_ready_future(channel).result(timeout=timeout)
except grpc.FutureTimeoutError:
    sys.exit(1)
stub = stub(channel)
When I run this snippet in the Python shell, I can establish the connection and execute the gRPC methods. However, when I run it from a Celery task, I get grpc.FutureTimeoutError and the connection is never established.
The Celery worker runs on the same machine as the gRPC server. I tried using the socket library to ping the gRPC server, and that worked (it returned some junk response).
I am using Python 2.7, with grpcio==1.6.0 installed. The Celery version is 4.1.0. Any pointers would be helpful.
I believe Celery uses fork under the hood, and gRPC 1.6 did not support any forking behavior.
Try updating to gRPC 1.7.
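If upgrading is not an option, a common fork-safe pattern is to build the channel inside the worker process rather than at module import time, so a channel is never carried across fork(). This is only a sketch, not code from the question: `make_channel` stands in for whatever builds the real channel, e.g. `lambda: grpc.insecure_channel('0.tcp.ngrok.io:7145')`.

```python
import os

# One channel per process. After Celery's prefork pool fork()s a worker,
# os.getpid() changes, so the child builds a fresh channel instead of
# inheriting the parent's (which gRPC < 1.7 does not survive).
_channels = {}

def get_channel(make_channel):
    """Return this process's channel, creating it on first use."""
    pid = os.getpid()
    if pid not in _channels:
        _channels[pid] = make_channel()
    return _channels[pid]
```

Each task body then calls `get_channel(...)` instead of touching a module-level channel created before the fork.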
I'm using the rpyc package to communicate between a server and a client. Everything worked fine until today; now my client PC can't receive messages from the server. So I run the socket server:
$ python bin/rpyc_classic.py
INFO:SLAVE/18812:server started on [0.0.0.0]:18812
and connect as shown in the docs:
import rpyc
conn = rpyc.classic.connect("10.99.100.200")
I tried to debug with Wireshark; the firewall is off, and everything worked before. I'm using OS X. When I connect in the reverse direction, the connection works fine.
I am able to connect to my local Kafka server with the kafka-python package; however, I am not able to connect to an external SSL-enabled Kafka server.
My Java code, meanwhile, is able to communicate with the same server using these parameters:
props.put("security.protocol", kafkaProtocol);
props.put(SslConfigs.SSL_PROTOCOL_CONFIG, kafkaProtocol);
props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, kafkaCertLocation);
props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, kafkaCertPassword);
I don't know what the equivalent parameters are in the kafka-python package. Can someone suggest what to use?
I have tried this code:
producer = KafkaProducer(
    value_serializer=lambda m: json.dumps(m).encode('utf-8'),
    bootstrap_servers='YYYYY.KAKFASERVER.com:9094',
    security_protocol='SSL',
    ssl_certfile='cacerts',
    ssl_password='xxxxxxx')
I am getting the following error message:
failed to connect to YYYYY.KAKFASERVER.com:9094 unknown error (_ssl.c:3715)
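For what it's worth, kafka-python's rough counterparts to those Java settings are sketched below with hypothetical paths. Note that the Java truststore corresponds to `ssl_cafile` (the CA bundle used to verify the broker), not `ssl_certfile` (a client certificate), and that a JKS truststore must be converted to PEM first, since kafka-python reads PEM files.

```python
# Hypothetical PEM path -- convert the JKS truststore first, e.g.:
#   keytool -importkeystore -srckeystore truststore.jks \
#           -destkeystore t.p12 -deststoretype PKCS12
#   openssl pkcs12 -in t.p12 -out ca.pem -nokeys
ssl_config = {
    'security_protocol': 'SSL',       # ~ props.put("security.protocol", ...)
    'ssl_cafile': '/path/to/ca.pem',  # ~ SSL_TRUSTSTORE_LOCATION_CONFIG
    'ssl_check_hostname': True,
}
# producer = KafkaProducer(bootstrap_servers='YYYYY.KAKFASERVER.com:9094',
#                          **ssl_config)
```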
I have created a Flask application to process GNSS data. There are certain functions which take a lot of time to execute, so I have integrated Celery to run them as asynchronous tasks. First I tested the app on localhost, using RabbitMQ as the message broker:
app.config['CELERY_BROKER_URL'] = 'amqp://localhost//'
app.config['CELERY_RESULT_BACKEND'] = 'db+postgresql://username:pssword@localhost/DBname'
After fully testing the application in a virtualenv, I deployed it on Heroku and added the RabbitMQ add-on, then changed the app.config as follows:
app.config['CELERY_BROKER_URL'] = 'amqp://myUsername:Mypassowrd@small-fiver-23.bigwig.lshift.net:10123/FlGJwZfbz4TR'
app.config['CELERY_RESULT_BACKEND'] = 'db+postgres://myusername:Mypassword@ec2-54-163-246-193.compute-1.amazonaws.com:5432/dhcbl58v8ifst/MYDB'
After changing the above I ran the celery worker
celery -A app.celery worker --loglevel=info
and got this error:
[2018-03-16 11:21:16,796: ERROR/MainProcess] consumer: Cannot connect to amqp://SHt1Xvhb:**#small-fiver-23.bigwig.lshift.net:10123/FlGJwZfbz4TR: timed out.
How can I check from the RabbitMQ management console whether my Heroku add-on is working?
It seems port 10123 is not exposed. Can you try telnet small-fiver-23.bigwig.lshift.net 10123 from the server and see whether you're able to connect?
If not, you will have to expose that port so it is reachable from the machine you're connecting from.
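The telnet check can also be scripted with the standard library; this is just a sketch, with the broker host and port from the question shown in the comment.

```python
import socket

def can_connect(host, port, timeout=5):
    """Telnet-style reachability check: True if a TCP connection opens."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except OSError:  # DNS failure, refused, or timed out
        return False

# Example, using the broker from the question:
# can_connect('small-fiver-23.bigwig.lshift.net', 10123)
```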
I am currently working with WSGI, Eventlet, and Redis to build a websocket server. We are seeing a much higher CPU load than I would expect for the number of connections.
We have around 2000 connections to the websocket server, each pushing information roughly once every 90 seconds. The info is captured and placed into the Redis DB; if there are any messages waiting for the client, they are sent back in reply.
Python version 2.7
The wsgi setup looks like the below
wsgi.server(eventlet.listen(('127.0.0.1', 8000), backlog=5000),
            hello_world, max_size=5000)
I have patched the eventlet lib at the script start using
import eventlet
eventlet.monkey_patch()
to hopefully get around any Redis-related deadlocks causing the high CPU load.
The server is an Amazon EC2 c4.large running Ubuntu 16.04.
Nothing else is running on the server other than this script, so 100% CPU seems very high to me, but perhaps my expectations are incorrect.
Can anyone help with perhaps some common gotchas?
I am using Redis to save/update/delete data for my websocket server (implemented with Autobahn, a Twisted-based websocket implementation) according to the messages I get from my clients. For Redis operations I am using the redis-py package. When more clients connect to my server concurrently, I can see requests being served synchronously. I found that the Redis operations block the server from handling parallel client requests. Why is this happening? How can I solve this issue? I am doing the Redis operations from the onMessage function of my Autobahn protocol class.
I found the root cause by googling. The issue was that the Python package I was using for Redis operations (redis-py) is synchronous, so the Twisted server's main thread was blocked while fetching/updating data in Redis. I am now trying txredisapi, a Twisted-based asynchronous Redis package, instead of redis-py, using Twisted's defer machinery.
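The same idea can be sketched in plain Python (Twisted's deferToThread and txredisapi both amount to this): the event-loop thread hands the blocking Redis call to another thread and gets a future back instead of waiting. `blocking_redis_get` is a hypothetical stand-in for a redis-py call, and modern Python is shown here for brevity (on 2.7 the `futures` backport provides the same API).

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)

def blocking_redis_get(key):
    # Stand-in for a synchronous redis-py call such as client.get(key).
    return 'value-for-' + key

def on_message(key):
    # The loop thread returns immediately with a Future; the blocking
    # call runs on a pool thread, so other clients keep being served.
    return pool.submit(blocking_redis_get, key)

future = on_message('session:42')
# ... the event loop keeps running; collect the result later:
result = future.result()
```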