I am testing Redis on Heroku. I have a simple Flask app that can create the Redis client and kill the client, but ...
@app.route('/client-status')
def client_status():
    redis.client_setname("first")
    redis.client_kill('addr') # 10.157.2.68:60097
    return "Success"
The question is: how do I get the addr? I know a way to get the name: redis.client_getname()...
You should use the CLIENT LIST command (http://redis.io/commands/client-list) to get all the information about the server's client connections, and then retrieve the addr field.
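For example, a minimal sketch with redis-py (assuming the same redis client object as in the question, and that the connection you want to kill was named "first" via client_setname):

for client in redis.client_list():
    if client.get('name') == 'first':
        redis.client_kill(client['addr'])

Each entry returned by client_list() is a dict with fields such as addr and name, mirroring the CLIENT LIST output.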
I am trying to write producer and consumer code in Python using pika for RabbitMQ. However, for my specific case, I need to run the producer on one host and the consumer on another.
I have already written the producer code as:
import pika
credentials = pika.PlainCredentials('username', 'password')
parameters = pika.ConnectionParameters('ip add of another host', 5672, '/', credentials)
connection = pika.BlockingConnection()
channel = connection.channel()
channel.queue_declare(queue='test')
channel.basic_publish(exchange='', routing_key='test', body='hello all!')
print (" [x] sent 'Hello all!")
connection.close()
The above producer code runs without any error. I also created a new user and gave it administrator privileges on the rabbitmq-server. However, when I run the consumer code on the other host running rabbitmq-server, I do not see any output:
import pika
credentials = pika.PlainCredentials('username', 'password')
parameters = pika.ConnectionParameters('localhost', 5672, '/', credentials)
connection = pika.BlockingConnection()
channel = connection.channel()
channel.queue_declare(queue='test')
def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
channel.basic_consume(
    queue='test', on_message_callback=callback, auto_ack=True)
print(' [x] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
So here I had two hosts on the same network, each with RabbitMQ installed; however, one has version 3.7.10 and the other 3.7.16.
The producer is able to send the text without error, but the consumer on the other host is not receiving any text.
I do not get any problem when both run on the same machine, as I just replace the connection settings with localhost. Since the guest user is only allowed to connect on localhost by default, I created a new user on the consumer host running rabbitmq-server.
Please see if anyone can help me out here...
I have a couple of questions when I see your problem:
First, are you 100% sure that in your RabbitMQ management monitoring you see two connections, one from your localhost and another from the other host? This will help with debugging.
Second, did you check that the incoming port 5672 is open on the server that hosts RabbitMQ? Maybe your producer does not manage to connect. What is your cloud provider?
If you don't want to manage those kinds of issues, you should use a service like https://zenaton.com. They host everything for you, and you have integrated monitoring, error handling etc.
Your consumer and producer applications must connect to the same RabbitMQ server. If you have two instances of RabbitMQ running, they are independent. Messages do not move from one instance of RabbitMQ to another unless you configure Shovel or Federation.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on Stack Overflow.
You don't seem to be passing the parameters to the BlockingConnection instance.
import pika
rmq_server = "ip_address_of_rmq_server"
credentials = pika.PlainCredentials('username', 'password')
parameters = pika.ConnectionParameters(rmq_server, 5672, '/', credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
Also, your consumer is connecting to the localhost hostname. Make sure this actually resolves and that your RabbitMQ service is listening on the localhost address (127.0.0.1); it may not be bound to that address. I believe RMQ binds to all interfaces (and thus all addresses) by default, but I'm not sure.
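For completeness, a sketch of the consumer with the same fix applied, pointing at the same RabbitMQ server as the producer (the address and credentials are placeholders):

import pika

# Connect to the same RabbitMQ server the producer publishes to (placeholder values).
rmq_server = "ip_address_of_rmq_server"
credentials = pika.PlainCredentials('username', 'password')
parameters = pika.ConnectionParameters(rmq_server, 5672, '/', credentials)
connection = pika.BlockingConnection(parameters)  # parameters are passed in here
channel = connection.channel()
channel.queue_declare(queue='test')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(queue='test', on_message_callback=callback, auto_ack=True)
print(' [x] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()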
I have an end-to-end pipeline for a web application, like below, in Python 3.6:
Socket (connection from client to server) -> Flask Server -> Kafka Producer -> Kafka Consumer -> NLPService
Now when I get some result back from the NLPService, I need to send it back to the client. I am thinking of the steps below:
1. NLP service writes the result to a different topic on the Kafka producer (done)
2. Kafka consumer retrieves the result from the Kafka broker (done)
3. Kafka consumer needs to write the result to the Flask server
4. Then the Flask server will send the result back to the socket
5. The socket writes to the client
I have already done steps 1-2 but am stuck at steps 3 and 4. How do I write from Kafka to the Flask server? If I just call a function in my server.py, then logically it seems like I have to create a socket within that function in server.py, which will do the job of sending to the client through the socket. But syntax-wise it looks weird. What am I missing?
In consumer.py:
# receiving reply
topicReply = 'Reply'
consumerReply = KafkaConsumer(topicReply, value_deserializer=lambda m: json.loads(m.decode('ascii')))
for message in consumerReply:
    # send reply back to the server
    fromConsumer(message.value)
In server.py:
socketio = SocketIO(app)

def fromConsumer(msg):
    @socketio.on('reply')
    def replyMessage(msg):
        send(msg)
The above construct in server.py doesn't make sense to me. Please suggest.
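One way to make steps 3-4 concrete (a sketch under my own assumptions, not part of the original post) is to run the Kafka consumer loop as a background task inside the Flask-SocketIO process and emit each result to the client over the existing Socket.IO connection, rather than nesting a handler inside fromConsumer. The topic name 'Reply' comes from the question; the 'reply' event name is an assumption:

import json
from flask import Flask
from flask_socketio import SocketIO
from kafka import KafkaConsumer

app = Flask(__name__)
socketio = SocketIO(app)

def consume_replies():
    # Same consumer as in consumer.py, but running inside the server process.
    consumerReply = KafkaConsumer('Reply', value_deserializer=lambda m: json.loads(m.decode('ascii')))
    for message in consumerReply:
        socketio.emit('reply', message.value)  # push the result to connected clients

if __name__ == '__main__':
    socketio.start_background_task(consume_replies)
    socketio.run(app)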
I have a very simple Python (Flask-SocketIO) application which works as a server, and another app written in AngularJS which is a client.
To handle connected and disconnected clients I use, respectively:
@socketio.on('connect')
def on_connect():
    print("Client connected")

@socketio.on('disconnect')
def on_disconnect():
    print("Client disconnected")
When a client connects to my app I get information about it, but if a client disconnects (for example, because of network problems) I don't get any information.
What is the proper way to handle the situation in which a client disconnects unexpectedly?
There are two types of connections: long-polling and WebSocket.
When you use WebSocket, the client knows instantly that the server was disconnected.
In the case of long-polling, you need to set the ping_interval and ping_timeout parameters (I also found information about heartbeat_interval and heartbeat_timeout, but I don't know how they relate to ping_*).
From the server's perspective, it doesn't know that the client was disconnected; the only way to get that information is to set ping_interval and ping_timeout.
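For reference, a minimal sketch of passing those parameters when creating the SocketIO server (the values are only examples); with them set, the disconnect handler eventually fires when a client stops answering pings:

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# Example values: ping every 25 seconds and consider the client gone after
# 60 seconds without a response, so on_disconnect() fires even when the
# client drops without cleanly closing the connection.
socketio = SocketIO(app, ping_interval=25, ping_timeout=60)

@socketio.on('disconnect')
def on_disconnect():
    print("Client disconnected")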
I need to log the IP address of every user of my web app, which I've created with Python and Flask.
I'm using:
request.remote_addr
But that returns the IP address of the server the app is deployed to. Any fixes for this?
How do you deploy the Flask application?
I guess you deploy your app via a reverse-proxy server like nginx, right?
If so, then request.remote_addr is the address of your proxy server, because the proxy forwarded the client's request to your application and sent the response back to the client.
To fix this, see: http://flask.pocoo.org/docs/0.11/deploying/wsgi-standalone/#proxy-setups
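For instance, a sketch of the proxy-fix approach that page describes, using Werkzeug's ProxyFix middleware (the import path differs between older and newer Werkzeug versions, and this assumes exactly one proxy in front of the app):

from flask import Flask, request
from werkzeug.middleware.proxy_fix import ProxyFix  # werkzeug.contrib.fixers.ProxyFix in older versions

app = Flask(__name__)
# Trust one level of proxy so remote_addr is rebuilt from X-Forwarded-For.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1)

@app.route('/')
def index():
    return "Client IP: %s" % request.remote_addr

This only works if the proxy (e.g. nginx) is configured to pass the X-Forwarded-For header along.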
The easiest way to get the user's (i.e., the client's) IP is to set this as a variable or use it directly:
request.environ['REMOTE_ADDR']
To get your server's IP:
request.remote_addr
My project uses Bottle and HBase; the client connects to HBase via the Python Thrift client. Simplified, the code looks like this:
#!/usr/bin/env python
from bottle import route, run, default_app, request

client = HBaseClient()

@route('/', method='POST')
def index():
    data = client.getdata()
    return data
Now the issue is that if the client disconnects, our requests fail. So I need to make sure the client stays alive.
One solution is to use a connection pool; is there an existing connection pool I can refer to?
Any other solution for this issue?
It looks like HappyBase can deal with this issue.
HappyBase has a connection pool that tries to deal with broken connections to some extent: http://happybase.readthedocs.org/en/latest/user.html#using-the-connection-pool
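For illustration, a minimal sketch of that pool dropped into the Bottle handler from the question (the Thrift host, table name, and row key are placeholders):

#!/usr/bin/env python
import happybase
from bottle import route, run, request

# A small pool of Thrift connections; the pool replaces broken connections as needed.
pool = happybase.ConnectionPool(size=3, host='hbase-thrift-host')

@route('/', method='POST')
def index():
    with pool.connection() as connection:  # borrow a live connection from the pool
        row = connection.table('mytable').row(b'row-key')  # placeholder table and row key
    return repr(row)

run(host='0.0.0.0', port=8080)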