Should RabbitMQ take this long to set up the connection?

I am trying a basic hello world example with RabbitMQ in Python, and it is taking about 8 seconds to set up a basic blocking connection. This seems excessive, but this is my first experience with RabbitMQ, so my question is: is this normal? Can I reduce this time? Or should I look for another option? Here is my code:
import time
import pika

start = time.time()
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
end = time.time()
print "Elapsed time: %s" % (end - start)

channel = connection.channel()
channel.queue_declare(queue="hello")
channel.basic_publish(exchange="",
                      routing_key="hello",
                      body="Hello world!")
connection.close()
and my output is Elapsed time: 8.01042914391.
Thanks for the help!
[Edit] I have noticed that every time I run it, it takes almost exactly 8 seconds, to within .2%. I'm not sure if that means anything.

You need to configure the channel settings for the inbound and outbound channels, much like a thread pool executor. The default number of threads for each is 1, which might cause delays under some load.
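That answer aside, one client-side check worth trying (an assumption on my part, not something the question confirms) is whether resolving the "localhost" hostname is what eats the time; an IPv6-then-IPv4 connection fallback can stall for several seconds. A minimal timing sketch using only the pika calls already in the question:
# Hypothetical check (not a confirmed diagnosis): compare connecting by
# hostname vs. by IP to see whether name resolution explains the delay.
import time
import pika

for host in ("localhost", "127.0.0.1"):
    start = time.time()
    conn = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    print("%s took %.2f s" % (host, time.time() - start))
    conn.close()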

Related

Pyserial in_waiting CPU usage

I have a large python script with a thread that listens to a serial port and puts new data to a queue whenever it's received. I've been trying to improve the performance of the script, as right now even when nothing is happening it's using ~ 12% of my Ryzen 3600 CPU. That seems very excessive.
Here's the serial listener thread:
def listen(self):
    """
    Main listener
    """
    while self.doListen:
        # wait for a message
        if self.bus.ser.in_waiting:
            # Read rest of message
            msg = self.bus.read(self.bus.ser.in_waiting)
            # Put into queue
            self.msgQueue.put_nowait(msg)
I profiled the script using yappi and found that the serial.in_waiting call seems to be hogging the majority of the cycles. (KCachegrind screenshot of the profile omitted.)
I tried the trick suggested in this question, doing a blocking read(1) call to wait for data. However, read(1) just continuously returns empty data and never actually blocks (and yes, I've made sure my pyserial timeout is set to None).
Is there a more elegant and CPU-friendly way to wait for data on the serial bus? My messages are of variable length, so doing a blocking read(n) wouldn't work. They also don't end in newlines or any specific terminators, so readline() wouldn't work either.
Aaron's suggestion was great. A simple time.sleep(0.01) in my serial thread dramatically cut down on CPU usage. So far it looks like I'm not missing any messages either, which was my big fear with adding in sleeps.
The new listener thread:
def listen(self):
    """
    Main listener
    """
    while self.doListen:
        # wait for a message
        if self.bus.ser.in_waiting:
            # Read rest of message
            msg = self.bus.read(self.bus.ser.in_waiting)
            # Put into queue
            self.msgQueue.put_nowait(msg)
        # give the CPU a break
        time.sleep(0.01)
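For reference, the blocking-read trick the question mentions would look roughly like the sketch below. It assumes the port was opened with timeout=None so that read(1) truly blocks, and it reads straight from the pyserial object rather than the bus wrapper (the question notes this did not block in their environment):
def listen(self):
    """
    Alternative listener using a blocking read; requires the port to be
    opened with timeout=None so read(1) truly blocks.
    """
    while self.doListen:
        first = self.bus.ser.read(1)  # blocks until one byte arrives
        if first:
            # drain whatever else is already buffered
            rest = self.bus.ser.read(self.bus.ser.in_waiting)
            self.msgQueue.put_nowait(first + rest)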

strange while loop behavior with time.sleep above 90 s

I'm struggling to understand some "weird" behavior of my simple script. Basically, it works as expected if time.sleep() is set to 60 s, but as soon as I use a value above 90 (90 apparently being the limit in my case), the loop doesn't work properly. I discovered this when I was trying to pause the script for 3 minutes.
Here's my script
from gpiozero import CPUTemperature
import time
import paho.mqtt.client as mqtt  # import the client
import psutil

broker_address = "192.168.1.17"
client = mqtt.Client("P1")      # create new instance
client.connect(broker_address)  # connect to broker
#time.sleep(60)

while True:
    cpu = CPUTemperature()
    print(cpu.temperature)
    #a = cpu.temperature
    #print(psutil.cpu_percent())
    #print(psutil.virtual_memory()[2])
    #print(a)
    client.publish("test/message", cpu.temperature)
    #client.publish("test/ram", psutil.virtual_memory()[2])
    #client.publish("test/cpu", psutil.cpu_percent())
    time.sleep(91)
In this case, with 91 s it just prints the value of cpu.temperature every 91 s, whereas with a value like 60 s, besides printing, it also publishes the value via MQTT every cycle.
Am I doing something wrong here? Or do I need to change my code for a longer sleep? I'm running this on a Raspberry Pi.
Thanks in advance
EDIT:
I solved it by modifying the script, in particular how MQTT was handling the timing.
Here's the new script:
mqttc = mqtt.Client("P1")
#mqttc.on_connect = onConnect
#mqttc.on_disconnect = onDisconnect
mqttc.connect("192.168.1.17", port=1883, keepalive=60)
mqttc.loop_start()

while True:
    cpu = CPUTemperature()
    print(cpu.temperature)
    mqttc.publish("test/message", cpu.temperature)
    time.sleep(300)
The MQTT client uses a network thread to handle a number of different aspects of the connection to the broker.
Firstly, it handles sending ping requests to the broker in order to keep the connection alive. The default keepalive period is 60 seconds, and the broker will drop the connection if it does not receive any messages within 1.5 times that value, which happens to be 90 seconds.
Secondly, the thread handles any incoming messages that the client may have subscribed to.
Thirdly, if you try to publish a message that is bigger than the MTU of the network link, calling mqttc.publish() will only send the first packet; the network loop is needed to send the rest of the payload.
There are two ways to run the network tasks.
As you have found, you can start a separate thread with mqttc.loop_start().
The other option is to call mqttc.loop() within your own while loop, for example:
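Here is a minimal sketch of that second option, reusing the broker address and topic from the question; the 300-second publish interval is tracked separately so loop() can still run every second and answer keepalive pings:
import time
import paho.mqtt.client as mqtt
from gpiozero import CPUTemperature

mqttc = mqtt.Client("P1")
mqttc.connect("192.168.1.17", port=1883, keepalive=60)

next_publish = 0
while True:
    if time.time() >= next_publish:
        cpu = CPUTemperature()
        mqttc.publish("test/message", cpu.temperature)
        next_publish = time.time() + 300
    # services pings, acknowledgements and incoming messages;
    # blocks for at most one second per call
    mqttc.loop(timeout=1.0)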

Python AIOHTTP.web server multiprocessing load-balancer?

I am currently developing a web app using the aiohttp module. I'm using:
aiohttp.web, asyncio, uvloop, aiohttp_session, aiohttp_security, aiomysql, and aioredis
I have run some benchmarks against it and, while they're pretty good, I can't help but want more. I know that Python is, by nature, single-threaded. AIOHTTP uses async I/O to be non-blocking, but am I correct in assuming that it is not utilizing all CPU cores?
My idea: Run multiple instances of my aiohttp.web code via concurrent.futures in multiprocessing mode. Each process would serve the site on a different port. I would then put a load balancer in front of them. MySQL and Redis can be used to share state where necessary such as for sessions.
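Sketched minimally, that idea might look like the following; this uses multiprocessing.Process directly instead of concurrent.futures, and the ports, handler, and worker count are made up for illustration. Each process runs its own event loop:
import multiprocessing
from aiohttp import web

async def hello(request):
    return web.Response(text="hello")

def serve(port):
    # each worker process builds its own app and event loop
    app = web.Application()
    app.add_routes([web.get("/", hello)])
    web.run_app(app, host="127.0.0.1", port=port)

if __name__ == "__main__":
    base_port = 8000  # assumed; point the load balancer at these upstreams
    procs = [multiprocessing.Process(target=serve, args=(base_port + i,))
             for i in range(multiprocessing.cpu_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()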
Question: Given a server with several CPU cores, will this result in the desired performance increase? If so, is there any specific pattern to pursue in order to avert problems? I can't think of anything that these aio modules are doing that would require that there only be a single thread though I could be wrong.
Note: This is not a subjective question as I've posed it. Either the module is currently bound to one thread/process or it isn't; either it can benefit from a multiprocessing module + load balancer or it can't.
You're right: asyncio uses only one CPU (one event loop runs in one thread, and thus on one CPU).
Whether your whole project is network-bound or CPU-bound is something I can't say; you have to try.
You could use nginx or haproxy as a load balancer.
You might even try using no load balancer at all; I never tried this feature for load balancing, just as a proof of concept for a fail-over system.
With recent kernels, multiple processes can listen on the same port (when using the SO_REUSEPORT option), and I guess it's the kernel that does the round robin.
Here is a link to an article comparing the performance of a typical nginx configuration with an nginx setup using the SO_REUSEPORT feature:
https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/
It seems SO_REUSEPORT might distributes the CPU charge rather evenly, but might increase the variation of response times. Not sure this is relevant in your setup, but thought I let you know.
Added 2020-02-04:
My solution added 2019-12-09 works, but triggers a deprecation warning.
When I have more time to test it myself, I will post the improved solution here. For the time being you can find it at AIOHTTP - Application.make_handler(...) is deprecated - Adding Multiprocessing.
Added 2019-12-09:
Here is a small example of an HTTP server that can be started multiple times, all listening on the same socket.
The kernel distributes the incoming connections; I never checked whether this is efficient or not, though.
reuseport.py:
#!/usr/bin/env python3
import asyncio
import os
import socket
import time

from aiohttp import web

def mk_socket(host="127.0.0.1", port=8000, reuseport=False):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if reuseport:
        SO_REUSEPORT = 15  # value on Linux; use socket.SO_REUSEPORT where defined
        sock.setsockopt(socket.SOL_SOCKET, SO_REUSEPORT, 1)
    sock.bind((host, port))
    return sock

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    pid = os.getpid()
    text = "{:.2f}: Hello {}! Process {} is treating you\n".format(
        time.time(), name, pid)
    time.sleep(0.5)  # intentionally blocking sleep to simulate CPU load
    return web.Response(text=text)

if __name__ == '__main__':
    host = "127.0.0.1"
    port = 8000
    reuseport = True
    app = web.Application()
    sock = mk_socket(host, port, reuseport=reuseport)
    app.add_routes([web.get('/', handle),
                    web.get('/{name}', handle)])
    loop = asyncio.get_event_loop()
    coro = loop.create_server(
        protocol_factory=app.make_handler(),
        sock=sock,
    )
    srv = loop.run_until_complete(coro)
    loop.run_forever()
And one way to test it:
./reuseport.py & ./reuseport.py &
sleep 2 # sleep a little so servers are up
for n in 1 2 3 4 5 6 7 8 ; do wget -q http://localhost:8000/$n -O - & done
The output might look like:
1575887410.91: Hello 1! Process 12635 is treating you
1575887410.91: Hello 2! Process 12633 is treating you
1575887411.42: Hello 5! Process 12633 is treating you
1575887410.92: Hello 7! Process 12634 is treating you
1575887411.42: Hello 6! Process 12634 is treating you
1575887411.92: Hello 4! Process 12634 is treating you
1575887412.42: Hello 3! Process 12634 is treating you
1575887412.92: Hello 8! Process 12634 is treating you
I think it is better not to reinvent the wheel and to use one of the solutions proposed in the documentation:
https://docs.aiohttp.org/en/stable/deployment.html#nginx-supervisord

Sleep takes 86% of time in rabbitpy consumer?

I found a rather interesting problem today. I have a queue with 2K messages in it.
The consumer:
import rabbitpy

with rabbitpy.Connection('amqp://guest:guest@localhost:5672/%2f') as conn:
    with conn.channel() as channel:
        queue = rabbitpy.Queue(channel, 'example')
        for message in queue.consume_messages():
            message.ack()
takes 41 seconds to get these messages and ack them. (messages vary from 4kB to 52kB)
The publisher, however, took 15 seconds to publish them.
Upon profiling, I found that there is a call to sleep where we spend 86% of the time. This, for my application, is not acceptable. Could someone help me get rid of this sleep? (I'm OK with wasting CPU cycles while waiting for a message to arrive.)
(Zoomed-in profiler screenshot omitted.)

Timeout for idle connection

I am using the asyncore and asynchat modules to build an SMTP server (I used code from the smtpd lib), but I have a problem with connection timeouts. When I open a telnet connection to the SMTP server and leave it idle, the connection stays established even though no data is exchanged. I want to set a timeout, e.g. 30 seconds, and have the server close the idle connection if nothing comes from the client (otherwise there would be an easy DoS vulnerability). I googled for a solution, read source code and documentation, but didn't find anything usable.
Thanks
According to the asyncore documentation, asyncore.loop() has a timeout parameter, which defaults to 30 seconds. So apparently the default should already be 30 seconds; you can play with it to suit your own needs.
The timeout argument sets the timeout parameter for the appropriate select() or poll() call, measured in seconds; the default is 30 seconds.
OK, the above actually refers to the poll() or select() timeout, not the idle timeout.
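For completeness, that parameter is simply passed through to the polling call, e.g.:
import asyncore
# shortens each select()/poll() wait to 5 s; it does NOT close idle connections
asyncore.loop(timeout=5.0)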
As per this page, you can hack asyncore to support timeouts like this:
Add the following block to your own copy of asyncore.poll just after the for fd in e: block...
# handle timeouts
rw = set(r) | set(w)  # r and w are the ready-read/ready-write fd lists inside asyncore.poll
now = time.time()
for f in (i for i in rw if i in map):
    map[f].lastdata = now  # record the last time this channel saw activity
for j in (map[i] for i in map if i not in rw):
    if j.timeout + j.lastdata < now:
        # timeout!
        j.handle_close()
You ARE going to need to initialize the .timeout and .lastdata members for every instance, but that shouldn't be so bad (for a socket that shouldn't time out, I would actually suggest a 1-hour or 1-day timeout).
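A dispatcher subclass might seed those members like this (a sketch; TimeoutDispatcher is a made-up name, and the attributes match the patched poll above):
import time
import asyncore

class TimeoutDispatcher(asyncore.dispatcher):
    def __init__(self, sock=None, timeout=30):
        asyncore.dispatcher.__init__(self, sock)
        self.timeout = timeout       # seconds of allowed idleness
        self.lastdata = time.time()  # refreshed by the patched poll on activity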
