I need to communicate with an embedded system over RS232. For this I want to profile the time it takes to send a response to each command.
I've tested this code using two methods: datetime.now() and timeit()
Method #1
def resp_time(n, msg):
    """Given number of tries - n and bytearray list"""
    msg = bytearray(msg)
    cnt = 0
    timer = 0
    while cnt < n:
        time.sleep(INTERVAL)
        a = datetime.datetime.now()
        ser.flush()
        ser.write(msg)
        line = []
        for count in ser.read():
            line.append(count)
            if count == '\xFF':
                # print line
                break
        b = datetime.datetime.now()
        c = b - a
        # print c.total_seconds()*1000
        timer = timer + c.total_seconds() * 1000
        cnt = cnt + 1
    return timer / n
ser = serial.Serial(COMPORT, BAUDRATE, serial.EIGHTBITS, serial.PARITY_NONE, serial.STOPBITS_ONE, timeout=16)

if ser.isOpen():
    print "Serial port opened at: Baud:", COMPORT, BAUDRATE
    cmd = read_file()
    # returns a list of commands [msg1, msg2, ...]
    n = 100
    for index in cmd:
        timer = resp_time(n, index)
        print "Time in msecs over %d runs: %f " % (n, timer)
Method #2
def com_loop(msg):
    msg = bytearray(msg)
    time.sleep(INTERVAL)
    ser.flush()
    ser.write(msg)
    line = []
    for count in ser.read():
        line.append(count)
        if count == '\xFF':
            break
if __name__ == '__main__':
    import timeit
    ser = serial.Serial(COMPORT, BAUDRATE, serial.EIGHTBITS, serial.PARITY_NONE, serial.STOPBITS_ONE, timeout=16)
    if ser.isOpen():
        print "Serial port opened at: Baud:", COMPORT, BAUDRATE
        cmd = read_file()
        # returns a list of commands [msg1, msg2, ...]
        n = 100
        for index in cmd:
            t = timeit.timeit("com_loop(index)", "from __main__ import com_loop;index=%s;" % index, number=n)
            print t / 100
With datetime I get 2 ms to execute a command, and with timeit I get 200 ms for the same command.
I suspect I'm not calling timeit() properly, can someone point me in the right direction?
I'd assume 200µs is closer to the truth, considering your COM port will run at something like 115200 baud; assuming messages are 8 bytes long, transmitting one message would take about 9/115200 s ~= 10/100000 = 1/10,000 = 100µs on the serial line alone. Being faster than that is pretty much impossible.
Python is definitely not the language of choice for timing characterization at these scales. You will need a logic analyzer, or you will need to work very close to the serial controller (which I hope is attached directly to your PC's IO controller and not some USB device, because that alone introduces latencies of the same order of magnitude, at least). If you're talking about microseconds, the limiting factor in the measurement is usually the random time it takes for your PC to react to an interrupt, the OS to run the interrupt service routine, the scheduler to resume your userland process, and only then does Python start, with its levels and levels of indirection. You're basically measuring the size of single grains of sand by holding a banana next to them.
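One additional thing worth checking (an observation about the posted code, not a claim from the answer above): in Method #2 the time.sleep(INTERVAL) call sits inside com_loop, so timeit includes it in every measurement, whereas Method #1 starts its datetime clock only after the sleep; that alone can account for a large fixed offset between the two numbers. A minimal sketch that keeps the pacing outside the measured call, assuming the sleep is removed from com_loop and that ser, cmd and INTERVAL are defined as above:

import time
import timeit

# Sketch only: assumes com_loop no longer calls time.sleep(INTERVAL) itself.
n = 100
for index in cmd:
    msg = bytearray(index)
    total = 0.0
    for _ in range(n):
        time.sleep(INTERVAL)  # pacing between commands, not measured
        total += timeit.timeit(lambda: com_loop(msg), number=1)
    print "Time in msecs over %d runs: %f" % (n, total / n * 1000)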
My custom sensor dashboard requests new readings every second.
This worked well, until I hooked up 3 DS18B20 temperature sensors (1-wire protocol, so all on 1 pin), which each take 750ms to provide new data.
This is the class I currently use to read the temperature of each sensor:
# ds18b20.py
# written by Roger Woollett
import os
import glob
import time

class DS18B20:
    # much of this code is lifted from Adafruit web site
    # This class can be used to access one or more DS18B20 temperature sensors
    # It uses OS supplied drivers and one wire support must be enabled
    # To do this add the line
    # dtoverlay=w1-gpio
    # to the end of /boot/config.txt
    #
    # The DS18B20 has three pins, looking at the flat side with the pins pointing
    # down pin 1 is on the left
    # connect pin 1 to GPIO ground
    # connect pin 2 to GPIO 4 *and* GPIO 3.3V via a 4k8 (4800 ohm) pullup resistor
    # connect pin 3 to GPIO 3.3V
    # You can connect more than one sensor to the same set of pins
    # Only one pullup resistor is required

    def __init__(self):
        # Load required kernel modules
        os.system('modprobe w1-gpio')
        os.system('modprobe w1-therm')

        # Find file names for the sensor(s)
        base_dir = '/sys/bus/w1/devices/'
        device_folder = glob.glob(base_dir + '28*')

        self._num_devices = len(device_folder)
        self._device_file = list()
        i = 0
        while i < self._num_devices:
            self._device_file.append(device_folder[i] + '/w1_slave')
            i += 1

    def _read_temp(self, index):
        # Issue one read to one sensor
        # You should not call this directly
        # First check if this index exists
        if index >= len(self._device_file):
            return False

        f = open(self._device_file[index], 'r')
        data = f.read()
        f.close()

        return data

    def tempC(self, index=0):
        # Call this to get the temperature in degrees C
        # detected by a sensor
        data = self._read_temp(index)
        retries = 0

        # Check for error
        if data == False:
            return None

        while (not "YES" in data) and (retries > 0):
            # Read failed so try again
            time.sleep(0.1)
            #print('Read Failed', retries)
            data = self._read_temp(index)
            retries -= 1

        if (retries == 0) and (not "YES" in data):
            return None

        (discard, sep, reading) = data.partition(' t=')
        if reading == 85000:
            # 85ºC is the boot temperature of the sensor, so ignore that value
            return None

        temperature = float(reading) / 1000.0
        return temperature

    def device_count(self):
        # Call this to see how many sensors have been detected
        return self._num_devices
I already tried returning the previous temperature reading if the current one isn't finished yet; however, this didn't reduce the time it took to read a sensor, so I guess the only way is to do things asynchronously.
I could reduce the precision to shorten each reading, but ideally I would read all of the sensors simultaneously on separate threads.
How can I best implement this? Or are there other ways to improve the reading speed of multiple DS18B20 sensors?
Thanks for any insights!
You're facing some limitations introduced by the Linux kernel driver. If you were interacting with the OneWire protocol directly, you would only have a single 750ms read cycle for all three sensors, rather than (3 * 750ms). When speaking the 1-wire protocol directly, you can issue a single "convert temperature" command to all devices on the bus, as described here, and then read all the sensors.
The Linux driver explicitly doesn't support this mode of operation:
If none of the devices are parasite powered it would be possible to convert all the devices at the same time and then go back to read individual sensors. That isn’t currently supported. The driver also doesn’t support reduced precision (which would also reduce the conversion time) when reading values.
That means you're stuck with a 750ms per device read cycle. Your best option is probably placing the sensor reading code in a separate thread, e.g.:
import glob
import threading
import time


# Note that we're inheriting from threading.Thread here;
# see https://docs.python.org/3/library/threading.html
# for more information.
class DS18B20(threading.Thread):
    default_base_dir = "/sys/bus/w1/devices/"

    def __init__(self, base_dir=None):
        super().__init__()
        self._base_dir = base_dir if base_dir else self.default_base_dir
        self.daemon = True
        self.discover()

    def discover(self):
        device_folder = glob.glob(self._base_dir + "28*")
        self._num_devices = len(device_folder)
        self._device_file: list[str] = []
        for i in range(self._num_devices):
            self._device_file.append(device_folder[i] + "/w1_slave")
        self._values: list[float | None] = [None] * self._num_devices
        self._times: list[float] = [0.0] * self._num_devices

    def run(self):
        """Thread entrypoint: read sensors in a loop.

        Calling DS18B20.start() will cause this method to run in
        a separate thread.
        """
        while True:
            for dev in range(self._num_devices):
                self._read_temp(dev)

            # Adjust this value as you see fit, noting that you will never
            # read actual sensor values more often than 750ms * self._num_devices.
            time.sleep(1)

    def _read_temp(self, index):
        for i in range(3):
            with open(self._device_file[index], "r") as f:
                data = f.read()

            if "YES" not in data:
                time.sleep(0.1)
                continue

            discard, sep, reading = data.partition(" t=")
            temp = float(reading) / 1000.0
            self._values[index] = temp
            self._times[index] = time.time()
            break
        else:
            print(f"failed to read device {index}")

    def tempC(self, index=0):
        return self._values[index]

    def device_count(self):
        """Return the number of discovered devices"""
        return self._num_devices
Because this is a thread, you need to .start() it first, so your code would look something like:
d = DS18B20()
d.start()

while True:
    for i in range(d.device_count()):
        print(f'dev {i}: {d.tempC(i)}')
    time.sleep(0.5)
You can call the tempC method as often as you want, because it just returns a value from the _values array. The actual update frequency is controlled by the loop in the run method (and the minimum cycle time imposed by the sensors).
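Since the run loop above already records a timestamp for each successful read in self._times, a small extra method (my addition, not part of the original answer) would let the dashboard check how stale a value is; with three sensors, values much older than the expected few-second refresh cycle would suggest a read problem:

# Hypothetical method to add to the DS18B20 class above.
def last_read_age(self, index=0):
    """Seconds since sensor `index` last produced a value, or None if never read."""
    ts = self._times[index]
    return (time.time() - ts) if ts else None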
I wrote a small pyserial interface to read the data from the COM port after issuing a command. For example, my system has a lot of network interfaces, so I need to validate whether all the interfaces are up using the ifconfig command. But when I send this command, the output is getting truncated at the last few lines. The approximate size of the output would be 6500-7000 bytes, but I am receiving only around 6000-6150 bytes every time. Please find my code below:
import serial
import time
com_serial = serial.Serial("COM6", 115200, timeout = 10)
com_serial.reset_input_buffer()
com_serial.write(b"ifconfig\n")
data_all = b" "
time.sleep(5)
while True:
    bytetoread = com_serial.inWaiting()
    time.sleep(2)
    print ("Bytetoread: ", bytetoread)
    data = com_serial.read(bytetoread)
    data_all += data
    if bytetoread < 1:
        break
print ("Data:", data_all)
com_serial.close()
Output:
Bytetoread: 3967
Bytetoread: 179
Bytetoread: 2049
Bytetoread: 0
Data: *********with missing data at the end.
I am not sure why the end of the output is missing.
I have tried another approach:
import serial
import time
com_serial = serial.Serial("COM6", 115200, timeout = 10)
com_serial.reset_input_buffer()
com_serial.write(b"ifconfig\n")
time.sleep(5)
data_all = b" "
data_all = com_serial.read(100000000)
print (data_all)
com_serial.close()
Here, too, the last few lines are truncated.
The root cause seems to be an inadequate Tx/Rx serial buffer size. Increasing the buffer size using .set_buffer_size() resolved the issue.
import serial
import time
com_serial = serial.Serial("COM6", 115200, timeout = 10)
com_serial.set_buffer_size(rx_size = 12800, tx_size = 12800)
com_serial.reset_input_buffer()
com_serial.write(b"ifconfig\n")
data_all = b" "
data_all = com_serial.read(100000000)
print (data_all)
com_serial.close()
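As a side note, set_buffer_size is, as far as I know, only implemented by pyserial's Windows backend. An alternative sketch (my own, not from the answer above) that avoids depending on the driver's buffer size is to start draining the port immediately after writing the command, instead of sleeping first, and stop once the port goes quiet:

import serial

# Sketch only: same port and baud rate as above; `timeout` controls how long
# "quiet" has to last before we decide the output is complete.
com_serial = serial.Serial("COM6", 115200, timeout=2)
com_serial.reset_input_buffer()
com_serial.write(b"ifconfig\n")

data_all = b""
while True:
    chunk = com_serial.read(4096)   # returns whatever arrived within `timeout`
    if not chunk:                   # empty read: port went quiet, assume done
        break
    data_all += chunk

print(data_all)
com_serial.close()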
I'm doing a telemetry application using Azure IoT Hub, Azure IoT SDK in Python and a raspberry pi with temperature and humidity sensors.
Humidity + Temperature sensors => Rasperry Pi => Azure IoT Hub
In my first implementation, following the Azure examples, I used one loop that collects data from the temperature sensor and the humidity sensor and sends both to Azure IoT Hub at the same time, every 60 seconds.
>>> 1 Loop every 60s = Collect data & send data of temperature and humidity
Now I would like to send them with different frequencies, I mean :
One loop will collect the data of the temperature sensor and send it to Azure IoT Hub every 60 seconds,
whereas a second loop will collect the data of the humidity sensor and send it to Azure IoT Hub every 600 seconds.
>>> 1 Loop every 60s= Collect data & send data of temperature
>>> 2 Loop every 600s= Collect data & send data of humidity
I think the tool I need is multi-threading, but I don't understand which library or structure I have to implement in my case.
Here is the code provided by Azure, with one loop that handles temperature and humidity at the same time, reading the data and sending it to Azure every 60 seconds.
import random
import time
import sys

# Using the Python Device SDK for IoT Hub:
from iothub_client import IoTHubClient, IoTHubClientError, \
    IoTHubTransportProvider, IoTHubClientResult
from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, \
    IoTHubError, DeviceMethodReturnValue

# The device connection string to authenticate the device with your IoT hub.
CONNECTION_STRING = "{Your IoT hub device connection string}"

# Using the MQTT protocol.
PROTOCOL = IoTHubTransportProvider.MQTT
MESSAGE_TIMEOUT = 10000

# Define the JSON message to send to IoT Hub.
TEMPERATURE = 20.0
HUMIDITY = 60
MSG_TXT = "{\"temperature\": %.2f,\"humidity\": %.2f}"

def send_confirmation_callback(message, result, user_context):
    print ( "IoT Hub responded to message with status: %s" % (result) )

def iothub_client_init():
    # Create an IoT Hub client
    client = IoTHubClient(CONNECTION_STRING, PROTOCOL)
    return client

def iothub_client_telemetry_sample_run():
    try:
        client = iothub_client_init()
        print ( "IoT Hub device sending periodic messages, press Ctrl-C to exit" )
        #******************LOOP*******************************
        while True:
            # Build the message with simulated telemetry values.
            temperature = TEMPERATURE + (random.random() * 15)
            humidity = HUMIDITY + (random.random() * 20)
            msg_txt_formatted = MSG_TXT % (temperature, humidity)
            message = IoTHubMessage(msg_txt_formatted)

            # Send the message.
            print( "Sending message: %s" % message.get_string() )
            client.send_event_async(message, send_confirmation_callback, None)
            time.sleep(60)

    except IoTHubError as iothub_error:
        print ( "Unexpected error %s from IoTHub" % iothub_error )
        return
    except KeyboardInterrupt:
        print ( "IoTHubClient sample stopped" )

if __name__ == '__main__':
    print ( "IoT Hub Quickstart #1 - Simulated device" )
    print ( "Press Ctrl-C to exit" )
    iothub_client_telemetry_sample_run()
I would like to use the same structure of functions, including two loops that handle temperature and humidity, one every 60 s and one every 600 s.
while True:
    # Build the message with simulated telemetry values.
    temperature = TEMPERATURE + (random.random() * 15)
    msg_txt_formatted1 = MSG_TXT1 % (temperature)
    message1 = IoTHubMessage(msg_txt_formatted1)

    # Send the message.
    print( "Sending message: %s" % message1.get_string() )
    client.send_event_async(message1, send_confirmation_callback, None)
    time.sleep(60)

while True:
    # Build the message with simulated telemetry values.
    humidity = HUMIDITY + (random.random() * 20)
    msg_txt_formatted2 = MSG_TXT2 % (humidity)
    message2 = IoTHubMessage(msg_txt_formatted2)

    # Send the message.
    print( "Sending message: %s" % message2.get_string() )
    client.send_event_async(message2, send_confirmation_callback, None)
    time.sleep(600)
How can I do that? How to call those loops with multi-threading or another method?
It may be simpler to do something like
while True:
    loop_b()
    for _ in range(10):
        loop_a()
        time.sleep(60)
or even
while True:
    time.sleep(1)
    now = int(time.time())  # truncate to whole seconds so the modulo check can match
    if now % 60 == 0:
        loop_a()
    if now % 600 == 0:
        loop_b()
But if you really want to use threads, then:
import threading

class LoopAThread(threading.Thread):
    def run(self):
        loop_a()

class LoopBThread(threading.Thread):
    def run(self):
        loop_b()

...

thread_a = LoopAThread()
thread_b = LoopBThread()

thread_a.start()
thread_b.start()

thread_a.join()
thread_b.join()
Here are two competing approaches to consider.
1. Don't bother with threads at all. Just have one loop that sleeps for 60 seconds, like you have now. Keep track of the last time you sent humidity data. If 600 seconds have passed, send it; otherwise, skip it and go back to sleep for 60 seconds. Something like this:
from datetime import datetime, timedelta

def iothub_client_telemetry_sample_run():
    last_humidity_run = None
    humidity_period = timedelta(seconds=600)
    client = iothub_client_init()

    while True:
        now = datetime.now()
        send_temperature_data(client)
        if not last_humidity_run or now - last_humidity_run >= humidity_period:
            send_humidity_data(client)
            last_humidity_run = now
        time.sleep(60)
2. Rename iothub_client_telemetry_sample_run to temperature_thread_func or something like it. Create a separate function that looks just like it for humidity. Spawn two threads from the main function of your program and set them to daemon mode so they shut down when the user exits:
from threading import Thread

def temperature_thread_func():
    client = iothub_client_init()
    while True:
        send_temperature_data(client)
        time.sleep(60)

def humidity_thread_func():
    client = iothub_client_init()
    while True:
        send_humidity_data(client)
        time.sleep(600)

if __name__ == '__main__':
    temp_thread = Thread(target=temperature_thread_func)
    temp_thread.daemon = True
    temp_thread.start()

    humidity_thread = Thread(target=humidity_thread_func)
    humidity_thread.daemon = True
    humidity_thread.start()

    input('Polling for data. Press a key to exit')
Notes:
If you decide to use threads, consider using an event to terminate them cleanly.
time.sleep is not a precise way to keep time. You might need a different timing mechanism if the samples need to be taken at precise moments.
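For the first note, here is a minimal sketch of event-based shutdown (my own addition; stop_event is a name I introduce, and send_temperature_data / iothub_client_init are the same placeholder helpers as in the code above):

from threading import Thread, Event

stop_event = Event()

def temperature_thread_func(stop_event):
    client = iothub_client_init()
    while not stop_event.is_set():
        send_temperature_data(client)
        stop_event.wait(60)   # interruptible sleep: returns early once the event is set

if __name__ == '__main__':
    temp_thread = Thread(target=temperature_thread_func, args=(stop_event,))
    temp_thread.start()
    try:
        input('Polling for data. Press Enter to exit')
    finally:
        stop_event.set()      # ask the thread to stop
        temp_thread.join()    # wait for it to finish cleanly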
I'm looking to control a script via Zigbee/XBee using X-CTU. I've created a script named zb_control.py, and from it I'm trying to start and stop another script, adxl345test.py, which collects data from an accelerometer attached to my Raspberry Pi.
The idea behind zb_control.py is that I run it, and then if I type "run" in X-CTU it starts adxl345test.py and collects data.
I'm trying to create a script within a script that can also be stopped again, while keeping zb_control.py running and ready to receive new input from X-CTU.
As you can tell I've tried different things:
import serial, time, sys, os, subprocess
from subprocess import check_call
from subprocess import call

while True:
    ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=2)
    inc = ser.readline().strip()

    if inc == 'run':
        print("---------------")
        print("Collecting data")
        print("---------------")
        p = subprocess.Popen("/home/pi/adxl345test.py", stdout=subprocess.PIPE, shell=True)
    elif inc == 'stop':
        # check_call(["pkill", "-9", "-f", adxl345test.py])
        # serial.write('\x03')
        # os.system("pkill -f adxl345test.py")
        # call(["killall", "adxl345test.py"])
        p.kill()
        print("-----------------------")
        print("Script has been stopped")
        print("-----------------------")
I got it to run and it's now collecting data properly. However, the problem now is stopping adxl345test.py again. As you can see from the script above, I'm using p.kill(), but the script doesn't stop collecting data. When I type "stop" in X-CTU, my zb_control.py does execute the print commands, but the p.kill() seems to have no effect. Any suggestions?
I've tried using p.terminate(), alone and together with p.kill(), as well as each command by itself, but it doesn't stop the adxl345test.py script. I can tell that the .csv file is still growing, so the script must still be collecting data.
Here is the adxl345test.py script for those interested:
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Example on how to read the ADXL345 accelerometer.
# Kim H. Rasmussen, 2014
import sys, math, os, spidev, datetime, ftplib

# Setup SPI
spi = spidev.SpiDev()
#spi.mode = 3 <-- Important: Do not do this! Or SPI won't work as intended, or even at all.
spi.open(0,0)
spi.mode = 3

# Read the Device ID (should be xe5)
id = spi.xfer2([128,0])
print 'Device ID (Should be 0xe5):\n'+str(hex(id[1])) + '\n'

# Read the offsets
xoffset = spi.xfer2([30 | 128,0])
yoffset = spi.xfer2([31 | 128,0])
zoffset = spi.xfer2([32 | 128,0])
accres = 2
accrate = 13

print 'Offsets: '
print xoffset[1]
print yoffset[1]
# print str(zoffset[1]) + "\n\nRead the ADXL345 every half second:"

# Initialize the ADXL345
def initadxl345():
    # Enter power saving state
    spi.xfer2([45, 0])
    # Set data rate to 100 Hz. 15=3200, 14=1600, 13=800, 12=400, 11=200, 10=100 etc.
    spi.xfer2([44, accrate])
    # Enable full range (10 bits resolution) and +/- 16g 4 LSB
    spi.xfer2([49, accres])
    # Enable measurement
    spi.xfer2([45, 8])

# Read the ADXL x-y-z axia
def readadxl345():
    rx = spi.xfer2([242,0,0,0,0,0,0])
    #
    out = [rx[1] | (rx[2] << 8), rx[3] | (rx[4] << 8), rx[5] | (rx[6] << 8)]
    # Format x-axis
    if (out[0] & (1<<16 - 1)):
        out[0] = out[0] - (1<<16)
    # out[0] = out[0] * 0.004 * 9.82
    # Format y-axis
    if (out[1] & (1<<16 - 1)):
        out[1] = out[1] - (1<<16)
    # out[1] = out[1] * 0.004 * 9.82
    # Format z-axis
    if (out[2] & (1<<16 - 1)):
        out[2] = out[2] - (1<<16)
    # out[2] = out[2] * 0.004 * 9.82
    return out

# Initialize the ADXL345 accelerometer
initadxl345()

# Read the ADXL345 every half second
timetosend = 60

while(1):
    with open('/proc/uptime','r') as f: # get uptime
        uptime_start = float(f.readline().split()[0])
    uptime_last = uptime_start
    active_file_first = "S3-" + str(pow(2,accrate)*25/256) + "hz10bit" + str(accres) + 'g' + str(datetime.datetime.utcnow().strftime('%y%m%d%H%M')) $
    active_file = active_file_first.replace(":", ".")
    wStream = open('/var/log/sensor/' + active_file,'wb')
    finalcount = 0
    print "Creating " + active_file

    while uptime_last < uptime_start + timetosend:
        finalcount += 1
        time1 = str(datetime.datetime.now().strftime('%S.%f'))
        time2 = str(datetime.datetime.now().strftime('%M'))
        time3 = str(datetime.datetime.now().strftime('%H'))
        time4 = str(datetime.datetime.now().strftime('%d'))
        time5 = str(datetime.datetime.now().strftime('%m'))
        time6 = str(datetime.datetime.now().strftime('%Y'))
        axia = readadxl345()
        wStream.write(str(round(float(axia[0])/1024,3))+','+str(round(float(axia[1])/1024,3))+','+str(round(float(axia[2])/1024,3))+','+time1+','+ti$
        # Print the reading
        # print axia[0]
        # print axia[1]
        # print str(axia[2]) + '\n'
        # elapsed = time.clock()
        # current = 0
        # while(current < timeout):
        #     current = time.clock() - elapsed
        with open('/proc/uptime', 'r') as f:
            uptime_last = float(f.readline().split()[0])

    wStream.close()

def doftp(the_active_file):
    session = ftplib.FTP('192.0.3.6','sensor3','L!ghtSp33d')
    session.cwd("//datalogger//")
    file = open('/var/log/sensor/' + active_file, 'rb')   # file to send
    session.storbinary('STOR' + active_file, file)        # send the file
    file.close()
    session.quit
My suggestions:
If you're doing something at a specified interval, you're probably better off using threading.Timer rather than checking the time yourself.
As I said in the comment, you can check for an exit condition instead of brutally killing the process. This also allows you to properly clean up what you need.
Those time1...time6 really don't look nice, how about a list? Also, time2, time3, time4, time5, time6 are not used.
You don't actually need strftime to get hour, day, month, year, etc. They're already there as attributes.
You can do something like:
cur_time = datetime.datetime.now()
cur_hour = cur_time.hour
cur_minute = cur_time.minute
...And so on, which is a bit better. In this specific case it won't matter, but if you start measuring milliseconds, the time will be slightly different after a few lines of code, so you should store it and use it from the variable.
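For instance, a sketch of what that suggestion could look like (my own example, not code from the original post), replacing time1 through time6 with a single captured datetime and a list:

import datetime

now = datetime.datetime.now()                         # capture once, reuse everywhere
timestamp = [now.strftime('%S.%f'), now.minute,       # seconds.microseconds first,
             now.hour, now.day, now.month, now.year]  # then plain integer attributes
row = ','.join(str(t) for t in timestamp)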
As for the rest, if you want an example, here I check that a file exists to determine whether to stop or not. It's very crude but it should give you a starting point:
from threading import *
from os.path import exists

def hello():
    print('TEST')   # Instead of this, do what you need
    if exists('stop_file.txt'):
        return
    t = Timer(0.5, hello)
    t.start()

hello()
Then, in the other script, you create the stop file when you want this one to stop (and don't forget to remove it before starting again).
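A sketch of how zb_control.py might drive that stop file (the path and the helper names are illustrative, not from the answer):

import os

STOP_FILE = 'stop_file.txt'   # must match the path checked in the timer loop above

def start_collection():
    # Remove a stale stop file before (re)starting the data collection.
    if os.path.exists(STOP_FILE):
        os.remove(STOP_FILE)
    # ...launch or signal the collection script here...

def stop_collection():
    # Creating the file tells the timer loop above to stop rescheduling itself.
    with open(STOP_FILE, 'w'):
        pass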
pub.py
import redis
import datetime
import time
import json
import sys
import threading
import gevent
from gevent import monkey
monkey.patch_all()

def main(chan):
    redis_host = '10.235.13.29'
    r = redis.client.StrictRedis(host=redis_host, port=6379)
    while True:
        def getpkg():
            package = {'time': time.time(),
                       'signature': 'content'
                       }
            return package

        # test 2: complex data
        now = json.dumps(getpkg())

        # send it
        r.publish(chan, now)
        print 'Sending {0}'.format(now)
        print 'data type is %s' % type(now)
        time.sleep(1)

def zerg_rush(n):
    for x in range(n):
        t = threading.Thread(target=main, args=(x,))
        t.setDaemon(True)
        t.start()

if __name__ == '__main__':
    num_of_chan = 10
    zerg_rush(num_of_chan)
    cnt = 0
    stop_cnt = 21
    while True:
        print 'Waiting'
        cnt += 1
        if cnt == stop_cnt:
            sys.exit(0)
        time.sleep(30)
sub.py
import redis
import threading
import time
import json
import gevent
from gevent import monkey
monkey.patch_all()

def callback(ind):
    redis_host = '10.235.13.29'
    r = redis.client.StrictRedis(host=redis_host, port=6379)
    sub = r.pubsub()
    sub.subscribe(str(ind))
    start = False
    avg = 0
    tot = 0
    sum = 0
    while True:
        for m in sub.listen():
            if not start:
                start = True
                continue
            got_time = time.time()
            decoded = json.loads(m['data'])
            sent_time = float(decoded['time'])
            dur = got_time - sent_time
            tot += 1
            sum += dur
            avg = sum / tot

            print decoded  # 'Recieved: {0}'.format(m['data'])
            file_name = 'logs/sub_%s' % ind
            f = open(file_name, 'a')
            f.write('processing no. %s' % tot)
            f.write('it took %s' % dur)
            f.write('current avg: %s\n' % avg)
            f.close()

def zerg_rush(n):
    for x in range(n):
        t = threading.Thread(target=callback, args=(x,))
        t.setDaemon(True)
        t.start()

def main():
    num_of_chan = 10
    zerg_rush(num_of_chan)
    while True:
        print 'Waiting'
        time.sleep(30)

if __name__ == '__main__':
    main()
I am testing Redis pub/sub to replace the use of rsh to communicate with remote boxes.
One of the things I have tested is how the number of channels affects the latency of publish and pubsub.listen().
Test: one publisher and one subscriber per channel, with the publisher publishing every second. I incremented the number of channels and observed the latency (the duration from the moment the publisher publishes a message to the moment the subscriber receives it via listen()).
num of chan    avg latency in seconds
10             0.004453
50             0.005246
100            0.0155
200            0.0221
300            0.0621
Note: tested on a RHEL 6.4 VM with 2 CPUs, 4 GB RAM, and 1 NIC.
What can I do to maintain low latency with a large number of channels?
Redis is single-threaded, so adding more CPUs won't help. Maybe more RAM? If so, how much more?
Is there anything I can do code-wise, or is the bottleneck in Redis itself?
Maybe the limitation comes from the way my test code is written, with threading?
EDIT:
Redis Cluster vs ZeroMQ in Pub/Sub, for horizontally scaled distributed systems
Accepted answer says "You want to minimize latency, I guess. The number of channels is irrelevant. The key factors are the number of publishers and number of subscribers, message size, number of messages per second per publisher, number of messages received by each subscriber, roughly. ZeroMQ can do several million small messages per second from one node to another; your bottleneck will be the network long before it's the software. Most high-volume pubsub architectures therefore use something like PGM multicast, which ZeroMQ supports."
From my testing, I don't know if this is true (the claim that the number of channels is irrelevant).
For example, I did a test:
1) One channel, 100 publishers publishing to it, with 1 subscriber listening; each publisher publishes once per second. Latency was 0.00965 seconds.
2) The same test, except with 1000 publishers. Latency was 0.00808 seconds.
Now compare that with my channel testing:
300 channels with 1 publisher and 1 subscriber each resulted in 0.0621 seconds, and that is only 600 connections, fewer than in the tests above, yet the latency is significantly higher.