I'm building a telemetry application using Azure IoT Hub, the Azure IoT SDK for Python, and a Raspberry Pi with temperature and humidity sensors.
Humidity + Temperature sensors => Raspberry Pi => Azure IoT Hub
In my first implementation, based on the Azure examples, I used one loop that collects data from the temperature sensor and the humidity sensor and sends both to Azure IoT Hub at the same time, every 60 seconds.
>>> 1 Loop every 60s = Collect data & send data of temperature and humidity
Now I would like to send them at different frequencies; that is:
One loop will collect data from the temperature sensor and send it to Azure IoT Hub every 60 seconds,
whereas a second loop will collect data from the humidity sensor and send it to Azure IoT Hub every 600 seconds.
>>> 1 Loop every 60s= Collect data & send data of temperature
>>> 2 Loop every 600s= Collect data & send data of humidity
I think the tool I need is multi-threading, but I don't understand which library or structure to use in my case.
Here is the code provided by Azure, with one loop that handles temperature and humidity together, reading the data and sending it to Azure every 60 seconds.
import random
import time
import sys

# Using the Python Device SDK for IoT Hub:
from iothub_client import IoTHubClient, IoTHubClientError, IoTHubTransportProvider, IoTHubClientResult
from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, IoTHubError, DeviceMethodReturnValue

# The device connection string to authenticate the device with your IoT hub.
CONNECTION_STRING = "{Your IoT hub device connection string}"

# Using the MQTT protocol.
PROTOCOL = IoTHubTransportProvider.MQTT
MESSAGE_TIMEOUT = 10000

# Define the JSON message to send to IoT Hub.
TEMPERATURE = 20.0
HUMIDITY = 60
MSG_TXT = "{\"temperature\": %.2f,\"humidity\": %.2f}"

def send_confirmation_callback(message, result, user_context):
    print ( "IoT Hub responded to message with status: %s" % (result) )

def iothub_client_init():
    # Create an IoT Hub client
    client = IoTHubClient(CONNECTION_STRING, PROTOCOL)
    return client

def iothub_client_telemetry_sample_run():
    try:
        client = iothub_client_init()
        print ( "IoT Hub device sending periodic messages, press Ctrl-C to exit" )
        #******************LOOP*******************************
        while True:
            # Build the message with simulated telemetry values.
            temperature = TEMPERATURE + (random.random() * 15)
            humidity = HUMIDITY + (random.random() * 20)
            msg_txt_formatted = MSG_TXT % (temperature, humidity)
            message = IoTHubMessage(msg_txt_formatted)
            # Send the message.
            print( "Sending message: %s" % message.get_string() )
            client.send_event_async(message, send_confirmation_callback, None)
            time.sleep(60)
    except IoTHubError as iothub_error:
        print ( "Unexpected error %s from IoTHub" % iothub_error )
        return
    except KeyboardInterrupt:
        print ( "IoTHubClient sample stopped" )

if __name__ == '__main__':
    print ( "IoT Hub Quickstart #1 - Simulated device" )
    print ( "Press Ctrl-C to exit" )
    iothub_client_telemetry_sample_run()
I would like to keep the same structure of functions, but with two loops that handle temperature and humidity separately: one every 60 s and one every 600 s.
while True:
    # Build the message with simulated telemetry values.
    temperature = TEMPERATURE + (random.random() * 15)
    msg_txt_formatted1 = MSG_TXT1 % (temperature)
    message1 = IoTHubMessage(msg_txt_formatted1)
    # Send the message.
    print( "Sending message: %s" % message1.get_string() )
    client.send_event_async(message1, send_confirmation_callback, None)
    time.sleep(60)

while True:
    # Build the message with simulated telemetry values.
    humidity = HUMIDITY + (random.random() * 20)
    msg_txt_formatted2 = MSG_TXT2 % (humidity)
    message2 = IoTHubMessage(msg_txt_formatted2)
    # Send the message.
    print( "Sending message: %s" % message2.get_string() )
    client.send_event_async(message2, send_confirmation_callback, None)
    time.sleep(600)
How can I do that? How do I run those two loops concurrently, with multi-threading or some other method?
It may be simpler to do something like
while True:
    loop_b()
    for _ in range(10):
        loop_a()
        time.sleep(60)
or even
while True:
    time.sleep(1)
    now = int(time.time())  # whole seconds; a float would almost never hit the modulo exactly
    if now % 60 == 0:
        loop_a()
    if now % 600 == 0:
        loop_b()
But if you really want to use threads, then:
import threading

class LoopAThread(threading.Thread):
    def run(self):
        loop_a()

class LoopBThread(threading.Thread):
    def run(self):
        loop_b()

...

thread_a = LoopAThread()
thread_b = LoopBThread()
thread_a.start()
thread_b.start()
thread_a.join()
thread_b.join()
Here are two competing approaches to consider:
1. Don't bother with threads at all. Just have one loop that sleeps for 60 seconds, like you have now. Keep track of the last time you sent humidity data; if 600 seconds have passed, send it, otherwise skip it and go back to sleep for 60 seconds. Something like this:
import time
from datetime import datetime, timedelta

def iothub_client_telemetry_sample_run():
    last_humidity_run = None
    humidity_period = timedelta(seconds=600)
    client = iothub_client_init()
    while True:
        now = datetime.now()
        send_temperature_data(client)
        if not last_humidity_run or now - last_humidity_run >= humidity_period:
            send_humidity_data(client)
            last_humidity_run = now
        time.sleep(60)
2. Rename iothub_client_telemetry_sample_run to temperature_thread_func or something like it. Create a separate function that looks just like it for humidity. Spawn two threads from the main function of your program, and set them to daemon mode so they shut down when the user exits:
import time
from threading import Thread

def temperature_thread_func():
    client = iothub_client_init()
    while True:
        send_temperature_data(client)
        time.sleep(60)

def humidity_thread_func():
    client = iothub_client_init()
    while True:
        send_humidity_data(client)
        time.sleep(600)

if __name__ == '__main__':
    temp_thread = Thread(target=temperature_thread_func)
    temp_thread.daemon = True
    temp_thread.start()
    humidity_thread = Thread(target=humidity_thread_func)
    humidity_thread.daemon = True
    humidity_thread.start()
    input('Polling for data. Press a key to exit')
Notes:
- If you decide to use threads, consider using an event to terminate them cleanly (see the sketch after these notes).
- time.sleep is not a precise way to keep time. You might need a different timing mechanism if the samples need to be taken at precise moments.
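A minimal sketch of that event-based shutdown, reusing iothub_client_init from the question and the send_temperature_data stub from above; Event.wait also doubles as an interruptible sleep:

import threading

stop_event = threading.Event()

def temperature_thread_func():
    client = iothub_client_init()
    while not stop_event.is_set():
        send_temperature_data(client)
        # Unlike time.sleep(60), this returns early the moment
        # stop_event is set, so shutdown is immediate.
        stop_event.wait(timeout=60)

if __name__ == '__main__':
    temp_thread = threading.Thread(target=temperature_thread_func)
    temp_thread.start()
    try:
        input('Polling for data. Press Enter to exit\n')
    finally:
        stop_event.set()    # signal the worker to stop
        temp_thread.join()  # wait for it to finish cleanly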
Related
To start off, I'm using an anonymous connection to join the channels, which means there are no JOIN limits. I have tried different variations of sleeping. I started off just joining from a text file, but that had a lot of problems because it connected all the sockets before joining, so I couldn't see what was causing the failures. This is the best version I have created so far; it's pretty scuffed, but I am just trying to understand what the issue is. If anyone has any insight on doing a big task like this, I would appreciate it a lot!
(The OAuth and Helix headers are from a random alt account I made for testing. The example tries to join 10k channels, but it stops around 2k-3k max.)
import requests
import socket
import time
import threading
import random

connections_made = 0
sockets = []

def connect():
    global sockets
    global connections_made
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("CONNECTING TO IRC")
    sock.connect(('irc.chat.twitch.tv', 6667))
    sock.send(bytes('PASS oauth:' + '\r\n', 'utf-8'))
    sock.send(bytes('NICK justinfan' + str(random.randint(10000, 99999)) + '\r\n', 'utf-8'))
    sockets.append(sock)
    connections_made += 1
    print(f"socket: {len(sockets)}")

for i in range(2):
    connect()  # initial for .recv reading

helix_headers = {'client-id': 'q6batx0epp608isickayubi39itsckt', 'authorization': 'Bearer rk0ixn6169ar7y5xey9msvk1h8zrs8'}

def request(channels_to_join, cursor):
    request_amount = int(channels_to_join / 100)  # 100 requests = 10000 channels
    user_list = []
    sock_numb = 0
    total_chans_joined = 0
    count_every_request = 0
    for i in range(request_amount):
        time.sleep(1)
        # 3k channels with time.sleep(1) 1.5k channels with time.sleep(2) 30 seconds then connection reset error (when bulk joining 100 channels and waiting for the next request)
        # waiting 30 seconds doesnt fix this either stop at about 500 channels so lasted 2.5minutes?
        # waiting 60 seconds at 500 channels breaks
        if count_every_request == 1:  # for every 100 channels
            connect()
            count_every_request = 0
        r = requests.get("https://api.twitch.tv/helix/streams?first=100&after=" + cursor, headers=helix_headers)
        cursor = r.json()['pagination']['cursor']
        count_every_request += 1
        for everything in r.json()['data']:
            user_list.append(everything['user_login'])
            channel = everything['user_login']
            # join channel
            if sock_numb == connections_made:  # makes it so when joining sockets it joins up to the amount of sockets that there are and then loops back
                sock_numb = 0
            print(f"JOINING #{channel} with socket: {sock_numb} total joined: {total_chans_joined}")
            sockets[sock_numb].send(bytes('JOIN #' + channel + '\r\n', 'utf-8'))
            total_chans_joined += 1
            sock_numb += 1

def loop():
    print("Looping")
    try:
        while True:
            time.sleep(0.1)
            for i in range(connections_made):
                data = sockets[i].recv(4096).decode("utf-8", errors='replace').strip()
                if data == "":
                    continue
                print(data)
                if "PING :tmi.twitch.tv" in data:
                    print("PONG")
                    sockets[i].send(bytes('PONG :tmi.twitch.tv' + '\r\n', 'utf-8'))
    except Exception as e:
        print(str(e) + " error in loop ")
        pass

thread_loop = threading.Thread(target=loop)
thread_loop.start()

request(channels_to_join=10000, cursor="eyJiIjp7IkN1cnNvciI6ImV5SnpJam80T0RrMU1TNDVNRFkwTWpnd09URTVNU3dpWkNJNlptRnNjMlVzSW5RaU9uUnlkV1Y5In0sImEiOnsiQ3Vyc29yIjoiZXlKeklqbzFNakF6TGpJM056UTFPVEUzT1RReE1Td2laQ0k2Wm1Gc2MyVXNJblFpT25SeWRXVjkifX0")
The likely problem is that your bot can't keep up with the message send buffer.
So you connect to many channels but are not processing the incoming chat messages in a timely fashion; the "queue" of messages from Twitch to you exceeds Twitch's buffer, and it disconnects you.
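One way to keep up is to service all connections from a single non-blocking loop instead of calling recv() on each socket in turn; a minimal sketch with the standard selectors module, assuming the sockets list built by connect() in the question:

import selectors

sel = selectors.DefaultSelector()

# Register every IRC connection for read events.
for sock in sockets:
    sock.setblocking(False)
    sel.register(sock, selectors.EVENT_READ)

while True:
    # select() blocks until at least one connection has data, so a
    # quiet socket never stalls the others.
    for key, _ in sel.select(timeout=1):
        s = key.fileobj
        data = s.recv(4096).decode('utf-8', errors='replace')
        if 'PING :tmi.twitch.tv' in data:
            s.send(b'PONG :tmi.twitch.tv\r\n')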
Alternatively, as per the IRC rate limit guide, you may be sending too many commands and getting disconnected from the server.
Large chat bots will often split groups of channels over multiple connections to solve this issue.
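As a sketch of that idea, here is one way to spread the channel list round-robin across connections and pace the JOINs per connection. The 20-joins-per-10-seconds budget is an assumption taken from Twitch's rate limit guide; adjust it to whatever the guide currently says:

import time

JOINS_PER_WINDOW = 20  # assumed per-connection JOIN budget
WINDOW_SECONDS = 10

def join_channels(sockets, channels):
    # Deal the channels round-robin across the available connections.
    per_sock = [channels[i::len(sockets)] for i in range(len(sockets))]
    offset = 0
    while any(offset < len(chans) for chans in per_sock):
        for sock, chans in zip(sockets, per_sock):
            for channel in chans[offset:offset + JOINS_PER_WINDOW]:
                sock.send(('JOIN #' + channel + '\r\n').encode('utf-8'))
        offset += JOINS_PER_WINDOW
        time.sleep(WINDOW_SECONDS)  # stay inside each connection's budget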
I'm writing a program to toggle the lights at my house based on my iPhone's GPS coordinates. Below is what I have so far. However, I feel like there must be a better way to do this. Is there a way to get GPS data without pinging my phone every five minutes?
So far I've tried the following with no joy:
Using Shortcuts and Scriptable I tried to write some JavaScript that would trigger when I got close to home. However, I could not figure out how to use await require('wemo-client') using scriptablify. I kept getting an error, "ReferenceError: Can't find variable: require".
IFTTT does not have a variable timed trigger, so the lights won't turn off after 15 minutes. Also, I plan on adding a motion-sensor trigger, which IFTTT doesn't support.
Pythonista is $10. Yes, I am that cheap.
Apple HomeKit does not support the model I'm using, Wemo Smart Light Switch F7C030.
The code below works, but I hate that I have to ping my phone every five minutes. I'd rather save battery life by firing this code once or twice a day, as needed.
Any suggestions would be greatly appreciated.
Code:
import sys
import time
import datetime
import os
from pyicloud import PyiCloudService
import pywemo

APPLE_ID = os.getenv('APPLE_ID')  # Apple ID username
APPLE_ID_PASSWORD = os.getenv('APPLE_ID_PASSWORD')  # Apple ID password
API = PyiCloudService(APPLE_ID, APPLE_ID_PASSWORD)
IPHONE = API.devices[1]
FIVE = 300  # 5 * 60 seconds
FIFTEEN = 900  # 15 * 60 seconds
ONEMILE = 0.01449275362318840579710144927536  # one mile is 1/69 degrees lat or long
HOMELAT =  # my home's latitude
HOMELONG =  # my home's longitude
WEMOS = pywemo.discover_devices()
LEN_WEMOS = range(len(WEMOS))

# Two-factor authentication to retrieve iPhone data
if API.requires_2fa:
    import click
    print("Two-step authentication required. Your trusted devices are:")
    DEVICES = API.devices
    for i, device in enumerate(DEVICES):
        print("  %s: %s" % (i, device.get('deviceName', "SMS to %s" % device.get('phoneNumber'))))
    DEF_DEVICE = click.prompt('Which device would you like to use?', default=0)
    DEVICE = DEVICES[DEF_DEVICE]
    if not API.send_verification_code(DEVICE):
        print("Failed to send verification code")
        sys.exit(1)
    CODE = click.prompt('Please enter validation code')
    if not API.validate_verification_code(DEVICE, CODE):
        print("Failed to verify verification code")
        sys.exit(1)

# Turn off the lights when I leave
def leavehome():
    timenow = datetime.datetime.now()
    print("Left home on {}".format(timenow.strftime("%B %d, %Y at %H:%M:%S")))
    for wemo in LEN_WEMOS:
        WEMOS[wemo].off()

# Turn on the lights for 15 minutes when I get home
def arrivehome():
    timenow = datetime.datetime.now()
    print("Arrived home on {}".format(timenow.strftime("%B %d, %Y at %H:%M:%S")))
    # Loop through all Wemo devices
    for wemo in LEN_WEMOS:
        WEMOS[wemo].on()
    time.sleep(FIFTEEN)
    for wemo in LEN_WEMOS:
        WEMOS[wemo].off()

# Automatically turn off the lights after 15 minutes - save electricity
def timeoff():
    time.sleep(FIFTEEN)
    for wemo in LEN_WEMOS:
        WEMOS[wemo].off()

# Ping my phone for GPS data every five minutes
def pingphone(prev):
    while True:
        location = IPHONE.location()  # refresh the GPS fix on each pass
        prev = logic(prev, location["latitude"], location["longitude"])
        time.sleep(FIVE)

# Perform logic to determine if I'm home, out, arriving, or leaving
def logic(prev, lat, long):
    current = (HOMELAT + ONEMILE >= lat >= HOMELAT - ONEMILE and
               HOMELONG + ONEMILE >= long >= HOMELONG - ONEMILE)
    if current and not prev:
        arrivehome()
    elif prev and not current:
        leavehome()
    else:
        timeoff()
    return current

# Run the script
pingphone(False)
I have an Azure Event Hub that has messages in it. I wrote the messages with a Python app and can see the correct message count in the Event Hub GUI, but I cannot seem to read the messages with Python. My code is below. It runs with no errors but gets zero results.
Oddly, after I run this code, the Event Hub GUI shows that all the messages (a couple of thousand) were outgoing, indicating that my program actually fetched them. But the code never displays them.
Any help appreciated!
The result is always...
Msg offset: <azure.eventhub.common.Offset object at 0x102fc4e10>
Msg seq: 0
Msg body: 0
Received 1 messages in 0.11292386054992676 seconds
++++++++++++
# pip install azure-eventhub
import logging
import time
from azure.eventhub import EventHubClient, Receiver, Offset

logger = logging.getLogger("azure")

# URL of the event hub, amqps://<mynamespace>.servicebus.windows.net/myeventhub
ADDRESS = "amqps://chc-eh-ns.servicebus.windows.net/chc-eh"

# Access tokens for event hub namespace, from Azure portal for namespace
USER = "RootManageSharedAccessKey"
KEY = "XXXXXXXXXXXXXXXXXXXXXXXXXX"

# Additional setup to receive events
CONSUMER_GROUP = "$default"  # our view of the event hub, useful when there is more than one consumer at same time
PARTITION = "0"  # which stream within event hub
OFFSET = Offset("-1")  # get all msgs in event hub. msgs are never removed, they just expire per event hub settings
PREFETCH = 100  # not sure exactly what this does ??

# Initialize variables
total = 0
last_sn = -1
last_offset = -1

client = EventHubClient(ADDRESS, debug=False, username=USER, password=KEY)
try:
    receiver = client.add_receiver(CONSUMER_GROUP, PARTITION, prefetch=PREFETCH, offset=OFFSET)
    client.run()
    start_time = time.time()
    for event_data in receiver.receive(timeout=100):
        last_offset = event_data.offset
        last_sn = event_data.sequence_number
        print("Msg offset: " + str(last_offset))
        print("Msg seq: " + str(last_sn))
        print("Msg body: " + event_data.body_as_str())
        total += 1
    end_time = time.time()
    client.stop()
    run_time = end_time - start_time
    print("\nReceived {} messages in {} seconds".format(total, run_time))
except KeyboardInterrupt:
    pass
finally:
    client.stop()
Got it! The code sample I copied was wrong. You have to get batches of messages then iterate over each batch. Here is the correct code...
# pip install azure-eventhub
import time
from azure.eventhub import EventHubClient, Offset

# URL of the event hub, amqps://<mynamespace>.servicebus.windows.net/myeventhub
ADDRESS = "amqps://chc-eh-ns.servicebus.windows.net/chc-eh"

# Access tokens for event hub namespace, from Azure portal for namespace
USER = "RootManageSharedAccessKey"
KEY = "XXXXXXXXXXXX"

# Additional setup to receive events
CONSUMER_GROUP = "$default"  # our view of the event hub, useful when there is more than one consumer at same time
PARTITION = "0"  # which stream within event hub
OFFSET = Offset("-1")  # get all msgs in event hub. msgs are never removed, they just expire per event hub settings
PREFETCH = 100  # batch size ??

# Initialize variables
total = 0
last_sn = -1
last_offset = -1

client = EventHubClient(ADDRESS, debug=False, username=USER, password=KEY)
try:
    receiver = client.add_receiver(CONSUMER_GROUP, PARTITION, prefetch=PREFETCH, offset=OFFSET)
    client.run()
    start_time = time.time()
    batch = receiver.receive(timeout=5000)
    while batch:
        for event_data in batch:
            last_offset = event_data.offset
            last_sn = event_data.sequence_number
            print("Msg offset: " + str(last_offset))
            print("Msg seq: " + str(last_sn))
            print("Msg body: " + event_data.body_as_str())
            total += 1
        batch = receiver.receive(timeout=5000)
    end_time = time.time()
    client.stop()
    run_time = end_time - start_time
    print("\nReceived {} messages in {} seconds".format(total, run_time))
except KeyboardInterrupt:
    pass
finally:
    client.stop()
I need to communicate with an embedded system over RS232. To do this, I want to profile the time it takes the system to respond to each command.
I've tested this code using two methods: datetime.now() and timeit()
Method #1
def resp_time(n, msg):
    """Given number of tries - n and bytearray list"""
    msg = bytearray(msg)
    cnt = 0
    timer = 0
    while cnt < n:
        time.sleep(INTERVAL)
        a = datetime.datetime.now()
        ser.flush()
        ser.write(msg)
        line = []
        for count in ser.read():
            line.append(count)
            if count == '\xFF':
                # print line
                break
        b = datetime.datetime.now()
        c = b - a
        # print c.total_seconds()*1000
        timer = timer + c.total_seconds() * 1000
        cnt = cnt + 1
    return timer / n

ser = serial.Serial(COMPORT, BAUDRATE, serial.EIGHTBITS, serial.PARITY_NONE, serial.STOPBITS_ONE, timeout=16)
if ser.isOpen():
    print "Serial port opened at: Baud:", COMPORT, BAUDRATE
    cmd = read_file()  # returns a list of commands [msg1,msg2....]
    n = 100
    for index in cmd:
        timer = resp_time(n, index)
        print "Time in msecs over %d runs: %f " % (n, timer)
Method #2
def com_loop(msg):
    msg = bytearray(msg)
    time.sleep(INTERVAL)
    ser.flush()
    ser.write(msg)
    line = []
    for count in ser.read():
        line.append(count)
        if count == '\xFF':
            break

if __name__ == '__main__':
    import timeit
    ser = serial.Serial(COMPORT, BAUDRATE, serial.EIGHTBITS, serial.PARITY_NONE, serial.STOPBITS_ONE, timeout=16)
    if ser.isOpen():
        print "Serial port opened at: Baud:", COMPORT, BAUDRATE
        cmd = read_file()  # returns a list of commands [msg1,msg2....]
        n = 100
        for index in cmd:
            t = timeit.timeit("com_loop(index)", "from __main__ import com_loop; index=%s;" % index, number=n)
            print t / 100
With datetime I get about 2 milliseconds per command, while with timeit I get about 200 milliseconds for the same command.
I suspect I'm not calling timeit() properly; can someone point me in the right direction?
I'd assume the smaller figure is much closer to the truth, considering your COM port runs at something like 115200 baud. With a start and a stop bit, each byte takes about 10 bit-times on the wire, so an 8-byte message needs roughly 80/115200 s ≈ 0.7 ms on the serial line alone. Being faster than that will be pretty much impossible.
Python is definitely not the language of choice for timing characterization at these scales. You will need a logic analyzer, or to work very close to the serial controller (which I hope is directly attached to your PC's I/O controller and not some USB device, because that would introduce latencies of the same order of magnitude, at least). If you're talking about microseconds, the limiting factor in measurement is usually the random time it takes for your PC to react to an interrupt, the OS to run the interrupt service routine, the scheduler to resume your userland process, and then Python with its levels and levels of indirection. You're basically measuring the size of single grains of sand by holding a banana next to them.
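For a rough sense of the line-time floor mentioned above, here is a back-of-the-envelope helper; the 8N1 framing (about 10 bit-times per byte) and the 8-byte message size are the same assumptions as above:

def line_time_s(n_bytes, baud=115200, bits_per_byte=10):
    # Minimum time just to clock the bytes over the serial line,
    # ignoring all software, driver, and OS overhead.
    return n_bytes * bits_per_byte / float(baud)

print(line_time_s(8))  # ~0.00069 s, i.e. roughly 0.7 ms per 8-byte message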
pub.py
import redis
import datetime
import time
import json
import sys
import threading
import gevent
from gevent import monkey
monkey.patch_all()

def main(chan):
    redis_host = '10.235.13.29'
    r = redis.client.StrictRedis(host=redis_host, port=6379)
    while True:
        def getpkg():
            package = {'time': time.time(),
                       'signature': 'content'}
            return package
        # test 2: complex data
        now = json.dumps(getpkg())
        # send it
        r.publish(chan, now)
        print 'Sending {0}'.format(now)
        print 'data type is %s' % type(now)
        time.sleep(1)

def zerg_rush(n):
    for x in range(n):
        t = threading.Thread(target=main, args=(x,))
        t.setDaemon(True)
        t.start()

if __name__ == '__main__':
    num_of_chan = 10
    zerg_rush(num_of_chan)
    cnt = 0
    stop_cnt = 21
    while True:
        print 'Waiting'
        cnt += 1
        if cnt == stop_cnt:
            sys.exit(0)
        time.sleep(30)
sub.py
import redis
import threading
import time
import json
import gevent
from gevent import monkey
monkey.patch_all()

def callback(ind):
    redis_host = '10.235.13.29'
    r = redis.client.StrictRedis(host=redis_host, port=6379)
    sub = r.pubsub()
    sub.subscribe(str(ind))
    start = False
    avg = 0
    tot = 0
    sum = 0
    while True:
        for m in sub.listen():
            if not start:
                start = True
                continue
            got_time = time.time()
            decoded = json.loads(m['data'])
            sent_time = float(decoded['time'])
            dur = got_time - sent_time
            tot += 1
            sum += dur
            avg = sum / tot
            print decoded  # 'Received: {0}'.format(m['data'])
            file_name = 'logs/sub_%s' % ind
            f = open(file_name, 'a')
            f.write('processing no. %s' % tot)
            f.write('it took %s' % dur)
            f.write('current avg: %s\n' % avg)
            f.close()

def zerg_rush(n):
    for x in range(n):
        t = threading.Thread(target=callback, args=(x,))
        t.setDaemon(True)
        t.start()

def main():
    num_of_chan = 10
    zerg_rush(num_of_chan)
    while True:
        print 'Waiting'
        time.sleep(30)

if __name__ == '__main__':
    main()
I am testing redis pubsub to replace the use of rsh to communicate with remote boxes.
One of the things I tested was how the number of channels affects the latency of publish and pubsub.listen().
Test: one publisher and one subscriber per channel (the publisher publishes every second). I increased the number of channels and observed the latency (the duration from the moment the publisher publishes a message to the moment the subscriber gets the message via listen):
num of chan    avg latency in seconds
10             0.004453
50             0.005246
100            0.0155
200            0.0221
300            0.0621
Note: tested on a RHEL 6.4 VM with 2 CPUs, 4 GB RAM, and 1 NIC.
What can I do to maintain low latency with a high number of channels?
Redis is single-threaded, so adding more CPUs won't help. Maybe more RAM? If so, how much more?
Is there anything I can do code-wise, or is the bottleneck Redis itself?
Maybe the limitation comes from the way my test code is written, with threading?
EDIT:
Redis Cluster vs ZeroMQ in Pub/Sub, for horizontally scaled distributed systems
Accepted answer says "You want to minimize latency, I guess. The number of channels is irrelevant. The key factors are the number of publishers and number of subscribers, message size, number of messages per second per publisher, number of messages received by each subscriber, roughly. ZeroMQ can do several million small messages per second from one node to another; your bottleneck will be the network long before it's the software. Most high-volume pubsub architectures therefore use something like PGM multicast, which ZeroMQ supports."
From my testing, I don't know if this is true (the claim that the number of channels is irrelevant).
For example, I ran a test:
1) One channel: 100 publishers publishing to one channel with 1 subscriber listening, each publisher publishing once per second. Latency was 0.00965 seconds.
2) The same test, except with 1000 publishers. Latency was 0.00808 seconds.
Now compare my channel testing: 300 channels with 1 pub and 1 sub each resulted in 0.0621 seconds. That is only 600 connections, fewer than in the tests above, yet the latency is significantly worse.