Threading Bluetooth communication on Raspberry Pi (Python 3)

My problem is: how can I add threading to my program, which communicates over BLE with an RPi 3?
My program works, but the response is too slow.
Please help with this. Thanks.
BMS_reading:
import gatt
import sys
import time
import threading

class AnyDevice(gatt.Device):
    def write(self, characteristic):
        self.response = bytearray()
        self.bms_write_characteristic.write_value(
            bytes([0xDD, 0xA5, 0x03, 0x00, 0xFF, 0xFD, 0x77]))

    def services_resolved(self):
        super().services_resolved()
        device_information_service = next(
            s for s in self.services
            if s.uuid == '0000ff00-0000-1000-8000-00805f9b34fb')
        self.bms_read_characteristic = next(
            c for c in device_information_service.characteristics
            if c.uuid == '0000ff01-0000-1000-8000-00805f9b34fb')
        self.bms_write_characteristic = next(
            c for c in device_information_service.characteristics
            if c.uuid == '0000ff02-0000-1000-8000-00805f9b34fb')
        self.bms_read_characteristic.enable_notifications()
        self.write(self.bms_read_characteristic)

    def characteristic_value_updated(self, characteristic, value):
        self.response += value
        if self.response.endswith(b'w'):
            self.response = self.response[4:]
            self.SoC = int.from_bytes(self.response[19:20], byteorder='big')
            self.manager.stop()

# reading loop (I want to add threading and read the "SoC" info)
while True:
    address = "A4:C1:38:A0:59:EB"
    manager = gatt.DeviceManager(adapter_name='hci0')
    device = AnyDevice(mac_address=address, manager=manager)
    device.connect()
    manager.run()
    print("Capacity is: " + str(device.SoC) + "%")

TERMINAL:
<<< Capacity is: 76%
# long delay which I don't want
<<< Capacity is: 76%
I don't know how to do it.
When I wrap the whole while loop in a thread, the communication does not have time to react and prints bad numbers or errors.
Please help.
-------------------- EDITED PROGRAM FOR NOTIFICATION UPDATE --------------------
import gatt
import json
import sys
#from gi.repository import GLib

manager = gatt.DeviceManager(adapter_name='hci0')

class AnyDevice(gatt.Device):
    def connect_succeeded(self):
        super().connect_succeeded()
        print("[%s] Connected" % (self.mac_address))

    def connect_failed(self, error):
        super().connect_failed(error)
        print("[%s] Connection failed: %s" % (self.mac_address, str(error)))

    def disconnect_succeeded(self):
        super().disconnect_succeeded()
        print("[%s] Disconnected" % (self.mac_address))
        self.manager.stop()

    def services_resolved(self):
        super().services_resolved()
        device_information_service = next(
            s for s in self.services
            if s.uuid == '0000ff00-0000-1000-8000-00805f9b34fb')
        self.bms_read_characteristic = next(
            c for c in device_information_service.characteristics
            if c.uuid == '0000ff01-0000-1000-8000-00805f9b34fb')
        self.bms_write_characteristic = next(
            c for c in device_information_service.characteristics
            if c.uuid == '0000ff02-0000-1000-8000-00805f9b34fb')
        print("BMS found")
        self.bms_read_characteristic.enable_notifications()

    def characteristic_enable_notifications_succeeded(self, characteristic):
        super().characteristic_enable_notifications_succeeded(characteristic)
        print("BMS request generic data")
        self.response = bytearray()
        self.bms_write_characteristic.write_value(
            bytes([0xDD, 0xA5, 0x03, 0x00, 0xFF, 0xFD, 0x77]))

    def characteristic_enable_notifications_failed(self, characteristic, error):
        super().characteristic_enable_notifications_failed(characteristic, error)
        print("BMS notification failed:", error)

    def characteristic_value_updated(self, characteristic, value):
        self.response += value
        if self.response.endswith(b'w'):
            self.response = self.response[4:]
            temperature = (int.from_bytes(self.response[23 + 1*2:1*2 + 25], 'big') - 2731) / 10
            print("Temperature is: " + str(temperature) + " C")

    def characteristic_write_value_failed(self, characteristic, error):
        print("BMS write failed:", error)

device = AnyDevice(mac_address="A4:C1:38:A0:59:EB", manager=manager)
device.connect()
manager.run()

Terminal output, even though the value changes and the manager keeps running:
>>> BMS found
>>> BMS request generic data
>>> Temperature is: 19 C
# here the program gets stuck even though the value is changing
Thank you, I edited the program to use notifications and, as you can see, the device supports them.
But I still have a problem: even if the values (temperatures) change and the manager is in manager.run(), the terminal shows me only one value and nothing else, even if I heat the device. When I restart the program, the value changes again, and again only one remains. Is my code written correctly, please?
Thank you so much for your time, sir.

My assumption is that you are using the gatt-python library.
The line manager.run() is starting the event loop so you do not need to have a while loop in your code.
If the temperature characteristic supports notifications, then turning them on would be the most efficient way of reading the values when they change.
If the device does not have notifications, then creating a timed event to read the temperature at the frequency you require would be recommended. The documentation for timeout_add_seconds isn't always the easiest to understand, but the import is:
from gi.repository import GLib
Then just before you run the event loop call:
GLib.timeout_add_seconds(2, my_callback_to_read_temperature)
I expect gi.repository to be installed on the RPi already, but if you need instructions for installing it, they are at: https://pygobject.readthedocs.io/en/latest/getting_started.html#ubuntu-getting-started
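For the BMS in the question, which only notifies after the request frame is written, such a timed event could simply re-send the request so fresh values keep arriving. A minimal sketch against the edited program above (poll_bms is a hypothetical name, and the 2-second period is arbitrary):

from gi.repository import GLib

def poll_bms():
    # skip until services are resolved and the write characteristic exists
    if not hasattr(device, 'bms_write_characteristic'):
        return True
    # reset the buffer and re-send the request frame, mirroring what
    # characteristic_enable_notifications_succeeded does once
    device.response = bytearray()
    device.bms_write_characteristic.write_value(
        bytes([0xDD, 0xA5, 0x03, 0x00, 0xFF, 0xFD, 0x77]))
    return True  # returning True keeps the GLib timer repeating

device = AnyDevice(mac_address="A4:C1:38:A0:59:EB", manager=manager)
device.connect()
GLib.timeout_add_seconds(2, poll_bms)
manager.run()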

Related

Unable to use scapy as a bridge among interfaces

I'm trying to perform a transparent MITM attack with Scapy. I've got an Ubuntu machine with two network interfaces, each connected to a machine. Those machines have addresses in the same subnet and operate correctly when directly connected. The objective is to be totally transparent, using both interfaces with no IP address and in promiscuous mode.
The implementation I'm using is the following:
from scapy.all import sendp, sniff, conf

def pkt_callback(pkt):
    if pkt.sniffed_on == "enp0s3":
        sendp(pkt, iface="enp0s8", verbose=0)
    else:
        sendp(pkt, iface="enp0s3", verbose=0)

def enable_bridge():
    sniff(iface=["enp0s3", "enp0s8"], prn=pkt_callback, store=0)

if __name__ == "__main__":
    conf.sniff_promisc = True
    enable_bridge()
This is not all the code, but it is the main routing part... I can see that packets arrive at both interfaces, but there is no ping going from one machine to the other... Any idea how to make this work?
Thanks in advance.
EDIT 1:
The full implementation is here:
from scapy.all import *
from utils import interfaces, addresses
#from routing import *
from packet_filters import is_mms_packet
from attacks import performAttack
import sys
import os
import time
import datetime

def writePacketInDisk(pkt):
    wrpcap("network_logs/network-log-"
           + datetime.date.today().strftime("%Y") + "-"
           + datetime.date.today().strftime("%B") + "-"
           + datetime.date.today().strftime("%d") + ".pcap",
           pkt, append=True)

def pkt_callback_PLC_OPC(pkt):
    ret = True
    #if is_mms_packet(pkt):
    #    writePacketInDisk(pkt)
    #    ret = performAttack(pkt)
    return ret

def pkt_callback_OPC_PLC(pkt):
    ret = True
    #if is_mms_packet(pkt):
    #    writePacketInDisk(pkt)
    #    ret = performAttack(pkt)
    return ret

def enable_bridge():
    print "hello!!"
    bridge_and_sniff(interfaces["plc-ccb"], interfaces["opc"],
                     xfrm12=pkt_callback_PLC_OPC, xfrm21=pkt_callback_OPC_PLC,
                     count=0, store=0)
    #prn=lambda x: x.summary()
    print "bye!!"

if __name__ == "__main__":
    conf.sniff_promisc = True
    enable_bridge()
This is definitely not working... Is the code correct? Maybe my VM is too slow for this task?
This code is correct and should work. You should update to the current development version of Scapy (https://github.com/secdev/scapy/) and see if that was related to an old bug.
As a side note, you can directly use bridge_and_sniff("enp0s3", "enp0s8") instead of writing your own function.
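For illustration, the whole bridge then shrinks to a few lines (a sketch, reusing the interface names from the question):

from scapy.all import bridge_and_sniff, conf

conf.sniff_promisc = True
# forward every frame between the two interfaces in both directions;
# store=0 avoids keeping the packets in memory
bridge_and_sniff("enp0s3", "enp0s8", store=0)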

Passing data between separately running Python scripts

If I have a python script running (with full Tkinter GUI and everything) and I want to pass the live data it is gathering (stored internally in arrays and such) to another python script, what would be the best way of doing that?
I cannot simply import script A into script B as it will create a new instance of script A, rather than accessing any variables in the already running script A.
The only way I can think of doing it is by having script A write to a file and script B get the data from the file. This is less than ideal, however, as something bad might happen if script B tries to read the file while script A is writing to it. I am also looking for much faster communication between the two programs.
EDIT:
Here are the examples as requested. I am aware why this doesn't work, but it is the basic premise of what needs to be achieved. My source code is very long and unfortunately confidential, so it will not help here. In summary, script A is running Tkinter and gathering data, while script B is views.py as part of Django, but I'm hoping this can be achieved in plain Python.
Script A
import time

i = 0

def return_data():
    return i

if __name__ == "__main__":
    while True:
        i = i + 1
        print(i)
        time.sleep(.01)
Script B
import time
from scriptA import return_data

if __name__ == '__main__':
    while True:
        print(return_data())  # from script A
        time.sleep(1)
You can use the multiprocessing module to implement a Pipe between the two modules. Then you can start one of the modules as a Process and use the Pipe to communicate with it. The best part about using pipes is that you can also pass Python objects like dict and list through them.
Ex:
mp2.py:

from multiprocessing import Process, Pipe
from mp1 import f

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print(parent_conn.recv())  # prints "Hello"

mp1.py:

def f(child_conn):
    msg = "Hello"
    child_conn.send(msg)
    child_conn.close()
If you want to read and modify shared data between two scripts which run separately, a good solution is to take advantage of the Python multiprocessing module and use a Pipe() or a Queue() (see the differences here). This way you get to sync the scripts and avoid problems regarding concurrency and global variables (like what happens if both scripts want to modify a variable at the same time).
As Akshay Apte said in his answer, the best part about using pipes/queues is that you can pass Python objects through them.
Also, there are methods to avoid waiting for data if none has been passed yet (queue.empty() and pipeConn.poll()).
See an example using Queue() below:
# main.py
from multiprocessing import Process, Queue
from stage1 import Stage1
from stage2 import Stage2

s1 = Stage1()
s2 = Stage2()

# S1 to S2 communication
queueS1 = Queue()  # s1.stage1() writes to queueS1
# S2 to S1 communication
queueS2 = Queue()  # s2.stage2() writes to queueS2

# start s2 as another process
s2 = Process(target=s2.stage2, args=(queueS1, queueS2))
s2.daemon = True
s2.start()  # launch the stage2 process

s1.stage1(queueS1, queueS2)  # start sending stuff from s1 to s2
s2.join()  # wait till the s2 daemon finishes

# stage1.py
import time
import random

class Stage1:
    def stage1(self, queueS1, queueS2):
        print("stage1")
        lala = []
        lis = [1, 2, 3, 4, 5]
        for i in range(len(lis)):
            # to avoid unnecessary waiting
            if not queueS2.empty():
                msg = queueS2.get()  # get msg from s2
                print("! ! ! stage1 RECEIVED from s2:", msg)
                lala = [6, 7, 8]  # now that a msg was received, further msgs will be different
            time.sleep(1)  # work
            random.shuffle(lis)
            queueS1.put(lis + lala)
        queueS1.put('s1 is DONE')

# stage2.py
import time

class Stage2:
    def stage2(self, queueS1, queueS2):
        print("stage2")
        while True:
            msg = queueS1.get()  # wait till there is a msg from s1
            print("- - - stage2 RECEIVED from s1:", msg)
            if msg == 's1 is DONE':
                break  # end the loop
            time.sleep(1)  # work
            queueS2.put("update lists")
EDIT: I just found that you can use queue.get(False) to avoid blocking when receiving data. This way there's no need to check first whether the queue is empty. This is not possible if you use pipes.
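For completeness, a non-blocking get has to handle the queue.Empty exception that multiprocessing queues raise when nothing is available; a small sketch:

from queue import Empty  # multiprocessing.Queue raises queue.Empty

try:
    msg = queueS2.get(False)  # non-blocking; raises Empty when no msg is waiting
except Empty:
    msg = None  # nothing received yet, keep working
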
You could use the pickle module to pass data between two Python programs.
import pickle

def storeData():
    # initializing data to be stored in db
    employee1 = {'key': 'Engineer', 'name': 'Harrison',
                 'age': 21, 'pay': 40000}
    employee2 = {'key': 'LeadDeveloper', 'name': 'Jack',
                 'age': 50, 'pay': 50000}
    # database
    db = {}
    db['employee1'] = employee1
    db['employee2'] = employee2
    # it's important to use binary mode
    dbfile = open('examplePickle', 'ab')
    # source, destination
    pickle.dump(db, dbfile)
    dbfile.close()

def loadData():
    # for reading, binary mode is also important
    dbfile = open('examplePickle', 'rb')
    db = pickle.load(dbfile)
    for key in db:
        print(key, '=>', db[key])
    dbfile.close()
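
The snippet defines the two halves but never calls them; the writing script would call storeData() and the reading script loadData():

if __name__ == '__main__':
    storeData()  # in the script that produces the data
    loadData()   # in the script that consumes it

Note that this inherits the file-based caveat from the question: the reader can catch a partially written file.
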
This will pass data to and from two running scripts using a TCP socket, via ZeroMQ: https://zeromq.org/languages/python/. The required module is pyzmq (pip install pyzmq), imported as zmq.
This is called client-server communication. The server waits for the client to send a request, and the client will not run if the server is not running. This also lets you send a request from one device (the client) to another (the server), as long as both are on the same network: change localhost to the actual IP of the server (the server itself binds with *), both in the client and wherever the server address is used. (To find the IP, go into your device's network settings, click on your network icon, and look under advanced or properties for the IP address; note this may differ from what Google reports for you, for example with IPv6 or DDoS protection.) QUESTION to OP: do you have to have script B always running, or can script B be imported as a module into script A? If so, look up how to make Python modules.
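
A minimal request/reply sketch with pyzmq (the port number and the message contents are illustrative):

server.py:

import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")  # '*' = listen on all interfaces

while True:
    request = socket.recv_string()  # blocks until a client asks
    socket.send_string("data for: " + request)

client.py:

import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")  # replace localhost with the server's IP

socket.send_string("latest")
print(socket.recv_string())
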
I solved the same problem using the Shared Memory Dict library, a very simple dict implementation on top of multiprocessing.shared_memory.
Source1.py

from shared_memory_dict import SharedMemoryDict
from time import sleep

smd_config = SharedMemoryDict(name='config', size=1024)

if __name__ == "__main__":
    smd_config["status"] = True
    while True:
        smd_config["status"] = not smd_config["status"]
        sleep(1)

Source2.py

from shared_memory_dict import SharedMemoryDict
from time import sleep

smd_config = SharedMemoryDict(name='config', size=1024)

if __name__ == "__main__":
    while True:
        print(smd_config["status"])
        sleep(1)

Writing a ros node with both a publisher and subscriber?

I am currently trying to make a ROS node in Python which has both a subscriber and a publisher.
I've seen examples where a message is published within the callback, but I want the node to publish messages "constantly", and perform callbacks only when messages arrive.
Here is how I do it now:
#!/usr/bin/env python
import rospy
from std_msgs.msg import Empty
from std_msgs.msg import String
import numpy as np

pub = rospy.Publisher('/status', String, queue_size=1000)

def callback(data):
    print "Message received"

def listener():
    rospy.init_node('control', anonymous=True)
    rospy.Subscriber('control_c', Empty, callback)
    rospy.spin()

if __name__ == '__main__':
    print "Running"
    listener()
So where should I publish?
Well, I think there are a lot of solutions here; you could even make use of a Python process, but what I'm proposing is a ROS approach using a ROS Timer.
I am not really that proficient in Python, but this code may give you a head start.
#!/usr/bin/env python
import rospy
from std_msgs.msg import Empty
from std_msgs.msg import String
import numpy as np

last_data = ""
started = False
pub = rospy.Publisher('/status', String, queue_size=1000)

def callback(data):
    print "New message received"
    global started, last_data
    last_data = data
    if not started:
        started = True

def timer_callback(event):
    global started, pub, last_data
    if started:
        pub.publish(last_data)
        print "Last message published"

def listener():
    rospy.init_node('control', anonymous=True)
    rospy.Subscriber('control_c', String, callback)
    timer = rospy.Timer(rospy.Duration(0.5), timer_callback)
    rospy.spin()
    timer.shutdown()

if __name__ == '__main__':
    print "Running"
    listener()
Here, your callback will update the message and your timer will fire every 0.5 s and publish the last data received.
You can test this code by publishing data on /control_c every 3 seconds and configuring your timer to 0.5 s. Start an echo on /status:
$ rostopic echo /status
and you'll get your messages published at a 2 Hz rate.
Hope that helps!
Simply replace rospy.spin() with the following loop:
while not rospy.is_shutdown():
    # do whatever you want here
    pub.publish(foo)
    rospy.sleep(1)  # sleep for one second
Of course you can adjust the sleep duration to whatever value you want (or even remove it entirely).
According to this reference, subscribers in rospy run in a separate thread, so you don't need to call spin actively.
Note that in roscpp (i.e. when using C++) this is handled differently: there you have to call ros::spinOnce() in the while loop.

Listening for a threading Event in python

First-time SO user, please excuse any etiquette errors. I'm trying to implement a multithreaded program in Python and am having trouble. This is no doubt due to a lack of understanding of how threading is implemented, but hopefully you can help me figure it out.
I have a basic program that continually listens for messages on a serial port and can then print/save/process/etc. them, which works fine. It basically looks like this:
import serial

def main():
    usb = serial.Serial('/dev/cu.usbserial-A603UBRB', 57600)  # open serial w/ baud rate
    while True:
        line = usb.readline()
        print(line)
However what I want to do is continually listen for the messages on a serial port, but not necessarily do anything with them. This should run in the background, and meanwhile in the foreground I want to have some kind of interface where the user can command the program to read/use/save these data for a while and then stop again.
So I created the following code:
import time
import serial
import threading

# this runs in the background constantly, reading the serial bus input
class serial_listener(threading.Thread):
    def __init__(self, line, event):
        super(serial_listener, self).__init__()
        self.event = threading.Event()
        self.line = ''
        self.usb = serial.Serial('/dev/cu.usbserial-A603UBRB', 57600)

    def run(self):
        while True:
            self.line = self.usb.readline()
            self.event.set()
            self.event.clear()
            time.sleep(0.01)

# this lets the user command the software to record several values from serial
class record_data(threading.Thread):
    def __init__(self):
        super(record_data, self).__init__()
        self.line = ''
        self.event = threading.Event()
        self.ser = serial_listener(self.line, self.event)
        self.ser.start()  # run thread

    def run(self):
        while True:
            user_input = raw_input('Record data: ')
            if user_input == 'r':
                event_counter = 0
                while event_counter < 16:
                    self.event.wait()
                    print(self.line)
                    event_counter += 1

# this is going to be the mother function
def main():
    dat = record_data()
    dat.start()

# this makes the code behave like C code.
if __name__ == '__main__':
    main()
It compiles and runs, but when I order the program to record by typing r into the CLI, nothing happens. It doesn't seem to be receiving any events.
Any clues how to make this work? Workarounds are also fine; the only constraint is that I can't constantly open and close the serial interface. It has to remain open the whole time, or else the device stops working until it is unplugged and replugged.
Instead of using multiple threads, I would suggest using multiple processes. When you use threads, you have to think about the global interpreter lock: you either listen for events or do something in your main thread, but both at the same time will not work.
When using multiple processes, I would then use a queue to forward the events from your watchdog that you would like to handle. Or you could code your own event handler. Here you can find an example of a multiprocess event handler.
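As a sketch of that approach (reusing the serial port from the question and Python 3's input(); the helper names are illustrative): a reader process keeps the port open for its entire lifetime and forwards every line through a queue, which the foreground loop drains on demand.

import multiprocessing as mp
import serial

def serial_listener(q):
    # the port is opened once and stays open for the life of this process
    usb = serial.Serial('/dev/cu.usbserial-A603UBRB', 57600)
    while True:
        q.put(usb.readline())

if __name__ == '__main__':
    q = mp.Queue()
    listener = mp.Process(target=serial_listener, args=(q,))
    listener.daemon = True  # the reader dies together with the main process
    listener.start()
    while True:
        if input('Record data: ') == 'r':
            for _ in range(16):
                print(q.get())  # blocks until the next line arrives

Note that lines keep accumulating in the queue while you are not recording; drain it first (or use q.get with a timeout) if only fresh data matters.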

monitoring dbus messages by python

I'm trying to make a Python application which reads the messages going through D-Bus, something giving the same output as the bash dbus-monitor. From what I found while searching, the code should be quite plain and clear, something like:
import dbus, gobject
from dbus.mainloop.glib import DBusGMainLoop

def msg_cb(bus, msg):
    args = msg.get_args_list()
    print "Notification from '%s'" % args[0]
    print "Summary: %s" % args[3]
    print "Body: %s" % args[4]

if __name__ == '__main__':
    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    string = "interface='org.freedesktop.Notifications',member='Notify'"
    bus.add_match_string(string)
    bus.add_message_filter(msg_cb)
    mainloop = gobject.MainLoop()
    mainloop.run()
But when launching it, I only get the message returned by D-Bus saying the application is connected, unlike what I get if I execute the bash command:
dbus-monitor --session interface='org.freedesktop.Notifications',member='Notify'
In this case I can watch all the messages matching the filter condition.
Can anybody please help me understand where I fail?
Thanks
Notify is a method, not a signal, so you need to add eavesdrop='true' to the match rule in order to receive messages which are not intended for you. If you run dbus-monitor, you will notice the eavesdrop key in the rules dbus-monitor sets up.
This is a change in behavior, I believe since dbus-1.5.6, where bug 39450 was fixed.
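So the only change needed in the script above is the match rule; the rest stays as it is:

string = ("eavesdrop='true',"
          "interface='org.freedesktop.Notifications',member='Notify'")
bus.add_match_string(string)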
