I have a device which works over serial communication. I am writing Python code which will send some commands to get data from the device.
There are three commands.
1. COMMAND - sop
The device does its internal calculation and sends the data below:
Response - "b'SOP,0,921,34,40,207,0,x9A\r\n'"
2. COMMAND - time
This gives date/time values which normally do not change until the device is restarted.
3. START - "\r\r" or (<cr><cr>)
This command puts the device into responsive mode, after which it responds to the above commands. It is basically pressing <enter> twice and only has to be done once at the start.
The problem I am facing is that the frequency of the data received from the sop command is not fixed, so the data can arrive at any time. The command also cannot be stopped once started, so if I run another command like time and then read the data, I sometimes do not receive the time values, or they are merged with the sop data. Below is the code I am using:
import time
import serial

port = serial.Serial('/dev/ttyS0', 115200)  #Init serial port
port.write(("\r\r".encode()))  #Sending the start command
bytesToRead = port.in_waiting  #Checking data bytesize
res = port.read(bytesToRead)  #Reading the data which is normally a welcome msg
port.reset_input_buffer()  #Clearing the input serial buffer
port.reset_output_buffer()  #Clearing the output serial buffer
port.write(("sop\r".encode()))  #Sending the command sop
while True:
    time.sleep(5)
    bytesToRead = port.in_waiting
    print(bytesToRead)
    res = port.read(bytesToRead)
    print(res)
    port.reset_input_buffer()
    port.write(("time\r".encode()))
    res = port.readline()
    print(res)
With the above code I sometimes do not receive the value of time after sending its command, or it is merged with the sop output. Also, with the sop command I receive a lot of data during the sleep(5), out of which I only need the latest reading. If I do not include sleep(5), I miss the sop data and it arrives only after the time command has been executed.
I was hoping someone could point me in the right direction on how to design this in a better way. I also think this could easily be done using an interrupt handler, but I haven't found any code about pyserial interrupts. Can anyone please suggest some good code for using interrupts in pyserial?
Thanks
Instead of using time.sleep(), it is preferable to use serialport.in_waiting, which reports the number of bytes available in the receive buffer.
Read the data with read() only once there is some data in the receive buffer.
The following code sequence can then be used without any delay:
while True:
    bytesToRead = port.in_waiting
    print(bytesToRead)
    if bytesToRead > 0:
        res = port.read(bytesToRead)
        print(res)
        port.reset_input_buffer()

        # put some check or filter then write data on serial port
        port.write(("time\r".encode()))
        res = port.readline()
        print(res)
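Since the sop responses are self-identifying (they start with "SOP," in the sample output above), another option is to read complete lines and dispatch on that prefix rather than counting bytes. Below is a minimal, untested sketch of that idea; it assumes every response ends in \r\n, that a one-second read timeout is acceptable, and that the time reply is simply whatever line does not carry the SOP prefix:

import time
import serial

port = serial.Serial('/dev/ttyS0', 115200, timeout=1)  # timeout so readline() never blocks forever
port.write(b"\r\r")                    # start command, only needed once
time.sleep(0.5)
port.reset_input_buffer()              # drop the welcome message
port.write(b"sop\r")                   # after this, sop data keeps streaming on its own

latest_sop = None
last_time_request = 0.0
while True:
    # ask for the time every 5 seconds; its reply shows up as a non-SOP line
    if time.monotonic() - last_time_request > 5:
        port.write(b"time\r")
        last_time_request = time.monotonic()

    line = port.readline()             # one complete response, or b'' on timeout
    if line.startswith(b"SOP,"):
        latest_sop = line              # keep only the most recent sop reading
    elif line:
        print("time response:", line, "| latest sop:", latest_sop)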
I am taking a stab here: your time.sleep(5) might be too long. Have you tried making the sleep really short, for example time.sleep(0.300)? If the time data gets written back between the sop returns, you will catch it before it gets merged with the sop output. I am assuming here that the device does send the time data back; otherwise there is nothing more you can do on the Python side anyway. I do believe it won't hurt to make the sleep shorter, since the loop is just sitting there polling for communication.
Not having the same environment on my side makes it difficult to answer, because I can't test my answer, so I hope this helps.
I am trying to move my Python code from using dynamodb to dynamodb2 to have access to the global secondary index capability. One concept that is a lot less clear to me in ddb2 compared to ddb is that of a batch. Here's one version of my new code, which was basically modified from my original ddb code:
item_pIds = []
batch = table.batch_write()
count = 0
while True:
    m = inq.read()
    count = count + 1
    mStr = json.dumps(m)
    pid = m['primaryId']
    if pid in item_pIds:
        print "pid=%d already exists in the batch, ignoring" % pid
        continue
    item_pIds.append(pid)
    sid = m['secondaryId']
    item_data = {"primaryId": pid, "secondaryId": sid, "message": mStr}
    batch.put_item(data=item_data)
    if count >= 25:
        batch = table.batch_write()
        count = 0
        item_pIds = []
So what I am doing here is getting (JSON) messages from a queue. Each message has a primaryId and a secondaryId. The secondaryId is not unique, in that I might get several messages at about the same time that share one. The primaryId is mostly unique: if I get a set of messages at about the same time that have the same primaryId, that's bad. However, from time to time, say once in a few hours, I may get a message that needs to override an existing message with the same primaryId. So this seems to align well with the statement from the dynamodb2 documentation page, similar to that of ddb:
DynamoDB’s maximum batch size is 25 items per request. If you attempt to put/delete more than that, the context manager will batch as many as it can up to that number, then flush them to DynamoDB and continue batching as more calls come in.
However, what I noticed is that a large chunk of messages that I get through the queue never make it to the database. That is, when I try to retrieve them later, they are not there. So I was told that a better way of handling batch writes is by doing something like this:
with table.batch_write() as batch:
    while True:
        m = inq.read()
        mStr = json.dumps(m)
        pid = m['primaryId']
        sid = m['secondaryId']
        item_data = {"primaryId": pid, "secondaryId": sid, "message": mStr}
        batch.put_item(data=item_data)
That is, I only call batch_write() once, similar to how I would open a file only once and then write into it continuously. But in this case, I don't understand what the "rule of 25 max" means. When does a batch start and end? And how do I check for duplicate primaryIds? Remembering all messages that I have ever received through the queue is not realistic, since (i) I have too many of them (the system runs 24/7) and (ii) as I stated before, occasional repeated ids are OK.
Sorry for the long message.
A batch will start whenever the request is sent and end when the last request in the batch is completed.
As with any RESTful API, every request comes with a cost, meaning how many resources it takes to complete said request. With the batch_write() class in DynamoDB2, the requests are wrapped in a group and queued for processing, which reduces the cost since they are no longer individual requests.
The batch_write() class returns a context manager that handles the individual requests; what you get back slightly resembles a Table object, but it only has the put_item and delete_item requests.
DynamoDB's max batch size is 25, just like you've read. From the comments in the source code:
DynamoDB's maximum batch size is 25 items per request. If you attempt
to put/delete more than that, the context manager will batch as many
as it can up to that number, then flush them to DynamoDB & continue
batching as more calls come in.
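Building on that behaviour, below is a sketch of how the duplicate check from your original code could be kept while using the context-manager form. This is only an illustration under the assumption that the implicit flush happens every 25 puts, as quoted above; the table name is made up, and inq is the message queue object from your question:

import json
from boto.dynamodb2.table import Table

table = Table('messages')   # hypothetical table name

pending_pids = set()        # primaryIds queued since the last flush
count = 0

with table.batch_write() as batch:
    while True:
        m = inq.read()      # inq is the queue object from the question
        pid = m['primaryId']
        if pid in pending_pids:
            # same primaryId already queued in the current 25-item window; skip it
            continue
        batch.put_item(data={
            "primaryId": pid,
            "secondaryId": m['secondaryId'],
            "message": json.dumps(m),
        })
        pending_pids.add(pid)
        count += 1
        if count >= 25:           # the context manager flushes at 25 items,
            pending_pids.clear()  # so duplicates only matter within that window
            count = 0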
You can also read about migrating, batches in particular, from DynamoDB to DynamoDB2 here.
I currently have a Python 2.6 piece of code that runs two loops simultaneously. The code uses the gps (gpsd) module and the scapy module. Basically, the first function (gpsInfo) contains a continual while loop grabbing GPS data from a GPS device and writing the location to the console. The second function (ClientDetect) also runs in a continual loop, sniffing the air for wifi data and printing it when specific packets are found. I've threaded these two loops, with the GPS one running as a background thread. What I am looking to do (and have been struggling for five days to work out) is this: when the ClientDetect function finds a match and prints the respective info, I want the GPS coordinates from the moment of that hit printed to the console as well. At present my code doesn't seem to work.
observedclients = []
p = ""  # Relates to wifi packet
session = gps.gps(mode=gps.WATCH_NEWSTYLE)
def gpsInfo():
    while True:
        session.poll()
        time.sleep(5)
        if gps.PACKET_SET:
            session.stream
            print session.fix.latitude + session.fix.longitude
            time.sleep(0.1)

def WifiDetect(p):
    if p.haslayer(Dot11):
        if p.type == 0 and p.subtype in stamgmtstypes:
            if p.addr2 not in observedclients:
                print p.addr2
                observedclients.append(p.addr2)

def ActivateWifiDetect():
    sniff(iface="mon0", prn=WifiDetect)

if __name__ == '__main__':
    t = threading.Thread(target=gpsInfo)
    t.start()
    WifiDetect()
Can anybody look at my code and see how best to grab the data so that, when there is a wifi hit, the GPS coordinates are printed too? Somebody mentioned implementing queuing, but my research has not turned up how to implement it.
As said, the aim of this code is to scan for both GPS data and specific wifi packets and, when a packet is detected, print details of the packet along with the location where it was detected.
A simple way of getting this is to store the GPS location in a global variable and have the wifi-sniffing thread read that global when it needs to print some data. The gotcha is that since two threads can access the global variable at the same time, you'll want to wrap it with a mutex:
last_location = (None, None)
location_mutex = threading.Lock()

def gpsInfo():
    global last_location
    while True:
        session.poll()
        time.sleep(5)
        if gps.PACKET_SET:
            session.stream
            with location_mutex:
                # DON'T Print from inside thread!
                last_location = session.fix.latitude, session.fix.longitude
            time.sleep(0.1)

def WifiDetect(p):
    if p.haslayer(Dot11):
        if p.type == 0 and p.subtype in stamgmtstypes:
            if p.addr2 not in [c[0] for c in observedclients]:  # compare against the stored addresses
                with location_mutex:
                    print p.addr2, last_location
                observedclients.append((p.addr2, last_location))
You need to tell Python that you are using an external variable when you use gps in a function. The code should look like this:
def gpsInfo():
    global gps  # new line
    while True:
        session.poll()
        time.sleep(5)
        if gps.PACKET_SET:
            session.stream
            print session.fix.latitude + session.fix.longitude
            time.sleep(0.1)

def WifiDetect(p):
    global observedclients  # new line (p is a parameter, so it must not be declared global)
    if p.haslayer(Dot11):
        if p.type == 0 and p.subtype in stamgmtstypes:
            if p.addr2 not in observedclients:
                print p.addr2
                observedclients.append(p.addr2)
I think you should be more specific about your goal.
If all you want to do is get GPS coords when a wifi network is sniffed, just do (pseudo-code):
while True:
    if networkSniffed():
        async_GetGPSCoords()
If you want a log of all GPS coords and want to match that up with wifi network data, just print all the data out along with timestamps, and do post-processing to match up wifi networks with GPS coords via timestamp.
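A minimal sketch of that post-processing approach, assuming both logs have already been collected as lists of (timestamp, value) tuples (the names gps_log and wifi_log are made up for this illustration):

def nearest_gps_fix(gps_log, wifi_timestamp):
    """Return the (timestamp, (lat, lon)) entry closest in time to a wifi hit."""
    return min(gps_log, key=lambda entry: abs(entry[0] - wifi_timestamp))

# hypothetical pre-collected logs: (unix_timestamp, payload) tuples
gps_log = [(1000.0, (52.1, 4.3)), (1005.0, (52.2, 4.4))]
wifi_log = [(1004.2, "aa:bb:cc:dd:ee:ff")]

for hit_time, mac in wifi_log:
    fix_time, coords = nearest_gps_fix(gps_log, hit_time)
    print("%s seen near %s (GPS fix from t=%s)" % (mac, coords, fix_time))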
I have been trying to work with the standard GPS (gps.py) module in Python 2.6. This is supposed to act as a client and read GPS data from gpsd running on Ubuntu.
According to the documentation from the GPSD webpage on client design (GPSD Client Howto), I should be able to use the following code (slightly modified from the example) for getting the latest GPS readings (lat/long is what I am mainly interested in):
from gps import *
session = gps() # assuming gpsd running with default options on port 2947
session.stream(WATCH_ENABLE|WATCH_NEWSTYLE)
report = session.next()
print report
If I repeatedly call next(), it gives me buffered values from the bottom of the queue (from when the session was started), and not the latest GPS reading. Is there a way to get more recent values using this library? In a way, to seek the stream to the latest values?
Has anyone got a code example using this library to poll the GPS and get the value I am looking for?
Here is what I am trying to do:
Start the session
Wait for the user to call the gps_poll() method in my code
Inside this method, read the latest TPV (Time Position Velocity) report and return lat/long
Go back to waiting for the user to call gps_poll()
What you need to do is regularly poll session.next(). The issue here is that you're dealing with a serial interface: you get results in the order they were received. It's up to you to maintain a current_value that holds the latest retrieved value.
If you don't poll the session object, eventually your UART FIFO will fill up and you won't get any new values anyway.
Consider using a thread for this: don't wait for the user to call gps_poll(). You should be polling continuously, and when the user wants a new value they call get_current_value(), which returns current_value.
Off the top of my head it could be something as simple as this:
import threading
import time
from gps import *

class GpsPoller(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.session = gps(mode=WATCH_ENABLE)
        self.current_value = None

    def get_current_value(self):
        return self.current_value

    def run(self):
        try:
            while True:
                self.current_value = self.session.next()
                time.sleep(0.2)  # tune this, you might not get values that quickly
        except StopIteration:
            pass

if __name__ == '__main__':
    gpsp = GpsPoller()
    gpsp.start()
    # gpsp now polls every .2 seconds for new data, storing it in self.current_value
    while 1:
        # In the main thread, every 5 seconds print the current value
        time.sleep(5)
        print gpsp.get_current_value()
The above answers are very inefficient and overly complex for anyone using modern versions of gpsd and needing data at only specific times, instead of streaming.
Most GPSes send their position information at least once per second. Presumably since many GPS-based applications desire real-time updates, the vast majority of gpsd client examples I've seen use the above method of watching a stream from gpsd and receiving realtime updates (more or less as often as the gps sends them).
However, if (as in the OP's case) you don't need streaming information but just need the last-reported position whenever it's requested (i.e. via user interaction or some other event), there's a much more efficient and simpler method: let gpsd cache the latest position information, and query it when needed.
The gpsd JSON protocol has a ?POLL; request, which returns the most recent GPS information that gpsd has seen. Instead of having to iterate over the backlog of gps messages, and continually read new messages to avoid full buffers, you can send a ?WATCH={"enable":true} message at the start of the gpsd session, and then query the latest position information whenever you need it with ?POLL;. The response is a single JSON object containing the most recent information that gpsd has seen from the GPS.
If you're using Python 3, the easiest way I've found is to use the gpsd-py3 package available on PyPI. To connect to gpsd, get the latest position information, and print the current position:
import gpsd
gpsd.connect()
packet = gpsd.get_current()
print(packet.position())
You can repeat the gpsd.get_current() call whenever you want new position information, and behind the scenes the gpsd package will execute the ?POLL; call to gpsd and return an object representing the response.
Doing this with the built-in gps module isn't terribly straightforward, but there are a number of other Python clients available, and it's also rather trivial to do with anything that can perform socket communication, including this example using telnet:
$ telnet localhost 2947
Trying ::1...
Connected to localhost.
Escape character is '^]'.
{"class":"VERSION","release":"3.16","rev":"3.16","proto_major":3,"proto_minor":11}
?WATCH={"enable":true}
{"class":"DEVICES","devices":[{"class":"DEVICE","path":"/dev/pts/10","driver":"SiRF","activated":"2018-03-02T21:14:52.687Z","flags":1,"native":1,"bps":4800,"parity":"N","stopbits":1,"cycle":1.00}]}
{"class":"WATCH","enable":true,"json":false,"nmea":false,"raw":0,"scaled":false,"timing":false,"split24":false,"pps":false}
?POLL;
{"class":"POLL","time":"2018-03-02T21:14:54.873Z","active":1,"tpv":[{"class":"TPV","device":"/dev/pts/10","mode":3,"time":"2005-06-09T14:34:53.280Z","ept":0.005,"lat":46.498332203,"lon":7.567403907,"alt":1343.165,"epx":24.829,"epy":25.326,"epv":78.615,"track":10.3788,"speed":0.091,"climb":-0.085,"eps":50.65,"epc":157.23}],"gst":[{"class":"GST","device":"/dev/pts/10","time":"1970-01-01T00:00:00.000Z","rms":0.000,"major":0.000,"minor":0.000,"orient":0.000,"lat":0.000,"lon":0.000,"alt":0.000}],"sky":[{"class":"SKY","device":"/dev/pts/10","time":"2005-06-09T14:34:53.280Z","xdop":1.66,"ydop":1.69,"vdop":3.42,"tdop":3.05,"hdop":2.40,"gdop":5.15,"pdop":4.16,"satellites":[{"PRN":23,"el":6,"az":84,"ss":0,"used":false},{"PRN":28,"el":7,"az":160,"ss":0,"used":false},{"PRN":8,"el":66,"az":189,"ss":45,"used":true},{"PRN":29,"el":13,"az":273,"ss":0,"used":false},{"PRN":10,"el":51,"az":304,"ss":29,"used":true},{"PRN":4,"el":15,"az":199,"ss":36,"used":true},{"PRN":2,"el":34,"az":241,"ss":41,"used":true},{"PRN":27,"el":71,"az":76,"ss":42,"used":true}]}]}
?POLL;
{"class":"POLL","time":"2018-03-02T21:14:58.856Z","active":1,"tpv":[{"class":"TPV","device":"/dev/pts/10","mode":3,"time":"2005-06-09T14:34:53.280Z","ept":0.005,"lat":46.498332203,"lon":7.567403907,"alt":1343.165,"epx":24.829,"epy":25.326,"epv":78.615,"track":10.3788,"speed":0.091,"climb":-0.085,"eps":50.65,"epc":157.23}],"gst":[{"class":"GST","device":"/dev/pts/10","time":"1970-01-01T00:00:00.000Z","rms":0.000,"major":0.000,"minor":0.000,"orient":0.000,"lat":0.000,"lon":0.000,"alt":0.000}],"sky":[{"class":"SKY","device":"/dev/pts/10","time":"2005-06-09T14:34:53.280Z","xdop":1.66,"ydop":1.69,"vdop":3.42,"tdop":3.05,"hdop":2.40,"gdop":5.15,"pdop":4.16,"satellites":[{"PRN":23,"el":6,"az":84,"ss":0,"used":false},{"PRN":28,"el":7,"az":160,"ss":0,"used":false},{"PRN":8,"el":66,"az":189,"ss":45,"used":true},{"PRN":29,"el":13,"az":273,"ss":0,"used":false},{"PRN":10,"el":51,"az":304,"ss":29,"used":true},{"PRN":4,"el":15,"az":199,"ss":36,"used":true},{"PRN":2,"el":34,"az":241,"ss":41,"used":true},{"PRN":27,"el":71,"az":76,"ss":42,"used":true}]}]}
Adding my two cents.
For whatever reason my Raspberry Pi would continue to execute the thread and I'd have to hard-reset the Pi.
So I've combined synthesizerpatel's answer with one I found on Dan Mandel's blog here.
My gps_poller class looks like this:
import os
from gps import *
from time import *
import time
import threading

class GpsPoller(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.session = gps(mode=WATCH_ENABLE)
        self.current_value = None
        self.running = True

    def get_current_value(self):
        return self.current_value

    def run(self):
        try:
            while self.running:
                self.current_value = self.session.next()
        except StopIteration:
            pass
And the code in use looks like this:
from gps_poll import *

if __name__ == '__main__':
    gpsp = GpsPoller()
    try:
        gpsp.start()
        while True:
            os.system('clear')
            report = gpsp.get_current_value()
            # print report
            try:
                if report.keys()[0] == 'epx':
                    print report['lat']
                    print report['lon']
                    time.sleep(.5)
            except(AttributeError, KeyError):
                pass
            time.sleep(0.5)
    except(KeyboardInterrupt, SystemExit):
        print "\nKilling Thread.."
        gpsp.running = False
        gpsp.join()
        print "Done.\nExiting."
You can also find the code here: Here and Here
I know it's an old thread, but just for everyone's understanding, you can also use the pyembedded Python library for this.
pip install pyembedded
from pyembedded.gps_module.gps import GPS
import time
gps = GPS(port='COM3', baud_rate=9600)
while True:
    print(gps.get_lat_long())
    time.sleep(1)
https://pypi.org/project/pyembedded/