Python-CAN script receiving half of the expected CAN messages - python

I have written a Python script using the Python-CAN library that records received CAN messages at a 1 second rate for 5 minutes, then logs all the messages to a file and exits. The computer has a CAN module connected to the CAN bus (the other device on the bus is an engine), and I communicate with it through the SocketCAN interface.
The test engine system this computer is connected to sends around 114 messages at what I believe is a 250 kbit/s bit rate. I expect to see 114 messages recorded in the file for each 1 second period, but instead I'm seeing about half that count (~65 messages).
Could it be that the engine's ECU is set to a 500 kbit/s bit rate, and that's why I'm not getting the count I expect? I would think there would be no communication at all if the bit rates did not match, but I don't have physical access to the system: I deploy the script remotely through an OTA update rather than running it myself. (The device is headless, but is set up to run the script on startup.) I only see the log files that are generated.
Here is the Python code:
(A note: I have code that parses the received messages into their contained signals, but I did not include it here because it runs at the end and is not relevant.)
import can
import datetime

class logging:
    def __init__(self):
        # Dictionary to hold received CAN messages
        self.message_Dict = {}
        # List to hold queued dictionaries
        self.message_Queue = []
        # A "filters" object (self.Filters) is also created here, but I did not include it
        # I have verified the filters are correct on my test system

    def main(self):
        # Record the current time
        currentTime = datetime.datetime.now()
        # Record the overall start time
        startTime = datetime.datetime.now()
        # Record the iteration start time
        lastIterationStartTime = currentTime
        # Create the CanBus that will be used to send and receive CAN msgs from the MCU
        canbus = can.interfaces.socketcan.SocketcanBus(channel='can0', bitrate=250000)
        # These filters are set up correctly, because all the messages come through
        # on my test system, but I did not include them here
        canbus.set_filters(self.Filters)
        # Creating Listener filters and notifier
        listener = can.Listener()
        # Main loop
        while 1:
            # Receive a CAN message (blocks until one arrives)
            msg2 = canbus.recv()
            # Record the current time
            currentTime = datetime.datetime.now()
            # If a valid message is detected
            if msg2 is not None:
                if len(msg2.data) > 0:
                    try:
                        # Save the message data into a dictionary (will be processed later)
                        self.message_Dict[msg2.arbitration_id] = msg2.data
                    except Exception:
                        print("Error in storing CAN message")
            # If 1 second has passed since the last iteration,
            # add the dictionary to a new spot in the queue
            if (currentTime - lastIterationStartTime) >= datetime.timedelta(seconds=1):
                # Add the dictionary with messages into the queue for later processing
                messageDict_Copy = self.message_Dict.copy()
                self.message_Queue.append(messageDict_Copy)
                print("Number of messages in dictionary: " + str(len(self.message_Dict))
                      + "\nNumber of reports in queue: " + str(len(self.message_Queue)))
                # Clear the dictionary for new messages every iteration
                self.message_Dict.clear()
                # Record the reset time
                lastIterationStartTime = datetime.datetime.now()
            # Once 5 minutes of data has been recorded, write to the file
            if (currentTime - startTime) > datetime.timedelta(minutes=5):
                # Here is where I write the data to a file. This is too long to include
                # Clear the queue
                self.message_Queue = []
                # Clear the dictionary for new messages every iteration
                self.message_Dict.clear()

# When the script is run, execute the main method
if __name__ == '__main__':
    mainClass = logging()
    mainClass.main()
I appreciate any ideas or input you have. Thank you

In my experience, most engine ECUs use 250 kbit/s, but the newest ones use 500 kbit/s. I would suggest you try both.
Also, messages only appear on the bus if something actually sends them. It seems silly, but take a truck for example: if you don't step on the accelerator, the messages related to the accelerator will not appear. So you may need to check that all components are being exercised as you expect.
I also suggest you use 'can-utils' here; it is a powerful toolset for CAN analysis, and it includes a CAN sniffer that can help.
Did you try looping over candidate bit rates? That might also help you find the right one.
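Beyond that, one thing worth checking: with SocketCAN, as far as I know, the bitrate= argument passed to SocketcanBus does not reconfigure the interface; the bit rate is whatever the OS set when the interface was brought up (e.g. ip link set can0 up type can bitrate 250000). A hedged sketch of a quick probe you could log remotely, counting the distinct arbitration IDs seen in one second for each rate the startup script configures:

import time
import can

def unique_ids_per_second(channel='can0'):
    # Assumes the interface bit rate was already configured at the OS level
    bus = can.interfaces.socketcan.SocketcanBus(channel=channel)
    seen = set()
    deadline = time.monotonic() + 1.0
    while time.monotonic() < deadline:
        msg = bus.recv(timeout=0.1)  # returns None if nothing arrived in time
        if msg is not None:
            seen.add(msg.arbitration_id)
    bus.shutdown()
    return len(seen)

print("Unique arbitration IDs seen in 1 s:", unique_ids_per_second())

If the count comes back near 114 at one rate and near zero (or all error frames) at the other, that settles the bit-rate question without physical access.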

Related

pySerial Capturing a long response

Hi guys, I'm working on a script that will get data from a host using the Data Communication Standard (developed by the Data Communication Standard Committee, Lens Processing Division of The Vision Council) over a serial port, and pass the data to the Modbus protocol for the device to perform its operations.
Since I don't physically have access to the host machine, I'm developing a secondary script to emulate the host. I am currently at the stage where I need to read a lot of information from the serial port, and I only get part of the data. I was hoping to get the whole string sent by the send_job() function in my host emulator script.
Also, can any of you tell me if this would be a good approach? The only thing the machine is supposed to do is grab 2 values from the host response and assign them to two Modbus holding registers.
NOTE: the initialization function is hard-coded because it will always be the same, and the actual response data will not matter except for status. The job request is also hard-coded; I only pass the job # that I get from a Modbus holding register. The exact logic for how the host resolves this should not matter; I only need to send the job number scanned from the device in this format.
main script:
def request_job_modbus(job):
    data = F'[06][1c]req=33[0d][0a]job={job}[0d][0a][1e][1d]'.encode('ascii')
    writer(data)

def get_job_from_serial():
    response = serial_client.read_all()
    resp = response.decode()
    return resp

# TODO : SEND INIT SEQUENCE ONCE AND VERIFY IF REQUEST status=0
initiation_request()
init_response_status = get_init_status()
print('init method being active')
print(get_init_status())

while True:
    # TODO: get job request data
    job_serial = get_job_from_serial()
    print(job_serial)
host emulation script:
def send_job():
    job_response = '''[06][1c]ans=33[0d]job=30925[0d]status=0;"ok"[0d]do=l[0d]add=;2.50[0d]ar=1[0d]
bcerin=;3.93[0d]bcerup=;-2.97[0d]crib=;64.00[0d]do=l[0d]ellh=;64.00[0d]engmask=;613l[0d]
erdrin=;0.00[0d]erdrup=;10.00[0d]ernrin=;2.00[0d]ernrup=;-8.00[0d]ersgin=;0.00[0d]
ersgup=;4.00[0d]gax=;0.00[0d]gbasex=;-5.30[0d]gcrosx=;-7.96[0d]kprva=;275[0d]kprvm=;0.55[0d]
ldpath=\\uscqx-tcpmain-at\lds\iot\do\800468.sdf[0d]lmatid=;151[0d]lmatname=;f50[0d]
lnam=;vsp_basic_fh15[0d]sgerin=;0.00[0d]sgerup=;0.00[0d]sval=;5.18[0d]text_11=;[0d]
text_12=;[0d]tind=;1.53[0d][1e][1d]'''.encode('ascii')
    writer(job_response)

def get_init_request():
    req = p.readline()
    print(req)
    request = req.decode()[4:11]
    # print(request)
    if request == 'req=ini':
        print('request == req=ini??? <<<<<<< condition met, sending the response')
        send_init_response()
        send_job()

while True:
    # print(get_init_request())
    get_init_request()
What I get on screen (main script):
init method being active
bce
erd
condition was met init status=0
outside loop
ers
condition was met init status=0
inside while loop
trigger reset <<<--------------------
5782
`:lmatid=;151[0d]lmatname=;f50[0d]
lnam=;vsp_basic_fh15[0d]sgerin=;0.00[0d]sgerup=;0.00[0d]sval=;5.18[0d]text_11=;[0d]
text_12=;[0d]tind=;1.53[0d][1e][1d]
outside loop
condition was met init status=0
outside loop
What I get on screen (host emulation script):
b'[1c]req=ini[0d][0a][1e][1d]'
request == req=ini??? <<<<<<< condition met, sending the response
b''
b'[06][1c]req=33[0d][0a]job=5782[0d][0a][1e][1d]'
b''
b''
b''
b''
b''
b''
I suspect you're trying to write too much at once to a hardware buffer that is fairly small. Especially when dealing with low-power hardware, assuming you can stuff an entire message into a buffer is often not correct. Even full modern PCs sometimes have very small buffers for legacy hardware like serial ports. You may find, when you switch from development to actual hardware, that the RTS and DTR lines need to be used to determine when to send or receive data. That is up to whoever designed the hardware, unfortunately, as those lines are also often ignored.
I would try chunking your data transfer into smaller bits as a test to see if the whole message gets through. This is a quick and dirty first attempt that may have bugs, but it should get you down the right path:
def get_job_from_serial():
    response = b''  # buffer for response
    while True:
        chunk = serial_client.read()  # read any available data or wait for timeout
        # this technically reads only 1 byte at a time, but any remotely
        # modern pc should easily keep up with 9600 baud
        if not chunk:  # read() returns b'' on timeout, which probably means end of data
            # you could also check the length of the buffer, if it's always
            # a fixed length, to determine if the entire message has been sent yet
            break
        response += chunk
    return response

def writer(command):
    written = 0      # how many bytes have we actually written
    chunksize = 128  # the smaller you go, the less likely to overflow
                     # a buffer, but the slower you go
    while written < len(command):
        # you presumably might have to wait for p.dtr() == True or similar,
        # though it's just as likely not to have been implemented
        written += p.write(command[written:written + chunksize])
        p.flush()  # probably don't actually need this
P.S. I had to go to the source code for p.read_all (for some reason I couldn't find it online), and it does not do what I think you expect it to. The exact code for it is:

def read_all(self):
    """\
    Read all bytes currently available in the buffer of the OS.
    """
    return self.read(self.in_waiting)

There is no concept of waiting for a complete message; it's just shorthand for grabbing everything currently available.
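One more option: the transcripts above show every record ending with the literal '[1e][1d]' text. If that terminator is reliable, pyserial's read_until() can accumulate bytes until it appears (or the port timeout expires), which avoids hand-rolling the loop. A sketch, assuming serial_client is an open serial.Serial with a timeout configured:

def get_job_from_serial():
    # read_until() blocks until the expected byte sequence arrives,
    # the port timeout expires, or the optional size limit is reached
    raw = serial_client.read_until(b'[1e][1d]')
    return raw.decode()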

Python's pyserial with interrupt mode

I have a device that works over serial communication. I am writing Python code that sends commands to get data from the device.
There are three commands:
1. COMMAND - sop
The device does its internal calculation and sends back data like:
Response - "b'SOP,0,921,34,40,207,0,x9A\r\n'"
2. COMMAND - time
This gives date/time values, which normally do not change until the device is restarted.
3. START - "\r\r" or (<cr><cr>)
This command puts the device in responsive mode, after which it responds to the commands above. It is basically pressing <enter> twice and only has to be done once, at the start.
The problem I am facing is that the frequency of data from the sop command is not fixed, so its data can arrive at any time. The command also cannot be stopped once started, so if I run another command like time and read the data, I sometimes do not receive the time values, or they are merged with the sop data. Below is the code I am using:
import serial
import time

port = serial.Serial('/dev/ttyS0', 115200)  # Init serial port
port.write("\r\r".encode())    # Sending the start command
bytesToRead = port.in_waiting  # Checking data byte size
res = port.read(bytesToRead)   # Reading the data, which is normally a welcome msg
port.reset_input_buffer()      # Clearing the input serial buffer
port.reset_output_buffer()     # Clearing the output serial buffer
port.write("sop\r".encode())   # Sending the command sop
while True:
    time.sleep(5)
    bytesToRead = port.in_waiting
    print(bytesToRead)
    res = port.read(bytesToRead)
    print(res)
    port.reset_input_buffer()
    port.write("time\r".encode())
    res = port.readline()
    print(res)
Using the above code, I sometimes do not receive the value of time after sending its command, or it is merged with the sop output. Also, with the sop command I receive a lot of data during the sleep(5), of which I need only the latest. If I do not include the sleep(5), I miss the sop data and it arrives after the time command has been sent.
I was hoping someone could point me in the right direction on how to design this better. I also think this could easily be done with an interrupt handler, but I haven't found any code about pyserial interrupts. Can anyone suggest a good way to use interrupts with pyserial?
Thanks
Instead of using time.sleep(), it's preferable to use serialport.in_waiting, which checks the number of bytes available in the receive buffer.
So only read the data, with the read function, once there is some data in the receive buffer.
The following code sequence can then be used without any delay:
while True:
    bytesToRead = port.in_waiting
    print(bytesToRead)
    if bytesToRead > 0:
        res = port.read(bytesToRead)
        print(res)
        port.reset_input_buffer()
    # put some check or filter here, then write data on the serial port
    port.write("time\r".encode())
    res = port.readline()
    print(res)
I am taking a stab here: your time.sleep(5) might be too long. Have you tried making the sleep really short, for example time.sleep(.300)? If the time data gets written back between the sop returns, you will catch it before it gets merged with sop. I am making an assumption here that the device will send the time data back; if not, there is nothing more you can do on the server-side (Python) code anyway. I do believe it won't hurt to make the sleep shorter; the loop is anyway just sitting there waiting (polling) for communication.
Not having the same environment on my side makes it difficult to answer, because I can't test my answer, so I hope this might help.
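Building on both answers above: since sop reports arrive continuously and cannot be stopped, another design worth trying is a dedicated reader thread that splits the stream into complete lines and routes them by prefix, so sop data and the time reply never get merged. A hedged sketch, assuming every record ends in \r\n and sop lines start with 'SOP' as shown in the question:

import threading
import serial

port = serial.Serial('/dev/ttyS0', 115200, timeout=1)
latest_sop = None
other_replies = []
lock = threading.Lock()

def reader():
    global latest_sop
    while True:
        line = port.readline()  # one complete record, or b'' on timeout
        if not line:
            continue
        with lock:
            if line.startswith(b'SOP'):
                latest_sop = line           # keep only the newest sop record
            else:
                other_replies.append(line)  # e.g. the reply to 'time'

threading.Thread(target=reader, daemon=True).start()
port.write("\r\r".encode())   # start command, once
port.write("sop\r".encode())

The main loop can then send 'time' whenever it likes and pull its reply from other_replies, while latest_sop always holds the most recent sop record.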

Where does a dynamodb2 batch begin and end?

I am trying to move my Python code from dynamodb to dynamodb2 to get access to the global secondary index capability. One concept that is a lot less clear to me in ddb2 than in ddb is that of a batch. Here's one version of my new code, basically modified from my original ddb code:
item_pIds = []
batch = table.batch_write()
count = 0
while True:
    m = inq.read()
    count = count + 1
    mStr = json.dumps(m)
    pid = m['primaryId']
    if pid in item_pIds:
        print "pid=%d already exists in the batch, ignoring" % pid
        continue
    item_pIds.append(pid)
    sid = m['secondaryId']
    item_data = {"primaryId": pid, "secondaryId": sid, "message": mStr}
    batch.put_item(data=item_data)
    if count >= 25:
        batch = table.batch_write()
        count = 0
        item_pIds = []
So what I am doing here is getting (JSON) messages from a queue. Each message has a primaryId and a secondaryId. The secondaryId is not unique: I might get several messages at about the same time that have the same secondaryId. The primaryId is sort of unique. That is, if I get a set of messages at about the same time that have the same primaryId, that's bad. However, from time to time, say once in a few hours, I may get a message that needs to override an existing message with the same primaryId. So this seems to align well with the statement from the dynamodb2 documentation page, similar to that of ddb:
DynamoDB’s maximum batch size is 25 items per request. If you attempt to put/delete more than that, the context manager will batch as many as it can up to that number, then flush them to DynamoDB and continue batching as more calls come in.
However, what I noticed is that a large chunk of messages that I get through the queue never make it to the database. That is, when I try to retrieve them later, they are not there. So I was told that a better way of handling batch writes is by doing something like this:
with table.batch_write() as batch:
    while True:
        m = inq.read()
        mStr = json.dumps(m)
        pid = m['primaryId']
        sid = m['secondaryId']
        item_data = {"primaryId": pid, "secondaryId": sid, "message": mStr}
        batch.put_item(data=item_data)
That is, I only call batch_write() once, similar to how I would open a file only once and then write into it continuously. But in this case, I don't understand what the "rule of 25 max" means. When does a batch start and end? And how do I check for duplicate primaryIds? Remembering all messages I ever received through the queue is not realistic, since (i) I have too many of them (the system runs 24/7) and (ii) as stated before, occasional repeated ids are OK.
Sorry for the long message.
A batch will start whenever the request is sent and end when the last request in the batch is completed.
As with any RESTful API, every request comes with a cost, meaning how many resources it takes to complete said request. With the batch_write() class in DynamoDB2, the requests are wrapped in a group and a queue is created to process them, which reduces the cost because they are no longer individual requests.
The batch_write() class returns a context manager that handles the individual requests. What you get back slightly resembles a Table object, but it only has the put_item and delete_item requests.
DynamoDB's max batch size is 25, just like you've read. From the comments in the source code:
DynamoDB's maximum batch size is 25 items per request. If you attempt to put/delete more than that, the context manager will batch as many as it can up to that number, then flush them to DynamoDB & continue batching as more calls come in.
You can also read about migrating, batches in particular, from DynamoDB to DynamoDB2 here.
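To make the context-manager behavior concrete, here is a hedged sketch of the single-context-manager style combined with a bounded duplicate check (table and inq are the objects from the question; the size at which the seen-set is cleared is an arbitrary assumption):

import json

seen_pids = set()
with table.batch_write() as batch:
    while True:
        m = inq.read()
        pid = m['primaryId']
        if pid in seen_pids:
            continue  # drop near-simultaneous duplicates
        seen_pids.add(pid)
        if len(seen_pids) > 10000:
            seen_pids.clear()  # occasional repeats are OK, so bound the memory
        batch.put_item(data={"primaryId": pid,
                             "secondaryId": m['secondaryId'],
                             "message": json.dumps(m)})

The context manager flushes automatically every 25 put/delete calls and once more on exit, so there is no need to count to 25 or to create a new batch by hand.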

Two simultaneous Python loops with one result

I currently have a Python 2.6 piece of code that runs two loops simultaneously. The code uses the gps (gpsd) module and the scapy module. The first function (gpsInfo) contains a continual while loop grabbing GPS data from a GPS device and writing the location to the console. The second function (ClientDetect) also runs in a continual loop, sniffing the air for wifi data and printing it when specific packets are found. I've threaded these two loops, with the GPS one running as a background thread. What I want (and have been struggling for 5 days to work out) is that when the ClientDetect function finds a match and prints the respective info, the GPS coordinates at the moment of that hit are also printed to the console. At present my code doesn't seem to work.
import threading
import time
import gps
from scapy.all import sniff, Dot11

observedclients = []
p = ""  # Relates to the wifi packet
# stamgmtstypes (the management frame subtypes of interest) is defined elsewhere
session = gps.gps(mode=gps.WATCH_NEWSTYLE)

def gpsInfo():
    while True:
        session.poll()
        time.sleep(5)
        if gps.PACKET_SET:
            session.stream
            print session.fix.latitude + session.fix.longitude
            time.sleep(0.1)

def WifiDetect(p):
    if p.haslayer(Dot11):
        if p.type == 0 and p.subtype in stamgmtstypes:
            if p.addr2 not in observedclients:
                print p.addr2
                observedclients.append(p.addr2)

def ActivateWifiDetect():
    sniff(iface="mon0", prn=WifiDetect)

if __name__ == '__main__':
    t = threading.Thread(target=gpsInfo)
    t.start()
    ActivateWifiDetect()
Can anybody look at my code and see how best to grab the data simultaneously, so that when there is a wifi hit the GPS coordinates are printed too? Somebody mentioned implementing queuing, but I have researched this to no avail with regard to how to implement it.
As said, the aim of this code is to scan for both GPS and specific wifi packets and, when detected, print details relating to the packet and the location where it was detected.
A simple way of getting this is to store the gps location in a global variable, and have the wifi-sniffing thread read that global when it needs to print some data. The gotcha is that since two threads can access the global variable at the same time, you'll want to wrap it with a mutex:
last_location = (None, None)
location_mutex = threading.Lock()

def gpsInfo():
    global last_location
    while True:
        session.poll()
        time.sleep(5)
        if gps.PACKET_SET:
            session.stream
            with location_mutex:
                # DON'T print from inside the thread!
                last_location = session.fix.latitude, session.fix.longitude
            time.sleep(0.1)

def WifiDetect(p):
    if p.haslayer(Dot11):
        if p.type == 0 and p.subtype in stamgmtstypes:
            if p.addr2 not in observedclients:
                with location_mutex:
                    print p.addr2, last_location
                observedclients.append((p.addr2, last_location))
You need to tell Python you are using an external variable when you assign to it inside a function. The code should look like this:

def gpsInfo():
    global gps  # new line
    while True:
        session.poll()
        time.sleep(5)
        if gps.PACKET_SET:
            session.stream
            print session.fix.latitude + session.fix.longitude
            time.sleep(0.1)

def WifiDetect(p):
    global observedclients  # new line
    if p.haslayer(Dot11):
        if p.type == 0 and p.subtype in stamgmtstypes:
            if p.addr2 not in observedclients:
                print p.addr2
                observedclients.append(p.addr2)
I think you should be more specific about your goal.
If all you want to do is get GPS coords when a wifi network is sniffed, just do (pseudo-code):

while True:
    if networkSniffed():
        async_GetGPSCoords()

If you want a log of all GPS coords and want to match that up with wifi network data, just print all the data out along with timestamps and do post-processing to match up wifi networks with GPS coords via timestamp.
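On the queuing idea mentioned in the question, here is a hedged sketch (Python 2, to match the code above; session, stamgmtstypes, observedclients and the imports are assumed to be set up as in the question): the GPS thread publishes fixes to a Queue, and the sniff callback drains it down to the newest fix when it logs a hit.

import Queue

gps_fixes = Queue.Queue()

def gpsInfo():
    while True:
        session.poll()
        if gps.PACKET_SET:
            gps_fixes.put((session.fix.latitude, session.fix.longitude))
        time.sleep(0.1)

def WifiDetect(p):
    if p.haslayer(Dot11) and p.type == 0 and p.subtype in stamgmtstypes:
        if p.addr2 not in observedclients:
            fix = None
            while not gps_fixes.empty():  # drain the queue, keep the newest fix
                fix = gps_fixes.get()
            print p.addr2, fix
            observedclients.append(p.addr2)

Queue.Queue is thread-safe, so no explicit mutex is needed with this approach.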

Python GPS Module: Reading latest GPS Data

I have been trying to work with the standard GPS (gps.py) module in Python 2.6. It is supposed to act as a client and read GPS data from gpsd running on Ubuntu.
According to the documentation from the GPSD webpage on client design (GPSD Client Howto), I should be able to use the following code (slightly modified from the example) to get the latest GPS readings (lat/long is what I am mainly interested in):
from gps import *
session = gps() # assuming gpsd running with default options on port 2947
session.stream(WATCH_ENABLE|WATCH_NEWSTYLE)
report = session.next()
print report
If I repeatedly use next(), it gives me buffered values from the bottom of the queue (from when the session was started), not the LATEST GPS reading. Is there a way to get more recent values using this library? In a way, to seek the stream to the latest values?
Has anyone got a code example using this library to poll the GPS and get the value I am looking for?
Here is what I am trying to do:
Start the session
Wait for the user to call the gps_poll() method in my code
Inside this method, read the latest TPV (Time Position Velocity) report and return lat/long
Go back to waiting for the user to call gps_poll()
What you need to do is regularly poll session.next(). The issue here is that you're dealing with a serial interface: you get results in the order they were received. It's up to you to maintain a current_value that holds the latest retrieved value.
If you don't poll the session object, eventually your UART FIFO will fill up and you won't get any new values anyway.
Consider using a thread for this. Don't wait for the user to call gps_poll(); you should be polling, and when the user wants a new value they use get_current_value(), which returns current_value.
Off the top of my head it could be something as simple as this:
import threading
import time
from gps import *

class GpsPoller(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.session = gps(mode=WATCH_ENABLE)
        self.current_value = None

    def get_current_value(self):
        return self.current_value

    def run(self):
        try:
            while True:
                self.current_value = self.session.next()
                time.sleep(0.2)  # tune this, you might not get values that quickly
        except StopIteration:
            pass

if __name__ == '__main__':
    gpsp = GpsPoller()
    gpsp.start()
    # gpsp now polls every .2 seconds for new data, storing it in self.current_value
    while 1:
        # In the main thread, every 5 seconds print the current value
        time.sleep(5)
        print gpsp.get_current_value()
The above answers are very inefficient and overly complex for anyone using modern versions of gpsd and needing data at only specific times, instead of streaming.
Most GPSes send their position information at least once per second. Presumably since many GPS-based applications desire real-time updates, the vast majority of gpsd client examples I've seen use the above method of watching a stream from gpsd and receiving realtime updates (more or less as often as the gps sends them).
However, if (as in the OP's case) you don't need streaming information but just need the last-reported position whenever it's requested (i.e. via user interaction or some other event), there's a much more efficient and simpler method: let gpsd cache the latest position information, and query it when needed.
The gpsd JSON protocol has a ?POLL; request, which returns the most recent GPS information that gpsd has seen. Instead of having to iterate over the backlog of gps messages, and continually read new messages to avoid full buffers, you can send a ?WATCH={"enable":true} message at the start of the gpsd session, and then query the latest position information whenever you need it with ?POLL;. The response is a single JSON object containing the most recent information that gpsd has seen from the GPS.
If you're using Python3, the easiest way I've found is to use the gpsd-py3 package available on pypi. To connect to gpsd, get the latest position information, and print the current position:
import gpsd
gpsd.connect()
packet = gpsd.get_current()
print(packet.position())
You can repeat the gpsd.get_current() call whenever you want new position information, and behind the scenes the gpsd package will execute the ?POLL; call to gpsd and return an object representing the response.
Doing this with the built-in gps module isn't terribly straightforward, but there are a number of other Python clients available, and it's also rather trivial to do with anything that can perform socket communication, including this example using telnet:
$ telnet localhost 2947
Trying ::1...
Connected to localhost.
Escape character is '^]'.
{"class":"VERSION","release":"3.16","rev":"3.16","proto_major":3,"proto_minor":11}
?WATCH={"enable":true}
{"class":"DEVICES","devices":[{"class":"DEVICE","path":"/dev/pts/10","driver":"SiRF","activated":"2018-03-02T21:14:52.687Z","flags":1,"native":1,"bps":4800,"parity":"N","stopbits":1,"cycle":1.00}]}
{"class":"WATCH","enable":true,"json":false,"nmea":false,"raw":0,"scaled":false,"timing":false,"split24":false,"pps":false}
?POLL;
{"class":"POLL","time":"2018-03-02T21:14:54.873Z","active":1,"tpv":[{"class":"TPV","device":"/dev/pts/10","mode":3,"time":"2005-06-09T14:34:53.280Z","ept":0.005,"lat":46.498332203,"lon":7.567403907,"alt":1343.165,"epx":24.829,"epy":25.326,"epv":78.615,"track":10.3788,"speed":0.091,"climb":-0.085,"eps":50.65,"epc":157.23}],"gst":[{"class":"GST","device":"/dev/pts/10","time":"1970-01-01T00:00:00.000Z","rms":0.000,"major":0.000,"minor":0.000,"orient":0.000,"lat":0.000,"lon":0.000,"alt":0.000}],"sky":[{"class":"SKY","device":"/dev/pts/10","time":"2005-06-09T14:34:53.280Z","xdop":1.66,"ydop":1.69,"vdop":3.42,"tdop":3.05,"hdop":2.40,"gdop":5.15,"pdop":4.16,"satellites":[{"PRN":23,"el":6,"az":84,"ss":0,"used":false},{"PRN":28,"el":7,"az":160,"ss":0,"used":false},{"PRN":8,"el":66,"az":189,"ss":45,"used":true},{"PRN":29,"el":13,"az":273,"ss":0,"used":false},{"PRN":10,"el":51,"az":304,"ss":29,"used":true},{"PRN":4,"el":15,"az":199,"ss":36,"used":true},{"PRN":2,"el":34,"az":241,"ss":41,"used":true},{"PRN":27,"el":71,"az":76,"ss":42,"used":true}]}]}
?POLL;
{"class":"POLL","time":"2018-03-02T21:14:58.856Z","active":1,"tpv":[{"class":"TPV","device":"/dev/pts/10","mode":3,"time":"2005-06-09T14:34:53.280Z","ept":0.005,"lat":46.498332203,"lon":7.567403907,"alt":1343.165,"epx":24.829,"epy":25.326,"epv":78.615,"track":10.3788,"speed":0.091,"climb":-0.085,"eps":50.65,"epc":157.23}],"gst":[{"class":"GST","device":"/dev/pts/10","time":"1970-01-01T00:00:00.000Z","rms":0.000,"major":0.000,"minor":0.000,"orient":0.000,"lat":0.000,"lon":0.000,"alt":0.000}],"sky":[{"class":"SKY","device":"/dev/pts/10","time":"2005-06-09T14:34:53.280Z","xdop":1.66,"ydop":1.69,"vdop":3.42,"tdop":3.05,"hdop":2.40,"gdop":5.15,"pdop":4.16,"satellites":[{"PRN":23,"el":6,"az":84,"ss":0,"used":false},{"PRN":28,"el":7,"az":160,"ss":0,"used":false},{"PRN":8,"el":66,"az":189,"ss":45,"used":true},{"PRN":29,"el":13,"az":273,"ss":0,"used":false},{"PRN":10,"el":51,"az":304,"ss":29,"used":true},{"PRN":4,"el":15,"az":199,"ss":36,"used":true},{"PRN":2,"el":34,"az":241,"ss":41,"used":true},{"PRN":27,"el":71,"az":76,"ss":42,"used":true}]}]}
Adding my two cents.
For whatever reason my Raspberry Pi would continue to execute a thread and I'd have to hard reset the Pi.
So I've combined synthesizerpatel's answer and an answer I found on Dan Mandel's blog here.
My gps_poller class looks like this:
import os
from gps import *
from time import *
import time
import threading
class GpsPoller(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.session = gps(mode=WATCH_ENABLE)
self.current_value = None
self.running = True
def get_current_value(self):
return self.current_value
def run(self):
try:
while self.running:
self.current_value = self.session.next()
except StopIteration:
pass
And the code in use looks like this:
from gps_poll import *

if __name__ == '__main__':
    gpsp = GpsPoller()
    try:
        gpsp.start()
        while True:
            os.system('clear')
            report = gpsp.get_current_value()
            # print report
            try:
                if report.keys()[0] == 'epx':
                    print report['lat']
                    print report['lon']
                time.sleep(.5)
            except (AttributeError, KeyError):
                pass
            time.sleep(0.5)
    except (KeyboardInterrupt, SystemExit):
        print "\nKilling Thread.."
        gpsp.running = False
        gpsp.join()
        print "Done.\nExiting."
You can also find the code here: Here and Here
I know it's an old thread, but for everyone's reference: you can also use the pyembedded Python library for this.

pip install pyembedded

from pyembedded.gps_module.gps import GPS
import time

gps = GPS(port='COM3', baud_rate=9600)
while True:
    print(gps.get_lat_long())
    time.sleep(1)
https://pypi.org/project/pyembedded/
