RPi0 simply hangs after a few weeks - python

I have an RPi0 set up in my crawl space to measure and record temperature/humidity over time. Simply put, a Python program runs via cron every few minutes, collects the temp/humidity, and sends the readings to a cloud-based database ("one and one") through a simple HTTP page that runs a small PHP program to add a row to the MySQL table. No feedback (at least none on purpose...). The db is updated as expected for several weeks (long enough for me to be lulled into a false sense of security...). Then it stops, and the RPi0 can no longer be reached over SSH.
I unplug it, bring it up to my office, plug it in, and it starts working again, no problem... There are no disk-full symptoms (total usage 19%). Spurious Python messages show up, but they only mention type-conversion warnings.
Rinse and repeat.
Ignore the INSERT setup for the local database. I've commented out the actual command execution because I no longer use the local db.
#!/usr/bin/python
import sys
import time
import urllib2

import Adafruit_DHT
import MySQLdb
from Adafruit_BME280 import *
from forecastiopy import *

# BMP working, DHT not so much
sensor = BME280(address=0x76, t_mode=BME280_OSAMPLE_8, p_mode=BME280_OSAMPLE_8, h_mode=BME280_OSAMPLE_8)

# Local database connection, no longer used (see note above)
# conn = MySQLdb.connect(host="192.168.2.204", user="jim", passwd="xxxx", db="EnviroLogger")
# c = conn.cursor()

# DHT read is disabled; the placeholders keep the INSERT string buildable
humidity = 0
temperaturec = 0
# humidity, temperaturec = Adafruit_DHT.read_retry(11, 4)
temperaturef = ((temperaturec * 9.0) / 5.0) + 32.0
datatime = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())
dhtvalues = "(null,'DHT-8200-Crawl'," + str(temperaturef) + "," + str(humidity) + ",null,'" + datatime + "')"

time.sleep(1)

# Read the BME280: Fahrenheit for temperature, hectopascals for pressure
BMPtemperaturef = ((sensor.read_temperature() * 9.0) / 5.0) + 32.0
hectopascals = sensor.read_pressure() / 100.0
BMPhumidity = sensor.read_humidity()
bmpvalues = "(null,'BMP-8200-Crawl'," + str(BMPtemperaturef) + "," + str(BMPhumidity) + "," + str(hectopascals) + ",'" + datatime + "');"
finalSQLstring = "INSERT INTO ResDataLog(ID,Location,Temperature, Humidity, Pressure, RecDate) VALUES " + dhtvalues + "," + bmpvalues
# c.execute(finalSQLstring)
# conn.commit()

# Get weather info from DarkSky
MyLatLong = [34.985928, -80.767389]
DarkSkyKey = '587336fab8f4f5e8766aee23ca5cfee79f390943221acedddwerreffafde'
fio = ForecastIO.ForecastIO(DarkSkyKey, latitude=MyLatLong[0], longitude=MyLatLong[1])
current = FIOCurrently.FIOCurrently(fio)
ambienttemp = current.temperature
ambienthumidity = current.humidity
ambientpressure = current.pressure

# Report readings to the remote PHP endpoint (note: no timeout is set here)
url = "http://telemetry.mywebhost.com/add_data.php?loc=" + "BMP-8200-Crawl" + "&temp=" + str(BMPtemperaturef) + "&hum=" + str(BMPhumidity) + "&pr=" + str(hectopascals) + "&atemp=" + str(ambienttemp) + "&ahum=" + str(ambienthumidity) + "&apress=" + str(ambientpressure)
urllib2.urlopen(url)
How can I somehow capture what happens here and save it somewhere just before it dies?
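A minimal sketch of one way to do that, assuming the script stays on Python 2 as posted: write a log line at each step and bound the network call with a timeout, so the last entry in the log shows where the script was when the Pi locked up. The log path and the 30-second timeout are illustrative assumptions, not part of the original script.
#!/usr/bin/python
# Sketch: persistent logging plus a bounded network call. The log file
# path and the timeout value are assumptions for illustration.
import logging
import urllib2

logging.basicConfig(filename='/home/pi/envirolog.log',
                    format='%(asctime)s %(levelname)s %(message)s',
                    level=logging.INFO)

url = "http://telemetry.mywebhost.com/add_data.php"  # built as in the script above

try:
    logging.info("starting run")
    # ... sensor reads and URL construction from the script above go here ...
    logging.info("posting to %s", url)
    urllib2.urlopen(url, timeout=30)  # never block forever on the network
    logging.info("post complete")
except Exception:
    logging.exception("run failed")   # full traceback is preserved on disk
    raise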

Related

Python-CAN script receiving half of the expected CAN messages

I have written a Python script using the python-can library which records received CAN messages at a 1-second rate for 5 minutes, then logs all the messages to a file and exits. The computer has a CAN module connected to the CAN bus (the other device on the bus is an engine), and I communicate with it using the SocketCAN interface.
The test engine system this computer is connected to sends around 114 messages per second at what I believe is a 250 kbit/s bit rate. I am expecting to see 114 messages recorded in the file for each 1-second period, but instead I'm seeing about half that count (~65 messages).
Could it be that the engine's ECU is set to a 500 kbit/s bit rate, and that's why I'm not getting the count I expect? I would think there would be no communication at all if the bit rates did not match, but I do not have physical access to the system because I deploy the script remotely through an OTA update rather than running it myself (the device is headless, but is set up to run the script on startup). I just see the log files that are generated.
Here is the Python code:
(Note: I also have code that parses the received messages into their constituent signals, but I did not include it here because it runs at the end and is not relevant.)
import datetime  # imports implied by the snippet
import can

class logging:
    def __init__(self):
        # Dictionary to hold received CAN messages
        self.message_Dict = {}
        # List to hold queued dictionaries
        self.message_Queue = []
        # A "filters" object is also created here, but I did not include it
        # I have verified the filters are correct on my test system

    def main(self):
        # Record the current time
        currentTime = datetime.datetime.now()
        # Record the overall start time
        startTime = datetime.datetime.now()
        # Record the iteration start time
        lastIterationStartTime = currentTime
        # Create the CanBus that will be used to send and receive CAN msgs from the MCU
        canbus = can.interfaces.socketcan.SocketcanBus(channel='can0', bitrate=250000)
        # These filters are set up correctly, because all the messages come through
        # on my test system, but I did not include them here
        canbus.set_filters(self.Filters)
        # Creating Listener filters and notifier
        listener = can.Listener()
        # Main loop
        while 1:
            # Receive one message (blocks until a frame arrives)
            msg2 = canbus.recv()
            # Record the current time
            currentTime = datetime.datetime.now()
            # If a valid message is detected
            if msg2 is not None:
                if len(msg2.data) > 0:
                    try:
                        # Save the message data into a queue (will be processed later)
                        self.message_Dict[msg2.arbitration_id] = msg2.data
                    except Exception:
                        print("Error in storing CAN message")
            # If 1 second has passed since the last iteration,
            # add the dictionary to a new spot in the queue
            if (currentTime - lastIterationStartTime) >= datetime.timedelta(seconds=1):
                # Add the dictionary with messages into the queue for later processing
                messageDict_Copy = self.message_Dict.copy()
                self.message_Queue.append(messageDict_Copy)
                print("Number of messages in dictionary: " + str(len(self.message_Dict))
                      + " Number of reports in queue: " + str(len(self.message_Queue)))
                # Clear the dictionary for new messages for every iteration
                self.message_Dict.clear()
                # Record the reset time
                lastIterationStartTime = datetime.datetime.now()
            # Once 5 minutes of data has been recorded, write to the file
            if (currentTime - startTime) > datetime.timedelta(minutes=5):
                # Here is where I write the data to a file. This is too long to include
                # Clear the queue
                self.message_Queue = []
                # Clear the dictionary for new messages for every iteration
                self.message_Dict.clear()

# When the script is run, execute the main method
if __name__ == '__main__':
    mainClass = logging()
    mainClass.main()
I appreciate any ideas or input you have. Thank you
In my experience, most engine ECUs use 250 kbit/s, but the newest ones use 500 kbit/s. I would suggest you try both.
Also, messages only appear on the bus if something actually sends them. It sounds silly, but take a truck, for example: if you don't step on the accelerator, the messages referring to the accelerator will not appear. So you may need to check that all components are active as you expect. The can-utils package has a CAN sniffer that can also help with this.
I suggest you use can-utils here. It is a powerful tool for CAN analysis.
Did you try looping over the possible bit rates? That may also help you find the right one.
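Separately, if the receive loop itself turns out to be the bottleneck rather than the bit rate, python-can can drain the socket on a background thread. A minimal sketch using a Notifier with a BufferedReader (channel and bitrate carried over from the question; the 1-second timeout is an illustrative choice):
import can

# Sketch: let python-can drain the socket on a background thread so slow
# per-message processing in the main loop cannot cause drops. Channel and
# bitrate are taken from the question; filters are omitted as in the post.
bus = can.interfaces.socketcan.SocketcanBus(channel='can0', bitrate=250000)
reader = can.BufferedReader()           # queues every received frame
notifier = can.Notifier(bus, [reader])  # background thread feeding the reader

try:
    while True:
        msg = reader.get_message(timeout=1.0)  # None if nothing arrived
        if msg is not None:
            # process at leisure; frames keep queueing in the background
            print(msg.arbitration_id, msg.data)
finally:
    notifier.stop()
    bus.shutdown()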

is it possible to pass data from one python program to another python program? [duplicate]

Is it possible -- other than by using something like a .txt/dummy file -- to pass a value from one program to another?
I have a program that uses a .txt file to pass a starting value to another program. I update the value in the file in between starting the program each time I run it (ten times, essentially simultaneously). Doing this is fine, but I would like to have the 'child' program report back to the 'mother' program when it is finished, and also report back what files it found to download.
Is it possible to do this without using eleven files to do it (that's one for each instance of the 'child' to 'mother' reporting, and one file for the 'mother' to 'child')? I am talking about completely separate programs, not classes or functions or anything like that.
To operate efficiently, and not be waiting around for hours for everything to complete, I need the 'child' program to run ten times and get things done MUCH faster. So I run the child program ten times and give each instance a separate range to check through.
Both programs run fine, but I would like to get them to run and report back and forth to each other, hopefully without using file 'transmission' to accomplish the task, especially on the child-to-mother side of the data transfer.
'Mother' program...currently
import os
import sys
import subprocess
import time

os.chdir('/media/')

# find highest downloaded video
Hival = open("Highest.txt", "r")
Histr = Hival.read()
Hival.close()
HiNext = str(int(Histr) + 1)

# setup download #1
NextVal = open("NextVal.txt", "w")
NextVal.write(HiNext)
NextVal.close()

# call download #1
procs = []
proc = subprocess.Popen(['python', 'test.py'])
procs.append(proc)
time.sleep(2)

# setup download #2-11
Histr2 = int(Histr) / 10000
Histr2 = Histr2 + 1
for i in range(10):
    Hiint = str(Histr2) + "0000"
    NextVal = open("NextVal.txt", "w")
    NextVal.write(Hiint)
    NextVal.close()
    proc = subprocess.Popen(['python', 'test.py'])
    procs.append(proc)
    time.sleep(2)
    Histr2 = Histr2 + 1

for proc in procs:
    proc.wait()
'Child' program
import urllib
import os
from Tkinter import *
import time

root = Tk()
root.title("Audiodownloader")
root.geometry("200x200")
app = Frame(root)
app.grid()

os.chdir('/media/')
Fileval = open('NextVal.txt', 'r')
Fileupdate = Fileval.read()
Fileval.close()

Fileupdate = int(Fileupdate)
Filect = Fileupdate / 10000
Filect2 = str(Filect) + "0009"
Filecount = int(Filect2)

while Fileupdate <= Filecount:
    root.title(Fileupdate)
    url = 'http://www.yourfavoritewebsite.com/audio/encoded/' + str(Fileupdate) + '.mp3'
    urllib.urlretrieve(url, str(Fileupdate) + '.mp3')
    statinfo = os.stat(str(Fileupdate) + '.mp3')
    if statinfo.st_size < 10000L:
        os.remove(str(Fileupdate) + '.mp3')
    time.sleep(.01)
    Fileupdate = Fileupdate + 1
    root.update_idletasks()
I'm trying to convert the original VB6 program over to Linux and make it much easier to use at the same time, hence the missing .mainloop. This was my first real attempt at anything in Python at all, hence the lack of defs or classes. I'm trying to come back and finish this up after 1.5 months of doing nothing with it, mostly due to not knowing how. In researching it a little while ago I found this is WAY over my head. I haven't ever done anything with threads/sockets/client/server interaction, so I'm purely an idiot in this case. Google anything on it and I just get brought right back here to Stack Overflow.
Yes, I want 10 copies of the program running at the same time, to save time. I could do without the GUI interface if it were possible for the child to report back to the 'mother', so the mother could print on screen the current value being searched, as well as whether the child has finished and which files it downloaded successfully (versus downloaded and then erased for being too small). I would use the successful-download information to update Highest.txt for the next run.
I think this may clarify things MUCH better... that, or I don't understand the nature of using server/client interaction :) The only reason time.sleep is in the program was to try to make sure the files could be written before the next instance of the child program ran. I didn't know what kind of timing issues I might run into, so I included those lines for safety.
This can be implemented with a simple client/server topology using the multiprocessing library. Using your mother/child terminology:
server.py
from multiprocessing.connection import Listener

# client
def child(conn):
    while True:
        msg = conn.recv()
        # this just echoes the value back, replace with your custom logic
        conn.send(msg)

# server
def mother(address):
    serv = Listener(address)
    while True:
        client = serv.accept()
        child(client)

mother(('', 5000))
client.py
from multiprocessing.connection import Client
c = Client(('localhost', 5000))
c.send('hello')
print('Got:', c.recv())
c.send({'a': 123})
print('Got:', c.recv())
Run with
$ python server.py
$ python client.py
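One caveat: mother() as written services a single connection at a time, so a second child blocks until the first disconnects. A minimal sketch of one way around that, handing each accepted connection to its own thread (the child() echo handler is the same one from server.py above):
import threading
from multiprocessing.connection import Listener

# Sketch: serve each child on its own thread so ten children can
# connect simultaneously. child() is the echo handler from server.py.
def child(conn):
    while True:
        msg = conn.recv()
        conn.send(msg)

def mother(address):
    serv = Listener(address)
    while True:
        client = serv.accept()
        threading.Thread(target=child, args=(client,)).start()

mother(('', 5000))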
When you talk about using a txt file to pass information between programs, we first need to know what language you're using.
In my experience with Java and Python, both are viable, though laborious depending on the amount of information you want to pass.
In Python, you can use the standard library for reading and writing txt files, and to schedule execution you can use APScheduler.
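For the scheduling part of that suggestion, a minimal APScheduler sketch (the job body and the 10-minute interval are illustrative assumptions, not from the original answer):
from apscheduler.schedulers.blocking import BlockingScheduler

# Sketch of the APScheduler suggestion above; the job and the interval
# are placeholders for illustration.
def poll_downloads():
    print("checking NextVal.txt for new work")

sched = BlockingScheduler()
sched.add_job(poll_downloads, 'interval', minutes=10)
sched.start()  # blocks, running the job every 10 minutes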

Serial communication between Teensy board and system using Python

I am fairly new to the field of IoT. I am setting up a sensor with a Teensy, reading its data and transmitting it over serial to a system where, using Python, I read the data and store it in a database.
The problem I am facing: when I check my program using the Arduino serial monitor I get an insane sample speed, like 10k readings done in 40 milliseconds, but when I read the same output using Python it does not give me more than 1000 readings per second, and that is without the database code; with it, it only reads about 200 samples per second. Is there any way I can increase this sample rate, or do I have to set extra parameters for communication over serial?
Here is my code for the Teensy:
int i;
elapsedMillis sinceTest1;

void setup()
{
  Serial.begin(2000000); // USB is always 12 Mbit/sec
  i = 0;
  delay(5000);
  Serial.println("Setup Called");
  Serial.flush();
}

void loop()
{
  if (i == 0 || i == 500000)
  {
    Serial.println(sinceTest1);
  }
  Serial.println(i);
  //Serial.println(Serial.baud());
  i++;
}
For Python:
import serial
import pymysql
from datetime import datetime
import time
import signal
import sys

class ReadLine:
    def __init__(self, s):
        self.buf = bytearray()
        self.s = s

    def readline(self):
        i = self.buf.find(b"\n")
        if i >= 0:
            r = self.buf[:i+1]
            self.buf = self.buf[i+1:]
            return r
        while True:
            i = max(1, min(2048, self.s.in_waiting))
            data = self.s.read(i)
            i = data.find(b"\n")
            if i >= 0:
                r = self.buf + data[:i+1]
                self.buf[0:] = data[i+1:]
                return r
            else:
                self.buf.extend(data)

ser = serial.Serial(
    port='COM5',
    baudrate=2000000,
    #baudrate=9600,
    #parity=serial.PARITY_NONE,
    #stopbits=serial.STOPBITS_ONE,
    #bytesize=serial.EIGHTBITS,
    #timeout=0
)
print("connected to: " + ser.portstr)

count = 1
# this will store the line
line = []

# database connection
connection = pymysql.connect(host="localhost", user="root", passwd="", database="tempDatabase")
cursor = connection.cursor()

checker = 0
rl = ReadLine(ser)

while True:
    # note: rebinding 'time' here shadows the time module imported above
    time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    print(time)
    print(checker)
    print(rl.readline())
    insert1 = ("INSERT INTO tempinfo(value,test,counter) VALUES('{}','{}','{}');".format(33.5, time, checker))  #.format(data[0])
    insert2 = ("INSERT INTO urlsync(textvalue,sync) VALUES('http://www.myname.com/value.php?&value={}&time={}',0);".format(33.5, time))  #.format(data[0])
    cursor.execute(insert1)
    cursor.execute(insert2)
    connection.commit()
    checker += 1

connection.close()
time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print(time)
ser.close()
P.S.: 1000 samples per second is the rate I get when not using the database commands; including them, I get only around 250 samples per second.
Any help or suggestion is appreciated, thank you.
First off, great question. The issue you are facing is loaded with learning opportunities.
Let's go one by one:
-You are now in a position to understand the difference between a microcontroller and a computer. A microcontroller in its most basic form (running bare-metal code, even inefficient code, like on an Arduino) will do just one thing, and particularly when that thing is hardware-related (like reading or writing to UARTs) it will do it very efficiently. On a desktop computer, on the other hand, you have layer upon layer of tasks running simultaneously (operating system background tasks, updating the screen, and whatnot). With so many things happening at the same time, and without established priorities, it is very difficult to predict exactly what will happen and when. So it's not only your Python code that is running; many more things will come up and interrupt the flow of your user task. If you are hoping to read data from the UART buffer at a stable (or at least predictable) speed, that will never happen with the architecture you are using at the moment.
-Even if you manage to strip down your OS to the bare minimum, kill all processes, and work from a terminal with no graphics whatsoever... you still have to deal with the uncertainty of what you are doing in your own Python code (that's why you see better performance with the Arduino serial monitor, which does nothing other than pull data out of the buffer). In your Python code, you are sequentially reading from the port, trying to find a particular character (line feed), and then attaching the data you read to a list. If you want to improve performance, you need to either just read data and store it for offline processing, or look at multithreading: if one thread of your program is dedicated to only reading from the buffer while further processing happens on a separate thread, you could significantly improve throughput, particularly if you set priorities right (see the sketch after this answer).
-Last, but actually most importantly, you should ask yourself: do I really need to read data from my sensor at 2 Mbit/s? If the answer is yes, and your sensor is not a video camera, I'm afraid you need to take a step back and look at the following concepts: sensor bandwidth and dynamic response. After you do that, the next question is: how fast is your sensor updating its output, and why? Is that update rate meaningful? I can give you a couple of references here. First, imagine you have a temperature sensor to read and record the temperature in an oven. Does it make sense to sample the sensor at 1 MHz (1 million readings per second) if the temperature in the oven is changing at a rate of 10 degrees C per minute, or even 100 degrees per second? Is your sensor even able to react that fast (that's where its dynamic response comes into play)? My guess: probably not. Many industrial devices integrate dozens of sensors to control critical processes and send all data through a 1.5 Mbit/s link (pretty standard for Profibus, for instance).
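As referenced above, a minimal sketch of the split-thread layout (assuming Python 3; 'rl' is the ReadLine helper from the question, and the batch size of 100 is an illustrative assumption, not a recommendation from the answer):
import queue
import threading

# Sketch: one thread does nothing but drain the serial port; the main
# thread does the slow work (parsing, database inserts) in batches.
# 'rl' is the ReadLine helper from the question; 100 is illustrative.
line_q = queue.Queue()

def reader():
    while True:
        line_q.put(rl.readline())  # sole job: keep the UART buffer empty

threading.Thread(target=reader, daemon=True).start()

batch = []
while True:
    batch.append(line_q.get())
    if len(batch) >= 100:
        # parse the batch and do the INSERTs here, then a single commit
        batch = []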

Recording the time into a text file

I am trying to test how long my battery runs on my Raspberry Pi. All I need to do is run a while loop until the battery dies, but I need to record the times that it starts and ends, and I would like to save that data into a txt file. My current code is as follows:
import time

file = open('time', 'w')
x = 1
while x == 1:
    from datetime import datetime
    now = datetime.now()
    file.write(now)
file.close()
If I just print the results in Python I get a result, and the current code creates the file, but nothing is saved in it. Any and all help would be appreciated.
Thank you for your time
You have an infinite loop which just gets the current time over and over again ... This loop won't break (even if your battery dies ...).
At some point, you need to break the loop, or the condition in the while needs to become False, e.g.:
from datetime import datetime

while x == 1:
    now = datetime.now()
    break

or

from datetime import datetime

while x == 1:
    now = datetime.now()
    x += 1
Generally speaking, you'll want to look in your system logs for when the computer decided to start up and when it decided to shut down due to lack of battery power ...
I would recommend code that looks like the following:
import time
from datetime import datetime as dt

initTime = dt.now()
while True:
    with open('time.txt', 'a') as f:
        # newline added so each elapsed-time entry lands on its own line
        f.write(str(dt.now() - initTime) + '\n')
    time.sleep(1)
A couple of differences. First, the file is always opened in append mode and closed after each write, so every entry is flushed to disk immediately. Next, it always updates the file with the amount of time elapsed. So even if your Raspberry Pi shuts down, you should be able to recover the last recorded uptime.
No need to record these times yourself. The Pi (and almost any Linux distribution) writes those events to the /var/log/wtmp file (more about that file here). The last command can retrieve those events (also see the manual entry for last). Try the following:
last -FRx | grep -e 'boot\|shutdown'
Explanation of the flags:
-F prints the full date and times - which is handy in your case, because that's what you wanted to know.
-R suppresses the hostname in the output. You don't need that.
-x shows shutdown and runlevel changes - now that's what you wanted to know.
Finally the grep statement filters out the boot or shutdown messages.

Python background process not writing to MySQL db

I apologize if this question has been asked before, but I was unable to find any record of this issue. Full disclosure: I've only been using Python for a few months and MySQL for about 1 month.
I've written a short Python script on a Raspberry Pi (running Raspbian Wheezy) that sniffs wifi packets and writes signal strength info to a MySQL database. I've also created a small PHP file that grabs the info from the database and presents it in a table (pretty basic). All components of this little system work exactly as planned, however...
When I run the Python script in the background (sudo python my_script.py &) it does not appear to update the MySQL database with new readings. Yet it also throws no errors and outputs to the console without a problem (I have a line printed each time a wifi packet is intercepted and its RSSI is added to the database). I encountered the same problem when starting the script at boot up using the /etc/rc.local file. No errors, but no updates in the database either.
Is the problem on the Python side of things? A MySQL setting that I need to change? Is there something else I'm completely missing?
EDITED TO ADD CODE:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import MySQLdb as mdb
import sys
from scapy.all import *

# Create connection to MySQL database called 'DATABASE' at localhost
# with username 'USERNAME' and password 'PASSWORD'
HOST = "localhost"
USER = "USERNAME"
PW = "PASSWORD"
DB = "DATABASE"
con = mdb.connect(HOST, USER, PW, DB)

# set interface that will be used to monitor wifi
interface = "mon0"

with con:
    cur = con.cursor()

# This function will be called every time a packet is intercepted. Packet is passed to function as 'p'
def sniffmgmt(p):
    # These are the 3 types of management frames that are sent exclusively by clients
    # (allows us to weed out packets sent by APs)
    stamgmtstypes = (0, 2, 4)
    if p.haslayer(Dot11):
        # Make sure packet is a client-only type
        if p.subtype in stamgmtstypes:
            # Calculate RSSI
            sig_str = -(256 - (ord(p.notdecoded[-4:-3])))
            # Update database with most recent detection
            cur.execute("REPLACE INTO attendance(mac_address, rssi, timestamp) VALUES('%s', %d, NOW())" % (p.addr2, sig_str))
            # Print MAC address that was detected (debugging only)
            print "MAC Address (%s) has been logged" % (p.addr2)

# Tell scapy what interface to use (see above) and which function to call when a packet is
# intercepted. lfilter limits packets to management frames.
sniff(iface=interface, prn=sniffmgmt, store=0, lfilter=lambda x: x.type == 0)
Thanks!
Kyle
P.S. For those who are wondering: this is not intended for malicious use, it is being used to investigate product tracking techniques at our warehouse.
I expect you're not calling commit on the db transaction.
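For illustration, a minimal sketch of that fix applied to the handler above; the switch to a parameterized query is an extra assumption on my part, not something the answer prescribes:
def sniffmgmt(p):
    stamgmtstypes = (0, 2, 4)
    if p.haslayer(Dot11) and p.subtype in stamgmtstypes:
        sig_str = -(256 - ord(p.notdecoded[-4:-3]))
        # a parameterized query also sidesteps quoting issues
        cur.execute(
            "REPLACE INTO attendance(mac_address, rssi, timestamp) "
            "VALUES (%s, %s, NOW())",
            (p.addr2, sig_str))
        con.commit()  # flush the transaction so the row is visible immediately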
