Saving sensor data from Pi into csv file - python

I am setting up a Raspberry Pi Zero to sense air quality, temperature, and humidity. I am able to run the script to get the sensed data every 60 seconds. What should I do if I need to save the data to a CSV file at regular intervals?
import time
from datetime import datetime

import bme680
from pms5003 import PMS5003, ReadTimeoutError

try:
    sensor = bme680.BME680(bme680.I2C_ADDR_PRIMARY)
except IOError:
    sensor = bme680.BME680(bme680.I2C_ADDR_SECONDARY)

pms5003 = PMS5003()

sensor.set_humidity_oversample(bme680.OS_2X)
sensor.set_pressure_oversample(bme680.OS_4X)
sensor.set_temperature_oversample(bme680.OS_8X)
sensor.set_filter(bme680.FILTER_SIZE_3)

print('Data sensing')

try:
    while True:
        if sensor.get_sensor_data():
            output = datetime.now().strftime('%Y-%m-%d,%H:%M:%S,') + '{0:.2f} C,{1:.2f} hPa,{2:.3f} %RH'.format(
                sensor.data.temperature,
                sensor.data.pressure,
                sensor.data.humidity)
            try:
                readings = pms5003.read()  # take a fresh particulate reading each cycle
                print(output, readings)
            except ReadTimeoutError:
                print(output)
        time.sleep(60)
except KeyboardInterrupt:
    pass
I expect the program to save the data in a CSV file with headers like date, time, temperature, humidity, etc.

There are plenty of ways in Python to store that data in a CSV (or any other) file format. If you want more control and richer output, you can use pandas or NumPy.
But if you just want a simple solution, here it is:
import csv
import os

def store_data(date, time_str, temperature, pressure, humidity):
    row = [date, time_str, temperature, pressure, humidity]
    new_file = not os.path.isfile('sensor_output.csv')
    # the with statement closes the file automatically; no explicit close() needed
    with open('sensor_output.csv', 'a', newline='') as csv_file:
        writer = csv.writer(csv_file)
        if new_file:  # write the header row only once, when the file is created
            writer.writerow(['date', 'time', 'temperature', 'pressure', 'humidity'])
        writer.writerow(row)
Just pass your values to this function and Python will handle the rest. The file will be created automatically, and a new row will be appended on every call.
now = datetime.now()
store_data(now.strftime('%Y-%m-%d'), now.strftime('%H:%M:%S'),
           sensor.data.temperature, sensor.data.pressure, sensor.data.humidity)
This is how you can call the function in your case.
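For context, here is a minimal sketch of how this could slot into the sensor loop from the question (assuming the sensor setup and the store_data definition above):
import time
from datetime import datetime

try:
    while True:
        if sensor.get_sensor_data():
            now = datetime.now()
            store_data(now.strftime('%Y-%m-%d'), now.strftime('%H:%M:%S'),
                       sensor.data.temperature, sensor.data.pressure,
                       sensor.data.humidity)
        time.sleep(60)  # log one row per minute
except KeyboardInterrupt:
    pass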
#UPDATE:
If you are familiar with pandas and DataFrame then this answer might help you:
Writing a pandas DataFrame to CSV file
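If you go that route, a minimal sketch (the sample row below is made-up data): collect readings in a list of dicts, build a DataFrame, and append it to the CSV, writing the header only when the file is new:
import os
import pandas as pd

# made-up sample row; in practice, append one dict per sensor reading
rows = [{'date': '2021-01-01', 'time': '12:00:00',
         'temperature': 21.30, 'pressure': 1012.40, 'humidity': 45.200}]

df = pd.DataFrame(rows)
df.to_csv('sensor_output.csv', mode='a', index=False,
          header=not os.path.isfile('sensor_output.csv'))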

Related

Download csv file and convert to JSON

I would like to write a Python script that downloads a CSV file from a URL and then returns it as JSON. The problem is that I need to execute it as fast as possible. What is the best way to do it? I was thinking about something like this:
r_bytes = requests.get(URL).content
r = r_bytes.decode('utf8')
reader = csv.DictReader(io.StringIO(r))
json_data = json.dumps(list(reader))
What do you think? It doesn't look good to me, but I can't find any better way to solve this problem.
I tried comparing your conversion process with pandas, using this code:
import io
import pandas as pd
import requests
import json
import csv
import time
r_bytes = requests.get("https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2020-financial-year-provisional/Download-data/annual-enterprise-survey-2020-financial-year-provisional-csv.csv").content
print("finished download")
r = r_bytes.decode('utf8')
print("finished decode")
start_df_timestamp = time.time()
df = pd.read_csv(io.StringIO(r), sep=";")
result_df = json.dumps(df.to_dict('records'))
end_df_timestamp = time.time()
print("The df method took {d_t}s".format(d_t=end_df_timestamp-start_df_timestamp))
start_csv_reader_timestamp = time.time()
reader = csv.DictReader(io.StringIO(r))
result_csv_reader = json.dumps(list(reader))
end_csv_reader_timestamp = time.time()
print("The csv-reader method took {d_t}s".format(d_t=end_csv_reader_timestamp-start_csv_reader_timestamp))
and the result was:
finished download
finished decode
The df method took 0.200181245803833s
The csv-reader method took 0.3164360523223877s
This was using a random 37k-row CSV file, and I noticed that downloading it was by far the most time-intensive step. Even though the pandas functions were faster for me, you should probably profile your code to see whether the conversion really adds significantly to your runtime. :-)
PS: If you need to constantly monitor the CSV, and processing updates turns out to be time-intensive, you could use hashes to only process the file when it has changed since your last download.
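A minimal sketch of that hashing idea (the URL, the 60-second interval, and the process() helper are placeholders for your own conversion code):
import hashlib
import time

import requests

URL = 'https://example.com/data.csv'  # placeholder
last_hash = None

while True:
    content = requests.get(URL).content
    current_hash = hashlib.sha256(content).hexdigest()
    if current_hash != last_hash:  # only reprocess when the CSV actually changed
        last_hash = current_hash
        process(content)  # hypothetical: your CSV-to-JSON conversion
    time.sleep(60)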

How to read streaming data from .temp file and feed it to the function? [duplicate]

This question already has answers here:
Read from a log file as it's being written using python
(4 answers)
Closed 3 years ago.
I have streaming data from many sensors that is appended every second to a .temp file on my computer. I am trying to find a way to read this data sequentially as it arrives and feed it to my function, which should perform computations on the streaming data.
Is there any way to read this kind of data from the .tmp file and perform computations at the same instant the data arrives?
Maybe something like this will help. I created two Python files, one reader and one writer.
For example, my writer appends a JSON string with the key age to a text file every second:
import random
import time

with open("test.txt", "a") as t:
    while True:
        time.sleep(1)
        t.write('{"age": ' + str(random.randint(1, 100)) + '}\n')
        t.flush()
The reader now detects when the file changes, reads the newly written line, and calculates the median of the data so far:
import json
import statistics
import time

agearray = []

with open("test.txt", "rb") as t:
    current_filesize = t.seek(0, 2)  # seek to the end to get the current size
    while True:
        new_filesize = t.seek(0, 2)
        if new_filesize > current_filesize:
            print("file changed")
            print(new_filesize, current_filesize)
            t.seek(current_filesize)
            readsize = new_filesize - current_filesize
            data = t.read(readsize)
            # assumes exactly one JSON line was appended since the last poll
            myjson = json.loads(data.decode("utf-8"))
            print(myjson)
            agearray.append(myjson["age"])
            print(statistics.median(agearray))
            current_filesize = new_filesize
        time.sleep(0.1)  # avoid spinning at 100% CPU between polls
Granted, this isn't the best example, but it would be my approach.
You have to run the two files in two separate terminals, for example two cmd or Git Bash windows (or see the threading sketch below).
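As an alternative to two terminals, here is a minimal sketch running both loops in one process with threads (assuming the writer and reader code above has been wrapped in functions; writer() and reader() are hypothetical names):
import threading

# hypothetical wrappers around the writer and reader loops shown above
writer_thread = threading.Thread(target=writer, daemon=True)
reader_thread = threading.Thread(target=reader, daemon=True)

writer_thread.start()
reader_thread.start()

# keep the main thread alive while both loops run
writer_thread.join()
reader_thread.join()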

How to update instead of rewriting csv by python with pandas to fetch stock data?

I am an absolute noob in terms of programming.
I wish to fetch historical data for a list of stocks from Yahoo for data analysis.
I modified a script I found and got this:
# settings for importing the built-in datetime and date libraries
# and the external pandas_datareader library
import datetime
from datetime import timedelta

import pandas_datareader.data as web

# read ticker symbols from a file into a Python list
symbol = []
with open(r'E:\Google drive\Investment\Python Stock pick\Stocklist1.txt') as f:
    for line in f:
        symbol.append(line.strip())
# the with statement closes the file; the bare f.close was a no-op anyway

end = datetime.datetime.today()
start = end - timedelta(days=400)

# set path for the csv files
path_out = 'E:/Google drive/Investment/Python Stock pick/CSV/'

i = 0
while i < len(symbol):
    try:
        df = web.DataReader(symbol[i], 'yahoo', start, end)
        df.insert(0, 'Symbol', symbol[i])
        df = df.drop(['Adj Close'], axis=1)
        df.to_csv(path_out + symbol[i] + '.csv', header=True)
        print(i, symbol[i], 'has data stored to csv file')
    except Exception:  # a bare except would also swallow KeyboardInterrupt
        print('No information for ticker # and symbol:')
        print(i, symbol[i])
    i = i + 1
I run the script every day and it fetches the historical stock data.
It replaces the entire csv file each time, always overwriting the old data with the new.
Is there any way for the script to just add the new data to the csv file?
Thanks a lot in advance. I am all new to the programming world and have no idea how to do this.
I think you need to open the file in append mode ('a+') instead. Otherwise the file keeps overwriting itself; that is what happened to me.
You have to pass the mode parameter 'a':
with open(r'E:\Google drive\Investment\PythonStockpic\Stocklist1.txt', 'a') as f:
    f.write(line.strip())
See: append new row to old csv file python
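For the pandas side specifically, here is a minimal sketch (reusing df, path_out, and symbol[i] from the script above): to_csv also accepts mode='a', and you can write the header only when the file does not exist yet:
import os

out_file = path_out + symbol[i] + '.csv'
# append new rows; write the header only when the file is first created
df.to_csv(out_file, mode='a', header=not os.path.isfile(out_file))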

Trying to understand how to use Exception Handling with my code

I'm reading in stock data from Yahoo associated with the "tickers" (stock codes) provided to me in a CSV file. However, some of the stock codes are actually not available on Yahoo, so I was wondering if there is a way to account for this in my code below via exception handling.
import csv
import datetime

import pandas
import pandas.io.data as web  # note: the old pandas.io.data API

f1 = open(r'C:\Users\Username\Documents\Programming\Financialdata.csv')  # enter the location of the file
c1 = csv.reader(f1)

tickers = []
for row in c1:  # reading tickers from the csv file
    tickers.append(row)

start = datetime.datetime(2012, 1, 1)
end = datetime.datetime(2013, 1, 1)

l = []; m = []; tickernew = []
i = 0; j = 0; k = 0; z = []

for tick in tickers[0]:
    f = web.DataReader(tick, 'yahoo', start, end)
    if len(f) == 250:  # checking if the stock was traded for 250 days
        tickernew.append(tick)  # new ticker list to keep track of the new index numbers
        k = k + 1  # k keeps track of the number of new tickers
        for i in range(0, len(f) - 1):
            m.append(f['Adj Close'][i + 1] / f['Adj Close'][i])  # calculating returns
Absolutely. Your first step should be to look at the traceback you get when your program crashes because of the invalid input you mention.
Then simply wrap the line of code that crashes in a try/except. Good Python style encourages you to be specific about what type of exception you're handling. So, for example, if the crash raises a ValueError, you'll want to do this:
try:
    bad_line_of_code
except ValueError:
    handle_the_issue
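Applied to the code in the question, a minimal sketch; the exact exception type DataReader raises for an unknown ticker depends on your pandas version, so treat IOError here as an assumption and adjust it to what your traceback shows:
for tick in tickers[0]:
    try:
        f = web.DataReader(tick, 'yahoo', start, end)
    except IOError:  # assumption: replace with the exception from your traceback
        print('No data for ticker:', tick)
        continue  # skip tickers Yahoo does not know
    # ... rest of the loop body unchanged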

Parsing CSV file into Python into a contiguous block

I am trying to load time series data (Apple's stock price, 3000x5) into Python:
date, open, high, low, close. I am running the following code in Spyder.
import csv

import matplotlib.pyplot as plt

datafile = open(r'C:\Users\Riemmman\Desktop\SAMPLE_AAPL_DATA_FOR_Python.csv')
datareader = csv.reader(datafile)
data = []
for row in datareader:
    data.append(row)
But 'data' remains a plain list of rows. I want it separated into a contiguous block with the headers on top and the data in its respective columns, with the date column at the far left, the way one would see the data in R or MATLAB. What am I missing? Thank you for your help.
You want to transpose the data: rows to columns. The zip() function, when applied to all rows, does this for you. Use *datareader to have Python pull in all the rows and apply them as separate arguments to zip():
filename = r'C:\Users\Riemmman\Desktop\SAMPLE_AAPL_DATA_FOR_Python.csv'
with open(filename, 'rb') as datafile:
    datareader = csv.reader(datafile)
    columns = zip(*datareader)
This also uses some more best practices:
Using the file as a context manager with the with statement ensures it is closed automatically
Opening the file in binary mode lets the csv module manage line endings correctly (this applies to Python 2; in Python 3, open in text mode with newline='' instead)
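Building on that, a short usage sketch (assuming the first row of the file holds the headers; 'Date' is a hypothetical header name, adjust it to your file):
import csv

filename = r'C:\Users\Riemmman\Desktop\SAMPLE_AAPL_DATA_FOR_Python.csv'
with open(filename, newline='') as datafile:  # Python 3 style
    datareader = csv.reader(datafile)
    columns = list(zip(*datareader))  # materialize the transposed rows

# map each header (the first entry of each column) to its values
by_header = {col[0]: list(col[1:]) for col in columns}
print(by_header['Date'][:5])  # hypothetical header name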
