How to plot and analyse CAN data directly in Python?

I need to analyse a lot of CAN data and want to use Python for that. I recently came across the python-can library and saw that it is possible to convert .blf files to .asc files.
The post "How do I convert .blf data of CAN to .asc using python" helped a lot.
Can Tranbi (https://stackoverflow.com/users/13525512/tranbi) or anyone else help me with some example code?
This is the part I have done till now:
import can
import os

fileList = os.listdir(r".\inputFiles")  # raw strings avoid backslash-escape surprises on Windows
for fileName in fileList:
    with open(os.path.join(r".\inputFiles", fileName), 'rb') as f_in:
        log_in = can.io.BLFReader(f_in)
        with open(os.path.join(r".\outputFiles", os.path.splitext(fileName)[0] + '.asc'), 'w') as f_out:
            log_out = can.io.ASCWriter(f_out)
            for msg in log_in:
                log_out.on_message_received(msg)
            log_out.stop()
I need to either read the data from the .blf files sequentially, or convert them to .asc, correct the timestamps using the file names, combine the files, convert them to .csv and then analyse them in Python. It would really help if I could find a shorter route.
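A possible shorter route (a sketch of my own, not from the original post): since can.io.BLFReader yields can.Message objects directly, the .asc step can be skipped entirely by collecting the messages into a pandas DataFrame. The helper below accepts any iterable of messages plus a per-file time offset (e.g. recovered from the file name); the paths in the usage note are placeholders.

```python
# Sketch: read CAN messages straight into a pandas DataFrame, skipping .asc.
import pandas as pd

def messages_to_dataframe(messages, time_offset=0.0):
    """Collect CAN messages into a DataFrame.

    messages   : any iterable of can.Message-like objects
    time_offset: seconds added to every timestamp, e.g. a start time
                 recovered from the log file's name
    """
    rows = [{
        "timestamp": msg.timestamp + time_offset,
        "arbitration_id": msg.arbitration_id,
        "dlc": msg.dlc,
        "data": bytes(msg.data).hex(),   # payload as a hex string
    } for msg in messages]
    return pd.DataFrame(rows)

# Hypothetical usage (paths and offset are placeholders):
# import can
# with open(r".\inputFiles\log1.blf", "rb") as f:
#     df = messages_to_dataframe(can.io.BLFReader(f), time_offset=0.0)
# df.to_csv("log1.csv", index=False)
```

From there, combining files is a pd.concat of the per-file DataFrames.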

Related

Reading a .csv file in a CombiTimeTable in Dymola using SimulateExtendedModel() function through Python

I am trying to establish a Dymola-Python interface where a .csv file is read and passed to the CombiTimeTable in Dymola, but I cannot find any help on how to do that. Even with a Dymola script/command I wasn't able to pass in my .csv file and a file name.
Does anyone have any idea how I could do this?
P.S. I have also tried the ExternData package.
I have used

    Modelica.Blocks.Sources.CombiTimeTable CombiTimeTable(tableOnFile=true, tableName="", fileName="")

and

    inner parameter ExternData
Update
Found an alternative approach using simulateExtendedModel():

    dymola.simulateExtendedModel("Model", startTime=0, stopTime=1, numberOfIntervals=0, outputInterval=300,
        initialNames={"CombiTimeTable.table"}, initialValues={DataFiles.readCSVmatrix("Z:\your.csv")});

I am still struggling with the errors, but in the CombiTimeTable it is possible to write a .csv into the table, whereas tableOnFile only accepts .txt and .mat files.
I have myself encountered a similar issue some time ago.
If you look at the CombiTimeTable documentation you can see that you can provide a .txt file as a source for the time table.
Here is a bit of Python code used to process a .csv into a suitable file (it may not be the best way, but it works perfectly for me). I am using the pandas library:
import pandas as pd

# Create a dataframe by reading from your source csv file
data = pd.read_csv("Path/To/Your/SourceFile.csv")

# Create the entry file for the CombiTimeTable
f = open(file="Path/To/Dymola/Source.txt", mode="w+")
# Write the header Dymola needs (check the CombiTimeTable documentation for more information)
f.write("#1 \n")
f.write("double tab1(" + str(len(data.axes[0])) + "," + str(len(data.axes[1])) + ") # comment line" + "\n")
f.close()

# Using to_csv on the existing txt file in 'a' (append) mode places the values
# from the dataframe right after the header written above
data.to_csv("Path/To/Dymola/Source.txt", mode="a", sep='\t', index=None, header=False)
Here is an example of what it returns for me: [example of a generated CombiTimeTable source file]
Some precisions:
I am assuming that the data in your csv has headers.
The first column of your csv contains the time points (in my example they represent minutes).
Dymola side: here is how you can set up the CombiTimeTable (based on my example): [CombiTimeTable setup example]
I hope that you'll find this answer helpful.

Iterate over folder with images and put names into a .csv

I am trying out the Google Vision API, and for that I want to do some preparations. I have collected some images online that I want to work with.
There are around 100 images, and now I want to set up a .csv file where the first column has the names of the images, so that I can later go over them.
Example:
Name
Picture1.jpg
Picture2.jpg
etc.
Does someone know a Python way to achieve this? So that I can run the code and it puts those names into a .csv?
Thanks already and have a good one!
You can use the glob module in Python to iterate over all the images in a directory and write the image names to a csv file.
Example:
import glob
import os

f = open('images.csv', 'w')
f.write('Name\n')  # header row, as in the example above
for file in glob.glob('*.png'):
    f.write(os.path.basename(file) + '\n')  # newline, so each name gets its own row
f.close()
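An alternative sketch (my own variant, not from the original answer) using the stdlib csv module and pathlib, which covers several extensions at once; the function name and the default pattern list are my own choices.

```python
# Collect image file names from a folder and write them under a "Name" header.
import csv
from pathlib import Path

def image_names_to_csv(folder, csv_path, patterns=("*.jpg", "*.png")):
    # Gather matching file names, sorted for a stable order
    names = sorted(p.name for pat in patterns for p in Path(folder).glob(pat))
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Name"])       # header column from the question
        for name in names:
            writer.writerow([name])
    return names
```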

Parsing a json file using python

I have this JSON file which has the image names (medical images, around 7500) and their corresponding medical reports.
It is of the format:
{"IMAGE_NAME1.png": "MEDICAL_REPORT1_IN_TEXT", "IMAGE_NAME2.png": "MEDICAL_REPORT2_IN_TEXT", ...and so on for all images...}
What I want is all the image names in the JSON file, so that I can take those images (from a database of images which is a superset of the image names in the JSON file) and put them in their own folder. Can someone help me with this?
I think you're looking for the json library. I haven't tested this, but I am pretty confident it will work.
import json

with open("path/to/json.json", 'r') as file:
    json_data = json.load(file)

To get the image names from what you described the data to look like:

image_names = list(json_data.keys())
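For the second half of the question, a hypothetical sketch: copy the images named in the JSON file out of the larger image database into their own folder. All paths and the function name are placeholders of mine, and names missing from the database are simply skipped.

```python
# Copy the images listed in the JSON file into a dedicated folder.
import json
import shutil
from pathlib import Path

def collect_images(json_path, database_dir, target_dir):
    with open(json_path) as f:
        image_names = list(json.load(f).keys())
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in image_names:
        src = Path(database_dir) / name
        if src.exists():                    # skip names missing from the database
            shutil.copy(src, target / name)
            copied.append(name)
    return copied
```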

How to convert from c3d file to csv

Hi, I have a project where the user uploads a .c3d file so that the data can be displayed on charts. I am making it so that when the user uploads the file it gets converted into a .csv file, so I can get at the values, but I am having no luck trying to convert from .c3d to .csv.
I have used the following documentation: http://c3d.readthedocs.io/en/stable/
but there isn't much information on this out there.
This is the code I use to print the data:

import c3d

reader = c3d.Reader(open('file.c3d', 'rb'))
for i, points, analog in reader.read_frames():
    print('frame {}: {}'.format(i, points.round(2)))
The c3d library I suppose you use (https://pypi.python.org/pypi/c3d/0.2.1) includes a script for converting C3D data to CSV format (c3d2csv).
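If you'd rather do the conversion in code than via the bundled c3d2csv script, a minimal sketch: flatten each frame's per-marker point rows into CSV lines. The helper takes (frame_number, points) pairs so it mirrors the read_frames() loop in the snippet above; the column layout and file names are my own assumptions.

```python
# Write (frame, points) pairs to a CSV file, one row per marker per frame.
import csv

def frames_to_csv(frames, csv_path):
    """frames: iterable of (frame_number, points) pairs, where points is a
    sequence of per-marker rows (e.g. the arrays yielded by read_frames())."""
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "marker", "x", "y", "z"])
        for frame_no, points in frames:
            for marker_idx, row in enumerate(points):
                writer.writerow([frame_no, marker_idx, row[0], row[1], row[2]])

# Hypothetical usage with the c3d reader:
# import c3d
# with open('file.c3d', 'rb') as handle:
#     reader = c3d.Reader(handle)
#     frames_to_csv(((i, pts) for i, pts, analog in reader.read_frames()),
#                   'file.csv')
```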

Converting JSON file to SQLITE or CSV

I'm attempting to convert a JSON file to an SQLite or CSV file so that I can manipulate the data with Python. Here is where the data is housed: JSON File.
I found a few converters online, but those couldn't handle the quite large JSON file I was working with. I tried using a Python module called sqlitebiter but, like the others, it was never really able to output or convert the file.
I'm not sure where to go now; if anyone has any recommendations or insights on how to get this data into a database, I'd really appreciate it.
Thanks in advance!
EDIT: I'm not looking for anyone to do it for me, I just need to be pointed in the right direction. Are there other methods I haven't tried that I could learn?
You can utilize pandas module for this data processing task as follows:
First, you need to read the JSON file using with, open and json.load.
Second, you need to change the format of your file a bit by changing the large dictionary that has a main key for every airport into a list of dictionaries instead.
Third, you can now utilize some pandas magic to convert your list of dictionaries into a DataFrame using pd.DataFrame(data=list_of_dicts).
Finally, you can utilize pandas's to_csv function to write your DataFrame to disk as a CSV file.
It would look something like this:
import pandas as pd
import json

with open('./airports.json.txt', 'r') as f:
    j = json.load(f)

l = list(j.values())
df = pd.DataFrame(data=l)
df.to_csv('./airports.csv', index=False)
You need to load your JSON file and parse it to have all the fields available, or load the contents into a dictionary; then you could use pyodbc to write these fields to the database, or write them to the csv if you import csv first.
But this is just a general idea. You need to study Python and how to do every step.
For instance, for writing to the database you could do something like:

for i in range(0, max_len):
    sql_order = "UPDATE MYTABLE SET MYTABLE.MYFIELD ...."
    cursor1.execute(sql_order)
    cursor1.commit()
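Since the question also mentions SQLite, here is a minimal stdlib-only sketch using the sqlite3 module. The table name, the assumption that every top-level JSON value is a flat dictionary of fields, and the choice to store everything as TEXT are all mine, not from the original answers:

```python
# Load a JSON file of {key: {field: value, ...}} records into an SQLite table.
import json
import sqlite3

def json_to_sqlite(json_path, db_path):
    with open(json_path) as f:
        records = list(json.load(f).values())   # one dict per record
    # Union of all field names, so records with missing fields still fit
    columns = sorted({k for rec in records for k in rec})
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS airports ({})".format(
        ", ".join('"{}" TEXT'.format(c) for c in columns)))
    placeholders = ", ".join("?" for _ in columns)
    conn.executemany(
        "INSERT INTO airports VALUES ({})".format(placeholders),
        [tuple(str(rec.get(c, "")) for c in columns) for rec in records])
    conn.commit()
    conn.close()
```

Parameterized placeholders (?) keep the inserts safe and fast; only the column list is built by string formatting, from keys found in the file itself.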
