I tried to import a MATLAB file into Python and build a DataFrame.
from scipy.io import loadmat
import os.path
path=os.path.abspath(os.getcwd())+"/BatteryDataSet/BatteryAgingARC_25_26_27_28_P1/B0025.mat"
mat = loadmat(path)
Then I tried to convert it into a pandas DataFrame, but it doesn't work.
Could anyone help me, please? I've read the previous posts but still haven't found an answer.
Thank you very much!
When I use MATLAB to read the .mat file, it looks like this:
Thanks again!
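For reference, loadmat returns a dict mapping the MATLAB variable names to NumPy arrays, so the usual approach is to look at its keys, pull out the array you need, and wrap that in a DataFrame. A minimal sketch of that idea; the key 'B0025' and the assumption that it is a plain 2-D numeric array are guesses based on the file name (nested MATLAB structs need extra unpacking):
from scipy.io import loadmat
import pandas as pd

mat = loadmat(path, squeeze_me=True)                # dict: variable name -> array
print([k for k in mat if not k.startswith('__')])   # list the variables in the file

data = mat['B0025']        # assumed name; replace with whatever the print above shows
df = pd.DataFrame(data)    # works directly only if data is a 2-D numeric array
print(df.head())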
import json
import pandas
import requests
Convert to Pandas
I know what you're going to say, this has been asked before. But I've gone through a number of posts already and they all assume the JSON file has already been downloaded and imported into the code.
So with this code I've been trying to import the JSON data through a URL, so there is no need to save any files beforehand.
Is it even possible?
Please help.
Pandas' json_normalize can do just that. Here is an example, which you will have to modify to meet your specific needs:
df = pd.json_normalize(packages_json, record_path='results')
(I omitted the output of the DataFrame because it is unwieldy.)
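To connect it to the URL part of the question, a minimal sketch that never writes anything to disk could look like this; the URL is a placeholder, and record_path='results' assumes the payload contains a top-level 'results' list, as in the example above:
import pandas as pd
import requests

# Fetch the JSON straight from the endpoint; nothing is saved to disk.
response = requests.get("https://example.com/api/packages")   # placeholder URL
packages_json = response.json()                                # parse into Python objects

# Flatten the records under 'results' into a flat table.
df = pd.json_normalize(packages_json, record_path='results')
print(df.head())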
I am new to Python, coming from MATLAB. In MATLAB, I used to create a table variable (copied from Excel), save it as a .mat file, and whenever I needed the data I imported it with:
A = importdata('Filename.mat');
[Filename is a 38x5 table; see the attached photo]
Is there a way I can do this in Python? I have to work with about 35 such tables, and loading them from Excel every time is not the best way.
In order to import Excel tables into your Python environment you have to install pandas.
Check out the detailed guideline.
import pandas as pd
xl = pd.ExcelFile('myFile.xlsx')
df = xl.parse(xl.sheet_names[0])  # read the first sheet into a DataFrame
I hope this helps.
Use pandas:
import pandas as pd
dataframe = pd.read_csv("your_data.csv")
dataframe.head() # prints out first rows of your data
Or from Excel:
dataframe = pd.read_excel('your_excel_sheet.xlsx')
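If the real pain point is reloading roughly 35 Excel tables over and over, one common workaround is to read each workbook once and cache it in a faster format, then reload the cached copy in later sessions. A minimal sketch with placeholder file names; this is just one option among several:
import pandas as pd

# One-time conversion: read the Excel sheet and cache it as a pickle file.
df = pd.read_excel('Filename.xlsx')
df.to_pickle('Filename.pkl')

# Later sessions: reloading the pickle is much faster than re-parsing the Excel file.
df = pd.read_pickle('Filename.pkl')
print(df.head())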
I'm quite stuck with some code I'm writing in Python. I'm a beginner and maybe it's really easy, but I just can't see it. Any help would be appreciated, so thank you in advance :)
Here is the problem: I have to read some special data files with the extension .fen into a pandas DataFrame. These .fen files are inside a zipped .fenx file that contains the .fen file and a .cfg configuration file.
In the code I've written I use the zipfile library to unzip the files and then load them into the DataFrame. The code is the following:
import zipfile
import numpy as np
import pandas as pd
def readfenxfile(Directory, File):
    fenxzip = zipfile.ZipFile(Directory + '\\' + File, 'r')
    fenxzip.extractall()
    fenxzip.close()
    cfgGeneral, cfgDevice, cfgChannels, cfgDtypes = readCfgFile(Directory, File[:-5] + '.CFG')
    # readCfgFile reads the .cfg file and returns some important data.
    # Here only cfgDtypes matters: it holds the type of the data inside the .fen file
    # and becomes the column index of the final DataFrame.
    if cfgChannels != None:
        dtDtype = eval('np.dtype([' + cfgDtypes + '])')
        dt = np.fromfile(Directory + '\\' + File[:-5] + '.fen', dtype=dtDtype)
        dt = pd.DataFrame(dt)
    else:
        dt = []
    return dt, cfgChannels, cfgDtypes
Now, the extractall() call saves the unzipped files to the hard drive. The .fenx files can be quite big, so having to store them (and delete them afterwards) is really slow. I would like to do the same as I do now, but get the .fen and .cfg files into memory instead of onto the hard drive.
I have tried things like fenxzip.read('whateverthenameofthefileis.fen') and some other methods like .open() from the zipfile library, but I can't get what .read() returns into a NumPy array in any way I tried.
I know it can be a difficult question to answer because you don't have the files to try and see what happens, but if someone has any ideas I would be glad to read them. :) Thank you very much!
Here is the solution I finally found, in case it is helpful for anyone. It uses the tempfile library to create a temporary file object that is kept in memory (until max_size is exceeded).
import zipfile
import tempfile
import numpy as np
import pandas as pd
def readfenxfile(Directory, File, ExtractDirectory):
    fenxzip = zipfile.ZipFile(Directory + '\\' + File, 'r')
    # SpooledTemporaryFile keeps the data in memory until max_size is exceeded.
    fenfile = tempfile.SpooledTemporaryFile(max_size=10000000000, mode='w+b')
    fenfile.write(fenxzip.read(File[:-5] + '.fen'))   # copy the .fen member into memory
    cfgGeneral, cfgDevice, cfgChannels, cfgDtypes = readCfgFile(fenxzip, File[:-5] + '.CFG')
    if cfgChannels != None:
        dtDtype = eval('np.dtype([' + cfgDtypes + '])')
        fenfile.seek(0)                                # rewind before reading it back
        dt = np.fromfile(fenfile, dtype=dtDtype)
        dt = pd.DataFrame(dt)
    else:
        dt = []
    fenfile.close()
    fenxzip.close()
    return dt, cfgChannels, cfgDtypes
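If a temporary file is not wanted at all, another option (a sketch, assuming the whole .fen member fits comfortably in RAM and that the dtype is built from the .cfg file exactly as above) is to read the bytes from the archive and reinterpret them with np.frombuffer; the dtype below is only a placeholder:
import zipfile
import numpy as np
import pandas as pd

def readfenx_inmemory(Directory, File):
    # Read the .fen member straight into a bytes object; nothing touches the disk.
    with zipfile.ZipFile(Directory + '\\' + File, 'r') as fenxzip:
        raw = fenxzip.read(File[:-5] + '.fen')
    # Placeholder dtype: in practice this would come from the .cfg file, as in readCfgFile.
    dtype_spec = np.dtype([('ch1', '<f4'), ('ch2', '<f4')])
    dt = np.frombuffer(raw, dtype=dtype_spec)   # reinterpret the bytes without copying
    return pd.DataFrame(dt)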
I have a .dat file and I want to read its contents in Python, so I use the following code:
import numpy as np
bananayte=np.fromfile("U04_banana-ytest.dat",dtype=float)
print(bananayte)
However, my initial data should look like "1.0000000e+00", while the output is like "1.39804066e-76". What happened, and what should I do to get the correct values? Thanks!
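Values like 1.39804066e-76 are typical of what you get when ASCII text is reinterpreted as binary float64, so the .dat file is most likely plain text rather than binary; np.fromfile with the default settings assumes binary data. If that is the case here, reading the file as text should give the expected numbers; a minimal sketch using the filename from the question:
import numpy as np

# If the file is whitespace-separated text, loadtxt parses it as numbers.
bananayte = np.loadtxt("U04_banana-ytest.dat")
print(bananayte)

# Alternatively, np.fromfile can read text if it is given a separator:
# bananayte = np.fromfile("U04_banana-ytest.dat", dtype=float, sep=" ")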
I came across a .df file which is encoded in a binary format. But when I open it in Vim, I can still see strings like "pandas.core.frame" and "numpy.core.multiarray", so I guess it is related to Python. However, I know little about the Python language. Though I have tried the pandas and numpy modules, I failed to read the file. Could you give any suggestions on this issue? Thank you in advance. Here is the Dropbox link to the DF file: https://www.dropbox.com/s/b22lez3xysvzj7q/flux.df
Looks like a DataFrame stored with pickle; use read_pickle() to read it:
import pandas as pd
df = pd.read_pickle('flux.df')
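Once it loads, you can confirm that you really got a DataFrame and peek at its contents (df is the name used in the answer above). Keep in mind that unpickling can execute arbitrary code, so only load pickle files you trust:
print(type(df))    # should report pandas.core.frame.DataFrame
print(df.head())   # first few rows
print(df.shape)    # number of rows and columns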