CSV Files and pandas - python

I assume this is a trick question on this HW I'm working on, but maybe it's not?
What object do you get after reading a csv file?
data frame
character vector
panel
all of the above
From what I know, you can use pandas to read a CSV file into a DataFrame. But I know a Panel is a data structure in pandas too; a character vector I've never even heard of.
Anyone got any ideas? I'm fairly certain the answer is just DataFrame, but hey, you never know.

When you read a CSV file with pandas, it is stored as a pandas.core.frame.DataFrame object, which you are familiar with.
As for Panel, which represented wide-format panel data stored as a 3-dimensional array: it has been deprecated since version 0.20.0, as listed in the pandas Panel documentation. (Character vector is an R term, not a pandas one.)
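You can verify the returned type yourself; a minimal sketch using an in-memory CSV instead of a file on disk:

```python
import io

import pandas as pd

# read a small in-memory CSV, exactly as you would a file path
csv_data = io.StringIO("a,b\n1,2\n3,4")
df = pd.read_csv(csv_data)

print(type(df))  # <class 'pandas.core.frame.DataFrame'>
```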

Related

Convert timeseries csv in Python

I want to convert a CSV file of time-series data with
multiple sensors.
This is what the data currently looks like:
The different sensors are described by numbers and have different numbers of axes. If a new activity is labeled, everything below belongs to this new label. The label is in the same column as the first entry of each sensor.
This is the way I would like the data to be:
Each sensor axis has its own column and the according label is added in the last column.
So far, I have created a DataObject class to access timestamp, sensortype, sensorvalues, and the belonging parent_label for each row in the CSV.
I thought the most convenient way to solve this would be by using pandas DataFrame but simply using pd.DataFrame(timestamp, sensortype, sensorvalues, label)
won't work.
Any ideas/hints? Maybe other ways to solve this problem?
I am fairly new to programming, especially Python, so I have already run out of ideas.
Thanks in advance
Try creating a NumPy matrix of the columns you require, then convert it to a pandas DataFrame.
Otherwise, you can also try importing the CSV with pandas from the start.
Also, instead of
pd.DataFrame(timestamp, sensortype, sensorvalues, label)
try the pd.concat function. You would need to convert each array to a DataFrame, put them in a list, and then concat them with pandas.
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html
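A hedged sketch of the concat approach; the variable names, axis labels, and sample values here are made up, so substitute your own parsed data:

```python
import pandas as pd

# hypothetical data extracted row by row from the CSV
timestamps = [0.01, 0.02, 0.03]
sensorvalues = [[1.0, 2.0], [1.1, 2.1], [1.2, 2.2]]  # one sensor, two axes
labels = ["walking", "walking", "walking"]

# one DataFrame per piece, concatenated side by side (axis=1)
df = pd.concat(
    [
        pd.DataFrame({"timestamp": timestamps}),
        pd.DataFrame(sensorvalues, columns=["axis_x", "axis_y"]),
        pd.DataFrame({"label": labels}),
    ],
    axis=1,
)

print(df.columns.tolist())  # ['timestamp', 'axis_x', 'axis_y', 'label']
```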

Vaex Displaying Data

I have a 10.11 GB CSV file that I converted to HDF5 using dask. It is a mixture of str, int and float values. When I try to read it with vaex I just get numbers, as shown in the screenshot. Can someone please help me out?
Screenshot:
I am not sure how dask (or dask.dataframe) stores data in HDF5 format. Pandas, for instance, stores the data in a row-based format, whereas vaex expects column-based HDF5 files.
From your screenshot I see that your HDF5 file also preserves the index column; vaex does not have such a column and expects just the data.
To ensure the HDF5 files work with vaex, it is best to use vaex itself to do the CSV-to-HDF5 conversion. Otherwise perhaps something like Arrow will work, since it is a standard (while HDF5 can be more flexible and is thus harder to support in all possible versions of storing data).

What is the difference between save a pandas dataframe to pickle and to csv?

I am learning python pandas.
I see a tutorial which shows two ways to save a pandas dataframe.
df.to_csv('sub.csv') and to open pd.read_csv('sub.csv')
df.to_pickle('sub.pkl') and to open pd.read_pickle('sub.pkl')
The tutorial says to_pickle saves the dataframe to disk. I am confused about this, because when I use to_csv, I did see a CSV file appear in the folder, which I assume is also saved to disk, right?
In general, why would we want to save a dataframe using to_pickle rather than saving it to csv, txt or another format?
csv
✅human readable
✅cross platform
⛔slower
⛔more disk space
⛔doesn't preserve types in some cases
pickle
✅fast saving/loading
✅less disk space
⛔non human readable
⛔python only
Also take a look at parquet format (to_parquet, read_parquet)
✅fast saving/loading
✅less disk space than pickle
✅supported by many platforms
⛔non human readable
Pickle is a serialized way of storing a pandas dataframe. Basically, you are writing down the exact representation of the dataframe to disk. This means the types of the columns and the indices are preserved. If you simply save a file as CSV, you are just storing it as a comma-separated list of text, and depending on your data set, some information will be lost when you load it back up.
You can read more about the pickle library in Python here.
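A small round trip illustrates the type loss. This sketch assumes a datetime column as the example of a type CSV cannot preserve, and writes to temporary files:

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame(
    {"when": pd.to_datetime(["2021-01-01", "2021-01-02"]), "x": [1, 2]}
)

tmp = tempfile.mkdtemp()
csv_path = os.path.join(tmp, "sub.csv")
pkl_path = os.path.join(tmp, "sub.pkl")

df.to_csv(csv_path, index=False)
df.to_pickle(pkl_path)

from_csv = pd.read_csv(csv_path)
from_pkl = pd.read_pickle(pkl_path)

print(from_csv["when"].dtype)  # object -- the datetime type was lost in the CSV
print(from_pkl["when"].dtype)  # datetime64[ns] -- preserved by pickle
```

(read_csv can recover the type with parse_dates, but you have to know which columns to convert; pickle needs no such hint.)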

Pandas position into an excel

Does anyone know how I can insert a dataframe into an Excel file at a desired position?
For example, I would like my dataframe to start at cell "V78".
there are startrow and startcol arguments in the .to_excel() method. Both are zero-indexed, so cell "V78" (row 78, column V = the 22nd column) corresponds to startrow=77, startcol=21:
df.to_excel('excel.xlsx', startrow=77, startcol=21)
I have a solution which may or may not fit your requirements.
I would not directly import it into an existing Excel file that may contain valuable data; furthermore, keeping the files separate may be of use one day.
You could simply save the dataframe as its own Excel file:
df.to_excel('df.xlsx')
Then, in the Excel file that you want to insert it into, create an object of type file and link the two that way. See here.
Personally, keeping them separate seems better, as once two files become one there is no going back. You could also have multiple files this way for easy comparisons, without fiddling with row/column numbers!
Hope this was of some help!

How can I create a formatted and annotated excel with embedded pandas DataFrames

I want to create a "presentation ready" excel document with embedded pandas DataFrames and additional data and formatting
A typical document will include some titles and metadata, plus several DataFrames with a sum row/column for each.
The DataFrame itself should be formatted
The best thing I found was this which explains how to use pandas with XlsxWriter.
The main problem is that there's no apparent method to get the exact location of the embedded DataFrame in order to add the summary row below it (the shape of the DataFrame is a good estimate, but it might not be exact when rendering complex DataFrames).
If there's a solution that relies on some kind of template, and not hard coding it would be even better.
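One workaround, under the assumption that you place each frame yourself via startrow: you can then compute where the frame ends from its shape and write the summary directly below. A sketch using the XlsxWriter engine (sheet name, title text, and layout are made up, and it does not address the templating wish):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

with pd.ExcelWriter("report.xlsx", engine="xlsxwriter") as writer:
    startrow = 2  # leave room above the table for a title
    df.to_excel(writer, sheet_name="Report", startrow=startrow, index=False)
    worksheet = writer.sheets["Report"]
    worksheet.write(0, 0, "Quarterly report")  # title / metadata

    # header occupies row `startrow`, the data the next len(df) rows,
    # so the summary row goes one row below the frame
    sum_row = startrow + len(df) + 1
    for col, name in enumerate(df.columns):
        worksheet.write(sum_row, col, float(df[name].sum()))
```

This only stays exact for flat frames; with a MultiIndex header or index the rendered shape grows, which is the caveat raised above.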
