I've been using to_csv()/read_csv() to read/write a data frame that a user works with in an applet, where one of the columns is a datetime.datetime object, and it seems like to_csv automatically converts the datetimes to strings. Is this correct? If so, is there a way to "preserve" the dates as datetime rather than having them converted to strings? I've read through the documentation, and I can't seem to find the answer. Thank you.
To preserve the exact structure of a DataFrame, complete with data types, check out the pickle module, which "serializes" any Python object to disk and reloads it back into a Python environment.
Use pd.to_pickle instead of pd.to_csv, optionally with a compression argument (see docs):
# Save to pickle
df.to_pickle('pickle-file.pkl')
# Pickle with compression
df.to_pickle('pickle-file.pkl.gz', compression='gzip')
# Load pickle from disk
df = pd.read_pickle('pickle-file.pkl')
# or...
df = pd.read_pickle('pickle-file.pkl.gz', compression='gzip')
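As a quick check that the dtype survives the round trip (the 'when' column and its values are just made-up example data):
import datetime
import pandas as pd

df = pd.DataFrame({'when': [datetime.datetime(2020, 1, 1),
                            datetime.datetime(2020, 6, 1)]})

df.to_pickle('pickle-file.pkl')
restored = pd.read_pickle('pickle-file.pkl')
print(restored['when'].dtype)  # datetime64[ns], not plain strings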
I exported the following dataframe in Google Colab. Whichever method I used, when I import it later, my dataframe column appears as a pandas.core.series.Series, not as an array.
from google.colab import drive
drive.mount('/content/drive')
path = '/content/drive/My Drive/output.csv'
with open(path, 'w', encoding='utf-8-sig') as f:
    df_protein_final.to_csv(f)
After importing, the dataframe looks like this:
pandas.core.series.Series
Note: the numbers in the first and second images may appear in a different order (they may look like different datasets). Please don't get hung up on this; those images are just an example.
Why does the column, which was originally an array before exporting, convert to a Series after exporting?
The code below gives the same result; it can't export the original structure.
from google.colab import files
df.to_csv('filename.csv')
files.download('filename.csv')
Edit: I am looking for a solution; is there any way to keep the original structure (e.g. array) while exporting?
Actually, that is how pandas works. When you insert a list or a NumPy array into a pandas dataframe, it always converts that array to a Series. If you want to turn the Series back into a list/array, use Series.values, Series.array or Series.to_numpy(). Refer to this.
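For example (a minimal sketch with made-up data):
import numpy as np
import pandas as pd

df = pd.DataFrame({'values': np.array([10, 20, 30])})  # the array becomes a Series column

as_array = df['values'].to_numpy()  # back to a numpy array
as_list = df['values'].tolist()     # or a plain Python list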
EDIT:
I got an idea from your comments. You are asking how to save a dataframe to a file while preserving all of its properties. You are actually (intentionally or unintentionally) asking how to SERIALIZE the data frame. You have to use pickle for this. Refer to this.
Note: pandas has built-in pickle support, so you can directly export a dataframe to a pickle file as in this example:
df.to_pickle(file_name)
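And load it back into a dataframe with:
df = pd.read_pickle(file_name)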
I initially tried to read a 4 GB CSV file with pandas pd.read_csv, but my system runs out of memory (I guess) and the kernel restarts or the system hangs.
So I tried using the vaex library to convert the CSV to HDF5 and do operations (aggregations, group by) on it. For that I've used:
df = vaex.from_csv('Wager-Win_April-Jul.csv', column_names=None, convert=True, chunk_size=5000000)
and
df = vaex.from_csv('Wager-Win_April-Jul.csv', header=None, convert=True, chunk_size=5000000)
But I'm still getting the first record of the CSV file as the header (the column names, to be precise), and I'm unable to change the column names. I tried to find a function to change the names but didn't come across any. Please help me with that. Thanks :)
The column names 1559104, 10289, 991... are actually the first record in the CSV; somehow vaex is taking the first row as my column names, which I want to avoid.
vaex.from_csv is a wrapper around pandas.read_csv with a few extra options for the conversion.
Reading the pandas documentation: header='infer' (which is the default) means the CSV reader infers the column names; if no names are passed, the first row of the file is used as the header. If you don't want that, pass the column names manually via the names kwarg (or pass header=None). The same holds true for both vaex and pandas.
I would read the pandas.read_csv documentation to better understand all the options. Then you can use those options with vaex and the convert and chunk_size arguments.
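For example, a minimal sketch (assuming, as described above, that extra keyword arguments are forwarded to pandas.read_csv; the column names are made up for illustration):
import vaex

# Treat the first row as data, not as a header, and supply names yourself
df = vaex.from_csv('Wager-Win_April-Jul.csv',
                   header=None,
                   names=['wager_id', 'win_amount', 'timestamp'],  # hypothetical names
                   convert=True,
                   chunk_size=5000000)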
I am working on a Django project where a user can upload a CSV file that is stored in the database. In most CSV files I have seen, the first row contains the header and the values come underneath, but in my case the headers are in the first column, like this (my CSV data):
I don't understand how to save this type of data to my Django model.
You can transpose your data. I think that is more appropriate for your dataset if you want to do real analysis. Usually values such as ids would be the row index, and names such as company_id, company_name, etc. would be the columns. This will let you do further analysis (mean, std, variance, pct_change, group_by) and use pandas to its fullest. That said:
import pandas as pd
df = pd.read_csv('yourcsvfile.csv')
df2 = df.T
Also, as @H.E. Lee pointed out, in order to save your data to your database, you can either use your dataframe's to_sql method to save to MySQL (via your connection), use to_json and then import the data if you're using MongoDB, or write your own transformation function for your database.
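For the MySQL route, a rough sketch with SQLAlchemy (the connection string and table name are placeholders, not something taken from your project):
from sqlalchemy import create_engine

# Placeholder credentials and database; adjust to your own setup
engine = create_engine('mysql+pymysql://user:password@localhost:3306/mydb')

# Write the transposed dataframe to a SQL table
df2.to_sql('companies', con=engine, if_exists='replace', index=False)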
You can flip it with the built-in csv module quite easily; no need for heavyweight modules like pandas (which in turn requires NumPy...). Since you didn't say which Python version you're using, and this procedure differs slightly between versions, I'll assume Python 3.x:
import csv
# open("file.csv", "rb") in Python 2.x
with open("file.csv", "r", newline="") as f: # open the file for reading
data = list(map(list, zip(*csv.reader(f)))) # read the CSV and flip it
If you're using Python 2.x you should also use itertools.izip() instead of zip() and you don't have to turn the map() output into a list (it already is).
Also, if the rows are uneven in your CSV you might want to use itertools.zip_longest() (itertools.izip_longest() in Python 2.x) instead.
Either way, this will give you a 2D list data where the first element is your header and the rest of them are the related data. What you plan to do from there depends purely on your DB... If you want to deal with the data only, just skip the first element of data when iterating and you're done.
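For example, a small sketch of that iteration (purely illustrative, assuming the flipped layout described above):
header, *rows = data                                # data[0] is the header
records = [dict(zip(header, row)) for row in rows]  # one dict per original column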
Given your data it may be best to store each row as a string entry using TextField. That way you can be sure not to lose any structure going forward.
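A minimal sketch of what that model could look like (the model and field names are hypothetical):
from django.db import models

class CsvRow(models.Model):
    # each uploaded CSV row is stored verbatim as one text entry
    content = models.TextField()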
I am trying to write a pandas dataframe to the Parquet file format (introduced in pandas 0.21.0) in append mode. However, instead of appending to the existing file, the file is overwritten with the new data. What am I missing?
The write syntax is:
df.to_parquet(path, mode='append')
The read syntax is:
pd.read_parquet(path)
It looks like it's possible to append row groups to an already existing Parquet file using fastparquet. This is quite a unique feature, since most libraries don't implement it.
Below is from the pandas docs:
DataFrame.to_parquet(path, engine='auto', compression='snappy', index=None, partition_cols=None, **kwargs)
We have to pass in both engine and **kwargs:
engine : {'auto', 'pyarrow', 'fastparquet'}
**kwargs - additional arguments passed to the parquet library.
Here, the **kwargs we need to pass is append=True (from fastparquet).
import pandas as pd
import os.path
file_path = "D:\\dev\\output.parquet"
df = pd.DataFrame(data={'col1': [1, 2,], 'col2': [3, 4]})
if not os.path.isfile(file_path):
    df.to_parquet(file_path, engine='fastparquet')
else:
    df.to_parquet(file_path, engine='fastparquet', append=True)
If append is set to True and the file does not exist, you will see the error below:
AttributeError: 'ParquetFile' object has no attribute 'fmd'
Running the above script 3 times, I have the below data in the parquet file.
If I inspect the metadata, I can see that this resulted in 3 row groups.
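One way to verify this is to read the file metadata, for example with pyarrow (a separate inspection step, not part of the write path above):
import pyarrow.parquet as pq

metadata = pq.ParquetFile(file_path).metadata
print(metadata.num_row_groups)  # 3 after running the script three times
print(metadata.num_rows)        # total rows across all row groups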
Note:
Append could be inefficient if you write too many small row groups. Typically recommended size of a row group is closer to 100,000 or 1,000,000 rows. This has a few benefits over very small row groups. Compression will work better, since compression operates within a row group only. There will also be less overhead spent on storing statistics, since each row group stores its own statistics.
To append, do this:
import pandas as pd
import pyarrow.parquet as pq
import pyarrow as pa
dataframe = pd.read_csv('content.csv')
output = "/Users/myTable.parquet"
# Create a parquet table from your dataframe
table = pa.Table.from_pandas(dataframe)
# Write direct to your parquet file
pq.write_to_dataset(table, root_path=output)
This will automatically append to your table (each call writes a new file under the dataset directory).
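To read everything back, the original data plus anything appended later, you can point the reader at the same dataset path:
# Read all files in the dataset directory back into a single dataframe
df_all = pq.read_table(output).to_pandas()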
I used the AWS Wrangler library. It works like a charm.
Below are the reference docs:
https://aws-data-wrangler.readthedocs.io/en/latest/stubs/awswrangler.s3.to_parquet.html
I read from a Kinesis stream and used the kinesis-python library to consume the messages and write to S3. I have not included the JSON processing logic, as this post deals with the problem of being unable to append data to S3. This was executed in an AWS SageMaker Jupyter notebook.
Below is the sample code I used:
!pip install awswrangler
import awswrangler as wr
import pandas as pd
evet_data = pd.DataFrame({'a': [a], 'b': [b], 'c': [c], 'd': [d], 'e': [e], 'f': [f], 'g': [g]},
                         columns=['a', 'b', 'c', 'd', 'e', 'f', 'g'])
# print(evet_data)
s3_path = "s3://<your bucket>/table/temp/<your folder name>/e=" + e + "/f=" + str(f)
try:
    wr.s3.to_parquet(
        df=evet_data,
        path=s3_path,
        dataset=True,
        partition_cols=['e', 'f'],
        mode="append",
        database="wat_q4_stg",
        table="raw_data_v3",
        catalog_versioning=True  # Optional
    )
    print("write successful")
except Exception as e:
    print(str(e))
I'm happy to help with any clarifications. In a few other posts I have read suggestions to read the data and overwrite it again, but as the data gets larger that slows down the process; it is inefficient.
There is no append mode in pandas to_parquet(). What you can do instead is read the existing file, add the new data, and write it back, overwriting the old file.
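A minimal sketch of that read-and-rewrite approach (the file name and the new data are made up for illustration):
import pandas as pd

# Hypothetical new rows to append
new_rows = pd.DataFrame({'col1': [5, 6], 'col2': [7, 8]})

existing = pd.read_parquet('data.parquet')                     # read what is on disk
combined = pd.concat([existing, new_rows], ignore_index=True)  # add the new data
combined.to_parquet('data.parquet')                            # overwrite with the result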
Use the fastparquet write function
from fastparquet import write
write(file_name, df, append=True)
The file must already exist as I understand it.
API is available here (for now at least): https://fastparquet.readthedocs.io/en/latest/api.html#fastparquet.write
Pandas to_parquet() can handle both single files and directories with multiple files in them. Pandas will silently overwrite the file if it already exists. To append to a Parquet dataset, just add a new file to the same Parquet directory:
import datetime
import os
import pandas as pd

# 'path' is the dataset directory and 'df' the dataframe to append
os.makedirs(path, exist_ok=True)
# write append (replace the naming logic with what works for you)
filename = f'{datetime.datetime.utcnow().timestamp()}.parquet'
df.to_parquet(os.path.join(path, filename))
# read
pd.read_parquet(path)
I want to store a DataFrame with different columns in an HDF5 file (find an excerpt with the data types below).
In [1]: mydf
Out [1]:
endTime uint32
distance float16
signature category
anchorName category
stationList object
Before converting some columns to category (signature and anchorName in my excerpt above), I used code like the following to store it (which works fine):
path = 'tmp4.hdf5'
key = 'journeys'
mydf.to_hdf(path, key, mode='w', complevel=9, complib='bzip2')
But it does not work with category columns, so I tried the following:
path = 'tmp4.hdf5'
key = 'journeys'
mydf.to_hdf(path, key, mode='w', format='t', complevel=9, complib='bzip2')
It works fine if I remove the column stationList, where each entry is a list of strings. But with this column I get the following exception:
Cannot serialize the column [stationList] because
its data contents are [mixed] object dtype
How do I need to improve my code to get the data frame stored?
pandas version: 0.17.1
python version: 2.7.6 (cannot change it due to compatibility reasons)
edit1 (some sample code):
import pandas as pd
mydf = pd.DataFrame({'endTime': pd.Series([1443525810, 1443540836, 1443609470]),
                     'distance': pd.Series([454.75, 477.25, 242.12]),
                     'signature': pd.Series(['ab', 'cd', 'ab']),
                     'anchorName': pd.Series(['tec', 'ing', 'pol']),
                     'stationList': pd.Series([['t1', 't2', 't3'], ['4', 't2', 't3'], ['t3', 't2', 't4']])
                     })
# this works fine (no category)
mydf.to_hdf('tmp_without_cat.hdf5', 'journeys', mode='w', complevel=9, complib='bzip2')
for col in ['anchorName', 'signature']:
    mydf[col] = mydf[col].astype('category')
# this crashes now because of category data
# mydf.to_hdf('tmp_with_cat.hdf5', 'journeys', mode='w', complevel=9, complib='bzip2')
# switching to format='t'
# this caused problems because of "mixed data" in column stationList
mydf.to_hdf('tmp_with_cat.hdf5', 'journeys', mode='w', format='t', complevel=9, complib='bzip2')
mydf.pop('stationList')
# this again works fine
mydf.to_hdf('tmp_with_cat_without_stationList.hdf5', 'journeys', mode='w', format='t', complevel=9, complib='bzip2')
edit2:
In the meantime I have tried different things to get rid of this problem. One of them was to convert the entries of the column stationList to tuples (possible since they will not be changed) and to also convert that column to category. But it did not change anything.
Here are the lines I added after the conversion loop (just for completeness):
mydf.stationList = [tuple(x) for x in mydf.stationList.values]
mydf.stationList.astype('category')
You have two problems:
You want to store categorical data in a HDF5 file;
You're trying to store arbitrary objects (i.e. stationList) in a HDF5 file.
As you discovered, categorical data is (currently?) only supported in the "table" format for HDF5.
However, storing arbitrary objects (lists of strings, etc.) is really not something that is supported by the HDF5 format itself. Pandas works around that for you by serializing these objects using pickle, and then storing the pickle as an arbitrary-length string (which is not supported by all HDF5 formats, I think). But that will be slow and inefficient, and will never be supported well by HDF5.
In my mind, you have two options:
Pivot your data so you have one row of data per station name. Then you can store everything in a table-format HDF5 file. (This is a good practice in general; see Hadley Wickham on Tidy Data.)
If you really want to keep this format, then you might as well save the whole dataframe using to_pickle(). This will have no problem dealing with any kind of object (e.g. list of strings, etc.) you throw at it.
Personally, I would recommend option 1. You get to use a fast, binary file format. And the pivot will also make other operations with your data easier.
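For option 1, here is a rough sketch of that reshaping, based on the sample frame from edit1 (before stationList is popped); newer pandas versions have DataFrame.explode for this, but the plain loop below also works on 0.17:
# Build one row per (journey, station) pair instead of one list per journey
tidy = pd.DataFrame([
    {'endTime': row['endTime'],
     'distance': row['distance'],
     'signature': row['signature'],
     'anchorName': row['anchorName'],
     'station': station}
    for _, row in mydf.iterrows()
    for station in row['stationList']
])

# Every column now has a simple dtype, so the table format works
tidy.to_hdf('tmp_tidy.hdf5', 'journeys', mode='w', format='t',
            complevel=9, complib='bzip2')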