I have a big Pandas DataFrame loaded into memory and I am trying to use memory more efficiently.
I won't use the full DataFrame afterwards; I only need the subset of rows I am interested in:
DF = pd.read_csv("Test.csv")
DF = DF[DF['A'] == 'Y']
I already tried the solution above, but I am not sure it is the most effective.
Is the solution above the most memory-efficient approach?
Please advise.
You can try the following trick (if you can read the whole CSV file into memory):
DF = pd.read_csv("Test.csv").query("A == 'Y'")
Alternatively, you can read your data in chunks using the chunksize parameter of read_csv(), for example:
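A minimal sketch of that chunked approach, keeping only the matching rows (the file name and column are taken from the question; the chunk size is just an illustrative value):
import pandas as pd

# read the CSV in pieces and keep only the rows where A == 'Y'
chunks = pd.read_csv("Test.csv", chunksize=100_000)
DF = pd.concat(chunk[chunk['A'] == 'Y'] for chunk in chunks)
This way only one chunk plus the filtered rows need to be in memory at the same time.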
But I would strongly recommend saving your data in HDF5 table format (you may also want to compress it); then you could read your data conditionally, using the where parameter of the read_hdf() function.
For example:
df = pd.read_hdf('/path/to/my_storage.h5', 'my_data', where="A == 'Y'")
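For completeness, a rough sketch of how such a store could be created in table format in the first place (the path and key are taken from the line above; the compression settings are just an illustrative assumption):
import pandas as pd

DF = pd.read_csv("Test.csv")
# format='table' plus data_columns is what makes conditional reads via `where` possible
DF.to_hdf('/path/to/my_storage.h5', 'my_data',
          format='table', data_columns=['A'],
          complib='blosc', complevel=9)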
Here you can find some examples and a comparison of usage for different storage options
I have python code for data analysis that iterates through hundreds of datasets, does some computation and produces a result as a pandas DataFrame, and then concatenates all the results together. I am currently working with a set of data where these results are too large to fit into memory, so I'm trying to switch from pandas to Dask.
The problem is that I have looked through the Dask documentation and done some Googling, and I can't figure out how to create a Dask DataFrame iteratively, as described above, in a way that takes advantage of Dask's ability to keep only portions of the DataFrame in memory. Everything I see assumes that you either have all the data already stored in some format on disk, or that you have all the data in memory and now want to save it to disk.
What's the best way to approach this? My current code using pandas looks something like this:
import pandas as pd

def process_data(data) -> pd.DataFrame:
    # Do stuff
    return df

dfs = []
for data in datasets:
    result = process_data(data)
    dfs.append(result)

final_result = pd.concat(dfs)
final_result.to_csv("result.csv")
Expanding on @MichelDelgado's comment, the correct approach should be something like this:
import pandas as pd
import dask.dataframe as dd
from dask.delayed import delayed

def process_data(data) -> pd.DataFrame:
    # Do stuff
    return df

delayed_dfs = []
for data in datasets:
    result = delayed(process_data)(data)
    delayed_dfs.append(result)

ddf = dd.from_delayed(delayed_dfs)
ddf.to_csv('export-*.csv')
Note that this will create multiple CSV files, one per input partition.
You can find documentation here: https://docs.dask.org/en/stable/delayed-collections.html.
Also, make sure the data is actually read inside the process function; the data argument in the code above should only be an identifier, like a file path or equivalent. Otherwise everything gets loaded up front and the lazy evaluation gains you nothing.
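To make that concrete, a small sketch where each task reads its own file (the pd.read_csv call and the file names are only placeholders for whatever actually loads one of your datasets):
import pandas as pd
import dask.dataframe as dd
from dask.delayed import delayed

def process_data(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)   # the read happens here, inside the delayed task, not up front
    # Do stuff
    return df

paths = ['data_0.csv', 'data_1.csv']   # hypothetical identifiers
ddf = dd.from_delayed([delayed(process_data)(p) for p in paths])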
I have a very large dataset I write to hdf5 in chunks via append like so:
with pd.HDFStore(self.train_store_path) as train_store:
    for filepath in tqdm(filepaths):
        with open(filepath, 'rb') as file:
            frame = pickle.load(file)

        if frame.empty:
            os.remove(filepath)
            continue

        try:
            train_store.append(
                key='dataset', value=frame,
                min_itemsize=itemsize_dict)
            os.remove(filepath)
        except KeyError as e:
            print(e)
        except ValueError as e:
            print(frame)
            print(e)
        except Exception as e:
            print(e)
The data is far too large to load into one DataFrame, so I would like to try out vaex for further processing. There are a few things I don't really understand though.
Since vaex uses a different representation in hdf5 than pandas/pytables (VOTable), I'm wondering how to go about converting between those two formats. I tried loading the data in chunks into pandas, converting it to a vaex DataFrame and then storing it, but there seems to be no way to append data to an existing vaex hdf5 file, at least none that I could find.
Is there really no way to create a large hdf5 dataset from within vaex? Is the only option to convert an existing dataset to vaex' representation (constructing the file via a python script or TOPCAT)?
Related to my previous question: if I work with a large dataset in vaex out-of-core, is it possible to then persist the results of any transformations I apply in vaex into the hdf5 file?
The problem with this storage format is that it is not column-based, which does not play well with datasets that have a large number of rows: if you only work with one column, for instance, the OS will probably also read large portions of the other columns, and the CPU cache gets polluted with them. It would be better to store the data in a column-based format such as vaex' hdf5 format, or arrow.
Converting to a vaex dataframe can be done using:
import vaex
vaex_df = vaex.from_pandas(pandas_df, copy_index=False)
You can do this for each dataframe, and store them on disk as hdf5 or arrow:
vaex_df.export('batch_1.hdf5') # or 'batch_1.arrow'
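A rough sketch of that per-batch loop, assuming the data already sits under the 'dataset' key of the HDFStore from the question (the store path, chunk size and batch file names are just illustrative):
import pandas as pd
import vaex

# read the pandas/pytables store in chunks and export each chunk in vaex' format
for i, chunk in enumerate(pd.read_hdf('train_store.h5', 'dataset', chunksize=1_000_000)):
    vaex_df = vaex.from_pandas(chunk, copy_index=False)
    vaex_df.export('batch_{}.hdf5'.format(i))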
If you do this for many files, you can lazily (i.e. no memory copies will be made) concatenate them, or use the vaex.open function:
df1 = vaex.open('batch_1.hdf5')
df2 = vaex.open('batch_2.hdf5')
df = vaex.concat([df1, df2]) # will be seen as 1 dataframe without mem copy
df_alternative = vaex.open('batch*.hdf5') # same effect, but only needs 1 line
Regarding your question about the transformations:
If you apply transformations to a dataframe, you can either write out the computed values or get the 'state', which includes the transformations:
import vaex
df = vaex.example()
df['difference'] = df.x - df.y
# df.export('materialized.hdf5', column_names=['difference']) # do this if IO is fast, and memory abundant
# state = df.state_get() # get state in memory
df.state_write('mystate.json') # or write as json
import vaex
df = vaex.example()
# df.join(vaex.open('materialized.hdf5')) # join on row number (super fast, 0 memory use!)
# df.state_set(state) # or apply the state from memory
df.state_load('mystate.json') # or from disk
df
I have a CSV that I am reading into a Pandas DataFrame, but it takes about 35 minutes to read. The CSV is approximately 120 GB. I found a module called cudf that provides a GPU DataFrame, however it is only for Linux. Is there something similar for Windows?
chunk_list = []
combined_array = pd.DataFrame()
for chunk in tqdm(pd.read_csv('\\large_array.csv', header=None,
                              low_memory=False, error_bad_lines=False,
                              chunksize=10000)):
    print(' --- Complete')
    chunk_list.append(chunk)

array = pd.concat(chunk_list)
print(array)
You can also look at dask.dataframe if you really want to read it into a pandas-API-like dataframe.
For reading CSVs, this will parallelize your I/O task across multiple cores and nodes. This will probably also alleviate memory pressure by scaling across nodes, since with a 120 GB CSV you will probably be memory bound anyway.
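A minimal sketch of the dask route, reusing the path from the question (the blocksize value is just an illustrative assumption):
import dask.dataframe as dd

# lazily splits the CSV into partitions instead of loading it all at once
ddf = dd.read_csv('\\large_array.csv', header=None, blocksize='256MB')
print(ddf.head())   # only small pieces are materialized at a time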
Another good alternative might be using arrow.
Do you have a GPU? If yes, please look at BlazingSQL, the GPU SQL engine in a Python package.
This article describes querying a terabyte with BlazingSQL, and BlazingSQL supports reading from CSV.
After you get the GPU dataframe, convert it to a Pandas dataframe with:
# from cuDF DataFrame to pandas DataFrame
df = gdf.to_pandas()
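Putting it together, the flow looks roughly like this (a sketch based on the BlazingSQL examples; the table name and path are placeholders and the exact API may differ by version):
from blazingsql import BlazingContext

bc = BlazingContext()
bc.create_table('large_array', '\\large_array.csv')   # placeholder path from the question
gdf = bc.sql('SELECT * FROM large_array')             # returns a cuDF DataFrame on the GPU
df = gdf.to_pandas()                                  # back to pandas, as above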
I have large tabular data which needs to be merged and split by group. The easy method is to use pandas, but the only problem is memory.
I have this code to merge dataframes:
import pandas as pd
from functools import reduce

large_df = pd.read_table('large_file.csv', sep=',')
This basically loads the whole data into memory.
# Then I could group the pandas dataframe by some column value (say "block")
df_by_block = large_df.groupby("block")

# and then write the data by blocks as
for block_id, block_val in df_by_block:
    block_val.to_csv("df_" + str(block_id), sep="\t", index=False)
The only problem with the above code is memory allocation, which freezes my desktop. I tried to port this code to dask, but dask doesn't have a neat groupby implementation.
Note: I could have just sorted the file, then read the data line by line and split it as the "block" value changes. But the only problem is that "large_df.txt" is created in the pipeline upstream by merging several dataframes.
Any suggestions?
Thanks,
Update:
I tried the following approach, but it still seems to be memory heavy:
# find unique values in the column of interest (which is to be "grouped by")
large_df_contig = large_df['contig']
contig_list = list(large_df_contig.unique().compute())

# groupby the dataframe
large_df_grouped = large_df.set_index('contig')

# now, split dataframes
for items in contig_list:
    my_df = large_df_grouped.loc[items].compute().reset_index()
    my_df.to_csv('dask_output/my_df_' + str(items), sep='\t', index=False)
Everything is fine, but the code
my_df = large_df_grouped.loc[items].compute().reset_index()
seems to be pulling everything into memory again.
Any way to improve this code??
but dask doesn't have a neat groupby implementation
Actually, dask does have groupby + user-defined functions with out-of-core reshuffling.
You can use
large_df.groupby(something).apply(write_to_disk)
where write_to_disk is some short function writing the block to the disk. By default, dask uses disk shuffling in these cases (as opposed to network shuffling). Note that this operation might be slow, and it can still fail if the size of a single group exceeds your memory.
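A minimal sketch of what that could look like (the grouping column "block" and the output naming are taken from the question; the meta argument and everything else are illustrative assumptions):
import dask.dataframe as dd

def write_to_disk(block_val):
    # each group arrives here as a regular pandas DataFrame
    block_id = block_val['block'].iloc[0]
    block_val.to_csv('df_' + str(block_id), sep='\t', index=False)
    return 0  # apply has to return something

large_df = dd.read_csv('large_file.csv')
large_df.groupby('block').apply(write_to_disk, meta=(None, 'int64')).compute()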
I am trying to use multiprocessing to read the csv file faster than plain read_csv.
df = pd.read_csv('review-1m.csv', chunksize=10000)
But the df I get is not a dataframe but an object of type pandas.io.parsers.TextFileReader. So I try to use
df = pd.concat(df, ignore_index=True)
to convert df into a dataframe. But this process takes a lot of time, so the result is not much different from directly using read_csv. Does anyone know how to make this conversion faster?
pd.read_csv() is likely going to give you the same read time as any other method. If you want a real performance increase, you should change the format in which you store your file.
http://pandas.pydata.org/pandas-docs/stable/io.html#performance-considerations
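For example, a quick sketch of converting once and then reloading from a binary format (the feather file name is just an assumption; HDF5 or parquet would work similarly):
import pandas as pd

df = pd.read_csv('review-1m.csv')      # pay the slow CSV parse once
df.to_feather('review-1m.feather')     # binary, column-based format

# subsequent loads are much faster
df = pd.read_feather('review-1m.feather')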