I'd like to make the pandas categorical class faster... how? - python

I am concerned with creating pandas dataframes with billions of rows. These dataframes are instantiated from a numpy array. The trick is that I need to make some columns into a categorical data type. I would like to do this as fast as possible. Currently, the creation of these categoricals is my bottleneck.
I am currently attempting to create the categoricals with fastpath=True.
Inside the __init__ of Categorical there is a function call codes = coerce_indexer_dtype(values, dtype.categories) (see: https://github.com/pandas-dev/pandas/blob/main/pandas/core/arrays/categorical.py, line 378)
I have data that I can format so I can skip this call (it is one of the primary offenders here).
The super().__init__(codes, dtype) call at the end of the fastpath block seems to prevent me from making an easy subclass of the Categorical class to override the behavior. Perhaps I'm missing something, though. I'm wary of subclassing a pandas class and screwing things up.
Would be very helpful if anyone had any feedback.
Here is a small code sample with the basics of what I'm doing:
import pandas as pd
import numpy as np

df = pd.DataFrame([(i, j) for i in range(1000) for j in range(1000)])
cats = list(range(1000))
dtype = pd.CategoricalDtype(categories=cats, ordered=True)
df[0] = pd.Categorical(
    values=df[0].to_numpy(dtype=np.int16), dtype=dtype, fastpath=True
)
df[1] = pd.Categorical(
    values=df[1].to_numpy(dtype=np.int16), dtype=dtype, fastpath=True
)
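One avenue worth noting (a minimal sketch, not from the original post): if the data can be expressed directly as integer codes into the category list, pd.Categorical.from_codes builds the categorical without the value-to-code coercion step:

import numpy as np
import pandas as pd

cats = list(range(1000))
dtype = pd.CategoricalDtype(categories=cats, ordered=True)

# Assumes the values are already valid positions into `cats` (0..999);
# from_codes skips the lookup that pd.Categorical(values, dtype=...) performs.
codes = np.tile(np.arange(1000, dtype=np.int16), 1000)
col = pd.Categorical.from_codes(codes, dtype=dtype)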

Related

How to save a dask series to hdf5

Here is what I tried first
df = dd.from_pandas(pd.DataFrame(dict(x=np.random.normal(size=100),
                                      y=np.random.normal(size=100))), chunksize=40)
cat = df.map_partitions(lambda d: np.digitize(d['x'] + d['y'], [.3, .9]),
                        meta=pd.Series([], dtype=int, name='x'))
cat.to_hdf('/tmp/cat.h5', '/cat')
This fails with cannot properly create the storer...
I next tried to save cat.values instead:
da.to_hdf5('/tmp/cat.h5', '/cat', cat.values)
This fails with cannot convert float NaN to integer, which I am guessing is because cat.values has unknown (NaN) shape and chunk sizes.
How do I get both of these to work? Note the actual data would not fit in memory.
This works fine:
import numpy as np
import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame(dict(x=np.random.normal(size=100),
                       y=np.random.normal(size=100)))
ddf = dd.from_pandas(df, chunksize=40)
cat = ddf.map_partitions(lambda d: pd.Series(np.digitize(d['x'] + d['y'], [.3, .9])),
                         meta=('x', int))
cat.to_hdf('cat.h5', '/cat')
You were missing the pd.Series wrapper around the call to np.digitize, which meant the output of map_partitions was a numpy array instead of a pandas Series (hence the error). In the future, when debugging, it may be useful to compute a bit of data from steps along the way to see where the error is (for example, I found this issue by running .head() on cat).
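A hedged illustration of that debugging tip:

# Materialize a small piece of the lazy result to check what type each
# step actually produces.
sample = cat.head()   # computes only the first partition
print(type(sample))   # with the pd.Series wrapper in place, this is a pandas Series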

How to unpack the columns of a pandas DataFrame to multiple variables

Lists or numpy arrays can be unpacked to multiple variables if the dimensions match. For a 2xN array, the following will work:
import numpy as np
a, b = [[1, 2, 3], [4, 5, 6]]
a, b = np.array([[1, 2, 3], [4, 5, 6]])
# result: a=[1,2,3], b=[4,5,6]
How can I achieve a similar behaviour for the columns of a pandas DataFrame? Extending the above example:
import pandas as pd
df = pd.DataFrame([[1, 2, 3], [4, 5, 6]])
df.columns = ['A', 'B', 'C']  # Rename cols and
df.index = ['i', 'ii']        # rows for clarity
The following does not work as expected:
a,b = df.T
# result: a='i', b='ii'
a,b,c = df
# result: a='A', b='B', c='C'
However, what I would like to get is the following:
a, b, c = unpack(df)
# result: a=df['A'], b=df['B'], c=df['C']
Is the function unpack already available in pandas? Or can it be mimicked in an easy way?
I just figured that the following works, which is already close to what I try to achieve:
a, b, c = df.T.values        # Common
a, b, c = df.T.to_numpy()    # Recommended
# a, b, c = df.T.as_matrix() # Deprecated
Details: As always, things are a little more complicated than one thinks. A pd.DataFrame stores its columns as separate Series. Calling df.values (or better: df.to_numpy()) is potentially expensive, as it combines the columns into a single ndarray, which likely involves copying and type conversions. Also, the resulting container has a single dtype able to accommodate all data in the data frame.
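A small hedged illustration (not from the original answer) of that dtype homogenization:

import pandas as pd

mixed = pd.DataFrame({"i": [1, 2], "f": [0.5, 1.5]})
print(mixed.dtypes)            # i: int64, f: float64
print(mixed.to_numpy().dtype)  # float64 -- the int column is upcast in the combined array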
In summary, the above approach loses the per-column dtype information and is potentially expensive. It is technically cleaner to iterate the columns in one of the following ways (there are more options):
# The following alternatives create VIEWS!
a,b,c = (v for _,v in df.items()) # returns pd.Series
a,b,c = (df[c] for c in df) # returns pd.Series
Note that the above creates views! Modifying the data likely will trigger a SettingWithCopyWarning.
a.iloc[0] = "blabla" # raises SettingWithCopyWarning
If you want to modify the unpacked variables, you have to copy the columns.
# The following alternatives create COPIES!
a,b,c = (v.copy() for _,v in df.items()) # returns pd.Series
a,b,c = (df[c].copy() for c in df) # returns pd.Series
a,b,c = (df[c].to_numpy() for c in df) # returns np.ndarray
While this is cleaner, it requires more characters. I personally do not recommend the above approach for production code, but for saving keystrokes (e.g., in interactive shell sessions) it is still a fair option...
# More verbose and explicit alternatives
a,b,c = df["the first col"], df["the second col"], df["the third col"]
a,b,c = df.iloc[:,0], df.iloc[:,1], df.iloc[:,2]
The dataframe.values method shown above is indeed a good solution, but it involves building a numpy array.
If you want to access pandas Series methods after unpacking, I personally use a different approach.
For people like me who use a lot of method chaining, I have a solution: adding a custom unpacking method to pandas. Note that this may not be a good idea for production pipelines, but it is very handy in ad-hoc data analyses.
df = pd.DataFrame({
    "lat": [30, 40],
    "lon": [0, 1],
})
This approach involves returning a generator on a .unpack() call.
from typing import Iterator

def unpack(self: pd.DataFrame) -> Iterator[pd.Series]:
    return (
        self[col]
        for col in self.columns
    )

pd.DataFrame.unpack = unpack
This can be used in two major ways.
Either directly as a solution to your problem:
lat, lon = df.unpack()
Or it can be used in method chaining.
Imagine a geo function that takes a latitude Series as the first argument and a longitude Series as the second, named do_something_geographical(lat, lon):
df_result = (
    df
    .(...some method chaining...)
    .assign(
        geographic_result=lambda dataframe: do_something_geographical(
            *dataframe[["lat", "lon"]].unpack()
        )
    )
    .(...some method chaining...)
)
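A hedged, concrete variant of the chain above, with the elided chaining steps dropped and a hypothetical do_something_geographical:

def do_something_geographical(lat, lon):
    # hypothetical stand-in: any function taking two Series
    return lat.abs() + lon.abs()

df_result = df.assign(
    geographic_result=lambda d: do_something_geographical(*d[["lat", "lon"]].unpack())
)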

Creating many feature columns in Tensorflow

I'm getting started on a Tensorflow project, and am in the middle of defining and creating my feature columns. However, I have hundreds and hundreds of features- it's a pretty extensive dataset. Even after preprocessing and scrubbing, I have a lot of columns.
The traditional way of creating a feature_column is defined in the Tensorflow tutorial and even this StackOverflow post. You essentially declare and initialize a Tensorflow object for each feature column:
gender = tf.feature_column.categorical_column_with_vocabulary_list(
    "gender", ["Female", "Male"])
This is all well and good if your dataset has only a few columns, but in my case I surely don't want to have hundreds of lines of code initializing different feature_column objects.
What's the best way to resolve this issue? I notice that in the tutorial, all the columns are collected as a list:
base_columns = [
    gender, native_country, education, occupation, workclass, relationship,
    age_buckets,
]
Which is ultimately passed into your estimator:
m = tf.estimator.LinearClassifier(
    model_dir=model_dir, feature_columns=base_columns)
So would the ideal way of handling feature_column creation for hundreds of columns be to append them directly into a list? Something like this?
my_columns = []
for col in df.columns:
    if is_string_dtype(df[col]):  # is_string_dtype is a pandas function
        my_column.append(tf.feature_column.categorical_column_with_hash_bucket(
            col, hash_bucket_size=len(df[col].unique())))
    elif is_numeric_dtype(df[col]):  # is_numeric_dtype is a pandas function
        my_column.append(tf.feature_column.numeric_column(col))
Is this the best way of creating these feature columns? Or am I missing some functionality of TensorFlow that allows me to work around this step?
What you have posted in the question makes sense. Small extension based on your own code:
import pandas.api.types as ptypes

my_columns = []
for col in df.columns:
    if ptypes.is_string_dtype(df[col]):
        my_columns.append(tf.feature_column.categorical_column_with_hash_bucket(
            col, hash_bucket_size=len(df[col].unique())))
    elif ptypes.is_numeric_dtype(df[col]):
        my_columns.append(tf.feature_column.numeric_column(col))
    elif ptypes.is_categorical_dtype(df[col]):
        # hash-bucket column for pandas categorical dtypes as well
        my_columns.append(tf.feature_column.categorical_column_with_hash_bucket(
            col, hash_bucket_size=len(df[col].unique())))
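For completeness, a hedged sketch of wiring the generated list into an estimator, mirroring the tutorial snippet from the question (train_input_fn is a hypothetical input function you would define for your data):

import tensorflow as tf

# my_columns is the list built by the loop above.
m = tf.estimator.LinearClassifier(feature_columns=my_columns)
m.train(input_fn=train_input_fn, steps=1000)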
I used your own answer; I just edited it a little bit (it should be my_columns instead of my_column in the for loop) and am posting it the way it worked for me.
import pandas.api.types as ptypes

my_columns = []
for col in df.columns:
    if ptypes.is_string_dtype(df[col]):  # is_string_dtype is a pandas function
        my_columns.append(tf.feature_column.categorical_column_with_hash_bucket(
            col, hash_bucket_size=len(df[col].unique())))
    elif ptypes.is_numeric_dtype(df[col]):  # is_numeric_dtype is a pandas function
        my_columns.append(tf.feature_column.numeric_column(col))
The above two methods work only if the data is provided as a pandas DataFrame, where each column has a name. But in case all your columns are numeric and you don't want to name them, for example when reading several numerical columns from a numpy array, you can use something like this:
feature_column = [tf.feature_column.numeric_column(key='image', shape=(784,))]
input_fn = tf.estimator.inputs.numpy_input_fn(dict({'image': x_train}))
where x_train is your numpy array with 784 columns. You can check this post by Vikas Sangwan for more details.
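For context, a hedged sketch of how that feature column and input function might be wired together under the TF 1.x estimator API (x_train and y_train are assumed to exist):

import tensorflow as tf

feature_column = [tf.feature_column.numeric_column(key='image', shape=(784,))]

# x_train: numpy array of shape (n, 784); y_train: integer labels (both assumed).
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'image': x_train}, y=y_train,
    batch_size=128, num_epochs=None, shuffle=True)

estimator = tf.estimator.LinearClassifier(
    feature_columns=feature_column, n_classes=10)
estimator.train(input_fn=input_fn, steps=1000)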

Data imputation with fancyimpute and pandas

I have a large pandas data frame df. It has quite a few missing values. Dropping rows or columns is not an option. Imputing medians, means or the most frequent values is not an option either (hence imputation with pandas and/or scikit-learn unfortunately doesn't do the trick).
I came across what seems to be a neat package called fancyimpute (you can find it here). But I have some problems with it.
Here is what I do:
# the necessary imports
import pandas as pd
import numpy as np
from fancyimpute import KNN

# df is my data frame with the missings. I keep only floats
df_numeric = df.select_dtypes(include=[np.float])

# I now run fancyimpute KNN,
# it returns a np.array which I store as a pandas dataframe
df_filled = pd.DataFrame(KNN(3).complete(df_numeric))
However, df_filled is a single vector somehow, instead of the filled data frame. How do I get a hold of the data frame with imputations?
Update
I realized fancyimpute needs a numpy array. I hence converted df_numeric to an array using as_matrix().
# df is my data frame with the missings. I keep only floats
df_numeric = df.select_dtypes(include=[np.float]).as_matrix()

# I now run fancyimpute KNN,
# it returns a np.array which I store as a pandas dataframe
df_filled = pd.DataFrame(KNN(3).complete(df_numeric))
The output is a dataframe, but the column labels have gone missing. Any way to retrieve the labels?
Add the following lines after your code:
df_filled.columns = df_numeric.columns
df_filled.index = df_numeric.index
I see the frustration with fancyimpute and pandas. Here is a fairly basic wrapper that overrides fit_transform. It takes in and outputs a dataframe, column names intact. These sorts of wrappers work well with pipelines.
from fancyimpute import SoftImpute

class SoftImputeDf(SoftImpute):
    """DataFrame Wrapper around SoftImpute"""

    def __init__(self, shrinkage_value=None, convergence_threshold=0.001,
                 max_iters=100, max_rank=None, n_power_iterations=1,
                 init_fill_method="zero", min_value=None, max_value=None,
                 normalizer=None, verbose=True):
        super(SoftImputeDf, self).__init__(shrinkage_value=shrinkage_value,
                                           convergence_threshold=convergence_threshold,
                                           max_iters=max_iters, max_rank=max_rank,
                                           n_power_iterations=n_power_iterations,
                                           init_fill_method=init_fill_method,
                                           min_value=min_value, max_value=max_value,
                                           normalizer=normalizer, verbose=False)

    def fit_transform(self, X, y=None):
        assert isinstance(X, pd.DataFrame), "Must be pandas dframe"
        for col in X.columns:
            if X[col].isnull().sum() < 10:
                X[col].fillna(0.0, inplace=True)
        z = super(SoftImputeDf, self).fit_transform(X.values)
        return pd.DataFrame(z, index=X.index, columns=X.columns)
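A hedged usage example of the wrapper (df_numeric is assumed to be a float-only DataFrame with missing values):

# df_numeric is assumed; the wrapper keeps the labels intact.
imputer = SoftImputeDf(max_iters=50)
df_filled = imputer.fit_transform(df_numeric)
print(df_filled.columns.equals(df_numeric.columns))  # True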
I really appreciate @jander081's approach, and expanded on it a tiny bit to deal with setting categorical columns. I had a problem where the categorical columns would get unset and create errors during training, so I modified the code as follows:
from fancyimpute import SoftImpute
import pandas as pd

class SoftImputeDf(SoftImpute):
    """DataFrame Wrapper around SoftImpute"""

    def __init__(self, shrinkage_value=None, convergence_threshold=0.001,
                 max_iters=100, max_rank=None, n_power_iterations=1,
                 init_fill_method="zero", min_value=None, max_value=None,
                 normalizer=None, verbose=True):
        super(SoftImputeDf, self).__init__(shrinkage_value=shrinkage_value,
                                           convergence_threshold=convergence_threshold,
                                           max_iters=max_iters, max_rank=max_rank,
                                           n_power_iterations=n_power_iterations,
                                           init_fill_method=init_fill_method,
                                           min_value=min_value, max_value=max_value,
                                           normalizer=normalizer, verbose=False)

    def fit_transform(self, X, y=None):
        assert isinstance(X, pd.DataFrame), "Must be pandas dframe"
        for col in X.columns:
            if X[col].isnull().sum() < 10:
                X[col].fillna(0.0, inplace=True)
        z = super(SoftImputeDf, self).fit_transform(X.values)
        df = pd.DataFrame(z, index=X.index, columns=X.columns)
        cats = list(X.select_dtypes(include='category'))
        df[cats] = df[cats].astype('category')
        # return pd.DataFrame(z, index=X.index, columns=X.columns)
        return df
df=pd.DataFrame(data=mice.complete(d), columns=d.columns, index=d.index)
The np.array returned by the .complete() method of the fancyimpute object (be it MICE or KNN) is fed as the content (the data= argument) of a pandas dataframe whose columns and index are the same as those of the original data frame.
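For example, applied to the KNN imputer from the question (a hedged sketch; df_numeric is the float-only DataFrame before any as_matrix() conversion):

from fancyimpute import KNN
import pandas as pd

df_filled = pd.DataFrame(data=KNN(3).complete(df_numeric.values),
                         columns=df_numeric.columns,
                         index=df_numeric.index)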

Separating pandas dataframe by offset string

Let's say I have a pandas.DataFrame that has hourly data for 3 days:
import pandas as pd
import numpy as np
import datetime as dt
dates = pd.date_range('20130101', periods=3*24, freq='H')
df = pd.DataFrame(np.random.randn(3*24, 2), index=dates, columns=list('AB'))
I would like to get every, let's say, 6 hours of data and independently fit a curve to that data. Since pandas' resample function has a how keyword that is supposed to be any numpy array function, I thought that I could maybe try to use resample to do that with polyfit, but apparently there is no way (right?).
So the only alternative way I thought of doing that is separating df into a sequence of DataFrames, so I am trying to create a function that would work such as
l=splitDF(df, '6H')
and it would return to me a list of dataframes, each one with 6 hours of data (except maybe the first and last ones). So far I got nothing that could work except something like the following manual method:
def splitDF(data, rule):
    res_index = data.resample(rule).index
    out = []
    cont = 0
    for date in data.index:
        ... check for date in res_index ...
        ... and start cutting at those points ...
But this method would be extremely slow and there is probably a faster way to do it. Is there a fast (maybe even pythonic) way of doing this?
Thank you!
EDIT
A better method (that needs some improvement but it's faster) would be the following:
def splitDF(data, rule):
    res_index = data.resample(rule).index
    out = []
    pdate = res_index[0]
    for date in res_index:
        out.append(data[pdate:date][:-1])
        pdate = date
    out.append(data[pdate:])
    return out
But it still seems to me that there should be a better method.
Ok, so this sounds like a textbook case for using groupby. Here's my thinking:
import numpy as np
import pandas as pd

# let's define a function that'll group a datetime-indexed dataframe by hour-interval/date
def create_date_hour_groups(df, hr):
    new_df = df.copy()
    hr_int = int(hr)
    new_df['hr_group'] = new_df.index.hour // hr_int  # integer division to bin the hours
    new_df['dt_group'] = new_df.index.date
    return new_df

# now we define a wrapper for polyfit to pass to groupby.apply
def polyfit_x_y(df, x_col='A', y_col='B', poly_deg=3):
    df_new = df.copy()
    coef_array = np.polyfit(df_new[x_col], df_new[y_col], poly_deg)
    poly_func = np.poly1d(coef_array)
    df_new['poly_fit'] = poly_func(df[x_col])
    return df_new

# to the actual stuff
dates = pd.date_range('20130101', periods=3*24, freq='H')
df = pd.DataFrame(np.random.randn(3*24, 2), index=dates, columns=list('AB'))
df = create_date_hour_groups(df, 6)
df_fit = df.groupby(['dt_group', 'hr_group'],
                    as_index=False).apply(polyfit_x_y)
How about?
np.array_split(df, len(df) // 6)
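A hedged alternative not in the original answers, assuming a reasonably recent pandas: pd.Grouper lets groupby split directly on a time offset, which gives the list of 6-hour frames without manual index bookkeeping:

import numpy as np
import pandas as pd

dates = pd.date_range('20130101', periods=3*24, freq='H')
df = pd.DataFrame(np.random.randn(3*24, 2), index=dates, columns=list('AB'))

# One DataFrame per 6-hour window, in chronological order.
chunks = [group for _, group in df.groupby(pd.Grouper(freq='6H'))]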
