Shifting all rows in dask dataframe - python

In Pandas, there is a method DataFrame.shift(n) which shifts the contents of an array by n rows, relative to the index, similarly to np.roll(a, n). I can't seem to find a way to get a similar behaviour working with Dask. I realise things like row-shifts may be difficult to manage with Dask's chunked system, but I don't know of a better way to compare each row with the subsequent one.
What I'd like to be able to do is this:
import numpy as np
import pandas as pd
import dask.dataframe as dd

with pd.HDFStore(path) as store:
    data = dd.from_hdf(store, 'sim')[col1]

shifted = data.shift(1)
idx = data.apply(np.sign) != shifted.apply(np.sign)
in order to create a boolean series indicating the locations of sign changes in the data. (I am aware that this method would also catch changes from a signed value to zero.)
I would then use the boolean series to index a different Dask dataframe for plotting.

Rolling functions
Currently dask.dataframe does not implement the shift operation. It could, though, if you raise an issue. In principle this is not so dissimilar from the rolling operations that dask.dataframe does support, like rolling_mean, rolling_sum, etc.
Actually, if you were to create a Pandas function that adheres to the same API as these pandas.rolling_foo functions, then you can use the dask.dataframe.rolling.wrap_rolling function to turn your pandas-style rolling function into a dask.dataframe rolling function.
dask.dataframe.rolling_sum = wrap_rolling(pandas.rolling_sum)
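For example, a pandas-style shift could be wrapped the same way. This is an untested sketch: the (arg, window) call signature that wrap_rolling is assumed to accept here is inferred from the pandas.rolling_* pattern above, not from documented behaviour.
from dask.dataframe.rolling import wrap_rolling

def rolling_shift(arg, window):
    # pandas-style rolling function: shift `arg` down by `window` rows
    return arg.shift(window)

# turn it into a dask.dataframe-aware function, following the pattern above
dask_rolling_shift = wrap_rolling(rolling_shift)

# hypothetical usage on the dask series `data` from the question:
# shifted = dask_rolling_shift(data, 1)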

The following code might help to shift down the series.
s = dd_df['column'].rolling(window=2).sum() - dd_df['column']
Edit (03/09/2019):
When you take the rolling sum with window=2, then for a particular row i,
result[i] = row[i-1] + row[i]
Then by subtracting the old value of the column from the result, you are doing the following operation:
final_row[i] = result[i] - row[i]
Which equals:
final_row[i] = row[i-1] + row[i] - row[i] = row[i-1]
so the whole column ends up shifted down by one row.
Tip:
If you want to shift down by several rows, repeat the whole operation (rolling sum with the same window, then subtract) once for each additional row of shift.
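A minimal illustration of this identity on a small made-up series (the column values are arbitrary):
import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({'column': [3.0, -1.0, 4.0, -1.5, 9.0]})
dd_df = dd.from_pandas(pdf, npartitions=2)

# rolling(window=2).sum() gives row[i-1] + row[i]; subtracting row[i] leaves row[i-1]
s = dd_df['column'].rolling(window=2).sum() - dd_df['column']
print(s.compute())
# 0    NaN
# 1    3.0
# 2   -1.0
# 3    4.0
# 4   -1.5
# which matches pdf['column'].shift(1)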

Related

Repeat calculations for every row of dataframe

The following calculations were for the 1st row, i.e., train_df.y1[0].
I want to repeat this operation for all 400 rows of train_df:
squared_deviations_y1_0_train = ((ideal_df.loc[:0,"y1":"y50"] - train_df.y1[0]) ** 2).sum(axis=1)
The result is correct; I just need to repeat it for every row.
Since your end result for each row seems to be a scalar, you can convert both of these dataframes to NumPy and take advantage of broadcasting.
Something like this,
squared_deviations = ((ideal_df.to_numpy() - train_df.y1.to_numpy().reshape(-1,1)) ** 2).sum(axis=1)
would do pretty nicely. If you MUST stay within pandas, you could use the subtract() method to get the same outcome.
(train_df.y1.subtract(ideal_df.T) ** 2).sum(axis=0)
Note that train_df.y1 is a vector of size (400,), so you need to make the row dimension 400 to do this subtraction (hence the transpose of ideal_df).
You can also use the apply() method as Barmar suggested. This will require you to define a function that calculates the row index so that you can subtract the appropriate value of train_df for every cell before you perform the square and sum operations. Something like this,
(ideal_df.apply(lambda cell: cell - train_df.y1[cell.index]) ** 2).sum(axis=1)
would also work. I highly recommend using Numpy for these tasks because Numpy was designed with broadcasting in mind, but as shown you can get away with doing it in Pandas.
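A quick, self-contained illustration of the broadcasting idea, using small made-up arrays in place of the real 400 x 50 data:
import numpy as np
import pandas as pd

# toy stand-ins: 4 rows and 3 "ideal" columns instead of 400 x 50
ideal_df = pd.DataFrame(np.arange(12.0).reshape(4, 3), columns=['y1', 'y2', 'y3'])
train_df = pd.DataFrame({'y1': [1.0, 2.0, 3.0, 4.0]})

# reshape(-1, 1) turns the (4,) vector into a (4, 1) column so it broadcasts
# across the ideal columns, giving one squared-deviation sum per row
squared_deviations = ((ideal_df.to_numpy() - train_df.y1.to_numpy().reshape(-1, 1)) ** 2).sum(axis=1)
print(squared_deviations.shape)   # (4,)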

How to run different functions in different parts of a dataframe in python?

I have a dataframe (df).
I need to find the standard deviation dataframe from this one. For the first row I want to use the traditional variance formula:
sum((x - mean(x))^2) / n
and from the second row (= i) onward I want to use the following formula:
lamb * (variance of first row) + (1 - lamb) * (first row of returns)^2
※by first row, I meant the previous row.
# Generate Sample Dataframe
import numpy as np
import pandas as pd
df = pd.DataFrame({'a': range(1, 7),
                   'b': [x**2 for x in range(1, 7)],
                   'c': [x**3 for x in range(1, 7)]})
# Generate return Dataframe
returns=df.pct_change()
# Generate new Zero dataframe
d=pd.DataFrame(0,index=np.arange(len(returns)),columns=returns.columns)
#populate first row
lamb=0.94
d.iloc[0]=list(returns.var())
Now my question is how to populate the second row through the last one using the second formula.
It should be something like
d[1:].agg(lambda x: lamb*x.shift(-1)+(1-lamb)*returns[:2])
but it obviously returned a long error.
Could you please help?
for i in range(1, len(d)):
    # previous variance estimate blended with the previous squared return
    d.iloc[i] = lamb * d.iloc[i-1] + (1 - lamb) * returns.iloc[i-1] ** 2
I'm not completely sure this gives the right answer, but it won't throw an error. A plain for loop with .iloc for iterating over rows should do the job for you, as long as you plug in the correct formula.
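Putting the pieces from the question together, a runnable sketch (the recursive formula is taken as stated in the question; note that returns.iloc[0] is NaN because of pct_change, and that NaN will propagate into the second row of d):
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': range(1, 7),
                   'b': [x**2 for x in range(1, 7)],
                   'c': [x**3 for x in range(1, 7)]})
returns = df.pct_change()

lamb = 0.94
d = pd.DataFrame(0.0, index=np.arange(len(returns)), columns=returns.columns)
d.iloc[0] = list(returns.var())      # traditional variance for the first row

for i in range(1, len(d)):
    # each row: previous variance estimate blended with the previous squared return
    d.iloc[i] = lamb * d.iloc[i-1] + (1 - lamb) * returns.iloc[i-1] ** 2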

Memory-efficient filtering of `DataFrame` rows

I have a large DataFrame object (1,440,000,000 rows). I am operating at the memory (swap included) limit.
I need to extract a subset of the rows with a certain value in a field. However, if I do it like this:
>>> SUBSET = DATA[DATA.field == value]
I end up with either a MemoryError exception or a crash.
Is there any way to filter rows explicitly - without calculating the intermediate mask (DATA.field == value)?
I have found the DataFrame.filter() and DataFrame.select() methods, but they operate on column labels/row indices rather than on the row data.
Use query; it should be a bit faster:
df = df.query("field == @value")
(the @ prefix lets query refer to the Python variable value rather than a column of that name).
If by any chance all the data in the DataFrame are of the same type, use a NumPy array instead; it's more memory efficient and faster. You can convert your dataframe to a NumPy matrix with df.as_matrix().
Also, you might want to check how much memory the dataframe already takes:
import sys
sys.getsizeof(df)
which returns the size in bytes.
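A small sketch of both suggestions on made-up data (the column name field and the filter value here are placeholders standing in for the real ones):
import sys
import numpy as np
import pandas as pd

DATA = pd.DataFrame({'field': np.random.randint(0, 10, size=1_000_000),
                     'payload': np.random.randn(1_000_000)})
value = 3

# with numexpr installed, query can evaluate the filter expression more
# efficiently than building the boolean mask by hand;
# @value refers to the local variable defined above
SUBSET = DATA.query("field == @value")

print(sys.getsizeof(DATA))    # approximate size of the whole frame in bytes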

Python pandas - using apply function and creating new columns in dataframe

I have a dataframe with 40 million records and I need to create 2 new columns (net_amt and share_amt) from the existing amt and sharing_pct columns. I created two functions which calculate these amounts and then used apply to populate them back into the dataframe. As my dataframe is large, it is taking a long time to complete. Can we calculate both amounts in one shot, or is there a completely better way of doing it?
def fn_net(row):
    if row['sharing'] == 1:
        return row['amt'] * row['sharing_pct']
    else:
        return row['amt']

def fn_share(row):
    if row['sharing'] == 1:
        return row['amt'] * (1 - row['sharing_pct'])
    else:
        return 0

df_load['net_amt'] = df_load.apply(lambda row: fn_net(row), axis=1)
df_load['share_amt'] = df_load.apply(lambda row: fn_share(row), axis=1)
I think numpy where() will be the best choice here (after import numpy as np):
df['net_amt'] = np.where(df['sharing'] == 1,            # test/condition
                         df['amt'] * df['sharing_pct'], # value if True
                         df['amt'])                     # value if False
You can, of course, use this same method for 'share_amt' also. I don't think there is any faster way to do this, and I don't think you can do it in "one shot", depending on how you define it. Bottom line: doing it with np.where is way faster than applying a function.
More specifically, I tested on the sample dataset below (10,000 rows) and it's about 700x faster than the function/apply method in that case.
df = pd.DataFrame({'sharing': [0, 1] * 5000,
                   'sharing_pct': np.linspace(.01, 1., 10000),
                   'amt': np.random.randn(10000)})
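For completeness, the same pattern applied to the second column (a sketch using the sample dataset above):
df['share_amt'] = np.where(df['sharing'] == 1,
                           df['amt'] * (1 - df['sharing_pct']),
                           0)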

dask.DataFrame.apply and variable length data

I would like to apply a function that returns a Series of variable length to a dask.DataFrame. An example to illustrate this:
def generate_varibale_length_series(x):
    '''returns a pd.Series with variable length'''
    n_columns = np.random.randint(100)
    return pd.Series(np.random.randn(n_columns))
#apply this function to a dask.DataFrame
pdf = pd.DataFrame(dict(A=[1,2,3,4,5,6]))
ddf = dd.from_pandas(pdf, npartitions = 3)
result = ddf.apply(generate_varibale_length_series, axis = 1).compute()
Apparently, this works fine.
Concerning this, I have two questions:
Is this always supposed to work, or am I just lucky here? Does dask expect all partitions to have the same number of columns?
In case the metadata inference fails, how can I provide metadata if the number of columns is not known beforehand?
Background / use case: In my dataframe each row represents a simulation trail. The function I want to apply extracts the time points of certain events from it. Since I do not know the number of events per trail in advance, I do not know how many columns the resulting dataframe will have.
Edit:
As MRocklin suggested, here is an approach that uses dask.delayed to compute the result:
import dask

# convert ddf to delayed objects (one per partition)
ddf_delayed = ddf.to_delayed()

# delayed version of pd.DataFrame.apply
delayed_apply = dask.delayed(lambda x: x.apply(generate_varibale_length_series, axis=1))

# use this function on every delayed partition
apply_on_every_partition_delayed = [delayed_apply(d) for d in ddf_delayed]

# calculate the result; this gives one pd.DataFrame per partition
result = dask.compute(*apply_on_every_partition_delayed)

# concatenate them
result = pd.concat(result)
Short answer
No, dask.dataframe does not support this
Long answer
Dask.dataframe expects to know the columns of every partition ahead of time and it expects those columns to match.
However, you can still use Dask and Pandas together through dask.delayed, which is far more capable of handling problems like these.
http://dask.pydata.org/en/latest/delayed.html
