resample Pandas dataframe and merge strings in column - python

I want to resample a pandas dataframe and apply different functions to different columns. The problem is that I cannot properly process a column with strings. I would like to apply a function that merges the string with a delimiter such as " - ". This is a data example:
import pandas as pd
import numpy as np
idx = pd.date_range('2017-01-31', '2017-02-03')
data=list([[1,10,"ok"],[2,20,"merge"],[3,30,"us"]])
dates=pd.DatetimeIndex(['2017-01-31','2017-02-03','2017-02-03'])
d=pd.DataFrame(data, index=dates, columns=list('ABC'))
A B C
2017-01-31 1 10 ok
2017-02-03 2 20 merge
2017-02-03 3 30 us
Resampling the numeric columns A and B with a sum and mean aggregator works. Column C, however, only sort of works with sum (and it ends up in the second position of the output, which may mean that something is failing).
d.resample('D').agg({'A': sum, 'B': np.mean, 'C': sum})
A C B
2017-01-31 1.0 ok 10.0
2017-02-01 NaN 0 NaN
2017-02-02 NaN 0 NaN
2017-02-03 5.0 merge us 25.0
I would like to get this:
...
2017-02-03 5.0 merge - us 25.0
I tried using lambda in different ways but without success (not shown).
If I may ask a second related question: I can do some post processing for this, but how to fill missing cells in different columns with zeros or ""?

Your agg function for column 'C' should be a join
d.resample('D').agg({'A': sum, 'B': np.mean, 'C': ' - '.join})
A B C
2017-01-31 1.0 10.0 ok
2017-02-01 NaN NaN
2017-02-02 NaN NaN
2017-02-03 5.0 25.0 merge - us
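For the second question, a minimal sketch (assuming the aggregated result is assigned to a variable, here called out): fillna accepts a dict mapping each column to its own fill value, so the numeric columns can take 0 and the string column "":

out = d.resample('D').agg({'A': sum, 'B': np.mean, 'C': ' - '.join})
out = out.fillna({'A': 0, 'B': 0, 'C': ''})  # zeros for numbers, empty string for text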

Related

How to create a dataframe from series object when iterating

I am iterating and as a result of a single iteration I acquire a pandas series object which looks like this:
DE_AT 118.55
DE_CZ 62.73
PL_DE 263.36
PL_SK 315.07
dtype: float64
Sometimes I might get a series with different names and lengths; for example, I might get:
DE_AT 118.55
DE_CZ 62.73
PL_DE 263.36
PL_NL 315.07
PL_UK 420
dtype: float64
Now I want to create a dataframe from these series objects while iterating, such that all names end up in the index. From these two series objects I would like to get:
index 1 2
DE_AT 118.55 118.55
DE_CZ 62.73 62.73
PL_DE 263.36 263.36
PL_SK 315.07 NaN
PL_NL NaN 315.07
PL_UK NaN 420
Or maybe I can store them in a list and later create a dataframe?
Basic outer join of two series:
s1 = pd.Series(index=["DE_AT", "DE_CZ", "PL_DE", "PL_SK"], data=[1, 2, 3, 4]).to_frame()
s2 = pd.Series(index=["DE_AT", "DE_CZ", "PL_DE", "PL_NL", "PL_UK"], data=[1, 2, 3, 4, 5]).to_frame()
s1.join(s2, how="outer", lsuffix="1", rsuffix="2")
Output:
        01   02
DE_AT  1.0  1.0
DE_CZ  2.0  2.0
PL_DE  3.0  3.0
PL_NL  NaN  4.0
PL_SK  4.0  NaN
PL_UK  NaN  5.0
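For the iteration in the question, one option is to collect each series in a list and concatenate once at the end; pd.concat with axis=1 outer-joins on the index by default. A sketch (series_per_iteration is a hypothetical stand-in for whatever yields a series on each pass):

import pandas as pd

collected = []
for i, s in enumerate(series_per_iteration, start=1):  # hypothetical source of the series
    collected.append(s.rename(i))  # the series name becomes the column label
result = pd.concat(collected, axis=1)  # union of all index labels; gaps become NaN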

Pandas Dataframe: grouping by index keeping only notnan value in each column

I have dataframes similar to the following ones:
,A,B
2020-01-15,1,
2020-01-15,,2
2020-01-16,3,
2020-01-16,,4
2020-01-17,5,
2020-01-17,,6
,A,B,C
2020-01-15,1,,
2020-01-15,,2,
2020-01-15,,,3
2020-01-16,4,,
2020-01-16,,5,
2020-01-16,,,6
2020-01-17,7,,
2020-01-17,,8,
2020-01-17,,,9
I need to transform them to the following:
,A,B
2020-01-15,1,2
2020-01-16,3,4
2020-01-17,5,6
,A,B,C
2020-01-15,1,2,3
2020-01-16,4,5,6
2020-01-17,7,8,9
I have tried groupby().first(), without success.
Let us do groupby + first:
s=df.groupby(level=0).first()
              A    B
2020-01-15  1.0  2.0
2020-01-16  3.0  4.0
2020-01-17  5.0  6.0
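A minimal, self-contained sketch of the above (assuming the first CSV sample, with the dates read as the index):

import pandas as pd
from io import StringIO

csv_text = """,A,B
2020-01-15,1,
2020-01-15,,2
2020-01-16,3,
2020-01-16,,4
2020-01-17,5,
2020-01-17,,6
"""
df = pd.read_csv(StringIO(csv_text), index_col=0)
out = df.groupby(level=0).first()  # first() skips NaN, keeping the one non-NaN value per column
print(out)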

Python pandas show repeated values

I'm trying to read data from a txt file with pandas.read_csv, but it doesn't show repeated (identical) values in the file. For example, I have 2043 in several rows, but it is shown only once, not in every row.
My file sample: (screenshot omitted)
Result set: (screenshot omitted)
All the circles I've drawn should also be 2043, but they are empty.
My code is :
import pandas as pd
df = pd.read_csv('samplefile.txt', sep='\t', header=None,
                 names=["234", "235", "236"])
You get a MultiIndex, so repeated values in the first index levels are not shown (only the first occurrence is displayed).
You can convert MultiIndex to columns by reset_index:
df = df.reset_index()
Or specify each column in parameter names for avoid MultiIndex:
df = pd.read_csv('samplefile.txt', sep='\t', names=["one", "two", "next", "234", "235", "236"])
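A minimal sketch of the display behavior (the column names here are made up for illustration): repeated values in an outer MultiIndex level are printed only once, and reset_index makes them visible again.

import pandas as pd

df = pd.DataFrame({'a': [2043, 2043, 2043],
                   'b': [1, 2, 3],
                   'c': [4, 5, 6]}).set_index(['a', 'b'])
print(df)                # 2043 appears once; the repeats are shown as blanks
print(df.reset_index())  # the repeated values are back in every row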
A word of warning with MultiIndex, as I was bitten by this yesterday and wasted time trying to troubleshoot a non-existent problem.
If one of your index levels is of type float64, you may find that the indexes are not shown in full. I had a dataframe I was running df.groupby().describe() on, and the variable I was grouping on was originally a long int; at some point it was converted to a float, and when printed this index was rounded. There were a number of values very close to each other, so it appeared on printing that the groupby() had found a single first-level value with multiple levels of the second index.
That's not very clear, so here is an illustrative example...
import numpy as np
import pandas as pd

index = np.random.uniform(low=89908893132829,
                          high=89908893132929,
                          size=(50,))
df = pd.DataFrame({'obs': np.arange(100)},
                  index=np.append(index, index)).sort_index()
df.index.name = 'index1'
df['index2'] = [1, 2] * 50
df.reset_index(inplace=True)
df.set_index(['index1', 'index2'], inplace=True)
Look at the dataframe and it appears that there is only one level of index1...
df.head(10)
obs
index1 index2
8.990889e+13 1 4
2 54
1 61
2 11
1 89
2 39
1 65
2 15
1 60
2 10
Run groupby(['index1', 'index2']).describe() and it looks like there is only one level of index1...
summary = df.groupby(['index1', 'index2']).describe()
summary.head()
obs
count mean std min 25% 50% 75% max
index1 index2
8.990889e+13 1 1.0 4.0 NaN 4.0 4.0 4.0 4.0 4.0
2 1.0 54.0 NaN 54.0 54.0 54.0 54.0 54.0
1 1.0 61.0 NaN 61.0 61.0 61.0 61.0 61.0
2 1.0 11.0 NaN 11.0 11.0 11.0 11.0 11.0
1 1.0 89.0 NaN 89.0 89.0 89.0 89.0 89.0
But if you look at the actual values of index1 in either you see that there are multiple unique values. In the original dataframe...
df.index.get_level_values('index1')
Float64Index([89908893132833.12, 89908893132833.12, 89908893132834.08,
89908893132834.08, 89908893132835.05, 89908893132835.05,
89908893132836.3, 89908893132836.3, 89908893132837.95,
89908893132837.95, 89908893132838.1, 89908893132838.1,
89908893132838.6, 89908893132838.6, 89908893132841.89,
89908893132841.89, 89908893132841.95, 89908893132841.95,
89908893132845.81, 89908893132845.81, 89908893132845.83,
89908893132845.83, 89908893132845.88, 89908893132845.88,
89908893132846.02, 89908893132846.02, 89908893132847.2,
89908893132847.2, 89908893132847.67, 89908893132847.67,
89908893132848.5, 89908893132848.5, 89908893132848.5,
89908893132848.5, 89908893132855.17, 89908893132855.17,
89908893132855.45, 89908893132855.45, 89908893132864.62,
89908893132864.62, 89908893132868.61, 89908893132868.61,
89908893132873.16, 89908893132873.16, 89908893132875.6,
89908893132875.6, 89908893132875.83, 89908893132875.83,
89908893132878.73, 89908893132878.73, 89908893132879.9,
89908893132879.9, 89908893132880.67, 89908893132880.67,
89908893132880.69, 89908893132880.69, 89908893132881.31,
89908893132881.31, 89908893132881.69, 89908893132881.69,
89908893132884.45, 89908893132884.45, 89908893132887.27,
89908893132887.27, 89908893132887.83, 89908893132887.83,
89908893132892.8, 89908893132892.8, 89908893132894.34,
89908893132894.34, 89908893132894.5, 89908893132894.5,
89908893132901.88, 89908893132901.88, 89908893132903.27,
89908893132903.27, 89908893132904.53, 89908893132904.53,
89908893132909.27, 89908893132909.27, 89908893132910.38,
89908893132910.38, 89908893132911.86, 89908893132911.86,
89908893132913.4, 89908893132913.4, 89908893132915.73,
89908893132915.73, 89908893132916.06, 89908893132916.06,
89908893132922.48, 89908893132922.48, 89908893132923.44,
89908893132923.44, 89908893132924.66, 89908893132924.66,
89908893132925.14, 89908893132925.14, 89908893132928.28,
89908893132928.28],
dtype='float64', name='index1')
...and in the summarised dataframe...
summary.index.get_level_values('index1')
Float64Index([89908893132833.12, 89908893132833.12, 89908893132834.08,
89908893132834.08, 89908893132835.05, 89908893132835.05,
89908893132836.3, 89908893132836.3, 89908893132837.95,
89908893132837.95, 89908893132838.1, 89908893132838.1,
89908893132838.6, 89908893132838.6, 89908893132841.89,
89908893132841.89, 89908893132841.95, 89908893132841.95,
89908893132845.81, 89908893132845.81, 89908893132845.83,
89908893132845.83, 89908893132845.88, 89908893132845.88,
89908893132846.02, 89908893132846.02, 89908893132847.2,
89908893132847.2, 89908893132847.67, 89908893132847.67,
89908893132848.5, 89908893132848.5, 89908893132855.17,
89908893132855.17, 89908893132855.45, 89908893132855.45,
89908893132864.62, 89908893132864.62, 89908893132868.61,
89908893132868.61, 89908893132873.16, 89908893132873.16,
89908893132875.6, 89908893132875.6, 89908893132875.83,
89908893132875.83, 89908893132878.73, 89908893132878.73,
89908893132879.9, 89908893132879.9, 89908893132880.67,
89908893132880.67, 89908893132880.69, 89908893132880.69,
89908893132881.31, 89908893132881.31, 89908893132881.69,
89908893132881.69, 89908893132884.45, 89908893132884.45,
89908893132887.27, 89908893132887.27, 89908893132887.83,
89908893132887.83, 89908893132892.8, 89908893132892.8,
89908893132894.34, 89908893132894.34, 89908893132894.5,
89908893132894.5, 89908893132901.88, 89908893132901.88,
89908893132903.27, 89908893132903.27, 89908893132904.53,
89908893132904.53, 89908893132909.27, 89908893132909.27,
89908893132910.38, 89908893132910.38, 89908893132911.86,
89908893132911.86, 89908893132913.4, 89908893132913.4,
89908893132915.73, 89908893132915.73, 89908893132916.06,
89908893132916.06, 89908893132922.48, 89908893132922.48,
89908893132923.44, 89908893132923.44, 89908893132924.66,
89908893132924.66, 89908893132925.14, 89908893132925.14,
89908893132928.28, 89908893132928.28],
dtype='float64', name='index1')
I wasted time scratching my head wondering why my groupby(['index1', 'index2']) had produced only one level of index1!
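One way out, as a hedged sketch (it assumes the underlying values really are integral and fit in int64, as in my original long-int case): cast the level back to integers before grouping, so printing is exact and near-identical floats cannot visually collapse into one value.

df = df.reset_index()
df['index1'] = df['index1'].astype('int64')  # assumption: the values are genuinely integers
df = df.set_index(['index1', 'index2'])
summary = df.groupby(['index1', 'index2']).describe()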

Aligning 2 python lists according to 2 other lists

I have two arrays, nlxTTL and ttlState. Both consist of a repeating pattern of 0s and 1s indicating an input voltage, which can be HIGH (1) or LOW (0), and both are recorded from the same source, which sends a TTL pulse (HIGH and LOW) with a 1-second pulse width.
But due to a logging mistake, some drops happen in the ttlState list, i.e. it doesn't log a strictly repeating sequence of 0s and 1s and ends up dropping values.
The good part is that I also log a timestamp for each TTL input received, for both lists. The inter-event timestamp difference clearly shows when a pulse has been missed.
Here is an example of what data looks like:
nlxTTL, ttlState, nlxTime, ttlTime
0,0,1000,1000
1,1,2000,2000
0,1,3000,4000
1,1,4000,6000
0,0,5000,7000
1,1,6000,8000
0,0,7000,9000
1,1,8000,10000
As you can see, nlxTime and ttlTime are clearly different from each other. How can I then use these timestamps to align all 4 lists?
When dealing with tabular data such as a CSV file, it's a good idea to use a library to make the process easier. I like the pandas dataframe library.
Now for your question, one way to think about this problem is that you really have two datasets... An nlx dataset and a ttl dataset. You want to join those datasets together by timestamp. Pandas makes tasks like this very easy.
import pandas as pd
from io import StringIO
data = """\
nlxTTL, ttlState, nlxTime, ttlTime
0,0,1000,1000
1,1,2000,2000
0,1,3000,4000
1,1,4000,6000
0,0,5000,7000
1,1,6000,8000
0,0,7000,9000
1,1,8000,10000
"""
# Load data into dataframe.
df = pd.read_csv(StringIO(data))
# Remove spaces from column names.
df.columns = [x.strip() for x in df.columns]
# Split the data into an nlx dataframe and a ttl dataframe.
nlx = df[['nlxTTL', 'nlxTime']].reset_index()
ttl = df[['ttlState', 'ttlTime']].reset_index()
# Merge the dataframes back together based on their timestamps.
# Use an outer join so missing data gets filled with NaNs instead
# of just dropping the rows.
merged_df = nlx.merge(ttl, left_on='nlxTime', right_on='ttlTime', how='outer')
# Get back to the original set of columns
merged_df = merged_df[df.columns]
# Print out the results.
print(merged_df)
This produces the following output.
nlxTTL ttlState nlxTime ttlTime
0 0.0 0.0 1000.0 1000.0
1 1.0 1.0 2000.0 2000.0
2 0.0 NaN 3000.0 NaN
3 1.0 1.0 4000.0 4000.0
4 0.0 NaN 5000.0 NaN
5 1.0 1.0 6000.0 6000.0
6 0.0 0.0 7000.0 7000.0
7 1.0 1.0 8000.0 8000.0
8 NaN 0.0 NaN 9000.0
9 NaN 1.0 NaN 10000.0
You'll notice that it fills in the dropped values with NaN values because we are doing an outer join. If this is undesirable, change the how='outer' parameter to how='inner' to perform an inner join. This will only keep records for which you have both an nlx and ttl response at that timestamp.
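If in your real data the surviving timestamps are close but not exactly equal, a nearest-match join may fit better. A sketch (an alternative, not part of the answer above; pd.merge_asof needs the key columns sorted, and tolerance is in the same units as the timestamps, here 500 ms, half the pulse width):

merged_df = pd.merge_asof(nlx.sort_values('nlxTime'), ttl.sort_values('ttlTime'),
                          left_on='nlxTime', right_on='ttlTime',
                          direction='nearest', tolerance=500)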

what is the most efficient way to synchronize two large data frames in pandas?

I would like to synchronize two very long data frames; performance is key in this use case. The two data frames are indexed in chronological order (this should be exploited to be as fast as possible) using datetimes or Timestamps.
One way to synch is provided in this example:
import pandas as pd
df1=pd.DataFrame({'A':[1,2,3,4,5,6], 'B':[1,5,3,4,5,7]}, index=pd.date_range('20140101 101501', freq='u', periods=6))
df2=pd.DataFrame({'D':[10,2,30,4,5,10], 'F':[1,5,3,4,5,70]}, index=pd.date_range('20140101 101501.000003', freq='u', periods=6))
# synch data frames
df3=df1.merge(df2, how='outer', right_index=True, left_index=True).fillna(method='ffill')
My question is if this is the most efficient way to do it? I am ready to explore other solutions (e.g. using numpy or cython) if there are faster ways to solve this task.
Thanks
Note: in general the timestamps are not equally spaced (unlike in the example above); the method should also work in that case.
Comment after reading the answers
I think there are many use cases in which neither align nor merge nor join helps. The point is to avoid DB-style semantics for aligning (which, for time series, are not so relevant in my opinion). For me, aligning means mapping series A into B with a way to deal with missing values (typically a sample-and-hold method); align and join cause unwanted effects, like several repeated timestamps in the result of a join. I still do not have a perfect solution, but it seems np.searchsorted can help (it is much faster than using several calls to join/align to do what I need). I could not find a pandas way to do this so far.
How can I map A into B so that the result has all timestamps of A and B but no repetitions (beyond those already present in A and B)?
Another typical use case is sample-and-hold synchronization, which can be solved efficiently as follows (synch A with B, i.e. take for every timestamp in A the corresponding values in B):
idx = np.searchsorted(B.index.values, A.index.values, side='right') - 1
df = A.copy()
for col in B:
    # positional lookup; assumes every timestamp in A is at or after B's first timestamp
    df[col] = B[col].iloc[idx].values
The resulting df has the same index as A and the synchronized (sample-and-hold) values from B.
Is there an effective way to do such things directly in pandas?
If you need to synchronize, then use align (see the pandas docs); otherwise merge is a good option.
In [18]: N=100000
In [19]: df1=pd.DataFrame({'A':[1,2,3,4,5,6]*N, 'B':[1,5,3,4,5,7]*N}, index=pd.date_range('20140101 101501', freq='u', periods=6*N))
In [20]: df2=pd.DataFrame({'D':[10,2,30,4,5,10]*N, 'F':[1,5,3,4,5,70]*N}, index=pd.date_range('20140101 101501.000003', freq='u', periods=6*N))
In [21]: %timeit df1.merge(df2, how='outer', right_index=True, left_index=True).fillna(method='ffill')
10 loops, best of 3: 69.3 ms per loop
In [22]: %timeit df1.align(df2)
10 loops, best of 3: 36.5 ms per loop
In [24]: pd.set_option('max_rows',10)
In [25]: x, y = df1.align(df2)
In [26]: x
Out[26]:
A B D F
2014-01-01 10:15:01 1 1 NaN NaN
2014-01-01 10:15:01.000001 2 5 NaN NaN
2014-01-01 10:15:01.000002 3 3 NaN NaN
2014-01-01 10:15:01.000003 4 4 NaN NaN
2014-01-01 10:15:01.000004 5 5 NaN NaN
... .. .. .. ..
2014-01-01 10:15:01.599998 5 5 NaN NaN
2014-01-01 10:15:01.599999 6 7 NaN NaN
2014-01-01 10:15:01.600000 NaN NaN NaN NaN
2014-01-01 10:15:01.600001 NaN NaN NaN NaN
2014-01-01 10:15:01.600002 NaN NaN NaN NaN
[600003 rows x 4 columns]
In [27]: y
Out[27]:
A B D F
2014-01-01 10:15:01 NaN NaN NaN NaN
2014-01-01 10:15:01.000001 NaN NaN NaN NaN
2014-01-01 10:15:01.000002 NaN NaN NaN NaN
2014-01-01 10:15:01.000003 NaN NaN 10 1
2014-01-01 10:15:01.000004 NaN NaN 2 5
... .. .. .. ..
2014-01-01 10:15:01.599998 NaN NaN 2 5
2014-01-01 10:15:01.599999 NaN NaN 30 3
2014-01-01 10:15:01.600000 NaN NaN 4 4
2014-01-01 10:15:01.600001 NaN NaN 5 5
2014-01-01 10:15:01.600002 NaN NaN 10 70
[600003 rows x 4 columns]
If you wish to use the index of one of your DataFrames as the pattern for synchronizing, this may be useful:
df3 = df1.loc[df1.index.isin(df2.index)]
Note: this assumes the shape of df1 > the shape of df2.
The previous code snippet keeps only the elements present in both df1 and df2, but if you want to add the new indexes you may prefer:
new_indexes = df1.index.difference(df2.index)  # indexes in df1 that are not in df2
default_values = np.zeros((new_indexes.shape[0], df2.shape[1]))
df2 = pd.concat([df2, pd.DataFrame(default_values, index=new_indexes, columns=df2.columns)]).sort_index()
You can see another way to synchronize in this post
In my view, syncing time series is a very simple procedure. Assume ts# (# = 0, 1, 2) to be filled with
ts#[0,:] - time
ts#[1,:] - ask
ts#[2,:] - bid
ts#[3,:] - asksz
ts#[4,:] - bidsz
output is
totts[0,:] - sync time
totts[1-4,:] - ask/bid/asksz/bidsz of ts0
totts[5-8,:] - ask/bid/asksz/bidsz of ts1
totts[9-12,:] - ask/bid/asksz/bidsz of ts2
function:
def syncTS(ts0, ts1, ts2):
    ti0 = ts0[0, :]
    ti1 = ts1[0, :]
    ti2 = ts2[0, :]
    totti = np.union1d(ti0, ti1)
    totti = np.union1d(totti, ti2)
    totts = np.ndarray((13, len(totti)))
    it0 = it1 = it2 = 0
    nT0 = len(ti0) - 1
    nT1 = len(ti1) - 1
    nT2 = len(ti2) - 1
    for it, tim in enumerate(totti):
        # advance each cursor while its series has ticked at or before tim
        if tim >= ti0[it0] and it0 < nT0:
            it0 += 1
        if tim >= ti1[it1] and it1 < nT1:
            it1 += 1
        if tim >= ti2[it2] and it2 < nT2:
            it2 += 1
        totts[0, it] = tim
        for k in range(1, 5):
            totts[k, it] = ts0[k, it0]
            totts[k + 4, it] = ts1[k, it1]
            totts[k + 8, it] = ts2[k, it2]
    return totts
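For the sample-and-hold use case discussed in the question, a pandas-native sketch (not from the answers above; it assumes both indexes are sorted in chronological order, as stated): reindex B onto A's timestamps with forward fill, then join.

synced_B = B.reindex(A.index, method='ffill')  # most recent value of B at each timestamp of A
result = A.join(synced_B)                      # same index as A, columns from both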
