Making a multi-index in pandas DataFrames in Python? - python

I have a data set where there is a matrix of numeric values indexed by a time variable. Each matrix is a numpy array (which can be converted into a dataframe whose columns correspond to the columns of the matrix). Given these matrices, how can I combine them into a single dataframe where each matrix has a time index? Specifically:
# time t1
d1 = pandas.DataFrame({"a": [1,2,3,4], "b":[10,20,30,40]})
# time t2
d2 = pandas.DataFrame({"a": [10,20,30,40], "b": [1,2,5,6]})
# time t3
d3 = ...
I want to make an index called "time" to index these dataframes, and then aggregate values from columns "a" and "b" across the time index. How can you do this in pandas?
My attempt:
d=pandas.DataFrame([d1,d2],index=(0, 1),columns=["time"])
Update: unutbu's solution for adding two hierarchical index levels is:
c = pd.concat([d1, d2], keys=[('t1', 'p1'), ('t2', 'p2')], names=['time', 'position'])
My final question is how do you access this resulting structure? For example, how do you do vectorized operations across time, or across position, e.g. take the average of the rows for each value of time?
Also, how does this compare to encoding time and position as ordinary columns in each dataframe and using groupby? In other words, when should you use index levels versus flat columns that are grouped? Here's an alternative solution using a flat dataframe with groupby:
d1["time"] = 1
d1["position"] = "x"
d2["time"] = 2
d2["position"] = "y"
c = pandas.concat([d1, d2])
# take mean for all time values
c.groupby("time").apply(lambda x: np.mean(x, axis=1))

Given
import pandas as pd
d1 = pd.DataFrame({"a": [1,2,3,4], "b":[10,20,30,40]})
d2 = pd.DataFrame({"a": [10,20,30,40], "b": [1,2,5,6]})
then pd.concat([d1, d2], keys=['t1', 't2']) returns:
In [177]: pd.concat([d1, d2], keys=['t1', 't2'])
Out[177]:
       a   b
t1 0   1  10
   1   2  20
   2   3  30
   3   4  40
t2 0  10   1
   1  20   2
   2  30   5
   3  40   6
If you wish to add more than one level to the new MultiIndex, you can instead pass a list of tuples to the keys parameter:
In [237]: pd.concat([d1, d2], keys=[('t1', 'p1'), ('t2', 'p2')], names=['time', 'position'])
Out[237]:
                  a   b
time position
t1   p1       0   1  10
              1   2  20
              2   3  30
              3   4  40
t2   p2       0  10   1
              1  20   2
              2  30   5
              3  40   6
Note, it is important here that keys receives a list of tuples, rather than a list of lists.
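To address the follow-up about accessing the result, here is a minimal sketch (assuming the concatenated frame is named c as above) of level-based aggregation and selection:
import pandas as pd

d1 = pd.DataFrame({"a": [1, 2, 3, 4], "b": [10, 20, 30, 40]})
d2 = pd.DataFrame({"a": [10, 20, 30, 40], "b": [1, 2, 5, 6]})
c = pd.concat([d1, d2], keys=[('t1', 'p1'), ('t2', 'p2')], names=['time', 'position'])

# mean of every column for each value of the 'time' level
print(c.groupby(level='time').mean())

# same idea for the 'position' level
print(c.groupby(level='position').mean())

# cross-section: all rows for a single time
print(c.xs('t1', level='time'))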

Related

Pandas sort column but keep elements of the same category together

I have a dataframe with two columns. One is numeric and the other is categorical. For example,
   c1 c2
0  15  A
1  11  A
2  12  B
3  40  C
I want to sort by c1 but keep rows with the same c2 value together (so all the A's stay together). In categories where there are multiple entries, we sort by the largest value in that category.
So end result would be
   c1 c2
0  40  C
1  15  A
2  11  A
3  12  B
How should I do this?
Thanks
We can create a temp column with groupby + transform('max') to get the max value per group, sort_values with ascending=False, then drop the added column.
df = (
    df.assign(key=df.groupby('c2')['c1'].transform('max'))
      .sort_values(['key', 'c2', 'c1'], ascending=False, ignore_index=True)
      .drop(columns=['key'])
)
df:
   c1 c2
0  40  C
1  15  A
2  11  A
3  12  B
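For reference, a minimal runnable sketch of this approach on the question's example (assuming pandas >= 1.0 for ignore_index):
import pandas as pd

df = pd.DataFrame({"c1": [15, 11, 12, 40], "c2": ["A", "A", "B", "C"]})

out = (
    df.assign(key=df.groupby("c2")["c1"].transform("max"))
      .sort_values(["key", "c2", "c1"], ascending=False, ignore_index=True)
      .drop(columns=["key"])
)
print(out)  # rows ordered C (40), A (15, 11), B (12)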
IIUC, you can try:
df = (
    df.sort_values(by='c1', ascending=False)
      .groupby('c2', as_index=False, sort=False)
      .agg(list)
      .explode('c1')
)
df.sort_values(by=['c2', 'c1'], ascending=False)

How to extract a subset of a bigger dataset [duplicate]

A simple pandas question:
Is there a drop_duplicates() functionality to drop every row involved in the duplication?
An equivalent question is the following: Does pandas have a set difference for dataframes?
For example:
In [5]: df1 = pd.DataFrame({'col1':[1,2,3], 'col2':[2,3,4]})
In [6]: df2 = pd.DataFrame({'col1':[4,2,5], 'col2':[6,3,5]})
In [7]: df1
Out[7]:
   col1  col2
0     1     2
1     2     3
2     3     4
In [8]: df2
Out[8]:
   col1  col2
0     4     6
1     2     3
2     5     5
So maybe something like df2.set_diff(df1) will produce this:
   col1  col2
0     4     6
2     5     5
However, I don't want to rely on indexes because in my case, I have to deal with dataframes that have distinct indexes.
By the way, I initially thought about an extension of the current drop_duplicates() method, but now I realize that the second approach using properties of set theory would be far more useful in general. Both approaches solve my current problem, though.
Thanks!
A bit convoluted, but if you want to totally ignore the index data, convert the contents of the dataframes to sets of tuples containing the column values:
ds1 = set(map(tuple, df1.values))
ds2 = set(map(tuple, df2.values))
This step will get rid of any duplicates in the dataframes as well (index ignored)
set([(1, 2), (3, 4), (2, 3)]) # ds1
You can then use set methods to find anything. E.g. to find differences:
ds1.difference(ds2)
gives:
set([(1, 2), (3, 4)])
You can take that back to a dataframe if needed. Note you have to transform the set to a list first, as a set cannot be used to construct a dataframe:
pd.DataFrame(list(ds1.difference(ds2)))
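A small follow-up sketch (an addition, assuming the ds1/ds2 sets and df1 from above): the original column names can be passed back in when rebuilding the frame:
pd.DataFrame(list(ds1.difference(ds2)), columns=df1.columns)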
Here's another answer that keeps the index and does not require identical indexes in the two data frames. (EDIT: make sure there are no duplicates in df2 beforehand.)
pd.concat([df2, df1, df1]).drop_duplicates(keep=False)
It is fast and the result is
   col1  col2
0     4     6
2     5     5
from pandas import DataFrame
df1 = DataFrame({'col1':[1,2,3], 'col2':[2,3,4]})
df2 = DataFrame({'col1':[4,2,5], 'col2':[6,3,5]})
print(df2[~df2.isin(df1).all(1)])
print(df2[(df2!=df1)].dropna(how='all'))
print(df2[~(df2==df1)].dropna(how='all'))
Apply by the columns of the object you want to map (df2); find the rows that are not in the set (isin is like a set operator)
In [32]: df2.apply(lambda x: df2.loc[~x.isin(df1[x.name]),x.name])
Out[32]:
   col1  col2
0     4     6
2     5     5
Same thing, but compare against all values in df1 rather than only the matching column, still applied per column of df2:
In [33]: df2.apply(lambda x: df2.loc[~x.isin(df1.values.ravel()),x.name])
Out[33]:
   col1  col2
0   NaN     6
2     5     5
2nd example
In [34]: g = pd.DataFrame({'x': [1.2,1.5,1.3], 'y': [4,4,4]})
In [35]: g.columns=df1.columns
In [36]: g
Out[36]: 
   col1  col2
0   1.2     4
1   1.5     4
2   1.3     4
In [32]: g.apply(lambda x: g.loc[~x.isin(df1[x.name]),x.name])
Out[32]:
   col1  col2
0   1.2   NaN
1   1.5   NaN
2   1.3   NaN
Note, in 0.13, there will be an isin operator on the frame level, so something like: df2.isin(df1) should be possible
There are 3 methods which work, but two of them have some flaws.
Method 1 (Hash method):
It worked for all cases I tested.
df1.loc[:, "hash"] = df1.apply(lambda x: hash(tuple(x)), axis = 1)
df2.loc[:, "hash"] = df2.apply(lambda x: hash(tuple(x)), axis = 1)
df1 = df1.loc[~df1["hash"].isin(df2["hash"]), :]
Method 2 (Dict method):
It fails if DataFrames contain datetime columns.
df1 = df1.loc[~df1.isin(df2.to_dict(orient="list")).all(axis=1), :]
Method 3 (MultiIndex method):
I encountered cases when it failed on columns with None's or NaN's.
df1 = df1.loc[~df1.set_index(list(df1.columns)).index.isin(df2.set_index(list(df2.columns)).index), :]
Edit: as of pandas 0.24.0 you can make MultiIndex objects directly from data frames, which greatly simplifies the syntax of this answer:
df1mi = pd.MultiIndex.from_frame(df1)
df2mi = pd.MultiIndex.from_frame(df2)
dfdiff = df2mi.difference(df1mi).to_frame().reset_index(drop=True)
Original Answer
Pandas MultiIndex objects have fast set operations implemented as methods, so you can convert the DataFrames to MultiIndexes, use the difference() method, then convert the result back to a DataFrame. This solution should be much faster (by ~100x or more from my brief testing) than the solutions given here so far, and it will not depend on the row indexing of the original frames. As Piotr mentioned for his answer, this will fail with null values, since np.nan != np.nan. Any row in df2 with a null value will always appear in the difference. Also, the columns should be in the same order for both DataFrames.
df1mi = pd.MultiIndex.from_arrays(df1.values.transpose(), names=df1.columns)
df2mi = pd.MultiIndex.from_arrays(df2.values.transpose(), names=df2.columns)
dfdiff = df2mi.difference(df1mi).to_frame().reset_index(drop=True)
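A quick usage sketch on the question's df1 and df2 (assuming pandas >= 0.24 for from_frame):
import pandas as pd

df1 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
df2 = pd.DataFrame({'col1': [4, 2, 5], 'col2': [6, 3, 5]})

diff = (pd.MultiIndex.from_frame(df2)
          .difference(pd.MultiIndex.from_frame(df1))
          .to_frame()
          .reset_index(drop=True))
print(diff)  # the two rows of df2 not in df1: (4, 6) and (5, 5)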
Numpy's setdiff1d would work and perhaps be faster.
For each column:
np.setdiff1d(df1.col1.values, df2.col1.values)
So something like:
setdf = pd.DataFrame({
    col: np.setdiff1d(getattr(df1, col).values, getattr(df2, col).values)
    for col in df1.columns
})
numpy.setdiff1d docs
Get the indices of the intersection with a merge, then drop them:
>>> df_all = pd.DataFrame(np.arange(8).reshape((4,2)), columns=['A','B']); df_all
   A  B
0  0  1
1  2  3
2  4  5
3  6  7
>>> df_completed = df_all.iloc[::2]; df_completed
   A  B
0  0  1
2  4  5
>>> merged = pd.merge(df_all.reset_index(), df_completed); merged
   index  A  B
0      0  0  1
1      2  4  5
>>> df_pending = df_all.drop(merged['index']); df_pending
   A  B
1  2  3
3  6  7
Assumptions:
df1 and df2 have identical columns
it is a set operation so duplicates are ignored
sets are not extremely large so you do not worry about memory
union = pd.concat([df1,df2])
sym_diff = union[~union.duplicated(keep=False)]
union_of_df1_and_sym_diff = pd.concat([df1, sym_diff])
diff = union_of_df1_and_sym_diff[union_of_df1_and_sym_diff.duplicated()]
I'm not sure how pd.concat() implicitly joins overlapping columns but I had to do a little tweak on #radream's answer.
Conceptually, a set difference (symmetric) on multiple columns is a set union (outer join) minus a set intersection (or inner join):
df1 = pd.DataFrame({'col1':[1,2,3], 'col2':[2,3,4]})
df2 = pd.DataFrame({'col1':[4,2,5], 'col2':[6,3,5]})
o = pd.merge(df1, df2, how='outer')
i = pd.merge(df1, df2)
set_diff = pd.concat([o, i]).drop_duplicates(keep=False)
This yields:
   col1  col2
0     1     2
2     3     4
3     4     6
4     5     5
In pandas 1.1.0 you can count unique rows with value_counts and find the difference between the counts:
df1 = pd.DataFrame({'col1':[1,2,3], 'col2':[2,3,4]})
df2 = pd.DataFrame({'col1':[4,2,5], 'col2':[6,3,5]})
diff = df2.value_counts().sub(df1.value_counts(), fill_value=0)
Result:
col1  col2
1     2      -1.0
2     3       0.0
3     4      -1.0
4     6       1.0
5     5       1.0
dtype: float64
Get positive counts:
diff[diff > 0].reset_index(name='counts')
   col1  col2  counts
0     4     6     1.0
1     5     5     1.0
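If only the differing rows themselves are wanted, a small follow-up sketch (assuming pandas >= 0.24 for MultiIndex.to_frame) building on the diff Series above:
rows_only = diff[diff > 0].index.to_frame(index=False)  # columns col1, col2 with the rows unique to df2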
This should work even if you have multiple columns in both dataframes, but make sure that the column names of both dataframes are exactly the same.
set_difference = pd.concat([df2, df1, df1]).drop_duplicates(keep=False)
With multiple columns you can also use:
col_names = ['col_1', 'col_2']
set_difference = pd.concat([df2[col_names], df1[col_names],
                            df1[col_names]]).drop_duplicates(keep=False)

Nearest neighbor matching in Pandas

Given two DataFrames (t1, t2), both with a column 'x', how would I append a column to t1 with the ID of t2 whose 'x' value is the nearest to the 'x' value in t1?
t1:
id  x
1   1.49
2   2.35
t2:
id  x
3   2.36
4   1.5
output:
id  id2
1   4
2   3
I can do this by creating a new DataFrame and iterating on t1.groupby() and doing lookups on t2, then merging, but this takes incredibly long given a 17 million row t1 DataFrame.
Is there a better way to accomplish this? I've scoured the pandas docs regarding groupby, apply, transform, agg, etc., but an elegant solution has yet to present itself despite my thought that this would be a common problem.
Using merge_asof
df = pd.merge_asof(df1.sort_values('x'),
                   df2.sort_values('x'),
                   on='x',
                   direction='nearest',
                   suffixes=['', '_2'])
print(df)
Out[975]:
   id     x  id_2
0   3  0.87     6
1   1  1.49     5
2   2  2.35     4
Method 2: reindex
df1['id2'] = df2.set_index('x').reindex(df1.x, method='nearest').values
df1
   id     x  id2
0   1  1.49    4
1   2  2.35    3
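One caveat worth hedging (not stated in the original answer): reindex with method='nearest' generally expects a monotonic (sorted) index, so sorting t2 by 'x' first may be needed. A minimal sketch using the question's data:
import pandas as pd

t1 = pd.DataFrame({'id': [1, 2], 'x': [1.49, 2.35]})
t2 = pd.DataFrame({'id': [3, 4], 'x': [2.36, 1.5]})

t1['id2'] = (t2.sort_values('x')            # reindex needs a sorted index
               .set_index('x')['id']
               .reindex(t1['x'], method='nearest')
               .to_numpy())
print(t1)  # id2 is [4, 3]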
Convert t1 and t2 to lists and sort them, then match the ids with the zip() function:
list1 = t1.values.tolist()
list2 = t2.values.tolist()
list1.sort()  # ascending or descending, you decide
list2.sort()
list3 = list(zip(list1, list2))
print(list3)
# after that you should see output like (1, 4), (2, 3)
You can calculate a new array with the distance from each element in t1 to each element in t2, and then take the argmin along the rows to get the right index. This has the advantage that you can choose whatever distance function you like, and it does not require the dataframes to be of equal length.
It creates one intermediate array of size len(t1) * len(t2). Using a pandas builtin might be more memory-efficient, but this should be as fast as you can get as everything is done on the C side of numpy. You could always do this method in batches if memory is a problem.
import numpy as np
import pandas as pd
t1 = pd.DataFrame({"id": [1, 2], "x": np.array([1.49, 2.35])})
t2 = pd.DataFrame({"id": [3, 4], "x": np.array([2.36, 1.5])})
Now comes the part doing the actual work. The .to_numpy() bit is important since otherwise pandas tries to align on the indices. The first line uses broadcasting to create horizontal and vertical "repetitions" in a memory-efficient way.
dist = np.abs(t1["x"].to_numpy()[:, np.newaxis] - t2["x"].to_numpy()[np.newaxis, :])  # shape (len(t1), len(t2))
closest_idx = np.argmin(dist, axis=1)  # index of the nearest t2 row for each t1 row
closest_id = t2["id"].to_numpy()[closest_idx]
output = pd.DataFrame({"id1": t1["id"], "id2": closest_id})
print(output)
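With the question's t1 and t2 this yields id2 values [4, 3], matching the expected output above.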
Alternatively, you can round to 1 decimal of precision:
t1 = {'id': [1, 2], 'x': [1.49,2.35]}
t2 = {'id': [3, 4], 'x': [2.36,1.5]}
df1 = pd.DataFrame(t1)
df2 = pd.DataFrame(t2)
df = df1.round(1).merge(df2.round(1), on='x', suffixes=('', '2')).drop('x', axis=1)
print(df)
   id  id2
0   1    4
1   2    3
Add .drop('x', axis=1) to remove the binding column 'x' from the output.
Add suffixes=('', '2') to rename the column titles.

Pandas aggregate statistics as new columns

I have a dataframe df with 3 columns: A is an object id, B is a flag, and C is a value measured on object A with flag B.
I want to compute the average value of C grouped by [A, B] and store the results as three new columns:
C0: meanC (or NaN) when B = 0
C1: meanC (or NaN) when B = 1
C2: meanC (or NaN) when B = 2
Below there's an example of how I am trying to transform the dataframe df into res.
import numpy as np
import pandas as pd
df = pd.DataFrame({
    "A": [0,0,0,0,0,1,2,2,3,3,3],
    "B": [0,1,2,0,1,2,0,2,0,1,1],
    "C": [.654,.123,1.45,6.1,0.322,1.77,9.234,2.54,1,6.77,6.438]})
grouped = df.groupby(["A","B"]).agg("mean")
# how to transform grouped into res?
res = pd.DataFrame({
    "A": [0,1,2,3],
    "C0": [3.377,np.nan,9.234,1],
    "C1": [0.2225,np.nan,np.nan,6.604],
    "C2": [1.45,1.77,2.54,np.nan]})
Add unstack with add_prefix:
res = df.groupby(["A","B"])['C'].mean().unstack().add_prefix('C').reset_index()
Or use pivot_table with default mean aggregate function:
res = df.pivot_table(index="A",columns="B",values='C').add_prefix('C').reset_index()
print(res)
B  A     C0      C1    C2
0  0  3.377  0.2225  1.45
1  1    NaN     NaN  1.77
2  2  9.234     NaN  2.54
3  3  1.000  6.6040   NaN
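If the leftover 'B' label on the columns axis (visible in the printed output) should be cleared to match the desired res exactly, a small follow-up sketch is:
res.columns.name = None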

Which data structure in Python to use to replace Excel 2-dim array of strings/amounts?

I am using xlwings to replace my VB code with Python but since I am not an experienced programmer I was wondering - which data structure to use?
Data is in .xls in 2 columns and has the following form; In VB I lift this into a basic two dimensional array arrCampaignsAmounts(i, j):
Col 1: 'market_channel_campaign_product'; Col 2: '2334.43 $'
Then I concatenate words from 4 columns on another sheet into a similar 'string', into another 2-dim array arrStrings(i, j):
'Austria_Facebook_Winter_Active vacation'; 'rowNumber'
Finally, I search for strings from the first array within strings from the second array; if found, I write the amounts into the rowNumber from arrStrings(i, 2).
Would I use 4 lists for this task?
Two dictionaries?
Something else?
Definitely use pandas DataFrames. Here are references and very simple DataFrame examples.
#reference: http://pandas.pydata.org/pandas-docs/stable/10min.html
#reference: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html.
import numpy as np
import pandas as pd
def df_dupes(df_in):
    '''
    Returns [object, count] pairs for each unique item in the dataframe.
    '''
    # import pandas and convert a list/tuple input to a DataFrame first
    if isinstance(df_in, list) or isinstance(df_in, tuple):
        import pandas as pd
        df_in = pd.DataFrame(df_in)
    return df_in.groupby(df_in.columns.tolist(), as_index=False).size()
def df_filter_example(df):
    '''
    In [96]: df
    Out[96]:
       A  B  C  D
    0  1  4  9  1
    1  4  5  0  2
    2  5  5  1  0
    3  1  3  9  6
    '''
    import pandas as pd
    df = pd.DataFrame([[1,4,9,1],[4,5,0,2],[5,5,1,0],[1,3,9,6]], columns=['A','B','C','D'])
    return df[(df.A == 1) & (df.D == 6)]
def df_compare(df1, df2, compare_col_list, join_type):
    '''
    df_compare compares 2 dataframes.
    Returns left, right, inner or outer join.
    df1 is the first/left dataframe
    df2 is the second/right dataframe
    compare_col_list is a list of column names that must match between df1 and df2
    join_type = 'inner', 'left', 'right' or 'outer'
    '''
    import pandas as pd
    return pd.merge(df1, df2, how=join_type,
                    on=compare_col_list)
def df_compare_examples():
    import numpy as np
    import pandas as pd
    df1 = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]], columns=['c1', 'c2', 'c3'])
    '''    c1  c2  c3
        0   1   2   3
        1   4   5   6
        2   7   8   9 '''
    df2 = pd.DataFrame([[4,5,6],[7,8,9],[10,11,12]], columns=['c1', 'c2', 'c3'])
    '''    c1  c2  c3
        0   4   5   6
        1   7   8   9
        2  10  11  12 '''
    # One can see that df1 contains 1 row ([1,2,3]) not in df2 and
    # df2 contains 1 row ([10,11,12]) not in df1.
    # Assume c1 is not relevant to the comparison. So, we merge on cols c2 and c3.
    df_merge = pd.merge(df1, df2, how='outer', on=['c2','c3'])
    print(df_merge)
    '''    c1_x  c2  c3  c1_y
        0     1   2   3   NaN
        1     4   5   6     4
        2     7   8   9     7
        3   NaN  11  12    10 '''
    ''' One can see that columns c2 and c3 are returned. We also received
        columns c1_x and c1_y, where c1_x is the value of column c1
        in the first dataframe and c1_y is the value of c1 in the second
        dataframe. As such,
        any row that contains c1_y = NaN is a row from df1 not in df2 &
        any row that contains c1_x = NaN is a row from df2 not in df1. '''
    df1_unique = pd.merge(df1, df2, how='left', on=['c2','c3'])
    df1_unique = df1_unique[df1_unique['c1_y'].isnull()]
    print(df1_unique)
    df2_unique = pd.merge(df1, df2, how='right', on=['c2','c3'])
    print(df2_unique)
    df_common = pd.merge(df1, df2, how='inner', on=['c2','c3'])
    print(df_common)
def delete_column_example():
    print('create df')
    import pandas as pd
    df = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]], columns=['a','b','c'])
    print('drop (delete/remove) column')
    col_name = 'b'
    df.drop(col_name, axis=1, inplace=True)  # or df = df.drop(col_name, axis=1)
def delete_rows_example():
    print('\n\ncreate df')
    import pandas as pd
    df = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]], columns=['col_1','col_2','col_3'])
    print(df)
    print('\n\nappend rows')
    df = df.append(pd.DataFrame([[11,22,33]], columns=['col_1','col_2','col_3']))
    print(df)
    print('\n\ndelete rows where (based on) column value')
    df = df[df.col_1 == 4]
    print(df)
