I want to slice a column in a dataframe (which contains only strings) based on the integers from a series. Here is an example:
data = pandas.DataFrame(['abc','scb','dvb'])
indices = pandas.Series([0,1,0])
Then apply some function so I get the following:
0
0 a
1 c
2 d
You can use plain Python to manipulate the lists beforehand.
l1 = ['abc','scb','dvb']
l2 = [0,1,0]
l3 = [l1[i][l2[i]] for i in range(len(l1))]
You get l3 as
['a', 'c', 'd']
Now converting it to DataFrame
data = pd.DataFrame(l3)
You get the desired dataframe
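The same idea can stay inside pandas by zipping the string column with the index Series; a minimal sketch using the example data above:

```python
import pandas as pd

data = pd.DataFrame(['abc', 'scb', 'dvb'])
indices = pd.Series([0, 1, 0])

# pick the i-th character of each string, row by row
result = pd.DataFrame([s[i] for s, i in zip(data[0], indices)])
print(result)  # rows: a, c, d
```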
You can use the following vectorized approach:
In [191]: [tuple(x) for x in indices.reset_index().values]
Out[191]: [(0, 0), (1, 1), (2, 0)]
In [192]: data[0].str.extractall(r'(.)') \
.loc[[tuple(x) for x in indices.reset_index().values]]
Out[192]:
0
match
0 0 a
1 1 c
2 0 d
In [193]: data[0].str.extractall(r'(.)') \
.loc[[tuple(x) for x in indices.reset_index().values]] \
.reset_index(level=1, drop=True)
Out[193]:
0
0 a
1 c
2 d
Explanation:
In [194]: data[0].str.extractall(r'(.)')
Out[194]:
0
match
0 0 a
1 b
2 c
1 0 s
1 c
2 b
2 0 d
1 v
2 b
In [195]: data[0].str.extractall(r'(.)').loc[ [ (0,0), (1,1) ] ]
Out[195]:
0
match
0 0 a
1 1 c
Numpy solution:
In [259]: a = np.array([list(x) for x in data.values.reshape(1, len(data))[0]])
In [260]: a
Out[260]:
array([['a', 'b', 'c'],
['s', 'c', 'b'],
['d', 'v', 'b']],
dtype='<U1')
In [263]: pd.Series(a[np.arange(len(data)), indices])
Out[263]:
0 a
1 c
2 d
dtype: object
I want to group a df by a column col_2, which contains mostly integers, but some cells contain a range of integers. In my real-life example, each unique integer represents the serial number of an assembled part. Each row in the dataframe represents a single part, which is allocated to an assembled part by col_2. Some parts can only be allocated to an assembled part with a given uncertainty (a range).
The expected output would be one single group for each referenced integer (assembled part S/N). For example, the entry col_1 = c should be allocated to both groups where col_2 = 1 and col_2 = 2.
df = pd.DataFrame({'col_1': ['a', 'b', 'c', 'd', 'e', 'f'],
                   'col_2': [1, 2, range(1, 3), 3, range(2, 5), 5]})
col_1 col_2
0 a 1
1 b 2
2 c (1, 2)
3 d 3
4 e (2, 3, 4)
5 f 5
print(df.groupby(['col_2']).groups)
The code above gives an error:
TypeError: '<' not supported between instances of 'range' and 'int'
I think this does what you want:
s = df.col_2.apply(pd.Series).set_index(df.col_1).stack().astype(int)
s.reset_index().groupby(0).col_1.apply(list)
The first step gives you:
col_1
a 0 1
b 0 2
c 0 1
1 2
d 0 3
e 0 2
1 3
2 4
f 0 5
And the final result is:
1 [a, c]
2 [b, c, e]
3 [d, e]
4 [e]
5 [f]
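On pandas 0.25+ you could also normalise every cell of col_2 to a list and use explode; a sketch on the sample frame, giving one row per (part, serial) pair:

```python
import pandas as pd

df = pd.DataFrame({'col_1': ['a', 'b', 'c', 'd', 'e', 'f'],
                   'col_2': [1, 2, range(1, 3), 3, range(2, 5), 5]})

# wrap scalars in a one-element list so every cell is list-like,
# then explode to one row per (col_1, col_2) pair
exploded = (df.assign(col_2=df.col_2.map(lambda x: list(x) if isinstance(x, range) else [x]))
              .explode('col_2'))
groups = exploded.groupby('col_2').col_1.apply(list)
print(groups)  # 1: [a, c], 2: [b, c, e], 3: [d, e], 4: [e], 5: [f]
```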
Try this:
df = pd.DataFrame({'col_1': ['a', 'b', 'c', 'd', 'e', 'f'],
                   'col_2': [1, 2, range(1, 3), 3, range(2, 5), 5]})
col_1 col_2
0 a 1
1 b 2
2 c (1, 2)
3 d 3
4 e (2, 3, 4)
5 f 5
df['col_2'] = df.col_2.map(lambda x: x if isinstance(x, range) else range(x, x + 1))
print(df.groupby('col_2', sort=False).groups)
Every scalar x becomes the one-element range(x, x + 1), so all group keys share one hashable type; sort=False avoids the unsupported comparison between ranges.
My data frame looks like this:
[image: pandas data frame with multiple categorical variables for a user]
I made sure there are no duplicates in it. I want to encode it and I want my final output like this.
I tried using pd.get_dummies directly, but I am not getting the desired result.
Can anyone help me with this?
IIUC, your user column is empty and everything is in name. If that's the case, you can do:
pd.pivot_table(df, index=df.name.str[0], columns=df.name.str[1:].values, aggfunc='count').fillna(0)
You can split each row in name using r'(\d+)' to separate digits from letters, and use pd.crosstab:
d = pd.DataFrame(df.name.str.split(r'(\d+)').values.tolist())
pd.crosstab(columns=d[2], index=d[1], values=d[1], aggfunc='count')
You could try the str accessor get_dummies with a groupby on the user column:
df.name.str.get_dummies().groupby(df.user).sum()
Example
Given your sample DataFrame
df = pd.DataFrame({'user': [1]*4 + [2]*4 + [3]*3,
'name': ['a', 'b', 'c', 'd']*2 + ['d', 'e', 'f']})
df_dummies = df.name.str.get_dummies().groupby(df.user).sum()
print(df_dummies)
[out]
a b c d e f
user
1 1 1 1 1 0 0
2 1 1 1 1 0 0
3 0 0 0 1 1 1
Assuming the following dataframe:
user name
0 1 a
1 1 b
2 1 c
3 1 d
4 2 a
5 2 b
6 2 c
7 3 d
8 3 e
9 3 f
You could groupby user and then use get_dummies:
import pandas as pd
# create data-frame
data = [[1, 'a'], [1, 'b'], [1, 'c'], [1, 'd'], [2, 'a'],
[2, 'b'], [2, 'c'], [3, 'd'], [3, 'e'], [3, 'f']]
df = pd.DataFrame(data=data, columns=['user', 'name'])
# group and get_dummies
grouped = df.groupby('user')['name'].apply(lambda x: '|'.join(x))
print(grouped.str.get_dummies())
Output
a b c d e f
user
1 1 1 1 1 0 0
2 1 1 1 0 0 0
3 0 0 0 1 1 1
As a side-note, you can do it all in one line:
result = df.groupby('user')['name'].apply(lambda x: '|'.join(x)).str.get_dummies()
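For what it's worth, when user and name are already separate columns as above, pd.crosstab builds the same indicator table in one call; a sketch on the same sample data:

```python
import pandas as pd

data = [[1, 'a'], [1, 'b'], [1, 'c'], [1, 'd'], [2, 'a'],
        [2, 'b'], [2, 'c'], [3, 'd'], [3, 'e'], [3, 'f']]
df = pd.DataFrame(data=data, columns=['user', 'name'])

# one row per user, one 0/1 column per name
ct = pd.crosstab(df.user, df.name)
print(ct)
```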
I have the following dataframe:
import numpy as np
import pandas as pd
index = pd.MultiIndex.from_product([[1, 2], ['a', 'b', 'c'], ['a', 'b', 'c']],
names=['one', 'two', 'three'])
df = pd.DataFrame(np.random.rand(18, 3), index=index)
0 1 2
one two three
1 a b 0.002568 0.390393 0.040717
c 0.943853 0.105594 0.738587
b b 0.049197 0.500431 0.001677
c 0.615704 0.051979 0.191894
2 a b 0.748473 0.479230 0.042476
c 0.691627 0.898222 0.252423
b b 0.270330 0.909611 0.085801
c 0.913392 0.519698 0.451158
I want to select rows where combination of index levels two and three are (a, b) or (b, c). How can I do this?
I tried df.loc[(slice(None), ['a', 'b'], ['b', 'c']), :] but that gives me all combinations of [a, b] and [b, c], including (a, c) and (b, b), which aren't needed.
I tried df.loc[pd.MultiIndex.from_tuples([(None, 'a', 'b'), (None, 'b', 'c')])] but that returns NaN in level one of the index.
df.loc[pd.MultiIndex.from_tuples([(None, 'a', 'b'), (None, 'b', 'c')])]
0 1 2
NaN a b NaN NaN NaN
b c NaN NaN NaN
So I thought I needed a slice at level one, but that gives me a TypeError:
pd.MultiIndex.from_tuples([(slice(None), 'a', 'b'), (slice(None), 'b', 'c')])
TypeError: unhashable type: 'slice'
I feel like I'm missing some simple one-liner here :).
Use df.query():
In [174]: df.query("(two=='a' and three=='b') or (two=='b' and three=='c')")
Out[174]:
0 1 2
one two three
1 a b 0.211555 0.193317 0.623895
b c 0.685047 0.369135 0.899151
2 a b 0.082099 0.555929 0.524365
b c 0.901859 0.068025 0.742212
UPDATE: we can also generate such "query" dynamically:
In [185]: l = [('a','b'), ('b','c')]
In [186]: q = ' or '.join(["(two=='{}' and three=='{}')".format(x,y) for x,y in l])
In [187]: q
Out[187]: "(two=='a' and three=='b') or (two=='b' and three=='c')"
In [188]: df.query(q)
Out[188]:
0 1 2
one two three
1 a b 0.211555 0.193317 0.623895
b c 0.685047 0.369135 0.899151
2 a b 0.082099 0.555929 0.524365
b c 0.901859 0.068025 0.742212
Here's one approach with loc and get_level_values:
In [3231]: idx = df.index.get_level_values
In [3232]: df.loc[((idx('two') == 'a') & (idx('three') == 'b')) |
((idx('two') == 'b') & (idx('three') == 'c'))]
Out[3232]:
0 1 2
one two three
1 a b 0.442332 0.380669 0.832598
b c 0.458145 0.017310 0.068655
2 a b 0.933427 0.148962 0.569479
b c 0.727993 0.172090 0.384461
Generic way
In [3262]: conds = [('a', 'b'), ('b', 'c')]
In [3263]: mask = np.column_stack(
[(idx('two') == c[0]) & (idx('three') == c[1]) for c in conds]
).any(1)
In [3264]: df.loc[mask]
Out[3264]:
0 1 2
one two three
1 a b 0.442332 0.380669 0.832598
b c 0.458145 0.017310 0.068655
2 a b 0.933427 0.148962 0.569479
b c 0.727993 0.172090 0.384461
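Another option, a sketch: drop level one and test membership of the remaining (two, three) pairs with MultiIndex.isin:

```python
import numpy as np
import pandas as pd

index = pd.MultiIndex.from_product([[1, 2], ['a', 'b', 'c'], ['a', 'b', 'c']],
                                   names=['one', 'two', 'three'])
df = pd.DataFrame(np.random.rand(18, 3), index=index)

wanted = [('a', 'b'), ('b', 'c')]
# compare only the (two, three) part of each row's index against the wanted pairs
res = df[df.index.droplevel('one').isin(wanted)]
print(res)  # 4 rows: (1,a,b), (1,b,c), (2,a,b), (2,b,c)
```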
Is there a way in pandas to select, out of a grouped dataframe, the groups with more than x members?
something like:
grouped = df.groupby(['a', 'b'])
dupes = [g[['a', 'b', 'c', 'd']] for _, g in grouped if len(g) > 1]
I can't find a solution in the docs or on SO.
Use filter:
grouped.filter(lambda x: len(x) > 1)
Example:
In [64]:
df = pd.DataFrame({'a':[0,0,1,2],'b':np.arange(4)})
df
Out[64]:
a b
0 0 0
1 0 1
2 1 2
3 2 3
In [65]:
df.groupby('a').filter(lambda x: len(x)>1)
Out[65]:
a b
0 0 0
1 0 1
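An equivalent boolean-mask version uses groupby(...).transform('size'), which keeps the original index and is often faster than filter when there are many groups; a sketch on the same example:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [0, 0, 1, 2], 'b': np.arange(4)})

# keep rows whose group under 'a' has more than one member
mask = df.groupby('a')['a'].transform('size') > 1
print(df[mask])  # the two rows with a == 0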
I'm trying to group by a column containing tuples. Each tuple has a different length.
I'd like to perform simple groupby operations on this column of tuples, such as sum or count.
Example :
df = pd.DataFrame(data={
    'col1': [1, 2, 3, 4],
    'col2': [('a', 'b'), ('a', 'm'), ('b', 'n', 'k'), ('a', 'c', 'k', 'z')],
})
print(df)
outputs :
col1 col2
0 1 (a, b)
1 2 (a, m)
2 3 (b, n, k)
3 4 (a, c, k, z)
I'd like to be able to group by col2 on col1, with for instance a sum.
Expected output would be :
  col2  sum_col1
0    a         7
1    b         4
2    c         4
3    n         3
4    m         2
5    k         7
6    z         4
I feel that pd.melt might be of use here, but I can't see exactly how.
Here is an approach using .get_dummies and .melt:
import pandas as pd
df = pd.DataFrame(data={
    'col1': [1, 2, 3, 4],
    'col2': [('a', 'b'), ('a', 'm'), ('b', 'n', 'k'), ('a', 'c', 'k', 'z')],
})
value_col = 'col1'
id_col = 'col2'
Unpack the tuples into their own DataFrame columns:
df = df.join(df.col2.apply(pd.Series))
Create dummy columns from the tuple values:
dummy_cols = df.columns.difference(df[[value_col, id_col]].columns)
dfd = pd.get_dummies(df[list(dummy_cols) + [value_col]])
Producing:
   col1  0_a  0_b  1_b  1_c  1_m  1_n  2_k  3_z
0     1    1    0    1    0    0    0    0    0
1     2    1    0    0    0    1    0    0    0
2     3    0    1    0    0    0    1    1    0
3     4    1    0    0    1    0    0    1    1
Then .melt it and strip the positional prefixes from the variable column:
dfd = pd.melt(dfd, value_vars=dfd.columns.difference([value_col]).tolist(), id_vars=value_col)
dfd['variable'] = dfd.variable.str.replace(r'\d_', '', regex=True)
print(dfd.head())
Yielding:
   col1 variable  value
0     1        a      1
1     2        a      1
2     3        a      0
3     4        a      1
4     1        b      0
And finally get your output:
dfd[dfd.value != 0].groupby('variable')[value_col].sum()
variable
a    7
b    4
c    4
k    7
m    2
n    3
z    4
Name: col1, dtype: int64
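On pandas 0.25+ the whole pipeline collapses to explode plus a plain groupby; a sketch on the sample data (note a one-element cell needs a trailing comma, ('a',), to be a real tuple):

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 2, 3, 4],
                   'col2': [('a', 'b'), ('a', 'm'), ('b', 'n', 'k'), ('a', 'c', 'k', 'z')]})

# one row per (col1, element-of-col2) pair, then sum col1 per element
sums = df.explode('col2').groupby('col2')['col1'].sum()
print(sums)  # a 7, b 4, c 4, k 7, m 2, n 3, z 4
```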