How to get a transition string per row object based on two different columns in python (without using loops)?

I have the following data structure (see the example df below):
The columns s and d indicate the transitions of the object in column x. What I want is a transition string per object present in column x, e.g. as a new column, as shown in the expected output further down.
Is there a lean way to do this using pandas, without using too many loops?
This was the code I tried:
import pandas as pd

obj = df['x'].tolist()
rows = []
for o in obj:
    # collect all s values for this object and join them into one string
    locs = df[df['x'] == o]['s'].tolist()
    str_locs = '->'.join(str(l) for l in locs)
    print(str_locs)
    d = dict()
    d['x'] = o
    d['new'] = str_locs
    rows.append(d)
tmp = pd.DataFrame(rows)
This gives the output tmp as:
x new
a 1->2->4->8
a 1->2->4->8
a 1->2->4->8
a 1->2->4->8
b 1->2
b 1->2

Example df:
df = pd.DataFrame({"x":["a","a","a","a","b","b"], "s":[1,2,4,8,5,11],"d":[2,4,8,9,11,12]})
print(df)
x s d
0 a 1 2
1 a 2 4
2 a 4 8
3 a 8 9
4 b 5 11
5 b 11 12
The following code generates a transition string for every object present in column x:
1. groupby with respect to column x and get a list of [s, d] pairs for every object available in x.
2. Flatten each list of pairs sequentially into a single list.
3. Remove consecutive duplicates from the flattened list using itertools.groupby.
4. Join the items of the flattened list with -> to make a single string.
5. Finally, map the resulting series onto column x of the input df.
from itertools import groupby
# list of [s, d] pairs per object in x
grp = df.groupby('x')[['s', 'd']].apply(lambda x: x.values.tolist())
# flatten each list of pairs into one flat list of strings
grp = grp.apply(lambda x: [str(item) for tup in x for item in tup])
# drop consecutive duplicates with itertools.groupby, then join with '->'
sr = grp.apply(lambda x: "->".join([i[0] for i in groupby(x)]))
# map the per-object string back onto every row of the input df
df["new"] = df["x"].map(sr)
print(df)
x s d new
0 a 1 2 1->2->4->8->9
1 a 2 4 1->2->4->8->9
2 a 4 8 1->2->4->8->9
3 a 8 9 1->2->4->8->9
4 b 5 11 5->11->12
5 b 11 12 5->11->12
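A hedged aside, not from the original answer: because each row's d within a group is the next row's s, the same string can also be built from the s chain plus the group's last d, assuming the rows are already in transition order:
# assumes rows within each group are already ordered along the transition chain
s_chain = df.groupby('x')['s'].agg(lambda s: '->'.join(map(str, s)))
last_d = df.groupby('x')['d'].last().astype(str)
df['new'] = df['x'].map(s_chain + '->' + last_d)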

Related

Pandas Groupby remove row where specific value combination occurs

As stated, I would like to remove a specific row based on groupby logic. In the below dataframe, wherever the combination of F and G occurs for an ID, I would like to remove the row with value G.
import pandas as pd
op_d = {'ID': [1,1,2,2,3,4],'Value':['F','G','K','G','H','G']}
df = pd.DataFrame(data=op_d)
df
In this case, I would like to remove the second row, with value 'G' for ID = 1. So far:
temp = df.groupby('ID').apply(lambda x: (x['Value'].nunique()>1)).reset_index().rename(columns={0:'Expected_Output'})
temp = temp.loc[temp['Expected_Output']==True]
multiple_options = df.loc[df['ID'].isin(temp['ID'])]
So far, I am able to figure out which IDs have multiple values. Could you tell me how to remove this specific row?
Use Series.eq + Series.groupby transform with any:
m1, m2 = df['Value'].eq('F'), df['Value'].eq('G')
# mark 'G' rows whose ID group contains both an 'F' and a 'G'
m = m2 & m1.groupby(df['ID']).transform('any') & m2.groupby(df['ID']).transform('any')
df1 = df[~m]
Result:
print(df1)
ID Value
0 1 F
2 2 K
3 2 G
4 3 H
5 4 G
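A hedged side note: on the rows being tested, m2 is already True, so its transform is redundant; the shorter mask below gives the same result on this df:
m = df['Value'].eq('G') & df['Value'].eq('F').groupby(df['ID']).transform('any')
df1 = df[~m]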
Using isin:
c = (df['Value'].isin(['F','G']).groupby(df['ID']).transform('sum').eq(2)
& df['Value'].eq('G'))
out = df[~c].copy()
ID Value
0 1 F
2 2 K
3 2 G
4 3 H
5 4 G

Unpack DataFrame with tuple entries into separate DataFrames

I wrote a small class to compute some statistics through bootstrap without replacement. For those not familiar with this technique: you take n random subsamples of some data, compute the desired statistic (let's say the median) on each subsample, and then compare the values across subsamples. This gives you a measure of the variance of the estimated median over the dataset.
I implemented this in a class but reduced it to an MWE given by the following function:
import numpy as np
import pandas as pd
def bootstrap_median(df, n=5000, fraction=0.1):
    if isinstance(df, pd.DataFrame):
        columns = df.columns
    else:
        columns = None
    # Get the values as a ndarray
    arr = np.array(df.values)
    # Get the bootstrap sample through random permutations
    sample_len = int(len(arr)*fraction)
    if sample_len < 1:
        sample_len = 1
    sample = []
    for n_sample in range(n):
        sample.append(arr[np.random.permutation(len(arr))[:sample_len]])
    sample = np.array(sample)
    # Compute the median on each sample
    temp = np.median(sample, axis=1)
    # Get the mean and std of the estimate across samples
    m = np.mean(temp, axis=0)
    s = np.std(temp, axis=0)/np.sqrt(len(sample))
    # Convert output to DataFrames if necessary and return
    if columns is not None:  # a bare `if columns:` raises for a multi-element Index
        m = pd.DataFrame(data=m[None, ...], columns=columns)
        s = pd.DataFrame(data=s[None, ...], columns=columns)
    return m, s
This function returns the mean and standard deviation across the medians computed on each bootstrap sample.
Now consider this example DataFrame
data = np.arange(20)
group = np.tile(np.array([1, 2]).reshape(-1,1), (1,10)).flatten()
df = pd.DataFrame.from_dict({'data': data, 'group': group})
print(df)
print(bootstrap_median(df['data']))
this prints
data group
0 0 1
1 1 1
2 2 1
3 3 1
4 4 1
5 5 1
6 6 1
7 7 1
8 8 1
9 9 1
10 10 2
11 11 2
12 12 2
13 13 2
14 14 2
15 15 2
16 16 2
17 17 2
18 18 2
19 19 2
(9.5161999999999995, 0.056585753613431718)
So far so good because bootstrap_median returns a tuple of two elements. However, if I do this after a groupby
In: df.groupby('group')['data'].apply(bootstrap_median)
Out:
group
1 (4.5356, 0.0409710449952)
2 (14.5006, 0.0403772204095)
The values inside each cell are tuples, as one would expect from apply. I can unpack the result into two DataFrames by iterating over the elements like this:
out = df.groupby('group')['data'].apply(bootstrap_median)
index = []
data1 = []
data2 = []
for g, (m, s) in out.items():  # Series.iteritems() was removed in pandas 2.0
    index.append(g)
    data1.append(m)
    data2.append(s)
dfm = pd.DataFrame(data=data1, index=index, columns=['E[median]'])
dfm.index.name = 'group'
dfs = pd.DataFrame(data=data2, index=index, columns=['std[median]'])
dfs.index.name = 'group'
thus
In: dfm
Out:
E[median]
group
1 4.5356
2 14.5006
In: dfs
Out:
std[median]
group
1 0.0409710449952
2 0.0403772204095
This is a bit cumbersome, and my question is whether there is a more pandas-native way to "unpack" a Series whose values are tuples into separate DataFrames.
This question seemed related but it concerned string regex replacements and not unpacking true tuples.
I think you need to change:
return m, s
to:
return pd.Series([m, s], index=['m','s'])
And then get:
df1 = df.groupby('group')['data'].apply(bootstrap_median)
print (df1)
group
1 m 4.480400
s 0.040542
2 m 14.565200
s 0.040373
Name: data, dtype: float64
So it is possible to select by xs:
print (df1.xs('s', level=1))
group
1 0.040542
2 0.040373
Name: data, dtype: float64
print (df1.xs('m', level=1))
group
1 4.4804
2 14.5652
Name: data, dtype: float64
Also, if you need a one-column DataFrame, add to_frame:
df1 = df.groupby('group')['data'].apply(bootstrap_median).to_frame()
print (df1)
data
group
1 m 4.476800
s 0.041100
2 m 14.468400
s 0.040719
print (df1.xs('s', level=1))
data
group
1 0.041100
2 0.040719
print (df1.xs('m', level=1))
data
group
1 4.4768
2 14.4684
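A hedged aside, not part of the original answer: because the result has a MultiIndex, unstacking the inner level yields both statistics at once, as columns of a single DataFrame:
sr = df.groupby('group')['data'].apply(bootstrap_median)
wide = sr.unstack()  # one DataFrame with columns 'm' and 's', indexed by group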

Convert list of tuples to dataframe - where first element of tuple is column name

I have a list of tuples in the format:
tuples = [('a',1,10,15),('b',11,0,3),('c',7,19,2)] # etc.
I wish to store the data in a DataFrame with the format:
a b c ...
0 1 11 7 ...
1 10 0 19 ...
2 15 3 2 ...
Where the first element of the tuple is what I wish to be the column name.
I understand that I can achieve what I want by running:
df = pd.DataFrame(tuples)
df = df.T
df.columns = df.iloc[0]
df = df[1:]
But it seems to me like it should be more straightforward than this. Is there a more pythonic way of solving this?
Here's one way
In [151]: pd.DataFrame({x[0]:x[1:] for x in tuples})
Out[151]:
a b c
0 1 11 7
1 10 0 19
2 15 3 2
You can use a dictionary comprehension, like:
pd.DataFrame({k:v for k,*v in tuples})
in python-3.x, or:
pd.DataFrame({t[0]: t[1:] for t in tuples})
in python-2.7.
which generates:
>>> pd.DataFrame({k:v for k,*v in tuples})
a b c
0 1 11 7
1 10 0 19
2 15 3 2
With older versions of pandas (and of Python, where dicts were unordered), the columns end up sorted alphabetically; modern pandas preserves the dict's insertion order.
If you want to guarantee that the columns follow the order of the original content, you can use the columns parameter:
pd.DataFrame({k:v for k,*v in tuples},columns=[k for k,*_ in tuples])
again in python-3.x, or for python-2.7:
pd.DataFrame({t[0]: t[1:] for t in tuples},columns=[t[0] for t in tuples])
We can shorten this a bit into:
from operator import itemgetter
pd.DataFrame({t[0]: t[1:] for t in tuples},columns=map(itemgetter(0),tuples))
In case the tuples are laid out row-wise instead, with the first tuple holding the column names, then:
df = pd.DataFrame(tuples, columns=tuples[0])[1:]
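For the original column-wise layout, a hedged python-3.x alternative is to transpose with zip, which keeps the original column order without a comprehension:
names, *rows = zip(*tuples)  # names = ('a', 'b', 'c'); rows are the value tuples
df = pd.DataFrame(rows, columns=names)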

vectorise nested iterations by using groupby methods

I have written code to iterate through a dataset that has a demarcation column. This column consists of a value shared by all equally demarcated rows. The code iterates through each demarcated section with a nested loop, finding the nearest neighbor for each row within its respective demarcated block.
import pandas as pd
import numpy as np
Create a df with XYZ columns and a Section_demark column:
p=5
df = pd.DataFrame(np.random.randn(100, 3), columns=list('XYZ'))
df2 = df.sort_values('Z')  # DataFrame.sort() was removed from pandas; sort_values is the current API
df2 = df2.reset_index(drop=True)
df2['Section_demark'] = (df2.index/p).astype('int')
df2.head(15)
X Y Z Section_demark
0 -1.125526 -0.249091 -2.505444 0
1 0.710114 1.357477 -2.195904 0
2 -0.580319 -0.997311 -2.031280 0
3 1.311526 -0.268590 -1.741079 0
4 0.481450 0.448904 -1.546278 0
5 -1.820224 -0.846628 -1.392700 1
6 0.528618 0.418862 -1.388170 1
7 0.360560 -0.309429 -1.319548 1
8 -0.369107 -1.290528 -1.233815 1
9 0.139063 0.045076 -1.209820 1
10 0.049387 1.087300 -1.188375 2
11 0.678247 -1.191882 -1.172214 2
12 -0.976294 -0.752081 -1.092286 2
13 0.875952 0.319304 -1.079185 2
14 0.469730 -0.329548 -1.044178 2
Function for the (squared) euclidean distance:
def eucl_d(item_id):
    # squared distance of every row of df3 to the row at position item_id (df3 is global)
    a = df3.sub(df3.iloc[item_id], axis=1)
    b = np.sum(np.square(a), axis=1)
    return b
Iterate through the section demarks, and within each Section_demark iterate through the rows to find each row's nearest neighbor:
sort the block by distance, isolate the nearest row (skipping the row itself) as a series, take that series' index, and compile a list from it.
Then read the list back into df2, creating a new column with the nearest-neighbor index as its value:
s = 0
elements = []
while s < (len(df2)/p):
    df3 = df2[df2['Section_demark'] == s]
    r = 0
    while r < p:
        df4 = df3.copy()
        df4['dist'] = eucl_d(r)
        df4 = df4.sort_values('dist')  # DataFrame.sort() was removed from pandas
        ser = df4.iloc[1]              # iloc[0] is the row itself (distance 0)
        elements.append(ser.name)
        r = r + 1
    s = s + 1
df2["NNIX"] = elements
df2.head(10)
X1 Y1 Z1 NNIX
0 0.002299 1.284195 -1.604009 1
1 -0.444305 0.346856 -2.396538 0
2 -0.490741 -1.416682 -1.423573 3
3 0.203635 -0.676841 -1.596332 2
4 0.002299 1.284195 -1.604009 1
5 -0.314330 0.036554 -1.153127 6
6 -0.387839 0.129000 -1.235331 5
7 -0.314330 0.036554 -1.153127 6
8 -0.059477 -0.205260 -1.136376 7
9 0.717980 0.130665 -1.040372 8
I would like to replace the last section of iteration with a groupby command and use aggregate or apply to run the eucl_d function, but it eludes me.
I can get df2 grouped by running this:
grouped = df2.groupby('Section_demark')
It's the second step that is giving me trouble.
I was thinking:
grouped.agg(eucl_d(item_id))
But I don't know how to specify the item_id for eucl_d(item_id).
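A hedged sketch of one possible approach (my own names, e.g. nearest_in_group, not from the thread): compute the pairwise squared-distance matrix inside each group, mask the diagonal, and take the argmin per row; groupby.apply then replaces both loops:
def nearest_in_group(g):
    # pairwise squared euclidean distances within the group (n x n matrix)
    a = g[['X', 'Y', 'Z']].to_numpy()
    d2 = ((a[:, None, :] - a[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)  # exclude each row itself
    return pd.Series(g.index[d2.argmin(axis=1)], index=g.index)

df2['NNIX'] = df2.groupby('Section_demark', group_keys=False).apply(nearest_in_group)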

python split pd dataframe by column

Is there a function that splits a pandas.dataframe object into multiple sub-dataframes, by a specific column value? For example, if I have
A 1
B 2
A 3
B 4
I want the result as follow:
A 1
A 3
and
B 2
B 4
In R, it is the split function. How is it done in python? I know I can use subset within a for loop. But is there a function that does that? Thanks.
You can use groupby() with a list comprehension to extract a list of sub data frames, each of which contains only a single ind value:
import pandas as pd
from io import StringIO  # Python 3; Python 2 used `from StringIO import StringIO`
df = pd.read_csv(StringIO("""A 1
B 2
A 3
B 4"""), sep=r"\s+", names=['ind', 'value'])
lst = [g for _, g in df.groupby('ind')]
lst[0]
# ind value
#0 A 1
#2 A 3
lst[1]
# ind value
#1 B 2
#3 B 4
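A hedged aside, not in the original answer: a dict keyed by the group value gives named access to the parts instead of positional indexing:
parts = {k: g for k, g in df.groupby('ind')}
parts['A']
# ind value
#0 A 1
#2 A 3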
