How to process column names and create new columns - python

This is my pandas DataFrame with original column names.
old_dt_cm1_tt  old_dm_cm1  old_rr_cm2_epf  old_gt
            1           3               0       0
            2           1               1       5
First I want to extract all unique variations of cm, e.g. cm1 and cm2 in this case.
After this I want to create a new column for each unique cm; in this example there should be 2 new columns.
Finally, each new column should store the count of non-zero values across the original columns containing that cm, i.e.
old_dt_cm1_tt  old_dm_cm1  old_rr_cm2_epf  old_gt  cm1  cm2
            1           3               0       0    2    0
            2           1               1       5    2    1
I implemented the first step as follows:
cols = pd.DataFrame(list(df.columns))
ind = [c for c in df.columns if 'cm' in c]
df.loc[:, ind].columns
How should I proceed with steps 2 and 3 so that the solution is automatic? (I don't want to manually define the column names cm1 and cm2, because the original data set might have many cm variations.)

You can use:
print df
   old_dt_cm1_tt  old_dm_cm1  old_rr_cm2_epf  old_gt
0              1           3               0       0
1              2           1               1       5
First you can filter the columns containing the string cm, so columns without cm are removed.
df1 = df.filter(regex='cm')
Now you can rename the columns to the extracted cm values, like cm1 and cm2.
print [cm for c in df1.columns for cm in c.split('_') if cm[:2] == 'cm']
['cm1', 'cm1', 'cm2']
df1.columns = [cm for c in df1.columns for cm in c.split('_') if cm[:2] == 'cm']
print df1
cm1 cm1 cm2
0 1 3 0
1 2 1 1
Now you can count the non-zero values: convert df1 to a boolean DataFrame and sum it, since True is converted to 1 and False to 0. You need the counts per unique column name, so group by the column names and sum the values.
df1 = df1.astype(bool)
print df1
cm1 cm1 cm2
0 True True False
1 True True True
print df1.groupby(df1.columns, axis=1).sum()
cm1 cm2
0 2 0
1 2 1
You need the unique column names, which are then added to the original df:
print df1.columns.unique()
['cm1' 'cm2']
Last, you can add the new columns (here df[['cm1','cm2']]) from the groupby result:
df[df1.columns.unique()] = df1.groupby(df1.columns, axis=1).sum()
print df
   old_dt_cm1_tt  old_dm_cm1  old_rr_cm2_epf  old_gt  cm1  cm2
0              1           3               0       0    2    0
1              2           1               1       5    2    1
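For reference, here is a consolidated, self-contained sketch of the same approach written for Python 3 and recent pandas, where groupby(..., axis=1) is deprecated; it groups on the transposed frame instead (the sample data is re-created inline, so this is an illustration rather than the original code above):
import pandas as pd

df = pd.DataFrame({'old_dt_cm1_tt': [1, 2],
                   'old_dm_cm1': [3, 1],
                   'old_rr_cm2_epf': [0, 1],
                   'old_gt': [0, 5]})

# keep only the columns whose name contains 'cm'
df1 = df.filter(regex='cm')

# map every remaining column to its cm token, e.g. 'old_dt_cm1_tt' -> 'cm1'
df1.columns = [next(p for p in c.split('_') if p.startswith('cm')) for c in df1.columns]

# count non-zero values per row within each cm group:
# transpose, group the rows by the cm label, sum, then transpose back
counts = df1.astype(bool).T.groupby(level=0).sum().T

df[counts.columns] = counts
print(df)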

Once you know which columns have cm in them you can map them (with a dict) to the desired new column with an adapted version of this answer:
col_map = {c:'cm'+c[c.index('cm') + len('cm')] for c in ind}
# ^ if you are hard coding this in you might as well use 2
so that each original column name maps to 'cm' plus the character directly following it; in this case it would be:
{'old_dm_cm1': 'cm1', 'old_dt_cm1_tt': 'cm1', 'old_rr_cm2_epf': 'cm2'}
Then add the new columns to the DataFrame by iterating over the dict:
for col, new_col in col_map.items():
    if new_col not in df:
        df[new_col] = [int(a != 0) for a in df[col]]
    else:
        df[new_col] += [int(a != 0) for a in df[col]]
Note that int(a!=0) simply gives 0 if the value is 0 and 1 otherwise. The only issue is that plain dicts do not guarantee ordering (before Python 3.7), so it may be preferable to add the new columns in order of their values (like the answer here):
import operator
for col, new_col in sorted(col_map.items(), key=operator.itemgetter(1)):
    if new_col in df:
        df[new_col] += [int(a != 0) for a in df[col]]
    else:
        df[new_col] = [int(a != 0) for a in df[col]]
to ensure the new columns are inserted in order.
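If the cm identifiers can have more than one digit (cm10, cm11, ...), the single-character slice in col_map above would mislabel them. A sketch that extracts the full token with a regular expression instead (the regex and loop are my own variation, not the original answer's code):
import re

# map every column containing a cm token to that full token,
# e.g. 'old_rr_cm12_epf' -> 'cm12'
col_map = {}
for c in df.columns:
    m = re.search(r'(cm\d+)', c)
    if m:
        col_map[c] = m.group(1)

# same non-zero counting logic as above, sorted so the new columns appear in order
for col, new_col in sorted(col_map.items(), key=lambda kv: kv[1]):
    counts = [int(a != 0) for a in df[col]]
    if new_col in df:
        df[new_col] += counts
    else:
        df[new_col] = counts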

Related

Count the number of columns for each row of a pandas DataFrame where a condition holds

I have a Pandas dataframe as follows:
data = pd.DataFrame({'w1':[0,1,0],'w2':[5,8,0],'w3':[0,0,0],'w4' :[5,1,0], 'w5' : [7,1,0],'condition' : [5,1,0]})
I need a column that, for each row, counts the number of columns (other than "condition") whose values are equal to "condition".
The final output should look like the one below:
I don't want to write a for loop.
As a solution, I wanted to replace the values which are equal to "condition" with 1 and the others with 0 using np.where as below, and then sum the 1s of each row, but this was not helpful:
data = pd.DataFrame(np.where(data.loc[:,data.columns != 'condition'] == data['condition'], 1, 0), columns = data.columns)
That was just an idea (I mean replacing the values with 1 and 0), but any pythonic solution is appreciated.
Compare all columns except the last one with the condition column using DataFrame.eq and count the Trues with sum:
data['new'] = data.iloc[:, :-1].eq(data['condition'], axis=0).sum(axis=1)
Another idea is to compare all columns after removing the condition column:
data['new'] = data.drop('condition', axis=1).eq(data['condition'], axis=0).sum(axis=1)
Thanks to the comment from @Sayandip Dutta, another idea is to compare all columns and then subtract 1:
data['new'] = data.eq(data['condition'], axis=0).sum(axis=1).sub(1)
print (data)
   w1  w2  w3  w4  w5  condition  new
0   0   5   0   5   7          5    2
1   1   8   0   1   1          1    3
2   0   0   0   0   0          0    5
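For completeness, the OP's original idea also works once the comparison result is kept as a boolean array and summed along the rows; a minimal self-contained sketch:
import pandas as pd

data = pd.DataFrame({'w1': [0, 1, 0], 'w2': [5, 8, 0], 'w3': [0, 0, 0],
                     'w4': [5, 1, 0], 'w5': [7, 1, 0], 'condition': [5, 1, 0]})

# boolean array: True where a w-column equals the row's condition value
match = data.drop('condition', axis=1).to_numpy() == data['condition'].to_numpy()[:, None]
data['new'] = match.sum(axis=1)
print(data)  # the new column holds 2, 3, 5 as in the expected output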

In a DataFrame, how could we get a list of indexes with 0's in specific columns?

We have a large dataset that needs to be modified based on specific criteria.
Here is a sample of the data:
Input
   BL.DB  BL.KB  MI.RO  MI.RA  MI.XZ  MAY.BE
0      0      1      1      1      0       1
1      0      0      1      0      0       1
SampleData1 = pd.DataFrame([[0,1,1,1,1],[0,0,1,0,0]],
                           columns=['BL.DB', 'BL.KB', 'MI.RO', 'MI.RA', 'MI.XZ'])
The fields of this data are all formatted 'family.member', and a family may have any number of members. We need to remove all rows of the dataframe which have all 0's for any family.
Simply put, we want to only keep rows of the data that contain at least one member of every family.
We have no reproducible code for this problem because we are unsure of where to start.
We thought about using iterrows() but the documentation says:
#You should **never modify** something you are iterating over.
#This is not guaranteed to work in all cases. Depending on the
#data types, the iterator returns a copy and not a view, and writing
#to it will have no effect.
Other questions on S.O. do not quite solve our problem.
Here is what we want the SampleData to look like after we run it:
Expected output
   BL.DB  BL.KB  MI.RO  MI.RA  MI.XZ  MAY.BE
0      0      1      1      1      0       1
SampleData1 = pd.DataFrame([[0,1,1,1,0]],
                           columns=['BL.DB', 'BL.KB', 'MI.RO', 'MI.RA', 'MI.XZ'])
Also, could you please explain why we should not modify data we iterate over, when we do that all the time with for loops, and what the correct way to modify DataFrames is?
Thanks for the help in advance!
Start from copying df and reformatting its columns into a MultiIndex:
df2 = df.copy()
df2.columns = df.columns.str.split(r'\.', expand=True)
The result is:
BL MI
DB KB RO RA XZ
0 0 1 1 1 0
1 0 0 1 0 0
To generate "family totals", i.e. sums of elements in rows over the top
(0) level of column index, run:
df2.groupby(level=[0], axis=1).sum()
The result is:
BL MI
0 1 2
1 0 1
But actually we want to count zeroes in each row of the above table,
so extend the above code to:
(df2.groupby(level=[0], axis=1).sum() == 0).astype(int).sum(axis=1)
The result is:
0 0
1 1
dtype: int64
meaning:
row with index 0 has no "family zeroes",
row with index 1 has one such zero (for one family).
And to print what we are looking for, run:
df[(df2.groupby(level=[0], axis=1).sum() == 0)
   .astype(int).sum(axis=1) == 0]
i.e. print rows from df, with indices for which the count of
"family zeroes" in df2 is zero.
It's possible to group along axis=1. For each row, check that all families (grouped on the column name before '.') have at least one 1, then slice by this Boolean Series to retain these rows.
m = df.groupby(df.columns.str.split('.').str[0], axis=1).any(1).all(1)
df[m]
# BL.DB BL.KB MI.RO MI.RA MI.XZ MAY.BE
#0 0 1 1 1 0 1
As an illustration, here's what grouping along axis=1 looks like; it partitions the DataFrame by columns.
for idx, gp in df.groupby(df.columns.str.split('.').str[0], axis=1):
    print(idx, gp, '\n')
#BL BL.DB BL.KB
#0 0 1
#1 0 0
#MAY MAY.BE
#0 1
#1 1
#MI MI.RO MI.RA MI.XZ
#0 1 1 0
#1 1 0 0
Now it's rather straightforward to find the rows where every one of these groups has at least one non-zero column, using any and all along axis=1.
You basically want to group on families and retain rows where there is one or more member for all families in the row.
One way to do this is to transpose the original dataframe and then split the index on the period, taking the first element which is the family identifier. The columns are the index values in the original dataframe.
We can then group on the families (level=0) and sum the number of members in each for every record (df2.groupby(level=0).sum()). Now we retain the index values where each family has at least one non-zero member (.gt(0).all()). We create a mask from these values and apply it as a boolean index on the original dataframe to get the relevant rows.
df2 = SampleData1.T
df2.index = [idx.split('.')[0] for idx in df2.index]
# >>> df2
# 0 1
# BL 0 0
# BL 1 0
# MI 1 1
# MI 1 0
# MI 0 0
# >>> df2.groupby(level=0).sum()
# 0 1
# BL 1 0
# MI 2 1
mask = df2.groupby(level=0).sum().gt(0).all()
>>> SampleData1[mask]
BL.DB BL.KB MI.RO MI.RA MI.XZ
0 0 1 1 1 0

Compare the columns of a dataframe in reverse order and create a new column with the index of the column which has value 0

I have imported data from an Excel file into my program and then used set_index to set 'rule_id' as the index. I used this code:
df = pd.read_excel('stack.xlsx')
df = df.set_index(['rule_id'])
and the data looks like this:
Now I want to compare one column with another, but in reverse order. For example, I want to compare the 'c' data with 'b', then compare 'b' with 'a', and so on, and after each comparison create another column which contains the index of the column whose value was zero. If both columns have value 0, then Null should be written to the new column, and if both comparison values are other than 0, Null should also be written.
The result should look like this:
I am not able to work out how to approach this problem; if you could help me, that would be great.
Edit: a minor edit. I have imported the data from an Excel file which looks like this (this is just a part of the data; there are multiple columns):
Then I used pivot_table to manipulate the data as per my requirement using this code:
df = df.pivot_table(index = 'rule_id' , columns = ['date'], values = 'rid_fc', fill_value = 0)
and my data looks like this now:
Now I want to compare one column with another, but in reverse order. For example, I want to compare the '2019-04-25 16:36:32' data with '2019-04-25 16:29:05', then compare '2019-04-25 16:29:05' with '2019-04-25 16:14:14', and so on, creating another column after each comparison which contains the index of the column whose value was zero. If both columns have value 0, Null should be written to the new column, and if both comparison values are other than 0, Null should also be written.
IIUC you can try with:
d={i:e for e,i in enumerate(df.columns)}
m1=df[['c','b']]
m2=df[['b','a']]
df['comp1']=m1.eq(0).dot(m1.columns).map(d)
m3=m2.eq(0).dot(m2.columns)
m3.loc[m3.str.len()!=1]=np.nan
df['comp2']=m3.map(d)
print(df)
a b c comp1 comp2
rule_id
51234 0 7 6 NaN 0.0
53219 0 0 1 1.0 NaN
56195 0 2 2 NaN 0.0
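The eq(0).dot(columns) step concatenates the names of the columns that are 0 in each row ('', 'b', 'c', or 'cb'), and mapping through d turns a single name into its column position; rows where zero or two names were concatenated map to NaN. If there are many columns, the same pairwise rule can be put in a loop; a sketch (the loop and the comp column naming are my own generalization of the answer above):
import pandas as pd

df = pd.DataFrame({'a': [0, 0, 0], 'b': [7, 0, 2], 'c': [6, 1, 2]})

# position of each column name, e.g. {'a': 0, 'b': 1, 'c': 2}
d = {name: pos for pos, name in enumerate(df.columns)}

cols = list(df.columns)
# walk the columns right-to-left, pairing each column with its left neighbour
for k, (right, left) in enumerate(zip(cols[::-1], cols[-2::-1]), start=1):
    pair = df[[right, left]]
    zero_names = pair.eq(0).dot(pair.columns)          # '', 'c', 'b', 'cb', ...
    # keep only the rows where exactly one of the two values is zero
    zero_names = zero_names.where(zero_names.isin(list(d)))
    df[f'comp{k}'] = zero_names.map(d)

print(df)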
I suggest using numpy: compare shifted values with logical_and, build the new column values from a reversed range created with np.arange, and set them with numpy.where and the DataFrame constructor:
df = pd.DataFrame({
    'a': [0, 0, 0],
    'b': [7, 0, 2],
    'c': [6, 1, 2],
})
# change the order of the array (reverse the columns)
x = df.values[:, ::-1]
# compare: shifted values equal to 0 and original values not equal to 0
a = np.logical_and(x[:, 1:] == 0, x[:, :-1] != 0)
# create a range from the top index down to 0
b = np.arange(a.shape[1] - 1, -1, -1)
# new column names
c = [f'comp{i+1}' for i in range(x.shape[1] - 1)]
# set values by the boolean array a
df1 = pd.DataFrame(np.where(a, b[None, :], np.nan), columns=c, index=df.index)
print (df1)
comp1 comp2
0 NaN 0.0
1 1.0 NaN
2 NaN 0.0
You can make use of this code snippet. I did not have time to perfect it with loops etc., so please adapt it to your requirements.
import pandas as pd
import numpy as np
# Data
print(df.head())
a b c
0 0 7 6
1 0 0 1
2 0 2 2
cp = df.copy()
cp[cp != 0] = 1
cp['comp1'] = cp['a'] + cp['b']
cp['comp2'] = cp['b'] + cp['c']
# Logic
cp = cp.replace([0, 1, 2], [1, np.nan, 0])
cp[['a', 'b', 'c']] = df[['a', 'b', 'c']]
# Results
print(cp.head())
a b c comp1 comp2
0 0 7 6 NaN 0.0
1 0 0 1 1.0 NaN
2 0 2 2 NaN 0.0

Editing then concatenating values of several columns into a single one (pandas, python)

I'm looking for a way to use pandas and python to combine several columns in an excel sheet with known column names into a new, single one, keeping all the important information as in the example below:
input:
ID,tp_c,tp_b,tp_p
0,transportation - cars,transportation - boats,transportation - planes
1,checked,-,-
2,-,checked,-
3,checked,checked,-
4,-,checked,checked
5,checked,checked,checked
desired output:
ID,tp_all
0,transportation
1,cars
2,boats
3,cars+boats
4,boats+planes
5,cars+boats+planes
The row with ID of 0 contains a description of the contents of each column. Ideally the code would parse the description in that row, look after the '-' and concatenate those values in the new "tp_all" column.
This is quite interesting as it's a reverse get_dummies...
I think I would manually munge the column names so that you have a boolean DataFrame:
In [11]: df1 # df == 'checked'
Out[11]:
cars boats planes
0
1 True False False
2 False True False
3 True True False
4 False True True
5 True True True
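One possible way to do that manual munging, assuming the descriptions sit in the row with ID 0 (a sketch, not the original answer's code):
import pandas as pd
from io import StringIO

csv = """ID,tp_c,tp_b,tp_p
0,transportation - cars,transportation - boats,transportation - planes
1,checked,-,-
2,-,checked,-
3,checked,checked,-
4,-,checked,checked
5,checked,checked,checked"""

df = pd.read_csv(StringIO(csv), index_col='ID')

# the row with ID 0 holds 'transportation - cars' etc.; keep only the word after ' - '
names = df.loc[0].str.split(' - ').str[1]

# boolean frame: True where a cell is 'checked', with readable column names
df1 = (df.drop(0) == 'checked').rename(columns=names.to_dict())
print(df1)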
Now you can use an apply with zip:
In [12]: df1.apply(lambda row: '+'.join([col for col, b in zip(df1.columns, row) if b]),
                   axis=1)
Out[12]:
0
1 cars
2 boats
3 cars+boats
4 boats+planes
5 cars+boats+planes
dtype: object
Now you just have to tweak the headers, to get the desired csv.
Would be nice if there were a less manual way / faster to do reverse get_dummies...
OK a more dynamic method:
In [63]:
# get a list of the columns
col_list = list(df.columns)
# remove 'ID' column
col_list.remove('ID')
# create a dict as a lookup
col_dict = dict(zip(col_list, [df.iloc[0][col].split(' - ')[1] for col in col_list]))
col_dict
Out[63]:
{'tp_b': 'boats', 'tp_c': 'cars', 'tp_p': 'planes'}
In [64]:
# define a func that tests the value and uses the dict to create our string
def func(x):
    temp = ''
    for col in col_list:
        if x[col] == 'checked':
            if len(temp) == 0:
                temp = col_dict[col]
            else:
                temp = temp + '+' + col_dict[col]
    return temp
df['combined'] = df[1:].apply(lambda row: func(row), axis=1)
df
Out[64]:
ID tp_c tp_b tp_p \
0 0 transportation - cars transportation - boats transportation - planes
1 1 checked NaN NaN
2 2 NaN checked NaN
3 3 checked checked NaN
4 4 NaN checked checked
5 5 checked checked checked
combined
0 NaN
1 cars
2 boats
3 cars+boats
4 boats+planes
5 cars+boats+planes
[6 rows x 5 columns]
In [65]:
df = df.loc[1:, ['ID', 'combined']]
df
Out[65]:
ID combined
1 1 cars
2 2 boats
3 3 cars+boats
4 4 boats+planes
5 5 cars+boats+planes
[5 rows x 2 columns]
Here is one way:
newCol = pandas.Series('', index=d.index)
for col in d.iloc[:, 1:]:
    name = '+' + col.split('-')[1].strip()
    newCol[d[col] == 'checked'] += name
newCol = newCol.str.strip('+')
Then:
>>> newCol
0 cars
1 boats
2 cars+boats
3 boats+planes
4 cars+boats+planes
dtype: object
You can create a new DataFrame with this column or do what you like with it.
Edit: I see that you have edited your question so that the names of the modes of transportation are now in row 0 instead of in the column headers. It is easier if they're in the column headers (as my answer assumes), and your new column headers don't seem to contain any additional useful information, so you should probably start by just setting the column names to the info from row 0, and deleting row 0.
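A small sketch of that suggestion (promoting the row-0 descriptions to column names and dropping that row), assuming df is the frame as read from the sheet with ID as an ordinary column:
d = df.copy()
# row 0 holds strings like 'transportation - cars'; keep the part after the dash
d.columns = ['ID'] + [s.split('-')[1].strip() for s in d.iloc[0, 1:]]
d = d.iloc[1:]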

Pandas Multi-Column Boolean Indexing/Selection with Dict Generator

Let's imagine you have a DataFrame df with a large number of columns, say 50, and df does not have any indexes (i.e. index_col=None). You would like to select a subset of the columns as defined by a required_columns_list, but would like to only return those rows meeting multiple criteria as defined by various boolean indexes. Is there a way to concisely generate the selection statement using a dict generator?
As an example:
df = pd.DataFrame(np.random.randn(100,50),index=None,columns=["Col" + ("%03d" % (i + 1)) for i in range(50)])
# df.columns = Index[u'Col001', u'Col002', ..., u'Col050']
required_columns_list = ['Col002', 'Col012', 'Col025', 'Col032', 'Col033']
now lets imagine that I define:
boolean_index_dict = {'Col001':"MyAccount", 'Col002':"Summary", 'Col005':"Total"}
I would like to select out using a dict generator to construct the multiple boolean indices:
df.loc[GENERATOR_USING_boolean_index_dict, required_columns_list].values
The above generator boolean method would be the equivalent of:
df.loc[(df['Col001']=="MyAccount") & (df['Col002']=="Summary") & (df['Col005']=="Total"), ['Col002', 'Col012', 'Col025', 'Col032', 'Col033']].values
Hopefully, you can see that this would be a really useful 'template' for operating on large DataFrames, with the boolean indexing defined in boolean_index_dict. I would greatly appreciate it if you could let me know whether this is possible in Pandas and how to construct the GENERATOR_USING_boolean_index_dict.
Many thanks and kind regards,
Bertie
p.s. If you would like to test this out, you will need to populate some of df columns with text. The definition of df using random numbers was simply given as a starter if required for testing...
Suppose this is your df:
df = pd.DataFrame(np.random.randint(0,4,(100,50)),index=None,columns=["Col" + ("%03d" % (i + 1)) for i in range(50)])
# the first five cols and rows:
df.iloc[:5,:5]
Col001 Col002 Col003 Col004 Col005
0 2 0 2 3 1
1 0 1 0 1 3
2 0 1 1 0 3
3 3 1 0 2 1
4 1 2 3 1 0
Compared to your example all columns are filled with ints of 0,1,2 or 3.
Lets define the criteria:
req = ['Col002', 'Col012', 'Col025', 'Col032', 'Col033']
filt = {'Col001': 2, 'Col002': 2, 'Col005': 2}
So we want certain columns from the rows where some other columns all contain the value 2.
You can then get the result with:
df.loc[df[filt.keys()].apply(lambda x: x.tolist() == filt.values(), axis=1), req]
In my case this is the result:
Col002 Col012 Col025 Col032 Col033
43 2 2 1 3 3
98 2 1 1 1 2
Let's check the filter columns for those rows:
df[filt.keys()].iloc[[43,98]]
Col005 Col001 Col002
43 2 2 2
98 2 2 2
And some other (non-matching) rows:
df[filt.keys()].iloc[[44,99]]
Col005 Col001 Col002
44 3 0 3
99 1 0 0
I'm starting to like Pandas more and more.
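One caveat: in Python 3, dict.keys() and dict.values() are views rather than lists, so the x.tolist() == filt.values() comparison above is always False there. A sketch that instead builds the combined boolean index directly from the dict, which is closer to the 'generator' the question asks for (np.logical_and.reduce is my choice, not from the answer above):
import numpy as np

boolean_index_dict = {'Col001': "MyAccount", 'Col002': "Summary", 'Col005': "Total"}
required_columns_list = ['Col002', 'Col012', 'Col025', 'Col032', 'Col033']

# build one boolean Series per (column, value) pair and AND them together
mask = np.logical_and.reduce([df[col] == val for col, val in boolean_index_dict.items()])
result = df.loc[mask, required_columns_list].values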
