How to shift rows up in a Pandas DataFrame based on a specific column - Python

How do I shift up all the values in a row for one specific column without affecting the order of the other columns?
For example, let's say I have the following code:
import pandas as pd

data = {'ColA': ["A", "B", "C"],
        'ColB': [0, 1, 2],
        'ColC': ["First", "Second", "Third"]}
df = pd.DataFrame(data)
print(df)
I would see the following output:
  ColA  ColB    ColC
0    A     0   First
1    B     1  Second
2    C     2   Third
In my case I want to check whether ColB contains any 0s; if it does, each 0 should be removed and all the values below it pushed up, without affecting the order of the other columns. Presumably, I would then see the following:
  ColA ColB    ColC
0    A    1   First
1    B    2  Second
2    C  NaN   Third
I can't figure out how to do this using either the drop() or shift() methods.
Thank you

Let us do a simple sorted, with a key that pushes the invalid value to the end:
invalid = 0
df['ColX'] = sorted(df.ColB, key=lambda x: x == invalid)  # stable sort: 0s move to the end
df.ColX = df.ColX.mask(df.ColX == invalid)                # replace leftover 0s with NaN
df
Out[351]:
  ColA  ColB    ColC  ColX
0    A     0   First   1.0
1    B     1  Second   2.0
2    C     2   Third   NaN
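If you want to transform ColB in place rather than add a helper column, the same two steps apply (a sketch, assuming the same df as above):
df['ColB'] = sorted(df.ColB, key=lambda x: x == invalid)  # stable sort pushes 0s to the bottom
df['ColB'] = df['ColB'].mask(df['ColB'] == invalid)       # then turn them into NaN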

The way I'd do this, IIUC, is to take the values in ColB which are not 0, reset the column to NaN, and then fill in the valid values from the top according to how many were obtained:
m = df.loc[~df.ColB.eq(0), 'ColB'].values  # keep only the non-zero values
df['ColB'] = float('nan')                  # reset the whole column to NaN
df.loc[:m.size-1, 'ColB'] = m              # write the kept values back at the top (relies on the default RangeIndex)
print(df)
  ColA  ColB    ColC
0    A   1.0   First
1    B   2.0  Second
2    C   NaN   Third

You can swap 0s for nans and then move up the rest of the values:
import numpy as np

df.ColB.replace(0, np.nan, inplace=True)                       # swap 0s for NaN
df.assign(ColB=df.ColB.shift(df.ColB.count() - len(df.ColB)))  # count() skips NaN, so this shifts up by the number of NaNs
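Note that assign returns a new DataFrame rather than modifying df, so assign the result back if you want to keep it (continuing from the code above):
df = df.assign(ColB=df.ColB.shift(df.ColB.count() - len(df.ColB)))
print(df)
#   ColA  ColB    ColC
# 0    A   1.0   First
# 1    B   2.0  Second
# 2    C   NaN   Third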

Related

Creating Pandas DataFrame Column with Dashes

Background -
I am aware of the various ways to add columns to a pandas DataFrame, such as assign(), insert(), concat(), etc. However, I haven't been able to ascertain how I can add a column (similar to insert() in that I specify the position or index) and have that column populated with dashed values.
Expected DataFrame structure - Per the following, I am looking to add colC and populate it with - values spanning the same length as the columns that do have data.
print(df)
    colA  colB colC
0   True     1    -
1  False     2    -
2  False     3    -
In the above example, as the DataFrame has 3 rows, colC is populated with - values for all 3 rows.
It's as simple as it gets.
>>> df
    colA  colB
0   True     1
1  False     2
2  False     3
>>> df['colC'] = '-'
>>> df
    colA  colB colC
0   True     1    -
1  False     2    -
2  False     3    -
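If you need the new column at a specific position (the question mentions insert()), DataFrame.insert takes the target location as its first argument; a sketch, starting again from the two-column frame:
>>> df2 = df.drop(columns='colC')  # back to the two-column frame
>>> df2.insert(2, 'colC', '-')     # loc=2 makes colC the third column; the scalar '-' is broadcast to every row
>>> df2
    colA  colB colC
0   True     1    -
1  False     2    -
2  False     3    -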

Get column name based on condition in pandas

I have a dataframe as below:
I want to get, for a particular row, the name of each column that contains 1.
Use DataFrame.dot:
df1 = df.dot(df.columns)
If there are multiple 1s per row:
df2 = df.dot(df.columns + ';').str.rstrip(';')
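For example, on a small 0/1 frame (hypothetical sample data, since the question's frame was posted as an image):
import pandas as pd

df = pd.DataFrame({'foo': [1, 0], 'bar': [0, 1], 'baz': [0, 1]})
df1 = df.dot(df.columns)                        # 0/1 weights keep or drop each label
df2 = df.dot(df.columns + ';').str.rstrip(';')  # keeps all matches, ';'-separated
print(df2.tolist())  # ['foo', 'bar;baz']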
Firstly
Your question is very ambiguous, and I recommend reading the link in #sammywemmy's comment. If I understand your problem correctly, we'll talk about this mask first:
df.columns[
    (df == 1)     # boolean mask
    .any(axis=0)  # collapse to one True/False per column
]
What's happening? Let's work our way outward, starting from within df.columns[**HERE**]:
(df == 1) makes a boolean mask of the df with True/False (1/0).
.any(), as per the docs:
"Returns False unless there is at least one element within a series or along a Dataframe axis that is True or equivalent".
This gives us a handy Series to mask the column names with.
We will use this mask to automate your solution below.
Next:
Automate to get an output of (<row index>, [<col name>, <col name>, ...]) wherever there is a 1 in the row values. Although this will be slower on large datasets, it should do the trick:
import pandas as pd

data = {'foo': [0, 0, 0, 0], 'bar': [0, 1, 0, 0], 'baz': [0, 0, 0, 0], 'spam': [0, 1, 0, 1]}
df = pd.DataFrame(data, index=['a', 'b', 'c', 'd'])
print(df)
   foo  bar  baz  spam
a    0    0    0     0
b    0    1    0     1
c    0    0    0     0
d    0    0    0     1
# group our df by index, creating a dict of single-row DataFrames keyed by index label
df_dict = dict(
    list(
        df.groupby(df.index)
    )
)
Next step is a for loop that iterates the contents of each df in df_dict, checks them with the mask we created earlier, and prints the intended results:
for k, v in df_dict.items():  # k: index label, v: single-row df
    check = v.columns[(v == 1).any()]
    if len(check) > 0:
        print((k, check.to_list()))
('b', ['bar', 'spam'])
('d', ['spam'])
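A vectorized alternative (not part of this answer, just a sketch reusing the DataFrame.dot trick from earlier on the same df):
s = (df == 1).dot(df.columns + ';').str.rstrip(';')
print(s[s.ne('')])
# b    bar;spam
# d        spam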
Side note:
You see how I generated sample data that can be easily reproduced? In the future, please post sample data that can be reproduced; it helps you understand your problem better and makes it easier for us to answer.
Getting the column name divides into 2 cases.
If you want the name in a new column, the condition should match a single column, because this approach gives only 1 column name per row.
import numpy as np

data = {'foo': [0, 0, 3, 0], 'bar': [0, 5, 0, 0], 'baz': [0, 0, 2, 0], 'spam': [0, 1, 0, 1]}
df = pd.DataFrame(data)
df = df.replace(0, np.nan)  # treat 0 as missing so idxmax/idxmin ignore it
df
   foo  bar  baz  spam
0  NaN  NaN  NaN   NaN
1  NaN  5.0  NaN   1.0
2  3.0  NaN  2.0   NaN
3  NaN  NaN  NaN   1.0
If you were looking for the minimum or maximum:
max_ = df.idxmax(1)  # column label of the row-wise max
min_ = df.idxmin(1)  # column label of the row-wise min
out = df.assign(max=max_, min=min_)
out
   foo  bar  baz  spam   max   min
0  NaN  NaN  NaN   NaN   NaN   NaN
1  NaN  5.0  NaN   1.0   bar  spam
2  3.0  NaN  2.0   NaN   foo   baz
3  NaN  NaN  NaN   1.0  spam  spam
2nd case: if your condition is satisfied in multiple columns, for example you are looking for the columns that contain 1, then you want a list, because the result cannot fit in a single new column of the same dataframe.
str_con = df.astype(str).apply(lambda x: x.str.contains('1.0', case=False, na=False)).any()
df.columns[str_con]  # columns, not column
# output
Index(['spam'], dtype='object')  # only spam contains 1
Or if you are looking for a numerical condition, e.g. columns containing a value greater than 1:
num_con = df.apply(lambda x: x > 1.0).any()
df.columns[num_con]
# output
Index(['foo', 'bar', 'baz'], dtype='object')  # these cols have a value higher than 1
Happy learning

New column with each element as a list in pandas

I have some data frames where I want to add a new column in which each element is the same list of strings. For example, with two rows:
df
index  colA  colB
0      a     a1
1      b     b1
Now I can add a new column as
df['colC'] = 5
index  colA  colB  colC
0      a     a1    5
1      b     b1    5
Now I want to add a third column with each element being a list:
index  colA  colB  colC
0      a     a1    ['m','n','p']
1      b     b1    ['m','n','p']
but
df['colC'] = ['m', 'n', 'p']
gives the error
ValueError: Length of values does not match length of index
which is obvious.
I know that in this example I could do
df['colC'] = [['m','n','p'], ['m','n','p']]
But I want to set each element to the same list of strings when I do not know the number of rows.
Can anyone suggest an easy way to achieve this?
Adding an object (a list) to each cell is tricky:
df['colC'] = [['m', 'n', 'p']] * len(df)
Or:
df['colC'] = [list('mnp') for _ in range(len(df))]
df returns:
   index colA colB       colC
0      0    a   a1  [m, n, p]
1      1    b   b1  [m, n, p]
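One difference worth knowing (a plain-Python sketch, not from the answer): the * version repeats references to a single list object, while the comprehension builds an independent list per row:
shared = [['m', 'n', 'p']] * 2
shared[0].append('q')
print(shared)       # [['m', 'n', 'p', 'q'], ['m', 'n', 'p', 'q']] - same object twice

independent = [list('mnp') for _ in range(2)]
independent[0].append('q')
print(independent)  # [['m', 'n', 'p', 'q'], ['m', 'n', 'p']]
So if the lists will be mutated later, prefer the comprehension.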

How to select rows with NaN in particular column?

Given this dataframe, how to select only those rows that have "Col2" equal to NaN?
import numpy as np
import pandas as pd

df = pd.DataFrame([range(3), [0, np.nan, 0], [0, 0, np.nan], range(3), range(3)],
                  columns=["Col1", "Col2", "Col3"])
which looks like:
   Col1  Col2  Col3
0     0   1.0   2.0
1     0   NaN   0.0
2     0   0.0   NaN
3     0   1.0   2.0
4     0   1.0   2.0
The result should be this one:
   Col1  Col2  Col3
1     0   NaN   0.0
Try the following:
df[df['Col2'].isnull()]
#qbzenker provided the most idiomatic method IMO
Here are a few alternatives:
In [28]: df.query('Col2 != Col2')  # using the fact that np.nan != np.nan
Out[28]:
   Col1  Col2  Col3
1     0   NaN   0.0

In [29]: df[np.isnan(df.Col2)]
Out[29]:
   Col1  Col2  Col3
1     0   NaN   0.0
If you want to select rows with at least one NaN value, then you could use isna + any on axis=1:
df[df.isna().any(axis=1)]
If you want to select rows with a certain number of NaN values, then you could use isna + sum on axis=1 + gt. For example, the following will fetch rows with at least 2 NaN values:
df[df.isna().sum(axis=1)>1]
If you want to limit the check to specific columns, you could select them first, then check:
df[df[['Col1', 'Col2']].isna().any(axis=1)]
If you want to select rows with all NaN values, you could use isna + all on axis=1:
df[df.isna().all(axis=1)]
If you want to select rows with no NaN values, you could use notna + all on axis=1:
df[df.notna().all(axis=1)]
This is equivalent to:
df[df['Col1'].notna() & df['Col2'].notna() & df['Col3'].notna()]
which could become tedious if there are many columns. Instead, you could use functools.reduce to chain & operators:
import functools, operator
df[functools.reduce(operator.and_, (df[i].notna() for i in df.columns))]
or numpy.logical_and.reduce:
import numpy as np
df[np.logical_and.reduce([df[i].notna() for i in df.columns])]
If you're looking to filter the rows where there is no NaN in some column using query, you can do so with the engine='python' parameter:
df.query('Col2.notna()', engine='python')
or use the fact that NaN != NaN, like #MaxU - stop WAR against UA:
df.query('Col2==Col2')

Finding highest values in each row in a data frame in Python

I'd like to find the highest values in each row and return the column headers for those values in Python. For example, I'd like to find the top two in each row:
df =
   A  B  C  D
0  5  9  8  2
1  4  1  2  3
I'd like my output to look like this:
df =
   B  C
   A  D
You can use a dictionary comprehension to generate the top_n column labels for each row of the dataframe. I transposed the dataframe and then applied nlargest to each of the columns. I used .index.tolist() to extract the desired top_n columns. Finally, I transposed this result to get the dataframe back into the desired shape.
>>> top_n = 2
>>> pd.DataFrame({n: df.T[col].nlargest(top_n).index.tolist()
...               for n, col in enumerate(df.T)}).T
   0  1
0  B  C
1  A  D
I decided to go with an alternative way: Apply the pd.Series.nlargest() function to each row.
Path to Solution
>>> df.apply(pd.Series.nlargest, axis=1, n=2)
     A    B    C    D
0  NaN  9.0  8.0  NaN
1  4.0  NaN  NaN  3.0
This gives us the highest values for each row, but keeps the original columns, resulting in ugly NaN values wherever a column is not part of the top n for a given row. What we actually want is the index of the nlargest() result.
>>> df.apply(lambda s, n: s.nlargest(n).index, axis=1, n=2)
0    Index(['B', 'C'], dtype='object')
1    Index(['A', 'D'], dtype='object')
dtype: object
Almost there. Only thing left is to convert the Index objects into Series.
Solution
>>> df.apply(lambda s, n: pd.Series(s.nlargest(n).index), axis=1, n=2)
   0  1
0  B  C
1  A  D
Note that I'm not using the Index.to_series() function since I do not want to preserve the original index.
