I'd like to restrict my dropna operation to the first 3 rows of the dataframe. The original dataframe is:
A C
0 0.0 0
1 NaN 1
2 2.0 2
3 3.0 3
4 NaN 4
5 5.0 5
6 6.0 6
And I would love to see:
A C
0 0.0 0
2 2.0 2
3 3.0 3
4 NaN 4
5 5.0 5
6 6.0 6
With only the row at index 1 removed. Is it possible to do this in just one line of code?
Thanks!
You could use a boolean mask that keeps rows which either contain no nulls or lie past the first few positions:
In [594]: df[df.notnull().all(1) | (df.index > 3)]
Out[594]:
A C
0 0.0 0
2 2.0 2
3 3.0 3
4 NaN 4
5 5.0 5
6 6.0 6
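If you prefer an explicit two-step alternative, here is a minimal sketch (assuming df is the original frame from the question and the cutoff is the first three positional rows): drop NaNs only in the head slice and re-attach the rest untouched.
import pandas as pd

# dropna applies only to the first three rows; everything from position 3 on is kept as-is
cleaned = pd.concat([df.iloc[:3].dropna(), df.iloc[3:]])
This keeps the NaN at index 4, which matches the output above.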
I think this has probably been answered, but I can't find the answer anywhere. It is pretty trivial. How can I add a list to a pandas dataframe as a column, but keep the NaNs at the top?
This is the code I have:
df = pd.DataFrame()
a = [1,2,3,4,5,6,7]
b = [2,3,5,6,4,3,2]
c = [2,3,5,6,4,3]
d = [1,2,3,4]
df["a"] = a
df["b"] = b
df.loc[range(len(c)),'c'] = c
df.loc[range(len(d)),'d'] = d
print(df)
which returns this:
a b c d
0 1 2 2.0 1.0
1 2 3 3.0 2.0
2 3 5 5.0 3.0
3 4 6 6.0 4.0
4 5 4 4.0 NaN
5 6 3 3.0 NaN
6 7 2 NaN NaN
However, I would like it to return this instead:
a b c d
0 1 2 NaN NaN
1 2 3 2.0 NaN
2 3 5 3.0 NaN
3 4 6 5.0 1.0
4 5 4 6.0 2.0
5 6 3 4.0 3.0
6 7 2 3.0 4.0
Let us try
df = df.apply(lambda x: sorted(x, key=pd.notnull))
a b c d
0 1 2 NaN NaN
1 2 3 2.0 NaN
2 3 5 3.0 NaN
3 4 6 5.0 1.0
4 5 4 6.0 2.0
5 6 3 4.0 3.0
6 7 2 3.0 4.0
You can sort each column with a key argument that puts NaNs first; because sorted is stable, the original order of the non-NaN values is preserved:
l = df.apply(sorted, key=lambda s: ~np.isnan(s), axis=0)  # requires numpy imported as np
If the problem is with assignment instead of transformation, you can also try iloc with get_loc after creating a dictionary (d):
d = {'c': c, 'd': d}
df = df.reindex(columns=df.columns.union(d.keys(), sort=False))
for k, v in d.items():
    df.iloc[-len(v):, df.columns.get_loc(k)] = v
print(df)
a b c d
0 1 2 NaN NaN
1 2 3 2.0 NaN
2 3 5 3.0 NaN
3 4 6 5.0 1.0
4 5 4 6.0 2.0
5 6 3 4.0 3.0
6 7 2 3.0 4.0
You can find out how many NaN values a column has (using s.isna().sum()) and then shift() that column down by that number of NaNs.
Code example on the 'd' column:
import pandas as pd
df = pd.DataFrame()
a = [1,2,3,4,5,6,7]
b = [2,3,5,6,4,3,2]
c = [2,3,5,6,4,3]
d = [1,2,3,4]
df["a"] = a
df["b"] = b
df.loc[range(len(c)),'c'] = c
df.loc[range(len(d)),'d'] = d
df['d'] = df['d'].shift(df['d'].isna().sum()) # example on the 'd' column
print(df)
Output:
a b c d
0 1 2 2.0 NaN
1 2 3 3.0 NaN
2 3 5 5.0 NaN
3 4 6 6.0 1.0
4 5 4 4.0 2.0
5 6 3 3.0 3.0
6 7 2 NaN 4.0
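To apply the same idea to every column at once, a sketch (assuming df as originally built in the question, before the 'd' column was shifted, and purely numeric columns):
# shift each column down by its own NaN count so the NaNs end up at the top
df = df.apply(lambda col: col.shift(int(col.isna().sum())))
print(df)
Columns without NaNs are shifted by zero and stay unchanged, so this reproduces the desired output.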
Another way to do it: reset the index and sort the values with NaNs placed first.
df = df.reset_index(drop=True)
df2 = df.sort_values(by=['a', 'b', 'c', 'd'], ascending=False, na_position='first')
#Result
a b c d
6 7 2 NaN NaN
5 6 3 3.0 NaN
4 5 4 4.0 NaN
3 4 6 6.0 4.0
2 3 5 5.0 3.0
1 2 3 3.0 2.0
0 1 2 2.0 1.0
This is a new question following an earlier one, with more information.
I want to merge two dataframes like the outer join, but I do not want the cartesian product, but only the concatenation, for example:
df1:
A
0 2
1 2
2 2
3 2
4 2
5 3
df2:
B
0 1
1 2
2 2
3 3
4 4
With df3 = df1.merge(df2, left_on=['A'], right_on=['B'], how='outer') I get df3:
A B
0 2.0 2
1 2.0 2
2 2.0 2
3 2.0 2
4 2.0 2
5 2.0 2
6 2.0 2
7 2.0 2
8 2.0 2
9 2.0 2
10 3.0 3
11 NaN 1
12 NaN 4
But I want:
A B
0 2.0 2
1 2.0 2
2 2.0 NaN
3 2.0 NaN
4 2.0 NaN
5 3.0 3
6 NaN 1
7 NaN 4
I just want to concatenate the first m matching rows of df1 with the m rows of df2, and fill the remaining values of df1 with NaN.
Get the cumulative counts of A and B, and use the combination of the counts with A and B as the merge keys:
df1['checker'] = df1.groupby("A").cumcount()
df2['checker'] = df2.groupby("B").cumcount()
res = df1.merge(df2,left_on=['A','checker'],right_on=['B','checker'],how='outer').drop('checker',axis=1)
res
A B
0 2.0 2.0
1 2.0 2.0
2 2.0 NaN
3 2.0 NaN
4 2.0 NaN
5 3.0 3.0
6 NaN 1.0
7 NaN 4.0
You might want to try the concat method, e.g.:
result = pd.concat([A, B], axis=1, sort=False)
You can read more in the pandas concat documentation.
I have a dataframe and I want to drop duplicates based on different conditions....
A B
0 1 1.0
1 1 1.0
2 2 2.0
3 2 2.0
4 3 3.0
5 4 4.0
6 5 5.0
7 - 5.1
8 - 5.1
9 - 5.3
I want to drop all the duplicates from column A except rows with "-". After this, I want to drop duplicates from column A with "-" as a value based on their column B value. Given the input dataframe, this should return the following:
A B
0 1 1.0
2 2 2.0
4 3 3.0
5 4 4.0
6 5 5.0
7 - 5.1
9 - 5.3
I have the following code, but it's not very efficient for very large amounts of data. How can I improve it?
def generate(df):
    str_col = df[df["A"] == "-"]
    df.drop(df[df["A"] == "-"].index, inplace=True)
    df = df.drop_duplicates(subset="A")
    str_col = str_col.drop_duplicates(subset="B")
    bigdata = df.append(str_col, ignore_index=True)
    return bigdata.sort_values("B")
Use duplicated and eq:
df[~df.duplicated('A') # keep those not duplicates in A
| (df['A'].eq('-') # or those '-' in A
& ~df['B'].duplicated())] # which are not duplicates in B
Output:
A B
0 1 1.0
2 2 2.0
4 3 3.0
5 4 4.0
6 5 5.0
7 - 5.1
9 - 5.3
Dropping duplicates on the pair of columns keeps the first occurrence of each (A, B) combination:
df.drop_duplicates(subset=['A', 'B'])
Given a full set of data:
A B C
0 1 1.0 0
1 1 1.0 1
2 2 2.0 2
3 2 2.0 3
4 3 3.0 4
5 4 4.0 5
6 5 5.0 6
7 - 5.1 7
8 - 5.1 8
9 - 5.3 9
Result:
A B C
0 1 1.0 0
2 2 2.0 2
4 3 3.0 4
5 4 4.0 5
6 5 5.0 6
7 - 5.1 7
9 - 5.3 9
groupby + head
df.groupby(['A','B']).head(1)
Out[7]:
A B
0 1 1.0
2 2 2.0
4 3 3.0
5 4 4.0
6 5 5.0
7 - 5.1
9 - 5.3
How can I merge two pandas dataframes with different lengths like those:
df1 = Index block_id Ut_rec_0
0 0 7
1 1 10
2 2 2
3 3 0
4 4 10
5 5 3
6 6 6
7 7 9
df2 = Index block_id Ut_rec_1
0 0 3
2 2 5
3 3 5
5 5 9
7 7 4
result = Index block_id Ut_rec_0 Ut_rec_1
0 0 7 3
1 1 10 NaN
2 2 2 5
3 3 0 5
4 4 10 NaN
5 5 3 9
6 6 6 NaN
7 7 9 4
I already tried something like this, but it did not work:
df_result = pd.concat([df1, df2], join_axes=[df1['block_id']])
I already tried:
df_result = pd.concat([df1, df2], axis=1)
But the result was:
Index block_id Ut_rec_0 Index block_id Ut_rec_1
0 0 7 0.0 0.0 3.0
1 1 10 1.0 2.0 5.0
2 2 2 2.0 3.0 5.0
3 3 0 3.0 5.0 9.0
4 4 10 4.0 7.0 4.0
5 5 3 NaN NaN NaN
6 6 6 NaN NaN NaN
7 7 9 NaN NaN NaN
pandas.DataFrame.join can "join" dataframes based on overlap in column data (or index). Something like this will likely work for you:
df1.join(df2.set_index('block_id'), on='block_id')
As @Wen said, the best option would be using concat with axis=1, as in the code below:
pd.concat([df1, df2],axis=1)
You need pd.merge with an outer join:
pd.merge(df1,df2,on=['Index','block_id'],how='outer')
#[out]
#Index block_id Ut_rec_0 Ut_rec_1
#0 0 7 3.0
#1 1 10 NaN
#2 2 2 5.0
#3 3 0 5.0
#4 4 10 NaN
#5 5 3 9.0
#6 6 6 NaN
#7 7 9 4.0
I have the following dataframe describing the percent of shares held by a type of investor in a company:
company investor pct
1 A 1
1 A 2
1 B 4
2 A 2
2 A 4
2 A 6
2 C 10
2 C 8
And I would like to create a new column for each investor type computing the mean of the shares held in each company. I also need to keep the same length of the dataset, using transform for instance.
Here is the result I would like to have:
company investor pct pct_mean_A pct_mean_B pct_mean_C
1 A 1 1.5 4 0
1 A 2 1.5 4 0
1 B 4 1.5 4 0
2 A 2 4.0 0 9
2 A 4 4.0 0 9
2 A 6 4.0 0 9
2 C 10 4.0 0 9
2 C 8 4.0 0 9
Thanks a lot for your help!
Use groupby with aggregate mean and reshape with unstack to build a helper DataFrame, then join it to the original df:
s = (df.groupby(['company','investor'])['pct']
       .mean()
       .unstack(fill_value=0)
       .add_prefix('pct_mean_'))
df = df.join(s, 'company')
print (df)
company investor pct pct_mean_A pct_mean_B pct_mean_C
0 1 A 1 1.5 4.0 0.0
1 1 A 2 1.5 4.0 0.0
2 1 B 4 1.5 4.0 0.0
3 2 A 2 4.0 0.0 9.0
4 2 A 4 4.0 0.0 9.0
5 2 A 6 4.0 0.0 9.0
6 2 C 10 4.0 0.0 9.0
7 2 C 8 4.0 0.0 9.0
Or use pivot_table with default aggregate function mean:
s = df.pivot_table(index='company',
                   columns='investor',
                   values='pct',
                   fill_value=0).add_prefix('pct_mean_')
df = df.join(s, 'company')
print (df)
company investor pct pct_mean_A pct_mean_B pct_mean_C
0 1 A 1 1.5 4 0
1 1 A 2 1.5 4 0
2 1 B 4 1.5 4 0
3 2 A 2 4.0 0 9
4 2 A 4 4.0 0 9
5 2 A 6 4.0 0 9
6 2 C 10 4.0 0 9
7 2 C 8 4.0 0 9
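Since the question mentions transform, here is a sketch of a transform-based alternative (assuming df is the original three-column frame from the question, before any join): for each investor type, mask pct to that investor, broadcast the per-company mean back to every row, and fill the combinations that never occur with 0.
for inv in df['investor'].unique():
    # pct values for this investor only; everything else becomes NaN and is ignored by mean
    masked = df['pct'].where(df['investor'].eq(inv))
    df[f'pct_mean_{inv}'] = masked.groupby(df['company']).transform('mean')
# companies with no rows for an investor get NaN above; replace those with 0
df = df.fillna({f'pct_mean_{inv}': 0 for inv in df['investor'].unique()})
print(df)
This keeps the original number of rows, as the question requires.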