subtract a dataframe by another dataframe series [duplicate] - python

I have two dataframes with an identical index. I want to perform a subtraction, i.e. subtract df2's single column from every column of df1.
Input:
df1
col1 col2 col3
0 10 34 6
1 3 23 123
2 23 45 23
3 5 1 5
4 1 45 6
5 65 6 88
df2
base
0 12
1 43
2 435
3 76
4 23
5 12
I tried:
df1-df2['base']
Result:
0 1 2 3 4 5 col1 col2 col3
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN NaN NaN NaN
5 NaN NaN NaN NaN NaN NaN NaN NaN NaN
But I expected:
col1 col2 col3
0 -2 22 -6
1 -40 -20 80
2 -412 -390 -412
3 -71 -75 -71
4 -22 22 -17
5 53 -6 76
Why am I getting NaN, and how are the two dataframes being combined?
How do I get the expected result?

Use DataFrame.subtract with the argument axis=0:
df1.subtract(df2['base'], axis=0)
[out]
col1 col2 col3
0 -2 22 -6
1 -40 -20 80
2 -412 -390 -412
3 -71 -75 -71
4 -22 22 -17
5 53 -6 76
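Background on the NaNs: df1 - df2['base'] aligns the Series index (0..5) against df1's column labels (col1, col2, col3); nothing matches, so the result has the union of both label sets as columns and every cell is NaN. A couple of equivalent row-aligned forms, sketched against the frames above:
# 'sub' is an alias for 'subtract'; axis=0 (or axis='index') aligns on the row labels
df1.sub(df2['base'], axis=0)
# The same via the transpose trick: Series subtraction aligns on columns by default
(df1.T - df2['base']).T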

Related

How to insert multiple rows to a pandas DF with a missing value?

I have a DF:
df = pd.DataFrame({"A":[0,1,3,5,6], "B":['B0','B1','B3','B5','B6'], "C":['C0','C1','C3','C5','C6']})
I'm trying to insert 10 empty rows at each position where a number is missing from the continuous sequence in column A. For those 10 rows, the values of columns A, B and C should be the missing number, NaN and NaN, respectively. Like this:
A B C
0 B0 C0
1 B1 C1
2 NaN NaN
2 NaN NaN
2 NaN NaN
2 NaN NaN
2 NaN NaN
2 NaN NaN
2 NaN NaN
2 NaN NaN
2 NaN NaN
2 NaN NaN
3 B3 C3
4 NaN NaN
4 NaN NaN
4 NaN NaN
4 NaN NaN
4 NaN NaN
4 NaN NaN
4 NaN NaN
4 NaN NaN
4 NaN NaN
4 NaN NaN
5 B5 C5
6 B6 C6
I've played with the index, but this adds only 1 row:
df1 = df.merge(how='right', on='A',
               right=pd.DataFrame({'A': np.arange(df.iloc[0]['A'], df.iloc[-1]['A'] + 1)})
               ).reset_index().drop(['index'], axis=1)
Thanks in advance!
Let's repeat the indices where the difference to the next value is above 1, then concat:
N = 10
out = (pd.concat([df, df[['A']].loc[df.index.repeat(df['A'].diff(-1).lt(-1).mul(N-1))]])
         .sort_index(kind='stable')
       )
Output:
A B C
0 0 B0 C0
1 1 B1 C1
1 1 NaN NaN
1 1 NaN NaN
1 1 NaN NaN
1 1 NaN NaN
1 1 NaN NaN
1 1 NaN NaN
1 1 NaN NaN
1 1 NaN NaN
1 1 NaN NaN
2 3 B3 C3
2 3 NaN NaN
2 3 NaN NaN
2 3 NaN NaN
2 3 NaN NaN
2 3 NaN NaN
2 3 NaN NaN
2 3 NaN NaN
2 3 NaN NaN
2 3 NaN NaN
3 5 B5 C5
4 6 B6 C6
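To see where the repeat counts come from, you can inspect the intermediate mask (a quick check, assuming the df from the question):
N = 10
# diff(-1) is each value of A minus the next one; anything below -1 marks a gap
counts = df['A'].diff(-1).lt(-1).mul(N - 1)
print(counts.tolist())   # [0, 9, 9, 0, 0] -> rows 1 and 2 each get 9 extra copies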
One approach could be as follows:
First, use df.set_index to make column A the index.
Next, use range for a range that runs from 0 through to the max of A (i.e. 6).
Now, apply df.reindex based on np.repeat. We use a list comprehension to feed 1 to the repeats parameter for all the values that exist in A; for the ones that are missing, we use 10.
Finally, chain df.reset_index.
df.set_index('A', inplace=True)
rng = range(df.index.max()+1)
df = df.reindex(np.repeat(rng, [1 if i in df.index else 10 for i in rng]))\
       .reset_index(drop=False)
print(df)
A B C
0 0 B0 C0
1 1 B1 C1
2 2 NaN NaN
3 2 NaN NaN
4 2 NaN NaN
5 2 NaN NaN
6 2 NaN NaN
7 2 NaN NaN
8 2 NaN NaN
9 2 NaN NaN
10 2 NaN NaN
11 2 NaN NaN
12 3 B3 C3
13 4 NaN NaN
14 4 NaN NaN
15 4 NaN NaN
16 4 NaN NaN
17 4 NaN NaN
18 4 NaN NaN
19 4 NaN NaN
20 4 NaN NaN
21 4 NaN NaN
22 4 NaN NaN
23 5 B5 C5
24 6 B6 C6

Pandas assign series to a column based on index

I have 2 dataframes:
DF1:
Count
0 98.0
1 176.0
2 260.5
3 389.0
I have to assign these values to a column in another dataframe, for every 3rd row starting from the 3rd row.
The Output of DF2 should look like this:
Count
0
1
2 98.0
3
4
5 176.0
6
7
8 260.5
9
10
11 389.0
I am doing
DF2.loc[2::3,'Count'] = DF1['Count']
But I am not getting the expected results.
Use .values
Otherwise, pandas tries to align the index values from DF1 and that messes you up.
DF2.loc[2::3, 'Count'] = DF1['Count'].values
DF2
Count
0 NaN
1 NaN
2 98.0
3 NaN
4 NaN
5 176.0
6 NaN
7 NaN
8 260.5
9 NaN
10 NaN
11 389.0
Alternatively, build a new column directly from DF1:
DF1.set_index(DF1.index * 3 + 2).reindex(range(len(DF1) * 3))
Count
0 NaN
1 NaN
2 98.0
3 NaN
4 NaN
5 176.0
6 NaN
7 NaN
8 260.5
9 NaN
10 NaN
11 389.0
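A minimal, self-contained sketch of the fix; the 12-row, all-NaN DF2 below is an assumption, since the question doesn't show how DF2 is built:
import numpy as np
import pandas as pd

DF1 = pd.DataFrame({'Count': [98.0, 176.0, 260.5, 389.0]})
DF2 = pd.DataFrame({'Count': np.nan}, index=range(12))   # stand-in for the real DF2

# .values strips DF1's index, so the four numbers are assigned purely by position;
# with the raw Series, pandas would align on index labels and fill only label 2.
DF2.loc[2::3, 'Count'] = DF1['Count'].values
print(DF2)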

Count number of columns with some values for each row in pandas

I have dataframe like this,
data:
Site code Col1 Col2 Col3
A5252 24 53 NaN
A5636 36 NaN NaN
A4366 NaN NaN NaN
A7578 42 785 24
And I want to count, for each row, the number of columns that have a value, i.e. are not NaN.
Desired output:
Site code Col1 Col2 Col3 Count
A5252 24 53 NaN 2
A5636 36 NaN NaN 1
A4366 NaN NaN NaN 0
A7578 42 785 24 3
Something opposite to this:
df = data.isnull().sum(axis=1)
You need to change isnull to notnull:
# if the first column is not the index, set it
data = data.set_index('Site code')
data['Count'] = data.notnull().sum(axis=1)
Or use function DataFrame.count:
data = data.set_index('Site code')
data['Count'] = data.count(axis=1)
print (data)
Col1 Col2 Col3 Count
Site code
A5252 24.0 53.0 NaN 2
A5636 36.0 NaN NaN 1
A4366 NaN NaN NaN 0
A7578 42.0 785.0 24.0 3
Another solution with selecting columns by loc (Site code is column, not index):
print (data.loc[:, 'Col1':])
Col1 Col2 Col3
0 24.0 53.0 NaN
1 36.0 NaN NaN
2 NaN NaN NaN
3 42.0 785.0 24.0
data['Count'] = data.loc[:, 'Col1':].count(axis=1)
print (data)
Site code Col1 Col2 Col3 Count
0 A5252 24.0 53.0 NaN 2
1 A5636 36.0 NaN NaN 1
2 A4366 NaN NaN NaN 0
3 A7578 42.0 785.0 24.0 3
Another nice idea from Jon Clements - use filter:
data['Count'] = data.filter(regex="^Col").count(axis=1)
print (data)
Site code Col1 Col2 Col3 Count
0 A5252 24.0 53.0 NaN 2
1 A5636 36.0 NaN NaN 1
2 A4366 NaN NaN NaN 0
3 A7578 42.0 785.0 24.0 3
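If you would rather avoid a regex, filter also accepts a substring match via like, which selects the same columns here:
# like='Col' keeps every column whose name contains 'Col'
data['Count'] = data.filter(like='Col').count(axis=1)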
Simply use notnull():
import pandas as pd
df = pd.read_csv("your_csv.csv")
df['count'] = df.notnull().sum(axis=1)
print(df)
Also, to add a column to a dataframe, just use:
df['new_column_name'] = newcolumn
output:
Site code Col1 Col2 Col3 count
A5252 24 53 NaN 2
A5636 36 NaN NaN 1
A4366 NaN NaN NaN 0
A7578 42 785 24 3
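For a self-contained run without a CSV, you could build the frame inline and count only the data columns, so the 'Site code' strings don't inflate the count (a sketch reconstructing the sample data):
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Site code': ['A5252', 'A5636', 'A4366', 'A7578'],
    'Col1': [24, 36, np.nan, 42],
    'Col2': [53, np.nan, np.nan, 785],
    'Col3': [np.nan, np.nan, np.nan, 24],
})

# count only the value columns, not the identifier column
df['count'] = df.drop(columns='Site code').notnull().sum(axis=1)
print(df)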

Error adding date column in Pandas

I need some help figuring out why my dataframe is returning all NaNs.
print df
0 1 2 3 4
0 1 9 0 7 30
1 2 8 0 4 30
2 3 5 0 3 30
3 4 3 0 3 30
4 5 1 0 3 30
Then I added a date index. I only need it to increment by one day for 5 days.
date = pd.date_range(datetime.datetime.today(), periods=5)
data = DataFrame(df, index=date)
print data
0 1 2 3 4
2014-04-10 17:16:09.433000 NaN NaN NaN NaN NaN
2014-04-11 17:16:09.433000 NaN NaN NaN NaN NaN
2014-04-12 17:16:09.433000 NaN NaN NaN NaN NaN
2014-04-13 17:16:09.433000 NaN NaN NaN NaN NaN
2014-04-14 17:16:09.433000 NaN NaN NaN NaN NaN
I tried a few different things to no avail. If I switch my original dataframe to
np.random.randn(5,5)
then it works. Does anyone have an idea of what is going on here?
Edit: I'll add that the data type is float64:
print df.dtypes
0 float64
1 float64
2 float64
3 float64
4 float64
dtype: object
You should overwrite the index of the original dataframe with the following:
df.index = date
What DataFrame(df, index=date) does is create a new dataframe by matching the supplied index values against the index of df, for example:
DataFrame(df, index=[0,1,2,5,5])
returns the following:
0 1 2 3 4
0 1 9 0 7 30
1 2 8 0 4 30
2 3 5 0 3 30
5 NaN NaN NaN NaN NaN
5 NaN NaN NaN NaN NaN
because 5 is not included in the index of the original dataframe.
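A small runnable sketch of the fix, with random numbers standing in for the original data:
import datetime

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 5))                    # stand-in for the original df
date = pd.date_range(datetime.datetime.today(), periods=5)

df.index = date   # relabels the existing rows in place; no reindexing, so no NaNs
print(df)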

Python Pandas - turn absolute periods into relative periods

I have a dataframe that I want to use to calculate rolling sums relative to an event date. The event date is different for each column and is represented by the latest date for which that column has a value.
Here is a toy example:
rng = pd.date_range('1/1/2011', periods=8, freq='D')
df = pd.DataFrame({
    '1': [56, 2, 3, 4, 5, None, None, None],
    '2': [51, 2, 3, 4, 5, 6, None, None],
    '3': [51, 2, 3, 4, 5, 6, 0, None]}, index=rng)
pd.rolling_sum(df,3)
The dataframe it produces looks like this:
1 2 3
2011-01-01 NaN NaN NaN
2011-01-02 NaN NaN NaN
2011-01-03 61 56 56
2011-01-04 9 9 9
2011-01-05 12 12 12
2011-01-06 NaN 15 15
2011-01-07 NaN NaN 11
2011-01-08 NaN NaN NaN
I now want to align the last event dates on the final row of the dataframe and set the index to 0, with each preceding row indexed -1, -2, -3 and so on. The periods are then no longer absolute but relative to the event date.
The desired dataframe would look like this:
1 2 3
-7.00 NaN NaN NaN
-6.00 NaN NaN NaN
-5.00 NaN NaN NaN
-4.00 NaN NaN 56
-3.00 NaN 56 9
-2.00 61 9 12
-1.00 9 12 15
0.00 12 15 11
Thanks for any guidance.
I don't see any easy ways to do this. The following will work, but it's a bit messy.
In [37]: def f(x):
....: y = x.dropna()
....: return Series(y.values,x.index[len(x)-len(y):])
....:
In [40]: roller = pd.rolling_sum(df,3).reset_index(drop=True)
In [41]: roller
Out[41]:
1 2 3
0 NaN NaN NaN
1 NaN NaN NaN
2 61 56 56
3 9 9 9
4 12 12 12
5 NaN 15 15
6 NaN NaN 11
7 NaN NaN NaN
[8 rows x 3 columns]
In [43]: roller.apply(f).reindex_like(roller)
Out[43]:
1 2 3
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN 56
4 NaN 56 9
5 61 9 12
6 9 12 15
7 12 15 11
[8 rows x 3 columns]
In [44]: result = roller.apply(f).reindex_like(roller)
In [49]: result.index = result.index.values-len(result.index)+1
In [50]: result
Out[50]:
1 2 3
-7 NaN NaN NaN
-6 NaN NaN NaN
-5 NaN NaN NaN
-4 NaN NaN 56
-3 NaN 56 9
-2 61 9 12
-1 9 12 15
0 12 15 11
[8 rows x 3 columns]
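A side note for current pandas: the module-level pd.rolling_sum used above was removed in later releases. A lightly modernized sketch of the same approach, assuming df as defined in the question:
import pandas as pd

roller = df.rolling(3).sum().reset_index(drop=True)   # replaces pd.rolling_sum(df, 3)

def f(x):
    # drop the NaNs and re-anchor the remaining values at the bottom of the column
    y = x.dropna()
    return pd.Series(y.values, x.index[len(x) - len(y):])

result = roller.apply(f).reindex_like(roller)
result.index = result.index - len(result.index) + 1   # 0 is the event date, -1 the day before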
