I want to create a new column containing the difference (discrete differentiation) of a series in another column.
The following is my dataframe:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'series_1': [10.1, 15.3, 16, 12, 14.5, 11.8, 2.3, 7.7, 5, 10],
    'series_2': [9.6, 10.4, 11.2, 3.3, 6, 4, 1.94, 15.44, 6.17, 8.16]
})
It displays as follows:
series_1 series_2
0 10.1 9.60
1 15.3 10.40
2 16.0 11.20
3 12.0 3.30
4 14.5 6.00
5 11.8 4.00
6 2.3 1.94
7 7.7 15.44
8 5.0 6.17
9 10.0 8.16
Goal
I want to get the following output:
series_1 series_2 diff_2
0 10.1 9.60 NaN
1 15.3 10.40 0.80
2 16.0 11.20 0.80
3 12.0 3.30 -7.90
4 14.5 6.00 2.70
5 11.8 4.00 -2.00
6 2.3 1.94 -2.06
7 7.7 15.44 13.50
8 5.0 6.17 -9.27
9 10.0 8.16 1.99
My code
To reach the desired output I used the following code and it worked:
diff_2 = [np.nan]
l = len(df)
for i in range(1, l):
    diff_2.append(df['series_2'][i] - df['series_2'][i - 1])
df['diff_2'] = diff_2
Issue with my code
I replicated a simplified dataframe here; the real one I am working on is extremely large, and my code took almost 9 minutes to run!
I want a faster alternative that produces the same output.
Any suggestion will be highly appreciated, thanks.
Here is one way to do it, using diff:
# create a new column holding the difference between consecutive rows of series_2
df['diff_2'] = df['series_2'].diff()
df
series_1 series_2 diff_2
0 10.1 9.60 NaN
1 15.3 10.40 0.80
2 16.0 11.20 0.80
3 12.0 3.30 -7.90
4 14.5 6.00 2.70
5 11.8 4.00 -2.00
6 2.3 1.94 -2.06
7 7.7 15.44 13.50
8 5.0 6.17 -9.27
9 10.0 8.16 1.99
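As an aside (not part of the question), diff also takes a periods argument, so the same call covers other gaps:
# Difference against the row two positions back:
df['diff_lag2'] = df['series_2'].diff(2)

# Difference against the *next* row (the sign flips relative to diff(1)):
df['diff_next'] = df['series_2'].diff(-1)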
You might want to add the following line of code:
df["diff_2"] = df["series_2"].sub(df["series_2"].shift(1))
to achieve your desired output:
series_1 series_2 diff_2
0 10.1 9.60 NaN
1 15.3 10.40 0.80
2 16.0 11.20 0.80
3 12.0 3.30 -7.90
4 14.5 6.00 2.70
5 11.8 4.00 -2.00
6 2.3 1.94 -2.06
7 7.7 15.44 13.50
8 5.0 6.17 -9.27
9 10.0 8.16 1.99
That is a built-in pandas feature, so it should be optimized for good performance.
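If you want to verify the speedup yourself, here is a minimal timing sketch; the one-million-row frame is illustrative, not from the original post, and exact numbers will vary by machine:
import timeit

import numpy as np
import pandas as pd

# Illustrative large frame, standing in for the "extremely large" real data.
big = pd.DataFrame({'series_2': np.random.rand(1_000_000)})

def loop_diff():
    # The original row-by-row approach.
    out = [np.nan]
    for i in range(1, len(big)):
        out.append(big['series_2'][i] - big['series_2'][i - 1])
    return out

# The vectorized version is typically orders of magnitude faster.
print(timeit.timeit(loop_diff, number=1))
print(timeit.timeit(lambda: big['series_2'].diff(), number=1))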
I've got a dataframe that looks like:
0 1 2 3 4 5 6 7 8 9 10 11
12 13 13 13.4 13.4 12.4 12.4 16 0 0 0 0
14 12.2 12.2 13.4 13.4 12.6 12.6 19 5 5 6.7 6.7
.
.
.
Each 'layer'/row has pairs of duplicate values that I want to reduce to single values.
The one problem is that there are repeated 0s as well, so I cannot simply remove duplicates per row, or the rows will end up with unequal lengths.
My desired output would be a lambda function that I could apply to all rows of this dataframe to get this:
0 1 2 3 4 5 6
12 13 13.4 12.4 16 0 0
14 12.2 13.4 12.6 19 5 6.7
.
.
.
Is there a simple function I could write to do this?
Method 1 using transpose
As mentioned by Yuca in the comments:
df = df.T.drop_duplicates().T
df.columns = range(len(df.columns))
print(df)
0 1 2 3 4 5 6
0 12.0 13.0 13.4 12.4 16.0 0.0 0.0
1 14.0 12.2 13.4 12.6 19.0 5.0 6.7
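One caveat worth hedging: the double transpose round-trips the data through rows, so if the frame mixes dtypes the result comes back with object columns. In that case infer_objects() can restore the numeric dtypes:
# Same idea as above, but restoring numeric dtypes afterwards in case
# the transpose upcast them to object.
df = df.T.drop_duplicates().T.infer_objects()
df.columns = range(len(df.columns))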
Method 2 using list comprehension with even numbers
We can make a list of even numbers and then select those columns based on their index:
idxcols = [x-1 for x in range(len(df.columns)) if x % 2]
df = df.iloc[:, idxcols]
df.columns = range(len(df.columns))
print(df)
0 1 2 3 4 5
0 12 13.0 13.4 12.4 0 0.0
1 14 12.2 13.4 12.6 5 6.7
Note that this method assumes every value appears in a duplicated pair: the unpaired column 7 (the 16 and 19) gets dropped, as the output above shows, so Method 1 is the safer choice for this data.
In your case
l = [sorted(set(x), key=x.index) for x in df.values.tolist()]
newdf = pd.DataFrame(l).ffill(axis=1)
newdf
Out[177]:
0 1 2 3 4 5 6
0 12.0 13.0 13.4 12.4 16.0 0.0 0.0
1 14.0 12.2 13.4 12.6 19.0 5.0 6.7
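To see what the comprehension does, take row 0 of the question's data:
x = [12.0, 13.0, 13.0, 13.4, 13.4, 12.4, 12.4, 16.0, 0.0, 0.0, 0.0, 0.0]

# set(x) drops all duplicates (including the repeated 0s), and
# key=x.index re-orders the survivors by first appearance.
print(sorted(set(x), key=x.index))  # [12.0, 13.0, 13.4, 12.4, 16.0, 0.0]
Rows can come out with different lengths, so pd.DataFrame pads the short ones with NaN, and ffill(axis=1) fills that padding from the left, which is how the second 0 in row 0 reappears.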
You can use functools.reduce to sequentially concatenate columns to your output DataFrame if the next column is not equal to the last column added:
from functools import reduce

output_df = reduce(
    lambda d, c: d if (d.iloc[:, -1] == df[c]).all() else pd.concat([d, df[c]], axis=1),
    df.columns[1:],
    df[df.columns[0]].to_frame()
)
print(output_df)
# 0 1 3 5 7 8 10
#0 12 13.0 13.4 12.4 16 0 0.0
#1 14 12.2 13.4 12.6 19 5 6.7
This method also maintains the column names of the columns which were picked, if that's important.
Assuming this is your input df:
print(df)
# 0 1 2 3 4 5 6 7 8 9 10 11
#0 12 13.0 13.0 13.4 13.4 12.4 12.4 16 0 0 0.0 0.0
#1 14 12.2 12.2 13.4 13.4 12.6 12.6 19 5 5 6.7 6.7
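For comparison (this is not from the original answer), the same keep-a-column-when-it-differs idea can be written without reduce; Series.equals also treats NaNs in matching positions as equal, which == does not:
# Keep the first column, then every column that differs from its left
# neighbour; for runs of consecutive duplicates this matches the reduce
# version above.
keep = [df.columns[0]] + [
    c for prev, c in zip(df.columns, df.columns[1:])
    if not df[c].equals(df[prev])
]
print(df[keep])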
I have a df with columns (A, B, C, D, E). I want to:
1) Compare consecutive rows.
2) If the absolute difference between consecutive E values is <= 1 AND the absolute difference between consecutive C values is > 7, delete the row with the lower C value.
Sample Data:
A B C D E
0 94.5 4.3 26.0 79.0 NaN
1 34.0 8.8 23.0 58.0 54.5
2 54.2 5.4 25.5 9.91 50.2
3 42.2 3.5 26.0 4.91 5.1
4 98.0 8.2 13.0 193.7 5.5
5 20.5 9.6 17.0 157.3 5.3
6 32.9 5.4 24.5 45.9 79.8
Desired result:
A B C D E
0 94.5 4.3 26.0 79.0 NaN
1 34.0 8.8 23.0 58.0 54.5
2 54.2 5.4 25.5 9.91 50.2
3 42.2 3.5 26.0 4.91 5.1
4 32.9 5.4 24.5 45.9 79.8
Row 4 was deleted when compared with row 3. Row 5 then became row 4 and was deleted when compared to row 3 as well.
This code returns the result as booleans (not a dataframe with values) and does not satisfy all of the conditions:
df = (abs(df.E.diff(-1)) <= 1) & (abs(df.C.diff(-1)) > 7.)
The result of the code:
0 False
1 False
2 False
3 True
4 False
5 False
6 False
dtype: bool
Any help appreciated.
Using shift() to compare the rows, and a while loop to iterate until no new change happens:
while True:
    rows = len(df)
    # Drop every row whose E is within 1 of, and whose C is more than 7
    # away from, the previous remaining row.
    df = df[~((abs(df.E - df.E.shift(1)) <= 1) & (abs(df.C - df.C.shift(1)) > 7))]
    df.reset_index(inplace=True, drop=True)
    if rows == len(df):
        break
It produces the desired output:
A B C D E
0 94.5 4.3 26.0 79.00 NaN
1 34.0 8.8 23.0 58.00 54.5
2 54.2 5.4 25.5 9.91 50.2
3 42.2 3.5 26.0 4.91 5.1
4 32.9 5.4 24.5 45.90 79.8
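A small aside (not from the original answer): df.E - df.E.shift(1) is exactly what df.E.diff() computes, so the mask inside the loop can be written a bit more compactly:
# Equivalent filter body for the while loop above; diff() is
# shift-and-subtract under the hood.
mask = (df.E.diff().abs() <= 1) & (df.C.diff().abs() > 7)
df = df[~mask].reset_index(drop=True)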
I have several datasets which I am trying to merge into one. Below, I created smaller fictitious datasets to test the method, and it worked perfectly fine.
examplelog = pd.DataFrame({'Depth': [10, 20, 30, 40, 50, 60, 70, 80],
                           'TVD': [10, 19.9, 28.8, 37.7, 46.6, 55.5, 64.4, 73.3],
                           'T1': [11, 11.3, 11.5, 12., 12.3, 12.6, 13., 13.8],
                           'T2': [11.3, 11.5, 11.8, 12.2, 12.4, 12.7, 13.1, 14.1]})
log1 = pd.DataFrame({'Depth': [30, 40, 50, 60], 'T3': [12.1, 12.6, 13.7, 14.]})
log2 = pd.DataFrame({'Depth': [20, 30, 40, 50, 60], 'T4': [12.0, 12.2, 12.4, 13.2, 14.1]})

logs = [log1, log2]
result = examplelog.copy()
for i in logs:
    result = result.merge(i, how='left', on='Depth')
print(result)
The result is, as expected:
Depth T1 T2 TVD T3 T4
0 10 11.0 11.3 10.0 NaN NaN
1 20 11.3 11.5 19.9 NaN 12.0
2 30 11.5 11.8 28.8 12.1 12.2
3 40 12.0 12.2 37.7 12.6 12.4
4 50 12.3 12.4 46.6 13.7 13.2
5 60 12.6 12.7 55.5 14.0 14.1
6 70 13.0 13.1 64.4 NaN NaN
7 80 13.8 14.1 73.3 NaN NaN
Happy with the result, I applied this method to my actual data, but for T3 and T4 the resulting columns were empty (all values were NaN). I suspect the problem is with floating-point numbers, because my datasets were created on different machines by different software. Although "Depth" has a precision of two decimal places in all of the files, I am afraid a value may not be exactly 20.05 in both of them: one file might store 20.049999999999999 while the other stores 20.05000000000001. Then the merge function will not work, as shown in the following example:
examplelog = pd.DataFrame({'Depth': [10, 20, 30, 40, 50, 60, 70, 80],
                           'TVD': [10, 19.9, 28.8, 37.7, 46.6, 55.5, 64.4, 73.3],
                           'T1': [11, 11.3, 11.5, 12., 12.3, 12.6, 13., 13.8],
                           'T2': [11.3, 11.5, 11.8, 12.2, 12.4, 12.7, 13.1, 14.1]})
log1 = pd.DataFrame({'Depth': [30.05, 40.05, 50.05, 60.05], 'T3': [12.1, 12.6, 13.7, 14.]})
log2 = pd.DataFrame({'Depth': [20.01, 30.01, 40.01, 50.01, 60.01], 'T4': [12.0, 12.2, 12.4, 13.2, 14.1]})

logs = [log1, log2]
result = examplelog.copy()
for i in logs:
    result = result.merge(i, how='left', on='Depth')
print(result)
Depth T1 T2 TVD T3 T4
0 10 11.0 11.3 10.0 NaN NaN
1 20 11.3 11.5 19.9 NaN NaN
2 30 11.5 11.8 28.8 NaN NaN
3 40 12.0 12.2 37.7 NaN NaN
4 50 12.3 12.4 46.6 NaN NaN
5 60 12.6 12.7 55.5 NaN NaN
6 70 13.0 13.1 64.4 NaN NaN
7 80 13.8 14.1 73.3 NaN NaN
Do you know how to fix this?
Thanks!
Round the Depth values to the appropriate precision:
for df in [examplelog, log1, log2]:
    df['Depth'] = df['Depth'].round(1)
import numpy as np
import pandas as pd
examplelog = pd.DataFrame({'Depth': [10, 20, 30, 40, 50, 60, 70, 80],
                           'TVD': [10, 19.9, 28.8, 37.7, 46.6, 55.5, 64.4, 73.3],
                           'T1': [11, 11.3, 11.5, 12., 12.3, 12.6, 13., 13.8],
                           'T2': [11.3, 11.5, 11.8, 12.2, 12.4, 12.7, 13.1, 14.1]})
log1 = pd.DataFrame({'Depth': [30.05, 40.05, 50.05, 60.05], 'T3': [12.1, 12.6, 13.7, 14.]})
log2 = pd.DataFrame({'Depth': [20.01, 30.01, 40.01, 50.01, 60.01],
                     'T4': [12.0, 12.2, 12.4, 13.2, 14.1]})

for df in [examplelog, log1, log2]:
    df['Depth'] = df['Depth'].round(1)

logs = [log1, log2]
result = examplelog.copy()
for i in logs:
    result = result.merge(i, how='left', on='Depth')
print(result)
yields
Depth T1 T2 TVD T3 T4
0 10 11.0 11.3 10.0 NaN NaN
1 20 11.3 11.5 19.9 NaN 12.0
2 30 11.5 11.8 28.8 12.1 12.2
3 40 12.0 12.2 37.7 12.6 12.4
4 50 12.3 12.4 46.6 13.7 13.2
5 60 12.6 12.7 55.5 14.0 14.1
6 70 13.0 13.1 64.4 NaN NaN
7 80 13.8 14.1 73.3 NaN NaN
Per the comments, rounding does not appear to work for the OP on the actual
data. To debug the problem, find some rows which should merge:
subframes = []
for frame in [examplelog, log2]:
    mask = (frame['Depth'] < 20.051) & (frame['Depth'] >= 20.0)
    subframes.append(frame.loc[mask])
Then post the output of:
for frame in subframes:
    print(frame.to_dict('list'))
    print(frame.info())  # shows the dtypes of the columns
This might give us the info we need to reproduce the problem.
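If the keys really do differ far past the second decimal place, another option (not from the original answer, but standard pandas) is pd.merge_asof, which matches each key to the nearest key in the other frame within a tolerance instead of requiring exact equality. Both frames must be sorted on the key, and the key dtypes must match; a minimal sketch on the example data:
import pandas as pd

# Cast the integer Depth to float so the key dtypes match, then sort.
result = examplelog.assign(Depth=examplelog['Depth'].astype(float)).sort_values('Depth')
for log in [log1, log2]:
    # direction='nearest' matches the closest Depth within +/- tolerance,
    # sidestepping exact float comparison entirely.
    result = pd.merge_asof(result, log.sort_values('Depth'),
                           on='Depth', direction='nearest', tolerance=0.1)
print(result)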