Delete rows with condition if the first row is -1 - python

Hello, I have a data set that I need to clean.
I am handling it in 3 different ways (forward fill, backward fill, and drop).
My code works as is, but I have a problem I have been trying to solve with no luck.
If "forward fill" is enabled and the first row contains -1, then the drop function should be run,
and if "backward fill" is enabled and the last row contains -1, the drop function should be run.
I tried a lot of if statements, but I always get the "the truth value of a Series is ambiguous" error.
def load_measurements(filename, fmode):
    RD = pd.read_csv(filename, names=["Year","Month","Day","hour","Minute", "Second", "Zone1", "Zone2", "Zone3", "Zone4"])
    # Forward fill replaces rows with -1 with the previous data row
    if fmode == 'forward fill':
        if RD.loc[6,0] == -1:
            RD = "blah"
        else:
            RD = RD.replace(-1, np.nan).ffill()
    # Drop deletes all rows where the data is -1
    if fmode == "drop":
        RD = RD[RD.Zone2 != -1]
        RD = RD[RD.Zone1 != -1]
        RD = RD[RD.Zone3 != -1]
        RD = RD[RD.Zone4 != -1]
        print(RD)
    # Backward fill replaces rows with -1 with the next data row
    if fmode == 'backward fill':
        RD = RD.replace(-1, np.nan).bfill()
    # Split RD into tvec and data
    tvec = RD.iloc[:, 0:6]
    data = RD.iloc[:, 6:]
    return (data, tvec)

print(load_measurements("test.csv", 'forward fill'))

As I finally realized, you want something like this:
Note that I replaced pandas.DataFrame.bfill() and pandas.DataFrame.ffill() with pandas.DataFrame.fillna(method="bfill") and pandas.DataFrame.fillna(method="ffill") respectively, as the former are aliases of the latter and bfill() didn't give the required result here; I guess it is some bug.
import numpy as np
import pandas as pd

def load_measurements(filename, fmode):
    RD = pd.read_csv(filename, names=["Year","Month","Day","hour","Minute", "Second", "Zone1", "Zone2", "Zone3", "Zone4"], sep=',')
    print(RD)
    RD = RD.replace(-1, np.NaN)
    # Forward fill replaces rows with NaN with the previous data row
    if fmode == 'forward fill':
        if RD.iloc[0, 6:].isna().sum() == 0:
            RD = RD.fillna(method='ffill')
        else:
            print("\nThere were -1 in the first row!\n")
    # Backward fill replaces rows with NaN with the next data row
    elif fmode == 'backward fill':
        if RD.iloc[-1, 6:].isna().sum() == 0:
            RD = RD.fillna(method='bfill')
        else:
            print("\nThere were -1 in the last row!\n")
    RD = RD.dropna()
    # Split RD into tvec and data
    tvec = RD.iloc[:, 0:6]
    data = RD.iloc[:, 6:]
    return (data, tvec)

data, tvec = load_measurements("test.csv", 'backward fill')
print(data)
print(tvec)
Out("forward fill"):
Year Month Day hour Minute Second Zone1 Zone2 Zone3 Zone4
0 2006 1 15 4 0 0 -1.0 2.0 3.0 4.0
1 2006 2 11 6 1 0 5.0 6.0 1.0 8.0
2 2006 4 21 8 2 0 3.0 -1.0 -1.0 6.0
3 2006 7 14 9 3 0 2.0 3.0 4.0 5.0
4 2006 10 2 9 4 0 3.0 2.0 5.0 -1.0
There were -1 in the first row!
Zone1 Zone2 Zone3 Zone4
1 5.0 6.0 1.0 8.0
3 2.0 3.0 4.0 5.0
Year Month Day hour Minute Second
1 2006 2 11 6 1 0
3 2006 7 14 9 3 0
Out("drop"):
Year Month Day hour Minute Second Zone1 Zone2 Zone3 Zone4
0 2006 1 15 4 0 0 -1.0 2.0 3.0 4.0
1 2006 2 11 6 1 0 5.0 6.0 1.0 8.0
2 2006 4 21 8 2 0 3.0 -1.0 -1.0 6.0
3 2006 7 14 9 3 0 2.0 3.0 4.0 5.0
4 2006 10 2 9 4 0 3.0 2.0 5.0 -1.0
Zone1 Zone2 Zone3 Zone4
1 5.0 6.0 1.0 8.0
3 2.0 3.0 4.0 5.0
Year Month Day hour Minute Second
1 2006 2 11 6 1 0
3 2006 7 14 9 3 0
Out("backward fill"):
Year Month Day hour Minute Second Zone1 Zone2 Zone3 Zone4
0 2006 1 15 4 0 0 -1.0 2.0 3.0 4.0
1 2006 2 11 6 1 0 5.0 6.0 1.0 8.0
2 2006 4 21 8 2 0 3.0 -1.0 -1.0 6.0
3 2006 7 14 9 3 0 2.0 3.0 4.0 5.0
4 2006 10 2 9 4 0 3.0 2.0 5.0 -1.0
There were -1 in the last row!
Zone1 Zone2 Zone3 Zone4
1 5.0 6.0 1.0 8.0
3 2.0 3.0 4.0 5.0
Year Month Day hour Minute Second
1 2006 2 11 6 1 0
3 2006 7 14 9 3 0
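As a side note on the error from the question: "the truth value of a Series is ambiguous" appears when a whole row (a pandas Series) is used where Python expects a single True/False. Reducing the comparison with .any() gives a plain boolean. A minimal sketch of that check, reusing the column layout from above (it is not part of the answer's code):
import pandas as pd

RD = pd.read_csv("test.csv",
                 names=["Year", "Month", "Day", "hour", "Minute", "Second",
                        "Zone1", "Zone2", "Zone3", "Zone4"])
first_row_bad = (RD.iloc[0, 6:] == -1).any()    # True if any zone of the first row is -1
last_row_bad = (RD.iloc[-1, 6:] == -1).any()    # True if any zone of the last row is -1
if first_row_bad:
    print("first row contains -1, fall back to drop")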

Related

Pandas get rank on rolling with FixedForwardWindowIndexer

I am using pandas 1.5.1 and I'm trying to get the rank of each row in a dataframe in a rolling window that looks ahead by employing FixedForwardWindowIndexer. But I can't make sense of the results. My code:
df = pd.DataFrame({"X":[9,3,4,5,1,2,8,7,6,10,11]})
window_size = 5
indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=window_size)
df.rolling(window=indexer).rank(ascending=False)
results:
X
0 5.0
1 4.0
2 1.0
3 2.0
4 3.0
5 1.0
6 1.0
7 NaN
8 NaN
9 NaN
10 NaN
By my reckoning, it should look like:
X
0 1.0 # based on the window [9,3,4,5,1], 9 is ranked 1st w/ascending = False
1 3.0 # based on the window [3,4,5,1,2], 3 is ranked 3rd
2 3.0 # based on the window [4,5,1,2,8], 4 is ranked 3rd
3 3.0 # etc
4 5.0
5 5.0
6 3.0
7 NaN
8 NaN
9 NaN
10 NaN
I am basing this on a backward-looking window, which works fine:
>>> df.rolling(window_size).rank(ascending=False)
X
0 NaN
1 NaN
2 NaN
3 NaN
4 5.0
5 4.0
6 1.0
7 2.0
8 3.0
9 1.0
10 1.0
Any assistance is most welcome.
Here is another way to do it:
df["rank"] = [
x.rank(ascending=False).iloc[0].values[0]
for x in df.rolling(window_size)
if len(x) == window_size
] + [pd.NA] * (window_size - 1)
Then:
print(df)
# Output
X rank
0 9 1.0
1 3 3.0
2 4 3.0
3 5 3.0
4 1 5.0
5 2 5.0
6 8 3.0
7 7 <NA>
8 6 <NA>
9 10 <NA>
10 11 <NA>
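For what it's worth, the numbers pandas itself returned above can be reproduced by ranking the last value of each forward-looking window instead of the first. That is only an observation inferred from the output shown here, not documented behaviour; a small check, assuming df and window_size as defined in the question:
import numpy as np

vals = df["X"].to_numpy()
builtin_like = []
for i in range(len(vals)):
    w = vals[i:i + window_size]               # forward window starting at row i
    if len(w) < window_size:
        builtin_like.append(np.nan)
    else:
        # descending "rank" of the window's last element (no ties in this data)
        builtin_like.append(float((w >= w[-1]).sum()))
print(builtin_like)
# [5.0, 4.0, 1.0, 2.0, 3.0, 1.0, 1.0, nan, nan, nan, nan]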

Adding new rows with new values at some specific columns in pandas

Assume we have a table that looks like the following:
id  week_num  people  date     level  a  b
1   1         20      1990101  1      2  3
1   2         30      1990108  1      2  3
1   3         40      1990115  1      2  3
1   5         100     1990129  1      2  3
1   7         100     1990212  1      2  3
week_num skips "4" and "6" because the corresponding "people" is 0. However, we want all the rows included, like the following table.
id  week_num  people  date     level  a  b
1   1         20      1990101  1      2  3
1   2         30      1990108  1      2  3
1   3         40      1990115  1      2  3
1   4         0       1990122  1      2  3
1   5         100     1990129  1      2  3
1   6         0       1990205  1      2  3
1   7         100     1990212  1      2  3
The date starts at 1990101, and the next row must be +7 days if the week_num is continuous (e.g. 1, 2 is continuous; 1, 3 is not).
How can we use Python (pandas) to achieve this goal?
Note: each id has 10 week_num values (1, 2, 3, ..., 10); the output must include all "week_num" with the corresponding "people" and "date".
Update: other columns like "level", "a", "b" should stay the same even when we add the skipped week_num.
This assumes that the date restarts at 1990-01-01 for each id:
import itertools

# reindex to get all combinations of ids and week numbers
df_full = (df.set_index(["id", "week_num"])
             .reindex(list(itertools.product([1, 2], range(1, 11))))
             .reset_index())
# fill people with zero
df_full = df_full.fillna({"people": 0})
# forward fill some other columns
cols_ffill = ["level", "a", "b"]
df_full[cols_ffill] = df_full[cols_ffill].ffill()
# reconstruct date from week starting from 1990-01-01 for each id
df_full["date"] = pd.to_datetime("1990-01-01") + (df_full.week_num - 1) * pd.Timedelta("1w")
df_full
# out:
id week_num people date level a b
0 1 1 20.0 1990-01-01 1.0 2.0 3.0
1 1 2 30.0 1990-01-08 1.0 2.0 3.0
2 1 3 40.0 1990-01-15 1.0 2.0 3.0
3 1 4 0.0 1990-01-22 1.0 2.0 3.0
4 1 5 100.0 1990-01-29 1.0 2.0 3.0
5 1 6 0.0 1990-02-05 1.0 2.0 3.0
6 1 7 100.0 1990-02-12 1.0 2.0 3.0
7 1 8 0.0 1990-02-19 1.0 2.0 3.0
8 1 9 0.0 1990-02-26 1.0 2.0 3.0
9 1 10 0.0 1990-03-05 1.0 2.0 3.0
10 2 1 0.0 1990-01-01 1.0 2.0 3.0
11 2 2 0.0 1990-01-08 1.0 2.0 3.0
12 2 3 0.0 1990-01-15 1.0 2.0 3.0
13 2 4 0.0 1990-01-22 1.0 2.0 3.0
14 2 5 0.0 1990-01-29 1.0 2.0 3.0
15 2 6 0.0 1990-02-05 1.0 2.0 3.0
16 2 7 0.0 1990-02-12 1.0 2.0 3.0
17 2 8 0.0 1990-02-19 1.0 2.0 3.0
18 2 9 0.0 1990-02-26 1.0 2.0 3.0
19 2 10 0.0 1990-03-05 1.0 2.0 3.0
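A possible tweak to the same idea, in case the ids should not be hard-coded: build the full index from df["id"].unique() and forward-fill the constant columns per id, so values cannot leak from one id into the next. This is only a sketch under the same assumptions as the answer above (df holds the original table, every id needs weeks 1..10):
import pandas as pd

full_index = pd.MultiIndex.from_product(
    [df["id"].unique(), range(1, 11)], names=["id", "week_num"]
)
df_full = (df.set_index(["id", "week_num"])
             .reindex(full_index)
             .reset_index())
# missing weeks get 0 people
df_full["people"] = df_full["people"].fillna(0)
# forward-fill level/a/b within each id only
df_full[["level", "a", "b"]] = df_full.groupby("id")[["level", "a", "b"]].ffill()
# rebuild the date: week 1 starts at 1990-01-01, each week adds 7 days
df_full["date"] = pd.to_datetime("1990-01-01") + (df_full["week_num"] - 1) * pd.Timedelta("7D")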

fill NA of a column with elements of another column

I'm in this situation: my df is like this
A B
0 0.0 2.0
1 3.0 4.0
2 NaN 1.0
3 2.0 NaN
4 NaN 1.0
5 4.8 NaN
6 NaN 1.0
and I want to apply this line of code:
df['A'] = df['B'].fillna(df['A'])
and I expect a workflow and final output like this:
A B
0 2.0 2.0
1 4.0 4.0
2 1.0 1.0
3 NaN NaN
4 1.0 1.0
5 NaN NaN
6 1.0 1.0
A B
0 2.0 2.0
1 4.0 4.0
2 1.0 1.0
3 2.0 NaN
4 1.0 1.0
5 4.8 NaN
6 1.0 1.0
but I receive this error:
TypeError: Unsupported type Series
probably because each time there is an NA it tries to fill it with the whole series and not with the single element at the same index from the other column.
I receive the same error with a syntax like this:
df['C'] = df['B'].fillna(df['A'])
so the problem does not seem to be that I'm first overwriting the values of A with the ones of B and then trying to fill the "B" NAs with the values of a column that is technically the same as B.
I'm in a Databricks environment and I'm working with Koalas data frames, but they behave like pandas ones.
Can you help me?
Another option.
Suppose the following dataset:
import pandas as pd
import numpy as np

df = pd.DataFrame(data={'State': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                        'Sno Center': ["Guntur", "Nellore", "Visakhapatnam", "Biswanath", "Doom-Dooma", "Guntur", "Labac-Silchar", "Numaligarh", "Sibsagar", "Munger-Jamalpu"],
                        'Mar-21': [121, 118.8, 131.6, 123.7, 127.8, 125.9, 114.2, 114.2, 117.7, 117.7],
                        'Apr-21': [121.1, 118.3, 131.5, np.NaN, 128.2, 128.2, 115.4, 115.1, np.NaN, 118.3]})
df
State Sno Center Mar-21 Apr-21
0 1 Guntur 121.0 121.1
1 2 Nellore 118.8 118.3
2 3 Visakhapatnam 131.6 131.5
3 4 Biswanath 123.7 NaN
4 5 Doom-Dooma 127.8 128.2
5 6 Guntur 125.9 128.2
6 7 Labac-Silchar 114.2 115.4
7 8 Numaligarh 114.2 115.1
8 9 Sibsagar 117.7 NaN
9 10 Munger-Jamalpu 117.7 118.3
Then
df.loc[(df["Mar-21"].notnull()) & (df["Apr-21"].isna()), "Apr-21"] = df["Mar-21"]
df
State Sno Center Mar-21 Apr-21
0 1 Guntur 121.0 121.1
1 2 Nellore 118.8 118.3
2 3 Visakhapatnam 131.6 131.5
3 4 Biswanath 123.7 123.7
4 5 Doom-Dooma 127.8 128.2
5 6 Guntur 125.9 128.2
6 7 Labac-Silchar 114.2 115.4
7 8 Numaligarh 114.2 115.1
8 9 Sibsagar 117.7 117.7
9 10 Munger-Jamalpu 117.7 118.3
IIUC:
try with max():
df['A']=df[['A','B']].max(axis=1)
output of df:
A B
0 2.0 2.0
1 4.0 4.0
2 1.0 1.0
3 2.0 NaN
4 1.0 1.0
5 4.8 NaN
6 1.0 1.0
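In plain pandas, combine_first expresses the same "take B, fall back to A at the same index" logic in a single call. A minimal sketch using the frame from the question (whether Koalas supports it identically is an assumption, not something verified here):
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [0.0, 3.0, np.nan, 2.0, np.nan, 4.8, np.nan],
                   "B": [2.0, 4.0, 1.0, np.nan, 1.0, np.nan, 1.0]})
df["A"] = df["B"].combine_first(df["A"])   # B where available, otherwise A
print(df)
#      A    B
# 0  2.0  2.0
# 1  4.0  4.0
# 2  1.0  1.0
# 3  2.0  NaN
# 4  1.0  1.0
# 5  4.8  NaN
# 6  1.0  1.0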

find duplicate subset of columns with nan values in dataframe

I have a dataframe with 4 columns that can have np.nan
df =
i_example i_frame OId HId
0 0 20 3.0 0.0
1 3 13 NaN 8.0
2 3 13 NaN 10.0
3 0 21 3.0 NaN
4 0 21 3.0 0.0
5 1 22 0.0 4.0
6 1 22 NaN 4.0
7 2 20 0.0 4.0
8 2 20 1.0 4.0
I am looking for invalid rows.
Invalid rows are:
[1] rows that are duplicated over the columns [i_example, i_frame, OId], or
[2] rows that are duplicated over the columns [i_example, i_frame, HId].
So in the example above, all the rows are invalid besides the first three rows.
valid_df =
i_example i_frame OId HId
0 0 20 3.0 0.0
1 3 13 NaN 8.0
2 3 13 NaN 10.0
and
invalid_df =
i_example i_frame OId HId
3 0 21 3.0 NaN
4 0 21 3.0 0.0
5 1 22 0.0 4.0
6 1 22 NaN 4.0
7 2 20 0.0 4.0
8 2 20 1.0 4.0
1 0 21 3.0 NaN
2 0 21 3.0 0.0
These two rows are invalid because of the condition [1].
and
3 1 22 0.0 4.0
4 1 22 NaN 4.0
are invalid because of the condition [2]
and
5 2 20 0.0 4.0
6 2 20 1.0 4.0
are invalid for the same reason
I tried duplicated() but it does not work with NaN values.
I am not sure whether the df.duplicated() function offers a way to ignore NaNs, but you can add a condition that checks whether the value is NaN and then find the duplicates.
df[df.duplicated(['i_example', 'i_frame', 'OId'], keep=False) & df['OId'].notna()]
Result:
i_example i_frame OId HId
3 0 21 3.0 NaN
4 0 21 3.0 0.0
So, for your question, I would check whether the value is not NaN, find the duplicates using df.duplicated(), and create a boolean mask. With that, filter the df into valid and invalid.
dupes = (df['OId'].notna() & df.duplicated(['i_example', 'i_frame', 'OId'], keep=False)) | (df['HId'].notna() & df.duplicated(['i_example', 'i_frame', 'HId'], keep=False))
invalid_df = df[dupes]
valid_df = df[~dupes]
Result:
valid_df =
i_example i_frame OId HId
0 0 20 3.0 0.0
1 3 13 NaN 8.0
2 3 13 NaN 10.0
invalid_df =
i_example i_frame OId HId
3 0 21 3.0 NaN
4 0 21 3.0 0.0
5 1 22 0.0 4.0
6 1 22 NaN 4.0
7 2 20 0.0 4.0
8 2 20 1.0 4.0
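One more note on why the notna() guard matters: by default duplicated() treats NaN values as equal to each other, so the two rows with OId = NaN (i_example 3, i_frame 13) would otherwise be flagged as duplicates of the [i_example, i_frame, OId] subset. A tiny sketch illustrating that behaviour (separate from the answer's code):
import numpy as np
import pandas as pd

s = pd.Series([np.nan, np.nan, 1.0])
print(s.duplicated(keep=False).tolist())   # [True, True, False] -- the NaNs match each other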

fill pandas missing places with different method for different column

I have a pandas dataframe df, and I want the final output dataframe final_df as follows:
In [17]: df
Out[17]:
Date symbol cost prev
0 10 a 30 9
1 10 b 33 10
2 12 a 25 4
3 13 a 29 5
In [18]: final_df
Out[18]:
Date symbol cost prev
0 10 a 30.0 9.0
1 10 b 33.0 10.0
2 11 a 0.0 9.0
3 11 b 0.0 10.0
4 12 a 25.0 4.0
5 13 a 29.0 5.0
6 14 a 0.0 5.0
In [19]: dates=[10,11,12,13,14]
That is, as you can see, I want to fill in the missing dates, filling the corresponding values with 0 for the cost column, but for the prev column I want to fill with the value from the previous date. As a single date may contain multiple symbols, I am using pivot_table.
If I use ffill:
In [12]: df.pivot_table(index="Date",columns="symbol").reindex(dates,method="ffill").stack().reset_index()
Out[12]:
Date symbol cost prev
0 10 a 30.0 9.0
1 10 b 33.0 10.0
2 11 a 30.0 9.0
3 11 b 33.0 10.0
4 12 a 25.0 4.0
5 13 a 29.0 5.0
6 14 a 29.0 5.0
This gives almost the final data structure (it has 7 rows, like final_df), except for the cost column, where it copies the previous data but I want 0 there.
So I tried to fill the missing values of different columns with different methods, but that creates a problem:
In [13]: df1=df.pivot_table(index="Date",columns="symbol").reindex(dates)
In [14]: df1["cost"]=df1["cost"].fillna(0)
In [15]: df1["prev"]=df1["prev"].ffill()
In [16]: df1.stack().reset_index()
Out[16]:
Date symbol cost prev
0 10 a 30.0 9.0
1 10 b 33.0 10.0
2 11 a 0.0 9.0
3 11 b 0.0 10.0
4 12 a 25.0 4.0
5 12 b 0.0 10.0
6 13 a 29.0 5.0
7 13 b 0.0 10.0
8 14 a 0.0 5.0
9 14 b 0.0 10.0
As you can see, the output has data with symbol "b" for dates 12, 13 and 14, but I don't want that, because in the initial dataframe there was no data with symbol "b" for dates 12 and 13, and I want to keep it that way; there must also be none for the new date 14, as it follows 13.
So how can I solve this problem and get the final_df output?
EDIT
Here is another example to check the program.
In [17]: df
Out[17]:
Date symbol cost prev
0 10 a 30 9
1 10 b 33 10
2 14 a 29 5
In [18]: dates=range(10,17)
In [19]: final_df
Out[19]:
Date symbol cost prev
0 10 a 30 9
1 10 b 33 10
2 11 a 0 9
3 11 b 0 10
4 12 a 0 9
5 12 b 0 10
6 13 a 0 9
7 13 b 0 10
8 14 a 29 5
9 15 a 0 5
10 16 a 0 5
Solution
I have found this way to solve the problem. Here I use a trick that keeps track of the missing places in the initial pivot_table and removes them at the end.
In [44]: df1=df.pivot_table(index="Date",columns='symbol',fill_value="missing").reindex(dates)
In [45]: df1["cost"]= df1["cost"].fillna(0)
In [46]: df1["prev"]=df1["prev"].ffill()
In [47]: df1.stack().replace(to_replace="missing",value=np.nan).dropna().reset_index()
Out[47]:
Date symbol cost prev
0 10 a 30.0 9.0
1 10 b 33.0 10.0
2 11 a 0.0 9.0
3 11 b 0.0 10.0
4 12 a 0.0 9.0
5 12 b 0.0 10.0
6 13 a 0.0 9.0
7 13 b 0.0 10.0
8 14 a 29.0 5.0
9 15 a 0.0 5.0
10 16 a 0.0 5.0
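Why the sentinel trick works: in the prev column, ffill propagates the string "missing" forward over the reindexed NaN rows, so every date from the one where a symbol disappears onwards becomes "missing" and the whole row is removed by the replace/dropna step. A minimal sketch of just that mechanism, with made-up values (the symbol has data at date 10 and is absent from date 14 on):
import numpy as np
import pandas as pd

s = pd.Series([10.0, np.nan, "missing", np.nan], index=[10, 11, 14, 15])
print(s.ffill())                                      # 10.0, 10.0, missing, missing
print(s.ffill().replace("missing", np.nan).dropna())  # only dates 10 and 11 survive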
