Select rows containing a NaN following a specific value in Pandas - python

I am trying to create a new DataFrame consisting of the rows whose last column contains the value 1.0 or NaN, where I only take the NaNs under a 1.0 (that is, I'm interested in everything until a 0.0 appears).
Timestamp   Value   Mode
00-00-10    34567   1.0
00-00-20    45425   NaN
00-00-30    46773   0.0
00-00-40    64567   NaN
00-00-50    25665   1.0
00-00-60    25678   NaN
My attempt is:
for row in data.itertuples():
    while data[data.Mode != 0.0]:
        df2 = df2.append(row)
    else:
        # How do I differentiate between a NaN under a 1.0 and a NaN under a 0.0?
        print(df2)
The idea is to save every row until a 0.0 appears, and afterwards ignore every row until a 1.0 appears again.

You can use .ffill to figure out whether a NaN sits below a 1 or below a 0.
Here are the NaN values below a 1:
df[df['Mode'].isnull() & (df['Mode'].ffill() == 1)]
# Timestamp Value Mode
#1 00-00-20 45425 NaN
#5 00-00-60 25678 NaN
To get all of the 1s together with the NaNs below them:
df[(df['Mode'].isnull() & (df['Mode'].ffill() == 1)) | (df['Mode'] == 1)]
# Timestamp Value Mode
#0 00-00-10 34567 1.0
#1 00-00-20 45425 NaN
#4 00-00-50 25665 1.0
#5 00-00-60 25678 NaN
Since you have only 1s and 0s, you can get away with slightly terser logic, though this may not always work because of the NaNs in 'Mode' (it does seem to work for the example above):
df[((df['Mode'].isnull()) & df['Mode'].ffill()) | df.Mode]
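For the overall goal (keep every row from a 1.0 up to, but not including, the next 0.0), here is a minimal self-contained sketch of the forward-fill approach, assuming the example data from the question:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Timestamp': ['00-00-10', '00-00-20', '00-00-30',
                  '00-00-40', '00-00-50', '00-00-60'],
    'Value': [34567, 45425, 46773, 64567, 25665, 25678],
    'Mode': [1.0, np.nan, 0.0, np.nan, 1.0, np.nan],
})

# Forward-filling 'Mode' labels every NaN with the last non-null value,
# so the mask keeps each 1.0 and its trailing NaNs, and drops everything
# from a 0.0 until the next 1.0.
df2 = df[df['Mode'].ffill() == 1]
print(df2)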

Related

Python - Find percent change for previous 7-day period's average

I have time-series data in a dataframe. Is there any way to calculate for each day the percent change of that day's value from the average of the previous 7 days?
I have tried
df['Change'] = df['Column'].pct_change(periods=7)
However, this simply finds the difference between t and t-7 days. I need something like:
For each value of Ti, find the average of the previous 7 days, and subtract from Ti
Sure. For example, you can use:
s = df['Column']
n = 7

# Mean over the previous n days, excluding the current day (closed='left')
mean = s.rolling(n, closed='left').mean()
df['Change'] = (s - mean) / mean
Note on closed='left'
There was a bug prior to pandas=1.2.0 that caused incorrect handling of closed for fixed windows. Make sure you have pandas>=1.2.0; for example, pandas=1.1.3 will not give the result below.
As described in the docs:
closed: Make the interval closed on the ‘right’, ‘left’, ‘both’ or ‘neither’ endpoints. Defaults to ‘right’.
A simple way to understand is to try with some very simple data and a small window:
a = pd.DataFrame(range(5), index=pd.date_range('2020', periods=5))
b = a.assign(
    sum_left=a.rolling(2, closed='left').sum(),
    sum_right=a.rolling(2, closed='right').sum(),
    sum_both=a.rolling(2, closed='both').sum(),
    sum_neither=a.rolling(2, closed='neither').sum(),
)
>>> b
0 sum_left sum_right sum_both sum_neither
2020-01-01 0 NaN NaN NaN NaN
2020-01-02 1 NaN 1.0 1.0 NaN
2020-01-03 2 1.0 3.0 3.0 NaN
2020-01-04 3 3.0 5.0 6.0 NaN
2020-01-05 4 5.0 7.0 9.0 NaN
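If you are stuck on pandas below 1.2.0, here is a sketch of an equivalent computation that avoids closed= entirely: take the ordinary right-closed rolling mean and shift it forward one row, so each day's window covers the previous n days and excludes the day itself.
# Equivalent to s.rolling(n, closed='left').mean() for a fixed window
mean = s.rolling(n).mean().shift(1)
df['Change'] = (s - mean) / mean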

Remove Specific Characters/Strings/Sequences of Characters in Python

I am creating a long list of what appear to be tuples that I would like to convert into a DataFrame later, but certain recurring sequences of characters prevent this from being possible. An example of a fraction of the output:
0,"GAME_ID 21900001
EVENTNUM 2
EVENTMSGTYPE 12
EVENTMSGACTIONTYPE 0
PERIOD 1
WCTIMESTRING 8:04 PM
PCTIMESTRING 12:00
HOMEDESCRIPTION
NEUTRALDESCRIPTION
VISITORDESCRIPTION
SCORE NaN
SCOREMARGIN NaN
PERSON1TYPE 0
PLAYER1_ID 0
PLAYER1_NAME NaN
PLAYER1_TEAM_ID NaN
PLAYER1_TEAM_CITY NaN
PLAYER1_TEAM_NICKNAME NaN
PLAYER1_TEAM_ABBREVIATION NaN
PERSON2TYPE 0
PLAYER2_ID 0
PLAYER2_NAME NaN
PLAYER2_TEAM_ID NaN
PLAYER2_TEAM_CITY NaN
PLAYER2_TEAM_NICKNAME NaN
PLAYER2_TEAM_ABBREVIATION NaN
PERSON3TYPE 0
PLAYER3_ID 0
PLAYER3_NAME NaN
PLAYER3_TEAM_ID NaN
PLAYER3_TEAM_CITY NaN
PLAYER3_TEAM_NICKNAME NaN
PLAYER3_TEAM_ABBREVIATION NaN
VIDEO_AVAILABLE_FLAG 0
DESCRIPTION
TIME_ELAPSED 0
TIME_ELAPSED_PERIOD 0
Name: 0, dtype: object"
Whereas the desired output would be:
GAME_ID 21900001
EVENTNUM 2
EVENTMSGTYPE 12
EVENTMSGACTIONTYPE 0
PERIOD 1
WCTIMESTRING 8:04 PM
PCTIMESTRING 12:00
HOMEDESCRIPTION
NEUTRALDESCRIPTION
VISITORDESCRIPTION
SCORE NaN
SCOREMARGIN NaN
PERSON1TYPE 0
PLAYER1_ID 0
PLAYER1_NAME NaN
PLAYER1_TEAM_ID NaN
PLAYER1_TEAM_CITY NaN
PLAYER1_TEAM_NICKNAME NaN
PLAYER1_TEAM_ABBREVIATION NaN
PERSON2TYPE 0
PLAYER2_ID 0
PLAYER2_NAME NaN
PLAYER2_TEAM_ID NaN
PLAYER2_TEAM_CITY NaN
PLAYER2_TEAM_NICKNAME NaN
PLAYER2_TEAM_ABBREVIATION NaN
PERSON3TYPE 0
PLAYER3_ID 0
PLAYER3_NAME NaN
PLAYER3_TEAM_ID NaN
PLAYER3_TEAM_CITY NaN
PLAYER3_TEAM_NICKNAME NaN
PLAYER3_TEAM_ABBREVIATION NaN
VIDEO_AVAILABLE_FLAG 0
DESCRIPTION
TIME_ELAPSED 0
TIME_ELAPSED_PERIOD 0
How can I get rid of the 0 and " at the start, and the trash at the end past TIME_ELAPSED_PERIOD? The int at the start and the one in the bottom row increase by 1 until the end of my program, likely going upwards of around 320,000, so the code needs to adapt to a range of int values. I think it would be easiest to do this after the creation of my list, so it shouldn't be necessary for me to show you any of my code. Just a systematic manipulation of characters should do the trick. Thanks!
Provided that your input data is in the form of a list, you can try the following to meet your requirements:
inputlist = Your_list_to_be_corrected  # Assign your input list here

# Now, remove the rows in the list that have the format "Name: 0, dtype: object"
inputlist = [x for x in inputlist if "dtype: object" not in x]

# Now, correct the rows containing GAME_ID by removing the int number and special characters
sep = 'GAME_ID'
for index, element in enumerate(inputlist):
    if "GAME_ID" in element:
        inputlist[index] = 'GAME_ID' + element.split(sep, 1)[1]
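An alternative sketch using regular expressions to strip the leading <int>," prefix and the trailing Name: <int>, dtype: object" block in one pass, assuming each record is a single multi-line string like the sample output above (clean_record is a hypothetical helper name):
import re

def clean_record(record):
    # Drop a leading '<digits>,"' prefix, if present
    record = re.sub(r'^\d+,"', '', record)
    # Drop a trailing 'Name: <digits>, dtype: object"' block, if present
    record = re.sub(r'\s*Name: \d+, dtype: object"$', '', record)
    return record

cleaned = [clean_record(r) for r in inputlist]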

Interpolation still leaving NaN's (pandas groupby)

I have a Dataframe with the location of some customers (so I have a column with Customer_id and others with Lat and Lon) and I am trying to interpolate the NaN's according to each customer.
For example, if I interpolate with the nearest approach here (I made up the values here):
Customer_id Lat Lon
A 1 1
A NaN NaN
A 2 2
B NaN NaN
B 4 4
I would like the NaN for B to be 4 and not 2.
I have tried this
series.groupby('Customer_id').apply(lambda group: group.interpolate(method = 'nearest', limit_direction = 'both'))
And the number of NaN's goes down from 9003 to 94. But I'm not understanding why it is still leaving some missing values.
I checked and these 94 missing values corresponded to records from customers that were already being interpolated. For example,
   Customer_id  Lat
0  A            1
1  A            NaN
2  A            NaN
3  A            NaN
4  A            NaN
It would interpolate correctly up to some point (say it fills rows 1, 2, and 3 correctly) and then leave row 4 as NaN.
I have tried to set a limit in interpolate greater than the maximum number of records per client but it is still not working out. I don't know where my mistake is, can somebody help out?
(I don't know if it's relevant to mention or not but I fabricated my own NaN's for this. This is the code I used Replace some values in a dataframe with NaN's if the index of the row does not exist in another dataframe I think the problem isn't here but since I'm very confused as to where the issue actually is I'll just leave it here)
When you interpolate with method='nearest', it can only fill missing values that lie between two non-null values. (You'll notice this because you get an error when there's only one non-null value, as in your example.) The remaining null values are "edges", which are taken care of with .ffill().bfill() to match the nearest logic. This is also the appropriate logic to "interpolate" a group with only one non-missing value.
def my_interp(x):
    if x.notnull().sum() > 1:
        # 'nearest' needs at least two non-null points (and requires scipy)
        return x.interpolate(method='nearest').ffill().bfill()
    else:
        # With a single non-null value, just propagate it in both directions
        return x.ffill().bfill()

df.groupby('Customer_id').transform(my_interp)
# Lat Lon
#0 1.0 1.0
#1 1.0 1.0
#2 2.0 2.0
#3 4.0 4.0
#4 4.0 4.0

Using reindex with duplicated axis

Let's say I have a dataframe with dates as index. Each row contains information about a certain event on that date. The problem is that there could be more than one event on said date.
This is an example DataFrame, df2:
one two
1/2 1.0 1.0
1/2 1.0 1.0
1/4 3.0 3.0
1/5 NaN 4.0
I want to add missing dates to the dataframe, and I used to be able to do it with .loc. Now .loc raises the following warning:
Passing list-likes to .loc or [] with any missing label will raise
KeyError in the future, you can use .reindex() as an alternative.
This is my code (it works but raises warning):
# I want to add any missing date- in this example, 1/3.
df2.loc[["1/2","1/3","1/4","1/5"]]
one two
1/2 1.0 1.0
1/2 1.0 1.0
1/3 NaN NaN
1/4 3.0 3.0
1/5 NaN 4.0
I've tried using reindex as it suggests, but my index contains duplicated values so it doesn't work:
#This doesn't work
df2.reindex(["1/2","1/3","1/4","1/5"])
ValueError: cannot reindex from a duplicate axis
What can I do to replace the old loc?
One way is to use join:
df.join(pd.DataFrame(index=["1/2","1/3","1/4","1/5"]),how='outer')
Out[193]:
one two
1/2 1.0 1.0
1/2 1.0 1.0
1/3 NaN NaN
1/4 3.0 3.0
1/5 NaN 4.0
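Another sketch, in case you prefer to avoid reindex-like alignment altogether: build only the labels that are missing with Index.difference, then concat empty rows for them. The duplicated '1/2' rows survive untouched (out is an assumed result name):
wanted = pd.Index(["1/2", "1/3", "1/4", "1/5"])

# Labels requested but absent from the (possibly duplicated) index
missing = wanted.difference(df2.index)

# Append all-NaN rows for the missing dates, then restore date order
out = pd.concat([df2, pd.DataFrame(index=missing, columns=df2.columns, dtype=float)])
out = out.sort_index()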

Python Pandas Dataframe: length of index does not match - df['column'] = ndarray

I have a pandas Dataframe containing EOD financial data (OHLC) for analysis.
I'm using https://github.com/cirla/tulipy library to generate technical indicator values, that have a certain timeperiod as option. For Example. ADX with timeperiod=5 shows ADX for last 5 days.
Because of this timeperiod, the generated array of indicator values is always shorter than the DataFrame, since the prices of the first 5 days are needed to generate the ADX for day 6.
pdi14, mdi14 = ti.di(
    high=highData, low=lowData, close=closeData, period=14)
df['mdi_14'] = mdi14
df['pdi_14'] = pdi14
>> ValueError: Length of values does not match length of index
Unfortunately, unlike TA-LIB for example, this tulip library does not provide NaN values for those first few empty days...
Is there an easy way to prepend these NaNs to the ndarray?
Or to insert into the df at a certain index and have it create NaNs for the rows before it automatically?
Thanks in advance, I've been researching for days!
Maybe make the shift yourself in the code?
period = 14
pdi14, mdi14 = ti.di(
    high=highData, low=lowData, close=closeData, period=period
)
df['mdi_14'] = np.nan
# Assign the shorter output to the tail of the column; .loc avoids the
# chained-assignment pitfall of df['mdi_14'][period - 1:] = mdi14
df.loc[df.index[period - 1:], 'mdi_14'] = mdi14
I hope they will fill the first values with NAN in the lib in the future. It's dangerous to leave time series data like this without any label.
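Alternatively, a small helper can prepend the missing NaNs to any indicator output before assignment (a sketch; pad_left is a hypothetical name, not part of tulipy):
import numpy as np

def pad_left(values, target_len):
    # Prepend NaNs so the indicator output matches the DataFrame length
    return np.concatenate([np.full(target_len - len(values), np.nan), values])

df['mdi_14'] = pad_left(mdi14, len(df))
df['pdi_14'] = pad_left(pdi14, len(df))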
Full MCVE
df = pd.DataFrame(1, range(10), list('ABC'))
a = np.full((len(df) - 6, df.shape[1]), 2)
b = np.full((6, df.shape[1]), np.nan)
c = np.row_stack([b, a])
d = pd.DataFrame(c, df.index, df.columns)
d
A B C
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
5 NaN NaN NaN
6 2.0 2.0 2.0
7 2.0 2.0 2.0
8 2.0 2.0 2.0
9 2.0 2.0 2.0
The C version of the tulip library includes a start function for each indicator (reference: https://tulipindicators.org/usage) that can be used to determine the output length of an indicator given a set of input options. Unfortunately, it does not appear that the python bindings library, tulipy, includes this functionality. Instead you have to resort to dynamically reassigning your index values to align the output with the original DataFrame.
Here is an example that uses the price series from the tulipy docs:
# Create the dataframe with close prices (as a list, not a set: sets are
# unordered, and pandas rejects them as DataFrame input)
prices = pd.DataFrame(data=[81.06, 81.59, 82.87, 83, 83.61, 83.15, 82.84, 83.99, 84.55,
                            84.36, 85.53, 86.54, 86.89, 87.77, 87.29], columns=['close'])

# Compute the technical indicator using tulipy and save the result in a DataFrame
bbands = pd.DataFrame(data=np.transpose(ti.bbands(real=prices['close'].to_numpy(), period=5, stddev=2)))

# Dynamically realign the index; note from the tulip library documentation that the
# price/volume data is expected to be ordered "oldest to newest (index 0 is oldest)"
bbands.index += prices.index.max() - bbands.index.max()

# Put the indicator values with the original DataFrame
prices[['BBANDS_5_2_low', 'BBANDS_5_2_mid', 'BBANDS_5_2_up']] = bbands
prices.head(15)
close BBANDS_5_2_low BBANDS_5_2_mid BBANDS_5_2_up
0 81.06 NaN NaN NaN
1 81.59 NaN NaN NaN
2 82.87 NaN NaN NaN
3 83.00 NaN NaN NaN
4 83.61 80.530042 82.426 84.321958
5 83.15 81.494061 82.844 84.193939
6 82.84 82.533343 83.094 83.654657
7 83.99 82.471983 83.318 84.164017
8 84.55 82.417750 83.628 84.838250
9 84.36 82.435203 83.778 85.120797
10 85.53 82.511331 84.254 85.996669
11 86.54 83.142618 84.994 86.845382
12 86.89 83.536488 85.574 87.611512
13 87.77 83.870324 86.218 88.565676
14 87.29 85.288871 86.804 88.319129
