Taking first value in a rolling window that is not numeric - python

This question follows one I previously asked here, which was answered for numeric values.
I am now raising this second one for data of Period type.
While the example below looks simple, my actual windows are of variable size. Being interested in the first row of each window, I am looking for a technique that makes use of this definition.
import pandas as pd
from random import seed, randint
# DataFrame
pi1h = pd.period_range(start='2020-01-01 00:00+00:00', end='2020-01-02 00:00+00:00', freq='1h')
seed(1)
values = [randint(0, 10) for ts in pi1h]
df = pd.DataFrame({'Values' : values, 'Period' : pi1h}, index=pi1h)
# This works (numeric type)
df['first'] = df['Values'].rolling(3).agg(lambda rows: rows[0])
# This doesn't (Period type)
df['OpeningPeriod'] = df['Period'].rolling(3).agg(lambda rows: rows[0])
Result of the 2nd command:
DataError: No numeric types to aggregate
Any idea? Thanks for any help!

The first row of a rolling window of size 3 is the row 2 positions above the current one - just use pd.Series.shift(2):
df['OpeningPeriod'] = df['Period'].shift(2)
For a variable window size (for the sake of the example, I took the Values column as this variable size):
import numpy as np
# position of each window's first row: current position minus window size
x = np.arange(len(df)) - df['Values']
# where that position is valid (>= 0), look up the Period at that position
df['OpeningPeriod'] = np.where(x.ge(0), df.loc[df.index[x.tolist()], 'Period'], np.nan)

Convert your period[H] column to a float, do the rolling, then convert back:
# convert to float (nanoseconds since epoch)
df['Period1'] = df['Period'].dt.to_timestamp().values.astype(float)
# rolling, then convert back to period
df['OpeningPeriod'] = pd.to_datetime(
    df['Period1'].rolling(3).agg(lambda rows: rows[0])
).dt.to_period('1h')
# drop the helper column
df = df.drop(columns='Period1')

Related

Replace unknown values (with different median values)

I have a particular problem: I would like to clean and prepare my data, and I have a lot of unknown values in the "highpoint_metres" column of my dataframe (members). Since there is no missing information for "peak_id", I calculated the median height per peak_id to be more accurate.
I would like to do two steps: 1) add a new column to my "members" dataframe holding the median value, which differs depending on the "peak_id" (the value calculated with the code in the question); 2) have the code check whether the value in highpoint_metres is null, and if it is, put the value of the new column there instead.
code :
import pandas as pd
members = pd.read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-09-22/members.csv")
print(members)
mediane_peak_id = members[["peak_id","highpoint_metres"]].groupby("peak_id",as_index=False).median()
And I don't know how to continue from there (my level of Python is very low ;-))
I believe that's what you're looking for:
import numpy as np
import pandas as pd
members = pd.read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-09-22/members.csv")
median_highpoint_by_peak = members.groupby("peak_id")["highpoint_metres"].transform("median")
is_highpoint_missing = np.isnan(members.highpoint_metres)
members["highpoint_meters_imputed"] = np.where(is_highpoint_missing, median_highpoint_by_peak, members.highpoint_metres)
So one way to go about replacing 0 with the median could be:
import numpy as np
df[col_name] = df[col_name].replace({0: np.median(df[col_name])})
You can also use apply function:
df[col_name] = df[col_name].apply(lambda x: np.median(df[col_name]) if x==0 else x)
Let me know if this helps.
So adding a little bit more info based on Marie's question.
One way to get the median is through groupby, then left-join it back to the original dataframe:
df_gp = df.groupby(['peak_id']).agg(Median=('highpoint_metres', 'median')).reset_index()
df = pd.merge(df, df_gp, on='peak_id', how='left')
df['highpoint_metres'] = df['highpoint_metres'].fillna(df['Median'])
Let me know if this solves your issue

In Python, how to ensure that the value picked by randint keeps changing when I am trying to pick a random number?

def claims(dataframe):
    dataframe.loc[(dataframe.severity == 1), 'claims_made'] = randint(200, 20000)
    return dataframe
Here 'severity' is an existing column and 'claims_made' is a new column. I want randint to keep picking different values to assign to the 'claims_made' column, because for now it just picks one random value out of the specified range and assigns that same value to all the rows that satisfy the condition.
Your code gets a single randint and applies that one value to the column you create. It's the same as if you had done
val = randint(200, 20000)
dataframe.loc[(dataframe.severity == 1), 'claims_made'] = val
Instead you could get an index of the rows you want to assign. Use it to create a series of random integers, and when you assign that back to the dataframe, non-indexed rows become NaN.
import pandas as pd
import numpy as np
def claims(dataframe):
    wanted_index = dataframe[dataframe.severity == 1].index
    dataframe["claims_made"] = pd.Series(
        np.random.randint(200, 20000, size=len(wanted_index)),
        index=wanted_index)
    return dataframe
df = pd.DataFrame({"severity":[1, 1, 0, 8, -1, 99, 1]})
print(claims(df))
If you want to stick with your existing approach, you could do something like this:
def claims2(df):
    # count the rows that satisfy the condition
    n_rows = (df['severity'] == 1).sum()
    vals = [randint(200, 20000) for _ in range(n_rows)]
    df.loc[(df['severity'] == 1), 'claims_made'] = vals
    return df
P.S. I'd recommend accessing columns via df['severity'] instead of df.severity -- you can get into trouble using the dot syntax if your dataset has spaces etc. in the column names.
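For instance (a hypothetical column name, just to illustrate the pitfall):
import pandas as pd
df = pd.DataFrame({"claim amount": [100, 250]})
print(df["claim amount"])  # bracket access handles any column name
# df.claim amount          # attribute access fails: not valid Python syntax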
I'll give you a broad hint; coding is up to you.
Form a series (a temporary column object) of random numbers in the desired range. Assign that series to your data frame column. You can find examples of this technique in any tutorial on data frames.
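A minimal sketch of that hint, assuming the same 'severity' condition and range as in the question (hypothetical data):
import pandas as pd
from random import randint
df = pd.DataFrame({"severity": [1, 0, 1, 1]})
# form a temporary series of random numbers, one per row
rand_col = pd.Series([randint(200, 20000) for _ in range(len(df))], index=df.index)
# assign it to the new column, keeping values only where the condition holds
df['claims_made'] = rand_col.where(df['severity'] == 1)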

How to use 2 methods of filling NA in 1 column in Python

I have a data frame with 1 column.
- There are many NA values at the beginning and at the end that I would like to eliminate completely.
- At the same time, there are some NA values between 2 available values that I would like to fill with the mean of the 2 closest available values.
For illustration, I had attached an image of the data here.
I cannot think of any solution. Just wondering if anyone can please help me with that. Thank you for your help!
Try this; I have reproduced an example using random numbers:
import pandas as pd
import numpy as np
# simulate missing values: one block plus a few random positions
random_index = np.random.randint(0, 100, size=(5, 1))
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 1)), columns=list('A'))
df.loc[10:15, 'A'] = "#N/A"
for c in random_index:
    df.loc[c, "A"] = "#N/A"
# replacing starts from here
df[df == "#N/A"] = np.nan
index = np.where(df['A'].isna())[0]
drops = []
for i in index:
    # an interior NaN with both neighbours present gets their mean;
    # anything else (leading/trailing runs) is marked for dropping
    if 0 < i < len(df) - 1 and not pd.isnull(df.loc[i - 1, "A"]) and not pd.isnull(df.loc[i + 1, "A"]):
        df.loc[i, "A"] = (df.loc[i - 1, "A"] + df.loc[i + 1, "A"]) / 2
    else:
        drops.append(i)
df = df.drop(df.index[drops]).reset_index(drop=True)
First, if each N/A is in string format, replace it with np.nan. The most straightforward way is then to use isnan (or isna) on the given column and extract the indices that are True (for instance by applying the mask to a np.arange array). From there you can either iterate over the indices with a for loop to check whether they are sequential, or calculate the distance between consecutive elements to find the ones not equal to 1.
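Alternatively, pandas can do both steps in a couple of calls; a minimal sketch with hypothetical data (this swaps the index bookkeeping above for built-in trimming and interpolation):
import numpy as np
import pandas as pd
s = pd.Series([np.nan, np.nan, 1.0, 2.0, np.nan, 4.0, np.nan])
# trim the leading/trailing NaN runs
s = s.loc[s.first_valid_index():s.last_valid_index()]
# a single interior NaN becomes the mean of its two neighbours
s = s.interpolate(method='linear', limit_area='inside')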

Dataframe.sample - Weights - How to use it?

I have this situation:
I have a probability of 0.1348 calculated in a variable called treat_conv.
Now I am trying to create a dataframe from the original dataframe, using this probability to sample a specified column. Is that possible? I am trying to use weights but with no success. Maybe I am using it wrong?
Follow my code:
weights = np.array(treat_conv)  # creating an array with treat_conv
# creating a new dataframe with the number of rows of treat_group;
# the converted column should have a 0.13 chance of bringing value 1
new_page_converted = df2.sample(n=treat_group.shape[0], weights=df2.converted(weights))
So, the code works if I use n alone: it creates a new dataframe with the correct amount of rows. But I can't get the correct probability to bring a certain amount of value 1 in the converted column.
I hope my explanation is understandable.
Thank you!
You could do something like this
import pandas as pd
import numpy as np
df = pd.DataFrame(data=np.arange(0, 100, 1), columns=["SomeValue"])
selected = pd.DataFrame(
    data=np.random.choice(df["SomeValue"], int(len(df["SomeValue"]) * 0.13), replace=False),
    columns=["SomeValue"])
selected["Trigger"] = 1
df = df.merge(selected, how="left", on="SomeValue")
df["Trigger"].fillna(0, inplace=True)
"df" is your original DataFrame. Then select random 13% of the values and add a column indicating they've been selected. Finally, merge all back to your original Dataframe.

Creating a dataframe in Python

Not sure how to correctly phrase this, but here goes:
What's the easiest way to create a one-column dataframe in Python that holds ones and zeros, whose length is determined by some input?
For example, say that I have a sample size of 1000, of which 100 are successes (ones). The number of zeros would then be the sample size (i.e., 1000) minus the successes. So the output would be a df with a length of 1000, of which 100 rows contain a one and 900 a zero.
From what you describe, a simple list would do the trick. Otherwise, you can use numpy.array or pandas.DataFrame/pandas.Series (more table-like).
import numpy as np
import pandas as pd
input_length = 1000
# List approach:
my_list = [0 for i in range(input_length)]
# Numpy array:
my_array = np.zeros(input_length)
# With Pandas:
my_table = pd.Series(0, index=range(input_length))
All these create a vector of zeroes, then you assign the successes (ones) as you please. If these were to follow some known distribution, numpy also has methods to generate random vectors that follow them (see here).
If you're really looking for the pandas approach, it can also be combined with the previous ones. This is, you can assign a list or numpy.array to the values of your Series/DataFrame. For example, imagine you want to draw 1000 random samples of a binomial distribution with p=0.5:
p=0.5
my_data = pd.Series(np.random.binomial(1, p, input_length))
In addition to N.P.'s answer. You could do something like this:
import pandas as pd
import numpy as np
def generate_df(df_len):
    values = np.random.binomial(n=1, p=0.1, size=df_len)
    return pd.DataFrame({'value': values})
df = generate_df(1000)
edit:
More complete function:
def generate_df(df_len, option, p_success=0.1):
    '''
    Generate a pandas DataFrame with one single field filled with
    1s and 0s in p_success proportion and length df_len.
    Input:
        - df_len: int, length of the 1st dimension of the DataFrame
        - option: string, determines how the sample will be generated
            * random: according to a Bernoulli distribution with p=p_success
            * fixed: failures first, and a fixed proportion of successes p_success
            * fixed_shuffled: fixed proportion of successes p_success, random order
        - p_success: proportion of successes among the total
    Output:
        - df: pandas DataFrame
    '''
    if option == 'random':
        values = np.random.binomial(n=1, p=p_success, size=df_len)
    elif option in ('fixed_shuffled', 'fixed'):
        n_success = int(df_len * p_success)
        n_fail = df_len - n_success
        values = [0] * n_fail + [1] * n_success
        if option == 'fixed_shuffled':
            np.random.shuffle(values)
    else:
        raise Exception('Unknown option: {}'.format(option))
    df = pd.DataFrame({'value': values})
    return df
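For instance, to reproduce the 1000-row, 100-success example from the question with the function above:
df = generate_df(1000, 'fixed_shuffled', p_success=0.1)
print(df['value'].sum())  # 100 ones out of 1000 rows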
