python pandas: fluent setter for DataFrame index?

Is there a fluent setter for index? Something like df.with_index(other_index), which would keep the data (the np.array) as is, but replaces the index with other_index?
A non-fluent way (that modifies an existing DataFrame) is:
df.index = other_index
I found a way that doesn't affect the original, can be generated on the fly without temporary variables (so it is, in a sense, fluent), and is a shallow copy (it doesn't duplicate the data itself; the same underlying np.array is shared by both df and the result). However, it is a bit verbose:
def with_index(df, index):
    return pd.DataFrame(data=df.values, index=index, columns=df.columns)
Alternatively:
def with_axes(df, index=None, columns=None):
    if index is None:
        index = df.index
    if columns is None:
        columns = df.columns
    return pd.DataFrame(data=df.values, index=index, columns=columns)
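For what it's worth, the helper can be sanity-checked like this (restating the with_index function above; whether the underlying buffer is truly shared depends on your pandas version and copy-on-write settings):

```python
import numpy as np
import pandas as pd

def with_index(df, index):
    # Rebuild a frame around the same data with a replacement index.
    return pd.DataFrame(data=df.values, index=index, columns=df.columns)

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
out = with_index(df, ["x", "y"])

print(list(out.index))    # ['x', 'y']
print(out["a"].tolist())  # [1, 2]
print(list(df.index))     # [0, 1] -- original untouched
# Whether this prints True depends on pandas version / copy-on-write:
print(np.shares_memory(df.values, out.values))
```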
Is there some method that I missed that does that? I tried df.assign(index=other_index), but it just creates a new column called 'index'... And of course, df.reindex(), df.replace(), df.set_index() do different things.

Doh. I just found it:
df.set_axis(other_index, inplace=False)
(Apparently, in a future pandas release, inplace=None will default to False instead of the current default of True, as of 0.25.1.)
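In current pandas this is straightforward: inplace defaults to False from pandas 1.0 onward, and the keyword was removed entirely in 2.0, so the plain call is the fluent one:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# Fluent, non-mutating index replacement: returns a new DataFrame,
# leaving df itself unchanged.
df2 = df.set_axis(["x", "y", "z"])

print(list(df2.index))  # ['x', 'y', 'z']
print(list(df.index))   # [0, 1, 2] -- original unchanged
```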

Related

Should I redefine a pandas dataframe with every function?

From experience, some pandas functions require that I redefine the dataframe if I intend to use them, otherwise they won't return a copy by default. For example: df.drop("ColA", axis=1) will not actually drop the column, but I need to implement it by df = df.drop("ColA", axis=1) or by df.drop("ColA", axis=1, inplace=True) if I need to modify the dataframe.
This seems to be the case with some other pandas functions. Therefore, what I usually do is redefine a dataframe for every function so that I can ensure it is modified. For example:
df = df.set_index("id")
df = df.sort_values(by="Date")
df["B"] = df["B"].fillna(-1)
df = df.reset_index(drop = True)
df["ColA"] = df["ColA"].astype(str)
I know some of these functions do not require to define the dataframe, but I just do it to make sure the changes are applied. My question is if there is a way to know which functions require redefining the dataframe and which don't need it, and also if there is any computational difference between using df = df.set_index("id") and df.set_index("id") if they have the same output.
Also is there a difference between df["B"] = df["B"].fillna(-1) and df = df["B"].fillna(-1)?
My question is if there is a way to know which functions require redefining the dataframe and which don't need it
It's called the manual.
set_index() has an inplace=True parameter; if that's set, you won't need to reassign.
sort_values() has that too.
fillna() has that too.
reset_index() has that too.
astype() has copy=True by default, but heed the warning about setting it to False:
"be very careful setting copy=False as changes to values then may propagate to other pandas objects"
if there is any computational difference between
Yes – if Pandas is able to make the changes in-place, it won't need to copy the series or dataframe, which could be a significant time and memory expense with large dataframes.
Also is there a difference between df["B"] = df["B"].fillna(-1) and df = df["B"].fillna(-1)?
Yes, there is. The first assigns a Series back into a column of the DataFrame; the second binds the name df (now misleadingly named) to a single Series, discarding the DataFrame.
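A quick demonstration of the difference:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [float("nan"), 3.0]})

# Column assignment: the filled Series goes back into the DataFrame.
df["B"] = df["B"].fillna(-1)
print(df["B"].tolist())  # [-1.0, 3.0]; df is still a DataFrame

# Rebinding the name: df now refers to a bare Series; the DataFrame
# (including column A) is no longer reachable through df.
df = df["B"].fillna(-1)
print(type(df).__name__)  # Series
```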
There is a long discussion about this on the pandas GitHub; check it out.
I also agree it is best not to use inplace, because it is confusing and it is not clear how or when it actually saves memory.
Should I redefine a pandas dataframe with every function?
I think yes, though large DataFrames may warrant exceptions (see the link).
There is also a list of methods that take an inplace parameter.
Also is there a difference between df["B"] = df["B"].fillna(-1) and df = df["B"].fillna(-1)
If you use df["B"] = df["B"].fillna(-1), it assigns column B (a Series), with missing values replaced by -1, back into the DataFrame.
If you use df = df["B"].fillna(-1), it returns a Series with the replaced values, but that Series is bound to the name df, so the original DataFrame is lost.
I don't think there is a general solution for this. Some methods work in place by default, while others return a copy, and you need to reassign the result as you usually do. The best option is to check the docs (for the inplace parameter) every time you want to use a method; with practice you will learn the most common ones, like sorting, resetting the index, etc.

How to filter multiple dataframes in a loop?

I have a lot of dataframes and I would like to apply the same filter to all of them without having to copy paste the filter condition every time.
This is my code so far:
df_list_2019 = [df_spain_2019,df_amsterdam_2019, df_venice_2019, df_sicily_2019]
for data in df_list_2019:
    data = data[['host_since','host_response_time','host_response_rate',
                 'host_acceptance_rate','host_is_superhost','host_total_listings_count',
                 'host_has_profile_pic','host_identity_verified',
                 'neighbourhood','neighbourhood_cleansed','zipcode','latitude','longitude','property_type','room_type',
                 'accommodates','bathrooms','bedrooms','beds','amenities','price','weekly_price',
                 'monthly_price','cleaning_fee','guests_included','extra_people','minimum_nights','maximum_nights',
                 'minimum_nights_avg_ntm','has_availability','availability_30','availability_60','availability_90',
                 'availability_365','number_of_reviews','number_of_reviews_ltm','review_scores_rating',
                 'review_scores_checkin','review_scores_communication','review_scores_location','review_scores_value',
                 'instant_bookable','is_business_travel_ready','cancellation_policy','reviews_per_month'
                 ]]
but it doesn't apply the filter to the data frame. How can I change the code to do that?
Thank you
The filter (column selection) is actually applied to every DataFrame; you just throw the result away by rebinding the name data.
You need to store the results somewhere, a list for example.
cols = ['host_since','host_response_time', ...]
filtered = [df[cols] for df in df_list_2019]
As soon as you write var = new_value, you do not change the original object; you just make the variable refer to a new object.
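A minimal illustration of the rebinding pitfall, using tiny made-up frames in place of the listing data:

```python
import pandas as pd

df_a = pd.DataFrame({"x": [1], "y": [2], "z": [3]})
df_b = pd.DataFrame({"x": [4], "y": [5], "z": [6]})
df_list = [df_a, df_b]
cols = ["x", "y"]

# Rebinding the loop variable has no effect outside the loop:
for data in df_list:
    data = data[cols]  # `data` now names a new object; df_a/df_b untouched
print(df_a.columns.tolist())  # ['x', 'y', 'z']

# Store the results instead:
filtered = [df[cols] for df in df_list]
print(filtered[0].columns.tolist())  # ['x', 'y']
```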
If you want to change the dataframes from df_list_2019, you have to use an inplace=True method. Here, you could use drop:
keep = set(['host_since','host_response_time','host_response_rate',
'host_acceptance_rate','host_is_superhost','host_total_listings_count',
'host_has_profile_pic','host_identity_verified',
'neighbourhood','neighbourhood_cleansed','zipcode','latitude','longitude','property_type','room_type',
'accommodates','bathrooms','bedrooms','beds','amenities','price','weekly_price',
'monthly_price','cleaning_fee','guests_included','extra_people','minimum_nights','maximum_nights',
'minimum_nights_avg_ntm','has_availability','availability_30','availability_60','availability_90',
'availability_365','number_of_reviews','number_of_reviews_ltm','review_scores_rating',
'review_scores_checkin','review_scores_communication','review_scores_location', 'review_scores_value',
'instant_bookable','is_business_travel_ready','cancellation_policy','reviews_per_month'
])
for data in df_list_2019:
    data.drop(columns=[col for col in data.columns if col not in keep], inplace=True)
But beware: pandas experts recommend preferring the df = df... idiom over df.method(..., inplace=True), because it allows chaining operations. So first ask yourself whether timgeb's answer can be used; in any case, this one should work for your requirements.

Drop Columns in Pandas Dataframe: Inconsistency in Output

Problem: while dropping the column labelled 'Happiness_Score' below, it also gets dropped in the parent DataFrame. This is not supposed to happen; I would like clarification on this.
A = df_new
A.drop('Happiness_Score', axis = 1, inplace = True)
This is the output: as you can see, the column gets dropped in df_new too; doesn't inplace=True mean that it gets dropped only in the A DataFrame?
NOTE:
I'm able to work around this by changing the code; now the output is as expected.
B=df_new.drop('Happiness_Score', axis = 1)
Actually, when you do A = df_new, you are not creating a copy of the DataFrame, just another reference to the same object. To execute this correctly, you should use A = df_new.copy()
When you select a subset or index, as in A = df_new[condition], pandas creates a copy of a slice of the DataFrame, which is why your workaround works too.
A = df_new creates a new reference to your original df_new, not a new copy. You are binding A to the same object df_new refers to. And what happens when you modify through one reference? The change is reflected in the original object. I'll illustrate this with an example.
orgList = [1,2,3,4,5]
bkpList = orgList
print(bkpList is orgList) #OUTPUT: True
This is because both variables point to the same list. Modify either one, and the change is reflected in the other. The same thing happens in your DataFrame case.
Solution: Keep a separate copy of your dataframe.
The variable A is a reference to df_new. Try creating A by doing a complete slice of df_new or df_new.copy().
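Putting both points together in one runnable sketch (column values here are invented for illustration):

```python
import pandas as pd

df_new = pd.DataFrame({"Country": ["CH", "NO"], "Happiness_Score": [7.5, 7.4]})

A = df_new          # same object, two names
print(A is df_new)  # True

B = df_new.copy()   # independent copy
B.drop("Happiness_Score", axis=1, inplace=True)
print(B.columns.tolist())       # ['Country']
print(df_new.columns.tolist())  # ['Country', 'Happiness_Score'] -- intact
```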

How do you effectively use pd.DataFrame.apply on rows with duplicate values?

The function that I'm applying is a little expensive, as such I want it to only calculate the value once for unique values.
The only solution I've been able to come up with has been as follows:
This step is needed because apply doesn't work on arrays, so I have to convert the unique values into a Series.
new_vals = pd.Series(data['column'].unique()).apply(function)
This one is needed because .merge has to be used on DataFrames.
new_dataframe = pd.DataFrame( index = data['column'].unique(), data = new_vals.values)
Finally Merging The results
yet_another = pd.merge(data, new_dataframe, right_index=True, left_on='column')
data['calculated_column'] = yet_another[0]
So basically I had to convert my values to a Series, apply the function, convert to a DataFrame, merge the results, and use that column to create my new column.
I'm wondering if there is some one-line solution that isn't as messy. Something pythonic that doesn't involve re-casting object types multiple times. I've tried grouping by but I just can't figure out how to do it.
My best guess would have been to do something along these lines
data[calculated_column] = dataframe.groupby(column).index.apply(function)
but that isn't right either.
This is an operation that I do often enough to want to learn a better way to do, but not often enough that I can easily find the last time I used it, so I end up re-figuring a bunch of things again and again.
If there is no good solution I guess I could just add this function to my library of common tools that I hedonistically > from me_tools import *
def apply_unique(data, column, function):
    new_vals = pd.Series(data[column].unique()).apply(function)
    new_dataframe = pd.DataFrame(data=new_vals.values, index=data[column].unique())
    result = pd.merge(data, new_dataframe, right_index=True, left_on=column)
    return result[0]
I would do something like this:
def apply_unique(df, orig_col, new_col, func):
    return df.merge(df[[orig_col]]
                      .drop_duplicates()
                      .assign(**{new_col: lambda x: x[orig_col].apply(func)}),
                    how='inner', on=orig_col)
This will return the same DataFrame as performing:
df[new_col] = df[orig_col].apply(func)
but will be much more performant when there are many duplicates.
How it works:
We join the original DataFrame (calling) to another DataFrame (passed) that contains two columns; the original column and the new column transformed from the original column.
The new column in the passed DataFrame is assigned using .assign and a lambda function, making it possible to apply the function to the DataFrame that has already had .drop_duplicates() performed on it.
A dict is used here for convenience only, as it allows a column name to be passed in as a str.
Edit:
As an aside: best to drop new_col if it already exists, otherwise the merge will append suffixes to each new_col
if new_col in df:
    df = df.drop(new_col, axis='columns')

change several frames in a function in python

I would like to solve the problem below.
I have the following code. I need to pass in several DataFrames and apply the change to all of them at once.
def reverse_df(*df):
    for x in df:
        x = x.loc[::-1].reset_index(level=0, drop=True)
    return
reverse_df(df1,df2,df3,df4,df5)
I am able to make changes to a DataFrame inside a function only when I use inplace=True, as below:
def remove_na(*df):
    for x in df:
        x.dropna(axis=0, how='all', inplace=True)
    return
remove_na(df1,df2,df3,df4,df5)
but the below doesn't work:
def remove_na(*df):
    for x in df:
        x = x.dropna(axis=0, how='all')
    return
What am I doing wrong?
Short answer: x = x.dropna(axis=0, how='all') inside a function creates a local variable called x, so the reference to the original dataframe is lost, and any changes you make are not applied.
To solve the particular case of reversing the dataframe you can do:
def reverse(df):
    df.reset_index(drop=False, inplace=True)
    df.sort_index(ascending=False, inplace=True)
    df.set_index('index', drop=True, inplace=True)
However, since inplace operations are not really inplace, you're probably better off returning a modified dataframe.
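A sketch of that return-based alternative: the function builds new reversed frames and the caller rebinds its own names, avoiding inplace entirely (reverse_dfs and the sample frames are illustrative, not from the question):

```python
import pandas as pd

def reverse_dfs(*dfs):
    # Return new reversed frames; the caller rebinds its own names.
    return [df.loc[::-1].reset_index(drop=True) for df in dfs]

df1 = pd.DataFrame({"a": [1, 2, 3]})
df2 = pd.DataFrame({"a": [4, 5, 6]})
df1, df2 = reverse_dfs(df1, df2)
print(df1["a"].tolist())  # [3, 2, 1]
print(df2["a"].tolist())  # [6, 5, 4]
```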