stack/unstack/pivot dataframe in python/pandas

I have a dataframe which looks like this:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 198300 entries, 0 to 198299
Data columns (total 3 columns):
var 198300 non-null values
period 198300 non-null values
value 141492 non-null values
dtypes: float64(1), object(2)
I'd like to change it from having three columns (var, period, value) to having all values of the period variable as columns and the values in var as rows. I try using:
X.pivot(index='var', columns='period', values='value')
But I get this error:
raise ReshapeError('Index contains duplicate entries, '
pandas.core.reshape.ReshapeError: Index contains duplicate entries, cannot reshape
But I've checked in Excel, and there are no duplicate entries... Any help out there? Thanks

To give this question an answer: usually when pandas objects that there are duplicate entries, it's right. To check this I often use
someseries.value_counts().head()
to see if one found its way in there.
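If the check does turn up duplicates, one way forward (a sketch using the column names from the question) is to inspect the duplicated (var, period) pairs directly, or to fall back on pivot_table, which aggregates duplicates instead of raising:
# show the (var, period) combinations that occur more than once
dupes = X[X.duplicated(subset=['var', 'period'], keep=False)]
print(dupes.sort_values(['var', 'period']).head())
# pivot_table tolerates duplicate index/column pairs by aggregating them (mean by default)
wide = X.pivot_table(index='var', columns='period', values='value', aggfunc='mean')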

Related

Most efficient method of retrieving the last row of any size dataframe

I have a continually growing dataframe and periodically I want to retrieve the last row.
# dbdf.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 6652 entries, 2022-10-23 17:15:00-04:00 to 2022-10-28 08:06:00-04:00
Freq: T
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 open 6592 non-null float64
1 high 6592 non-null float64
2 low 6592 non-null float64
3 close 6592 non-null float64
dtypes: float64(4)
memory usage: 259.8 KB
The dataframe doesn't occupy a very large memory footprint, but even so, I'd like to understand the most efficient method of retrieving the last row such that I can then call .to_dict() on it.
I can certainly do something naive like:
bars = dbdf.to_dict(orient="records")
print(bars[-1])
And in this particular case it would likely be just fine given the small size of the dataframe, but if the dataframe were orders of magnitude larger in memory footprint and rows, is there a better way to achieve the same result that could also be considered best common practice regardless of the dataframe's footprint?
First select last row by DataFrame.iloc and then convert to dictionary by Series.to_dict:
d = df.iloc[-1].to_dict()
There are 2 ways:
Use the tail function
The tail function shows the last rows of the DataFrame; passing the number 1 will show just the last row of df.
df.tail(1)
Use the iloc function
iloc is an index-based selection technique, which means we pass an integer position to the method to select a specific row/column.
df.iloc[-1]
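For completeness, a small sketch (assuming the dbdf frame from the question) of how the two differ: iloc[-1] returns a Series, while tail(1) returns a one-row DataFrame, and both avoid converting the whole frame the way to_dict(orient="records") does:
last_as_series = dbdf.iloc[-1]                     # Series indexed by column name
print(last_as_series.to_dict())                    # {'open': ..., 'high': ..., ...}
last_as_frame = dbdf.tail(1)                       # one-row DataFrame, keeps per-column dtypes
print(last_as_frame.to_dict(orient="records")[0])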

How to show all dataframe datatypes when there are too many columns in pandas

I have a dataframe consisting of 115 columns and I need to show the datatypes and null values using df.info(), but when I run it I only get the following:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 25979 entries, 0 to 25978
Columns: 115 entries, id to BSA
dtypes: float64(96), int64(9), object(10)
memory usage: 22.8+ MB
Is there a way to show the details of all columns?
If df is a pandas DataFrame:
dataTypeSeries = df.dtypes
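If the goal is just to see every column's dtype without the display truncating, you can temporarily lift the row limit while printing that Series (a sketch, assuming df is your frame):
import pandas as pd
# temporarily lift the row-display limit so all 115 entries of the dtypes Series are printed
with pd.option_context("display.max_rows", None):
    print(df.dtypes)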
As per Pandas documentation, "display.large_repr lets you select whether to display dataframes that exceed max_columns or max_rows as a truncated frame, or as a summary."
So, you could try this:
>>> pd.set_option("large_repr", "info")
>>> df
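Alternatively, df.info() itself can be forced to list every column: verbose=True overrides the column cutoff, and display.max_info_columns controls when info() switches to the summary view. A sketch (show_counts is the newer keyword name; older pandas versions used null_counts):
df.info(verbose=True, show_counts=True)          # per-column listing with null counts
pd.set_option("display.max_info_columns", 200)   # or raise the threshold and call info() normally
df.info()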

create new column in data frame based on another data frame [duplicate]

I have 2 dataframes:
restaurant_ids_dataframe
Data columns (total 13 columns):
business_id 4503 non-null values
categories 4503 non-null values
city 4503 non-null values
full_address 4503 non-null values
latitude 4503 non-null values
longitude 4503 non-null values
name 4503 non-null values
neighborhoods 4503 non-null values
open 4503 non-null values
review_count 4503 non-null values
stars 4503 non-null values
state 4503 non-null values
type 4503 non-null values
dtypes: bool(1), float64(3), int64(1), object(8)
and
restaurant_review_frame
Int64Index: 158430 entries, 0 to 229905
Data columns (total 8 columns):
business_id 158430 non-null values
date 158430 non-null values
review_id 158430 non-null values
stars 158430 non-null values
text 158430 non-null values
type 158430 non-null values
user_id 158430 non-null values
votes 158430 non-null values
dtypes: int64(1), object(7)
I would like to join these two DataFrames to make them into a single dataframe using the DataFrame.join() command in pandas.
I have tried the following line of code:
#the following line of code creates a left join of restaurant_ids_frame and restaurant_review_frame on the column 'business_id'
restaurant_review_frame.join(other=restaurant_ids_dataframe,on='business_id',how='left')
But when I try this I get the following error:
Exception: columns overlap: Index([business_id, stars, type], dtype=object)
I am very new to pandas and have no clue what I am doing wrong as far as executing the join statement is concerned.
Any help would be much appreciated.
You can use merge to combine two dataframes into one:
import pandas as pd
pd.merge(restaurant_ids_dataframe, restaurant_review_frame, on='business_id', how='outer')
where on specifies the field name that exists in both dataframes to join on, and how
defines whether it is an inner/outer/left/right join, with outer using the 'union of keys from both frames (SQL: full outer join)'. Since you have a 'stars' column in both dataframes, this will by default create two columns, stars_x and stars_y, in the combined dataframe. As @DanAllan mentioned for the join method, you can modify the suffixes for merge by passing them as a kwarg; the default is suffixes=('_x', '_y'). If you wanted to do something like stars_restaurant_id and stars_restaurant_review, you can do:
pd.merge(restaurant_ids_dataframe, restaurant_review_frame, on='business_id', how='outer', suffixes=('_restaurant_id', '_restaurant_review'))
The parameters are explained in detail in the pandas documentation for merge.
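For illustration, here is a minimal self-contained sketch of the suffixes behaviour on toy frames (hypothetical data, not the Yelp frames from the question); only the overlapping non-key columns get the suffixes:
import pandas as pd
ids = pd.DataFrame({'business_id': [1, 2], 'stars': [4.0, 3.5], 'type': ['business', 'business']})
reviews = pd.DataFrame({'business_id': [1, 1, 2], 'stars': [5, 3, 4], 'type': ['review', 'review', 'review']})
merged = pd.merge(ids, reviews, on='business_id', how='outer',
                  suffixes=('_restaurant_id', '_restaurant_review'))
print(merged.columns.tolist())
# ['business_id', 'stars_restaurant_id', 'type_restaurant_id',
#  'stars_restaurant_review', 'type_restaurant_review']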
Joining fails if the DataFrames have some column names in common. The simplest way around it is to include an lsuffix or rsuffix keyword like so:
restaurant_review_frame.join(restaurant_ids_dataframe, on='business_id', how='left', lsuffix="_review")
This way, the columns have distinct names. The documentation addresses this very problem.
Or, you could get around this by simply deleting the offending columns before you join. If, for example, the stars in restaurant_ids_dataframe are redundant to the stars in restaurant_review_frame, you could del restaurant_ids_dataframe['stars'].
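One detail worth noting: DataFrame.join matches the on column of the calling frame against the index of the other frame, so in practice you would usually set business_id as the index of restaurant_ids_dataframe first. A toy sketch (hypothetical data):
import pandas as pd
reviews = pd.DataFrame({'business_id': [1, 1, 2], 'stars': [5, 3, 4]})
ids = pd.DataFrame({'business_id': [1, 2], 'stars': [4.0, 3.5]}).set_index('business_id')
joined = reviews.join(ids, on='business_id', how='left', lsuffix='_review')
print(joined)
#    business_id  stars_review  stars
# 0            1             5    4.0
# 1            1             3    4.0
# 2            2             4    3.5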
In case anyone needs to try and merge two dataframes together on the index (instead of another column), this also works!
T1 and T2 are dataframes that have the same indices
import pandas as pd
T1 = pd.merge(T1, T2, left_index=True, right_index=True, how='outer')
P.S. I had to use merge because append would fill NaNs in unnecessarily.
In case you want to merge two DataFrames horizontally, use this code:
df3 = pd.concat([df1, df2], axis=1, ignore_index=True, sort=False)

error in .groupby and .agg when using a tuple column

I'm having trouble using .groupby and .agg with a tuple column.
Here is the .info():
account_aggregates.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9713 entries, 0 to 9712
Data columns (total 14 columns):
NATIVEACCOUNTKEY 9713 non-null int64
(POLL, sum) 9713 non-null int64
num_cancellations 8 non-null float64
I'm trying to do something like this:
session_deciles2_grouped = account_aggregates.groupby(('POLL','sum'))
and this:
session_deciles22=session_deciles2_grouped[('POLL','sum')].agg(['mean','count'])
but the column isn't being recognized - I keep getting a KeyError.
account_aggregates.groupby([('POLL','sum'),]) would be required here.
The reason account_aggregates.groupby(('POLL','sum')) won't work is that ('POLL','sum') is a collection, so groupby reads it as two separate keys: a column called POLL and a column called sum, and tries to use both columns for the groupby operation.
When we put ('POLL','sum') in a list, it means group by a single column named ('POLL','sum').
Therefore, account_aggregates.groupby([('POLL','sum'),]) or account_aggregates.groupby((('POLL','sum'),)) will work.
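A minimal sketch of the difference, using toy data (tuple column names like ('POLL', 'sum') usually come from an earlier .agg() with a list of functions, which is presumably how the question's frame got them; exact groupby/selection semantics for tuple keys have shifted slightly across pandas versions):
import pandas as pd
raw = pd.DataFrame({'NATIVEACCOUNTKEY': [1, 1, 2, 2], 'POLL': [5, 7, 10, 10]})
account_aggregates = raw.groupby('NATIVEACCOUNTKEY').agg({'POLL': ['sum']}).reset_index()
print(account_aggregates.columns.tolist())   # [('NATIVEACCOUNTKEY', ''), ('POLL', 'sum')]
# wrapping the tuple in a list makes groupby treat it as a single key
grouped = account_aggregates.groupby([('POLL', 'sum')])
print(grouped[('POLL', 'sum')].agg(['mean', 'count']))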

Using boolean masks in Pandas

This is probably a trivial query but I can't work it out.
Essentially, I want to be able to filter out noisy tweets from the dataframe below:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 140381 entries, 0 to 140380
Data columns:
text 140381 non-null values
created_at 140381 non-null values
id 140381 non-null values
from_user 140381 non-null values
geo 5493 non-null values
dtypes: float64(1), object(4)
I can create a dataframe based on unwanted keywords thus:
junk = df[df.text.str.contains("Swans")]
But what's the best way to use this to see what's left?
df[~df.text.str.contains("Swans")]
You can also use the following two options (though the explicit ~ above is generally preferred):
option 1:
df[-df.text.str.contains("Swans")]
option 2:
import numpy as np
df[np.invert(df.text.str.contains("Swans"))]
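For reference, a small self-contained sketch (hypothetical tweet data) showing the mask, its negation, and filtering on several keywords at once; str.contains treats the pattern as a regex by default:
import pandas as pd
df = pd.DataFrame({'text': ['Swans win again', 'lovely weather', 'Go Swans!', 'pandas question'],
                   'from_user': ['a', 'b', 'c', 'd']})
noisy = df.text.str.contains('Swans')       # boolean mask of unwanted rows
junk = df[noisy]                            # the noisy tweets
clean = df[~noisy]                          # everything else
# several unwanted keywords at once, case-insensitive, with missing texts treated as clean
clean_multi = df[~df.text.str.contains('Swans|weather', case=False, na=False)]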
