I have two dataframes. One comes from an Excel file and the other will be created from user inputs. Based on the user inputs and conditions on columns in the 1st dataframe, new columns should be added to the 1st dataframe with calculations. I have written the code, which was successful for the test data, but the results are not coming through to the dataframe. Any help?
1st Dataframe:
Data columns (total 9 columns):
# Column Non-Null Count Dtype
0 DDO Code 8621 non-null object
1 ULB Name 8621 non-null object
2 Dist. 8621 non-null object
3 Div. 8621 non-null object
4 Kgid No 8621 non-null int64
5 Name Of The Official 8621 non-null object
6 PRAN Number 8621 non-null float64
7 Join Date 8621 non-null datetime64[ns]
8 Present Basic 8621 non-null int64
dtypes: datetime64[ns](1), float64(1), int64(2), object(5)
2nd Dataframe will be created by user inputs:
[screenshot of the user-input dataframe; not reproduced here]
From the above data, I need to append 'n' columns based on the user inputs, using loops and conditions.
Here is the code:
for a, b in zip(month_data.month_list, month_data.month_range):
    for i, x in zip(contr_calc_new["Join Date"], contr_calc_new['Present Basic']):
        if i.date().strftime('%Y-%m') == b.date().strftime('%Y-%m'):
            contr_calc_new[a] = 0
        else:
            contr_calc_new[a] = int(((x + (x * rate) // 100) * 14 // 100))
This code works for the test data, but the results are not being appended to the 1st dataframe by the calculation based on the 2nd dataframe.
The result I need is as follows: if the [Join Date] column equals the year and month entered by the user, the new column must hold zero; otherwise it should hold the calculated value. Thanks in advance for the help.
Finally I found the proper code. Thank you for your replies.
for a, b in zip(month_data.month_list, month_data.month_range):
    contr_calc_new[a] = np.where(
        contr_calc_new['Join Date'].dt.strftime('%Y-%m') == b.date().strftime('%Y-%m'),
        0,
        ((contr_calc_new['Present Basic']
          + (contr_calc_new['Present Basic'] * da_rate) // 100) * 14 // 100).astype(int),
    )
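For anyone hitting the same problem, the likely reason the loop version failed: the inner loop assigned a single scalar to the whole column contr_calc_new[a] on every row iteration, so only the last row's result survived, whereas np.where evaluates the condition for every row at once. A minimal, self-contained sketch of the working pattern with made-up data (the column names, the da_rate value, and the month value are illustrative only):

import numpy as np
import pandas as pd

# Toy stand-ins for the real dataframes; values are made up.
contr_calc_new = pd.DataFrame({
    'Join Date': pd.to_datetime(['2021-04-15', '2021-05-01']),
    'Present Basic': [25000, 30000],
})
da_rate = 17  # hypothetical dearness-allowance rate in percent

month = pd.Timestamp('2021-04-01')  # one entry of month_data.month_range
col = 'Apr-2021'                    # the matching entry of month_data.month_list

# 0 for officials who joined in that month, the contribution otherwise
contr_calc_new[col] = np.where(
    contr_calc_new['Join Date'].dt.strftime('%Y-%m') == month.strftime('%Y-%m'),
    0,
    ((contr_calc_new['Present Basic']
      + (contr_calc_new['Present Basic'] * da_rate) // 100) * 14 // 100).astype(int),
)
print(contr_calc_new)  # first row gets 0, second row gets 4914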
All this is asking me to do is write code that shows whether there are any missing values where it is not the customer's first order. I have provided the DataFrame. Should I use the column 'order_number' instead? Is my code wrong?
I named the DataFrame df_orders.
I thought my code would find the rows that have missing values and an order number greater than 1.
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 478967 entries, 0 to 478966
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 order_id 478967 non-null int64
1 user_id 478967 non-null int64
2 order_number 478967 non-null int64
3 order_dow 478967 non-null int64
4 order_hour_of_day 478967 non-null int64
5 days_since_prior_order 450148 non-null float64
dtypes: float64(1), int64(5)
memory usage: 21.9 MB
None
# Are there any missing values where it's not a customer's first order?
m_v_fo= df_orders[df_orders['days_since_prior_order'].isna() > 1]
print(m_v_fo.head())
Empty DataFrame
Columns: [order_id, user_id, order_number, order_dow, order_hour_of_day,
days_since_prior_order]
Index: []
When you say .isna() you are returning a Series of True or False values, so that comparison will never be > 1.
Instead, if you want the number of missing values, count them:
m_v_fo = df_orders['days_since_prior_order'].isna().sum()
If that doesn't solve the problem, then I'm not sure - try editing your question to add more detail and I can try again. :)
Update: I read your question again, and I think you're doing this out of order. First you need to filter down to the rows that are not first orders (order_number > 1) and then look for na:
m_v_fo = df_orders[df_orders['order_number'] > 1]['days_since_prior_order'].isna()
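Putting the two conditions together in one boolean mask answers the original question directly. A short sketch, assuming df_orders as described above:

# Rows that are not a first order yet have no recorded gap since the prior order
mask = (df_orders['order_number'] > 1) & df_orders['days_since_prior_order'].isna()
m_v_fo = df_orders[mask]

print(len(m_v_fo))    # how many such rows exist
print(m_v_fo.head())  # inspect a few of them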
I am trying to create a new pandas dataframe displayDF with 4 columns from the dataframe finalDF.
displayDF = finalDF[['False','True','RULE ID','RULE NAME']]
This command is failing with the error:
KeyError: "['False', 'True'] not in index"
However, I can see the columns "False" and "True" when I run finalDF.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 12 entries, 0 to 11
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 rule_rec_id 12 non-null object
1 False 12 non-null int64
2 True 12 non-null int64
3 RULE ID 12 non-null object
4 RULE NAME 12 non-null object
5 RULE DESCRIPTION 12 non-null object
dtypes: int64(2), object(4)
memory usage: 672.0+ bytes
Additional Background:
I created finalDF by merging two dataframes (pivot_stackedPandasDF and dfPandaDescriptions)
finalDF = pd.merge(pivot_stackedPandasDF, dfPandaDescriptions, how='left', left_on=['rule_rec_id'], right_on=['RULE ID'])
I created pivot_stackedPandasDF with this command.
pivot_stackedPandasDF = stackedPandasDF.pivot_table(index="rule_rec_id", columns="alert_value", values="count").reset_index()
I think the root cause may be in the way I ran the .pivot_table() command.
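A likely explanation (an assumption, since the dtype of alert_value isn't shown): if alert_value holds booleans, pivot_table produces columns labeled with the booleans False and True, which .info() prints exactly like the strings 'False' and 'True'. In that case, either select with the boolean labels or rename them to strings first:

# If the pivoted columns are the booleans False/True rather than strings,
# select them with boolean labels ...
displayDF = finalDF[[False, True, 'RULE ID', 'RULE NAME']]

# ... or rename them to strings, then select as originally intended
finalDF = finalDF.rename(columns={False: 'False', True: 'True'})
displayDF = finalDF[['False', 'True', 'RULE ID', 'RULE NAME']]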
I have 2 dataframes:
restaurant_ids_dataframe
Data columns (total 13 columns):
business_id 4503 non-null values
categories 4503 non-null values
city 4503 non-null values
full_address 4503 non-null values
latitude 4503 non-null values
longitude 4503 non-null values
name 4503 non-null values
neighborhoods 4503 non-null values
open 4503 non-null values
review_count 4503 non-null values
stars 4503 non-null values
state 4503 non-null values
type 4503 non-null values
dtypes: bool(1), float64(3), int64(1), object(8)
and
restaurant_review_frame
Int64Index: 158430 entries, 0 to 229905
Data columns (total 8 columns):
business_id 158430 non-null values
date 158430 non-null values
review_id 158430 non-null values
stars 158430 non-null values
text 158430 non-null values
type 158430 non-null values
user_id 158430 non-null values
votes 158430 non-null values
dtypes: int64(1), object(7)
I would like to join these two DataFrames to make them into a single dataframe using the DataFrame.join() command in pandas.
I have tried the following line of code:
# The following line of code creates a left join of restaurant_review_frame and restaurant_ids_dataframe on the column 'business_id'
restaurant_review_frame.join(other=restaurant_ids_dataframe,on='business_id',how='left')
But when I try this I get the following error:
Exception: columns overlap: Index([business_id, stars, type], dtype=object)
I am very new to pandas and have no clue what I am doing wrong as far as executing the join statement is concerned.
Any help would be much appreciated.
You can use merge to combine two dataframes into one:
import pandas as pd
pd.merge(restaurant_ids_dataframe, restaurant_review_frame, on='business_id', how='outer')
where on specifies the field name that exists in both dataframes to join on, and how
defines whether it's an inner/outer/left/right join, with outer using the 'union of keys from both frames (SQL: full outer join)'. Since you have a 'stars' column in both dataframes, this by default will create two columns, stars_x and stars_y, in the combined dataframe. As @DanAllan mentioned for the join method, you can modify the suffixes for merge by passing it as a kwarg. The default is suffixes=('_x', '_y'). If you wanted something like stars_restaurant_id and stars_restaurant_review, you can do:
pd.merge(restaurant_ids_dataframe, restaurant_review_frame, on='business_id', how='outer', suffixes=('_restaurant_id', '_restaurant_review'))
The parameters are explained in detail in this link.
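A minimal sketch with made-up frames to show the suffix behaviour (all column values are illustrative only):

import pandas as pd

ids = pd.DataFrame({'business_id': ['a', 'b'],
                    'stars': [4.5, 3.0],
                    'name': ['Cafe A', 'Cafe B']})
reviews = pd.DataFrame({'business_id': ['a', 'a', 'c'],
                        'stars': [5, 4, 2],
                        'text': ['great', 'good', 'meh']})

merged = pd.merge(ids, reviews, on='business_id', how='outer',
                  suffixes=('_restaurant_id', '_restaurant_review'))
print(merged.columns.tolist())
# ['business_id', 'stars_restaurant_id', 'name', 'stars_restaurant_review', 'text']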
Joining fails if the DataFrames have some column names in common. The simplest way around it is to include an lsuffix or rsuffix keyword. One detail to watch: DataFrame.join matches the caller's on column against the other frame's index, so restaurant_ids_dataframe should have business_id as its index first:
restaurant_review_frame.join(restaurant_ids_dataframe.set_index('business_id'), on='business_id', how='left', lsuffix="_review")
This way, the columns have distinct names. The documentation addresses this very problem.
Or, you could get around this by simply deleting the offending columns before you join. If, for example, the stars in restaurant_ids_dataframe are redundant to the stars in restaurant_review_frame, you could del restaurant_ids_dataframe['stars'].
In case anyone needs to try and merge two dataframes together on the index (instead of another column), this also works!
T1 and T2 are dataframes that have the same indices
import pandas as pd
T1 = pd.merge(T1, T2, left_index=True, right_index=True, how='outer')
P.S. I had to use merge because append would fill in NaNs unnecessarily.
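A quick sketch of the index merge with made-up frames sharing some index labels:

import pandas as pd

T1 = pd.DataFrame({'x': [1, 2]}, index=['a', 'b'])
T2 = pd.DataFrame({'y': [10, 30]}, index=['a', 'c'])

# Outer merge on the index keeps the union of index labels
merged = pd.merge(T1, T2, left_index=True, right_index=True, how='outer')
print(merged)
#      x     y
# a  1.0  10.0
# b  2.0   NaN
# c  NaN  30.0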
In case you want to merge two DataFrames horizontally, then use this code:
df3 = pd.concat([df1, df2],axis=1, ignore_index=True, sort=False)
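For example, with toy frames (note that ignore_index=True with axis=1 renumbers the columns 0..n-1 rather than keeping the original names; drop it if you want to keep them):

import pandas as pd

df1 = pd.DataFrame({'a': [1, 2]})
df2 = pd.DataFrame({'b': [3, 4]})

# Glue the frames side by side, aligning on the row index
df3 = pd.concat([df1, df2], axis=1, ignore_index=True, sort=False)
print(df3.columns.tolist())  # [0, 1] - names dropped because of ignore_index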
I want to preview a Pandas dataframe. I would use head(mymatrix) in R, but I do not know how to do this in Pandas Python.
When I type
df.head(10) I get...
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10 entries, 0 to 9
Data columns (total 14 columns):
#Book_Date 10 non-null values
Item_Qty 10 non-null values
Item_id 10 non-null values
Location_id 10 non-null values
MFG_Discount 10 non-null values
Sale_Revenue 10 non-null values
Sales_Flg 10 non-null values
Sell_Unit_Cost 5 non-null values
Store_Discount 10 non-null values
Transaction_Id 10 non-null values
Unit_Cost_Amt 10 non-null values
Unit_Received_Cost 5 non-null values
Unnamed: 0 10 non-null values
Weight 10 non-null values
Suppose you want to output the first and last 10 rows of the iris data set.
In R:
data(iris)
head(iris, 10)
tail(iris, 10)
In Python (scikit-learn required to load the iris data set):
import pandas as pd
from sklearn import datasets
iris = pd.DataFrame(datasets.load_iris().data)
iris.head(10)
iris.tail(10)
Now, as previously answered, if your data frame is too large for the display you use in the terminal, a summary is output. To visualize your data in a terminal, you could either expand the terminal or reduce the number of columns to display, as follows.
iris.iloc[:,1:2].head(10)
EDIT. Changed .ix to .iloc. From the pandas documentation,
Starting in 0.20.0, the .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers.
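An alternative to slicing columns is to change the pandas display settings; these options are part of the public pandas API:

import pandas as pd

# Show all columns instead of the summarized view
pd.set_option('display.max_columns', None)
# Widen the printed output so rows wrap less often
pd.set_option('display.width', 200)

# iris.head(10) will now print every column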
This is probably a trivial query but I can't work it out.
Essentially, I want to be able to filter out noisy tweets from a dataframe below
<class 'pandas.core.frame.DataFrame'>
Int64Index: 140381 entries, 0 to 140380
Data columns:
text 140381 non-null values
created_at 140381 non-null values
id 140381 non-null values
from_user 140381 non-null values
geo 5493 non-null values
dtypes: float64(1), object(4)
I can create a dataframe based on unwanted keywords thus:
junk = df[df.text.str.contains("Swans")]
But what's the best way to use this to see what's left?
df[~df.text.str.contains("Swans")]
You can also use the following two options:
option 1 (this relies on unary minus on a boolean Series, which older pandas allowed; recent versions raise a TypeError here, so prefer the ~ operator shown above):
df[-df.text.str.contains("Swans")]
option 2:
import numpy as np
df[np.invert(df.text.str.contains("Swans"))]
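A small sketch with made-up tweets showing the keep/drop split (na=False is a safety net in case the text column ever contains NaNs; it's not needed for the frame above, where text has no nulls):

import pandas as pd

df = pd.DataFrame({'text': ['Swans win again', 'nice weather today', 'Swans parade']})

is_junk = df.text.str.contains('Swans', na=False)
junk = df[is_junk]     # the noisy tweets
clean = df[~is_junk]   # what's left
print(clean)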