Plot multiple rows of dataframe in pandas for specific columns - python

df
SKU  Comp  Brand  Jan_Sales  Feb_Sales  Mar_sales  Apr_sales  ...  Dec_sales
A    AC    BA     122        100        50         200        ...  300
B    BC    BB     100        50         80         90         ...  250
C    CC    BC     40         30         100        10         ...  11
and so on
Now I want a graph that plots Jan sales, Feb sales, and so on through Dec as one line for SKU A, similarly one line on the same graph for SKU B, and the same for SKU C.
I read a few answers saying that I need to transpose my data, something like below:
df.T.plot()
However, my first column is SKU, and I want to plot based on that; the rest of the columns are numeric. So I want the SKU name shown as the label for each line, and the plotting should be row-wise.
EDIT (added after receiving some answers, as I am facing this issue in a few other datasets):
let's say I don't want the columns Comp, Brand, etc. What should I do then?

Use DataFrame.set_index to convert SKU to the index and then transpose:
df.set_index('SKU').T.plot()
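If you also want to exclude the non-numeric columns such as Comp and Brand (per the edit), here is a minimal sketch assuming those column names; the sample values are illustrative:

import pandas as pd
import matplotlib.pyplot as plt

# sample data mirroring the question
df = pd.DataFrame({'SKU': ['A', 'B', 'C'],
                   'Comp': ['AC', 'BC', 'CC'],
                   'Brand': ['BA', 'BB', 'BC'],
                   'Jan_Sales': [122, 100, 40],
                   'Feb_Sales': [100, 50, 30],
                   'Mar_sales': [50, 80, 100]})

# set SKU as the index, drop the unwanted columns, then transpose so each
# month becomes an x-axis tick and each SKU becomes one labelled line
df.set_index('SKU').drop(columns=['Comp', 'Brand']).T.plot()
plt.show()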

Use set_index then transpose:
df.set_index("SKU").T.plot()
Output: a line chart with months on the x-axis and one labelled line per SKU.

Related

Combine Duplicate Rows in a Column in PySpark Dataframe

I have duplicate rows in a PySpark data frame and I want to combine and sum all of them into one row per column based on duplicate entries in one column.
Current Table
Deal_ID  Title   Customer  In_Progress  Deal_Total
30       Deal 1  Client A  350          900
30       Deal 1  Client A  360          850
50       Deal 2  Client B  30           50
30       Deal 1  Client A  125          200
30       Deal 1  Client A  90           100
10       Deal 3  Client C  32           121
Attempted PySpark Code
F.when(F.count(F.col('Deal_ID')) > 1, F.sum(F.col('In_Progress')) && F.sum(F.col('Deal_Total'))))
.otherwise(),
Expected Table
Deal_ID  Title   Customer  In_Progress  Deal_Total
30       Deal 1  Client A  925          2050
50       Deal 2  Client B  30           50
10       Deal 3  Client C  32           121
I think you need to group by the columns with the duplicated rows and then aggregate the amounts. I think this solves your problem:
df = df.groupBy(['Deal_ID', 'Title', 'Customer']).agg({'In_Progress': 'sum', 'Deal_Total': 'sum'})
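For completeness, a runnable sketch of that approach with the question's sample data (the SparkSession setup is boilerplate, not from the original answer):

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

data = [(30, 'Deal 1', 'Client A', 350, 900),
        (30, 'Deal 1', 'Client A', 360, 850),
        (50, 'Deal 2', 'Client B', 30, 50),
        (30, 'Deal 1', 'Client A', 125, 200),
        (30, 'Deal 1', 'Client A', 90, 100),
        (10, 'Deal 3', 'Client C', 32, 121)]
df = spark.createDataFrame(data, ['Deal_ID', 'Title', 'Customer', 'In_Progress', 'Deal_Total'])

# alias the sums so the output keeps the original column names
result = (df.groupBy('Deal_ID', 'Title', 'Customer')
            .agg(F.sum('In_Progress').alias('In_Progress'),
                 F.sum('Deal_Total').alias('Deal_Total')))
result.show()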
You have a SQL tag, so here is how it would work there:
select
deal_id,
title,
customer,
sum(in_progress) as in_progress,
sum(deal_total) as deal_total
from <table_name>
group by 1,2,3
Otherwise, you can use the same group-by approach in pandas and apply it to your dataframe:
you have to pass in the columns that you need to aggregate by as a list,
then you need to specify the aggregation type and the column you want to add up:
df = df.groupby(['deal_id', 'title', 'customer']).agg({'in_progress': 'sum', 'deal_total': 'sum'})
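A minimal runnable pandas version of the same aggregation (note that pandas spells the method groupby, and as_index=False keeps the grouping keys as regular columns):

import pandas as pd

df = pd.DataFrame({'deal_id': [30, 30, 50, 30, 30, 10],
                   'title': ['Deal 1', 'Deal 1', 'Deal 2', 'Deal 1', 'Deal 1', 'Deal 3'],
                   'customer': ['Client A', 'Client A', 'Client B', 'Client A', 'Client A', 'Client C'],
                   'in_progress': [350, 360, 30, 125, 90, 32],
                   'deal_total': [900, 850, 50, 200, 100, 121]})

out = df.groupby(['deal_id', 'title', 'customer'], as_index=False).agg(
    {'in_progress': 'sum', 'deal_total': 'sum'})
print(out)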

Iterate over certain columns with unique values and generate plots python

New to pandas and much help would be appreciated. I'm currently analyzing some Airbnb data and have over 50 different columns. Some of these columns have tens of thousands of unique values while some have very few unique values (categorical).
How do I loop over the columns that have less than 10 unique values to generate plots for them?
Count of unique values in each column:
id 38185
last_scraped 3
name 36774
description 34061
neighborhood_overview 18479
picture_url 37010
host_since 4316
host_location 1740
host_about 14178
host_response_time 4
host_response_rate 78
host_acceptance_rate 101
host_is_superhost 2
host_neighbourhood 486
host_total_listings_count 92
host_verifications 525
host_has_profile_pic 2
host_identity_verified 2
neighbourhood_cleansed 222
neighbourhood_group_cleansed 5
property_type 80
room_type 4
The above is stored through unique_vals = df.nunique()
Apologies if this is a repeat question; the closest answer I could find was "Iterate through columns to generate separate plots in python", but it pertained to the entire data set.
Thanks!
You can filter the columns using df.columns[unique_vals < 10].
You can also pass the df.nunique() call directly if you wish:
unique_columns = df.columns[df.nunique() < 10]
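To actually loop over those columns and generate plots, here is a sketch assuming bar charts of value counts are what you want for the categorical columns:

import matplotlib.pyplot as plt

unique_vals = df.nunique()
for col in df.columns[unique_vals < 10]:
    # one bar chart of category frequencies per low-cardinality column
    df[col].value_counts().plot(kind='bar', title=col)
    plt.show()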

How to drop rows in pandas dataframe, when there is similar values?

I have a python pandas dataframe of stock data, and I'm trying to filter some of those tickers.
Some companies have two or more tickers (different share classes, where one share is preferred and the other is not).
I want to drop the rows for those additional tickers and keep only the one with the highest volume. The dataframe also has the company name, so maybe there is a way to use it in a condition and drop rows by comparing volumes within the same company? How can I do this?
Use groupby and idxmax:
Suppose this dataframe:
>>> df
  ticker  volume
0  CEBR3     123
1  CEBR5     456
2  CEBR6     789   # <- keep for group CEBR
3  GOAU3      23   # <- keep for group GOAU
4  GOAU4      12
5  CMIN3     135   # <- keep for group CMIN
>>> df.loc[df.groupby(df['ticker'].str.extract(r'^(.*)\d', expand=False),
...                   sort=False)['volume'].idxmax().tolist()]
  ticker  volume
2  CEBR6     789
3  GOAU3      23
5  CMIN3     135
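Since you mention the dataframe also has the company name, a simpler variant groups on it directly; a sketch assuming a column named company (a hypothetical name):

# keep, for each company, the row whose volume is highest
df_filtered = df.loc[df.groupby('company')['volume'].idxmax()]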

dataframe count frequency of a string in a column

I have a CSV that contains 1000 rows, and my Python code returns a new dataframe with three columns:
noOfPeople, Description, and Location
My final df will be like this one:
id  companyName  noOfPeople  Description  Location
1   comp1        75          tech         USA
2   comp2        22          fashion      USA
3   comp3        70          tech         USA
I want to write code that stops once there are 200 rows where noOfPeople is greater than or equal to 70, and leaves all the remaining rows empty. So the code counts rows where noOfPeople >= 70; once 200 such rows have been found, it stops.
Can someone help?
df[df['noOfPeople'] >= 70].iloc[:200]
Use head or iloc to select the first 200 values and then get the max:
print(df['noOfPeople'].iloc[:200].max())
Then add whatever filter you need.
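If you literally need to stop at the 200th qualifying row and drop everything after it, one possible reading is the sketch below (the cutoff logic is an assumption, and it assumes the default RangeIndex):

# index labels of the rows meeting the condition, in order
qualifying = df.index[df['noOfPeople'] >= 70]
if len(qualifying) >= 200:
    cutoff = qualifying[199]      # label of the 200th qualifying row
    result = df.loc[:cutoff]      # keep rows up to and including the cutoff
else:
    result = df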

Trying to code a Python equivalent of SUMIFs feature in Excel

I am trying to rewrite a .xlsx file from scratch using Python. The excel sheet has 99 rows and 11 columns. I have already generated the 99 rows x 8 columns, and I am currently working on generating the 9th column.
This 9th column is calculated based on a SUM-IFS formula in excel. It takes into account columns 2, 4 and 7.
Col. 2 has numerical int values.
Col. 4 has three letter airport code values like NYC for New York City
Col. 7 also has three letter airport code values like DEL for Delhi.
The SUMIFS formula for the column 9 cells is:
SUMIFS(B:B, D:D, D2, G:G, G2)
Hence it sums the numerical values in column 2 for the corresponding pair of cities in col. 4 and col. 7. If a pair of cities in col. 4 and col. 7 occurs only once, there is nothing to sum, and the cell in col. 9 equals the value of the cell in col. 2.
However, if there are multiple occurrences of the pair of cities in col. 4 and col. 7, the corresponding values in col. 2 are summed, and that sum becomes the value of the cell in col. 9.
Example:
In this example, col. 2 is Sales, col. 4 is Origin City, col. 7 is Destination City, and col. 9 is the Result that uses =SUMIFS(B:B,C:C,C2,D:D,D2) (the example sheet lays these out in columns B, C, and D, hence the different letters).
I am trying to compute column 9 using Python on the large data set that I have. So far, I have built a list of dictionaries, where each key is origin_city-destination_city and the value is the integer from col. 2. The list of dicts has 99 entries, one per row of the excel file, so each row is represented as a dict. Printing the dictionaries gives something like this:
{'YTO-YVR': 570}
{'YVR-YTO': 542}
{'YTO-YYC': 420}
{'YYT-YTO': 32}
{'YWG-YYC': 115}
I have been contemplating whether it is possible to loop over the list of dicts and create a SUMIFS version of it, resulting in 99 dicts in the list, each holding the summed value. After this I have to write all these values to the column in the excel file.
I hope someone here can help! Thank you very much in advance :)
You can use pandas' groupby with transform:
import pandas as pd

df = pd.DataFrame({'Sales': [100, 110, 200, 300, 150, 200, 100],
                   'Origin': ['YYZ', 'YEA', 'CDG', 'YYZ', 'YEA', 'YVR', 'YEA'],
                   'Dest': ['DEL', 'NYC', 'YUL', 'DEL', 'YTO', 'HKG', 'NYC']})
df['Result'] = df.groupby(['Origin', 'Dest']).Sales.transform('sum')
Result:
   Sales Origin Dest  Result
0    100    YYZ  DEL     400
1    110    YEA  NYC     210
2    200    CDG  YUL     200
3    300    YYZ  DEL     400
4    150    YEA  YTO     150
5    200    YVR  HKG     200
6    100    YEA  NYC     210
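To finish the asker's workflow and write the computed column back to a spreadsheet, a sketch (the file name is an assumption, and to_excel needs an engine such as openpyxl installed):

# write the dataframe, including the new Result column, to a workbook
df.to_excel('output.xlsx', index=False)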
