Hope the title is not misleading.
I load an Excel file into a pandas DataFrame as usual:
df = pd.read_excel('complete.xlsx')
and this is what's inside (it's usually already ordered; this is a really small sample):
df
Out[21]:
Country City First Name Last Name Ref
0 England London John Smith 34
1 England London Bill Owen 332
2 England Brighton Max Crowe 25
3 England Brighton Steve Grant 55
4 France Paris Roland Tomas 44
5 France Paris Anatole Donnet 534
6 France Lyon Paulin Botrel 234
7 Spain Madrid Oriol Abarquero 34
8 Spain Madrid Alberto Olloqui 534
9 Spain Barcelona Ander Moreno 254
10 Spain Barcelona Cesar Aranda 222
What I need to do is automate an export of the data, creating an SQLite database for every country (e.g. 'England.sqlite'), each containing a table for every city (e.g. London and Brighton), where every table holds the related personnel info.
The SQLite part is not a problem; I'm only trying to figure out how to "unpack" the dataframe in the most rapid and "pythonic" way.
Thanks
You can loop over the DataFrame.groupby object:
for country, subdf in df.groupby('Country'):
    print(country)
    print(subdf)
    # processing
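Building on that, a minimal end-to-end sketch of the export (assuming the standard-library sqlite3 module, pandas' to_sql, and databases simply named after each country):

import sqlite3
import pandas as pd

df = pd.read_excel('complete.xlsx')

# One database per country, one table per city inside it
for country, country_df in df.groupby('Country'):
    conn = sqlite3.connect(f'{country}.sqlite')   # e.g. England.sqlite
    for city, city_df in country_df.groupby('City'):
        # Keep only the personnel columns in each city table
        city_df.drop(columns=['Country', 'City']).to_sql(
            city, conn, if_exists='replace', index=False)
    conn.close()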
I have a data set as below. I need to group a subset of the rows and fill that group's missing values using the mode. Specifically, I need to fill the missing values for Tom from the UK: group the Tom/UK rows, and replace each NaN in that group with the group's most frequent value.
Here is the dataset:
Name   location  Value
Tom    USA       20
Tom    UK        NaN
Tom    USA       NaN
Tom    UK        20
Jack   India     NaN
Nihal  Africa    30
Tom    UK        NaN
Tom    UK        20
Tom    UK        30
Tom    UK        20
Tom    UK        30
Sam    UK        30
Sam    UK        30
Try:
df = (
    df.set_index(['Name', 'location'])
      .fillna(
          # mode of 'Value' within the Tom/UK group only
          df[df.Name.eq('Tom') & df.location.eq('UK')]
            .groupby(['Name', 'location'])
            .agg(pd.Series.mode)
            .to_dict()
      )
      .reset_index()
)
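The way this works: after set_index(['Name', 'location']), the frame is indexed by (Name, location) pairs. The expression inside fillna builds a nested dict like {'Value': {('Tom', 'UK'): 20}} (20 is the mode of Tom/UK's non-missing values), and fillna aligns that inner dict on the index, so only the Tom/UK holes in the 'Value' column are filled.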
Output:
Name location Value
0 Tom USA 20
1 Tom UK 20
2 Tom USA NaN
3 Tom UK 20
4 Jack India NaN
5 Nihal Africa 30
6 Tom UK 20
7 Tom UK 20
8 Tom UK 30
9 Tom UK 20
10 Tom UK 30
11 Sam UK 30
12 Sam UK 30
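If you ever need the same fill for every (Name, location) group rather than just Tom/UK, here is a sketch using groupby + transform (note it also fills Tom/USA, and leaves all-NaN groups such as Jack/India untouched):

def fill_group_mode(s):
    # Fill a group's NaNs with the group's most frequent value;
    # groups with no non-NaN values are returned unchanged
    m = s.mode()
    return s.fillna(m.iloc[0]) if not m.empty else s

df['Value'] = df.groupby(['Name', 'location'])['Value'].transform(fill_group_mode)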
I have Python code that gets links from a dataframe (df1), collects data from a website, and returns the output in a new dataframe.
df1:
id Name link Country Continent
1 Company1 www.link1.com France Europe
2 Company2 www.link2.com France Europe
3 Company3 www.Link3.com France Europe
The output from the code is df2:
link numberOfPPL City
www.link1.com 8 Paris
www.link1.com 9 Paris
www.link2.com 15 Paris
www.link2.com 1 Paris
I want to join these 2 dataframes into one (dfinal). My code:
dfinal = df1.append(df2, ignore_index=True)
I got dfinal:
link           numberOfPPL  City   id   Name      Country  Continent
www.link1.com  8            Paris  NaN  NaN       NaN      NaN
www.link1.com  9            Paris  NaN  NaN       NaN      NaN
www.link2.com  15           Paris  NaN  NaN       NaN      NaN
www.link2.com  1            Paris  NaN  NaN       NaN      NaN
www.link1.com  NaN          NaN    1    Company1  France   Europe
..
..
I want my final dataframe to be like this:
link           numberOfPPL  City   id  Name      Country  Continent
www.link1.com  8            Paris  1   Company1  France   Europe
www.link1.com  9            Paris  1   Company1  France   Europe
www.link2.com  15           Paris  2   Company2  France   Europe
www.link2.com  1            Paris  2   Company2  France   Europe
Can anyone help, please?
You can merge the two dataframes on 'link':
outputDF = df2.merge(df1, how='left', on=['link'])
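With how='left', every row of df2 (the scraped output) is kept, and the matching id, Name, Country and Continent values are attached from df1; any link in df2 with no match in df1 gets NaN in those columns. As a side note, DataFrame.append is deprecated in recent pandas (removed in 2.0) in favour of pd.concat, but appending stacks rows rather than matching them, which is why merge is the right tool here.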
I read CSV files and group by 2 of the headers. For each value of one header I want the count and the percentage (count/total), added to a dataframe.
There is a lot of data in test.csv.
==example==
country city name
KOREA busan Kim
KOREA busan choi
KOREA Seoul park
USA LA Jane
Spain Madrid Torres
(names do not overlap)
==========
csv_file = pd.read_csv("test.csv")
need_group = csv_file.groupby(['country', 'city'])
returns
country  city names
0  KOREA  Seoul, Busan, ...
1  KOREA  Daegu, Seoul
2  USA    LA, New York...
3  USA    LA, ...
What I want (count is the number of names):
country city names count percent
0 KOREA Seoul 2 20%
1 KOREA Daegu 1 10%
2 USA LA 2 20%
3 USA New York 1 10%
4 Spain Madrid 4 40%
I believe you need counts per country and city via GroupBy.size, and then the percentage by dividing each count by the total:
print (csv_file)
country city name
0 KOREA busan Kim
1 KOREA busan Dongs
2 KOREA Seoul park
3 USA LA Jane
4 Spain Madrid Torres
df = csv_file.groupby(['country','city']).size().reset_index(name='count')
df['percent'] = df['count'].div(df['count'].sum()).mul(100)
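On the five sample rows above, this should give something like:

  country    city  count  percent
0   KOREA   Seoul      1     20.0
1   KOREA   busan      2     40.0
2   Spain  Madrid      1     20.0
3     USA      LA      1     20.0

(groupby sorts its keys, and uppercase sorts before lowercase, which is why 'Seoul' comes before 'busan'). If you want the percentages rounded for display, df['percent'] = df['percent'].round(1) is enough.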
I have a dataframe and I'm trying to group by the Name and Destination columns, calculate the sum of Sales per Destination for each Name, and then get the top 2 destinations for each name.
data=
Name Origin Destination Sales
John Italy China 2
Dan UK China 3
Dan UK India 2
Sam UK India 5
Sam Italy Malaysia 1
John Italy Malaysia 1
Dan France India 4
Dan Italy China 2
Sam Italy Malaysia 2
John France Malaysia 1
Sam Italy China 2
Dan UK Malaysia 4
Dan France India 2
John France Malaysia 4
John Italy China 4
John UK Malaysia 1
Sam UK China 4
Sam France China 5
I have tried to do this, but the result keeps coming out sorted by Destination rather than by Sales. Below is the code I tried.
data.groupby(['Name', 'Destination'])['Sales'].sum().groupby(level=0).head(2).reset_index(name='Total_Sales')
This code gives me this dataframe:
Name Destination Total_Sales
Dan China 5
Dan India 8
John China 6
John Malaysia 7
Sam China 11
Sam India 5
But it is sorted on the wrong column (Destination); I would like to sort by the sum of the sales (Total_Sales) instead.
The expected result I want to achieve is:
Name Destination Total_Sales
Dan India 8
Dan China 5
John Malaysia 7
John China 6
Sam China 11
Sam India 5
Your code:
grouped_df = data.groupby(['Name', 'Destination'])['Sales'].sum().groupby(level=0).head(2).reset_index(name='Total_Sales')
To sort the result:
sorted_df = grouped_df.sort_values(by=['Name','Total_Sales'], ascending=(True,False))
print(sorted_df)
Output:
Name Destination Total_Sales
1 Dan India 8
0 Dan China 5
3 John Malaysia 7
2 John China 6
4 Sam China 11
5 Sam India 5
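One caveat about the original code: head(2) takes the first two destinations per name in the order the first groupby left them (alphabetical by Destination), which only happens to coincide with the top 2 by sales for this particular data. A sketch that guarantees the top 2 by Total_Sales regardless of the data (same column names as above):

top2 = (data.groupby(['Name', 'Destination'])['Sales'].sum()
            .reset_index(name='Total_Sales')
            .sort_values(['Name', 'Total_Sales'], ascending=[True, False])
            .groupby('Name')
            .head(2))
print(top2)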
I'm trying to prune some data from my data frame, but only the rows where there are duplicates in the "To country" column.
My data frame looks like this:
Year From country To country Points
0 2016 Albania Armenia 0
1 2016 Albania Armenia 2
2 2016 Albania Australia 12
Year From country To country Points
2129 2016 United Kingdom The Netherlands 0
2130 2016 United Kingdom Ukraine 10
2131 2016 United Kingdom Ukraine 5
[2132 rows x 4 columns]
I try this on it:
df.drop_duplicates(subset='To country', inplace=True)
And what happens is this:
Year From country To country Points
0 2016 Albania Armenia 0
2 2016 Albania Australia 12
4 2016 Albania Austria 0
Year From country To country Points
46 2016 Albania The Netherlands 0
48 2016 Albania Ukraine 0
50 2016 Albania United Kingdom 5
[50 rows x 4 columns]
While this does get rid of the duplicated 'To country' entries, it also removes almost all the values of the 'From country' column. I must be using drop_duplicates() wrong, but the pandas documentation isn't helping me understand why it's dropping more than I'd expect.
No, this behavior is correct: assuming every team played every other team, drop_duplicates is finding the first occurrence of each 'To country', and all of those firsts happen to be 'From' Albania.
From what you've said below, you want to keep row 0, but not row 1 because it repeats both the To and From countries. The way to eliminate those is:
df.drop_duplicates(subset=['To country', 'From country'], inplace=True)
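drop_duplicates also takes a keep parameter in case the first occurrence is not the one you want: keep='last' keeps the last row of each (To, From) pair, and keep=False drops every duplicated row entirely. For example:

df.drop_duplicates(subset=['To country', 'From country'], keep='last', inplace=True)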
The simplest solution is to group by the 'to country' name and take the first (or the last, if you prefer) row from each group:
df.groupby('To country').first().reset_index()
# To country Year From country Points
#0 Armenia 2016 Albania 0
#1 Australia 2016 Albania 12
#2 The Netherlands 2016 United Kingdom 0
#3 Ukraine 2016 United Kingdom 10
Compared to aryamccarthy's solution, this one gives you more control over which duplicates to keep.
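For example, here is a sketch that keeps the highest-scoring row for each destination instead of the first one: sort by Points first, so that first() then picks the maximum-Points row of each group.

df.sort_values('Points', ascending=False).groupby('To country').first().reset_index()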