How to repeat 2 columns every nth row in pandas? - python

I have a df that looks like this.
id rent place
0 Yes colorado
0 yes Mexico
0 yes Brazil
1 yes colorado
1 yes Mexico
1 yes Brazil
2 yes colorado
2 yes Mexico
2 yes Brazil
3 yes colorado
3 yes Mexico
3 yes Brazil
I need the "id" column to continue to increase by 1 and the values in the "place" column to repeat every 3rd row. I have no idea how to do this.

You could build your DataFrame row by row, appending the relevant row(s) as you go.
ids = [0, 1, 2, 3]
rent = [123, 'yes', 'yes']
place = ['colorado', 'Mexico', 'Brazil']
df = pd.DataFrame({'rent': [], 'place': []}, index=[])  # empty df
for i in range(len(ids)):
    for j in range(len(rent)):
        df = df.append(pd.DataFrame({'rent': rent[j], 'place': place[j]}, index=[ids[i]]))
df.reset_index(inplace=True)
df.rename(columns={'index': 'id'}, inplace=True)
Output df is:
id rent place
0 0 123 colorado
1 0 yes Mexico
2 0 yes Brazil
3 1 123 colorado
4 1 yes Mexico
5 1 yes Brazil
6 2 123 colorado
7 2 yes Mexico
8 2 yes Brazil
9 3 123 colorado
10 3 yes Mexico
11 3 yes Brazil
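Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0. A sketch of the same loop that collects the small frames and concatenates them once at the end (assuming rent is meant to be 'yes' throughout, rather than the stray 123 above):

```python
import pandas as pd

ids = [0, 1, 2, 3]
rent = ['yes', 'yes', 'yes']
place = ['colorado', 'Mexico', 'Brazil']

# Build one single-row frame per (id, place) pair, then concatenate once
rows = []
for i in ids:
    for j in range(len(place)):
        rows.append(pd.DataFrame({'rent': rent[j], 'place': place[j]}, index=[i]))

df = pd.concat(rows).rename_axis('id').reset_index()
```

Concatenating once is also much faster than repeatedly appending, since each append copies the whole frame.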

You can generate a new one like so:
from itertools import cycle

N = 200
places = cycle(["colorado", "mexico", "brazil"])
data = {"id": [j // 3 for j in range(N)], "rent": True, "place": [next(places) for j in range(N)]}
df = pd.DataFrame(data)
Note that I've replaced rent with a boolean, which is less error-prone than text. Output:
id rent place
0 0 True colorado
1 0 True mexico
2 0 True brazil
3 1 True colorado
4 1 True mexico
.. .. ... ...
195 65 True colorado
196 65 True mexico
197 65 True brazil
198 66 True colorado
199 66 True mexico
Alternatively, you can concatenate DataFrames and then sort them:
df = pd.DataFrame()
for place in ["brazil", "colorado", "mexico"]:
    sub_df = pd.DataFrame({"id": range(N), "rent": True, "place": place})
    df = pd.concat([df, sub_df], axis=0)
df = df.sort_values(["id"])
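The same frame can also be built fully vectorized, with no Python loop at all (a sketch using numpy.repeat and numpy.tile):

```python
import numpy as np
import pandas as pd

N = 4                                    # number of distinct ids
places = ["colorado", "Mexico", "Brazil"]

df = pd.DataFrame({
    "id": np.repeat(np.arange(N), len(places)),  # 0,0,0,1,1,1,...
    "rent": "yes",
    "place": np.tile(places, N),                 # whole list repeated N times
})
```

repeat stretches each id across the three places, while tile cycles the place list once per id, so the two columns line up exactly as in the desired output.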


Creating new variable by aggregation in python 2

I have data on births that looks like this:
Date Country Sex
1.1.20 USA M
1.1.20 USA M
1.1.20 Italy F
1.1.20 England M
2.1.20 Italy F
2.1.20 Italy M
3.1.20 USA F
3.1.20 USA F
My goal is a new dataframe in which each row is a date and country combination, with the number of total births, male births and female births. It's supposed to look like this:
Date Country Births Males Females
1.1.20 USA 2 2 0
1.1.20 Italy 1 0 1
1.1.20 England 1 1 0
2.1.20 Italy 2 1 1
3.1.20 USA 2 0 2
I tried using this code:
df.groupby(by=['Date', 'Country', 'Sex']).size()
but it only gave me a new column of total births, with a separate row for each sex in every date+country combination.
Any help will be appreciated.
Thanks,
Eran
You can group the dataframe on columns Date and Country, then aggregate column Sex with value_counts followed by unstack to reshape; finally, assign the Births column by summing the frequencies along axis=1:
out = df.groupby(['Date', 'Country'], sort=False)['Sex']\
        .value_counts().unstack(fill_value=0)
out.assign(Births=out.sum(1)).reset_index()\
   .rename(columns={'M': 'Male', 'F': 'Female'})
Or you can use a very similar approach with pd.crosstab instead of groupby + value_counts:
out = pd.crosstab([df['Date'], df['Country']], df['Sex'], colnames=[None])
out.assign(Births=out.sum(1)).reset_index()\
   .rename(columns={'M': 'Male', 'F': 'Female'})
Date Country Female Male Births
0 1.1.20 USA 0 2 2
1 1.1.20 Italy 1 0 1
2 1.1.20 England 0 1 1
3 2.1.20 Italy 1 1 2
4 3.1.20 USA 2 0 2
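Another option is groupby with named aggregation, counting each sex directly (a sketch on the sample data; the lambda counters are one simple way to get the per-sex counts, not the only one):

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['1.1.20', '1.1.20', '1.1.20', '1.1.20', '2.1.20', '2.1.20', '3.1.20', '3.1.20'],
    'Country': ['USA', 'USA', 'Italy', 'England', 'Italy', 'Italy', 'USA', 'USA'],
    'Sex': ['M', 'M', 'F', 'M', 'F', 'M', 'F', 'F'],
})

# One output row per (Date, Country); each named aggregation becomes a column
out = (df.groupby(['Date', 'Country'], sort=False)
         .agg(Births=('Sex', 'size'),
              Males=('Sex', lambda s: (s == 'M').sum()),
              Females=('Sex', lambda s: (s == 'F').sum()))
         .reset_index())
```

This avoids the unstack step entirely and gives the final column names in one pass.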

Creating new column based on column values in row and column values in other rows in df?

I have df below as:
id | name | status | country | ref_id
3 Bob False Germany NaN
5 422 True USA 3
7 Nick False India NaN
6 Chris True Australia 7
8 324 True Africa 28
28 Tim False Canada 53
I want to add a new column for each row: if the status of that row is True, and its ref_id exists in the id column of another row whose status is False, give me the value of name from that other row.
So expected output below would be:
id | name | status | country | ref_id | new
3 Bob False Germany NaN NaN
5 422 True USA 3 Bob
7 Nick False India NaN NaN
6 Chris True Australia 7 Nick
8 324 True Africa 28 Tim
28 Tim False Canada 53 NaN
I have code below that I am using for other purposes; it just filters for rows that have a status of True and a ref_id value that exists in the id column, like below:
(df.loc[df["status"] & df["ref_id"].astype(float).isin(df.loc[~df["status"], "id"])])
But I am also trying to compute the new column described above, with the value of name where one exists.
Thanks!
Let us try
df['new']=df.loc[df.status,'ref_id'].map(df.set_index('id')['name'])
df
id name status country ref_id new
0 3 Bob False Germany NaN NaN
1 5 422 True USA 3.0 Bob
2 7 Nick False India NaN NaN
3 6 Chris True Australia 7.0 Nick
4 8 324 True Africa 28.0 Tim
5 28 Tim False Canada 53.0 NaN
This is essentially a merge:
merged = (df.loc[df['status'], ['ref_id']]
          .merge(df.loc[~df['status'], ['id', 'name']], left_on='ref_id', right_on='id'))
df['new'] = (df['ref_id'].map(merged.set_index('id')['name'])
             .where(df['status']))
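The same lookup can also be done with a single left merge instead of map (a sketch rebuilding the sample frame; the float cast keeps the merge keys the same dtype, since ref_id holds NaN and is therefore float):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'id': [3, 5, 7, 6, 8, 28],
    'name': ['Bob', '422', 'Nick', 'Chris', '324', 'Tim'],
    'status': [False, True, False, True, True, False],
    'ref_id': [np.nan, 3, np.nan, 7, 28, 53],
})

# Lookup table: id -> name for the rows that can be referenced (status False)
lookup = (df.loc[~df['status'], ['id', 'name']]
            .rename(columns={'id': 'ref_id', 'name': 'new'}))
lookup['ref_id'] = lookup['ref_id'].astype(float)  # match df['ref_id'] dtype

out = df.merge(lookup, on='ref_id', how='left')
out['new'] = out['new'].where(out['status'])  # keep names only on True-status rows
```

NaN keys never match in a merge, so rows without a ref_id come through with new = NaN automatically.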

How can I group multiple columns in a Data Frame?

I don't know if this is possible but I have a data frame like this one:
df
State County Homicides Man Woman Not_Register
Gto Celaya 2 2 0 0
NaN NaN 8 4 2 2
NaN NaN 3 2 1 0
NaN Yiriria 2 1 1 0
NaN Acambaro 1 1 0 0
Sin Culiacan 3 1 1 1
NaN NaN 5 4 0 1
Chih Juarez 1 1 0 0
I want to group by State and County, summing Homicides, Man, Woman and Not_Register, like this:
State County Homicides Man Woman Not_Register
Gto Celaya 13 8 3 2
Gto Yiriria 2 1 1 0
Gto Acambaro 1 1 0 0
Sin Culiacan 8 5 1 2
Chih Juarez 1 1 0 0
So far, I've been able to group by State and County and fill the NaN rows with the right County and State names. My result and code:
import numpy as np
import math

df = df.fillna(method='pad')  # to repeat the State and County names in the right order
# To group
df = df.groupby(["State", "County"]).agg('sum')
df = df.reset_index()
df
State County Homicides
Gto Celaya 13
Gto Yiriria 2
Gto Acambaro 1
Sin Culiacan 8
Chih Juarez 1
But when I try to add Man and Woman:
df1 = df.groupby(["State", "County", "Man", "Woman", "Not_Register"]).agg('sum')
df1 = df1.reset_index()
df1
the result repeats the counties instead of giving me one row per State and County.
How can I resolve this issue?
Thanks for your help
Change to
df[['Homicides', 'Man', 'Woman', 'Not_Register']] = df[['Homicides', 'Man', 'Woman', 'Not_Register']].apply(pd.to_numeric, errors='coerce')
df = df.groupby(['State', 'County']).sum().reset_index()
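Putting both steps together on the sample data (a sketch; ffill replaces the now-deprecated fillna(method='pad'), and only the numeric columns are summed):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'State':  ['Gto', np.nan, np.nan, np.nan, np.nan, 'Sin', np.nan, 'Chih'],
    'County': ['Celaya', np.nan, np.nan, 'Yiriria', 'Acambaro', 'Culiacan', np.nan, 'Juarez'],
    'Homicides':    [2, 8, 3, 2, 1, 3, 5, 1],
    'Man':          [2, 4, 2, 1, 1, 1, 4, 1],
    'Woman':        [0, 2, 1, 1, 0, 1, 0, 0],
    'Not_Register': [0, 2, 0, 0, 0, 1, 1, 0],
})

df[['State', 'County']] = df[['State', 'County']].ffill()  # propagate names down
out = df.groupby(['State', 'County'], sort=False).sum().reset_index()
```

The key point is that Man, Woman and Not_Register stay as aggregated columns, not grouping keys, which is exactly why grouping by them repeated the counties.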

How to group rows so as to use value_counts on the created groups with pandas?

I have some customer data such as this in a data frame:
S No Country Sex
1 Spain M
2 Norway F
3 Mexico M
...
I want to have an output such as this:
Spain
M = 1207
F = 230
Norway
M = 33
F = 102
...
I have a basic notion that I want to group my rows by country with something like df.groupby(df.Country), and then run something like df.Sex.value_counts() on each group.
Thanks!
I think you need crosstab:
df = pd.crosstab(df.Sex, df.Country)
Or, if you want to keep your approach, add unstack to move the first level of the MultiIndex to the columns:
df = df.groupby(df.Country).Sex.value_counts().unstack(level=0, fill_value=0)
print (df)
Country Mexico Norway Spain
Sex
F 0 1 0
M 1 0 1
EDIT:
If you want to add more columns, you can choose which level is converted to columns via the level parameter:
df1 = df.groupby([df.No, df.Country]).Sex.value_counts().unstack(level=0, fill_value=0).reset_index()
print (df1)
No Country Sex 1 2 3
0 Mexico M 0 0 1
1 Norway F 0 1 0
2 Spain M 1 0 0
df2 = df.groupby([df.No, df.Country]).Sex.value_counts().unstack(level=1, fill_value=0).reset_index()
print (df2)
Country No Sex Mexico Norway Spain
0 1 M 0 0 1
1 2 F 0 1 0
2 3 M 1 0 0
df2 = df.groupby([df.No, df.Country]).Sex.value_counts().unstack(level=2, fill_value=0).reset_index()
print (df2)
Sex No Country F M
0 1 Spain 0 1
1 2 Norway 1 0
2 3 Mexico 0 1
You can also use pandas.pivot_table:
res = df.pivot_table(index='Country', columns='Sex', aggfunc='count', fill_value=0)
print(res)
SNo
Sex F M
Country
Mexico 0 1
Norway 1 0
Spain 0 1
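If the goal is literally the printed per-country listing from the question, a small groupby loop sketch (on a toy version of the data):

```python
import pandas as pd

df = pd.DataFrame({'Country': ['Spain', 'Norway', 'Mexico', 'Spain'],
                   'Sex': ['M', 'F', 'M', 'F']})

# Print each country followed by its sex counts
for country, grp in df.groupby('Country'):
    print(country)
    for sex, n in grp['Sex'].value_counts().items():
        print(f'{sex} = {n}')
```

The crosstab and unstack answers above give the same counts as a table; this loop only matters if the line-by-line text format is required.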

How to do keyword mapping in pandas

I have keyword
India
Japan
United States
Germany
China
Here's sample dataframe
id Address
1 Chome-2-8 Shibakoen, Minato, Tokyo 105-0011, Japan
2 Arcisstraße 21, 80333 München, Germany
3 Liberty Street, Manhattan, New York, United States
4 30 Shuangqing Rd, Haidian Qu, Beijing Shi, China
5 Vaishnavi Summit,80feet Road,3rd Block,Bangalore, Karnataka, India
My goal is to make:
id Address India Japan United States Germany China
1 Chome-2-8 Shibakoen, Minato, Tokyo 105-0011, Japan 0 1 0 0 0
2 Arcisstraße 21, 80333 München, Germany 0 0 0 1 0
3 Liberty Street, Manhattan, New York, USA 0 0 1 0 0
4 30 Shuangqing Rd, Haidian Qu, Beijing Shi, China 0 0 0 0 1
5 Vaishnavi Summit,80feet Road,Bangalore, Karnataka, India 1 0 0 0 0
The basic idea is to create a keyword detector. I am thinking of using str.contains and word2vec, but I can't get the logic right.
Make use of pd.get_dummies():
countries = df.Address.str.extract('(India|Japan|United States|Germany|China)', expand = False)
dummies = pd.get_dummies(countries)
pd.concat([df,dummies],axis = 1)
Also, the most straightforward way is to have the countries in a list and use a for loop, say
countries = ['India', 'Japan', 'United States', 'Germany', 'China']
for c in countries:
    df[c] = df.Address.str.contains(c) * 1
but it can be slow if you have a lot of data and countries.
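A middle ground keeps the str.contains idea but builds all the indicator columns in one concat (a sketch on two sample rows; regex=False treats each country name as a literal string):

```python
import pandas as pd

countries = ['India', 'Japan', 'United States', 'Germany', 'China']
df = pd.DataFrame({'Address': [
    'Chome-2-8 Shibakoen, Minato, Tokyo 105-0011, Japan',
    'Arcisstraße 21, 80333 München, Germany',
]})

# One boolean Series per country, assembled into columns in a single concat
flags = pd.concat({c: df['Address'].str.contains(c, regex=False).astype(int)
                   for c in countries}, axis=1)
out = pd.concat([df, flags], axis=1)
```

Unlike the extract-based answers, this matches a country anywhere in the address, not just in the last comma-separated field.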
In [58]: df = df.join(df.Address.str.extract(r'.*,(.*)', expand=False).str.get_dummies())
In [59]: df
Out[59]:
id Address China Germany India Japan United States
0 1 Chome-2-8 Shibakoen, Minato, Tokyo 105-0011, J... 0 0 0 1 0
1 2 Arcisstraße 21, 80333 München, Germany 0 1 0 0 0
2 3 Liberty Street, Manhattan, New York, United St... 0 0 0 0 1
3 4 30 Shuangqing Rd, Haidian Qu, Beijing Shi, China 1 0 0 0 0
4 5 Vaishnavi Summit,80feet Road,3rd Block,Bangalo... 0 0 1 0 0
NOTE: this method will not work if the country is not in the last position of the Address column, or if the country name contains ','.
from numpy.core.defchararray import find

kw = 'India|Japan|United States|Germany|China'.split('|')
a = df.Address.values.astype(str)[:, None]
df.join(
    pd.DataFrame(
        find(a, kw) >= 0,
        df.index, kw,
        dtype=int
    )
)
id Address India Japan United States Germany China
0 1 Chome-2-8 Shibakoen, Minat... 0 1 0 0 0
1 2 Arcisstraße 21, 80333 Münc... 0 0 0 1 0
2 3 Liberty Street, Manhattan,... 0 0 1 0 0
3 4 30 Shuangqing Rd, Haidian ... 0 0 0 0 1
4 5 Vaishnavi Summit,80feet Ro... 1 0 0 0 0
