I have got data like this:
Col
Texas[x]
Dallas
Austin
California[x]
Los Angeles
San Francisco
What I want is this:
Col1 Col2
Texas[x] Dallas
Austin
California[x] Los Angeles
San Francisco
Please help!
Use str.extract to create the two columns (the regex captures an optional state, i.e. anything ending in [x], into the first column and the remainder into the second), forward-fill, then clean up:
# assumes import numpy as np alongside pandas
df.Col.str.extract(r'(.*\[x\])?(.*)').ffill()\
  .replace('', np.nan).dropna()\
  .rename(columns={0: 'Col1', 1: 'Col2'})\
  .set_index('Col1')
                         Col2
Col1
Texas[x]               Dallas
Texas[x]               Austin
California[x]     Los Angeles
California[x]   San Francisco
Update: To address the follow-up question.
df.Col.str.extract(r'(.*\[x\])?(.*)').ffill()\
  .replace('', np.nan).dropna()\
  .rename(columns={0: 'Col1', 1: 'Col2'})
You get:
            Col1           Col2
1       Texas[x]         Dallas
2       Texas[x]         Austin
4  California[x]    Los Angeles
5  California[x]  San Francisco
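If the gaps in the index (1, 2, 4, 5) are unwanted, you can chain .reset_index(drop=True) on the end. This is a small optional tidy-up, not part of the original answer:
df.Col.str.extract(r'(.*\[x\])?(.*)').ffill()\
  .replace('', np.nan).dropna()\
  .rename(columns={0: 'Col1', 1: 'Col2'})\
  .reset_index(drop=True)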
It seems like [x] marks a state in the list. You can iterate over the dataframe using iterrows. Something like this:
import pandas as pd

state = None  # initialize as None, in case something goes wrong
city = None
rowlist = []
for idx, row in df.iterrows():
    # get the state
    if '[x]' in row['Col']:
        state = row['Col']
        continue
    # now, get the cities
    city = row['Col']
    rowlist.append([state, city])
df2 = pd.DataFrame(rowlist, columns=['Col1', 'Col2'])
This assumes that your initial dataframe is called df and the column is named Col, and it only works if each state is followed by its cities, which appears to be the case in your data sample.
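For reference, a minimal runnable version with the sample data (the DataFrame construction is my reconstruction of the input shape, not from the original post):
import pandas as pd

df = pd.DataFrame({'Col': ['Texas[x]', 'Dallas', 'Austin',
                           'California[x]', 'Los Angeles', 'San Francisco']})
rowlist, state = [], None
for _, row in df.iterrows():
    if '[x]' in row['Col']:
        state = row['Col']               # remember the current state header
        continue
    rowlist.append([state, row['Col']])  # pair each city with its state
df2 = pd.DataFrame(rowlist, columns=['Col1', 'Col2'])
print(df2)
# roughly:
#             Col1           Col2
# 0       Texas[x]         Dallas
# 1       Texas[x]         Austin
# 2  California[x]    Los Angeles
# 3  California[x]  San Francisco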
I have a pandas dataframe with a column "Bio Location". I would like to filter it so that I only keep the rows whose location contains one of the city names in my list. I wrote the code below, which works except for one problem.
For example, if the location is "Paris France" and I have Paris in my list, then it returns the result. However, if it were "France Paris", it would not return "Paris". Do you have a solution? Maybe use regex? Thank you!
df = pd.read_csv(path_to_file, encoding='utf-8', sep=',')
cities = ['Paris', 'Bruxelles', 'Madrid']
values = df[df['Bio Location'].isin(cities)]
values.to_csv(r'results.csv', index=False)
What you want here is .str.contains():
1. The DF I used to test:
import pandas as pd

# tested with the city once at the start, once in the middle, and once at the end of a string
df = pd.DataFrame({
    'col1': ['Paris France', 'France Paris Test', 'France Paris',
             'Madrid Spain', 'Spain Madrid Test', 'Spain Madrid']
})
df
Result:
                col1
0       Paris France
1  France Paris Test
2       France Paris
3       Madrid Spain
4  Spain Madrid Test
5       Spain Madrid
2. Then applying the code below:
Updated following a comment:
reg = 'Paris|Madrid'
df = df[df.col1.str.contains(reg)]
df
Result:
                col1
0       Paris France
1  France Paris Test
2       France Paris
3       Madrid Spain
4  Spain Madrid Test
5       Spain Madrid
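If the list of cities is long, you can build the regex from the list instead of hard-coding it. A small sketch (re.escape and the case/na flags are standard Python/pandas, but this variant is not part of the original answer):
import re

cities = ['Paris', 'Bruxelles', 'Madrid']
reg = '|'.join(re.escape(c) for c in cities)              # escape any regex metacharacters
df = df[df.col1.str.contains(reg, case=False, na=False)]  # na=False skips missing values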
Suppose I have two dataframes
df_1
city      state  salary
New York  NY      85000
Chicago   IL      65000
Miami     FL      75000
Dallas    TX      78000
Seattle   WA      96000
df_2
city      state  taxes
New York  NY     15000
Chicago   IL      5000
Miami     FL      6500
Next, I join the two dataframes
joined_df = df_1.merge(df_2, how='inner', left_on=['city'], right_on=['city'])
The Result:
joined_df
       city state_x  salary state_y  taxes
0  New York      NY   85000      NY  15000
1   Chicago      IL   65000      IL   5000
2     Miami      FL   75000      FL   6500
Is there any way I can stack the two dataframes on top of each other, joining on the city, instead of extending the rows horizontally, like below?
Requested:
joined_df

city      state  salary  taxes
New York  NY      85000
New York  NY              15000
Chicago   IL      65000
Chicago   IL               5000
Miami     FL      75000
Miami     FL                6500
How can I do this in pandas?
In this case we need to use merge to restrict to the relevant rows before concat, since both city and state have to match.
rel_df_1 = df_1.merge(df_2)[df_1.columns]
rel_df_2 = df_2.merge(df_1)[df_2.columns]
df = pd.concat([rel_df_1, rel_df_2]).sort_values(['city', 'state'])
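With the sample frames above, this should produce roughly the following (the relative order of the salary and taxes rows within each city depends on the sort):
       city state   salary    taxes
1   Chicago    IL  65000.0      NaN
1   Chicago    IL      NaN   5000.0
2     Miami    FL  75000.0      NaN
2     Miami    FL      NaN   6500.0
0  New York    NY  85000.0      NaN
0  New York    NY      NaN  15000.0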
You can use append (a shortcut for concat) to achieve that:
result = df_1.append(df_2, sort=False)
If your dataframes have overlapping indexes, you can use:
df_1.append(df_2, ignore_index=True, sort=False)
You can also look at the pandas documentation on concatenation for more information.
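Note that DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0; the equivalent with concat is:
result = pd.concat([df_1, df_2], ignore_index=True, sort=False)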
UPDATE: After appending your dataframes, you can filter the result to keep only the rows whose city appears in both dataframes:
result = result.loc[result['city'].isin(df_1['city'])
                    & result['city'].isin(df_2['city'])]
Try with stack():
stacked = df_1.merge(df_2, on=["city", "state"]).set_index(["city", "state"]).stack()
output = pd.concat([stacked.where(stacked.index.get_level_values(-1)=="salary"),
stacked.where(stacked.index.get_level_values(-1)=="taxes")],
axis=1,
keys=["salary", "taxes"]) \
.droplevel(-1) \
.reset_index()
>>> output
city state salary taxes
0 New York NY 85000.0 NaN
1 New York NY NaN 15000.0
2 Chicago IL 65000.0 NaN
3 Chicago IL NaN 5000.0
4 Miami FL 75000.0 NaN
5 Miami FL NaN 6500.0
I want to split one column of my dataframe into multiple columns, attach those columns back to the original dataframe, and then divide the dataframe in two based on whether the split columns include a specific string.
I have a dataframe that has a column with values separated by semicolons like below.
import pandas as pd
data = {'ID': ['1', '2', '3', '4', '5', '6', '7'],
        'Residence': ['USA;CA;Los Angeles;Los Angeles', 'USA;MA;Suffolk;Boston',
                      'Canada;ON', 'USA;FL;Charlotte', 'NA', 'Canada;QC', 'USA;AZ'],
        'Name': ['Ann', 'Betty', 'Carl', 'David', 'Emily', 'Frank', 'George'],
        'Gender': ['F', 'F', 'M', 'M', 'F', 'M', 'M']}
df = pd.DataFrame(data)
Then I split the column as below, and separated the result into two dataframes based on whether the country is USA or not.
address = df['Residence'].str.split(';',expand=True)
country = address[0] != 'USA'
USA, nonUSA = address[~country], address[country]
Now if you run USA and nonUSA, you'll note that there are extra columns in nonUSA, and also a row with no country information. So I got rid of those NA values.
USA.columns = ['Country', 'State', 'County', 'City']
nonUSA.columns = ['Country', 'State']
nonUSA = nonUSA.dropna(axis=0, subset=[1])
nonUSA = nonUSA[nonUSA.columns[0:2]]
Now I want to attach USA and nonUSA to my original dataframe, so that I will get two dataframes that look like below:
USAdata = pd.DataFrame({'ID':['1','2','4','7'],
'Name':['Ann','Betty','David','George'],
'Gender':['F','F','M','M'],
'Country':['USA','USA','USA','USA'],
'State':['CA','MA','FL','AZ'],
'County':['Los Angeles','Suffolk','Charlotte','None'],
'City':['Los Angeles','Boston','None','None']})
nonUSAdata = pd.DataFrame({'ID':['3','6'],
'Name':['Carl','Frank'],
'Gender':['M','M'],
'Country':['Canada', 'Canada'],
'State':['ON','QC']})
I'm stuck here, though. How can I split my original dataframe based on whether Residence includes USA, and attach the split columns from Residence (USA and nonUSA) back to the corresponding rows?
(Also, I just posted everything I have so far, but I'm curious whether there's a cleaner/smarter way to do this.)
The original data has a unique index that is preserved for both DataFrames in the code below, so you can use concat to join the two pieces back together and then add them to the original with DataFrame.join, or with concat with axis=1:
address = df['Residence'].str.split(';',expand=True)
country = address[0] != 'USA'
USA, nonUSA = address[~country], address[country]
USA.columns = ['Country', 'State', 'County', 'City']
nonUSA = nonUSA.dropna(axis=0, subset=[1])
nonUSA = nonUSA[nonUSA.columns[0:2]]
# order changed to avoid an error: drop the extra columns first, then rename
nonUSA.columns = ['Country', 'State']
df = pd.concat([df, pd.concat([USA, nonUSA])], axis=1)
Or:
df = df.join(pd.concat([USA, nonUSA]))
print (df)
ID Residence Name Gender Country State \
0 1 USA;CA;Los Angeles;Los Angeles Ann F USA CA
1 2 USA;MA;Suffolk;Boston Betty F USA MA
2 3 Canada;ON Carl M Canada ON
3 4 USA;FL;Charlotte David M USA FL
4 5 NA Emily F NaN NaN
5 6 Canada;QC Frank M Canada QC
6 7 USA;AZ George M USA AZ
County City
0 Los Angeles Los Angeles
1 Suffolk Boston
2 NaN NaN
3 Charlotte None
4 NaN NaN
5 NaN NaN
6 None None
But it seems this can be simplified:
c = ['Country', 'State', 'County', 'City']
df[c] = df['Residence'].str.split(';',expand=True)
print (df)
ID Residence Name Gender Country State \
0 1 USA;CA;Los Angeles;Los Angeles Ann F USA CA
1 2 USA;MA;Suffolk;Boston Betty F USA MA
2 3 Canada;ON Carl M Canada ON
3 4 USA;FL;Charlotte David M USA FL
4 5 NA Emily F NA None
5 6 Canada;QC Frank M Canada QC
6 7 USA;AZ George M USA AZ
County City
0 Los Angeles Los Angeles
1 Suffolk Boston
2 None None
3 Charlotte None
4 None None
5 None None
6 None None
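To get the two separate frames the question asks for, a straightforward follow-up (my sketch, not part of the original answer; it assumes the join version above, where the row with no country information has NaN in Country) is to filter on Country and drop the columns that do not apply:
USAdata = df[df['Country'] == 'USA'].drop(columns=['Residence'])
nonUSAdata = (df[df['Country'].notna() & (df['Country'] != 'USA')]
              .drop(columns=['Residence', 'County', 'City']))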
The dataframe has 122,145 rows.
Following is a snippet of the data:
country_name,subdivision_1_name,subdivision_2_name,city_name
Spain,Madrid,Madrid,Sevilla La Nueva
Spain,Principality of Asturias,Asturias,Sevares
Spain,Catalonia,Barcelona,Seva
Spain,Cantabria,Cantabria,Setien
Spain,Basque Country,Biscay,Sestao
Spain,Navarre,Navarre,Sesma
Spain,Catalonia,Barcelona,Barcelona
I want to substitute city_name with subdivision_2_name whenever both of the following conditions are satisfied:
1. the rows share the same country_name and the same subdivision_1_name, and
2. the subdivision_2_name is present in the city_name column.
For example: for city_name "Seva", the subdivision_2_name "Barcelona" is also present as a city_name in the dataframe, with the same country_name "Spain" and the same subdivision_1_name "Catalonia", so I replace "Seva" with "Barcelona".
I was not able to write a proper apply function, so I prepared a loop instead:
for i in range(df.shape[0]):
    if df.subdivision_2_name[i] in set(df.city_name[(df.country_name == df.country_name[i])
                                                    & (df.subdivision_1_name == df.subdivision_1_name[i])]):
        df.city_name[i] = df.subdivision_2_name[i]
Edit: this loop took 1,637 seconds (~28 minutes) to run. Can anyone suggest a better method?
Use:
def f(x):
    if x['subdivision_2_name'].isin(x['city_name']).any():
        x['city_name'] = x['subdivision_2_name']
    return x

df1 = df.groupby(['country_name', 'subdivision_1_name', 'subdivision_2_name']).apply(f)
print (df1)
country_name subdivision_1_name subdivision_2_name city_name
0 Spain Madrid Madrid Sevilla La Nueva
1 Spain Principality of Asturias Asturias Sevares
2 Spain Catalonia Barcelona Barcelona
3 Spain Cantabria Cantabria Setien
4 Spain Basque Country Biscay Sestao
5 Spain Navarre Navarre Sesma
6 Spain Catalonia Barcelona Barcelona
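The groupby approach should already be far faster than the row loop. If it is still too slow, a fully vectorized alternative (my sketch, built only on standard merge machinery, not part of the original answer) is to self-join on the group keys and flag the rows whose subdivision_2_name occurs as a city_name:
# unique (country, subdivision_1, city) triples present in the data
pairs = (df[['country_name', 'subdivision_1_name', 'city_name']]
         .drop_duplicates())

# left join: '_merge' == 'both' marks rows whose subdivision_2_name
# appears as a city_name within the same country/subdivision_1 group
flags = df.merge(pairs,
                 left_on=['country_name', 'subdivision_1_name', 'subdivision_2_name'],
                 right_on=['country_name', 'subdivision_1_name', 'city_name'],
                 how='left', indicator=True)
mask = flags['_merge'].eq('both').to_numpy()
df.loc[mask, 'city_name'] = df.loc[mask, 'subdivision_2_name']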
I have a DataFrame that looks like this:
Cities Cities_Dict
"San Francisco" ["San Francisco", "New York", "Boston"]
"Los Angeles" ["Los Angeles"]
"berlin" ["Munich", "Berlin"]
"Dubai" ["Dubai"]
I want to create a new column that compares the city from the first column to the list of cities from the second column and finds the closest match.
I use difflib for that:
df["new_col"]=difflib.get_close_matches(df["Cities"],df["Cities_Dict"])
However, I get an error:
TypeError: object of type 'float' has no len()
Use DataFrame.apply with a lambda function and axis=1 to process by rows:
import difflib, ast
#if necessary convert values to lists
#df['Cities_Dict'] = df['Cities_Dict'].apply(ast.literal_eval)
f = lambda x: difflib.get_close_matches(x["Cities"],x["Cities_Dict"])
df["new_col"] = df.apply(f, axis=1)
print (df)
Cities Cities_Dict new_col
0 San Francisco [San Francisco, New York, Boston] [San Francisco]
1 Los Angeles [Los Angeles] [Los Angeles]
2 berlin [Munich, Berlin] [Berlin]
3 Dubai [Dubai] [Dubai]
EDIT:
To take the first match, returning an empty string when there is no match, use:
f = lambda x: next(iter(difflib.get_close_matches(x["Cities"],x["Cities_Dict"])), '')
df["new_col"] = df.apply(f, axis=1)
print (df)
Cities Cities_Dict new_col
0 San Francisco [San Francisco, New York, Boston] San Francisco
1 Los Angeles [Los Angeles] Los Angeles
2 berlin [Munich, Berlin] Berlin
3 Dubai [Dubai] Dubai
EDIT1: If problematic data is possible, use try-except:
def f(x):
    try:
        return difflib.get_close_matches(x["Cities"], x["Cities_Dict"])[0]
    except:
        return ''
df["new_col"] = df.apply(f, axis=1)
print (df)
Cities Cities_Dict new_col
0 NaN [San Francisco, New York, Boston]
1 Los Angeles [10]
2 berlin [Munich, Berlin] Berlin
3 Dubai [Dubai] Dubai
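As a side note, get_close_matches also takes n and cutoff parameters (defaults n=3, cutoff=0.6), so you can control how strict the fuzzy matching is; for example:
import difflib

difflib.get_close_matches('berlin', ['Munich', 'Berlin'], n=1, cutoff=0.8)
# ['Berlin'] -- 'berlin' vs 'Berlin' has a similarity ratio of about 0.83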