I have a dataframe that contains misspelled words and abbreviations, like this.
input:
import pandas as pd

df = pd.DataFrame(['swtch', 'cola', 'FBI',
                   'smsng', 'BCA', 'MIB'], columns=['misspelled'])
output:
misspelled
0 swtch
1 cola
2 FBI
3 smsng
4 BCA
5 MIB
I need to correct the misspelled words and the abbreviations.
I have tried creating a dictionary, such as:
input:
dicts = pd.DataFrame(['coca cola', 'Federal Bureau of Investigation',
                      'samsung', 'Bank Central Asia', 'switch', 'Men In Black'], columns=['words'])
output:
words
0 coca cola
1 Federal Bureau of Investigation
2 samsung
3 Bank Central Asia
4 switch
5 Men In Black
and applying this code:
import difflib
import numpy as np

x = [next(iter(m), np.nan) for m in map(lambda w: difflib.get_close_matches(w, dicts.words), df.misspelled)]
df['fix'] = x
print(df)
The output shows that I have succeeded in correcting the misspelled words, but not the abbreviations:
misspelled fix
0 swtch switch
1 cola coca cola
2 FBI NaN
3 smsng samsung
4 BCA NaN
5 MIB NaN
Please help.
How about following a two-pronged approach: first correct the misspellings, then expand the abbreviations:
import pandas as pd
from spellchecker import SpellChecker

df = pd.DataFrame(['swtch', 'cola', 'FBI', 'smsng', 'BCA', 'MIB'], columns=['misspelled'])
abbreviations = {
    'FBI': 'Federal Bureau of Investigation',
    'BCA': 'Bank Central Asia',
    'MIB': 'Men In Black',
    'cola': 'Coca Cola'
}
spell = SpellChecker()
df['fixed'] = df['misspelled'].apply(spell.correction).replace(abbreviations)
Result:
misspelled fixed
0 swtch switch
1 cola Coca Cola
2 FBI Federal Bureau of Investigation
3 smsng among
4 BCA Bank Central Asia
5 MIB Men In Black
I used pyspellchecker, but you can go with any spell-checking library. It corrected smsng to among, but that is a caveat of automatic spelling correction; different libraries may give different results.
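If you prefer to stay with difflib and the custom dictionary from the question, another option is to check the abbreviations mapping first and only fall back to fuzzy matching. This is just a sketch (the fix helper is a made-up name; abbreviations is the dict defined above and dicts.words comes from the question):
import difflib
import numpy as np

def fix(word):
    # exact abbreviation lookup first (difflib does not match acronyms well)
    if word in abbreviations:
        return abbreviations[word]
    # otherwise fall back to fuzzy matching against the custom word list
    matches = difflib.get_close_matches(word, dicts.words, n=1, cutoff=0.6)
    return matches[0] if matches else np.nan

df['fix'] = df['misspelled'].apply(fix)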
I have a dataframe of restaurants and one column has corresponding cuisines.
The problem is that there are restaurants with multiple cuisines in the same column [up to 8].
Let's say it's something like this:
RestaurantName City Restaurant ID Cuisines
Restaurant A Milan 31333 French, Spanish, Italian
Restaurant B Shanghai 63551 Pizza, Burgers
Restaurant C Dubai 7991 Burgers, Ice Cream
Here's copy-able sample code:
rst = pd.DataFrame({'RestaurantName': ['Rest A', 'Rest B', 'Rest C'],
                    'City': ['Milan', 'Shanghai', 'Dubai'],
                    'RestaurantID': [31333, 63551, 7991],
                    'Cuisines': ['French, Spanish, Italian', 'Pizza, Burgers', 'Burgers, Ice Cream']})
I used string split to expand them into 8 different columns and added them to the dataframe.
csnsplit=rst.Cuisines.str.split(", ",expand=True)
rst["Cuisine1"]=csnsplit.loc[:,0]
rst["Cuisine2"]=csnsplit.loc[:,1]
rst["Cuisine3"]=csnsplit.loc[:,2]
rst["Cuisine4"]=csnsplit.loc[:,3]
rst["Cuisine5"]=csnsplit.loc[:,4]
rst["Cuisine6"]=csnsplit.loc[:,5]
rst["Cuisine7"]=csnsplit.loc[:,6]
rst["Cuisine8"]=csnsplit.loc[:,7]
Which leaves me with this:
https://i.stack.imgur.com/AUSDY.png
Now I have no idea how to count individual cuisines, since they're spread across up to 8 different columns, for example if I want to see the top cuisine by city.
I also tried getting dummy columns for all of them, Cuisine1 to Cuisine8. This is causing me to have duplicates like Cuisine1_Bakery, Cuisine2_Bakery, and so on. I could hypothetically merge like ones and keep only the one that has a count of "1", but I have no idea how to do that (a possible sketch follows the column listing below).
dummies=pd.get_dummies(data=rst,columns=["Cuisine1","Cuisine2","Cuisine3","Cuisine4","Cuisine5","Cuisine6","Cuisine7","Cuisine8"])
print(dummies.columns.tolist())
Which leaves me with all of these columns:
https://i.stack.imgur.com/84spI.png
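(For reference, a minimal sketch of how those duplicated dummy columns could be merged, assuming the dummies frame built above: group the indicator columns by the cuisine name after the underscore and take the maximum, so a restaurant gets a 1 if the cuisine appears in any of its eight slots.)
cuisine_cols = dummies.filter(regex=r'^Cuisine\d+_')        # indicator columns only
by_name = cuisine_cols.columns.str.split('_', n=1).str[1]   # 'Burgers', 'French', ...
merged = cuisine_cols.T.groupby(by_name).max().T            # 1 if the cuisine appears in any slot
merged = pd.concat([dummies[['RestaurantName', 'City']], merged], axis=1)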
A third thing I tried was to get the unique values from all 8 columns, which gives me a deduplicated list of each type of cuisine. I could probably add all these columns to the dataframe, but I wouldn't know how to fill the rows with a count for each one based on the column name.
AllCsn = np.concatenate((rst.Cuisine1.unique(),
                         rst.Cuisine2.unique(),
                         rst.Cuisine3.unique(),
                         rst.Cuisine4.unique(),
                         rst.Cuisine5.unique(),
                         rst.Cuisine6.unique(),
                         rst.Cuisine7.unique(),
                         rst.Cuisine8.unique()
                         ))
AllCsn = np.unique(AllCsn.astype(str))
AllCsn
Which leaves me with this:
https://i.stack.imgur.com/O9OpW.png
I do want to create a model later on where I maybe have a column for each cuisine, and use the "unique" code above to get all the columns, but then I would need to figure out how to do a count based on the column header.
I am new to this, so please bear with me and let me know if I need to provide any more info.
It sounds like you're looking for str.split without expanding, then explode:
rst['Cuisines'] = rst['Cuisines'].str.split(', ')
rst = rst.explode('Cuisines')
Creates a frame like:
RestaurantName City RestaurantID Cuisines
0 Rest A Milan 31333 French
0 Rest A Milan 31333 Spanish
0 Rest A Milan 31333 Italian
1 Rest B Shanghai 63551 Pizza
1 Rest B Shanghai 63551 Burgers
2 Rest C Dubai 7991 Burgers
2 Rest C Dubai 7991 Ice Cream
Then it sounds like you want either crosstab:
pd.crosstab(rst['City'], rst['Cuisines'])
Cuisines Burgers French Ice Cream Italian Pizza Spanish
City
Dubai 1 0 1 0 0 0
Milan 0 1 0 1 0 1
Shanghai 1 0 0 0 1 0
Or value_counts
rst[['City', 'Cuisines']].value_counts().reset_index(name='counts')
City Cuisines counts
0 Dubai Burgers 1
1 Dubai Ice Cream 1
2 Milan French 1
3 Milan Italian 1
4 Milan Spanish 1
5 Shanghai Burgers 1
6 Shanghai Pizza 1
Max value_count per City via groupby head:
max_counts = (
    rst[['City', 'Cuisines']].value_counts()
    .groupby(level=0).head(1)
    .reset_index(name='counts')
)
max_counts:
City Cuisines counts
0 Dubai Burgers 1
1 Milan French 1
2 Shanghai Burgers 1
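As a side note, for the modelling idea in the question (one indicator column per cuisine), str.get_dummies is a possible shortcut. A sketch, assuming you start from the original comma-separated Cuisines column rather than the exploded frame:
cuisine_flags = rst['Cuisines'].str.get_dummies(sep=', ')   # one 0/1 column per cuisine
model_df = pd.concat([rst[['RestaurantName', 'City']], cuisine_flags], axis=1)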
I have 2 datasets (in CSV format) with different sizes, as follows:
df_old:
index category text
0 spam you win much money
1 spam you are the winner of the game
2 not_spam the weather in Chicago is nice
3 not_spam pizza is an Italian food
4 neutral we have a party now
5 neutral they are driving to downtown
df_new:
index category text
0 spam you win much money
14 spam London is the capital of Canada
15 not_spam no more raining in winter
25 not_spam the soccer game plays on HBO
4 neutral we have a party now
31 neutral construction will be done
I am using code that concatenates df_new to df_old in such a way that the rows of df_new go on top of df_old's rows for each category.
The code is:
(pd.concat([df_new,df_old], sort=False).sort_values('category', ascending=False, kind='mergesort'))
Now, the problem is that some rows with the same index, category, and text (all matching in the same row, like [0, spam, you win much money]) end up duplicated, and I want to avoid this.
The expected output should be:
df_concat:
index category text
14 spam London is the capital of Canada
0 spam you win much money
1 spam you are the winner of the game
15 not_spam no more raining in winter
25 not_spam the soccer game plays on HBO
2 not_spam the weather in Chicago is nice
3 not_spam pizza is an Italian food
31 neutral construction will be done
4 neutral we have a party now
5 neutral they are driving to downtown
I tried this and this, but those remove either the category or the text.
To remove duplicates on specific column(s), use the subset parameter of drop_duplicates:
df.drop_duplicates(subset=['index', 'category', 'text'], keep='first')
Try concat + sort_values:
res = pd.concat((new_df, old_df)).drop_duplicates()
res = res.sort_values(by=['category'], key=lambda x: x.map({'spam' : 0, 'not_spam' : 1, 'neutral': 2}))
print(res)
Output
index category text
0 0 spam you win much money
1 14 spam London is the capital of Canada
1 1 spam you are the winner of the game
2 15 not_spam no more raining in winter
3 25 not_spam the soccer game plays on HBO
2 2 not_spam the weather in Chicago is nice
3 3 not_spam pizza is an Italian food
4 31 neutral construction will be done
4 4 neutral we have a party now
5 5 neutral they are driving to downtown
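If you would rather not hard-code the category order, a possible variant derives it from the order in which categories first appear in new_df:
order = {cat: i for i, cat in enumerate(new_df['category'].unique())}
res = res.sort_values(by='category', key=lambda x: x.map(order))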
Your code seems right; try adding this to the concat result and it will remove your duplicates:
# these first lines create a new 'index' column and help the rest of the code work correctly
df_new = df_new.reset_index()
df_old = df_old.reset_index()
df_concat = (pd.concat([df_new, df_old], sort=False).sort_values('category', ascending=False, kind='mergesort'))
df_concat.drop_duplicates()
If you want to reindex it, you can of course do so without changing the 'index' column:
df_concat.drop_duplicates(ignore_index=True)
You can always use combine_first:
out = df_new.combine_first(df_old)
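combine_first aligns the two frames on their index and takes values from df_new first, filling gaps from df_old. A minimal sketch, assuming the index column should be the alignment key (you would still need the category sort afterwards):
out = (df_new.set_index('index')
             .combine_first(df_old.set_index('index'))
             .reset_index())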
I have a dataframe df which contains company names that I need to neatly format. The names are already in title case:
Company Name
0 Visa Inc
1 Msci Inc
2 Coca Cola Inc
3 Pnc Bank
4 Aig Corp
5 Td Ameritrade
6 Uber Inc
7 Costco Inc
8 New York Times
Since many of the companies go by an acronym or an abbreviation (rows 1, 3, 4, 5), I want only the first string in those company names to be uppercase, like so:
Company Name
0 Visa Inc
1 MSCI Inc
2 Coca Cola Inc
3 PNC Bank
4 AIG Corp
5 TD Ameritrade
6 Uber Inc
7 Costco Inc
8 New York Times
I know I can't get 100% accurate replacement, but I believe I can get close by uppercasing only the first string if:
it's 4 or fewer characters
and the first string is not a word in the dictionary
How can I achieve this with something like: df['Company Name'] = df['Company Name'].replace()?
You can use the enchant module to find out whether a word is a dictionary word or not, although you are still going to have some off results (e.g. Uber).
Here is the code I came up with; sorry for the terrible variable names.
import enchant
import pandas as pd

def main():
    d = enchant.Dict("en_US")
    listofcompanys = ['Msci Inc',
                      'Coca Cola Inc',
                      'Pnc Bank',
                      'Aig Corp',
                      'Td Ameritrade',
                      'Uber Inc',
                      'Costco Inc',
                      'New York Times']
    dataframe = pd.DataFrame(listofcompanys, columns=['Company Name'])
    for index, name in dataframe.iterrows():
        words = name['Company Name'].split()
        is_word = d.check(words[0])
        if not is_word:
            # write back with .at, since the row returned by iterrows is a copy
            dataframe.at[index, 'Company Name'] = ' '.join([words[0].upper()] + words[1:])
    print(dataframe)

if __name__ == '__main__':
    main()
Output for this was:
Company Name
0 MSCI Inc
1 Coca Cola Inc
2 PNC Bank
3 AIG Corp
4 TD Ameritrade
5 UBER Inc
6 Costco Inc
7 New York Times
Here's a working solution which makes use of an English word list. It's not accurate for Td and Uber, but as you said, it will be hard to get 100% accurate.
url = 'https://raw.githubusercontent.com/dwyl/english-words/master/words_alpha.txt'
words = set(pd.read_csv(url, header=None)[0])
w1 = df['Company Name'].str.split()
m1 = ~w1.str[0].str.lower().isin(words)  # first word is not an English word
m2 = w1.str[0].str.len().le(4)           # first word is <= 4 characters
df.loc[m1 & m2, 'Company Name'] = w1.str[0].str.upper() + ' ' + w1.str[1:].str.join(' ')
Company Name
0 Visa Inc
1 MSCI Inc
2 Coca Cola Inc
3 PNC Bank
4 AIG Corp
5 Td Ameritrade
6 UBER Inc
7 Costco Inc
8 New York Times
Note: I also tried this with the nltk package, but apparently the nltk.corpus.words corpus is far from a complete list of English words.
You can first separate the first word from the rest, then filter those first words based on your logic:
company_list = ['Visa']
s = df['Company Name'].str.extract(r'^(\S+)(.*)')
mask = s[0].str.len().le(4) & (~s[0].isin(company_list))
df['Company Name'] = s[0].mask(mask, s[0].str.upper()) + s[1]
Output (notice that NEW in New York gets changed as well):
Company Name
0 Visa Inc
1 MSCI Inc
2 COCA Cola Inc
3 PNC Bank
4 AIG Corp
5 TD Ameritrade
6 UBER Inc
7 Costco Inc
8 NEW York Times
This will get the first word from the string and uppercase it only for those company names that are included in the include list:
import pandas as pd
import numpy as np
company_name = ['Msci Inc', 'Coca Cola Ins', 'Pnc Bank', 'Visa Inc']
include = ['Msci', 'Pnc']
df = pd.DataFrame(company_name)
df.rename(columns={0: 'Company Name'}, inplace=True)
df['Company Name'] = df['Company Name'].apply(
    lambda x: x.split()[0].upper() + x[len(x.split()[0]):]
    if x.split()[0].strip() in include else x)
df['Company Name']
Output:
0 MSCI Inc
1 Coca Cola Ins
2 PNC Bank
3 Visa Inc
Name: Company Name, dtype: object
A manual workaround could be appending words like "uber" to the word list:
from nltk.corpus import words
dict_words = words.words()
dict_words.append('uber')
Then apply the check to each row (this returns the corrected names as a Series):
df.apply(lambda x : x['Company Name'].replace(x['Company Name'].split(" ")[0].strip(), x['Company Name'].split(" ")[0].strip().upper())
if len(x['Company Name'].split(" ")[0].strip()) <= 4 and x['Company Name'].split(" ")[0].strip().lower() not in dict_words
else x['Company Name'],axis=1)
Output:
0 Visa Inc
1 Msci Inc
2 Coca Cola Inc
3 PNC Bank
4 AIG Corp
5 TD Ameritrade
6 Uber Inc
7 Costco Inc
8 New York Times
Download the nltk word list by running:
import nltk
nltk.download('words')
Demo:
from nltk.corpus import words
"new" in words.words()
Output:
False
I have this CSV:
Name Species Country
0 Hobbes Tiger U.S.
1 SherKhan Tiger India
2 Rescuer Mouse Australia
3 Mickey Mouse U.S.
And I have a second CSV:
Continent Countries Unnamed: 2 Unnamed: 3 Unnamed: 4
0 North America U.S. Mexico Guatemala Honduras
1 Asia India China Nepal NaN
2 Australia Australia NaN NaN NaN
3 Africa South Africa Botswana Zimbabwe NaN
I want to use the second CSV to update the first file so that the output is:
Name Species Country
0 Hobbes Tiger North America
1 SherKhan Tiger Asia
2 Rescuer Mouse Australia
3 Mickey Mouse North America
So far this is the closest I have gotten:
import pandas as pd
# Import my data.
data = pd.read_csv('Continents.csv')
Animals = pd.read_csv('Animals.csv')
Animalsdf = pd.DataFrame(Animals)
# Transpose my data from horizontal to vertical.
data1 = data.T
# Clean my data and update my header with the first column.
data1.columns = data1.iloc[0]
# Drop now duplicated data.
data1.drop(data1.index[[0]], inplace = True)
# Build the dictionary.
data_dict = {col: list(data1[col]) for col in data1.columns}
# Update my csv.
Animals['Country'] = Animals['Country'].map(data_dict)
print(Animals)
This results in a dictionary that has lists as its values, and therefore I just get NaN out:
Name Species Country
0 Hobbes Tiger NaN
1 SherKhan Tiger NaN
2 Rescuer Mole [Australia, nan, nan, nan]
3 Mickey Mole NaN
I've tried flipping from lists to tuples, and that doesn't work. I have tried multiple ways to build the dictionary, etc. I am just out of ideas.
Sorry if the code is super junky. I'm learning this as I go. Figured a project was the best way to learn a new language. Didn't think it would be this difficult.
Any suggestions would be appreciated. I need to be able to use the code so that when I get multiple reference CSVs, I can update my data with new keys. Hope this is clear.
Thanks in advance.
One intuitive solution is to use a dictionary mapping. Data from @WillMonge.
pd.DataFrame.itertuples works by producing namedtuples, but their fields may also be accessed with numeric indexers.
# create mapping dictionary
d = {}
for row in df.itertuples():
d.update(dict.fromkeys(filter(None, row[2:]), row[1]))
# apply mapping dictionary
data['Continent'] = data['Country'].map(d)
print(data)
Country name Continent
0 China 2 Asia
1 China 5 Asia
2 Canada 9 America
3 Egypt 0 Africa
4 Mexico 3 America
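A similar mapping could also be built by reusing the data_dict and Animals frame from the question: invert {continent: [countries]} into {country: continent} before calling map. A sketch:
d = {country: continent
     for continent, countries in data_dict.items()
     for country in countries
     if pd.notna(country)}                      # skip the NaN padding values
Animals['Country'] = Animals['Country'].map(d)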
You should use DictReader and DictWriter. You can learn how to use them at the link below:
https://docs.python.org/2/library/csv.html
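A minimal sketch of that idea, assuming the two files from the question ('Continents.csv' and 'Animals.csv'; the output file name is hypothetical):
import csv

# Build a country -> continent mapping from the wide continents file.
country_to_continent = {}
with open('Continents.csv', newline='') as f:
    reader = csv.reader(f)
    next(reader)                                  # skip the header row
    for row in reader:
        continent, countries = row[0], row[1:]
        for country in countries:
            if country:                           # ignore empty cells
                country_to_continent[country] = continent

# Rewrite the animals file, replacing each country with its continent.
with open('Animals.csv', newline='') as src, open('Animals_updated.csv', 'w', newline='') as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row['Country'] = country_to_continent.get(row['Country'], row['Country'])
        writer.writerow(row)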
Here is an update of your code; I have tried to add comments to explain:
import pandas as pd
# Read data in (read_csv also returns a DataFrame directly)
data = pd.DataFrame({'name': [2, 5, 9, 0, 3], 'Country': ['China', 'China', 'Canada', 'Egypt', 'Mexico']})
df = pd.DataFrame({'Continent': ['Asia', 'America', 'Africa'],
                   'Country1': ['China', 'Mexico', 'Egypt'],
                   'Country2': ['Japan', 'Canada', None],
                   'Country3': ['Thailand', None, None]})
# Unstack to get a row for each country (then remove the Continent entries)
premap_df = pd.DataFrame(df.unstack().drop('Continent')).dropna().reset_index()
premap_df.columns = ['_', 'continent_key', 'Country']
# Merge the continent back based on the continent_key (old row number)
map_df = pd.merge(premap_df, df[['Continent']], left_on='continent_key', right_index=True)[['Continent', 'Country']]
# Merge with the data now
pd.merge(data, map_df, on='Country')
For further reference, Wes McKinney's Python for Data Analysis (here is a pdf version I found online) is one of the best books out there for learning pandas.
You can always create buckets and run conditions:
import pandas as pd
import numpy as np
df = pd.DataFrame({'Name':['Hobbes','SherKhan','Rescuer','Mickey'], 'Species':['Tiger','Tiger','Mouse','Mouse'],'Country':['U.S.','India','Australia','U.S.']})
North_America = ['U.S.', 'Mexico', 'Guatemala', 'Honduras']
Asia = ['India', 'China', 'Nepal']
Australia = ['Australia']
Africa = ['South Africa', 'Botswana', 'Zimbabwe']
conditions = [
    (df['Country'].isin(North_America)),
    (df['Country'].isin(Asia)),
    (df['Country'].isin(Australia)),
    (df['Country'].isin(Africa))
]
choices = [
    'North America',
    'Asia',
    'Australia',
    'Africa'
]
df['Continent'] = np.select(conditions, choices, default=np.nan)
df
I came across this extremely well explained similar question (Get last "column" after .str.split() operation on column in pandas DataFrame), and used some of the code found there. However, it's not the output that I would like.
raw_data = {
    'category': ['sweet beverage, cola,sugared', 'healthy,salty snacks', 'juice,beverage,sweet', 'fruit juice,beverage', 'appetizer,salty crackers'],
    'product_name': ['coca-cola', 'salted pistachios', 'fruit juice', 'lemon tea', 'roasted peanuts']}
df = pd.DataFrame(raw_data)
The objective is to extract the various categories from each row, and use only the last 2 categories to create a new column. I have this code, which works and gives me the categories of interest as a new column.
df['my_col'] = df.category.apply(lambda s: s.split(',')[-2:])
Output:
my_col
[cola,sugared]
[healthy,salty snacks]
[beverage,sweet]
...
However, it appears as a list. How can I have it not appear as a list? Can this be achieved? Thanks all!
I believe you need str.split, then select the last two items of each list, and finally str.join:
df['my_col'] = df.category.str.split(',').str[-2:].str.join(',')
print (df)
category product_name my_col
0 sweet beverage, cola,sugared coca-cola cola,sugared
1 healthy,salty snacks salted pistachios healthy,salty snacks
2 juice,beverage,sweet fruit juice beverage,sweet
3 fruit juice,beverage lemon tea fruit juice,beverage
4 appetizer,salty crackers roasted peanuts appetizer,salty crackers
EDIT:
In my opinion, the pandas str text functions are preferable to apply with pure Python string functions, because they also handle NaNs and None.
import numpy as np

raw_data = {
    'category': [np.nan, 'healthy,salty snacks'],
    'product_name': ['coca-cola', 'salted pistachios']}
df = pd.DataFrame(raw_data)
df['my_col'] = df.category.str.split(',').str[-2:].str.join(',')
print (df)
category product_name my_col
0 NaN coca-cola NaN
1 healthy,salty snacks salted pistachios healthy,salty snacks
Whereas the apply-based solution fails on the missing value:
df['my_col'] = df.category.apply(lambda s: ','.join(s.split(',')[-2:]))
AttributeError: 'float' object has no attribute 'split'
You can also apply join in the lambda to the result of split:
df['my_col'] = df.category.apply(lambda s: ','.join(s.split(',')[-2:]))
df
Result:
category product_name my_col
0 sweet beverage, cola,sugared coca-cola cola,sugared
1 healthy,salty snacks salted pistachios healthy,salty snacks
2 juice,beverage,sweet fruit juice beverage,sweet
3 fruit juice,beverage lemon tea fruit juice,beverage
4 appetizer,salty crackers roasted peanuts appetizer,salty crackers