Replace double backslash in Pandas CSV file - Python

Python 3.9.5/Pandas 1.1.3
I have a very large csv file with values that look like:
Ac\\Nme Products Inc.
and all the values are different company names with double backslashes in random places throughout.
I'm attempting to get rid of all the double backslashes. It's not working in Pandas, but a simple test against the standalone value just using str.replace does work.
Example:
org = "Ac\\Nme Products Inc."
result = org.replace("\\","")
print(result)
returns AcNme Products Inc. as the output, as I would expect.
However, using Pandas with the names in a csv file:
import pandas as pd
csv_input = pd.read_csv('/Users/me/file.csv')
csv_input.replace("\\", "")
csv_input.to_csv('/Users/me/file_revised.csv', index=False)
When I open the new file_revised.csv file, the value still shows as Ac\\Nme Products Inc.
EDIT 1:
Here is a snippet of file.csv as requested:
id,company_name,address,country
1000566,A1 Comm\\Nodity Traders,LEVEL 28 THREE PACIFIC PLACE 1 QUEEN'S RD EAST HK,TH
1000579,"A2 A Mf\\g. Co., Ltd.",53 YONG-AN 2ND ST. TAINAN TAIWAN,CA
1000585,"A2 Z Logisitcs Indi\\Na Pvt., Ltd.",114A/1 1ST FLOOR SOUTH RAJA ST TUTICORIN - 628 001 TAMILNADU - INDIA,PE

Pandas doesn't have a DataFrame-level string operation, but you can update each column through its .str accessor:
for col in csv_input.columns:
    if col == 'that_int_column':  # skip non-string columns
        continue
    csv_input[col] = csv_input[col].str.replace(r"\\N", "")
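Alternatively, DataFrame.replace can match substrings in every column at once when regex=True is passed; note that the result must be assigned back, since replace is not in-place, which is likely why the original attempt appeared to do nothing. A minimal sketch (it assumes the cells contain a literal \\N sequence, as in the sample):
import pandas as pd

csv_input = pd.read_csv('/Users/me/file.csv')
# regex=True makes replace() match substrings inside each cell;
# by default it only replaces values that match a whole cell.
csv_input = csv_input.replace(r'\\N', '', regex=True)
csv_input.to_csv('/Users/me/file_revised.csv', index=False)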

Related

How do I filter out elements in a column of a data frame based on whether they are in a list?

I'm trying to filter out bogus locations from a column in a data frame. The column is filled with locations taken from tweets, and some of the locations aren't real; I am trying to separate them from the valid locations. Below is the code I have. However, it doesn't produce the right output: it only returns France. I'm hoping someone can identify what I'm doing wrong here, or suggest another way to try. Let me know if I didn't explain it well enough. Also, I assign variables both outside and inside the function for testing purposes.
import pandas as pd
cn_csv = pd.read_csv("~/Downloads/cntry_list.csv") #this is just a list of every country along with respective alpha 2 and alpha 3 codes, see the link below to download csv
country_names = cn_csv['country']
results = pd.read_csv("~/Downloads/results.csv") #this is a dataframe with multiple columns, one being "source location" See edit below that displays data in "Source Location" column
src_locs = results["Source Location"]
locs_to_list = list(src_locs)
new_list = [entry.split(', ') for entry in locs_to_list]
def country_name_check(input_country_list):
    cn_csv = pd.read_csv("~/Downloads/cntrylst.csv")
    country_names = cn_csv['country']
    results = pd.read_csv("~/Downloads/results.csv")
    src_locs = results["Source Location"]
    locs_to_list = list(src_locs)
    new_list = [entry.split(', ') for entry in locs_to_list]
    valid_names = []
    tobe_checked = []
    for i in new_list:
        if i in country_names.values:
            valid_names.append(i)
        else:
            tobe_checked.append(i)
    return valid_names, tobe_checked
print(country_name_check(src_locs))
EDIT 1: Adding the link for the cntry_list.csv file. I downloaded the csv of the table data. https://worldpopulationreview.com/country-rankings/country-codes
Since I am unable to share a file on here, here is the "Source Location" column data:
Source Location
She/her
South Carolina, USA
Torino
England, UK
trying to get by
Bemidiji, MN
St. Paul, MN
Stockport, England
Liverpool, England
EH7
DLR - LAX - PDX - SEA - GEG
Barcelona
Curitiba
kent
Paris, France
Moon
Denver, CO
France
If your goal is to find and list country names, both valid and not, you may filter the initial results DataFrame:
# make list from unique values of Source Location that match values from country_names
valid_names = list(results[results['Source Location']
                           .isin(country_names)]['Source Location']
                   .unique())
# with ~ select unique values that don't match country_names values
tobe_checked = list(results[~results['Source Location']
                            .isin(country_names)]['Source Location']
                    .unique())
This simpler approach should resolve your unwanted result of only France being returned. The problem in your original code may be the filename mismatch when reading the country list (cntry_list.csv outside the function vs. cntrylst.csv inside it), as indicated by ScottC.
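For reference, here is a tiny self-contained demo of the isin()/~isin() split, using a few of the posted locations (the three-country list is invented for illustration):
import pandas as pd

results = pd.DataFrame({'Source Location':
    ['South Carolina, USA', 'Torino', 'France', 'Moon', 'France']})
country_names = pd.Series(['France', 'Italy', 'Spain'])

mask = results['Source Location'].isin(country_names)
valid_names = list(results[mask]['Source Location'].unique())    # ['France']
tobe_checked = list(results[~mask]['Source Location'].unique())  # ['South Carolina, USA', 'Torino', 'Moon']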

Downloading key ratios for various tickers with the Python library FundamentalAnalysis

I'm trying to download key financial ratios from Yahoo Finance via the FundamentalAnalysis library. It's pretty easy for a single ticker, but I have a df with tickers and names:
Ticker Company
0 A Agilent Technologies Inc.
1 AA ALCOA CORPORATION
2 AAC AAC Holdings Inc
3 AAL AMERICAN AIRLINES GROUP INC
4 AAME Atlantic American Corp.
I then tried to use a for-loop to download the ratios for every ticker with fa.ratios().
for i in range(3):
    i = 0
    i = i + 1
    Ratios = fa.ratios(tickers["Ticker"][i])
So basically it should download all ratios for one ticker, then the second, and so on. I also tried converting the df into a list, but that didn't work either. If I put them in a list manually like:
Symbol = ["TSLA" , "AAPL" , "MSFT"]
it works somehow. But as I want to work with data from 1000+ tickers, I don't want to type all of them into a list manually.
Maybe this question has already been answered elsewhere, in that case sorry, but I've not been able to find a thread that helps me. Any ideas?
You can get symbols using
symbols = df['Ticker'].to_list()
and then you can use a for-loop without range():
ratios = dict()
for s in symbols:
    ratios[s] = fa.ratios(s)
print(ratios)
Because some symbols may not return ratios, you should use try/except.
Minimal working example. I use io.StringIO only to simulate a file.
import FundamentalAnalysis as fa
import pandas as pd
import io
text = '''Ticker  Company
A       Agilent Technologies Inc.
AA      ALCOA CORPORATION
AAC     AAC Holdings Inc
AAL     AMERICAN AIRLINES GROUP INC
AAME    Atlantic American Corp.'''
df = pd.read_csv(io.StringIO(text), sep=r'\s{2,}', engine='python')  # regex separator needs the python engine
symbols = df['Ticker'].to_list()
#symbols = ["TSLA" , "AAPL" , "MSFT"]
print(symbols)
ratios = dict()
for s in symbols:
    try:
        ratios[s] = fa.ratios(s)
    except Exception as ex:
        print(s, ex)
for s, ratio in ratios.items():
    print(s, ratio)
EDIT: it seems fa.ratios() returns DataFrames, and if you keep them in a list then you can concatenate them all into one DataFrame
ratios = list()  # list instead of dictionary
for s in symbols:
    try:
        ratios.append(fa.ratios(s))  # append to list
    except Exception as ex:
        print(s, ex)
df = pd.concat(ratios, axis=1) # convert list of DataFrames to one DataFrame
print(df.columns)
print(df)
Doc: pandas.concat()
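For illustration, here is a toy sketch of how pd.concat(axis=1) stitches the per-ticker frames together (the two stand-in frames are invented, not real fa.ratios() output):
import pandas as pd

# Stand-in 'ratio' frames, one column per ticker, sharing the same index.
tsla = pd.DataFrame({'TSLA': [1.2, 3.4]}, index=['currentRatio', 'quickRatio'])
aapl = pd.DataFrame({'AAPL': [2.1, 4.3]}, index=['currentRatio', 'quickRatio'])

df = pd.concat([tsla, aapl], axis=1)  # columns TSLA and AAPL, aligned on the index
print(df)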

Convert rows of text into pandas structure

I have thousands of rows in a list like the one below that I would like to convert into a pandas table consisting of different columns.
2018-12-03 21:15:24 Sales:120 ID:534343 North America
2018-12-03 21:15:27 Sales:65 ID:534344 Europe
Ideally I would like to create a pandas structure with the following columns: Date, Sale, ID, Region, and then fill it with the matching values.
E.g. so in the first row I have sales = 120, ID = 534343, region = North America and date = 2018-12-03 21:15:24.
Given that I have thousands of rows, what code could make this work?
Supposing your list is in a file, first read it into a string (or into a list, in which case the following code will differ slightly) and then apply the parsing code.
To read into a string:
with open('/file/path/myfile.txt','r') as f:
    s = f.read()
Code for parsing:
import re
import pandas as pd
s = """2018-12-03 21:15:24 Sales:120 ID:534343 North America
2018-12-03 21:15:27 Sales:65 ID:534344 Europe"""
sales_re = "Sales:([0-9]+)"
id_re = "ID:([0-9]+)"
lst = []
for line in s.split('\n'):
    date = line[0:19]
    sale = re.search(sales_re, line).groups()[0]
    id = re.search(id_re, line).groups()[0]
    region = line[line.rfind(":")+1+len(id)+1:]  # search from last ":", add 1 to go past ":" and 1 to skip the space
    x = [date, sale, id, region]
    lst.append(x)
df = pd.DataFrame(lst)
df.columns = ['date', 'sale', 'id', 'region']
In the example above, I assume everything is loaded into a string. Then I use regular expressions to extract the harder parts of each line and append everything to a list. Finally, I use the pandas.DataFrame constructor to convert that list into a dataframe.
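As a more compact alternative, each whole line can be captured with a single regex and pandas' str.extract, which turns named groups directly into columns (a sketch that assumes every line follows the exact layout shown above):
import pandas as pd

s = '''2018-12-03 21:15:24 Sales:120 ID:534343 North America
2018-12-03 21:15:27 Sales:65 ID:534344 Europe'''

pattern = (r'(?P<date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) '
           r'Sales:(?P<sale>\d+) ID:(?P<id>\d+) (?P<region>.+)')
# str.extract returns one column per named capture group
df = pd.Series(s.split('\n')).str.extract(pattern)
print(df)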

Fuzzy Match columns of Different Dataframe

Background
I have 2 data frames which have no common key on which I can merge them. Both dfs have a column that contains an "entity name". One df contains 8000+ entities and the other close to 2000 entities.
Sample Data:
vendor_df=
Name of Vendor City State ZIP
FREDDIE LEES AMERICAN GOURMET SAUCE St. Louis MO 63101
CITYARCHRIVER 2015 FOUNDATION St. Louis MO 63102
GLAXOSMITHKLINE CONSUMER HEALTHCARE St. Louis MO 63102
LACKEY SHEET METAL St. Louis MO 63102
regulator_df =
Name of Entity Committies
LACKEY SHEET METAL Private
PRIMUS STERILIZER COMPANY LLC Private
HELGET GAS PRODUCTS INC Autonomous
ORTHOQUEST LLC Governmant
Problem Stmt:
I have to fuzzy match the entities of these two columns (Name of vendor & Name of Entity) and get a score, i.e. I need to know whether the 1st value of dataframe 1 (vendor_df) matches any of the 2000 entities of dataframe 2 (regulator_df).
StackOverflow Links I checked:
fuzzy match between 2 columns (Python)
create new column in dataframe using fuzzywuzzy
Apply fuzzy matching across a dataframe column and save results in a new column
Code
import pandas as pd
from fuzzywuzzy import fuzz
from fuzzywuzzy import process
vendor_df = pd.read_excel('C:\\Users\\40101584\\Desktop\\AUS CUB AML\\Vendors_Sheet.xlsx', sheet_name=0)
regulator_df = pd.read_excel('C:\\Users\\40101584\\Desktop\\AUS CUB AML\\Regulated_Vendors_Sheet.xlsx', sheet_name=0)
compare = pd.MultiIndex.from_product([vendor_df['Name of vendor'],
                                      regulator_df['Name of Entity']]).to_series()

def metrics(tup):
    return pd.Series([fuzz.ratio(*tup),
                      fuzz.token_sort_ratio(*tup)],
                     ['ratio', 'token'])

#compare.apply(metrics) -- Either this works or the below line
result = compare.apply(metrics).unstack().idxmax().unstack(0)
Problems with Above Code:
The code works if the two dataframes are small, but it takes forever when I run it on the complete dataset. The code above is taken from the 3rd link.
Is there a solution that works fast, or at least works with a large dataset?
UPDATE 1
Can the above code be made faster if we pass or hard-code a cutoff score, say 80, so that the series/dataframe is filtered to rows with fuzzy score > 80?
The solution below is faster than what I posted, but if someone has an even faster approach, please tell:
matched_vendors = []
for row in vendor_df.index:
    vendor_name = vendor_df.get_value(row, "Name of vendor")
    for columns in regulator_df.index:
        regulated_vendor_name = regulator_df.get_value(columns, "Name of Entity")
        matched_token = fuzz.partial_ratio(vendor_name, regulated_vendor_name)
        if matched_token > 80:
            matched_vendors.append([vendor_name, regulated_vendor_name, matched_token])
I've implemented the code in Python with parallel processing, which is much faster than serial computation. Furthermore, only the computations where a fuzzy metric score exceeds a threshold are performed. Please see the link below for the code:
https://github.com/ankitcoder123/Important-Python-Codes/blob/main/Faster%20Fuzzy%20Match%20between%20two%20columns/Fuzzy_match.py
Version compatibility:
pandas version :: 1.1.5,
numpy version :: 1.19.5,
fuzzywuzzy version :: 1.1.0,
joblib version :: 0.18.0
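If adding a dependency is an option, the rapidfuzz library (an assumption on my part; it is not used in the original post) offers a vectorized, multithreaded equivalent of the nested loop above via process.cdist. A sketch with the same 80 cutoff:
import numpy as np
from rapidfuzz import fuzz, process

vendors = vendor_df['Name of vendor'].tolist()
entities = regulator_df['Name of Entity'].tolist()

# Score every vendor against every entity in one call; scores below the
# cutoff come back as 0, and workers=-1 uses all CPU cores.
scores = process.cdist(vendors, entities,
                       scorer=fuzz.partial_ratio, score_cutoff=80, workers=-1)

rows, cols = np.nonzero(scores)  # index pairs that cleared the cutoff
matched_vendors = [[vendors[r], entities[c], scores[r, c]]
                   for r, c in zip(rows, cols)]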
In my case I also needed to look only at scores above 80, so I modified your code as per my use case. Hope it helps.
compare = compare.apply(metrics)
compare_80 = compare[(compare['ratio'] > 80) & (compare['token'] > 80)]

pandas read_csv ignore separator in last column

I have a file with the following structure (first row is the header, filename is test.dat):
ID_OBS LAT LON ALT TP TO LT_min LT_max STATIONNAME
ALT_NOA_000 82.45 -62.52 210.0 FM 0 0.0 24.0 Alert, Nunavut, Canada
How do I instruct pandas to read the entire station name (in this example, Alert, Nunavut, Canada) as a single element? I use delim_whitespace=True in my code, but that does not work, since the station name contains whitespace characters.
Running:
import pandas as pd
test = pd.read_csv('./test.dat', delim_whitespace=True, header=1)
print(test.to_string())
Produces:
ID_OBS LAT LON ALT TP TO LT_min LT_max STATIONNAME
ALT_NOA_000 82.45 -62.52 210.0 FM 0 0.0 24.0 Alert, Nunavut, Canada
Quickly reading through the tutorials did not help. What am I missing here?
I often approach these by writing my own little parser. In general there are ways to bend pandas to your will, but I find this way is often easier:
Code:
import re
import pandas as pd

def parse_my_file(filename):
    with open(filename) as f:
        for line in f:
            yield re.split(r'\s+', line.strip(), maxsplit=8)
# build the generator
my_parser = parse_my_file('test.dat')
# first element returned is the columns
columns = next(my_parser)
# build the data frame
df = pd.DataFrame(my_parser, columns=columns)
print(df)
Results:
ID_OBS LAT LON ALT TP TO LT_min LT_max \
0 ALT_NOA_000 82.45 -62.52 210.0 FM 0 0.0 24.0
STATIONNAME
0 Alert, Nunavut, Canada
Your pasted sample file is a bit ambiguous: it's not possible to tell by eye if something that looks like a few spaces is a tab or not, for example.
In general, though, note that plain old Python is more expressive than Pandas or CSV modules (Pandas's strength is elsewhere). E.g., there are even Python modules for recursive-descent parsers, which Pandas obviously lacks. You can use regular Python to manipulate the file into a form that is easier for Pandas to parse. For example:
>>> import re
>>> ['#'.join(re.split(r'[ \t]+', l.strip(), maxsplit=8)) for l in open('stuff.tsv') if l.strip()]
['ID_OBS#LAT#LON#ALT#TP#TO#LT_min#LT_max#STATIONNAME',
 'ALT_NOA_000#82.45#-62.52#210.0#FM#0#0.0#24.0#Alert, Nunavut, Canada']
This changes the delimiter to '#'; if you write the result back to a file, you can then parse it using delimiter='#'.
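A minimal sketch of that round trip (file names here are placeholders):
import re
import pandas as pd

# Re-delimit with '#' so the comma-containing station names survive.
with open('stuff.tsv') as f:
    lines = ['#'.join(re.split(r'[ \t]+', l.strip(), maxsplit=8))
             for l in f if l.strip()]
with open('stuff_hash.csv', 'w') as f:
    f.write('\n'.join(lines))

df = pd.read_csv('stuff_hash.csv', delimiter='#')
print(df['STATIONNAME'])  # 'Alert, Nunavut, Canada' stays one field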
