Remove Duplicates from csv - python

I have a csv/txt file with the following content:
Mumbai 2
Pune 6
Bangalore 8
Pune 10
Mumbai 8
and I want this in output file :
Mumbai 2,8
Pune 6,10
Bangalore 8
Note: don't use any Python modules or packages.

Here is a possible solution:
import re

linepat = re.compile(r'''
    ^ \s*
    (?:
        (
            [A-Za-z] \S*
            (?: \s+ [A-Za-z] \S* )*
        ) \s+ ( [0-9]+ )
        \s* $
    )
    |
    (.*)
''', re.VERBOSE)

filtered = {}

# fill `filtered` from `duplicates.csv`
with open('duplicates.csv', 'r') as f:
    for lnum, line in enumerate(f, start=1):
        city, number, invalid = linepat.match(line).groups()
        if not city:
            invalid = invalid.strip()
            if invalid:
                raise Exception(f'line {lnum} has a wrong format:\n{line}')
        else:
            city = ' '.join(city.split())
            if city not in filtered:
                filtered[city] = set()
            filtered[city].add(int(number))

# write `filtered` to `without_duplicates.csv`
with open('without_duplicates.csv', 'w') as f:
    for city, numbers in filtered.items():
        numbers = ','.join(str(num) for num in sorted(numbers))
        f.write(f'{city} {numbers}\n')
# Mumbai 2
# Pune 6
# New York 15
#
# Bangalore 8
# Pune 10
# Mumbai 8
# New York 1
#
# -->
#
# Mumbai 2,8
# Pune 6,10
# New York 1,15
# Bangalore 8
It is not clear from your example how the numbers in each output line should be sorted. If you want them sorted by first occurrence in the input file, use a list instead of a set: fetch citynumbers = filtered[city] and append with if number not in citynumbers: citynumbers.append(number), and drop the later sorted().
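A minimal sketch of that order-preserving variant (using an inline list of rows instead of the file, for brevity):

```python
# order-preserving variant: keep a list per city instead of a set
filtered = {}
rows = [('Mumbai', 2), ('Pune', 6), ('Bangalore', 8), ('Pune', 10), ('Mumbai', 8), ('Mumbai', 2)]
for city, number in rows:
    citynumbers = filtered.setdefault(city, [])
    if number not in citynumbers:
        citynumbers.append(number)  # first occurrence wins; duplicates are skipped

lines = [f"{city} {','.join(str(n) for n in numbers)}" for city, numbers in filtered.items()]
print(lines)  # ['Mumbai 2,8', 'Pune 6,10', 'Bangalore 8']
```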
The whitespace that separates a city name from its number could otherwise be mistaken for part of the city name; therefore the regex requires that every part of the city name starts with [A-Za-z]. A cleaner approach is to require that whitespace inside city names is replaced or escaped.
filtered in the code example could also be a defaultdict(set).
For many use cases, the csv module is the simpler approach.
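Since the question forbids modules, a module-free sketch is also possible. This minimal version assumes one city name plus one trailing number per line and skips the format validation the regex version provides (the sample input is written out first so the snippet is self-contained):

```python
# write the sample input from the question
with open('duplicates.csv', 'w') as f:
    f.write('Mumbai 2\nPune 6\nBangalore 8\nPune 10\nMumbai 8\n')

filtered = {}
with open('duplicates.csv') as f:
    for line in f:
        parts = line.split()
        if not parts:  # skip blank lines
            continue
        # everything except the last token is the city name
        city, number = ' '.join(parts[:-1]), int(parts[-1])
        filtered.setdefault(city, set()).add(number)

with open('without_duplicates.csv', 'w') as f:
    for city, numbers in filtered.items():
        f.write(f"{city} {','.join(str(n) for n in sorted(numbers))}\n")
```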

Related

Remove and replace multiple commas in string

I have this dataset
df = pd.DataFrame({'name':{0: 'John,Smith', 1: 'Peter,Blue', 2:'Larry,One,Stacy,Orange' , 3:'Joe,Good' , 4:'Pete,High,Anne,Green'}})
yielding:
name
0 John,Smith
1 Peter,Blue
2 Larry,One,Stacy,Orange
3 Joe,Good
4 Pete,High,Anne,Green
I would like to:
remove commas (replace them by one space)
wherever I have 2 persons in one cell, insert the "&" symbol after the first person's family name and before the second person's name.
Desired output:
name
0 John Smith
1 Peter Blue
2 Larry One & Stacy Orange
3 Joe Good
4 Pete High & Anne Green
Tried this code below, but it simply removes commas. I could not find how to insert the "&" symbol in the same code.
df['name']= df['name'].str.replace(r',', '', regex=True)
Disclaimer: all names in this table are fictitious. No identification with actual persons (living or deceased) is intended or should be inferred.
I would do it the following way:
import pandas as pd
df = pd.DataFrame({'name':{0: 'John,Smith', 1: 'Peter,Blue', 2:'Larry,One,Stacy,Orange' , 3:'Joe,Good' , 4:'Pete,High,Anne,Green'}})
df['name'] = df['name'].str.replace(',',' ').str.replace(r'(\w+ \w+) ', r'\1 & ', regex=True)
print(df)
gives output
name
0 John Smith
1 Peter Blue
2 Larry One & Stacy Orange
3 Joe Good
4 Pete High & Anne Green
Explanation: first replace the commas with spaces, then use replace again with a regex. That regex matches two words separated by a space and followed by a space, and replaces the match with the content of the capturing group (everything but the trailing space), followed by a space, the & character, and a space.
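The same two-step idea can be sketched on a plain string, outside pandas:

```python
import re

s = 'Larry,One,Stacy,Orange'
step1 = s.replace(',', ' ')                     # 'Larry One Stacy Orange'
step2 = re.sub(r'(\w+ \w+) ', r'\1 & ', step1)  # insert ' & ' after the first pair
print(step2)  # Larry One & Stacy Orange
```

Cells with only one person (e.g. 'John,Smith') are left without a &, because the pattern requires a trailing space after the two captured words.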
With a single regex replacement (note that regex=True is required when the replacement is a callable):
df['name'].str.replace(r',([^,]+)(,)?', lambda m: f" {m.group(1)}{' & ' if m.group(2) else ''}", regex=True)
0 John Smith
1 Peter Blue
2 Larry One & Stacy Orange
3 Joe Good
4 Pete High & Anne Green
This should work:
import re

def separate_names(original_str):
    spaces = re.sub(r',([^,]*(?:,|$))', r' \1', original_str)
    return spaces.replace(',', ' & ')

df['spaced'] = df.name.map(separate_names)
df
I created a function called separate_names which replaces the odd-numbered commas with spaces using a regex. The remaining (even-numbered) commas are then replaced by ' & ' using str.replace. Finally I used map to apply separate_names to each row, which produces the desired names in the new spaced column.
In the replace statement you should replace the comma with a space, not an empty string. Put a space between the quotes, so you have ' ':
df['name'] = df['name'].str.replace(r',', ' ', regex=True)

Removing signs and repeating numbers

I want to remove all signs from my dataframe to leave it in either one of the two formats: 100-200 or 200
So the salaries should either have a single hyphen between them if a range of salaries if given, otherwise a clean single number.
I have the following data:
import pandas as pd
import re
df = {'salary':['£26,768 - £30,136/annum Attractive benefits package',
'£26,000 - £28,000/annum plus bonus',
'£21,000/annum',
'£26,768 - £30,136/annum Attractive benefits package',
'£33/hour',
'£18,500 - £20,500/annum Inc Bonus - Study Support + Bens',
'£27,500 - £30,000/annum £27,500 to £30,000 + Study',
'£35,000 - £40,000/annum',
'£24,000 - £27,000/annum Study Support (ACCA / CIMA)',
'£19,000 - £24,000/annum Study Support',
'£30,000 - £35,000/annum',
'£44,000 - £66,000/annum + 15% Bonus + Excellent Benefits. L',
'£75 - £90/day £75-£90 Per Day']}
data = pd.DataFrame(df)
Here's what I have tried to remove some of the signs:
salary = []
for i in data.salary:
    space = re.sub(" ", '', i)
    lower = re.sub("[a-z]", '', space)
    upper = re.sub("[A-Z]", '', lower)
    bracket = re.sub("/", '', upper)
    comma = re.sub(",", '', bracket)
    plus = re.sub("\+", '', comma)
    percentage = re.sub("\%", '', plus)
    dot = re.sub("\.", '', percentage)
    bracket1 = re.sub("\(", '', dot)
    bracket2 = re.sub("\)", '', bracket1)
    salary.append(bracket2)
Which gives me:
'£26768-£30136',
'£26000-£28000',
'£21000',
'£26768-£30136',
'£33',
'£18500-£20500-',
'£27500-£30000£27500£30000',
'£35000-£40000',
'£24000-£27000',
'£19000-£24000',
'£30000-£35000',
'£44000-£6600015',
'£75-£90£75-£90'
However, I have some repeating numbers, essentially I want anything after the first range of values removed, and any sign besides the hyphen between the two numbers.
Expected output:
'26768-30136',
'26000-28000',
'21000',
'26768-30136',
'33',
'18500-20500',
'27500-30000',
'35000-40000',
'24000-27000',
'19000-24000',
'30000-35000',
'44000-66000',
'75-90'
Another way using pandas.Series.str.partition with replace:
data["salary"].str.partition("/")[0].str.replace(r"[^\d-]+", "", regex=True)
Output:
0 26768-30136
1 26000-28000
2 21000
3 26768-30136
4 33
5 18500-20500
6 27500-30000
7 35000-40000
8 24000-27000
9 19000-24000
10 30000-35000
11 44000-66000
12 75-90
Name: 0, dtype: object
Explanation:
It assumes you are only interested in the part up to the /; it extracts everything before the /, then removes anything but digits and hyphens.
You can use
data['salary'].str.split('/', n=1).str[0].replace(r'[^\d-]+', '', regex=True)
# 0 26768-30136
# 1 26000-28000
# 2 21000
# 3 26768-30136
# 4 33
# 5 18500-20500
# 6 27500-30000
# 7 35000-40000
# 8 24000-27000
# 9 19000-24000
# 10 30000-35000
# 11 44000-66000
# 12 75-90
Here,
.str.split('/', n=1) - splits into at most two parts at the first / char
.str[0] - gets the first part
.replace(r'[^\d-]+', '', regex=True) - removes all chars other than digits and hyphens.
A more precise solution is to extract the £num(-£num)? pattern and remove all non-digits/hyphens:
data['salary'].str.extract(r'£(\d+(?:,\d+)*(?:\.\d+)?(?:\s*-\s*£\d+(?:,\d+)*(?:\.\d+)?)?)')[0].str.replace(r'[^\d-]+', '', regex=True)
Details:
£ - a literal char
\d+(?:,\d+)*(?:\.\d+)? - one or more digits, followed with zero or more occurrences of a comma and one or more digits and then an optional sequence of a dot and one or more digits
(?:\s*-\s*£\d+(?:,\d+)*(?:\.\d+)?)? - an optional occurrence of a hyphen enclosed with zero or more whitespaces (\s*-\s*), then a £ char, and a number pattern described above.
You can do it in only two regex passes.
First extract the monetary amounts with a regex, then remove the thousands separators, and finally join the output by group, keeping only the first two occurrences per original row.
The advantage of this solution is that it really only extracts monetary digits, not other numbers that may be present if the input is not clean.
(data['salary'].str.extractall(r'£([,\d]+)')[0] # extract £123,456 digits
.str.replace(r'\D', '', regex=True) # remove separator
.groupby(level=0).apply(lambda x: '-'.join(x[:2])) # join first two occurrences
)
output:
0 26768-30136
1 26000-28000
2 21000
3 26768-30136
4 33
5 18500-20500
6 27500-30000
7 35000-40000
8 24000-27000
9 19000-24000
10 30000-35000
11 44000-66000
12 75-90
You can use replace with a pattern and optional capture groups to match the data format, and use those groups in the replacement.
import pandas as pd
df = {'salary':['£26,768 - £30,136/annum Attractive benefits package',
'£26,000 - £28,000/annum plus bonus',
'£21,000/annum',
'£26,768 - £30,136/annum Attractive benefits package',
'£33/hour',
'£18,500 - £20,500/annum Inc Bonus - Study Support + Bens',
'£27,500 - £30,000/annum £27,500 to £30,000 + Study',
'£35,000 - £40,000/annum',
'£24,000 - £27,000/annum Study Support (ACCA / CIMA)',
'£19,000 - £24,000/annum Study Support',
'£30,000 - £35,000/annum',
'£44,000 - £66,000/annum + 15% Bonus + Excellent Benefits. L',
'£75 - £90/day £75-£90 Per Day']}
data = pd.DataFrame(df).salary.replace(
r"^£(\d+)(?:,(\d+))?(?:\s*(-)\s*£(\d+)(?:,(\d+))?)?/.*",
r"\1\2\3\4\5", regex=True
)
print(data)
The pattern matches
^ Start of string
£ Match literally
(\d+) Capture 1+ digits in group 1
(?:,(\d+))? Optionally capture 1+ digits in group 2, preceded by a comma, to match the data format
(?: Non-capturing group to match as a whole
\s*(-)\s*£ Capture - between optional whitespace chars in group 3 and match £
(\d+)(?:,(\d+))? The same as before, now in group 4 and group 5
)? Close the non-capturing group and make it optional
Output
0 26768-30136
1 26000-28000
2 21000
3 26768-30136
4 33
5 18500-20500
6 27500-30000
7 35000-40000
8 24000-27000
9 19000-24000
10 30000-35000
11 44000-66000
12 75-90

Split Column on regex

I really struggle with regex, and I'm hoping for some help.
I have columns that look like this
import pandas as pd
data = {'Location': ['Building A, 100 First St City, State', 'Fire Station # 100, 2 Apple Row, City, State Zip', 'Church , 134 Baker Rd City, State']}
df = pd.DataFrame(data)
Location
0 Building A, 100 First St City, State
1 Fire Station # 100, 2 Apple Row, City, State Zip
2 Church , 134 Baker Rd City, State
I would like to get to the output below by splitting whenever there is a comma followed by a space and then a number. However, I'm running into an issue where I'm removing the number.
Location Name Address
0 Building A 100 First St City, State
1 Fire Station # 100 2 Apple Row, City, State, Zip
2 Church 134 Baker Rd City, State
This is the code I've been using
df['Location Name']= df['Location'].str.split('.,\s\d', expand=True)[0]
df['Address']= df['Location'].str.split('.,\s\d', expand=True)[1]
You can use Series.str.extract:
df[['Location Name','Address']] = df['Location'].str.extract(r'^(.*?),\s(\d.*)', expand=True)
The ^(.*?),\s(\d.*) regex matches
^ - start of string
(.*?) - Group 1 ('Location Name'): any zero or more chars other than line break chars as few as possible
,\s - comma and whitespace
(\d.*) - Group 2 ('Address'): a digit and the rest of the line.
Another simple solution to your problem is to use a positive lookahead. You want to check if there is a number ahead of your pattern, while not including the number in the match. Here's an example of a regex that solves your problem:
\s?,\s(?=\d)
Here, we optionally match a whitespace before the comma (the trailing space after the name), then a comma followed by a whitespace.
The (?= ) is a positive lookahead; in this case we check for a following digit without consuming it. If it matches, the split removes only the comma and surrounding whitespace, leaving the number at the start of the Address.
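A sketch of that split applied to the example frame (str.split accepts a regex=True flag in pandas 1.4+; in older versions a multi-character pattern is treated as a regex by default):

```python
import pandas as pd

data = {'Location': ['Building A, 100 First St City, State',
                     'Fire Station # 100, 2 Apple Row, City, State Zip',
                     'Church , 134 Baker Rd City, State']}
df = pd.DataFrame(data)

# split once at a comma (with optional space before it) that is followed by a digit
df[['Location Name', 'Address']] = df['Location'].str.split(
    r'\s?,\s(?=\d)', n=1, expand=True, regex=True)
print(df[['Location Name', 'Address']])
```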

Python Regex match all occurrences of decimal pattern followed by another pattern

I've done lots of searching, including this SO post, which almost worked for me.
I'm working with a huge string, trying to capture the groups of four digits that appear after a series of decimal patterns AND before an alphanumeric word.
There are other four digit number groups that don't qualify since they have words or other number patterns before them.
EDIT: my string is not multiline, it is just shown here for visual convenience.
For example:
>> my_string = """BEAVER COUNTY 001 0000
1010 BEAVER
2010 BEAVER COUNTY SCH DIST
0.008504
...(more decimals)
0.008508
4010 COUNTY SPECIAL SERVICE DIST NO.1 <---capture this 4010
4040 BEAVER COUNTY
8005 GREENVILLE SOLAR
0.004258
0.008348
...(more decimals)
0.008238
4060 SPECIAL SERVICE DISTRICT NO 7 <---capture this 4060
"""
The ideal re.findall should return:
['4010','4060']
Here are patterns I've tried that are lacking:
re.findall(r'(?=(\d\.\d{6}\s+)(\s+\d{4}\s))', my_string)
# also tried
re.findall("(\s+\d{4}\s+)(?:(?!^\d+\.\d+)[\s\S])*", my_string)
# which gets me a little closer but I'm still not getting what I need.
Thanks in advance!
SINGLE LINE STRING APPROACH:
Just match the float number right before the 4 standalone digits:
r'\d+\.\d+\s+(\d{4})\b'
Python demo:
import re
p = re.compile(r'\d+\.\d+\s+(\d{4})\b')
s = "BEAVER COUNTY 001 0000 1010 BEAVER 2010 BEAVER COUNTY SCH DIST 0.008504 0.008508 4010 COUNTY SPECIAL SERVICE DIST NO.1 4040 BEAVER COUNTY 8005 GREENVILLE SOLAR 0.004258 0.008348 0.008238 4060 SPECIAL SERVICE DISTRICT NO 7"
print(p.findall(s))
# => ['4010', '4060']
ORIGINAL ANSWER: MULTILINE STRING
You may use a regex that will check for a float value on the previous line and then captures the standalone 4 digits on the next line:
re.compile(r'^\d+\.\d+ *[\r\n]+(\d{4})\b', re.M)
Pattern explanation:
^ - start of a line (as re.M is used)
\d+\.\d+ - 1+ digits, . and again 1 or more digits
* - zero or more spaces (replace with [^\S\r\n] to only match horizontal whitespace)
[\r\n]+ - 1 or more LF or CR symbols (to only restrict to 1 linebreak, replace with (?:\r?\n|\r))
(\d{4})\b - Group 1 returned by the re.findall matching 4 digits followed with a word boundary (a non-digit, non-letter, non-_).
Python demo:
import re
p = re.compile(r'^\d+\.\d+ *[\r\n]+(\d{4})\b', re.MULTILINE)
s = "BEAVER COUNTY 001 0000 \n1010 BEAVER \n2010 BEAVER COUNTY SCH DIST \n0.008504 \n...(more decimals)\n0.008508 \n4010 COUNTY SPECIAL SERVICE DIST NO.1 <---capture this 4010\n4040 BEAVER COUNTY \n8005 GREENVILLE SOLAR\n0.004258 \n0.008348 \n...(more decimals)\n0.008238 \n4060 SPECIAL SERVICE DISTRICT NO 7 <---capture this 4060"
print(p.findall(s)) # => ['4010', '4060']
This pattern will help you:
((\d+\.\d+)\s+)+(\d+)\s?(?=\w+)
Use group three, i.e. \3.
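Applied in Python to the single-line version of the string, taking group three:

```python
import re

s = ("BEAVER COUNTY 001 0000 1010 BEAVER 2010 BEAVER COUNTY SCH DIST "
     "0.008504 0.008508 4010 COUNTY SPECIAL SERVICE DIST NO.1 4040 BEAVER COUNTY "
     "8005 GREENVILLE SOLAR 0.004258 0.008348 0.008238 4060 SPECIAL SERVICE DISTRICT NO 7")

# group 3 is the run of digits that follows one or more decimals
pat = re.compile(r'((\d+\.\d+)\s+)+(\d+)\s?(?=\w+)')
codes = [m.group(3) for m in pat.finditer(s)]
print(codes)  # ['4010', '4060']
```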
Try this pattern:
re.compile(r'(\d+[.]\d+)+\s+(?P<cap>\d{4})\s+\w+')
I wrote a little code and checked against it and it works.
import re
p=re.compile(r'(\d+[.]\d+)+\s+(?P<cap>\d{4})\s+\w+')
my_string = """BEAVER COUNTY 001 0000
1010 BEAVER
2010 BEAVER COUNTY SCH DIST
0.008504
...(more decimals)
0.008508
4010 COUNTY SPECIAL SERVICE DIST NO.1 <---capture this 4010
4040 BEAVER COUNTY
8005 GREENVILLE SOLAR
0.004258
0.008348
...(more decimals)
0.008238
4060 SPECIAL SERVICE DISTRICT NO 7 <---capture this 4060
"""
s = my_string.replace("\n", " ")
match = p.finditer(s)
for m in match:
    print(m.group('cap'))

Why doesn't this regular expression work in all cases?

I have a text file containing entries like this:
#markwarner VIRGINIA - Mark Warner
#senatorleahy VERMONT - Patrick Leahy NO
#senatorsanders VERMONT - Bernie Sanders
#orrinhatch UTAH - Orrin Hatch NO
#jimdemint SOUTH CAROLINA - Jim DeMint NO
#senmikelee UTAH -- Mike Lee
#kaybaileyhutch TEXAS - Kay Hutchison
#johncornyn TEXAS - John Cornyn
#senalexander TENNESSEE - Lamar Alexander
I have written the following to remove the 'NO' and the dashes using regular expressions:
import re
politicians = open('testfile.txt')
text = politicians.read()
# Grab the 'no' votes
# Should be 11 entries
regex = re.compile(r'(no\s#[\w+\d+\.]*\s\w+\s?\w+?\s?\W+\s\w+\s?\w+)', re.I)
no = regex.findall(text)
## Make the list a string
newlist = ' '.join(no)
## Replace the dashes in the string with a space
deldash = re.compile('\s-*\s')
a = deldash.sub(' ', newlist)
# Delete 'NO' in the string
delno = re.compile('NO\s')
b = delno.sub('', a)
# make the string into a list
# problem with #jimdemint SOUTH CAROLINA Jim DeMint
regex2 = re.compile(r'(#[\w\d\.]*\s[\w\d\.]*\s?[\w\d\.]\s?[\w\d\.]*?\s+?\w+)', re.I)
lst1 = regex2.findall(b)
for i in lst1:
    print(i)
When I run the code, it captures the twitter handle, state and full names other than the surname of Jim DeMint. I have stated that I want to ignore case for the regex.
Any ideas? Why is the expression not capturing this surname?
It's missing it because his state name contains two words: SOUTH CAROLINA
Change your second regex to this; it should help:
(#[\w\d\.]*\s[\w\d\.]*\s?[\w\d\.]\s?[\w\d\.]*?\s+?\w+(?:\s\w+)?)
I added
(?:\s\w+)?
which is an optional, non-capturing group matching a space followed by one or more word characters.
http://regexr.com?31fv5 shows that it properly matches the input with the NOs and dashes stripped
EDIT:
If you want one master regex to capture and split everything properly after you remove the NOs and dashes, use
((#[\w]+?\s)((?:(?:[\w]+?)\s){1,2})((?:[\w]+?\s){2}))
Which you can play with here: http://regexr.com?31fvk
The full match is available in $1, the Twitter handle in $2, the State in $3 And the name in $4
Each capturing group works as follows:
(#[\w]+?\s)
This matches a # sign followed by as few word characters as possible (at least one), up to a space.
((?:(?:[\w]+?)\s){1,2})
This matches and captures one or two words, which should be the state. This only works because of the next piece, which MUST have two words.
((?:[\w]+?\s){2})
Matches and captures exactly two words, where a word is as few word characters as possible followed by a space.
You can remove the NOs and dashes in a single substitution:
text = re.sub(' (NO|-+)(?= |$)', '', text)
And to capture everything:
re.findall('(#\w+) ([A-Z ]+[A-Z]) (.+?(?= #|$))',text)
Or all at once:
re.findall('(#\w+) ([A-Z ]+[A-Z])(?: NO| -+)? (.+?(?= #|$))',text)
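A runnable sketch combining both steps on a few of the sample lines; re.M is used so that $ anchors at each line end (the patterns above assume this, or a single-line string):

```python
import re

text = """#markwarner VIRGINIA - Mark Warner
#senatorleahy VERMONT - Patrick Leahy NO
#jimdemint SOUTH CAROLINA - Jim DeMint NO
#senmikelee UTAH -- Mike Lee"""

# drop the trailing NOs and the dashes
cleaned = re.sub(r' (NO|-+)(?= |$)', '', text, flags=re.M)
# capture handle, state (one or two uppercase words), and name
rows = re.findall(r'(#\w+) ([A-Z ]+[A-Z]) (.+?(?= #|$))', cleaned, flags=re.M)
print(rows[2])  # ('#jimdemint', 'SOUTH CAROLINA', 'Jim DeMint')
```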
