Python proper way to split multiple delimiters [duplicate] - python

I found some answers online, but I have no experience with regular expressions, which I believe is what is needed here.
I have a string that needs to be split on either ';' or ', '.
That is, the delimiter has to be either a semicolon, or a comma followed by a space. Individual commas without trailing spaces should be left untouched.
Example string:
"b-staged divinylsiloxane-bis-benzocyclobutene [124221-30-3], mesitylene [000108-67-8]; polymerized 1,2-dihydro-2,2,4- trimethyl quinoline [026780-96-1]"
should be split into a list containing the following:
('b-staged divinylsiloxane-bis-benzocyclobutene [124221-30-3]' , 'mesitylene [000108-67-8]', 'polymerized 1,2-dihydro-2,2,4- trimethyl quinoline [026780-96-1]')

Luckily, Python has this built-in :)
import re
re.split('; |, ', string_to_split)
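For the asker's exact example string, a quick sanity check (just a sketch; s here stands for the string from the question):
>>> import re
>>> s = ("b-staged divinylsiloxane-bis-benzocyclobutene [124221-30-3], "
...      "mesitylene [000108-67-8]; polymerized 1,2-dihydro-2,2,4- "
...      "trimethyl quinoline [026780-96-1]")
>>> re.split('; |, ', s)
['b-staged divinylsiloxane-bis-benzocyclobutene [124221-30-3]', 'mesitylene [000108-67-8]', 'polymerized 1,2-dihydro-2,2,4- trimethyl quinoline [026780-96-1]']
The commas inside '1,2-dihydro-2,2,4-' are not followed by a space, so they are left alone.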
Update: following your comment:
>>> a='Beautiful, is; better*than\nugly'
>>> import re
>>> re.split('; |, |\*|\n',a)
['Beautiful', 'is', 'better', 'than', 'ugly']

Do a str.replace('; ', ', ') and then a str.split(', ')
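A minimal sketch of that replace-then-split idea, using a shortened stand-in for the question's string:
>>> s = 'aaa [1], bbb [2]; ccc [3]'
>>> s.replace('; ', ', ').split(', ')
['aaa [1]', 'bbb [2]', 'ccc [3]']
Like the regex version, this relies on '; ' and ', ' being the only delimiters.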

Here's a safe way for any iterable of delimiters, using regular expressions:
>>> import re
>>> delimiters = "a", "...", "(c)"
>>> example = "stackoverflow (c) is awesome... isn't it?"
>>> regex_pattern = '|'.join(map(re.escape, delimiters))
>>> regex_pattern
'a|\\.\\.\\.|\\(c\\)'
>>> re.split(regex_pattern, example)
['st', 'ckoverflow ', ' is ', 'wesome', " isn't it?"]
re.escape lets you build the pattern automatically and escapes the delimiters properly.
Here's this solution as a function for your copy-pasting pleasure:
def split(delimiters, string, maxsplit=0):
    import re
    regex_pattern = '|'.join(map(re.escape, delimiters))
    return re.split(regex_pattern, string, maxsplit)
If you're going to split often using the same delimiters, compile your regular expression beforehand as described and use the compiled pattern's split() method.
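For example, a sketch of precompiling with the same delimiters as above:
>>> import re
>>> delimiters = "a", "...", "(c)"
>>> pattern = re.compile('|'.join(map(re.escape, delimiters)))
>>> pattern.split("stackoverflow (c) is awesome... isn't it?")
['st', 'ckoverflow ', ' is ', 'wesome', " isn't it?"]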
If you'd like to leave the original delimiters in the string, you can change the regex to use a lookbehind assertion instead:
>>> import re
>>> delimiters = "a", "...", "(c)"
>>> example = "stackoverflow (c) is awesome... isn't it?"
>>> regex_pattern = '|'.join('(?<={})'.format(re.escape(delim)) for delim in delimiters)
>>> regex_pattern
'(?<=a)|(?<=\\.\\.\\.)|(?<=\\(c\\))'
>>> re.split(regex_pattern, example)
['sta', 'ckoverflow (c)', ' is a', 'wesome...', " isn't it?"]
(Replace ?<= with ?= to attach the delimiters to the right-hand side of each piece instead of the left.)
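For instance, the lookahead version attaches each delimiter to the piece on its right (a sketch; note that splitting on these zero-width matches requires Python 3.7 or newer):
>>> regex_pattern = '|'.join('(?={})'.format(re.escape(delim)) for delim in delimiters)
>>> re.split(regex_pattern, example)
['st', 'ackoverflow ', '(c) is ', 'awesome', "... isn't it?"]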

In response to Jonathan's answer above, this only seems to work for certain delimiters. For example:
>>> a='Beautiful, is; better*than\nugly'
>>> import re
>>> re.split('; |, |\*|\n',a)
['Beautiful', 'is', 'better', 'than', 'ugly']
>>> b='1999-05-03 10:37:00'
>>> re.split('- :', b)
['1999-05-03 10:37:00']
Putting the delimiters in square brackets (a character class) works more reliably, because each character inside the brackets then counts as a separate delimiter:
>>> re.split('[- :]', b)
['1999', '05', '03', '10', '37', '00']

This is what the regex looks like:
import re
# "semicolon or (a comma followed by a space)"
pattern = re.compile(r";|, ")
# "(semicolon or a comma) followed by a space"
pattern = re.compile(r"[;,] ")
print pattern.split(text)  # 'text' is the string to split, e.g. the one from the question
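The difference between the two patterns only shows up when a semicolon is not followed by a space; a sketch with a made-up string:
>>> import re
>>> s = 'a;b, c'
>>> re.split(r";|, ", s)    # a bare semicolon splits
['a', 'b', 'c']
>>> re.split(r"[;,] ", s)   # here ';' must also be followed by a space
['a;b', 'c']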

Related

Python - defining string split delimiter?

How can I define string delimiters for splitting in the most efficient way, i.e. without needing lots of ifs and so on?
I have strings that need to be split strictly into two-element lists. The problem is those strings have different symbols by which I can split them. For example:
'Hello: test1'. This one has split delimiter ': '. The other example would be:
'Hello - test1'. So this one would be ' - '. The split delimiter could also be ' -' or '- '. If I know all the variations of the delimiters, how can I define them most efficiently?
First I did something like this:
strings = ['Hello - test', 'Hello- test', 'Hello -test']
for s in strings:
    delim = ' - '
    if len(s.split('- ', 1)) == 2:
        delim = '- '
    elif len(s.split(' -', 1)) == 2:
        delim = ' -'
    print s.split(delim, 1)[1]
But then I got new strings with other, unexpected delimiters, so this way I would have to keep adding ifs to check for delimiters like ': '. I wondered whether there is a better way to define them (it's not a problem if I need to keep the new delimiters in some kind of list later on). Maybe regex would help, or some other tool?
Put all the delimiters in the re.split pattern as below, joined with the alternation (OR) operator |.
re.split(r': | - | -|- ', string)
Add maxsplit=1 if you only want to split once.
re.split(r': | - | -|- ', string, maxsplit=1)
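A quick check against the kinds of strings from the question (a sketch):
>>> import re
>>> for s in ['Hello: test1', 'Hello - test1', 'Hello- test1', 'Hello -test1']:
...     print re.split(r': | - | -|- ', s, maxsplit=1)
...
['Hello', 'test1']
['Hello', 'test1']
['Hello', 'test1']
['Hello', 'test1']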
You can use the split function of the re module
>>> strings = ['Hello1 - test1', 'Hello2- test2', 'Hello3 -test3', 'Hello4 :test4', 'Hello5 : test5']
>>> for s in strings:
... re.split(" *[:-] *",s)
...
['Hello1', 'test1']
['Hello2', 'test2']
['Hello3', 'test3']
['Hello4', 'test4']
['Hello5', 'test5']
Between the [] you put all the possible delimiter characters. The ' *' on either side means any number of spaces may appear before or after the delimiter.
\s*[:-]\s*
You can split on this: use re.split(r"\s*[:-]\s*", string). See the demo:
https://regex101.com/r/nL5yL3/14
Use this one if the delimiter may be surrounded by a variable amount of whitespace (e.g. ' - ', '- ', ' -').
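For example, a sketch showing it absorbing extra spaces:
>>> import re
>>> re.split(r"\s*[:-]\s*", "Hello   -   test1")
['Hello', 'test1']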
This isn't the best way, but if you want to avoid using re for some (or no) reason, this is what I would do:
>>> strings = ['Hello - test', 'Hello- test', 'Hello -test', 'Hello : test']
>>> delims = [':', '-'] # all possible delimiters; don't worry about spaces.
>>>
>>> for string in strings:
... delim = next((d for d in delims if d in string), None) # finds the first delimiter in delims that's present in the string (if there is one)
... if not delim:
... continue # No delimiter! (I don't know how you want to handle this possibility; this code will simply skip the string all together.)
... print [s.strip() for s in string.split(delim, 1)] # assuming you want them in list form.
['Hello', 'test']
['Hello', 'test']
['Hello', 'test']
['Hello', 'test']
This uses Python's native .split() to break the string at the delimiter, and then .strip() to trim the white space off the results, if there is any. I've used next to find the appropriate delimiter, but there are plenty of things you can swap that out with (especially if you like for blocks).
If you're certain that each string will contain at least one of the delimiters (preferably exactly one), then you can shave it down to this:
## with strings and delims defined...
>>> for string in strings:
... delim = next(d for d in delims if d in string) # raises StopIteration at this line if there is no delimiter in the string.
... print [s.strip() for s in string.split(delim, 1)]
I'm not sure if this is the most elegant solution, but it uses fewer if blocks, and you won't have to import anything to do it.

Is there a single function for matching and replacing?

I wonder if there is a simpler alternative (e.g. a single function call) for matching and replacing to the following example:
>>> import re
>>>
>>> line = 'file:///windows-d/academic%20discipline/study%20objects/areas/formal%20systems/math'
>>>
>>> match = re.match(r'^file://(.*)$', line)
>>> if match and match.group(1):
... substitution = re.sub(r'%20', r' ', match.group(1))
...
>>> substitution
'/windows-d/academic discipline/study objects/areas/formal systems/math'
Thanks.
I'm going to dodge your regex question and suggest you use something else for this:
>>> line = 'file:///windows-d/academic%20discipline/study%20objects/areas/formal%20systems/math'
>>> import urllib
>>> urllib.unquote(line)
'file:///windows-d/academic discipline/study objects/areas/formal systems/math'
Then just strip off the file:// with a slice or str.replace if necessary.
%20 (space) is not the only escaped character possible here, so it's better to use the right tool for the job than have your regex solution break later when there is another character needing un-escaping.
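The example above is Python 2; a rough Python 3 equivalent would use urllib.parse (a sketch, not from the original answer):
>>> from urllib.parse import unquote
>>> line = 'file:///windows-d/academic%20discipline/study%20objects/areas/formal%20systems/math'
>>> path = unquote(line)
>>> if path.startswith('file://'):
...     path = path[len('file://'):]
...
>>> path
'/windows-d/academic discipline/study objects/areas/formal systems/math'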
You could try the simple Python code below:
>>> import re
>>> line = 'file:///windows-d/academic%20discipline/study%20objects/areas/formal%20systems/math'
>>> m = re.sub(r'%20|file://', r' ', line).strip()
>>> m
'/windows-d/academic discipline/study objects/areas/formal systems/math'
The re.sub(r'%20|file://', r' ', line) call replaces every occurrence of %20 or file:// with a space, and strip() then removes the leading and trailing spaces.
>>> import re
>>> s = 'file:///windows-d/academic%20discipline/study%20objects/areas/formal%20systems/math'
>>> re.sub(r'^file://(.*)$', lambda m: m.group(1).replace('%20', ' '), s)
'/windows-d/academic discipline/study objects/areas/formal systems/math'
>>> s = 'file:///windows-d/academic%20discipline/study%20objects/areas/formal%20systems/math'
>>> s.replace('file://', '').replace('%20', ' ')
'/windows-d/academic discipline/study objects/areas/formal systems/math'

Breaking up substrings in Python based on characters

I am trying to write code that will take a string and remove specific data from it. I know that the data will look like the line below, and I only need the data within the " " marks, not the marks themselves.
inputString = 'type="NN" span="123..145" confidence="1.0" '
Is there a way to take a substring of a string between two characters that mark the start and stop points?
You can extract all the text between pairs of " characters using regular expressions:
import re
inputString = 'type="NN" span="123..145" confidence="1.0" '
pat = re.compile('"([^"]*)"')
strings = []
while True:
    mat = pat.search(inputString)
    if mat is None:
        break
    strings.append(mat.group(1))
    inputString = inputString[mat.end():]
print strings
or, easier:
import re
inputString='type="NN" span="123..145" confidence="1.0" '
strings=re.findall('"([^"]*)"', inputString)
print strings
Output for both versions:
['NN', '123..145', '1.0']
fields = inputString.split('"')
print fields[1], fields[3], fields[5]
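Generalizing that a bit: after splitting on the quote character, every odd index is a quoted value, so you can slice them all out at once (a small sketch):
>>> inputString = 'type="NN" span="123..145" confidence="1.0" '
>>> inputString.split('"')[1::2]
['NN', '123..145', '1.0']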
You could split the string at each space to get a list of 'key="value"' substrings and then use regular expressions to parse the substrings.
Using your input string:
>>> input_string = 'type="NN" span="123..145" confidence="1.0" '
>>> input_string_split = input_string.split()
>>> print input_string_split
['type="NN"', 'span="123..145"', 'confidence="1.0"']
Then use regular expressions:
>>> import re
>>> pattern = r'"([^"]+)"'
>>> for substring in input_string_split:
...     match_obj = re.search(pattern, substring)
...     print match_obj.group(1)
...
NN
123..145
1.0
The regular expression '"([^"]+)"' matches anything within quotation marks (provided there is at least one character). The round brackets form a capturing group, marking the part of the match you are interested in.

python split string on multiple delimiters without regex

I have a string that I need to split on multiple characters without the use of regular expressions. for example, I would need something like the following:
>>> string = "hello there[my]friend"
>>> string.split(' []')
['hello', 'there', 'my', 'friend']
is there anything in python like this?
If you need multiple delimiters, re.split is the way to go.
Without using a regex, it's not possible unless you write a custom function for it.
Here's such a function - it might or might not do what you want (consecutive delimiters cause empty elements):
>>> def multisplit(s, delims):
... pos = 0
... for i, c in enumerate(s):
... if c in delims:
... yield s[pos:i]
... pos = i + 1
... yield s[pos:]
...
>>> list(multisplit('hello there[my]friend', ' []'))
['hello', 'there', 'my', 'friend']
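If the consecutive-delimiter behaviour bothers you, filtering out the empty pieces afterwards is enough (a sketch reusing the multisplit generator above):
>>> [part for part in multisplit('hello  there[[my]friend', ' []') if part]
['hello', 'there', 'my', 'friend']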
Solution without regexp:
from itertools import groupby
sep = ' []'
s = 'hello there[my]friend'
print [''.join(g) for k, g in groupby(s, sep.__contains__) if not k]
I've just posted an explanation here https://stackoverflow.com/a/19211729/2468006
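Roughly, groupby labels each character by whether it is a delimiter (via sep.__contains__) and the non-delimiter runs are glued back into words; an expanded sketch of the same idea:
from itertools import groupby

sep = ' []'
s = 'hello there[my]friend'

parts = []
for is_delim, run in groupby(s, key=sep.__contains__):
    # 'run' is a maximal run of characters that are either all delimiters
    # (is_delim True) or all non-delimiters (is_delim False)
    if not is_delim:
        parts.append(''.join(run))
print parts  # ['hello', 'there', 'my', 'friend']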
A recursive solution without regexes. It uses only base Python, in contrast to the other answers.
def split_on_multiple_chars(string_to_split, set_of_chars_as_string):
    # Recursive splitting
    # Returns a list of strings
    s = string_to_split
    chars = set_of_chars_as_string
    # If no more characters to split on, return input
    if len(chars) == 0:
        return [s]
    # Split on the first of the delimiter characters
    ss = s.split(chars[0])
    # Recursive call without the first splitting character
    bb = []
    for e in ss:
        aa = split_on_multiple_chars(e, chars[1:])
        bb.extend(aa)
    return bb
It works very similarly to Python's regular str.split(...), but accepts several delimiters.
Example use:
print(split_on_multiple_chars('my"example_string.with:funny?delimiters', '_.:;'))
Output:
['my"example', 'string', 'with', 'funny?delimiters']
If you're not worried about long strings, you could force all delimiters to be the same using string.replace(). The following splits a string by both - and ,
x.replace('-', ',').split(',')
If you have many delimiters you could do the following:
def split(x, delimiters):
    for d in delimiters:
        x = x.replace(d, delimiters[0])
    return x.split(delimiters[0])
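Applied to the string from the question (a quick sketch):
>>> split('hello there[my]friend', ' []')
['hello', 'there', 'my', 'friend']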
re.split is the right tool here.
>>> string="hello there[my]friend"
>>> import re
>>> re.split('[] []', string)
['hello', 'there', 'my', 'friend']
In regex, [...] defines a character class. Any characters inside the brackets will match. The way I've spaced the brackets avoids needing to escape them, but the pattern [\[\] ] also works.
>>> re.split('[\[\] ]', string)
['hello', 'there', 'my', 'friend']
The re.DEBUG flag to re.compile is also useful, as it prints out what the pattern will match:
>>> re.compile('[] []', re.DEBUG)
in
  literal 93
  literal 32
  literal 91
<_sre.SRE_Pattern object at 0x16b0850>
(Where 32, 91 and 93 are the ASCII values of the space, [ and ] characters.)

How can I split and parse a string in Python? [duplicate]

I am trying to split this string in python: 2.7.0_bf4fda703454
I want to split that string on the underscore _ so that I can use the value on the left side.
"2.7.0_bf4fda703454".split("_") gives a list of strings:
In [1]: "2.7.0_bf4fda703454".split("_")
Out[1]: ['2.7.0', 'bf4fda703454']
This splits the string at every underscore. If you want it to stop after the first split, use "2.7.0_bf4fda703454".split("_", 1).
If you know for a fact that the string contains an underscore, you can even unpack the LHS and RHS into separate variables:
In [8]: lhs, rhs = "2.7.0_bf4fda703454".split("_", 1)
In [9]: lhs
Out[9]: '2.7.0'
In [10]: rhs
Out[10]: 'bf4fda703454'
An alternative is to use partition(). The usage is similar to the last example, except that it returns three components instead of two. The principal advantage is that this method doesn't fail if the string doesn't contain the separator.
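For example, a sketch of partition() on the same version string:
>>> lhs, sep, rhs = "2.7.0_bf4fda703454".partition("_")
>>> lhs
'2.7.0'
>>> rhs
'bf4fda703454'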
Python string parsing walkthrough
Split a string on space, get a list, show its type, print it out:
el#apollo:~/foo$ python
>>> mystring = "What does the fox say?"
>>> mylist = mystring.split(" ")
>>> print type(mylist)
<type 'list'>
>>> print mylist
['What', 'does', 'the', 'fox', 'say?']
If you have two delimiters next to each other, an empty string is produced:
el#apollo:~/foo$ python
>>> mystring = "its so fluffy im gonna DIE!!!"
>>> print mystring.split(" ")
['its', '', 'so', '', '', 'fluffy', '', '', 'im', 'gonna', '', '', '', 'DIE!!!']
Split a string on underscore and grab the 5th item in the list:
el#apollo:~/foo$ python
>>> mystring = "Time_to_fire_up_Kowalski's_Nuclear_reactor."
>>> mystring.split("_")[4]
"Kowalski's"
Collapse multiple spaces into one
el#apollo:~/foo$ python
>>> mystring = 'collapse   these   spaces'
>>> mycollapsedstring = ' '.join(mystring.split())
>>> print mycollapsedstring.split(' ')
['collapse', 'these', 'spaces']
When you pass no parameter to Python's split method, the documentation states: "runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace".
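A quick illustration of that difference (a sketch):
>>> '  spaced   out  '.split()
['spaced', 'out']
>>> '  spaced   out  '.split(' ')
['', '', 'spaced', '', '', 'out', '', '']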
Hold onto your hats boys, parse on a regular expression:
el#apollo:~/foo$ python
>>> mystring = 'zzzzzzabczzzzzzdefzzzzzzzzzghizzzzzzzzzzzz'
>>> import re
>>> mylist = re.split("[a-m]+", mystring)
>>> print mylist
['zzzzzz', 'zzzzzz', 'zzzzzzzzz', 'zzzzzzzzzzzz']
The regular expression "[a-m]+" means the lowercase letters a through m that occur one or more times are matched as a delimiter. re is a library to be imported.
Or if you want to chomp the items one at a time:
el#apollo:~/foo$ python
>>> mystring = "theres coffee in that nebula"
>>> mytuple = mystring.partition(" ")
>>> print type(mytuple)
<type 'tuple'>
>>> print mytuple
('theres', ' ', 'coffee in that nebula')
>>> print mytuple[0]
theres
>>> print mytuple[2]
coffee in that nebula
If it's always going to be an even LHS/RHS split, you can also use the partition method that's built into strings. It returns a 3-tuple as (LHS, separator, RHS) if the separator is found, and (original_string, '', '') if the separator wasn't present:
>>> "2.7.0_bf4fda703454".partition('_')
('2.7.0', '_', 'bf4fda703454')
>>> "shazam".partition("_")
('shazam', '', '')
