String split with default delimiter vs user defined delimiter - python

I tried a simple example with string split, but get some unexpected behavior. Here is the sample code:
def split_string(source, splitlist):
    for delim in splitlist:
        source = source.replace(delim, ' ')
    return source.split(' ')

out = split_string("This is a test-of the,string separation-code!", " ,!-")
print out
>>> ['This', 'is', 'a', 'test', 'of', 'the', 'string', 'separation', 'code', '']
As you can see, I get an extra empty string at the end of the list when I pass a space as the delimiter argument to split(). However, if I don't pass any argument to split(), there is no empty string at the end of the output list.
From what I read in the Python docs, the default behaviour of split() is to split on whitespace. So why does explicitly passing ' ' as the delimiter create an empty string at the end of the output list?

The docs:
If sep is not specified or is None, a different splitting algorithm is
applied: runs of consecutive whitespace are regarded as a single
separator, and the result will contain no empty strings at the start
or end if the string has leading or trailing whitespace.

That may happen if you have multiple spaces separating two words.
For example,
'a    b'.split(' ') will return ['a', '', '', '', 'b']
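More generally, the two behaviours compare like this (a quick demonstration):

```python
s = "  a  b  "

# sep=None (the default): runs of whitespace count as one separator,
# and leading/trailing whitespace adds no empty strings
print(s.split())      # ['a', 'b']

# sep=' ': every single space is a split point, so every extra space
# (and every leading/trailing one) produces an empty string
print(s.split(' '))   # ['', '', 'a', '', 'b', '', '']
```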
But I would suggest using split from the re module. Check the example below:
import re
print re.split(r'[\s,; !]+', 'a b !!!!!!! , hello ;;;;; world')
When we run the above piece, it outputs ['a', 'b', 'hello', 'world']
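Applying the same idea back to the question's function, a re-based rewrite could look like this (a sketch; re.escape is used so the delimiter characters are safe inside the character class):

```python
import re

def split_string(source, splitlist):
    # Character class built from the delimiter characters; '+' merges
    # runs of adjacent delimiters, and the filter drops empty pieces
    pattern = '[' + re.escape(splitlist) + ']+'
    return [piece for piece in re.split(pattern, source) if piece]

print(split_string("This is a test-of the,string separation-code!", " ,!-"))
# ['This', 'is', 'a', 'test', 'of', 'the', 'string', 'separation', 'code']
```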

Related

TCL multi-character split in Python?

I am splitting a text file using this tcl proc:
proc mcsplit "str splitStr {mc {\x00}}" {
    return [split [string map [list $splitStr $mc] $str] $mc]
}
# mcsplit --
# Splits a string using another string as the separator
# Arguments:
# str       string to split into pieces
# splitStr  substring
# mc        magic character that must not exist in the original string.
#           Defaults to the NULL character. Must be a single character.
# Results:
# Returns a list of strings
Tcl's split command splits a string at each character that appears in splitStr. This version instead treats splitStr as one combined separator, splitting the string into its constituent parts. My objective is to do the same in Python. Has anyone here done this before?
It's not very clear from your question whether Python's plain split behavior is what you need. If you want to split at each occurrence of a multi-character string, Python's regular split will do the job:
>>> 'this is a test'.split('es')
['this is a t', 't']
If, however, you want to split at any occurrence of multiple individual characters, you'll need to use re.split:
>>> import re
>>> re.split(r'[es]', 'this is a test')
['thi', ' i', ' a t', '', 't']
>>>
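For completeness, a direct Python port of the Tcl mcsplit trick might look like this (a sketch; the mc default mirrors the Tcl proc's NULL character):

```python
def mcsplit(s, split_str, mc='\x00'):
    # Same trick as the Tcl proc: replace the multi-character separator
    # with a single "magic" character that must not occur in s, then split.
    # (In Python this is mostly illustrative, since str.split already
    # accepts a multi-character separator directly.)
    return s.replace(split_str, mc).split(mc)

print(mcsplit('this is a test', 'es'))   # ['this is a t', 't']
```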

Accessing delimiter in Python regular expressions split()

In the Python re module, I'm making use of re.split().
string = '$aText$bFor$cStack$dOverflow'
parts = re.split(r'\$\w', string)
assert parts == ['Text', 'For', 'Stack', 'Overflow']
My question: is it possible to return the instance of the delimiter at the same time as the list of parts? I'd like to know if the delimiter was $c, $d, etc. preceding the various parts.
I suppose I could do a findall() call first, but that would mean manually matching up positions across two lists, which could introduce bugs. That also doesn't seem very pythonic.
If you put the pattern in a capture group, the delimiters appear in the results:
>>> string = '$aText$bFor$cStack$dOverflow'
>>> re.split(r'(\$\w)', string)
['', '$a', 'Text', '$b', 'For', '$c', 'Stack', '$d', 'Overflow']
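Because the result alternates delimiter, part, delimiter, part after the leading empty string, you can pair each part with its preceding delimiter by slicing two at a time (a sketch):

```python
import re

string = '$aText$bFor$cStack$dOverflow'
pieces = re.split(r'(\$\w)', string)
# pieces alternates: ['', '$a', 'Text', '$b', 'For', '$c', 'Stack', '$d', 'Overflow']

# Skip the leading '' and pair each delimiter with the part that follows it
pairs = list(zip(pieces[1::2], pieces[2::2]))
print(pairs)
# [('$a', 'Text'), ('$b', 'For'), ('$c', 'Stack'), ('$d', 'Overflow')]
```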

Implement a tokeniser in Python

I am trying to implement a tokeniser in Python (without using the NLTK library) that splits a string into words using blank spaces. Example usage is:
>>> tokens = tokenise1("A (small, simple) example")
>>> tokens
['A', '(small,', 'simple)', 'example']
I can get some of the way using regular expressions, but my return value includes whitespace, which I don't want. How do I get the correct return value as per the example usage?
What I have so far is:
import re

def tokenise1(string):
    return re.split(r'(\S+)', string)
and it returns:
['', 'A', ' ', '(small,', ' ', 'simple)', ' ', 'example', '']
so I need to get rid of the whitespace in the return value.
The output contains the spaces because re.split keeps whatever lies between the matches, and your () capture group adds the matched words back in as well. Instead, split on the whitespace itself:
re.split(r'\s+', string)
['A', '(small,', 'simple)', 'example']
\s+ matches one or more whitespace characters.
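That said, for this particular task no regex is needed at all; plain str.split with no argument does whitespace tokenisation directly (a minimal alternative sketch):

```python
def tokenise1(string):
    # split() with no argument splits on runs of whitespace and never
    # returns empty strings at either end
    return string.split()

print(tokenise1('A (small, simple) example'))
# ['A', '(small,', 'simple)', 'example']
```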

Python - defining string split delimiter?

How could I define string delimiters for splitting in the most efficient way? I mean, without needing many ifs, etc.
I have strings that need to be split strictly into two-element lists. The problem is that those strings contain different symbols on which I can split them. For example:
'Hello: test1'. This one has the split delimiter ': '. The other example would be:
'Hello - test1'. So this one would be ' - '. The split delimiter could also be ' -' or '- '. So, given that I know all the variations of the delimiters, how could I define them most efficiently?
First I did something like this:
strings = ['Hello - test', 'Hello- test', 'Hello -test']
for s in strings:
    delim = ' - '
    if len(s.split('- ', 1)) == 2:
        delim = '- '
    elif len(s.split(' -', 1)) == 2:
        delim = ' -'
    print s.split(delim, 1)[1]
But then I got new strings that had other, unexpected delimiters. Going this way I would have to add even more ifs to check for delimiters like ': '. So I wondered whether there is some better way to define them (it is not a problem if I need to add new delimiters to some kind of list later on). Maybe regex would help, or some other tool?
Put all the delimiters inside the re.split pattern, separated by the alternation operator |, like below.
re.split(r': | - | -|- ', string)
Add maxsplit=1 if you want to split only once.
re.split(r': | - | -|- ', string, maxsplit=1)
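Run against the variants from the question, the alternation handles each one (note that the order of the alternatives matters: ' - ' has to come before ' -' and '- ' so the longest form matches first):

```python
import re

for s in ['Hello: test1', 'Hello - test1', 'Hello -test1', 'Hello- test1']:
    print(re.split(r': | - | -|- ', s, maxsplit=1))
# ['Hello', 'test1'] for every input
```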
You can use the split function of the re module
>>> strings = ['Hello1 - test1', 'Hello2- test2', 'Hello3 -test3', 'Hello4 :test4', 'Hello5 : test5']
>>> for s in strings:
... re.split(" *[:-] *",s)
...
['Hello1', 'test1']
['Hello2', 'test2']
['Hello3', 'test3']
['Hello4', 'test4']
['Hello5', 'test5']
Between the [] you put all the possible delimiter characters. The ' *' allows optional spaces before and after the delimiter.
\s*[:-]\s*
You can split by this. Use re.split(r"\s*[:-]\s*", string). See the demo:
https://regex101.com/r/nL5yL3/14
You should use this pattern if the delimiter can be surrounded by multiple spaces, such as '  -  ', '-  ', or '  -'.
This isn't the best way, but if you want to avoid using re for some (or no) reason, this is what I would do:
>>> strings = ['Hello - test', 'Hello- test', 'Hello -test', 'Hello : test']
>>> delims = [':', '-']  # all possible delimiters; don't worry about spaces
>>>
>>> for string in strings:
...     delim = next((d for d in delims if d in string), None)  # first delimiter present in the string, if any
...     if not delim:
...         continue  # no delimiter found; skip the string altogether (handle this however you need)
...     print [s.strip() for s in string.split(delim, 1)]  # assuming you want them in list form
...
['Hello', 'test']
['Hello', 'test']
['Hello', 'test']
['Hello', 'test']
This uses Python's native .split() to break the string at the delimiter, and then .strip() to trim the white space off the results, if there is any. I've used next to find the appropriate delimiter, but there are plenty of things you can swap that out with (especially if you like for blocks).
If you're certain that each string will contain at least one of the delimiters (preferably exactly one), then you can shave it down to this:
## with strings and delims defined...
>>> for string in strings:
...     delim = next(d for d in delims if d in string)  # raises StopIteration if no delimiter is present
...     print [s.strip() for s in string.split(delim, 1)]
I'm not sure if this is the most elegant solution, but it uses fewer if blocks, and you won't have to import anything to do it.

Python - splitting a string twice

I have some data that looks like "string,string,string:otherstring,otherstring,otherstring".
I want to manipulate the first set of "string"s one at a time. If I split the input on the colon, I end up with a list, which I then cannot split again because "'list' object has no attribute 'split'". Alternatively, if I split on the comma, that returns everything, including the parts after the colon, which I don't want to manipulate. rsplit has the same problem. Even with a list I could still manipulate the first entries using [0], [1], etc., except that the number of "string"s is always changing, so I cannot hardcode the indices. Any ideas on how to get around this list limitation?
Try this:
import re
s = 'string,string,string:otherstring,otherstring,otherstring'
re.split(r'[,:]', s)
=> ['string', 'string', 'string', 'otherstring', 'otherstring', 'otherstring']
We're using a regular expression with re.split() to split the string on more than one delimiter. Or, if you want to manipulate the first group of strings separately from the second group, we can create two lists, one with the strings from each group:
[x.split(',') for x in s.split(':')]
=> [['string', 'string', 'string'], ['otherstring', 'otherstring', 'otherstring']]
… Or if you just want to retrieve the strings in the first group, simply do this:
s.split(':')[0].split(',')
=> ['string', 'string', 'string']
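If the goal is to manipulate only the first group and then reassemble the full string, str.partition offers a compact sketch (upper-casing here is just a stand-in for whatever manipulation you actually need):

```python
data = 'string,string,string:otherstring,otherstring,otherstring'

# Split off the first group, manipulate it, then stitch the string back
first, sep, rest = data.partition(':')
modified = [item.upper() for item in first.split(',')]   # example manipulation
result = ','.join(modified) + sep + rest
print(result)
# 'STRING,STRING,STRING:otherstring,otherstring,otherstring'
```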
Use a couple of join() calls to convert back to a string and then split on whitespace (note this assumes the individual substrings contain no spaces):
>>> string = "string,string,string:otherstring,otherstring,otherstring"
>>> ' '.join(' '.join(string.split(':')).split(',')).split()
['string', 'string', 'string', 'otherstring', 'otherstring', 'otherstring']
>>>
text = "string,string,string:otherstring,otherstring,otherstring"
replace = text.replace(":", ",").split(",")
print(replace)
['string', 'string', 'string', 'otherstring', 'otherstring', 'otherstring']
