Better split string method - split by multiple characters [duplicate] - python

This question already has answers here:
Split Strings into words with multiple word boundary delimiters
(31 answers)
Closed 6 years ago.
The built-in <string>.split() procedure only uses whitespace to split the string by default.
I'd like to define a procedure, split_string, that takes two inputs: the string to split and a string containing all of the characters considered separators.
The procedure should return a list of strings that break the source string up by the characters in the list.
def split_string(source,list):
...
>>> print split_string("This is a test-of the,string separation-code!",",!-")
['This', 'is', 'a', 'test', 'of', 'the', 'string', 'separation', 'code']

re.split() works:
>>> import re
>>> s = "This is a test-of the,string separation-code!"
>>> re.split(r'[ \-\,!]+', s)
['This', 'is', 'a', 'test', 'of', 'the', 'string', 'separation', 'code', '']
In your case searching for words seems more useful:
>>> re.findall(r"[\w']+", s)
['This', 'is', 'a', 'test', 'of', 'the', 'string', 'separation', 'code']

Here's a function you can reuse that also escapes regex special characters (re.escape does this too, and covers more cases):
import re

def escape_char(char):
    # Characters with special meaning in a regular expression
    special = ['.', '^', '$', '*', '+', '?', '\\', '[', ']', '|', '(', ')']
    return '\\{}'.format(char) if char in special else char

def split(text, *delimiters):
    return re.split('|'.join(escape_char(x) for x in delimiters), text)
It doesn't automatically remove empty entries, e.g.:
>>> split('Python, is awesome!', '!', ',', ' ')
['Python', '', 'is', 'awesome', '']
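If the empty entries are unwanted, a simple comprehension can filter them out afterwards:

```python
parts = ['Python', '', 'is', 'awesome', '']
# Keep only non-empty strings (empty strings are falsy)
cleaned = [p for p in parts if p]
print(cleaned)
# ['Python', 'is', 'awesome']
```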

Related

How to convert command line argument into python list [duplicate]

How do I split a sentence and store each word in a list? For example, given a string like "these are words", how do I get a list like ["these", "are", "words"]?
To split on other delimiters, see Split a string by a delimiter in python.
To split into individual characters, see How do I split a string into a list of characters?.
Given a string sentence, this stores each word in a list called words:
words = sentence.split()
To split the string text on any consecutive runs of whitespace:
words = text.split()
To split the string text on a custom delimiter such as ",":
words = text.split(",")
The words variable will be a list and contain the words from text split on the delimiter.
Use str.split():
Return a list of the words in the string, using sep as the delimiter
... If sep is not specified or is None, a different splitting algorithm is applied: runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace.
>>> line = "a sentence with a few words"
>>> line.split()
['a', 'sentence', 'with', 'a', 'few', 'words']
Depending on what you plan to do with your sentence-as-a-list, you may want to look at the Natural Language Toolkit (NLTK). It deals heavily with text processing and evaluation. You can also use it to solve your problem:
import nltk
words = nltk.word_tokenize(raw_sentence)
This has the added benefit of splitting out punctuation.
Example:
>>> import nltk
>>> s = "The fox's foot grazed the sleeping dog, waking it."
>>> words = nltk.word_tokenize(s)
>>> words
['The', 'fox', "'s", 'foot', 'grazed', 'the', 'sleeping', 'dog', ',',
'waking', 'it', '.']
This allows you to filter out any punctuation you don't want and use only words.
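For example, dropping tokens that contain no alphanumeric characters keeps only the words (a sketch, reusing the token list above):

```python
# Tokens as produced by nltk.word_tokenize above
tokens = ['The', 'fox', "'s", 'foot', 'grazed', 'the', 'sleeping', 'dog',
          ',', 'waking', 'it', '.']
# Keep tokens that contain at least one alphanumeric character
words_only = [t for t in tokens if any(c.isalnum() for c in t)]
print(words_only)
# ['The', 'fox', "'s", 'foot', 'grazed', 'the', 'sleeping', 'dog', 'waking', 'it']
```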
Please note that the other solutions using str.split() are better if you don't plan on doing any complex manipulation of the sentence.
[Edited]
How about this algorithm? Split text on whitespace, then trim punctuation. This carefully removes punctuation from the edge of words, without harming apostrophes inside words such as we're.
>>> text
"'Oh, you can't help that,' said the Cat: 'we're all mad here. I'm mad. You're mad.'"
>>> text.split()
["'Oh,", 'you', "can't", 'help', "that,'", 'said', 'the', 'Cat:', "'we're", 'all', 'mad', 'here.', "I'm", 'mad.', "You're", "mad.'"]
>>> import string
>>> [word.strip(string.punctuation) for word in text.split()]
['Oh', 'you', "can't", 'help', 'that', 'said', 'the', 'Cat', "we're", 'all', 'mad', 'here', "I'm", 'mad', "You're", 'mad']
I want my python function to split a sentence (input) and store each word in a list
The str.split() method does this: it takes a string and splits it into a list:
>>> the_string = "this is a sentence"
>>> words = the_string.split(" ")
>>> print(words)
['this', 'is', 'a', 'sentence']
>>> type(words)
<type 'list'> # in Python 2; <class 'list'> in Python 3
If you want all the chars of a word/sentence in a list, do this:
print(list("word"))
# ['w', 'o', 'r', 'd']
print(list("some sentence"))
# ['s', 'o', 'm', 'e', ' ', 's', 'e', 'n', 't', 'e', 'n', 'c', 'e']
shlex has a .split() function. It differs from str.split() in that it does not preserve quotes and treats a quoted phrase as a single word:
>>> import shlex
>>> shlex.split("sudo echo 'foo && bar'")
['sudo', 'echo', 'foo && bar']
NB: it works well for Unix-like command line strings. It doesn't work for natural-language processing.
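If you do want the quote characters preserved in the tokens, shlex.split accepts a posix flag:

```python
import shlex

# posix=False switches to non-POSIX mode, which keeps the quote
# characters as part of the token.
print(shlex.split("sudo echo 'foo && bar'", posix=False))
# ['sudo', 'echo', "'foo && bar'"]
```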
Split the words without harming apostrophes inside words
The examples below, input_1 (which contains Moore's law) and input_2, demonstrate this:
import re

def split_into_words(line):
    word_regex_improved = r"(\w[\w']*\w|\w)"
    word_matcher = re.compile(word_regex_improved)
    return word_matcher.findall(line)
#Example 1
input_1 = "computational power (see Moore's law) and "
split_into_words(input_1)
# output
['computational', 'power', 'see', "Moore's", 'law', 'and']
#Example 2
input_2 = """Oh, you can't help that,' said the Cat: 'we're all mad here. I'm mad. You're mad."""
split_into_words(input_2)
#output
['Oh',
'you',
"can't",
'help',
'that',
'said',
'the',
'Cat',
"we're",
'all',
'mad',
'here',
"I'm",
'mad',
"You're",
'mad']

discord.py - Dividing string to list [duplicate]


Regex joining words split by whitespace and hyphen

My string is quite messy and looks something like this:
s="I'm hope-less and can -not solve this pro- blem on my own. Wo - uld you help me?"
I'd like the hyphen-split words (sometimes separated by whitespace as well) joined back together in one list. Desired output:
words = ["I'm", 'hopeless', 'and', 'cannot', 'solve', 'this', 'problem', 'on', 'my', 'own', '.', 'Would', 'you', 'help', 'me', '?']
I tried a lot of different variations, but nothing worked:
rgx = re.compile("([\w][\w'][\w\-]*\w)")
s = "My string'"
rgx.findall(s)
Here's one way:
[re.sub(r'\s*-\s*', '', i) for i in re.split(r'(?<!-)\s(?!-)', s)]
# ["I'm", 'hopeless', 'and', 'cannot', 'solve', 'this', 'problem', 'on', 'my', 'own.', 'Would', 'you', 'help', 'me?']
Two operations here:
Split the text on whitespace that is not adjacent to a hyphen, using both a negative lookahead and a negative lookbehind.
In each resulting piece, replace any hyphen, together with the whitespace around it, with the empty string.
You can see the first operation's demo here: https://regex101.com/r/ayHPvY/2
And the second: https://regex101.com/r/ayHPvY/1
Edit: To get the . and ? to be separated as well, use this instead:
[re.sub(r'\s*-\s*','', i) for i in re.split(r"(?<!-)\s(?!-)|([^\w\s'-]+)", s) if i]
# ["I'm", 'hopeless', 'and', 'cannot', 'solve', 'this', 'problem', 'on', 'my', 'own', '.', 'Would', 'you', 'help', 'me', '?']
The catch was also splitting on characters that are not alphanumeric, whitespace, hyphens, or apostrophes. The if i is necessary, as the split might return some None items.
Quick, non-regex way to do it would be
''.join(map(lambda s: s.strip(), s.split('-'))).split()
That is: split on hyphens, strip the extra whitespace, join back into a string, and split on spaces. This, however, doesn't separate dots or question marks.
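Spelled out step by step, the one-liner above reads as:

```python
s = "I'm hope-less and can -not solve this pro- blem on my own. Wo - uld you help me?"
# 1. Split on hyphens, 2. strip whitespace around each fragment,
# 3. concatenate the fragments back together, 4. split on whitespace.
joined = ''.join(part.strip() for part in s.split('-'))
print(joined.split())
# ["I'm", 'hopeless', 'and', 'cannot', 'solve', 'this', 'problem', 'on',
#  'my', 'own.', 'Would', 'you', 'help', 'me?']
```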
How about this:
>>> s
"I'm hope-less and can -not solve this pro- blem on my own. Wo - uld you help me?"
>>> list(map(lambda x:re.sub(' *- *','',x), filter(lambda x:x, re.split(r'(?<!-) +(?!-)|([.?])',s))))
["I'm", 'hopeless', 'and', 'cannot', 'solve', 'this', 'problem', 'on', 'my', 'own', '.', 'Would', 'you', 'help', 'me', '?']
The above used a plain space ' ', but using \s is better:
list(map(lambda x: re.sub(r'\s*-\s*', '', x), filter(lambda x: x, re.split(r'(?<!-)\s+(?!-)|([.?])', s))))
(?<!-)\s+(?!-) means spaces that don't have - before or after.
[.?] means single . or ?.
re.split(r'(?<!-)\s+(?!-)|([.?])',s) will split the string accordingly, but will have some None and empty string '' inside:
["I'm", None, 'hope-less', None, 'and', None, 'can -not', None, 'solve', None, 'this', None, 'pro- blem', None, 'on', None, 'my', None, 'own', '.', '', None, 'Wo - uld', None, 'you', None, 'help', None, 'me', '?', '']
This result is fed directly to filter to remove None and '', and then to map to remove the spaces and - inside each word.

Splitting sentences in Python using regex

I'm trying to split words, punctuation, numbers from a sentence. However, my code produces output that isn't expected. How can I fix it?
This is my input text (in a text file):
"I 2changed to ask then, said that mildes't of men2,
And my code outputs this:
['"', 'I', '2', 'changed', 'to', 'ask', 'then', ',', 'said', 'that', "mildes't", 'of', 'men2']
However, the expected output is:
['"', 'I', '2', 'changed', 'to', 'ask', 'then', ',', 'said', 'that', "mildes't", 'of', 'men','2']
Here's my code:
import re

newlist = []
f = open("Inputfile2.txt", 'r')
out = f.readlines()
for line in out:
    word = line.strip('\n')
f.close()
lst = re.compile(r"\d|\w+[\w']+|\w|[^\w\s]").findall(word)
print(lst)
In regular expressions, \w matches any alphanumeric character plus the underscore, i.e. [a-zA-Z0-9_].
Also, in the first part of your regular expression, it should be \d+ to match more than one digit.
The second and third parts of your regular expression, \w+[\w']+ and \w, can be merged into a single part by changing + to *.
import re

with open('Inputfile2.txt', 'r') as f:
    for line in f:
        word = line.strip('\n')
        lst = re.compile(r"\d+|[a-zA-Z]+[a-zA-Z']*|[^\w\s]").findall(word)
        print(lst)
This gives:
['"', 'I', '2', 'changed', 'to', 'ask', 'then', ',', 'said', 'that', "mildes't", 'of', 'men', '2', ',']
Note that your expected output is incorrect. It is missing a ','.

split string by arbitrary number of white spaces

I'm trying to find the most pythonic way to split a string like
"some words in a string"
into single words. string.split(' ') works OK, but it returns a bunch of empty entries in the list. Of course I could iterate over the list and remove them, but I was wondering if there was a better way?
Just use my_str.split() without ' '.
More, you can also indicate how many splits to perform by specifying the second parameter:
>>> ' 1 2 3 4 '.split(None, 2)
['1', '2', '3 4 ']
>>> ' 1 2 3 4 '.split(None, 1)
['1', '2 3 4 ']
How about:
re.split(r'\s+',string)
\s is shorthand for any whitespace character, so \s+ matches one or more contiguous whitespace characters.
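One caveat: re.split(r'\s+', ...) yields a leading empty string when the input starts with whitespace, whereas re.findall(r'\S+', ...) does not:

```python
import re

s = '  some words in a string'
print(re.split(r'\s+', s))    # ['', 'some', 'words', 'in', 'a', 'string']
print(re.findall(r'\S+', s))  # ['some', 'words', 'in', 'a', 'string']
```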
Use string.split() without an argument or re.split(r'\s+', string) instead:
>>> s = 'some words in a string with spaces'
>>> s.split()
['some', 'words', 'in', 'a', 'string', 'with', 'spaces']
>>> import re; re.split(r'\s+', s)
['some', 'words', 'in', 'a', 'string', 'with', 'spaces']
From the docs:
If sep is not specified or is None, a different splitting algorithm is applied: runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace. Consequently, splitting an empty string or a string consisting of just whitespace with a None separator returns [].
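The last sentence of the quoted docs is worth a quick comparison:

```python
# With a None separator, an all-whitespace string yields an empty list;
# with an explicit ' ' separator, every boundary produces an empty string.
print('   '.split())     # []
print('   '.split(' '))  # ['', '', '', '']
```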
>>> a = "some words in a string"
>>> a.split(" ")
['some', 'words', 'in', 'a', 'string']
The split parameter is not included in the result, so I guess there's something more going on with your string; otherwise, it should work.
If you have more than one whitespace character in a row, just use split() without parameters:
>>> a = "some words in a string "
>>> a.split()
['some', 'words', 'in', 'a', 'string']
>>> a.split(" ")
['some', 'words', 'in', 'a', 'string', '', '', '', '', '']
Otherwise it will just split a on single spaces, producing an empty string for each extra space.
The most Pythonic and correct ways is to just not specify any delimiter:
"some words in a string".split()
# => ['some', 'words', 'in', 'a', 'string']
Also read:
How can I split by 1 or more occurrences of a delimiter in Python?
text = "".join([w and w+" " for w in text.split(" ")])
This converts runs of spaces into single spaces (note that it leaves a trailing space).
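A shorter idiom for collapsing whitespace runs, without the trailing space, is split-then-join:

```python
text = "some   words \t in  a string"
# split() with no arguments collapses any whitespace runs,
# and ' '.join() reassembles the words with single spaces.
collapsed = ' '.join(text.split())
print(collapsed)
# some words in a string
```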
