I'd like to remove all whitespace in URLs / email addresses. The addresses are in a "normal" string, like: "Today the weather is fine. Tomorrow, we'll see. More information: www.weather .com or info #weather.com"
I'm looking for a good regex (using Python's re module), but my versions can't handle all cases:
re.sub(u'(www)([ .])([a-zA-Z\-]+)([ .])([a-z]+)', '\\1.\\3.\\5', string)
Your expression for the URL just needs a little fixing. The regex for the email address can be derived from the URL expression.
>>> # Expressions (raw strings avoid invalid-escape warnings):
>>> url = r"(www)+([ .])+([a-zA-Z\-]+)+([ .])+([a-z]+)"
>>> ema = r"([a-zA-Z]+)+([ +#]+)+([a-zA-Z\-]+.com)"
>>>
>>> # Imports:
>>> import re
>>>
>>> # Your data:
>>> string = "Today the weather is fine. Tomorrow, we'll see. More information: www.weather .com or info #weather.com"
>>>
>>> # Scrape the data:
>>> "".join(re.findall(url, string)[0])
'www.weather.com'
>>> "".join(re.findall(ema, string)[0]).replace(" ", "")
'info#weather.com'
>>>
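If you want the whitespace removed in place (as in the original re.sub attempt) rather than the addresses extracted, a substitution-based sketch along the same lines could look like this; the patterns assume URLs always start with www and the addresses use # rather than @, exactly as in the sample string:
import re

text = "Today the weather is fine. Tomorrow, we'll see. More information: www.weather .com or info #weather.com"

# Collapse stray spaces/dots inside www-style URLs into single dots.
text = re.sub(r'(www)[ .]+([A-Za-z-]+)[ .]+([a-z]+)', r'\1.\2.\3', text)

# Collapse the space before the # in "user #domain.tld" style addresses.
text = re.sub(r'([A-Za-z]+)[ +#]+([A-Za-z-]+\.[a-z]+)', r'\1#\2', text)

print(text)
# ... More information: www.weather.com or info#weather.com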
If I have a URL www.somewebsite/category/category-xyz and I want to segregate all such category URLs from a list of URLs that I already have, how do I do it in Python?
Take a look at urlparse
>>> from urllib.parse import urlparse
>>> url = "http://www.test.com:8080/cat1/cat2"
>>> parsed = urlparse(url)
>>> parsed
ParseResult(scheme='http', netloc='www.test.com:8080', path='/cat1/cat2', params='', query='', fragment='')
>>> parsed.path
'/cat1/cat2'
>>> parsed.path.split("/")
['', 'cat1', 'cat2']
Notice above that urlparse takes care of breaking out the parts you don't care about, making your string processing easier. In the example it cleanly separates the scheme, host, and port, leaving you with just the path to operate on. If there were any query parameters, it would break those out too.
Once you have the path string, you can parse it as you would any string. Since your path will always start with a "/", you can just drop the leading empty element from the split:
>>> parsed.path.split("/")[1:]
['cat1', 'cat2']
Please note: if your URL does not contain a path, this will probably fail. If you need more detail, add the end result you're looking for to the question.
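To segregate category URLs from a list, the same parsing can drive a list comprehension. A minimal sketch, assuming "category URL" means the first path segment is literally "category" (the example URLs below are made up):
from urllib.parse import urlparse

# Hypothetical input; use your real list of URLs here.
urls = [
    "http://www.somewebsite.com/category/category-xyz",
    "http://www.somewebsite.com/about",
    "http://www.somewebsite.com/category/category-abc?page=2",
]

# Keep only URLs whose first path segment is "category".
category_urls = [
    u for u in urls
    if urlparse(u).path.split("/")[1:2] == ["category"]
]

print(category_urls)
# ['http://www.somewebsite.com/category/category-xyz',
#  'http://www.somewebsite.com/category/category-abc?page=2']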
What does .start() do in the following script?
import re
str2 = re.search("\((\w+)\)", str1)
return str1[:str2.start()].strip()
If you are more of a reader, the documentation of match.start() would tell you what it does.
If you are more of an experimenter, open an interactive Python console and input the following (feel free to change the input data, after all you are an experimenter):
>>> import re
>>> str1 = 'Hello (python) world'
>>> str2 = re.search("\((\w+)\)", str1)
>>> str2.start()
6
>>> str1[:6]
'Hello '
>>>
Short explanation: it tells you the index of the starting position of the match.
Hope this answer teaches you something more than just what match.start() does ;-)
From the Python documentation for the start() method:
https://docs.python.org/3/library/re.html
It returns the index at which the matched substring starts.
So str2.start() is the position in str1 where the regex matched.
Think of that return statement as saying:
Return everything in str1 up to where the regex matched, with surrounding whitespace stripped.
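Putting it together, here is a minimal sketch of the kind of function the quoted lines could come from (the function name and input string are made up for illustration):
import re

def text_before_parenthetical(str1):
    # Find the first "(word)" group and return everything before it,
    # with surrounding whitespace stripped.
    # (Assumes a match exists, as in the original snippet.)
    str2 = re.search(r"\((\w+)\)", str1)
    return str1[:str2.start()].strip()

print(text_before_parenthetical('Hello (python) world'))  # 'Hello'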
I have written a script in Python that pulls in the body of a multi-line email. I want to grab all the text on a line containing a hashtag, except for the hashtag itself, and dump it into a list. I've isolated a small, relevant portion below:
Email Body:
#Delete 454454
John Smith
+1.555.555.5555
Desired Output:
[454454]
You can try it like this:
>>> s = '''#Delete 544574
... #Delete 457545'''
>>> [int(item.split()[1]) for item in s.splitlines() if item.startswith('#')]
[544574, 457545]
You can use a regex:
>>> my_string
'#Delete 544574\n#Delete 457545\n \n\nThis email body contains two examples here.'
>>> import re
>>> list(map(int, re.findall(r"#.* (\d+)\n", my_string)))
[544574, 457545]
It also works on your edit:
>>> my_new
'#Delete 454454\n\n \n\nJohn Smith\n\n+1.555.555.5555'
>>> list(map(int, re.findall(r"#.* (\d+)\n", my_new)))
[454454]
You can build the list with a comprehension, like this (assuming your lines are in an iterable called lines, with each line still ending in a newline, which is what the [8:-1] slice relies on):
[int(line[8:-1]) for line in lines if line.startswith('#Delete ') and line[8:-1].isdigit()]
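If the lines come from splitlines() instead, the trailing newline is already gone, so the slice becomes line[8:]. A quick sketch against the sample body:
body = "#Delete 454454\n\nJohn Smith\n+1.555.555.5555\n"

# splitlines() strips the newlines, so slice from index 8 to the end.
lines = body.splitlines()
print([int(line[8:]) for line in lines
       if line.startswith('#Delete ') and line[8:].isdigit()])
# [454454]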
I am not sure how to extract the variables or groups I created in my regular expression, specifically datetime and IP. I have read other posts and the documentation, but I am getting a bit confused. Could someone give an example for me to follow? I would like to extract datetime and IP for later use, perhaps stored in variables to be called on later.
sample log:
log = 'Oct 7 13:24:36 192.168.10.2 2013: 10:07-13:24:35 httpproxy[15359]: id="0001"
httpproxy515139 = re.compile(r'(?P<datetime>\w\w\w\s+\d+\s+\d\d:\d\d:\d\d)\s+(?P<IP>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}).*')
This sample should help you:
>>> import re
>>> sample = 'this is a sample text'
>>> third_word = re.compile(r'\S+ \S+ (?P<word>\S+) .*')
>>> ms = third_word.match(sample)
>>> ms.groupdict()
{'word': 'a'}
You need to access the groupdict() method of the returned match object.
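Applied to the sample log from the question (truncated to what is shown there), the same groupdict() call hands you both named groups to store in variables; a minimal sketch:
import re

log = 'Oct 7 13:24:36 192.168.10.2 2013: 10:07-13:24:35 httpproxy[15359]: id="0001"'

# Same named groups as the pattern in the question.
pattern = re.compile(
    r'(?P<datetime>\w\w\w\s+\d+\s+\d\d:\d\d:\d\d)\s+'
    r'(?P<IP>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
)

m = pattern.match(log)
if m:
    fields = m.groupdict()          # {'datetime': 'Oct 7 13:24:36', 'IP': '192.168.10.2'}
    timestamp = fields['datetime']  # kept in variables for later use
    ip = fields['IP']
    print(timestamp, ip)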
How do I open a webpage and search for a word in Python?
This is a little simplified:
>>> import urllib
>>> import re
>>> page = urllib.urlopen("http://google.com").read()
# => via regular expression
>>> re.findall("Shopping", page)
['Shopping']
# => via string.find, returns the position ...
>>> page.find("Shopping")
2716
First, get the page (e.g. via urllib.urlopen). Second, use a regular expression to find the portions of the text you are interested in, or use the string's find method.
You can use urllib2:
import urllib2

webp = urllib2.urlopen("the_page").read()
print(webp.find("the_word"))  # index of the word, or -1 if it is not found
Hope that helps :D
How to open a webpage?
I think the most convenient way is:
from urllib2 import urlopen
page = urlopen('http://www.example.com').read()
How to search for a word?
I guess you are going to search for some pattern in the page next, so here we go:
import re
pattern = re.compile('^some regex$')
match = pattern.search(page)
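The answers above use Python 2's urllib / urllib2. On Python 3 the entry point moved to urllib.request and read() returns bytes, so a rough equivalent sketch would be (example.com and the word "Example" are just stand-ins):
import re
from urllib.request import urlopen

page = urlopen("http://www.example.com").read().decode("utf-8", errors="replace")

# Option 1: regular expression
print(re.findall("Example", page))  # list of occurrences, empty if the word is absent

# Option 2: plain string search
print(page.find("Example"))         # index of the first occurrence, or -1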