How to regex in Python? - python

I am trying to parse the keywords from Google Suggest; this is the URL:
http://google.com/complete/search?output=toolbar&q=test
I've done it in PHP using:
'|<CompleteSuggestion><suggestion data="(.*?)"/><num_queries int="(.*?)"/></CompleteSuggestion>|is'
But that won't work with Python's re.match(pattern, string); I tried a few patterns, but some raise errors and some return None.
How can I parse that info? I don't want to use minidom because I think a regex will be less code.

You could use etree:
>>> from xml.etree.ElementTree import XMLParser
>>> x = XMLParser()
>>> x.feed('<toplevel><CompleteSuggestion><suggestion data=...')
>>> tree = x.close()
>>> [(e.find('suggestion').get('data'), int(e.find('num_queries').get('int')))
...  for e in tree.findall('CompleteSuggestion')]
[('test internet speed', 31800000), ('test', 686000000), ...]
It is more code than a regex, but it also does more. Specifically, it fetches the entire list of matches in one go and unescapes anything odd, such as escaped double quotes, in the data attribute. It also won't get confused if additional elements start appearing in the XML. (Incidentally, the immediate reason your attempts returned None is that re.match only matches at the start of the string; re.search or re.findall scan the whole input.)
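For an end-to-end version, here is a minimal sketch that fetches the URL from the question and pulls out the pairs (assuming the endpoint still returns the <toplevel><CompleteSuggestion>... XML shown above; Python 3):
import urllib.request
import xml.etree.ElementTree as ET

url = 'http://google.com/complete/search?output=toolbar&q=test'
xml_bytes = urllib.request.urlopen(url).read()  # fetch the raw XML

tree = ET.fromstring(xml_bytes)
suggestions = [(cs.find('suggestion').get('data'),
                int(cs.find('num_queries').get('int')))
               for cs in tree.findall('CompleteSuggestion')]
print(suggestions)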

RegEx match open tags except XHTML self-contained tags
This is an XML document. Please reconsider an XML parser: it will be more robust and will probably take you less time in the end, even if it is more code.

Related

Python strip XML tags from document

I am trying to strip XML tags from a document using Python, a language I am a novice in. Here is my first attempt, using regex, which was really a hope-for-the-best idea.
mfile = file("somefile.xml", "w")
for line in mfile:
    re.sub('<./>', "", line)  # trying to match elements between < and />
That failed miserably. I would like to know how it should be done with regex.
Secondly, I googled and found http://code.activestate.com/recipes/440481-strips-xmlhtml-tags-from-string/, which seems to work.
But I would like to know: is there a simpler way to get rid of all XML tags? Maybe using ElementTree?
The most reliable way to do this is probably with lxml.
from lxml import etree
...
tree = etree.parse('somefile.xml')
notags = etree.tostring(tree, encoding='utf8', method='text')
print(notags)
It will avoid the problems with "parsing" XML with regular expressions, and should correctly handle escaping and everything.
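One small follow-up: with a byte encoding like 'utf8', tostring returns bytes. If you want a str instead, decode it or ask lxml for unicode output directly (a minor variation on the code above):
notags = etree.tostring(tree, encoding='unicode', method='text')  # returns str, not bytes
print(notags)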
An alternative to Jeremiah's answer without requiring the lxml external library:
import xml.etree.ElementTree as ET
...
tree = ET.fromstring(text)  # text holds the XML document as a string
notags = ET.tostring(tree, encoding='utf8', method='text')
print(notags)
Should work with any Python >= 2.7 (the method argument to tostring arrived with ElementTree 1.3).
Please note that it is usually not a good idea to do this with regular expressions; see Jeremiah's answer.
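On Python 2.7+/3.2+ there is also itertext(), which avoids the bytes-vs-str question entirely; a minimal sketch, with an invented XML string:
import xml.etree.ElementTree as ET

text = '<root><p>111</p><p>222</p></root>'  # any well-formed XML string
tree = ET.fromstring(text)
notags = ''.join(tree.itertext())  # joins all text content, tags dropped
print(notags)  # 111222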
Try this:
import re
text = re.sub('<[^<]+>', "", open("/path/to/file").read())
with open("/path/to/file", "w") as f:
    f.write(text)

Extract string using regex

How can I extract the content (how are you) from the string:
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">how are you</string>.
Can I use a regex for this purpose? If so, what would a suitable regex be?
Note: I don't want to use the split function to extract the result. Also, can you suggest some links for a beginner to learn regex?
I am using Python 2.7.2.
You could use a regular expression for this (as Joey demonstrates).
However, if your XML document is any bigger than this one-liner, you can't, since XML is not a regular language.
Use BeautifulSoup (or another XML parser) instead:
>>> from BeautifulSoup import BeautifulSoup
>>> xml_as_str = '<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">how are you</string>. '
>>> soup = BeautifulSoup(xml_as_str)
>>> print soup.text
how are you.
Or...
>>> for string_tag in soup.findAll('string'):
... print string_tag.text
...
how are you
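If you would rather stay in the standard library, ElementTree parses this too; the xmlns attribute puts the tag in a namespace, but reading .text does not care (a minimal sketch, dropping the trailing period from the question's string):
>>> import xml.etree.ElementTree as ET
>>> s = '<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">how are you</string>'
>>> ET.fromstring(s).text
'how are you'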
Try the following regex:
/<[^>]*>(.*?)</
(?<=<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">)[^<]+(?=</string>)
would match what you want, as a trivial example.
(?<=<)[^<]+
would, too. It all depends a bit on how your input is formatted exactly.
This will match a generic HTML tag (replace "string" with the tag you want to match):
/<string[^<]*>(.*?)<\/string>/i
(i=case insensitive)

How can I make a regular expression to extract all anchor tags or links from a string?

I've seen other questions which will parse either all plain links, or all anchor tags from a string, but nothing that does both.
Ideally, the regular expression will be able to parse a string like this (I'm using Python):
>>> import re
>>> content = '''<a href="http://www.google.com">http://www.google.com</a> Some other text.
... And even more text! http://stackoverflow.com'''
>>> links = re.findall('some-regular-expression', content)
>>> print links
[u'http://www.google.com', u'http://stackoverflow.com']
Is it possible to produce a regular expression which would not result in duplicate links being returned? Is there a better way to do this?
No matter what you do, it's going to be messy. Nevertheless, a 90% solution might resemble:
r'<a\s[^>]*>([^<]*)</a>|\b(\w+://[^<>\'"\t\r\n\xc2\xa0]*[^<>\'"\t\r\n\xc2\xa0 .,()])'
Since that pattern has two groups, it will return a list of 2-tuples; to join them, you could use a list comprehension or even a map:
map(''.join, re.findall(pattern, content))
If you want the href attribute of the anchor instead of the link text, the pattern gets even messier:
r'<a\s[^>]*href=[\'"]([^"\']*)[\'"][^>]*>[^<]*</a>|\b(\w+://[^<>\'"\t\r\n\xc2\xa0]*[^<>\'"\t\r\n\xc2\xa0 .,()])'
Alternatively, you can just let the second half of the pattern pick up the href attribute, which also removes the need for the string join:
r'\b\w+://[^<>\'"\t\r\n\xc2\xa0]*[^<>\'"\t\r\n\xc2\xa0 .,()]'
Once you have this much in place, you can replace any found links with something that doesn't look like a link, search for '://', and update the pattern to collect what it missed. You may also have to clean up false positives, particularly garbage at the end. (This pattern had to find links that included spaces, in plain text, so it's particularly prone to excess greediness.)
Warning: Do not rely on this for future user input, particularly when security is on the line. It is best used only for manually collecting links from existing data.
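To make the two-group behaviour concrete, here is the first pattern run end to end (a small sketch with invented input; list() is only needed on Python 3, where map is lazy):
import re

pattern = r'<a\s[^>]*>([^<]*)</a>|\b(\w+://[^<>\'"\t\r\n\xc2\xa0]*[^<>\'"\t\r\n\xc2\xa0 .,()])'
content = 'Go to <a href="http://example.com">Example</a> and http://stackoverflow.com.'

# findall yields ('anchor text', '') or ('', 'bare url') tuples;
# joining each tuple flattens them into plain strings.
print(list(map(''.join, re.findall(pattern, content))))
# ['Example', 'http://stackoverflow.com']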
Usually you should never parse HTML with regular expressions, since HTML isn't a regular language. Here it seems you only want to get all the http links, whether they are in an A element or in plain text. How about getting them all and then removing the duplicates?
Try something like
set(re.findall("(http:\/\/.*?)[\"' <]", content))
and see if it serves your purpose.
Writing a regex pattern that matches all valid URLs is tricky business.
If all you're looking for is to detect simple http/https URLs within an arbitrary string, I could offer you this solution:
>>> import re
>>> content = '<a href="http://www.google.com">http://www.google.com</a> Some other text. And even more text! http://stackoverflow.com'
>>> re.findall(r"https?://[\w\-.~/?:#\[\]@!$&'()*+,;=]+", content)
['http://www.google.com', 'http://www.google.com', 'http://stackoverflow.com']
That looks for strings that start with http:// or https:// followed by one or more valid chars.
To avoid duplicate entries, use set():
>>> list(set(re.findall(r"https?://[\w\-.~/?:#\[\]@!$&'()*+,;=]+", content)))
['http://www.google.com', 'http://stackoverflow.com']
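set() does not preserve order; if you want the first-seen order kept while deduplicating, a dict works too (a small sketch; dict preserves insertion order on Python 3.7+):
>>> urls = re.findall(r"https?://[\w\-.~/?:#\[\]@!$&'()*+,;=]+", content)
>>> list(dict.fromkeys(urls))
['http://www.google.com', 'http://stackoverflow.com']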
You should not use regular expressions to extract things from HTML. You should use an HTML parser.
If you also want to extract things from the text of the page then you should do that separately.
Here's how you would do it with lxml:
# -*- coding: utf8 -*-
import lxml.html as lh
import re
html = """
is.gd/testhttp://www.google.com Some other text.
And even more text! http://stackoverflow.com
here's a url bit.ly/test
"""
tree = lh.fromstring(html)
urls = set([])
for a in tree.xpath('//a'):
urls.add(a.text)
for text in tree.xpath('//text()'):
for url in re.findall(r'(?i)\b((?:https?://|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:\'".,<>?«»“”‘’]))', text):
urls.add(url[0])
print urls
Result:
set(['http://www.google.com', 'bit.ly/test', 'http://stackoverflow.com', 'is.gd/test'])
URL matching regex from here: http://daringfireball.net/2010/07/improved_regex_for_matching_urls
No, a regex will not be able to parse a string like this reliably. Regexes are capable of simple matching, and you can't handle parsing a grammar as complicated as HTML with just one or two of them.
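If you want to stay in the standard library for the anchor half of the problem, html.parser can collect href attributes without any regex; a minimal sketch (bare URLs in the surrounding text would still need a regex pass like the ones above):
from html.parser import HTMLParser  # Python 3; the module is named HTMLParser on Python 2

class LinkCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag
        if tag == 'a':
            self.links.extend(value for name, value in attrs if name == 'href')

collector = LinkCollector()
collector.feed('<a href="http://example.com">Example</a> and some plain text')
print(collector.links)  # ['http://example.com']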

Match "without this"

I need to remove all <p></p> tags when they are the only <p> inside a <td>.
But how can it be done?
import re
text = """
<td><p>111</p></td>
<td><p>111</p><p>222</p></td>
"""
text = re.sub(r'<td><p>(??no</p>inside??)</p></td>', r'<td>\1</td>', text)
How can I match "without </p> inside"?
I would use minidom. I stole the following snippet from here; you should be able to modify it to work for you:
from xml.dom import minidom
doc = minidom.parse(myXmlFile)
for element in doc.getElementsByTagName('MyElementName'):
    if element.getAttribute('name') in ['AttrName1', 'AttrName2']:
        parentNode = element.parentNode
        parentNode.insertBefore(doc.createComment(element.toxml()), element)
        parentNode.removeChild(element)
f = open(myXmlFile, "w")
f.write(doc.toxml())
f.close()
Thanks @Ivo Bosticky
While using regexps with HTML is bad, matching a string that does not contain a given pattern is an interesting question in itself.
Let's assume that we want to match a string beginning with an a and ending with a z and take out whatever is in between only when string bar is not found inside.
Here's my take: "a((?:(?<!ba)r|[^r])+)z"
It basically says: find a, then find either an r which is not preceded by ba, or something other than r (repeated at least once), then find a z. So, a bar cannot sneak into the capture group.
Note that this approach uses a 'negative lookbehind' pattern and only works with lookbehind patterns of fixed length (like ba).
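A quick demonstration of that pattern in Python (a small sketch; the test strings are invented):
>>> import re
>>> pattern = re.compile(r"a((?:(?<!ba)r|[^r])+)z")
>>> pattern.search("a foo z").group(1)
' foo '
>>> print(pattern.search("a bar z"))   # 'bar' inside blocks the match
None
>>> pattern.search("a bra z").group(1)  # an 'r' not preceded by 'ba' is fine
' bra '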
I would definitely recommend using BeautifulSoup for this. It's a python HTML/XML parser.
http://www.crummy.com/software/BeautifulSoup/
Not quite sure why you want to remove the <p> tags which don't have closing tags.
However, if this is an attempt to clean the code, an advantage of BeautifulSoup is that it can clean HTML for you:
from BeautifulSoup import BeautifulSoup
html = """
<td><p>111</td>
<td><p>111<p>222</p></td>
"""
soup = BeautifulSoup(html)
print soup.prettify()
This doesn't get rid of your unmatched tags, but it adds the missing closing ones.

Extracting some HTML tag values in Python

How do I get the value of a nested <b> HTML tag in Python using regular expressions?
<b>LG</b> X110
# => LG X110
You don't.
Regular Expressions are not well suited to deal with the nested structure of HTML. Use an HTML parser instead.
Don't use regular expressions for parsing HTML. Use an HTML parser like BeautifulSoup. Just look how easy it is:
from BeautifulSoup import BeautifulSoup
html = r'<b>LG</b> X110'
soup = BeautifulSoup(html)
print ''.join(soup.findAll(text=True))
# LG X110
Your question was very hard to understand, but from the given output example it looks like you want to strip everything within < and > from the input text. That can be done like so:
import re
input_text = '<a bob>i <b>c</b></a>'
output_text = re.sub('<[^>]*>', '', input_text)
print output_text
Which gives you:
i c
If that is not what you want, please clarify.
Please note that the regular expression approach to parsing XML is very brittle. For instance, the above example would break on the input <a name="b>c">hey</a>. (> is a valid character in an attribute value; see the XML spec.)
Try this...
<a.*<b>(.*)</b>(.*)</a>
$1 and $2 should be what you want; in Python, the captured groups come out of the match object as match.group(1) and match.group(2).
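In Python that would look something like this (a small sketch; the input is invented and assumes the <b> really is wrapped in an <a>, as that pattern expects):
import re

html = '<a href="#"><b>LG</b> X110</a>'
m = re.search(r'<a.*<b>(.*)</b>(.*)</a>', html)
if m:
    print(m.group(1) + m.group(2))  # LG X110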
+1 for Jens's answer. lxml is a good library you can use to actually parse this in a robust fashion. If you'd prefer something in the standard library, you can use sax, dom, or ElementTree.
