I've been trying to write a program for the last several hours that does what I thought would be an incredibly simple task:
Program asks for user input (let's say they type 'happiness')
Program queries the website thinkexist using this format ("http://thinkexist.com/search/searchQuotation.asp?search=USERINPUT")
Program returns first quote from the website.
I've tried using XPath with lxml, but I have no experience, and every construction I try comes back with an empty list.
The actual meat of the quote appears to be contained in the class "sqq."
If I navigate the site via Firebug and click the DOM tab, it appears the quote is in a textNode attribute, "wholeText" or "textContent", but I don't know how to use that knowledge programmatically.
Any ideas?
import lxml.html
import urllib
site = 'http://thinkexist.com/search/searchquotation.asp'
userInput = raw_input('Search for: ').strip()
url = site + '?' + urllib.urlencode({'search':userInput})
root = lxml.html.parse(url).getroot()
quotes = root.xpath('//a[@class="sqq"]')
print quotes[0].text_content()
... and if you enter 'Shakespeare', it returns
In real life, unlike in Shakespeare, the sweetness
of the rose depends upon the name it bears. Things
are not only what they are. They are, in very important
respects, what they seem to be.
If it's not necessary for you to implement this via XPath, you may use the BeautifulSoup library like this (let the myXml variable contain the page HTML source):
soup = BeautifulSoup(myXml)
for a in soup.findAll('a', {'class': 'sqq'}):
    # this is your quote
    print a.contents
Anyway, read the BS documentation; it may be very useful for some scraping needs that don't require the power of XPath.
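For completeness, a minimal end-to-end sketch along these lines, assuming Python 2 and BeautifulSoup 3, and fetching the page into myXml with urllib:
import urllib
from BeautifulSoup import BeautifulSoup

userInput = raw_input('Search for: ').strip()
url = ('http://thinkexist.com/search/searchquotation.asp?'
       + urllib.urlencode({'search': userInput}))

myXml = urllib.urlopen(url).read()   # the page HTML source
soup = BeautifulSoup(myXml)

matches = soup.findAll('a', {'class': 'sqq'})
if matches:
    # join the text nodes of the first matching anchor
    print ''.join(matches[0].findAll(text=True))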
You could open the html source to find out the exact class you are looking for. For example, to grab the first StackOverflow username encountered on the page you could do:
#!/usr/bin/env python
from lxml import html
url = 'http://stackoverflow.com/questions/4710307'
tree = html.parse(url)
path = '//div[@class="user-details"]/a[@href]'
print tree.findtext(path)
# -> Parseltongue
# OR to print text including the text in children
a = tree.find(path)
print a.text_content()
# -> Parseltongue
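If the cssselect package is installed, the same element can also be grabbed with a CSS selector instead of XPath; a small sketch of that alternative:
# alternative: a CSS selector via lxml's cssselect support (needs the cssselect package)
first = tree.getroot().cssselect('div.user-details a')[0]
print first.text_content()
# -> Parseltongue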
I am trying to scrape column names (player, cost, sel., form, pts) from the page below:
https://fantasy.premierleague.com/a/statistics/total_points
However, I am failing to do so.
Before I go further, let me show you what I have done.
from lxml import html
import requests
page = 'https://fantasy.premierleague.com/a/statistics/total_points'
#Fetch the page and parse the HTML
page = requests.get(page)
tree = html.fromstring(page.content)
#Using the page's CSS classes, select the 'Player' column header cell
Location = tree.cssselect('.ism-thead-bold tr .ism-table--el-stats__name')
When I do this, Location should be a list containing the "Player" header cell.
However, it returns an empty list, which means cssselect did not capture anything.
Though each column name has a different th class, I used just one of them (ism-table--el-stats__name) for this trial to keep things simple.
When this problem is fixed, I want to use a regex, since every class has a different suffix after the two underscores.
If anyone can help me with these two tasks, I would really appreciate it.
Thank you!
I'm working on a small project, a site scraper, and I've run into a problem that (I think) stems from urllib.urlopen(). So, let's say I want to scrape Google's homepage, a concatenated query, and then a search query. (I'm not actually trying to scrape from google, but I figured they'd be easy to demonstrate on.)
from bs4 import BeautifulSoup
import urllib
url = urllib.urlopen("https://www.google.com/")
soup = BeautifulSoup(url)
parseList1=[]
for i in soup.stripped_strings:
    parseList1.append(i)
parseList1 = list(parseList1[10:15])
#Second URL
url2 = urllib.urlopen("https://www.google.com/"+"#q=Kerbal Space Program")
soup2 = BeautifulSoup(url2)
parseList2=[]
for i in soup2.stripped_strings:
    parseList2.append(i)
parseList2 = list(parseList2[10:15])
#Third URL
url3 = urllib.urlopen("https://www.google.com/#q=Kerbal Space Program")
soup3 = BeautifulSoup(url3)
parseList3=[]
for i in soup3.stripped_strings:
    parseList3.append(i)
parseList3 = list(parseList3[10:15])
print " 1 "
for i in parseList1:
    print i
print " 2 "
for i in parseList2:
    print i
print " 3 "
for i in parseList3:
    print i
This prints out:
1
A whole nasty mess of scraped code from Google
2
3
This leads me to believe that the # symbol might be preventing the url from opening.
The concatenated string doesn't throw any errors for concatenation, yet still doesn't read anything in.
Does anyone have any idea on why that would happen? I never thought that a # inside a string would have any effect on the code. I figured this would be some silly error on my part, but if it is, I can't see it.
Thanks
Browsers do not send the URL fragment part (the part starting with "#") to servers.
RFC 1808 (Relative Uniform Resource Locators): "Note that the fragment identifier (and the "#" that precedes it) is not considered part of the URL. However, since it is commonly used within the same string context as a URL, a parser must be able to recognize the fragment when it is present and set it aside as part of the parsing process."
You get the right result in a browser because the browser sends a request to https://www.google.com, the URL fragment is then handled by JavaScript (this is similar to the spell checking here; most web sites don't do it), the browser sends a new AJAX request (https://www.google.com?q=xxxxx), and finally renders the page with the JSON data it gets back. urllib cannot execute JavaScript for you.
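You can see the fragment being split off by parsing the URL yourself (Python 2's urlparse module):
from urlparse import urlparse

parts = urlparse("https://www.google.com/#q=Kerbal Space Program")
print parts.query     # '' - there is no query string at all
print parts.fragment  # 'q=Kerbal Space Program' - never sent to the server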
To fix your problem, just replace https://www.google.com/#q=Kerbal Space Program with https://www.google.com/?q=Kerbal Space Program
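For example, letting urllib.urlencode build the query string also takes care of the spaces in the search term:
import urllib

# urlencode escapes the spaces in the search term as well
query = urllib.urlencode({'q': 'Kerbal Space Program'})   # 'q=Kerbal+Space+Program'
url = urllib.urlopen("https://www.google.com/?" + query)
soup = BeautifulSoup(url)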
I am building a blog on Google App Engine. I would like to convert some keywords in my blog posts to links, just like what you see in many WordPress blogs.
Here is one WP plugin which does the same thing: http://wordpress.org/extend/plugins/blog-mechanics-keyword-link-plugin-v01/
A plugin that allows you to define keyword/link pairs. The keywords are automatically linked in each of your posts.
I think this is more than a simple Python Replace. What I am dealing with is HTML code. It can be quite complex sometimes.
Take the following code snippet as an example. I want to convert the word example into a link to http://example.com:
Here is an example link: <a href="http://example.com">example.com</a>
A simple Python replace that substitutes every occurrence of example with <a href="http://example.com">example</a> would also rewrite the example inside the existing link, producing broken output along the lines of:
Here is an <a href="http://example.com">example</a> link: <a href="http://<a href="http://example.com">example</a>.com">example.com</a>
but I want:
Here is an <a href="http://example.com">example</a> link: <a href="http://example.com">example.com</a>
Is there any Python plugin that is capable of this? Thanks a lot!
This is roughly what you could do using BeautifulSoup:
from BeautifulSoup import BeautifulSoup

html_body = """
Here is an example link:<a href='http://example.com'>example.com</a>
"""

soup = BeautifulSoup(html_body)

# temporarily mark existing link text so the replacement pass skips it
for link_tag in soup.findAll('a'):
    link_tag.string = "%s%s%s" % ('|', link_tag.string, '|')

for text in soup.findAll(text=True):
    text_formatted = ['<a href="http://example.com">example</a>'
                      if word == 'example' and not (word.startswith('|') and word.endswith('|'))
                      else word for word in text.split()]
    text.replaceWith(' '.join(text_formatted))

# strip the temporary '|' markers from the link text
for link_tag in soup.findAll('a'):
    link_tag.string = link_tag.string[1:-1]

print soup
Basically I'm pulling out all the text from the html_body and replacing the word example with the given link, without touching the text of existing links, which is protected by the '|' markers during the pass.
This is not 100% perfect, for example it does not work if the word you are trying to replace ends with a period; with some patience you could fix all the edge cases.
This would probably be better suited to client-side code. You could easily modify a word highlighter to get the desired results. By keeping this client-side, you can avoid having to expire page caches when your 'tags' change.
If you really need it to be processed server-side, then you need to look at using re.sub which lets you pass in a function, but unless you are operating on plain-text you will have to first parse the HTML using something like minidom to ensure you are not replacing something in the middle of any elements.
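A minimal sketch of the re.sub-with-a-callable idea on plain text (the keyword table here is purely illustrative):
import re

# illustrative keyword -> URL pairs
KEYWORD_LINKS = {'example': 'http://example.com'}

def linkify(match):
    word = match.group(0)
    url = KEYWORD_LINKS.get(word.lower())
    return '<a href="%s">%s</a>' % (url, word) if url else word

print re.sub(r'\w+', linkify, 'Here is an example sentence.')
# -> Here is an <a href="http://example.com">example</a> sentence.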
I'd like to grab all the index words and their definitions from here. Is it possible to scrape web content with Python?
Firebug exploration shows that the following URL returns the content I want, including both the index word and its definition, for 'a':
http://pali.hum.ku.dk/cgi-bin/cpd/pali?acti=xart&arid=14179&sphra=undefined
What modules should I use? Is there any tutorial available?
I do not know how many words are indexed in the dictionary. I'm an absolute beginner in programming.
You should use urllib2 for getting the URL contents and BeautifulSoup for parsing the HTML/XML.
Example - retrieving all questions from the StackOverflow.com main page:
import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://stackoverflow.com")
soup = BeautifulSoup(page)
for incident in soup('h3'):
    print [i.decode('utf8') for i in incident.contents]
    print
This code sample was adapted from the BeautifulSoup documentation.
You can get data from the web using the built-in urllib or urllib2, but the parsing itself is the most important part. May I suggest the wonderful BeautifulSoup? It can deal with just about anything.
http://www.crummy.com/software/BeautifulSoup/
The documentation is built like a tutorial. Sorta:
http://www.crummy.com/software/BeautifulSoup/documentation.html
In your case, you probably need to use wildcards to see all entries in the dictionary. You can do something like this:
import urllib2

def getArticles(query, start_index, count):
    xml = urllib2.urlopen('http://pali.hum.ku.dk/cgi-bin/cpd/pali?' +
                          'acti=xsea&tsearch=%s&rfield=entr&recf=%d&recc=%d' %
                          (query, start_index, count))
    # TODO:
    # Parse the xml here (using BeautifulSoup or an xml parser like Python's
    # own xml.etree). We should at least have the name and ID for each article:
    # article = (article_name, article_id)
    return articles  # a list of (article_name, article_id) pairs parsed from the XML

def getArticleContent(article_id):
    xml = urllib2.urlopen('http://pali.hum.ku.dk/cgi-bin/cpd/pali?' +
                          'acti=xart&arid=%d&sphra=undefined' % article_id)
    # TODO: parse xml
    return parsed_article
Now you can loop over things. For instance, to get all articles starting in 'ana', use the wildcard 'ana*', and loop until you get no results:
query = 'ana*'
article_dict = {}
i = 0
while True:
    new_articles = getArticles(query, i, 100)
    if len(new_articles) == 0:
        break
    i += 100
    for article_name, article_id in new_articles:
        article_dict[article_name] = getArticleContent(article_id)
Once done, you'll have a dictionary of the content of all articles, referenced by names. I omitted the parsing itself, but it's quite simple in this case, since everything is XML. You might not even need to use BeautifulSoup (even though it's still handy and easy to use for XML).
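If you go the xml.etree route for the TODO parts, the parsing could look roughly like this; the article tag and its name/id attributes here are placeholders, since the actual response format isn't shown:
import xml.etree.ElementTree as ET

def parseArticleList(xml_string):
    # NOTE: 'article', 'name' and 'id' are hypothetical names; inspect the
    # XML the site actually returns to find the real tags and attributes.
    root = ET.fromstring(xml_string)
    return [(node.get('name'), int(node.get('id')))
            for node in root.findall('.//article')]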
A word of warning though:
You should check the site's usage policy (and maybe robots.txt) before trying to heavily scrape articles. If you're just getting a few articles for yourself they may not care (though the dictionary copyright owner, if it's not public domain, might), but if you're going to scrape the entire dictionary, this is going to be some heavy usage.
I am using BeautifulSoup in a simple function to extract links that have all uppercase text:
def findAllCapsUrls(page_contents):
    """ given HTML, returns a list of URLs that have ALL CAPS text
    """
    soup = BeautifulSoup.BeautifulSoup(page_contents)
    all_urls = soup.findAll(name='a')
    # if the text for the link is ALL CAPS then add the link to good_urls
    good_urls = []
    for url in all_urls:
        text = url.find(text=True)
        if text.upper() == text:
            good_urls.append(url['href'])
    return good_urls
Works well most of the time, but a handful of pages will not parse correctly in BeautifulSoup (or lxml, which I also tried) due to malformed HTML on the page, resulting in an object with no (or only some) links in it. A "handful" might sound like not-a-big-deal, but this function is being used in a crawler so there could be hundreds of pages that the crawler will never find...
How can the above function be refactored to not use a parser like BeautifulSoup? I've searched around for how to do this using regex, but all the answers say "use BeautifulSoup." Alternatively, I started looking at how to "fix" the malformed HTML so that it parses, but I don't think that is the best route...
What is an alternative solution, using re or something else, that can do the same as the function above?
If the HTML pages are malformed, there are not a lot of solutions that can really help you. BeautifulSoup or another parsing library is the way to go to parse HTML files.
If you want to avoid the library path, you could use a regexp to match all your links (see regular-expression-to-extract-url-from-an-html-link), using a range of [A-Z] for the uppercase text.
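A rough sketch of that regex idea (fragile by design, as noted):
import re

# anchors whose visible text is made up of uppercase letters, digits and spaces
CAPS_LINK = re.compile(r'<a\s+[^>]*href="([^"]*)"[^>]*>([A-Z0-9\s]+)</a>')
good_urls = [href for href, text in CAPS_LINK.findall(page_contents)]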
When I need to parse really broken HTML and speed is not the most important factor, I automate a browser with selenium & webdriver.
This is the most resilient way of HTML parsing I know.
Check this tutorial; it shows how to extract Google suggestions using webdriver (the code is in Java but it can be adapted to Python).
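For reference, a minimal sketch of that approach in Python (Firefox and the older selenium API assumed):
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://example.com/')

# the browser repairs the broken markup, so we read links from the rendered DOM
for a in driver.find_elements_by_tag_name('a'):
    print a.text, a.get_attribute('href')

driver.quit()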
I ended up with a combination of regex and BeautifulSoup:
import re
import BeautifulSoup

def findAllCapsUrls2(page_contents):
    """ returns a list of URLs that have ALL CAPS text, given
        the HTML from a page. Uses a combo of RE and BeautifulSoup
        to handle malformed pages.
    """
    # get all anchors on page using regex
    p = r'<a\s+href\s*=\s*"([^"]*)"[^>]*>(.*?(?=</a>))</a>'
    re_urls = re.compile(p, re.DOTALL)
    all_a = re_urls.findall(page_contents)
    # if the text for the anchor is ALL CAPS then add the link to good_urls
    good_urls = []
    for a in all_a:
        href = a[0]
        a_content = a[1]
        a_soup = BeautifulSoup.BeautifulSoup(a_content)
        text = ''.join([s.strip() for s in a_soup.findAll(text=True) if s])
        if text and text.upper() == text:
            good_urls.append(href)
    return good_urls
This is working for my use cases so far, but I wouldn't guarantee it to work on all pages. Also, I only use this function if the original one fails.
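A minimal sketch of that fallback wiring (the wrapper name is just for illustration):
def findAllCapsUrlsSafe(page_contents):
    # try the BeautifulSoup version first; fall back to the regex-based
    # version when parsing fails or turns up nothing
    try:
        urls = findAllCapsUrls(page_contents)
    except Exception:
        urls = []
    return urls or findAllCapsUrls2(page_contents)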