urllib.urlopen returns an old page?

So I have a very simple HTML page (a directory listing) and I try to read it with urllib, this way:
import urllib
page = urllib.urlopen(coreRepositoryUrl).read()
The problem is that the HTML I get this way is an older version of the page. info() returns this:
Date: Fri, 19 Apr 2013 18:48:09 GMT
Server: Apache/2.0.52 (Fedora)
Content-Type: text/html; charset=UTF-8
Connection: close
Age: 481084
And the page was last updated today (2013-04-25).
Which component along the way might be doing the caching?

Add a "Cache-Control" header with the value "max-age=0" to your request:
import urllib2
req = urllib2.Request(url)
req.add_header('Cache-Control', 'max-age=0')
resp = urllib2.urlopen(req)
content = resp.read()
With that header, each cache along the way will revalidate its cache entry.
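As a quick sanity check (a minimal sketch; the URL is just a stand-in for coreRepositoryUrl), you can compare the Age header with and without the extra request header. A large Age means an intermediate cache handed you a stored copy:
import urllib2

url = 'http://example.com/repo/'  # stand-in for coreRepositoryUrl

# plain request: an intermediate cache may serve a stale copy (large Age value)
plain = urllib2.urlopen(url)
print plain.info().getheader('Age')

# with Cache-Control: max-age=0 every cache on the path has to revalidate
req = urllib2.Request(url, headers={'Cache-Control': 'max-age=0'})
fresh = urllib2.urlopen(req)
print fresh.info().getheader('Age')  # should now be 0 or absent
page = fresh.read()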


Python library requests opens the wrong page

I am trying to open an HTML page with the Python requests library, but my code opens the site's root page instead, and I don't understand how to solve the problem.
import requests
scraping = requests.request("POST", url = "http://www.pollnet.it/WeeklyReport_it.aspx?ID=69")
print scraping.content
Thank you for any suggestions!
You can easily see that the server is redirecting to the main page:
➜ ~ http -v http://www.pollnet.it/WeeklyReport_it.aspx\?ID\=69
GET /WeeklyReport_it.aspx?ID=69 HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: www.pollnet.it
User-Agent: HTTPie/0.9.3
HTTP/1.1 302 Found
Content-Length: 131
Content-Type: text/html; charset=utf-8
Date: Sun, 07 Feb 2016 11:24:52 GMT
Location: /default.asp
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
On further checking, it can be seen that the web server uses session cookies.
➜ ~ http -v http://www.pollnet.it/default_it.asp
HTTP/1.1 200 OK
Cache-Control: private
Content-Encoding: gzip
Content-Length: 9471
Content-Type: text/html; Charset=utf-8
Date: Sun, 07 Feb 2016 13:21:41 GMT
Server: Microsoft-IIS/7.5
Set-Cookie: ASPSESSIONIDSQTSTAST=PBHDLEIDFCNMPKIGANFDNMLK; path=/
Vary: Accept-Encoding
X-Powered-By: ASP.NET
It means that every time the main page is visited, the server sends a "Set-Cookie" header, which instructs the browser to set certain cookies. Then every time the browser asks for a Weekly Report, the server validates the session cookie.
Normally, the requests package does not save cookies between requests, but for this scrape we can use a Session object, which keeps cookies across page requests.
import requests
# create a Session object
s = requests.Session()
# first visit the main page
s.get("http://www.pollnet.it/default_it.asp")
# then we can visit the weekly report pages
r = s.get("http://www.pollnet.it/WeeklyReport_it.aspx?ID=69")
print(r.text)
# another page
r = s.get("http://www.pollnet.it/WeeklyReport_it.aspx?ID=89")
print(r.text)
But here is some advice: the web server may only allow a fixed number of pages (maybe 10, maybe 15) to be opened with a given Session object. Either validate r.text immediately each time (for example, check that the length of the response body is not suspiciously small), or create a new Session object every 5 or 6 pages, as in the sketch below.
More info on Session objects here.
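A rough sketch of that rotation idea (the report IDs, the length threshold, and the rotation interval are all assumptions to adjust against the real pages):
import requests

BASE = "http://www.pollnet.it"
report_ids = [69, 89, 104, 123]   # whichever report IDs you need
MIN_LENGTH = 2000                 # assumed size of a "real" report page

s = None
for i, report_id in enumerate(report_ids):
    # start a fresh session every 5 pages (and on the first iteration)
    if s is None or i % 5 == 0:
        s = requests.Session()
        s.get(BASE + "/default_it.asp")   # pick up the session cookie again
    r = s.get(BASE + "/WeeklyReport_it.aspx?ID=%d" % report_id)
    if len(r.text) < MIN_LENGTH:
        print("ID %d looks wrong (only %d characters)" % (report_id, len(r.text)))
        continue
    print(r.text[:200])               # do something useful with the report here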

Retrieve ALL header data with urllib

I've scraped many websites and have often wondered why the response headers displayed in Firebug and the response headers returned by urllib.urlopen(url).info() are often different in that Firebug reports MORE headers.
I encountered an interesting one today. I'm scraping a website by following a "search URL" that fully loads (returns a 200 status code) before redirecting to a final page. The easiest way to perform the scrape would be to read the Location response header and make another request. However, that particular header is absent when I run urllib.urlopen(url).info().
Here is the difference:
Firebug headers:
Cache-Control : no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection : keep-alive
Content-Encoding : gzip
Content-Length : 2433
Content-Type : text/html
Date : Fri, 05 Oct 2012 15:59:31 GMT
Expires : Thu, 19 Nov 1981 08:52:00 GMT
Location : /catalog/display/1292/index.html
Pragma : no-cache
Server : Apache/2.0.55
Set-Cookie : PHPSESSID=9b99dd9a4afb0ef0ca267b853265b540; path=/
Vary : Accept-Encoding,User-Agent
X-Powered-By : PHP/4.4.0
Headers returned by my code:
Date: Fri, 05 Oct 2012 17:16:23 GMT
Server: Apache/2.0.55
X-Powered-By: PHP/4.4.0
Set-Cookie: PHPSESSID=39ccc547fc407daab21d3c83451d9a04; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Vary: Accept-Encoding,User-Agent
Content-Type: text/html
Connection: close
Here's my code:
from BeautifulSoup import BeautifulSoup
import urllib
import psycopg2
import psycopg2.extras
import scrape_tools
tools = scrape_tools.tool_box()
db = tools.db_connect()
cursor = db.cursor(cursor_factory = psycopg2.extras.RealDictCursor)
cursor.execute("SELECT data FROM table WHERE variable = 'Constant' ORDER BY data")
for row in cursor:
    url = 'http://www.website.com/search/' + row['data']
    headers = {
        'Accept' : 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Encoding' : 'gzip, deflate',
        'Accept-Language' : 'en-us,en;q=0.5',
        'Connection' : 'keep-alive',
        'Host' : 'www.website.com',
        'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1'
    }
    post_params = {
        'query' : row['data'],
        'searchtype' : 'products'
    }
    post_args = urllib.urlencode(post_params)
    soup = tools.request(url, post_args, headers)
    print tools.get_headers(url, post_args, headers)
Please note: scrape_tools is a module I wrote myself. The code contained in the module to retrieve headers is (basically) as follows:
class tool_box:
    def get_headers(self, url, post, headers):
        file_pointer = urllib.urlopen(url, post, headers)
        return file_pointer.info()
Is there a reason for the discrepancy? Am I making a silly mistake in my code? How can I retrieve the missing header data? I'm fairly new to Python, so please forgive any dumb errors.
Thanks in advance. Any advice is much appreciated!
Also...Sorry about the wall of code =\
You're not getting the same kind of response for the two requests. For example, the response to the Firefox request contains a Location: header, so it's probably a 302 Found (moved temporarily) or a 301. Those don't contain any actual body data, but instead cause your Firefox to issue a second request to the URL in the Location: header (urllib follows that redirect for you behind the scenes, which is why info() shows you the headers of the final response rather than of the 302).
The Firefox response also uses Connection : keep-alive while the urllib request got answered with Connection: close.
Also, the Firefox response is gzipped (Content-Encoding : gzip), while the urllib one is not. That's probably because your Firefox sends an Accept-Encoding: gzip, deflate header with its request.
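As an aside, if you send Accept-Encoding: gzip yourself, urllib/urllib2 will not decompress the body for you; you have to check Content-Encoding and gunzip it manually. A minimal sketch (the URL is a placeholder):
import gzip
import urllib2
from StringIO import StringIO

req = urllib2.Request('http://www.website.com/search/whatever')  # placeholder URL
req.add_header('Accept-Encoding', 'gzip')
resp = urllib2.urlopen(req)
body = resp.read()

# urllib2 hands back the raw bytes; decompress only if the server gzipped them
if resp.info().get('Content-Encoding') == 'gzip':
    body = gzip.GzipFile(fileobj=StringIO(body)).read()
print resp.info()
print len(body)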
Don't rely on Firebug to tell you HTTP headers (even though it does so truthfully most of the time); use a sniffer like Wireshark to inspect what's actually going over the wire.
You're obviously dealing with two different responses.
There could be several reasons for this. For one, web servers are supposed to respond differently depending on what Accept-Language, Accept-Encoding headers etc.. the client sends in its request. Then there's also the possibility that the server does some kind of User-Agent sniffing.
Either way, capture your requests with urllib as well as the ones from Firefox using Wireshark, and first compare the requests themselves (not the headers, but the actual GET / HTTP/1.0 part). Are they really the same? If so, move on to comparing the request headers and start manually setting the same headers for the urllib request until you figure out which headers make the difference.
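Separately, since the original goal was to read the Location header of the intermediate 302: urllib and urllib2 follow redirects automatically, so info() only ever shows you the final response. One way around that (a sketch, not part of the scrape_tools module; the URL is a placeholder) is an opener whose redirect handler hands back the 3xx response instead of following it:
import urllib2

class NoRedirectHandler(urllib2.HTTPRedirectHandler):
    # return the 3xx response itself instead of following the redirect
    def http_error_302(self, req, fp, code, msg, headers):
        return fp
    http_error_301 = http_error_303 = http_error_307 = http_error_302

opener = urllib2.build_opener(NoRedirectHandler)
resp = opener.open('http://www.website.com/search/whatever')  # placeholder URL
print resp.getcode()                      # 302 if the server redirected
print resp.info().getheader('Location')   # the redirect target you were after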

Making HTTP POST request

I'm trying to make a POST request to retrieve information about a book.
Here is the code; it returns HTTP code 302, Moved:
import httplib, urllib
params = urllib.urlencode({
    'isbn' : '9780131185838',
    'catalogId' : '10001',
    'schoolStoreId' : '15828',
    'search' : 'Search'
})
headers = {"Content-type": "application/x-www-form-urlencoded",
           "Accept": "text/plain"}
conn = httplib.HTTPConnection("bkstr.com:80")
conn.request("POST", "/webapp/wcs/stores/servlet/BuybackSearch",
             params, headers)
response = conn.getresponse()
print response.status, response.reason
data = response.read()
conn.close()
When I try from a browser, from this page: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828 , it works. What am I missing in my code?
EDIT:
Here's what I get when I call print response.msg
302 Moved
Date: Tue, 07 Sep 2010 16:54:29 GMT
Vary: Host,Accept-Encoding,User-Agent
Location: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
X-UA-Compatible: IE=EmulateIE7
Content-Length: 0
Content-Type: text/plain; charset=utf-8
It seems that the Location header points to the same URL I'm trying to access in the first place?
EDIT2:
I've tried using urllib2 as suggested here. Here is the code:
import urllib, urllib2
url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
values = {'isbn' : '9780131185838',
          'catalogId' : '10001',
          'schoolStoreId' : '15828',
          'search' : 'Search' }
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
print response.geturl()
print response.info()
the_page = response.read()
print the_page
And here is the output:
http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
Date: Tue, 07 Sep 2010 16:58:35 GMT
Pragma: No-cache
Cache-Control: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: JSESSIONID=0001REjqgX2axkzlR6SvIJlgJkt:1311s25dm; Path=/
Vary: Accept-Encoding,User-Agent
X-UA-Compatible: IE=EmulateIE7
Content-Length: 0
Connection: close
Content-Type: text/html; charset=utf-8
Content-Language: en-US
Set-Cookie: TSde3575=225ec58bcb0fdddfad7332c2816f1f152224db2f71e1b0474c866f3b; Path=/
Their server seems to want you to acquire the proper cookie. This works:
import urllib, urllib2, cookielib
cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))
urllib2.install_opener(opener)
# acquire cookie
url_1 = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828'
req = urllib2.Request(url_1)
rsp = urllib2.urlopen(req)
# do POST
url_2 = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
values = dict(isbn='9780131185838', schoolStoreId='15828', catalogId='10001')
data = urllib.urlencode(values)
req = urllib2.Request(url_2, data)
rsp = urllib2.urlopen(req)
content = rsp.read()
# print result
import re
pat = re.compile('Title:.*')
print pat.search(content).group()
# OUTPUT: Title: Statics & Strength of Materials for Arch (w/CD)<br />
You might want to use the urllib2 module, which should handle redirects better. Here's an example of POSTing with urllib2.
Perhaps that's what the browser gets, and you'll just have to follow the 302 redirect.
If all else fails, you can monitor the dialogue between Firefox and the web server using Firebug or tcpdump or Wireshark, and see which HTTP headers are different. Possibly it's just the User-Agent: header.

urllib2.urlopen throws 404 exception for urls that browser opens

The following url (and others like it) can be opened in a browser but causes urllib2.urlopen to throw a 404 exception: http://store.ovi.com/#/applications?categoryId=20&fragment=1&page=1
geturl() returns the same URL (no redirect). The headers were copied and pasted from Firebug. I tried passing the headers as a dictionary to Request, but got the same result. wget opens the URL from the console, but not from the script.
The code:
import socket
import urllib2

source_url = 'http://store.ovi.com/#/applications?categoryId=20&fragment=1&page=2'
try:
    socket.setdefaulttimeout(10)
    hdrs = [('Host', 'store.ovi.com'),
            ('User-Agent', 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US;rv:1.9.0.13) Gecko/2009073021 Firefox/3.0.13 AppEngine-Google;(+http://code.google.com/appengine)'),
            ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
            ('Accept-Language', 'en-us,en;q=0.5'),
            ('Accept-Encoding', 'gzip,deflate'),
            ('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.7'),
            ('Keep-Alive', '115'),
            ('Connection', 'keep-alive'),
            ('Cookie', 'JNPRSESSID=4u4devdrt7eb6e0qem3gin47i2; s_cc=true; undefined_s=First%20Visit; s_nr=1282817443274; s_sq=%5B%5BB%5D%5D; view=Grid; menu=menuOpen; OVI_DEVICE=b5130'),
            ('Cache-Control', 'max-age=0')]
    ree = urllib2.Request(source_url)
    ree.addheaders = hdrs
    opener = urllib2.build_opener()
    htmlSource = opener.open(ree).read()
except urllib2.HTTPError, e:
    print e.code
    print e.msg
    print e.headers
The error output:
404
Not Found
Date: Sat, 28 Aug 2010 00:36:57 GMT
Server: Apache/2.2.3 (Red Hat)
X-Powered-By: PHP/5.2.2
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Keep-Alive: timeout=7, max=333
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
What, if anything, am I doing incorrectly? Is this a bug? And if so, is there a workaround? Thanks!
Given a URL like:
http://store.ovi.com/#/applications?categoryId=20&fragment=1&page=2
The bit that browsers fetch is just:
http://store.ovi.com/
Everything to the right of that is a ‘fragment identifier’, which is not passed to the server at all (evidently, if you try, it will get confused). Instead, the HTML returned for the / URL will include a load of JavaScript that reads the #... data at the client side and fills in the page content using a bunch of XMLHttpRequests.
Webapps implemented like this are a big old pain to scrape, because you can't just take the HTML content of the main page. Instead you have to either analyse the script to find out where it gets the actual data from, or you have to hook up a real browser in order to execute all the scripts and see what document objects you're left with. They're also typically bad for accessibility and SEO.
Luckily for you this site appears to be putting something in the fragment that's also a valid path. So it looks like you can get the dynamic page data from the URL:
http://store.ovi.com/applications?categoryId=20&fragment=1&page=1
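A minimal sketch of that (requesting the fragment as a real path; the User-Agent value is an arbitrary choice, since some servers refuse urllib2's default one):
import urllib2

# the '#/applications?...' fragment happens to work as a real path on this server
source_url = 'http://store.ovi.com/applications?categoryId=20&fragment=1&page=1'
req = urllib2.Request(source_url)
req.add_header('User-Agent', 'Mozilla/5.0')
htmlSource = urllib2.urlopen(req).read()
print htmlSource[:500]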

Why doesn't my Django site return a 404 when checked with this URL parser?

Here's a simple python function that checks if a given url is valid:
from httplib import HTTP
from urlparse import urlparse

def checkURL(url):
    p = urlparse(url)
    h = HTTP(p[1])
    h.putrequest('HEAD', p[2])
    h.endheaders()
    if h.getreply()[0] == 200:
        return 1
    else:
        return 0
This works for most sites, but with my Django-based site I get 200 status code even when I enter a url that is clearly wrong. If I view the same page in a browser, I get a 404. For example, the following page gives a 404 in a browser: http://wefoundland.com/GooseBumper
But gives a 200 when checked with this script. Why?
Edit: While mopoke's answer solved the issue from the Django side of things, there was also a bug in the script above:
instead of parsing the url and then using
h.putrequest('HEAD', p[2])
I actually needed to use the url in the request, like so:
h.putrequest('HEAD', url)
That solved the issue.
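For reference, the corrected function with that edit folded in might look like this (same old-style httplib.HTTP interface as the question):
from httplib import HTTP
from urlparse import urlparse

def checkURL(url):
    p = urlparse(url)
    h = HTTP(p[1])              # p[1] is the host
    h.putrequest('HEAD', url)   # send the full URL, per the edit above
    h.endheaders()
    if h.getreply()[0] == 200:
        return 1
    else:
        return 0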
Although the content says 404, the site is returning 200 OK in the headers:
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 30 Dec 2009 01:38:24 GMT
Content-Type: text/html; charset=utf-8
Connection: close
Make sure your response is using HttpResponseNotFound. e.g.:
return HttpResponseNotFound('<h1>Page not found</h1>')
Your page isn't actually returning a 404 status code:
alex#alex-laptop:~$ curl -I http://wefoundland.com/GooseBumper
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 30 Dec 2009 01:37:41 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
To get a 404 returned by your Django view, use HttpResponseNotFound instead of HttpResponse, or pass status=404 to the HttpResponse constructor.
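A minimal sketch of both options in a Django view (the view name is illustrative):
from django.http import HttpResponse, HttpResponseNotFound

def goose_bumper(request):
    # option 1: dedicated response class
    return HttpResponseNotFound('<h1>Page not found</h1>')
    # option 2: plain HttpResponse with an explicit status code
    # return HttpResponse('<h1>Page not found</h1>', status=404)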
