Parse link of URL pointing to another URL in Python

I have a rather strange question regarding urls which point to another url. So, for example, I have a url:
http://mywebpage/this/is/a/forward
which ultimately points to another url:
http://mynewpage/this/is/new
My question is: when I use, for example, urllib2 in Python to fetch the first page, it ultimately fetches the second page. I would like to know if it's possible to find out what the original link is pointing to. Is there something like a "header" which tells me the second link when I request the first link?
Sorry if this is a really silly question!

When you issue a GET request for the first URL, the web server will return a 300-series reply code, with a Location header whose value is the second URL. You can find out what the second URL was from Python with the geturl method of the object returned by urlopen. If there is more than one redirection involved, it appears that urllib will tell you the last hop and there's no way to get the others.
This will not handle redirections via JavaScript or meta http-equiv="refresh", but you probably aren't in that situation or you wouldn't have asked the question the way you did.
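For example, a minimal sketch with urllib2, using the question's URLs (the redirect is followed automatically, and geturl() reports where you ended up):
import urllib2

# urllib2 follows the redirect for you; geturl() then reports
# the URL you actually ended up at
response = urllib2.urlopen('http://mywebpage/this/is/a/forward')
print(response.geturl())  # e.g. http://mynewpage/this/is/new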

It's most commonly done via a redirection response code (3xx) as defined in RFC 2616, although a "pseudo redirect" effect can be achieved with some JavaScript in the original page.
This SO question is about how to prevent urllib2 from following redirects; it looks like something you might be able to use.
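For instance, a minimal sketch of that approach: subclass urllib2's redirect handler so the 3xx response is surfaced instead of followed (the URL is the question's example):
import urllib2

class NoRedirectHandler(urllib2.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        # Returning None tells urllib2 not to follow the redirect;
        # the 3xx response is then raised as an HTTPError
        return None

opener = urllib2.build_opener(NoRedirectHandler())
try:
    opener.open('http://mywebpage/this/is/a/forward')
except urllib2.HTTPError as e:
    if 300 <= e.code < 400:
        print(e.info()['Location'])  # the redirect target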

You can do this using requests:
>>> url = 'http://ofa.bo/foagK7'
>>> r = requests.head(url)
>>> r.headers['location']
'https://my.barackobama.com/page/s/what-does-2000-mean-to-you'
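If you let requests follow the redirect instead, r.history keeps every intermediate 3xx response, which works around the last-hop-only limitation of urllib mentioned above:
>>> r = requests.get(url)  # redirects are followed by default
>>> [hop.headers['location'] for hop in r.history]  # one entry per hop
>>> r.url  # the final URL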

Related

Impossible to extract data from this url

This is my first post here. I have been learning Python from scratch, on my own, for 5 months, and I acquired most of my knowledge thanks to this forum. I am now able to create web bots which can easily scrape all types of data, especially from sports betting sites.
Though for this particular need, there is one site from which I cannot extract what I am looking for:
winamax
I would like to get all links for all football events (listed on the left side), for example:
"https://www.winamax.fr/paris-sportifs#!/match/prelive/7894014"
but when I look at the source code, or when I print my soup, I get nothing.
url = "https://www.winamax.fr/paris-sportifs#!/sports"
urlRequest = requests.get(url, proxies=proxies, headers=headers)
#of course, proxies and headers are defined beforehand
soup = BeautifulSoup(urlRequest.content)
print(soup)
For all bookmakers I have already come up with, there was always either a simple html tree structure in which all items were easy to find, or a hidden javascript file, or a json link.
But for this one, even when trying to catch the flow with Firebug, I cannot find anything relevant.
Thanks in advance if someone has an idea on how to get that (I have considered using PhantomJS but have not tried it yet).
EDIT:
@ssundarraj:
Below is the header, the same one I have been using in all my projects, so not relevant in my opinion, but here it is anyway:
import random

AgentsFile = 'UserAgents.txt'
lines = open(AgentsFile).read().splitlines()
myline = random.choice(lines)
headers = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
           'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
           'Accept-Encoding': 'gzip,deflate,sdch',
           'Accept-Language': 'fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3',
           'Referer': 'https://www.winamax.fr',
           'User-Agent': myline}
EDIT2:
@Chris Lear:
using firebug, in the net panel, you can search through all the response bodies (there's a checkbox called "Response Bodies" that appears when you click the search box). That will show you that the data is being fetched by json. I'll leave you to try to make sense of it, but that might give you a start (searching for ids is probably best)
I checked the box you mentioned above, but with no effect :(
With or without a filter, nothing is displayed in my network panel, as you can see in the picture:
[screenshot: nothing caught]
I used Firebug and found out the following.
Make POST request to https://www.winamax.fr/betting/slider/slider.php with parameters:
key=050e42fb0761c96526e8510eda89248f
lang=FR
I don't know whether the key changes over time, but for now it works.
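In code, that request might look like the sketch below (the key value is the one observed above and may well change):
import requests

payload = {'key': '050e42fb0761c96526e8510eda89248f', 'lang': 'FR'}
r = requests.post('https://www.winamax.fr/betting/slider/slider.php',
                  data=payload)
print(r.text)  # the response appeared to contain the event data as JSON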

How to change link URLs in an HTTP response?

I have an HTTP response from website A, and I need to change all the link URLs in it to the URLs of website B, so that when users get this HTTP response in their browser and click on links, they will be directed to website B, not A.
I'm using python and django. Is there a package or tool can do this trick?
Thanks in advance.
Depending upon the nature of the response you get from website A, what you want to do with it, and how important it is that the replacement be efficient, there are a few possible ways of doing things. I'm not 100% clear on your situation and what you want to achieve.
If the links in the response from website A start with website A's hostname, then just get the response as a string and do response = response.replace('http://website-a.com', 'http://website-b.com') before you present the response to the user.
If the response is HTML, and the links are relative, the easiest solution to code would probably be to use lxml.rewrite_links (see http://lxml.de/lxmlhtml.html#working-with-links). I suspect this is what you're looking for.
If you've got some other situation, well, then I dunno what's appropriate. Maybe a regex. Maybe a custom algorithm of your own design. It depends upon what kind of content you're getting back from website A, how you can recognise links in it, and how you want to change them.
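As a minimal sketch of the lxml.rewrite_links approach mentioned above (the hostnames stand in for websites A and B):
import lxml.html

def redirect_links(html):
    doc = lxml.html.fromstring(html)
    # rewrite_links applies the function to every link found in the document
    doc.rewrite_links(
        lambda url: url.replace('http://website-a.com', 'http://website-b.com'))
    return lxml.html.tostring(doc)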
If you use Apache as your web server, you could use a module to replace text in the response, like http://mod-replace.sourceforge.net/. This seems more reasonable than invoking Perl or Python for every request. But be aware that all matching text might be replaced, not just the links, so this would be a very dirty solution.

Python URL Redirect Problem

I've got a link that I know redirects to another end URL, and I'm trying to get the address of that end URL using Python. But the original link is a little weird and doesn't work like a normal redirect, and I can't figure out why. When I paste the link (it's below, for you to try if you'd like) into a browser, it redirects perfectly. But when I run the following code, it doesn't.
import urllib2
request = urllib2.Request('http://www.facebook.com/ajax/emu/end.php?eid=AQJSWpZ3e4cCTHoNdahpJzPYzmzHOENzbTWBVlW4SgIxX0rL9bo6NXmS3q06cjeh5jO9wbsmr3IyGrpbXPSj0GPLbRJl4VUH-EBnmSy_R4j7iYzpMe1ooZ6IEqSEIlBl0-5SEldIhxI82m75YPa5nOhuBdokiwTw79hoiRB-Zn1auxN-6WLVe3e5WNSt3HLAEjZL-2e4ox_7yAyLcBo1nkamEvShTyZ-GfIf0A9oFXylwRnV8oNaqNmUnqrFYqDbUhzh7d6LSm3jbv1ue2coS3w8N7OxTKVwODHa-Hd3qRbYskB9weio8eKdDFtkvDKuzSSq5hjr711UjlDsgpxLuAmdD95xVwpomxeEsBsMCYJoUEQYa-cM7q3W1aiIYBHlyn2__t74qHWVvzK5zaLKFMKjRFQqphDlUMgMni6AP1VHSn1wli_3lgeVD8TzcJMSlJIF7DC_O44WdjBIMY8OufER3ZB_mm2NqwUe6cvV9oV9SNyYHE4UUURYjW_Z6sUxz3SpHG8c6QxJ-ltSeShvU3mIwAhFE3M0jGTg7AQ7nIoOUfC8PDainFZ1NV8g31aqaqDsF7UxdlOmBT6w-Y8TPmHOXfSlWB-M3MQYUBmcWS3UzlbSsavQG8LXPqYbyKfvkAfncSnZS3_tkoqbTksFirQWlSxJ3mgXrO5PqopH63Esd9ynCbFQM1q_3_wgkYvTeGS9XK6G63_Ag3N9dCHsO_bCJToJT4jeHQCSQ83cb1U5Qpe_7EWbw1ilzgyL-LBVrpH424dwK-4AoaL00W-gWzShSdOynjcoGeB7KE0pHbg-XhuaVribSodriSGybNdADBosnddVvZldY22-_97MqEuA&amp&c=4&amp&f=4&amp&ui=6003071106023-id_4e0b51323f9d01393198225&amp&en=1&amp&a=0&amp&sig=78154')
opener = urllib2.build_opener()
f = opener.open(request)
print(f.geturl())
I simply get my original url back. I encounter the same problem when I save cookies and use mechanize. Any help would be much appreciated! Thanks!
It looks like this is using Javascript to perform the redirect. You'll either have to figure out exactly how the Javascript is performing the redirects and pull out the appropriate urls, or you'll have to actually run the Javascript. As far as I know, running Javascript from python is not an easy task.
If you look at the contents of f.read() you'll see what's going on here. Instead of returning a 301 or 302 that redirects to the new URL, Facebook actually returns a real HTML document - which contains a piece of Javascript that uses document.location.replace to change the URL in the browser.
There's no easy way of replicating that with Python - the best thing to do is to parse the document with something like BeautifulSoup to find the Javascript, and somehow extract the new URL. It won't be pretty.
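A rough sketch of that approach (the exact quoting and escaping inside Facebook's script is an assumption, so the regex will likely need adjusting):
import re
import urllib2
from bs4 import BeautifulSoup

url = 'http://www.facebook.com/ajax/emu/end.php?...'  # the full link from the question
html = urllib2.urlopen(url).read()
soup = BeautifulSoup(html, 'html.parser')
for script in soup.find_all('script'):
    # look for the script that performs the client-side redirect
    if script.string and 'document.location.replace' in script.string:
        match = re.search(r'document\.location\.replace\("([^"]+)"\)',
                          script.string)
        if match:
            print(match.group(1))  # candidate redirect target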

Using Python to download a document that's not explicitly referenced in a URL

I wrote a web crawler in Python 2.6 using the Bing API that searches for certain documents and then downloads them for classification later. I've been using string methods and urllib.urlretrieve() to download results whose URL ends in .pdf, .ps etc., but I run into trouble when the document is 'hidden' behind a URL like:
http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En
So, two questions. Is there a general way to tell whether a URL links to a PDF/DOC etc. file when it doesn't do so explicitly (e.g. www.domain.com/file.pdf)? And is there a way to get Python to grab that file?
Edit:
Thanks for replies, several of which suggest downloading the file to see if it's of the correct type. Only problem is... I don't know how to do that (see question #2, above). urlretrieve(<above url>) gives only an html file with an href containing that same url.
There's no way to tell from the URL what it's going to give you. Even if it ends in .pdf it could still give you HTML or anything it likes.
You could do a HEAD request and look at the content-type, which, if the server isn't lying to you, will tell you if it's a PDF.
Alternatively you can download it and then work out whether what you got is a PDF.
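A minimal sketch of the HEAD approach with httplib (Python 2, matching the question's environment; use httplib.HTTPSConnection for https URLs):
import httplib
import urlparse

def head_content_type(url):
    parts = urlparse.urlsplit(url)
    conn = httplib.HTTPConnection(parts.netloc)
    # HEAD returns the headers only, not the body
    path = parts.path + ('?' + parts.query if parts.query else '')
    conn.request('HEAD', path)
    return conn.getresponse().getheader('content-type')

print(head_content_type('http://www.domain.com/file.pdf'))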
In this case, what you refer to as "a document that's not explicitly referenced in a URL" seems to be what is known as a "redirect". Basically, the server tells you that you have to get the document at another URL. Normally, python's urllib will automatically follow these redirects, so that you end up with the right file. (and - as others have already mentioned - you can check the response's mime-type header to see if it's a pdf).
However, the server in question is doing something strange here. You request the url, and it redirects you to another url. You request the other url, and it redirects you again... to the same url! And again... And again... At some point, urllib decides that this is enough already, and will stop following the redirect, to avoid getting caught in an endless loop.
So how come you are able to get the pdf when you use your browser? Because apparently, the server will only serve the pdf if you have cookies enabled. (why? you have to ask the people responsible for the server...) If you don't have the cookie, it will just keep redirecting you forever.
(check the urllib2 and cookielib modules to get support for cookies, this tutorial might help)
At least, that is what I think is causing the problem. I haven't actually tried doing it with cookies yet. It could also be that the server does not "want" to serve the pdf because it detects you are not using a "normal" browser (in which case you would probably need to fiddle with the User-Agent header), but that would be a strange way of doing it. So my guess is that it is using a "session cookie" somewhere, and if you haven't got one yet, it keeps on trying to redirect.
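A minimal sketch of the cookie-enabled opener (assuming, as above, that a session cookie is what breaks the redirect loop; the URL is the one from the question):
import cookielib
import urllib2

cj = cookielib.CookieJar()
# this opener stores cookies from responses and sends them back on
# the follow-up requests, so the redirect chain can terminate
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
url = ('http://www.oecd.org/officialdocuments/displaydocument/'
       '?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En')
response = opener.open(url)
with open('document.pdf', 'wb') as f:
    f.write(response.read())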
As has been said there is no way to tell content type from URL. But if you don't mind getting the headers for every URL you can do this:
import urllib

obj = urllib.urlopen(URL)
headers = obj.info()
if headers['Content-Type'].find('pdf') != -1:
    # we have a pdf file, download the whole thing
    ...
This way you won't have to download each URL, just its headers. It's still not exactly saving network traffic, but you won't do better than that.
Also, you should match proper MIME types instead of using my crude find('pdf').
No. It is impossible to tell what kind of resource is referenced by a URL just by looking at it. It is entirely up to the server to decide what it gives you when you request a certain URL.
Check the MIME type with the info() method of the object returned by urllib.urlopen(). This might not be 100% accurate; it really depends on what the site returns as the Content-Type header. If it's well behaved, it'll return the proper MIME type.
A PDF should return application/pdf, but that may not be the case.
Otherwise you might just have to download it and try it.
You can't see it from the url directly. You could try to only download the header of the HTTP response and look for the Content-Type header. However, you have to trust the server on this - it could respond with a wrong Content-Type header not matching the data provided in the body.
To detect the file type in Python 3.x (e.g. in a web app) from a URL whose file may have no extension or a fake extension, you should install python-magic, using
pip3 install python-magic
For Mac OS X, you should also install libmagic using
brew install libmagic
Code snippet
import magic
from urllib.request import Request, urlopen

url = "http://...url to the file ..."
request = Request(url)
response = urlopen(request)
# mime=True returns e.g. 'application/pdf'; the first bytes of the
# body are enough for identification
mime_type = magic.from_buffer(response.read(2048), mime=True)
print(mime_type)

How do I get the URL of an HTTP redirect's target?

I am writing client-side Python unit tests to verify whether the HTTP 302 redirects on my Google App Engine site are pointing to the right pages. So far, I have been calling urllib2.urlopen(my_url).geturl(). However, I have encountered 2 issues:
the URL returned by geturl() does not appear to include URL query strings like ?k1=v1&k2=v2; how can I see these? (I need to check whether I correctly passed along the visitor's original URL query string to the redirect page.)
geturl() shows the final URL after any additional redirects. I just care about the first redirect (the one from my site); I am agnostic to anything after that. For example, let's assume my site is example.com. If a user requests http://www.example.com/somepath/?q=foo, I might want to redirect them to http://www.anothersite.com?q=foo. That other site might do another redirect to http://subdomain.anothersite.com?q=foo, which I can't control or predict. How can I make sure my redirect is correct?
Supply follow_redirects=False to the fetch function, then retrieve the location of the first redirect from the 'location' header in the response, like so:
from google.appengine.api import urlfetch

response = urlfetch.fetch(your_url, follow_redirects=False)
location = response.headers['Location']
Use httplib (and look at the return status and Location header of the response) to avoid the "auto-follow redirects" that's impeding your testing. There's a good example here.
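A minimal sketch of that httplib approach, using the example.com URLs from the question:
import httplib

conn = httplib.HTTPConnection('www.example.com')
# httplib never follows redirects, so the 302 comes back as-is
conn.request('GET', '/somepath/?q=foo')
resp = conn.getresponse()
if resp.status in (301, 302, 303, 307):
    print(resp.getheader('Location'))  # first-hop target, query string intact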
