python downloading data (urllib, urllib2)

I have a link like this, pointing directly to an MP3 file. When I put it in my browser, it basically asks me whether I want to download the file. However, when I do the same thing with Python using the following code:
> data = urllib2.urlopen("http://www23.zippyshare.com/d/44123087/497548/Lil%20Wayne%20ft.%20Eminem%20-%20Drop%20The%20World.mp3").read()
I get redirected to another link. So, instead of the MP3 data, I am getting the HTML code of
'http://www23.zippyshare.com/v/44123087/file.html'
Any ideas?
Thanks

urllib2 handles redirection transparently. You might want to see what the server is actually doing when it presents such a redirection while still letting a browser download the file. You could subclass the redirect handler, see which header property carries the target URL, and then use urlretrieve to download it (a sketch of that follows the cookie example below).
Explicitly setting the cookies might be worth a try as well:
import cookielib, urllib2
cj = cookielib.CookieJar()  # holds cookies across requests
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
opener.open('yourmp3filelink')  # replace with the actual MP3 link
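Here is a rough sketch of the redirect-handler idea, assuming the server answers with an ordinary 3xx Location header (the handler class name is mine, and I haven't tested it against zippyshare):
import urllib
import urllib2
class LoggingRedirectHandler(urllib2.HTTPRedirectHandler):
    # prints every redirect target before following it
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        print 'redirected to:', newurl  # the URL the server hands back
        return urllib2.HTTPRedirectHandler.redirect_request(
            self, req, fp, code, msg, headers, newurl)
opener = urllib2.build_opener(LoggingRedirectHandler())
opener.open('yourmp3filelink')
# once you know the real location of the file:
# urllib.urlretrieve('the-real-mp3-url', 'song.mp3')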

Your link redirects to an HTML webpage, most likely because your download request is timing out. That's often how these download websites work: you never get a static link to the download, only a temporarily assigned link.
My guess is that there's no way to get that static link using that website. You'd have to know where that file was actually coming from.
So no, nothing is wrong with your Python code, just with your source.

Related

How can I load all of a site's resources, including AJAX requests, etc., in Python?

I know how to request a web site and read its text with Python. In the past, I've tried using a library like BeautifulSoup to make all of the requests to links on a site, but that doesn't get things that don't look like full URLs, such as AJAX requests and most requests to the original domain (since the "http://example.com" will be missing, and more importantly, they aren't in an <a href='url'>Link</a> format, so BeautifulSoup will miss them).
How can I load all of a site's resources in Python? Will it require interacting with something like Selenium, or is there a way that's not too difficult to implement without that? I haven't used Selenium much, so I'm not sure how difficult that will be.
Thanks
It all depends on what you want and how you want it. The closest thing that may work for you is:
from ghost import Ghost
ghost = Ghost()
page, extra_resources = ghost.open("http://jeanphi.fr")
assert page.http_status==200 and 'jeanphix' in ghost.content
You can learn more at: http://jeanphix.me/Ghost.py/
Mmm, that's a pretty interesting question. For those resources whose URLs are not fully identifiable because they are generated at runtime or something like that (such as those used in scripts, not only AJAX), you'd need to actually run the website so that the scripts get executed and the dynamic URLs get created.
One option is using something like what this answer describes, which is using a third-party library, like Qt, to actually run the website. To collect all URLs, you need some way of monitoring all requests made by the website, which could be done like this (although it's C++, the code is essentially the same).
Finally, once you have the URLs, you can use something like Requests to download the external resources.
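For that last step, here is a minimal sketch with Requests (the URL and filename are placeholders I made up):
import requests
resource_url = 'http://example.com/static/app.js'  # one of the collected resource URLs
response = requests.get(resource_url)
response.raise_for_status()  # stop on HTTP errors
with open('app.js', 'wb') as f:  # write the raw bytes of the resource to disk
    f.write(response.content)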
I would love to hear other ways of doing this, especially if they're more concise (easier to remember), but I think this accomplishes my goal. It does not fully answer my original question though--it just gets more of the stuff than using requests.get(url)--which was enough for me in this case:
import urllib2
url = 'http://example.com'
headers = {'User-Agent': 'Mozilla/5.0'}  # pretend to be a browser
request = urllib2.Request(url, None, headers)
sock = urllib2.urlopen(request)
ch = sock.read()  # the raw HTML of the page
sock.close()

Python: Request URL via POST and show result in browser

I'm a developer for a big GUI app and we have a web site for bug tracking. Anybody can submit a new bug to the bug tracking site. We can detect certain failures from our desktop app (i.e. an unhandled exception) and in such cases we would like to open the submit-new-bug form in the user predefined browser, adding whatever information we can gather about the failure to some form fields. We can either retrieve the submit-new-bug form using GET or POST http methods and we can provide default field values to that form. So from the http server side everything is pretty much OK.
So far we can successfully open a URL, passing the default values as GET parameters in the URL, using the webbrowser module from the Python Standard Library. There are, however, some limitations of this method, such as the maximum allowed length of the URL for some browsers (especially MS IE). The webbrowser module doesn't seem to have a way to request the URL using POST. OTOH, there's the urllib2 module that provides the type of control we want, but AFAIK it lacks the possibility of opening the retrieved page in the user's preferred browser.
Is there a way to get the mixed behavior we want (the fine control of urllib2 with the higher-level functionality of webbrowser)?
PS: We have thought about the possibility of retrieving the URL with urllib2, saving its content to a temp file and opening that file with webbrowser. This is a somewhat nasty solution, and in this case we would have to deal with other issues such as relative URLs. Is there a better solution?
This is not a proper answer, but it also works:
import os
import requests
import webbrowser
url = "https://www.facebook.com/login/device-based/regular/login/?login_attempt=1&lwv=110"
myInput = {'email': 'mymail#gmail.com', 'pass': 'mypaass'}
x = requests.post(url, data=myInput)  # POST the form data
y = x.text  # the resulting HTML
with open("home.html", "w") as f:  # "w" so reruns overwrite instead of appending
    f.write(y)
webbrowser.open('file://' + os.path.abspath('home.html'))
I don't know of any way you can open the result of a POST request in a web browser without saving the result to a file and opening that.
What about taking an alternative approach and temporarily storing the data on the server? Then the page can be opened in the browser with a simple id parameter, and the saved, partially filled form would be shown.
You could use tempfile.NamedTemporaryFile():
import tempfile
import webbrowser
import jinja2
t = jinja2.Template('hello {{ name }}!') # you could load template from a file
f = tempfile.NamedTemporaryFile() # deleted when goes out of scope (closed)
f.write(t.render(name='abc'))
f.flush()
webbrowser.open_new_tab(f.name) # returns immediately
A better approach, if the server can be easily modified, is to make the POST request with the partial parameters using urllib2 and open the URL generated by the server using webbrowser, as suggested by @Acorn.
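A rough sketch of that server-assisted flow, assuming the tracker exposes an endpoint (the /prefill path and the JSON reply below are hypothetical) that stores the partial form and returns a short URL to open:
import json
import urllib
import urllib2
import webbrowser
# POST the gathered failure details to a hypothetical prefill endpoint
data = urllib.urlencode({'summary': 'Unhandled exception', 'details': '...'})
response = urllib2.urlopen('http://bugs.example.com/prefill', data)
# assume the server answers with JSON containing the URL of the prefilled form
form_url = json.loads(response.read())['url']
webbrowser.open_new_tab(form_url)  # open that short URL in the user's preferred browser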

urllib.urlretrieve is failing

I'm trying to download data using the commands below.
import urllib
url = 'http://www.nse-india.com/content/historical/EQUITIES/2002/MAR/cm01MAR2002bhav.csv.zip'
urllib.urlretrieve(url, 'myzip')
What I see in the generated file is:
You don't have permission to access "http://www.nse-india.com/content/historical/EQUITIES/2002/MAR/cm01MAR2002bhav.csv.zip" on this server.<P>
Reference #18.7d427b5c.1311889977.25329891
But I'm able to download the file from the website without any problem.
What is the reason for this?
You may need to use urllib2 and set the user-agent header to something it recognizes. It might just be blocking anything that doesn't appear to be a normal user.
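Something along these lines might do it (a sketch, assuming the server only checks the User-Agent; the header value is just an example):
import urllib2
url = 'http://www.nse-india.com/content/historical/EQUITIES/2002/MAR/cm01MAR2002bhav.csv.zip'
# pretend to be a regular browser so the server doesn't reject the request
request = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
response = urllib2.urlopen(request)
with open('cm01MAR2002bhav.csv.zip', 'wb') as f:
    f.write(response.read())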

Python URL Redirect Problem

I've got a link that I know redirects to another end URL, and I'm trying to get the address of that end URL using Python. But the original link is a little weird and doesn't work like a normal redirect, and I can't figure out why. When I paste the link (it's below, for you to try if you'd like) into a browser, it redirects perfectly. But when I run the following code, it doesn't.
import urllib2
request = urllib2.Request('http://www.facebook.com/ajax/emu/end.php?eid=AQJSWpZ3e4cCTHoNdahpJzPYzmzHOENzbTWBVlW4SgIxX0rL9bo6NXmS3q06cjeh5jO9wbsmr3IyGrpbXPSj0GPLbRJl4VUH-EBnmSy_R4j7iYzpMe1ooZ6IEqSEIlBl0-5SEldIhxI82m75YPa5nOhuBdokiwTw79hoiRB-Zn1auxN-6WLVe3e5WNSt3HLAEjZL-2e4ox_7yAyLcBo1nkamEvShTyZ-GfIf0A9oFXylwRnV8oNaqNmUnqrFYqDbUhzh7d6LSm3jbv1ue2coS3w8N7OxTKVwODHa-Hd3qRbYskB9weio8eKdDFtkvDKuzSSq5hjr711UjlDsgpxLuAmdD95xVwpomxeEsBsMCYJoUEQYa-cM7q3W1aiIYBHlyn2__t74qHWVvzK5zaLKFMKjRFQqphDlUMgMni6AP1VHSn1wli_3lgeVD8TzcJMSlJIF7DC_O44WdjBIMY8OufER3ZB_mm2NqwUe6cvV9oV9SNyYHE4UUURYjW_Z6sUxz3SpHG8c6QxJ-ltSeShvU3mIwAhFE3M0jGTg7AQ7nIoOUfC8PDainFZ1NV8g31aqaqDsF7UxdlOmBT6w-Y8TPmHOXfSlWB-M3MQYUBmcWS3UzlbSsavQG8LXPqYbyKfvkAfncSnZS3_tkoqbTksFirQWlSxJ3mgXrO5PqopH63Esd9ynCbFQM1q_3_wgkYvTeGS9XK6G63_Ag3N9dCHsO_bCJToJT4jeHQCSQ83cb1U5Qpe_7EWbw1ilzgyL-LBVrpH424dwK-4AoaL00W-gWzShSdOynjcoGeB7KE0pHbg-XhuaVribSodriSGybNdADBosnddVvZldY22-_97MqEuA&amp&c=4&amp&f=4&amp&ui=6003071106023-id_4e0b51323f9d01393198225&amp&en=1&amp&a=0&amp&sig=78154')
opener = urllib2.build_opener()
f = opener.open(request)
f.geturl()
I simply get my original url back. I encounter the same problem when I save cookies and use mechanize. Any help would be much appreciated! Thanks!
It looks like this is using Javascript to perform the redirect. You'll either have to figure out exactly how the Javascript is performing the redirects and pull out the appropriate urls, or you'll have to actually run the Javascript. As far as I know, running Javascript from python is not an easy task.
If you look at the contents of f.read() you'll see what's going on here. Instead of returning a 301 or 302 that redirects to the new URL, Facebook actually returns a real HTML document - which contains a piece of Javascript that uses document.location.replace to change the URL in the browser.
There's no easy way of replicating that with Python - the best thing to do is to parse the document with something like BeautifulSoup to find the Javascript, and somehow extract the new URL. It won't be pretty.
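A rough sketch of that extraction, using a plain regular expression instead of BeautifulSoup and assuming the new URL sits inside a document.location.replace("...") call with backslash-escaped slashes (I haven't verified the exact format Facebook uses):
import re
import urllib2
original_url = 'http://www.facebook.com/ajax/emu/end.php?eid=...'  # the long link from the question
html = urllib2.urlopen(original_url).read()
match = re.search(r'document\.location\.replace\("([^"]+)"\)', html)
if match:
    final_url = match.group(1).replace('\\/', '/')  # the JS string escapes its slashes
    print final_url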

Using Python to download a document that's not explicitly referenced in a URL

I wrote a web crawler in Python 2.6 using the Bing API that searches for certain documents and then downloads them for classification later. I've been using string methods and urllib.urlretrieve() to download results whose URL ends in .pdf, .ps etc., but I run into trouble when the document is 'hidden' behind a URL like:
http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En
So, two questions. Is there a way in general to tell if a URL has a pdf/doc etc. file that it's linking to if it's not doing so explicitly (e.g. www.domain.com/file.pdf)? Is there a way to get Python to snag that file?
Edit:
Thanks for replies, several of which suggest downloading the file to see if it's of the correct type. Only problem is... I don't know how to do that (see question #2, above). urlretrieve(<above url>) gives only an html file with an href containing that same url.
There's no way to tell from the URL what it's going to give you. Even if it ends in .pdf it could still give you HTML or anything it likes.
You could do a HEAD request and look at the content-type, which, if the server isn't lying to you, will tell you if it's a PDF.
Alternatively you can download it and then work out whether what you got is a PDF.
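A minimal sketch of the HEAD-request approach with urllib2 (overriding get_method is a common trick, since urllib2 has no built-in HEAD support):
import urllib2
class HeadRequest(urllib2.Request):
    def get_method(self):
        return 'HEAD'  # ask only for headers, not the body
url = 'http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En'
response = urllib2.urlopen(HeadRequest(url))
print response.info().gettype()  # e.g. 'application/pdf' or 'text/html'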
In this case, what you refer to as "a document that's not explicitly referenced in a URL" seems to be what is known as a "redirect". Basically, the server tells you that you have to get the document at another URL. Normally, python's urllib will automatically follow these redirects, so that you end up with the right file. (and - as others have already mentioned - you can check the response's mime-type header to see if it's a pdf).
However, the server in question is doing something strange here. You request the url, and it redirects you to another url. You request the other url, and it redirects you again... to the same url! And again... And again... At some point, urllib decides that this is enough already, and will stop following the redirect, to avoid getting caught in an endless loop.
So how come you are able to get the pdf when you use your browser? Because apparently, the server will only serve the pdf if you have cookies enabled. (why? you have to ask the people responsible for the server...) If you don't have the cookie, it will just keep redirecting you forever.
(check the urllib2 and cookielib modules to get support for cookies, this tutorial might help)
At least, that is what I think is causing the problem. I haven't actually tried doing it with cookies yet. It could also be that the server does not "want" to serve the pdf because it detects you are not using a "normal" browser (in which case you would probably need to fiddle with the User-Agent header), but that would be a strange way of doing it. So my guess is that it is using a "session cookie" somewhere, and if you haven't got one yet, it keeps on trying to redirect.
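If the session-cookie theory is right, something like this might break the loop (a sketch only; the User-Agent value is arbitrary and I haven't tried it against that server):
import cookielib
import urllib2
cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]  # in case the server also sniffs browsers
url = 'http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En'
response = opener.open(url)  # cookies set during the redirects are now remembered
with open('document.pdf', 'wb') as f:
    f.write(response.read())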
As has been said, there is no way to tell the content type from the URL. But if you don't mind fetching the headers for every URL, you can do this:
import urllib
obj = urllib.urlopen(URL)  # URL is the link you want to check
headers = obj.info()
if headers['Content-Type'].find('pdf') != -1:
    # we have a PDF file, download the whole thing
    ...
This way you won't have to download each URL, just its headers. It's still not exactly saving network traffic, but you won't get better than that.
Also you should use mime-types instead of my crude find('pdf').
No. It is impossible to tell what kind of resource is referenced by a URL just by looking at it. It is totally up to the server to decide what it gives you when you request a certain URL.
Check the mimetype with the info() method of the object returned by urllib.urlopen(). This might not be 100% accurate; it really depends on what the site returns as a Content-Type header. If it's well behaved it'll return the proper mime type.
A PDF should return application/pdf, but that may not be the case.
Otherwise you might just have to download it and try it.
You can't see it from the url directly. You could try to only download the header of the HTTP response and look for the Content-Type header. However, you have to trust the server on this - it could respond with a wrong Content-Type header not matching the data provided in the body.
To detect the file type in Python 3.x in a web app, given a URL to a file that may have no extension or a fake extension, you can use python-magic. Install it using
pip3 install python-magic
For Mac OS X, you should also install libmagic using
brew install libmagic
Code snippet
import magic
from urllib.request import Request, urlopen
url = "http://...url to the file ..."
request = Request(url)
response = urlopen(request)
mime_type = magic.from_buffer(response.read())  # libmagic inspects the downloaded bytes
print(mime_type)
