I'm trying to connect to a torrent tracker to receive a list of peers for the BitTorrent protocol, but I am having trouble forming the proper GET request.
As far as I understand, I must obtain the 20 byte SHA1 hash of the bencoded 'info' section from the .torrent file. I use the following code:
h = hashlib.new('sha1')
h.update(bencode.bencode(meta_dict['info']))
info_hash = h.digest()
This is where I am stuck: I cannot figure out how to create the properly url-encoded info_hash to put into a URL string as a parameter. I believe it involves some combination of urllib.urlencode and urllib.quote, but my attempts have not worked so far.
A bit late, but this might help someone. The requests module encodes the URL by itself. First create a dictionary with the parameters (info_hash, peer_id, etc.), then you only have to do a GET request:
response = requests.get(tracker_url, params=params)
I think that urllib.quote_plus() is all you need.
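On Python 2, urllib.quote / urllib.quote_plus work on the raw digest string directly. On Python 3 the digest is bytes, and urllib.parse.quote_from_bytes and urlencode handle it. A sketch with a placeholder digest (the real one comes from hashlib.sha1(bencode(meta_dict['info'])).digest(); the peer_id and tracker host below are made up):

```python
from urllib.parse import quote_from_bytes, urlencode

# Placeholder 20-byte digest standing in for the real info_hash.
info_hash = bytes(range(20))

# Percent-encode the raw digest; safe='' so '/' is escaped too.
encoded = quote_from_bytes(info_hash, safe='')
print(encoded)  # '%00%01%02...%13', one %XX escape per byte

# urlencode percent-encodes bytes values the same way for a whole
# parameter dict, which is what the announce URL needs.
params = urlencode({
    'info_hash': info_hash,
    'peer_id': b'-PY0001-123456789012',  # made-up client id
    'port': 6881,
    'uploaded': 0,
    'downloaded': 0,
    'left': 0,
    'compact': 1,
})
announce_url = 'http://tracker.example/announce?' + params
```

This is also essentially what requests does under the hood when you pass the same dictionary as params.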
I am learning about bittorrent protocols and have a question I'm not too sure about.
According to BEP009,
magnet URI format
The magnet URI format is:
v1: magnet:?xt=urn:btih:info-hash&dn=name&tr=tracker-url
v2: magnet:?xt=urn:btmh:tagged-info-hash&dn=name&tr=tracker-url
info-hash Is the info-hash hex encoded, for a total of 40 characters. For compatibility with existing links in the wild, clients should also support the 32 character base32 encoded info-hash.
tagged-info-hash Is the multihash formatted, hex encoded full infohash for torrents in the new metadata format. 'btmh' and 'btih' exact topics may exist in the same magnet if they describe the same hybrid torrent.
example magnet link: magnet:?xt=urn:btih:407AEA6F3D7DC846879449B24CA3F57DB280DE5C&dn=ubuntu-educationpack_14+04_all&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fexplodie.org%3A6969
Correct me if I'm wrong, but urn:btih:407AEA6F3D7DC846879449B24CA3F57DB280DE5C is the info-hash from the magnet link, and I will need to decode it to be able to obtain bencoded metadata such as listed in BEP015; things such as: downloaded, left, uploaded, event, etc.
My question is, how do I decode this in python?
The info-hash in Magnet Link is the same as the info-hash required for a UDP Tracker (20-bytes SHA-1 hash of bencoded "info" dictionary of a torrent).
Additionally, a UDP Tracker doesn't use bencoded data at all, just bytes!
Bencoded format is used by HTTP/HTTPs trackers though.
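To get from the textual info-hash in a magnet link back to those 20 raw bytes, decode it as hex (40 characters) or base32 (32 characters, older links). A sketch using the hash from the question:

```python
import base64
import binascii

def infohash_bytes(text):
    """Decode a btih info-hash string (hex or base32) to 20 raw bytes."""
    if len(text) == 40:      # hex encoded
        return binascii.unhexlify(text)
    if len(text) == 32:      # base32 encoded (older links in the wild)
        return base64.b32decode(text)
    raise ValueError("unexpected info-hash length: %d" % len(text))

raw = infohash_bytes("407AEA6F3D7DC846879449B24CA3F57DB280DE5C")
print(len(raw))  # 20
```

Those 20 bytes are what goes into the tracker announce, whether UDP (raw bytes in the packet) or HTTP (percent-encoded in the URL).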
You can look at some open source code like libtorrent. It's written in C++, so you need to read the bdecode and bencode parts. They are not complex, and then you can write the Python code yourself.
Correct me if i'm wrong, but
urn:btih:407AEA6F3D7DC846879449B24CA3F57DB280DE5C is the info-hash
from the magnet link, and i will need to decode it to be able to
obtain a bencoded metadata such as listed in BEP015. Things such as:
downloaded, left, uploaded, event, etc.
The infohash is a unique SHA-1 hash that identifies a torrent. It cannot be decoded to obtain any further information; it's just an identifier. The fields you mention (downloaded, left, uploaded, event) are values you report to the tracker in the announce, not data hidden inside the hash. Furthermore, if you think about it, the link would constantly need to change if it contained that information.
You must use this infohash in the announce request to a tracker. The purpose of the announce request is to let the tracker know that you are downloading the particular hash, how far along you are and to provide you with peers the tracker knows about.
In your example there are two UDP trackers:
tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fexplodie.org%3A6969
After URL decoding these, they become:
tr=udp://tracker.opentrackr.org:1337/announce&tr=udp://explodie.org:6969
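The standard library can do that extraction and decoding in one go; a sketch using the magnet link from the question:

```python
from urllib.parse import urlparse, parse_qs

magnet = ("magnet:?xt=urn:btih:407AEA6F3D7DC846879449B24CA3F57DB280DE5C"
          "&dn=ubuntu-educationpack_14+04_all"
          "&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce"
          "&tr=udp%3A%2F%2Fexplodie.org%3A6969")

# parse_qs collects repeated keys into lists and URL-decodes the values.
params = parse_qs(urlparse(magnet).query)
print(params['xt'])  # ['urn:btih:407AEA6F3D7DC846879449B24CA3F57DB280DE5C']
print(params['tr'])  # ['udp://tracker.opentrackr.org:1337/announce', 'udp://explodie.org:6969']
```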
So, these are the trackers you must send your announce request to by implementing https://libtorrent.org/udp_tracker_protocol.html
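As a concrete first step, the opening message of that protocol is a 16-byte connect request: the magic protocol id 0x41727101980, action 0 (connect), and a random transaction id, all packed big-endian (per BEP 15). A sketch:

```python
import random
import struct

def build_connect_request():
    """Build the 16-byte UDP tracker connect request (BEP 15)."""
    protocol_id = 0x41727101980          # magic constant from the spec
    action = 0                           # 0 = connect
    transaction_id = random.getrandbits(32)
    packet = struct.pack(">QLL", protocol_id, action, transaction_id)
    return packet, transaction_id

packet, tid = build_connect_request()
print(len(packet))  # 16
```

The tracker's connect response carries a connection_id, which is then used in the subsequent announce packet together with the 20-byte info-hash.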
Note that this does not give you any information about the torrent file; for that you need to implement BEP-9.
I am a python newbie. I am currently doing basic web-scraping. On browsing through several GitHub projects, I found one that lets the user download an srt file.
Here's the doubt. Suppose the url is like this:
http://www.opensubtitles.org/en/subtitles/6528547/silicon-valley-the-lady-bs
How do I get the value 6528547 out of the URL? On a side note, I'd appreciate tips on how to get started working with APIs.
Assuming that you have the URL and just want to get the "hash", the easiest way is to split it using '/' as the separator and then take the element at index 5 of the resulting list.
url = "" #suppose you have the url here
hash = url.split('/')[5]
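A slightly more robust variant (a sketch, assuming Python 3) splits just the path component, so the index does not depend on counting the scheme and host pieces:

```python
from urllib.parse import urlparse

url = "http://www.opensubtitles.org/en/subtitles/6528547/silicon-valley-the-lady-bs"
# urlparse(url).path is '/en/subtitles/6528547/silicon-valley-the-lady-bs'
parts = urlparse(url).path.split('/')
subtitle_id = parts[3]
print(subtitle_id)  # 6528547
```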
I am building a Django-based website, and am having trouble figuring out a decent way to email some larger PDFs and such to my users.
The files in question never touch our servers; they're handled on a CDN. So, my starting point is with the unique URLs for the files, not with the files themselves. It would be nice to find a solution that doesn't involve saving the files locally.
In order for me to be able to send the email in the way I want (with the PDF/DOCX/whatever attached to it), I need to be able to encode the attachment as a base-64 string.
I would prefer not to save the file to our server; I would also prefer not to read a response object in chunks and write it plainly to a file on our server, then encode that file.
That said, given a direct url to a file is there a way to stream the response and encode it in base64 as it comes in?
I have been reading about Django's StreamingHttpResponse and FileWrapper and feel like I am close, but I'm not able to put it together just yet.
Edit: the snippet below is working for now, but I'm worried about memory usage - how well would something like this scale?
import base64
req = requests.get('url')
encoded = base64.b64encode(req.content)
Thanks to beetea I am comfortable implementing the simple:
import base64
req = requests.get('url')
encoded = base64.b64encode(req.content)
As the solution to this issue.
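If the attachments get big enough that holding req.content in memory becomes a worry, base64 can also be computed incrementally, because any prefix whose length is a multiple of 3 encodes independently of the rest. A sketch; the requests streaming call shown in the comment is an assumption, not tested here:

```python
import base64

def b64encode_chunks(chunks):
    """Base64-encode an iterable of byte chunks incrementally.

    Buffers a 0-2 byte remainder between chunks so that padding only
    appears at the very end, regardless of chunk sizes.
    """
    encoded = bytearray()
    buf = b""
    for chunk in chunks:
        buf += chunk
        usable = len(buf) - (len(buf) % 3)   # largest multiple of 3
        encoded += base64.b64encode(buf[:usable])
        buf = buf[usable:]
    encoded += base64.b64encode(buf)         # flush remainder, adds '='
    return bytes(encoded)

# With requests this might look like (assumption):
#   r = requests.get(url, stream=True)
#   encoded = b64encode_chunks(r.iter_content(chunk_size=3 * 1024))

print(b64encode_chunks([b"abc", b"def"]))  # b'YWJjZGVm'
```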
I'm currently looking to put together a quick script using Python 2.x to obtain the MD5 hash value of a number of images and movies on specific websites. I have noted on the w3.org website that HTTP/1.1 offers an optional Content-MD5 entity header, but I'm wondering if this has to be set by the website admin. My script is as below:
import httplib
c = httplib.HTTPConnection("www.samplesite.com")
c.request("HEAD", "/sampleimage.jpg")
r = c.getresponse()
res = r.getheaders()
print res
I have a feeling I need to edit 'HEAD' or possibly r.getheaders but I'm just not sure what to replace them with.
Any suggestions? As said, I'm just looking to point at an image and to then capture the MD5 hash value of the said image / movie. Ideally I don't want to have to download the image / movie to save bandwidth hence why I'm trying to do it this way.
Thanks in advance
Yes, the Content-MD5 header has to be set by the server, and it's rare that servers actually include it in responses. You can check for it, but in most cases you'll actually need to download the video or image, unfortunately.
(At least hashlib is simple!)
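When you do have to download, you can at least stream the body so the whole file never sits in memory. A sketch; the chunked HTTP read shown in the comment is the assumed way to feed it:

```python
import hashlib

def md5_of_chunks(chunks):
    """Compute an MD5 hex digest from an iterable of byte chunks."""
    h = hashlib.md5()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

# With urllib2 (or urllib.request on Python 3) this might look like:
#   resp = urlopen("http://www.samplesite.com/sampleimage.jpg")
#   digest = md5_of_chunks(iter(lambda: resp.read(8192), b""))

print(md5_of_chunks([b"hello ", b"world"]))  # 5eb63bbbe01eeed093cb22bb8f5acdc3
```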
I wrote a web crawler in Python 2.6 using the Bing API that searches for certain documents and then downloads them for classification later. I've been using string methods and urllib.urlretrieve() to download results whose URL ends in .pdf, .ps etc., but I run into trouble when the document is 'hidden' behind a URL like:
http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En
So, two questions. Is there a way in general to tell if a URL has a pdf/doc etc. file that it's linking to if it's not doing so explicitly (e.g. www.domain.com/file.pdf)? Is there a way to get Python to snag that file?
Edit:
Thanks for replies, several of which suggest downloading the file to see if it's of the correct type. Only problem is... I don't know how to do that (see question #2, above). urlretrieve(<above url>) gives only an html file with an href containing that same url.
There's no way to tell from the URL what it's going to give you. Even if it ends in .pdf it could still give you HTML or anything it likes.
You could do a HEAD request and look at the content-type, which, if the server isn't lying to you, will tell you if it's a PDF.
Alternatively you can download it and then work out whether what you got is a PDF.
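When checking the Content-Type, note that the header often carries parameters ('application/pdf; name=...'), so comparing the bare media type is safer than a substring match. A small helper; the HEAD request in the comment is an assumed usage, not executed here:

```python
def is_pdf_content_type(content_type):
    """True if a Content-Type header value names a PDF."""
    media_type = content_type.split(';')[0].strip().lower()
    return media_type == 'application/pdf'

# Obtaining the header with a HEAD request might look like (assumption):
#   import urllib.request
#   req = urllib.request.Request(url, method='HEAD')
#   ct = urllib.request.urlopen(req).headers.get('Content-Type', '')

print(is_pdf_content_type('application/pdf; name="report.pdf"'))  # True
```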
In this case, what you refer to as "a document that's not explicitly referenced in a URL" seems to be what is known as a "redirect". Basically, the server tells you that you have to get the document at another URL. Normally, python's urllib will automatically follow these redirects, so that you end up with the right file. (and - as others have already mentioned - you can check the response's mime-type header to see if it's a pdf).
However, the server in question is doing something strange here. You request the url, and it redirects you to another url. You request the other url, and it redirects you again... to the same url! And again... And again... At some point, urllib decides that this is enough already, and will stop following the redirect, to avoid getting caught in an endless loop.
So how come you are able to get the pdf when you use your browser? Because apparently, the server will only serve the pdf if you have cookies enabled. (why? you have to ask the people responsible for the server...) If you don't have the cookie, it will just keep redirecting you forever.
(check the urllib2 and cookielib modules to get support for cookies)
At least, that is what I think is causing the problem; I haven't actually tried doing it with cookies yet. It could also be that the server does not "want" to serve the pdf because it detects you are not using a "normal" browser (in which case you would probably need to fiddle with the User-Agent header), but that would be a strange way of doing it. So my guess is that it is using a "session cookie" somewhere, and if you haven't got one yet, it keeps on redirecting.
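For reference, a cookie-aware opener looks roughly like this (urllib2 plus cookielib on Python 2; the sketch below uses the Python 3 equivalents, and the URL in the comment is a placeholder):

```python
import http.cookiejar
import urllib.request

# The jar stores any Set-Cookie from the first redirect response and
# sends it back on follow-up requests, which should break the loop.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar)
)

# Usage (placeholder URL, not fetched here):
#   data = opener.open("http://www.oecd.org/officialdocuments/...").read()
```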
As has been said there is no way to tell content type from URL. But if you don't mind getting the headers for every URL you can do this:
obj = urllib.urlopen(URL)
headers = obj.info()
if headers['Content-Type'].find('pdf') != -1:
    # we have a pdf file, download the whole thing
    ...
This way you won't have to download each file, just its headers. It's still not exactly saving network traffic, but you won't get better than that. Also, you should compare proper MIME types instead of my crude find('pdf').
No. It is impossible to tell what kind of resource is referenced by a URL just by looking at it. It is totally up to the server to decide what it gives you when you request a certain URL.
Check the MIME type via the info() method of the object returned by urllib.urlopen(). This might not be 100% accurate; it really depends on what the site returns as a Content-Type header. If it's well behaved it'll return the proper MIME type.
A PDF should return application/pdf, but that may not be the case.
Otherwise you might just have to download it and try it.
You can't see it from the url directly. You could try to only download the header of the HTTP response and look for the Content-Type header. However, you have to trust the server on this - it could respond with a wrong Content-Type header not matching the data provided in the body.
To detect the file type in Python 3.x in a web app, given a URL to a file that might have no extension or a fake extension, you can use python-magic. Install it with
pip3 install python-magic
For Mac OS X, you should also install libmagic using
brew install libmagic
Code snippet
import magic
from urllib.request import Request, urlopen

url = "http://...url to the file ..."
request = Request(url)
response = urlopen(request)
# magic only needs the first bytes of the file, so avoid reading it all;
# mime=True returns a MIME type instead of a textual description
mime_type = magic.from_buffer(response.read(2048), mime=True)
print(mime_type)