Check whether a URL is a file or a directory - Python

Hi there.
I have a list of URLs, and I don't know how to check whether a given address points to a file or to a directory.
Examples:
url = "http://example.com/path/to/file.html"
if '.' in url.split('/')[-1]:
    return True
but if the URL is
url = "http://example.com/path/domains/domain.com"
then domain.com is a directory, not a file. How can I detect that?
Checking the file extension is not good enough; maybe some headers would help? But I want to use as little network transfer as possible.
Edit:
I need to download a large number of links and map their paths onto locations in my operating system. E.g. for
example.com/path/to/file.html
create ~/Downloads/example.com/path/to/
and download file.html into it.
E.g.:
example.com/directory/
create ~/Downloads/example.com/directory/
next URL: example.com/directory/dir2
create ~/Downloads/example.com/directory/dir2
next URL: example.com/directory/file.html
download file.html into ~/Downloads/example.com/directory/,
and do not create a file.html directory.

In short, you can't. Accessing the URL http://example.com/path/domains/domain.com would send a 302 redirect (if I remember correctly) to http://example.com/path/domains/domain.com/ by default. There are no headers in the response that indicate whether a URL points to a directory. May I ask why you need to know this? I suppose you can add a slash to a URL and see what happens from there. That might get you the results you are looking for.
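A minimal sketch of that probing idea, assuming Python 3 and a server that redirects directory URLs to their trailing-slash form (it is only a heuristic, not a guarantee):
import urllib.request

def looks_like_directory(url):
    # URLs that already end with a slash are treated as directories.
    if url.endswith("/"):
        return True
    # A HEAD request keeps the transfer small: headers only, no body.
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        # urllib follows redirects, so geturl() is the final URL after any
        # "add the trailing slash" redirect the server may have performed.
        return response.geturl() == url + "/"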

On HTTP servers, there is no such thing as a "file" or a "directory". You just send a URI to the server, which identifies a specific resource; what that resource is depends on the server's configuration.
By default, most HTTP servers map URIs onto the files and directories of your system, but that can be configured (URL rewriting, ...).
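Putting the edit's requirement together with the probing heuristic sketched in the first answer, a rough Python 3 sketch of the path mapping might look like this (looks_like_directory() is the guess-work helper from above, and the whole thing assumes the server cooperates with the trailing-slash redirect; as noted, nothing guarantees that):
import os
import urllib.request
from urllib.parse import urlsplit

DOWNLOAD_ROOT = os.path.expanduser("~/Downloads")

def mirror(url):
    parts = urlsplit(url)
    local = os.path.join(DOWNLOAD_ROOT, parts.netloc, parts.path.lstrip("/"))
    if looks_like_directory(url):
        # e.g. example.com/directory/ -> create ~/Downloads/example.com/directory/
        os.makedirs(local, exist_ok=True)
    else:
        # e.g. example.com/path/to/file.html -> download into ~/Downloads/example.com/path/to/
        os.makedirs(os.path.dirname(local), exist_ok=True)
        urllib.request.urlretrieve(url, local)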

Related

SharePoint list files from under a URL

I have a simple task to do: the script reads a free-text string which will contain SharePoint URLs. Those URLs are provided by the users, basically a copy-paste from their browser. What my app has to do is go to those links and check whether there are any files under them.
So from what I can gather, there are many possible SharePoint URLs, for example:
<host>/sites/<site_name>/SitePages/something.aspx - for example a simple post
<host>/:w/r/sites/<site_name>/_layouts/15/something.aspx (like a shortcut URL) - for example a MS Office Word document
<host>/sites/<site_name>/<drive_name>/Forms/something.aspx?[...]&id=%2Fsites%2F<site-name>%2F<drive_name>%2F<path> - a URL to a file tree view of some files on a drive
<host>/:f:/r/sites/<site_name>/<drive_name>/<path_to_a_file>
The last one is perfect, because it contains the path to the directory in the url path. The 3rd one does have it as well, but in the urlencoded query params part.
What I do in this scenario is I parse the URL, extracting:
site name
drive name (not ID)
path (from the path in url or from the encoded &id= part)
Then, I can connect to SharePoint, get a site, list all the site drives (/drives), check if their "web_url" is a substring of my Sharepoint URL (I could search the appropriate drive by name, but the thing returned from the API is the "display name" and in my URL resides an "actual drive name"). Okay, so I've got my drive and now I can get my item by path. This all can be done via the regular MS Graph API (each step is needed for getting the object - site/drive ID) or via a python wrapper (I use python-o365).
As you can see, this is a real pain. Is there a standard way to deal with this? I mean, if I had the site and drive IDs, I could do it in a single API call, but given the fact that I only have a SharePoint link, I can't get those two, right? And how about the URL parsing?
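For what it's worth, a rough sketch of that flow against the raw Graph REST endpoints. The token acquisition and the parsing of the ":f:/r" and "&id=..." URL variants are left out, and every placeholder value below is an assumption, so treat this as a sketch rather than a complete implementation:
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access token obtained elsewhere>"        # placeholder
hostname = "contoso.sharepoint.com"                # placeholder
site_name = "my-site"                              # placeholder, parsed from the link
folder_path = "some/folder"                        # placeholder, path relative to the drive root
sharepoint_url = "https://contoso.sharepoint.com/sites/my-site/..."  # the pasted link

headers = {"Authorization": "Bearer " + token}

# 1. Resolve the site from its host name and site path.
site = requests.get(GRAPH + "/sites/" + hostname + ":/sites/" + site_name, headers=headers).json()

# 2. Find the drive whose webUrl is a prefix of the pasted link.
drives = requests.get(GRAPH + "/sites/" + site["id"] + "/drives", headers=headers).json()["value"]
drive = next(d for d in drives if sharepoint_url.startswith(d["webUrl"]))

# 3. List the children of the folder addressed by its path, and look for file items.
children = requests.get(
    GRAPH + "/drives/" + drive["id"] + "/root:/" + folder_path + ":/children",
    headers=headers,
).json()["value"]
has_files = any("file" in item for item in children)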

Using relative URLs

I do a simple web application written in Python using cherrypy and Mako. So, my question is also simple.
I have one page with the URL http://1.2.3.4/a/page_first. There is also an image available at the URL http://1.2.3.4/a/page_first/my_image.png. And I want to place my_image.png on page_first.
I added a tag <img src="my_image.png"/>, but it is not shown. I looked at web developer tools -> Network and saw that the request URL for the image was http://1.2.3.4/a/my_image.png, instead of http://1.2.3.4/a/page_first/my_image.png.
Why does it happen?
Thanks.
The page address needs to be http://1.2.3.4/a/page_first/ (with trailing slash).
ADDED:
You don't seem to understand relative URLs, so let me explain. When you reference an image like this <img src="my_image.png"/>, the image URL in the tag doesn't have any host/path info, so path is taken from the address of the HTML page that refers to the image. Since path is everything up to the last slash, in your case it is http://1.2.3.4/a/. So the full image URL that the browser will request becomes http://1.2.3.4/a/my_image.png.
You want it to be http://1.2.3.4/a/page_first/my_image.png, so the path part of the HTML page must be /a/page_first/.
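You can see the same resolution rule at work with urllib.parse.urljoin:
from urllib.parse import urljoin

# Without the trailing slash, page_first itself gets replaced:
urljoin("http://1.2.3.4/a/page_first", "my_image.png")
# -> 'http://1.2.3.4/a/my_image.png'

# With the trailing slash, the image resolves under page_first/:
urljoin("http://1.2.3.4/a/page_first/", "my_image.png")
# -> 'http://1.2.3.4/a/page_first/my_image.png'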
Note that the browser will not assume page_first is "a directory" just because it doesn't have an "extension", and will not add the trailing slash automatically. When you access a server publishing static dirs and files and specify a directory name for the path and omit the trailing slash (e.g. http://www.example.com/some/path/here), the server is able to determine that you actually request a directory, and it adds the slash (and usually also a default/index file name) for you. It's not generally the case with dynamic web sites where URLs are programmed.
So basically you need to explicitly include the trailing slash in your page path: dispatcher.connect('page','/a/:number_of_page/', controller=self, action='page_method') and always refer to it with the trailing slash (http://1.2.3.4/a/page_first/), otherwise the route will not be matched.
As a side note, usually you put the images and other static files into a dedicated dir and serve them either with CherryPy's static dir tool, or, if it's a high load site, with a dedicated server.
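A minimal sketch of that static-dir setup (the mount point and directory name are made up for the example):
import os
import cherrypy

class Root:
    @cherrypy.expose
    def index(self):
        return '<img src="/static/my_image.png"/>'

config = {
    "/static": {
        "tools.staticdir.on": True,
        "tools.staticdir.dir": os.path.abspath("static"),   # serve files from ./static
    }
}

cherrypy.quickstart(Root(), "/", config)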
Try <img src="/a/page_first/my_image.png"/>

Using python (urllib) to download a file, how to get the real filename?

So I finally managed to get my script to login to a website and download a file... however, in some instances I will have a url like "http://www.test.com/index.php?act=Attach&type=post&id=3345". Firefox finds the filename ok... so I should be able to.
I am unable to find the "Content-Disposition" header via something like remotefile.info()['Content-Disposition']
Also, remotefile.geturl() returns the same url.
What am I missing? How do I get the actual filename? I would prefer using the built-in libraries.
It is the task of the remote server/service to provide the Content-Disposition header.
There is nothing you can do about it unless the remote server/service is under your own control.
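That said, when the server does send the header you can read it. A small sketch with Python 3's built-in urllib, falling back to the last path segment of the final URL when Content-Disposition is missing:
import os
from urllib.parse import urlsplit
from urllib.request import urlopen

def guess_filename(url):
    with urlopen(url) as response:
        # HTTPMessage.get_filename() parses Content-Disposition when present.
        name = response.headers.get_filename()
        if name:
            return name
        # Otherwise fall back to the (possibly redirected) URL path.
        return os.path.basename(urlsplit(response.geturl()).path) or "download"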

How to download in Python

I am writing a script which will run on my server. Its purpose is to download the document. If any person hits the particular URL, he/she should be able to download the document. I am using urllib.urlretrieve, but it downloads the document on the server side, not on the client. How do I download on the client side in Python?
If the script runs on your server, its purpose is to serve a document, not to download it (the latter would be the urllib solution).
Depending on your needs you can:
Set up static file serving with e.g. Apache
Make the script execute on a certain URL (e.g. with mod_wsgi); the script should then set the Content-Type header (which gives the document type, such as "text/plain") and the Content-Disposition header (which gives the download filename) and send the document data (see the sketch below)
As your question is not more specific, this answer can't be either.
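A minimal WSGI sketch of that second option (the file path and names are placeholders; mount it with mod_wsgi or any other WSGI server):
DOCUMENT_PATH = "/srv/files/report.pdf"   # placeholder

def application(environ, start_response):
    with open(DOCUMENT_PATH, "rb") as f:
        data = f.read()
    headers = [
        ("Content-Type", "application/pdf"),                            # document type
        ("Content-Disposition", 'attachment; filename="report.pdf"'),   # download filename
        ("Content-Length", str(len(data))),
    ]
    start_response("200 OK", headers)
    return [data]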
Set the appropriate Content-type header, then send the file contents.
If the document is on your server and your intention is that the user should be able to download this file, couldn't you just serve the URL to that resource as a hyperlink in your HTML code? Sorry if I have been obtuse, but this seems the most logical step given your explanation.
You might want to take a look at the SocketServer module.

Using Python to download a document that's not explicitly referenced in a URL

I wrote a web crawler in Python 2.6 using the Bing API that searches for certain documents and then downloads them for classification later. I've been using string methods and urllib.urlretrieve() to download results whose URL ends in .pdf, .ps etc., but I run into trouble when the document is 'hidden' behind a URL like:
http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En
So, two questions. Is there a way in general to tell if a URL has a pdf/doc etc. file that it's linking to if it's not doing so explicitly (e.g. www.domain.com/file.pdf)? Is there a way to get Python to snag that file?
Edit:
Thanks for the replies, several of which suggest downloading the file to see if it's of the correct type. The only problem is... I don't know how to do that (see question #2, above). urlretrieve(<above url>) gives only an HTML file with an href containing that same URL.
There's no way to tell from the URL what it's going to give you. Even if it ends in .pdf it could still give you HTML or anything it likes.
You could do a HEAD request and look at the content-type, which, if the server isn't lying to you, will tell you if it's a PDF.
Alternatively you can download it and then work out whether what you got is a PDF.
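For example, with Python 3's urllib (the question itself targets 2.6, where httplib or urllib2 would play the same role):
import urllib.request

url = "http://www.example.com/some/document"   # placeholder
request = urllib.request.Request(url, method="HEAD")
with urllib.request.urlopen(request) as response:
    # Only headers are transferred; the server may of course still lie here.
    is_pdf = response.headers.get("Content-Type", "").startswith("application/pdf")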
In this case, what you refer to as "a document that's not explicitly referenced in a URL" seems to be what is known as a "redirect". Basically, the server tells you that you have to get the document at another URL. Normally, python's urllib will automatically follow these redirects, so that you end up with the right file. (and - as others have already mentioned - you can check the response's mime-type header to see if it's a pdf).
However, the server in question is doing something strange here. You request the url, and it redirects you to another url. You request the other url, and it redirects you again... to the same url! And again... And again... At some point, urllib decides that this is enough already, and will stop following the redirect, to avoid getting caught in an endless loop.
So how come you are able to get the pdf when you use your browser? Because apparently, the server will only serve the pdf if you have cookies enabled. (why? you have to ask the people responsible for the server...) If you don't have the cookie, it will just keep redirecting you forever.
(check the urllib2 and cookielib modules to get support for cookies, this tutorial might help)
At least, that is what I think is causing the problem. I haven't actually tried doing it with cookies yet. It could also be that the server does not "want" to serve the pdf because it detects you are not using a "normal" browser (in which case you would probably need to fiddle with the User-Agent header), but it would be a strange way of doing that. So my guess is that it is using a "session cookie" somewhere, and if you haven't got one yet, it keeps on trying to redirect.
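A sketch of that cookie-enabled request in the question's Python 2.6 idiom (untested against that particular server, so treat it as a guess):
import cookielib
import urllib2

url = "http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En"
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
# Cookies set during the redirects are now remembered between requests.
response = opener.open(url)
data = response.read()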
As has been said there is no way to tell content type from URL. But if you don't mind getting the headers for every URL you can do this:
import urllib

obj = urllib.urlopen(URL)
headers = obj.info()
if headers['Content-Type'].find('pdf') != -1:
    # we have a PDF file, download the whole thing
    ...
This way you won't have to download each URL, just its headers. It's still not exactly saving network traffic, but you won't get better than that.
Also you should use mime-types instead of my crude find('pdf').
No. It is impossible to tell what kind of resource is referenced by a URL just by looking at it. It is totally up to the server to decide what it gives you when you request a certain URL.
Check the MIME type via the .info() method of the object that urllib.urlopen() returns. This might not be 100% accurate; it really depends on what the site returns as a Content-Type header. If it's well behaved it'll return the proper MIME type.
A PDF should return application/pdf, but that may not be the case.
Otherwise you might just have to download it and try it.
You can't see it from the url directly. You could try to only download the header of the HTTP response and look for the Content-Type header. However, you have to trust the server on this - it could respond with a wrong Content-Type header not matching the data provided in the body.
To detect the file type in Python 3.x, in a web app, from a URL to a file that may have no extension or a fake extension, you can use python-magic. You should install python-magic, using
pip3 install python-magic
For Mac OS X, you should also install libmagic using
brew install libmagic
Code snippet
import magic
from urllib.request import Request, urlopen

url = "http://...url to the file ..."
request = Request(url)
response = urlopen(request)
# Read only the first couple of kilobytes; libmagic only needs the start of the file.
# mime=True makes python-magic return a MIME type instead of a textual description.
mime_type = magic.from_buffer(response.read(2048), mime=True)
print(mime_type)
