I have been learning and working with Python for a couple of days.
In my source code I need to read the contents of an HTTP page.
I am trying to scrape it using requests.get(httplink).
For debugging purposes I am trying to display all of the contents with print(response.text).
But I can see that a few of the strings are not displayed when I run the code.
My question is: will print(response.text) display all of the strings from the page? Please can someone answer, as I am not able to proceed further.
The text property of the Response object is just the content, decoded according to the headers. First of all, try checking the content field and look for the missing strings there. This is probably about the output (console), I presume :)
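A minimal sketch of that check, assuming httplink points at the page being scraped (the URL and the expected string below are placeholders):

import requests

response = requests.get("https://example.com/page")  # placeholder for httplink

# .text is .content decoded with the encoding requests inferred from the headers
print(response.encoding)      # e.g. 'ISO-8859-1' if the server sent no charset
print(len(response.content))  # raw bytes received
print(len(response.text))     # decoded string

# If an expected string is present in .content but missing from the printed
# output, the console (or the decoding), not requests, is dropping it.
expected = b"some expected string"  # hypothetical string you expect on the page
print(expected in response.content)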
Use case: I need to check whether JSON data at a URL has been updated, by checking its created_date field, which lies in the first few lines. The page's entire JSON data is huge, and I don't want to retrieve the whole page just to check the first few lines.
Currently, for both
x = feedparser.parse(url)
y = requests.get(url).text
# y.split("\n") etc.
the entire URL's data is retrieved and then parsed.
I want to do some sort of next(url), or read only the first 10 lines (chunks), rather than sending a request for the entire page's data, i.e. just scan for the created_date field and exit.
What can be utilized to solve this? Thanks for your knowledge, and apologies for the noob question.
Example URL: https://www.w3schools.com/xml/plant_catalog.xml
I want to stop reading the URL's data if the first PLANT object's LIGHT tag hasn't changed from 'Mostly Shady' (without needing to read/get the data below it).
The original poster stated that the solution below worked:
Instead of a GET request, you can try a HEAD request:
"The GET method requests a representation of the specified resource. Requests using GET should only retrieve data. The HEAD method asks for a response identical to a GET request, but without the response body."
This way, you don't need to request the entire JSON; it speeds up the exchange and is also more friendly to the hosting server!
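A minimal sketch of this approach, assuming the server exposes a Last-Modified or ETag header (not every server does; the values you compare against would come from your previous check):

import requests

url = "https://www.w3schools.com/xml/plant_catalog.xml"

# HEAD asks for the same headers a GET would return, but no body.
head = requests.head(url, allow_redirects=True)

# Compare these against the values saved from the previous run;
# if neither has changed, skip downloading the full document.
print(head.status_code)
print(head.headers.get("Last-Modified"))
print(head.headers.get("ETag"))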
So, I'm making a Python script that gets a webpage's content and compares it to a previously saved version to see whether the page has changed. I'm getting the raw content using this method:
import requests

def getcontent(url):
    # return the page body as a decoded string
    response = requests.get(url)
    return response.text
After that I do some cleaning up of the content, quote escaping and such, but that's irrelevant. The issue I keep running into is that the webpage contains some JavaScript code that generates a unique key, which my method downloads. Each time you grab the page content, the key is different, and I have zero idea what the key is for. The problem is that if the key is different, the new content and the saved content aren't identical.
How can I disable JavaScript from running when I request a webpage?
The token is generated server-side and can be used for various reasons (for example, a CSRF token).
The token will always be in the content of your response; no JavaScript is needed for that.
You should find a way to ignore or remove the token.
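A minimal sketch of stripping such a token before comparing, assuming it appears as the value of a hidden input named csrf_token (the field name and pattern are guesses, not taken from the actual page):

import re
import requests

# assumed layout of the token field; adjust to match the real markup
TOKEN_RE = re.compile(r'name="csrf_token"\s+value="[^"]*"')

def get_comparable_content(url):
    html = requests.get(url).text
    # Blank out the per-request token so two snapshots of an otherwise
    # unchanged page compare as equal.
    return TOKEN_RE.sub('name="csrf_token" value=""', html)

old = get_comparable_content("https://example.com/page")  # placeholder URL
new = get_comparable_content("https://example.com/page")
print(old == new)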
I am using Python 3.7.6 and the requests module (v2.22.0) to obtain some gaming data from a public game API. I've had no problems obtaining small amounts of data, but I have hit an issue with large data volumes.
Using the following URL in a browser: https://api.guildwars2.com/v2/commerce/listings/19683,
I get a nice JSON list showing the order book for an in-game commodity. It's a rather long list.
When attempting to capture the data using Python and the requests module, the retrieved data is truncated. It appears that requests is not returning the entire data set.
Code used:
import json
import requests

URL = 'https://api.guildwars2.com/v2/commerce/listings/19683'
response = requests.get(url=URL, params=None)

# Inspected interactively; each of these just displays its value in the REPL:
response.status_code
response.headers
response.content
response.text
response.json()

jsonApiData = response.json()
When I inspect the data from the response.content and response.text lines, it's obvious that the data has been truncated. It's very visible when viewing the JSON object from response.json(). After copying the data from the browser into Atom and comparing the character counts with response.json(), it looks like the response was truncated or not fully parsed.
I've looked in the requests docs (https://requests.readthedocs.io/en/master/) for a maximum character count and found no such limitation.
I'm still fairly new to JSON and requests, but I have not run into anything like this even when downloading hundreds of MB of data as images from other APIs.
Has anyone dealt with a situation like this? How can I retrieve the entire dataset?
Thank you for your help.
The data could have been truncated in Atom.
I'd say count the dictionary keys. That would give you a better idea of whether the actual data is being truncated.
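A minimal sketch of that kind of check, comparing sizes programmatically instead of eyeballing the printed output (no particular keys of this endpoint's JSON are assumed):

import requests

URL = 'https://api.guildwars2.com/v2/commerce/listings/19683'
data = requests.get(URL).json()

# Compare these counts (and the raw byte length) against the browser copy.
print(type(data), len(data))
if isinstance(data, dict):
    for key, value in data.items():
        if isinstance(value, (list, dict, str)):
            print(key, len(value))  # number of entries / characters
        else:
            print(key, value)       # scalar values printed as-is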
The problem here is the IPython interactive buffer in VS Code. It has a max character limit that I was surpassing. The funny thing is that, since I do all of my coding and debugging in VS Code, the buffer issue was affecting both sides, coding and debugging, making it look like the data itself was affected.
Once I open the JSON in other tools, I can see the full data set. Guess I should get out of VS Code every now and then lolz.
I may post a feature request in Microsoft's GitHub repo for this. There should be a visual indicator showing that data is being hidden (much like the ellipses in pandas DataFrames). As it is, the data is just cut off with a hard transition.
I have an HTTP response from website A, and I need to change all the link URLs in this response to the URL of website B, so that when users receive this response in their browser and click on a link, they will be directed to website B, not A.
I'm using Python and Django. Is there a package or tool that can do this?
Thanks in advance.
Depending upon the nature of the response you get from website A, what you want to do with it, and how important it is that the replacement be efficient, there are a few possible approaches. I'm not 100% clear on your situation and what you want to achieve.
If the links in the response from website A start with website A's hostname, just get the response as a string and do response = response.replace('http://website-a.com', 'http://website-b.com') before you present the response to the user.
If the response is HTML and the links are relative, the easiest solution to code would probably be to use lxml's rewrite_links (see http://lxml.de/lxmlhtml.html#working-with-links). I suspect this is what you're looking for; a sketch follows below.
If you've got some other situation, well, then I don't know what's appropriate. Maybe a regex, maybe a custom algorithm of your own design. It depends on what kind of content you're getting back from website A, how you can recognise links in it, and how you want to change them.
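A minimal sketch of the lxml approach, assuming the response body is HTML and using website-a.com / website-b.com as placeholder hostnames:

import lxml.html

def redirect_links(html):
    doc = lxml.html.fromstring(html)

    def swap_host(link):
        # rewrite_links calls this for every link found in the document
        return link.replace('http://website-a.com', 'http://website-b.com')

    doc.rewrite_links(swap_host)
    return lxml.html.tostring(doc, encoding='unicode')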
If you use Apache as your web server, you could use a module to replace text in the response, such as http://mod-replace.sourceforge.net/. This seems more reasonable than invoking Perl or Python for every request. But you have to be aware that all matching text might be replaced, not only the links, which can have unwanted side effects. Therefore this would be a very dirty solution.
I wrote a web crawler in Python 2.6 using the Bing API that searches for certain documents and then downloads them for classification later. I've been using string methods and urllib.urlretrieve() to download results whose URL ends in .pdf, .ps etc., but I run into trouble when the document is 'hidden' behind a URL like:
http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En
So, two questions. Is there a way, in general, to tell whether a URL links to a PDF/DOC etc. file when it doesn't do so explicitly (e.g. www.domain.com/file.pdf)? And is there a way to get Python to snag that file?
Edit:
Thanks for the replies, several of which suggest downloading the file to see whether it's of the correct type. The only problem is... I don't know how to do that (see question #2 above). urlretrieve(<above url>) gives only an HTML file with an href containing that same URL.
There's no way to tell from the URL what it's going to give you. Even if it ends in .pdf it could still give you HTML or anything it likes.
You could do a HEAD request and look at the Content-Type header, which, if the server isn't lying to you, will tell you whether it's a PDF.
Alternatively you can download it and then work out whether what you got is a PDF.
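A minimal sketch of the HEAD check, written with the requests library for brevity rather than the urllib the original crawler uses; whether this particular OECD URL cooperates with a plain HEAD is a separate issue, discussed in the next answer:

import requests

url = ('http://www.oecd.org/officialdocuments/displaydocument/'
       '?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En')

# Ask for the headers only, following any redirects.
head = requests.head(url, allow_redirects=True)
content_type = head.headers.get('Content-Type', '')

if 'application/pdf' in content_type:
    # Only now pay for the full download.
    pdf_bytes = requests.get(url).content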
In this case, what you refer to as "a document that's not explicitly referenced in a URL" seems to be what is known as a "redirect". Basically, the server tells you that you have to get the document at another URL. Normally, Python's urllib will automatically follow these redirects, so that you end up with the right file (and, as others have already mentioned, you can check the response's MIME-type header to see whether it's a PDF).
However, the server in question is doing something strange here. You request the URL, and it redirects you to another URL. You request the other URL, and it redirects you again... to the same URL! And again... And again... At some point, urllib decides that this is enough already and stops following the redirects, to avoid getting caught in an endless loop.
So how come you are able to get the PDF when you use your browser? Because apparently the server will only serve the PDF if you have cookies enabled. (Why? You'd have to ask the people responsible for the server...) If you don't have the cookie, it will just keep redirecting you forever.
(Check the urllib2 and cookielib modules to get support for cookies; this tutorial might help.)
At least, that is what I think is causing the problem; I haven't actually tried it with cookies yet. It could also be that the server does not "want" to serve the PDF because it detects that you are not using a "normal" browser (in which case you would probably need to fiddle with the User-Agent header), but that would be a strange way of doing it. So my guess is that it is using a "session cookie" somewhere, and if you haven't got one yet, it keeps on trying to redirect.
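A minimal sketch of the cookie-enabled variant this answer suggests, using urllib2 and cookielib as in Python 2.6 (untested against this particular server):

import cookielib
import urllib2

# An opener that stores any cookie the server sets and sends it back
# on the follow-up redirects.
cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))

url = ('http://www.oecd.org/officialdocuments/displaydocument/'
       '?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En')

response = opener.open(url)
if 'pdf' in response.info().get('Content-Type', ''):
    with open('document.pdf', 'wb') as f:
        f.write(response.read())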
As has been said, there is no way to tell the content type from the URL. But if you don't mind getting the headers for every URL, you can do this:
import urllib

obj = urllib.urlopen(URL)
headers = obj.info()
if headers['Content-Type'].find('pdf') != -1:
    # we have a PDF file, download the whole thing
    ...
This way you won't have to download each URL, just its headers. It's still not exactly saving network traffic, but you won't get better than that.
Also, you should use MIME types instead of my crude find('pdf').
No. It is impossible to tell what kind of resource a URL references just by looking at it. It is totally up to the server to decide what it gives you when you request a certain URL.
Check the MIME type with the info() method of the object returned by urllib.urlopen(). This might not be 100% accurate; it really depends on what the site returns in its Content-Type header. If it's well behaved, it'll return the proper MIME type.
A PDF should return application/pdf, but that may not be the case.
Otherwise you might just have to download it and try it.
You can't see it from the URL directly. You could try to download only the headers of the HTTP response and look at the Content-Type header. However, you have to trust the server on this; it could respond with a wrong Content-Type header that doesn't match the data in the body.
To detect the file type in Python 3.x in a web app, given a URL to a file that may have no extension or a fake extension, you can use python-magic. Install it using
pip3 install python-magic
For Mac OS X, you should also install libmagic using
brew install libmagic
Code snippet:
import magic
from urllib.request import Request, urlopen

url = "http://...url to the file ..."

request = Request(url)
response = urlopen(request)

# The first couple of kilobytes are enough for libmagic to identify the type;
# mime=True returns a MIME string such as 'application/pdf' instead of a
# human-readable description.
mime_type = magic.from_buffer(response.read(2048), mime=True)
print(mime_type)