Django / Python: how to get the full request headers? [duplicate]

This question already has answers here:
How can I get all the request headers in Django?
(10 answers)
Closed 6 years ago.
I've been looking over what I can find about this and found something about denying access to specific user-agents, but couldn't find how to actually get the full request headers. I am trying to build a custom analytics app, so I'd like access to the full headers. Any info is appreciated.

All the headers are available in request.META. See the documentation.
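On Django 2.2 and later, `request.headers` also exposes the headers directly as a case-insensitive mapping. On older versions you can rebuild them from `request.META`, where most headers are stored under `HTTP_<NAME>` keys (with dashes turned into underscores), plus `CONTENT_TYPE` and `CONTENT_LENGTH` without the prefix. A minimal sketch, written as a plain function so it works on any `META`-style dict:

```python
def extract_headers(meta):
    """Rebuild HTTP header names from a Django request.META dict.

    Django stores most headers as HTTP_<NAME> (dashes become
    underscores); CONTENT_TYPE and CONTENT_LENGTH appear without
    the HTTP_ prefix. Non-header keys (SERVER_NAME, etc.) are skipped.
    """
    headers = {}
    for key, value in meta.items():
        if key.startswith("HTTP_"):
            headers[key[5:].replace("_", "-").title()] = value
        elif key in ("CONTENT_TYPE", "CONTENT_LENGTH"):
            headers[key.replace("_", "-").title()] = value
    return headers

# Inside a view you would call extract_headers(request.META) and,
# for example, log the result for the analytics app.
```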


Is it possible to override a request payload in Python? [duplicate]

This question already exists:
How to add/edit data in request-payload available in google chrome dev tools [duplicate]
Closed 3 years ago.
I've been looking for an answer to this for quite a while, with no results. I'm working with Selenium and I need to override one request, which is generated after the submit button is clicked. It contains data in JSON format under "Request Payload" in Chrome dev tools. I found something like selenium-wire, which provides functionality for request overrides, but I'm not sure it works the way I want. Can anyone give me a hint where to start, or which tools are appropriate for this?
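For reference, the JSON-rewriting part of this can be sketched as a plain function; the wiring shown in the comments assumes selenium-wire's documented `request_interceptor` hook (the path and field names are purely illustrative):

```python
import json

def rewrite_payload(body, overrides):
    """Decode a JSON request body, merge in the overrides, re-encode."""
    data = json.loads(body.decode("utf-8"))
    data.update(overrides)
    return json.dumps(data).encode("utf-8")

# With selenium-wire (assumption: its request_interceptor API), you
# would hook this in roughly like so:
#
#   def interceptor(request):
#       if request.method == "POST" and request.path == "/submit":
#           request.body = rewrite_payload(request.body, {"field": "new value"})
#           del request.headers["Content-Length"]  # force it to be recomputed
#
#   driver.request_interceptor = interceptor
```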

Getting the full content of a web page (using Python requests) [duplicate]

This question already has answers here:
Programmatic Python Browser with JavaScript
(8 answers)
Closed 4 years ago.
I am new to this subject, so my question may be naive; sorry in advance.
My challenge is to do web scraping, say for this page: link (google)
I try to scrape it using Python.
My problem is that once I use Python's requests.get, I don't seem to get the full content of the page. I guess this is because the page loads many resources and Python does not fetch them all. (More than that, once I scroll, more data is revealed in Chrome, yet I can see that this extra data is not present in the source I downloaded.)
How can I get the full content of a web page? What am I missing?
Thanks
requests.get will fetch the web page, but only what the server decides to give a robot. If you want the full page as you see it as a human, you may need to trick the server by changing your headers. If you need to scroll or click buttons in order to see the whole page, which is what I think you'll need to do here, I suggest you take a look at Selenium.
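A minimal sketch of the header trick (the header values are illustrative examples of what a desktop browser typically sends, not required values):

```python
# Headers that make the request look like it came from a desktop
# browser rather than a script (values are illustrative).
BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/96.0.4664.110 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

def fetch_as_browser(url):
    # requests is imported lazily so the header definitions above can
    # be reused even where the requests library is not installed.
    import requests
    return requests.get(url, headers=BROWSER_HEADERS, timeout=10)
```

Note that this only helps with content the server withholds from bots; content rendered by JavaScript after page load still requires a real browser such as Selenium.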

Django removes #anchor in urls [duplicate]

This question already has answers here:
How to identify an anchor in a url in Django?
(3 answers)
Closed 6 years ago.
I've got a URL set up like this in Python Django 1.9:
url(r'^faq/?$', views.faq, name="faq"),
However, if I go to a URL with an #anchor in it, the #anchor part is removed in all browsers.
So, localhost:5000/faq#12 always goes to localhost:5000/faq.
How do I get django to keep the #anchor section?
UPDATE:
I'm not trying to pass any data to the server. The FAQ page has a bunch of questions, each with a unique id; /faq#12 should take the browser directly to div#12. It's for the browser and doesn't have anything to do with the server side at all.
The anchor part of the URL is not sent to the server; it is only used on the client side.
Your URL config defines a slash at the end, so you have to use it in the URL as well:
http://localhost:5000/faq/#12
If you omit it, there will be a redirect that drops the anchor.
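You can see that the fragment is a purely client-side concept with the standard library's URL parser:

```python
from urllib.parse import urlsplit

url = "http://localhost:5000/faq#12"
parts = urlsplit(url)

# The fragment is parsed out separately; it is never part of the
# path the browser sends to the server.
print(parts.path)      # /faq
print(parts.fragment)  # 12
```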

How do I dynamically update a feed with Flask and AJAX? [duplicate]

This question already has answers here:
Server-Sent Events vs Polling
(2 answers)
Display data streamed from a Flask view as it updates
(3 answers)
Closed 4 years ago.
I have created a system where some information is stored in a database and then displayed on a webpage. This works fine, but I want to add a real-time page updater that adds new information to the webpage without a refresh, the way Twitter does with new tweets.
You can achieve this using Server-Sent Events (SSE), which is designed specifically for this purpose and is more efficient than an AJAX polling implementation (see this answer).
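A sketch of the server side: the message formatter below follows the SSE wire format (an optional `event:` line, a `data:` line, and a blank-line terminator); the Flask wiring in the comments is an assumption, with `fetch_new_rows` as a placeholder for your own database-polling code:

```python
def format_sse(data, event=None):
    """Format one message in the Server-Sent Events wire format."""
    msg = f"event: {event}\n" if event else ""
    msg += f"data: {data}\n\n"
    return msg

# In a Flask view (assumption: fetch_new_rows is your own code that
# yields new database rows as they appear):
#
#   from flask import Response
#
#   @app.route("/stream")
#   def stream():
#       def generate():
#           for row in fetch_new_rows():
#               yield format_sse(row, event="update")
#       return Response(generate(), mimetype="text/event-stream")
#
# On the client, an EventSource("/stream") listener appends each
# "update" event to the feed without a page refresh.
```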

Scraping Google [duplicate]

This question already has an answer here:
scrape google resultstats with python [closed]
(1 answer)
Closed 9 years ago.
I am attempting to scrape Google search results, as the results I receive using the API are not as useful as the results from the main site.
I am using the Python requests library to grab the search page. However, I am receiving an error:
Instant is off due to connection speed. Press Enter to search.
Is there any way I can disable instant search?
Thanks
There is already a search API for Python, which might save you some heartache:
https://developers.google.com/appengine/docs/python/search/
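If you do keep fetching the HTML results page, at least build the query URL with proper encoding. A small sketch; `q` is Google's standard query parameter, while `num` (results per page) is a long-standing but undocumented one, so treat it as an assumption:

```python
from urllib.parse import urlencode

def google_search_url(query, num=10):
    """Build a properly percent-encoded Google search URL.

    'q' is the query parameter; 'num' (results per page) is an
    undocumented parameter, included here as an assumption.
    """
    return "https://www.google.com/search?" + urlencode({"q": query, "num": num})
```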
