jquery.get not doing an xhr request on Firefox - python

I have just entered the world of jQuery and am pretty new to JavaScript too. I have a small JavaScript snippet like the one below:
<script type="text/javascript">
$(function(){
    $('a').click(function(event){
        event.preventDefault();
        $.get('/_add_navigation_', function(response){
            $('#themaincontents').html(response);
        });
    });
});
</script>
The HTML looks like this:
<a href="#">CLICK Me</a>
<div id="themaincontents"></div>
On the server side I do an XHR header check with something like:
if request.is_xhr:
    # send the response
else:
    # redirect somewhere
Now while this code works fine in Chrome and Opera, in Firefox it behaves a little oddly. The server does not send back the response, but instead does the redirect. That means it sees no XHR header. Why should this happen in Firefox while it works fine in the other two browsers?
(I am using Firefox 3.6.12)
Update - I just had a look at the request headers in Firefox and I find no X-Requested-With: XMLHttpRequest header, but it is present in Chrome.

Not all browsers send the same headers, and you cannot rely on them being consistent across browsers. The easiest way is to not rely on the browser to send something, but to send something yourself:
$.get('url', {
    xhr: 'yes' // add this extra parameter here
}, function(){
});
then check for that GET variable on the server instead of a header that may or may not be sent by a browser.
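For the server side of that check, a minimal Flask-style sketch (Flask is assumed here because the question uses request.is_xhr; the route and fallback are placeholders):
from flask import Flask, request, redirect

app = Flask(__name__)

@app.route('/_add_navigation_')
def add_navigation():
    # Trust the explicit GET parameter instead of the unreliable
    # X-Requested-With header.
    if request.args.get('xhr') == 'yes':
        return 'navigation markup'  # whatever this view normally returns
    return redirect('/')  # placeholder fallback for non-AJAX access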


Do browsers have an address for receiving cookies from the sites, and how do websites know that I visited them?

1 - How do websites send cookies to the browser?
2 - How does the website know the browser address to send it cookies?
3 - How do websites detect visits?
4 - How does the browser send cookies back to the website?
Cookies aren't just 'sent' once; they can also be set later, in JavaScript for example (through an API or user actions), but normally this is done on the first load.
1. Cookies are set in HTTP headers.
You receive these when you first load the page. You can inspect this in the "Network" tab when you press F12 in your browser: click on an item and check out its headers.
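They look something like this (a made-up Set-Cookie response header):
Set-Cookie: sessionid=38afes7a8; Path=/; HttpOnly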
2. They are sent along with the document.
HTTP isn't a stream; to keep it simple, it's just UTF-8 text. The server doesn't need a separate address for the browser: the cookie rides along in the response to a request the browser itself made.
3. Whenever you visit a website, you send a request to the server.
When the server receives the request (along with headers, and extra data if you submit a form), it can perform some logic with it, extract data from the request headers and body, or even set a cookie that tells it you've visited!
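Anyway, here is an example of what a request looks like (illustrative host and path):
GET /index.html HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Accept: text/html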
4. Whenever you send that request, the headers, which include your cookies, get sent along with it.
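They look something like this (illustrative values):
GET /profile HTTP/1.1
Host: www.example.com
Cookie: sessionid=38afes7a8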

Simulate CSRF Attack on Login Required URL

I want to simulate/demo a CSRF attack. With Django, I set up a site (a.com) with one URL (/foo/) that does not enable the anti-CSRF protection.
First I made this URL accessible without login. I can see the POST request (see code below) succeed. (Actually, I ran into a missing Access-Control-Allow-Origin issue, so the document's #main content is not updated, but the POST completes with a 200 status, i.e. the money is transferred to my account :).
So I did a successful CSRF attack.
Then I wrapped a.com/foo/ in login_required(), which means you first need to log in on site a.com before you can do anything on the a.com/foo/ page.
1) I logged in to a.com in one Chrome tab.
2) Then I opened another tab in the same Chrome window and opened localhost:8000 (see below).
When I run the JS function, the POST now gets a 302 status (redirecting to the login page of a.com instead of the 200 OK status it got before).
Why? And if this is the case, how are CSRF attacks useful/possible against login-required pages on another domain?
JS code to make the fake request (from localhost:8000 etc., a different domain):
function hack_it() {
    var http = new XMLHttpRequest();
    var url = 'https://a.com/foo/';
    var params = 'name=hacker&amount=200';
    http.open('POST', url, true);

    // Send the proper header information along with the request
    http.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');

    http.onreadystatechange = function() { // Call a function when the state changes.
        if (http.readyState == 4 && http.status == 200) {
            document.getElementById("main").innerHTML = http.responseText;
        }
    };
    http.send(params);
}
Yes, CSRF attacks are possible for pages which require login.
At the moment, your AJAX request is not sending the session cookie, so you are redirected to the login page.
For your demo attack to work, you need to set withCredentials to true.
var http = new XMLHttpRequest();
http.withCredentials = true;
...
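Note that withCredentials makes the browser attach the a.com session cookie to the cross-site request, so the POST executes on the server even though, without Access-Control-Allow-Credentials on the response, your page still cannot read the result. That is exactly why CSRF protection matters even on login-required pages.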

Phantomjs through selenium in python

I am trying to test a webpage's behaviour in response to requests from different referrers. This is what I am doing so far:
webdriver.DesiredCapabilities.PHANTOMJS['phantomjs.page.customHeaders.referer'] = referer
The problem is that the webpage makes AJAX requests which change some things in the HTML, and those AJAX requests should have the webpage itself as referer, not the referer I gave at the start. It seems that the referer is set once at the start, and every subsequent request, be it AJAX, image, or anchor, takes that same referer and never changes it no matter how deep you browse. Is there a solution for choosing the referer only for the first request and having it dynamic for the rest?
After some searching I found this, and I tried to achieve it through Selenium, but I have not had any success with it yet:
webdriver.DesiredCapabilities.PHANTOMJS['phantomjs.page.onInitialized'] = """function() {page.customHeaders = {};};"""
Any ideas?
From what I can tell you would need to patch PhantomJS to achieve this.
PhantomJS contains a module called GhostDriver which provides the HTTP API that WebDriver uses to communicate with the PhantomJS instance. So anything you want to do via WebDriver needs to be supported by GhostDriver, and onInitialized does not seem to be.
If you're feeling adventurous you could clone the PhantomJS repository and patch the src/ghostdriver/session.js file to do what you want.
The _init method looks like this:
_init = function() {
    var page;
    // Ensure a Current Window is available, if it's found to be `null`
    if (_currentWindowHandle === null) {
        // Create the first Window/Page
        page = require("webpage").create();
        // Decorate it with listeners and helpers
        page = _decorateNewWindow(page);
        // set session-specific CookieJar
        page.cookieJar = _cookieJar;
        // Make the new Window, the Current Window
        _currentWindowHandle = page.windowHandle;
        // Store by WindowHandle
        _windows[_currentWindowHandle] = page;
    }
},
You could try using the code you found:
page.onInitialized = function() {
    page.customHeaders = {};
};
on the page object created there.
Depending on what you're testing, though, you might be able to save a lot of effort by ditching the browser and just testing the HTTP requests directly using something like the requests module.
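For instance, a minimal sketch with requests (the URLs are placeholders), where only the first request carries the spoofed referer and later requests use the page itself, the way a real browser would:
import requests

session = requests.Session()

# First request: send the spoofed Referer explicitly.
page = session.get('http://example.com/page',
                   headers={'Referer': 'http://spoofed-referrer.example/'})

# Subsequent AJAX-style requests: the page itself is the referer,
# as it would be in a real browser.
data = session.get('http://example.com/ajax-endpoint',
                   headers={'Referer': 'http://example.com/page'})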

Enabling cookies in python HTTP POST request

So I am trying to write a script that submits a form containing two fields, a username and a password, in a POST request, but the site responds with:
"This system requires the use of HTTP cookies to verify authorization information. Our system has detected that your browser has disabled HTTP cookies, or does not support them."
EDIT: With the new modified code below I believe I can successfully log in to the page. The only thing is that when I print the page's HTML text to the terminal it only displays an html element and a head element that contains the URL of the page; however, I've inspected the actual HTML of the page when I log in and there is a lot missing. Anyone know why this might be?
import requests

url = "https://someurl"
payload = {
    'username': 'myname',
    'password': '1234'
}
headers = {
    'User-Agent': 'Mozilla/5.0'
}

# The Session keeps cookies between requests, which this site requires.
session = requests.Session()
page = session.post(url, data=payload, headers=headers)
print(page.text)
Without the precise URL it is very hard to give you an answer.
Many web pages are built dynamically through JavaScript calls. The execution of the JavaScript creates the DOM that is rendered. If that's the case for the site you are looking at, you will only get the raw HTML response with Python, not the rendered DOM. You need something which actually executes the JS to get the final DOM, for example SlimerJS.
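If you prefer to stay in Python, a minimal sketch using Selenium with a JavaScript-capable driver (PhantomJS here, as in the thread above; 'https://someurl' is the question's placeholder) would render the page and give you the final DOM:
from selenium import webdriver

driver = webdriver.PhantomJS()   # any JavaScript-capable driver works
driver.get('https://someurl')
html = driver.page_source        # the DOM after JavaScript has run
driver.quit()
print(html)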

Figuring out url made in ajax post

At this link, when you hover over any row there is an image box which says "i" that you can click to get extra data. Then navigate to Lines History. Where is that information coming from? I can't find the URL that it is connected with.
I used dev tools in Chrome, and found out that there's an AJAX POST being made:
Request URL:http://www.sbrforum.com/ajax/?a=[SBR.Odds.Modules]OddsEvent_GetLinesHistory
Form Data: UserId=0&Sport=basketball&League=NBA&EventId=259672&View=LH&SportsbookId=238&DefaultBookId=238&ConsensusBookId=19&PeriodTypeId=&StartDate=2014-03-24&MatchupLink=http%3A%2F%2Fwww.sbrforum.com%2Fnba-basketball%2Fmatchups%2F20140324-602%2F&Key=de2f9e1485ba96a69201680d1f7bace4&theme=default
but when I try to visit this URL in a browser I get Invalid Ajax Call -- from host:
Any idea?
Like you say, it's probably an HTTP POST request.
When you navigate to the URL with the browser, the browser issues a GET request, without all the form data.
Try curl, wget, or the JavaScript console in your browser to do a POST.
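Or, sticking with Python, a sketch with the requests module (the form fields are copied from the question; the Key value is likely session-specific, so you may need a fresh one from dev tools):
import requests

url = 'http://www.sbrforum.com/ajax/?a=[SBR.Odds.Modules]OddsEvent_GetLinesHistory'
data = {
    'UserId': '0',
    'Sport': 'basketball',
    'League': 'NBA',
    'EventId': '259672',
    'View': 'LH',
    'SportsbookId': '238',
    'DefaultBookId': '238',
    'ConsensusBookId': '19',
    'PeriodTypeId': '',
    'StartDate': '2014-03-24',
    'MatchupLink': 'http://www.sbrforum.com/nba-basketball/matchups/20140324-602/',
    'Key': 'de2f9e1485ba96a69201680d1f7bace4',
    'theme': 'default',
}

resp = requests.post(url, data=data)
print(resp.status_code)
print(resp.text[:500])  # first part of the response body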
