Change URL to another URL using mitmproxy - python

I am trying to redirect one page to another using mitmproxy and Python. I can run my inline script together with mitmproxy without issues, but I am stuck when it comes to changing the URL to another URL. For example, going to google.com should redirect to stackoverflow.com.
def response(context, flow):
    print("DEBUG")
    if flow.request.url.startswith("http://google.com/"):
        print("It does contain it")
        flow.request.url = "http://stackoverflow/"
This should in theory work. I see http://google.com/ in the mitmproxy UI (as a GET), but print("It does contain it") never fires.
When I try to put flow.request.url = "http://stackoverflow.com" right under print("DEBUG"), it doesn't work either.
What am I doing wrong? I have also tried if "google.com" in flow.request.url to check whether the URL contains google.com, but that doesn't work either.
Thanks

The following mitmproxy script will:
Redirect requests from mydomain.com to newsite.mydomain.com
Change the request path (e.g. from something like /getjson? to /getxml?)
Change the destination scheme
Change the destination server port
Overwrite the Host request header
import mitmproxy
from mitmproxy.models import HTTPResponse
from netlib.http import Headers

def request(flow):
    if flow.request.pretty_host.endswith("mydomain.com"):
        mitmproxy.ctx.log(flow.request.path)
        method = flow.request.path.split('/')[3].split('?')[0]
        flow.request.host = "newsite.mydomain.com"
        flow.request.port = 8181
        flow.request.scheme = 'http'
        if method == 'getjson':
            flow.request.path = flow.request.path.replace(method, "getxml")
        flow.request.headers["Host"] = "newsite.mydomain.com"
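To try the script, load it as an inline script when starting the proxy, e.g. mitmproxy -s myscript.py (the file name here is just a placeholder).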

You can set the .url attribute, which will update the underlying attributes. Looking at your code, the problem is that you change the URL in the response hook, after the request has already been made. You need to change the URL in the request hook, so that the change is applied before the request is sent to the upstream server.
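A minimal sketch of that fix, reusing the (context, flow) signature from the question (newer mitmproxy versions use def request(flow) instead):

def request(context, flow):
    # this hook runs before the request is forwarded upstream, so the rewrite takes effect
    if flow.request.url.startswith("http://google.com/"):
        flow.request.url = "http://stackoverflow.com/"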

Setting the url attribute will not help you, as it is merely constructed from underlying data. [EDIT: I was wrong, see Maximilian’s answer. The rest of my answer should still work, though.]
Depending on what exactly you want to accomplish, there are two options.
(1) You can send an actual HTTP redirection response to the client. Assuming that the client understands HTTP redirections, it will submit a new request to the URL you give it.
from mitmproxy.models import HTTPResponse
from netlib.http import Headers

def request(context, flow):
    if flow.request.host == 'google.com':
        flow.reply(HTTPResponse('HTTP/1.1', 302, 'Found',
                                Headers(Location='http://stackoverflow.com/',
                                        Content_Length='0'),
                                b''))
(2) You can silently route the same request to a different host. The client will not see this, it will assume that it’s still talking to google.com.
def request(context, flow):
    if flow.request.url == 'http://google.com/accounts/':
        flow.request.host = 'stackoverflow.com'
        flow.request.path = '/users/'
These snippets were adapted from an example found in mitmproxy’s own GitHub repo. There are many more examples there.
For some reason, I can’t seem to make these snippets work for Firefox when used with TLS (https://), but maybe you don’t need that.

Related

Using https as standard with django project

I am learning django and trying to complete my first webapp.
I am using shopify api & boilder plate (starter code) and am having an issue with the final step of auth.
Specifically, the redirect URL is using http:// when it should not, and I don't know how to change it.
# in my view
def authenticate(request):
    shop = request.GET.get('shop')
    print('shop:', shop)
    if shop:
        scope = settings.SHOPIFY_API_SCOPE
        redirect_uri = request.build_absolute_uri(reverse('shopify_app_finalize'))  # try this with new store url?
        print('redirect url', redirect_uri)  # this equals http://myherokuapp.com/login/finalize/
        permission_url = shopify.Session(shop.strip()).create_permission_url(scope, redirect_uri)
        return redirect(permission_url)
    return redirect(_return_address(request))
Which is a problem because my app uses the Embedded Shopify SDK which causes this error to occur at the point of this request
Refused to frame 'http://my.herokuapp.com/' because it violates the following Content Security Policy directive: "child-src 'self' https://* shopify-pos://*". Note that 'frame-src' was not explicitly set, so 'child-src' is used as a fallback.
How do I change the URL to use HTTPS?
Thank you so much in advance. Please let me know if I can share any other details but my code is practically identical to that starter code
This is what the Django doc says about build_absolute_uri:
Mixing HTTP and HTTPS on the same site is discouraged, therefore
build_absolute_uri() will always generate an absolute URI with the
same scheme the current request has. If you need to redirect users to
HTTPS, it’s best to let your Web server redirect all HTTP traffic to
HTTPS.
So you can do two things:
Make sure your site runs entirely on HTTPS (preferred option): set up your web server to use HTTPS (see the Heroku documentation on how to do this). Django will automatically use HTTPS for request.build_absolute_uri if the incoming request came in over HTTPS; a settings sketch for this is shown below.
I'm not sure what gets passed in the shop parameter, but if it contains personal data I'd suggest using HTTPS anyway.
Create the URL yourself:
url = "https://{host}{path}".format(
host = request.get_host(),
path = reverse('shopify_app_finalize'))
But you will still need to configure your server to accept incoming HTTPS requests.
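For the first option, a minimal settings sketch, assuming the app sits behind Heroku's router (which sets the X-Forwarded-Proto header) and that Django's SecurityMiddleware is enabled:

# settings.py (sketch; only the relevant lines shown)
# Trust the proxy's X-Forwarded-Proto header so request.is_secure() is accurate
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# Redirect any plain-HTTP request to HTTPS (requires SecurityMiddleware)
SECURE_SSL_REDIRECT = True

With that in place, request.build_absolute_uri() generates https:// URLs because the incoming request is recognised as secure.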

In Flask is there a way to ignore a request for a route that doesn't exist

I'm writing an app using Flask.
I have a set of routes and they work.
What I want to do on the client side is ignore any requests to invalid URLs. That is, I do not want to render any 404/error pages in the app. I would like an alert saying the URL is invalid, and for the browser to simply stay on the same page.
I don't want to be checking the URLs in JavaScript on the client, as this would expose them.
I have a route which responds correctly to unknown URLs:
@app.errorhandler(404)
def non_existant_route(error):
    return jsonify({"no": "such page"})
If I delete the return statement I get a 500 error.
I can't use abort()
Does this idea violate some HTTP principle?
Thanks
It sounds like you need a "catch-all" endpoint. Typically, it seems a catch-all endpoint would return a generic 404, but in your case, you probably want to return a 200 with some contextual information. Here's basically how you can do it (credit goes to http://flask.pocoo.org/snippets/57/):
from flask import Flask

app = Flask(__name__)

@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
def catch_all(path):
    # returns a 200 (not a 404) with the following contents:
    return 'your custom error content\n'

# This is just one of your other valid routes:
@app.route('/stuff')
def stuff():
    return 'stuff\n'

if __name__ == '__main__':
    app.run()
If you run this and curl various endpoints of the test app, here's what you get:
$ curl localhost:5000/stuff
stuff
$ curl localhost:5000/foo/bar
your custom error content
$ curl localhost:5000/otherstuff
your custom error content
As you can see, your other routes will still work as you expect.
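Since your 404 handler already returns JSON, a variant of the same catch-all that answers with a JSON payload (so client-side code can show an alert and stay on the current page) could look like this; the "error" key is just an illustrative name:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
def catch_all(path):
    # 200 response with a JSON body the client can inspect and turn into an alert
    return jsonify({"error": "invalid URL", "path": path})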
I've decided a solution to this is too hard! I cannot find any way to get the browser to ignore a response. There is no response header for 'do nothing'. If there were, we would probably never see a web server error again, which would not be good.
I could ajaxify all the requests as a way to grab the response headers and analyse them before any rendering or redirecting happens, but that starts to break the navigation (back buttons at least) and the pretty URLs. I could bung in a JS routing framework etc., but while I'm learning how it works I'm not building my app (I already have enough to learn!)
@app.errorhandler(404)
def page_not_found(error):
    return redirect(url_for('index'))
If you come up with something great, post it anyway; I'm not the first to ask this question, and probably not the last.
Thanks
I remember reading about a JavaScript library some days ago (but I don't remember the name...). The key point of this library was that all links and form submits were loaded not directly into the browser's "_top" frame/window but into a hidden div, and afterwards, when done, the content of the page was replaced by the content of this hidden div.
So if you want to catch bad links and such on the client side, you could hook up all links and submits and check the HTTP response code. If it is not 200 (OK), you display an error. If it is OK, you decide whether to replace the old page with the new content.
But there are two problems with this solution:
1. You would have to change the browser's location (in the address bar) without reloading the page, of course!
2. It might get tricky to post file uploads with JavaScript.
If I find the link or name of the JS library I saw, I will tell you!

How to read JSON from URL in Python?

I am trying to use Python to get a JSON file from the Web. If I open the URL in my browser (Mozilla or Chromium) I do see the JSON. But when I do the following with the Python:
import json
import urllib2

response = urllib2.urlopen(url)
data = json.loads(response.read())
I get an error message that tells me the following (translated into English): Errno 10060, a connection attempt failed because the server did not respond after a certain time period, or the connection was broken, or the host did not respond.
ADDED
It looks like there are many people who have faced the described problem. There are also some answers to similar (or the same) questions. For example, here we can see the following solution:
import requests
r = requests.get("http://www.google.com", proxies={"http": "http://61.233.25.166:80"})
print(r.text)
This is already a step forward for me (I think it is very likely that the proxy is the reason for the problem). However, I still have not got it working, since I do not know the URL of my proxy, and I will probably need a user name and password. How can I find them? And how is it that my browsers have them but I do not?
ADDED 2
I think I am now one step further. I have used this site to find out what my proxy is: http://www.whatismyproxy.com/
Then I have used the following code:
proxies = {'http':'my_proxy.blabla.com/'}
r = requests.get(url, proxies = proxies)
print r
As a result I get
<Response [404]>
Looks not so good, but at least I think that my proxy is correct, because when I randomly change the address of the proxy I get another error:
Cannot connect to proxy
So, I can connect to proxy but something is not found.
I think there might be something wrong when you're trying to get the JSON from the online source (URL). Just to make things clear, here is a small code snippet:
#!/usr/bin/env python
try:
    # For Python 3+
    from urllib.request import urlopen
except ImportError:
    # For Python 2
    from urllib2 import urlopen
import json

def get_jsonparsed_data(url):
    response = urlopen(url)
    # decode the raw bytes before parsing (str() would keep the b'' wrapper on Python 3)
    data = response.read().decode('utf-8')
    return json.loads(data)
If you still get a connection error, you can try a couple of steps:
Try to urlopen() a random site from the interpreter (interactive mode). If you are able to grab the source code, you're good. If not, check your internet connection or try the requests module. Check here
Check whether the JSON at the URL has correct syntax. For sample JSON syntax, check here
Try the simplejson module.
Edit 1:
If you want to access websites using a system-wide proxy, you will have to use a proxy handler that connects to that proxy via loopback (localhost). Sample code is shown below.
import urllib2

proxy = urllib2.ProxyHandler({
    'http': '127.0.0.1',
    'https': '127.0.0.1'
})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

# this way you can send both http and https requests using proxies
urllib2.urlopen('http://www.google.com')
urllib2.urlopen('https://www.google.com')
I have not worked a lot with ProxyHandler; I just know the theory and the code. I am sure there are better ways to access websites through proxies, ones that do not involve installing the opener every time you run the program, but hopefully this will point you in the right direction.
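If the proxy requires authentication, a minimal sketch with the requests module could look like the following; the proxy address, port, user name, password, and the JSON URL are placeholders you would replace with your own values:

import requests

url = "http://example.com/data.json"  # placeholder for the JSON URL you are trying to fetch

# placeholder proxy address and credentials, embedded in the proxy URL
proxies = {
    "http": "http://USERNAME:PASSWORD@my_proxy.blabla.com:8080",
    "https": "http://USERNAME:PASSWORD@my_proxy.blabla.com:8080",
}

r = requests.get(url, proxies=proxies, timeout=10)
r.raise_for_status()  # raises an exception for 4xx/5xx responses such as the 404 above
data = r.json()       # parse the JSON body directly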

Python Flask - Responding with a redirected website

(unsure of how to phrase this question)
Essentially I'm working with Flask + SoundCloud, and what I want to do is request an HTTP site (which I know will redirect me to a new site) and then return that site (with the same headers and info I originally got). Maybe this explains it better:
@app.route('/play')
def SongURL2():
    stream_url = "https://api.soundcloud.com/tracks/91941888/stream?client_id=MYCLIENTID"
    # newurl = HTTP_REQUEST(stream_url)  <- this will redirect me to the actual song streaming link (which only lives for a little bit)
    # return newurl
This is because SoundCloud's song streaming URLs only live for a short period of time, and the device I am using to call my RESTful API will not allow me to do a simple redirect to the new link. So I need to somehow act like a proxy.
You can achieve this using the requests module:
import requests

@app.route('/play')
def SongURL2():
    stream_url = "https://api.soundcloud.com/tracks/91941888/stream?client_id=MYCLIENTID"
    # Get the redirected url
    r = requests.get(stream_url)
    return r.url
Found an interesting way to proxy through Flask, similar to what @Dauros was aiming at: http://flask.pocoo.org/snippets/118/. In the end, bear in mind that this puts extra strain on the server.
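A minimal sketch of that proxying idea, assuming the requests module and the stream URL from the question; the chunk size and the headers passed through are choices, not requirements:

import requests
from flask import Flask, Response, stream_with_context

app = Flask(__name__)

@app.route('/play')
def play():
    stream_url = "https://api.soundcloud.com/tracks/91941888/stream?client_id=MYCLIENTID"
    # follow the redirect server-side and stream the final response back to the client
    upstream = requests.get(stream_url, stream=True)
    return Response(stream_with_context(upstream.iter_content(chunk_size=8192)),
                    status=upstream.status_code,
                    content_type=upstream.headers.get('Content-Type'))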

Python post username and password

I wish to access information from this website using Python, but I cannot figure out how to post my login information using urllib2:
https://www.linksyssmartwifi.com/
Can anyone explain why it isn't working?
Edit - This question has been listed as too broad. I will be more specific.
When I try to use the following code, I can't seem to 'post' the user name and password to the webpage.
import urllib2

auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(realm=' ??? ',
                          uri=' ??? ',
                          user='USERNAME',
                          passwd='PASSWORD')
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
urllib2.urlopen('EXAMPLELINK')
I am assuming this is because there is no string in the URL containing the data, and/or there is no API on this page that allows for posts. I don't understand why it is not working; I don't actually get any error. If I print the content I receive after urlopen, I get the HTML code of the page, but without it being logged in.
But my understanding here may be incorrect.
I actually would like to find out how many people are logged in to my home network, using a remote connection. I'm assuming this is the only way. I would like to automate some stuff based on who is logged into my local network; the information is available on this page after I log in.
I would preferably like to use Python or Bash scripting to do this.
This site does not use basic HTTP authentication, so the HTTPBasicAuthHandler you made will not even be called. The site just posts the username and password over SSLv3. I checked it out with Fiddler.
Also, you may need to create an SSL handler. I had problems with urllib over HTTPS and had to use the handler below.
import urllib2, ssl
sslv3_handler = urllib2.HTTPSHandler(context=ssl.SSLContext(ssl.PROTOCOL_SSLv3))
opener = urllib2.build_opener(sslv3_handler)
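Since the site uses a form-based login rather than basic auth, a hedged sketch with the requests module might look like the following; the login URL and the form field names (username, password) are guesses that you would need to confirm by inspecting the login form or the traffic in Fiddler:

import requests

session = requests.Session()

# hypothetical login endpoint and form field names; check the real ones with
# your browser's developer tools or Fiddler
login_url = "https://www.linksyssmartwifi.com/login"
payload = {"username": "USERNAME", "password": "PASSWORD"}

resp = session.post(login_url, data=payload)
resp.raise_for_status()

# the session keeps the login cookies, so later requests are authenticated
page = session.get("EXAMPLELINK")
print(page.text)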
