Sending parameters to a remote server via a Python script

I'm learning Python these days, and I have a rather basic question.
There's a remote server where a web server is listening for incoming traffic. If I enter a URI like the following in my browser, it performs certain processing on some files for me:
//223.58.1.10:8000/processVideo?container=videos&video=0eb8c45a-3238-4e2b-a302-b89f97ef0554.mp4
My rather basic question is: how can I send the above dynamic parameters (i.e. container = container_name and video = video_name) to 223.58.1.10:8000/processVideo via a Python script?
I've seen ways to ping an IP, but my requirement goes beyond that. Guidance from experts will be very helpful. Thanks!

import requests
# Query-string parameters for a GET request go in params=, not data=.
requests.get("http://223.58.1.10:8000/processVideo",
             params={"video": "123asd", "container": "videos"})
I guess ... it's not really clear what you are asking ...
I'm sure you could do the same with just urllib and/or urllib2:
import urllib
# You should probably build the query string with urllib.urlencode here
# rather than plain string formatting.
some_url = "http://223.58.1.10:8000/processVideo?video=%s&container=%s" % (video_name, container_name)
urllib.urlopen(some_url).read()
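For completeness, a minimal sketch of the urlencode variant; the container and video values are just placeholders copied from the question's example URI, and the server is assumed to be reachable:
import urllib

# Placeholder values taken from the question's example URI.
params = urllib.urlencode({
    "container": "videos",
    "video": "0eb8c45a-3238-4e2b-a302-b89f97ef0554.mp4",
})
response = urllib.urlopen("http://223.58.1.10:8000/processVideo?" + params)
print response.getcode(), response.read()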

Related

Simple development http-proxy for multiple source servers

I have been developing with different web app servers (Tornado, Django, ...) and keep running into the same problem:
I want a simple reverse proxy that lets me combine content from different source servers (static files, dynamic content from an app server, or anything else) into one set of served files. That is, the browser should see everything as if it came from a single origin.
I know I can do that with nginx, but I am looking for an even simpler tool for development: something that can be started from the command line, does not need to run as root, and whose routing configuration is as simple as possible to change.
In development I just want to be able to mash up different sources. For example: my production server runs something I don't want to copy, but I want to combine it with static files on a different server and with a new application on my development machine.
Speed of the proxy is not the issue, just flexibility and speed of development!
I also found a big list of Python proxies, but after scanning it, all of them fall short: most just connect to a single destination server, with no way to route between multiple servers (where the proxy decides which one to use by inspecting the local URL).
I am just surprised that nobody else seems to have this need ...
You do not need to start nginx as root as long as you do not let it serve on port 80. If you want it to run on port 80 as a normal user, use setcap. In combination with a script that converts between an nginx configuration file and a route specification for your reverse proxy, this should give you the most reliable solution.
If you want something simpler/smaller, it should be pretty straightforward to write a script using Python's BaseHTTPServer and urllib. Here's an example that only implements GET; you'd have to extend it to at least handle POST and add some exception handling:
#!/usr/bin/env python
# encoding: utf-8
import BaseHTTPServer
import SocketServer
import urllib
import re

# Map a local path pattern to the upstream URL template it forwards to.
FORWARD_LIST = {
    '/google/(.*)': r'http://www.google.com/%s',
    '/so/(.*)': r'http://www.stackoverflow.com/%s',
}

class HTTPServer(BaseHTTPServer.HTTPServer, SocketServer.ThreadingMixIn):
    pass

class ProxyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # Pick the first route whose pattern matches the requested path.
        for pattern, url in FORWARD_LIST.items():
            match = re.search(pattern, self.path)
            if match:
                url = url % match.groups()
                break
        else:
            self.send_error(404)
            return
        # Fetch the upstream resource and relay its headers and body.
        dataobj = urllib.urlopen(url)
        data = dataobj.read()
        self.send_response(200)
        self.send_header("Content-Length", len(data))
        for key, value in dataobj.info().items():
            self.send_header(key, value)
        self.end_headers()
        self.wfile.write(data)

HTTPServer(("", 1234), ProxyHandler).serve_forever()
Your use case should be covered by:
https://mitmproxy.org/doc/features/reverseproxy.html
There is now a proxy that covers my needs (and more) -- very lightweight and very good:
Devd

Get response from an ip address in python

Given an IP, how can I make an HTTP request to it in Python?
For example, if I want to get a file named 't1.txt' from the server at '8.8.8.8', how can I do so?
I've tried using httplib and urllib2.
(It's better if the proper way uses standard Python libs.)
Thanks a lot,
Iko.
import urllib
urllib.urlretrieve("http://8.8.8.8/t1.txt", "t1.txt")
For simple URL retrieval, urllib is perfectly OK (and is a standard Python library)...
... but if you are looking for something easier to use in more complex cases, you should take a look at requests:
import requests
r = requests.get('http://8.8.8.8/t1.txt')
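And for completeness, a minimal sketch with urllib2 from the standard library (which the question mentions trying); the URL and output filename are just the ones from the question:
import urllib2

# Fetch the file from the server given by IP and save it locally.
response = urllib2.urlopen('http://8.8.8.8/t1.txt')
with open('t1.txt', 'wb') as out:
    out.write(response.read())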

Recording HTTP in Python with Scotch

I am trying to record HTTP GET/POST requests sent by my browser using the library scotch.
I am using their sample code: http://darcs.idyll.org/~t/projects/scotch/doc/recipes.html#id2
import scotch.proxy
app = scotch.proxy.ProxyApp()

import scotch.recorder
recorder = scotch.recorder.Recorder(app, verbosity=1)

try:
    from wsgiref.simple_server import WSGIServer, WSGIRequestHandler
    server_address = ('', 8000)
    httpd = WSGIServer(server_address, WSGIRequestHandler)
    httpd.set_app(app)
    while 1:
        httpd.handle_request()
finally:
    from cPickle import dump
    outfp = open('recording.pickle', 'w')
    dump(recorder.record_holder, outfp)
    outfp.close()
    print 'saved %d records' % (len(recorder.record_holder))
So I ran the above code, went over to Google Chrome, and visited a few sites to see if that would get recorded.
However, I do not see how the code should terminate. It seems that there has to be an error in httpd.handle_request() for the code to terminate.
I tried a variation of the code where I removed the try and finally syntax, and changed the while condition so that the loop ran for 30 seconds. However, that seems to be running forever as well.
Any ideas on how to get this working? I am also open to using other python libraries available for what I am trying to do: record my browser's GET/POST requests, including logons, and replay this within python.
Thanks.
Correct me if I'm wrong, but you're trying to log your local browser's activity by setting up a local proxy. If that's the case, your browser needs to be configured to go through that proxy for the proxy server to see and log the activity.
The code you've provided starts a proxy server at localhost:8000, so you need to tell your browser about it. The actual setting depends on the browser; I'm sure you'd be able to google it easily.
When I ask whether the code is running, I actually mean whether your local proxy receives any requests from the browser at all. Do you see the 'saved ... records' printout from your code at some point?
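As a hypothetical quick test (assuming the recorder script is running and listening on localhost:8000), you can send a single request through the proxy from Python instead of the browser and check whether anything gets recorded; example.com is just a placeholder target:
import urllib2

# Route one request through the local recording proxy on port 8000.
opener = urllib2.build_opener(
    urllib2.ProxyHandler({'http': 'http://localhost:8000'})
)
print opener.open('http://example.com/').read()[:200]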

How to add authentication to a (Python) twisted xmlrpc server

I am trying to add authentication to an XML-RPC server (which will be running on nodes of a P2P network) without using user:password@host, as this would reveal the password to all attackers. The point of the authentication is essentially to create a private network, preventing unauthorised users from accessing it.
My solution to this was to create a challenge-response system very similar to this, but I have no clue how to add it to the XML-RPC server code.
I found a similar question (where custom authentication was needed) here.
So I tried creating a module that would be called whenever a client connected to the server. This would connect to a challenge-response server running on the client and, if the client responded correctly, return True. The only problem was that I could only call the module once before getting a 'reactor cannot be restarted' error. So is there some way of having a class whose check() function connects and does this check every time it is called?
Would the simplest thing to do be to connect using SSL? Would that protect the password? Although this solution would not be optimal as I am trying to avoid having to generate SSL certificates for all the nodes.
Don't invent your own authentication scheme. There are plenty of great schemes already, and you don't want to become responsible for doing the security research into what vulnerabilities exist in your invention.
There are two very widely supported authentication mechanisms for HTTP (over which XML-RPC runs, therefore they apply to XML-RPC). One is "Basic" and the other is "Digest". "Basic" is fine if you decide to run over SSL. Digest is more appropriate if you really can't use SSL.
Both are supported by Twisted Web via twisted.web.guard.HTTPAuthSessionWrapper, with copious documentation.
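A minimal sketch of what that can look like, assuming Basic auth (which you would want to run over SSL in practice) and an in-memory credentials checker; the Echoer resource, realm name and credentials here are purely illustrative:
#!/usr/bin/env python
# Sketch: wrap an XML-RPC resource in HTTP Basic auth with twisted.web.guard.
from zope.interface import implements
from twisted.internet import reactor
from twisted.web import server, guard, xmlrpc
from twisted.web.resource import IResource
from twisted.cred.portal import IRealm, Portal
from twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse

class Echoer(xmlrpc.XMLRPC):
    def xmlrpc_echo(self, x):
        return x

class XMLRPCRealm(object):
    implements(IRealm)
    def requestAvatar(self, avatarId, mind, *interfaces):
        # Hand every authenticated user the same XML-RPC resource.
        if IResource in interfaces:
            return (IResource, Echoer(), lambda: None)
        raise NotImplementedError()

checker = InMemoryUsernamePasswordDatabaseDontUse(alice='secret')  # demo credentials only
portal = Portal(XMLRPCRealm(), [checker])
credentialFactory = guard.BasicCredentialFactory('example realm')
root = guard.HTTPAuthSessionWrapper(portal, [credentialFactory])

reactor.listenTCP(7080, server.Site(root))
reactor.run()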
Based on your problem description, it sounds like the Secure Remote Password Protocol might be what you're looking for. It's a password-based mechanism that provides strong, mutual authentication without the complexity of SSL certificate management. It may not be quite as flexible as SSL certificates but it's easy to use and understand (the full protocol description fits on a single page). I've often found it a useful tool for situations where a trusted third party (aka Kerberos/CA authorities) isn't appropriate.
For anyone looking for a full example, below is mine (thanks to Rakis for pointing me in the right direction). Here the user and password are stored in a file called 'passwd' (see the first useful link for more details and how to change that).
Server:
#!/usr/bin/env python
import bjsonrpc
from SRPSocket import SRPSocket
import SocketServer
from bjsonrpc.handlers import BaseHandler
import time

class handler(BaseHandler):
    def time(self):
        return time.time()

class SecureServer(SRPSocket.SRPHost):
    def auth_socket(self, socket):
        server = bjsonrpc.server.Server(socket, handler_factory=handler)
        server.serve()

s = SocketServer.ForkingTCPServer(('', 1337), SecureServer)
s.serve_forever()
Client:
#!/usr/bin/env python
import bjsonrpc
from bjsonrpc.handlers import BaseHandler
from SRPSocket import SRPSocket
import time

class handler(BaseHandler):
    def time(self):
        return time.time()

socket, key = SRPSocket.SRPSocket('localhost', 1337, 'dht', 'testpass')
connection = bjsonrpc.connection.Connection(socket, handler_factory=handler)
test = connection.call.time()
print test
time.sleep(1)
Some useful links:
http://members.tripod.com/professor_tom/archives/srpsocket.html
http://packages.python.org/bjsonrpc/tutorial1/index.html

Persistent HTTPS Connections in Python

I want to make an HTTPS request to a real-time stream and keep the connection open so that I can keep reading content from it and processing it.
I want to write the script in Python, but I am unsure how to keep the connection open. I have tested the endpoint with curl, which keeps the connection open successfully. But how do I do it in Python? Currently, I have the following code:
c = httplib.HTTPSConnection('userstream.twitter.com')
c.request("GET", "/2/user.json?" + req.to_postdata())
response = c.getresponse()
Where do I go from here?
Thanks!
It looks like your real-time stream is delivered as one endless HTTP GET response, yes? If so, you could just use Python's built-in urllib2.urlopen(). It returns a file-like object, from which you can read as much as you want until the server hangs up on you.
import urllib2

f = urllib2.urlopen('https://encrypted.google.com/')
while True:
    data = f.read(100)
    print(data)
Keep in mind that although urllib2 speaks HTTPS, it doesn't validate server certificates, so you might want to try an add-on package like pycurl or urlgrabber for better security. (I'm not sure if urlgrabber supports HTTPS.)
Connection keep-alive features are not available in any of the Python standard libraries for HTTPS. The most mature option is probably urllib3.
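A minimal sketch of streaming a long-lived response with urllib3 (the URL is the one from the question; preload_content=False keeps the body unread so you can consume it incrementally):
import urllib3

http = urllib3.PoolManager()
# Don't read the whole body up front; pull it down in chunks instead.
r = http.request('GET', 'https://userstream.twitter.com/2/user.json',
                 preload_content=False)
while True:
    chunk = r.read(1024)
    if not chunk:
        break
    print(chunk)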
httplib2 supports this. (I'd have thought this the most mature option, didn't know urllib3 yet, so TokenMacGuy may still be right)
EDIT: while httplib2 does support persistent connections, I don't think you can really consume streams with it (i.e. one long response vs. multiple requests over the same connection), which I now realise you may need.
