Given an IP address, how can I make an HTTP request to it in Python?
For example, if I want to get a file named 't1.txt' from the server that resides on '8.8.8.8', how can I do so?
I've tried using httplib and urllib2.
(Ideally, the solution would use only standard Python libraries.)
Thanks a lot,
Iko.
import urllib

# download t1.txt from the server into the current directory
urllib.urlretrieve("http://8.8.8.8/t1.txt", "t1.txt")
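Note that this is Python 2 code; on Python 3 the same function lives in urllib.request:
# Python 3 equivalent of the snippet above
from urllib.request import urlretrieve
urlretrieve("http://8.8.8.8/t1.txt", "t1.txt")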
For simple URL retrieval, urllib is perfectly OK (and it is a standard Python library)...
... but if you are looking for something easier to use in more complex cases, you should take a look at requests:
import requests
r = requests.get('http://8.8.8.8/t1.txt')
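One caveat when requesting by raw IP: if the server uses name-based virtual hosting, you may have to supply the Host header yourself so the server knows which site you want. A minimal sketch, where example.com is a hypothetical placeholder hostname:

import requests

# 'example.com' stands in for whatever hostname the server
# at 8.8.8.8 actually expects in the Host header
r = requests.get('http://8.8.8.8/t1.txt',
                 headers={'Host': 'example.com'})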
I'm learning python these days, and I have a rather basic question.
There's a remote server where a webserver is listening for incoming requests. If I enter a URI like the following in my browser, it performs certain processing on some files for me:
//223.58.1.10:8000/processVideo?container=videos&video=0eb8c45a-3238-4e2b-a302-b89f97ef0554.mp4
My rather basic question is: how can I send the above dynamic parameters (i.e. container=videos and video=video_name) to 223.58.1.10:8000/processVideo from a Python script?
I've seen ways to ping an IP, but my requirement goes beyond that. Guidance from experts would be very helpful. Thanks!
import requests

# use params= (not data=) so the values are sent as query-string
# parameters; data= would put them in the request body instead
requests.get("http://223.58.1.10:8000/processVideo",
             params={"video": "123asd", "container": "videos"})
I guess ... it's not really clear what you are asking ...
I'm sure you could do the same with just urllib and/or urllib2:
import urllib

# you should probably use urllib.urlencode here to escape the values ...
some_url = "http://223.58.1.10:8000/processVideo?video=%s&container=%s" % (video_name, container_name)
urllib.urlopen(some_url).read()
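For completeness, here is the urlencode variant suggested in the comment above; it escapes the parameter values so the URL stays valid even if they contain spaces or other special characters:

import urllib

# urlencode escapes each value and joins the pairs with '&'
params = urllib.urlencode({"video": video_name, "container": container_name})
urllib.urlopen("http://223.58.1.10:8000/processVideo?" + params).read()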
I want to make an HTTPS request to a real-time stream and keep the connection open so that I can keep reading content from it and processing it.
I want to write the script in Python, but I am unsure how to keep the connection open in it. I have tested the endpoint with curl, which keeps the connection open successfully. But how do I do it in Python? Currently, I have the following code:
# 'req' is an OAuth request object built elsewhere (not shown)
c = httplib.HTTPSConnection('userstream.twitter.com')
c.request("GET", "/2/user.json?" + req.to_postdata())
response = c.getresponse()
Where do I go from here?
Thanks!
It looks like your real-time stream is delivered as one endless HTTP GET response, yes? If so, you could just use Python's built-in urllib2.urlopen(). It returns a file-like object, from which you can read as much as you want until the server hangs up on you.
import urllib2

f = urllib2.urlopen('https://encrypted.google.com/')
while True:
    data = f.read(100)
    if not data:          # the server closed the connection
        break
    print(data)
Keep in mind that although urllib2 speaks HTTPS, it doesn't validate server certificates, so you might want to try an add-on package like pycurl or urlgrabber for better security. (I'm not sure if urlgrabber supports HTTPS.)
Connection keep-alive features are not available in any of the Python standard libraries for HTTPS. The most mature option is probably urllib3.
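To make that concrete, here is a minimal, untested sketch of consuming a long response over urllib3 (the URL is taken from the question above; check the urllib3 docs for the current API):

import urllib3

http = urllib3.PoolManager()
# preload_content=False leaves the body unread so we can stream it
r = http.request('GET', 'https://userstream.twitter.com/2/user.json',
                 preload_content=False)
for chunk in r.stream(100):   # yields up to 100 bytes at a time
    print(chunk)
r.release_conn()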
httplib2 supports this. (I'd have thought it the most mature option, but I didn't know about urllib3 yet, so TokenMacGuy may still be right.)
EDIT: while httplib2 does support persistent connections, I don't think you can really consume streams with it (i.e. one long response vs. multiple requests over the same connection), which I now realise you may need.
Please advise a library for working with SOAP in Python.
Right now I'm trying to use "suds" and I can't understand how to get the HTTP headers from the server's reply.
Code example:
from suds.client import Client
url = "http://10.1.0.36/money_trans/api3.wsdl"
client = Client(url)
login_res = client.service.Login("login", "password")
The variable "login_res" contains the XML answer but does not contain the HTTP headers. I need to get the session ID from them.
I think you actually want to look in the Cookie Jar for that.
client = Client(url)
login_res = client.service.Login("login", "password")
for c in client.options.transport.cookiejar:
    if "sess" in str(c).lower():
        print "Session cookie:", c
I'm not sure; I'm still a Suds noob myself. But this is what my gut tells me.
The response from Ishpeck is on the right path. I just wanted to add a few things about the Suds internals.
The suds client is a big fat abstraction layer on top of a urllib2 HTTP opener. The HTTP client, cookiejar, headers, request and responses are all stored in the transport object. The problem is that none of this activity is cached or stored inside of the transport other than, maybe, the cookies within the cookiejar, and even tracking these can sometimes be problematic.
If you want to see what's going on when debugging, my suggestion would be to add this to your code:
import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.client').setLevel(logging.DEBUG)
logging.getLogger('suds.transport').setLevel(logging.DEBUG)
Suds makes use of the native logging module, so by turning on debug logging you get to see all of the activity being performed underneath, including headers, variables, payload, URLs, etc. This has saved me tons of time.
Outside of that, if you really need to definitively track state on your headers, you're going to need to create a custom subclass of suds.transport.http.HttpTransport, override some of the default behavior, and then pass an instance to the Client constructor.
Here is a super-over-simplified example:
from suds.transport.http import HttpTransport, Reply, TransportError
from suds.client import Client

class MyTransport(HttpTransport):
    # custom stuff done here
    pass

mytransport_instance = MyTransport()
myclient = Client(url, transport=mytransport_instance)
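As a rough illustration of what such a subclass might do (a sketch, not a tested recipe; HeaderCapturingTransport is a name I made up): the transport's send() method returns a Reply object whose headers you can stash before handing it back to Suds:

class HeaderCapturingTransport(HttpTransport):
    # hypothetical transport that remembers the last response headers
    def send(self, request):
        reply = HttpTransport.send(self, request)
        self.last_headers = reply.headers   # e.g. look for Set-Cookie here
        return reply

client = Client(url, transport=HeaderCapturingTransport())
login_res = client.service.Login("login", "password")
print client.options.transport.last_headers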
I think the Suds library has poor documentation, so I recommend using Zeep instead. It's a SOAP client library for Python. Its documentation isn't perfect, but it's much clearer than the Suds docs.
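For comparison, the login call from the question would look roughly like this in Zeep (a sketch; the WSDL URL and operation name are copied from the question above):

from zeep import Client

client = Client("http://10.1.0.36/money_trans/api3.wsdl")
login_res = client.service.Login("login", "password")
# Zeep's transport wraps a requests session, so any session cookie
# the server sets should end up in client.transport.session.cookies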
I'm writing a web app that uses several 3rd-party web APIs, and I want to keep track of the low-level requests and responses for ad-hoc analysis. So I'm looking for a recipe that will get Python's urllib2 to log all bytes transferred via HTTP. Maybe a subclassed Handler?
Well, I've found how to setup the built-in debugging mechanism of the library:
import logging, urllib2, sys
hh = urllib2.HTTPHandler()
hsh = urllib2.HTTPSHandler()
hh.set_http_debuglevel(1)
hsh.set_http_debuglevel(1)
opener = urllib2.build_opener(hh, hsh)
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.NOTSET)
But I'm still looking for a way to dump all the information transferred.
This looks pretty tricky to do. There are no hooks in urllib2, urllib, or httplib (which this builds on) for intercepting either input or output data.
The only thing that occurs to me, other than switching tactics to use an external tool (of which there are many, and most people use such things), would be to write a subclass of socket.socket in your own new module (say, "capture_socket") and then insert that into httplib using "import capture_socket; import httplib; httplib.socket = capture_socket". You'd have to copy all the necessary references (anything of the form "socket.foo" that is used in httplib) into your own module, but then you could override things like recv() and sendall() in your subclass to do what you like with the data.
Complications would likely arise if you were using SSL, and I'm not sure whether this would be sufficient or if you'd also have to make your own socket._fileobject as well. It appears doable though, and perusing the source in httplib.py and socket.py in the standard library would tell you more.
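For what it's worth, here is an untested sketch of that idea, using a forwarding wrapper instead of a socket.socket subclass (every name in it is made up; plain HTTP only, SSL would need more work as noted above):

# capture_socket.py
import socket as real_socket

class CaptureSocket(object):
    """Wraps a real socket and logs every byte sent or received."""
    def __init__(self, sock):
        self._sock = sock

    def sendall(self, data, *args):
        print '>>', repr(data)                    # outgoing bytes
        return self._sock.sendall(data, *args)

    def recv(self, bufsize, *args):
        data = self._sock.recv(bufsize, *args)
        print '<<', repr(data)                    # incoming bytes
        return data

    def makefile(self, mode='r', bufsize=-1):
        # httplib reads responses through a file object; wrap
        # ourselves (not the raw socket) so those reads are logged too
        return real_socket._fileobject(self, mode, bufsize, close=False)

    def __getattr__(self, name):
        # forward everything else (connect, close, settimeout, ...)
        return getattr(self._sock, name)

class CaptureModule(object):
    """Stands in for the socket module inside httplib."""
    def create_connection(self, *args, **kwargs):
        return CaptureSocket(real_socket.create_connection(*args, **kwargs))

    def __getattr__(self, name):
        # forward constants and exceptions (error, timeout, ...)
        return getattr(real_socket, name)

Then, instead of assigning the module itself as described above, you would assign an instance: "import capture_socket; import httplib; httplib.socket = capture_socket.CaptureModule()". Since httplib only ever does attribute lookups on its socket reference, a forwarding object should work just as well as a module.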
I have a Python HTTP server that is supposed to determine the client's IP. How do I do that in Python? Is there any way to get the request headers and extract it from there?
PS: I'm using web.py.
Use web.ctx:
class example:
    def GET(self):
        print web.ctx.ip
More info is in the web.py documentation.
# the WSGI environ is exposed as web.ctx.env;
# web.ctx.ip is just shorthand for this lookup
web.ctx.env.get('REMOTE_ADDR')