I'm trying to make a simple HTTP request in Python in an SSH terminal:
from requests import get
r = get("https://www.google.com")
However, the call just hangs indefinitely. This does not happen when I'm not in an SSH session.
Is there any way to send the request such that it goes through?
Thanks ahead of time.
EDIT: Running the logging in Joran's link yields only the following line:
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): www.google.com
First, check that you can reach the URL using a system-wide tool such as curl: curl -I "https://www.google.com". If that does not time out and you get a successful response, my answer is not for you :)
Your code can run forever simply because no timeout is defined for the socket connection. If for some reason your system cannot read from the socket (at a low level), you will be waiting for a long time.
http://docs.python-requests.org/en/latest/user/quickstart/#timeouts
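For illustration, a minimal sketch of the fix (the 5-second value is an arbitrary assumption; tune it for your network). With a timeout set, the call raises an exception instead of hanging forever:

import requests

try:
    # timeout is in seconds; without it, requests can wait indefinitely
    r = requests.get("https://www.google.com", timeout=5)
    print(r.status_code)
except requests.exceptions.Timeout:
    print("request timed out")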
Try this (assuming you are using Python 3):
from urllib.request import urlopen
r = urlopen('https://www.google.com').read()
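Note that urlopen also accepts a timeout argument, so the hang from the question can be bounded here too; a small sketch (the 5-second value is just an example):

from urllib.request import urlopen

# raises socket.timeout / URLError if the server doesn't respond in time
r = urlopen('https://www.google.com', timeout=5).read()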
Related
Why does Python's requests.get() take so long to be cut off (or fail) when the target URL is remote (i.e. not localhost) and unreachable?
If the target URL is localhost and unreachable, it fails very quickly, so the issue only occurs with remote URLs.
How can I make it fail faster?
Yes. A timeout would do the trick, but you'll need to make sure the remote server can actually complete the request in time (if it's supposed to succeed). Meaning: don't set a 1s timeout if the request legitimately takes 5s to execute.
Requests Docs for Timeouts
requests.get('https://github.com/', timeout=0.001)
Alternatively, you could run the server you're after on localhost, and have some kind of variable that points at the "correct" server depending on whether the Python server is local or remote.
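A rough sketch of that idea (the environment variable and the remote URL are hypothetical):

import os
import requests

# Hypothetical convention: an environment variable selects the target server
base_url = "http://localhost:8000" if os.environ.get("USE_LOCAL_SERVER") else "https://example.com"

try:
    r = requests.get(base_url, timeout=3)  # fail fast instead of hanging
    print(r.status_code)
except requests.exceptions.Timeout:
    print("server did not respond within 3 seconds")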
What is the Python equivalent of the following shell command?
curl --interface 10.91.56.2 http://10.91.55.3/file0.txt
I am using CentOS 6.5 Linux and I want to send HTTP requests from virtual IP addresses like eth0:0, eth0:1, eth0:2, etc. simultaneously with eth0. I am trying to build a traffic generator tool in Python. I have been successful in sending multiple concurrent HTTP requests, and my next step is to send such requests from multiple IP addresses. I used the cURL command curl --interface 10.91.56.2 http://10.91.55.3/file0.txt to send a request from eth0:1 and was able to generate traffic from it. Can anyone guide me on how to do this using Python? 10.91.56.2 is my virtual eth0:1 interface IP and 10.91.55.3 is my server address.
Python's urllib2 module provides a good platform for making HTTP requests; in your case you can use the urlopen() function.
More about this library can be found at the link below:
how-to-use-urllib2-in-python
For me, eth0's IP is 10.91.56.3 and eth0:1's IP is 10.91.56.4, so to generate traffic using 10.91.56.4 (eth0:1):
I followed AKX's answer here.
In that answer, in the third class, write your interface's IP instead of 127.0.0.1. In my case I did it like this:
class BindableHTTPHandler(urllib2.HTTPHandler):
    def http_open(self, req):
        # BindableHTTPConnectionFactory comes from AKX's answer; pass your interface's IP
        return self.do_open(BindableHTTPConnectionFactory('10.91.56.4'), req)
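Alternatively, a minimal sketch assuming Python 2.7+, where httplib.HTTPConnection accepts a source_address argument that binds the outgoing socket to a local IP, so no custom handler is needed:

import httplib

# Bind the client socket to the virtual interface's IP (port 0 = any free port)
conn = httplib.HTTPConnection("10.91.55.3", 80, source_address=("10.91.56.4", 0))
conn.request("GET", "/file0.txt")
response = conn.getresponse()
print response.status
print response.read()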
I created a Python server on port 8000 using python -m SimpleHTTPServer.
When I visit this URL from my web browser it shows the directory listing.
Now I want to fetch that content using Python. So what I did is:
>>> import socket
>>> s = socket.socket(
... socket.AF_INET, socket.SOCK_STREAM)
>>> s.connect(("localhost", 8000))
>>> s.recv(1024)
But after s.recv(1024), nothing happens; it just waits there and prints nothing.
So my question is: how do I get the directory listing above using Python? Also, can someone suggest a tutorial on socket programming with Python? I didn't like the official tutorial that much.
I also observed a strange thing: while my Python program is waiting to receive content, I cannot access localhost:8000 from my web browser, but as soon as I kill the Python program I can access it again.
Arguably the simplest way to get content over HTTP in Python is to use the urllib2 module. For example:
from urllib2 import urlopen

f = urlopen('http://localhost:8000')
for line in f:
    print line
This will print out the directory listing served by SimpleHTTPServer.
But after s.recv(1024), nothing happens; it just waits there and prints nothing.
You simply opened a socket and waited for data, but that's not how the HTTP protocol works. You have to send a request first if you want to receive a response (basically, you have to tell the server which directory you want to list or which file to download). If you really want to, you can send the request over a raw socket to train your skills, but a proper library is highly recommended (see Matthew Adams' response and the urllib2 example).
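For completeness, a minimal raw-socket sketch (assuming the SimpleHTTPServer from the question is still listening on localhost:8000); note the request line that must be sent before recv() will return anything:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("localhost", 8000))
# Send a minimal HTTP request; HTTP/1.0 makes the server close the connection when done
s.sendall("GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
chunks = []
while True:
    data = s.recv(1024)
    if not data:  # an empty string means the server closed the connection
        break
    chunks.append(data)
s.close()
print "".join(chunks)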
I also observed a strange thing: while my Python program is waiting to receive content, I cannot access localhost:8000 from my web browser, but as soon as I kill the Python program I can access it again.
This is because SimpleHTTPServer is single-threaded and doesn't support multiple simultaneous connections. If you would like to fix that, take a look at the answers here: BasicHTTPServer, SimpleHTTPServer and concurrency.
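As a minimal sketch of one fix from that link (mixing in ThreadingMixIn so each request is handled in its own thread):

import SocketServer
import BaseHTTPServer
import SimpleHTTPServer

class ThreadedHTTPServer(SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):
    """Each incoming request is handled in its own thread."""

server = ThreadedHTTPServer(("", 8000), SimpleHTTPServer.SimpleHTTPRequestHandler)
server.serve_forever()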
Our client wants a script that will be installed on their customers' computers to be as trivial to install as possible. This means no packages to install beyond the standard library, in this case PyCurl.
We need to connect to a website using SSL, where the server expects a client certificate. Currently this is done by calling curl with os.system(), but to get the HTTP status code that way it looks like we'll have to use curl's '-v' option and comb through its output. Not difficult, just a bit icky.
Is there some other way to do this using the standard library that comes with Python 2.6?
I've read everything I could find on this and couldn't see a non-curl way of doing it.
Thanks in advance for any guidance on this subject whatsoever!
This will do the trick. Note that Verisign doesn't require a client certificate; it's just a randomly chosen HTTPS site.
import httplib

# key_file/cert_file are the client certificate pair presented to the server
conn = httplib.HTTPSConnection('verisign.com',
                               key_file='./my-key.pem',
                               cert_file='./my-cert.pem')
conn.set_debuglevel(20)  # enable verbose output before making the request
conn.connect()
conn.request('GET', '/')
response = conn.getresponse()
print 'HTTP status', response.status
EDIT: Just for posterity, Bruno's comment below is a valid one, and here's an article on how to roll it using the stdlib's ssl and socket modules, in case it's needed.
EDIT2: Seems I cannot post links - just do a web search for 'Validating SSL server certificate with Python 2.x another day'
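In case that article disappears too, here's a minimal sketch of the idea using the stdlib ssl module (the CA bundle path is an assumption; also note this validates the certificate chain but not the hostname, which the article covers):

import socket
import ssl

sock = socket.create_connection(('verisign.com', 443))
ssl_sock = ssl.wrap_socket(sock,
                           keyfile='./my-key.pem',
                           certfile='./my-cert.pem',
                           cert_reqs=ssl.CERT_REQUIRED,
                           ca_certs='/etc/ssl/certs/ca-certificates.crt')  # path is an assumption
# The chain was validated during the handshake; the hostname must still be checked manually
print ssl_sock.getpeercert()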
I want to test my application's handling of timeouts when grabbing data via urllib2, and I want some way to force the request to time out.
Short of finding a very very slow internet connection, what method can I use?
I seem to remember an interesting application/suite for simulating these sorts of things. Maybe someone knows the link?
I usually use netcat to listen on port 80 of my local machine:
nc -l 80
Then I use http://localhost/ as the request URL in my application. Netcat will answer on the HTTP port but will never send a response, so the request is guaranteed to time out, provided that you have specified a timeout, either in your urllib2.urlopen() call or via socket.setdefaulttimeout().
You could set the default timeout as shown above, but you can also use a mix of both, since Python 2.6 added a timeout option to the urlopen method:
import urllib2
import socket

try:
    # the third positional argument is the timeout in seconds (Python 2.6+)
    response = urllib2.urlopen("http://google.com", None, 2.5)
except urllib2.URLError, e:
    print "Oops, timed out?"
except socket.timeout:
    print "Timed out!"
The default timeout for urllib2 is infinite, and importing socket ensures that you can catch the timeout as a socket.timeout exception.
import socket
socket.setdefaulttimeout(2)  # set the default timeout to 2 seconds
If you want to set the timeout for each request individually, you can use the timeout argument of urlopen.
Why not write a very simple CGI script in bash that just sleeps for the required timeout period?
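In the same spirit but staying in Python, a minimal sketch of a server that just sleeps past your client's timeout (the port and delay are arbitrary examples):

import time
import BaseHTTPServer

class SlowHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(10)  # sleep longer than the client's timeout
        self.send_response(200)
        self.end_headers()
        self.wfile.write("too late\n")

BaseHTTPServer.HTTPServer(("", 8000), SlowHandler).serve_forever()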
If you're running on a Mac, speedlimit is very cool.
There's also dummynet. It's a lot more hardcore, but it also lets you do some vastly more interesting things. Here's a pre-configured VM image.
If you're running on a Linux box already, there's netem.
I believe I've heard of a Windows-based tool called TrafficShaper, but I haven't verified that one.