What is the best way to request constant data from a server in Python? I've tried urllib3, but for some reason after a while the Python script stops. I am also trying urllib2 (see the code below), but I notice there's sometimes a huge delay (which did not happen as frequently with urllib3), and the response does not come every 0.5 seconds (sometimes it's every 6 seconds). What can I do to solve this?
import socket
import urllib2
import time

# timeout in seconds
timeout = 10
socket.setdefaulttimeout(timeout)

while True:
    try:
        # this call to urllib2.urlopen now uses the default timeout
        # we have set in the socket module
        req = urllib2.Request('https://www.okcoin.com/api/v1/future_ticker.do?symbol=btc_usd&contract_type=this_week')
        response = urllib2.urlopen(req)
        r = response.read()
        req2 = urllib2.Request('http://market.bitvc.com/futures/ticker_btc_week.js')
        response2 = urllib2.urlopen(req2)
        r2 = response2.read()
    except:
        continue
    print r + str(time.time())
    print r2 + str(time.time())
    time.sleep(0.5)
I think I found the problem: I needed to keep an open HTTP session; that way I get the data more continuously. What's the best way of doing this? I did http = requests.Session() and am using requests now.
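For reference, a minimal sketch of such a polling loop with a persistent requests.Session (Python 3 syntax; the URLs and the 10 s timeout are taken from the question, everything else is just one way to arrange it):

import time
import requests

# reusing one Session keeps the underlying TCP connection alive between polls
http = requests.Session()

urls = [
    'https://www.okcoin.com/api/v1/future_ticker.do?symbol=btc_usd&contract_type=this_week',
    'http://market.bitvc.com/futures/ticker_btc_week.js',
]

while True:
    for url in urls:
        try:
            resp = http.get(url, timeout=10)
            print(resp.text, time.time())
        except requests.RequestException:
            # skip this URL for this round and try again on the next pass
            continue
    time.sleep(0.5)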
Related
I have a link, e.g. www.someurl.com/api/getdata?password=..., and when I open it in a web browser it sends a constantly updating document of text. I'd like to make an identical connection in Python, and dump this data to a file live as it's received. I've tried using requests.Session(), but since the stream of data never ends (and dropping it would lose data), the get request also never ends.
import requests
s = requests.Session()
x = s.get("www.someurl.com/api/getdata?password=...") #never terminates
What's the proper way to do this?
I found the answer I was looking for here: Python Requests Stream Data from API
Full implementation:
import requests

url = "www.someurl.com/api/getdata?password=..."
s = requests.Session()

with open('file.txt', 'a') as fp:
    with s.get(url, stream=True) as resp:
        for line in resp.iter_lines(chunk_size=1):
            fp.write(str(line))
Note that chunk_size=1 is necessary for the loop to react immediately to new complete messages, rather than waiting for an internal buffer to fill before iterating over the lines. I believe chunk_size=None is meant to do this, but it doesn't work for me.
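If chunk_size=1 ever becomes too CPU-heavy, an alternative sketch (not from the linked answer) is to read raw chunks as they arrive with iter_content(chunk_size=None) and split the lines yourself; the file name and placeholder URL are reused from above:

import requests

url = "www.someurl.com/api/getdata?password=..."  # placeholder URL from the question

with requests.Session() as s, open('file.txt', 'a') as fp:
    with s.get(url, stream=True) as resp:
        buf = b''
        # with stream=True, chunk_size=None yields data as soon as it arrives,
        # in whatever size the server sends it
        for chunk in resp.iter_content(chunk_size=None):
            buf += chunk
            # write out every complete line, keep the partial tail buffered
            while b'\n' in buf:
                line, buf = buf.split(b'\n', 1)
                fp.write(line.decode('utf-8', 'replace') + '\n')
                fp.flush()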
You can keep making GET requests to the URL:
import requests
import time

url = "www.someurl.com/api/getdata?password=..."
sess = requests.session()

while True:
    req = sess.get(url)
    time.sleep(10)
This will terminate the request after 1 second:
import multiprocessing
import time
import requests

def get_from_url(queue):
    # runs in a child process; push the result back through the queue
    s = requests.Session()
    data = s.get("www.someurl.com/api/getdata?password=...")
    queue.put(data.text)

if __name__ == '__main__':
    while True:
        queue = multiprocessing.Queue()
        p = multiprocessing.Process(target=get_from_url, name="get_from_url", args=(queue,))
        p.start()
        # Wait 1 second for the GET request
        time.sleep(1)
        p.terminate()
        p.join()
        # do something with the data (None if the request did not finish in time)
        data = queue.get() if not queue.empty() else None
        print(data)  # or something else
Good day. The problem I am facing is that I want to check whether my website is up or not. This is the sample pseudocode:
Check(website.com)
if checking_time > 10 seconds:
    print "No response received"
else:
    print "Site is up"
I already tried the code below, but it is not working:
try:
    response = urllib.urlopen("http://insurance.contactnumbersph.com").getcode()
    time.sleep(5)
    if response == "" or response == "403":
        print "No response"
    else:
        print "ok"
If the website is not up and running, you will get a connection refused error, and no status code is returned at all. So you can catch the error in Python with simple try: and except: blocks.
import requests

URL = 'http://some-url-where-there-is-no-server'

try:
    resp = requests.get(URL)
except Exception as e:
    # handle here
    print(e)  # for example
You can also retry up to 10 times, once per second, and record whether the site ever responded:
import time
import requests

URL = 'http://some-url'
counts = 0
gotConnected = False

while counts < 10:
    try:
        resp = requests.get(URL)
        gotConnected = True
        break
    except Exception as e:
        counts += 1
        time.sleep(1)
The result will be available in the gotConnected flag, which you can use later to take the appropriate action.
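For example, a trivial sketch of consuming the flag afterwards:

if gotConnected:
    print("Site is up")
else:
    print("No response received after 10 attempts")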
Note that the timeout that gets passed around by urllib applies to the "wrong thing": each individual network operation (e.g. hostname resolution, socket connection, sending headers, reading a few bytes of the headers, reading a few more bytes of the response) gets the same timeout applied. Hence passing a "timeout" of 10 seconds could still allow a large response to drag on for hours.
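(Not part of the original answer: one way to cap the total time is to stream the response and check a wall-clock deadline yourself. The sketch below uses the third-party requests library; the 10-second budget and 8 KiB chunk size are arbitrary.)

import time
import requests

def fetch_with_deadline(url, total_timeout=10, chunk_size=8192):
    # the per-operation connect/read timeout still applies underneath
    deadline = time.monotonic() + total_timeout
    body = bytearray()
    with requests.get(url, stream=True, timeout=5) as resp:
        for chunk in resp.iter_content(chunk_size=chunk_size):
            body.extend(chunk)
            if time.monotonic() > deadline:
                raise TimeoutError('overall deadline exceeded')
    return bytes(body)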
If you want to stick to built-in Python code then it would be nice to use a thread to do this, but it doesn't seem to be possible to cancel running threads nicely. An async library like trio would allow better timeout and cancellation handling, but we can make do by using the multiprocessing module instead:
from urllib.request import Request, urlopen
from multiprocessing import Process
from time import perf_counter

def _http_ping(url):
    req = Request(url, method='HEAD')
    print(f'trying {url!r}')
    start = perf_counter()
    res = urlopen(req)
    secs = perf_counter() - start
    print(f'response {url!r} of {res.status} after {secs*1000:.2f}ms')
    res.close()

def http_ping(url, timeout):
    proc = Process(target=_http_ping, args=(url,))
    try:
        proc.start()
        proc.join(timeout)
        success = not proc.is_alive()
    finally:
        proc.terminate()
        proc.join()
        proc.close()
    return success
You can use https://httpbin.org/ to test this, e.g.:
http_ping('https://httpbin.org/delay/2', 1)
should print out a "trying" message, but not a "response" message. You can adjust the delay time and timeout to explore how this behaves.
Note that this spins up a new process for each request, but as long as you're doing fewer than a thousand pings a second it should be OK.
I was trying to send out HTTP POST requests in parallel to a web service. To be specific, I'd like to load-test the web service under a concurrent-requests environment in terms of response time. So I plan to use threads and urllib2 to achieve the job: each thread carries out an HTTP request via urllib2.
Here is how I did it:
import sys
import urllib2 as u2
import time
from threading import Thread

def run_job():
    try:
        req = u2.Request(
            url = **the web service url**,
            data = **data send to the web service**,
            headers = **http header web service requires**,
        )
        opener = u2.build_opener()
        u2.install_opener(opener)
        start_time = time.time()
        response = u2.urlopen(req, timeout = 60)
        end_time = time.time()
        html = response.read()
        code = response.code
        response.close()
        if code == 200:
            print end_time - start_time
        else:
            print -1
    except Exception, e:
        print -2

if __name__ == "__main__":
    N = 1
    if len(sys.argv) > 1:
        N = int(sys.argv[1])
    threads = []
    for i in range(1, N):
        t = Thread(target=run_job, args=())
        threads.append(t)
    [x.start() for x in threads]
    [x.join() for x in threads]
In the meantime, I use Fiddler2 to capture the requests being sent out. Fiddler is a tool to compose HTTP requests and send them out, and it can also capture the HTTP requests going through the host.
When I looked at Fiddler2, the requests were being sent out one by one instead of all together at once, which is what I expected to happen. If what Fiddler shows is right (requests are sent out one by one), then to my knowledge there must be some queue that the requests are waiting in. Could someone shed some light on what happens behind this? And if possible, how can I make the requests truly parallel?
Also, I have put two timestamps before and after the request takes place; if requests are waiting in a queue after urllib2.urlopen is executed, then the delta of the two timestamps includes the time spent waiting in that queue. Is it possible to be more precise, that is, to measure the time between the request being sent out and the response being received?
Many Thanks,
Keeping the same format like this:
import urllib2
request = urllib2.Request('http://www.example.com', data)
response = urllib2.urlopen(request, timeout=4)
content = response.read()
Instead of using timeout=4, how can I keep the connection alive for as long as it takes?
Thanks in advance.
You can specify a very long timeout:
response = urllib2.urlopen(request, timeout=9999)
In addition you should look at requests, a much nicer lib than urllib2:
requests.get('http://www.example.com')
This by default hangs until the connection is closed.
Here is a python script that loads a url and captures response time:
import urllib2
import time
opener = urllib2.build_opener()
request = urllib2.Request('http://example.com')
start = time.time()
resp = opener.open(request)
resp.read()
ttlb = time.time() - start
Since my timer is wrapped around the whole request/response (including read()), this will give me the TTLB (time to last byte).
I would also like to get the TTFB (time to first byte), but am not sure where to start/stop my timing. Is urllib2 granular enough for me to add TTFB timers? If so, where would they go?
You should use pycurl, not urllib2.
Install pycurl:
You can use pip / easy_install, or install it from source.
easy_install pyCurl
You may need to be a superuser.
Usage:
import pycurl
import sys
import json

WEB_SITES = sys.argv[1]

def main():
    c = pycurl.Curl()
    c.setopt(pycurl.URL, WEB_SITES)  # set url
    c.setopt(pycurl.FOLLOWLOCATION, 1)
    content = c.perform()  # execute
    dns_time = c.getinfo(pycurl.NAMELOOKUP_TIME)  # DNS time
    conn_time = c.getinfo(pycurl.CONNECT_TIME)  # TCP/IP 3-way handshaking time
    starttransfer_time = c.getinfo(pycurl.STARTTRANSFER_TIME)  # time-to-first-byte time
    total_time = c.getinfo(pycurl.TOTAL_TIME)  # total request time
    c.close()
    data = json.dumps({'dns_time': dns_time,
                       'conn_time': conn_time,
                       'starttransfer_time': starttransfer_time,
                       'total_time': total_time})
    return data

if __name__ == "__main__":
    print main()
Using your current open / read pair, there's only one other timing point possible: between the two.
The open() call should be responsible for actually sending the HTTP request, and should (AFAIK) return as soon as that has been sent, ready for your application to actually read the response via read().
Technically it's probably the case that a long server response would make your application block on the call to read(), in which case this isn't TTFB.
However if the amount of data is small then there won't be much difference between TTFB and TTLB anyway. For a large amount of data, just measure how long it takes for read() to return the first smallest possible chunk.
By default, the implementation of HTTP opening in urllib2 has no callbacks when read is performed. The OOTB opener for the HTTP protocol is urllib2.HTTPHandler, which uses httplib.HTTPResponse to do the actual reading via a socket.
In theory, you could write your own subclasses of HTTPResponse and HTTPHandler, and install it as the default opener into urllib2 using install_opener. This would be non-trivial, but not excruciatingly so if you basically copy and paste the current HTTPResponse implementation from the standard library and tweak the begin() method in there to perform some processing or callback when reading from the socket begins.
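A rough sketch of that approach might look like the following (Python 2, matching the urllib2/httplib names above; TimedHTTPResponse, TimedHTTPConnection and TimedHTTPHandler are hypothetical names, only plain http:// is handled, and the exact wrapping urllib2 does around the response varies by version, so treat this as an illustration rather than a drop-in implementation):

import time
import httplib
import urllib2

timings = {}  # filled in by the response subclass

class TimedHTTPResponse(httplib.HTTPResponse):
    def begin(self):
        # called right after the request is sent; the parent begin() blocks
        # while reading the status line and headers from the socket
        timings['waiting_since'] = time.time()
        httplib.HTTPResponse.begin(self)
        # headers fully read: a reasonable approximation of time-to-first-byte
        timings['headers_read'] = time.time()

class TimedHTTPConnection(httplib.HTTPConnection):
    response_class = TimedHTTPResponse

class TimedHTTPHandler(urllib2.HTTPHandler):
    def http_open(self, req):
        return self.do_open(TimedHTTPConnection, req)

urllib2.install_opener(urllib2.build_opener(TimedHTTPHandler))

timings['start'] = time.time()
resp = urllib2.urlopen('http://example.com')
resp.read()
timings['done'] = time.time()

print 'ttfb ~', timings['headers_read'] - timings['start']
print 'ttlb ~', timings['done'] - timings['start']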
To get a good approximation you have to do read(1) and measure the time.
It works pretty well for me.
The only thing you should keep in mind: Python might load more than one byte on a call to read(1), depending on its internal buffers, but I think most tools will be similarly inaccurate.
import urllib2
import time
opener = urllib2.build_opener()
request = urllib2.Request('http://example.com')
start = time.time()
resp = opener.open(request)
# read one byte
resp.read(1)
ttfb = time.time() - start
# read the rest
resp.read()
ttlb = time.time() - start