I am trying to create a simple HTTP server using Python's HTTPServer together with BaseHTTPRequestHandler from the http.server module: https://github.com/python/cpython/blob/main/Lib/http/server.py
There are numerous examples of this approach online and I don't believe I am doing anything unusual.
I am simply importing the classes in my code via:
from http.server import HTTPServer, BaseHTTPRequestHandler
My code overrides the do_GET() method to parse the path variable to determine what page to show.
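For context, a minimal sketch of the kind of setup described here (the handler name and pages are placeholders, not the actual code):

from http.server import HTTPServer, BaseHTTPRequestHandler

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pick a page based on the request path.
        if self.path == '/':
            body = b'<html><body><a href="/other">other page</a></body></html>'
        else:
            body = b'<html><body>another page</body></html>'
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('127.0.0.1', 50000), MyHandler).serve_forever()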
However, if I start this server and connect to it locally (e.g. http://127.0.0.1:50000), the first page loads fine. If I navigate to another page (via links on the first page), that too works fine. However, on occasion (and this is somewhat sporadic), there is a delay and the server log shows a Request timed out: timeout('timed out') error. I have tracked this down to the handle_one_request method in the BaseHTTPRequestHandler class:
def handle_one_request(self):
    """Handle a single HTTP request.

    You normally don't need to override this method; see the class
    __doc__ string for information on how to handle specific HTTP
    commands such as GET and POST.
    """
    try:
        self.raw_requestline = self.rfile.readline(65537)
        if len(self.raw_requestline) > 65536:
            self.requestline = ''
            self.request_version = ''
            self.command = ''
            self.send_error(HTTPStatus.REQUEST_URI_TOO_LONG)
            return
        if not self.raw_requestline:
            self.close_connection = True
            return
        if not self.parse_request():
            # An error code has been sent, just exit
            return
        mname = 'do_' + self.command   ## the name of the method is created
        if not hasattr(self, mname):   ## checking that we have that method defined
            self.send_error(
                HTTPStatus.NOT_IMPLEMENTED,
                "Unsupported method (%r)" % self.command)
            return
        method = getattr(self, mname)  ## getting that method
        method()                       ## finally calling it
        self.wfile.flush()  # actually send the response if not already done.
    except socket.timeout as e:
        # a read or a write timed out. Discard this connection
        self.log_error("Request timed out: %r", e)
        self.close_connection = True
        return
You can see where the exception is thrown in the "except socket.timeout as e:" clause.
I have tried overriding this method by including it in my own handler code, but it is not clear what is causing the error, so I keep running into dead ends. I've tried creating very basic HTML pages to see if there was something in the page itself, but even "blank" pages cause the same sporadic issue.
What's odd is that sometimes a page loads instantly, and then, almost randomly, it will time out. Sometimes it's the same page, sometimes a different page.
I've played with the http.timeout setting, but it makes no difference. I suspect it's some underlying socket issue, but I am unable to diagnose it further.
This is on a Mac running Big Sur 11.3.1, with Python version 3.9.4.
Any ideas on what might be causing this timeout, and in particular any suggestions for a resolution? Any pointers would be appreciated.
After further investigation, this particular issue appears to be specific to Safari. Running the exact same code and using Firefox does not show the same issue.
Related
I have a FastAPI app that worked fine until now. Here is the main router:
@router.get('/{inst_name}')
def chart(request: Request, inst_name: str):
    # Initializing an instance of class 'LiveData'
    lv = tse.LiveData(inst_name, 1)
    data = lv.get_data_for_charts('1min')
    return templates.TemplateResponse('chart.html',
                                      {
                                          'request': request,
                                          'title': inst_name,
                                          'data': data
                                      })
Today I changed the Javascript code that renders chart.html and tried to re-run the app. No matter how much I refreshed the page, I could not see the changes made. So I decided to do a hard reload on the page, but I got a strange result. The router function was run twice; the first run did everything as I expected, and the second one failed on the assertion check in the class initialization. Here is the __init__ method of the class:
def __init__(self, inst, avg_volume: int):
    print('inst name:', inst)
    self._inst_name = inst
    inst_id = AllInstCodes().get_inst_id_by_name(inst)
    assert inst_id is not None, 'Invalid instrument name.'
I put a print statement inside the above method to see what causes the assertion to fail. The inst parameter in the second initialization of the class is favicon.ico !!!
Why does the router method run twice, and where does favicon.ico come from?
The GET /favicon.ico request is a request your browser makes to retrieve the site's icon that gets shown on the tab in your browser (or in your bookmarks, etc.)
You can ignore this error; it's a request your browser makes in the background, and unless you do something very specific with it (e.g. change internal state), it shouldn't affect anything. You can create an explicit endpoint for it if you want to handle it properly and return a 404 error instead.
The reason why the endpoint runs twice is that there are two requests - your own and the browser's request for favicon.ico.
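If you do want an explicit handler for the icon, a minimal sketch (assuming the same router object from the question) could be:

from fastapi import Response

@router.get('/favicon.ico', include_in_schema=False)
def favicon():
    # Answer the browser's background request so it never reaches /{inst_name}.
    return Response(status_code=204)

Note that this route has to be registered before the catch-all '/{inst_name}' route, otherwise the catch-all will match first.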
Most common REST API designs use the resource group name as the initial part of the path to avoid conflicts with other groups that get added later, so you'd have:
@router.get('/instruments/{inst_name}')
instead. This will automagically give a 404 error when anyone attempts to request an undefined path.
I'm using requests to get a URL, such as:
import socket
import requests

while True:
    try:
        rv = requests.get(url, timeout=1)
        doSth(rv)
    except socket.timeout as e:
        print(e)
    except Exception as e:
        print(e)
After it runs for a while, it quits working. There is no exception or error of any kind; it simply looks suspended. I then stop the process by typing Ctrl+C in the console. It shows that the process is waiting for data:
.............
httplib_response = conn.getresponse(buffering=True) #httplib.py
response.begin() #httplib.py
version, status, reason = self._read_status() #httplib.py
line = self.fp.readline(_MAXLINE + 1) #httplib.py
data = self._sock.recv(self._rbufsize) #socket.py
KeyboardInterrupt
Why is this happening? Is there a solution?
It appears that the server you're sending your request to is throttling you; that is, it's sending bytes with less than 1 second between each packet (thus not triggering your timeout parameter), but slowly enough for the request to appear stuck.
The only fix for this I can think of is to reduce the timeout parameter, unless you can fix the throttling issue with the server provider.
Do keep in mind that you'll need to consider latency when setting the timeout parameter, otherwise your connection will be dropped too quickly and might not work at all.
By default, requests does not set a timeout for the connection or the read.
If, for whatever reason, the server never gets back to the client, the client will be stuck connecting or reading, most often stuck reading the response.
The quick resolution is to set a timeout value on the requests call; the approach is well described here: http://docs.python-requests.org/en/master/user/advanced/#timeouts
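As a rough sketch (the exact values are placeholders, and url/doSth are from the question), the loop would become something like:

import requests

while True:
    try:
        # (connect timeout, read timeout): the read timeout bounds how long
        # requests will wait between bytes of the response.
        rv = requests.get(url, timeout=(3.05, 10))
        doSth(rv)
    except requests.exceptions.Timeout as e:
        print(e)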
I'm a novice Python programmer trying to use Python to scrape a large amount of pages from fanfiction.net and deposit a particular line of the page's HTML source into a .csv file. My program works fine, but eventually hits a snag where it stops running. My IDE told me that the program has encountered "Errno 10054: an existing connection was forcibly closed by the remote host".
I'm looking for a way to get my code to reconnect and continue every time I get the error. My code will be scraping a few hundred thousand pages every time it runs; is this maybe just too much for the site? The site doesn't appear to prevent scraping. I've done a fair amount of research on this problem already and attempted to implement a retry decorator, but the decorator doesn't seem to work. Here's the relevant section of my code:
import time
import urllib.error
import urllib.request
from functools import wraps

def retry(ExceptionToCheck, tries=4, delay=3, backoff=2, logger=None):
    def deco_retry(f):
        @wraps(f)
        def f_retry(*args, **kwargs):
            mtries, mdelay = tries, delay
            while mtries > 1:
                try:
                    return f(*args, **kwargs)
                except ExceptionToCheck as e:
                    msg = "%s, Retrying in %d seconds..." % (str(e), mdelay)
                    if logger:
                        logger.warning(msg)
                    else:
                        print(msg)
                    time.sleep(mdelay)
                    mtries -= 1
                    mdelay *= backoff
            return f(*args, **kwargs)
        return f_retry  # true decorator
    return deco_retry
@retry(urllib.error.URLError, tries=4, delay=3, backoff=2)
def retrieveURL(URL):
    response = urllib.request.urlopen(URL)
    return response
def main():
    # first check: 5000 to 100,000
    MAX_ID = 600000
    ID = 400001
    URL = "http://www.fanfiction.net/s/" + str(ID) + "/index.html"
    fCSV = open('buffyData400k600k.csv', 'w')
    fCSV.write("Rating, Language, Genre 1, Genre 2, Character A, Character B, Character C, Character D, Chapters, Words, Reviews, Favorites, Follows, Updated, Published, Story ID, Story Status, Author ID, Author Name" + '\n')

    while ID <= MAX_ID:
        URL = "http://www.fanfiction.net/s/" + str(ID) + "/index.html"
        response = retrieveURL(URL)
Whenever I run the .py file outside of my IDE, it eventually locks up and stops grabbing new pages after about an hour, tops. I'm also running a different version of the same file in my IDE, and that one appears to have been running for almost 12 hours now, if not longer. Is it possible that the file could work in my IDE but not when run independently?
Have I set my decorator up wrong? What else could I potentially do to get Python to reconnect? I've also seen claims that an out-of-date SQL native client could cause problems for a Windows user such as myself; is this true? I've tried to update it but had no luck.
Thank you!
You are catching URLErrors, which Errno 10054 is not, so your @retry decorator is not going to retry. Try this:
@retry(Exception, tries=4)
def retrieveURL(URL):
    response = urllib.request.urlopen(URL)
    return response
This should retry 4 times on any Exception. Your @retry decorator is defined correctly.
Your code for reconnecting looks good except for one part: the exception that you're trying to catch. According to this StackOverflow question, an Errno 10054 is a socket.error. All you need to do is import socket and add an except socket.error clause to your retry handler.
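A rough sketch of that change, reusing the @retry decorator from the question (passing a tuple of exception types, since the decorator's except clause accepts one):

import socket
import urllib.error
import urllib.request

# Retry on both URLError and plain socket errors (Errno 10054 is the latter).
@retry((urllib.error.URLError, socket.error), tries=4, delay=3, backoff=2)
def retrieveURL(URL):
    return urllib.request.urlopen(URL)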
I'm using Python and Webtest to test a WSGI application. I found that exceptions raised in the handler code tend to be swallowed by Webtest, which then raises a generic:
AppError: Bad response: 500 Internal Server Error
How do I tell it to raise or print the original error that caused this?
While clj's answer certainly works, you may still want to access the response in your test case. To do this, you can use expect_errors=True (from the webtest documentation) when you make your request to the TestApp, and that way no AppError will be raised. Here is an example where I am expecting a 403 error:
# attempt to access secure page without logging in
response = testapp.get('/secure_page_url', expect_errors=True)
# now you can assert an expected http code,
# and print the response if the code doesn't match
self.assertEqual(403, response.status_int, msg=str(response))
Your WSGI framework and server contain handlers which catch exceptions and perform some action (render a stacktrace in the body, log the backtrace to a logfile, etc.). Webtest, by default, does not show the actual response, which might be useful if your framework renders a stacktrace in the body. I use the following extension to Webtest when I need to look at the body of the response:
class BetterTestApp(webtest.TestApp):
    """A testapp that prints the body when status does not match."""

    def _check_status(self, status, res):
        if status is not None and status != res.status_int:
            raise webtest.AppError(
                "Bad response: %s (not %s)\n%s", res.status, status, res)
        super(BetterTestApp, self)._check_status(status, res)
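Usage is the same as with webtest.TestApp; for instance (my_wsgi_app here is a placeholder for whatever application you are testing):

# Wrapping the application in the subclass above means a status mismatch
# raises an AppError whose message includes the full response body.
testapp = BetterTestApp(my_wsgi_app)
testapp.get('/some/url', status=200)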
Getting more control over what happens to the exception depends on what framework and server you are using. For the built in wsgiref module you might be able to override error_output to achieve what you want.
I'm developing a Python service (class) for accessing a Redis server. I want to know how to check whether the Redis server is running, and how to handle the case where I'm somehow unable to connect to it.
Here is a part of my code:
import redis

rs = redis.Redis("localhost")
print(rs)
It prints the following
<redis.client.Redis object at 0x120ba50>
even if my Redis Server is not running.
I found that my Python code connects to the server only when I do a set() or get() with my redis instance.
So I don't want other services using my class to get an exception saying
redis.exceptions.ConnectionError: Error 111 connecting localhost:6379. Connection refused.
I want to return a proper message/error code. How can I do that?
If you want to test redis connection once at startup, use the ping() command.
from redis import Redis
redis_host = '127.0.0.1'
r = Redis(redis_host, socket_connect_timeout=1) # short timeout for the test
r.ping()
print('connected to redis "{}"'.format(redis_host))
The command ping() checks the connection and if invalid will raise an exception.
Note - the connection may still fail after you perform the test so this is not going to cover up later timeout exceptions.
The official way to check redis server availability is ping (http://redis.io/topics/quickstart).
One solution is to subclass redis and do two things (see the sketch below):
check for a connection at instantiation
write an exception handler for the case of no connectivity when making requests
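A minimal sketch of that idea (the class name, error message, and None fallback are only illustrative):

import redis

class CheckedRedis(redis.Redis):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # 1. check for a connection at instantiation
        try:
            self.ping()
        except redis.exceptions.ConnectionError as exc:
            raise RuntimeError('Redis server is not reachable') from exc

    def get(self, name):
        # 2. handle missing connectivity when making a request
        try:
            return super().get(name)
        except redis.exceptions.ConnectionError:
            return None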
As you said, the connection to the Redis server is only established when you try to execute a command on the server. If you do not want to go ahead without checking that the server is available, you can just send a random query to the server and check the response. Something like:
try:
    response = rs.client_list()
except redis.ConnectionError:
    # your error handling code here
There are already good solutions here, but here's my quick and dirty for django_redis which doesn't seem to include a ping function (though I'm using an older version of django and can't use the newest django_redis).
# assuming rs is your redis connection
def is_redis_available():
    # ... get redis connection here, or pass it in. up to you.
    try:
        rs.get(None)  # getting None returns None or throws an exception
    except (redis.exceptions.ConnectionError,
            redis.exceptions.BusyLoadingError):
        return False
    return True
This seems to work just fine. Note that if redis is restarting and still loading the .rdb file that holds the cache entries on disk, then it will throw the BusyLoadingError, though its base class is ConnectionError, so it's fine to just catch that.
You can also simply except on redis.exceptions.RedisError which is the base class of all redis exceptions.
Another option, depending on your needs, is to create get and set functions that catch the ConnectionError exceptions when setting/getting values. Then you can continue, or wait, or do whatever else you need to (raise a new exception or just print a more useful error message).
This might not work well if you absolutely depend on setting/getting the cache values (for my purposes, if the cache is offline for whatever reason, we generally have to "keep going"), in which case it might make sense to let the exceptions happen, let the program/script die, and get the redis server/service back to a reachable state.
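For example, a rough sketch of such wrapper functions (the names and the silent fallback behaviour are just illustrative):

import redis

rs = redis.Redis('localhost')

def cache_get(key):
    # Return the cached value, or None if redis is unreachable.
    try:
        return rs.get(key)
    except redis.exceptions.ConnectionError:
        return None

def cache_set(key, value):
    # Skip the cache silently if redis is unreachable.
    try:
        rs.set(key, value)
    except redis.exceptions.ConnectionError:
        pass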
I have also come across a ConnectionRefusedError (raised at the socket level) when redis was not running, so I had to add that to the availability check.
r = redis.Redis(host='localhost', port=6379, db=0)

def is_redis_available(r):
    try:
        r.ping()
        print("Successfully connected to redis")
    except (redis.exceptions.ConnectionError, ConnectionRefusedError):
        print("Redis connection error!")
        return False
    return True

if is_redis_available(r):
    print("Yay!")
The Redis server connection can be checked by sending a ping command to the server.
>>> import redis
>>> r = redis.Redis(host="127.0.0.1", port="6379")
>>> r.ping()
True
Using the ping method, we can also handle reconnection, etc. To find the reason for a connection error, exception handling can be used, as suggested in other answers:
try:
    is_connected = r.ping()
except redis.ConnectionError:
    # handle error
    pass
Use ping()
import redis
from redis import Redis

redis_host = 'localhost'
conn_pool = Redis(redis_host)
# Connection=Redis<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>

try:
    conn_pool.ping()
    print('Successfully connected to redis')
except redis.exceptions.ConnectionError as r_con_error:
    print('Redis connection error')
    # handle exception