I run a simple Flask app like this:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def welcome():
    return "OK"

app.config.update(
    DEBUG=True
)

if __name__ == '__main__':
    app.run(use_reloader=False)
When I run it and visit it, sometimes (not always) it can't respond to the request and throws an exception:
Exception happened during processing of request from ('127.0.0.1', 54481)
Traceback (most recent call last):
File "c:\python27\Lib\SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
File "c:\python27\Lib\SocketServer.py", line 321, in process_request
self.finish_request(request, client_address)
File "c:\python27\Lib\SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "c:\python27\Lib\SocketServer.py", line 651, in __init__
self.finish()
File "c:\python27\Lib\SocketServer.py", line 710, in finish
self.wfile.close()
File "c:\python27\Lib\socket.py", line 279, in close
self.flush()
File "c:\python27\Lib\socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 10053]
I can't understand what causes this fault. How can I solve it? And how can I use try/except to catch it?
I recently ran into this error message while trying to use Flask to serve audio files. I get this error message whenever the client closes the stream before the end of the stream.
This error message doesn't originate from your Flask application, but rather from the underlying SocketServer used to dispatch request data. What is happening is that the connection to the client is ending for some reason, but Flask continues to try to write data to the closed socket. You can't catch this exception from your Flask application, because Flask catches it for you. Flask prints it out as a service to you, notifying you that the stream was closed prematurely, i.e. before Flask finished writing data to it.
To sum it up: this error message is internal to Flask. Flask prints it to tell you that it couldn't get all the data to the client before the connection closed. You can't catch it, and you shouldn't have any reason to catch it.
I've found the following solution to be a good, at least temporary, fix.
if __name__ == '__main__':
    while True:
        try:
            app.run(use_reloader=False)
        except:
            pass
You can add your own exit logic (see the sketch below), or leave the program with CTRL + \, which sends SIGQUIT (important if you're running threaded Flask).
You can't, however, use:
except KeyboardInterrupt:
because Flask already catches KeyboardInterrupt exceptions and handles them.
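If you want that exit logic, one option is to stop looping once a shutdown flag is set. This is only a sketch of mine, not part of Flask: the /shutdown route and the flag name are placeholders, and it relies on the werkzeug.server.shutdown hook that older Werkzeug dev servers expose in the request environ.

import threading

from flask import request

# Hypothetical shutdown flag checked by the restart loop below.
shutdown_requested = threading.Event()

@app.route('/shutdown')
def shutdown():
    """Ask the dev server to stop and tell the outer loop not to restart it."""
    shutdown_requested.set()
    # Older Werkzeug dev servers expose a shutdown hook in the request environ.
    stop = request.environ.get('werkzeug.server.shutdown')
    if stop is not None:
        stop()
    return "shutting down"

if __name__ == '__main__':
    while not shutdown_requested.is_set():
        try:
            app.run(use_reloader=False)
        except:
            pass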
Error 10053 means you're using Windows, so as far as I know, close the command window to exit the program.
It is probably due to the port number being used, which is 54481 judging by your error message; it might be clashing with something else. I also suggest not passing the use_reloader parameter, since your DEBUG setting already controls whether Flask reloads code changes. Can you instead do this:
if __name__ == '__main__':
    app.run(port=5000)
I'm building a simple IoT device (with a Raspberry Pi Zero) that pulls data from a Firebase Realtime Database every second and checks for updates.
However, after a certain time (not sure exactly how much but somewhere between 1 hour and 3 hours) the program exits with a 504 Server Error: Gateway Time-out message.
I couldn't understand exactly why this is happening. I tried to recreate the error by disconnecting the Pi from the internet, but I did not get this message; instead, the program simply paused on a ref.get() line and automatically resumed running once the connection was back.
This device is meant to be always on, so ideally if I get some kind of error, I would like to restart the program / reinitiate the connection / reboot the Pi. Is there a way to achieve something like this?
It seems like the message is actually generated by the firebase_admin package.
Here is the error message:
Traceback (most recent call last):
File "/home/pi/.local/lib/python3.7/site-packages/firebase_admin/db.py", line 944, in request
return super(_Client, self).request(method, url, **kwargs)
File "/home/pi/.local/lib/python3.7/site-packages/firebase_admin/_http_client.py", line 105, in request
resp.raise_for_status()
File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://someFirebaseProject.firebaseio.com/someRef/subSomeRef/payload.json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pi/Desktop/project/main.py", line 94, in <module>
lastUpdate = ref.get()['lastUpdate']
File "/home/pi/.local/lib/python3.7/site-packages/firebase_admin/db.py", line 223, in get
return self._client.body('get', self._add_suffix(), params=params)
File "/home/pi/.local/lib/python3.7/site-packages/firebase_admin/_http_client.py", line 117, in body
resp = self.request(method, url, **kwargs)
File "/home/pi/.local/lib/python3.7/site-packages/firebase_admin/db.py", line 946, in request
raise _Client.handle_rtdb_error(error)
firebase_admin.exceptions.UnknownError: Internal server error.
>>>
To reboot the whole Raspberry Pi, you can just run a shell command:
import os
os.system("sudo reboot")
I've had this problem too and usually feel safer with that, but there are obvious downsides. I'd try resetting the Wi-Fi connection or network interface in a similar way.
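If rebooting on every hiccup feels heavy-handed, another option is to catch the error around the ref.get() call, retry, and only reboot after repeated failures. The following is just a sketch under my own assumptions: the credential path, database URL, reference path, retry counts, and delays are all placeholders, not anything from the original post.

import os
import time

import firebase_admin
from firebase_admin import credentials, db, exceptions

# Placeholder setup: adjust the key path and database URL to your project.
cred = credentials.Certificate("/home/pi/serviceAccountKey.json")
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://someFirebaseProject.firebaseio.com",
})
ref = db.reference("someRef/subSomeRef")

failures = 0
while True:
    try:
        payload = ref.get()  # may raise a FirebaseError (e.g. the 504 Gateway Time-out)
        failures = 0
        # ... inspect payload['lastUpdate'] and react to changes here ...
    except exceptions.FirebaseError as err:
        failures += 1
        print("Firebase request failed ({} in a row): {}".format(failures, err))
        if failures >= 10:
            os.system("sudo reboot")  # last resort after sustained failures
        time.sleep(5)  # back off a little before retrying
    time.sleep(1)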
I am trying to establish a long running Pull subscription to a Google Cloud PubSub topic.
I am using code very similar to the example given in the documentation here, i.e.:
import time

from google.cloud import pubsub_v1


def receive_messages(project, subscription_name):
    """Receives messages from a pull subscription."""
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(
        project, subscription_name)

    def callback(message):
        print('Received message: {}'.format(message))
        message.ack()

    subscriber.subscribe(subscription_path, callback=callback)

    # The subscriber is non-blocking, so we must keep the main thread from
    # exiting to allow it to process messages in the background.
    print('Listening for messages on {}'.format(subscription_path))
    while True:
        time.sleep(60)
The problem is that I'm receiving the following traceback sometimes:
Exception in thread Consumer helper: consume bidirectional stream:
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/path/to/google/cloud/pubsub_v1/subscriber/_consumer.py", line 248, in _blocking_consume
self._policy.on_exception(exc)
File "/path/to/google/cloud/pubsub_v1/subscriber/policy/thread.py", line 135, in on_exception
raise exception
File "/path/to/google/cloud/pubsub_v1/subscriber/_consumer.py", line 234, in _blocking_consume
for response in response_generator:
File "/path/to/grpc/_channel.py", line 348, in __next__
return self._next()
File "/path/to/grpc/_channel.py", line 342, in _next
raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, The service was unable to fulfill your request. Please try again. [code=8a75])>
I saw that this was referenced in another question, but here I am asking how to handle it properly in Python. I have tried to wrap the call in a try/except, but the subscriber runs in a background thread and I am not able to retry in case of that error.
A somewhat hacky approach that is working for me is a custom policy_class. The default one has an on_exception function that ignores DEADLINE_EXCEEDED. You can make a class that inherits the default and also ignores UNAVAILABLE. Mine looks like this:
from google.cloud import pubsub
from google.cloud.pubsub_v1.subscriber.policy import thread
import grpc


class AvailablePolicy(thread.Policy):

    def on_exception(self, exception):
        """The parent ignores DEADLINE_EXCEEDED. Let's also ignore UNAVAILABLE.

        I'm not sure what triggers that error, but if you ignore it, your
        subscriber seems to work just fine. It's probably an intermittent
        thing and it reconnects later if you just give it a chance.
        """
        # If this is UNAVAILABLE, then we want to retry.
        # That entails just returning None.
        unavailable = grpc.StatusCode.UNAVAILABLE
        if getattr(exception, 'code', lambda: None)() == unavailable:
            return
        # For anything else, fallback on super.
        super(AvailablePolicy, self).on_exception(exception)


subscriber = pubsub.SubscriberClient(policy_class=AvailablePolicy)
# Continue to set up as normal.
It works a lot like the original on_exception; it just ignores a different error. If you want, you can add some logging whenever the exception is thrown and verify that everything still works. Future messages will still come through.
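If you do want that logging, a small variant of the same policy might look like the sketch below. This is my own illustration under the same assumptions as the class above; the logger name is arbitrary.

import logging

import grpc
from google.cloud.pubsub_v1.subscriber.policy import thread

logger = logging.getLogger('pubsub.subscriber')


class LoggingAvailablePolicy(thread.Policy):

    def on_exception(self, exception):
        # Log every exception the consumer surfaces, swallow UNAVAILABLE so the
        # subscriber keeps running, and defer everything else to the parent.
        logger.warning('Subscriber raised: %r', exception)
        if getattr(exception, 'code', lambda: None)() == grpc.StatusCode.UNAVAILABLE:
            return
        super(LoggingAvailablePolicy, self).on_exception(exception)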
I have a webservice running in python 2.7.10 / Tornado that uses SSL. This service throws an error when a non-SSL call comes through (http://...).
I don't want my service to be accessible when SSL is not used, but I'd like to handle it in a cleaner fashion.
Here is my main code that works great over SSL:
if __name__ == "__main__":
    tornado.options.parse_command_line()
    # does not work on 2.7.6
    ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ssl_ctx.load_cert_chain("...crt.pem", "...key.pem")
    ssl_ctx.load_verify_locations("...CA.crt.pem")
    http_server = tornado.httpserver.HTTPServer(application, ssl_options=ssl_ctx, decompress_request=True)
    http_server.listen(options.port)
    mainloop = tornado.ioloop.IOLoop.instance()
    print("Main Server started on port XXXX")
    mainloop.start()
and here is the error when I hit that server with http://... instead of https://...:
[E 151027 20:45:57 http1connection:700] Uncaught exception
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/http1connection.py", line 691, in _server_request_loop
ret = yield conn.read_response(request_delegate)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 807, in run
value = future.result()
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 209, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 810, in run
yielded = self.gen.throw(*sys.exc_info())
File "/usr/local/lib/python2.7/dist-packages/tornado/http1connection.py", line 166, in _read_message
quiet_exceptions=iostream.StreamClosedError)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 807, in run
value = future.result()
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 209, in result
raise_exc_info(self._exc_info)
File "<string>", line 3, in raise_exc_info
SSLError: [SSL: HTTP_REQUEST] http request (_ssl.c:590)
Any ideas how I should handle that exception?
And what would the standards-conformant return value be when I catch a non-SSL call to an SSL-only API?
UPDATE
This API runs on a specific port, e.g. https://example.com:1234/. I want to inform a user who is trying to connect without SSL, e.g. http://example.com:1234/, that what they are doing is incorrect by returning an error message or status code. As it is, the uncaught exception returns a 500, which they could interpret as a programming error on my part. Any ideas?
There's an excellent discussion in this Tornado issue about that, where the Tornado maintainer says:
If you have both HTTP and HTTPS in the same tornado process, you must be running two separate HTTPServers (of course such a feature should not be tied to whether SSL is handled at the tornado level, since you could be terminating SSL in a proxy, but since your question stipulated that SSL was enabled in tornado let's focus on this case first). You could simply give the HTTP server a different Application, one that just does this redirect.
So the best solution is to run a second HTTPServer that listens on port 80 and doesn't have the ssl_options parameter set.
UPDATE
A request to https://example.com/some/path will go to port 443, where you must have an HTTPServer configured to handle HTTPS traffic, while a request to http://example.com/some/path will go to port 80, where you must have another instance of HTTPServer without SSL options; that second server is where you return the custom response code you want, and it shouldn't raise any error.
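As a rough sketch of that two-server setup: the handler names, certificate paths, and the choice to redirect rather than return a 400-style error are all my own placeholders, not from the original answer.

import ssl

import tornado.httpserver
import tornado.ioloop
import tornado.web


class HelloHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello over https")


class RedirectToHttpsHandler(tornado.web.RequestHandler):
    def get(self):
        # Send the caller to the HTTPS endpoint instead of failing with a 500.
        host = self.request.host.split(":")[0]
        self.redirect("https://" + host + self.request.uri, permanent=True)


if __name__ == "__main__":
    https_app = tornado.web.Application([(r"/.*", HelloHandler)])
    http_app = tornado.web.Application([(r"/.*", RedirectToHttpsHandler)])

    ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ssl_ctx.load_cert_chain("server.crt.pem", "server.key.pem")  # hypothetical paths

    # One HTTPS server with ssl_options, plus a plain HTTP server for redirects.
    tornado.httpserver.HTTPServer(https_app, ssl_options=ssl_ctx).listen(443)
    tornado.httpserver.HTTPServer(http_app).listen(80)
    tornado.ioloop.IOLoop.instance().start()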
I'm using django-paypal as a payment solution inside my Django application, and I'm trying to implement an IPN handler.
When I receive an IPN message at my IPN-handling URL, the Django server crashes:
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 281, in run
self.finish_response()
File "/usr/local/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 321, in finish_response
self.write(data)
File "/usr/local/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 417, in write
self._write(data)
File "/usr/lib/python2.6/socket.py", line 300, in write
self.flush()
File "/usr/lib/python2.6/socket.py", line 286, in flush
self._sock.sendall(buffer)
error: [Errno 104] Connection reset by peer
My payment application's urls.py looks like this:
urlpatterns = patterns('mysite.payment.views',
    (r'^thank_you/', 'thank_you'),
    (r'^canceled/', 'canceled'),
    (r'^paypal-ipn/', include('paypal.standard.ipn.urls')),
)
To me the error message is pretty useless. It would be great if someone could help me.
I admit I'm an idiot :)
You don't need SSL for this. But what you do need is to run a syncdb before you are able to use it,...
God sometimes it is so easy that you just don't see it.
Can you monitor precisely the packets that PayPal is sending your server using tcpdump or Wireshark? It looks like they may be terminating the connection early, but it's hard to tell much without a longer traceback and/or a packet dump.
Edit:
I had forgotten about the https messages. Paypal probably requires HTTPS for those callbacks. The dev server won't support that, so unfortunately you will probably need to flesh out your server configuration before you can test that functionality.
I have a CherryPy app that I'm controlling over HTTP with a wxPython UI. I want to kill the server when the UI closes, but I don't know how to do that. Right now I'm just doing a sys.exit() on the window close event, but that's resulting in:
Traceback (most recent call last):
File "ui.py", line 67, in exitevent
urllib.urlopen("http://"+server+"/?sigkill=1")
File "c:\python26\lib\urllib.py", line 87, in urlopen
return opener.open(url)
File "c:\python26\lib\urllib.py", line 206, in open
return getattr(self, name)(url)
File "c:\python26\lib\urllib.py", line 354, in open_http
'got a bad status line', None)
IOError: ('http protocol error', 0, 'got a bad status line', None)
Is that because I'm not stopping CherryPy properly?
How are you stopping CherryPy? By sending a SIGKILL to itself? You should send TERM instead at the least, but even better would be to call cherrypy.engine.exit() (version 3.1+). Both techniques will allow CherryPy to shut down more gracefully, which includes allowing any in-process requests (like your "?sigkill=1" request itself) to finish up and close cleanly.
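For illustration, here is a sketch of an exposed shutdown endpoint that calls cherrypy.engine.exit(); the /shutdown path, the short timer delay, and the quickstart setup are my own choices, not anything your app requires.

import threading

import cherrypy


class Root(object):

    @cherrypy.expose
    def index(self):
        return "running"

    @cherrypy.expose
    def shutdown(self):
        # Delay slightly so this response is written before the engine stops,
        # then let CherryPy unwind in-flight requests cleanly (CherryPy 3.1+).
        threading.Timer(0.5, cherrypy.engine.exit).start()
        return "shutting down"


if __name__ == '__main__':
    cherrypy.quickstart(Root())

Your wxPython close handler could then request http://server/shutdown instead of the ?sigkill=1 URL.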
I use os._exit. I also put it on a thread, so that I can serve up a "you've quit the server" page before exiting.
import os
import threading

import cherrypy


class MyApp(object):
    @cherrypy.expose
    def exit(self):
        """
        /exit
        Quits the application
        """
        threading.Timer(1, lambda: os._exit(0)).start()
        return render("exit.html", {})