make python print stacktrace when running on localhost?

I have 3 Python programs. The first two are run with
python program1.py --bind localhost --port 8000
python program2.py --bind localhost --port 8080
The third program is run with
python program3.py http://localhost:8000 http://localhost:8080
The exception is here:
try:
    result = getattr(self.agents[agent], fn)(*args + (self.credits[agent],))
except socket.timeout:
    self.credits[agent] = -1.0  # ensure it is counted as expired
    raise TimeCreditExpired
except (socket.error, xmlrpc.client.Fault) as e:
    logging.error("Agent %d was unable to play step %d." +
                  " Reason: %s", agent, self.step, e)
Output when I press CTRL+C on program1:
program1:
127.0.0.1 - - [10/Nov/2021 23:28:43] "POST /RPC2 HTTP/1.1" 200 -
program3:
2021-11-10 23:28:43,296 -- ERROR: Agent 1 was unable to play step 2. Reason: <Fault 1: "<class 'KeyboardInterrupt'"
But how can I get the stacktrace?
Thanks
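
A minimal sketch of one possible approach (my assumption, not from the original question): replacing logging.error with logging.exception in the handler records the client-side traceback of the caught Fault. Note that the server-side traceback only exists inside program1/program2, so it would have to be captured and logged there.

# Hedged sketch: log the full client-side traceback of a failed XML-RPC
# call. The proxy URL mirrors the question; the method name is illustrative.
import logging
import xmlrpc.client

logging.basicConfig(level=logging.INFO)
proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
try:
    proxy.play_step()  # hypothetical remote method
except (OSError, xmlrpc.client.Fault):
    # logging.exception logs at ERROR level and appends the traceback
    logging.exception("RPC call failed")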

Related

SSHOperator fails to connect to remote host

ssh_hook_france = SSHHook(remote_host="10.33.21.38", username="emedk",
                          password="116322ken", port=2601)
ssh_task_france = SSHOperator(
    ssh_hook=ssh_hook_france,
    task_id="connect_to_receiver_from_sender",
    command="./rmdt_tester -s -S 4G germany.west.top:2601",
    conn_timeout=50
)
ssh_hook_germany = SSHHook(remote_host="10.33.21.40", username="emedk",
                           password="116322keny", port=2601)
ssh_task_germany = SSHOperator(
    ssh_hook=ssh_hook_germany,
    task_id="connect_to_sender_from_receiver",
    command="rmdt_tester -l :2601",
    conn_timeout=50
)
Here are two SSHHooks and two SSHOperators, which fail during execution; I see the following error in the logs:
Failed to connect. Sleeping before retry attempt 1
Failed to connect. Sleeping before retry attempt 2
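
One way to narrow this down (a hedged sketch, not from the thread): SSHHook uses paramiko under the hood, so testing the raw connection with the same host, port and credentials outside Airflow shows whether the problem is network/port reachability or the Airflow setup itself.

# Hedged sketch: verify raw SSH connectivity outside Airflow with paramiko,
# using the same parameters as the failing SSHHook.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect("10.33.21.38", port=2601, username="emedk",
                   password="116322ken", timeout=10)
    print("SSH connection succeeded")
finally:
    client.close()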

Python2 webserver: Do not log request from localhost

The following Python 2 webserver will log every single request, including the ones from localhost (127.0.0.1).
webserver.py
import SimpleHTTPServer, SocketServer, sys
Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
port = 80
httpd = SocketServer.TCPServer(("", port), Handler)
sys.stderr = open('/home/user/log.txt', 'w', 1)
httpd.serve_forever()
As an example, curl localhost (run from the same machine) will produce log entries like the following.
10.0.0.1 - - [10/Jan/2019 00:00:00] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [10/Jan/2019 00:00:01] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [10/Jan/2019 00:01:01] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [10/Jan/2019 00:02:02] "GET / HTTP/1.1" 200 -
My question: would it be possible to make an exception for local requests? I don't want to log any requests from localhost/127.0.0.1.
I'm thinking of something like this, but I'm not really sure how to implement it in Python 2 yet.
webserver_v2_do_not_log_localhost.py
import webserver  # webserver code above, or simply paste everything in here

if SourceIPAddress == '127.0.0.1':
    print('DO NOT log request from localhost/127.0.0.1')
    # script here
else:
    print('Log everything')
    # script here
Any ideas on the script would be highly appreciated. Thanks
Desired Output when performing tail -F log.txt (external IP only, not localhost)
10.0.0.1 - - [10/Jan/2019 00:00:00] "GET / HTTP/1.1" 200 -
You can use the logging.Filter class.
When you declare your logger, do something like this:
import logging

logging.basicConfig(filename='myapp.log', level=logging.INFO)

class Global:
    SourceIPAddress = ''

class IpFilter(logging.Filter):
    def filter(self, rec):  # rec is part of the function signature
        return not Global.SourceIPAddress == '127.0.0.1'

def main():
    log = logging.getLogger('myLogger')
    log.addFilter(IpFilter())
    log.info("log")
    Global.SourceIPAddress = '127.0.0.1'
    log.info("Don't log")

if __name__ == '__main__':
    main()
Of course I implemented it in a very simple way, and you should save the IP in a better place (:
I would also check these links for more info:
https://docs.python.org/3/howto/logging-cookbook.html
https://www.programcreek.com/python/example/3364/logging.Filter
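
An alternative sketch (my assumption, not part of the original answer): with SimpleHTTPRequestHandler you can override log_message and check the peer address directly, which avoids the global variable entirely.

# Hedged sketch (Python 2): suppress log lines for requests from 127.0.0.1
# by overriding log_message on the request handler.
import SimpleHTTPServer
import SocketServer

class QuietLocalHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def log_message(self, format, *args):
        # client_address is a (host, port) tuple for the current request
        if self.client_address[0] == '127.0.0.1':
            return  # skip logging for localhost
        SimpleHTTPServer.SimpleHTTPRequestHandler.log_message(
            self, format, *args)

httpd = SocketServer.TCPServer(("", 80), QuietLocalHandler)
httpd.serve_forever()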

Heroku Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch

I keep getting this error despite trying everything suggested on the internet.
I'm trying to run my Flask application on Heroku.
Below is my Procfile:
web gunicorn -b 127.0.0.1:8000 geeni:app
Below is my geeni.py file.
# imports inferred from the snippet (Resource/api suggest flask_restful)
from flask import Flask, request, jsonify
from flask_restful import Api, Resource
import stripe

app = Flask(__name__)
api = Api(app)

class ChargeUser(Resource):
    def post(self):
        jsonData = request.get_json(force=True)
        stripeid = jsonData['stripeid_customer']
        currency = jsonData['currency']
        amount = jsonData['amount']
        apiKey = jsonData['api_key']
        try:
            stripe.Charge.create(amount=amount, source=stripeid,
                                 currency=currency)
            return jsonify({'Msg': 'Charged!'})
        except:
            raise

api.add_resource(ChargeUser, '/')

if __name__ == '__main__':
    app.run()
I've set up my Heroku push/login and have thoroughly followed tutorials. No luck.
Your Procfile should be web: gunicorn -b 0.0.0.0:$PORT geeni:app. As currently written, Heroku would never see that your application is ready to receive inbound connections:
The 127.0.0.1 interface does not receive any external network traffic; binding to 0.0.0.0 listens on all interfaces.
Heroku passes the required port via the $PORT environment variable, which is assigned dynamically at dyno startup.
Remember: Heroku manages the "routing mesh", which receives the inbound HTTP traffic and then forwards it to your application. It assigns the address and port, which therefore can't be hard-coded in your Procfile.
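
For completeness, a minimal sketch of the corrected setup (assuming the module is named geeni and exposes app, as in the question; the local fallback port 5000 is just Flask's default):

# Procfile
web: gunicorn -b 0.0.0.0:$PORT geeni:app

# geeni.py, when running without gunicorn (local development): read the
# port from the environment so the same code also works on Heroku.
import os

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 5000)))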

uwsgi --reload refuses incoming connections

I am trying to set up a uwsgi-hosted app such that I get graceful reloads with uwsgi --reload, but I am obviously failing. Here is my test uwsgi setup:
[admin2-prod]
http = 127.0.0.1:9090
pyargv = $* --db=prod --base-path=/admin/
max-requests = 3
listen = 1000
http-keepalive = 1
pidfile2 = admin.pid
add-header = Connection: keep-alive
workers = 1
master = true
chdir = .
plugins = python,http,router_static,router_uwsgi,router_http
buffer-size = 8192
pythonpath = admin2
file = admin2/app.py
static-map = /admin/static/=admin2/static/
static-map = /admin/v3/build/=admin2/client/build/
disable-logging = false
http-timeout = 100
(please note that I ran sysctl net.core.somaxconn=1000 beforehand)
And here is my test python script:
import httplib

connection = httplib.HTTPConnection('127.0.0.1', 9090)
connection.connect()

for i in range(0, 1000):
    print 'sending... ', i
    try:
        connection.request('GET', '/x', '', {'Connection': ' keep-alive'})
        response = connection.getresponse()
        d = response.read()
        print '  ', response.status
    except:
        connection = httplib.HTTPConnection('127.0.0.1', 9090)
        connection.connect()
The above client fails during --reload:
sending... 920
Traceback (most recent call last):
  File "./test.py", line 15, in <module>
    connection.connect()
  File "/usr/lib64/python2.7/httplib.py", line 836, in connect
    self.timeout, self.source_address)
  File "/usr/lib64/python2.7/socket.py", line 575, in create_connection
    raise err
socket.error: [Errno 111] Connection refused
From a tcpdump, it looks like uwsgi is indeed accepting the second incoming TCP connection that happens upon the --reload: the client sends the GET and the server ACKs it at the TCP level, but the connection is finally RSTed by the server before the HTTP response is sent back. So, what am I missing that is needed to make the server queue this incoming connection until it is ready to process it, and so get a real graceful reload?
You are managing both the app and the proxy in the same uWSGI instance, so when you reload the stack you are killing the frontend web server too (the one you start with the 'http' option).
You have to split the http router into another uWSGI instance, or use nginx/haproxy or similar. Once you have two different stacks, you can reload the application without closing the socket.
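
A hedged sketch of such a split (the file names and the socket path are illustrative, not from the answer):

; router.ini -- standalone uWSGI http router; stays up across app reloads
[uwsgi]
plugins = http
http = 127.0.0.1:9090
http-to = /tmp/admin2.sock

; app.ini -- the application instance; reload this one freely
[uwsgi]
plugins = python
socket = /tmp/admin2.sock
master = true
workers = 1
pythonpath = admin2
file = admin2/app.py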
Your exception happens when the uwsgi process can't accept connections, obviously... So your client has to wait until the server has restarted; you can use a retry loop with a timeout in the except block to handle this situation properly. Try this:
import httplib
import socket
import time

connection = httplib.HTTPConnection('127.0.0.1', 8000)
# connection moved below... connection.connect()

for i in range(0, 1000):
    print 'sending... ', i
    try:
        connection.request('GET', '/x', '', {'Connection': ' keep-alive'})
        response = connection.getresponse()
        d = response.read()
        print '  ', response.status
    except KeyboardInterrupt:
        break
    except socket.error:
        while True:
            try:
                connection = httplib.HTTPConnection('127.0.0.1', 8000)
                connection.connect()
            except socket.error:
                print 'cant connect, will try again in a second...'
                time.sleep(1)
            else:
                break
before restart:
sending... 220
404
sending... 221
404
sending... 222
404
restarting server:
cant connect, will try again in a second...
cant connect, will try again in a second...
cant connect, will try again in a second...
cant connect, will try again in a second...
server up again:
sending... 223
404
sending... 224
404
sending... 225
404
Update for your comment:
"Obviously, in the real world, you can't rewrite the code of all the http clients that connect to your server. My question is: what can I do to get a graceful reload (no failures) for arbitrary clients."
One universal solution that I think can handle such problems with clients is a simple proxy between the client and the server. With a proxy, you can restart the server independently of the clients (this implies that the proxy is always up).
And in fact this is commonly used: the 502 (Bad Gateway) errors served by a web application's frontend proxy reflect exactly the same situation, where the client receives an error from the proxy while the application server is down! Try nginx, varnish or something similar.
By the way, uWSGI has a built-in "proxy/load-balancer/router" plugin, the uWSGI FastRouter:
For advanced setups uWSGI includes the "fastrouter" plugin, a proxy/load-balancer/router speaking the uwsgi protocol. It is built in by default. You can put it between your webserver and real uWSGI instances to have more control over the routing of HTTP requests to your application servers.
docs here: http://uwsgi-docs.readthedocs.io/en/latest/Fastrouter.html
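
A hedged sketch of the subscription-based setup described in those docs (the addresses and the example.com key are illustrative; check the linked page for the authoritative options):

; fastrouter.ini -- the router; a webserver such as nginx sits in front,
; speaking the uwsgi protocol to 127.0.0.1:3017
[uwsgi]
fastrouter = 127.0.0.1:3017
fastrouter-subscription-server = 127.0.0.1:3032

; instance.ini -- an application instance announcing itself to the router
[uwsgi]
socket = 127.0.0.1:3031
subscribe-to = 127.0.0.1:3032:example.com
module = app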

python flask server port deforced by ntpd

I have a REST server implemented in Python with Flask, and an API endpoint that restarts ntpd. The code, test_flask.py:
from flask import Flask
import subprocess
import logging
import sys

app = Flask(__name__)

def run_shell_cmd(cmd):
    logging.info("run cmd: %s", cmd)
    rc = -1  # default if the call raises OSError
    try:
        rc = subprocess.call(cmd, shell=True)
        if rc != 0:
            logging.error("Fail to run %s , rc: %s" % (cmd, rc))
    except OSError as e:
        logging.error("Fail to run cmd: %s" % e)
    return rc

@app.route("/restart_ntpd")
def restart():
    run_shell_cmd("service ntpd restart")
    return "Success!"

if __name__ == "__main__":
    LOG_FORMAT = '%(asctime)s, %(levelname)s, %(filename)s:%(lineno)d, %(message)s'
    logging.basicConfig(
        format=LOG_FORMAT,
        level=logging.INFO,
        stream=sys.stdout,
    )
    app.run()
Then I operated as follows:
1. Start the Flask server: python test_flask.py
2. curl "http://localhost:5000/restart_ntpd". ntpd restarts and "Success!" is returned.
3. Stop the Flask server with Ctrl+C.
4. Start the Flask server again; it raises an exception:
socket.error: [Errno 98] Address already in use.
Running netstat -ntlp | grep 5000 shows that port 5000 is now occupied by ntpd.
I think ntpd uses port 123 by default. In my scenario, why is port 5000 occupied by ntpd? Is this a problem with Flask?
ntpd is not deliberately listening on TCP port 5000 itself; it inherited it from the environment of the process that started it.
That process is a child of your Flask server process, which opens a socket listening on TCP port 5000.
This socket is inherited by the child process, and since the ntpd process is a long-running one, it continues running with the socket it inherited from you, occupying port 5000.
Check "Python BaseHTTPServer killed, but the port is still occupied?" for how to prevent child processes from inheriting the socket.
Of course, first you have to find a way to customize the way Flask starts the server.
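
As one concrete illustration (a minimal sketch, assuming the subprocess call is the only place the descriptor leaks): passing close_fds=True to subprocess.call keeps the restarted ntpd from inheriting the listening socket; on Python 2 the default is False.

# Hedged sketch: prevent the child process (and the ntpd it restarts)
# from inheriting the Flask listening socket.
import subprocess

def run_shell_cmd(cmd):
    # close_fds=True closes every descriptor except stdin/stdout/stderr
    # in the child, so port 5000 is not kept open by ntpd
    return subprocess.call(cmd, shell=True, close_fds=True)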
