I'm writing a Python/Flask application and would like to add the functionality of reloading the server.
I'm currently running the server with the following option
app.run(debug=True)
which results in the following, each time a code change happens
* Running on http://127.0.0.1:5000/
* Restarting with reloader
In a production environment, however, I would rather not have debug=True set, but instead be able to reload the application server only whenever I need to.
I'm trying to get two things working:
if reload_needed: reload_server(), and
if a user clicks on a "Reload Server" button in the admin panel, the reload_server() function should be called.
However, although the server gets reloaded after code changes, I couldn't find a function that lets me trigger that reload myself.
If possible I would like to use the Flask/Werkzeug internal capabilities. I am aware that I could achieve something like that by adding gunicorn/nginx/apache, etc.
I think I've had the same problem.
So there was a Python/Flask application (XY.py) running on clients. I wrote a build step (TeamCity) that deploys this Python code to the clients. Suppose XY.py is already running on a client: after deploying the new/fixed XY.py, I had to restart it so the changes would apply to the running code.
The problem I had was that after using the fine restart one-liner os.execl(sys.executable, *([sys.executable] + sys.argv)), the port used by the app was still busy/established, so after restarting I couldn't reach it.
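For reference, that one-liner re-executes the current interpreter with the same arguments. A minimal sketch that only builds the argument list (the function name restart_args is illustrative; the actual os.execl call replaces the running process and never returns, so it is left commented out):

```python
import os
import sys

def restart_args():
    # The first element is the interpreter to execute; the list as a whole
    # becomes the new process's argv, replaying the current script and args.
    return [sys.executable] + sys.argv

args = restart_args()
print(args[0] == sys.executable)  # → True
# The actual restart (replaces the current process, never returns):
# os.execl(sys.executable, *args)
```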
This is how I resolved the problem:
I put my app in a separate Process and gave it a queue. To see it more clearly, here is some code:
some_queue = None

@app.route('/restart')
def restart():
    try:
        some_queue.put("something")
        return "Quit"
    except Exception:
        return "Error: restart queue not initialized"

def start_flaskapp(queue):
    global some_queue
    some_queue = queue
    app.run(your_parameters)
Add this to your main:
q = Queue()
p = Process(target=start_flaskapp, args=[q])
p.start()
while True:  # watch the queue; sleep if there is no call, otherwise break
    if q.empty():
        time.sleep(1)
    else:
        break
p.terminate()  # terminate the Flask app, then restart it in a subprocess
args = [sys.executable] + [sys.argv[0]]
subprocess.call(args)
Hope this was clean and short enough, and that it helps!
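The pieces above can be combined into a self-contained sketch. A dummy worker loop stands in for app.run(), and the helper names (start_worker, run_supervisor) are illustrative, not part of the original answer:

```python
import time
from multiprocessing import Process, Queue

def start_worker(queue, started):
    # Stand-in for start_flaskapp(queue); a real app would call app.run() here.
    started.put("started")
    while True:
        time.sleep(0.1)

def run_supervisor():
    # Start the worker, signal a restart via the queue, and terminate it,
    # mirroring the /restart route plus the watching loop in main.
    q, started = Queue(), Queue()
    p = Process(target=start_worker, args=(q, started))
    p.start()
    started.get(timeout=5)   # wait until the worker is up
    q.put("something")       # what the /restart route does
    while q.empty():         # the watching loop from main
        time.sleep(0.1)
    p.terminate()            # terminate the app...
    p.join()
    # ...and here main would re-exec: subprocess.call([sys.executable, sys.argv[0]])
    return p.is_alive()

if __name__ == "__main__":
    print(run_supervisor())  # → False
```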
Add the following to your Python code in order to kill the server:
@app.route('/quit')
def _quit():
    os._exit(0)
When the process is killed, the while loop in the shell script below will start it again.
app_run.sh:
#!/bin/bash
while true
do
hypercorn app_async:app -b 0.0.0.0:5000
sleep 1
done
Let's say this is the primary file I would run in the terminal, i.e. locust -f main.py. Is it possible to also run code after the Locust instance is terminated or when the time limit is reached? Possibly a cleanup script, or sending the generated CSV reports somewhere.
class setup(HttpUser):
    wait_time = between(3, 5)
    host = 'www.example.com'
    tasks = [a, b, c...]
    # do something after time limit reached
There is a test_stop event (https://docs.locust.io/en/stable/extending-locust.html) as well as a quitting event (https://docs.locust.io/en/stable/api.html#locust.event.Events.quitting) that you can use for this purpose.
My Python Flask app runs using nohup, i.e. it is always live. I see that it creates a thread every time a user submits from the page, because flask.run is called with threaded=True. But my problem is that even after processing is over, the threads don't seem to be closed. I'm checking this with ps -eLf | grep userid, where I see many threads still active long after the code execution is over, and more get added on each submit. All threads are removed only when the app itself is restarted.
What is the criteria for a thread to close without restarting the app?
Many posts like these suggest gc.collect, del object, etc.
I have many user-defined classes getting instantiated on submit, and one object refers to another. So:
Is it because the memory is not getting released?
Should I use gc.collect or del the objects?
Python should be clearing these objects once the variable goes out of scope. Is that correct?
app = Flask(__name__)

@app.route('/submit', methods=['GET', 'POST'])
def submit():
    # obj1 = class1()
    # obj2 = class2(obj1)
    # obj3 = class3(obj1)
    # refer objects
    # process data
    # done

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=4000, threaded=True, debug=False)
It looks like the problem was a paramiko object not being closed. Once the SFTPClient or SSHClient is opened, it has to be closed explicitly. I had assumed it would be closed along with my class object (where the paramiko object is defined), but it isn't.
So at the end of my process I call the lines below. Now the threads seem to get closed properly:
if objs.ssh:
    objs.ssh.close()
if objs.sftp:
    objs.t.close()
    objs.sftp.close()
del objs
gc.collect()
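Rather than relying on garbage collection or a manual close at the end, it is safer to close such clients deterministically. A sketch using contextlib.closing with a stand-in class (FakeSFTPClient is illustrative; paramiko's real clients expose the same close() method, which is all closing() relies on):

```python
import contextlib

class FakeSFTPClient:
    """Stand-in for paramiko.SFTPClient; only the close() contract matters."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

# closing() guarantees close() runs even if the body raises an exception.
client = FakeSFTPClient()
with contextlib.closing(client):
    pass  # transfer files here
print(client.closed)  # → True
```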
After adding a runloop code in Python, uWSGI seems to be taking longer to kill.
Setup
Python, Flask
Running on Nginx with uWSGI
Using a psql database
Issue
Stopping uWSGI used to be very quick.
Recently I integrated a background thread that periodically checks the database, every 60 seconds, and makes changes if needed.
This seems to be working just fine, except that now every time I try to kill uWSGI, it takes a long time.
It 'seems' like the longer I leave the server running, the longer it takes to die. Or maybe it always just gets killed after the current 60-second loop ends? (I'm not sure my visual inspection supports this.)
Sounds like a leak?
Here is the code I recently added:
################################
## deploy.ini module .py file ##
################################
from controllers import runloop
from flask import Flask
from flask import request, redirect, Response

app = Flask(__name__)
runloop.startrunloop()

if __name__ == '__main__':
    app.run()  # app.run(debug=True)
################################
##         runloop.py         ##
################################
### initialize run loop ###
## code ref: http://stackoverflow.com/a/22900255/2298002
# "Your additional threads must be initiated from the same app that is called
# by the WSGI server. The example below creates a background thread that
# executes every 5 seconds and manipulates data structures that are also
# available to Flask routed functions."
#####################################################################
import atexit
import logging
import threading

POOL_TIME = 60  # seconds

# variables that are accessible from anywhere
commonDataStruct = {}
# lock to control access to variable
dataLock = threading.Lock()
# thread handler
yourThread = threading.Thread()

def startrunloop():
    logfuncname = 'runloop.startrunloop'
    logging.info(' >> %s >> ENTER ' % logfuncname)

    def interrupt():
        logging.info(' %s >>>> interrupt() ' % logfuncname)
        global yourThread
        yourThread.cancel()

    def loopfunc():
        logging.info(' %s >>> loopfunc() ' % logfuncname)
        global commonDataStruct
        global yourThread
        with dataLock:
            # Do your stuff with commonDataStruct here:
            # a function that performs at most 15 db queries (right now);
            # it will perform many times more db queries in production.
            auto_close_dws()
        # Set the next thread to happen
        yourThread = threading.Timer(POOL_TIME, loopfunc, ())
        yourThread.start()

    def initfunc():
        # Do initialisation stuff here
        logging.info(' %s >> initfunc() ' % logfuncname)
        global yourThread
        # Create your thread
        yourThread = threading.Timer(POOL_TIME, loopfunc, ())
        yourThread.start()

    # Initiate
    initfunc()
    # When you kill Flask (SIGTERM), clear the trigger for the next thread
    atexit.register(interrupt)
Additional info (all flask requests work just fine):
I start server with:
$ nginx
and stop with:
$ nginx -s stop
I start uWSGI with:
$ uwsgi --enable-threads --ini deploy.ini
I stop uWSGI to make python changes with:
ctrl + c (if in the foreground)
Otherwise I stop uWSGI with:
$ killall -s INT uwsgi
Then after making changes to the Python code, I start uWSGI again with:
$ uwsgi --enable-threads --ini deploy.ini
The following is an example of the uWSGI output when I try to kill it:
^CSIGINT/SIGQUIT received...killing workers...
Fri May 6 00:50:39 2016 - worker 1 (pid: 49552) is taking too much time to die...NO MERCY !!!
Fri May 6 00:50:39 2016 - worker 2 (pid: 49553) is taking too much time to die...NO MERCY !!!
Any help or hints are greatly appreciated. Please let me know if I need to be more clear with anything or if I’m missing any details.
I know the question is a bit old, but I had the same problem and Google got me here, so I will answer for anyone who gets here in the same boat.
The problem seems to be caused by the --enable-threads option, we have several applications running with uwsgi and flask and only the one with this option has the problem.
If what you want is for the uwsgi process to die faster, you can add these options:
reload-mercy = *int*
worker-reload-mercy = *int*
They will force uwsgi to quit the process after *int* seconds.
On the other hand, if all you need is to reload uwsgi, try just sending a SIGHUP signal. This will cause uwsgi to reload its children.
POST NOTE: It seems I spoke too soon; using SIGHUP also hangs sometimes. I am using the mercy options to keep the hang from taking too long.
Also, I found the issue report on uwsgi github, if anyone wants to follow it:
https://github.com/unbit/uwsgi/issues/844
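One possible contributor to the slow shutdown in the runloop code above (a hypothesis, not a confirmed fix for the uWSGI hang): threading.Timer threads are non-daemon by default, so the Python interpreter waits for a pending 60-second timer on exit. Marking the timer as a daemon thread lets the process exit without that wait:

```python
import threading

# A pending non-daemon Timer keeps interpreter shutdown waiting for it.
t = threading.Timer(60.0, lambda: None)
t.daemon = True  # daemon threads do not block interpreter exit
t.start()
print(t.daemon)  # → True
t.cancel()       # cancel here so this sketch does not actually wait
```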
Hi, I wrote a Python program that should run unattended. It basically fetches some data via HTTP GET requests in a couple of threads, and fetches data via websockets using the Autobahn framework. Running it for 2 days shows that it has a growing memory demand and eventually stops without any notice.
The documentation says I have to run the reactor as last line of code in the app.
I read that yappi is capable of profiling threaded applications
Here is some pseudo code:

from autobahn.twisted.websocket import WebSocketClientFactory, connectWS

if __name__ == "__main__":
    # set up a thread
    # start the thread
    Consumer.start()
    xfactory = WebSocketClientFactory("wss://url")
    xfactory.protocol = socket
    ## SSL client context: default
    if xfactory.isSecure:
        contextFactory = ssl.ClientContextFactory()
    else:
        contextFactory = None
    connectWS(xfactory, contextFactory)
    reactor.run()
The example from the yappi project site is the following:

import yappi

def a():
    for i in range(10000000):
        pass

yappi.start()
a()
yappi.get_func_stats().print_all()
yappi.get_thread_stats().print_all()
So I could put yappi.start() at the beginning, and yappi.get_func_stats().print_all() plus yappi.get_thread_stats().print_all() after reactor.run(), but since that code is never reached, it will never execute.
So how do I profile a program like that?
Regards
It's possible to use the twistd profilers in the following way:
twistd -n --profile=profiling_results.txt --savestats --profiler=hotshot your_app
hotshot is the default profiler; you are also able to use cProfile.
Or you can run twistd from your Python script by means of:
from twisted.scripts.twistd import run
run()
and add the necessary parameters to the script via sys.argv[1:1] = ["--profile=profiling_results.txt", ...]
Afterwards you can convert the hotshot format to calltree by means of:
hotshot2calltree profiling_results.txt > calltree_profiling
And open generated calltree_profiling file:
kcachegrind calltree_profiling
There is also a project for profiling asynchronous execution time, twisted-theseus.
You can also try PyCharm's thread concurrency visualization tool.
There is a related question on Stack Overflow as well.
You can also run your function via:
reactor.callWhenRunning(your_function, *parameters_list)
or via reactor.addSystemEventTrigger() with an event description and your profiling function call.
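Since the lines after reactor.run() never execute, another option (a stdlib-only sketch, independent of yappi and twistd; the function name work is illustrative) is to enable cProfile up front and dump its stats from a shutdown hook. Here the dump is done inline around a bounded piece of work; in the real program you would register it with atexit.register or reactor.addSystemEventTrigger so it runs when the reactor shuts down:

```python
import cProfile
import io
import pstats

def work():
    # Stand-in for the code normally driven by reactor.run().
    return sum(range(100000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Print the top cumulative entries; 'work' appears in the listing.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print("work" in out.getvalue())  # → True
```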
My understanding of Python's daemon module is that I can have a script that does stuff, spawns a daemon, and continues to do stuff. When the script finishes the daemon should hang around. Yes?
But that's not happening...
I have a python script that uses curses to manage a whole bunch of scripts and functions. It works wonderfully except when I use the script to spawn a daemon.
Now a daemon in this application is represented by a class. For example:
class TestDaemon(DaemonBase):
    def __init__(self, stuff):
        logger.debug("TestDaemon.__init__()")

    def run(self):
        logger.debug("TestDaemon entering loop")
        while True:
            pass

    def cleanup(self):
        super(TestDaemon, self).cleanup()
        logger.debug("TestDaemon.cleanup()")

    def load_config(self):
        super(TestDaemon, self).load_config()
        logger.debug("TestDaemon.load_config()")
And the daemon is launched with a function like:
def launch(*args, **kwargs):
    import daemon
    import lockfile
    import signal
    import os

    oDaemon = TestDaemon(stuff)
    context = daemon.DaemonContext(
        working_directory=os.path.join(os.getcwd(), sAppName),
        umask=0o077,  # chmod mode = 777 minus umask; only current user has access
        pidfile=lockfile.FileLock('/home/sheena/.daemons/{0}__{1}.pid'.format(sAppName, sProcessName)),
    )
    context.signal_map = {
        signal.SIGTERM: oDaemon.cleanup,      # cleanup
        signal.SIGHUP: 'terminate',
        signal.SIGUSR1: oDaemon.load_config,  # reload config
    }
    logger.debug("launching daemon")
    with context:
        oDaemon.run()
    logger.debug("daemon launched")
The program gets as far as logging "launching daemon". After this point, everything exits and the daemon doesn't run.
There is no evidence of exceptions: exceptions are set to be logged, but there are none.
Question: Any ideas why this could be happening?
Stuff I've tried:
If I put oDaemon.run() in a try block, it fails in exactly the same way.
I assumed maybe the context was set up wrong, so I replaced with context with with daemon.DaemonContext(). Same problem.
I replaced:
with context:
    oDaemon.run()
with:
def run():
    while True:
        pass

with context:
    run()
and the main program still exited prematurely, but at least it spawned a daemon, so I assume it doesn't like the way I put stuff in a class...
We don't know anything about this DaemonBase class, but this:
with context:
    oDaemon.run()
is a blocking call, because you have an infinite loop in run(). That is why your program cannot continue further.
Where is the code for starting actual daemon process?
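If the launching script needs to continue past the blocking call, one option is to run the blocking loop in a child process. This is only a sketch of the blocking-vs-non-blocking distinction (the names run and launch_nonblocking are illustrative); note that python-daemon's DaemonContext deliberately detaches via a double fork, which is a different mechanism:

```python
import time
from multiprocessing import Process

def run():
    # Stand-in for oDaemon.run(), which blocks forever.
    while True:
        time.sleep(0.1)

def launch_nonblocking():
    # Start the blocking loop in a child so the caller continues past it.
    p = Process(target=run, daemon=True)
    p.start()
    alive = p.is_alive()  # the loop runs in the child; we are not blocked
    p.terminate()
    p.join()
    return alive

if __name__ == "__main__":
    print(launch_nonblocking())  # → True
```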