I am building a Flask app and need some background processes to run. I decided to go with multiprocessing, but it's producing two processes when running within Flask. Does anyone know why this would happen? I've tested it on OS X and Ubuntu 12.04, with the same results. Here is an example:
import time
import multiprocessing
from flask import Flask

app = Flask(__name__)

backProc = None

def testFun():
    print('Starting')
    while True:
        time.sleep(3)
        print('looping')
        time.sleep(3)
        print('3 Seconds Later')

@app.route('/')
def root():
    return 'Started a background process with PID ' + str(backProc.pid) + " is running: " + str(backProc.is_alive())

@app.route('/kill')
def kill():
    backProc.terminate()
    return 'killed: ' + str(backProc.pid)

@app.route('/kill_all')
def kill_all():
    proc = multiprocessing.active_children()
    for p in proc:
        p.terminate()
    return 'killed all'

@app.route('/active')
def active():
    proc = multiprocessing.active_children()
    arr = []
    for p in proc:
        print(p.pid)
        arr.append(p.pid)
    return str(arr)

@app.route('/start')
def start():
    global backProc
    backProc = multiprocessing.Process(target=testFun, args=(), daemon=True)
    backProc.start()
    return 'started: ' + str(backProc.pid)

if __name__ == '__main__':
    app.run(port=int("7879"))
This is caused by the Flask auto-reload feature, which is used during development to automatically restart the webserver when changes in the code are detected, so that new code is served without requiring a manual restart.
In the guides, the app.run() call is always placed inside an if __name__ == '__main__' block, since the reloader is enabled by default in debug mode. The reloader works by launching a second process that re-runs your code, which is where the duplicate process comes from. When combining Flask with multiprocessing, disable the auto-reloader when you start the server from a function, like so:
def startWebserver():
    app.run(debug=True, use_reloader=False)
Link for reference:
http://blog.davidvassallo.me/2013/10/23/nugget-post-python-flask-framework-and-multiprocessing/
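Putting the workaround together with the example above, a minimal sketch might look like this (the background_task body is only an illustration):

import multiprocessing
import time
from flask import Flask

app = Flask(__name__)

def background_task():
    # placeholder long-running work
    while True:
        time.sleep(3)
        print('looping')

def startWebserver():
    # reloader off so Flask does not spawn a second copy of this script
    app.run(port=7879, debug=True, use_reloader=False)

if __name__ == '__main__':
    worker = multiprocessing.Process(target=background_task, daemon=True)
    worker.start()
    startWebserver()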
Related
I recently added Flask to a sample script with an infinite loop that randomly prints words. However, after adding app.run(host='0.0.0.0'), the code after that line won't execute until I stop Flask.
if __name__ == '__main__':
    app.run(host='0.0.0.0')
    while True:  # won't run until I press stop once (stop Flask) when running directly from IDE
        ...
What I want is to be able to run the while loop while the Flask app is running.
Is there any way to solve this?
You can use before_first_request instead. Functions decorated with @app.before_first_request will run once before the first request to this instance of the application.
The code looks like this:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    print("index is running!")
    return "Hello world"

@app.before_first_request
def before_first_request_func():
    print("This function will run once")

if __name__ == "__main__":
    app.run(host="0.0.0.0")
The code in before_first_request_func will be executed once, before the first request to the server. Therefore, after starting the Flask instance, you can simulate the first request to the server with curl or a similar tool.
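If you would rather trigger that first request from code instead of curl, one possible sketch (assuming the requests package is installed, a Flask version that still provides before_first_request, and Flask's default port 5000):

import threading
import time
import requests
from flask import Flask

app = Flask(__name__)

@app.before_first_request
def before_first_request_func():
    print("This function will run once")

@app.route("/")
def index():
    return "Hello world"

def trigger_first_request():
    # wait briefly for the server to come up, then issue one request
    time.sleep(1)
    requests.get("http://localhost:5000/")

if __name__ == "__main__":
    threading.Thread(target=trigger_first_request, daemon=True).start()
    app.run(host="0.0.0.0")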
You can do what you want by using multithreading:
from flask import Flask
import threading
import time

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "Hello, World!"

def run_app():
    app.run(debug=False, threaded=True)

def while_function():
    i = 0
    while i < 20:
        time.sleep(1)
        print(i)
        i += 1

if __name__ == "__main__":
    first_thread = threading.Thread(target=run_app)
    second_thread = threading.Thread(target=while_function)
    first_thread.start()
    second_thread.start()
Output:
* Serving Flask app "app"
* Environment: production
* Debug mode: off
* Running on [...] (Press CTRL+C to quit)
0
1
2
3
4
5
6
7
8
[...]
The idea is simple:
create two functions, one to run the app and another to execute the while loop,
and then execute each function in a separate thread, making them run in parallel.
You can do this with multiprocessing instead of multithreading too:
The main difference here is that the functions will run in separate processes, each with its own memory space (and possibly on different CPUs).
from flask import Flask
from multiprocessing import Process
import time

# Helper function to easily parallelize multiple functions
def parallelize_functions(*functions):
    processes = []
    for function in functions:
        p = Process(target=function)
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

# The function that will run in parallel with the Flask app
def while_function():
    i = 0
    while i < 20:
        time.sleep(1)
        print(i)
        i += 1

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "Hello, World!"

def run_app():
    app.run(debug=False)

if __name__ == '__main__':
    parallelize_functions(while_function, run_app)
If you want to use the before_first_request approach proposed by @Triet Doan, you will have to pass the while function as an argument to before_first_request like this:
from flask import Flask
import time

app = Flask(__name__)

def while_function(arg):
    i = 0
    while i < 5:
        time.sleep(1)
        print(i)
        i += 1

@app.before_first_request(while_function)
@app.route("/")
def index():
    print("index is running!")
    return "Hello world"

if __name__ == "__main__":
    app.run()
In this setup, the while function is executed first, and when it finishes, your app starts running, but I don't think that is what you were asking for?
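If the intent is for the loop to run alongside the app rather than delay it, a variation (a sketch combining the approaches above, not part of the original answer, and assuming a Flask version that still provides before_first_request) is to start a daemon thread from the hook:

from flask import Flask
import threading
import time

app = Flask(__name__)

def while_function():
    i = 0
    while i < 5:
        time.sleep(1)
        print(i)
        i += 1

@app.before_first_request
def start_background_loop():
    # run the loop in a daemon thread so the first request is not blocked
    threading.Thread(target=while_function, daemon=True).start()

@app.route("/")
def index():
    print("index is running!")
    return "Hello world"

if __name__ == "__main__":
    app.run()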
tl;dr: How can I programmatically execute a Python module (not function) as a separate process from a different Python module?
On my development laptop, I have a 'server' module containing a bottle server. In this module, the if __name__ == '__main__' clause starts the bottle server.
@bt_app.post("/")
def server_post():
    << Generate response to 'http://server.com/' >>

if __name__ == '__main__':
    serve(bt_app, host='localhost', port=8080)
I also have a 'test_server' module containing pytests. In this module, the if __name__ == '__main__' clause runs pytest and displays the results.
def test_something():
    _rtn = some_server_function()
    assert _rtn == desired

if __name__ == '__main__':
    _rtn = pytest.main([__file__])
    print("Pytest returned: ", _rtn)
Currently, I manually run the server module (starting the web server on localhost), then I manually start the pytest module, which issues HTTP requests to the running server module and checks the responses.
Sometimes I forget to start the server module. No big deal but annoying. So I'd like to know if I can programmatically start the server module as a separate process from the pytest module (just as I'm doing manually now) so I don't forget to start it manually.
Thanks
Here is my test case dir tree:
test
├── server.py
└── test_server.py
server.py starts a web server with Flask.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run()
test_server.py makes a request to test it.
import sys
import requests
import subprocess
import time

p = None  # server process

def start_server():
    global p
    sys.path.append('/tmp/test')
    # here you may want to check whether the server is already started,
    # and if so, skip this function
    kwargs = {}  # here you can pass other args as needed
    p = subprocess.Popen(['python', 'server.py'], **kwargs)

def test_function():
    response = requests.get('http://localhost:5000/')
    print('This is the response body: ', response.text)

if __name__ == '__main__':
    start_server()
    time.sleep(3)  # wait for the server to start
    test_function()
    p.kill()
Then you can run python test_server.py to start the server and run the test cases.
PS: subprocess.run() requires Python 3.5+; the Popen call used here also works on older versions.
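The "check whether the server is already started" mentioned in the comment could be done roughly like this (a sketch; the URL and file name match the example above):

import subprocess
import requests

def ensure_server_running(url="http://localhost:5000/"):
    # start server.py only if nothing is answering at the URL yet
    try:
        requests.get(url, timeout=1)
        return None  # already running
    except requests.exceptions.ConnectionError:
        return subprocess.Popen(["python", "server.py"])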
import logging
import threading
import time

def thread_function(name):
    logging.info("Thread %s: starting", name)
    time.sleep(2)
    logging.info("Thread %s: finishing", name)

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    threads = list()
    for index in range(3):
        logging.info("Main : create and start thread %d.", index)
        x = threading.Thread(target=thread_function, args=(index,))
        threads.append(x)
        x.start()

    for index, thread in enumerate(threads):
        logging.info("Main : before joining thread %d.", index)
        thread.join()
        logging.info("Main : thread %d done", index)
With threading you can run multiple tasks concurrently within a single process!
Wim basically answered this question. I looked into the subprocess module. While reading up on it, I stumbled on the os.system function.
In short, subprocess is a highly flexible and full-featured module for running programs. os.system, on the other hand, is much simpler, with far fewer options.
Just running a Python module is simple, so I settled on os.system.
import os

server_path = "python ../src/server.py"
os.system(server_path)
Wim, thanks for the pointer. Had it been a full-fledged answer I would have upvoted it. Redo it as a full-fledged answer and I'll do so.
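One thing to keep in mind: os.system waits for the command to finish, so launching a long-running server this way blocks the caller. If that matters, a non-blocking alternative with subprocess.Popen might look like this (the path is illustrative):

import subprocess

# launch the server without waiting for it to exit; keep the handle to stop it later
server_proc = subprocess.Popen(["python", "../src/server.py"])

# ... run the tests against the server here ...

server_proc.terminate()  # shut the server down when done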
Async to the rescue.
import gevent
from gevent import monkey, spawn, sleep
monkey.patch_all()
from gevent.pywsgi import WSGIServer

@bt_app.post("/")
def server_post():
    << Generate response to 'http://server.com/' >>

def test_something():
    _rtn = some_server_function()
    assert _rtn == desired
    print("Pytest returned: ", _rtn)
    sleep(0)

if __name__ == '__main__':
    spawn(test_something)  # runs async
    server = WSGIServer(("0.0.0.0", 8080), bt_app)
    server.serve_forever()
Getting the specifics out of the way, I'm writing an open source P2P social network over IPFS and Flask -- I know, it's been done. I'm choosing Flask because pyinstaller can put it in an exe file.
I am attempting to update my IPNS every 10 minutes to publish all status updates I've added to the network during those 10 minutes. The cron function of the setup class (in library.py) is where that updater lives. At first, I threaded the cron function from the __init__ of setup. The server hung. Then I moved the threading over to app.before_first_request. The server still hangs.
https://pastebin.com/bXHTuH83 (main.py)
from flask import Flask, jsonify
from library import *

#=========================TO BE DELETED=========================================
def pretty(json):
    json = dumps(loads(json), indent=4, sort_keys=True)
    return json
#===============================================================================

app = Flask(__name__)
GANN = setup()

@app.before_first_request
def cron_job():
    Thread(target=GANN.cron())

@app.route("/")
def home():
    return "Hello World!!!"

if __name__ == "__main__":
    app.run(port="80", debug=True, threaded=True)
https://pastebin.com/W5P8Tpvd (library.py)
from threading import Thread
from time import time, sleep
import urllib.request
from json import loads, dumps

def api(*argv, **kwargs):
    url = "http://127.0.0.1:5001/api/v0/"
    for arg in argv:
        arg = arg.replace(" ", "/")
        if arg[:-1] != "/":
            arg += "/"
        url += arg
    url = url[0:-1]
    if kwargs:
        url += "?"
        for val in kwargs:
            url = url + val + "=" + kwargs[val] + "&"
        url = url[0:-1]
    print(url)
    try:
        with urllib.request.urlopen(url, timeout=300) as response:
            return response.read()
    except:
        return b"""{"ERROR": "CANNOT CONNECT TO IPFS!"}"""

class setup():
    def __init__(self):
        api("files", "mkdir", arg="/GANN", parents="True")
        self.root_hash = ""

    def update_root(self):
        try:
            for entry in loads(api("files", "ls", l="True").decode())["Entries"]:
                if entry["Name"] == "GANN":
                    self.root_hash = entry["Hash"]
        except:
            return """{"ERROR": "CANNOT FIND ROOT DIRECTORY"}"""

    def publish_root(self):
        api("name", "publish", arg=self.root_hash)

    def cron(self):
        while True:
            print("CRON Thread Started!")
            self.update_root()
            self.publish_root()
            sleep(600)
I have searched the web for a couple of days and have yet to find a threading technique that splits off from the main process without stopping the server from taking other requests. I believe I'm on a single-stream connection, as IPFS blocks connections to every other device in my home when it's started. It takes a couple of minutes for the CLI IPNS update to go through, so I set urllib's timeout to 300 seconds.
Well, I think the threading code is not correct.
@app.before_first_request
def cron_job():
    Thread(target=GANN.cron())
Here you created a Thread object. The target argument must be a callable, but you already called your method here, so the right way would be:
Thread(target=GANN.cron)
so that the thread can call the cron function later. Having said that, the Thread must also be started so it will call the target function you gave it. So it should be like:
thread_cron = Thread(target=GANN.cron)
thread_cron.start()
Since you called GANN.cron() directly, the method started executing right there and your app hung!
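Putting both corrections together, the hook could look roughly like this (a sketch; app and GANN are the objects defined in main.py above):

from threading import Thread

@app.before_first_request
def cron_job():
    # pass the method itself (no parentheses) and remember to start the thread;
    # daemon=True is optional, so the thread does not block shutdown
    thread_cron = Thread(target=GANN.cron, daemon=True)
    thread_cron.start()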
I'm very new to Flask so please bear with me.
So I'm working with an app that deals with a function that takes 30 minutes to complete (a burn-in test for a QA suite). Because this takes so long, I'd like the app to show a "loading..." screen so the user isn't just staring at a hanging site for 30 minutes. I did some searching around and found this thread, which I have set up in my own way as follows (only the necessary functions are shown):
from flask import Flask, render_template, request
import subprocess
import tests
from threading import Thread

app = Flask(__name__)

def async_slow_function(test, arguments):
    thr = Thread(target=test, args=arguments)
    thr.start()
    print("Thread starting...")
    return thr

@app.route('/strobe')
def strobe():
    print(async_slow_function(tests.strobe, ""))
    return index()

if __name__ == '__main__':
    app.run(debug=True, threaded=True, host='0.0.0.0')
However, this setup still has the app hanging while the test is running. Even when the test is complete, the app still hangs, which leads me to believe that the thread is still running.
Any ideas?
UPDATE
Just in case anyone in the future has the same issue as I did, below is my updated code that works.
from flask import Flask, render_template, request
import subprocess
import tests
from threading import Thread

app = Flask(__name__)

def async_slow_function(test, argue):
    if argue != None:
        thr = Thread(target=test, args=[argue])
    else:
        thr = Thread(target=test)
    thr.start()
    return thr

@app.route('/strobe')
def strobe():
    async_slow_function(tests.strobe, None)
    return render_template('testing.html')

@app.route('/fade')
def fade():
    async_slow_function(tests.fade, None)
    return render_template('testing.html')

if __name__ == '__main__':
    app.run(threaded=True, host='0.0.0.0')
It had something to do with me returning the index() function. Instead I just rendered my testing template.
This question already has answers here:
Why does running the Flask dev server run itself twice? (7 answers)
I am running into a seemingly bizarre problem that I can't figure out: I am trying to create a single object, but for some reason two are always made.
The two files control a robot I am building.
App.py
A Flask server which receives incoming commands through a socket.io connection with clients.
Hypervisor.py
Robot Controller, takes in user commands, sensor data, current agenda, and feeds commands to the MotionController.
I am trying to create a single instance of the Hypervisor class from within the Flask server app.py; however, two are consistently created.
Below is my code and console output showing the double object creation. Why is this happening!!?!?!?
App.py
#!/usr/bin/env python
from flask import Flask, render_template, session, request, send_from_directory, send_file
from flask_socketio import SocketIO, emit, join_room, leave_room, close_room, rooms, disconnect
import time
import json
import datetime
import logging
import platform
from bColors import bcolors
from RobotSystem.Hypervisor import Hypervisor
from RobotSystem.Services.Utilities.RobotUtils import RobotUtils
async_mode = None
app = Flask(__name__, static_url_path='/static')
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app, async_mode=async_mode)
log = logging.getLogger("werkzeug")
log.setLevel(logging.ERROR)
thread = None
connections = 0
#app.route('/', methods=['GET', 'POST'])
def index():
return render_template('index.html', async_mode=socketio.async_mode)
def background_thread():
while True:
socketio.sleep(1)
#socketio.on('valueUpdate')
def valueUpdateHandler(message):
RobotUtils.ColorPrinter("app.py",'Value update fired ', 'OKBLUE')
quadbot.inputData(message)
data = {}
data['Recieved'] = True
return json.dumps(data)
#socketio.on('connect')
def test_connect():
global connections
connections+=1
print_str = "Client connected. "+ str(connections)+ " current connections"
RobotUtils.ColorPrinter("app.py",print_str, 'OKBLUE')
global thread, quadbotThread
if thread is None:
print "init"
thread = socketio.start_background_task(target=background_thread)
#socketio.on('disconnect')
def test_disconnect():
global connections
connections -= 1
RobotUtils.ColorPrinter("app.py",str( 'Client disconnected. ' +str(connections)+ " current connections" ), 'OKBLUE')
if __name__ == '__main__':
global quadbot
quadbot = Hypervisor()
socketio.run(app, debug=True)
Hypervisor.py
#!/usr/bin/python
from Services import *
import time
import math
import json
import sys
import threading
import os
from Queue import Queue, Empty

class Hypervisor():
    def __init__(self):
        if RobotUtils.LIVE_TESTING:
            self.pwm = PWM()
            self.pwm.setPWMFreq(RobotUtils.FREQUENCY)
        else:
            self.pwm = None

        self.inputQueue = Queue()
        self.agendaThread = threading.Thread(group=None, target=self.updateAgendaLoop, name="agendaThread")
        self.agendaThread.start()

        self.data_file_name = RobotUtils.DATA_FILE

        self.front_left = None
        self.front_right = None
        self.back_left = None
        self.back_right = None

        self.TURN_LEFT = RobotUtils.TURN_LEFT
        self.TURN_RIGHT = RobotUtils.TURN_RIGHT
        self.FORWARD = RobotUtils.FORWARD
        self.BACKWARD = RobotUtils.BACKWARD
        self.STOP = RobotUtils.STOP
        self.AUTONOMOUS = RobotUtils.AUTONOMOUS
        self.INVALID_DATA_ERROR = RobotUtils.INVALID_DATA_ERROR

        self.horizVidMotor = Motor(50, RobotUtils.HORIZONTAL_VID_PIN, RobotUtils.HORIZONTAL_VID_MIN_VAL, RobotUtils.HORIZONTAL_VID_MAX_VAL, 0, "horizontal video motor", self.pwm)
        self.vertVidMotor = Motor(50, RobotUtils.VERTICAL_VID_PIN, RobotUtils.VERTICAL_VID_MIN_VAL, RobotUtils.VERTICAL_VID_MAX_VAL, 0, "vertical video motor", self.pwm)

        self.setup()

        self.motors = [self.front_left, self.front_right, self.back_left, self.back_right, self.horizVidMotor, self.vertVidMotor]
        self.MotionController = MotionController(self.TURN_LEFT, self.TURN_RIGHT, self.FORWARD, self.BACKWARD, self.STOP, self.AUTONOMOUS, self.INVALID_DATA_ERROR,
                                                 self.motors, RobotUtils)

        self.stand()

        RobotUtils.ColorPrinter(self.__class__.__name__, '__init__() finished. Robot Created with id ' + str(id(self)), 'OKBLUE')

    # loads json data and creates Leg objects with add_leg()
    def setup(self):
        with open(self.data_file_name) as data_file:
            data = json.load(data_file)
            constants = data["constants"]
            for i in range(len(data["legs"])):
                self.add_leg(data["legs"][i], constants)

    # reads dictionary values from input, creates a Leg object, and adds it to leg variables
    def add_leg(self, legData, constants):
        leg_name = legData["name"]

        body_pin = legData["motors"]["body"]["pinValue"]
        body_offset = legData["motors"]["body"]["offset"]
        body_center = constants["bodyCenterValue"] + body_offset
        body_min = constants["bodyRange"]["min"]
        body_max = constants["bodyRange"]["max"]

        mid_horiz_value = legData["motors"]["middle"]["horizValue"]
        middle_pin = legData["motors"]["middle"]["pinValue"]
        middle_min = constants["middleRange"]["min"]
        middle_max = constants["middleRange"]["max"]
        middle_offset_to_center = constants["midOffsetFromHoriz"]

        leg_horiz_value = legData["motors"]["leg"]["horizValue"]
        leg_pin = legData["motors"]["leg"]["pinValue"]
        leg_min = constants["legRange"]["min"]
        leg_max = constants["legRange"]["max"]
        leg_offset_to_center = constants["legOffsetFromHoriz"]

        leg = Leg(self.pwm, leg_name, body_pin, body_min, body_max, body_center, mid_horiz_value, middle_pin, middle_min, middle_max, middle_offset_to_center, leg_horiz_value, leg_pin, leg_min, leg_max, leg_offset_to_center)

        if leg_name == "FR":
            self.front_right = leg
        elif leg_name == "FL":
            self.front_left = leg
        elif leg_name == "BL":
            self.back_left = leg
        elif leg_name == "BR":
            self.back_right = leg
        else:
            print "ERROR: LEG CANNOT BE IDENTIFIED"

    # Called by server when a change in user data is detected
    def inputData(self, data):
        self.inputQueue.put(data)

    def updateAgendaLoop(self):
        while True:
            try:
                data = self.inputQueue.get_nowait()
                self.updateAgenda(data)
            except Empty:
                pass
            time.sleep(RobotUtils.AGENDA_UPDATE_SPEED)
        print '\033[94m' + "Robot: QUEUE READING FINISHED" + '\033[0m'
        sys.exit()

    # acts as central coordinator for the robot - reads incoming data + state of the bot and calls methods accordingly
    def updateAgenda(self, data):
        self.MotionController.updateCameras(data)
        nextMove = self.MotionController.NextMove(data)
        if nextMove == self.INVALID_DATA_ERROR:
            print "Fix this"
        else:
            self.MotionController.MakeMove(nextMove)
Console Output
First of all, a little soapboxing:
If you provide an SSCCE (Short, Self-Contained, Correct (Compilable) Example), you are more likely to get a response. Also, by trimming the example down to the minimum required to reproduce the problem, you may very well identify the source of the problem yourself. For example, the following would be an SSCCE for your issue:
Required dependencies:
pip install flask
pip install flask-socketio
Code:
import logging
from flask import Flask
from flask_socketio import SocketIO

logging.basicConfig(level=logging.ERROR)

app = Flask(__name__)
socketio = SocketIO(app)

class Hypervisor():
    def __init__(self):
        print('Hypervisor initialized')

if __name__ == '__main__':
    quadbot = Hypervisor()
    socketio.run(app, debug=True)
Output:
Hypervisor initialized
Hypervisor initialized
Explanation
If you use a debugger, the answer presents itself fairly easily. You can use the debugging tools included in your IDE, or you can always use The Python Debugger from the Python standard library (a.k.a. pdb).
While it's beyond the scope of this answer to provide a complete tutorial on pdb, the method we will use is to break into the debugger by importing pdb and inserting pdb.set_trace() at the point where you want to begin debugging.
Since the issue is with Hypervisor creation, the logical point to insert the debugger is just before the initialization of Hypervisor, like so:
if __name__ == '__main__':
    import pdb
    global quadbot
    pdb.set_trace()  # debugging begins here
    quadbot = Hypervisor()
    socketio.run(app, debug=True)
From this point, when you run your app it will drop into pdb right before the initialization of Hypervisor. Before you do so, though, you will need to know two commands for using pdb.
The first is n for next, which continues execution until the next line of code in the current function.
The second is s for step, which steps into the current line of code, stopping at the first opportunity.
Or as the docs explain it:
The difference between next and step is that step stops inside a called function, while next executes called functions at (nearly) full speed, only stopping at the next line in the current function.
So, armed with this knowledge, run your app and you will be presented with the pdb prompt, which should look something like this:
-> quadbot = Hypervisor()
(Pdb)
The -> shows the current line of code about to be executed. (Pdb) is the prompt awaiting input. At this point let's initialize the Hypervisor by proceeding to the next line of code by typing n and enter. At this point you should see a single Hypervisor has been created.
[4/3/2017 20:02:46 ] Hypervisor: __init__() finished. Robot Created with id 4218654299
And be returned to the pdb prompt for the next line:
-> socketio.run(app, debug=True)
(Pdb)
So since there's only one more line of code left to run, the issue is somewhere in socketio.run. So this time we will step into the current line of code by typing s and enter which will bring you to:
-> def run(self, app, host=None, port=None, **kwargs):
(Pdb)
From this point, keep stepping to the next line of code until you see the second Hypervisor initialization. Looking at the trace you should see something like the following:
-> app.run(host=host, port=port, threaded=True,
(Pdb) n
-> use_reloader=use_reloader, **kwargs)
(Pdb) n
-> quadbot = Hypervisor()
(Pdb) n
[4/3/2017 20:03:52 ] Hypervisor: __init__() finished. Robot Created with id 4367452293
This shows you that right after the app.run executes (it takes two n's to execute since the command spans two lines of code), it returns to the quadbot = Hypervisor() line of code we started at. So a closer look at app.run shows an argument called use_reloader. If you haven't guessed already, looking at the Flask-SocketIO docs tells us:
use_reloader - True to enable the Flask reloader, False to disable it.
With a little more digging we can find the following two nuggets of wisdom in the Flask docs:
debug
The debug flag. Set this to True to enable debugging of the application. In debug mode the debugger will kick in when an unhandled exception occurs and the integrated server will automatically reload the application if changes in the code are detected.
and
run...
Flask will suppress any server error with a generic error page unless it is in debug mode. As such to enable just the interactive debugger without the code reloading, you have to invoke run() with debug=True and use_reloader=False. Setting use_debugger to True without being in debug mode won’t catch any exceptions because there won’t be any to catch.
So armed with this information you can choose to either disable debug, or pass the option use_reloader=False to socketio.run.
TL;DR
You most likely want to disable the Flask reloader by passing use_reloader=False to socketio.run
if __name__ == '__main__':
    global quadbot
    quadbot = Hypervisor()
    socketio.run(app, debug=True, use_reloader=False)