I'm building a web-app to control some hardware connected to a Raspberry Pi.
The goal:
On server startup, run some tasks once and store the results globally (open the COM ports and store them as objects)
On a POST/GET call from a view, start a thread with the serial ports from 1) plus some data (names as strings) from the POST/GET call. This thread runs the game logic for the hardware and runs for around one hour each time
Have the possibility to terminate this thread at any time, both from a view and from within the thread
Have the possibility to get variables from the thread and display them in the web app
At the moment I'm struggling with the general design of this task. I'm new to threading and Django and can't quite tie the two together. My implementation right now only has a simple endless loop which should print debugging info whenever a global variable changes.
On startup of the server, I start a runner script in a new thread. The global var is stored in cfg.py and imported in apps.py and views.py. All three files are in the root directory of the app.
cfg.py
RUNNING = False
views.py
from django.shortcuts import render
from .models import Player
from . import cfg

def index(request):
    player = Player.objects.all()
    if request.method == 'POST':
        try:
            print('got POST request')
            cfg.RUNNING = True
        except Exception as exc:
            print(exc)
    return render(request, 'mygame/index.html', {
        'player': player
    })
apps.py
import os
import threading
import datetime
import time

from django.apps import AppConfig

from . import cfg

class MyGameConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'mygame'

    def ready(self):
        run_once = os.environ.get('CMDLINERUNNER_RUN_ONCE')
        if run_once is not None:
            return
        os.environ['CMDLINERUNNER_RUN_ONCE'] = 'True'

        #### Code to execute on startup
        try:
            bp_thread = threading.Thread(target=runner, args=('John', 'James'), daemon=True)
            bp_thread.start()
        except Exception as exc:
            raise exc

def runner(p1, p2, *args, **kwargs):
    # dummy function running in background
    starttime = datetime.datetime.now()
    stroftime = starttime.isoformat()
    print(f'process started at {stroftime}')
    while True:
        if cfg.RUNNING:
            print('RUNNING = True')
            print('setting back to False for testing')
            cfg.RUNNING = False
        time.sleep(1)
Why can't my index(request) function change the value of the cfg.RUNNING var? I get the debugging print got POST request, but the runner function never sees the changed state of cfg.RUNNING.
I also tried an approach with global RUNNING, without success. If I only had strings to pass, I'd use the model database, but since the serial ports are objects that I can't store in the SQLite3 DB, that doesn't seem to suit my needs. Also, the values should be reset on server restart.
Maybe my whole design is flawed; I'm open to any suggestions to achieve my goal. Thanks a lot in advance!
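One alternative worth considering for this kind of flag (a minimal, self-contained sketch, not the poster's code): a threading.Event makes the cross-thread signal explicit, lets the worker block instead of busy-polling, and mirrors the set/clear semantics of the RUNNING variable. The names RUN_EVENT and runner here are illustrative stand-ins.

```python
import threading

RUN_EVENT = threading.Event()  # shared signal; a view would call set(), the worker waits on it
results = []

def runner(p1, p2):
    # Dummy game loop: wait until signaled, then do one round of work.
    while True:
        if RUN_EVENT.wait(timeout=0.1):  # blocks up to 0.1 s instead of sleeping in a loop
            results.append("round for %s and %s" % (p1, p2))
            RUN_EVENT.clear()  # mirrors cfg.RUNNING = False
            break  # a real game loop would keep going; we stop for the demo

t = threading.Thread(target=runner, args=("John", "James"), daemon=True)
t.start()
RUN_EVENT.set()  # what the POST handler would do instead of cfg.RUNNING = True
t.join(timeout=2)
```

Note that any in-process mechanism, Event or plain global alike, only works if the view and the thread live in the same process; the Django dev server's autoreloader runs two processes, which is one common reason a flag set in a view is never seen by a thread started in ready().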
Getting the specifics out of the way: I'm writing an open-source P2P social network over IPFS and Flask -- I know, it's been done. I'm choosing Flask because PyInstaller can put it in an exe file.
I am attempting to update my IPNS every 10 minutes to publish all status updates I've added to the network during those 10 minutes. The cron method of the setup class (in library.py) is that updater function. At first, I threaded the cron function from __init__ of setup. The server hung. Then I moved the threading call over to app.before_first_request. The server still hangs.
https://pastebin.com/bXHTuH83 (main.py)
from flask import Flask, jsonify
from library import *

#=========================TO BE DELETED=========================================
def pretty(json):
    json = dumps(loads(json), indent=4, sort_keys=True)
    return json
#===============================================================================

app = Flask(__name__)
GANN = setup()

@app.before_first_request
def cron_job():
    Thread(target=GANN.cron())

@app.route("/")
def home():
    return "Hello World!!!"

if __name__ == "__main__":
    app.run(port="80", debug=True, threaded=True)
https://pastebin.com/W5P8Tpvd (library.py)
from threading import Thread
from time import time, sleep
import urllib.request
from json import loads, dumps

def api(*argv, **kwargs):
    url = "http://127.0.0.1:5001/api/v0/"
    for arg in argv:
        arg = arg.replace(" ", "/")
        if arg[:-1] != "/":
            arg += "/"
        url += arg
    url = url[0:-1]
    if kwargs:
        url += "?"
        for val in kwargs:
            url = url + val + "=" + kwargs[val] + "&"
        url = url[0:-1]
    print(url)
    try:
        with urllib.request.urlopen(url, timeout=300) as response:
            return response.read()
    except:
        return b"""{"ERROR": "CANNOT CONNECT TO IPFS!"}"""

class setup():
    def __init__(self):
        api("files", "mkdir", arg="/GANN", parents="True")
        self.root_hash = ""

    def update_root(self):
        try:
            for entry in loads(api("files", "ls", l="True").decode())["Entries"]:
                if entry["Name"] == "GANN":
                    self.root_hash = entry["Hash"]
        except:
            return """{"ERROR": "CANNOT FIND ROOT DIRECTORY"}"""

    def publish_root(self):
        api("name", "publish", arg=self.root_hash)

    def cron(self):
        while True:
            print("CRON Thread Started!")
            self.update_root()
            self.publish_root()
            sleep(600)
I have searched the web for a couple of days and have yet to find a threading technique that will split off from the main process without hanging the server and blocking other requests. I believe I'm on a single-stream connection, as IPFS blocks connections to every other device in my home when it's started. It takes a couple of minutes for the CLI IPNS update to go through, so I set urllib's timeout to 300 seconds.
I think the threading code is not correct.
@app.before_first_request
def cron_job():
    Thread(target=GANN.cron())
Here you created a Thread object. The target argument must be a callable, but you already called your method right here. The right way would be
Thread(target=GANN.cron)
so the thread can call the cron function later. That said, the Thread must also be started, so that it actually calls the target you gave it. So it should be
thread_cron = Thread(target=GANN.cron)
thread_cron.start()
Since you called GANN.cron() yourself, the method started executing immediately in the main thread, and your app hung!
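The difference is easy to demonstrate with a minimal, self-contained sketch (illustrative names only): with target=f() the function runs immediately in the calling thread and Thread receives its return value (None); with target=f the function runs later, in the new thread.

```python
import threading

log = []

def cron():
    # Record which thread actually executed the function.
    log.append("cron ran in " + threading.current_thread().name)

# Wrong: cron() executes right here, in the main thread;
# Thread() then receives its return value (None) as the target.
t_wrong = threading.Thread(target=cron())

# Right: pass the function object itself, then start the thread.
t_right = threading.Thread(target=cron, name="cron-thread")
t_right.start()
t_right.join()
```

The first log entry comes from the main thread (the "wrong" line already ran cron), the second from the worker thread started the correct way.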
I am trying to simulate an environment with VMs and to run an object method in a background thread. My code looks like the following.
hyper_v.py file :
import random
from threading import Thread
from virtual_machine import VirtualMachine

class HyperV(object):
    def __init__(self, hyperv_name):
        self.hyperv_name = hyperv_name
        self.vms_created = {}

    def create_vm(self, vm_name):
        if vm_name not in self.vms_created:
            vm1 = VirtualMachine({'vm_name': vm_name})
            self.vms_created[vm_name] = vm1
            vm1.boot()
        else:
            print('VM:', vm_name, 'already exists')

    def get_vm_stats(self, vm_name):
        print('vm stats of ', vm_name)
        print(self.vms_created[vm_name].get_values())

if __name__ == '__main__':
    hv = HyperV('temp')
    vm_name = 'test-vm'
    hv.create_vm(vm_name)
    print('getting vm stats')
    th2 = Thread(name='vm1_stats', target=hv.get_vm_stats(vm_name))
    th2.start()
virtual_machine.py file in the same directory:
import random, time, uuid, json
from threading import Thread

class VirtualMachine(object):
    def __init__(self, interval=2, *args, **kwargs):
        self.vm_id = str(uuid.uuid4())
        #self.vm_name = kwargs['vm_name']
        self.cpu_percentage = 0
        self.ram_percentage = 0
        self.disk_percentage = 0
        self.interval = interval

    def boot(self):
        print('Bootingup', self.vm_id)
        th = Thread(name='vm1', target=self.update())
        th.daemon = True  # Setting the thread as daemon thread to run in background
        print(th.isDaemon())  # This prints true
        th.start()

    def update(self):
        # This method needs to run in the background simulating an actual vm with changing values.
        i = 0
        while i < 5:  # Added counter for debugging, ideally this would be while(True)
            i += 1
            time.sleep(self.interval)
            print('updating', self.vm_id)
            self.cpu_percentage = round(random.uniform(0, 100), 2)
            self.ram_percentage = round(random.uniform(0, 100), 2)
            self.disk_percentage = round(random.uniform(0, 100), 2)

    def get_values(self):
        return_json = {'cpu_percentage': self.cpu_percentage,
                       'ram_percentage': self.ram_percentage,
                       'disk_percentage': self.disk_percentage}
        return json.dumps(return_json)
The idea is to create a thread that keeps updating the values; on request, we read the values of the VM object by calling vm_obj.get_values(). We would create multiple VM objects to simulate multiple VMs running in parallel, and we need to get the information from a particular VM on request.
The problem I am facing is that the update() function of the VM does not run in the background (even though the thread is set as a daemon thread).
The method call hv.get_vm_stats(vm_name) waits until the completion of vm_object.update() (which is called by vm_object.boot()) and only then prints the stats. I would like to get the stats of the VM on request while keeping vm_object.update() running in the background forever.
Please share your thoughts if I am overlooking anything related to the basics. I tried looking into the issues related to the Python threading library, but I could not come to any conclusion. Any help is greatly appreciated. The next step would be a REST API to call these functions to get the data of any VM, but I am stuck with this problem.
Thanks in advance,
As pointed out by @Klaus D in the comments, my mistake was using parentheses when specifying the target function in the thread definition, which resulted in the function being called right away.
target=self.update() will call the method right away. Remove the () to
hand the method over to the thread without calling it.
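Applying that fix, the background-updater pattern looks roughly like this (a stripped-down, hypothetical sketch of the VirtualMachine idea, with a short interval and a stop event added for the demo; not the original code):

```python
import threading
import time
import random

class VM(object):
    def __init__(self, interval=0.05):
        self.cpu = None                 # last simulated CPU reading
        self.interval = interval
        self._stop = threading.Event()  # lets the demo shut the loop down

    def boot(self):
        # Pass the bound method itself (no parentheses) so the thread calls it.
        t = threading.Thread(target=self.update, daemon=True)
        t.start()

    def update(self):
        # Runs in the background, refreshing the value until stopped.
        while not self._stop.is_set():
            self.cpu = round(random.uniform(0, 100), 2)
            time.sleep(self.interval)

vm = VM()
vm.boot()
time.sleep(0.2)       # the main thread keeps running while the VM updates itself
snapshot = vm.cpu     # read a value produced in the background
vm._stop.set()
```

Because boot() returns immediately, the caller (here the main script, in the original code HyperV) can query the object at any time while update() keeps running.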
This question already has answers here:
Why does running the Flask dev server run itself twice?
(7 answers)
Closed 5 years ago.
I am running into a seemingly bizarre problem that I can't figure out: I am trying to create a single object, but for some reason two are always made.
The two files control a robot I am building.
App.py
A Flask server which receives incoming commands through a Socket.IO connection with clients
Hypervisor.py
Robot controller; takes in user commands, sensor data, and the current agenda, and feeds commands to the MotionController.
I am trying to create a single instance of the Hypervisor class from within the Flask server app.py, however two are consistently created.
Below is my code and console output showing the double object creation. Why is this happening?
App.py
#!/usr/bin/env python
from flask import Flask, render_template, session, request, send_from_directory, send_file
from flask_socketio import SocketIO, emit, join_room, leave_room, close_room, rooms, disconnect
import time
import json
import datetime
import logging
import platform

from bColors import bcolors
from RobotSystem.Hypervisor import Hypervisor
from RobotSystem.Services.Utilities.RobotUtils import RobotUtils

async_mode = None

app = Flask(__name__, static_url_path='/static')
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app, async_mode=async_mode)

log = logging.getLogger("werkzeug")
log.setLevel(logging.ERROR)

thread = None
connections = 0

@app.route('/', methods=['GET', 'POST'])
def index():
    return render_template('index.html', async_mode=socketio.async_mode)

def background_thread():
    while True:
        socketio.sleep(1)

@socketio.on('valueUpdate')
def valueUpdateHandler(message):
    RobotUtils.ColorPrinter("app.py", 'Value update fired ', 'OKBLUE')
    quadbot.inputData(message)
    data = {}
    data['Recieved'] = True
    return json.dumps(data)

@socketio.on('connect')
def test_connect():
    global connections
    connections += 1
    print_str = "Client connected. " + str(connections) + " current connections"
    RobotUtils.ColorPrinter("app.py", print_str, 'OKBLUE')
    global thread, quadbotThread
    if thread is None:
        print "init"
        thread = socketio.start_background_task(target=background_thread)

@socketio.on('disconnect')
def test_disconnect():
    global connections
    connections -= 1
    RobotUtils.ColorPrinter("app.py", str('Client disconnected. ' + str(connections) + " current connections"), 'OKBLUE')

if __name__ == '__main__':
    global quadbot
    quadbot = Hypervisor()
    socketio.run(app, debug=True)
Hypervisor.py
#!/usr/bin/python
from Services import *
import time
import math
import json
import sys
import threading
import os
from Queue import Queue, Empty

class Hypervisor():
    def __init__(self):
        if RobotUtils.LIVE_TESTING:
            self.pwm = PWM()
            self.pwm.setPWMFreq(RobotUtils.FREQUENCY)
        else:
            self.pwm = None

        self.inputQueue = Queue()
        self.agendaThread = threading.Thread(group=None, target=self.updateAgendaLoop, name="agendaThread")
        self.agendaThread.start()

        self.data_file_name = RobotUtils.DATA_FILE

        self.front_left = None
        self.front_right = None
        self.back_left = None
        self.back_right = None

        self.TURN_LEFT = RobotUtils.TURN_LEFT
        self.TURN_RIGHT = RobotUtils.TURN_RIGHT
        self.FORWARD = RobotUtils.FORWARD
        self.BACKWARD = RobotUtils.BACKWARD
        self.STOP = RobotUtils.STOP
        self.AUTONOMOUS = RobotUtils.AUTONOMOUS
        self.INVALID_DATA_ERROR = RobotUtils.INVALID_DATA_ERROR

        self.horizVidMotor = Motor(50, RobotUtils.HORIZONTAL_VID_PIN, RobotUtils.HORIZONTAL_VID_MIN_VAL, RobotUtils.HORIZONTAL_VID_MAX_VAL, 0, "horizontal video motor", self.pwm)
        self.vertVidMotor = Motor(50, RobotUtils.VERTICAL_VID_PIN, RobotUtils.VERTICAL_VID_MIN_VAL, RobotUtils.VERTICAL_VID_MAX_VAL, 0, "vertical video motor", self.pwm)

        self.setup()

        self.motors = [self.front_left, self.front_right, self.back_left, self.back_right, self.horizVidMotor, self.vertVidMotor]
        self.MotionController = MotionController(self.TURN_LEFT, self.TURN_RIGHT, self.FORWARD, self.BACKWARD, self.STOP, self.AUTONOMOUS, self.INVALID_DATA_ERROR,
                                                 self.motors, RobotUtils)

        self.stand()
        RobotUtils.ColorPrinter(self.__class__.__name__, '__init__() finished. Robot Created with id ' + str(id(self)), 'OKBLUE')

    # loads json data and creates Leg objects with add_leg()
    def setup(self):
        with open(self.data_file_name) as data_file:
            data = json.load(data_file)
            constants = data["constants"]
            for i in range(len(data["legs"])):
                self.add_leg(data["legs"][i], constants)

    # reads dictionary values from input, creates a Leg object, and adds it to leg variables
    def add_leg(self, legData, constants):
        leg_name = legData["name"]

        body_pin = legData["motors"]["body"]["pinValue"]
        body_offset = legData["motors"]["body"]["offset"]
        body_center = constants["bodyCenterValue"] + body_offset
        body_min = constants["bodyRange"]["min"]
        body_max = constants["bodyRange"]["max"]

        mid_horiz_value = legData["motors"]["middle"]["horizValue"]
        middle_pin = legData["motors"]["middle"]["pinValue"]
        middle_min = constants["middleRange"]["min"]
        middle_max = constants["middleRange"]["max"]
        middle_offset_to_center = constants["midOffsetFromHoriz"]

        leg_horiz_value = legData["motors"]["leg"]["horizValue"]
        leg_pin = legData["motors"]["leg"]["pinValue"]
        leg_min = constants["legRange"]["min"]
        leg_max = constants["legRange"]["max"]
        leg_offset_to_center = constants["legOffsetFromHoriz"]

        leg = Leg(self.pwm, leg_name, body_pin, body_min, body_max, body_center, mid_horiz_value, middle_pin, middle_min, middle_max, middle_offset_to_center, leg_horiz_value, leg_pin, leg_min, leg_max, leg_offset_to_center)

        if leg_name == "FR":
            self.front_right = leg
        elif leg_name == "FL":
            self.front_left = leg
        elif leg_name == "BL":
            self.back_left = leg
        elif leg_name == "BR":
            self.back_right = leg
        else:
            print "ERROR: LEG CANNOT BE IDENTIFIED"

    # Called by server when a change in user data is detected
    def inputData(self, data):
        self.inputQueue.put(data)

    def updateAgendaLoop(self):
        while True:
            try:
                data = self.inputQueue.get_nowait()
                self.updateAgenda(data)
            except Empty:
                pass
            time.sleep(RobotUtils.AGENDA_UPDATE_SPEED)
        print '\033[94m' + "Robot: QUEUE READING FINISHED" + '\033[0m'
        sys.exit()

    # acts as central coordinator for the robot - reads incoming data + state of the bot and calls methods accordingly
    def updateAgenda(self, data):
        self.MotionController.updateCameras(data)
        nextMove = self.MotionController.NextMove(data)
        if nextMove == self.INVALID_DATA_ERROR:
            print "Fix this"
        else:
            self.MotionController.MakeMove(nextMove)
Console Output
First of all, a little soapboxing:
If you provide an SSCCE (Short, Self-Contained, Correct (Compilable) Example), you are more likely to get a response. Also, by trimming the example down to the minimum required to reproduce the problem, you may very well identify its source yourself. For example, the following would be an SSCCE for your issue:
Required dependencies:
pip install flask
pip install flask-socketio
Code:
import logging

from flask import Flask
from flask_socketio import SocketIO

logging.basicConfig(level=logging.ERROR)

app = Flask(__name__)
socketio = SocketIO(app)

class Hypervisor():
    def __init__(self):
        print('Hypervisor initialized')

if __name__ == '__main__':
    quadbot = Hypervisor()
    socketio.run(app, debug=True)
Output:
Hypervisor initialized
Hypervisor initialized
Explanation
If you use a debugger, the answer presents itself fairly easily. You can use the debugger tools included in your IDE, or you can always use The Python Debugger from the Python standard library (a.k.a. pdb).
While a complete pdb tutorial is beyond the scope of this answer, the method we will use is to break into the debugger by importing pdb and inserting pdb.set_trace() at the point where you want to begin debugging.
Since the issue is with Hypervisor creation the logical point to insert the debugger is just before the initialization of Hypervisor like so:
if __name__ == '__main__':
    import pdb
    global quadbot
    pdb.set_trace()  # debugging begins here
    quadbot = Hypervisor()
    socketio.run(app, debug=True)
From this point, when you run your app, it will drop into pdb right before the initialization of Hypervisor. Before it does, though, you will need to know two commands for using pdb.
The first is n for next, which continues execution until the next line of code in the current function.
The second is s for step, which will step into the current code, stopping at the first opportunity.
Or as the docs explain it:
The difference between next and step is that step stops inside a called function, while next executes called functions at (nearly) full speed, only stopping at the next line in the current function.
So armed with this knowledge run your app, and you will be presented with the pdb prompt which should look something like this:
-> quadbot = Hypervisor()
(Pdb)
The -> shows the current line of code about to be executed. (Pdb) is the prompt awaiting input. At this point, let's initialize the Hypervisor by proceeding to the next line of code: type n and enter. You should see that a single Hypervisor has been created.
[4/3/2017 20:02:46 ] Hypervisor: __init__() finished. Robot Created with id 4218654299
And be returned to the pdb prompt for the next line:
-> socketio.run(app, debug=True)
(Pdb)
So, since there's only one more line of code left to run, the issue is somewhere in socketio.run. This time we will step into the current line of code by typing s and enter, which will bring you to:
-> def run(self, app, host=None, port=None, **kwargs):
(Pdb)
From this point, keep stepping to the next line of code until you see the second Hypervisor initialization. Looking at the trace you should see something like the following:
-> app.run(host=host, port=port, threaded=True,
(Pdb) n
-> use_reloader=use_reloader, **kwargs)
(Pdb) n
-> quadbot = Hypervisor()
(Pdb) n
[4/3/2017 20:03:52 ] Hypervisor: __init__() finished. Robot Created with id 4367452293
This shows that right after app.run executes (it takes two n's since the call spans two lines of code), execution returns to the quadbot = Hypervisor() line we started at. A closer look at app.run shows an argument called use_reloader. If you haven't guessed already, the Flask-SocketIO docs tell us:
use_reloader - True to enable the Flask reloader, False to disable it.
With a little more digging we can find the following two nuggets of wisdom in the Flask docs:
debug
The debug flag. Set this to True to enable debugging of the application. In debug mode the debugger will kick in when an unhandled exception occurs and the integrated server will automatically reload the application if changes in the code are detected.
and
run...
Flask will suppress any server error with a generic error page unless it is in debug mode. As such to enable just the interactive debugger without the code reloading, you have to invoke run() with debug=True and use_reloader=False. Setting use_debugger to True without being in debug mode won’t catch any exceptions because there won’t be any to catch.
So armed with this information you can choose to either disable debug, or pass the option use_reloader=False to socketio.run.
TL;DR
You most likely want to disable the Flask reloader by passing use_reloader=False to socketio.run
if __name__ == '__main__':
    global quadbot
    quadbot = Hypervisor()
    socketio.run(app, debug=True, use_reloader=False)
This is the first time I'm using Celery and, honestly, I'm not sure I'm doing it right. My system has to run on Windows, so I'm using RabbitMQ as the broker.
As a proof of concept, I'm trying to create a single object where one task sets the value and another task reads it; I also want to show the current value of the object when I go to a certain URL. However, I'm having problems sharing the object between all of these.
This is my celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'cesGroundStation.settings')

app = Celery('cesGroundStation')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
The object I'm trying to share is:
class SchedulerQ():
    item = 0

    def setItem(self, item):
        self.item = item

    def getItem(self):
        return self.item
This is my tasks.py
from celery import shared_task
from time import sleep
from scheduler.schedulerQueue import SchedulerQ

schedulerQ = SchedulerQ()

@shared_task()
def SchedulerThread():
    print("Starting Scheduler")
    counter = 0
    while(1):
        counter += 1
        if(counter > 100):
            counter = 0
        schedulerQ.setItem(counter)
        print("In Scheduler thread - " + str(counter))
        sleep(2)
    print("Exiting Scheduler")

@shared_task()
def RotatorsThread():
    print("Starting Rotators")
    while(1):
        item = schedulerQ.getItem()
        print("In Rotators thread - " + str(item))
        sleep(2)
    print("Exiting Rotators")

@shared_task()
def setSchedulerQ(schedulerQueue):
    schedulerQ = schedulerQueue

@shared_task()
def getSchedulerQ():
    return schedulerQ
I'm starting my tasks in my apps.py. I'm not sure if this is the right place, as the tasks don't seem to run until I start the workers in a separate console with celery -A cesGroundStation -l info.
from django.apps import AppConfig
from scheduler.schedulerQueue import SchedulerQ
from scheduler.tasks import SchedulerThread, RotatorsThread, setSchedulerQ, getSchedulerQ

class SchedulerConfig(AppConfig):
    name = 'scheduler'

    def ready(self):
        schedulerQ = SchedulerQ()
        setSchedulerQ.delay(schedulerQ)
        SchedulerThread.delay()
        RotatorsThread.delay()
In my views.py I have this:
def schedulerQ():
    queue = getSchedulerQ.delay()
    return HttpResponse("Your list: " + queue)
The Django app runs without errors, however my output from celery -A cesGroundStation -l info is this: Celery command output
First, it seems to start multiple SchedulerThread tasks; secondly, the SchedulerQ object isn't being passed to the Rotators, as they never read the updated value.
And if I go to the URL that shows the views.schedulerQ view, I get this error:
Django views error
I have very, very little experience with Python, Django, and web development in general, so I have no idea where to start with that last error. Solutions suggest using Redis to pass the object to the views, but I don't know how I'd do that using RabbitMQ. Later on, the schedulerQ object will implement a queue, and the scheduler and rotators will act more as a producer/consumer pair, with the view showing the contents of the queue, so I believe using the database might be too resource-intensive. How can I share this object across all tasks, and is this even the right approach?
The right approach is to use a persistence layer, such as a database or a result backend, to store the information you want to share between tasks (in this example, what you are currently putting in your class).
Celery operates on a distributed message-passing paradigm. A good way to distill that idea for this example: your module is executed independently every time a task is dispatched. Whenever a task is dispatched to Celery, you must assume it is running in a separate interpreter, loaded independently of other tasks. That SchedulerQ class is instantiated anew each time.
You can share information between tasks in the ways described in the docs linked previously; some best-practice tips also discuss data-persistence concerns.
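To make the "persistence layer" idea concrete, here is a minimal, hypothetical sketch using SQLite as a stand-in for whatever backend you pick (Redis, the Django cache, or a Celery result backend). The task names mirror the ones in the question but are otherwise illustrative; the key point is that each "task" opens its own connection, just as real Celery tasks run in separate worker processes with no shared Python objects.

```python
import os
import sqlite3
import tempfile

# Stand-in shared store; in production this would be your broker-adjacent
# backend, not a temp file.
DB_PATH = os.path.join(tempfile.mkdtemp(), "shared.db")

def _conn():
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v INTEGER)")
    return conn

def scheduler_task(value):
    # What SchedulerThread would do: persist the counter instead of
    # mutating an in-memory SchedulerQ instance.
    with _conn() as conn:
        conn.execute("INSERT OR REPLACE INTO kv VALUES ('item', ?)", (value,))

def rotators_task():
    # What RotatorsThread would do: read the persisted value back.
    with _conn() as conn:
        row = conn.execute("SELECT v FROM kv WHERE k = 'item'").fetchone()
    return row[0] if row else 0

scheduler_task(42)
shared = rotators_task()
```

Because both functions go through the store rather than a module-level object, it no longer matters that each task runs in its own interpreter.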
I wanted to add a filter to the Python logger in my Django project; the filter has a while loop in it (there to wait and allow the user to provide a response). However, when I try to implement my filter, the server seems to break immediately and my view goes into an infinite loop (which doesn't happen when I test the filter in a terminal window).
My code I have looks like this:
import logging, time, os

# in my Utils.py file:
def ErrorReport(msg, notify=False, warnOnly=False):
    logger.exception("ErrorReport:%s" % msg, exc_info=True)
    raise RuntimeError('Aborting execution. Terrible things might happen. Check logs for more information')

# in my views.py file:
class ErrorFilter(logging.Filter):
    def __init__(self, level):
        self.level = level
        print level

    def filter(self, record):
        if record.levelno == self.level:
            os.environ["ERROR_FLAG"] = "True"
            timeout = time.time() + 60*1  # set the timeout to 1 minute
            while True:
                print "waiting..."
                keep_going = os.environ.get("CONTINUE_FLAG")
                # wait for user to respond
                if keep_going == "False" or time.time() > timeout:
                    Utils.ErrorReport("There's a problem, quitting current process.")
                if keep_going == "True":
                    break

logger = Utils.getLogger()  # I format the logger in another python script called Utils
logger.addFilter(ErrorFilter(logging.ERROR))

def home(request):
    context_dict = {'boldmessage': "--> To view logs, please turn off any popup blockers <--", 'version': "1.1.6"}
    logger.error("faking an error to test new filter")
    return render(request, 'InterfaceApp/home.html', context_dict)
I've tested a simpler version of this filter in a terminal window and didn't have any problems with it getting stuck in an infinite loop. When I try to open the home page of my Django site, it looks like it's loading forever, and in the developer tools window I can see that I've lost the connection to my server. In the terminal window where I'm running the server, I get the waiting... print statement forever, even though I've set and tested the timeout on the while loop.
Is there a fundamental problem with running while loops in a Django project, or have I missed something?
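For context on why the page hangs (a minimal, self-contained sketch, not the poster's code): logging filters are not run in a background thread. They execute inline, in whatever thread made the logging call, so a loop inside filter() blocks that thread until the loop exits; in Django that is the thread handling the HTTP request. The SlowFilter name and the 0.2-second sleep here are illustrative stand-ins for the busy-wait loop.

```python
import logging
import time

events = []

class SlowFilter(logging.Filter):
    def filter(self, record):
        # This body runs synchronously inside logger.error() below.
        events.append("filter start")
        time.sleep(0.2)  # stand-in for the user-response busy-wait
        events.append("filter end")
        return False     # drop the record so nothing is emitted

demo_logger = logging.getLogger("filter-demo")
demo_logger.addFilter(SlowFilter())

start = time.monotonic()
demo_logger.error("test")  # does not return until the filter returns
elapsed = time.monotonic() - start
events.append("after log call")
```

The logging call takes at least as long as the filter body, and everything after it (in Django, rendering and returning the response) only happens once the filter finishes.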