I need to read a CSV file just after starting the server. It can't be done in any view because the data needs to be preloaded before any view executes, so I need to do it immediately after running "manage.py runserver". Is there a file where I can put code that needs to run first?
Code put in the settings.py file may run when the Django application starts, as #salman-arshad suggested, but it is not the best way of doing it. It could be problematic or even dangerous depending on the context of what you are running.
The first problem is that the code will run twice when the application starts. Django loads the settings.py file more than once during startup and while running; just put print('Hello world') at the end of settings.py and you will see it printed twice, which means the code ran twice. Secondly, settings.py is not meant for running arbitrary code; it is dedicated to your project settings. Thirdly, if you try to import anything from within the application in settings.py and use it (for instance a model), it will cause errors, because Django's internal app registry is not ready yet.
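To see the issue concretely, here is a minimal illustration (app_name and MyModel are placeholders) of what goes wrong if you try it in settings.py:

# settings.py -- a sketch of the problem, not something to keep:
print('Hello world')  # printed twice when you run manage.py runserver

# Importing application code here fails because the app registry
# is not ready yet while settings are being loaded:
# from app_name.models import MyModel
# -> django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.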
So the best place for running this type of code is the ready() hook of the AppConfig class. Every Django application has an apps.py file that defines a configuration class, and you can override the ready() method in it. This method runs only once when you start the application. Say you have an app named app_name:
from django.apps import AppConfig

class AppNameConfig(AppConfig):
    name = 'app_name'

    def ready(self):
        # Write your startup code here. It is safe to import
        # application code at this point, e.g.:
        # from app_name.models import MyModel
        pass
Then put the following line in that app's __init__.py file:
default_app_config = 'app_name.apps.AppNameConfig'
Now, this code will run at every startup without problems.
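For the original use case of preloading a CSV file, a minimal sketch could look like this; the file path and the module-level PRELOADED_ROWS holder are illustrative assumptions, not part of the original answer:

# app_name/apps.py
import csv
from django.apps import AppConfig

PRELOADED_ROWS = []  # other modules can import this after startup

class AppNameConfig(AppConfig):
    name = 'app_name'

    def ready(self):
        # Runs once at startup; load the CSV into memory here.
        with open('data/my_data.csv', newline='') as f:
            PRELOADED_ROWS.extend(csv.DictReader(f))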
Just add that script to the settings.py file, because settings.py is among the files that are executed before views.py.
I have the following code at the end of my Django settings:
if not TESTING:
    # Don't use local settings for tests, so that tests are always reproducible.
    try:
        from .local_settings import *
    except ImportError:
        pass
local_settings.py contains all the external dependencies URLs used by my Django application, such as database server URL, email server URL and external APIs URLs. Currently the only external dependency my test suite uses is a local test database; everything else is mocked.
However, now I'd like to add some tests that validate responses from the external APIs I use, so that I can detect quickly when an external API changes without prior notice. I'd like to add a --external-deps command line argument to "./manage.py test" and only run tests that depend on external APIs if this flag is enabled.
I know that I can process arguments passed to that command by overriding the add_arguments() method of the DiscoverRunner class, as described in Django manage.py: Is it possible to pass command line argument (for unit testing), but my conditional Django settings loading runs before that, so the following won't work:
if not TESTING or TEST_EXTERNAL_DEPS:
    # Don't use local settings for tests, so that tests are always reproducible.
    try:
        from .local_settings import *
    except ImportError:
        pass
Is there a way to achieve what I want?
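For reference, the add_arguments() override mentioned in the question looks roughly like the sketch below; the class name and attribute are illustrative, and, as the question points out, settings have already been loaded by the time the runner receives the flag:

# myproject/test_runner.py
from django.test.runner import DiscoverRunner

class ExternalDepsRunner(DiscoverRunner):
    @classmethod
    def add_arguments(cls, parser):
        super().add_arguments(parser)
        parser.add_argument('--external-deps', action='store_true',
                            help='Also run tests that depend on external APIs.')

    def __init__(self, external_deps=False, **kwargs):
        super().__init__(**kwargs)
        # Settings were already imported before this point.
        self.external_deps = external_deps

It would be wired up with TEST_RUNNER = 'myproject.test_runner.ExternalDepsRunner' in settings.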
I'm new to Python, so I believe the solution might be a quick one. I've spent hours but couldn't make it work.
I need to access app outside the main class.
Package structure below:
app/
app.py
another_class.py
In app.py:
app = Flask(__name__)
In another_class.py:
from flask import current_app as app
app.config['test_key']
Of course, I receive this error:
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context(). See the
documentation for more information.
I've tried running it in a block of
with app.app_context:
but it didn't seem to work.
What am I doing wrong?
Your problem is right here.
I need to access app outside the main class
And you are trying to solve it the wrong way. current_app is useful for code that already runs inside an application context; a typical use case is exercising routes offline (for example in tests), not importing the app from another module.
What you want to do is have a file that will "manage" your application, such as manage.py, and another file, app.py, that contains the configuration of your application.
In your manage.py file, you import app in order to run it. If you then need access to your app object elsewhere, you can import app from that other file as well. The app object you instantiate in app.py acts like a shared reference: every other file importing app sees the changes made to that object anywhere else.
Your tree should look like this.
your_api/
app.py # Instantiate the app object.
manage.py # Import the app.py package to run it.
another_file.py # If this file imports app.py and modifies the app object
# inside, the changes will also affect the object imported by manage.py.
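A minimal sketch of that layout, with file names matching the tree above; the config key and print call are illustrative:

# app.py -- instantiate the app object.
from flask import Flask

app = Flask(__name__)
app.config['test_key'] = 'some value'

# another_file.py -- import the same object and use or modify it.
from app import app

print(app.config['test_key'])  # no application context needed

# manage.py -- import the app to run it.
from app import app
import another_file  # its changes affect the same app object

if __name__ == '__main__':
    app.run()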
I have a functional Django app that has many Google Text-To-Speech API calls and database reads/writes in my view. When testing locally it takes about 3 seconds to load a page, but when I deploy the app live to Heroku it takes about 15 seconds to load the webpage. So I am trying to reduce load time.
I came across this article: https://devcenter.heroku.com/articles/python-rq that suggests I should use background tasks by queueing jobs to workers using an RQ (Redis Queue) library. I followed their steps and included their worker.py file in the same directory as my manage.py file (not sure if that's the right place to put it). I wanted to test it out locally with a dummy function and view to see if it would run without errors.
# views.py
from django.shortcuts import render
from rq import Queue
from worker import conn

def dummy(foo):
    return 2

def my_view(request):
    q = Queue(connection=conn)
    for i in range(10):
        dummy_foo = q.enqueue(dummy, "howdy")
    return render(request, 'dummy.html', {})
In separate terminals I run:
$ python worker.py
$ python manage.py runserver
But when loading the webpage I received many "Apps aren't loaded yet." error messages in the python worker.py terminal. I haven't tried to deploy to Heroku yet, but I'm wondering why I am getting this error message locally.
Better late than never.
django-rq requires Django 2.0; unfortunately, for our project there is no plan to upgrade to the latest version.
So if you are in the same situation, you can still use plain RQ; you just need to add the following two lines in worker.py (worker_django_1_11):
import django
django.setup()
and pass the worker class like this:
DJANGO_SETTINGS_MODULE=YOURPROJECT.settings rq worker --worker-class='worker_django_1_11.Worker'
You didn't post the code of worker.py, but I'd wager it does not properly initialize Django. Take a look at the contents of manage.py to see an example. So, if worker.py tries to instantiate (or even import) any models, views, etc, you'll get that kind of error. Django needs to resolve settings.py (among other things), then use that to look up database settings, resolve models/relationships, etc.
The simplest path is to use django-rq, a simple library that integrates RQ and Django to handle all of this. Your worker.py essentially just becomes python manage.py rqworker.
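For reference, a minimal django-rq setup looks roughly like this sketch; the Redis host/port/db values are illustrative:

# settings.py
INSTALLED_APPS = [
    # ... your existing apps ...
    'django_rq',
]

RQ_QUEUES = {
    'default': {
        'HOST': 'localhost',
        'PORT': 6379,
        'DB': 0,
    },
}

In the view you would then replace the hand-rolled Queue(connection=conn) with django_rq.enqueue(dummy, "howdy"), and start the worker with python manage.py rqworker default instead of python worker.py.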
I want memcached to be flushed on every restart/reload of the Django server. I use CherryPy for production and the built-in server for development.
I would add this to settings.py, right after CACHES:
from django.core.cache import cache
cache.clear()
but it makes a recursive import:
Error: Can't find the file 'settings.py' in the directory containing 'manage.py'. It appears you've customized things.
You'll have to run django-admin.py, passing it your settings module.
(If the file settings.py does indeed exist, it's causing an ImportError somehow.)
make: *** [server] Error 1
Any other suggestions? Thanks.
It's bad practice to put code in settings.py other than assignments. It's better suited as a management command:
from django.core.management.base import BaseCommand
from django.core.cache import cache

class Command(BaseCommand):
    def handle(self, *args, **kwargs):
        cache.clear()
        self.stdout.write('Cleared cache\n')
You can add this to your project by putting it in someapp/management/commands. For instance, you could create a new app called utils, add it to your INSTALLED_APPS, and the directory structure would look like this:
utils
├── __init__.py
└── management
├── __init__.py
└── commands
├── __init__.py
└── clearcache.py
You can now clear cache by doing ./manage.py clearcache. If you want to run clearcache every time you runserver you can just write a shell alias to do that:
alias runserver='./manage.py clearcache && ./manage.py runserver'
Alternatively, I think you can write it as a stand-alone script and configure the settings it requires by hand:
from django.conf import settings

# obviously change CACHES to your settings
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'unique-snowflake'
    }
}
settings.configure(CACHES=CACHES)  # include any other settings you might need

from django.core.cache import cache
cache.clear()
Writing your stand-alone script like this will prevent circular imports and allow you to import it from your settings.py. However, there is no guarantee that settings.py will be imported only once, so in general I'd avoid this. It would be nice if the signal framework could fire an event once every time the app is started, after settings are loaded, for cases like this.
Django Extensions lets you wipe cache via
manage.py clear_cache
More info and many further commands are in their docs.
You typically only want to invalidate your caches when the code changes in a way that requires a new cache, not on every restart.
This is best handled with Django's cache VERSION setting (the VERSION key inside each entry of settings.CACHES): increase that number every time you change code that changes the format of the cached data.
That way, on a deploy, you automatically get a fresh cache when you deploy new code, but keep the cache if your code is cache-compatible with the previous code.
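For reference, VERSION lives inside each backend's entry in CACHES; a minimal sketch (the backend and location values are illustrative):

# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
        'VERSION': 2,  # bump this when the cached data format changes
    }
}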
How about this? Define a boolean value in settings.py, for example CLEAR_CACHE_ON_RESTART = True, and then check somewhere else whether it is True. If it is, clear the cache and set it to False. This code can be placed in any view (such as a main view) and probably even in manage.py or urls.py (although I haven't checked this and it doesn't look too good). Give it a try! A sketch of the idea is shown below.
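A minimal sketch of that idea, assuming the check is placed in urls.py (the flag name comes from the answer above; everything else is illustrative):

# urls.py
from django.conf import settings
from django.core.cache import cache

if getattr(settings, 'CLEAR_CACHE_ON_RESTART', False):
    cache.clear()
    settings.CLEAR_CACHE_ON_RESTART = False  # avoid clearing again in this process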
If you have multiple cache backends, django.core.cache.cache.clear() will only clear the default cache. To make sure your clear_cache command clears cache from all your backends, you should use the following command:
from django.conf import settings
from django.core.cache import caches
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Clear cache"

    def handle(self, **options):
        for k in settings.CACHES.keys():
            caches[k].clear()
            self.stdout.write("Cleared cache '{}'.\n".format(k))
I recently configured my app to use the new AppStats feature of GAE. However, the extremely verbose logging from AppStats is annoying while I'm debugging, and I'd like to disable it during debugging and turn it back on later. Surely there is a single line I can add to or modify in a config file that will let me do this.
See the configuring appstats docs: configuration is performed by creating your own appengine_config.py in your app's root directory. The best documentation of what you can do in that config file is the sample one supplied with your SDK, which you can also look at here. To disable stats, if you're using Django, just comment out the line
google.appengine.ext.appstats.recording.AppStatsDjangoMiddleware
in your Django settings.py file; if you're not using Django, find the function in your appengine_config.py file that reads
def webapp_add_wsgi_middleware(app):
    from google.appengine.ext.appstats import recording
    app = recording.appstats_wsgi_middleware(app)
    return app
just comment out the first two lines of the body, so it reads instead
def webapp_add_wsgi_middleware(app):
    # from google.appengine.ext.appstats import recording
    # app = recording.appstats_wsgi_middleware(app)
    return app
If you insist on it being a single-line change, you can avoid commenting out the from statement -- per se, it's innocuous, though it may microscopically slow you down (which is why I'd comment it out even though it's innocuous ;-).
I know this is old, but how about this:
Add a config.py where you define the DEBUG flag (or if you have it defined elsewhere, even better). And then:
from config import DEBUG

def webapp_add_wsgi_middleware(app):
    if not DEBUG:
        from google.appengine.ext.appstats import recording
        app = recording.appstats_wsgi_middleware(app)
    return app
EDIT: The advantage of this method is that you can use the same debug flag elsewhere in your app.