Is it allowed to group custom Django commands to separate folders inside the same Django app?
I have a lot of them and wanted to group them logically by purpose. Created folders but Django can't find them.
Maybe I'm trying to run them wrong. Tried:
python manage.py process_A_related_data
the same plus imported all commands in __init__.py
python manage.py folderA process_A_related_data
python manage.py folderA.process_A_related_data
python manage.py folderA/process_A_related_data
Got the following error:
Unknown command: 'folderA/process_A_related_data'
Type 'manage.py help' for usage.
I think you can create a basic custom command which will run other commands from the relevant folders. Here is an approach you can take:
First make a folder structure like this:
management/
    commands/
        folder_a/
            process_A_related_data.py
        folder_b/
            process_B_related_data.py
        process_data.py
Then inside process_data.py, update the command like this:
from django.core import management
from django.core.management.base import BaseCommand
import importlib


class Command(BaseCommand):
    help = 'Folder Process Commands'

    def add_arguments(self, parser):
        parser.add_argument('-u', '--use', type=str, nargs='?',
                            default='folder_a.process_A_related_data')

    def handle(self, *args, **options):
        try:
            # make the dotted path relative so importlib resolves it
            # against the commands package
            folder_file_module = options['use'] if options['use'].startswith('.') else '.' + options['use']
            command = importlib.import_module(folder_file_module, package='your_app.management.commands')
            management.call_command(command.Command())
        except ModuleNotFoundError as e:
            self.stderr.write(f"No relevant folder found: {e.name}")
Here I am using the call_command method to call the other management commands.
Then run commands like this:
python manage.py process_data --use folder_a.process_A_related_data
Finally, if you want to run commands like python manage.py folder_a.process_A_related_data, you will probably need to change manage.py itself, like this:
import re
...
try:
    from django.core.management import execute_from_command_line
except ImportError as exc:
    raise ImportError(
        "Couldn't import Django. Are you sure it's installed and "
        "available on your PYTHONPATH environment variable? Did you "
        "forget to activate a virtual environment?"
    ) from exc

if re.search('folder_[a-z].*', sys.argv[-1]):
    new_arguments = sys.argv[:-1] + ['process_data', '--use', sys.argv[-1]]
    execute_from_command_line(new_arguments)
else:
    execute_from_command_line(sys.argv)
You should be able to partition the code by using mixins (I have not tried this in this context, though).
A standard management command looks like
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = 'FIXME A helpful comment goes here'

    def add_arguments(self, parser):
        parser.add_argument('name', ...)
        # more argument definitions

    def handle(self, *args, **options):
        # do stuff
Which can probably be replaced by a "stub" in app/management/commands:
from wherever.commands import FooCommandMixin
from django.core.management.base import BaseCommand


class Command(FooCommandMixin, BaseCommand):
    # autogenerated -- do not put any code in here!
    pass
and in wherever/commands
class FooCommandMixin(object):
    help = 'FIXME A helpful comment goes here'

    def add_arguments(self, parser):
        parser.add_argument('name', ...)
        # more argument definitions

    def handle(self, *args, **options):
        # do the work
It would not be hard to write a script to go through a list of file names or paths (using glob.glob) using re.findall to identify appropriate class declarations, and to (re)generate a matching stub for each in the app's management/commands folder.
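For instance, a rough, untested sketch of such a generator (the paths and the *CommandMixin naming convention are assumptions, not part of the original answer):

import glob
import os
import re

# hypothetical locations -- adjust to your project layout
MIXIN_PACKAGE = 'wherever.commands'
MIXIN_DIR = 'wherever/commands'
STUB_DIR = 'app/management/commands'

STUB_TEMPLATE = '''from {package} import {mixin}
from django.core.management.base import BaseCommand


class Command({mixin}, BaseCommand):
    # autogenerated -- do not put any code in here!
    pass
'''

for path in glob.glob(os.path.join(MIXIN_DIR, '*.py')):
    source = open(path).read()
    # find class declarations named like FooCommandMixin
    for mixin in re.findall(r'class\s+(\w+CommandMixin)\b', source):
        # e.g. ProcessACommandMixin -> process_a.py (the naming scheme is up to you)
        stub_name = re.sub(r'(?<!^)(?=[A-Z])', '_', mixin.replace('CommandMixin', '')).lower()
        with open(os.path.join(STUB_DIR, stub_name + '.py'), 'w') as f:
            f.write(STUB_TEMPLATE.format(package=MIXIN_PACKAGE, mixin=mixin))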
Also/instead, Python's argparse allows for the definition of sub-commands. So you should be able to define a command that works like
./manage.py foo bar --aa --bb something --cc and
./manage.py foo baz --bazzy a b c
where the syntax after foo is determined by the next word (bar or baz or ...). Again, I have no experience of using sub-commands in this context.
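A minimal, untested sketch of that idea (the foo/bar/baz names mirror the example above; how well sub-parsers cooperate with Django's own argument handling depends on the Django version, so treat it as a starting point):

# app/management/commands/foo.py
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = 'Example command with sub-commands'

    def add_arguments(self, parser):
        subparsers = parser.add_subparsers(dest='subcommand', required=True)

        bar = subparsers.add_parser('bar')
        bar.add_argument('--aa', action='store_true')
        bar.add_argument('--bb')
        bar.add_argument('--cc', action='store_true')

        baz = subparsers.add_parser('baz')
        baz.add_argument('--bazzy', nargs='+')

    def handle(self, *args, **options):
        if options['subcommand'] == 'bar':
            self.stdout.write('bar: %s' % options['bb'])
        else:
            self.stdout.write('baz: %s' % options['bazzy'])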
I found no mention of support for this feature in the release notes, so it appears that this is still not supported as of Django 3.0. I would suggest that you use meaningful names for your files that make their purpose clear. You could always come up with a naming convention!
A workaround could be: create a specific Django "satellite" app for each group of management commands.
In recent versions of Django, the requirements for a Python module to be an app are minimal: you won't need to provide any fake models.py or other specific files as happened in the old days.
While far from perfect from a stylistic point of view, you still gain a few advantages:
no need to hack the framework at all
python manage.py will list the commands grouped by app
you can control the grouping by providing suitable names to the apps
you can use these satellite apps as container for specific unit tests
I always try to avoid fighting against the framework, even when this means compromising, and sometimes accepting its occasional design limitations.
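For illustration, such a layout could look like this (the app and command names are invented):

billing_commands/                # satellite app grouping billing-related commands
    __init__.py
    apps.py
    management/
        __init__.py
        commands/
            __init__.py
            process_invoices.py
            send_reminders.py
reporting_commands/              # satellite app grouping reporting commands
    ...

with the satellite apps registered in settings.py:

INSTALLED_APPS = [
    ...
    'billing_commands',
    'reporting_commands',
]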
Related
The terminal says that I didn't define DJANGO_SETTINGS_MODULE, but I have tried every method I could think of and so far nothing has helped (coding on Windows).
Can someone help me out please? I am stuck and don't know what to do.
Maybe this will help - I run my virtual env through Anaconda:
conda activate djangoenv
    raise ImproperlyConfigured(
django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
from faker import Faker
from app.models import AccessRecrod, Webpage, Topic
import random
import django
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
django.setup()

fakegen = Faker()
topics = ['Search', 'Social', 'Marketplace', 'News', 'Games']


def add_topic():
    t = Topic.objects.get_or_create(top_name=random.choice(topics))[0]
    t.save()
    return t


def populate(N=5):
    for entry in range(N):
        # get topic for entry
        top = add_topic()
        # create fake data for entry
        fake_url = fakegen.url()
        fake_date = fakegen.date()
        fake_name = fakegen.company()
        # create new webpage entry
        webpg = Webpage.objects.get_or_create(
            topic=top, url=fake_url, name=fake_name)[0]
        # create fake access record
        acc_rec = AccessRecrod.objects.get_or_create(
            name=webpg, date=fake_date)[0]


if __name__ == '__main__':
    print('populating script!')
    populate(20)
    print('populating complete!')
The problem is simply your import of the models. You just need to set up Django before you import anything related to Django:
import django
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
django.setup()

from faker import Faker
from app.models import AccessRecrod, Webpage, Topic
import random

# rest of your code
If you check the code in manage.py and the Django code on GitHub, you will find essentially the same sequence of calls (besides a lot of additional stuff) when you run a management command via python manage.py your_command.
Using a management command directly, as described in the answer from #Code-Apprentice, would be the other, more "Django way" to integrate your script.
But you asked for the reason for the error message in your way of doing it.
If you study the complete error trace, you will also find that it starts at the import statement.
Instead of trying to shoehorn a regular Python script into the Django environment like this, I suggest you create a command which you can run with ./manage.py. You can learn how to create your own custom commands here. When you do this correctly, you can run a command like ./manage.py create_topics or whatever name you give your command. This allows manage.py to load the Django environment for you.
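A minimal sketch of such a command, assuming the models and populate logic from the question (the file path and the --count option are made up for illustration):

# app/management/commands/create_topics.py
import random

from django.core.management.base import BaseCommand
from faker import Faker

from app.models import AccessRecrod, Webpage, Topic

TOPICS = ['Search', 'Social', 'Marketplace', 'News', 'Games']


class Command(BaseCommand):
    help = 'Populate the database with fake topics, webpages and access records'

    def add_arguments(self, parser):
        parser.add_argument('-n', '--count', type=int, default=20)

    def handle(self, *args, **options):
        fakegen = Faker()
        for _ in range(options['count']):
            topic = Topic.objects.get_or_create(top_name=random.choice(TOPICS))[0]
            webpage = Webpage.objects.get_or_create(
                topic=topic, url=fakegen.url(), name=fakegen.company())[0]
            AccessRecrod.objects.get_or_create(name=webpage, date=fakegen.date())
        self.stdout.write('populating complete!')

It could then be run with ./manage.py create_topics --count 20, and no manual django.setup() call is needed.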
Alternatively, you could look at creating fixtures with ./manage.py dumpdata and ./manage.py loaddata. These commands will allow you to create data and save them to a .json file which you can load into your database at any time.
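For example (the app label and file name are placeholders):

python manage.py dumpdata app --indent 2 > app_fixture.json
python manage.py loaddata app_fixture.json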
We are using Django as the backend for a website that provides various things, among others a neural network built with Tensorflow to answer certain requests.
For that, we created an AppConfig and added it to INSTALLED_APPS in Django's settings.py. This AppConfig then loads the neural network as soon as it is initialized:
settings.py:
INSTALLED_APPS = [
    ...
    'bert_app.apps.BertAppConfig',
]
.../bert_app/apps.py:
class BertAppConfig(AppConfig):
    name = 'bert_app'

    if 'bert_app.apps.BertAppConfig' in settings.INSTALLED_APPS:
        predictor = BertPredictor()  # loads the ANN
Now while that works and does what it should, the ANN is loaded for every single command run through manage.py. While we of course want it to be executed when you call manage.py runserver, we don't want it to run for manage.py migrate, manage.py help, or any of the other commands.
I am generally not sure if this is the proper way to load an ANN for a Django backend, so does anybody have any tips on how to do this properly? I can imagine that loading the model on startup is not best practice, and I am very open to suggestions on how to do it properly instead.
However, there is also some other code besides the actual model loading that takes a few seconds and that is definitely supposed to be executed as soon as the server starts up (so on manage.py runserver), but not on manage.py help (as it takes a few seconds as well). Is there some quick fix for telling Django to execute it only on runserver and not for its other commands?
I had a similar problem and solved it by checking argv.
import sys

from django.apps import AppConfig


class SomeAppConfig(AppConfig):
    def ready(self, *args, **kwargs):
        is_manage_py = any(arg.casefold().endswith("manage.py") for arg in sys.argv)
        is_runserver = any(arg.casefold() == "runserver" for arg in sys.argv)

        if (is_manage_py and is_runserver) or (not is_manage_py):
            init_your_thing_here()
Now, a bit closer to the not is_manage_py part: in production you run your web server with uwsgi/uvicorn/..., which is still a web server, except it's not run with manage.py. Most likely, it's the only thing that you will ever run without manage.py.
Use AppConfig.ready() - it's intended for this:
Subclasses can override this method to perform initialization tasks such as registering signals. It is called as soon as the registry is fully populated. - [django documentation]
To get your AppConfig back, use:
from django.apps import apps

apps.get_app_config(app_name)
# apps.get_app_configs()  # all
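Putting the two ideas together, a rough sketch (BertPredictor and the bert_app name come from the question; the import path and the view are made up for illustration):

# bert_app/apps.py
import sys

from django.apps import AppConfig

from .bert import BertPredictor  # hypothetical location of BertPredictor


class BertAppConfig(AppConfig):
    name = 'bert_app'
    predictor = None

    def ready(self):
        # only pay the loading cost for a real server process
        # (manage.py runserver, or a non-manage.py server such as uwsgi)
        if 'runserver' in sys.argv or not sys.argv[0].endswith('manage.py'):
            self.predictor = BertPredictor()

and later, wherever a request needs the model:

from django.apps import apps

def predict_view(request):
    predictor = apps.get_app_config('bert_app').predictor
    ...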
This is another way. Your manage.py will probably have something that looks like this:
def main():
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'slambook.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)

    # check if it has runserver
    if 'runserver' in sys.argv:
        # execute your custom function
        ...


if __name__ == '__main__':
    main()
You can check whether sys.argv contains runserver; if so, execute your script or function.
I am trying to implement a hostname-like module, and my target machine is an amazon-ec2 instance. But when I run the script it gives me the error below:
[ansible-user#ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *

try:
    import json
except ImportError:
    import simplejson as json


def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            try:
                f.write("%s\n" % hostname)
            finally:
                f.close()
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")


def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type=str)))
    name = module.params['name']
    write_to _file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)


if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. This also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # change is if this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True

    # in the event of a successful module execution, you will want to
    # simply call AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module like so as an ad-hoc command:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation as proposed in #Simon's answer.
So I'm very much a noob when it comes to dealing with nose plugins.
I've been searching a lot but docs regarding nose plugins seem scarce.
I read and tried what's in the following links to try to write a simple nose plugin
and run it with nosetests, without success:
https://nose.readthedocs.org/en/latest/doc_tests/test_init_plugin/init_plugin.html
https://nose.readthedocs.org/en/latest/plugins/writing.html
I don't want to write my own test-runner or run the tests from any other script (via run(argv=argv, suite=suite(), ...)),
like they do in the first link.
I wrote a file myplugin.py with a class like this:
import os
from nose.plugins import Plugin


class MyCustomPlugin(Plugin):
    name = 'myplugin'

    def options(self, parser, env=os.environ):
        parser.add_option('--custom-path', action='store',
                          dest='custom_path', default=None,
                          help='Specify path to widget config file')

    def configure(self, options, conf):
        if options.custom_path:
            self.make_some_configs(options.custom_path)
            self.enabled = True

    def make_some_configs(self, path):
        # do some stuff based on the given path
        pass

    def begin(self):
        print 'Maybe print some useful stuff...'
        # do some more stuff
and added a setup.py like this:
try:
    from setuptools import setup, find_packages
except ImportError:
    import distribute_setup
    distribute_setup.use_setuptools()
    from setuptools import setup, find_packages

setup(
    name='mypackage',
    ...
    install_requires=['nose==1.3.0'],
    py_modules=['myplugin'],
    entry_points={
        'nose.plugins.1.3.0': [
            'myplugin = myplugin:MyCustomPlugin'
        ]
    }
)
Both files are in the same directory.
Every time I run nosetests --custom-path [path], I get:
nosetests: error: no such option: --custom-path
From the links mentioned above, I thought that was all that was required to register and enable a custom plugin.
But it seems that either I'm doing something really wrong, or nose's docs are outdated.
Can someone please point me to the correct way to register and enable a plugin that I can use with nosetests?
Thanks a lot!! :)
You don't want the nose version in entry_points in setup.py. Just use nose.plugins.0.10 as the docs say. The dotted version in the entry point name is not so much a nose version as a plugin API version.
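So the entry_points section from the question's setup.py would become something like this (everything else stays the same):

entry_points={
    'nose.plugins.0.10': [
        'myplugin = myplugin:MyCustomPlugin'
    ]
}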
I want to be able to run the Scrapy web crawling framework from within Django. Scrapy itself only provides a command line tool scrapy to execute its commands, i.e. the tool was not intentionally written to be called from an external program.
The user Mikhail Korobov came up with a nice solution, namely to call Scrapy from a Django custom management command. For convenience, I repeat his solution here:
# -*- coding: utf-8 -*-
# myapp/management/commands/scrapy.py

from __future__ import absolute_import
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    def run_from_argv(self, argv):
        self._argv = argv
        return super(Command, self).run_from_argv(argv)

    def handle(self, *args, **options):
        from scrapy.cmdline import execute
        execute(self._argv[1:])
Instead of calling e.g. scrapy crawl domain.com I can now do python manage.py scrapy crawl domain.com from within a Django project. However, the options of a Scrapy command are not parsed at all. If I do python manage.py scrapy crawl domain.com -o scraped_data.json -t json, I only get the following response:
Usage: manage.py scrapy [options]
manage.py: error: no such option: -o
So my question is, how do I extend the custom management command to accept Scrapy's command line options?
Unfortunately, Django's documentation of this part is not very extensive. I've also read the documentation of Python's optparse module, but afterwards it still wasn't any clearer to me. Can anyone help me in this respect? Thanks a lot in advance!
Okay, I have found a solution to my problem. It's a bit ugly but it works. Since the Django project's manage.py command does not accept Scrapy's command line options, I split the options string into two arguments which are accepted by manage.py. After successful parsing, I rejoin the two arguments and pass them to Scrapy.
That is, instead of writing
python manage.py scrapy crawl domain.com -o scraped_data.json -t json
I put spaces in between the options like this
python manage.py scrapy crawl domain.com - o scraped_data.json - t json
My handle function looks like this:
def handle(self, *args, **options):
    arguments = self._argv[1:]
    for arg in arguments:
        if arg in ('-', '--'):
            i = arguments.index(arg)
            new_arg = ''.join((arguments[i], arguments[i+1]))
            del arguments[i:i+2]
            arguments.insert(i, new_arg)
    from scrapy.cmdline import execute
    execute(arguments)
Meanwhile, Mikhail Korobov has provided the optimal solution. See here:
# -*- coding: utf-8 -*-
# myapp/management/commands/scrapy.py

from __future__ import absolute_import
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    def run_from_argv(self, argv):
        self._argv = argv
        self.execute()

    def handle(self, *args, **options):
        from scrapy.cmdline import execute
        execute(self._argv[1:])
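With this version, the original invocation from the question should work unchanged, since Django's own option parsing is bypassed and the full argument list is handed straight to Scrapy:
python manage.py scrapy crawl domain.com -o scraped_data.json -t json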
I think you're really looking for Guideline 10 of the POSIX argument syntax conventions:
The argument -- should be accepted as a delimiter indicating the end of options.
Any following arguments should be treated as operands, even if they begin with
the '-' character. The -- argument should not be used as an option or as an operand.
Python's optparse module behaves this way, even under Windows.
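A quick throwaway illustration of that behaviour (not part of the original answer):

from optparse import OptionParser

parser = OptionParser()
parser.add_option('-v', action='store_true', dest='verbose')

# everything after '--' is treated as a positional argument,
# even though '-o' looks like an option
opts, args = parser.parse_args(['-v', '--', '-o', 'scraped_data.json'])
print(opts.verbose)  # True
print(args)          # ['-o', 'scraped_data.json']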
I put the scrapy project settings module in the argument list, so I can create separate scrapy projects in independent apps:
# <app>/management/commands/scrapy.py
from __future__ import absolute_import
import os
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    def handle(self, *args, **options):
        os.environ['SCRAPY_SETTINGS_MODULE'] = args[0]
        from scrapy.cmdline import execute
        # scrapy ignores args[0], requires a mutable seq
        execute(list(args))
Invoked as follows:
python manage.py scrapy myapp.scrapyproj.settings crawl domain.com -- -o scraped_data.json -t json
Tested with scrapy 0.12 and django 1.3.1