In Django I wrote a custom management command called update.
This command is continuously called by a shell script and updates some values in the database. The updates are done in threads. I put the class that does all the threading-related work in the same module as the custom command.
On my production server, when I try to run that shell script, I get an error when I try to access my models:
antennas = Antenna.objects.all()
The error is:
AttributeError: 'NoneType' object has no attribute 'objects'
As you can see, however, I did import app.models.Antenna in that file.
So to me, it seems as if the reference to the whole site is somehow "lost" inside the threading class.
site/app/management/commands/update.py
(I tried to remove all non-essential code here, as it would clutter everything, but I left the imports intact)
from django.core.management.base import NoArgsCommand, CommandError
from django.utils import timezone
from datetime import datetime, timedelta
from decimal import *
from django.conf import settings
from _geo import *
import random, time, sys, traceback, threading, Queue
from django.db import IntegrityError, connection
from app.models import Bike, Antenna
class Command(NoArgsCommand):
    def handle_noargs(self, **options):
        queue = Queue.Queue()
        threadnum = 2
        for bike in Bike.objects.filter(state__idle = False):
            queue.put(bike)
        for i in range(threadnum):
            u = UpdateThread(queue)
            u.setDaemon(True)
            u.start()
        queue.join()
        return
class UpdateThread(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        antennas = Antenna.objects.all()
        while not self.queue.empty():
            try:
                [...]
            except Exception:
                traceback.print_exc(file=sys.stdout)
            finally:
                self.queue.task_done()
When I try to do from app.models import Antenna inside the UpdateThread class, I get an error:
ImportError: No module named app.models
The site runs great on a webserver. No model-related problems there; it's only when I do anything inside a thread that all app.models imports fail.
Additionally, it seems to me as if exactly the same configuration (I'm using git) does run on another machine with the same operating system (Debian Wheezy, Python 2.7.3rc2 & Django 1.4.2).
Very probably I'm missing something obvious about threading here, but I've been stuck for far too long now. Your help would be very much appreciated!
PS: I did check for circular imports - and those shouldn't happen anyway, as I can use my models in the other (management) class, right?
Related
I've been trying to figure out how best to set this up, cutting it down as much as I can. I have four Python files: core.py (main), logger_controller.py, config_controller.py, and a fourth as a module or singleton; we'll just call it tool.py.
The way I have it set up, logging has an init function that sets up Python's built-in logging with the necessary levels, formatter, directory location, etc. I call this init function in main.
import logging
import logger_controller

def main():
    logger_controller.init_log()
    logger = logging.getLogger(__name__)

if __name__ == "__main__":
    main()
config_controller uses configparser and is mainly a singleton acting as a controller for my config.
import configparser
import logging

logger = logging.getLogger(__name__)

class ConfigController(object):
    def __init__(self, *file_names):
        self.config_parser = configparser.ConfigParser()
        found_files = self.config_parser.read(file_names)
        if not found_files:
            raise ValueError("No config file found.")
        self._validate()

    def _validate(self):
        ...

    def read_config(self, section, field):
        try:
            data = self.config_parser.get(section, field)
        except (configparser.NoSectionError, configparser.NoOptionError) as e:
            logger.error(e)
            data = None
        return data

config = ConfigController("config.ini")
And then my problem is creating the fourth file while making sure both my logger and config parser are running before it. I also want this fourth one to be a singleton, so it follows a similar format to config_controller.
So tool.py uses config_controller to pull anything it needs from the config file. It also has some error checking for when config_controller's read_config returns None, as that isn't validated in _validate. I did this because I wanted my logging to have a general layer of error checking and a more specific layer: _validate just checks that the required fields and sections are in the config file, and then wherever a field is read handles the extra error checking.
So my main problem is this:
How do I have it where my logger and config parser are both running and available before anything else? I'm very much willing to rework all of this, but I'd like to keep the functionality of it all.
One attempt I tried that works, but seems very messy, is making my logger_controller a singleton that just returns Python's built-in logging module.
import logging
import os

class MyLogger(object):
    def __new__(cls, *args, **kwargs):
        init_log()
        return logging

def init_log():
    ...

mylogger = MyLogger()
Then in core.py:
from logger_controller import mylogger
logger = mylogger.getLogger(__name__)
I feel like there should be a better way to do the above, but I'm honestly not sure how.
A few ideas:
Would I be able to extend the logging class instead of just using that init_log function?
Maybe there's a way I can make all three individual modules such that they each initialize in the correct order? My attempts here didn't quite work, as I also have some internal data that I wouldn't want exposed to classes using the module, just the functionality.
I'd like all three (logging, config parsing, and the tool) to be available anywhere I import them.
How I have it set up now "works", but if I were to import tool.py anywhere in core.py and an error occurs that I need to catch, my logger won't be able to log it, as the tool loads before my logger's init.
I have a Celery shared_task in a module tasks that looks like this:
@shared_task
def task():
    from core.something import send_it
    send_it()
and I am writing a test attempting to patch the send_it method. So far I have:
from ..tasks import task

class TestSend(TestCase):
    @override_settings(CELERY_TASK_ALWAYS_EAGER=True)
    @patch("core.tasks.send_it")
    def test_task(self, send_it_mock):
        task()
        send_it_mock.assert_called_once()
When I run this, I get the error: AttributeError: <module 'core.tasks' from 'app/core/tasks.py'> does not have the attribute 'send_it'
Out of desperation I've used @patch("tasks.task.send_it") instead, as the import happens inside the shared_task, but I get a similar result. Does anyone know how I can effectively patch the send_it call? Thanks!
I have following directory structure in my Python project:
- dump_specs.py
/impa
- __init__.py
- server.py
- tasks.py
I had a problem with circular references. dump_specs.py needs a reference to app from server.py. server.py is a Flask app which needs references to the celery tasks from tasks.py. So dump_specs.py looks like:
#!/usr/bin/env python3
import impa.server

def dump_to_dir(dir_path):
    # Do something
    client = impa.server.app.test_client()
    # Do the rest of things
impa/server.py looks like:
#!/usr/bin/env python3
from flask import Flask

import impa.tasks

app = Flask(__name__)
# Definitions of endpoints, some of them use celery tasks -
# that's why I need the impa.tasks reference
And impa/tasks.py:
#!/usr/bin/env python3
from celery import Celery

import impa.server

def make_celery(app):
    celery = Celery(app.import_name,
                    broker=app.config['CELERY_BROKER_URL'],
                    backend=app.config['CELERY_RESULT_BACKEND'])
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery = make_celery(impa.server.app)
When I'm trying to dump specs with ./dump_specs.py I've got an error:
./dump_specs.py specs
Traceback (most recent call last):
  File "./dump_specs.py", line 9, in <module>
    import impa.server
  File "/build/impa/server.py", line 23, in <module>
    import impa.tasks
  File "/build/impa/tasks.py", line 81, in <module>
    celery = make_celery(impa.server.app)
AttributeError: module 'impa' has no attribute 'server'
And I can't understand what's wrong. Could someone explain what's happening and how to get rid of this error?
If I have managed to reproduce your problem correctly on my host, it should help you to insert import impa.tasks into dump_specs.py above import impa.server.
The way your modules depend on each other, the loading order is important. IIRC (the loading machinery is described in greater detail in the docs), when you first try to import impa.server, it will on line 23 try to import impa.tasks, while the import of impa.server is not yet complete. There is an import impa.server in impa.tasks, but we do not go back and execute it at this point (we'd otherwise end up in a full circle); we continue importing impa.tasks until we reach the access to impa.server.app. That fails: impa.server has not finished importing, so the impa package has no server attribute yet.
When possible, it would also help if the code that accesses another module in your package weren't executed at import time, i.e. if it lived in a function or class that is called/used after the imports have completed, instead of running directly at module level.
For sure I'm missing something about Flask and unit test integration (or logger configuration, maybe).
When I try to unit-test some class methods that use app.logger, I run into trouble with RuntimeError: Working outside of application context.
So a practical example:
utils.py
import boto3
from flask import current_app as app

class CustomError(BaseException):
    type = "boto"

class BotoManager:
    def upload_to_s3(self, file):
        try:
            ...  # do something that can trigger a boto3 error
        except boto3.exceptions.Boto3Error as e:
            app.logger.error(e)
            raise CustomError()
test_utils.py
import pytest
from utils import CustomError, BotoManager

def test_s3_manager_trigger_error():
    boto_manager = BotoManager()
    with pytest.raises(CustomError):
        boto_manager.upload_to_s3('file.txt')  # file doesn't exist, so it triggers the error
So the thing is that when I run it, it shows me the error:
RuntimeError: Working outside of application context.
Because the app is not created and I'm not working with the app, that makes sense.
So I only see two possible solutions (spoiler: I don't like either of them):
Don't log anything with app.logger outside of the views (I think I could use the plain Python logging system, but this is not the desired behaviour).
Don't unit-test the parts that use app.logger.
Did someone face this problem already? How did you solve it? Any other possible solution?
I have a Python project that relies on a particular module, receivers.py, being imported.
I want to write a test to make sure it is imported, but I also want to write other tests for the behaviour of the code within the module.
The trouble is that if I have any tests anywhere in my test suite that import or patch anything from receivers.py, then it will automatically import the module, potentially making the test for the import pass wrongly.
Any ideas?
(Note: specifically this is a Django project.)
One (somewhat imperfect) way of doing it is to use the following TestCase:
from django.test import TestCase

class ReceiverConnectionTestCase(TestCase):
    """TestCase that allows asserting that a given receiver is connected
    to a signal.

    Important: this will work correctly providing you:

    1. Do not import or patch anything in the module containing the receiver
       in any django.test.TestCase.
    2. Do not import (except in the context of a method) the module
       containing the receiver in any test module.

    This is because as soon as you import/patch, the receiver will be connected
    by your test and will be connected for the entire test suite run.

    If you want to test the behaviour of the receiver, you may do this
    providing it is a unittest.TestCase, and there is no import from the
    receiver module in that test module.

    Usage:

        # myapp/receivers.py
        from django.dispatch import receiver
        from apples.signals import apple_eaten
        from apples.models import Apple

        @receiver(apple_eaten, sender=Apple)
        def my_receiver(sender, **kwargs):
            pass

        # tests/integration_tests.py
        from apples.signals import apple_eaten
        from apples.models import Apple

        class TestMyReceiverConnection(ReceiverConnectionTestCase):
            def test_connection(self):
                self.assert_receiver_is_connected(
                    'myapp.receivers.my_receiver',
                    signal=apple_eaten, sender=Apple)
    """

    def assert_receiver_is_connected(self, receiver_string, signal, sender):
        receivers = signal._live_receivers(sender)
        receiver_strings = [
            "{}.{}".format(r.__module__, r.__name__) for r in receivers]
        if receiver_string not in receiver_strings:
            raise AssertionError(
                '{} is not connected to signal.'.format(receiver_string))
This works because Django runs django.test.TestCases before unittest.TestCases.