CherryPy MVC with MySQL issue - Python

Problem setting up an MVC design with CherryPy/MySQL. Here is the setup (assume all the imports are correct):
## controller.py
class User(object):
    def __init__(self):
        self.model = model.User()

    @cherrypy.expose
    def index(self):
        return 'some HTML to display user home'
## model.py
class Model(object):
    _db = None

    def __init__(self):
        self._db = cherrypy.thread_data.db

class User(Model):
    def getuser(self, email):
        # get the user with _db and return the result
        pass
## service.py
class UserService(object):
    def __init__(self):
        self._model = model.User()

    def GET(self, email):
        return self._model.getuser(email)
## starting the server
user = controller.User()
user.service = service.UserService()
cherrypy.tree.mount(user, '/user', self.config)
# app.merge(self.config)
cherrypy.engine.subscribe('start_thread', self._onThreadStart)
self._onThreadStart(-1)

def _onThreadStart(self, threadIndex):
    cherrypy.thread_data.db = mysql.connect(**self.config['database'])

if __name__ == '__main__':
    cherrypy.engine.start()
    cherrypy.engine.block()
The above code raises an error in model.py at the line self._db = cherrypy.thread_data.db.
I got:
AttributeError: '_ThreadData' object has no attribute 'db'
I'm not sure why. Could you please point me in the right direction? I can get the connection and pull info in controller.py, in User.index, but not in model.py.
Please help, thanks.

CherryPy doesn't decide for you what tools to use; it is up to you to pick the tools that fit you and your tasks best. Thus CherryPy doesn't set up any database connection for you: your cherrypy.thread_data.db is your job.
Personally I use the same separation-of-responsibility concept, a kind of MVC, for my CherryPy apps, so here follow two possible ways to achieve what you want.
Design note
I would like to note that the simple solution of thread-mapped database connections, at least in the case of MySQL, works pretty well in practice, and the additional complexity of more old-fashioned connection pools may not be necessary.
There are, however, points that shouldn't be overlooked. Your database connection may be killed, lost, or left in some other state that won't allow you to make queries on it. In that case reconnection must be performed.
Also pay attention to avoiding connection sharing between threads, as it will result in hard-to-debug errors and Python crashes. In your example code this may relate to the service dispatcher and its cache.
Bootstrapping phase
In your code that sets configuration, mounts CherryPy apps, etc.
bootstrap.py
# ...
import MySQLdb as mysql

def _onThreadStart(threadIndex):
    cherrypy.thread_data.db = mysql.connect(**config['database'])

cherrypy.engine.subscribe('start_thread', _onThreadStart)
# useful for tests to have a db connection on the current thread
_onThreadStart(-1)
model.py
import cherrypy
import MySQLdb as mysql

class Model(object):
    '''Your abstract model'''

    _db = None

    def __init__(self):
        self._db = cherrypy.thread_data.db
        try:
            # reconnect if needed
            self._db.ping(True)
        except mysql.OperationalError:
            pass
I wrote a complete CherryPy deployment tutorial, cherrypy-webapp-skeleton, a couple of years ago. You can take a look at the code, as the demo application uses exactly this approach.
Model property
To achieve less code coupling and to avoid import cycles, it could be a good idea to move all database-related code into the model module. That may include initial connection queries like setting the operation timezone, making MySQLdb converters timezone-aware, etc.
model.py
class Model(object):
    def __init__(self):
        try:
            # reconnect if needed
            self._db.ping(True)
        except mysql.OperationalError:
            pass

    @property
    def _db(self):
        '''Thread-mapped connection accessor'''
        if not hasattr(cherrypy.thread_data, 'db'):
            cherrypy.thread_data.db = mysql.connect(**config['database'])
        return cherrypy.thread_data.db
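As an aside, the thread-mapped part of this pattern can be sketched with the standard library alone. In this illustration (an assumption-laden sketch, not CherryPy code), threading.local() stands in for cherrypy.thread_data and connect() is a placeholder for mysql.connect(**config['database']):

```python
import threading

_local = threading.local()  # plays the role of cherrypy.thread_data

def connect():
    # placeholder for mysql.connect(**config['database'])
    return object()

class Model(object):
    @property
    def _db(self):
        # lazily create one connection per thread, then reuse it
        if not hasattr(_local, 'db'):
            _local.db = connect()
        return _local.db

m = Model()
assert m._db is m._db  # repeated access on the same thread reuses the connection
```

This also shows why the property accessor avoids the AttributeError from the question: the connection is created on first access instead of being assumed to already exist on the thread.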

Related

Celery - assign sessions dynamically / reuse connections

I've been breaking my head for a few days over a simple task (or so I thought):
The main program sends hundreds of SQL queries to fetch data from multiple DBs.
I thought Celery could be the right choice, as it can scale and also simplify the threading/async orchestration.
The "clean" solution would be one generic class, supposed to look something like:
@app.task(bind=True, name='fetch_data')
def fetch_data(self, *args, **kwargs):
    db = kwargs['db']
    sql = kwargs['sql']
    session = DBContext().get_session(db)
    result = session.query(sql).all()
    ...
But I'm having trouble implementing such a DBContext class, which should be instantiated once per DB, reuse the DB sessions across requests, and close them once the requests are done (or any other recommendation you suggest).
I was thinking about using a base Task class to decorate the function and keep all the available connections there, but the problem is that such a class isn't initialized dynamically, only once... Maybe there's a way to make it work, but I'm not sure how:
class DatBaseFactory(Task):
    def __call__(self, *args, **kwargs):
        print("In class", self.db)
        self.engine = DBContext.get_db(self.db)
        return super().__call__(*args, **kwargs)

@app.task(bind=True, base=DatBaseFactory, name='test_db', db=db, engine='')
def test_db(self, *args, **kwargs):
    print("From Task", self.engine)
The other alternative would be duplicating the function as many times as there are DBs and "preserving" the sessions in them, but that's quite an ugly solution.
Hope someone can help with this trouble.
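One pattern often used for this (a sketch under assumptions, not taken from the post: make_engine is a hypothetical stand-in for something like sqlalchemy.create_engine) is a lazily filled, module-level cache keyed by database name. Each Celery worker process fills it once and then reuses the engine across all tasks it runs:

```python
_engines = {}

def make_engine(db_name):
    # hypothetical stand-in for e.g. sqlalchemy.create_engine(url_for(db_name))
    return {'db': db_name}

def get_engine(db_name):
    # create the engine on first use inside this worker, then reuse it
    if db_name not in _engines:
        _engines[db_name] = make_engine(db_name)
    return _engines[db_name]

# inside the task body one would then do something like:
# session = sessionmaker(bind=get_engine(kwargs['db']))()
```

With the default prefork pool, each worker process gets its own cache, which is desirable anyway: engines and connections should not be shared across process boundaries.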

How to mock a Django internal library using patch decorator

I'm mocking an internal library class (Server) that provides the connection to an HTTP JSON-RPC server. But when running the test, the class is not mocked. The class is used by calling a project class that is a wrapper around another class that effectively instantiates the Server class.
I extract here the parts of the code that give sense for what I'm talking about.
Unit test:
@patch('jsonrpc_requests.jsonrpc.Server')
def test_get_question_properties(self, mockServer):
    lime_survey = Questionnaires()
    # ...
Class Questionnaires:
class Questionnaires(ABCSearchEngine):
    """ Wrapper class for LimeSurvey API """

    def get_question_properties(self, question_id, language):
        return super(Questionnaires, self).get_question_properties(question_id, language)
Class Questionnaires calls the method get_question_properties from class ABCSearchEngine(ABC). This class initializes the Server class to provide the connection to the external API.
Class ABCSearchEnginge:
class ABCSearchEngine(ABC):
    session_key = None
    server = None

    def __init__(self):
        self.get_session_key()

    def get_session_key(self):
        # HERE self.server keeps getting the real Server class instead of the mocked one
        self.server = Server(
            settings.LIMESURVEY['URL_API'] + '/index.php/admin/remotecontrol')
As the test is mocking the Server class, why is it not mocked? What parts are missing?
From what I see, you didn't add a return value.
Where did you put the mocked value in @patch('jsonrpc_requests.jsonrpc.Server')?
What happens if you add a MagicMock (don't forget from mock import patch, MagicMock)?
@patch('jsonrpc_requests.Server', MagicMock('RETURN VALUE HERE'))
You also need to mock the __init__ method (where Server is the one from from jsonrpc_requests import Server):
@patch.object(Server, '__init__', MagicMock(return_value=None))
I extrapolated your problem from my own understanding; maybe you need to fix some paths (mock needs the exact path to do the job).

Non-lazy instance creation with Pyro4 and instance_mode='single'

My aim is to give a web framework access to a Pyro daemon that performs time-consuming tasks at first load. So far, I have managed to keep in memory (outside of the web app) a single instance of a class that takes care of the time-consuming loading at its initialization. I can also query it with my web app. The code for the daemon is:
@Pyro4.expose
@Pyro4.behavior(instance_mode='single')
class Store(object):
    def __init__(self):
        self._store = ...  # the expensive loading

    def query_store(self, query):
        # Useful query tool to expose to the web framework.
        # Not time consuming, provided self._store is loaded.
        return ...

with Pyro4.Daemon() as daemon:
    uri = daemon.register(Store)
    with Pyro4.locateNS() as ns:
        ns.register('store', uri)
    daemon.requestLoop()
The issue I am having is that although a single instance is created, it is only created at the first proxy query from the web app. This is normal behavior according to the doc, but not what I want, as the first query is still slow because of the initialization of Store.
How can I make sure the instance is created as soon as the daemon is started?
I was thinking of creating a proxy instance of Store in the code of the daemon, but this is tricky because the event loop must be running.
EDIT
It turns out that daemon.register() can accept either a class or an object, which could be a solution. This is, however, not recommended in the doc (link above), and that feature apparently only exists for backwards compatibility.
Do whatever initialization you need outside of your Pyro code. Cache it somewhere. Use the instance_creator parameter of the @Pyro4.behavior decorator for maximum control over how and when an instance is created. You can even consider pre-creating server instances yourself and retrieving one from a pool if you so desire. Anyway, one possible way to do this is like so:
import Pyro4

def slow_initialization():
    print("initializing stuff...")
    import time
    time.sleep(4)
    print("stuff is initialized!")
    return {"initialized stuff": 42}

cached_initialized_stuff = slow_initialization()

def instance_creator(cls):
    print("(Pyro is asking for a server instance! Creating one!)")
    return cls(cached_initialized_stuff)

@Pyro4.behavior(instance_mode="percall", instance_creator=instance_creator)
class Server:
    def __init__(self, init_stuff):
        self.init_stuff = init_stuff

    @Pyro4.expose
    def work(self):
        print("server: init stuff is:", self.init_stuff)
        return self.init_stuff

Pyro4.Daemon.serveSimple({
    Server: "test.server"
})
But this complexity is not needed for your scenario: just initialize the thing that takes a long time and cache it somewhere. Instead of re-initializing it every time a new server object is created, just refer to the cached pre-initialized result. Something like this:
import Pyro4

def slow_initialization():
    print("initializing stuff...")
    import time
    time.sleep(4)
    print("stuff is initialized!")
    return {"initialized stuff": 42}

cached_initialized_stuff = slow_initialization()

@Pyro4.behavior(instance_mode="percall")
class Server:
    def __init__(self):
        self.init_stuff = cached_initialized_stuff

    @Pyro4.expose
    def work(self):
        print("server: init stuff is:", self.init_stuff)
        return self.init_stuff

Pyro4.Daemon.serveSimple({
    Server: "test.server"
})

Connecting django signal handlers in tests

Using django-cacheops, I want to test that my views are getting cached as I intend them to be. In my test case I'm connecting cacheops' cache_read signal to a handler that should increment a value in the cache for hits or misses. However, the signal is never fired. Does anyone know the correct way to connect a Django signal handler in a test case, purely for use in that test case?
Here's what I have so far:
from cacheops.signals import cache_read

cache.set('test_cache_hits', 0)
cache.set('test_cache_misses', 0)

def cache_log(sender, func, hit, **kwargs):
    # never called
    if hit:
        cache.incr('test_cache_hits')
    else:
        cache.incr('test_cache_misses')

class BootstrapTests(TestCase):
    @classmethod
    def setUpClass(cls):
        super(BootstrapTests, cls).setUpClass()
        cache_read.connect(cache_log)
        assert cache_read.has_listeners()

    def test_something_that_should_fill_and_retrieve_cache(self):
        ...
        hits = cache.get('test_cache_hits')  # always 0
I've also tried connecting the signal handler at the module level, and in the regular testcase setUp method, all with the same result.
EDIT:
Here's my actual test code, plus the object I'm testing. I'm using the cached_as decorator to cache a function. This test is currently failing.
bootstrap.py
class BootstrapData(object):
    def __init__(self, app, person=None):
        self.app = app

    def get_homepage_dict(self, context={}):
        url_name = self.app.url_name

        @cached_as(App.objects.filter(url_name=url_name), extra=context)
        def _get_homepage_dict():
            if self.app.homepage is None:
                return None
            concrete_module_class = MODULE_MAPPING[self.app.homepage.type]
            serializer_class_name = f'{concrete_module_class.__name__}Serializer'
            serializer_class = getattr(api.serializers, serializer_class_name)
            concrete_module = concrete_module_class.objects.get(module=self.app.homepage)
            serializer = serializer_class(context=context)
            key = concrete_module_class.__name__
            return {
                key: serializer.to_representation(instance=concrete_module)
            }

        return _get_homepage_dict()
test_bootstrap.py
class BootstrapDataTest(TestCase):
    def setUp(self):
        super(BootstrapDataTest, self).setUp()

        def set_signal(signal=None, **kwargs):
            self.signal_calls.append(kwargs)

        self.signal_calls = []
        cache_read.connect(set_signal, dispatch_uid=1, weak=False)
        self.app = self.setup_basic_app()  # creates an 'App' model and saves it

    def tearDown(self):
        cache_read.disconnect(dispatch_uid=1)

    def test_boostrap_data_is_cached(self):
        obj = BootstrapData(self.app)
        obj.get_homepage_dict()
        # fails, self.signal_calls == []
        self.assertEqual(self.signal_calls, [{'sender': App, 'func': None, 'hit': False}])
        self.signal_calls = []
        obj.get_homepage_dict()
        self.assertEqual(self.signal_calls, [{'sender': App, 'func': None, 'hit': True}])
I can't see why this is happening, but I'll try to make a useful answer anyway.
First, if you want to test whether the cache works, you shouldn't rely on its side effects to check that, and signals are side effects of its primary function: preventing db calls. Try testing that instead:
def test_it_works(self):
    with self.assertNumQueries(1):
        obj.get_homepage_dict()
    with self.assertNumQueries(0):
        obj.get_homepage_dict()
Second, if you want to know what's going on, you can dig in, adding prints everywhere (including in the cacheops code) and seeing where it stops. Alternatively, you can make a test for me to look at; the instructions are here: https://github.com/Suor/django-cacheops#writing-a-test.
Last, your test is a bit wrong: for @cached_as(), sender would be None and func would be the decorated function.
In this specific case, it turned out that my test cases subclassed Django REST framework's APITestCase, which in turn subclasses Django's SimpleTestCase.
Looking in the cacheops sources, I found that its tests subclass TransactionTestCase, and switching out the test case fixed this issue.
I'd be interested to know why this is the case, but the issue is solved for now.

Can't mock the class method in python

I have a class that I try to mock in tests. The class is located in server/cache.py and looks like this:
class Storage(object):
    def __init__(self, host, port):
        # set up connection to a storage engine
        ...

    def store_element(self, element, num_of_seconds):
        # store something
        ...

    def remove_element(self, element):
        # remove something
        ...
This class is used in server/app.py similar to this:
import cache

STORAGE = cache.Storage('host', 'port')
STORAGE.store_element(1, 5)
Now the problem arises when I try to mock it in the tests:
import unittest, mock
import server.app as application

class SomeTest(unittest.TestCase):
    # part1
    def setUp(self):
        # part2
        self.app = application.app.test_client()
This clearly does not work during the test if I can't connect to a storage. So I have to mock it somehow by writing things in 'part1' and 'part2'.
I tried to achieve it with
@mock.patch('server.app.cache')  # part 1
mock.side_effect = ...  # hoping to overwrite the init function to do nothing
But it still tries to connect to the real host. So how can I mock the full class here correctly? P.S. I reviewed many questions which look similar, but in vain.
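Since server/app.py builds STORAGE at import time, patching server.app.cache from inside a test method comes too late: the real connection attempt happens on the first import. Two workable options are to patch cache.Storage before the first import of server.app, or to replace the already-created module-level STORAGE object. A stdlib-only sketch of the second option (module paths from the question; the Storage class below is a stand-in, not the real one):

```python
from unittest import mock

class Storage(object):
    # stand-in for cache.Storage: connecting would fail in a test environment
    def __init__(self, host, port):
        raise ConnectionError("no storage engine reachable from tests")

# A spec'd MagicMock only exposes the listed interface, so typos in
# test code still fail loudly instead of silently passing.
fake_storage = mock.MagicMock(spec=['store_element', 'remove_element'])

# in setUp one would apply it with, e.g.:
# patcher = mock.patch('server.app.STORAGE', fake_storage)  # path from the question
# patcher.start(); self.addCleanup(patcher.stop)
fake_storage.store_element(1, 5)
fake_storage.store_element.assert_called_once_with(1, 5)
```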
