Django: object reference kept on exception - python

I've got a weird behaviour when I raise an exception inside my Django application. Please look at this snippet (I removed all the unnecessary code):
@csrf_exempt
def main(request):
    ems_db = EmsDatabase()
    # raise AssertionError
    return HttpResponse('OK\n', content_type='text/plain')
This is the EmsDatabase class:
class EmsDatabase:
    def __init__(self):
        pass

    def __del__(self):
        print('>>>>>>>>>>>>>>>> DEL')
Running this function (through a proper HTTP call, obviously), the EmsDatabase class is instantiated and properly garbage-collected; I see the print output in the Django server log.
But if I uncomment the raise AssertionError line I get no print output and the object stays alive; only modifying a source file to trigger the server reload makes the object lose its last reference and be garbage-collected (the print line appears).
The same thing happens running Django through Lighttpd + Gunicorn.
Why is Django (v2.0.7, python 3.6, Linux) keeping a reference to my object or, more likely, to the frame of the main() function? What can I do?
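(A general Python note, not specific to Django: a traceback object references the frames it passed through, and each frame references its locals, so whatever the framework keeps around for error reporting will also keep ems_db alive. If the underlying goal is reliable cleanup, here is a minimal sketch that does not rely on __del__ at all, with a hypothetical close() method standing in for the real cleanup:)

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt


class EmsDatabase:
    def close(self):
        # hypothetical explicit cleanup instead of __del__
        print('>>>>>>>>>>>>>>>> CLOSE')

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # runs even if the view raises, regardless of who still holds
        # a reference to the traceback/frame
        self.close()


@csrf_exempt
def main(request):
    with EmsDatabase() as ems_db:
        # raise AssertionError  # the cleanup above still runs if uncommented
        return HttpResponse('OK\n', content_type='text/plain')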

Related

Inherit class Worker on Odoo15

In one of my Odoo installations I need to set the socket_timeout variable of the WorkerHTTP class directly from Python code, bypassing the environment variable ODOO_HTTP_SOCKET_TIMEOUT.
If you have never read about it, you can check here for more info: https://github.com/odoo/odoo/commit/49e3fd102f11408df00f2c3f6360f52143911d74#diff-b4207a4658979fdb11f2f2fa0277f483b4e81ba59ed67a5e84ee260d5837ef6d
In Odoo 15, which I'm using, the Worker classes are located in odoo/service/server.py.
My idea was to inherit the Worker class constructor and simply set self.sock_timeout = 10 or another value, but I can't make it work with inheritance.
EDIT: I almost managed to make it work, but I have problems with static methods.
STEP 1:
Inherit the WorkerHTTP constructor and add self.sock_timeout = 10.
Then I also have to inherit PreforkServer and override the process_spawn() method so I can pass WorkerHttpExtend instead of WorkerHTTP as the argument for the worker_spawn() method.
class WorkerHttpExtend(WorkerHTTP):
    """Set up the sock_timeout attribute when the WorkerHTTP object gets initialized."""
    def __init__(self, multi):
        super(WorkerHttpExtend, self).__init__(multi)
        self.sock_timeout = 10
        logging.info(f'SOCKET TIMEOUT: {self.sock_timeout}')


class PreforkServerExtend(PreforkServer):
    """Inherit PreforkServer and override the process_spawn() method
    so that WorkerHttpExtend is passed to worker_spawn() instead of WorkerHTTP.
    """
    def process_spawn(self):
        if config['http_enable']:
            while len(self.workers_http) < self.population:
                self.worker_spawn(WorkerHttpExtend, self.workers_http)
            if not self.long_polling_pid:
                self.long_polling_spawn()
        while len(self.workers_cron) < config['max_cron_threads']:
            self.worker_spawn(WorkerCron, self.workers_cron)
STEP 2:
The start() method should initialize the server with PreforkServerExtend, not with PreforkServer (last line in the code below). This is where I start to have problems.
def start(preload=None, stop=False):
    """Start the odoo http server and cron processor."""
    global server
    load_server_wide_modules()
    if odoo.evented:
        server = GeventServer(odoo.service.wsgi_server.application)
    elif config['workers']:
        if config['test_enable'] or config['test_file']:
            _logger.warning("Unit testing in workers mode could fail; use --workers 0.")
        server = PreforkServer(odoo.service.wsgi_server.application)
STEP 3:
At this point, if I want to go further (which I did), I have to copy the whole start() method and import every package it needs to make it work:
import odoo
from odoo.service.server import WorkerHTTP, WorkerCron, PreforkServer, load_server_wide_modules, \
GeventServer, _logger, ThreadedServer, inotify, FSWatcherInotify, watchdog, FSWatcherWatchdog, _reexec
from odoo.tools import config
I did that, and then in my custom start() method I wrote the line
server = PreforkServerExtend(odoo.service.wsgi_server.application)
but even then, how do I tell Odoo to execute my start() method instead of the original one?
I'm sure this would eventually work (maybe not safely, but it would work), because at one point, not being 100% sure what I was doing, I put my inherited classes WorkerHttpExtend and PreforkServerExtend directly into the original odoo/service/server.py and initialized the server object with PreforkServerExtend instead of PreforkServer:
server = PreforkServerExtend(odoo.service.wsgi_server.application)
It works then: I get the custom socket timeout value, plus the print and logging output when the Odoo service starts, because PreforkServerExtend calls my custom classes in cascade at that point; otherwise my inherited classes are there but they never get called.
So I guess if I could tell the system to run my start() method, it would work.
STEP 4 (not reached yet):
I'm pretty sure the start() method is called in odoo/cli/server.py, in the main() method:
rc = odoo.service.server.start(preload=preload, stop=stop)
I could go deeper, but I don't think the effort is worth it for what I need.
So technically, if I were able to tell the system which start() method to choose, I would have done it. I'm still not sure it is a safe procedure (probably not, actually, but at this point I was just experimenting), but I wonder if there is an easier way to set up the socket timeout without using the environment variable ODOO_HTTP_SOCKET_TIMEOUT.
I'm pretty sure there is an easier method than what I'm doing, with low-level Python or maybe even with a class in odoo/service/server.py, but I can't figure it out for now. If someone has an idea, let me know!
Working solution: I was introduced to monkey patching in this post:
Possible for a class to look down at subclass constructor?
This has solved my problem; now I'm able to patch the process_request method of the WorkerHTTP class:
import errno
import fcntl
import socket
import odoo
import odoo.service.server as srv


class WorkerHttpProcessRequestPatch(srv.WorkerHTTP):
    def process_request(self, client, addr):
        client.setblocking(1)
        # client.settimeout(self.sock_timeout)
        client.settimeout(10)  # patching timeout setup to a needed value
        client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        flags = fcntl.fcntl(client, fcntl.F_GETFD) | fcntl.FD_CLOEXEC
        fcntl.fcntl(client, fcntl.F_SETFD, flags)
        self.server.socket = client
        try:
            self.server.process_request(client, addr)
        except IOError as e:
            if e.errno != errno.EPIPE:
                raise
        self.request_count += 1


# Switch the process_request class attribute - this is what I needed to make it work
odoo.service.server.WorkerHTTP.process_request = WorkerHttpProcessRequestPatch.process_request
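The same monkey-patching idea would also answer the earlier question about start(). A minimal sketch, assuming the patching module is imported before the server starts, and with my_start standing in for a hypothetical copy of the original start() that builds PreforkServerExtend:

import odoo.service.server


def my_start(preload=None, stop=False):
    # hypothetical copy of odoo.service.server.start() whose last line reads:
    # server = PreforkServerExtend(odoo.service.wsgi_server.application)
    ...


# Rebind the module-level function; odoo/cli/server.py calls it as
# odoo.service.server.start(...), so it picks up the replacement.
odoo.service.server.start = my_start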

sys profile function becomes None after exception is raised

I am currently using a custom profile function that I set with sys.setprofile. The goal of the function is to raise an exception if we are in the main thread, a timeout thread has given the signal to do so, and we are within a specific scope of the project that properly handles such exceptions.
def kill(frame, event, arg):
    whitelist = [
        r'filepath\fileA.py',
        r'filepath\fileB.py'
    ]
    if event == 'call' and frame.f_code.co_filename in whitelist and kill_signal and threading.current_thread() is threading.main_thread():
        raise BackgroundTimeoutError
    return kill
It works perfectly; however, as soon as the exception is raised the profile function becomes unset. So after raising an exception inside a try/except block, calling sys.getprofile() returns None.
sys.setprofile(kill)
print(sys.getprofile())  # <----- this returns the profile function
try:
    function_that_raises_backgroundtimeouterror()
except:
    pass
print(sys.getprofile())  # <----- this returns None
Why is the profile function being unset and how can I overcome this? Is there a way to permanently set the function or reset it when unset?
Profile functions are meant for profiling, not for raising exceptions. If a profile function raises an exception, Python assumes it's broken and unsets it.
The documentation says:
Error in the profile function will cause itself unset.
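If you do keep raising from the profile function, one workaround is simply to re-install it whenever it has fired, since that is the moment Python unsets it. A sketch reusing kill, BackgroundTimeoutError and the calling code from the question:

import sys

sys.setprofile(kill)
try:
    function_that_raises_backgroundtimeouterror()
except BackgroundTimeoutError:
    # raising inside kill() unset the profiler, so put it back
    sys.setprofile(kill)
    # ...handle the timeout here...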

SQLAlchemy error with class but not with function

The following works when run in the context of a Flask app executed in Jenkins:
A function is imported from one file into another:
def return_countries(db):
    db.session.expire_on_commit = False
    q = db.session.query(Country.countrycode).distinct().all()
    q = [t[0] for t in q]
    return q
Inside the file where it is imported I call it like this:
print(return_countries(db))
When this job is run in Jenkins, it works, no problems.
The thing is, I want one database call and then to use the returned list of country codes multiple times, rather than unnecessarily querying the same thing over and over. To keep it neat I have tried to put it in a class. Yes, it could be done without the class, but it may be needed in several parts of the code base, maybe with additional methods, so it would be cleaner if the class worked.
So I commented out that function definition and inserted the following class into the one file:
class checkCountries:
    def __init__(self, db):
        db.session.expire_on_commit = False
        __q = db.session.query(Country.countrycode).distinct().all()
        __q = [t[0] for t in __q]
        print(__q)

    def check(self, __q, countrycode):
        if countrycode in __q:
            return True
        else:
            return False
...updated the import statement in the other file and instantiated it like this, directly under the import statements:
cc = checkCountries(db)
and replaced the above print statement with this one:
print('check for DE: ', cc.check('DE'))
print('check for ES: ', cc.check('ES'))
print('check for GB: ', cc.check('GB'))
But it generated an error in the Jenkins log. The traceback is very long, but below are what are probably the relevant excerpts.
Any ideas? I have a feeling something is wrong with what I have done with __q.
/python3.6/site-packages/sqlalchemy/util/_collections.py", line 988, in __call__
15:26:01 return self.registry[key]
15:26:01 KeyError: 123456789123456
15:26:01
15:26:01 During handling of the above exception, another exception occurred:
15:26:01
15:26:01 Traceback (most recent call last):
/python3.6/site-packages/flask_sqlalchemy/__init__.py", line 922, in get_app
15:26:01 raise RuntimeError('application not registered on db '
15:26:01 RuntimeError: application not registered on db instance and no application bound to current context
15:26:02 Build step 'Execute shell' marked build as failure
15:26:02 Finished: FAILURE
I got it to work, so here is the class I am using which now works:
class checkCountries:
    def __init__(self, db, app):
        db.init_app(app)
        with app.app_context():
            db.session.expire_on_commit = False
            self.__q = db.session.query(Country.countrycode).distinct().all()
            self.__q = [t[0] for t in self.__q]
            print(self.__q)

    def check(self, countrycode):
        if countrycode in self.__q:
            return True
        else:
            return False
Then, the line which instantiates it now takes 2 arguments:
cc = checkCountries(db, app)
The rest is the same as in the question above.
This answer from @mark_hildreth helped, thanks!
Yes, there was more than one thing wrong with it. The class wasn't quite right, but I debugged that in a simplified app and got it working; this error message specific to SQLAlchemy was still there, though.
Maybe it had something to do with the variables in scope in the class vs in the function, which stopped it binding to the Flask application context.
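Two separate things were indeed wrong, and only one of them was about the application context. The original class also had a plain Python problem: __q inside __init__ was just a local variable, not an instance attribute, so check() could never have seen it (and its signature didn't match the cc.check('DE') calls). A sketch of that part of the fix on its own, leaving the Flask/SQLAlchemy context issue aside:

class checkCountries:
    def __init__(self, db):
        db.session.expire_on_commit = False
        q = db.session.query(Country.countrycode).distinct().all()
        self.__q = [t[0] for t in q]  # store on the instance, not in a local

    def check(self, countrycode):
        # inside the class, self.__q is name-mangled consistently, so this is fine
        return countrycode in self.__q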

Error in Python 2.7: takes exactly 2 arguments (1 given)

I am a beginner in Python. I wonder why it is throwing an error.
I am getting an error saying TypeError: client_session() takes exactly 2 arguments (1 given)
The client_session method returns the SecureCookie object.
I have this code here
import os

from werkzeug.utils import cached_property
from werkzeug.contrib.securecookie import SecureCookie
from werkzeug.wrappers import BaseRequest, AcceptMixin, ETagRequestMixin


class Request(BaseRequest):
    def client_session(self, SECRET_KEY1):
        data = self.cookies.get('session_data')
        print " SECRET_KEY ", SECRET_KEY1
        if not data:
            print "inside if data"
            cookie = SecureCookie({"SECRET_KEY": SECRET_KEY1}, secret_key=SECRET_KEY1)
            cookie.serialize()
            return cookie
        print 'self.form[login.name] ', self.form['login.name']
        print 'data new', data
        return SecureCookie.unserialize(data, SECRET_KEY1)


# and another
class Application(object):
    def __init__(self):
        self.SECRET_KEY = os.urandom(20)

    def dispatch_request(self, request):
        return self.application(request)

    def application(self, request):
        return request.client_session(self.SECRET_KEY).serialize()

    # This is our externally-callable WSGI entry point
    def __call__(self, environ, start_response):
        """Invoke our WSGI application callable object"""
        return self.wsgi_app(environ, start_response)
Usually, that means you're calling client_session as an unbound method, giving it only one argument. You should introspect a bit and look at what exactly the request you're using in the application() method is; maybe it is not what you're expecting it to be.
To be sure what it is, you can always add a debug printout point:
print "type: ", type(request)
print "methods: ", dir(request)
and I expect you'll see that request is the original request class that werkzeug gives you...
Here, you're extending werkzeug's BaseRequest, and in application() you expect werkzeug to magically know about your own implementation of the BaseRequest class. But if you read the Zen of Python, you will know that "explicit is better than implicit", so Python never does stuff magically; you have to tell your library about your change somehow.
So after reading werkzeug's documentation, you can find out that this is actually the case:
The request object is created with the WSGI environment as first argument and will add itself to the WSGI environment as 'werkzeug.request' unless it’s created with populate_request set to False.
This may not be totally clear for people who don't know what werkzeug is and what the design logic behind it is.
But a simple Google lookup showed usage examples of BaseRequest:
http://werkzeug.pocoo.org/docs/exceptions/
https://github.com/mitsuhiko/werkzeug/blob/master/examples/i18nurls/application.py or
expand werkzeug useragent class
I only googled for from werkzeug.wrappers import BaseRequest.
So now, you should be able to guess what needs to be changed in your application. As you only gave a few parts of the application, I can't advise you exactly where or what to change.
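As a hedged sketch of the missing piece (the question doesn't show wsgi_app, so this body is an assumption about the usual werkzeug pattern): the WSGI entry point has to build an instance of your own Request subclass from environ and hand it on explicitly, instead of expecting werkzeug to substitute it for you.

from werkzeug.wrappers import Response  # to wrap the serialized cookie string


class Application(object):
    # __init__, dispatch_request, application and __call__ as in the question

    def wsgi_app(self, environ, start_response):
        request = Request(environ)             # your subclass, built explicitly
        body = self.dispatch_request(request)  # client_session now gets both arguments
        response = Response(body, mimetype='text/plain')
        return response(environ, start_response)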

same line behaves correctly or gives me an error message depending on file position

This gives me an error message:
class logger:
    session = web.ctx.session  # this line
This doesn't give me an error message:
class create:
    def GET(self):
        # loggedout()
        session = web.ctx.session  # this line
        form = self.createform()
        return render.create(form)
Why?
web.ctx can't be used in that scope. It's a thread-local object that web.py initializes before it calls GET/POST/etc. and gets discarded afterwards.
class logger:
    print('Hi')
prints Hi. Statements under a class definition get run at definition time.
A function definition like this one:
def GET(self):
    # loggedout()
    session = web.ctx.session  # this line
    form = self.createform()
    return render.create(form)
is also a statement. It creates the function object which is named GET. But the code inside the function does not get run until the GET method is called.
This is why you get an error message in the first case, but not the second.
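If the logger class really does need the session, a common pattern (a sketch, not from the question) is to defer the lookup until a request is actually being handled, for example behind a property:

class logger:
    @property
    def session(self):
        # evaluated per request, after web.py has populated web.ctx
        return web.ctx.session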
