Reload Variable Data from DB without restarting Django - python

I have a function in my Django views.py whose data I use in another function, but if a user changes a field from True to False I want that change picked up without having to restart Django.
def update_list():
    global processess
    processess = botactive.objects.filter(active=True).values_list('user')

update_list()
processess holds the users whose model field is set to True; if a user sets it to False, I want new requests to no longer include them.
listi = [row[0] for row in processess]
def wallet_verify(listi):
    # print(listi)
    database = Bybitapidatas.objects.filter(user=listi)
    ...
This is the request I make, and I want it to use fresh data without restarting Django, done in Python rather than via HTML.
def verifyt(request):
    with ProcessPoolExecutor(max_workers=4, initializer=django.setup) as executor:
        results = executor.map(wallet_verify, listi)
    return HttpResponse("done")

Ignoring the relative merits of globals in Django for the moment, you could just recreate the query in verifyt() to make sure it's fresh.
def verifyt(request):
    v_processess = botactive.objects.filter(active=True).values_list('user')
    v_listi = [row[0] for row in v_processess]
    with ProcessPoolExecutor(max_workers=4, initializer=django.setup) as executor:
        results = executor.map(wallet_verify, v_listi)
    return HttpResponse("done")
(It might be worth noting, Django queries are lazily evaluated, so, by the looks of it, your query won't actually be performed until listi is set anyway, which may do unpredictable things to your global.)
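As an illustration of that laziness (a minimal sketch, not from the original answer; it assumes the botactive model from the question and DEBUG=True, which Django requires before it records connection.queries):

from django.db import connection, reset_queries

reset_queries()
qs = botactive.objects.filter(active=True).values_list('user')
print(len(connection.queries))  # 0: building the QuerySet runs no SQL
listi = [row[0] for row in qs]  # iterating evaluates the query now
print(len(connection.queries))  # 1
# Re-iterating qs serves rows from its result cache rather than the DB,
# which is why a module-level global can keep returning stale data.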
Another option might be to make your query into a function, so you can call it whenever you need it and always get the latest data:
def get_listi():
    processess = botactive.objects.filter(active=True).values_list('user')
    listi = [row[0] for row in processess]
    return listi

def verifyt(request):
    listi = get_listi()
    with ProcessPoolExecutor(max_workers=4, initializer=django.setup) as executor:
        results = executor.map(wallet_verify, listi)
    return HttpResponse("done")

def wallet_verify(user_from_listi):
    database = Bybitapidatas.objects.filter(user=user_from_listi)
    ...

Related

How can I update cache rows within the flask-cache (FileSystemCache)

I am using flask-cache (FileSystemCache) to store an entire table's worth of data (to prevent constant database IO).
This works great and really speeds up reading the records, but my app also allows users to update rows in the database.
I am fine with the database IO in that case; however, I would also like to update the row in the local cache, because if the user revisits the last-updated row, the cache will still hold what was previously fetched from the database and will not reflect the most recent update.
I can see the cache file is generated and stored in some binary format (pickle?) that contains all the rows (and, as mentioned, the cache works as expected for reads). I don't know how to get or set specific rows within that cache file, though.
Below is the simplified code of what I am doing:
@cache.cached(timeout=500, key_prefix='all_docs')
def cache_all_db_rows(table_name):
    engine = create_sql_alchemy_engine()
    connection = engine.connect()
    results = connection.execute(stmt).fetchall()
    return [row for row in results]
@site.route('/display/<doc_id>', methods=["GET", "POST"])
@login_required
def display(doc_id):
    form = CommentForm(request.form)
    results = cache_all_db_rows(table_name)
    if request.method == "POST":
        if form.validate_on_submit():
            comments = form.comment.data
            relevant = form.relevant.data
            database_rate_or_add_comment(comments=comments, relevant_flag=relevant, doc_id=doc_id)
            # Ideally I would set the update to the cache here (after a successful db update)
            cache.set("foo", comments)
    return render_template("display.html", form=form)
I tried a few things, but can't seem to query the cache (pickle file?)... I tried adding code to query what is actually in the cache file by doing this:
obj = []
file_name = "./cache/dfaec33f482d83493ed6ae7e87ace5f9"
with open(file_name, "rb") as fileOpener:
    while True:
        try:
            obj.append(pickle.load(fileOpener))
        except EOFError:
            break
app.logger.info(str(obj))
but I am receiving an error: _pickle.UnpicklingError: invalid load key, '\xfb'.
I am not sure how to interact with the flask-cache.
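No answer was recorded here, but as a hedged sketch (not the thread's solution): Flask-Caching exposes cache.get(), cache.set() and cache.delete(), so you can manipulate the cached value through that API instead of unpickling the file by hand (the file also contains expiry metadata, which is likely why the raw pickle.load() fails). With a plain string key_prefix, the view above is cached under the key 'all_docs'. The doc_id field name and dict-shaped rows below are assumptions:

def update_cached_row(doc_id, comments, timeout=500):
    # Read the whole cached list back through the cache API.
    rows = cache.get("all_docs")
    if rows is None:
        return  # entry missing or expired; the next cached read repopulates it
    for row in rows:
        if row.get("doc_id") == doc_id:  # hypothetical column name
            row["comment"] = comments
    # Write the patched list back under the same key.
    cache.set("all_docs", rows, timeout=timeout)

Alternatively, simply call cache.delete("all_docs") after the database write and let the next request rebuild the cache from the database.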

python loop until length

So I'm having trouble getting some data into my DB.
I'm not that good with Python and am trying to learn.
This is the data I'm sending to the Django server: as you can see, it receives FILES named doc[i], and I want to save each file's name in the DB, but I don't know how to loop through them.
This is what I'm doing for now:
def submit_quality_dept_application(request, application_id):
    doc0 = request.FILES['doc0']
    length = request.data['length']
    application = Application.objects.get(id=application_id)
    application_state = application.application_state
    application_state['doc0'] = doc0.name
    Application.objects.filter(id=application_id).update(
        application_state=application_state)
    return Response(length, status=status.HTTP_200_OK)
That way it works for doc0 and I can save its name in the DB, but I want to loop through every doc[i] and save them all. Any suggestions?
You can enumerate over the items with a range(…) [Python-doc]:
def submit_quality_dept_application(request, application_id):
    n = int(request.data['length'])
    application = Application.objects.get(id=application_id)
    application_state = application.application_state
    for i in range(n):
        doc = request.FILES[f'doc{i}']
        application_state[f'doc{i}'] = doc.name
    Application.objects.filter(id=application_id).update(
        application_state=application_state)
    return Response(n, status=status.HTTP_200_OK)
But I'm not sure this is the best way to handle multiple files. It might be better to submit the files as a list in the request, for example all under the same key, as sketched below.
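For completeness, a minimal sketch of that alternative (not part of the original answer): if the client posts every file under one shared key, Django exposes them via request.FILES.getlist(). The key name 'docs' is hypothetical:

def submit_quality_dept_application(request, application_id):
    application = Application.objects.get(id=application_id)
    application_state = application.application_state
    # All files posted under the single shared key 'docs'
    docs = request.FILES.getlist('docs')
    for i, doc in enumerate(docs):
        application_state[f'doc{i}'] = doc.name
    Application.objects.filter(id=application_id).update(
        application_state=application_state)
    return Response(len(docs), status=status.HTTP_200_OK)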

How to debug a flask-restful api with pdb

I want to use pdb to step into some flask-restful code. I have an endpoint which returns a token. I then use the token to access another endpoint which returns the required data. I would like to view the result of a database query. How do I go about this?
I tried setting a breakpoint inside the class, but it does not get triggered when I send a request using the requests library.
class FetchData(Resource):
    @jwt_required
    def get(self, args):
        engine = create_engine('mysql+pymysql://')
        conn = engine.connect()
        tablemeta = MetaData()
        tablemeta.reflect(bind=engine)
        keydate = tablemeta.tables['KEYDATE']
        coefficient = tablemeta.tables['COEFFICIENT']
        vessel = tablemeta.tables['VESSEL']
        update_dict = {}
        s = select([coefficient])
        s = s.where(coefficient.c.updated_date >= args["dt"])
        rp = conn.execute(s)
        result = []
        for r in rp:
            j = coefficient.join(vessel, r['idvessel'] == vessel.c.idvessel)
            import pdb
            pdb.set_trace()
            vdm_id = select([vessel.c.vessel_id]).select_from(j)
            vdm_id = conn.execute(vdm_id).scalar()
            intermediate = []
            intermediate.append({"vdm_id": vdm_id})
            intermediate.append([dict(r)])
            result.append(intermediate)
Or possibly there's another debugger I should be using?
You should put your pdb.set_trace() before the loop: if the query returns no results, the loop body never executes and the breakpoint is never reached.
I have been using pdb with Flask for the last few years without any problems.
Alternatively, just use print(variable_you_want); this can be faster and simpler.
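As a concrete sketch of the first suggestion (it reuses conn and s from the question's code; the reloader note is an addition, not from the thread): move the breakpoint above the loop so it always fires, and if you use Flask's dev server, consider disabling the auto-reloader so pdb attaches to the process that actually serves the request:

import pdb

rp = conn.execute(s)
pdb.set_trace()  # break here: fires even when the query returns no rows
result = []
for r in rp:
    ...

# The dev server's reloader forks a child process; keeping pdb in the
# serving process is easier with the reloader disabled:
# app.run(debug=True, use_reloader=False)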

Tastypie get full resource only works the second time

I'm developing an Android application whose backend is built with Tastypie and Django. For one GET request I optionally want to retrieve the entire object (with related fields fully serialized, rather than as URIs). Below is part of the Python code for the resource in question:
class RideResource(ModelResource):
    user = fields.ForeignKey(UserResource, 'driver')
    origin = fields.ForeignKey(NodeResource, 'origin', full=True)
    destination = fields.ForeignKey(NodeResource, 'destination', full=True)
    path = fields.ForeignKey(PathResource, 'path')

    # if the request has full_path=1 then we perform a deep query, returning
    # the entire path object, not just the URI
    def dehydrate(self, bundle):
        if bundle.request.GET.get('full_path') == "1":
            self.path.full = True
        else:
            ride_path = bundle.obj.path
            try:
                bundle.data['path'] = _Helpers.serialise_path(ride_path)
            except ObjectDoesNotExist:
                bundle.data['path'] = []
        return bundle
As you can see, RideResource has a foreign key pointing to PathResource. I'm using dehydrate to inspect whether the GET request has the parameter full_path set to 1; in that case I programmatically set the path field to full=True, otherwise I simply return the path URI.
The thing is, the code only seems to work the second time the GET is performed. I've tested it hundreds of times: when I perform my GET with full_path=1, even though it enters the if branch and sets self.path.full = True, the first time it only returns the URI of the PathResource object, while relaunching the exact same request a second time works perfectly...
Any idea what's the problem?
EDIT AFTER SOLUTION FOUND THANKS TO @Tomasz Jakub Rup
I finally managed to get it working using the following code:
def full_dehydrate(self, bundle, for_list=False):
    self.path.full = bundle.request.GET.get('full_path') == "1"
    return super(RideResource, self).full_dehydrate(bundle, for_list)

def dehydrate(self, bundle):
    if not bundle.request.GET.get('full_path') == "1":
        try:
            bundle.data['path'] = _Helpers.serialise_path(bundle.obj.path)
        except ObjectDoesNotExist:
            bundle.data['path'] = []
    return bundle
dehydrate is called after full_dehydrate, so by the time your dehydrate sets self.path.full = True the related fields have already been serialized (which is why the change only shows up on the following request). Override full_dehydrate instead:
def full_dehydrate(self, bundle, for_list=False):
    self.path.full = bundle.request.GET.get('full_path') == "1"
    return super(RideResource, self).full_dehydrate(bundle, for_list)

django database inserts not getting picked up

We have a little bit of a complicated setup:
In our normal code, we connect manually to a MySQL db. We're doing this because, as I understand it, the connections Django normally uses are not thread-safe. So we let Django make the connection, extract the connection information from it, and then use a MySQLdb connection to do the actual querying.
Our code is largely an update process, so we have autocommit turned off to save time.
For ease of creating test data, I created Django models that represent the tables and use them to create rows to test on. So I have functions like:
def make_thing(**overrides):
    fields = deepcopy(DEFAULT_THING)
    fields.update(overrides)
    s = Thing(**fields)
    s.save()
    transaction.commit(using='ourdb')
    reset_queries()
    return s
However, it doesn't seem to actually be committing! After I make an object, I later have code that executes raw SQL against the MySQLdb connection:
def get_information(self, value):
    print self.api.rawSql("select count(*) from thing")[0][0]
    query = 'select info from thing where column = %s' % value
    return self.api.rawSql(query)[0][0]
This print statement prints 0! Why?
Also, if I turn autocommit off, I get
TransactionManagementError: This is forbidden when an 'atomic' block is active.
when we try to alter the autocommit level later.
EDIT: I also just tried https://groups.google.com/forum/#!topic/django-users/4lzsQAWYwG0, which did not help.
EDIT2: I checked from a shell against the database: the commit is working, it's just not getting picked up. I've tried setting the transaction isolation level, but it isn't helping. I should add that a function further up the stack from get_information uses this decorator:
def single_transaction(fn):
    from django.db import transaction
    from django.db import connection

    def wrapper(*args, **kwargs):
        prior_autocommit = transaction.get_autocommit()
        transaction.set_autocommit(False)
        connection.cursor().execute('set transaction isolation level read committed')
        connection.cursor().execute("SELECT @@session.tx_isolation")
        try:
            result = fn(*args, **kwargs)
            transaction.commit()
            return result
        finally:
            transaction.set_autocommit(prior_autocommit)
            django.db.reset_queries()
            gc.collect()

    wrapper.__name__ = fn.__name__
    return wrapper
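No answer is recorded for this question, so purely as an assumption (not the thread's conclusion): MySQL's default isolation level is REPEATABLE READ, under which a connection with autocommit off reads from a snapshot taken when its transaction began, so it cannot see rows committed afterwards by another connection. Note also that SET TRANSACTION ISOLATION LEVEL applies only to the next transaction, so issuing it inside an already-open transaction has no effect on that transaction. A minimal sketch, with raw_conn standing in for the hand-made MySQLdb connection:

# Hedged sketch: end the raw connection's current transaction so its old
# snapshot is discarded; the next statement starts a fresh transaction
# that can see rows committed by the Django connection in the meantime.
raw_conn.commit()  # or raw_conn.rollback()
count = api.rawSql("select count(*) from thing")[0][0]  # now reflects new commits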
