How can I update cache rows within the flask-cache (FileSystemCache) - python

I am using flask-cache (FileSystemCache) to store an entire table's worth of data (to prevent constant database IO).
This works great, and really speeds up the reading of the records, but my app also allows users to "update rows" in the database.
I am fine with the IO in this case, but I would also like to update the locally cached copy of the row, because if the user revisits the last-updated row, the cache still holds what was previously fetched from the database and will not reflect the most recent user update.
I can see the cache is generated and stored in some binary form (pickle?), and it contains all the rows (as mentioned, the cache is working as expected for "reads"). I don't know how to either "get" or "set" specific rows within the cache file, though.
Below is the simplified code of what I am doing:
@cache.cached(timeout=500, key_prefix='all_docs')
def cache_all_db_rows(table_name):
    engine = create_sql_alchemy_engine()
    connection = engine.connect()
    results = connection.execute(stmt).fetchall()
    return [row for row in results]
@site.route('/display/<doc_id>', methods=["GET", "POST"])
@login_required
def display(doc_id):
    form = CommentForm(request.form)
    results = cache_all_db_rows(table_name)
    if request.method == "POST":
        if form.validate_on_submit():
            comments = form.comment.data
            relevant = form.relevant.data
            database_rate_or_add_comment(comments=comments, relevant_flag=relevant, doc_id=doc_id)
            # Ideally I would update the cache here (after a successful db update)
            cache.set("foo", comments)
    return render_template("display.html", form=form)
I tried a few things, but I can't seem to query the cache (pickle file?). I tried inspecting what is actually in the cache file with this:
obj = []
file_name = "./cache/dfaec33f482d83493ed6ae7e87ace5f9"
with open(file_name, "rb") as fileOpener:
    while True:
        try:
            obj.append(pickle.load(fileOpener))
        except EOFError:
            break
app.logger.info(str(obj))
but I am receiving an error: _pickle.UnpicklingError: invalid load key, '\xfb'.
I am not sure how to interact with the flask-cache.
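One way to approach this (a sketch, not from the original thread): rather than unpickling the cache file on disk, read and rewrite the cached value through the cache client under the same key the decorator uses (`'all_docs'` here; the exact stored key may vary by flask-cache version, so verify it against your setup). `FakeCache` and `update_cached_row` below are hypothetical names; the dict-backed `FakeCache` stands in for the real cache object, which exposes the same `get`/`set` calls.

```python
# Sketch: patch one row inside a cached list of rows, using a plain dict
# as a stand-in for the flask-cache client (same get/set semantics).
class FakeCache:
    """Minimal stand-in for the flask-cache Cache object (get/set only)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value, timeout=None):
        self._store[key] = value

cache = FakeCache()

# Simulate the cached result of cache_all_db_rows(): one dict per row.
cache.set("all_docs", [
    {"doc_id": 1, "comments": "old comment"},
    {"doc_id": 2, "comments": "another row"},
])

def update_cached_row(cache, doc_id, **changes):
    """After a successful DB write, patch the matching row in the cache."""
    rows = cache.get("all_docs")
    if rows is None:   # cache expired: nothing to patch, next read refills it
        return
    for row in rows:
        if row["doc_id"] == doc_id:
            row.update(changes)
    cache.set("all_docs", rows, timeout=500)  # rewrite under the same key

update_cached_row(cache, 1, comments="updated comment")
```

If the cache has expired (`get` returns `None`), doing nothing is safe: the next call to the cached function repopulates it from the database, which already holds the update.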

Related

Reload Variable Data from DB without restarting Django

I have a function in my Django views.py and I use its data in another function, but if a user changes True to False I want the update to take effect without having to restart Django.
def update_list():
    global processess
    processess = botactive.objects.filter(active=True).values_list('user')

update_list()
I use processess, which fetches users whose model field is set to True, but if they set it to False I want to exclude them on the next request.
listi = [row[0] for row in processess]

def wallet_verify(listi):
    # print(listi)
    database = Bybitapidatas.objects.filter(user=listi)
    ...
This is the request I make, and I want it to use fresh data without restarting Django (in Python, not via HTML).
def verifyt(request):
    with ProcessPoolExecutor(max_workers=4, initializer=django.setup) as executor:
        results = executor.map(wallet_verify, listi)
    return HttpResponse("done")
Ignoring the relative merits of globals in Django for the moment, you could just recreate the query in verifyt() to make sure it's fresh.
def verifyt(request):
    v_processess = botactive.objects.filter(active=True).values_list('user')
    v_listi = [row[0] for row in v_processess]
    with ProcessPoolExecutor(max_workers=4, initializer=django.setup) as executor:
        results = executor.map(wallet_verify, v_listi)
    return HttpResponse("done")
(It might be worth noting, Django queries are lazily evaluated, so, by the looks of it, your query won't actually be performed until listi is set anyway, which may do unpredictable things to your global.)
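The laziness mentioned above can be illustrated without Django at all: a generator behaves analogously, in that building it does no work and nothing runs until iteration. (`fake_query` and `log` below are illustrative stand-ins, not Django APIs.)

```python
log = []

def fake_query():
    # Stands in for an ORM queryset: constructing it does no work.
    def run():
        log.append("db hit")          # "query executed" marker
        for user in ["alice", "bob"]:
            yield user
    return run()

qs = fake_query()          # nothing evaluated yet
assert log == []           # no "db hit" recorded so far
listi = [u for u in qs]    # evaluation happens here, on iteration
assert log == ["db hit"]
assert listi == ["alice", "bob"]
```

This is why a module-level `listi = [row[0] for row in processess]` freezes the data: the list comprehension forces evaluation once, at import time.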
Another option might be to make your query into a function, so you can call it when you need it and always get the latest data:
def get_listi():
    processess = botactive.objects.filter(active=True).values_list('user')
    listi = [row[0] for row in processess]
    return listi

def verifyt(request):
    listi = get_listi()
    with ProcessPoolExecutor(max_workers=4, initializer=django.setup) as executor:
        results = executor.map(wallet_verify, listi)
    return HttpResponse("done")

def wallet_verify(user_from_listi):
    database = Bybitapidatas.objects.filter(user=user_from_listi)
    ...

How to Update a variable between Flask pages?

I asked a very similar question yesterday and was directed here. I took what was posted there (using a session) to take a user input and update a second page with a data table. However, if I get to that second route through any other means, it reverts to the default, which is an empty data table. So I'm thinking the variable is not being updated and saved, or is being overwritten. Code below:
@app.route('/', methods=['GET','POST'])
def index():
    loadform = LoadDataForm()
    session['dataset'] = toy_data.get_empty_df()
    if loadform.validate_on_submit():
        dataset, headers = toy_data.get_dataset(int(loadform.selectToyData.data))
        session['dataset'] = dataset
        session.modified = True
        return render_template('DataTable.html', dataset=dataset)
    return render_template('LoadData.html', form=loadform)

@app.route('/DataTable', methods=['GET','POST'])
def index_data():
    dataset = session.get('dataset', None)
    return render_template('DataTable.html', dataset=dataset)
The data you are setting is added to the session, which is why it is not available from a different session.
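A likely culprit (an assumption, not confirmed in the thread) is that `index()` unconditionally overwrites `session['dataset']` with the empty table on every request, so any later visit to `/` wipes the stored data. Seeding the default only when the key is missing avoids that; a plain dict is used here as a stand-in for Flask's `session`, and `get_empty_df`/`index` mirror the names from the question.

```python
def get_empty_df():
    return []  # stand-in for toy_data.get_empty_df()

session = {}   # stand-in for flask.session

def index(submitted_dataset=None):
    # Seed a default only if no dataset is stored yet,
    # instead of overwriting it on every request.
    session.setdefault('dataset', get_empty_df())
    if submitted_dataset is not None:        # i.e. the form validated
        session['dataset'] = submitted_dataset
    return session['dataset']

def index_data():
    return session.get('dataset', None)

index(submitted_dataset=[{"x": 1}])   # user loads a dataset
assert index_data() == [{"x": 1}]
index()                               # revisiting '/' no longer wipes it
assert index_data() == [{"x": 1}]
```

Note also that the real Flask session is serialized into a cookie, so large or non-serializable objects (such as a DataFrame) are better kept in server-side storage with only a key in the session.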

peewee savepoint does not exist

I'm using peewee to interface with a MySQL database. I have a list of entries which must be inserted into the database, or updated in case they're already present there. I'm using the create_or_get function for this. I also use threading to speed up the process; the code looks like this:
# pool is just a map wrapper around standard threading module
pool = utils.ThreadPool()
for page in xrange(0, pages):
    pool.add_task(self.update_page, page)
pool.wait_completion()

def update_page(self, num):
    for entry in self.get_entries_from_page(num):
        self.push_entry(entry)

def push_entry(self, entry):
    with _db.execution_context():
        result, new = EntryModel.create_or_get(**entry)  # << error here
        if not new:
            if entry['date'] > result.date:
                result.hits += 1
                result.date = entry['date']
                result.save()
Database initialization:
_db.initialize(playhouse.pool.PooledMySQLDatabase(n, user=u, passwd=w, host=h, port=p))
Everything was running smoothly, but all of a sudden I began receiving a lot of errors on the marked line:
(1305, 'SAVEPOINT s449cd5a8d165440aaf47b205e2932362 does not exist')
The savepoint number changes every time, and data is not being written to the database. Recreating the database did not help. What can lead to this error?
Try removing autocommit=True when creating the database connection.

django database inserts not getting picked up

We have a little bit of a complicated setup:
In our normal code, we connect manually to a MySQL db. We're doing this because, as I understand it, the connections Django normally uses are not thread-safe. So we let Django make the connection, extract the connection information from it, and then use a MySQLdb connection to do the actual querying.
Our code is largely an update process, so we have autocommit turned off to save time.
For ease of creating test data, I created django models that represent the tables, and use them to create rows to test on. So I have functions like:
def make_thing(**overrides):
    fields = deepcopy(DEFAULT_THING)
    fields.update(overrides)
    s = Thing(**fields)
    s.save()
    transaction.commit(using='ourdb')
    reset_queries()
    return s
However, it doesn't seem to actually be committing! After I make an object, I have code further down that executes raw SQL against the MySQLdb connection:
def get_information(self, value):
    print self.api.rawSql("select count(*) from thing")[0][0]
    query = 'select info from thing where column = %s' % value
    return self.api.rawSql(query)[0][0]
This print statement prints 0! Why?
Also, if I turn autocommit off, I get
TransactionManagementError: This is forbidden when an 'atomic' block is active.
when we try to alter the autocommit level later.
EDIT: I also just tried https://groups.google.com/forum/#!topic/django-users/4lzsQAWYwG0, which did not help.
EDIT2: I checked from a shell against the database--the commit is working, it's just not getting picked up. I've tried setting the transaction isolation level but it isn't helping. I should add that a function further up from get_information uses this decorator:
def single_transaction(fn):
    from django.db import transaction
    from django.db import connection
    def wrapper(*args, **kwargs):
        prior_autocommit = transaction.get_autocommit()
        transaction.set_autocommit(False)
        connection.cursor().execute('set transaction isolation level read committed')
        connection.cursor().execute("SELECT @@session.tx_isolation")
        try:
            result = fn(*args, **kwargs)
            transaction.commit()
            return result
        finally:
            transaction.set_autocommit(prior_autocommit)
            django.db.reset_queries()
            gc.collect()
    wrapper.__name__ = fn.__name__
    return wrapper
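As a side note, unrelated to the transaction issue: the manual `wrapper.__name__ = fn.__name__` line in the decorator above is what `functools.wraps` does for you, along with copying the docstring and other metadata. A minimal sketch of the same decorator shape (the transaction calls are elided as comments, since they need a live database):

```python
import functools

def single_transaction(fn):
    @functools.wraps(fn)  # copies __name__, __doc__, __module__, etc.
    def wrapper(*args, **kwargs):
        # ... set autocommit off / set isolation level here ...
        result = fn(*args, **kwargs)
        # ... commit and restore autocommit here ...
        return result
    return wrapper

@single_transaction
def get_information(value):
    """Fetch one row."""
    return value * 2

assert get_information.__name__ == "get_information"
assert get_information.__doc__ == "Fetch one row."
assert get_information(21) == 42
```

Preserving the wrapped function's identity this way matters for debugging and for any framework code that dispatches on function names.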

Storing JSON into sqlite database using Python

I am posting a JSON object back to the server side and retrieving that information through a request. Right now this is the code in my views.py:
@csrf_exempt
def save(request):
    if request.method == 'POST':
        rawdata = request.body
        JSONData = json.dumps(rawdata)
        return HttpResponse(rawdata)
when I return rawdata my response looks like this:
[{"time_elapsed":"0","volts":"239.3","amps":"19.3","kW":"4.618","kWh":"0","session":"1"},...]
when I return JSONData my response looks like this:
"[{\"time_elapsed\":\"0\",\"volts\":\"239.1\",\"amps\":\"20.8\",\"kW\":\"4.973\",\"kWh\":\"0\",\"session\":\"1\"},....]
Which response is better when trying to insert this data into a sqlite database using Python/Django?
Also, how would I start a loop for this? Do I have to use this kind of code?
conn = sqlite3.connect('sqlite.db')
c = conn.cursor()
c.execute("INSERT STATEMENTS")
I assume I have to write a loop for the INSERT STATEMENTS portion of that code, but I don't have any key to work off of. In my data, everything between {} is one row. How do I iterate through this array, so that every time you see {...data...} it is inserted as a new row?
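One way to handle the looping question with only the standard library (a sketch; the table and column names are taken from the sample payload, not from the asker's schema): parse the body with `json.loads` so each `{...}` becomes a dict, then let `executemany` do the per-row iteration.

```python
import json
import sqlite3

rawdata = ('[{"time_elapsed":"0","volts":"239.3","amps":"19.3",'
           '"kW":"4.618","kWh":"0","session":"1"},'
           '{"time_elapsed":"1","volts":"239.1","amps":"20.8",'
           '"kW":"4.973","kWh":"0","session":"1"}]')

rows = json.loads(rawdata)  # loads, not dumps: JSON text -> list of dicts

conn = sqlite3.connect(":memory:")  # a file path in a real app
c = conn.cursor()
c.execute("""CREATE TABLE IF NOT EXISTS meterdata
             (time_elapsed TEXT, volts TEXT, amps TEXT,
              kW TEXT, kWh TEXT, session TEXT)""")
# Each {...} in the JSON array becomes one row; executemany does the loop,
# binding the named placeholders from each dict's keys.
c.executemany(
    "INSERT INTO meterdata VALUES "
    "(:time_elapsed, :volts, :amps, :kW, :kWh, :session)",
    rows,
)
conn.commit()
count = c.execute("SELECT COUNT(*) FROM meterdata").fetchone()[0]
```

Named placeholders (`:time_elapsed` etc.) also avoid the quoting and injection pitfalls of building INSERT statements with string formatting.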
Here is how I eventually solved my problem. It was a matter of figuring out how to translate the JSON object into something Python could recognize, and then writing a simple loop to iterate through all the data that was produced.
@csrf_exempt
def save(request):
    if request.method == 'POST':
        rawdata1 = request.body
        rawdata2 = json.loads(rawdata1)
        length = len(rawdata2)
        for i in range(0, length, 1):
            x = meterdata(time_elapsed=rawdata2[i]['time_elapsed'], volts=rawdata2[i]['volts'], amps=rawdata2[i]['amps'], kW=rawdata2[i]['kW'], kWh=rawdata2[i]['kWh'], session=rawdata2[i]['session'])
            x.save()
    return HttpResponse("Success!")
The big difference is json.loads rather than json.dumps, and, in the for loop, how the newly converted data is accessed: the first bracket selects the row, and the second selects the field within it. For the longest time I was trying to do data[0][0]. May this help anyone who finds this in the future.
Probably, if you need to store that data in a db, it is best to create a model representing it, then create a ModelForm associated with your model for handling your POST.
That way, saving the model to the db is trivial, and serializing it as a JSON response is something like:
data = serializers.serialize('json',
                             YourModel.objects.filter(id=id),
                             fields=('list', 'of', 'fields'))
return HttpResponse(data, mimetype='application/json')