I'm loading a MySQL database into a QTableWidget.
I have two functions connected to the itemChanged signal of my QTableWidget. The first builds a list of the changed cells, while the second validates the data type entered in a cell, bringing up an error message if the wrong data type is put in.
The problem is that both functions also run while I load my database, effectively storing a list of cells I don't want and popping up continuous error messages during the load.
How do I stop the functions from running until after the database is loaded?
def log_change(self, item):
    self.changed_items.append([item.row(), item.column()])

def item_changed(self, Qitem, item):
    if item.column() % 2 == 0:
        try:
            test = float(Qitem.text())
        except ValueError:
            Msgbox = QMessageBox()
            Msgbox.setText("Error, value must be number!")
            Msgbox.exec()
            Qitem.setText('')
There are 2 options:
Use a flag:
# in constructor
self.is_loading = False

def load_from_db(self):
    self.is_loading = True
    # load from db
    # ...
    self.is_loading = False

# foo_callback is item_changed, for example
def foo_callback(self, *args):
    if self.is_loading:
        return
    # do work
Use blockSignals
self.tablewidget.blockSignals(True)
# load from db
# ...
self.tablewidget.blockSignals(False)
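If you use the flag version, it helps to release the guard in a finally block, the same way Qt's QSignalBlocker does for blockSignals. A minimal Qt-free sketch of the pattern (the class and method names here are illustrative, not from the question's code):

```python
class Loader:
    """Sketch of the is_loading flag pattern, independent of Qt."""

    def __init__(self):
        self.is_loading = False
        self.changed_items = []

    def load_from_db(self, rows):
        self.is_loading = True
        try:
            # stand-in for populating the table, which fires the
            # "item changed" callback once per cell written
            for row, col in rows:
                self.on_item_changed(row, col)
        finally:
            self.is_loading = False  # re-enable handling even if loading fails

    def on_item_changed(self, row, col):
        if self.is_loading:
            return  # ignore programmatic changes made while loading
        self.changed_items.append([row, col])
```

The try/finally matters: without it, an exception during the load would leave the flag set and silently disable change tracking for good.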
I'm having trouble passing a value from one script to another. I'm taking it a step at a time, but the big picture is to print the value of obj1.get_predval() in my Django view and wait for the user's input.
active_learner.obj1.get_predval in my beta.py script doesn't work; it just prints the initial value, which makes sense because it isn't running main.py. But I'm not sure how I'd pass along the value set by obj1.set_predval(machine_prediction) in main.py. It correctly outputs obj1.get_predval() inside the main.py script itself.
I'm assuming I have a fundamental misunderstanding. For now, all I'm trying to do is return the value of obj1.get_predval() in beta.py when it reaches the line return value, wait for user input, and then continue.
main.py script below
# imports inferred from the code below; ActiveLearner and
# uncertainty_sampling presumably come from modAL, and MachinePred
# from mac.py -- adjust to match your project
import numpy as np
from numpy import loadtxt
from keras.models import load_model
from sklearn.preprocessing import StandardScaler
from modAL.models import ActiveLearner
from modAL.uncertainty import uncertainty_sampling
from mac import MachinePred

obj1 = MachinePred()

def main():
    model = load_model('model_new.h5')
    DATAFILE = "c_user_model_data100000.csv"
    dataset = loadtxt(DATAFILE, delimiter=",")
    X_pool, Y = dataset[:, 0:5], dataset[:, 5:]
    sc_x, sc_y = StandardScaler(), StandardScaler()
    X_pool, Y = sc_x.fit_transform(X_pool), sc_y.fit_transform(Y)

    learner = ActiveLearner(
        estimator=model,
        query_strategy=uncertainty_sampling
    )

    for i in range(3):
        query_idx, query_inst = learner.query(X_pool)
        print("The machine queried:\n{}\nat index {}".format(
            sc_x.inverse_transform(query_inst),
            query_idx
        ))
        machine_prediction = learner.predict(X_pool[query_idx])
        obj1.set_predval(machine_prediction)
        print("predvalue:", obj1.get_predval())
        ratings = []
        cc_factor = ["delay", "speed", "missing_words", "paraphrasing"]
        for f in cc_factor:
            user_answer = input("How would you rate the quality of {} between [1-5]: ".format(f))
            ratings.append(user_answer)
        print(ratings, np.array([ratings]).reshape(1, -1))

if __name__ == '__main__':
    main()
beta.py
This is the script I'm trying to pass the value to:
import active_learner

print("A is: ", active_learner.obj1.get_predval())
mac.py is a simple Python script using the get and set methods below.
class MachinePred:
    predval = 0  # default value

    def __init__(self):
        self.predval = 0

    def set_predval(self, val):
        self.predval = val

    def get_predval(self):
        return self.predval
The solution to this was very simple. From my understanding, it can be done either with generator-based coroutines or by splitting the work into two functions inside a class, using an OO design. The coroutine method would use yield, which forces the function to exit, return the value, and later re-enter where it left off; but this limits your ability to use non-generator-based coroutines, which I did need in order to await input from my front end.
Using a class, you can put the active-learner model and data in the __init__ method, split off the first function right after machine_prediction = learner.predict(X_pool[query_idx]) and return the values there, then perform the rest in a second function.
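To make that split concrete, here is a minimal sketch of the two-function design, free of Django and Keras, with a dummy object standing in for the modAL learner; all names are illustrative:

```python
class ActiveLearnerSession:
    """Sketch of the split: `learner` is any object with query()/predict()."""

    def __init__(self, learner, X_pool):
        self.learner = learner
        self.X_pool = X_pool
        self.query_idx = None

    def propose(self):
        """First half: run up to learner.predict and return the prediction.
        A Django view can call this and show the result to the user."""
        self.query_idx, _ = self.learner.query(self.X_pool)
        return self.learner.predict([self.X_pool[self.query_idx]])

    def submit_feedback(self, ratings):
        """Second half: called later, once the user's input has arrived."""
        return {"index": self.query_idx, "ratings": list(ratings)}
```

The state that used to live in local variables of main() (the queried index, the pool) lives on the instance instead, which is what lets execution stop between the two halves.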
I want to create a function in a class that sequentially runs a number of other functions, where which functions get executed depends on conditions: for example, a function that could write results to disk, to a database, or to both. But with millions of calls I don't want an if statement inside that single function asking every time whether the condition for database or disk writing is True or False. I wonder what the best solution for that is. I could write some kind of choseFunction() that fills a list with the functions whose conditions are True, and execute everything in that list from the writing function. Or I could create a Writing class that only has the functions that meet the conditions and inherit it into the main class. What is the common way to do such a thing?
import sys

def log(name):
    print("running:" + name)

def proc1():
    log(sys._getframe().f_code.co_name)

def proc2():
    log(sys._getframe().f_code.co_name)

def proc3():
    log(sys._getframe().f_code.co_name)

def procA():
    log(sys._getframe().f_code.co_name)

def procB():
    log(sys._getframe().f_code.co_name)

def procC():
    log(sys._getframe().f_code.co_name)

def setMyFunctions(conditions):
    # set your functions here
    option = {
        1: [proc1, proc2, proc3],
        2: [procA, procB, procC],
    }
    return option[conditions]

# ready to run as is, just try it
x = True  # change x here to see the impact
status = 1 if x else 2

toDoList = setMyFunctions(status)

# log the start
print("main started")
for task in toDoList:
    # run or deliver your task list here
    task()
print("main finished")
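Closer to the disk/database case in the question, the list can also be built once from the condition flags themselves, so the hot path contains no if at all. The writer functions below are illustrative stand-ins for the real disk and database writers:

```python
results = []  # stand-in sink so the sketch is self-contained

def write_to_disk(value):
    results.append(("disk", value))

def write_to_db(value):
    results.append(("db", value))

def build_writers(to_disk, to_db):
    """Check the conditions once, up front, and return the functions to run."""
    writers = []
    if to_disk:
        writers.append(write_to_disk)
    if to_db:
        writers.append(write_to_db)
    return writers

def write_all(writers, value):
    """Hot path: just iterate over the prebuilt list, no condition checks."""
    for w in writers:
        w(value)
```

The conditions are evaluated exactly once, in build_writers; the millions of write_all calls only pay for the loop.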
I am trying to run the same exact test on a single obj, which is a models.Model instance with relations to other models. I do not want to persist changes to that instance, so effectively I want the same effect as the tearDown method, which rolls back transactions.
To illustrate this:
class MyTestCase(django.test.TestCase):

    def test(self):
        # main test that calls the same test using all
        # different states of `obj` that need to be tested;
        # `data` holds the values that update the state of `obj`
        # (by state I simply mean the values of `obj`'s attributes and relationships)
        data = [state1, state2, state3]
        for state in data:
            obj = Obj.objects.get(pk=self.pk)  # gets that SINGLE object from the test db
            # applies the state data to `obj` to change its state
            obj.update(state)
            # performs the actual test on `obj` with this particular state
            self._test_obj(obj)

    def _test_obj(self, obj):
        self.assertEqual(len(obj.vals), 10)
        self.assertLess(obj.threshold, 99)
        # more assert statements...
This design has two problems:
The changes on obj persist in the test database, so on the next iteration the data would be tainted. I would want to roll back those changes and get a fresh instance of obj, as if the test method had just been called and we were getting the data straight from the fixtures.
If an assert statement fails I will be able to see which one it is, but I won't be able to determine which case (state) failed because of the for loop. I could try-except the _test_obj call in the test method, but then I wouldn't be able to tell which assert failed.
Does django.test provide any tool to run the same test for different states of the same model? If it doesn't, how can I do what I am trying to do while solving both points mentioned above?
Simply roll back after you're done with the object.
You can use the new subTest in Python 3.4+.
Here's how your code could look:
from django.db import transaction, DatabaseError
from django.test import TestCase

class TestProductApp(TestCase):

    def setUp(self):
        self.product1 = ...

    def test_multistate(self):
        state1 = dict(name='p1')
        state2 = dict(name='p2')
        data = [state1, state2]
        for i, state in enumerate(data):
            with self.subTest(i=i):
                try:
                    with transaction.atomic():
                        product = Product.objects.get(id=self.product1.id)
                        product.name = state['name']
                        product.save()
                        self.assertEqual(len(product.name), 2)
                        raise DatabaseError  # forces a rollback
                except DatabaseError:
                    pass
        print(Product.objects.get(id=self.product1.id))  # prints data created in setUp/fixture
This answer can be improved: rather than forcing a rollback with an error, you can simply mark the atomic block for rollback. See transaction.set_rollback().
from django.db import transaction
from django.test import TestCase

class TestProductApp(TestCase):

    def setUp(self):
        self.product1 = ...

    def test_multistate(self):
        state1 = dict(name='p1')
        state2 = dict(name='p2')
        data = [state1, state2]
        for i, state in enumerate(data):
            with self.subTest(i=i):
                with transaction.atomic():
                    product = Product.objects.get(id=self.product1.id)
                    product.name = state['name']
                    product.save()
                    self.assertEqual(len(product.name), 2)
                    transaction.set_rollback(True)  # forces a rollback
        print(Product.objects.get(id=self.product1.id))  # prints data created in setUp/fixture
I want to log every action that is done to certain SQLAlchemy models.
So I have after_insert, after_delete and before_update hooks, where I save the previous and current representation of the model:
def keep_logs(cls):
    @event.listens_for(cls, 'after_delete')
    def after_delete_trigger(mapper, connection, target):
        pass

    @event.listens_for(cls, 'after_insert')
    def after_insert_trigger(mapper, connection, target):
        pass

    @event.listens_for(cls, 'before_update')
    def before_update_trigger(mapper, connection, target):
        prev = cls.query.filter_by(id=target.id).one()
        # comparing previous and current model

MODELS_TO_LOGGING = (
    User,
)

for cls in MODELS_TO_LOGGING:
    keep_logs(cls)
But there is one problem: when I try to fetch the model in the before_update hook, SQLAlchemy returns the modified (dirty) version.
How can I get the previous version of the model before updating it?
Is there a different way to keep track of model changes?
Thanks!
SQLAlchemy tracks the changes to each attribute. You don't need to (and shouldn't) query the instance again in the event. Additionally, the event is triggered for any instance that has been modified, even if that modification will not change any data. Loop over each column, checking if it has been modified, and store any new values.
@event.listens_for(cls, 'before_update')
def before_update(mapper, connection, target):
    state = db.inspect(target)
    changes = {}
    for attr in state.attrs:
        hist = attr.load_history()
        if not hist.has_changes():
            continue
        # hist.deleted holds the old value
        # hist.added holds the new value
        changes[attr.key] = hist.added
    # now `changes` maps attribute keys to their new values
I had a similar problem, but wanted to keep track of the deltas as changes are made to SQLAlchemy models, not just the new values. I wrote this slight extension to davidism's answer to do that, along with slightly better handling of before and after values, since they are sometimes lists and sometimes empty tuples:
from sqlalchemy import inspect

def get_model_changes(model):
    """
    Return a dictionary containing changes made to the model since it was
    fetched from the database.

    The dictionary is of the form {'property_name': [old_value, new_value]}

    Example:
        user = get_user_by_id(420)
        >>> '<User id=420 email="business_email@gmail.com">'
        get_model_changes(user)
        >>> {}
        user.email = 'new_email@who-dis.biz'
        get_model_changes(user)
        >>> {'email': ['business_email@gmail.com', 'new_email@who-dis.biz']}
    """
    state = inspect(model)
    changes = {}
    for attr in state.attrs:
        hist = state.get_history(attr.key, True)
        if not hist.has_changes():
            continue
        old_value = hist.deleted[0] if hist.deleted else None
        new_value = hist.added[0] if hist.added else None
        changes[attr.key] = [old_value, new_value]
    return changes

def has_model_changed(model):
    """
    Return True if there are any unsaved changes on the model.
    """
    return bool(get_model_changes(model))
If an attribute is expired (which sessions do by default on commit), the old value is not available unless it was loaded before being changed. You can see this with the inspection API:

state = inspect(entity)
session.commit()
state.attrs.my_attribute.history  # History(added=None, unchanged=None, deleted=None)

# load history manually
state.attrs.my_attribute.load_history()
state.attrs.my_attribute.history  # History(added=(), unchanged=['my_value'], deleted=())

To keep attributes loaded, tell the session not to expire entities by setting expire_on_commit=False.
I am confused as to how I can use certain attributes that are returned by a query to a local SQLite database. I can populate a QListWidget with one of the attributes, but I do not know how to get the other attributes when a user clicks on the list-widget item.
The following code was created using Eric, which pre-populates some of the signals and slots.
@pyqtSignature("QString")
def on_searchText_textEdited(self, p0):
    """
    Slot documentation goes here.
    """
    # TODO: not implemented yet
    self.resultsList.clear()
    self.searchItem = self.searchText.text()
    self.search()

@pyqtSignature("QListWidgetItem*")
def on_resultsList_itemClicked(self, item):
    """
    Slot documentation goes here.
    """
    # TODO: not implemented yet
    result = str(item.text())
    QMessageBox.about(self, "Clicked Item", "%s") % (result)

@pyqtSignature("")
def on_cancelButton_clicked(self):
    """
    Slot documentation goes here.
    """
    self.close()

def search(self):
    conn = sqlite3.connect("C:\\file.sqlite")
    cur = conn.cursor()
    sqlqry = "SELECT name, number, size FROM lookup WHERE name LIKE '%s' LIMIT 100;" % ("%" + self.searchItem + "%")
    try:
        c = cur.execute(sqlqry)
        data = c.fetchall()
        for i in data:
            self.resultsList.addItem(i[0])
    except sqlite3.Error as e:
        QMessageBox.about(self, "Error message", "Error")
So my resultsList gets populated when the user enters text into the line edit, but when a user clicks on an item I get an error from the message box saying something about NoneType and str.
However, what I really need are the second and third attributes, for use elsewhere in my code.
So how do I get at those attributes through the itemClicked signal and create two new variables?
I hope that makes sense; it has been a long day going round in circles.
You just need to query from the database again and work with the new row.
@pyqtSignature("QListWidgetItem*")
def on_resultsList_itemClicked(self, item):
    """
    Slot documentation goes here.
    """
    result = str(item.text())
    QMessageBox.about(self, "Clicked Item", "%s" % result)
    conn = sqlite3.connect("C:\\file.sqlite")
    cur = conn.cursor()
    sqlqry = "SELECT name, number, size FROM lookup WHERE name = '%s' LIMIT 1;" % result
    try:
        c = cur.execute(sqlqry)
        data = c.fetchone()
        # do something with data: data[1] is number, data[2] is size
    except sqlite3.Error as e:
        QMessageBox.about(self, "Error message", "Error fetching %s" % result)
Obviously, this doesn't deal with the database sanitisation issues you might have, and it assumes that name is unique in the database.
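An alternative that avoids the second query is to remember the full rows when the list is first populated and look them up by name on click (in real Qt code you could instead attach the tuple to the item with QListWidgetItem.setData and Qt.UserRole). A Qt-free sketch using an illustrative in-memory table:

```python
import sqlite3

# illustrative in-memory stand-in for the real lookup table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup (name TEXT, number INTEGER, size INTEGER)")
conn.execute("INSERT INTO lookup VALUES ('alpha', 7, 100), ('beta', 9, 250)")

rows_by_name = {}

def search(term):
    """Populate the result list once, remembering the full row per name."""
    # parameterised query, which also avoids the sanitisation issue
    cur = conn.execute(
        "SELECT name, number, size FROM lookup WHERE name LIKE ? LIMIT 100",
        ("%" + term + "%",))
    names = []
    for name, number, size in cur.fetchall():
        rows_by_name[name] = (number, size)  # keep the other attributes
        names.append(name)                   # only the name is displayed
    return names

def on_item_clicked(name):
    """Stand-in for the itemClicked slot: recover the stored attributes."""
    number, size = rows_by_name[name]
    return number, size
```

This trades a little memory for a round trip to the database per click, and it works even when name is not unique if you key the dict by list row index instead.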