Multiple Try-Excepts followed by an Else in Python

Is there some way to have several consecutive Try-Except clauses that trigger a single Else only if all of them are successful?
As an example:
try:
    private.anodization_voltage_meter = Voltmeter(voltage_meter_address.value)  # assign voltmeter location
except visa.VisaIOError:  # channel timed out
    private.logger.warning('Volt Meter is not on or not on this channel')
try:
    private.anodization_current_meter = Voltmeter(current_meter_address.value)  # assign voltmeter as current meter location
except visa.VisaIOError:  # channel timed out
    private.logger.warning('Ammeter is not on or not on this channel')
try:
    private.sample_thermometer = Voltmeter(sample_thermometer_address.value)  # assign voltmeter as thermometer location for sample
except visa.VisaIOError:  # channel timed out
    private.logger.warning('Sample Thermometer is not on or not on this channel')
try:
    private.heater_thermometer = Voltmeter(heater_thermometer_address.value)  # assign voltmeter as thermometer location for heater
except visa.VisaIOError:  # channel timed out
    private.logger.warning('Heater Thermometer is not on or not on this channel')
else:
    private.logger.info('Meters initialized')
As you can see, I only want to log 'Meters initialized' if all of the meters came up, but as currently written the else depends only on the heater thermometer. Is there some way to stack these?

Personally I'd just have a trip variable, init_ok or some such.
Set it to True at the start, have every except clause set it to False, then test it at the end.
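A minimal sketch of that idea (the visa import and the surrounding names are assumptions carried over from the question):
import visa  # assumed: the PyVISA module the question's VisaIOError comes from

init_ok = True  # trip variable: stays True only if every meter initializes

try:
    private.anodization_voltage_meter = Voltmeter(voltage_meter_address.value)
except visa.VisaIOError:
    private.logger.warning('Volt Meter is not on or not on this channel')
    init_ok = False  # trip the flag instead of relying on try/else

# ... repeat for the other three meters ...

if init_ok:
    private.logger.info('Meters initialized')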

Consider breaking the try/except structure into a function that returns True if the call worked and False if it failed, then use e.g. all() to see that they all succeeded:
def initfunc(structure, attrname, address, desc):
    try:
        var = Voltmeter(address.value)
        setattr(structure, attrname, var)
        return True
    except visa.VisaIOError:
        structure.logger.warning('%s is not on or not on this channel' % (desc,))
        return False

if all([initfunc(*x) for x in [(private, 'anodization_voltage_meter', voltage_meter_address, 'Volt Meter'), ...]]):
    private.logger.info('Meters initialized')
(The list comprehension, rather than a generator expression, matters here: it runs every initfunc up front, so all warnings get logged even after the first failure.)

Try something like this, keeping the original behaviour of not stopping after the first exception:
success = True
for meter, address, message in (
        ('anodization_voltage_meter', voltage_meter_address, 'Volt Meter is not on or not on this channel'),
        ('anodization_current_meter', current_meter_address, 'Ammeter is not on or not on this channel'),
        ('sample_thermometer', sample_thermometer_address, 'Sample Thermometer is not on or not on this channel'),
        ('heater_thermometer', heater_thermometer_address, 'Heater Thermometer is not on or not on this channel')):
    try:
        setattr(private, meter, Voltmeter(address.value))
    except visa.VisaIOError:
        success = False
        private.logger.warning(message)
if success:  # everything is ok
    private.logger.info('Meters initialized')

You can keep a boolean, initialized at the beginning to everythingOK = True.
Then set it to False in all the except blocks and log the final line only if it is still True.

You've got a number of quite similar objects, and in cases like that it's often better to treat them uniformly and use a data-driven approach.
So, I'd start with your ..._meter_address objects. To me they sound like configuration for the meters, so I'd have a class that looks something like this:
class MeterConfiguration(object):
    def __init__(self, name, address):
        self.name = name
        self.address = address

    def english_name(self):
        """A readable form of the name for this meter."""
        return ' '.join(x.title() for x in self.name.split('_')) + ' Meter'
Perhaps you have some more configuration (currently stored in variables) that could go in here too.
Then, I'd have a summary of all the meters that your program deals with. I'm creating these statically, but it may well be right for you to read all or part of this information from a config file. I've no idea what your addresses look like, so I made something up :)
ALL_METERS = [
    MeterConfiguration('anodization_voltage', 'PORT0001'),
    MeterConfiguration('anodization_current', 'PORT0002'),
    MeterConfiguration('sample_thermometer', 'PORT0003'),
    MeterConfiguration('heater_thermometer', 'PORT0004'),
]
Cool, now all configuration is in one place, and we can use this to make things uniform and simpler.
private.meters = {}
any_errors = False
for meter in ALL_METERS:
    try:
        private.meters[meter.name] = Voltmeter(meter.address)
    except visa.VisaIOError:
        logging.error('%s not found at %s', meter.english_name(), meter.address)
        any_errors = True
if not any_errors:
    logging.info('All meters initialized.')
You can use, for example, private.meters['anodization_voltage'] to refer to a particular meter, or iterate over the meters dict if you need to do something to all of them, as in the sketch below.
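For instance, a minimal sketch of both access patterns (the .read() method on Voltmeter is hypothetical, just to show a per-meter call):
# Look up one specific meter by name.
anodization_voltage = private.meters['anodization_voltage'].read()  # .read() is hypothetical

# Or treat them all uniformly.
for name, meter in private.meters.items():
    logging.info('%s is ready', name)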


How to speed up stacked queries in Django using caching

Currently my project has a hierarchy of models:
Sensor class: which stores different sensors
Entry class: which stores whenever a new data entry happens
Data class: which stores data pairs for each entry.
The extra Data class exists because I have different fields for different sensors and I wanted to store them all in one database. Access goes through two methods: one on Entry that gathers its data, and one on Sensor, as follows:
import math
from datetime import datetime

from django.db import models
from django.db.models import CASCADE


class Data(models.Model):
    key = models.CharField(max_length=50)
    value = models.CharField(max_length=50)
    # String reference, since Entry is defined further down in this module
    entry = models.ForeignKey("Entry", on_delete=CASCADE, related_name="dataFields")

    def __str__(self):
        return str(self.id) + "-" + self.key + ":" + self.value


class Entry(models.Model):
    time = models.DateTimeField()
    sensor = models.ForeignKey("Sensor", on_delete=CASCADE, related_name="entriesList")

    def generateFields(self, field=None):
        """
        Generate the fields by merging the Entry and data fields.
        Outputs a dictionary which contains all fields.
        """
        if field is None:
            field_set = self.dataFields.all()
        else:
            field_set = self.dataFields.filter(key=field)
        output = {"timestamp": self.time.timestamp()}
        for key, value in field_set.values_list("key", "value"):
            try:
                floated = float(value)
                isnan = math.isnan(floated)
            except (TypeError, ValueError):
                continue
            if isnan:
                return None
            else:
                output[key] = floated
        return output


class Sensor(models.Model):
    .
    .
    .

    def generateData(self, fromTime, toTime, field=None):
        fromTime = datetime.fromtimestamp(fromTime)
        toTime = datetime.fromtimestamp(toTime)
        entries = self.entriesList.filter(time__range=(toTime, fromTime)).order_by(
            "time"
        )
        output = []
        for entry in entries:
            value = entry.generateFields(field)
            if value is not None:
                output.append(value)
        return output
After trying to troubleshoot the timing (running this query for ~5000-10000 entries took too long, almost 10 seconds!), I found that most of the time (about 95%) was spent in generateFields(). I've been looking at options for caching it (using cached_property) and other methods, but none have really worked so far.
Is there a way to store the results of generateFields() in the database automatically upon saving the model, perhaps? Or possibly just to save the results of the reverse query self.dataFields.all()? I can tell that's the main culprit, since for 5000 entries there are at least 25000 data fields on average.
(Thanks to Jacinator for the notes and improvements.) The code above incorporates Jacinator's changes, but the issue (and the question) still stands, as caching the fields would speed the process up by almost 25-50 times(!), which is critical when my actual dataset can be much larger (~1 minute to run a query isn't really acceptable).
I think this is how I would consider writing generateFields.
class Entry(models.Model):
    ...

    def generateFields(self, field=None):
        """
        Generate the fields by merging the Entry and data fields.
        Outputs a dictionary which contains all fields.
        """
        if field is None:
            field_set = self.dataFields.all()
        else:
            field_set = self.dataFields.filter(key=field)
        output = {"timestamp": self.time.timestamp()}
        for key, value in field_set.values_list("key", "value"):
            try:
                floated = float(value)
                isnan = math.isnan(floated)
            except (TypeError, ValueError):
                continue
            if isnan:
                return None
            else:
                output[key] = floated
        return output
First, I'm going to avoid comparing the field (if it's provided) in Python. I can use the queryset .filter to pass that off to the SQL.
if field is None:
    field_set = self.dataFields.all()
else:
    field_set = self.dataFields.filter(key=field)
Second, I'm using QuerySet.values_list to retrieve the values from the entries. I could be wrong (please correct me if so), but I think this also passes off the attribute retrieval to the SQL. I don't actually know if it's faster, but I suspect it might be.
for key, value in field_set.values_list("key", "value"):
I've restructured the try/except block, but that has less to do with increasing the speed and more to do with making it explicit what errors are being caught and what lines are raising them.
try:
    floated = float(value)
    isnan = math.isnan(floated)
except (TypeError, ValueError):
    continue
The lines outside of the try/except now should be problem free.
if isnan:
    return None
else:
    output[key] = floated
I'm a little unfamiliar with QuerySet.prefetch_related, but I think adding it to this line would also help.
entries = self.entriesList.filter(time__range=(toTime, fromTime)).order_by(
    "time").prefetch_related("dataFields")
While it is not a perfect solution, below is the approach I am using at the moment. (I am still looking for something better, but I believe this might be useful to others who face the same situation.)
I am using django-picklefield to store the cachedData, declaring it as:
from picklefield.fields import PickledObjectField
.
.
.
class Entry(models.Model):
    time = models.DateTimeField()
    sensor = models.ForeignKey(Sensor, on_delete=CASCADE, related_name="entriesList")
    cachedValues = PickledObjectField(null=True)
Next, I am adding a property to both generate the value and return it:
@property
def data(self):
    if self.cachedValues is None:
        fields = self.generateFields()
        self.cachedValues = fields
        self.save()
        return fields
    else:
        return self.cachedValues
Normally this field will be set automatically when new data is added. However, since I already have a large amount of data present, and while I could simply wait until each entry is accessed (future accesses will be much faster), I decided to quick-index everything by running this:
def mass_set(request):
    clear = lambda: os.system("clear")
    entries = Entry.objects.all()
    length = len(entries)
    for count, entry in enumerate(entries):
        _ = entry.data
        clear()
        print(f"{count}/{length} done")
Finally, below are the benchmarks for a set of 2230 fields running on my local development machine, measuring the main loop as such:
def generateData(self, fromTime, toTime, field=None):
    fromTime = datetime.fromtimestamp(fromTime)
    toTime = datetime.fromtimestamp(toTime)
    # entries = self.entriesList.filter().order_by('time')
    entries = self.entriesList.filter(time__range=(toTime, fromTime)).order_by(
        "time"
    )
    output = []
    totalTimeLoop = 0
    totalTimeGenerate = 0
    stage1 = time.time()
    stage2 = time.time()
    for entry in entries:
        stage1 = time.time()
        totalTimeLoop += stage1 - stage2
        # value = entry.generateFields(field)
        value = entry.data
        stage2 = time.time()
        totalTimeGenerate += stage2 - stage1
        if value is not None:
            output.append(value)
    print(f"Total time spent generating the loop: {totalTimeLoop}")
    print(f"Total time spent creating the fields: {totalTimeGenerate}")
    return output
Before:
Loop Generation Time: 0.1659650
Fields Generation Time: 3.1726377
After:
Loop Generation Time: 0.1614456
Fields Generation Time: 0.0032608
That is about a thousand-fold decrease in field generation time, and a 20x speed increase overall.
As for the cons, there are mainly two:
Applying it to my current dataset (167 thousand fields) takes a considerable amount of time: around 23 minutes on my local machine, and probably an hour or two on the live servers. However, it is a one-time process, since all future entries are cached automatically with minimal performance impact by adding the following lines:
_ = newEntry.data
newEntry.save()
The other is database size: mine went from 100.8 MB to 132.9 MB, a significant increase of 31% that could be problematic for a database a few scales larger than mine, but a good enough trade-off for small/medium DBs where storage matters less than speed.
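One gap in this approach, flagged as my own note rather than part of the answer above, is invalidation: if a Data row changes after cachedValues has been set, the property keeps returning stale results. A sketch of one way to handle that with a post_save signal:
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Data)
def invalidate_entry_cache(sender, instance, **kwargs):
    # Clear the cached dict whenever a Data row is saved; the next
    # access to entry.data regenerates and re-saves it.
    entry = instance.entry
    if entry.cachedValues is not None:
        entry.cachedValues = None
        entry.save(update_fields=["cachedValues"])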

Need to delay the item changed signal till my data is loaded

I'm loading a MySQL database into a tablewidget.
I have two functions connected to the itemChanged signal of my QTableWidget. The first makes a list of the changed cells, while the second manages the data types input into the cells, bringing up an error message if the wrong data type is entered.
The problem is that these functions fire while I am loading the database in, storing a list of cells I don't want and popping up continuous error messages during the load.
How do I stop the functions from firing until after the database is loaded?
def log_change(self, item):
    self.changed_items.append([item.row(), item.column()])

def item_changed(self, Qitem, item):
    if (item.column()) % 2 == 0:
        try:
            test = float(Qitem.text())
        except ValueError:
            Msgbox = QMessageBox()
            Msgbox.setText("Error, value must be number!")
            Msgbox.exec()
            Qitem.setText(str(''))
There are 2 options:
Use a flag:
# in constructor
self.is_loading = False

def load_from_db(self):
    # in load function
    self.is_loading = True
    # load from db
    # ...
    self.is_loading = False

# foo_callback is item_changed, for example
def foo_callback(self, arg1, arg2, ...):
    if self.is_loading:
        return
    # do work
Use blockSignals
self.tablewidget.blockSignals(True)
# load from db
# ...
self.tablewidget.blockSignals(False)
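One refinement worth adding, as my own note: if the load can raise, wrap it in try/finally so the signals are always unblocked (load_from_db is the hypothetical load routine from the first option):
self.tablewidget.blockSignals(True)
try:
    self.load_from_db()  # hypothetical load routine
finally:
    # re-enable signals even if the load raises
    self.tablewidget.blockSignals(False)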

How can I use an 'update' function to 'open' and update if the object is not yet open?

My example is a progress bar
In its simplest form a progress bar is
bar = ProgressBar.Open()
for item in list:
    bar.Update(count, len(list))
I would instead like my calling code to be
for item in list:
    bar.Update(count, len(list))
I want my Update() function to Open() a bar for the caller if one is not already open. The caller doesn't need any other access to the bar than to update it, so there's no value in having the meter handle.
How can I retain state to tell if the Update had been previously called?
I could create a global variable and keep track that way, but I have a gut sense there's a Pythonista way of doing it.
Trying again, but in a way that has no application to stumble on.
The base question is:
I have a function that will be called multiple times.
I want to do something different the first time it is called.
How can a function in Python do that?
In C, that of course would be a...
static variable
I'm just now kinda figuring it out as I type, sorry.
========================
I'm sure all these edits are not how stackoverflow is supposed to work. I'm sorry for not getting it right yet, but am very appreciative of the replies.
Despite it sounding like I'm breaking all the rules of good practice, it's from the CALLER'S point of view that I had hoped to make an impact.
What if the only thing you needed to do to add a progress meter, even for debugging, to your program was make a call to a progress meter update in the location you want to show progress?
That's the underlying motivation. Slide in 1-line, get something cool for the trouble.
This progress meter was added to my otherwise boring file de-duplicator by adding just the single call:
msg = (f'Deduplicating {idx} of {total_files} files\n'
       f'{dup_count} Dupes found\n'
       f'{small_count} Too small')
not_cancelled = sGUI.ProgressBar('De-dupe', msg, idx, total_files)
To avoid using global variables, you can use a decorator. Here's a simple example (Python 2):
def open():
    print 'open'

def update():
    print 'update'

def call_once(func1, *args1, **kwargs1):
    def decorator(func2):
        called = [False]
        def wrapper(*args2, **kwargs2):
            if not called[0]:
                func1(*args1, **kwargs1)
                called[0] = True
            return func2(*args2, **kwargs2)
        return wrapper
    return decorator

@call_once(open)
def my_update():
    update()

for i in xrange(5):
    my_update()
which gives the result:
open
update
update
update
update
update
For more information about decorator, please visit: https://wiki.python.org/moin/PythonDecorators
For what you want, you can use a class:
class ProgressBar:
    def __init__(self):
        self._opened = False

    def Open(self):
        print("Open")

    def Update(self):
        if self._opened:
            print("Update!")
        else:
            self.Open()
            print("set flag")
            self._opened = True
            print("Update")
In action:
In [32]: bar = ProgressBar()
In [33]: bar.Update()
Open
set flag
Update
In [34]: bar.Update()
Update!
Note: I copied your casing so as to make it more clear to you, however, the official Python style would be like this:
class ProgressBar:
    def __init__(self):
        self._opened = False

    def open(self):
        pass  # open stuff

    def update(self):
        if self._opened:
            pass  # update stuff
        else:
            self.open()
            self._opened = True
Using snake_case for everything except the ClassName.
OK, I found a solution using 'globals'. I thought that a nested function was the way to do it... then I mixed the two.
By 'globals' I meant variables declared outside the scope of a function. I want to be able to import my module without the import creating anything.
Here's the code that shows how to do this with globals
def my_update(amount):
    global flag
    if 'flag' in globals():
        print('I have been here')
    else:
        print('I have not been here')
        flag = True
    return

for i in range(10):
    print(f'Calling number {i}')
    result = my_update(1)
It does the job for the goals I had set out, but I'm SURE there are better, safer ways that are more elegant as well.
I posted this question on a Python forum and got back the best answer so far using a function attribute. It's brilliant and it works.
Here is code that demonstrates this construct... it should go in everyone's 'Favorite Python Constructs' notebook in my opinion.
def func():
    if not hasattr(func, 'static_variable'):
        func.static_variable = 0
    func.static_variable += 1
    return func.static_variable

def main():
    for i in range(10):
        print('func = {}'.format(func()))

if __name__ == '__main__':
    main()
The output is
func = 1
func = 2
func = 3
func = 4
func = 5
func = 6
func = 7
func = 8
func = 9
func = 10

In Python 2.7, how can I return calculations without defining variables in the constructor?

My question is about getter/setter-type functionality in Python. I have a class, Week_Of_Meetings, that takes a blob of data from my Google Calendar and does some calculations on it.
wom = Week_Of_Meetings(google_meetings_blob)
I want to be able to return something like:
wom.total_seconds_in_meetings() # returns 36000
But I'm not understanding how the getter/setter-type @property decorator can help me do this. In Java I would use member variables, but you don't interact with them the same way in Python. How can I return calculations without starting with them in the constructor?
class Week_Of_Meetings:
    def __init__(self, google_meetings_blob):
        self.google_meetings_blob = google_meetings_blob

    def get_meetings_list(self, google_meetings_blob):
        meetings_list = []
        for meeting_id, meeting in enumerate(self.google_meetings_blob, 1):
            summary = self._get_summary(meeting)
            start = parse(meeting['start'].get('dateTime', meeting['start'].get('date')))
            end = parse(meeting['end'].get('dateTime', meeting['end'].get('date')))
            duration = end - start
            num_attendees = self._get_num_attendees(meeting.get('attendees'))
            m = Meeting(meeting_id, summary, start, end, duration, num_attendees)
            meetings_list.append(m)
        return meetings_list

    def _get_summary(self, meeting):
        summary = meeting.get('summary', 'No summary given')
        return summary

    def _get_num_attendees(self, num_attendees):
        if num_attendees is None:
            num_attendees = 1  # if invited only self to meeting
        else:
            num_attendees = len(num_attendees)
        return num_attendees
When I add self.total_seconds_in_meetings to __init__() I get "NameError: global name 'total_seconds_in_meetings' is not defined." That makes sense: it hasn't been defined. But I can't define it when it's supposed to be the result of calculations done on the google_meetings_blob. So I'm confused about where total_seconds_in_meetings goes in the class.
Thank you for the help!
Of course Python has member variables. How would classes work without them? You can set and get any instance data via self, as you are already doing with self.google_meetings_blob in __init__.
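To make that concrete, a minimal sketch of what the asker seems to be after; the duration arithmetic assumes the Meeting objects built in get_meetings_list above:
class Week_Of_Meetings(object):
    def __init__(self, google_meetings_blob):
        self.google_meetings_blob = google_meetings_blob

    @property
    def total_seconds_in_meetings(self):
        # Computed on access from the blob; nothing needs to be
        # pre-declared in __init__.
        meetings = self.get_meetings_list(self.google_meetings_blob)
        return sum(m.duration.total_seconds() for m in meetings)
Note that with @property the caller writes wom.total_seconds_in_meetings without parentheses; drop the decorator to keep the wom.total_seconds_in_meetings() call style from the question.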

How to get entities which don't have a certain attribute in the datastore

I'm trying to make an appraisal system.
This is my class:
class Goal(db.Expando):
    GID = db.IntegerProperty(required=True)
    description = db.TextProperty(required=True)
    time = db.FloatProperty(required=True)
    weight = db.IntegerProperty(required=True)
    Emp = db.UserProperty(auto_current_user=True)
    Status = db.BooleanProperty(default=False)
The following things are given by the employee:
class SubmitGoal(webapp.RequestHandler):
    def post(self):
        dtw = simplejson.loads(self.request.body)
        try:
            maxid = Goal.all().order("-GID").get().GID + 1
        except:
            maxid = 1
        try:
            g = Goal(GID=maxid, description=dtw[0], time=float(dtw[1]), weight=int(dtw[2]))
            g.put()
            self.response.out.write(simplejson.dumps("Submitted"))
        except:
            self.response.out.write(simplejson.dumps("Error"))
Now, here the manager checks the goals and approves them or not. If approved, Status is stored as True in the datastore, else False:
idsta = simplejson.loads(self.request.body)
try:
    g = db.Query(Goal).filter("GID =", int(idsta[0])).get()
    if g:
        if idsta[1]:
            g.Status = True
            try:
                del g.Comments
            except:
                None
        else:
            g.Status = False
            g.Comments = idsta[2]
        db.put(g)
        self.response.out.write(simplejson.dumps("Submitted"))
except:
    self.response.out.write(simplejson.dumps("Error"))
Now, this is where I'm stuck: "filter('Status =', True)" returns all the entities whose Status is True, i.e. the ones that are approved. I want the entities which are approved AND which have not yet been assessed by an employee:
def get(self):
    t = []
    for g in Goal.all().filter("Status = ", True):
        t.append([g.GID, g.description, g.time, g.weight, g.Emp])
    self.response.out.write(simplejson.dumps(t))

def post(self):
    idasm = simplejson.loads(self.request.body)
    try:
        g = db.Query(Goal).filter("GID =", int(idasm[0])).get()
        if g:
            g.AsmEmp = idasm[1]
            db.put(g)
            self.response.out.write(simplejson.dumps("Submitted"))
    except:
        self.response.out.write(simplejson.dumps("Error"))
How am I supposed to do this? I know that if I add another filter like "filter('AsmEmp =', not None)", it will only return those entities which have the AsmEmp attribute, and what I need is the reverse.
You explicitly can't do this. As the documentation states:
It is not possible to query for entities that are missing a given property.
Instead, create a property for is_assessed which defaults to False, and query on that.
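A sketch of that suggestion (the property name is_assessed is my own, and existing entities would need a one-off backfill, since datastore queries only match entities that actually have the property):
class Goal(db.Expando):
    # ... existing properties ...
    Status = db.BooleanProperty(default=False)
    is_assessed = db.BooleanProperty(default=False)

# Set it when the employee submits an assessment:
g.is_assessed = True
g.put()

# Approved but not yet assessed:
pending = Goal.all().filter("Status =", True).filter("is_assessed =", False)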
Could you not simply add another field for when it is assessed, employee_assessed = db.user..., and only populate it at the time the goal is assessed?
The records do not lack the attribute in the datastore, it's simply set to None. You can query for those records with Goal.all().filter('status =', True).filter('AsmEmp =', None).
A few incidental suggestions about your code:
'Status' is a rather unintuitive name for a boolean.
It's generally good Python style to begin properties and attributes with a lower-case letter.
You shouldn't iterate over a query directly. That fetches results in batches, and is much less efficient than doing an explicit fetch. Instead, fetch the number of results you need with .fetch(n), as in the sketch after this list.
A try/except with no exception class specified and no action taken when an exception occurs is a very bad idea, and can mask a wide variety of issues.
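For instance, a minimal sketch of the .fetch(n) suggestion (100 is an arbitrary limit):
# Fetch up to 100 approved goals in one round trip instead of batch-iterating.
for g in Goal.all().filter("Status =", True).fetch(100):
    t.append([g.GID, g.description, g.time, g.weight, g.Emp])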
Edit: I didn't notice that you were using an Expando, in which case @Daniel's answer is correct. There doesn't seem to be any good reason to use Expando here, though. Adding the property to the model (and updating existing entities) would be the easiest solution.
