clingo compiler computing multiple values for #min - python

I’m having a basic issue whilst using Python scripting in ASP / clingo (version 4+). I’ve reconstructed the problem with a minimal example to illustrate the point. Obviously, in the example, I don’t need to use scripts. In my more complicated application, however, I do, which is why I have artificially recreated the problem in a more comprehensible fashion.
The issue is that, whilst evaluating the aggregate/optimisation, the grounder somehow does not see the full set of facts used to index the values. Instead, it appears to compute the minimum successively and, as a result, spits out all the intermediate values along the way. (See the output below: notice that the minimum goes from 53 to 19, and then does not change when 29 is added. This is highly sensitive to the order of the prg.ground calls in the #script (python) part of the code.)
This is highly undesirable, and I would like to know how to avoid this problem. I.e., how can I amend the code below, still utilising a Python script (potentially modified), so that the correct model is computed? (In the example, obviously, the intended solution for the predicate min_sel_weight/1 is min_sel_weight(19), with no further values.)
The Programme.
weight("ant",3). weight("bat",53). weight("cat",19). weight("dot",13). weight("eel",29).
#script (python)
import gringo

def main(prg):
    prg.ground([('base', [])])
    prg.ground([('sel', ['bat'])])
    prg.ground([('sel', ['cat'])])
    prg.ground([('sel', ['eel'])])
    prg.solve()
#end.
%% call python-script, to select certain objects.
#program sel(t). sel(t).
%% compute minimum of weights of selected objects:
min_sel_weight(X) :- weight(_,X), #min {XX : weight(OBJ,XX),sel(OBJ)} = X.
#show sel/1. #show min_sel_weight/1.
Calling clingo 0 myprogramme.lp I obtain the following output:
clingo version 4.5.4
Reading from myprogramme.lp
Solving...
Answer: 1
sel("bat")
min_sel_weight(53)
sel("cat")
min_sel_weight(19)
sel("eel")
SATISFIABLE
Models : 1
Calls : 1
Time : 0.096s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)
CPU Time : 0.040s

Try this:
% instance
weight("ant",3). weight("bat",53). weight("cat",19). weight("dot",13). weight("eel",29).
% Assuming you will get certain selected objects like this:
selected("cat"). selected("bat"). selected("eel"). %this will be python generated
% encoding
selectedWeight(OBJ, XX):- weight(OBJ,XX), selected(OBJ).
1{min_sel_weight(X)}1 :- selectedWeight(_,X), #min {XX : selectedWeight(OBJ,XX),selected(OBJ)} = X.
#show min_sel_weight/1.
Output:
min_sel_weight(19)

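For what it’s worth, the intermediate values appear because the min_sel_weight rule is written after the #program sel(t) directive, so it belongs to that part and is re-grounded, with its aggregate re-evaluated over whatever sel facts exist so far, at every prg.ground call. If the selection still has to come from Python, here is a minimal sketch of a script-based fix, assuming the aggregate rule is moved into a part of its own (compute_min is a placeholder name) that is grounded only once all selections are in place:
#program compute_min.
min_sel_weight(X) :- weight(_,X), #min {XX : weight(OBJ,XX), sel(OBJ)} = X.

#script (python)
import gringo

def main(prg):
    selection = ['bat', 'cat', 'eel']   # computed on the Python side in the real application
    prg.ground([('base', [])] + [('sel', [s]) for s in selection])
    prg.ground([('compute_min', [])])   # the aggregate now sees the complete selection
    prg.solve()
#end.
Grounded this way, the rule is instantiated only after all sel facts exist, so only min_sel_weight(19) is derived.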
Related

How to get the list of matched feature names along with the predict_prob in CalibratedClassifierCV?

I am trying to find the profanity score of a given text received in chats.
For this, I went through a couple of Python libraries and found some relevant ones:
profanity-check
alt-profanity-check -- (currently using)
profanity-filter
detoxify
Now, the one which I am using (profanity-check) gives me proper results when using
predict and predict_prob against the calibrated classifier used under the hood after training.
The problem is that I am unable to identify the words which were used to make the prediction or to calculate the probability; in short, the list of feature names (profane words) used in the test data when it is passed as input.
I know there are no methods to return the same, but I would like to fork and use the library.
I wanted to understand if we can add something to this place (edit) to create a method for the same.
E.g.:
text = ["this is crap"]
predict([text]) - array([1])
predict_prob([text]) - array([0.99868968])
> predict_words([text]) - array(["crap"]) ---- (NEED THIS)
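A hedged sketch of what such a helper could look like, assuming the fitted vectorizer and the underlying linear model (e.g. the LinearSVC wrapped by the CalibratedClassifierCV) can be pulled out of the library; the names vectorizer and linear_model are placeholders, not part of the library's API:
import numpy as np

def predict_words(vectorizer, linear_model, texts, top_n=5):
    X = vectorizer.transform(texts)                       # sparse document-term matrix
    vocab = np.array(vectorizer.get_feature_names_out())  # feature (word) names
    coefs = linear_model.coef_.ravel()                    # one learned weight per feature
    results = []
    for i in range(X.shape[0]):
        cols = X[i].nonzero()[1]                          # features present in this text
        # keep the tokens whose weights push the score toward the profane class
        ranked = sorted(cols, key=lambda c: coefs[c], reverse=True)
        results.append([vocab[c] for c in ranked[:top_n] if coefs[c] > 0])
    return results
With a bag-of-words style vectorizer, predict_words(vectorizer, linear_model, ["this is crap"]) would then return something like [["crap"]].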

Dynamo Revit set formula for a parameter in a family

I am trying to add a formula to a parameter within a Revit Family.
Currently I have multiple families in a project. I run Dynamo from within that project then I extract the families that I want to modify using Dynamo standard nodes.
Then I use a Python script node that goes through every selected family, finds the parameter I am interested in, and assigns a formula to it.
That seemed fine until I noticed that it is not assigning the formula but entering it as a string, i.e. it ends up in quotes. And sure enough, the code I am using will only work with Text type parameters.
Can someone shed some light on how to assign a formula to a parameter using Dynamo?
See the FamilyMan.SetFormula call in the code below.
Thanks
for family in families:
    TransactionManager.Instance.ForceCloseTransaction()
    famdoc = doc.EditFamily(family)
    FamilyMan = famdoc.FamilyManager
    found.append(family.Name)
    TransactionManager.Instance.EnsureInTransaction(famdoc)
    check = 0
    # Loop through the list of parameters to assign formula values to them... these are given as input
    for r in range(len(param_name_lst)):
        # Loop through the list of parameters in the current family per the families outer loop above.
        for param in FamilyMan.Parameters:
            # for param in FamilyMan.get_Parameter(param_name_lst[r]):
            # For each of the parameters, get their name and store it in paramName.
            paramName = param.Definition.Name
            # Check if we have a match in parameter name.
            if param_name_lst[r] in paramName:
                if param.CanAssignFormula:
                    canassignformula.append(param_name_lst[r])
                else:
                    cannotassignformula.append(param_name_lst[r])
                try:
                    # Make sure that the parameter is not locked.
                    if FamilyMan.IsParameterLocked(param):
                        FamilyMan.SetParameterLocked(param, False)
                        locked.append(paramName)
                    # Enter formula value to parameter.
                    FamilyMan.SetFormula(param, param_value_lst[r])
                    check += 1
                except:
                    failed.append(paramName)
            else:
                continue
Actually, you can access the family from the main project, and you can assign a formula automatically... that's what I currently do: I load all the families I want into one project and run the script.
After a lot of work, I was able to figure out what I was doing wrong, and it is not in my code... my code was fine.
The main problem is that I need to have all of my formula's dependencies lined up... just like in manual mode.
So if my formula is:
size_lookup(MY_ID_tbl, "MY_VAR", "MY_DefaultValue", ND1,ND2)
then I need to have the following:
MY_ID_tbl should exist and be assigned a valid value, in this case a CSV filename. Moreover, that file should also be loaded. This is important for the next steps.
MY_VAR should be defined in that CSV file, and so should ND1 and ND2.
The default value ("MY_DefaultValue") should match what that CSV file says about that variable; in this case, it is text.
Needless to say, I did not have all of the above lined up as it should be; once I fixed that, my SetFormula code did its job. I also had to change my process altogether, because I first have to create MY_ID_tbl and load the CSV file (which I also do using Dynamo), and only then enter the formulas using Dynamo.
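A minimal sketch of that order of operations, reusing only the calls already shown in the question (doc.EditFamily, FamilyManager, get_Parameter, SetFormula); the target parameter name MY_PARAM and the formula are placeholders, and transaction handling is omitted since the question's loop already covers it:
famdoc = doc.EditFamily(family)
fm = famdoc.FamilyManager

# Preconditions, set up beforehand (e.g. in a separate Dynamo run):
#  1. MY_ID_tbl exists and holds the name of a lookup-table CSV that is loaded into the family.
#  2. MY_VAR, ND1 and ND2 are columns in that CSV.
#  3. "MY_DefaultValue" matches the type the CSV declares for MY_VAR.
formula = 'size_lookup(MY_ID_tbl, "MY_VAR", "MY_DefaultValue", ND1, ND2)'

param = fm.get_Parameter("MY_PARAM")    # hypothetical target parameter
if param is not None and param.CanAssignFormula:
    fm.SetFormula(param, formula)       # accepted as a formula only once 1-3 hold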
First point: a formula can be assigned to a Revit parameter only inside the family editor, so you would have to run your Dynamo script inside the family editor for each family, which would be a waste of time; you might as well edit the parameter's formula manually inside each family.
Second point: I don't even think it is possible to set a certain parameter's formula automatically; it must be done manually (I haven't seen anything for it in the Revit API docs).

How to handle unit conversions while interacting with FMUs?

I have a Python script that filters and lists the parameters, their units and default values from an FMU using the read_model_description function from the FMPy library, and writes them to an Excel sheet (related discussion). Then, using the simulate_fmu function, the script simulates the FMU and writes the results, with units, back to the Excel sheet.
When filtering the parameters and output variables, I use this line to get their units:
unit = variable.declaredType.unit if hasattr(variable.declaredType,'unit') else '-'
While interacting with the FMU, the parameter and variable values are in the default SI units. I guess this is according to the FMI standard. However, in the modelDescription.xml under <UnitDefinitions> I see that there is information for converting the default SI unit to a displayUnit. For example:
<Unit name="Pa">
  <BaseUnit kg="1" m="-1" s="-2"/>
  <DisplayUnit name="bar" factor="1E-05"/>
  <DisplayUnit name="ftH2O" factor="0.0003345525633129686"/>
</Unit>
Is there a way to get the parameter values and output variables in displayUnits if the conversion factors are already available in the modelDescription.xml?
Or is there an easier solution using Python libraries like pint that can act as a wrapper around the FMU to convert the units into the desired unit system (e.g. SI to IP) while interacting with it?
In the FMPy source I did not find any place where unit conversion is implemented.
But all the relevant information is read in model_description.py.
The display unit information ends up in modelDescription.unitDefinitions. E.g. to convert a value val = 1.013e5 # Pa to all defined display units, the following might work:
for unit in modelDescription.unitDefinitions:
    if unit.name == "Pa":
        for display_unit in unit.displayUnits:
            print(display_unit.name)
            # FMI convention: display value = factor * base value + offset
            print(display_unit.factor * val + display_unit.offset)
        break
Take a look at the FMI Specification 2.01, chapter 2.2.2 Definition of Units (UnitDefinitions) to get the full picture.
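As for the pint idea raised in the question: a hedged sketch is to do the conversion outside FMPy, either with the factor/offset pairs read from the model description (as above) or by letting pint handle the unit algebra directly; the unit names below are just examples:
import pint

ureg = pint.UnitRegistry()

val_si = 1.013e5                               # value returned by simulate_fmu, in Pa
val_bar = (val_si * ureg.pascal).to(ureg.bar)  # SI value converted to the display unit
print(val_bar.magnitude)                       # 1.013
This keeps the FMU interaction itself in SI units and only converts the values when reading and writing the Excel sheet.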

Error while creating KNearestNeighborsClassifier with CoreMLTools 3 beta and question how to set dimensions correctly

For a project I want to create a Core ML 3 model which receives some text (e.g. from mails) and classifies it. In addition, the model should be updatable and trained on the devices. Therefore, I found that a KNearestNeighborsClassifier can be updatable and wanted to use it for my approach.
However, first of all I got an error
" RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: Error compiling model: "Error reading protobuf spec. validator error: KNearestNeighborsClassifier requires k to be a positive integer."
while creating such a model with a script (see below). In addition, I am not sure how to use the KNearestNeighborsClassifier for my problem correctly. In particular, which number of dimensions is the correct one if I want to classify some texts? And how will I have to use the model correctly in the app? Maybe you know some useful guide which I have not found yet?
My script for creating the KNearestNeighborsClassifier is based on this guide: https://github.com/apple/coremltools/blob/master/examples/updatable_models/updatable_nearest_neighbor_classifier.ipynb
I have installed and I am using coremltools==3.0b6.
Here my actual script for creating the model:
number_of_dimensions = 128
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
builder = KNearestNeighborsClassifierBuilder(input_name='input',
                                             output_name='output',
                                             number_of_dimensions=number_of_dimensions,
                                             default_class_label='defaultLabel',
                                             number_of_neighbors=3,
                                             weighting_scheme='inverse_distance',
                                             index_type='linear')
builder.author = 'Christian'
builder.license = 'MIT'
builder.description = 'Classifies {} dimension vector based on 3 nearest neighbors'.format(number_of_dimensions)
builder.spec.description.input[0].shortDescription = 'Input vector to classify'
builder.spec.description.output[0].shortDescription = 'Predicted label. Defaults to \'defaultLabel\''
builder.spec.description.output[1].shortDescription = 'Probabilities / score for each possible label.'
builder.spec.description.trainingInput[0].shortDescription = 'Example input vector'
builder.spec.description.trainingInput[1].shortDescription = 'Associated true label of each example vector'
#This lets the developer of the app change the number of neighbors at runtime from anywhere between 1 and 10, with a default of 3.
builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))
# Let's set the index to kd_tree with leaf size of 30
builder.set_index_type('kd_tree', 30)
# By default an empty knn model is updatable
print(builder.is_updatable)
print(builder.number_of_dimensions)
print(builder.number_of_neighbors)
print(builder.number_of_neighbors_allowed_range())
print(builder.index_type)
mlmodel_updatable_path = './UpdatableKNN.mlmodel'
# Save the updated spec
from coremltools.models import MLModel
mlmodel_updatable = MLModel(builder.spec)
mlmodel_updatable.save(mlmodel_updatable_path)
I hope that you can tell me whether my overall approach of using the KNearestNeighborsClassifier for text classification is sensible, and hopefully you can help me create the Core ML model successfully.
Many thanks in advance.
Not sure why you're getting that error, although make sure you're using the latest (beta) version of coremltools (3.0b6 currently).
As for the number of dimensions, you'll need to convert your text into a vector of a fixed length somehow. Exactly how you do that is totally up to the problem you're trying to solve.
For example, you could use the bag-of-words technique to turn a phrase into such a vector. You can use word embeddings, or a neural network, or any of the other common techniques for this.
But you need some way to turn the text into feature vectors.
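A minimal sketch of one way to get such a fixed-length vector, following the bag-of-words suggestion above via feature hashing; the 128 dimensions match the number_of_dimensions used in the question, and the example texts are made up:
from sklearn.feature_extraction.text import HashingVectorizer

number_of_dimensions = 128
vectorizer = HashingVectorizer(n_features=number_of_dimensions, alternate_sign=False)

texts = ["please review the attached invoice", "win a free prize now"]
X = vectorizer.transform(texts).toarray()   # shape (2, 128), one feature vector per text
print(X.shape)
Each row could then be fed to the k-nearest-neighbors model as its input vector.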

Why is a part of my Python code interpreted differently when I add a seemingly unrelated part?

Some background: I'm implementing a GUI to interact with equipment via GPIB. The issue arises in this method:
from tkinter import *
from tkinter import ttk
import visa              # PyVisa Package. pyvisa.readthedocs.io
from time import sleep
import numpy as np       # NumPy Package. Scipy.org

def oneDSweep():
    Voltage = []
    Current = []
    Source = []
    try:
        #Gate = parseGate(Gate1Input.get())  # Not implemented yet.
        Min = float(Gate1MinInput.get())     # Add a check for valid input
        #if Min < .001:
        #    Throw exception
        Max = float(Gate1MaxInput.get())     # Add a check for valid input
        VoltageInterval = .02                # Prompt user for interval?
        rm = visa.ResourceManager()
        SIM900 = rm.open_resource("GPIB0::1::INSTR")  # Add a check that session is open.
        x = 0
        Volt = Min
        while Volt <= Max:
            SIM900.write("SNDT 1, 'VOLT " + str(Volt) + "'")  # Set voltage.
            SIM900.write("SNDT 7, 'VOLT? 1'")                 # Ask a port for voltage.
            Vnow = SIM900.query("GETN? 7, 50")                # Retrieve data from previous port.
            Vnow = Vnow[6:15]
            Vnow = float(Vnow)  ############ Error location
            Voltage = np.append(Voltage, Vnow)
            SIM900.write("SNDT 1, 'VOLT?'")                   # Ask a different port for voltage.
            Snow = SIM900.query("GETN? 1, 50")                # Retrieve data.
            print(Snow)                                       # Debugging method. Probably not problematic.
            Snow = Snow[4:]
            Snow = float(Snow)
            sleep(1)                                          # Add a delay for science reasons.
            # The code below helps the while loop act like a for loop.
            x = x + 1
            Volt = Min + VoltageInterval*x
            Volt = float(truncate(Volt, 7))
    finally:
        print(Voltage)
        print(Source)
        Voltage.tofile("output.txt.", sep=",")
        SIM900.write("FLSH")  # Flush the ports' memories to ensure no bad data stays there.
I get a simple ValueError at the marked location during the first pass of the while loop; Python says it cannot convert the string to a float (more on this later). However, if I simply remove these five lines of code:
SIM900.write("SNDT 1, 'VOLT?'")
Snow = SIM900.query("GETN? 1, 50")
print(Snow)
Snow = Snow[4:]
Snow = float(Snow)
and the program runs perfectly. I understand the source of the error. With those lines added, when I send these two lines to my instrument:
SIM900.write("SNDT 7, 'VOLT? 1'")
Vnow = SIM900.query("GETN? 7, 50")
I get essentially a null error: #3000 is returned, which is a blank message the machine sends when it is asked to output data and has none to output. However, these same two lines produce something like #3006 00.003 when the five lines I mentioned are excluded from the program. In other words, simply adding those five lines to my program has changed the message sent to the instrument at the beginning of the while loop, despite adding them near the end.
I am convinced that Python's interpreter is at fault here. Earlier, I was cleaning up my code and discovered that one particular set of quotes, when changed from ' to ", produced this same error, despite no other quote pair exhibiting this behavior, even within the same line. My question is, why does the execution of my code change depending on unrelated alterations to the code (I would also appreciate a fix)? I understand this problem is difficult to replicate given my somewhat specific application, so if there is more information I can provide that would be helpful, please let me know.
EDIT: Functionality has improved after moving from the command prompt to IDLE. I'm still baffled by what happened, but due to my meager command prompt skills, I can't provide any proof. Please close this question.
Python is telling you exactly what is wrong with your code -- a ValueError. It even gives you the exact line number and the value that is causing the problem.
'#3006 00.003'
That is the value of Snow that is being printed out. Then you do this:
Snow = Snow[4:]
Now Snow is
'6 00.003'
You then try to call float() on this string. 6 00.003 can't be converted to a float because it's a nonsensical number.
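This is easy to reproduce in isolation, using the reply string printed by the question's code:
Snow = '#3006 00.003'
print(Snow[4:])    # '6 00.003'
float(Snow[4:])    # ValueError: could not convert string to float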
I am convinced that Python's interpreter is at fault here. Earlier, I was cleaning up my code and discovered that one particular set of quotes, when changed from ' to ", produced this same error, despite no other quote pair exhibiting this behavior, even within the same line.
Python generates exactly the same bytecode for single and double quoted strings (unless embedded quotes are involved, of course). So either the environment you're running your script in is seriously broken (I'm counting the python interpreter as part of the "environment"), or your diagnosis is incorrect. I'd put my money on the second.
Here's an alternative explanation. For whatever reason, the hardware you hooked up is returning inconsistent results. So one time you get what you expect, the next time you get an error-- you think your changes to the code account for the differences, but there's no relationship between cause and effect and you end up pulling your hair out. When you run the same code several times in a row, do you get consistent results? I.e. do you consistently get the odd behavior? Even if you do, the problem must be with the hardware or the hookup, not with Python.
