I have a Python script that filters and lists the parameters, their units and default values from an FMU using the read_model_description function from the FMPy library and writes them to an Excel sheet (related discussion). Then, using the simulate_fmu function, the script simulates the FMU and writes the results with units back to the Excel sheet.
When filtering the parameters and the output variable, I use this line to get their units:
unit = variable.declaredType.unit if hasattr(variable.declaredType,'unit') else '-'
While interacting with the FMU, the parameter and variable values are in the default SI units. I guess this is according to the FMI standard. However, in the modelDescription.xml under <UnitDefinitions> I see that there is information for converting the default SI unit to a displayUnit. For example:
<Unit name="Pa">
  <BaseUnit kg="1" m="-1" s="-2"/>
  <DisplayUnit name="bar" factor="1E-05"/>
  <DisplayUnit name="ftH2O" factor="0.0003345525633129686"/>
</Unit>
Is there a way to get the parameter values and output variables in displayUnits, given that the conversion factors are already available in the modelDescription.xml?
Or is there an easier solution using a Python library like pint that can act as a wrapper around the FMU and convert the units to a desired unit system (e.g. SI to IP) while interacting with it?
In the FMPy source I did not find any place where unit conversion is implemented.
But all the relevant information is read in model_description.py.
The display unit information ends up in modelDescription.unitDefinitions. E.g. to convert a value val = 1.013e5 # Pa to all defined display units, the following might work:
val = 1.013e5  # value in the base unit (Pa)

for unit in modelDescription.unitDefinitions:
    if unit.name == "Pa":
        for display_unit in unit.displayUnits:
            # FMI 2.0: value_in_display_unit = factor * value + offset
            print(display_unit.name, display_unit.factor * val + display_unit.offset)
        break
Take a look at the FMI 2.0.1 Specification, chapter 2.2.2, Definition of Units (UnitDefinitions), to get the full picture.
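Building on that, a small helper (just a sketch, assuming the same FMPy ModelDescription object and the FMI 2.0 conversion rule; the FMU file name is hypothetical) could collect the conversion factors once and then convert any result before writing it to Excel:

from fmpy import read_model_description

def build_display_converters(model_description):
    # Map base unit name -> {display unit name: (factor, offset)}
    return {
        unit.name: {du.name: (du.factor, du.offset) for du in unit.displayUnits}
        for unit in model_description.unitDefinitions
    }

def to_display_unit(value, base_unit, display_unit, converters):
    # FMI 2.0: value_in_display_unit = factor * value + offset
    factor, offset = converters[base_unit][display_unit]
    return factor * value + offset

model_description = read_model_description('MyModel.fmu')  # hypothetical FMU name
converters = build_display_converters(model_description)
print(to_display_unit(1.013e5, 'Pa', 'bar', converters))  # ~1.013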
I first tried openpyxl to find out which style is applied, in order to get the actual background color of a cell after conditional formatting has been applied, but realized that I would have to write a formula parser (it makes no sense to re-implement Excel, and I would have to deal with chained formula cell values, etc.).
I am now using the PyUno interface, accessing a LibreOffice instance running headless, and reaching the XSheetConditionalEntry objects through the PyOO interface.
It looks like I have reached the exact same place: I have the cell and the formula, but no way of knowing which of the conditional formatting styles actually applies:
import subprocess
import pyoo

def processFile(filename):
    # officeCommand starts soffice headless, listening on the 'hello' named pipe
    soffice = subprocess.Popen(officeCommand, shell=True)
    desktop = pyoo.Desktop(pipe='hello')
    doc = desktop.open_spreadsheet(filename)
    sheet = doc.sheets['STOP FS 2023']
    cell = sheet[5, 24]
    cellUno = cell._get_target()  # underlying UNO cell object
    print(f"{cellUno.getPropertyValue('CellBackColor')=}")
    print(f"{cellUno.getPropertyValue('CellStyle')=}")
    for currentConditionalFormat in cellUno.getPropertyValue('ConditionalFormat'):
        print(f"{currentConditionalFormat.getStyleName()=}")
        print(f"{currentConditionalFormat.getOperator()=}")
I get the following results:
cellUno.getPropertyValue('CellBackColor')=-1
cellUno.getPropertyValue('CellStyle')='Default'
currentConditionalFormat.getStyleName()='ConditionalStyle_4'
currentConditionalFormat.getOperator()=<Enum instance com.sun.star.sheet.ConditionOperator ('BETWEEN')>
currentConditionalFormat.getStyleName()='ConditionalStyle_3'
currentConditionalFormat.getOperator()=<Enum instance com.sun.star.sheet.ConditionOperator ('NONE')>
currentConditionalFormat.getStyleName()='ConditionalStyle_2'
currentConditionalFormat.getOperator()=<Enum instance com.sun.star.sheet.ConditionOperator ('NONE')>
currentConditionalFormat.getStyleName()='ConditionalStyle_1'
currentConditionalFormat.getOperator()=<Enum instance com.sun.star.sheet.ConditionOperator ('NONE')>
The style that is actually being applied is ConditionalStyle_3.
This post has helped a bit, but it is intended to work inside a macro, and it looks like their forum sign-up is broken, otherwise I would have asked the same question over there.
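One thing that may help narrow it down is to also dump each entry's condition formulas and source position (a sketch, assuming the entries returned by 'ConditionalFormat' also implement the com.sun.star.sheet.XSheetCondition interface; cellUno is the same object as above):

# Continuing from the code above.
for entry in cellUno.getPropertyValue('ConditionalFormat'):
    print(entry.getStyleName(), entry.getOperator())
    print('  Formula1:', entry.getFormula1())
    print('  Formula2:', entry.getFormula2())
    print('  SourcePosition:', entry.getSourcePosition())
# Relative references in Formula1/Formula2 are relative to SourcePosition,
# so they would still have to be evaluated against the current cell to
# decide which style wins.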
I am seeing that VS Code is getting intellisense hints from the comments I put in my class functions:
def GyroDriveOnHeading(self, desiredHeading, desiredDistance):
    """
    Drives the robot very straight on a given heading for a \
    given distance, using the acceleration and the gyro. \
    Accelerates to prevent wheel slipping. \
    Gyro keeps the robot pointing on the desired heading.

    Minimum distance that this will work for is about 16cm.
    If you need to go a very short distance, use move_tank.

    Parameters
    -------------
    desiredHeading: On what heading should the robot drive (float)
        type: float
        values: any. Best if the desired heading is close to the current heading. Unpredictable robot movement may occur for large heading differences.
        default: no default value
    desiredDistance: How far the robot should go in cm (float)
        type: float
        values: any value above 16.0. You can enter smaller numbers, but the robot will still go 16cm
        default: no default value

    Example
    -------------
    import base_robot
    br = base_robot.BaseRobot()
    br.GyroDriveOnHeading(90, 40) #drive on heading 90 for 40 cm
    """
Which gives me a really nice popup when I use that function:
As you can see here, since I am about to enter the first parameter, desiredHeading, the intellisense was smart enough to know that the line in the comments under "Parameters" that starts with the variable name should be the first thing displayed in the hint. And indeed, once I type the first parameter and a comma, the first line of the intellisense popup changes to show the information about desiredDistance.
But I would like to know more about how these comments should be written. I read that the numpy style guide is close to the most widely adopted standard, but when I changed the parameter documentation format to match numpy (and something called Sphinx has something to do with this too, I think), the popups were not the same. Really, I just want to find the documentation on how to document (yikes!) my Python code so it renders correct IntelliSense. For example, how can I bold a word in the middle of a sentence? Are there other formatting options available?
This is just for a middle-school robotics club, nothing like production code for real programmers. Nothing is broken, I just want to learn more about how this works.
These are docstrings in Python. For an introduction, see:
https://docs.python.org/3.10/tutorial/controlflow.html#documentation-strings
https://peps.python.org/pep-0287/
In addition, you can annotate the parameter types in stub style, like this:
def open(url: str, new: int = ..., autoraise: bool = ...) -> bool: ...
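For comparison, here is a sketch of how type hints and a numpy-style docstring can be combined (assuming the Pylance language server, which typically builds the signature popup from the annotations and renders the docstring body underneath; the method is the one from the question):

def GyroDriveOnHeading(self, desiredHeading: float, desiredDistance: float) -> None:
    """Drive the robot very straight on a given heading for a given distance.

    Accelerates to prevent wheel slipping and uses the gyro to hold the
    desired heading. The minimum distance this works for is about 16 cm;
    for shorter moves use move_tank.

    Parameters
    ----------
    desiredHeading : float
        Heading to drive on. Best if it is close to the current heading.
    desiredDistance : float
        Distance in cm. Values below 16.0 still result in roughly 16 cm.

    Example
    -------
    >>> br = base_robot.BaseRobot()
    >>> br.GyroDriveOnHeading(90, 40)
    """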
I am trying to add a formula to a parameter within a Revit Family.
Currently I have multiple families in a project. I run Dynamo from within that project, then I extract the families that I want to modify using standard Dynamo nodes.
Then I use a Python script node that goes through every selected family, finds the parameter I am interested in, and assigns a formula to it.
That seemed fine until I noticed that it is not assigning the formula but entering it as a string, i.e. it ends up in quotes. And sure enough, the code I am using will only work with Text type parameters.
Can someone shed some light on how to assign a formula to a parameter using Dynamo?
See the FamilyMan.SetFormula(...) call in the code below.
Thanks
for family in families:
    TransactionManager.Instance.ForceCloseTransaction()
    famdoc = doc.EditFamily(family)
    FamilyMan = famdoc.FamilyManager
    found.append(family.Name)
    TransactionManager.Instance.EnsureInTransaction(famdoc)
    check = 0
    # Loop thru the list of parameters to assign formula values to them... these are given as input.
    for r in range(len(param_name_lst)):
        # Loop thru the list of parameters in the current family per the families outer loop above.
        for param in FamilyMan.Parameters:
            #for param in FamilyMan.get_Parameter(param_name_lst[r]):
            # For each of the parameters get their name and store it in paramName.
            paramName = param.Definition.Name
            # Check if we have a match in parameter name.
            if param_name_lst[r] in paramName:
                if param.CanAssignFormula:
                    canassignformula.append(param_name_lst[r])
                else:
                    cannotassignformula.append(param_name_lst[r])
                try:
                    # Make sure that the parameter is not locked.
                    if FamilyMan.IsParameterLocked(param):
                        FamilyMan.SetParameterLocked(param, False)
                        locked.append(paramName)
                    # Enter the formula value for the parameter.
                    FamilyMan.SetFormula(param, param_value_lst[r])
                    check += 1
                except:
                    failed.append(paramName)
            else:
                continue
Actually, you can access the family from the main project, and you can assign a formula automatically. That's what I currently do: I load all the families I want into one project and run the script.
After a lot of work, I was able to figure out what I was doing wrong, and it is not in my code... my code was fine.
The main problem is that I need to have all of my formula's dependencies lined up, just like when doing it manually.
So if my formula is:
size_lookup(MY_ID_tbl, "MY_VAR", "MY_DefaultValue", ND1,ND2)
then I need to have the following:
MY_ID_tbl should exist and be assigned a valid value; in this case it should hold a CSV filename. Moreover, that file should also be loaded. This is important for the next steps.
MY_VAR should be defined in that CSV file, and so should ND1 and ND2.
The default value (MY_DefaultValue) should match what the CSV file says about that variable; in this case, it is text.
Needless to say, I did not have all of the above lined up as it should be. Once I fixed that, my SetFormula code did its job. I also had to change my process altogether, because I first have to create MY_ID_tbl and load the CSV file (which I also do using Dynamo), and only then enter the formulas with Dynamo.
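For what it's worth, a small pre-check along these lines could be run before SetFormula to confirm the referenced parameters exist in the family (just a sketch; FamilyMan, paramName, param_value_lst and failed are the names from the code above, and the token extraction is deliberately rough):

import re

def missing_formula_dependencies(family_manager, formula):
    # Collect identifier-like tokens from the formula (ignoring quoted strings,
    # which refer to CSV columns/values) and report those that are not family parameters.
    existing = set(p.Definition.Name for p in family_manager.Parameters)
    tokens = set(re.findall(r'[A-Za-z_][A-Za-z0-9_]*', re.sub(r'"[^"]*"', '', formula)))
    tokens.discard('size_lookup')  # built-in lookup function, not a parameter
    return [t for t in tokens if t not in existing]

# Inside the loop, before calling FamilyMan.SetFormula(param, param_value_lst[r]):
missing = missing_formula_dependencies(FamilyMan, param_value_lst[r])
if missing:
    failed.append(paramName + ' (missing: ' + ', '.join(missing) + ')')
else:
    FamilyMan.SetFormula(param, param_value_lst[r])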
First point: Revit parameters can only be assigned a formula inside the family editor, so you would have to run your Dynamo script inside the family editor for each family, which would be a waste of time; you might as well edit the parameter's formula manually in each family.
Second point: I don't even think it is possible to set a parameter's formula automatically; it must be done manually (I haven't seen anything for it in the Revit API docs).
I'm having a basic issue whilst using Python scripting in ASP / clingo (version 4+). I've reconstructed the problem with a minimal example to illustrate the point. Obviously, in the example, I don't need to use scripts; in my more complicated application, however, I do, hence I have artificially recreated the problem in a more comprehensible fashion.
The issue is that, whilst computing an aggregate/optimisation, the grounder somehow does not register the full predicate being used to index the values. Instead, it appears to compute the minimum successively and, as a result, spits out all the values along the way. (See the output below: notice that the minimum goes from 53 to 19, then does not change to 29. This is highly sensitive to the order of the prg.ground calls in the #script (python) part of the code.)
This is highly undesirable, and I would like to know how to avoid this problem. I.e., how can I amend the code below, still utilising a Python script (potentially modified), so that the correct model is computed? (In the example, obviously, the intended solution for the predicate min_sel_weight/1 is min_sel_weight(19), with no further values.)
The Programme.
weight("ant",3). weight("bat",53). weight("cat",19). weight("dot",13). weight("eel",29).
#script (python)
import gringo

def main(prg):
    prg.ground([('base', [])])
    prg.ground([('sel', ['bat'])])
    prg.ground([('sel', ['cat'])])
    prg.ground([('sel', ['eel'])])
    prg.solve()
#end.
%% call python-script, to select certain objects.
#program sel(t). sel(t).
%% compute minimum of weights of selected objects:
min_sel_weight(X) :- weight(_,X), #min {XX : weight(OBJ,XX),sel(OBJ)} = X.
#show sel/1. #show min_sel_weight/1.
Calling clingo 0 myprogramme.lp I obtain the following output:
clingo version 4.5.4
Reading from myprogramme.lp
Solving...
Answer: 1
sel("bat")
min_sel_weight(53)
sel("cat")
min_sel_weight(19)
sel("eel")
SATISFIABLE
Models : 1
Calls : 1
Time : 0.096s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)
CPU Time : 0.040s
Try this:
% instance
weight("ant",3). weight("bat",53). weight("cat",19). weight("dot",13). weight("eel",29).
% Assuming you will get certain selected objects like this:
selected("cat"). selected("bat"). selected("eel"). %this will be python generated
% encoding
selectedWeight(OBJ, XX):- weight(OBJ,XX), selected(OBJ).
1{min_sel_weight(X)}1 :- selectedWeight(_,X), #min {XX : selectedWeight(OBJ,XX),selected(OBJ)} = X.
#show min_sel_weight/1.
Output:
min_sel_weight(19)
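If the selection really does have to come from Python, one way to combine it with this encoding is to let the script add the selected/1 facts as a string program and ground everything in one call, so the #min aggregate sees all of them at once (a sketch against the clingo 4 gringo API; the part name sel_facts and the object list are placeholders, and the rules above are assumed to be in the base program):

#script (python)
def main(prg):
    # Build the selected/1 facts in Python, then ground once so the
    # #min aggregate sees all of them together.
    for obj in ["bat", "cat", "eel"]:
        prg.add("sel_facts", [], 'selected("{}").'.format(obj))
    prg.ground([("base", []), ("sel_facts", [])])
    prg.solve()
#end.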
I'm having a problem loading log files based on parameter input and was wondering whether someone would be able to provide some guidance. The logs in question are Omniture logs, stored in subdirectories based on year, month, and day (eg. /year=2013/month=02/day=14), and with the date stamp in the filename. For any day, multiple logs could exist, each hundreds of MB.
I have a Pig script which currently processes logs for an entire month, with the month and the year specified as script parameters (eg. /year=$year/month=$month/day=*). It works fine and we're quite happy with it. That said, we want to switch to weekly processing of logs, which means the previous LOAD path glob won't work (weeks can wrap months as well as years). To solve this, I have a Python UDF which takes a start date and spits out the necessary glob for a week's worth of logs, eg:
>>> log_path_regex(2013, 1, 28)
'{year=2013/month=01/day=28,year=2013/month=01/day=29,year=2013/month=01/day=30,year=2013/month=01/day=31,year=2013/month=02/day=01,year=2013/month=02/day=02,year=2013/month=02/day=03}'
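For reference, a minimal sketch of such a helper (not the exact UDF, just an illustration of how the glob above can be produced) might look like this:

from datetime import date, timedelta

def log_path_regex(year, month, day, days=7):
    # Build a brace glob covering `days` consecutive days starting at the given date.
    start = date(year, month, day)
    parts = []
    for offset in range(days):
        d = start + timedelta(days=offset)
        parts.append("year={:04d}/month={:02d}/day={:02d}".format(d.year, d.month, d.day))
    return "{" + ",".join(parts) + "}"

print(log_path_regex(2013, 1, 28))  # produces the glob shown above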
This glob will then be inserted in the appropriate path:
> %declare omniture_log_path 's3://foo/bar/$week_path/*.tsv.gz';
> data = LOAD '$omniture_log_path' USING OmnitureTextLoader(); // See http://github.com/msukmanowsky/OmnitureTextLoader
Unfortunately, I can't for the life of me figure out how to populate $week_path based on the $year, $month and $day script parameters. I tried using %declare, but grunt complains; it says it's logging but never does:
> %declare week_path util.log_path_regex(year, month, day);
2013-02-14 16:54:02,648 [main] INFO org.apache.pig.Main - Apache Pig version 0.10.1 (r1426677) compiled Dec 28 2012, 16:46:13
2013-02-14 16:54:02,648 [main] INFO org.apache.pig.Main - Logging error messages to: /tmp/pig_1360878842643.log

% ls /tmp/pig_1360878842643.log
ls: cannot access /tmp/pig_1360878842643.log: No such file or directory
The same error results if I prefix the parameters with dollar signs or surround prefixed parameters with quotes.
If I try to use define (which I believe only works for static Java functions), I get the following:
> define week_path util.log_path_regex(year, month, day);
2013-02-14 17:00:42,392 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: <file script.pig, line 11, column 37> mismatched input 'year' expecting RIGHT_PAREN
As with %declare, I get the same error if I prefix the parameters with dollar signs or surround prefixed parameters with quotes.
I've searched around and haven't come up with a solution. I'm possibly searching for the wrong thing. Invoking a shell command may work, but would be difficult as it would complicate our script deploy and may not be feasible given we're retrieving logs from S3 and not a mounted directory. Similarly, passing the generated glob as a single parameter may complicate an automated job on an instantiated MapReduce cluster.
It's also likely there's a nice Pig-friendly way to restrict LOAD other than using globs. That said, I'd still have to use my UDF which seems to be the root of the issue.
This really boils down to me wanting to include a dynamic path glob built inside Pig in my LOAD statement. Pig doesn't seem to be making that easy.
Do I need to convert my UDF to a static Java method? Or will I run into the same issue? (I hesitate to do this on the off-chance it will work. It's an 8-line Python function, readily deployable and much more maintainable by others than the equivalent Java code would be.)
Is a custom LoadFunc the answer? With that, I'd presumably have to specify /year=/month=/day=* and force Pig to test every file name for a date stamp which falls between two dates. That seems like a huge hack and a waste of resources.
Any ideas?
I posted this question to the Pig user list. My understanding is that Pig will first pre-process its scripts to substitute parameters, imports and macros before building the DAG. This makes building new variables based on existing ones somewhat impossible, and explains my failure to build a UDF to construct a path glob.
If you are a Pig developer requiring new variables to be built based on existing parameters, you can either use another script to construct those variables and pass them as parameters to your Pig script, or you can explore where you need to use those new variables and build them in a separate construct based on your needs.
In my case, I reluctantly opted to create a custom LoadFunc as described by Cheolsoo Park. This LoadFunc accepts the day, month and year for the beginning of the reporting period in its constructor, and builds a pathGlob attribute to match paths for that period. That pathGlob is then inserted into the location in setLocation(), e.g.:
/**
 * Limit data to a week starting at the given day. If day is 0, the whole month is assumed.
 */
public WeeklyOrMonthlyTextLoader(String year, String month, String day) {
    super();
    pathGlob = getPathGlob(
        Integer.parseInt(year),
        Integer.parseInt(month),
        Integer.parseInt(day)
    );
}

/**
 * Replace GLOB_PLACEHOLDER in the location with the glob required for reading
 * in this month or week of data. This assumes the following directory structure:
 *
 * <code>/year=&lt;year&gt;/month=&lt;month&gt;/day=&lt;day&gt;/*</code>
 */
@Override
public void setLocation(String location, Job job) throws IOException {
    location = location.replace(GLOB_PLACEHOLDER, pathGlob);
    super.setLocation(location, job);
}
This is then called from a Pig script like so:
DEFINE TextLoader com.foo.WeeklyOrMonthlyTextLoader('$year', '$month', '$day');
Note that the constructor accepts String, not int. This is because parameters in Pig are strings and cannot be cast or converted to other types within the Pig script (except when used in MR tasks).
While creating a custom LoadFunc may seem overkill compared to a wrapper script, I wanted the solution to be pure Pig to avoid forcing analysts to perform a setup task before working with their scripts. I also wanted to readily use a stock Pig script for different periods when creating an Amazon MapReduce cluster for a scheduled job.