I would like to find the distance between two points from an IFC model. This is an example of an IfcWall from the IFC model.
#26322= IFCWALL('3vpWoB_K1EZ8RCaYmNGsB2',#42,'Basiswand:Bestand 08.0:162343',$,'Basiswand:Bestand 08.0:161894',#25861,#26318,'162343',.NOTDEFINED.);
#26325= IFCPROPERTYSET('3vpWoB_K1EZ8RCcT4NGsB2',#42,'Pset_WallCommon',$,(#787,#788,#848,#25851));
#26327= IFCRELDEFINESBYPROPERTIES('0rDc6OePf5NBrNT2GfJ3hm',#42,$,$,(#26322),#26325);
#26331= IFCCARTESIANPOINT((12.5832056790716,5.54096330043285,0.));
#26333= IFCAXIS2PLACEMENT3D(#26331,#20,#18);
#26334= IFCLOCALPLACEMENT(#140,#26333);
#26335= IFCCARTESIANPOINT((4.24,0.));
#26337= IFCPOLYLINE((#10,#26335));
#26339= IFCSHAPEREPRESENTATION(#102,'Axis','Curve2D',(#26337));
The IFCPOLYLINE has two points (#10 = 0.,0. and #26335 = 4.24,0.), and I would like to find the distance between these two points.
The other walls have a length stored as a quantity, but this one wall does not. Here is an example from one of the other walls:
#730= IFCWALL('1ZwJH$85D3YQG5AK5ER10a',#42,'Basiswand:Bestand 50.0:148105',$,'Basiswand:Bestand 50.0:150882',#701,#726,'148105',.NOTDEFINED.);
#745= IFCQUANTITYLENGTH('Height',$,$,4.99,$);
#746= IFCQUANTITYLENGTH('Length',$,$,16.675,$);
This is my code example:
import ifcopenshell
ifc_file = ifcopenshell.open('model.ifc')  # placeholder path to the IFC file
walls = ifc_file.by_type('IfcWall')
print(len(walls))
import math
p1 = [0.,0.]
p2 = [16.765,0.]
distance = math.sqrt( ((p1[0]-p2[0])**2)+((p1[1]-p2[1])**2) )
print(distance)
To apply the formula, I have to extract the coordinates p1 and p2 from the wall, and this is where I am stuck.
Thank you in advance!
You need to work your way through the object graph, starting at the wall:
#26322 IfcWall.Representation (attribute 7) references #26318
#26318 is not included in your snippet, but is likely an IfcProductDefinitionShape
From there you would likely find another polyline similar to the one included in your snippet; see below for how to get there. The polyline #26337 in your snippet most likely belongs to a different wall. Starting from that wall, you would arrive at its polyline as follows:
#XXXXX IfcWall.Representation (attribute 7) references #YYYYY likely IfcProductDefinitionShape
#YYYYY IfcProductDefinitionShape.Representations (attribute 3) likely references #26339 (the 2D axis representation) and a 3D representation
#26339 IfcShapeRepresentation.Items (attribute 4) references #26337
#26337 IfcPolyline.Points (attribute 1) references #10 and #26335
You can study the IFC specification to find out how the entities are connected via their attributes and what the attributes are called.
It might be easy to trace the object graph for this particular case. The hard part is the semantic richness of the schema, with a variety of types that share some attributes and differ in others, organized through inheritance. For example, the Items attribute of an IfcShapeRepresentation entity references entities of type IfcRepresentationItem, which has many subtypes, IfcPolyline being only one of them. You will have to check which type you encounter; your method of calculation is only applicable if it is an IfcPolyline, not if it is, for example, an IfcBSplineCurve.
Libraries such as IfcOpenShell have invested a good deal of work into covering all or at least most of the schema, particularly the geometry, and can also calculate measures such as length, area, volume, if I am not mistaken.
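As a rough, untested sketch of that traversal with IfcOpenShell: the file path below is a placeholder, and it assumes the 'Axis' representation item is an IfcPolyline (other curve types would need extra branches, as noted above).
import math
import ifcopenshell

ifc_file = ifcopenshell.open('model.ifc')  # placeholder path

for wall in ifc_file.by_type('IfcWall'):
    shape = wall.Representation  # IfcProductDefinitionShape, may be None
    if shape is None:
        continue
    for rep in shape.Representations:  # IfcShapeRepresentation entities
        if rep.RepresentationIdentifier != 'Axis':
            continue
        for item in rep.Items:
            if item.is_a('IfcPolyline'):
                # distance between the first and last point of the axis polyline
                p1 = item.Points[0].Coordinates
                p2 = item.Points[-1].Coordinates
                print(wall.Name, math.dist(p1, p2))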
I am trying to write a script that will automatically mesh geometries for CFD analysis using the Gmsh Python API. There are a few issues I am running into:
First of all, I would like to be able to write Gmsh script files (.geo) for debugging purposes. I looked through the source code of the Gmsh API and found that the .geo_unrolled extension is supported for the gmsh.write() function, but not just .geo. This extension does the trick mostly, but it seems that any meshing operations (such as marking curves as transfinite) or transformations (such as dilate) are not written to the output file when using gmsh.write('test.geo_unrolled'). I assume this has something to do with the _unrolled part. But is there any way to get the full Gmsh script out of the API?
Secondly, when I try to make a copy of a spline like in this example:
p1 = gmsh.model.geo.addPoint(-1, 0.5, 0, 0.1)
p2 = gmsh.model.geo.addPoint(0, 1, 0, 0.1)
p3 = gmsh.model.geo.addPoint(1, 0.5, 0, 0.1)
s1 = gmsh.model.geo.addSpline([p1, p2, p3])
s2 = gmsh.model.geo.copy([s1])
I get ValueError: ('gmshModelGeoCopy returned non-zero error code: ', 1). The error code, 1, seems to indicate that the tag of the original spline (s1) cannot be found when copy() is called. Am I missing something here? I have tried, for example, to call gmsh.model.geo.synchronize() before attempting to call copy(), but this had no effect.
Finally, when I use the dilate transformation in the Gmsh GUI using Modules - Geometry - Elementary entities - Transform - Scale, checking the Apply scaling on copy option in the dialog, on the example spline from above, I indeed get a scaled version of the curve as expected, including the three points. Assuming I was able to accomplish the same with the API, how do I then refer to the three new points that the scaled spline goes through, for example, if I wanted to draw a line between the start point of the original spline and that of the scaled spline?
In the end, what I want to accomplish is the following: draw a spline through a list of points, create a scaled copy of this spline, draw lines between the start and end points, and create a plane surface bounded by the two splines and lines. Is there a better way to do this than what I am trying to do with the dilation?
It's probably too late, but you never know.
I've never had to create .geo files using the API. But I found this discussion in the Gmsh mailing list archive, which may be helpful.
Regarding your error with copy, you have to specify the dimension of the entity to be copied, and not just the tag (check the documentation, which refers to dimTag). The same applies to transformations such as rotate, symmetrize, etc. The following should work:
s2 = gmsh.model.geo.copy([(1, s1)])
NB: when copying only one entity, I think either the inner parentheses or the brackets are superfluous, and otherwise you have to provide a list of tuples of the form [(dim_1, tag_1), (dim_2, tag_2), ..., (dim_n, tag_n)].
Keep in mind that copy will return a variable of the same kind (list of tuples), i.e. in your case the variable s2 will be [(1, tag_s2)]. Therefore you might not want to use the same kind of variable name, since in order to get the tag you'll have to use s2[0][1] instead of simply s2.
This also gives you a partial answer to your next question, as the tags of the copied entities will be contained in the return variable.
Hope that helps you or others!
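To tie it together, here is an untested sketch of the whole workflow (copy, scale the copy with dilate, connect the end points, build a plane surface). It assumes that gmsh.model.getBoundary returns the start point first and the end point second for each curve; verify this for your version.
import gmsh

gmsh.initialize()
gmsh.model.add("spline_copy_sketch")

p1 = gmsh.model.geo.addPoint(-1, 0.5, 0, 0.1)
p2 = gmsh.model.geo.addPoint(0, 1, 0, 0.1)
p3 = gmsh.model.geo.addPoint(1, 0.5, 0, 0.1)
s1 = gmsh.model.geo.addSpline([p1, p2, p3])

# copy takes and returns a list of (dim, tag) pairs
copied = gmsh.model.geo.copy([(1, s1)])
s2 = copied[0][1]

# scale the copy by 0.5 about the origin
gmsh.model.geo.dilate(copied, 0, 0, 0, 0.5, 0.5, 0.5)

# synchronize so the end points of each curve can be queried
gmsh.model.geo.synchronize()
ends1 = gmsh.model.getBoundary([(1, s1)], oriented=False)  # assumed [(0, start1), (0, end1)]
ends2 = gmsh.model.getBoundary([(1, s2)], oriented=False)  # assumed [(0, start2), (0, end2)]

# connect corresponding end points and close a plane surface between the splines
l1 = gmsh.model.geo.addLine(ends1[0][1], ends2[0][1])
l2 = gmsh.model.geo.addLine(ends1[1][1], ends2[1][1])
loop = gmsh.model.geo.addCurveLoop([s1, l2, -s2, -l1])
surf = gmsh.model.geo.addPlaneSurface([loop])

gmsh.model.geo.synchronize()
gmsh.finalize()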
There is a boundary inside China which divides the region into North and South. I have drawn this boundary as a polyline shapefile (download link).
I want to divide the points shown in the following figures into "North" and "South". Is there a useful function in Python that can achieve this?
fiona has a point.within function to test whether a point lies inside or outside a polygon, but I have not found a suitable function to split multiple points by a polyline.
Any advice or tips would be appreciated!
Update: following the valuable suggestion made by Prune, I worked it out. The code is provided below:
import fiona
import numpy as np
from shapely.geometry import LineString, shape

# load the boundary layer
# (dy is a pandas DataFrame of monitoring stations with 'lon' and 'lat' columns)
fname = './N-S_boundary.shp'
with fiona.open(fname) as src:
    line1 = shape(next(iter(src))['geometry'])

# set an end point, the southernmost of all stations
southmost = dy.loc[dy['lat'].idxmin()]
end_point = (southmost['lon'], southmost['lat'])

# loop over all monitoring stations for classification
dy['NS'] = np.nan
for i in range(len(dy)):
    start_point = (dy['lon'].iloc[i], dy['lat'].iloc[i])
    line2 = LineString([start_point, end_point])
    if line1.intersection(line2).is_empty:
        dy.loc[dy.index[i], 'NS'] = 'S'
    else:
        dy.loc[dy.index[i], 'NS'] = 'N'

color_dict = {'N': 'steelblue', 'S': 'r'}
dy['site_color'] = dy['NS'].map(color_dict)
You can apply a simple property from topology.
First, make sure that your boundary partitions the universe (all available points you're dealing with). You may need to extend the boundary through the ocean to finish this.
Now, pick any reference point whose region is already known -- to define "North" and "South", you must have at least one such point. Without loss of generality, assume it is a "South" point called Z.
Now, for each point A you want to classify, draw a continuous path (a straight one is usually easiest, but not required) from A to Z. Find the intersections of this path with the boundary. If there is an even number of intersections, then A is in the same class ("South") as Z; otherwise, it's in the other class ("North").
Note that this relies on a topological property of the partition -- the path must have no tangencies with the boundary line: if your path touches the boundary, it must cross it completely.
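A minimal, untested sketch of that parity test with shapely (boundary is the polyline loaded from the shapefile; the tangency caveat above still applies):
from shapely.geometry import LineString, Point

def same_side_as_reference(point_a, point_z, boundary):
    # count how many times the straight path from A to Z crosses the boundary
    path = LineString([point_a, point_z])
    crossings = path.intersection(boundary)
    if crossings.is_empty:
        n = 0
    elif isinstance(crossings, Point):
        n = 1
    else:
        n = len(crossings.geoms)  # MultiPoint or GeometryCollection
    # an even number of crossings means A lies in the same region as Z
    return n % 2 == 0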
All, I have a file from CAD (SW) in STEP format and was able to read it via the Python OCC binding:
import aocxchange.step
import OCC.TopAbs
import OCC.TopoDS

importer = aocxchange.step.StepImporter(fname)  # fname is the path to the STEP file
shapes = importer.shapes
shape = shapes[0]
# promote up
if shape.ShapeType() == OCC.TopAbs.TopAbs_SOLID:
    sol = OCC.TopoDS.topods.Solid(shape)
I could display it, poke at it, check flags, etc.:
import OCC.BRepCheck

t = OCC.BRepCheck.BRepCheck_Analyzer(sol)
print(t.IsValid())
print(sol.Checked())
print(sol.Closed())
print(sol.Convex())
print(sol.Free())
print(sol.Infinite())
So far so good. It really looks like a small tube bent along some complex path.
Question: how could I extract geometry features from what I have? I really need the tube parameters and the path it follows. Any good example in Python and/or C++ would be great.
In OpenCASCADE there's a separation between topology and geometry. So usually your first contact will be with the topological entities (e.g. a TopoDS_Wire or a TopoDS_Edge), which give you access to the geometry (take a look here for more details).
In your case, after reading the STEP file you ended up with a TopoDS_Shape. This is the highest level topological entity and most probably is formed by one or more sub-shapes.
Assuming that your shape is formed by a B-spline curve (it seems to be!), you could explore the shape, looking for TopoDS_Edge objects (they are the topological entities that map to geometric curves):
TopExp_Explorer myEdgeExplorer(shape, TopAbs_EDGE);
while (myEdgeExplorer.More())
{
double u0, u1;
auto edge = TopoDS::Edge(myEdgeExplorer.Current());
auto curve = BRep_Tool::Curve(edge, u0, u1);
// now you have access to the curve ...
// to get a point lying on it, check
// the method curve->Value(u);
myEdgeExplorer.Next();
}
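Since you asked for Python as well, here is a rough, untested sketch of the same exploration with pythonocc-core; note that module paths and return conventions differ between versions (the older binding used in your snippet exposes the same classes under OCC.* instead of OCC.Core.*), so treat this as an outline rather than a drop-in solution.
from OCC.Core.TopExp import TopExp_Explorer
from OCC.Core.TopAbs import TopAbs_EDGE
from OCC.Core.TopoDS import topods
from OCC.Core.BRep import BRep_Tool

explorer = TopExp_Explorer(shape, TopAbs_EDGE)
while explorer.More():
    edge = topods.Edge(explorer.Current())
    # Curve() gives the underlying geometric curve and its parameter range
    curve, u0, u1 = BRep_Tool.Curve(edge)
    # evaluate a point on the curve, e.g. at the start parameter
    p = curve.Value(u0)
    print(p.X(), p.Y(), p.Z())
    explorer.Next()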
I tried to create an LP model using pyomo.environ. However, I'm having a hard time creating sets. For my problem, I have to create two sets: one set of nodes and one set of arcs between nodes. I created a network using NetworkX to store my nodes and arcs.
The node data is saved as (longitude, latitude) tuples. The arcs are saved as (nodeA, nodeB), where nodeA and nodeB are both coordinate tuples.
So, a node is something like:
(-97.97516252657978, 30.342243012086083)
And, an arc is something like:
((-97.97516252657978, 30.342243012086083),
(-97.976196300350608, 30.34247219922803))
The way I tried to create the sets is as follows:
import pyomo.environ as pe

# create a model m
m = pe.ConcreteModel()
# network is an object I created with the NetworkX module
m.node_set = pe.Set(initialize=self.network.nodes())
m.arc_set = pe.Set(initialize=self.network.edges())
However, I kept getting an error message on arc_set.
ValueError: The value=(-97.97516252657978, 30.342243012086083,
-97.976196300350608, 30.34247219922803) does not have dimension=2,
which is needed for set=arc_set
I find it weird that somehow each arc turned into one tuple instead of two. I also tried converting my nodes and arcs into strings, but still got the error.
Could somebody give me a hint, or show me how to fix this bug?
Thanks!
Under the hood, Pyomo "flattens" all indexing sets. That is, it removes nested tuples so that each set member is a single tuple of scalar values. This is generally consistent with other algebraic modeling languages, and helps to make sure that we can consistently (and correctly) retrieve component members regardless of how the user attempted to query them.
In your case, Pyomo will want each member of the arc set as a single 4-member tuple. There is a utility in PyUtilib that you can use to flatten your tuples when constructing the set:
from pyutilib.misc import flatten
m.arc_set = pe.Set(initialize=(tuple(flatten(x)) for x in self.network.edges()))
You can also perform some error checking, in this case to make sure that all edges start and end at known nodes:
from pyutilib.misc import flatten
m.node_set = pe.Set(initialize=self.network.nodes())
m.arc_set = pe.Set(
    within=m.node_set * m.node_set,
    initialize=(tuple(flatten(x)) for x in self.network.edges()),
)
This is particularly important for models like this where you are using floating point numbers as indices, and subtle round-off errors can produce indices that are nearly the same but not mathematically equal.
There has been some discussion among the developers to support both structured and flattened indices, but we have not quite reached consensus on how to best support it in a backwards compatible manner.
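For reference, a small self-contained sketch of the flattening idea; the graph and coordinates here are made up, and tuple concatenation is used in place of PyUtilib's flatten, which amounts to the same thing for this two-level case.
import networkx as nx
import pyomo.environ as pe

G = nx.Graph()
n1 = (-97.975, 30.342)
n2 = (-97.976, 30.342)
G.add_edge(n1, n2)

m = pe.ConcreteModel()
m.node_set = pe.Set(initialize=list(G.nodes()))
# flatten each ((x1, y1), (x2, y2)) edge into a single (x1, y1, x2, y2) tuple
m.arc_set = pe.Set(
    within=m.node_set * m.node_set,
    initialize=[a + b for a, b in G.edges()],
)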
I would like to know the most efficient way to create a lookup table for floats (and collections of floats) in Python. Since both sets and dicts need their keys to be hashable, I guess I can't use some notion of closeness to check for proximity to already-inserted keys, can I? I have seen this answer, and it's not quite what I'm looking for, as I don't want to put the burden of creating the right key on the user, and I also need to extend it to collections of floats.
For example, given the following code:
>>> import numpy as np
>>> a = {np.array([0.01, 0.005]): 1}
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'numpy.ndarray'
>>> a = {tuple(np.array([0.01, 0.005])): 1}
>>> tuple(np.array([0.0100000000000001,0.0050002])) in a
False
I would like the last statement to return True. Coming from the C++ world, I would create a std::map and provide a comparison function that compares with some user-defined tolerance to check whether the values have already been added to the data structure. Of course this question extends naturally to lookup tables of arrays (for example NumPy arrays). So what's the most efficient way to accomplish what I'm looking for?
Since you are interested in 3D points, you could think about using some data structure that is optimized for storing spatial data, such as a KD-tree. This is available in SciPy and allows the lookup of the point closest to a given coordinate. After you have looked up this point, you could then check whether you are within some tolerance, to accept the new point or not.
Usage should be something like this (untested, never used it myself):
import numpy as np
from scipy.spatial import KDTree

points = ...  # points is an [N x 3] array
tree = KDTree(points)

new_point = ...  # array of length 3
distance, nearest_index = tree.query(new_point)
if distance > tolerance:  # no existing point is close enough, so add the new one
    points = np.vstack((points, new_point))
    tree = KDTree(points)  # regenerate the tree from scratch
Note that a KD-tree is efficient for looking up a point in a static collection of points (the cost of a lookup is O(log(N))), but it is not optimized for repeatedly adding new points. The SciPy implementation even lacks a method to add new points, so you have to generate a new tree every time you insert a new point. Since this operation is probably O(N*log(N)), it might be faster to just do a brute-force calculation of all distances, which costs O(N). Note also that there is an alternative version, cKDTree, which may be implemented in C for speed; the documentation is not entirely clear on this.
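If an approximate but hashable key is acceptable, another option (not part of the original question or answer) is to quantize each coordinate to a grid of size tol before using it as a dict key. Be aware that two values lying very close to a grid boundary can still end up in different buckets.
import numpy as np

def quantized_key(vec, tol=1e-3):
    # round each coordinate to the nearest multiple of tol so nearby vectors share a key
    return tuple(int(round(x / tol)) for x in np.atleast_1d(vec))

table = {quantized_key([0.01, 0.005]): 1}
print(quantized_key([0.0100000000000001, 0.0050002]) in table)  # True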