I am trying to write a script that will automatically mesh geometries for CFD analysis using the Gmsh Python API. There are a few issues I am running into:
First of all, I would like to be able to write Gmsh script files (.geo) for debugging purposes. I looked through the source code of the Gmsh API and found that the .geo_unrolled extension is supported for the gmsh.write() function, but not just .geo. This extension does the trick mostly, but it seems that any meshing operations (such as marking curves as transfinite) or transformations (such as dilate) are not written to the output file when using gmsh.write('test.geo_unrolled'). I assume this has something to do with the _unrolled part. But is there any way to get the full Gmsh script out of the API?
Secondly, when I try to make a copy of a spline like in this example:
p1 = gmsh.model.geo.addPoint(-1, 0.5, 0, 0.1)
p2 = gmsh.model.geo.addPoint(0, 1, 0, 0.1)
p3 = gmsh.model.geo.addPoint(1, 0.5, 0, 0.1)
s1 = gmsh.model.geo.addSpline([p1, p2, p3])
s2 = gmsh.model.geo.copy([s1])
I get ValueError: ('gmshModelGeoCopy returned non-zero error code: ', 1). The error code, 1, seems to indicate that the tag of the original spline (s1) cannot be found when copy() is called. Am I missing something here? I have tried, for example, to call gmsh.model.geo.synchronize() before attempting to call copy(), but this had no effect.
Finally, when I apply the dilate transformation to the example spline from above in the Gmsh GUI (Modules - Geometry - Elementary entities - Transform - Scale, with the Apply scaling on copy option checked), I indeed get a scaled version of the curve as expected, including the three points. Assuming I am able to accomplish the same with the API, how do I then refer to the three new points that the scaled spline goes through, for example if I wanted to draw a line between the start point of the original spline and that of the scaled spline?
In the end, what I want to accomplish is the following: draw a spline through a list of points, create a scaled copy of this spline, draw lines between the start and end points, and create a plane surface bounded by the two splines and lines. Is there a better way to do this than what I am trying to do with the dilation?
It's probably too late, but you never know.
I've never had to create .geo files using the API. But I found this discussion in the Gmsh mailing list archive, which may be helpful.
Regarding your error with copy, you have to specify the dimension of the entity to be copied, and not just the tag (check the documentation, which refers to dimTag). It's the same thing with transformations such as rotate, symmetrize etc. Using the following should work:
s2 = gmsh.model.geo.copy([(1, s1)])
NB: when copying only one entity, I think either the inner parentheses or the brackets are superfluous; in the general case you have to provide a list of tuples of the form [(dim_1, tag_1), (dim_2, tag_2), ..., (dim_n, tag_n)].
Keep in mind that copy will return a variable of the same kind (list of tuples), i.e. in your case the variable s2 will be [(1, tag_s2)]. Therefore you might not want to use the same kind of variable name, since in order to get the tag you'll have to use s2[0][1] instead of simply s2.
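For instance, a minimal sketch of copying the spline and recovering the new tag (building on your example; the variable names are just illustrative):
# copy() takes and returns a list of (dim, tag) pairs; dim 1 = curve
copied = gmsh.model.geo.copy([(1, s1)])
s2_tag = copied[0][1]  # tag of the copied spline
gmsh.model.geo.synchronize()  # make the new entities visible to the model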
This also gives you a partial answer to your last question, since the tags of the copied entities will be contained in the return variable.
Hope that helps you or others!
Hi There
I want to increase the accuracy of the marker detection from aruco.detectMarkers. So, I want to use the corner refinement method CORNER_REFINE_SUBPIX, but I do not understand how it is used in Python.
Sample code:
frame = cv.imread("test.png")
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
para = aruco.DetectorParameters_create()
det_corners, ids, rejected = aruco.detectMarkers(gray,dictionary,parameters=para)
aruco.drawDetectedMarkers(frame,det_corners,ids)
Things I have tried:
para.cornerRefinementMethod()
para.cornerRefinementMethod(aruco.CORNER_REFINE_SUBPIX)
para.cornerRefinementMethod.CORNER_REFINE_SUBPIX
para = aruco.DetectorParameters_create(aruco.CORNER_REFINE_SUBPIX)
para = aruco.DetectorParameters_create(para.cornerRefinementMethod(aruco.CORNER_REFINE_SUBPIX))
None of these worked, and I'm pretty new to ArUco in Python, so I hope there is a simple and obvious solution.
I would also like to implement enclosed markers like in the documentation (page 4). Do you happen to know if there is a way to generate these enclosed markers in Python?
Concerning the first part of your question, you were pretty close: I assume your trouble is in switching and tweaking the "para" options. If so, you only need to set the corresponding attribute on the parameters object, like so:
para.cornerRefinementMethod = aruco.CORNER_REFINE_SUBPIX
Note that "aruco.CORNER_REFINE_SUBPIX" is simply an integer. You can verify this by typing type(aruco.CORNER_REFINE_SUBPIX) in the console. Thus assigning values to the "para" object works like mentioned above.
You might also want to tweak the para.cornerRefinementWinSize which seems to be implemented in units of code pixels, not actual image pixel units.
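Putting it together, a minimal sketch (the window size here is illustrative, not tuned):
para = aruco.DetectorParameters_create()
para.cornerRefinementMethod = aruco.CORNER_REFINE_SUBPIX
para.cornerRefinementWinSize = 5  # in "code pixels"; tune for your marker size
det_corners, ids, rejected = aruco.detectMarkers(gray, dictionary, parameters=para)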
Concerning the second part, you might have to write a function that adds the boxes at the corner points, which you can get using the detectMarkers function. Note that the corner points are always ordered clockwise, so you can easily assign the correct offset values (like "up & left", "up & right", etc.).
para.cornerRefinementMethod = 1
may work.
I'm learning how to use Python and Basemap and would like to create a loop that produces a map of each projection type.
The projection types are: cea, mbtfpq, aeqd, sinu, poly, etc. So I just want a loop that does Basemap(width=x, height=y, projection=[projection type], ...), but I can't figure out how to obtain the actual list of possible projections.
So far I've tried things like
proj = Basemap()
print(dir(proj))
and
proj = Basemap().projection
print(dir(proj))
but neither returns the types of projections it could be. I tried
for value in Basemap().projection:
    print(value)
But it just returned
c
y
l
and that's it.
Closest I've gotten is
for value in Basemap().__dict__.items():
    print(value)
but that returns a lot of info, seemingly the default values; one of them is cyl, which is the default projection. I'm getting close but can't see how to iterate through each projection.
(My semantics are incorrect, so please correct me if I'm wrong!)
Edit: I'd like to learn how to do this without "cheating", i.e. since I know the types of projections possible, load those into an array and loop through the array. I'm trying to learn how to do it if I didn't know the possible values.
There's no need to cheat; looking at the source, you have a supported_projections list that contains all supported projections. You can just use that.
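For example, a minimal sketch (this assumes the module-level supported_projections from mpl_toolkits.basemap; in the versions I've seen it is a formatted string, so the short names are parsed out of it first):
from mpl_toolkits.basemap import Basemap, supported_projections

# each line of the listing starts with the projection's short name
names = [line.split()[0] for line in supported_projections.splitlines() if line.strip()]

for name in names:
    try:
        m = Basemap(projection=name, width=5e6, height=5e6, lat_0=45, lon_0=0)
        print(name, "OK")
    except Exception as exc:
        # some projections require different keyword arguments
        print(name, "needs other parameters:", exc)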
I tried to create an LP model using pyomo.environ. However, I'm having a hard time creating sets. For my problem, I have to create two sets: one set of nodes, and one set of arcs between nodes. I create a network using NetworkX to store my nodes and arcs.
The node data is saved as (longitude, latitude) tuples. The arcs are saved as (nodeA, nodeB), where nodeA and nodeB are both coordinate tuples.
So, a node is something like:
(-97.97516252657978, 30.342243012086083)
And, an arc is something like:
((-97.97516252657978, 30.342243012086083),
(-97.976196300350608, 30.34247219922803))
The way I tried to create the sets is as follows:
# import pyomo.environ as pe
# create a model m
m = pe.ConcreteModel()
# network is an object I created with the NetworkX module
m.node_set = pe.Set(initialize=self.network.nodes())
m.arc_set = pe.Set(initialize=self.network.edges())
However, I kept getting an error message on arc_set.
ValueError: The value=(-97.97516252657978, 30.342243012086083,
-97.976196300350608, 30.34247219922803) does not have dimension=2,
which is needed for set=arc_set
I find it weird that somehow each arc turned into one flat tuple instead of two nested ones. I also tried converting my nodes and arcs into strings, but still got the error.
Could somebody give me a hint, or show me how to fix this bug?
Thanks!
Under the hood, Pyomo "flattens" all indexing sets. That is, it removes nested tuples so that each set member is a single tuple of scalar values. This is generally consistent with other algebraic modeling languages, and helps make sure that we can consistently (and correctly) retrieve component members regardless of how the user attempts to query them.
In your case, Pyomo will want each member of the arc set to be a single 4-member tuple. There is a utility in PyUtilib that you can use to flatten your tuples when constructing the set:
from pyutilib.misc import flatten
m.arc_set = pe.Set(initialize=(tuple(flatten(x)) for x in self.network.edges()))
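For example, flatten turns one of your nested edge tuples into a single flat tuple (a quick illustration using the coordinates from your question):
from pyutilib.misc import flatten

edge = ((-97.97516252657978, 30.342243012086083),
        (-97.976196300350608, 30.34247219922803))
tuple(flatten(edge))
# -> (-97.97516252657978, 30.342243012086083, -97.976196300350608, 30.34247219922803)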
You can also perform some error checking, in this case to make sure that all edges start and end at known nodes:
from pyutilib.misc import flatten
m.node_set = pe.Set(initialize=self.network.nodes())
m.arc_set = pe.Set(
    within=m.node_set * m.node_set,
    initialize=(tuple(flatten(x)) for x in self.network.edges()),
)
This is particularly important for models like this where you are using floating point numbers as indices, and subtle round-off errors can produce indices that are nearly the same but not mathematically equal.
There has been some discussion among the developers to support both structured and flattened indices, but we have not quite reached consensus on how to best support it in a backwards compatible manner.
I am writing a custom export script to parse all the objects in a blender file, filter them by name, then check to make sure that they meet some specific criteria.
I am using Blender 2.68a. I've created a Blender file with some basic 2D and 3D meshes, as well as some that should fail my test criteria. I am working in the internal Python console inside Blender; this is the only way to work with the Blender Python API, as their Python environment is customized.
I've sorted out how to iterate through the objects using a for loop and the D.objects iterator, then check for name matches using regular expressions, and then get a mesh from the object using:
mesh = obj.to_mesh(C.scene, True, 'RENDER')  # where obj is a bpy.data.objects[index] in the scene
mesh.update(True, True)
mesh.polygons[index].<long list of possible functions>
lets me access an array of polygons to know if there is a set of vertices with edges that form a polygon, and what their key values are.
What I can't sort out is how to determine from the Python console whether a poly is a face or just a poly. Is there a built-in function, or what tests can I perform to determine this programmatically? For example, I can have a mesh of 4 vertices with 4 edges that do not have a face, and I do not want to export this; but if I were to edit the same 4 vertices/edges and put a face on them, then it becomes a desirable export.
Can anyone explain the bpy.data.objects data structure, or explain where the "faces" are stored? It seems as though they would be a property of the polygons themselves, but the API does not make it obvious. Any assistance in clarifying this would be greatly appreciated. Cheers.
So, I asked this question on the blender.org forums (http://www.blender.org/forum/viewtopic.php?t=28286&postdays=0&postorder=asc&start=0), and a very helpful individual has helped me over the past few days each time I got stuck in my own efforts to plow through this.
The short list of answers is:
1) All polygons are faces. If it isn't stored as a polygon, it isn't a face.
2) Using the to_mesh() function on an object returns a copy of the mesh, so any selections made on the copy are not reflected in the context, and therefore the methodology I was using was flawed. The only way to access the live object is through:
bpy.data.objects[<index or object name>].data.vertices[<index>].co  # a 3-vector whose [0], [1], [2] components are x, y, z
bpy.data.objects[<index or object name>].data.polygons[<index>].edge_keys
The first one gives you access to an ordered index of all the vertices in the object (assuming it is of type 'MESH'), and their coordinates.
The second one gives you access to a 2D array of ordered pairs which represent edges. The numbers contained in the tuples correspond to index values in the vertices list from the first command, so you can get the coordinates that the edge runs between.
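For example, a small sketch that prints the endpoint coordinates of every polygon edge of an object (the object index is illustrative; assumes the object is of type 'MESH'):
import bpy

ob = bpy.data.objects[0]
verts = ob.data.vertices
for poly in ob.data.polygons:
    for v1, v2 in poly.edge_keys:  # indices into the vertices list
        print(verts[v1].co, "->", verts[v2].co)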
One can also create a new BMesh object and copy the object you are interested in into the BMesh. This gives you a lot more functionality that you can't access on the live object. The code in answer 3 shows an example of this.
3) See below for the answer to my question regarding checking faces in a mesh.
It turns out that one way to determine whether an object has faces, and whether all edges are part of a face, is to use the following code snippet written by a helpful user, CoDEmanX, on the above thread.
import bpy, bmesh
for ob in bpy.context.scene.objects:
    if ob.type != 'MESH':
        continue
    bm = bmesh.new()
    bm.from_object(ob, bpy.context.scene)
    if len(bm.faces) > 0 and 0 not in (len(e.link_faces) for e in bm.edges):
        print(ob.name, "is valid")
    else:
        print(ob.name, "has errors")
I changed this a little bit, as I didn't want it to loop through all the objects; instead I've got it as a function that returns True if the object passed in is valid and False otherwise. This lets me serialize my calls so that my add-on only tries to validate the objects whose names match a regex.
def validate(obj):
    import bpy, bmesh
    if obj.type == 'MESH':
        bm = bmesh.new()
        bm.from_object(obj, bpy.context.scene)
        # valid: at least one face, and every edge borders a face
        if len(bm.faces) > 0 and 0 not in (len(e.link_faces) for e in bm.edges):
            return True
    return False
Let us assume we are looking for this template:
The corners of our template are transparent, so the background will vary, like so:
Assuming we could use the following mask with our template:
It would be very easy to find it.
What I have tried:
I have tried matchTemplate but it doesn't support masks (as far as I know), and using the alpha channel (transparency) in the template does not achieve this, as it compares the alpha channels instead of ignoring those pixels.
I have also looked into "region of interest", which I thought would be the solution, but with it you can only specify a rectangular area. I'm not even sure if it works on the template or not.
I'm sure this is possible by writing my own algorithm, but I was hoping it is possible via standard OpenCV, to avoid reinventing the wheel. Not to mention, it would most likely be more optimised than my own.
So, how could I do something like this with OpenCV + Python?
This can be achieved using only the matchTemplate function, but a little workaround is needed.
Let's analyse the default metric (CV_TM_SQDIFF_NORMED). According to the matchTemplate documentation, the unnormalised form of this metric looks like this:
R(x, y) = sum (I(x+x', y+y') - T(x', y'))^2
where I is the image matrix, T is the template and R is the result matrix. The summation is done over the template coordinates x' and y'.
So, let's alter this metric by inserting a weight matrix W, which has the same dimensions as T:
Q(x, y) = sum W(x', y')*(I(x+x', y+y') - T(x', y'))^2
In this case, by setting W(x', y') = 0 you can make that pixel be ignored. So, how do we compute such a metric? With simple math:
Q(x, y) = sum W(x', y')*(I(x+x', y+y') - T(x', y'))^2
= sum W(x', y')*(I(x+x', y+y')^2 - 2*I(x+x', y+y')*T(x', y') + T(x', y')^2)
= sum{W(x', y')*I(x+x', y+y')^2} - sum{W(x', y')*2*I(x+x', y+y')*T(x', y')} + sum{W(x', y')*T(x', y')^2}
So, we have divided the Q metric into three separate sums, and all of those sums can be calculated with the matchTemplate function (using the CV_TM_CCORR method). Namely:
sum{W(x', y')*I(x+x', y+y')^2} = matchTemplate(I^2, W, method=2)
sum{W(x', y')*2*I(x+x', y+y')*T(x', y')} = matchTemplate(I, 2*W*T, method=2)
sum{W(x', y')*T(x', y')^2} = matchTemplate(T^2, W, method=2) = sum(W*T^2)
The last element is a constant, so for minimisation it has no effect. On the other hand, it may still be useful to see whether our template has a perfect match (Q approaching zero). Nonetheless, for the last element we do not actually need the matchTemplate function, since it can be calculated directly.
The final pseudocode looks like this:
result = matchTemplate(I^2, W, method=2) - matchTemplate(I, 2*W*T, method=2) + as.scalar(sum(W*T^2))
Does it really do exactly as defined? Mathematically, yes. In practice, there is some small rounding error, because the matchTemplate function works on 32-bit floating point, but I believe it is not a big problem.
Please note that you can extend this analysis and derive weighted equivalents for any metric offered by matchTemplate.
This actually worked for me. I am sorry I don't give actual code: I am working in R, so I don't have the code in Python. But the idea is quite straightforward.
I hope this will help.
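For what it's worth, a minimal Python sketch of the pseudocode above might look like this (assuming single-channel float32 arrays I, T and a mask W of the template's size; an illustration rather than the original R code):
import cv2 as cv
import numpy as np

def masked_sqdiff(I, T, W):
    # Q(x, y) = sum W*(I - T)^2, assembled from three CV_TM_CCORR runs
    I, T, W = (a.astype(np.float32) for a in (I, T, W))
    term1 = cv.matchTemplate(I * I, W, cv.TM_CCORR)      # sum W * I^2
    term2 = cv.matchTemplate(I, 2 * W * T, cv.TM_CCORR)  # sum W * 2 * I * T
    term3 = float(np.sum(W * T * T))                     # constant: sum W * T^2
    return term1 - term2 + term3

# The best match minimises Q:
# q = masked_sqdiff(image, template, mask)
# min_val, _, min_loc, _ = cv.minMaxLoc(q)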
What worked for me the one time I needed this was to fill the "mask" areas with white noise. Then it gets effectively washed out of the correlation when looking for matches. Otherwise I got, as I presume you did, false matches on the masked areas.
One answer to your question is convolution: use the template as a kernel and filter the image.
The destination Mat will have dense bright areas where your template might be. You'll have to cluster the results (e.g. mean-shift).
In that way, you'll have a very simplistic implementation of the Generalized Hough Transform, or of template-based convolution matching.
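A rough sketch of that idea with cv2.filter2D might look like this (file names are placeholders; the clustering step is left out):
import cv2 as cv
import numpy as np

image = cv.imread("image.png", cv.IMREAD_GRAYSCALE).astype(np.float32)
template = cv.imread("template.png", cv.IMREAD_GRAYSCALE).astype(np.float32)

kernel = template - template.mean()        # zero-mean kernel so flat regions don't dominate
response = cv.filter2D(image, -1, kernel)  # filter2D correlates; bright peaks mark candidate matches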
Imagemagick 7.0.3.9 now has a masked compare capability so that you can limit the template matching region. See http://www.imagemagick.org/discourse-server/viewtopic.php?f=4&t=31053
Also, I see that OpenCV 3.0 now has masked template matching. See http://docs.opencv.org/3.0.0/df/dfb/group__imgproc__object.html#ga586ebfb0a7fb604b35a23d85391329be
However, it is only available for method == CV_TM_SQDIFF and method == CV_TM_CCORR_NORMED. See "python opencv matchTemplate is mask feature implemented?".
ImageMagick has logic for finding subimages in other images and it works quite well.
compare -verbose -dissimilarity-threshold 0.1 -subimage-search subimage bigimage
I've used it to find and blur watermarks off some products. Don't ask.
(Sometimes you have to do what you have to do..)
2021 Update: I've been trying to find a solution for transparency in templates throughout the day, and I think I finally found a way to do it. matchTemplate() has a mask parameter, which apparently works exactly like OP wants it to: ignore certain pixels from a template when searching for it in another image. And since my templates already contain transparency in them, I decided to use my template as both a template and mask parameter. Surprisingly, it worked.
I'm using JavaScript with opencv4nodejs, so the following Python code snippet might be completely off, but the theory is there and I'm fairly positive it should work.
# Import OpenCV
import cv2 as cv
# Read both the image and the template
image = cv.imread("image.png", cv.IMREAD_COLOR)
template = cv.imread("template.png", cv.IMREAD_COLOR)
# Match with template as both template and mask parameter
result = cv.matchTemplate(image, template, cv.TM_CCORR_NORMED, None, template)
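To then read out the best match, something like this should work (with TM_CCORR_NORMED, the maximum marks the best match):
# Locate the best match in the result map
min_val, max_val, min_loc, max_loc = cv.minMaxLoc(result)
print("best match at", max_loc, "with score", max_val)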
Here's a gist for JavaScript with opencv4nodejs if you're interested.
Now that I think about it, it seems really stupid and way too good to be true, but I've been getting good matches (0.98+) on most tests. Hope this helps!