I created a graph (graph1.xml) and saved it in a previous script. I have now loaded that graph and am trying to draw it. When I type the following in Python 2.7 (on Ubuntu):
load_graph('graph1.xml')
I receive a message saying:
<Graph object, directed, with 10194124 vertices and 25920412 edges at 0x7fbb837a2e10>
So the graph object clearly contains a lot of vertices and quite a number of edges. Thus I proceed to execute the following code:
from graph_tool.all import *

g = load_graph('graph1.xml')
root_vertex = find_vertex(g, g.vp.vprop, '774123')
root_vertex = root_vertex[0]
graph_draw(g, pos=radial_tree_layout(g, root_vertex), output="test-radial1.png")
This returns a message saying:
<PropertyMap object with key type 'Vertex' and value type 'vector<double>', for Graph 0x7fbb83747410, at 0x7fbb837476d0>
When I open the folder in which I ran the code, a file named test-radial1.png does appear; however, it seems to show only some of the vertices:
Why might that be?
This is because the default edge width is smaller than the resolution of the figure. You can fix this either by increasing the figure size via the output_size option of graph_draw(), or by passing it the parameter edge_pen_width with an appropriately large value.
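For example, a minimal untested sketch, reusing g and root_vertex from your snippet (the size and width values are arbitrary and will need tuning for a graph this large):

from graph_tool.all import graph_draw, radial_tree_layout

# A larger canvas and a thicker edge stroke, so edges survive rasterisation
graph_draw(g, pos=radial_tree_layout(g, root_vertex),
           output_size=(4000, 4000),   # pixels; the default canvas is much smaller
           edge_pen_width=1.5,
           output="test-radial1.png")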
In my pursuit of a fast graph library for Python, I stumbled upon retworkx,
and I'm trying to achieve the same (desired) result I've achieved using networkx.
In my networkx code, I instantiate a digraph object with an array of weighted edges,
call its built-in shortest_path (Dijkstra-based), and receive that path. I do so with the following code:
import networkx as nx
import numpy as np

graph = nx.DiGraph()
in_out_weight_triplets = np.concatenate((in_node_indices, out_node_indices,
                                         np.abs(weights_matrix)), axis=1)
graph.add_weighted_edges_from(in_out_weight_triplets)
shortest_path = nx.algorithms.shortest_path(graph, source=n_nodes, target=n_nodes + 1,
                                            weight='weight')
When trying to reproduce the same shortest path using retworkx:
import retworkx as rx

graph = rx.PyDiGraph(multigraph=False)
in_out_weight_triplets = np.concatenate((in_node_indices.astype(int),
                                         out_node_indices.astype(int),
                                         np.abs(weights_matrix)), axis=1)
unique_nodes = np.unique([in_node_indices.astype(int), out_node_indices.astype(int)])
graph.add_nodes_from(unique_nodes)
graph.extend_from_weighted_edge_list(list(map(tuple, in_out_weight_triplets)))
shortest_path = rx.digraph_dijkstra_shortest_paths(graph, source=n_nodes,
                                                   target=n_nodes + 1)
But when using triplets with float weights I get the following error:
"C:\Users\tomer.d\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py",
line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-23-752a42ce79d7>", line 1, in <module>
graph.extend_from_weighted_edge_list(list(map(tuple, in_out_weight_triplets))) TypeError: argument 'edge_list':
'numpy.float64' object cannot be interpreted as an integer ```
When I try the workaround of multiplying the weights by a factor of 10^4 and casting them to ints:
np.concatenate((in_node_indices.astype(int), out_node_indices.astype(int),
                (np.abs(weights_matrix) * 10000).astype(int)), axis=1)
so that I supposedly won't lose the weight subtleties, no errors are raised,
but the shortest path I get differs from the one networkx returns.
I'm aware that the weights aren't necessarily the issue here,
but they are currently my main suspect.
Any other advice would be thankfully accepted.
Without knowing what in_node_indices, out_node_indices and weights_matrix contain it's hard to provide an exact working example for your use case, but I can take a guess based on the error message. The issue you're hitting is likely that you're trying to use the values in in_node_indices and out_node_indices as retworkx indices, but there isn't necessarily a 1:1 mapping. The retworkx index for a node is assigned when the node is added, and is the method's return value. So if you do something like graph.add_node(3), the return will not necessarily be 3; it will be the node index assigned to that particular instance of 3 when it's added to the graph. If you ran graph.add_nodes_from([3, 3]) you'd get two different indices back.

This is different from networkx, which treats the data payload as a lookup key in the graph (graph.add_node(3) adds a node 3 that you look up by 3, but then you can only ever have a single node with the payload 3). You can refer to the documentation on retworkx for networkx users for more details: https://qiskit.org/documentation/retworkx/networkx.html
So when you call add_nodes_from() you need to map each value in the input array to the index returned at the same position, and use that index to identify the node in the graph. I think something like this should work:
import numpy as np
import retworkx as rx

graph = rx.PyDiGraph(multigraph=False)
unique_indices = np.unique([in_node_indices, out_node_indices])
rx_indices = graph.add_nodes_from(unique_indices)
index_map = dict(zip(unique_indices, rx_indices))
in_out_weight_triplets = np.concatenate((in_node_indices, out_node_indices,
                                         np.abs(weights_matrix)), axis=1)
graph.add_edges_from([(index_map[in_node], index_map[out_node], weight)
                      for in_node, out_node, weight in in_out_weight_triplets])
I haven't tested the above snippet (so there might be typos or other issues with it) because I don't know what the contents of in_node_indices, out_node_indices, and weights_matrix are. But it should give you a better idea of what I described above.
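A further (equally untested) sketch: once the graph is built this way, each edge's payload is its float weight, so you can have Dijkstra read it directly via weight_fn instead of using the integer-casting workaround:

# Untested: assumes n_nodes and n_nodes + 1 appear in the original node id
# arrays, so index_map can translate them to retworkx indices.
shortest_paths = rx.digraph_dijkstra_shortest_paths(
    graph,
    index_map[n_nodes],               # source, mapped to its retworkx index
    target=index_map[n_nodes + 1],
    weight_fn=float,                  # read the float payload as the edge weight
)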
That being said, I do wonder whether weights_matrix is actually an adjacency matrix; if it is, then it's probably easier to just do:
import retworkx
graph = retworkx.PyDiGraph.from_adjacency_matrix(weights_matrix)
This is also typically faster (assuming you already have the matrix) because it uses the numpy C API and avoids both the type conversions between Python and Rust and all the pre-processing steps.
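If you only have the index and weight arrays, here is a rough, untested sketch of building such a matrix first (assuming the node ids are already 0-based integers, and that no real edge has weight 0, since zero entries are treated as the absence of an edge):

import numpy as np
import retworkx as rx

# Dense adjacency matrix: row = source node, column = target node
n = int(max(in_node_indices.max(), out_node_indices.max())) + 1
adj = np.zeros((n, n))
adj[in_node_indices.astype(int).ravel(),
    out_node_indices.astype(int).ravel()] = np.abs(weights_matrix).ravel()
graph = rx.PyDiGraph.from_adjacency_matrix(adj)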
There was also an issue opened similar to this in the retworkx issue tracker recently: https://github.com/Qiskit/retworkx/issues/546. My response there contains more details on the internals of retworkx.
I am checking the elements beneath a surface for their labels & the coordinates of their nodes in the following code,
mySurf = mdb.models['Model-1'].rootAssembly.surfaces['Surf-1']
surfEls = mySurf.elements[:]
surfNodes = []
for eNode in mySurf.nodes:
    surfNodes.append(eNode.coordinates)
This does something, but when I check the sizes of each list I get more element labels than sets of node coordinates!
I also tried the following to get the nodal coordinates,
surfNodes = mySurf.nodes[:]
surfNodesCoords = surfNodes.coordinates[:]
But this just throws an error:
AttributeError: 'MeshSequence' object has no attribute 'coordinates'
Which, I confess, has dumbfounded me. Does anybody with a deeper understanding of the methods used above have an explanation for this behaviour?
The problem is that a MeshSequence object does not have the attribute 'coordinates'. However, a member of the sequence may have this attribute, if the sequence contains nodes. Just read it from each member of the sequence:
surfNodesCoords = [node.coordinates for node in surfNodes]
This makes a list with the coordinates of all the nodes.
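A minimal untested sketch, reusing mySurf from the question, that collects element labels and node coordinates side by side:

# Element labels and node coordinates live on different sequences,
# so the two lists are not expected to have the same length.
surfElLabels = [el.label for el in mySurf.elements]
surfNodesCoords = [node.coordinates for node in mySurf.nodes]
print(len(surfElLabels), len(surfNodesCoords))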
P.S. The first part of the question works fine; the number of nodes is simply bigger than the number of elements.
My question is: what is the difference between dataset.add() and graph.add() in rdflib for Python? I was working under the assumption that graph.add() was used for object properties and dataset.add() for datatype properties, but I am not sure.
graph.add() adds a triple to a graph,
dataset.add() adds a triple to the default graph, or a quad to a dataset
Example from http://rdflib.readthedocs.io [1]:
from rdflib import Dataset, URIRef, Literal

# Create a new Dataset
ds = Dataset()

# Simple triples go to the default graph
ds.add((URIRef('http://example.org/a'),
        URIRef('http://www.example.org/b'),
        Literal('foo')))

# Create a graph in the dataset; if the graph name has already been used,
# the corresponding graph will be returned (i.e. the Dataset keeps track of
# the constituent graphs)
g = ds.graph(URIRef('http://www.example.com/gr'))

# Add triples to the new graph as usual
g.add((URIRef('http://example.org/x'),
       URIRef('http://example.org/y'),
       Literal('bar')))

# Alternatively: add a quad to the dataset -> goes to graph g
ds.add((URIRef('http://example.org/x'),
        URIRef('http://example.org/z'),
        Literal('foo-bar'),
        g))
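To check where each triple ended up, you can iterate over the quads in the dataset (a small sketch building on the example above):

# Each quad is (subject, predicate, object, graph), so the fourth element
# shows which named graph (or the default graph) holds the triple.
for s, p, o, g_ctx in ds.quads((None, None, None, None)):
    print(s, p, o, g_ctx)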
It has nothing to do with whether something is an object property or a datatype property.
[1] http://rdflib.readthedocs.io/en/stable/apidocs/rdflib.html?highlight=dataset#rdflib.graph.Dataset
I have written a Python script for ABAQUS to create several parts with many partitions. To get a structured mesh I have to select several edges. Now there is one edge I apparently cannot select in ABAQUS 6.10 & 6.11. Oddly, everything is fine with ABAQUS 6.13+.
p = mdb.models[name_model].parts[name_part_1]
e = p.edges
pickedEdges = e.getByBoundingBox(((cos(alpha_rad)*ri)-delta_p),((sin(alpha_rad)*ri)-delta_p),0.0,
((cos(alpha_rad)*d_core/2)+delta_p),((sin(alpha_rad)*d_core/2)+delta_p),0.0)
p.seedEdgeByBias(biasMethod=SINGLE, end2Edges=pickedEdges, ratio=bias_f, number=elem_num_rad, constraint=FINER)
Here, 'ri' describes a radius and 'delta_p' (= 0.001) is used to make the bounding box slightly bigger than the original edge.
I also tried to use a bigger bounding box by increasing delta_p, but nothing works.
Any ideas? Thank you in advance! :)
For a sketch:
click me
The described bounding box is box E, and I am trying to get the orange line.
It's not clear from your post why the method isn't working, or what you mean by "not working": by setting delta_p to a very large number, you should be able to select every edge in your model.
You could determine a point on your edge and use the findAt method instead of getByBoundingBox.
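For example, a minimal untested sketch, assuming (as your bounding box suggests) that the troublesome edge runs radially from ri to d_core/2 at angle alpha_rad, so that its midpoint is a known point on it:

from math import cos, sin

# Pick a point known to lie on the edge (here its midpoint) and select by it
r_mid = (ri + d_core / 2.0) / 2.0
pickedEdges = p.edges.findAt(((cos(alpha_rad) * r_mid,
                               sin(alpha_rad) * r_mid,
                               0.0), ))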
I am writing a custom export script to parse all the objects in a Blender file, filter them by name, then check to make sure that they meet some specific criteria.
I am using Blender 2.68a. I've created a Blender file with some basic 2D and 3D meshes, as well as some that should fail my test criteria. I am working in the internal Python console inside Blender; this is the only way to work with the Blender Python API, as its Python environment is customized.
I've sorted out how to iterate through the objects using a for loop and the D.objects iterator, check for name matches using regular expressions, and then get a mesh from the object using:
mesh = obj.to_mesh(C.scene, True, 'RENDER')  # where obj is a bpy.data.objects[index] in the scene
mesh.update(True, True)
mesh.polygons[index].<long list of possible functions>
lets me access an array of polygons to know if there is a set of vertices with edges that form a polygon, and what their key values are.
What I can't sort out is how to determine from the Python console whether a poly is a face or just a poly. Is there a built-in function, or what tests can I perform to determine this programmatically? For example, I can have a mesh of 4 vertices with 4 edges that do not have a face, and I do not want to export this; but if I were to edit the same 4 vertices/edges and put a face on them, then it becomes a desirable export.
Can anyone explain the bpy.data.objects data structure, or where the "faces" are stored? It seems as though it would be a property of the polygons themselves, but the API does not make it obvious. Any assistance in clarifying this would be greatly appreciated. Cheers.
So, I asked this question on the blender.org forums (http://www.blender.org/forum/viewtopic.php?t=28286&postdays=0&postorder=asc&start=0), and a very helpful individual has helped me over the past few days each time I got stuck in my own efforts to plow through this.
The short list of answers is:
1) All polygons are faces. If it isn't stored as a polygon, it isn't a face.
2) Using the to_mesh() function on an object returns a copy of the mesh, so any selections made on the copy are not reflected in the context; therefore the methodology I was using was flawed. The only way to access the live object is through:
bpy.data.objects[<index or object name>].data.vertices[<index>].co[<0, 1, or 2, corresponding to x, y, z respectively>]
bpy.data.objects[<index or object name>].data.polygons[<index>].edge_keys
The first one gives you access to an ordered index of all the vertices in the object (assuming it is of type 'MESH') and their coordinates.
The second one gives you access to a 2D array of ordered pairs which represent edges. The numbers within the tuples correspond to index values in the vertices list from the first command, so you can get the coordinates the edge runs between.
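A quick untested illustration of those two access patterns (the object name "Cube" is just a placeholder; any object of type 'MESH' works the same way):

import bpy

me = bpy.data.objects["Cube"].data        # placeholder object name
print(me.vertices[0].co)                  # x, y, z coordinates of the first vertex
print(me.polygons[0].edge_keys)           # vertex-index pairs for the first face's edges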
One can also create a new BMesh object and copy the object you are interested in into the BMesh. This gives you a lot more functionality that you can't access on the live object. The code in answer 3 shows an example of this.
3) See below for the answer to my question regarding checking faces in a mesh.
It turns out that one way to determine whether an object has faces, and whether all of its edges are part of a face, is to use the following code snippet written by a helpful user, CoDEmanX, on the above thread.
import bpy, bmesh

for ob in bpy.context.scene.objects:
    if ob.type != 'MESH':
        continue

    bm = bmesh.new()
    bm.from_object(ob, bpy.context.scene)

    if len(bm.faces) > 0 and 0 not in (len(e.link_faces) for e in bm.edges):
        print(ob.name, "is valid")
    else:
        print(ob.name, "has errors")
I changed this a little, as I didn't want it to loop through all the objects; instead I've got it as a function that returns True if the object passed in is valid and False otherwise. This lets me serialize my calls so that my addon only tries to validate the objects whose names match a regex.
def validate(obj):
    import bpy, bmesh
    if obj.type == 'MESH':
        bm = bmesh.new()
        bm.from_object(obj, bpy.context.scene)
        if len(bm.faces) > 0 and 0 not in (len(e.link_faces) for e in bm.edges):
            return True
    return False
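For completeness, a small untested usage sketch; the name pattern here is hypothetical and only illustrates the regex filtering mentioned above:

import re
import bpy

name_pattern = re.compile(r"^export_")    # hypothetical naming convention

for obj in bpy.context.scene.objects:
    if name_pattern.match(obj.name) and validate(obj):
        print(obj.name, "will be exported")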