My question is: what is the difference between dataset.add() and graph.add() in rdflib for Python? I was working under the assumption that graph.add() was used for object type properties and dataset.add() for datatype properties. However, I am not sure.
graph.add() adds a triple to a graph, while dataset.add() adds a triple to the default graph, or a quad to a dataset.
Example from http://rdflib.readthedocs.io [1]:

from rdflib import Dataset, URIRef, Literal

# Create a new Dataset
ds = Dataset()

# a simple triple goes to the default graph
ds.add((URIRef('http://example.org/a'),
        URIRef('http://www.example.org/b'),
        Literal('foo')))

# Create a graph in the dataset; if the graph name has already been used,
# the corresponding graph will be returned (i.e., the Dataset keeps track
# of its constituent graphs)
g = ds.graph(URIRef('http://www.example.com/gr'))

# add triples to the new graph as usual
g.add((URIRef('http://example.org/x'),
       URIRef('http://example.org/y'),
       Literal('bar')))

# alternatively: add a quad to the dataset -> it goes to the graph g
ds.add((URIRef('http://example.org/x'),
        URIRef('http://example.org/z'),
        Literal('foo-bar'),
        g))
It has nothing to do with whether something is an object property or a datatype property.
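To illustrate, both kinds of statements go through the same add() call; only the type of the object term differs (the URIs and names below are made up for the example):

from rdflib import Graph, URIRef, Literal

g = Graph()
# "object property" style: the object is another resource
g.add((URIRef('http://example.org/alice'),
       URIRef('http://example.org/knows'),
       URIRef('http://example.org/bob')))
# "datatype property" style: the object is a literal value
g.add((URIRef('http://example.org/alice'),
       URIRef('http://example.org/name'),
       Literal('Alice')))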
[1] http://rdflib.readthedocs.io/en/stable/apidocs/rdflib.html?highlight=dataset#rdflib.graph.Dataset
As a result of my simulation, I want the volume of a surface body (computed using a convex hull algorithm). The calculation itself is done in seconds, but the plotting of the results takes a long time, which becomes a problem for the future design of experiments. I think the main problem is that a matrix (size = number of nodes, over 33,000) is filled with the same volume value just so it can be plotted. Is there any other way to obtain that value without creating this matrix? (The value retrieved must be selected as an output parameter afterwards.)
It must be noted that the volume value is computed in Python in an intermediate script, then saved to an output file that is later read by IronPython in the main script in Ansys ACT.
Thanks!
The matrix creation in the intermediate script (myICV is the computed volume):

import numpy as np

NodeNo = np.array(Col_1)           # Col_1 holds the node numbers
ICV = np.full_like(NodeNo, myICV)  # every node gets the same volume value
np.savetxt(outputfile, (NodeNo, ICV), delimiter=',', fmt='%f')
Plot of the results in the main script:

import csv  # after the CPython function

resfile = opfile
# read the node number and the scaled displ
reader = csv.reader(open(resfile, 'rb'), quoting=csv.QUOTE_NONNUMERIC)
NodeNos = next(reader)
ICVs = next(reader)
#ScaledUxs = next(reader)
a = int(NodeNos[1])
b = ICVs[1]
ExtAPI.Log.WriteMessage(a.GetType().ToString())
ExtAPI.Log.WriteMessage(b.GetType().ToString())
userUnit = ExtAPI.DataModel.CurrentUnitFromQuantityName("Length")
DispFactor = units.ConvertUnit(1, userUnit, "mm")
for id in collector.Ids:
    # plot results
    collector.SetValues(int(NodeNos[NodeNos.index(id)]),
                        {ICVs[NodeNos.index(id)] * DispFactor})
ExtAPI.Log.WriteMessage("ICV read")
So far the result looks like this: [screenshot omitted]
Considering that your 'CustomPost' object is not relevant for visualization but only for passing the volume calculation as a parameter, and without adding many changes to the workflow, I suggest changing the 'Scoping Method' to 'Geometry' and then selecting a single node instead of 'All Bodies' (if the extension result type is 'Node'; you can check this in the xml file).
If your code runs slowly due to the plotting, this should fix it, because you will be requesting just one node.
As you refer to DoE, I understand you expect to run this model iteratively and read the parameter result. An easy trick might be to generate a 'NamedSelection' by 'Worksheet' and select 'Mesh Node' (Entity Type) with 'NodeID' as Criterion and equal to '1', for example. Even if you change the mesh between iterations, there will always be a node with ID 1, so your NamedSelection is guaranteed to be generated successfully in each iteration.
Then you can scope your 'CustomPost' to 'NamedSelection' and then to the one you created. This should work.
If your extension does not accept 'NamedSelection' as 'Scoping Method' and you are changing the mesh in each iteration (if you are not, you can directly scope a node), I think it is time to manually write the parameter as an 'Input Parameter' in the 'Parameter Set'. But that way you will have to control the execution of the model from the Workbench platform.
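If a single scoped node is enough, the intermediate script also no longer needs a node-sized array. A minimal sketch, assuming the same outputfile and myICV as in the question, and node ID 1 as the node you scope to:

import numpy as np

# write one node ID and the volume once, instead of repeating the same
# value for all 33,000+ nodes
np.savetxt(outputfile, (np.array([1.0]), np.array([myICV])),
           delimiter=',', fmt='%f')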
I am curious to see how it goes.
In my pursuit of a fast graph library for Python, I stumbled upon retworkx, and I'm trying to achieve the same (desired) result I've achieved using networkx.
In my networkx code, I instantiate a digraph object with an array of weighted edges, call its built-in shortest_path (Dijkstra-based), and receive that path. I do so with the following code:
import networkx as nx
import numpy as np

graph = nx.DiGraph()
in_out_weight_triplets = np.concatenate((in_node_indices, out_node_indices,
                                         np.abs(weights_matrix)), axis=1)
graph.add_weighted_edges_from(in_out_weight_triplets)
shortest_path = nx.algorithms.shortest_path(graph, source=n_nodes,
                                            target=n_nodes + 1,
                                            weight='weight')
When trying to reproduce the same shortest path using retworkx:

import retworkx as rx

graph = rx.PyDiGraph(multigraph=False)
in_out_weight_triplets = np.concatenate((in_node_indices.astype(int),
                                         out_node_indices.astype(int),
                                         np.abs(weights_matrix)), axis=1)
unique_nodes = np.unique([in_node_indices.astype(int),
                          out_node_indices.astype(int)])
graph.add_nodes_from(unique_nodes)
graph.extend_from_weighted_edge_list(list(map(tuple, in_out_weight_triplets)))
shortest_path = rx.digraph_dijkstra_shortest_paths(graph, source=n_nodes,
                                                   target=n_nodes + 1)
But because the triplets contain float weights, I get the error:

File "C:\Users\tomer.d\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3437, in run_code
  exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-23-752a42ce79d7>", line 1, in <module>
  graph.extend_from_weighted_edge_list(list(map(tuple, in_out_weight_triplets)))
TypeError: argument 'edge_list': 'numpy.float64' object cannot be interpreted as an integer
And when I try the workaround of multiplying the weights by a factor of 10^4 and casting them to ints:

in_out_weight_triplets = np.concatenate((in_node_indices.astype(int),
                                         out_node_indices.astype(int),
                                         (np.abs(weights_matrix) * 10000).astype(int)),
                                        axis=1)

so that I supposedly won't lose the weight subtleties, no errors are raised, but the output of the shortest path is different from the one I get when using networkx.
I'm aware that the weights aren't necessarily the issue here, but they are currently my main suspect. Any other advice would be thankfully accepted.
Without knowing what in_node_indices, out_node_indices, and weights_matrix contain, it's hard to provide an exact working example for your use case. But I can take a guess based on the error message.

I think the issue you're hitting is likely that you're trying to use the values in in_node_indices and out_node_indices as retworkx indices, but there isn't necessarily a 1:1 mapping. The retworkx index for a node is assigned when the node is added, and is the returned value. So if you do something like graph.add_node(3), the return from that will not necessarily be 3; it will be the node index assigned to that instance of 3 when it's added as a node in the graph. If you run graph.add_nodes_from([3, 3]) you get two different indices back. This is different from networkx, which treats the data payloads as lookup keys in the graph (so graph.add_node(3) adds a node 3 to the graph which you look up by 3, but then you can only have a single node with the payload 3). You can refer to the documentation on retworkx for networkx users for more details: https://qiskit.org/documentation/retworkx/networkx.html
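A toy example of that difference, just to illustrate the return values:

import retworkx as rx

graph = rx.PyDiGraph()
print(graph.add_node(3))  # 0 -- the assigned index, not the payload
print(graph.add_node(3))  # 1 -- a second, distinct node that also has payload 3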
So when you call add_nodes_from() you need to map the value at each position in the input array to the index returned by the method at the same position, and use that index to identify the node in the graph. I think something like this will work:

import retworkx as rx
import numpy as np

graph = rx.PyDiGraph(multigraph=False)
unique_indices = np.unique([in_node_indices, out_node_indices])
rx_indices = graph.add_nodes_from(unique_indices)
# map your own node IDs to the retworkx indices assigned to them
index_map = dict(zip(unique_indices, rx_indices))
in_out_weight_triplets = np.concatenate((in_node_indices, out_node_indices,
                                         np.abs(weights_matrix)), axis=1)
graph.add_edges_from([(index_map[src], index_map[dst], weight)
                      for src, dst, weight in in_out_weight_triplets])

I haven't tested the above snippet (so there might be typos or other issues with it) because I don't know what the contents of in_node_indices, out_node_indices, and weights_matrix are. But it should give you a better idea of what I described above.
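One more thing worth checking, since your retworkx path differs from the networkx one: retworkx's Dijkstra functions do not read the edge payload as the weight automatically. If you don't pass weight_fn, every edge costs default_weight (1.0). Assuming your edge payloads are the float weights, and that n_nodes is one of the node IDs in index_map, something like this should make the search use them:

shortest_path = rx.digraph_dijkstra_shortest_paths(
    graph,
    source=index_map[n_nodes],      # translate your node IDs to retworkx indices
    target=index_map[n_nodes + 1],
    weight_fn=float,                # interpret each edge payload as its weight
)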
That being said, I do wonder if weights_matrix is an adjacency matrix. If it is, then it's probably easier to just do:
import retworkx
graph = retworkx.PyDiGraph.from_adjacency_matrix(weights_matrix)
This is also typically faster (assuming you already have the matrix) because it uses the numpy C API and avoids the type conversion of going between Python and Rust, along with all the pre-processing steps.
There was also an issue opened similar to this in the retworkx issue tracker recently: https://github.com/Qiskit/retworkx/issues/546. My response there contains more details on the internals of retworkx.
I tried to create an LP model using pyomo.environ. However, I'm having a hard time creating sets. For my problem, I have to create two sets: one from a bunch of nodes, and the other from the arcs between nodes. I created a network with the Networkx module to store my nodes and arcs.
The node data is saved as (Longitude, Latitude) tuples. The arcs are saved as (nodeA, nodeB), where nodeA and nodeB are both coordinate tuples.
So, a node is something like:
(-97.97516252657978, 30.342243012086083)
And, an arc is something like:
((-97.97516252657978, 30.342243012086083),
(-97.976196300350608, 30.34247219922803))
The way I tried to create the sets is as follows:

import pyomo.environ as pe

# create a model m
m = pe.ConcreteModel()
# network is an object I created with the Networkx module
m.node_set = pe.Set(initialize=self.network.nodes())
m.arc_set = pe.Set(initialize=self.network.edges())
However, I kept getting an error message on arc_set.
ValueError: The value=(-97.97516252657978, 30.342243012086083,
-97.976196300350608, 30.34247219922803) does not have dimension=2,
which is needed for set=arc_set
I find it weird that my arc_set somehow turned into one tuple instead of two. I then tried converting my nodes and arcs into strings, but still got the error. Could somebody give me a hint, or show me how to fix this bug? Thanks!
Underneath the hood, Pyomo "flattens" all indexing sets. That is, it removes nested tuples so that each set member is a single tuple of scalar values. This is generally consistent with other algebraic modeling languages, and helps to make sure that we can consistently (and correctly) retrieve component members regardless of how the user attempted to query them.
In your case, Pyomo will want each member of the arc set as a single 4-member tuple. There is a utility in PyUtilib that you can use to flatten your tuples when constructing the set:
from pyutilib.misc import flatten

m.arc_set = pe.Set(initialize=(tuple(flatten(x)) for x in self.network.edges()))
You can also perform some error checking, in this case to make sure that all edges start and end at known nodes:
from pyutilib.misc import flatten

m.node_set = pe.Set(initialize=self.network.nodes())
m.arc_set = pe.Set(
    within=m.node_set * m.node_set,
    initialize=(tuple(flatten(x)) for x in self.network.edges()),
)
This is particularly important for models like this where you are using floating point numbers as indices, and subtle round-off errors can produce indices that are nearly the same but not mathematically equal.
There has been some discussion among the developers to support both structured and flattened indices, but we have not quite reached consensus on how to best support it in a backwards compatible manner.
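For illustration, once the set is constructed each member is a flattened 4-tuple, and any component indexed by it uses the same flattened form (m.flow here is a hypothetical variable, not from the question):

# each member of m.arc_set is (lon_a, lat_a, lon_b, lat_b)
m.flow = pe.Var(m.arc_set, domain=pe.NonNegativeReals)

# indexing uses the flattened scalars, e.g.:
# m.flow[-97.97516252657978, 30.342243012086083,
#        -97.976196300350608, 30.34247219922803]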
I am writing a custom export script to parse all the objects in a Blender file, filter them by name, then check to make sure that they meet some specific criteria.
I am using Blender 2.68a. I've created a Blender file with some basic 2D and 3D meshes, as well as some that should fail my test criteria. I am working in the internal Python console inside Blender; since its Python environment is customized, this is the only way to work with the Blender Python API.
I've sorted out how to iterate through the objects using a for loop and the D.objects iterator, check for name matches using regular expressions, and then get a mesh from the object using:
mesh = obj.to_mesh(C.scene, True, 'RENDER')  # where obj is a bpy.data.objects[index] in the scene
mesh.update(True, True)
mesh.polygons[index].<long list of possible functions>

The last line lets me access an array of polygons, so I can know whether there is a set of vertices with edges that form a polygon, and what their key values are.
What I can't sort out is how to determine, from the Python console, whether a poly is a face or just a poly. Is there a built-in function, or what tests can I perform to determine this programmatically? For example, I can have a mesh of 4 vertices with 4 edges that do not form a face; I do not want to export this. But if I were to edit the same 4 vertices/edges and put a face on it, then it becomes a desirable export.
Can anyone explain the bpy.data.objects data structure, or where the "faces" are stored? It seems as though they would be a property of the polys themselves, but the API does not make it obvious. Any assistance in clarifying this would be greatly appreciated. Cheers.
I asked this question on the blender.org forums (http://www.blender.org/forum/viewtopic.php?t=28286&postdays=0&postorder=asc&start=0), and a very helpful individual has helped me over the past few days each time I got stuck in my own efforts to plow through this.
The short list of answers is:
1) All polygons are faces. If it isn't stored as a polygon, it isn't a face.
2) Using the to_mesh() function on an object returns a copy of the object's mesh, so any selections applied to the copy are not reflected in the context; the methodology I was using was therefore flawed. The only way to access the live object is through:
bpy.data.objects[<index or object name>].data.vertices[<index>].co[<0, 1, 2 correspond to x, y, z>]
bpy.data.objects[<index or object name>].data.polygons[<index>].edge_keys

The first gives you access to an ordered index of all the vertices in the object (assuming it is of type 'MESH') and their coordinates. The second gives you access to a 2D array of ordered pairs which represent edges. The numbers in those tuples correspond to index values in the vertices list from the first command, so you can get the coordinates the edge runs between.
One can also create a new BMesh object and copy the object you are interested in into the BMesh. This gives you a lot more functionality that you can't access on the live object. The code in answer 3 shows an example of this.
3) See below for the answer to my question about checking faces in a mesh.
It turns out that one way to determine whether an object has faces, and whether all of its edges are part of a face, is the following code snippet, written by the helpful user CoDEmanX in the above thread:
import bpy, bmesh

for ob in bpy.context.scene.objects:
    if ob.type != 'MESH':
        continue
    bm = bmesh.new()
    bm.from_object(ob, bpy.context.scene)
    if len(bm.faces) > 0 and 0 not in (len(e.link_faces) for e in bm.edges):
        print(ob.name, "is valid")
    else:
        print(ob.name, "has errors")
I changed this a little, as I didn't want it to loop through all the objects; instead I've got it as a function that returns True if the object passed in is valid and False otherwise. This lets me serialize my calls so that my addon only tries to validate the objects whose names match a regex.
def validate(obj):
    import bpy, bmesh
    if obj.type == 'MESH':
        bm = bmesh.new()
        bm.from_object(obj, bpy.context.scene)
        if len(bm.faces) > 0 and 0 not in (len(e.link_faces) for e in bm.edges):
            return True
    return False
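For instance, combined with the name matching mentioned above, usage might look like this (the regex pattern is a made-up example, not a convention from the question):

import re
import bpy

pattern = re.compile(r'^export_')  # hypothetical naming convention
for obj in bpy.data.objects:
    if pattern.match(obj.name) and validate(obj):
        print(obj.name, "will be exported")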
Using Python's Networkx library, I created an undirected graph to represent a relationship network between various people. A snippet of my code is below:
import networkx as nx

def creategraph(filepath):
    G = nx.Graph()
    # All the various nodes and edges are added in this stretch of code.
    return G
From what I understand, each node is basically a dictionary. The problem this presents is that I want to perform a different kind of random walk algorithm. Before you jump on me and tell me to use one of the standard functions of the Networkx library, I want to point out that it is a custom algorithm. Suppose I run the creategraph function and store the returned G object in another object (let's call it X). I want to start off at a node called 'Bob'. Bob is connected to Alice and Joe. Now, I want to reassign Y to point to either Alice or Joe at random (with the data I'm dealing with, a given node could have hundreds of edges leaving it). How do I go about doing this? Also, how do I deal with unicode entries in a given node's dict (like how Alice and Joe are listed below)?
X = creategraph("filename")
Y = X['Bob']
print Y
>> {u'Alice': {}, u'Joe': {}}
The choice function in the random module could help with the selection process. You don't really need to worry about the distinction between unicode and string unless you're trying to write them out somewhere as sometimes unicode characters aren't translatable into the ASCII charset that Python defaults to.
The way you'd use random.choice would be something along the lines of:

import random

Y = Y[random.choice(Y.keys())]
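Extending that single step into a whole walk, a minimal sketch of a custom random walk might look like the following (random_walk, start, and steps are made-up names; written for the Python 2-era networkx in the question, where neighbor lookups return dicts):

import random

def random_walk(G, start, steps):
    # walk `steps` hops from `start`, picking a uniformly random neighbor each time
    node = start
    path = [node]
    for _ in range(steps):
        neighbors = list(G[node].keys())  # unicode keys work fine here
        if not neighbors:
            break  # dead end: nowhere left to go
        node = random.choice(neighbors)
        path.append(node)
    return path

# e.g. path = random_walk(X, 'Bob', 10)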