How to create two-dimensional set objects under the pyomo.environ module - python

I tried to create an LP model using pyomo.environ. However, I'm having a hard time creating sets. For my problem, I have to create two sets: one from a collection of nodes, and the other from the arcs between those nodes. I created a network with the NetworkX module to store my nodes and arcs.
The node data is saved as (Longitude, Latitude) tuples. The arcs are saved as (nodeA, nodeB), where nodeA and nodeB are both coordinate tuples.
So, a node is something like:
(-97.97516252657978, 30.342243012086083)
And, an arc is something like:
((-97.97516252657978, 30.342243012086083),
(-97.976196300350608, 30.34247219922803))
The way I tried to create a set is as follows:
# import pyomo.environ as pe
# create a model m
m = pe.ConcreteModel()
# network is an object I created with the NetworkX module
m.node_set = pe.Set(initialize=self.network.nodes())
m.arc_set = pe.Set(initialize=self.network.edges())
However, I kept getting an error message on arc_set.
ValueError: The value=(-97.97516252657978, 30.342243012086083,
-97.976196300350608, 30.34247219922803) does not have dimension=2,
which is needed for set=arc_set
I find it weird that each of my arcs somehow turned into one flat tuple instead of a pair of tuples. I then tried converting my nodes and arcs to strings, but still got the error.
Could somebody give me a hint, or show me how to fix this bug?
Thanks!

Under the hood, Pyomo "flattens" all indexing sets. That is, it removes nested tuples so that each set member is a single tuple of scalar values. This is generally consistent with other algebraic modeling languages, and it helps ensure that component members can be retrieved consistently (and correctly) regardless of how the user queries them.
In your case, Pyomo will want each member of the arc set to be a single 4-member tuple. There is a utility in PyUtilib that you can use to flatten your tuples when constructing the set:
from pyutilib.misc import flatten
m.arc_set = pe.Set(initialize=(tuple(flatten(x)) for x in self.network.edges()))
You can also perform some error checking, in this case to make sure that all edges start and end at known nodes:
from pyutilib.misc import flatten
m.node_set = pe.Set(initialize=self.network.nodes())
m.arc_set = pe.Set(
    within=m.node_set * m.node_set,
    initialize=(tuple(flatten(x)) for x in self.network.edges()))
This is particularly important for models like this one, where you are using floating-point numbers as indices: subtle round-off errors can produce indices that are nearly the same but not mathematically equal.
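For instance, a minimal sketch of one way to guard against such round-off, assuming it is acceptable to quantize coordinates to a fixed precision before using them as indices (the quantize helper and the 7-decimal-place precision are illustrative assumptions, not part of the answer above):
def quantize(coord, ndigits=7):
    # Round each coordinate so that nearly-equal floats map to identical indices.
    # The precision is an assumed value; tune it to your data.
    return tuple(round(c, ndigits) for c in coord)

m.node_set = pe.Set(initialize=(quantize(n) for n in self.network.nodes()))
m.arc_set = pe.Set(
    within=m.node_set * m.node_set,
    initialize=(quantize(a) + quantize(b) for (a, b) in self.network.edges()))
Note that the tuple concatenation in the arc initializer also performs the flattening, so each arc ends up as a single 4-member tuple.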
There has been some discussion among the developers to support both structured and flattened indices, but we have not quite reached consensus on how to best support it in a backwards compatible manner.

Related

How to create set from list of sets in Python?

In an Abaqus Python script, several plies each have a large number of copies, and each copy has many fibers. In each fiber, a set of edges has been selected: App1-1, App1-2, ..., App99-1, App99-2, ..., App99-88. How can I create a new set that contains all (or some) of these sets of edges?
Thank you.
Code:
allApps = []
...
for i in range(Plies):
    ...
    for j in range(Fiber):
        appSet = Model.rootAssembly.Set(
            edges=Model.rootAssembly.instances['Part'+str(i+1)+'-'+str(1+j)].edges[0:0+1],
            name='App'+str(i+1)+'-'+str(1+j))
        allApps.append(appSet)
I can guess it should be something like this:
Model.rootAssembly.Set(name='allAppEdges', edges=.?.Array(allApps))
but I'm not sure about this, and I have no idea about the correct syntax.
I tested the following on a simple part and it worked for me. I think you can adapt this to achieve what you're trying to do for your specific model. The key is the part.EdgeArray type. For whatever reason, Abaqus requires that your edges be supplied within that type rather than as a simple list or tuple. The Abaqus documentation is not clear on this, and when you pass a list of edges it fails with a vague error: Feature creation failed.
from abaqus import *
import part

mdl = mdb.models['Model-1']
inst = mdl.rootAssembly.instances['Part-1-1']

# Loop through all edges in the instance and add them to a list
my_edges = []
for e in inst.edges:
    my_edges.append(e)

# Create a new set with these edges
# mdl.rootAssembly.Set(name='my_edges', edges=my_edges)  # fails: edges must be an EdgeArray
mdl.rootAssembly.Set(name='my_edges', edges=part.EdgeArray(my_edges))
For others who may find themselves here: similar types are available for vertices, faces, and cells: part.VertexArray, part.FaceArray, and part.CellArray.
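Applied to the loop from the question, a sketch of the combined set might look like this (untested; Model, Plies, and Fiber are the names from the question, and the edges[0:0+1] selection is carried over as-is):
import part

# Collect the edge objects from every per-fiber set, then build one combined set.
all_edges = []
for i in range(Plies):
    for j in range(Fiber):
        inst = Model.rootAssembly.instances['Part'+str(i+1)+'-'+str(1+j)]
        for e in inst.edges[0:0+1]:
            all_edges.append(e)

# An EdgeArray is required here; a plain list fails with "Feature creation failed".
Model.rootAssembly.Set(name='allAppEdges', edges=part.EdgeArray(all_edges))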

Loop through possible values of a class type

I'm learning how to use Python and Basemap and would like to create a loop that produces a map of each projection type.
The projection types are cea, mbtfpq, aeqd, sinu, poly, etc. So I just want a loop that does Basemap(width=x, height=y, projection=[projection type], ...), but I can't figure out how to retrieve the actual set of possible projections.
So far I've tried things like
proj = Basemap()
print(dir(proj))
and
proj = Basemap().projection
print(dir(proj))
but neither returns the possible projection types. I tried
for value in Basemap().projection:
    print(value)
But it just returned
c
y
l
and that's it.
Closest I've gotten is
for value in Basemap().__dict__.items():
    print(value)
but that returns a lot of info, seemingly the default values; one of them is cyl, which is the default projection. I'm getting close, but I can't see how to iterate through each projection.
(My terminology is probably incorrect, so please correct me if I'm wrong!)
Edit: I'd like to learn how to do this without "cheating", i.e. without loading the projection types I already know into an array and looping through that. I'm trying to learn how to do it as if I didn't know the possible values.
There's no need to cheat; looking at the source, you have a supported_projections list that contains all supported projections. You can just use that.
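A minimal sketch of that idea (untested; it assumes your Basemap version exposes the module-level projnames dict from which the supported_projections listing is built, and the try/except is there because some projections need extra keyword arguments such as boundinglat or lat_1):
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap, projnames  # projnames: short name -> description

for short_name, description in projnames.items():
    try:
        m = Basemap(width=12000000, height=9000000,
                    projection=short_name, lat_0=30.0, lon_0=-98.0)
        m.drawcoastlines()
        plt.title('%s (%s)' % (description, short_name))
        plt.savefig('map_%s.png' % short_name)
        plt.clf()
    except Exception as err:
        # Skip projections that need parameters not supplied here.
        print('skipping %s: %s' % (short_name, err))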

Efficient lookup table for collection of numpy arrays

I would like to know the most efficient way to create a lookup table for floats (and collections of floats) in Python. Since both sets and dicts need hashable keys, I guess I can't use some notion of closeness to check for proximity to already-inserted values, can I? I have seen this answer, and it's not quite what I'm looking for: I don't want to put the burden of creating the right key on the user, and I also need to extend it to collections of floats.
For example, given the following code:
>>> import numpy as np
>>> a = {np.array([0.01, 0.005]): 1}
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'numpy.ndarray'
>>> a = {tuple(np.array([0.01, 0.005])): 1}
>>> tuple(np.array([0.0100000000000001,0.0050002])) in a
False
I would like the last statement to return True. Coming from the C++ world, I would create a std::map and provide a comparison function that checks, with some user-defined tolerance, whether the value has already been added to the data structure. Of course, this question extends naturally to lookup tables of arrays (for example, NumPy arrays). So what's the most efficient way to accomplish what I'm looking for?
Since you are interested in 3D points, you could consider a data structure that is optimized for storing spatial data, such as a KD-tree. This is available in SciPy and allows lookup of the point closest to a given coordinate. After you have looked up this point, you can check whether it is within some tolerance to decide whether to accept the new point.
Usage should be something like this (untested; I've never used it myself):
import numpy as np
from scipy.spatial import KDTree

points = ...  # points is [N x 3]
tree = KDTree(points)

new_point = ...  # array of length 3
distance, nearest_index = tree.query(new_point)
if distance > tolerance:
    # no stored point within tolerance, so add the new one
    points = np.vstack((points, new_point))
    tree = KDTree(points)  # regenerate the tree from scratch
Note that a KD-tree is efficient for looking up a point in a static collection of points (a lookup costs O(log N)), but it is not optimized for repeatedly adding new points. The SciPy implementation even lacks a method to add new points, so you have to generate a new tree every time you insert one. Since this rebuild is probably O(N log N), it might be faster to just do a brute-force calculation of all distances, which costs O(N). Note also that there is an alternative version, cKDTree, which might be implemented in C for speed; the documentation is not really clear on this.
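For reference, a minimal sketch of that brute-force O(N) alternative (the function name and the tolerance handling are illustrative assumptions):
import numpy as np

def lookup(points, new_point, tolerance):
    # Return the index of a stored point within `tolerance` of new_point,
    # or None. Brute force: O(N) per query, no tree rebuild on insert.
    if len(points) == 0:
        return None
    dists = np.linalg.norm(points - new_point, axis=1)
    i = int(np.argmin(dists))
    return i if dists[i] <= tolerance else None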

Pattern for associating pyparsing results with a linked-list of nodes

I have defined a pyparsing rule to parse this text into a syntax-tree...
TEXT COMMANDS:
add Iteration name = "Cisco 10M/half"
append Observation name = "packet loss 1"
assign Observation results_text = 0.0
assign Observation results_bool = True
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append Observation name = "packet loss 2"
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
SYNTAX TREE:
['add', 'Iteration', ['name', 'Cisco 10M/half']]
['append', 'Observation', ['name', 'packet loss 1']]
['assign', 'Observation', ['results_text', '0.0']]
['assign', 'Observation', ['results_bool', 'True']]
['append', 'DataPoint']
['assign', 'DataPoint', ['metric', 'txpackets']]
['assign', 'DataPoint', ['units', 'packets']]
...
I'm trying to associate all the nested key-value pairs in the syntax tree above into a linked list of objects... the hierarchy looks something like this (each word is a namedtuple... children in the hierarchy are on the parent's list of children):
Log: [
Iteration: [
Observation:
[DataPoint, DataPoint],
Observation:
[DataPoint, DataPoint]
]
]
The goal of all this is to build a generic test data-acquisition platform to drive the flow of tests against network gear and record the results. After the data is in this format, the same data structure will be used to build a test report. To answer the question in the comments below: I chose a linked list because it seemed like the easiest way to sequentially dequeue the information when writing the report. However, I would rather not assign Iteration or Observation sequence numbers before finishing the tests, in case we find problems and insert more Observations in the course of conducting the test. My theory is that the position of each element in the list is sufficient, but I'm willing to change that if it's part of the problem.
The problem is that I'm getting lost trying to assign Key-Values to objects in the linked list after it's built. For instance, after I insert an Observation namedtuple into the first Iteration, I have trouble reliably handling the update of assign Observation results_bool = True in the example above.
Is there a generalized design pattern to handle this situation? I have googled this for a while, but I can't seem to make the link between parsing the text (which I can do) and managing the data hierarchy (the main problem). Hyperlinks or small demo code are fine... I just need pointers to get on the right track.
I am not aware of an actual design pattern for what you're looking for, but I have a great passion for the issue at hand. I work heavily with network devices, and parsing and organizing the data is a large, ongoing challenge for me.
It's clear that the problem is not parsing the data, but what you do with it afterwards. This is where you need to think about the meaning you are attaching to the data you have parsed. The nested-list method might work well for you if the objects containing the lists are also meaningful.
Namedtuples are great for quick-and-dirty class-ish behavior, but they fall flat when you need them to do anything beyond basic attribute access, especially considering that, as tuples, they are immutable. It seems to me that you'll want to replace certain namedtuple objects with full-blown classes. This way you can highly customize the behavior and the methods available.
For example, you know that an Iteration will always contain 1 or more Observation objects which will then contain 1 or more DataPoint objects. If you can accurately describe the relationships, this sets you on the path to handling them.
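To make that concrete, here is a minimal sketch of one way to route the parsed commands into such a hierarchy (untested; the Node base class and the current/parents bookkeeping are illustrative assumptions, not a pattern from the answer above):
class Node(object):
    # Mutable tree node: attributes from key-value pairs plus a children list.
    def __init__(self, **attrs):
        self.children = []
        self.__dict__.update(attrs)

class Log(Node): pass
class Iteration(Node): pass
class Observation(Node): pass
class DataPoint(Node): pass

log = Log()
current = {'Log': log}  # most recently appended node of each kind
parents = {'Iteration': 'Log', 'Observation': 'Iteration', 'DataPoint': 'Observation'}
classes = {'Iteration': Iteration, 'Observation': Observation, 'DataPoint': DataPoint}

def handle(cmd, kind, pair=None):
    if cmd in ('add', 'append'):
        attrs = dict([pair]) if pair else {}
        node = classes[kind](**attrs)
        current[parents[kind]].children.append(node)
        current[kind] = node  # later `assign` commands target this node
    elif cmd == 'assign':
        key, value = pair
        setattr(current[kind], key, value)

# Feed the syntax-tree rows in order, e.g.:
handle('add', 'Iteration', ('name', 'Cisco 10M/half'))
handle('append', 'Observation', ('name', 'packet loss 1'))
handle('assign', 'Observation', ('results_bool', 'True'))
handle('append', 'DataPoint')
handle('assign', 'DataPoint', ('metric', 'txpackets'))
Because each append updates the current node of its kind, an "assign Observation ..." line always lands on the most recently appended Observation, which is exactly the update the question has trouble with.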
I wound up using textfsm, which allows me to keep state between different lines while parsing the configuration file.

Double linking array in Python

Since I'm pretty new, this question will certainly sound stupid, but I have no idea how to approach it.
I'm trying to take a list of nodes and, for each node, create an array of predecessors and successors within the ordered array of all nodes.
Currently my code looks like this:
nodes = self.peers.keys()
nodes.sort()
peers = {}
numPeers = len(nodes)
for i in nodes:
    peers[i] = [self.coordinator]
for i in range(0, len(nodes)):
    peers[nodes[i % numPeers]].append(nodes[(i+1) % numPeers])
    peers[nodes[(i+1) % numPeers]].append(nodes[i % numPeers])
    # peers[nodes[i % numPeers]].append(nodes[(i+4) % numPeers])
    # peers[nodes[(i+4) % numPeers]].append(nodes[i % numPeers])
The last two (commented-out) lines should later be used to create a skip graph, but that's not really important. The problem is that it doesn't work reliably: sometimes a predecessor or successor is skipped and the next one is used instead, and so forth. Is this correct at all, or is there a better way to do it? Basically, I need to get the array indices at certain offsets from each other.
Any ideas?
I would almost bet that when the error occurs, the values in nodes contain duplicates, which would cause your peers dictionary to get mixed up. Your code assumes the values in nodes are unique.
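A quick sketch of why duplicates break this (the values here are made up): building the dict collapses equal keys, so two positions in the sorted list end up feeding the same predecessor/successor list.
nodes = ['a', 'b', 'b', 'c']  # one duplicate value
peers = {}
for n in nodes:
    peers[n] = []
print(len(peers))  # 3, not 4: both 'b' positions share a single entry

# A cheap guard before building the predecessor/successor lists:
assert len(nodes) == len(set(nodes)), "duplicate node values detected"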
