Since I'm pretty new, this question will certainly sound stupid, but I have no idea how to approach it.
I'm trying to take a list of nodes and, for each node, create an array of its predecessors and successors in the ordered array of all nodes.
Currently my code looks like this:
nodes = self.peers.keys()
nodes.sort()
peers = {}
numPeers = len(nodes)
for i in nodes:
    peers[i] = [self.coordinator]
for i in range(0, len(nodes)):
    peers[nodes[i % numPeers]].append(nodes[(i + 1) % numPeers])
    peers[nodes[(i + 1) % numPeers]].append(nodes[i % numPeers])
    # peers[nodes[i % numPeers]].append(nodes[(i + 4) % numPeers])
    # peers[nodes[(i + 4) % numPeers]].append(nodes[i % numPeers])
The last two commented lines should later be used to create a skip graph, but that's not really important. The problem is that it doesn't work reliably: sometimes a predecessor or a successor is skipped and the next one is used instead, and so forth. Is this approach correct at all, or is there a better way to do this? Basically I need to get the array indices at certain offsets from each other.
Any ideas?
I would almost bet that when the error occurs, the values in nodes contain duplicates, which would mix up your peers dictionary. Your code assumes the values in nodes are unique.
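If that turns out to be the cause, deduplicating before you sort should fix it. A minimal sketch, reusing your names:

# dedupe first, so each node appears exactly once in the ring
nodes = sorted(set(self.peers.keys()))
numPeers = len(nodes)
peers = dict((n, [self.coordinator]) for n in nodes)
for i in range(numPeers):
    peers[nodes[i]].append(nodes[(i + 1) % numPeers])  # successor
    peers[nodes[(i + 1) % numPeers]].append(nodes[i])  # predecessor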
This question is about a particular approach to finding the intersection, NOT a general method of doing so. My initial approach to solving the problem was the one kenntym gives in his answer to this question. His answer is quoted below.
This takes O(M+N) time and O(1) space, where M and N are the total
lengths of the linked lists. Maybe inefficient if the common part is
very long (i.e. M,N >> m,n).
Traverse the two linked lists to find M and N.
Get back to the heads, then traverse |M − N| nodes on the longer list.
Now walk in lock step and compare the nodes until you find the common ones.
I have trouble understanding the third solution to 160. Intersection of Two Linked Lists on LeetCode. The approach is different, but I feel it might be similar to the solution above. Could anyone show me how the two could be similar? I still can't see how one gets to the intersection from this. The solution presented there is:
Approach #3 (Two Pointers) [Accepted]
Maintain two pointers pA and pB initialized at the head of A and B,
respectively. Then let them both traverse through the lists, one node
at a time.
When pA reaches the end of a list, then redirect it to the head of B
(yes, B, that's right.); similarly, when pB reaches the end of a list,
redirect it to the head of A.
If at any point pA meets pB, then pA/pB is the intersection node.
To see why the above trick would work, consider the following two
lists: A = {1,3,5,7,9,11} and B = {2,4,9,11}, which are intersected at
node '9'. Since B.length (=4) < A.length (=6), pB would reach the end
of the merged list first, because pB traverses exactly 2 nodes less
than pA does. By redirecting pB to head A, and pA to head B, we now
ask pB to travel exactly 2 more nodes than pA would. So in the second
iteration, they are guaranteed to reach the intersection node at the
same time.
If two lists have intersection, then their last nodes must be the same
one. So when pA/pB reaches the end of a list, record the last element
of A/B respectively. If the two last elements are not the same one,
then the two lists have no intersections.
They both rely on getting the pointers such that they are the same distance from the intersection, then walking them forward simultaneously until they meet.
Your first approach explicitly calculates the number of times you have to walk the pointer on the longer list forward, while your second approach does this implicitly by making both pointers take (m+n) steps.
I like the second one a little more because there's no point at which you're walking just one of the pointers forward; both pointers move on every iteration. The first one may be more generalizable to 3+ lists, because you would have to fully traverse each list only once.
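For concreteness, here is a minimal sketch of the two-pointer version (a plain ListNode class is assumed here, not LeetCode's exact scaffolding):

class ListNode(object):
    def __init__(self, val):
        self.val = val
        self.next = None

def get_intersection_node(headA, headB):
    # Each pointer walks its own list, then switches to the other's head.
    # Both therefore travel exactly M + N steps, so they arrive at the
    # intersection node (or at None) at the same time.
    pA, pB = headA, headB
    while pA is not pB:
        pA = headB if pA is None else pA.next
        pB = headA if pB is None else pB.next
    return pA  # the intersection node, or None if the lists don't intersect

When there is no intersection, both pointers become None at the same time after M + N steps, so the loop still terminates.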
I tried to create an LP model using pyomo.environ. However, I'm having a hard time creating sets. For my problem I have to create two sets: one from a bunch of nodes, and the other from several arcs between nodes. I create a network using the Networkx module to store my nodes and arcs.
Each node is saved as a (longitude, latitude) tuple. Arcs are saved as (nodeA, nodeB), where nodeA and nodeB are both coordinate tuples.
So, a node is something like:
(-97.97516252657978, 30.342243012086083)
And, an arc is something like:
((-97.97516252657978, 30.342243012086083),
(-97.976196300350608, 30.34247219922803))
The way I tried to create a set is as following:
# import pyomo.environ as pe
# create a model m
m = pe.ConcreteModel()
# network is an object I created by Networkx module
m.node_set = pe.Set(initialize=self.network.nodes())
m.arc_set = pe.Set(initialize=self.network.edges())
However, I kept getting an error message on arc_set.
ValueError: The value=(-97.97516252657978, 30.342243012086083,
-97.976196300350608, 30.34247219922803) does not have dimension=2,
which is needed for set=arc_set
I find it weird that my arc_set somehow turned into one 4-member tuple instead of two 2-member tuples. I tried converting my nodes and arcs to strings, but still got the error.
Could somebody give me a hint, or show me how to fix this bug?
Thanks!
Underneath the hood, Pyomo "flattens" all indexing sets. That is, it removes nested tuples so that each set member is a single tuple of scalar values. This is generally consistent with other algebraic modeling languages, and helps to make sure that we can consistently (and correctly) retrieve component members regardless of how the user attempted to query them.
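To illustrate what the flattening does (plain Python, not Pyomo internals):

edge = ((-97.975, 30.342), (-97.976, 30.342))     # nested: a 2-tuple of 2-tuples
flat = tuple(v for point in edge for v in point)  # (-97.975, 30.342, -97.976, 30.342)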
In your case, Pyomo will want each member of the arc set to be a single 4-member tuple. There is a utility in PyUtilib that you can use to flatten your tuples when constructing the set:
from pyutilib.misc import flatten
m.arc_set = pe.Set(initialize=(tuple(flatten(x)) for x in self.network.edges()))
You can also perform some error checking, in this case to make sure that all edges start and end at known nodes:
from pyutilib.misc import flatten
m.node_set = pe.Set(initialize=self.network.nodes())
m.arc_set = pe.Set(
    within=m.node_set * m.node_set,
    initialize=(tuple(flatten(x)) for x in self.network.edges()),
)
This is particularly important for models like this where you are using floating point numbers as indices, and subtle round-off errors can produce indices that are nearly the same but not mathematically equal.
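A quick illustration of the round-off issue:

>>> key = (0.1 + 0.2, 0.3)
>>> key == (0.3, 0.3)
False    # 0.1 + 0.2 is 0.30000000000000004, so a dict/set lookup silently misses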
There has been some discussion among the developers to support both structured and flattened indices, but we have not quite reached consensus on how to best support it in a backwards compatible manner.
I have a set, setOfManyElements, which contains n elements. I need to go through all those elements and run a function on each of them:
for s in setOfManyElements:
    elementsFound = EvilFunction(s)
    setOfManyElements |= elementsFound
EvilFunction(s) returns the set of elements it has found. Some of them will already be in the set, some will be new, and some will be in the set and already tested.
The problem is that each time I run EvilFunction, the set expands (up to a maximum set, at which point it stops growing). So I am essentially iterating over a growing set. Also, EvilFunction takes a long time to compute, so I do not want to run it twice on the same data.
Is there an efficient way to approach this problem in Python 2.7?
LATE EDIT: changed the names of the variables to make them more understandable. Thanks for the suggestion.
I suggest an incremental version of 6502's approach:
seen = set(initial_items)
active = set(initial_items)
while active:
    next_active = set()
    for item in active:
        for result in evil_func(item):
            if result not in seen:
                seen.add(result)
                next_active.add(result)
    active = next_active
This visits each item only once, and when finished, seen contains all visited items.
For further research: this is a breadth-first graph search.
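As a toy usage example, with a neighbour dict standing in for evil_func (hypothetical data, just to show the traversal):

graph = {1: {2, 3}, 2: {4}, 3: {4}, 4: set()}
evil_func = lambda item: graph[item]  # stand-in for the expensive call
initial_items = {1}
# Running the loop above with these bindings leaves seen == {1, 2, 3, 4},
# and evil_func is called exactly once per item.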
You can just keep a set of already-visited elements and pick a not-yet-visited element each time:
visited = set()
todo = S
while todo:
    s = todo.pop()
    visited.add(s)
    todo |= EvilFunction(s) - visited
Iterating over a set in your scenario is a bad idea: you have no guarantee on the ordering, and iterators are not meant to be used on a set that is being modified under them. So you do not know what will happen to the iterator, nor where a newly inserted element will end up.
However, using a list and a set may be a good idea:
list_elements = list(set_elements)
for s in list_elements:
    elementsFound = EvilFunction(s)
    new_subset = elementsFound - set_elements  # subtract the set, not the list
    list_elements.extend(new_subset)
    set_elements |= new_subset
Edit
Depending on the size of everything, you could even drop the set entirely
for s in list_elements:
    elementsFound = EvilFunction(s)
    list_elements.extend(i for i in elementsFound if i not in list_elements)
However, I am not sure about the performance of this; I think you should profile. If the list is huge, then the set-based solution seems good, since set operations are cheap. For moderate sizes, though, EvilFunction may be expensive enough that it doesn't matter.
Is there a way to get the indices properly from a Pymel/Maya API function?
I know Pymel has a function called getEdges(); however, according to the docs it gets them from the selected face, whereas I just need them from the selected edges.
Is this possible?
While your answer will work, theodox, I did find a much simpler solution after some serious digging!
It turns out that, hiding and not very well documented, there is a function ironically called indices(). I did search for this, but nothing came up in the docs.
Pymel
selection[0].indices()[0]
The above will give us the integer index of the selected edge. Simple and elegant!
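For context, a fuller snippet of how I'm using it (a sketch, assuming a single mesh edge is selected):

import pymel.core as pm

selection = pm.ls(selection=True)   # e.g. [MeshEdge(u'pCubeShape1.e[4]')]
edge_index = selection[0].indices()[0]
print(edge_index)                   # -> 4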
Do you mean you just want the expanded list of selected edges? That's just FilterExpand -sm 32, or cmds.filterExpand(sm=32), or pm.filterExpand(sm=32), on an edge selection. Those commands always return strings; you grab the indices out of them with a regular expression:
import re

# where objs is a list of edges, for example cmds.ls(sl=True) on an edge selection
cList = "".join(cmds.filterExpand(*objs, sm=32))
outList = set(map(int, re.findall(r'\[([0-9]+)\]', cList)))
which will give you a set containing the integer indices of the edges (I use sets so it's easy to do things like find edges common to two groups without for loops or if tests).
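As a small example of that last point, once the indices are in sets, those comparisons are one-liners (hypothetical index values):

groupA = set([1, 4, 7, 9])
groupB = set([4, 9, 12])
common = groupA & groupB   # edges in both groups: set([4, 9])
onlyA = groupA - groupB    # edges only in the first group: set([1, 7])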
I have defined a pyparsing rule to parse this text into a syntax-tree...
TEXT COMMANDS:
add Iteration name = "Cisco 10M/half"
append Observation name = "packet loss 1"
assign Observation results_text = 0.0
assign Observation results_bool = True
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append Observation name = "packet loss 2"
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
SYNTAX TREE:
['add', 'Iteration', ['name', 'Cisco 10M/half']]
['append', 'Observation', ['name', 'packet loss 1']]
['assign', 'Observation', ['results_text', '0.0']]
['assign', 'Observation', ['results_bool', 'True']]
['append', 'DataPoint']
['assign', 'DataPoint', ['metric', 'txpackets']]
['assign', 'DataPoint', ['units', 'packets']]
...
I'm trying to associate all the nested key-value pairs in the syntax tree above into a linked list of objects... the hierarchy looks something like this (each word is a namedtuple... children in the hierarchy are on the parent's list of children):
Log: [
    Iteration: [
        Observation: [DataPoint, DataPoint],
        Observation: [DataPoint, DataPoint]
    ]
]
The goal of all this is to build a generic test data-acquisition platform to drive the flow of tests against network gear, and record the results. After the data is in this format, the same data structure will be used to build a test report. To answer the question in the comments below, I chose a linked list because it seemed like the easiest way to sequentially dequeue the information when writing the report. However, I would rather not assign Iteration or Observation sequence numbers before finishing the tests... in case we find problems and insert more Observations in the course of conducting the test. My theory is that the position of each element in the list is sufficient, but I'm willing to change that if it's part of the problem.
The problem is that I'm getting lost trying to assign Key-Values to objects in the linked list after it's built. For instance, after I insert an Observation namedtuple into the first Iteration, I have trouble reliably handling the update of assign Observation results_bool = True in the example above.
Is there a generalized design pattern to handle this situation? I have googled this for a while, but I can't seem to make the link between parsing the text (which I can do) and managing the data hierarchy (the main problem). Hyperlinks or small demo code are fine... I just need pointers to get on the right track.
I am not aware of an actual design pattern for what you're looking for, but I have a great passion for the issue at hand. I work heavily with network devices and parsing and organizing the data is a large ongoing challenge for me.
It's clear that the problem is not parsing the data, but what you do with it afterwards. This is where you need to think about the meaning you are attaching to the data you have parsed. The nested-list method might work well for you if the objects containing the lists are also meaningful.
Namedtuples are great for quick-and-dirty class-ish behavior, but they fall flat when you need them to do anything outside of basic attribute access, especially considering that as tuples they are immutable. It seems to me that you'll want to replace certain namedtuple objects with full-blown classes. This way you can highly customize the behavior and methods available.
For example, you know that an Iteration will always contain 1 or more Observation objects which will then contain 1 or more DataPoint objects. If you can accurately describe the relationships, this sets you on the path to handling them.
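To make that concrete, here is a minimal sketch (all names here are illustrative, not a finished design). Small classes hold attributes and children, and a per-type cursor dict remembers the most recently appended object of each type, so an assign line always lands on the right target:

class Node(object):
    """Illustrative container: attributes plus an ordered list of children."""
    def __init__(self, **attrs):
        self.attrs = attrs
        self.children = []

def build(syntax_tree):
    log = Node()
    current = {}  # per-type cursor: the most recently appended object of each type
    parent_of = {'Iteration': lambda: log,
                 'Observation': lambda: current['Iteration'],
                 'DataPoint': lambda: current['Observation']}
    for entry in syntax_tree:
        verb, kind, pairs = entry[0], entry[1], entry[2:]
        attrs = dict(tuple(p) for p in pairs)  # e.g. ['name', 'packet loss 1']
        if verb in ('add', 'append'):
            node = Node(**attrs)
            parent_of[kind]().children.append(node)
            current[kind] = node
        elif verb == 'assign':
            current[kind].attrs.update(attrs)
    return log

The current dict is the key point: since assign always refers to the most recently appended object of that type, a per-type cursor resolves exactly the ambiguity described with assign Observation results_bool = True.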
I wound up using textfsm, which allows me to keep state between different lines while parsing the configuration file.