I have the following algorithm:
I have a graph and a related topological sorting (in graph theory, "a topological sort or topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge uv from vertex u to vertex v, u comes before v in the ordering").
Given a start_position and an end_position (different from start_position), I want to verify whether shifting the element of the list that is at start_position to end_position preserves the topological order, i.e. whether after the shift I still have a topological order.
There are two cases: a left shift (if start_position > end_position) and a right shift (otherwise).
Here is my attempt:
from typing import List

# pb is the question's own module providing the Problem class.
def verify(from_position: int, to_position: int, node_list: List[str], instance: pb.Problem):
    if from_position < to_position:
        # right-shift: the moved node will now come after these nodes,
        # so it must not be a predecessor of any of them
        for task_temp in node_list[from_position + 1:to_position + 1]:
            if (node_list[from_position], task_temp) in instance.all_predecessors:
                return False
        return True
    if to_position < from_position:
        # left-shift: the moved node will now come before these nodes,
        # so none of them may be a predecessor of it
        for task_temp in node_list[to_position:from_position]:
            if (task_temp, node_list[from_position]) in instance.all_predecessors:
                return False
        return True
PS: all_predecessors is a set of 2-element tuples containing all the edges of the graph.
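For concreteness, a tiny usage sketch; the SimpleNamespace below is only a stand-in for pb.Problem (all the function needs from it is the all_predecessors edge set):

from types import SimpleNamespace

# Stand-in for pb.Problem: only the all_predecessors edge set is used here.
instance = SimpleNamespace(all_predecessors={('a', 'b'), ('a', 'c'), ('b', 'd')})
order = ['a', 'b', 'c', 'd']

print(verify(1, 3, order, instance))  # False: 'b' must stay before 'd'
print(verify(2, 3, order, instance))  # True: 'c' and 'd' are unrelated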
Is there a way to make it faster?
The naive approach is asymptotically optimal: Just run through the (new) ordering and verify that it satisfies the topological criterion. You can do this by maintaining a bitfield of the nodes encountered so far, and checking that each new node's predecessors are set in the bitfield. This takes linear time in the number of nodes and edges, which any correct algorithm will need in the worst case.
For other variants of the problem (e.g. measuring in the size of the shift, or optimizing per-query time after preprocessing) there might be better approaches.
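For reference, a minimal sketch of that linear check, assuming the edges are given as a set of (u, v) tuples like the all_predecessors set in the question (a Python set of seen nodes stands in for the bitfield):

def is_topological_order(order, edges):
    # Map each node to its set of direct predecessors.
    predecessors = {}
    for u, v in edges:
        predecessors.setdefault(v, set()).add(u)
    seen = set()
    for node in order:
        # Every predecessor of this node must already have been encountered.
        if not predecessors.get(node, set()) <= seen:
            return False
        seen.add(node)
    return True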
Related
I have a directed acyclic graph and have specific requirements on the topological sort:
Depth first search: I want each branch to reach an end before a new branch is added to the sorting
Several nodes have multiple outgoing edges. For those nodes I have a sorted list of successor nodes that is to be used to choose which node to continue the sorting with.
Example:
When a node n is reached that has three successors m1, m2, m3, each of which would be a valid option to continue with, I would provide a list such as [m3, m1, m2] indicating that the sort should continue with node m3.
I am using networkx. I thought about iterating through the nodes with
from networkx import dfs_edges

sorting = []
for u, v in dfs_edges(dag, source='root'):
    sorting.append(u)  # collect the source node of each DFS tree edge
Or using the method dfs_preorder_nodes but I have not found a way to make it use the list.
Any hints?
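For illustration, what I have in mind is roughly the following hand-rolled preorder DFS, where a priority dict (node -> ordered list of successors) decides which branch to follow first; the names are only illustrative and this is not an existing networkx call:

def prioritized_dfs_preorder(dag, source, priority):
    order, visited = [], set()

    def visit(node):
        if node in visited:
            return
        visited.add(node)
        order.append(node)
        # Follow the successors in the order given by the priority list,
        # falling back to networkx's own successor order otherwise.
        for nxt in priority.get(node, list(dag.successors(node))):
            visit(nxt)

    visit(source)
    return order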
I am looking for a way to generate all possible directed graphs from an undirected template. For example, given this graph "template":
I want to generate all six of these directed versions:
In other words, for each edge in the template, choose LEFT, RIGHT, or BOTH direction for the resulting edge.
There is a huge number of outputs for even a small graph, because there are 3^E valid assignments (where E is the number of edges in the template graph), but many of them are duplicates (specifically, they are isomorphic to another output). Take these two, for example:
I only need one.
I'm curious first: is there a term for this operation? It must be a formal and well-understood process already.
And second, is there a more efficient algorithm to produce this list? My current code (Python, NetworkX, though that's not important for the question) looks like this, which has two things I don't like:
I generate all permutations even if they are isomorphic to a previous graph
I check isomorphism at the end, so it adds additional computational cost
Results := Empty List
T := The Template (Undirected Graph)
For i in range(3^E):
    Create an empty directed graph G
    Convert i to trinary
    For each nth edge (A, B) in T:
        If the nth digit of i in trinary is 1:
            Add the edge to G as (A, B)
        If the nth digit of i in trinary is 2:
            Add the edge to G as (B, A)
        If the nth digit of i in trinary is 0:
            Add both (A, B) and (B, A) to G
    For every graph R in Results:
        If G is isomorphic to R, skip G
    Otherwise, add G to Results
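For concreteness, a rough Python/NetworkX equivalent of the pseudocode above (names are illustrative; it still enumerates all 3^E assignments and only filters duplicates with an isomorphism check at the end, which are exactly the two things I don't like):

import itertools
import networkx as nx

def oriented_versions(template):
    """Return one representative per isomorphism class of the orientations
    of the undirected template (each edge becomes ->, <-, or both)."""
    edges = list(template.edges())
    kept = []
    for choice in itertools.product(range(3), repeat=len(edges)):
        g = nx.DiGraph()
        g.add_nodes_from(template.nodes())
        for (a, b), c in zip(edges, choice):
            if c == 1:
                g.add_edge(a, b)
            elif c == 2:
                g.add_edge(b, a)
            else:  # c == 0: both directions
                g.add_edge(a, b)
                g.add_edge(b, a)
        if not any(nx.is_isomorphic(g, other) for other in kept):
            kept.append(g)
    return kept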
Python only, please!
I have a graph
graph = [[0,1,1,1,0],
[1,0,0,0,0],
[0,1,0,1,1],
[1,0,1,0,1],
[0,0,1,1,0]]
The function clique(graph, vertices) should take an adjacency matrix
representation of a graph, a list of one or more vertices, and return a boolean True if the vertices create a clique
(every person is friends with every other person), otherwise return False.
`def clique(graph, vertices)`
I want to find out whether the given vertices form a clique in the graph above.
If yes, the output should be True, otherwise False.
e.g. the test case ('clique', (graph, [2, 3, 4]), True), meaning clique(graph, [2, 3, 4]) should return True.
An explanation would be appreciated, thanks!
https://en.wikipedia.org/wiki/Clique_problem
Here you go: depending on which problem you actually want to solve, here is a starting point for finding an algorithm.
Is a graph a clique: Just check that all nodes are adjacent to each other.
Does it contain a clique? Always true if the graph is non-empty because a single vertex is already a clique of size one.
Does it contain a clique of size k? Brute-force it.
Find a single maximal clique? A greedy algorithm works, as described in the link.
Find all maximal cliques? See the Wikipedia page for references (this is hard).
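For the first case above (is a given vertex set a clique?), which is also what the question's clique(graph, vertices) signature asks for, a minimal sketch against the adjacency matrix from the question:

from itertools import combinations

def clique(graph, vertices):
    # A clique means every pair of the given vertices is adjacent.
    return all(graph[u][v] == 1 for u, v in combinations(vertices, 2))

print(clique(graph, [2, 3, 4]))  # True for the matrix in the question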
I'm trying to implement move ordering in chess to gain the maximum benefit from alpha-beta pruning.
Right now, I'm starting from the root node, expanding it one level, evaluating all of those successors with my evaluation function, sorting them in ascending order (least advantageous to most, figuring the answer is somewhere in the middle), and passing the nodes in that order to the alpha-beta algorithm. However, my results are roughly 60% slower than if I just pass the nodes in the order they appear on the board. I'm only evaluating from the initial state, but at a high level, is this the correct way that move ordering is implemented?
FWIW, the way I've implemented my move ordering is shown below (maybe the inefficiency comes from how the list is being generated before it is even passed to the alpha-beta function?):
scores = []
for state in states:
    scores.append((state, Board(state, player).score()[1]))  # a tuple of (board, score)
scores.sort(key=lambda lst: abs(lst[1]), reverse=False)  # sort boards by absolute score, ascending
output = [i[0] for i in scores]  # keep only the boards, which are then passed to the alpha-beta algorithm
Thanks very much!
I have a (un-directed) graph represented using adjacency lists, e.g.
a: b, c, e
b: a, d
c: a, d
d: b, c
e: a
where each node of the graph is linked to a list of other node(s)
I want to update such a graph given some new list(s) for certain node(s), e.g.
a: b, c, d
where a is no longer connected to e, and is connected to a new node d
What would be an efficient (both time and space wise) algorithm for performing such updates to the graph?
Maybe I'm missing something, but wouldn't it be fastest to use a dictionary (or defaultdict) of node labels (strings or numbers) to sets? In this case, update could look something like this:
def update(graph, node, edges, undirected=True):
    # graph: dict(str -> set(str)), node: str, edges: set(str), undirected: bool
    if undirected:
        # Detach the node from its old neighbours...
        for e in graph[node]:
            graph[e].remove(node)
        # ...and attach it to the new ones.
        for e in edges:
            graph[e].add(node)
    graph[node] = edges
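For example, applied to the adjacency lists from the question (this assumes every mentioned node already has an entry in the dict):

graph = {'a': {'b', 'c', 'e'}, 'b': {'a', 'd'}, 'c': {'a', 'd'},
         'd': {'b', 'c'}, 'e': {'a'}}
update(graph, 'a', {'b', 'c', 'd'})
# graph['e'] is now empty, and 'a' has been added to graph['d']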
Using sets and dicts, adding and removing the node to/from the edge sets of the other nodes should be O(1), same as updating the edge set for the node itself, so this should be only O(n) for the two loops, with n being the average number of edges of a node.
Using an adjacency matrix would make it O(n) to update, but would take n^2 space, regardless of how sparse the graph is. (Trivially done by flipping the corresponding entries in each changed relationship's row and column.)
Using lists would put the time up to O(n^2) for updating, but for sparse graphs would not take a huge time penalty, and would save a lot of space.
A typical update is "delete edge (a, e); add edge (a, d)", but your update looks like a whole new adjacency list for vertex a. So simply find a's adjacency list and replace it. That should be O(log n) time (assuming a sorted array of adjacency lists, as in your description).
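A minimal sketch of that lookup-and-replace, assuming the adjacency lists are kept in an array of (vertex, neighbors) pairs sorted by vertex (note this only replaces a's own list; the neighbours' lists are untouched, unlike the dict-of-sets answer above):

import bisect

adj = [('a', ['b', 'c', 'e']), ('b', ['a', 'd']), ('c', ['a', 'd']),
       ('d', ['b', 'c']), ('e', ['a'])]

def replace_adjacency(adj, vertex, new_neighbors):
    # Binary-search for the vertex, then swap in the new neighbor list.
    i = bisect.bisect_left(adj, (vertex,))
    if i < len(adj) and adj[i][0] == vertex:
        adj[i] = (vertex, new_neighbors)

replace_adjacency(adj, 'a', ['b', 'c', 'd'])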