Given two graphs (A and B), I am trying to determine whether there exists a subgraph of B that matches A up to some threshold based on the difference in edge weights. That is, the sum of the absolute differences between each pair of associated edge weights should be below a specified threshold. The vertex labels are not consistent between A and B, so I am relying solely on the edge weights.
A will be somewhat small (e.g. max 10) and B will be larger (e.g. max 200).
I believe one of these two packages may help:
The Graph Matching Toolbox in MATLAB "implements spectral graph matching with affine constraint (SMAC), optionally with kronecker bistochastic normalization". It states on the webpage that it "handles graphs of different sizes (subgraph matching)"
http://www.timotheecour.com/software/graph_matching/graph_matching.html
The Graph Matching Toolbox in MATLAB is based on the algorithm described in the paper Balanced Graph Matching by Timothee Cour, Praveen Srinivasan, and Jianbo Shi, published at NIPS 2006.
In addition, there is a second toolkit, the Graph Matching Toolkit (GMT), which seems like it might support error-tolerant subgraph matching, as it does support error-tolerant graph matching. Rather than using a spectral method, it offers various ways of computing graph edit distance, and my impression is that it then returns the matching that minimizes that edit distance. If it doesn't explicitly support subgraph matching and you don't care about efficiency, you could enumerate the subgraphs of B (or just a subset of them) and use GMT to match each candidate against A.
http://www.fhnw.ch/wirtschaft/iwi/gmt
Unfortunately, neither of these is in Python, and neither seems to support networkx's graph format. But you may be able to find a converter that changes the representation of the networkx graphs into something usable by these toolkits; you can then run the toolkits and output your desired subgraph matchings.
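As a pure-Python alternative, here is a brute-force sketch using networkx's VF2 matcher (GraphMatcher and its induced-subgraph-isomorphism iterator): it enumerates induced subgraphs of B isomorphic to A and keeps the mapping with the smallest total edge-weight difference. The function name and the threshold semantics are my own illustration, not part of either toolkit, and this is only feasible because A is small.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def find_weighted_subgraph(A, B, threshold):
    """Illustrative brute force: enumerate induced subgraphs of B that are
    structurally isomorphic to A, and return the node mapping
    (A-node -> B-node) with the smallest total edge-weight difference,
    provided that difference is below `threshold`."""
    best_map, best_cost = None, float("inf")
    gm = isomorphism.GraphMatcher(B, A)
    for mapping in gm.subgraph_isomorphisms_iter():  # maps B-nodes -> A-nodes
        inv = {a: b for b, a in mapping.items()}     # invert to A -> B
        cost = sum(
            abs(A[u][v]["weight"] - B[inv[u]][inv[v]]["weight"])
            for u, v in A.edges()
        )
        if cost < best_cost:
            best_map, best_cost = inv, cost
    return (best_map, best_cost) if best_cost < threshold else (None, best_cost)
```

Since A has at most 10 nodes, the enumeration stays manageable unless B contains very many copies of A's structure.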
Related
I'm working on a network optimization problem in which I need to generate all distinct subgraphs of a provided graph (the graph is complete, undirected, and has symmetric edge weights). To ensure connectivity, each of these subgraphs must be mapped to the distinct set of edges coincident to it, where a coincident edge is an edge with one terminal node inside the mapped subgraph and the other terminal node in its complement.
I'm facing implementation issues in Python: I'm not able to enumerate all such coincident edges for every distinct subgraph. I need these mapped edge sets to formulate constraints in the Gurobi solver with Python, and I'm relatively new to both Python and Gurobi.
If the NetworkX module has a built-in function for this task, please provide info about it and a possible implementation.
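For reference, the coincident edges described above are exactly what networkx calls an edge boundary; a minimal sketch of enumerating them for one candidate node set:

```python
import networkx as nx

G = nx.complete_graph(5)   # complete, undirected, as in the question
S = {0, 1}                 # node set of one candidate subgraph

# Edges with exactly one endpoint in S and the other in the complement of S
coincident = list(nx.edge_boundary(G, S))
```

Looping `nx.edge_boundary(G, S)` over each candidate node set S gives the edge sets needed for the Gurobi constraints.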
I have a network that is a graph network and it is the Email-Eu network that is available in here.
This dataset has the actual dataset, which is a graph of around 1005 nodes with the edges that form this giant graph. It also has the ground truth labels for the nodes and its corresponding communities (department). Each one of these nodes belongs to one of each 42 departments.
I want to run a community detection algorithm on the graph to find the corresponding department for each node. My main objective is to find the nodes in the largest community.
So, first I need to find the first 42 departments (Communities), then find the nodes in the biggest one of them.
I started with the Girvan-Newman algorithm to find the communities. The beauty of Girvan-Newman is that it is easy to implement: at each step I find the edge with the highest betweenness and remove it, until I reach the 42 departments (communities) I want.
I am struggling to find other Community Detection Algorithms that give me the option of specifying how many communities/partitions I need to break down my graph into.
Is there any community detection function/technique that gives me the option of specifying how many communities I need to uncover from my graph? Any ideas are very much appreciated.
I am using Python and NetworkX.
A (very) partial answer (and solution) to your question is to use Fluid Communities algorithm implemented by Networkx as asyn_fluidc.
Note that it works on connected, undirected, unweighted graphs, so if your graph has n connected components, you should run it once per component. In fact this can be a significant issue, since you need some prior knowledge of each component to choose a sensible k for it.
Anyway, it is worth a try.
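A minimal sketch of how this could look, using the karate-club graph as a small stand-in for the Email-Eu data (`asyn_fluidc` takes the desired number of communities k directly):

```python
import networkx as nx
from networkx.algorithms.community import asyn_fluidc

G = nx.karate_club_graph()   # stand-in for the (connected) Email-Eu graph
k = 4                        # desired number of communities

# asyn_fluidc requires a connected graph; run once per component otherwise.
communities = list(asyn_fluidc(G, k, seed=42))
largest = max(communities, key=len)
```

On the Email-Eu graph you would set k = 42 (per component, as noted above) and then take `max(communities, key=len)` for the largest department.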
You may want to try pysbm. It is based on networkx and implements different variants of stochastic block models and inference methods.
If you consider switching from networkx to a different Python-based graph package, you may want to look at graph-tool, where you can use the stochastic block model for the clustering task. Another noteworthy package is igraph; see How to cluster a graph using python igraph.
The approaches directly available in networkx are rather old-fashioned. If you aim for state-of-the-art clustering methods, you may consider spectral clustering or Infomap. The selection depends on how you intend to use the inferred communities. The task of inferring ground truth from a network falls under the (approximate) No-Free-Lunch theorem, i.e. (roughly) no algorithm exists that returns "better" communities than any other if we average the results over all possible inputs.
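For instance, spectral clustering with a fixed number of clusters can be run on the graph's adjacency matrix via scikit-learn; a hedged sketch (karate-club graph as a small stand-in, 2 clusters instead of 42):

```python
import networkx as nx
from sklearn.cluster import SpectralClustering

G = nx.karate_club_graph()   # stand-in for the Email-Eu graph
adj = nx.to_numpy_array(G)   # dense adjacency matrix

# affinity="precomputed" lets us pass the adjacency matrix directly;
# n_clusters fixes the number of communities up front.
sc = SpectralClustering(n_clusters=2, affinity="precomputed", random_state=0)
labels = sc.fit_predict(adj)
```

`labels[i]` is then the cluster index of node i, and the largest community is the most frequent label.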
I am not entirely sure of my answer, but maybe you can try this. Are you aware of label propagation? The main idea is that some nodes in the graph are labelled, i.e. they belong to a community, and you want to assign labels to the remaining unlabelled nodes. LPA spreads these labels across the graph and gives you a list of nodes and the communities they belong to; these communities will be the same ones your labelled seed nodes belong to.
So you can control the number of communities extracted from the graph by controlling the number of communities you initialise at the beginning. That said, after LPA converges, some of the communities you initialised may vanish due to the graph structure and the randomness of the algorithm; there are, however, many variants of LPA that let you control this randomness. I believe this page of sklearn talks about it.
You can read about LPA here and also here
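Since networkx's own LPA variants don't clamp seed labels, here is a minimal hand-rolled sketch of the seeded variant described above (the function `seeded_lpa` is my own illustration, not a library call):

```python
import random
import networkx as nx

def seeded_lpa(G, seeds, max_iter=100, rng_seed=0):
    """Seeded label propagation: seed nodes keep their labels fixed, and
    every other node repeatedly adopts the most common label among its
    labelled neighbours. The distinct seed labels bound the number of
    communities that can appear."""
    rng = random.Random(rng_seed)
    labels = dict(seeds)  # node -> community label for seed nodes
    for _ in range(max_iter):
        changed = False
        nodes = list(G.nodes())
        rng.shuffle(nodes)
        for v in nodes:
            if v in seeds:
                continue  # seed labels are clamped
            counts = {}
            for u in G[v]:
                if u in labels:
                    counts[labels[u]] = counts.get(labels[u], 0) + 1
            if not counts:
                continue  # no labelled neighbour yet
            best = max(counts, key=counts.get)
            if labels.get(v) != best:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels

G = nx.karate_club_graph()
labels = seeded_lpa(G, {0: "A", 33: "B"})
```

On a connected graph every node receives one of the seed labels within roughly diameter-many sweeps.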
I've been searching for graph matching algorithms written in Python but I haven't been able to find much.
I'm currently trying to match two different graphs that derive from two distinct sets of character sequences. I know that there is an underlying connection between the two graphs, more precisely a one-to-one mapping between the nodes. But the graphs don't have the same labels and as such I need graph matching algorithms that return nodes mappings just by comparing topology and/or attributes. By testing, I hope to maximize correct matches.
I've been using Blondel and Heymans from the graphsim package and intend to also use Tacsim from the same package.
I would like to test other options, probably more standard, like maximum subgraph isomorphism or finding subgraphs with very good matchings between the two graphs. Graph edit distance might also help if it manages to give a matching.
The problem is that I can't find anything implemented, even in Networkx that I'm using. Does anyone know of any Python implementations? Would be a plus if those options used Networkx.
I found this implementation of Graph Edit Distance algorithms which uses NetworkX in Python.
https://github.com/Jacobe2169/GMatch4py
"GMatch4py is a library dedicated to graph matching. Graph structure are stored in NetworkX graph objects. GMatch4py algorithms were implemented with Cython to enhance performance."
I am currently working a large graph, with 1.5 Million Nodes and 11 Million Edges.
For the sake of speed, I checked the benchmarks of the most popular graph libraries: iGraph, Graph-tool, NetworkX and Networkit. And it seems iGraph, Graph-tool and Networkit have similar performance. And I eventually used iGraph.
With the directed graph built with iGraph, the pagerank of all vertices can be calculated in 5 secs. However, when it came to Betweenness and Closeness, it took forever for the calculation.
In the documentation, it says that by specifying a cutoff, igraph will ignore all paths longer than the cutoff value.
I am wondering if there is a rule of thumb for choosing the best cutoff value.
The cutoff really depends on the application and on the network parameters (# nodes, # edges).
It's hard to name a general closeness threshold, since it depends greatly on other parameters (# nodes, # edges, ...).
One thing you can know for sure is that every closeness centrality lies between 2/[n(n-1)] (the minimum, attained at an endpoint of a path graph) and 1/(n-1) (the maximum, attained at the center of a star or at any vertex of a clique).
Perhaps a better question would be about the Freeman centralization of closeness, which is a normalized version of closeness that is easier to compare across different graphs.
Suggestion:
You can do a grid search for different cutoff values and then choose the one that makes more sense based on your application.
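One way to sketch that grid search, shown here with networkx and a hand-rolled cutoff closeness (the principle is the same as igraph's `cutoff` argument: stop expanding shortest paths beyond the cutoff, then compare the resulting ranking with the exact one):

```python
import networkx as nx

def closeness_with_cutoff(G, cutoff):
    """Closeness computed only over paths of length <= cutoff,
    mimicking the role of igraph's `cutoff` argument."""
    scores = {}
    for v in G:
        dist = nx.single_source_shortest_path_length(G, v, cutoff=cutoff)
        total = sum(dist.values())
        scores[v] = (len(dist) - 1) / total if total > 0 else 0.0
    return scores

def top_k(scores, k=20):
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

G = nx.barabasi_albert_graph(200, 3, seed=1)   # small connected toy graph
exact = nx.closeness_centrality(G)

# Grid search: the smallest cutoff whose top-20 nodes agree (>= 90%)
# with the exact top-20 is "good enough" for a ranking application.
for cutoff in range(1, nx.diameter(G) + 1):
    approx = closeness_with_cutoff(G, cutoff)
    overlap = len(top_k(exact) & top_k(approx)) / 20
    if overlap >= 0.9:
        break
```

The agreement criterion (top-20 overlap of 90%) is an illustrative choice; pick whatever ranking fidelity your application actually needs.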
I'm looking for an algorithm/method to compare the structure of two directed graphs.
Each graph represents a procedure, with the nodes being individual work steps. My aim is to compare two procedures and detect possible differences in a somewhat intelligent manner, meaning the algorithm should detect simple deviations such as two interchanged work steps or an additional work step in one of the graphs.
Initial situation:
The number of vertices of the two graphs may differ
The weight of all edges is the same
A graph has up to 5k nodes
The graphs are stored as pandas dataframe in python
Example: