Are there any libraries similar to igraph with which I can create a hypergraph? I am working with hypergraphs now and wanted to use a hypergraph library for it.
MGtoolkit and its paper
pygraph
halp
PyMETIS
SageMath's implementation, 1, 2. SageMath is not a Python library but more like a Python distribution (it currently ships Python 2.7) in which lots of interesting libraries come pre-installed.
I hope we see hypergraph support in NetworkX and igraph soon as well.
There's also HyperNetX, which can represent and visualise hypergraphs.
It seems very accessible, and they have a number of nice tutorials on their GitHub page.
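For a first feel of the API, here is a minimal sketch along the lines of those tutorials (the edge names and node labels are made up):

import hypernetx as hnx

# Hyperedges as a dict: edge name -> collection of member nodes
edges = {"e1": {"a", "b", "c"}, "e2": {"b", "d"}, "e3": {"a", "d"}}
H = hnx.Hypergraph(edges)
hnx.draw(H)  # Euler-style drawing of the hypergraph; requires matplotlib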
However, when working with it I identified some issues:
Performance: the library struggles with graphs that have several thousand nodes. I recommend igraph instead, although it does not have explicit support for hypergraphs. It does offer functionality for bipartite graphs, though, and I believe that if no hyperedge is fully contained in another, you can work with the bipartite incidence graph in place of your given hypergraph (see the sketch after this list).
I encountered an issue where the ordering of nodes was not deterministic, i.e. if you constructed the same graph several times and iterated over the nodes, they would be returned in different orders. This can probably be worked around.
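A minimal sketch of that bipartite encoding with python-igraph (the example hyperedges are made up):

import igraph as ig

# Hypothetical hypergraph: each hyperedge is a set of vertices
hyperedges = [{"a", "b", "c"}, {"b", "d"}, {"a", "d"}]
vertices = sorted(set().union(*hyperedges))

# Incidence graph: the original vertices get type False, one extra
# node per hyperedge gets type True; link v to e whenever v is in e.
types = [False] * len(vertices) + [True] * len(hyperedges)
links = [(vertices.index(v), len(vertices) + i)
         for i, e in enumerate(hyperedges) for v in e]
g = ig.Graph.Bipartite(types, links)
g.vs["name"] = vertices + ["e%d" % i for i in range(len(hyperedges))]
print(g.summary())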
I am writing code in Python to analyze social networks with node and edge attributes. Currently, I am using the NetworkX package to generate the graphs. Is there any limit to the size (in terms of the number of nodes, edges) of the graph which can be generated using this package?
I am new to coding social network problems in Python and have recently come across another package called NetworKit for large networks, but I am not sure at what size NetworKit becomes the better option. Could you please elaborate on the differences in performance and functionality between the two packages?
Thanks in advance for your replies.
My suggestion:
Start with NetworkX, as it has a bigger community and is well maintained and documented... and, best of all, you can easily understand what it does because it's written 100% in Python.
It's true that it's not exactly fast, but it will be fast enough for most calculations. If you are running intensive calculations (e.g. the sigma/omega small-worldness metrics) on big networks (>10k nodes and >100k edges) from your laptop, it can be slow.
If you need to speed things up, you can easily incorporate NetworKit into your code, as it integrates well with networkx and pandas, but it has a much more limited library of algorithms.
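A minimal sketch of that hand-off (the graph size and the choice of EstimateBetweenness here are arbitrary illustrations):

import networkx as nx
import networkit as nk

# Build the graph in networkx, convert it, and run the heavy
# computation in NetworKit. nx2nk remaps node ids to consecutive
# integers, so keep your own mapping if the labels matter.
nxG = nx.erdos_renyi_graph(10000, 0.001, seed=42)
nkG = nk.nxadapter.nx2nk(nxG)

# Approximate betweenness centrality with 100 samples
bc = nk.centrality.EstimateBetweenness(nkG, 100)
bc.run()
print(bc.ranking()[:5])  # top 5 nodes by approximate betweenness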
Compare for yourself:
NetworkX algorithms: https://networkx.github.io/documentation/stable/reference/algorithms/index.html
VS
NetworKit algorithms: https://networkit.github.io/dev-docs/python_api/modules.html
Is there any limit to the size (in terms of the number of nodes, edges) of the graph which can be generated using this package?
No, there is no limit; it all depends on your machine's memory capacity.
could you please elaborate on difference in performance and functionality between the two packages?
I personally don't have any experience with NetworKit; however, here (by Timothy Lin) you can find a very good benchmarking analysis of different tools, including networkx and NetworKit. Check out its conclusion section:
As for recommendations on which package people should learn, I think picking up networkx is still important as it makes network science very accessible with a wide range of tools and capabilities. If analysis starts being too slow (and maybe that’s why you are here) then I will suggest taking a look at graph-tool or networkit to see if they contain the necessary algorithms for your needs.
I need an algorithm that can find a tree decomposition of a given graph in Python. The graphs will be small, so it doesn't need to be very efficient. I have looked around but cannot find much on this at all. Does anyone know of a package that can do this, or some pseudocode I could use to write my own?
Have you looked at networkx? See: https://networkx.github.io/documentation/latest/reference/algorithms/approximation.html
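For example, the approximation module includes treewidth heuristics; here is a minimal sketch on an arbitrary example graph:

import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# Heuristic tree decomposition from a minimum-degree elimination
# ordering: returns an upper bound on the treewidth and the
# decomposition itself, a tree whose nodes are frozenset "bags".
G = nx.petersen_graph()
width, decomposition = treewidth_min_degree(G)
print("treewidth upper bound:", width)
for bag in decomposition.nodes():
    print(sorted(bag))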
I've been searching for graph matching algorithms written in Python but I haven't been able to find much.
I'm currently trying to match two different graphs that derive from two distinct sets of character sequences. I know that there is an underlying connection between the two graphs, more precisely a one-to-one mapping between the nodes. But the graphs don't have the same labels, so I need graph matching algorithms that return node mappings by comparing only topology and/or attributes. By testing, I hope to maximize correct matches.
I've been using Blondel and Heymans from the graphsim package and intend to also use Tacsim from the same package.
I would like to test other options, probably more standard, like maximum subgraph isomorphism or finding subgraphs with very good matchings between the two graphs. Graph edit distance might also help if it manages to give a matching.
The problem is that I can't find anything implemented, not even in NetworkX, which I'm using. Does anyone know of any Python implementations? It would be a plus if they used NetworkX.
I found this Python implementation of graph edit distance algorithms, which uses NetworkX:
https://github.com/Jacobe2169/GMatch4py
"GMatch4py is a library dedicated to graph matching. Graph structure are stored in NetworkX graph objects. GMatch4py algorithms were implemented with Cython to enhance performance."
I have two related graphs created in igraph, A and G. I find community structure in G using either the infomap or label_propagation methods (because those are two that allow for weighted, directed links). From this, I can see the modularity of this community structure for the G graph. However, I need to see what modularity it yields for the A graph. How can I do this?
Did you try using the modularity function?
# detect communities in G with Infomap
im <- infomap.community(graph=G)
# modularity of that partition on G
qG <- modularity(im)
# re-use the membership vector on A and recompute modularity there
memb <- membership(im)
qA <- modularity(x=A, membership=memb, weights=E(A)$weight)
cat("qG=",qG," vs. qA=",qA,"\n",sep="")
Note: tested with igraph v0.7, I don't have a more recent version right now. The parameter/function names might slightly differ.
So I figured it out. What you need to do is find a community structure, either pre-defined or via one of the provided community-detection methods, such as infomap or label_propagation. This gives you a vertex clustering, whose membership you can apply to another graph, and you can use .q to find the modularity.
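A minimal python-igraph sketch of that transfer; the graphs are made up, with A simply a weighted copy of G so that both share a vertex set:

import igraph as ig

G = ig.Graph.Famous("Zachary")
A = G.copy()
A.es["weight"] = [1.0] * A.ecount()

# Community structure detected on G (a VertexClustering)
clustering = G.community_infomap()
print("qG =", clustering.q)  # .q is the modularity of the clustering on G

# Evaluate the same membership vector on A
qA = A.modularity(clustering.membership, weights=A.es["weight"])
print("qA =", qA)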
I have a graph database with a Gremlin query engine. I don't want to change that API. The point of the library is to be able to study graphs that cannot fit fully in memory, and to maximize speed by not falling back to virtual memory.
The query engine is lazy: it will not fetch an edge or vertex until required or requested by the user; otherwise it only uses indices to traverse the graph.
NetworkX has a different API. What can I do to re-use networkx's graph algorithm implementations with my graph?
You are talking about extending your Graph API.
Hopefully the code translates from one implementation to the other, in which case copy-pasting from the algorithms section might work for you (check the licenses first).
If you want to use existing code going forward, you could write a middle layer or adapter class to bridge the two APIs.
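A minimal sketch of such an adapter; backend.edges_of() is a hypothetical stand-in for whatever your Gremlin engine exposes, and the idea is to materialise only the part of the graph an algorithm actually touches:

import networkx as nx

def to_networkx(backend, seeds):
    # Lazily pull the reachable part of the backend graph into a
    # networkx DiGraph, one vertex at a time.
    g = nx.DiGraph()
    frontier = list(seeds)
    seen = set()
    while frontier:
        v = frontier.pop()
        if v in seen:
            continue
        seen.add(v)
        # edges_of(v) should yield (neighbour, edge_attribute_dict)
        # pairs; only vertices reached here are ever fetched.
        for u, attrs in backend.edges_of(v):
            g.add_edge(v, u, **attrs)
            frontier.append(u)
    return g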
If the source code doesn't line up, NetworkX has copious notes about the algorithms used and the underpinning mathematics at the bottom of its help pages and in the code itself.
For the future:
Maybe you could make it open source and get some traction with others who see the traversal engine as a good piece of engineering; in that case you would have help maintaining and extending your work. Good luck.