Numpy: how to simulate propagation in a lattice - python

Here's the inefficient Python code I have:
import numpy as np
import random

matrix = np.empty( (100,100), dtype=bool )
matrix[:,:] = False
matrix[50,50] = True

def propagate(matrix, i, j):
    for (di,dj) in [ (1,0), (0,1), (-1,0), (0,-1) ]:
        (ni,nj) = (i+di, j+dj)
        if matrix[ni,nj] and flip_coin_is_face():
            matrix[i,j] = True

def flip_coin_is_face():
    return random.uniform(0,1) < 0.5

for k in xrange(1000):
    for i in xrange(1,99):
        for j in xrange(1,99):
            propagate(matrix, i, j)
which basically propagates the True state from the center of the matrix. Since I'm coding the loops and the propagation rule in Python, this is of course very slow.
My question is, how can I use Numpy indexing to make this as fast as possible?

I can think of one approach, but it differs from your original code. Namely, you can take the array at each step (the k loop), find the cells that are ones, propagate each of them to its neighbours, i.e. roll the dice 4 times per one, and evaluate the next step's array. Each operation can be done with a numpy one-liner (using where, reshape, + and * for matrices), so there are no inner loops.
The difference is that values propagated within a step are not taken into account; all changes are evaluated at once. This will slow down, I suspect noticeably, the propagation in terms of the number of steps required to fill the whole matrix.
If this approach is ok, I can come up with some code.
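For illustration, here is a minimal sketch of that synchronous idea (my own code, not part of the original answer; one coin is rolled per direction, and the border is simply kept False to match the 1..98 loops in the question):

import numpy as np

matrix = np.zeros((100, 100), dtype=bool)
matrix[50, 50] = True

for k in range(1000):
    updated = matrix.copy()
    # one roll per direction: a cell catches the True state from an occupied
    # neighbour with probability 0.5, independently for each neighbour
    for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        neighbour = np.roll(matrix, shift, axis=axis)
        coin = np.random.random(matrix.shape) < 0.5
        updated |= neighbour & coin
    # keep the border False so np.roll's wrap-around has no effect
    updated[0, :] = updated[-1, :] = updated[:, 0] = updated[:, -1] = False
    matrix = updated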

I'm not great with numpy, but to "propagate" through the matrix, you can use something like a breadth-first search. If you haven't used it before, it looks like this:
import Queue

def neighbors(i, j, mat_shape):
    rows = mat_shape[0]
    cols = mat_shape[1]
    offsets = [(-1, 0), (1, 0), (0, 1), (0, -1)]
    neighbors = []
    for off in offsets:
        r = off[0]+i
        c = off[1]+j
        if 0 <= r < rows and 0 <= c < cols:
            neighbors.append((r,c))
    return neighbors

def propagate(matrix, i, j):
    # 'parents' is used in two ways. first, it tells us where we've already been;
    # second, it tells us which cell each point was propagated from.
    # (dtype=object so that a (row, col) tuple can be stored in each cell)
    parents = np.empty(matrix.shape, dtype=object)
    parents[:,:] = None
    # first-in-first-out queue. initially it just has the start point
    Q = Queue.Queue()
    # do the first step manually; start propagation with neighbors
    matrix[i,j] = True
    parents[i,j] = (i,j)  # mark the start so it never gets re-enqueued
    for n in neighbors(i,j,matrix.shape):
        Q.put(n)
        parents[n[0],n[1]] = (i,j)
    # initialization done. on to the propagation
    while not Q.empty():
        current = Q.get()  # gets the front element and removes it
        parent = parents[current[0], current[1]]
        matrix[current[0], current[1]] = matrix[parent[0], parent[1]] and flip_coin_is_face()
        # propagate to neighbors, in order
        for next in neighbors(current[0], current[1], matrix.shape):
            # only propagate there if we haven't already
            if parents[next[0], next[1]] is None:
                parents[next[0], next[1]] = current
                Q.put(next)
    return matrix
You can probably be more clever and cut off the propagation early (since once it gets to False, it will never get True again). But for 100x100, this should be plenty fast.
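A quick usage sketch (my own addition, reusing flip_coin_is_face and numpy from the question):

matrix = np.zeros((100, 100), dtype=bool)
matrix = propagate(matrix, 50, 50)
print matrix.sum(), 'cells ended up True'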

Related

Heuristic to choose five column arrays that maximise the dot product

I have a sparse 60000x10000 matrix M where each element is either a 1 or 0. Each column in the matrix is a different combination of signals (ie. 1s and 0s). I want to choose five column vectors from M and take the Hadamard (ie. element-wise) product of them; I call the resulting vector the strategy vector. After this step, I compute the dot product of this strategy vector with a target vector (that does not change). The target vector is filled with 1s and -1s such that having a 1 in a specific row of the strategy vector is either rewarded or penalised.
Is there some heuristic or linear algebra method that I could use to help me pick the five vectors from the matrix M that result in a high dot product? I don't have any experience with Google's OR tools nor Scipy's optimization methods so I am not too sure if they can be applied to my problem. Advice on this would be much appreciated! :)
Note: the five column vectors given as the solution do not need to be optimal; I'd rather have something that does not take months/years to run.
First of all, thanks for a good question. I don't get to practice numpy that often. Also, I don't have much experience in posting to SE, so any feedback, code critique, and opinions relating to the answer are welcome.
This was an attempt at finding an optimal solution at first, but I didn't manage to deal with the complexity. The algorithm should, however, give you a greedy solution that might prove to be adequate.
Colab Notebook (Python code + Octave validation)
Core Idea
Note: During runtime, I've transposed the matrix. So, the column vectors in the question correspond to row vectors in the algorithm.
Notice that you can multiply the target with one vector at a time, effectively getting a new target, but with some 0s in it. These will never change, so you can filter out some computations by removing those rows (columns, in the algorithm) entirely from further computations, both from the target and the matrix; you're then left with a valid target again (only 1s and -1s in it). A small sketch of this reduction follows.
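(My own illustrative sketch, not part of the original answer; picked is one chosen 0/1 row vector, and matrix holds the remaining candidate vectors as rows:)

new_target = target * picked      # entries where picked == 0 become 0 and stay 0
keep = picked == 1                # positions that can still contribute to the score
target = new_target[keep]         # a valid +1/-1 target again
matrix = matrix[:, keep]          # shrink every remaining candidate vector too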
That's the basic idea of the algorithm. Given:
n: number of vectors you need to pick
b: number of best vectors to check
m: complexity of matrix operations to check one vector
Do an exponentially complex depth-first search, roughly O(m * b^n), but decrease the cost of the calculations in deeper layers by reducing the target/matrix size, while cutting down a few search paths with some heuristics.
Heuristics used
The best score achieved so far is known in every recursion step. Compute an optimistic vector (turn -1 to 0) and check what scores can still be achieved. Do not search in levels where the score cannot be surpassed.
This is useless if the best vectors in the matrix have 1s and 0s equally distributed. The optimistic scores are just too high. However, it gets better with more sparsity.
Ignore duplicates. Basically, do not check duplicate vectors in the same layer. Because we reduce the matrix size, the chance for ending up with duplicates increases in deeper recursion levels.
Further Thoughts on Heuristics
The most valuable heuristics are those that eliminate vector choices at the start. There's probably a way to find vectors that are worse-or-equal than others with respect to their effect on the target. Say, if v1 only differs from v2 by an extra 1, and the target has a -1 in that row, then v1 is worse-or-equal than v2.
The problem is that we need to find more than 1 vector, and can't readily discard the rest. If we have 10 vectors, each worse-or-equal than the one before, we still have to keep 5 at the start (in case they're still the best option), then 4 in the next recursion level, 3 in the following, etc.
Maybe it's possible to produce a tree and pass it on into the recursion? Still, that doesn't help trim down the search space at the start... Maybe it would help to only consider 1 or 2 of the vectors in the worse-or-equal chain? That would explore more diverse solutions, but doesn't guarantee finding a better one.
Warning: note that the MATRIX and TARGET in the example are int8. If you use these directly for the dot product, it will overflow. Though I think all operations in the algorithm create new variables, so they are not affected.
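A tiny illustration of that overflow risk (my own example, not from the original answer):

import numpy as np

strategy = np.ones(300, dtype=np.int8)
target = np.ones(300, dtype=np.int8)
print(strategy @ target)                   # result dtype stays int8, so the true value 300 wraps around
print(strategy.astype(np.int32) @ target)  # upcasting first gives the expected 300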
Code
import numpy as np

# Given:
TARGET = np.random.choice([1, -1], size=60000).astype(np.int8)
MATRIX = np.random.randint(0, 2, size=(10000,60000), dtype=np.int8)

# Tunable - increase to search more vectors, at the cost of time.
# Performs better if the best vectors in the matrix are sparse
MAX_BRANCHES = 3  # can give more for sparser matrices

# Usage
score, picked_vectors_idx = pick_vectors(TARGET, MATRIX, 5)

# Function
def pick_vectors(init_target, init_matrix, vectors_left_to_pick: int, best_prev_result=float("-inf")):
    assert vectors_left_to_pick >= 1
    if init_target.shape == (0, ) or len(init_matrix.shape) <= 1 or init_matrix.shape[0] == 0 or init_matrix.shape[1] == 0:
        return float("inf"), None

    target = init_target.copy()
    matrix = init_matrix.copy()

    neg_matrix = np.multiply(target, matrix)
    neg_matrix_sum = neg_matrix.sum(axis=1)

    if vectors_left_to_pick == 1:
        picked_id = np.argmax(neg_matrix_sum)
        score = neg_matrix[picked_id].sum()
        return score, [picked_id]
    else:
        sort_order = np.argsort(neg_matrix_sum)[::-1]
        sorted_sums = neg_matrix_sum[sort_order]
        sorted_neg_matrix = neg_matrix[sort_order]
        sorted_matrix = matrix[sort_order]

        best_score = best_prev_result
        best_picked_vector_idx = None

        # Heuristic 1 (H1) - optimistic target.
        # Set a maximum score that can still be achieved
        optimistic_target = target.copy()
        optimistic_target[target == -1] = 0
        if optimistic_target.sum() <= best_score:
            # This check can be removed - the scores are too high at this point
            return float("-inf"), None

        # Heuristic 2 (H2) - ignore duplicates
        vecs_tried = set()

        # MAIN GOAL: for picked_id, picked_vector in enumerate(sorted_matrix):
        for picked_id, picked_vector in enumerate(sorted_matrix[:MAX_BRANCHES]):
            # H2
            picked_tuple = tuple(picked_vector)
            if picked_tuple in vecs_tried:
                continue
            else:
                vecs_tried.add(picked_tuple)

            # Discard picked vector
            new_matrix = np.delete(sorted_matrix, picked_id, axis=0)

            # Discard matrix and target rows where vector is 0
            ones = np.argwhere(picked_vector == 1).squeeze()
            new_matrix = new_matrix[:, ones]
            new_target = target[ones]
            if len(new_matrix.shape) <= 1 or new_matrix.shape[0] == 0:
                return float("-inf"), None

            # H1: Do not compute if best score cannot be improved
            new_optimistic_target = optimistic_target[ones]
            optimistic_matrix = np.multiply(new_matrix, new_optimistic_target)
            optimistic_sums = optimistic_matrix.sum(axis=1)
            optimistic_viable_vector_idx = optimistic_sums > best_score
            if optimistic_sums.max() <= best_score:
                continue
            new_matrix = new_matrix[optimistic_viable_vector_idx]

            score, next_picked_vector_idx = pick_vectors(new_target, new_matrix, vectors_left_to_pick - 1, best_prev_result=best_score)
            if score <= best_score:
                continue

            # Convert idx of trimmed-down matrix into sorted matrix IDs
            for i, returned_id in enumerate(next_picked_vector_idx):
                # H1: Loop until you hit the required number of 'True'
                values_passed = 0
                j = 0
                while True:
                    value_picked: bool = optimistic_viable_vector_idx[j]
                    if value_picked:
                        values_passed += 1
                        if values_passed - 1 == returned_id:
                            next_picked_vector_idx[i] = j
                            break
                    j += 1
                # picked_vector index
                if returned_id >= picked_id:
                    next_picked_vector_idx[i] += 1

            best_score = score
            # Convert from sorted matrix to input matrix IDs before returning
            matrix_id = sort_order[picked_id]
            next_picked_vector_idx = [sort_order[x] for x in next_picked_vector_idx]
            best_picked_vector_idx = [matrix_id] + next_picked_vector_idx

        return best_score, best_picked_vector_idx
Maybe it's too naive, but the first thing that occurs to me is to choose the 5 columns with the shortest distance to the target:
import scipy.sparse
import numpy as np
from sklearn.metrics.pairwise import pairwise_distances

def sparse_prod_axis0(A):
    """Sparse equivalent of np.prod(arr, axis=0)
    From https://stackoverflow.com/a/44321026/3381305
    """
    valid_mask = A.getnnz(axis=0) == A.shape[0]
    out = np.zeros(A.shape[1], dtype=A.dtype)
    out[valid_mask] = np.prod(A[:, valid_mask].A, axis=0)
    return np.matrix(out)

def get_strategy(M, target, n=5):
    """Guess n best vectors.
    """
    dists = np.squeeze(pairwise_distances(X=M, Y=target))
    idx = np.argsort(dists)[:n]
    return sparse_prod_axis0(M[idx])

# Example data.
M = scipy.sparse.rand(m=6000, n=1000, density=0.5, format='csr').astype('bool')
target = np.atleast_2d(np.random.choice([-1, 1], size=1000))

# Try it.
strategy = get_strategy(M, target, n=5)
result = strategy @ target.T
It strikes me that you could add another step of taking the top few percent from the M–target distances and check their mutual distances — but this could be quite expensive.
I have not checked how this compares to an exhaustive search.
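For a sanity check on a toy instance, an exhaustive search over all 5-subsets can be compared against whatever heuristic you use (my own sketch; sizes are shrunk so brute force is feasible, and the candidate vectors are rows of M_small to keep the indexing simple):

import itertools
import numpy as np

rng = np.random.RandomState(0)
M_small = rng.randint(0, 2, size=(30, 40)).astype(np.int8)   # 30 candidate vectors as rows
target_small = rng.choice([1, -1], size=40).astype(np.int8)

def score(rows):
    strategy = np.prod(M_small[list(rows)], axis=0)           # element-wise product of the picked rows
    return int(strategy.astype(np.int32) @ target_small)

best = max(itertools.combinations(range(30), 5), key=score)
print(best, score(best))                                      # optimal 5-subset and its dot product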

Analyzing the complexity matrix path-finding

Recently in my homework, I was assigned to solve the following problem:
Given a matrix of order nxn of zeros and ones, find the number of paths from [0,0] to [n-1,n-1] that go only through zeros (they are not necessarily disjoint) where you could only walk down or to the right, never up or left. Return a matrix of the same order where the [i,j] entry is the number of paths in the original matrix that go through [i,j], the solution has to be recursive.
My solution in python:
def find_zero_paths(M):
    n,m = len(M),len(M[0])
    dict = {}
    for i in range(n):
        for j in range(m):
            M_top,M_bot = blocks(M,i,j)
            X,Y = find_num_paths(M_top),find_num_paths(M_bot)
            dict[(i,j)] = X*Y
    L = [[dict[(i,j)] for j in range(m)] for i in range(n)]
    return L[0][0],L

def blocks(M,k,l):
    n,m = len(M),len(M[0])
    assert k<n and l<m
    M_top = [[M[i][j] for i in range(k+1)] for j in range(l+1)]
    M_bot = [[M[i][j] for i in range(k,n)] for j in range(l,m)]
    return [M_top,M_bot]

def find_num_paths(M):
    dict = {(1, 1): 1}
    X = find_num_mem(M, dict)
    return X

def find_num_mem(M,dict):
    n, m = len(M), len(M[0])
    if M[n-1][m-1] != 0:
        return 0
    elif (n,m) in dict:
        return dict[(n,m)]
    elif n == 1 and m > 1:
        new_M = [M[0][:m-1]]
        X = find_num_mem(new_M,dict)
        dict[(n,m-1)] = X
        return X
    elif m == 1 and n>1:
        new_M = M[:n-1]
        X = find_num_mem(new_M, dict)
        dict[(n-1,m)] = X
        return X
    new_M1 = M[:n-1]
    new_M2 = [M[i][:m-1] for i in range(n)]
    X,Y = find_num_mem(new_M1, dict),find_num_mem(new_M2, dict)
    dict[(n-1,m)],dict[(n,m-1)] = X,Y
    return X+Y
My code is based on the idea that the number of paths that go through [i,j] in the original matrix is equal to the product of the number of paths from [0,0] to [i,j] and the number of paths from [i,j] to [n-1,n-1]. Another idea is that the number of paths from [0,0] to [i,j] is the sum of the number of paths from [0,0] to [i-1,j] and from [0,0] to [i,j-1]. Hence I decided to use a dictionary whose keys are matrices of the form [[M[i][j] for j in range(k)] for i in range(l)] or [[M[i][j] for j in range(k+1,n)] for i in range(l+1,n)] for some 0<=k,l<=n-1, where M is the original matrix, and whose values are the number of paths from the top of the matrix to the bottom. After analyzing the complexity of my code I arrived at the conclusion that it is O(n^6).
Now, my instructor says this code is exponential (for find_zero_paths); however, I disagree.
The size of the recursion tree (for find_num_paths) is bounded by the number of submatrices of the form above, which is O(n^2). Also, each time we add a new matrix to the dictionary we do it in polynomial time (only slicing lists), so the total complexity is polynomial (poly * poly = poly). Also, the function 'blocks' runs in polynomial time, and hence find_zero_paths runs in polynomial time (2 lists of polynomial size times a function which runs in polynomial time), so all in all the code runs in polynomial time.
My question: Is the code polynomial and my O(n^6) bound is wrong or is it exponential and I am missing something?
Unfortunately, your instructor is right.
There is a lot to unpack here:
Before we start, a quick note: please don't use dict as a variable name. It hurts ^^. dict is the name of Python's built-in dictionary constructor, and it is bad practice to shadow it with your own variable.
First, your approach of counting M_top * M_bottom is good if you only want to compute one cell of the matrix. In the way you go about it, you are unnecessarily computing some blocks over and over again; that is why I wondered about the recursion requirement. I would use dynamic programming for this one: once from the start to the end, once from the end to the start, then compute the products and be done with it. No need for O(n^6) of separate computations. Since you have to use recursion, I would recommend caching the partial results and reusing them wherever possible.
Second, the root of the issue and the cause of your invisible-ish exponent. It is hidden in the find_num_mem function. Say you compute the last element in the matrix - the result[N][N] field and let us consider the simplest case, where the matrix is full of zeroes so every possible path exists.
In the first step, your recursion creates branches [N][N-1] and [N-1][N].
In the second step, [N-1][N-1], [N][N-2], [N-2][N], and [N-1][N-1] again (the same subproblem reached along two different branches).
In the third step, you once again create two branches from every previous step - a beautiful example of an exponential explosion.
Now how to go about it: You will quickly notice that some of the branches are being duplicated over and over. Cache the results.
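For reference, here is a sketch of that two-pass suggestion (my own code, not the poster's): one dynamic-programming pass counting paths from the start, one counting paths to the end, and the per-cell answer is their product, all in O(n*m).

def count_paths_through(M):
    n, m = len(M), len(M[0])
    from_start = [[0] * m for _ in range(n)]
    from_end = [[0] * m for _ in range(n)]
    # paths from [0][0] to [i][j], walking only right/down through zeros
    for i in range(n):
        for j in range(m):
            if M[i][j] == 0:
                if i == 0 and j == 0:
                    from_start[i][j] = 1
                else:
                    from_start[i][j] = (from_start[i-1][j] if i > 0 else 0) + \
                                       (from_start[i][j-1] if j > 0 else 0)
    # paths from [i][j] to [n-1][m-1]
    for i in reversed(range(n)):
        for j in reversed(range(m)):
            if M[i][j] == 0:
                if i == n - 1 and j == m - 1:
                    from_end[i][j] = 1
                else:
                    from_end[i][j] = (from_end[i+1][j] if i < n - 1 else 0) + \
                                     (from_end[i][j+1] if j < m - 1 else 0)
    # paths through [i][j] = paths to it * paths from it
    return [[from_start[i][j] * from_end[i][j] for j in range(m)] for i in range(n)]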

Efficient Particle-Pair Interactions Calculation

I have an N-body simulation that generates a list of particle positions, for multiple timesteps in the simulation. For a given frame, I want to generate a list of the pairs of particles' indices (i, j) such that dist(p[i], p[j]) < masking_radius. Essentially I'm creating a list of "interaction" pairs, where the pairs are within a certain distance of each other. My current implementation looks something like this:
interaction_pairs = []
# going through each unique pair (order doesn't matter)
for i in range(num_particles):
    for j in range(i + 1, num_particles):
        if dist(p[i], p[j]) < masking_radius:
            interaction_pairs.append((i,j))
Because of the large number of particles, this process takes a long time (>1 hr per test), and it is severely limiting to what I need to do with the data. I was wondering if there was any more efficient way to structure the data such that calculating these pairs would be more efficient instead of comparing every possible combination of particles. I was looking into KDTrees, but I couldn't figure out a way to utilize them to compute this more efficiently. Any help is appreciated, thank you!
Since you are using python, sklearn has multiple implementations for nearest neighbours finding:
http://scikit-learn.org/stable/modules/neighbors.html
Both KDTree and BallTree are provided.
As for the KDTree, the main point is to push all the particles you have into the tree, and then for each particle run the query "give me all particles within range X". A KDTree usually does this faster than a brute-force search.
You can read more for example here: https://www.cs.cmu.edu/~ckingsf/bioinfo-lectures/kdtrees.pdf
If you are using 2D or 3D space, another option is to just cut the space into a grid (with cell size equal to the masking radius) and assign each particle to a grid cell. Then you can find candidates for interaction just by checking neighbouring cells (you still have to do a distance check, but for far fewer particle pairs).
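As a concrete sketch (my own example; it uses scipy's KD-tree rather than sklearn's, since scipy's query_pairs returns exactly the pair list you want):

import numpy as np
from scipy.spatial import cKDTree

p = np.random.uniform(-10, 10, size=(1000, 3))   # stand-in for your particle positions
masking_radius = 4.0

tree = cKDTree(p)
# set of (i, j) pairs with i < j and distance at most masking_radius
interaction_pairs = tree.query_pairs(r=masking_radius)
print(len(interaction_pairs))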
Here's a fairly simple technique using plain Python that can reduce the number of comparisons required.
We first sort the points along either the X, Y, or Z axis (selected by axis in the code below). Let's say we choose the X axis. Then we loop over point pairs like your code does, but when we find a pair whose distance is greater than the masking_radius we test whether the difference in their X coordinates is also greater than the masking_radius. If it is, then we can bail out of the inner j loop because all points with a greater j have a greater X coordinate.
My dist2 function calculates the squared distance. This is faster than calculating the actual distance because computing the square root is relatively slow.
I've also included code that behaves similarly to your code, i.e., it tests every pair of points, for speed comparison purposes; it also serves to check that the fast code is correct. ;)
from random import seed, uniform
from operator import itemgetter

seed(42)

# Make some fake data
def make_point(hi=10.0):
    return [uniform(-hi, hi) for _ in range(3)]

psize = 1000
points = [make_point() for _ in range(psize)]
masking_radius = 4.0
masking_radius2 = masking_radius ** 2

def dist2(p, q):
    return (p[0] - q[0])**2 + (p[1] - q[1])**2 + (p[2] - q[2])**2

pair_count = 0
test_count = 0
do_fast = 1

if do_fast:
    # Sort the points on one axis
    axis = 0
    points.sort(key=itemgetter(axis))
    # Fast
    for i, p in enumerate(points):
        left, right = i - 1, i + 1
        for j in range(i + 1, psize):
            test_count += 1
            q = points[j]
            if dist2(p, q) < masking_radius2:
                #interaction_pairs.append((i, j))
                pair_count += 1
            elif q[axis] - p[axis] >= masking_radius:
                break
        if i % 100 == 0:
            print('\r {:3} '.format(i), flush=True, end='')
    total_pairs = psize * (psize - 1) // 2
    print('\r {} / {} tests'.format(test_count, total_pairs))
else:
    # Slow
    for i, p in enumerate(points):
        for j in range(i+1, psize):
            q = points[j]
            if dist2(p, q) < masking_radius2:
                #interaction_pairs.append((i, j))
                pair_count += 1
        if i % 100 == 0:
            print('\r {:3} '.format(i), flush=True, end='')

print('\n', pair_count, 'pairs')
output with do_fast = 1
181937 / 499500 tests
13295 pairs
output with do_fast = 0
13295 pairs
Of course, if most of the point pairs are within masking_radius of each other, there won't be much benefit in using this technique. And sorting the points adds a little bit of time, but Python's TimSort is rather efficient, especially if the data is already partially sorted, so if the masking_radius is sufficiently small you should see a noticeable improvement in the speed.

Append keeps on appending the same item, does not append the right ones, Python

This is what I have imported:
import random
import matplotlib.pyplot as plt
from math import log, e, ceil, floor
import numpy as np
from numpy import arange,array
import pdb
from random import randint
Here I define the function matrix(p,m)
def matrix(p,m):  # A matrix with zeros everywhere, except in every entry in the middle of the row
    v = [0]*m
    v[(m+1)/2 - 1] = 1
    vv = array([v,]*p)
    return vv
ct = np.zeros(5) # Here, I choose 5 cause I wanted to work with an example, but should be p in general
Here I define MHops which basically takes the dimensions of the matrix, the matrix and the vector ct and gives me a new matrix mm and a new vector ct
def MHops(p,m,mm,ct):
    k = 0
    while k < p :  # This 'spans' the rows
        i = 0
        while i < m :  # This 'spans' the columns
            if mm[k][i] == 0 :
                i+=1
            else:
                R = random.random()
                t = -log(1-R,e)  # Calculate time of the hopping
                ct[k] = ct[k] + t
                r = random.random()
                if 0 <= r < 0.5 :  # particle hops right
                    if 0 <= i < m-1:
                        mm[k][i] = 0
                        mm[k][i+1] = 1
                        break
                    else:
                        break  # Because it is at the boundary
                else:  # particle hops left
                    if 0 < i <= m-1:
                        mm[k][i] = 0
                        mm[k][i-1] = 1
                        break
                    else:  # Because it is at the boundary
                        break
                break
        k+=1
    return (mm,ct)  # Gives me the new matrix showing the new position of the particles and a new vector of times, showing the times taken by each particle to hop
Now what I want to do is iterate this process, but I want to be able to keep every step in a list. In short, what I am doing is:
1. creating a matrix representing a lattice, where 0 means there is no particle in that slot and 1 means there is a particle there.
2. create a function MHops which simulate a random walk of one step and gives me the new matrix and a vector ct which shows the times at which the particles move.
Now I want to have a vector or an array containing 2*n objects, i.e. the matrix mm and the vector ct for each of n iterations. I want them in an array, a list, or something like that, because I need to use them later on.
Here starts my problem:
I create an empty list, I use append to append items at every iteration of the while loop. However the result that I get is a list d with n equal objects coming from the last iteration!
Hence my function for the iteration is the following:
def rep_MHops(n,p,m,mm,ct):
    mat = mm
    cct = ct
    d = []
    i = 0
    while i < n :
        y = MHops(p,m,mat,cct)  # Calculate the hop, so y is a tuple y = (mm,ct)
        mat = y[0]  # I reset mat and cct so that for the next iteration, I go further
        cct = y[1]
        d.append(mat)
        d.append(cct)
        i+=1
    return d

z = rep_MHops(3,5,5,matrix(5,5),ct)  # If you check this, it doesn't work
print z
However it doesn't work, and I don't understand why. What I am doing is using MHops, then setting the new matrix and the new vector to those in the output of MHops, and doing this again. However, if you run this code, you will see that y works, i.e. the vector of times increases and the lattice matrix changes; but when I append them to d, d ends up being a list of n equal objects, all equal to the last iteration.
What is my mistake?
Furthermore if you have any coding advice for this code, they would be more than welcome, I am not sure this is an efficient way.
Just to let you understand better, I would like to use the final vector d in another function where first of all I pick a random time T, then I would basically check every odd entry (every ct) and hence check every entry of every ct and see if these numbers are less than or equal to T. If this happens, then the movement of the particle happened, otherwise it didn't.
From this I will then try to visualize the result with matplotlib, with a histogram or something similar.
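Something like this is what I have in mind (rough sketch only, with an arbitrary choice for T, assuming d alternates matrices and ct vectors as built above):

T = random.uniform(0, 10)                                  # arbitrary range, just for illustration
ct_snapshots = d[1::2]                                     # every odd entry of d is a ct vector
moved_by_T = [np.asarray(c) <= T for c in ct_snapshots]    # which particles had hopped by time T
plt.hist(np.concatenate([np.asarray(c) for c in ct_snapshots]))   # e.g. a histogram of all hop times
plt.show()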
Is there anyone who knows how to run this kind of simulation in matlab? Do you think it would be easier?
You're passing and storing by references not copies, so on the next iteration of your loop MHops alters your previously stored version in d. Use import copy; d.append(copy.deepcopy(mat)) to instead store a copy which won't be altered later.
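Applied to your rep_MHops, that looks roughly like this (a sketch only, copying both the matrix and the time vector, since both are mutated in place):

import copy

def rep_MHops(n, p, m, mm, ct):
    mat = mm
    cct = ct
    d = []
    for i in range(n):
        mat, cct = MHops(p, m, mat, cct)
        d.append(copy.deepcopy(mat))   # snapshot that later hops cannot modify
        d.append(copy.deepcopy(cct))
    return d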
Why?
Python is passing the list by reference, and every loop you're storing a reference to the same matrix object in d.
I had a look through python docs, and the only mention I can find is
"how do i write a function with output parameters (call by reference)".
Here's a simpler example of your code:
def rep_MHops(mat_init):
    mat = mat_init
    d = []
    for i in range(5):
        mat = MHops(mat)
        d.append(mat)
    return d

def MHops(mat):
    mat[0] += 1
    return mat

mat_init = [10]
z = rep_MHops(mat_init)
print(z)
When run gives:
[[15], [15], [15], [15], [15]]
Python passes references to objects; when the object is mutable (such as a list), modifications made inside the function are visible through every reference to it. An integer is immutable, so it cannot be modified in place. Here's a slightly modified version of the above example which operates on a single integer:
def rep_MHops_simple(mat_init):
    mat = mat_init
    d = []
    for i in range(5):
        mat = MHops_simple(mat)
        d.append(mat)
    return d

def MHops_simple(mat):
    mat += 1
    return mat

z = rep_MHops_simple(mat_init=10)
print(z)
When run gives:
[11, 12, 13, 14, 15]
which is the behaviour you were expecting.
This SO answer How do I pass a variable by reference? explains it very well.

Why computing Preferential Attachment is costly?

I have an undirected graph with 1034 vertices and 53498 edges. I'm computing the preferential attachment index for the vertices. The preferential attachment similarity between two vertices is defined as the product of the degree of the first vertex and the degree of the second vertex. I noticed that my computation is very slow: it took 2.7 minutes for the mentioned graph. I'm not sure if it's my algorithm that is slow or something else is wrong. I would be very thankful if someone could have a look at my code.
Edit: I just realized that S is a 1034-by-1034 matrix. Looking at the nested for-loops, it seems that this is an O(n^2) algorithm! I guess that is why it's slow. Don't you agree?
def pa(graph):
    """
    Calculates Preferential Attachment index.
    Returns S the similarity matrix.
    """
    A = gts.adjacency(graph)
    S = np.zeros(A.shape)
    for i in xrange(S.shape[0]):
        for j in xrange(S.shape[0]):
            i_degree = graph.vertex(i).out_degree()
            j_degree = graph.vertex(j).out_degree()
            factor = i_degree * j_degree
            S[i,j] = factor
    return S
With all I know about it, these are the speedups I can suggest:
Zeroth speedup: i_degree does not depend on j, so move it up one level.
def pa(graph):
    A = gts.adjacency(graph)
    S = np.zeros(A.shape)
    for i in xrange(S.shape[0]):
        i_degree = graph.vertex(i).out_degree()  # easy to see that this can be put here instead, since it does not depend on j
        for j in xrange(S.shape[0]):
            j_degree = graph.vertex(j).out_degree()
            factor = i_degree * j_degree
            S[i,j] = factor
    return S
First speedup: call out_degree() only N times, instead of 2N^2 times.
def pa2(graph):
    A = gts.adjacency(graph)
    i_degree = numpy.zeros(A.shape[0])
    for i in xrange(A.shape[0]):
        i_degree[i] = graph.vertex(i).out_degree()
    S = np.zeros(A.shape)
    for i in xrange(S.shape[0]):
        for j in xrange(S.shape[0]):
            S[i,j] = i_degree[i]*i_degree[j]
    return S
Second speedup: numpy instead of python for-loop
def pa3(graph):
    A = gts.adjacency(graph)
    i_degree = numpy.zeros(A.shape[0])
    for i in xrange(A.shape[0]):
        i_degree[i] = graph.vertex(i).out_degree()
    S = i_degree[:,None]*i_degree[None,:]
    return S
This exploits the symmetry of your problem.
Note: The [None,:] does the same as using [numpy.newaxis,:]. If you wanted to keep your code, you could also use an @memoize decorator on that out_degree() method, but it is better to use that only on things that are recursive, and this is not one of those cases.
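For what it's worth, the same broadcasting trick can also be written with np.outer, which some find easier to read (my own variant of pa3, not part of the original answer; it assumes gts.adjacency and graph as above):

def pa4(graph):
    A = gts.adjacency(graph)
    degrees = numpy.array([graph.vertex(v).out_degree() for v in xrange(A.shape[0])])
    return numpy.outer(degrees, degrees)   # S[i, j] = degree(i) * degree(j)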
