I've been working for a while now on a project that requires calculations on some very large datasets, and I have very quickly moved beyond anything my meager Excel knowledge could handle. In the last few days I've started learning Python, which has helped with handling the size of data I'm dealing with, but the estimated processing time for these datasets is looking to be incredibly long (possibly a couple hundred years on my laptop).
The bottleneck here is an equation that could produce trillions or quadrillions of results, since it calculates every combination of 6 different lists and runs each one through an equation that you'll see in the code. The code works just fine as is, but it isn't feasible for datasets larger than the example I included. A real dataset would be more like Set1S, 2S, and 3S being 50 items each, and Set12A, etc. being about 2500 items each (50x50 in this case; these sets always have a length equal to the square of the first 3 lists, but I'm keeping things short and simple here).
I'm well aware that the number of results is absolutely huge, but I want to start with as large a dataset as I can, so I can see how much I can reduce the input sizes without greatly impacting the results when I plot a cumulative% histogram.
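For scale, a quick count of the combinations for those real-world sizes (three lists of 50 items and three lists of 2500 items) confirms the order of magnitude:

# Back-of-the-envelope count for the real-world input sizes mentioned above.
n_combinations = 50**3 * 2500**3
print(f"{n_combinations:.2e}")  # about 1.95e+15, i.e. roughly two quadrillion results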
# Calculator
import numpy as np
Set1S = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])
Set2S = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])
Set3S = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])
Set12A = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])
Set23A = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])
Set13A = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])
# Define an empty list to collect results
BlockVol = []
from itertools import product

# itertools.product iterates through all combinations of the lists
for i, j, k, a, b, c in product(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A):
    # This is the bottleneck equation, with large input datasets
    BlockVol.append(abs(i * j * k * np.sin(a) * np.sin(b) * np.sin(c)))
arr = np.array(BlockVol)
# Manipulate the result list a couple of ways
BlockVol = np.cbrt(BlockVol)
BlockVol = BlockVol*12
# Quick check of the size of the results list
len(BlockVol)
This took me about 3 minutes or so for 11.3M results, just from eyeballing the clock.
I've learned about @njit and prange in the last day or so, but am a bit stuck trying to translate my work into this format. I do have a desktop PC with a pretty good GPU, so I think I could speed things up a lot. I'm well aware that the code below is a big garbage fire that doesn't do anything, but I'm hoping it at least gets across what I'm trying to do.
It seems that the way to go is to define a function that takes my 6 input lists, but I'm just not sure how to fuse itertools.product and njit together.
import numpy as np
from itertools import product
from numba import njit, prange
@njit(parallel = True)
def BlockVolCalc(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A):
    numRows = len(Set12A)
    BlockVol = np.zeros(numRows)
    for i, j, k, a, b, c in product(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A):
        BlockVol.append(abs(i * j * k * np.sin(a) * np.sin(b) * np.sin(c)))
    arr = np.array(BlockVol)
    BlockVol = np.cbrt(BlockVol)
    BlockVol = BlockVol * 12
    len(BlockVol)
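From the examples I've found, I think the function should be shaped roughly like the sketch below, with nested loops in place of itertools.product and a preallocated output array, but I haven't been able to verify this (treat it as an untested sketch; only the outer loop is parallelized):

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def BlockVolCalcSketch(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A):
    # Nested loops replace itertools.product; results go into a preallocated
    # array instead of being appended to a list.
    n1, n2, n3 = len(Set1S), len(Set2S), len(Set3S)
    m1, m2, m3 = len(Set12A), len(Set23A), len(Set13A)
    inner = n2 * n3 * m1 * m2 * m3
    out = np.empty(n1 * inner)
    for i in prange(n1):  # only this outer loop runs in parallel
        pos = i * inner
        for j in range(n2):
            for k in range(n3):
                for a in range(m1):
                    for b in range(m2):
                        for c in range(m3):
                            out[pos] = abs(Set1S[i] * Set2S[j] * Set3S[k] *
                                           np.sin(Set12A[a]) * np.sin(Set23A[b]) * np.sin(Set13A[c]))
                            pos += 1
    return out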
Any help is much appreciated, as this is all very new and overwhelming.
Thank you!
I solved your task with NumPy-only code; it is always nicer to use plain NumPy instead of heavy Numba when possible, and the NumPy-only code below should be about as fast as the same solution written with Numba.
My code is about 2800 times faster than your reference code; the time is measured at the end of the code.
In the code below, the BlockValCalcRef(...) function is just your reference code organized as a function, and BlockVolCalc(...) is my NumPy-based function that should give a lot of speedup. At the end I assert np.allclose(...) to check that both solutions give the same results.
I also simplified the set creation a bit, using a single N parameter to generate the sets; in your real-world code you would just provide the necessary sets.
In order to solve the task I did several things:
Instead of computing np.sin(...) many times for the same values, I precompute it just once for Set12A, Set23A, and Set13A. I also precompute np.abs(...) just once for all sets.
To compute the cross product I use a special way of indexing NumPy arrays, like [None, None, :, None, None, None]; this lets us use NumPy's well-known broadcasting (see the small broadcasting sketch just before the main code below).
I also have an idea for improving the code further, making it around 6 times faster still, although even at the current speed you'll fill your machine's whole RAM in a matter of seconds. The idea is this: currently the cross product computes a product of 6 numbers at every step; instead, one can compute the product of the first K-1 sets and then multiply that array by the K-th set to get the product over K sets. This gives roughly a 6x extra speedup (because there are 6 sets), since each element then needs just one multiplication instead of 6.
Update: I've implemented a second, improved version of the function, BlockVolCalc2(...), following the paragraph above. It has a 2800x speedup; for larger N it will probably be even faster.
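As a small illustration of the broadcasting trick mentioned above, here is a minimal two-array sketch (the names a and b are only for this example):

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0])
# Inserting length-1 axes with None makes NumPy broadcast the two arrays
# against each other, producing every pairwise product in one operation.
pairwise = a[:, None] * b[None, :]  # shape (3, 2): pairwise[i, j] = a[i] * b[j]
print(pairwise.ravel())             # [10. 20. 20. 40. 30. 60.]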
import numpy as np, time
N = 7
Set1S = np.arange(1, N + 1)
Set2S = np.arange(1, N + 1)
Set3S = np.arange(1, N + 1)
Set12A = np.arange(1, N + 1)
Set23A = np.arange(1, N + 1)
Set13A = np.arange(1, N + 1)
def BlockValCalcRef(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A):
    BlockVol = []
    from itertools import product
    for i, j, k, a, b, c in product(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A):
        BlockVol.append(abs(i * j * k * np.sin(a) * np.sin(b) * np.sin(c)))
    return np.array(BlockVol)

def BlockVolCalc(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A):
    Set1S, Set2S, Set3S = np.abs(Set1S), np.abs(Set2S), np.abs(Set3S)
    Set12A, Set23A, Set13A = np.abs(np.sin(Set12A)), np.abs(np.sin(Set23A)), np.abs(np.sin(Set13A))
    return (
        Set1S[:, None, None, None, None, None] *
        Set2S[None, :, None, None, None, None] *
        Set3S[None, None, :, None, None, None] *
        Set12A[None, None, None, :, None, None] *
        Set23A[None, None, None, None, :, None] *
        Set13A[None, None, None, None, None, :]
    ).ravel()

def BlockVolCalc2(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A):
    Set1S, Set2S, Set3S = np.abs(Set1S), np.abs(Set2S), np.abs(Set3S)
    Set12A, Set23A, Set13A = np.abs(np.sin(Set12A)), np.abs(np.sin(Set23A)), np.abs(np.sin(Set13A))
    prod = np.ones((1,), dtype = np.float32)
    for s in reversed([Set1S, Set2S, Set3S, Set12A, Set23A, Set13A]):
        prod = (s[:, None] * prod[None, :]).ravel()
    return prod
# -------- Testing Correctness and Time Measuring --------
tb = time.time()
a0 = BlockValCalcRef(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A)
t0 = time.time() - tb
print(f'base time {round(t0, 4)} sec')
tb = time.time()
a1 = BlockVolCalc(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A)
t1 = time.time() - tb
print(f'improved time {round(t1, 4)} sec, speedup {round(t0 / t1, 2)}x')
tb = time.time()
a2 = BlockVolCalc2(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A)
t2 = time.time() - tb
print(f'improved2 time {round(t2, 4)} sec, speedup {round(t0 / t2, 2)}x')
assert np.allclose(a0, a1)
assert np.allclose(a0, a2)
Output:
base time 2.7569 sec
improved time 0.0015 sec, speedup 1834.83x
improved2 time 0.001 sec, speedup 2755.09x
My function embedded into your initial code will look like the linked code here.
I also created a TensorFlow-based variant of the code, which will use all of your CPU cores and your GPU. It needs TensorFlow to be installed once with python -m pip install --upgrade numpy tensorflow:
import numpy as np
N = 18
Set1S, Set2S, Set3S, Set12A, Set23A, Set13A = [np.arange(1 + i, N + 1 + i) for i in range(6)]
dtype = np.float32
def Prepare(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A):
    import numpy as np
    Set12A, Set23A, Set13A = np.sin(Set12A), np.sin(Set23A), np.sin(Set13A)
    return [np.abs(s).astype(dtype) for s in [Set1S, Set2S, Set3S, Set12A, Set23A, Set13A]]

sets = Prepare(Set1S, Set2S, Set3S, Set12A, Set23A, Set13A)

def ProcessNP(sets):
    import numpy as np
    res = np.ones((1,), dtype = dtype)
    for s in reversed(sets):
        res = (s[:, None] * res[None, :]).ravel()
    res = np.cbrt(res) * 12
    return res

def ProcessTF(sets, *, state = {}):
    if 'graph' not in state:
        import os
        os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
        import numpy as np, tensorflow as tf
        tf.compat.v1.disable_eager_execution()
        cpus = tf.config.list_logical_devices('CPU')
        #print(f"CPUs: {[e.name for e in cpus]}")
        gpus = tf.config.list_logical_devices('GPU')
        #print(f"GPUs: {[e.name for e in gpus]}")
        print(f"GPU: {len(gpus) > 0}")
        state['graph'] = tf.Graph()
        state['sess'] = tf.compat.v1.Session(graph = state['graph'])
        #tf.device(cpus[0].name if len(gpus) == 0 else gpus[0].name)
        with state['sess'].as_default(), state['graph'].as_default():
            res = tf.ones((1,), dtype = dtype)
            state['inp'] = []
            for s in reversed(sets):
                sph = tf.compat.v1.placeholder(dtype, s.shape)
                state['inp'].insert(0, sph)
                res = sph[:, None] * res[None, :]
                res = tf.reshape(res, (tf.size(res),))
            res = tf.math.pow(res, 1 / 3) * 12
            state['out'] = res
        def Run(sets):
            with state['sess'].as_default(), state['graph'].as_default():
                return tf.compat.v1.get_default_session().run(
                    state['out'], {ph: s for ph, s in zip(state['inp'], sets)}
                )
        state['run'] = Run
    return state['run'](sets)
# ------------ Testing ------------
npa, tfa = ProcessNP(sets), ProcessTF(sets)
assert np.allclose(npa, tfa)
from timeit import timeit
print('Nums:', round(npa.size / 10 ** 6, 3), 'M')
timeit_num = 2
print('NP:', round(timeit(lambda: ProcessNP(sets), number = timeit_num) / timeit_num, 3), 'sec')
print('TF:', round(timeit(lambda: ProcessTF(sets), number = timeit_num) / timeit_num, 3), 'sec')
On my 2-core CPU it prints:
GPU: False
Nums: 34.012 M
NP: 3.487 sec
TF: 1.185 sec
Related
I'm implementing a negative sampling algorithm in JAX. The idea is to sample negatives from a range, excluding a number of non-acceptable outputs from that range. My current solution is close to the following:
import jax.numpy as jnp
import jax
max_range = 5
n_samples = 2
true_cases = jnp.array(
    [
        [1, 2],
        [1, 4],
        [0, 5]
    ]
)

# I combine the true cases in a dictionary of the following form:
non_acceptable_as_negatives = {
    0: jnp.array([5]),
    1: jnp.array([2, 4]),
    2: jnp.array([]),
    3: jnp.array([]),
    4: jnp.array([]),
    5: jnp.array([])
}

negatives = []
key = jax.random.PRNGKey(42)
for i in true_cases[:, 0]:
    key, use_key = jax.random.split(key, 2)
    p = jnp.ones((max_range + 1,))
    p = p.at[non_acceptable_as_negatives[int(i)]].set(0)
    p = p / p.sum()
    negatives.append(
        jax.random.choice(use_key,
                          jnp.arange(max_range + 1),
                          (1, n_samples),
                          replace=False,
                          p=p,
                          )
    )
However, this seems
a) rather complicated and
b) not very performant, as the true cases in the original contain ~200_000 entries and the max range is ~50_000.
How can I improve this solution? And is there a more JAX-like way to store the arrays of varying size that I currently keep in the non_acceptable_as_negatives dict?
Thanks in advance
You'll generally achieve better performance in JAX (as in NumPy) if you can avoid loops and use vectorized operations instead. If I'm understanding your function correctly, I think the following does roughly the same thing, but using vmap.
Since JAX does not support dictionary lookups based on traced values, I replaced your dict with a padded array:
import jax.numpy as jnp
import jax
max_range = 5
n_samples = 2
fill_value = max_range + 1
true_cases = jnp.array([
    [1, 2],
    [1, 4],
    [0, 5]
])
non_acceptable_as_negatives = jnp.array([
    [5, fill_value],
    [2, 4],
])

@jax.vmap
def func(key, true_case):
    p = jnp.ones(max_range + 1)
    idx = true_case[0]
    replace = non_acceptable_as_negatives.at[idx].get(fill_value=fill_value)
    p = p.at[replace].set(0, mode='drop')
    return jax.random.choice(key, max_range + 1, (n_samples,), replace=False, p=p)
key = jax.random.PRNGKey(42)
keys = jax.random.split(key, len(true_cases))
result = func(keys, true_cases)
print(result)
[[3 1]
[5 1]
[1 5]]
JAX arrays are immutable, which means you can't edit one without copying the entire array. Here the main problem is that you rebuild the vector p at each iteration. I advise you to compute the probabilities only once, via NumPy:
import numpy as np
non_acceptable_as_negatives = {
    0: np.array([5]),
    1: np.array([2, 4]),
    2: np.array([]),
    3: np.array([]),
    4: np.array([]),
    5: np.array([])
}

probas = np.ones((max_range + 1, max_range + 1))
for k, idx in non_acceptable_as_negatives.items():
    for i in idx:
        probas[k, i] = 0
probas = probas / probas.sum(axis=1, keepdims=True)
probas = jnp.array(probas)
Then, to further speed up the algorithm, you can compile the choice function. You can try:
from functools import partial
@partial(jax.jit, static_argnums=1)
def sample(key, max_range, probas_row):
    # probas_row is the precomputed probability row for the current index.
    key, use_key = jax.random.split(key, 2)
    return jax.random.choice(use_key,
                             jnp.arange(max_range + 1),
                             (1, n_samples),
                             replace=False,
                             p=probas_row,
                             ), key
And finally:
for i in true_cases[:, 0]:
    neg, key = sample(key, max_range, probas[int(i)])
    negatives.append(neg)
I wrote this function to perform a rolling sum on numpy arrays, inspired by this post
import numpy as np

def np_rolling_sum(arr, n, axis=0):
    out = np.cumsum(arr, axis=axis)
    slc1 = [slice(None)] * len(arr.shape)
    slc2 = [slice(None)] * len(arr.shape)
    slc1[axis] = slice(n, None)
    slc2[axis] = slice(None, -n)
    out = out[tuple(slc1)] - out[tuple(slc2)]
    shape = list(out.shape)
    shape[axis] = arr.shape[axis] - out.shape[axis]
    out = np.concatenate((np.full(shape, 0), out), axis=axis)
    return out
It works fine, except when I need to use it on large arrays (size is around 1bn). In that case, I get a SIGKILL on this line:
out = out[tuple(slc1)] - out[tuple(slc2)]
I already tried deleting arr after the cumsum, since I no longer need it (except for its shape, which I can store before the deletion), but it didn't help.
My next guess would be to implement some kind of batch management for the operation causing the memory issue; a rough 1-D sketch of what I mean is included below. Is there a better way to write this function so that it can deal with larger arrays?
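For concreteness, here is a rough 1-D sketch of the kind of batching I have in mind (chunk_size is an arbitrary placeholder, and the general-axis handling of my function is left out):

import numpy as np

def rolling_sum_1d_chunked(arr, n, chunk_size=10_000_000):
    # Same result as np_rolling_sum on a 1-D array, but the difference is
    # written back into the cumsum buffer in descending chunks, so the peak
    # extra memory is roughly one chunk instead of a second full-size array.
    out = np.cumsum(arr)
    stop = out.size
    while stop > n:
        start = max(n, stop - chunk_size)
        # Positions start-n:stop-n still hold original cumsum values because
        # the sweep goes from the end of the array toward the front.
        out[start:stop] -= out[start - n:stop - n]
        stop = start
    out[:n] = 0
    return out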
Thanks for your help!
For people who might be interested, I finally added a decorator that checks whether NumPy arguments are larger than a given size. If so, it turns them into Dask arrays.
To keep the main function as close to the original as possible, I also added an argument that indicates which library should be used: numpy or dask.array.
Here is the final result:
import numpy as np
import dask.array as da
threshold = 50_000_000
def large_file_handler(func):
    def wrapper(*args, **kwargs):
        pos = list(args)
        for i in range(len(pos)):
            if type(pos[i]) == np.ndarray and pos[i].size > threshold:
                pos[i] = da.from_array(pos[i])
                kwargs['func_lib'] = da
        for k in list(kwargs):
            if type(kwargs[k]) == np.ndarray and kwargs[k].size > threshold:
                kwargs[k] = da.from_array(kwargs[k])
                kwargs['func_lib'] = da
        return func(*pos, **kwargs)
    return wrapper

@large_file_handler
def np_rolling_sum(arr, n, axis=0, func_lib=np):
    out = func_lib.cumsum(arr, axis=axis)
    slc1 = [slice(None)] * len(arr.shape)
    slc2 = [slice(None)] * len(arr.shape)
    slc1[axis] = slice(n, None)
    slc2[axis] = slice(None, -n)
    out = out[tuple(slc1)] - out[tuple(slc2)]
    shape = list(out.shape)
    shape[axis] = arr.shape[axis] - out.shape[axis]
    out = func_lib.concatenate((np.full(shape, 0), out), axis=axis)
    return np.array(out)
Please feel free to tell me if this could be improved.
The question is simple: here is my current algorithm. It is terribly slow because of the loops over the arrays. Is there a way to change it to avoid the loops and take advantage of NumPy arrays?
import numpy as np
def loopingFunction(listOfVector1, listOfVector2):
    resultArray = []
    for vector1 in listOfVector1:
        result = 0
        for vector2 in listOfVector2:
            result += np.dot(vector1, vector2) * vector2[2]
        resultArray.append(result)
    return np.array(resultArray)
listOfVector1x = np.linspace(0,0.33,1000)
listOfVector1y = np.linspace(0.33,0.66,1000)
listOfVector1z = np.linspace(0.66,1,1000)
listOfVector1 = np.column_stack((listOfVector1x, listOfVector1y, listOfVector1z))
listOfVector2x = np.linspace(0.33,0.66,1000)
listOfVector2y = np.linspace(0.66,1,1000)
listOfVector2z = np.linspace(0, 0.33, 1000)
listOfVector2 = np.column_stack((listOfVector2x, listOfVector2y, listOfVector2z))
result = loopingFunction(listOfVector1, listOfVector2)
I am supposed to deal with really big arrays that have way more than 1000 vectors each, so if you have any advice, I'll take it.
The obligatory np.einsum benchmark
r2 = np.einsum('ij, kj, k->i', listOfVector1, listOfVector2, listOfVector2[:,2], optimize=['einsum_path', (1, 2), (0, 1)])
#%timeit result: 10000 loops, best of 5: 116 µs per loop
np.testing.assert_allclose(result, r2)
Just for fun, I wrote an optimized Numba implementation that outperforms all the others. It is based on the einsum optimization in @MichaelSzczesny's answer.
import numpy as np
import numba as nb
# This decorator asks Numba to eagerly compile the code using
# the provided signature string (containing the parameter types).
@nb.njit('(float64[:,::1], float64[:,::1])')
def loopingFunction_numba(listOfVector1, listOfVector2):
    n, m = listOfVector1.shape
    assert m == 3

    result = np.empty(n)
    s1 = s2 = s3 = 0.0

    for i in range(n):
        factor = listOfVector2[i, 2]
        s1 += listOfVector2[i, 0] * factor
        s2 += listOfVector2[i, 1] * factor
        s3 += listOfVector2[i, 2] * factor

    for i in range(n):
        result[i] = listOfVector1[i, 0] * s1 + listOfVector1[i, 1] * s2 + listOfVector1[i, 2] * s3

    return result

result = loopingFunction_numba(listOfVector1, listOfVector2)
Here are timings on my i5-9600KF processor:
Initial: 1052.0 ms
ymmx: 5.121 ms
MichaelSzczesny: 75.40 us
MechanicPig: 3.36 us
Numba: 2.74 us
Optimal lower bound: 0.66 us
This solution is ~384_000 times faster than the original one. Note that it does not even use the SIMD instructions of the processor, which would give a further ~4x speedup on my machine. That is only possible with transposed inputs, which are much more SIMD-friendly than the current layout; a rough sketch of such a variant follows. Transposition may also speed up other answers, like MechanicPig's, since BLAS can often benefit from it. The resulting code would reach the symbolic 1_000_000x speedup factor!
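For illustration, here is roughly what such a transposed (structure-of-arrays) variant could look like, reusing the separate component arrays defined in the question; this is a sketch of the layout idea rather than something benchmarked above:

import numpy as np
import numba as nb

# Each list is passed as three contiguous 1-D component arrays, the
# SIMD-friendly "transposed" layout discussed above.
@nb.njit('(float64[::1], float64[::1], float64[::1], float64[::1], float64[::1], float64[::1])')
def loopingFunction_numba_soa(v1x, v1y, v1z, v2x, v2y, v2z):
    n = v1x.size
    result = np.empty(n)
    s1 = s2 = s3 = 0.0
    for i in range(n):
        factor = v2z[i]
        s1 += v2x[i] * factor
        s2 += v2y[i] * factor
        s3 += v2z[i] * factor
    for i in range(n):
        result[i] = v1x[i] * s1 + v1y[i] * s2 + v1z[i] * s3
    return result

# Reuses the component arrays (listOfVector1x, ..., listOfVector2z) from the question.
result_soa = loopingFunction_numba_soa(listOfVector1x, listOfVector1y, listOfVector1z,
                                       listOfVector2x, listOfVector2y, listOfVector2z)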
You can at least remove the two for loops to save a lot of time; use matrix computation directly:
import time
import numpy as np
def loopingFunction(listOfVector1, listOfVector2):
    resultArray = []
    for vector1 in listOfVector1:
        result = 0
        for vector2 in listOfVector2:
            result += np.dot(vector1, vector2) * vector2[2]
        resultArray.append(result)
    return np.array(resultArray)

def loopingFunction2(listOfVector1, listOfVector2):
    resultArray = np.sum(np.dot(listOfVector1, listOfVector2.T) * listOfVector2[:, 2], axis=1)
    return resultArray
listOfVector1x = np.linspace(0,0.33,1000)
listOfVector1y = np.linspace(0.33,0.66,1000)
listOfVector1z = np.linspace(0.66,1,1000)
listOfVector1 = np.column_stack((listOfVector1x, listOfVector1y, listOfVector1z))
listOfVector2x = np.linspace(0.33,0.66,1000)
listOfVector2y = np.linspace(0.66,1,1000)
listOfVector2z = np.linspace(0, 0.33, 1000)
listOfVector2 = np.column_stack((listOfVector2x, listOfVector2y, listOfVector2z))
import time
t0 = time.time()
result = loopingFunction(listOfVector1, listOfVector2)
print('time old version',time.time() - t0)
t0 = time.time()
result2 = loopingFunction2(listOfVector1, listOfVector2)
print('time matrix computation version',time.time() - t0)
print('Are results are the same',np.allclose(result,result2))
Which gives
time old version 1.174513578414917
time matrix computation version 0.011968612670898438
Are results are the same True
Basically, the fewer loops, the better.
Avoid nested loops and adjust the calculation order, which is 20 times faster than the optimized np.einsum and nearly 400_000 times faster than the original program:
>>> out = listOfVector1.dot(listOfVector2[:, 2].dot(listOfVector2))
>>> np.allclose(out, loopingFunction(listOfVector1, listOfVector2))
True
Test:
>>> timeit(lambda: loopingFunction(listOfVector1, listOfVector2), number=1)
1.4389081999834161
>>> timeit(lambda: listOfVector1.dot(listOfVector2[:, 2].dot(listOfVector2)), number=400_000)
1.3162514999858104
>>> timeit(lambda: np.einsum('ij, kj, k->i', listOfVector1, listOfVector2, listOfVector2[:, 2], optimize=['einsum_path', (1, 2), (0, 1)]), number=18_000)
1.3501517999975476
I'd like to sample n random numbers from a linspace without replacement and do so in batches. Thus, each sample in the batch should not have repeated numbers, but numbers may repeat across the batch.
The following code shows how I do it by calling Generator.choice repeatedly.
import numpy as np
low, high = 0, 10
sample_shape = (3,)
n = 5
rng = np.random.default_rng() # or previously instantiated RNG
space = np.linspace(start=low, stop=high, num=1000)
samples = np.stack(
    [
        rng.choice(space, size=n, replace=False)
        for _ in range(np.prod(sample_shape, dtype=int))
    ]
)
samples = samples.reshape(sample_shape + (n,))
print(f"samples.shape: {samples.shape}")
print(samples)
Current output:
samples.shape: (3, 5)
[[4.15415415 5.56556557 1.38138138 7.78778779 7.03703704]
[1.48148148 6.996997 0.91091091 3.28328328 2.93293293]
[7.82782783 9.65965966 9.94994995 5.84584585 5.26526527]]
However, this procedure turns out to be a big bottleneck in my code. Is there a more efficient way of performing this?
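For reference, one common vectorized alternative is to draw independent uniform keys for every element of the space and keep the indices of the n smallest keys per row; each row is then a uniform sample without replacement. A sketch reusing the names from the snippet above (not benchmarked here):

import numpy as np

# Independent uniform keys per row; the n smallest keys per row pick a
# uniform sample without replacement from `space`.
batch = int(np.prod(sample_shape))
keys = rng.random((batch, space.size))
idx = np.argpartition(keys, n, axis=1)[:, :n]
samples_alt = space[idx].reshape(sample_shape + (n,))
print(samples_alt.shape)  # (3, 5)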
It is a school task (parallel normalization of each column of a matrix) and, besides other problems you may see, I found it particularly difficult to find something as easy as the list = [] that you can list.append() entire lists to in a loop, without predefining dimensions.
Here is what I have so far with the line in question at the end. Thank you in advance for any help!
from multiprocessing import Pool
import numpy as np
def fct_norm(col):
    mn = col.min()
    mx = col.max()
    col_norm = np.zeros((6, 1))
    for i in range(6):
        col_norm[i, 0] = (col[i] - mn) / (mx - mn)
    return col_norm

if __name__ == "__main__":
    pool = Pool()
    arr = np.random.uniform(0, 100, size=(6, 3))
    # maybe predefine arr_norm here?
    for i in range(2):
        print("i = ", i)
        col = arr[:, i]
        result = pool.map(fct_norm, [col])
        norm_arr = HOW_TO_ADD_EACH_RESULT_COLUMN_TO_A_NEW_ARRAY?
The function you need for concatenating a number of columns is np.hstack. However, a big problem is that pool.map is not used in the correct way in the original code.
As written, there is no parallel execution of the columns, since each call to pool.map gets only a single column. The idea is to pass an iterable with several values at once, in this case multiple columns, to pool.map.
Since iterating over a NumPy array yields rows rather than columns, the matrix must be transposed (using the (...).T operator). Also, after the pool is finished, it is good practice to close it. One way to handle this automatically is to use a context manager (i.e., the with Pool() as pool: construct), which then closes the pool for you.
This all taken together gives the following solution:
from multiprocessing import Pool
import numpy as np
def fct_norm(col):
    mn = col.min()
    mx = col.max()
    col_norm = np.zeros((6, 1))
    for i in range(6):
        col_norm[i, 0] = (col[i] - mn) / (mx - mn)
    return col_norm

if __name__ == "__main__":
    arr = np.random.uniform(0, 100, size=(6, 3))
    with Pool() as pool:
        norm_arr = np.hstack(pool.map(fct_norm, arr.T))
    # Here norm_arr is available for further operations.
Thus, the whole operation can be performed in two lines.