I am running some simulations that take a numpy array as input, continue for several iterations until some condition is met (stochastically), and then repeat using the same starting array. Each complete simulation starts with the same array, and the number of steps needed to complete a simulation isn't known beforehand (but it's fine to put a limit on it, maxt).
I would like to save the array (X) after each step of each simulation, preferably in a large multi-dimensional array. Below I'm saving each simulation's output in a list, and saving the array with copy.copy. I appreciate there are other methods I could use (tuples, for instance), so is there a more efficient method for doing this in Python?
Note: I appreciate this is a trivial example and that one could vectorize the code below. In the actual application being used, however, I have to use a loop as the stochasticity is introduced in a more complicated manner.
import numpy as np, copy
N = 10
Xstart = np.zeros(N)
Xstart[0] = 1
num_sims = 3
prob = 0.2
maxt = 20
analysis_result = []
for i in range(num_sims):
    print("-------- Starting new simulation --------")
    t = 0
    X = copy.copy(Xstart)
    # Create a new array to store results, save the starting array
    sim_result = np.zeros((maxt, N))
    sim_result[t,:] = X
    while( (np.count_nonzero(X) < N) & (t < maxt) ):
        print(X)
        # Increment elements of the array stochastically
        X[(np.random.rand(N) < prob)] += 1
        # Save the array for time t
        sim_result[t,:] = copy.copy(X)
        t += 1
    print(X)
    analysis_result.append(sim_result[:t,:])
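For reference, the kind of "large multi-dimensional array" I have in mind would be something like the sketch below, building on the variables defined above (full_result and steps_taken are just placeholder names of mine):

import numpy as np

# Sketch: preallocate one (num_sims, maxt, N) array and track how many steps
# each simulation actually used. Assignment into a numpy array copies values,
# so no explicit copy.copy is needed here.
full_result = np.zeros((num_sims, maxt, N))
steps_taken = np.zeros(num_sims, dtype=int)

for i in range(num_sims):
    t = 0
    X = Xstart.copy()
    full_result[i, t, :] = X
    while (np.count_nonzero(X) < N) and (t < maxt - 1):
        X[np.random.rand(N) < prob] += 1
        t += 1
        full_result[i, t, :] = X
    steps_taken[i] = t + 1   # rows beyond this index stay zero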
I'm making a script that does some mathematical morphology on images (mainly GIS rasters). So far I've implemented erosion and dilation, with opening/closing with reconstruction still on the TODO list, but that's not the subject here.
My implementation is very simple, with nested loops. I tried it on a 10900x10900 raster and it took an absurdly long time to finish, obviously.
Before I continue with other operations, I'd like to know if there's a faster way to do this.
My implementation:
def erode(image, S):
    (m, n) = image.shape
    buffer = np.full((m, n), 0).astype(np.float64)
    for i in range(S, m - S):
        for j in range(S, n - S):
            buffer[i, j] = np.min(image[i - S: i + S + 1, j - S: j + S + 1]) #dilation is just np.max()
    return buffer
I've heard about vectorization but I'm not sure I understand it well. Any advice or pointers are appreciated. Also, I am aware that OpenCV has these morphological operations, but I want to implement my own to learn about them.
The question here is: do you want a more efficient implementation (because you want to learn about numpy), or do you want a more efficient algorithm?
I think there are two obvious things that could be improved in your approach. One is that you want to avoid looping at the Python level, because that is slow. The other is that you are taking the minimum of heavily overlapping windows, and you can make that much more efficient by reusing the work you put into finding the previous window's minimum.
I will illustrate that with 1d implementations of erosion.
Baseline for comparison
Here is basically your implementation, just a 1d version:
def erode(image, S):
    n = image.shape[0]
    buffer = np.full(n, 0).astype(np.float64)
    for i in range(S, n - S):
        buffer[i] = np.min(image[i - S: i + S + 1]) #dilation is just np.max()
    return buffer
You can make this faster using stride_tricks/sliding_window_view. I.e. by avoiding the loops and doing that at the numpy level.
Faster Implementation
np.lib.stride_tricks.sliding_window_view(arr,2*S+1).min(1)
Notice that it's not doing quite the same thing, since it only produces values once there are 2S+1 values to take the minimum of, so the zero-padded borders of the baseline are missing. For this illustration I will ignore that difference.
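If you want the output to line up with the baseline (zeros at the borders), one way might be to pad the result back to the original length. This is a sketch of mine, not part of the original answer:

import numpy as np

def erode_swv(image, S):
    # 1d sliding-window erosion, padded with zeros at the borders so the
    # output has the same shape as the baseline implementation above.
    mins = np.lib.stride_tricks.sliding_window_view(image, 2*S + 1).min(1)
    out = np.zeros(image.shape[0], dtype=np.float64)
    out[S:image.shape[0] - S] = mins
    return out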
Faster Algorithm
A completely different approach would be to not calculate the min from scratch for every window, but to keep the window's values ordered and only add one value and remove one value when moving to the next window one step to the right.
Here is a rough implementation of that:
from sortedcontainers import SortedDict   # provides the sorted mapping used below

def smart_erode(arr, m):
    n = arr.shape[0]
    sd = SortedDict()
    # build the multiset of values in the first window
    for new in arr[:m]:
        if new in sd:
            sd[new] += 1
        else:
            sd[new] = 1
    # slide the window one step at a time
    for to_remove, new in zip(arr[:-m+1], arr[m:]):
        yield sd.keys()[0]          # smallest key = min of the current window
        if new in sd:
            sd[new] += 1
        else:
            sd[new] = 1
        if sd[to_remove] > 1:
            sd[to_remove] -= 1
        else:
            sd.pop(to_remove)
    yield sd.keys()[0]
Notice that an ordered set wouldn't work, and an ordered list would need a way to remove just one element with a specific value, since your array can contain repeated values. I am therefore using a SortedDict to store the number of occurrences of each value currently in the window.
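As a usage sketch (my addition): since smart_erode is a generator, you can collect its output into an array, for example with np.fromiter:

import numpy as np

arr = np.random.randint(0, 100, 50)
S = 3

# One value per full window, so the result has len(arr) - (2*S+1) + 1 entries;
# the zero-padded borders of the baseline are again not reproduced here.
eroded = np.fromiter(smart_erode(arr, 2*S + 1), dtype=arr.dtype)
print(eroded.shape)   # (44,)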
A Rough Benchmark
I want to illustrate how the 3 implementations compare for different window sizes. So I am testing them with an array of 10^5 random integers for different window sizes ranging from 10^3 to 10^4.
# (run in IPython/Jupyter: %timeit is an IPython magic; pandas is imported as pd)
arr = np.random.randint(0,10**5,10**5)

sliding_window_times = []
op_times = []
better_alg_times = []

for m in np.linspace(0,10**4,11)[1:].astype('int'):
    x = %timeit -o -n 1 -r 1 np.lib.stride_tricks.sliding_window_view(arr,2*m+1).min(1)
    sliding_window_times.append(x.best)
    x = %timeit -o -n 1 -r 1 erode(arr,m)
    op_times.append(x.best)
    x = %timeit -o -n 1 -r 1 tuple(smart_erode(arr,2*m+1))
    better_alg_times.append(x.best)
    print("")

pd.DataFrame({'Baseline Comparison': op_times,
              'Faster Implementation': sliding_window_times,
              'Faster Algorithm': better_alg_times,
              },
             index=np.linspace(0,10**4,11)[1:].astype('int')
             ).plot.bar()
Notice that for very small window sizes the raw speed of the numpy implementation wins out, but very quickly the work we save by not calculating the min from scratch becomes more important.
This is what I have imported:
import random
import matplotlib.pyplot as plt
from math import log, e, ceil, floor
import numpy as np
from numpy import arange,array
import pdb
from random import randint
Here I define the function matrix(p,m)
def matrix(p,m): # A matrix with zeros everywhere, except in every entry in the middle of the row
    v = [0]*m
    v[(m+1)/2 - 1] = 1
    vv = array([v,]*p)
    return vv
ct = np.zeros(5) # Here, I choose 5 cause I wanted to work with an example, but should be p in general
Here I define MHops which basically takes the dimensions of the matrix, the matrix and the vector ct and gives me a new matrix mm and a new vector ct
def MHops(p,m,mm,ct):
    k = 0
    while k < p : # This 'spans' the rows
        i = 0
        while i < m : # This 'spans' the columns
            if mm[k][i] == 0 :
                i+=1
            else:
                R = random.random()
                t = -log(1-R,e) # Calculate time of the hopping
                ct[k] = ct[k] + t
                r = random.random()
                if 0 <= r < 0.5 : # particle hops right
                    if 0 <= i < m-1:
                        mm[k][i] = 0
                        mm[k][i+1] = 1
                        break
                    else:
                        break # Because it is at the boundary
                else: # particle hops left
                    if 0 < i <= m-1:
                        mm[k][i] = 0
                        mm[k][i-1] = 1
                        break
                    else: # Because it is at the boundary
                        break
                break
        k+=1
    return (mm,ct) # Gives me the new matrix showing the new position of the particles and a new vector of times, showing the times taken by each particle to hop
Now what I want to do is iterate this process, and I want to be able to store every step in a list. In short what I am doing is:
1. creating a matrix representing a lattice, where 0 means there is no particle in that slot and 1 means there is a particle there.
2. creating a function MHops which simulates a random walk of one step and gives me the new matrix and a vector ct which shows the times at which the particles move.
Now I want to have a vector or an array with 2*n objects, i.e. the matrix mm and the vector ct for each of n iterations. I want them in an array, a list or something like this because I need to use them later on.
Here starts my problem:
I create an empty list, I use append to append items at every iteration of the while loop. However the result that I get is a list d with n equal objects coming from the last iteration!
Hence my function for the iteration is the following:
def rep_MHops(n,p,m,mm,ct):
    mat = mm
    cct = ct
    d = []
    i = 0
    while i < n :
        y = MHops(p,m,mat,cct) # Calculate the hop, so y is a tuple y = (mm,ct)
        mat = y[0]             # I reset mat and cct so that for the next iteration, I go further
        cct = y[1]
        d.append(mat)
        d.append(cct)
        i+=1
    return d

z = rep_MHops(3,5,5,matrix(5,5),ct) # If you check this, it doesn't work
print z
However it doesn't work, and I don't understand why. What I am doing is using MHops, then setting the new matrix and the new vector to those in the output of MHops, and doing this again. However if you run this code, you will see that y works, i.e. the vector of times increases and the lattice matrix changes, but when I append this to d, d is basically a list of n equal objects, where each object is the result of the last iteration.
What is my mistake?
Furthermore if you have any coding advice for this code, they would be more than welcome, I am not sure this is an efficient way.
Just to let you understand better: I would like to use the final vector d in another function where, first of all, I pick a random time T, then I check every odd entry (every ct), and hence every entry of every ct, and see if these numbers are less than or equal to T. If this happens, then the movement of the particle happened; otherwise it didn't.
From this I will then try to visualize the result with matplotlib, using a histogram or something similar.
Does anyone know how to run this kind of simulation in MATLAB? Do you think it would be easier?
You're passing and storing references, not copies, so on the next iteration of your loop MHops alters the version you previously stored in d. Use import copy and d.append(copy.deepcopy(mat)) to instead store a copy which won't be altered later.
Why?
Python is passing the list by reference, so every loop you're storing a reference to the same matrix object in d, and MHops keeps mutating that one object in place.
I had a look through python docs, and the only mention I can find is
"how do i write a function with output parameters (call by reference)".
Here's a simpler example of your code:
def rep_MHops(mat_init):
    mat = mat_init
    d = []
    for i in range(5):
        mat = MHops(mat)
        d.append(mat)
    return d

def MHops(mat):
    mat[0] += 1
    return mat

mat_init = [10]
z = rep_MHops(mat_init)
print(z)
When run gives:
[[15], [15], [15], [15], [15]]
Strictly speaking, Python passes every argument the same way (as a reference to an object), but only mutable objects such as lists can be modified in place. An integer isn't mutable, so mat += 1 creates and returns a new object instead of changing the one you stored. Here's a slightly modified version of the above example which operates on a single integer:
def rep_MHops_simple(mat_init):
    mat = mat_init
    d = []
    for i in range(5):
        mat = MHops_simple(mat)
        d.append(mat)
    return d

def MHops_simple(mat):
    mat += 1
    return mat

z = rep_MHops_simple(mat_init=10)
print(z)
When run gives:
[11, 12, 13, 14, 15]
which is the behaviour you were expecting.
This SO answer How do I pass a variable by reference? explains it very well.
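Applied to the rep_MHops from the question, the fix might look like this (a sketch; it assumes MHops itself is unchanged):

import copy

def rep_MHops(n, p, m, mm, ct):
    mat = mm
    cct = ct
    d = []
    for i in range(n):
        mat, cct = MHops(p, m, mat, cct)
        d.append(copy.deepcopy(mat))   # store snapshots, not references
        d.append(copy.deepcopy(cct))
    return d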
I am writing a program to perform numerical calculations with a Hessian matrix. The Hessian matrix is 500 x 500 and I need to populate it hundreds of times over. I am populating it with two for loops each time. My problem is that this is prohibitively slow. Here is my code:
#create these outside function
hess = np.empty([500,500])
b = np.empty([500])

def hess_h(x):
    #create these first so they aren't calculated every iteration
    for k in range(500):
        b[k] = (1-np.dot(a[k],x))**2
    for i in range(500):
        for j in range(500):
            if i == j:
                #these are values along diagonal
                hess[i,j] = float(2*(1-x[i])**2 + 4*x[i]**2)/(1-x[i]**2)**2 \
                            - float(a[i,j]*sum(a[i]))/b[i]
            #the matrix is symmetric so only calculate upper triangle
            elif j > i :
                hess[i,j] = -float(a[i,j]*sum(a[i]))/b[i]
            elif i > j:
                hess[i,j] = hess[j,i]
    return hess
I calculate that hess_h(np.zeros(500)) takes 10.2289998531 sec to run. That is too long and I need to figure out another way.
Look for patterns in your calculation, in particular things that you can calculate over the whole range of i and j.
I see for example a diagonal where i==j
hess[i,j] = float(2*(1-x[i])**2 + 4*x[i]**2)/(1-x[i]**2)**2 \
- float(a[i,j]*sum(a[i]))/b[i]
Can you change that to a one time expression, something like:
(2*(1-x)**2 + 4*x**2)/(1-x**2)**2 - np.diagonal(a)*a.sum(axis=1)/b
The other pieces work with the upper and lower triangular elements. There are functions like np.triu and np.triu_indices that select the upper triangle or give you its indices.
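For example, np.triu_indices lets you address the whole upper triangle at once. A small self-contained sketch (off here is just a stand-in array for whatever per-element expression you compute, not something from the question):

import numpy as np

n = 500
off = np.random.rand(n, n)       # stand-in for the off-diagonal expression
hess = np.zeros((n, n))

iu = np.triu_indices(n, k=1)     # indices of the strict upper triangle
hess[iu] = off[iu]               # fill the upper triangle in one step
hess.T[iu] = off[iu]             # hess.T is a view, so this mirrors the values into the lower triangle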
I'm trying to give you tools and thought processes for solving this with a few numpy vectorized operations, instead of iterating over all elements of i and j.
Looks like
-a[i,j]*sum(a[i])/b[i]
is used for every element. I assume a is a (500,500) array. Can you use
-a*a.sum(axis=?)/b
b can be 'vectorized'
b[k] = (1-np.dot(a[k],x))**2
with something like:
(1 - np.dot(a, x))**2
or
(1 - np.einsum('kj,j->k',a,x))**2
test the details on a smaller a.
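Putting those hints together, a fully vectorized hess_h could look roughly like this. This is an untested sketch of mine, assuming a is the (500, 500) array from the question and x has length 500:

import numpy as np

def hess_h_vec(x, a):
    n = a.shape[0]
    b = (1 - a @ x)**2                                  # vectorized b[k]
    off = -(a * a.sum(axis=1)[:, None]) / b[:, None]    # -a[i,j]*sum(a[i])/b[i] for all i, j
    hess = np.triu(off, k=1)                            # upper triangle (j > i)
    hess = hess + hess.T                                # mirror into the lower triangle
    # diagonal: original formula plus the -a[i,i]*sum(a[i])/b[i] term
    hess[np.arange(n), np.arange(n)] = (2*(1 - x)**2 + 4*x**2) / (1 - x**2)**2 + np.diag(off)
    return hess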
I'm a beginner. I recently saw the Mandelbrot set, which is fantastic, so I decided to draw this set with Python.
But there is a problem: I get a MemoryError when I run this code.
The statement num_set = gen_num_set(10000) will produce a large list of about 20000*20000*4 = 1,600,000,000 elements. When I use 1000 instead of 10000, the code runs successfully.
My computer has 4GB of memory and the operating system is Windows 7 32-bit. I want to know if this problem is a limit of my computer or whether there is some way to optimize my code.
Thanks.
#!/usr/bin/env python3.4
import matplotlib.pyplot as plt
import numpy as np
import random,time
from multiprocessing import *
def first_quadrant(n):
    start_point = 1 / n
    n = 2*n
    return gen_complex_num(start_point,n,1)

def second_quadrant(n):
    start_point = 1 / n
    n = 2*n
    return gen_complex_num(start_point,n,2)

def third_quadrant(n):
    start_point = 1 / n
    n = 2*n
    return gen_complex_num(start_point,n,3)

def four_quadrant(n):
    start_point = 1 / n
    n = 2*n
    return gen_complex_num(start_point,n,4)

def gen_complex_num(start_point,n,quadrant):
    complex_num = []
    if quadrant == 1:
        for i in range(n):
            real = i*start_point
            for j in range(n):
                imag = j*start_point
                complex_num.append(complex(real,imag))
        return complex_num
    elif quadrant == 2:
        for i in range(n):
            real = i*start_point*(-1)
            for j in range(n):
                imag = j*start_point
                complex_num.append(complex(real,imag))
        return complex_num
    elif quadrant == 3:
        for i in range(n):
            real = i*start_point*(-1)
            for j in range(n):
                imag = j*start_point*(-1)
                complex_num.append(complex(real,imag))
        return complex_num
    elif quadrant == 4:
        for i in range(n):
            real = i*start_point
            for j in range(n):
                imag = j*start_point*(-1)
                complex_num.append(complex(real,imag))
        return complex_num

def gen_num_set(n):
    return [first_quadrant(n), second_quadrant(n), third_quadrant(n), four_quadrant(n)]

def if_man_set(num_set):
    iteration_n = 10000
    man_set = []
    z = complex(0,0)
    for c in num_set:
        if_man = 1
        for i in range(iteration_n):
            if abs(z) > 2:
                if_man = 0
                z = complex(0,0)
                break
            z = z*z + c
        if if_man:
            man_set.append(c)
    return man_set

def plot_scatter(x,y):
    #plt.plot(x,y)
    color = ran_color()
    plt.scatter(x,y,c=color)
    plt.show()

def ran_num():
    return random.random()

def ran_color():
    return [ran_num() for i in range(3)]

def plot_man_set(man_set):
    z_real = []
    z_imag = []
    for z in man_set:
        z_real.append(z.real)
        z_imag.append(z.imag)
    plot_scatter(z_real,z_imag)

if __name__ == "__main__":
    start_time = time.time()
    num_set = gen_num_set(10000)
    with Pool(processes=4) as pool:
        #use multiprocess
        set_part = pool.map(if_man_set, num_set)
    man_set = []
    for i in set_part:
        man_set += i
    plot_man_set(man_set)
    end_time = time.time()
    use_time = end_time - start_time
    print(use_time)
You say you are creating a list with 1.6 billion elements. Each of those is a complex number which contains 2 floats. A Python complex number takes 24 bytes (at least on my system: sys.getsizeof(complex(1.0,1.0)) gives 24), so you'll need over 38GB just to store the values, and that's before you even start looking at the list itself.
Your list with 1.6 billion elements won't fit at all on a 32-bit system (6.4GB with 4-byte pointers), so you need to go to a 64-bit system with 8-byte pointers, and that will need 12.8GB just for the pointers.
So, no way you're going to do that unless you upgrade to a 64-bit OS with maybe 64GB RAM (though it might need more).
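The arithmetic behind those figures, spelled out (rough numbers; exact object sizes vary by platform):

n = 4 * (2 * 10000)**2        # four quadrants, each a 20000 x 20000 grid
print(n)                      # 1,600,000,000 elements
print(n * 24 / 1e9)           # ~38.4 GB for the complex objects themselves
print(n * 4 / 1e9)            # ~6.4 GB for 32-bit list pointers
print(n * 8 / 1e9)            # ~12.8 GB for 64-bit list pointers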
When handling large data like this you should prefer using numpy arrays instead of python lists. There is a nice post explaining why (What are the advantages of NumPy over regular Python lists?), but I will try to sum it up.
In Python, each complex number in your list is an object (with methods and attributes) and takes up some overhead space for that. That is why they take up 24 bytes (as Duncan pointed out) instead of the 2 * 32bit for two floats per complex number.
Numpy arrays build on C-style arrays (basically all values written next to each other in memory as raw numbers, not objects). They don't provide some of the nice functionality of Python lists (like appending) and are restricted to a single data type. They save a lot of space though, as you do not need to store the per-object overhead. This reduces the space needed for each complex number from 24 bytes to 8 bytes (two floats, 32 bits each).
While Duncan is right and the big instance you tried will not run even with numpy, it might help you to process bigger instances.
As you have already imported numpy, you could change your code to use numpy arrays instead. Please mind that I am not too proficient with numpy and there most certainly is a better way to do this, but this is an example with only small changes to your original code:
def gen_complex_num_np(start_point, n, quadrant):
    # create a nxn array of complex numbers
    complex_num = np.ndarray(shape=(n,n), dtype=np.complex64)
    if quadrant == 1:
        for i in range(n):
            real = i*start_point
            for j in range(n):
                imag = j*start_point
                # fill one entry in the array
                complex_num[i,j] = complex(real,imag)
        # concatenate the array rows to
        # get a list-like return value again
        return complex_num.flatten()
...
Here your Python list is replaced with a 2d-numpy array with the data type complex. After the array has been filled it is flattened (all row vectors are concatenated) to mimic your return format.
Note that you would have to change the man_set lists in all other parts of your program accordingly.
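For what it's worth, the same grid could also be built without Python loops at all. This is my own sketch, not part of the answer above; it keeps the quadrant sign conventions of the original gen_complex_num:

import numpy as np

def gen_complex_num_np_vec(start_point, n, quadrant):
    # Build the n x n grid of complex numbers with broadcasting instead of loops.
    sign_re = 1.0 if quadrant in (1, 4) else -1.0
    sign_im = 1.0 if quadrant in (1, 2) else -1.0
    re = sign_re * start_point * np.arange(n, dtype=np.float32)
    im = sign_im * start_point * np.arange(n, dtype=np.float32)
    grid = re[:, None] + 1j * im[None, :]        # broadcasting: rows vary in the real part
    return grid.astype(np.complex64).flatten()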
I hope this helps.
I am attempting to build a simple genetic algorithm that will optimize to an input string, but am having trouble building the [individual x genome] matrix (row n is individual n's genome.) I want to be able to change the population size, mutation rate, and other parameters to study how that affects convergence rate and program efficiency.
This is what I have so far:
import random
import itertools
import numpy as np

def evolve():
    goal = 'Hello, World!' #string to optimize towards
    ideal = list(goal)

    #converting the string into a list of integers
    for i in range (0,len(ideal)):
        ideal[i] = ord(ideal[i])
    print(ideal)

    popSize = 10          #population size
    genome = len(ideal)   #determining the length of the genome to be the length of the target string
    mut = 0.03            #mutation rate
    S = 4                 #tournament size
    best = float("inf")   #initial best is very large
    maxVal = max(ideal)
    minVal = min(ideal)
    print (maxVal)

    i = 0 #counting variables assigned to solve UnboundLocalError
    j = 0
    print(maxVal, minVal)

    #constructing initial population array (individual x genome)
    pop = np.empty([popSize, len(ideal)])
    for i, j in itertools.product(range(i), range(j)):
        pop[i, j] = [i, random.randint(minVal,maxVal)]
    print(pop)
This produces a matrix of the population size with the correct genome length, but the genomes are something like:
[ 6.91364167e-310 6.91364167e-310 1.80613009e-316 1.80613009e-316
5.07224590e-317 0.00000000e+000 6.04100487e+151 3.13149876e-120
1.11787892e+253 1.47872844e-028 7.34486815e+223 1.26594941e-118
7.63858409e+228]
I need them to be random integers corresponding to random ASCII characters.
What am I doing wrong with this method?
Is there a way to make this faster?
I found my current method here:
building an nxn matrix in python numpy, for any n
I found another method that I do not understand, but which seems faster and simpler; if I can use it here, I would like to.
Initialise numpy array of unknown length
Thank you for any assistance you can provide.
Your loop isn't executing because i and j are both 0, so range(i) and range(j) are empty. Also, you can't assign a list such as [i, random.randint(minVal, maxVal)] to a single array element (np.empty defaults to dtype np.float64). I've simply changed it to store only the random number, but if you really want to store a list, you can change the creation of pop to pop = np.empty([popSize, len(ideal)], dtype=list)
Otherwise use this for the last lines:
for i, j in itertools.product(range(popSize), range(len(ideal))):
    pop[i, j] = random.randint(minVal,maxVal)
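As an aside (my own addition, not part of the answer above), numpy can also draw the whole population in one call, which avoids the Python-level loop entirely:

# Draw the whole popSize x genome-length matrix of random ASCII codes at once.
# np.random.randint's upper bound is exclusive, hence maxVal + 1.
pop = np.random.randint(minVal, maxVal + 1, size=(popSize, len(ideal)))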