I am trying to simulate a 1D random walk by graphing a Poisson process with a mean rate of 1 s^(-1), where each event steps in a randomly chosen direction (left or right). I want to plot the trajectories (x vs. t) over a total duration of t = 400 s; however, I can't seem to track x against t (time) properly. How can I do this? Here is what I have so far:
import numpy as np
import matplotlib.pyplot as plt

x = [0]
t = [0]
mean_rate = 1.0
d = 1.0
for j in range(100):
    step_x = np.random.poisson(mean_rate)   # number of events in this interval
    bern_tri = np.random.binomial(1, 0.5)   # fair coin: 1 = step right, 0 = step left
    if bern_tri == 1:
        x.append(x[j] + d)
        t.append(t[j])                      # note: time never advances here
    else:
        x.append(x[j] - d)
        t.append(t[j])
plt.plot(x, t)
plt.show()
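For reference, below is a minimal sketch of one way to track t properly (just one possible approach, not the only one): for a Poisson process with rate 1 s^(-1), the waiting times between events are exponentially distributed with mean 1 s, so each event can advance t by a random exponential waiting time and x by one step in a random direction.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng()
rate = 1.0       # mean event rate, in s^-1
T_total = 400.0  # total duration, in s
d = 1.0          # step size

x, t = [0.0], [0.0]
while t[-1] < T_total:
    wait = rng.exponential(1.0 / rate)       # time until the next event
    step = d if rng.random() < 0.5 else -d   # fair coin for the direction
    t.append(t[-1] + wait)
    x.append(x[-1] + step)

plt.step(t, x, where='post')  # x vs. t: time on the horizontal axis
plt.xlabel('t (s)')
plt.ylabel('x')
plt.show()

Note that the original plt.plot(x, t) puts time on the vertical axis; for an x-vs-t trajectory, time belongs on the horizontal axis.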
I am simulating 1D heat conduction using Brownian motion in Python. The question here is how to track whether a particle passes the interface of the left or right cell. I have to count this somehow; could you please propose a solution, or should I rethink the model?
Short description: The medium consists of cells, and each cell holds its own quantity of particles. The particles move from one cell to another. The first and last cells hold a constant number of particles (in this case 500 and 0). The result gives the temperature profile along x. If we know the number of particles that pass each cell interface (from the left or right), we can find the heat flux.
Edit: I've added counting of particles passing through an interface (from the right or left). But in theory the value of the heat flux should be constant, so I guess there is a problem with my code (see the marked excerpt). Could you please review it? Am I counting correctly?
Here is the edited code:
import numpy as np
import matplotlib.pyplot as plt
def Cell_dist(a, dx):
    # sort the particle positions in a into the N cells of width dx
    res = [[] for i in range(N)]
    for i in range(N):
        for value in a:
            if dx*i < value < dx*i + dx:
                res[i].append(value)
    return res
L = 0.2 # length of the medium
N = 10 # number of cells
dx = L/N # cell dimension
dt = 1 # time step
dur = 60 # duration
M = 20 # number of particles
ro = 8930 # density
k = 391 # thermal conductivity
C_p = 380 # specific heat capacity
T_0 = 100 # maintained temperature
T_r = 0 # reference temperature
DT = T_0 - T_r # characteristic temperature
a = k / ro / C_p # thermal diffusivity of the copper
dh_r = ro * C_p * dx * DT / M # reference elementary enthalpy
c = (2*a*dt)**(1/2) # diffusion length
pos = [[] for i in range(N)] # creating cells
for time_step in range(dur):
M_plus = [[] for i in range(N-1)]
M_minus = [[] for i in range(N-1)]
flux = [0] * (N-1)
unirnd = np.random.uniform(0,dx, M).tolist() # uniform random distribution
pos[0] = unirnd # 1st cell BC
pos[-1] = [] # last cell BC at T_r = 0
curr_pos = sorted([x for sublist in pos for x in sublist]) # flatten list of particles (concatenation)
M_t = len(curr_pos) # number of particles at instant time step
pos_tmp = []
# Move each particle
for i in range(M_t):
normal_distr = np.random.default_rng().normal(0,1)
displacement = c*normal_distr
final_pos = curr_pos[i] + displacement
pos_tmp.append(final_pos)
#HERE is the question________________________
if normal_distr > 0:
for i in range(1,len(M_plus)-1):
if i*dx < final_pos < (i+1)*dx:
M_plus[i-1].append(1)
else:
for i in range(len(M_minus)):
if i*dx < final_pos < (i+1)*dx:
M_minus[i].append(1)
#END of the question________________________
for i in range(N-1):
flux[i] = (len(M_plus[i])-len(M_minus[i]))*dh_r/dt/1000
pos_new = Cell_dist(pos_tmp,dx)
pos_new[0] = unirnd
pos = pos_new
walker_number_in_cell = []
for i in range(N):
walker_number_in_cell.append(len(pos[i]))
T_n = []
for num in walker_number_in_cell:
T_n.append(T_r + num*dh_r/ro/C_p/dx)
# __________Plotting FLUX profile__________
x_a = [0]*(N-1)
for i in range(0, N-1):
x_a[i] = "{}".format(i+1)+"_{}".format(i+2)
plt.plot(x_a,flux[0:N-1],'-')
plt.xlabel('Interface')
plt.ylabel('Flux')
plt.show()
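For what it's worth, here is a minimal sketch of an alternative way to count crossings (my own suggestion, with hypothetical names, not the code above): compare each particle's cell index before and after the move. The sign of the random displacement alone does not tell you whether an interface was actually crossed, since a particle can move right yet stay in its cell.

import numpy as np

def count_crossings(old_pos, new_pos, dx, N):
    """Count crossings of each of the N-1 interior interfaces."""
    M_plus = [0] * (N - 1)   # left-to-right crossings per interface
    M_minus = [0] * (N - 1)  # right-to-left crossings per interface
    for x0, x1 in zip(old_pos, new_pos):
        c0, c1 = int(x0 // dx), int(x1 // dx)
        if c1 > c0:                      # moved right: crossed interfaces c0..c1-1
            for k in range(c0, c1):
                if 0 <= k < N - 1:
                    M_plus[k] += 1
        elif c1 < c0:                    # moved left: crossed interfaces c1..c0-1
            for k in range(c1, c0):
                if 0 <= k < N - 1:
                    M_minus[k] += 1
    return M_plus, M_minus

The flux at interface i is then (M_plus[i] - M_minus[i]) * dh_r / dt, as in the flux computation above.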
I'm trying to solve Laplace's equation for a particular geometry (two circular conductors); here's what I've done in Python:
from __future__ import division
from pylab import *
from scipy import *
from numpy import *
from matplotlib import *
N1=50 # number of points along x and y
N=2*N1+1 # number of points in total
# in cm.
xc1=4
xc2=9
yc1=0
yc2=0
R1=1.75
R2=9
ecart = 1
a, b = linspace(-1, 19, N), linspace(-10, 10, N)
xa, ya = meshgrid(a, b)
V = zeros_like(xa)
for i in range(N):
for j in range(N):
x, y = xa[i,j], ya[i,j]
if (((x-xc1)**2/(R1**2))+((y-yc1)**2/(R1**2)))<=1 : # potential in the central conductor
V[i,j] = 30
if (((x-xc2)**2/(R2**2))+((y-yc2)**2/(R2**2)))>=1 : # potential in the outer conductor
V[i,j] =0
#draws the potential along X along the axis of symmetry.
Vnew = V.copy()
while ecart > 5*10**-2:
for i in range(1,N-1):
for j in range(1,N-1):
            Vnew[i,j] = 0.25*(V[i-1,j] + V[i+1,j] + V[i,j-1] + V[i,j+1])
# convergence criterion
ecart = np.max(np.abs(V - (Vnew))/np.max(V))
print(ecart)
# save in the grid V of the calculated grid
for i in range(N):
for j in range(N):
x, y = xa[i,j], ya[i,j]
if (((x-xc1)**2/(R1**2))+((y-yc1)**2/(R1**2))) > 1 and (((x-xc2)**2/(R2**2))+((y-yc2)**2/(R2**2))) < 1 :
V[i,j] = Vnew[i,j]
It actually works with ecart > 5*10**-2, but I would like to run it with ecart > 10**-3, and here is the problem: it takes too much time; in fact, it never finishes...
Does someone have any idea how to improve the program?
Thank you in advance!
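One common way to speed this up is to replace the double Python loop with NumPy slicing, so each relaxation sweep is a handful of whole-array operations. Below is a minimal sketch (my own, not tested on this exact geometry), assuming a boolean array mask, built once from the two circle tests, that is True in the free region between the conductors where V may change:

import numpy as np

def jacobi_sweeps(V, mask, tol=1e-3, max_iter=100000):
    """Vectorized Jacobi relaxation; V changes only where mask is True."""
    for _ in range(max_iter):
        Vnew = V.copy()
        Vnew[1:-1, 1:-1] = 0.25*(V[:-2, 1:-1] + V[2:, 1:-1] +
                                 V[1:-1, :-2] + V[1:-1, 2:])
        Vnew[~mask] = V[~mask]             # keep conductor/boundary values fixed
        ecart = np.max(np.abs(Vnew - V)) / np.max(np.abs(V))
        V = Vnew
        if ecart < tol:
            break
    return V

Each sweep then costs a few array operations instead of N**2 Python-level iterations; if that is still too slow at tol = 10**-3, successive over-relaxation (an over-weighted version of the same update) usually converges in far fewer sweeps than plain Jacobi.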
I should probably start off by saying I have no idea how to code and don't consider myself even a beginner when it comes to coding. That being said, I would really appreciate some help getting started with some code. As the title suggests, I have to code what is known as the Ising model. The premise of the model is:
$E = -\sum_i h\,s_i - \sum_{\langle i,j \rangle} J\,s_i s_j$ (the second sum running over nearest-neighbor pairs)
This will follow what I believe is a Monte Carlo (Metropolis) simulation, so each configuration {s(i)} occurs with probability proportional to e^(-βE{s(i)}):
We start with a random spin configuration {s(i)} and propose flipping a single spin, which changes the energy by ∆E.
If ∆E <= 0, we flip the spin.
If ∆E > 0, we draw a random number and compare it to e^(-β∆E);
if the number, say x, is:
x < e^(-β∆E), then flip;
x > e^(-β∆E), do nothing, so {s(i)} stays unchanged.
I hope that is enough info. I did pick up some code which I think is relevant:
import numpy as np
import random
def init_spin_array(rows, cols):
return np.ones((rows, cols))
def find_neighbors(spin_array, lattice, x, y):
left = (x, y - 1)
right = (x, (y + 1) % lattice)
top = (x - 1, y)
bottom = ((x + 1) % lattice, y)
return [spin_array[left[0], left[1]],
spin_array[right[0], right[1]],
spin_array[top[0], top[1]],
spin_array[bottom[0], bottom[1]]]
def energy(spin_array, lattice, x, y):
    # energy change from flipping spin (x, y), with J = 1 and h = 0
    return 2 * spin_array[x, y] * sum(find_neighbors(spin_array, lattice, x, y))
def main():
RELAX_SWEEPS = 50
    lattice = int(input("Enter lattice size: "))
    sweeps = int(input("Enter the number of Monte Carlo Sweeps: "))
for temperature in np.arange(0.1, 5.0, 0.1):
spin_array = init_spin_array(lattice, lattice)
# the Monte Carlo follows below
mag = np.zeros(sweeps + RELAX_SWEEPS)
for sweep in range(sweeps + RELAX_SWEEPS):
for i in range(lattice):
for j in range(lattice):
e = energy(spin_array, lattice, i, j)
if e <= 0:
spin_array[i, j] *= -1
elif np.exp((-1.0 * e)/temperature) > random.random():
spin_array[i, j] *= -1
mag[sweep] = abs(sum(sum(spin_array))) / (lattice ** 2)
print(temperature, sum(mag[RELAX_SWEEPS:]) / sweeps)
main()
All I need is to plot this information as an M vs. T plot, and somehow change the code to allow the three parameters h, J, and T to be varied; for example, if I hold T at a certain value, what does the plot over h and J look like? Any help would be immensely appreciated.
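Here is a minimal sketch of both changes, based on the code above but not identical to it: the flip energy gains explicit h and J parameters, and the magnetization at each temperature is collected for an M vs. T plot (function and variable names are my own):

import numpy as np
import matplotlib.pyplot as plt

def delta_E(spins, i, j, J, h):
    """Energy change from flipping spin (i, j) with coupling J and field h."""
    L = spins.shape[0]
    nbrs = (spins[i, (j - 1) % L] + spins[i, (j + 1) % L] +
            spins[(i - 1) % L, j] + spins[(i + 1) % L, j])
    return 2 * spins[i, j] * (J * nbrs + h)

def magnetization_vs_T(L=10, J=1.0, h=0.0, sweeps=200, relax=50):
    temps = np.arange(0.1, 5.0, 0.1)
    mags = []
    rng = np.random.default_rng()
    for T in temps:
        spins = np.ones((L, L))
        m = []
        for sweep in range(sweeps + relax):
            for i in range(L):
                for j in range(L):
                    dE = delta_E(spins, i, j, J, h)
                    if dE <= 0 or rng.random() < np.exp(-dE / T):
                        spins[i, j] *= -1
            if sweep >= relax:              # skip the relaxation sweeps
                m.append(abs(spins.sum()) / L**2)
        mags.append(np.mean(m))
    return temps, mags

temps, mags = magnetization_vs_T()
plt.plot(temps, mags, 'o-')
plt.xlabel('T')
plt.ylabel('|M|')
plt.show()

Holding T fixed and looping over a grid of h and J values instead of temps would give the h vs. J behavior asked about.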
I have drawn one position(x,y,z) of N particles in an enclosed volume.
x[i] = random.uniform(a,b) ...
I also found the constant velocity(vx,vy,vz) of the N particles.
vx[i] = random.gauss(mean,sigma) ...
Now I want to find the positions of the N (=100) particles over time. I used the Euler-Cromer method to do this.
n = 1000
delta_t = 2.0/(n-1)   # scalar time step: n-1 steps over a total time of 2
v[0] = vx;...
r[0] = x;...
for i in range(n-1):
    v[i+1,:] = v[i,:]                     # velocity is constant
    r[i+1,:] = r[i,:] + delta_t*v[i+1,:]
    t[i+1] = t[i] + delta_t
But I want to find the position over time for every particle. How can I do this? Also, how do I plot the particles' positions over time in 3D?
To find the position of the particles at a given time you can use the following code:
import numpy as np
# assign random positions in the box 0,0,0 to 1,1,1
x = np.random.random((100,3))
# assign random velocities in the range around 0
v = np.random.normal(size=(100,3))
# define function to project the position in time according to
# laws of motion. x(t) = x_0 + v_0 * t
def position(x_0, v_0, t):
return x_0 + v_0*t
# get new position at time = 3.2
position(x, v, 3.2)
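To get the positions over time for every particle rather than at a single instant, you can evaluate the same law of motion on a grid of times with broadcasting, then plot a few trajectories in 3D. A minimal sketch using matplotlib's mplot3d (times and particle counts chosen arbitrarily):

import numpy as np
import matplotlib.pyplot as plt

x = np.random.random((100, 3))           # initial positions
v = np.random.normal(size=(100, 3))      # constant velocities

times = np.linspace(0, 2, 50)
# trajectory[k, i, :] is the position of particle i at times[k]
trajectory = x[None, :, :] + v[None, :, :] * times[:, None, None]

ax = plt.figure().add_subplot(projection='3d')
for i in range(10):                      # plot only a few particles for clarity
    ax.plot(trajectory[:, i, 0], trajectory[:, i, 1], trajectory[:, i, 2])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()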
The model I'm working on has a neuron (modeled by the Hodgkin-Huxley equations), and the neuron receives a bunch of synaptic inputs from other neurons because it is in a network. The standard way to model the inputs is with a spike train made up of delta-function pulses that arrive at a specified rate, as a Poisson process. Some of the pulses are excitatory and some are inhibitory. So the synaptic current should look like this (in the form implemented in the code below):
$I_{syn}(t) = C_m G_{ex}\left[\sum_{k=1}^{N_e}\sum_{l} h_k^l\,\delta(t-t_k^l) - K\sum_{m=1}^{N_i}\sum_{n} h_m^n\,\delta(t-t_m^n)\right]$
Here, $N_e$ is the number of excitatory neurons, $N_i$ the number of inhibitory ones, the $h$'s are either 0 or 1 (1 with probability p), representing whether or not a spike was successfully transmitted, and the $t_k^l$ in the delta function is the discharge time of the $l$th spike of the $k$th neuron (likewise for the $t_m^n$). The basic idea behind how we tried coding this was to suppose first that I had 100 neurons providing pulses into my HH neuron (80 excitatory and 20 inhibitory). We then formed an array where one column enumerated the neurons (so that neurons #0-79 were excitatory and #80-99 were inhibitory). We then checked whether there was a spike in some time interval, and if there was, chose a random number between 0 and 1; if it was below my specified probability p, we assigned it the value 1, and otherwise 0. We then plot the voltage as a function of time to see when the neuron spikes.
I think the code works, BUT the problem is that as soon as I add more neurons to the network (one paper claimed they used 5000 neurons in total), it takes forever to run, which is infeasible for numerical simulations. My question is: is there a better way to simulate a spike train pulsing into a neuron, so that the computation is substantially faster for a large number of neurons in the network? Here is the code we tried (it's a little long because the HH equations are quite detailed):
import scipy as sp
import numpy as np
import pylab as plt
#Constants
C_m = 1.0      # membrane capacitance, in uF/cm^2
g_Na = 120.0   # sodium (Na) maximum conductance, in mS/cm^2
g_K = 36.0     # potassium (K) maximum conductance, in mS/cm^2
g_L = 0.3      # leak maximum conductance, in mS/cm^2
E_Na = 50.0    # sodium (Na) Nernst reversal potential, in mV
E_K = -77.0    # potassium (K) Nernst reversal potential, in mV
E_L = -54.387  # leak Nernst reversal potential, in mV
def poisson_spikes(t, N=100, rate=1.0 ):
spks = []
dt = t[1] - t[0]
for n in range(N):
spkt = t[np.random.rand(len(t)) < rate*dt/1000.] #Determine list of times of spikes
idx = [n]*len(spkt) #Create vector for neuron ID number the same length as time
        spkn = np.concatenate([[idx], [spkt]], axis=0).T #Combine the two lists
if len(spkn)>0:
spks.append(spkn)
spks = np.concatenate(spks, axis=0)
return spks
N = 100
N_ex = 80 #(0..79)
N_in = 20 #(80..99)
G_ex = 1.0
K = 4
dt = 0.01
t = sp.arange(0.0, 300.0, dt) #The time to integrate over
ic = [-65, 0.05, 0.6, 0.32]
spks = poisson_spikes(t, N, rate=10.)
def alpha_m(V):
return 0.1*(V+40.0)/(1.0 - sp.exp(-(V+40.0) / 10.0))
def beta_m(V):
return 4.0*sp.exp(-(V+65.0) / 18.0)
def alpha_h(V):
return 0.07*sp.exp(-(V+65.0) / 20.0)
def beta_h(V):
return 1.0/(1.0 + sp.exp(-(V+35.0) / 10.0))
def alpha_n(V):
return 0.01*(V+55.0)/(1.0 - sp.exp(-(V+55.0) / 10.0))
def beta_n(V):
return 0.125*sp.exp(-(V+65) / 80.0)
def I_Na(V, m, h):
return g_Na * m**3 * h * (V - E_Na)
def I_K(V, n):
return g_K * n**4 * (V - E_K)
def I_L(V):
return g_L * (V - E_L)
def I_app(t):
return 3
def I_syn(spks, t):
"""
Synaptic current
spks = [[synid, t],]
"""
exspk = spks[spks[:,0]<N_ex] # Check for all excitatory spikes
delta_k = exspk[:,1] == t # Delta function
if sum(delta_k) > 0:
h_k = np.random.rand(len(delta_k)) < 0.5 # p = 0.5
else:
h_k = 0
inspk = spks[spks[:,0] >= N_ex] #Check remaining neurons for inhibitory spikes
delta_m = inspk[:,1] == t #Delta function for inhibitory neurons
if sum(delta_m) > 0:
h_m = np.random.rand(len(delta_m)) < 0.5 #p =0.5
else:
h_m = 0
isyn = C_m*G_ex*(np.sum(h_k*delta_k) - K*np.sum(h_m*delta_m))
return isyn
def dALLdt(X, t):
V, m, h, n = X
dVdt = (I_app(t)+I_syn(spks,t)-I_Na(V, m, h) - I_K(V, n) - I_L(V)) / C_m
dmdt = alpha_m(V)*(1.0-m) - beta_m(V)*m
dhdt = alpha_h(V)*(1.0-h) - beta_h(V)*h
dndt = alpha_n(V)*(1.0-n) - beta_n(V)*n
return np.array([dVdt, dmdt, dhdt, dndt])
X = [ic]
for i in t[1:]:
dx = (dALLdt(X[-1],i))
x = X[-1]+dt*dx
X.append(x)
X = np.array(X)
V = X[:,0]
m = X[:,1]
h = X[:,2]
n = X[:,3]
ina = I_Na(V, m, h)
ik = I_K(V, n)
il = I_L(V)
plt.figure()
plt.subplot(3,1,1)
plt.title('Hodgkin-Huxley Neuron')
plt.plot(t, V, 'k')
plt.ylabel('V (mV)')
plt.subplot(3,1,2)
plt.plot(t, ina, 'c', label='$I_{Na}$')
plt.plot(t, ik, 'y', label='$I_{K}$')
plt.plot(t, il, 'm', label='$I_{L}$')
plt.ylabel('Current')
plt.legend()
plt.subplot(3,1,3)
plt.plot(t, m, 'r', label='m')
plt.plot(t, h, 'g', label='h')
plt.plot(t, n, 'b', label='n')
plt.ylabel('Gating Value')
plt.legend()
plt.show()
I'm not familiar with other packages designed specifically for neural networks, but I wanted to write my own, mainly because I plan to do stochastic analysis which requires quite a bit of mathematical detail, and I don't know if those packages provide such detail.
Profiling shows that most of your time is being spent in these two lines:
if sum(delta_k) > 0:
and
if sum(delta_m) > 0:
Changing each of these to:
if np.any(...)
speeds everything up by a factor of 10. Take a look at kernprof if you'd like to do more line-by-line profiling:
https://github.com/rkern/line_profiler
To complement welch's answer, you can use scipy.integrate.odeint to accelerate the integration: replacing
X = [ic]
for i in t[1:]:
dx = (dALLdt(X[-1],i))
x = X[-1]+dt*dx
X.append(x)
by
from scipy.integrate import odeint
X=odeint(dALLdt,ic,t)
speeds up the calculation by more than a factor of 10 on my computer.
If you have an NVIDIA graphics board, you can use Numba/NumbaPro to accelerate your Python code and reach real time for 4K neurons, each with 128 presynaptic neurons.
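As a further illustration of where the time goes (my own sketch, not part of the answers above): I_syn rescans the whole spike array at every call. Since the spike times are fixed in advance, the net synaptic drive can be binned once per time step, so the right-hand side reduces to an array lookup; names such as precompute_syn_drive and syn_drive are hypothetical:

import numpy as np

def precompute_syn_drive(spks, t, N_ex, K=4, p=0.5, seed=None):
    """Bin all spikes into time steps once, so I_syn(t) is an O(1) lookup.

    spks is the [[neuron_id, spike_time], ...] array from poisson_spikes.
    """
    rng = np.random.default_rng(seed)
    dt = t[1] - t[0]
    drive = np.zeros(len(t))
    transmitted = rng.random(len(spks)) < p   # h = 1 with probability p
    for (nid, ts), ok in zip(spks, transmitted):
        if not ok:
            continue
        idx = int(round(ts / dt))             # time step this spike lands in
        if idx < len(drive):
            drive[idx] += 1.0 if nid < N_ex else -K
    return drive

Inside dALLdt, I_syn(spks, t) would then become C_m * G_ex * syn_drive[int(round(t / dt))], with syn_drive = precompute_syn_drive(spks, t, N_ex) computed once before the integration loop.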